From: Ondrej Lichtner <olichtne@redhat.com>
v2 changes:

add lnst.Common.IpAddress
 * modify exception string

add lnst.Common.Parameters
 * add docstring explaining the __getattribute__ magic
 * make val a property
 * rename x to attr in Parameters dir() iterations
 * fix IpParam value setter for string values
 * modify IpParam Exception string

add lnst.Devices
 * make Device a Meta class and register RemoteDevice as implementing the
   Device interface
 * removed unnecessary imports
 * added MasterDevice class for slave* methods
 * made master_set to be a 'master' property setter
 * renamed {add, del}_route to route_{add, del}
 * renamed set_speed to speed_set
 * renamed set_autoneg to autoneg_set
 * removed modprobes where unnecessary
 * link_stats are now retrieved from netlink message
 * added link_stats64
 * OvsBridge calls parent _type_init for modprobe
 * remove SoftDevice import - not required

add lnst.Controller.Host
 * removed debug pprint import
 * added docstrings to requested methods
 * Host::run throws an exception when netns set (not supported yet)
 * added TODO comments to some parts that need to be implemented

add lnst.Controller.MachineMapper
 * docstring now explains that implementing your own MachineMapper is not
   supported at this moment
 * removed debug pprint import

add lnst.Controller.Recipe
 * renamed variable 'x' to 'attr'

Machine, NetTestSlave: heavy reimplementation
 * improved class and module synchronization implementation
 * join split lines in deviceref_to_device method

lnst.Common.Parameters: IpParam accepts Device objects
 * import Device on demand (Device class arrives on Slave later) and check
   isinstance Device instead of RemoteDevice (not available on Slave)
 * fix IpParam Exception string
v2 new commits:

 * Controller, Devices: removing unused imports
 * NetTestSlave: create dynamic lnst.Tests package
 * lnst.Tests: add package docstring
 * lnst.Common.Parameters: add DeviceParam
 * lnst.Tests.IcmpPing: use DeviceParam for iface parameter
Including the original cover letter as well:
Hi all,
the long awaited patchset for Python Recipes is here.
At this point there's still quite a lot of work left, but the current state is functional and the tester-facing API should be mostly stable. I therefore think it's a good time to get this merged into upstream, so that more people can get involved with filling in the missing pieces and can slowly start experimenting and porting their XML recipes.
These are the pieces I know are missing:

* Recipe run summary - how the Summary should look was described in the running proposal document, but I didn't manage to start on the implementation yet...
* Porting of all the old test_modules into the lnst.Tests package. So far we only have the IcmpPing module, which should still be reworked to be more universal (ip4 and ip6 in one class). Porting test_modules should be fairly easy, but at this point there's no guide on how to do it, so if you're having trouble feel free to contact me either on irc or by email.
* test_tools - we haven't even thought of these yet
* network namespaces - also didn't think about them yet, though I'm hoping this will be simple
* Ip address and network generators, as discussed at the upstream meeting
Please review and provide comments, or point out any features I forgot that should be added to the list above.
Regards,
Ondrej
Ondrej Lichtner (47):
  add lnst.Common.LnstError
  add lnst.Common.DeviceError
  add lnst.Common.DeviceRef
  add lnst.Common.IpAddress
  add lnst.Common.Parameters
  add lnst.Common.TestModule
  add lnst.Common.JobError
  add lnst.Controller.Common
  add lnst.Devices
  add lnst.Controller.Requirements
  add lnst.Controller.Job
  add lnst.Controller.Host
  add lnst.Controller.Config
  add lnst.Slave.Config
  add lnst.Controller.MachineMapper
  lnst.Controller.Machine: change object initialization
  add lnst.Controller.MessageDispatcher
  add lnst.Controller.SlavePoolManager
  add lnst.Controller.Recipe
  add lnst.Controller.Controller
  various files: retype exceptions
  add lnst.Tests package
  lnst.Common.Config: remove {controller, slave}_init
  lnst.Common.Config: remove global lnst_config
  Slave: use a local config object instead of a global one
  lnst.Common.Utils: add sha256sum function
  lnst.Common.ResourceCache: simplification
  lnst.Controller.CtlSecSocket: remove lnst_config import
  lnst.Controller.SlaveMachineParser: make standalone
  add lnst.Common.InterfaceManagerError
  add lnst.Slave.Job
  lnst.Slave.InterfaceManager: heavy reimplementation
  Machine, NetTestSlave: heavy reimplementation
  lnst.Slave.InterfaceManager: remove Device class implementation
  add lnst.Controller package imports
  lnst/__init__.py remove imports
  lnst.Controller: remove old modules
  setup.py: add new packages
  add lnst.Devices.VirtNetCtl
  lnst.Controller: move VirtUtils to VirtDomainCtl, remove VirtNetCtl class
  lnst.Common.Parameters: IpParam accepts Device objects
  add example python_recipe.py script
  Controller, Devices: removing unused imports
  NetTestSlave: create dynamic lnst.Tests package
  lnst.Tests: add package docstring
  lnst.Common.Parameters: add DeviceParam
  lnst.Tests.IcmpPing: use DeviceParam for iface parameter
 lnst-slave                            |   20 +-
 lnst/Common/Config.py                 |  152 +---
 lnst/Common/DeviceError.py            |   22 +
 lnst/Common/DeviceRef.py              |   19 +
 lnst/Common/ExecCmd.py                |    3 +-
 lnst/Common/InterfaceManagerError.py  |   16 +
 lnst/Common/IpAddress.py              |   99 +++
 lnst/Common/JobError.py               |   22 +
 lnst/Common/LnstError.py              |   18 +
 lnst/Common/NetTestCommand.py         |    5 +-
 lnst/Common/Parameters.py             |  157 ++++
 lnst/Common/ResourceCache.py          |  128 ++--
 lnst/Common/SecureSocket.py           |    3 +-
 lnst/Common/ShellProcess.py           |    3 +-
 lnst/Common/TestModule.py             |   67 ++
 lnst/Common/TestsCommon.py            |    3 +-
 lnst/Common/Utils.py                  |   11 +
 lnst/Controller/Common.py             |   17 +
 lnst/Controller/Config.py             |   99 +++
 lnst/Controller/Controller.py         |  218 ++++++
 lnst/Controller/CtlSecSocket.py       |    2 -
 lnst/Controller/Host.py               |  160 ++++
 lnst/Controller/Job.py                |  196 +++++
 lnst/Controller/Machine.py            | 1356 +++++--------------------
 lnst/Controller/MachineMapper.py      |  328 ++++++++
 lnst/Controller/MessageDispatcher.py  |  203 +++++
 lnst/Controller/NetTestController.py  |  620 ---------------
 lnst/Controller/Recipe.py             |   97 +++
 lnst/Controller/RecipeParser.py       |  572 --------------
 lnst/Controller/Requirements.py       |  113 +++
 lnst/Controller/SlaveMachineParser.py |  144 +++-
 lnst/Controller/SlavePool.py          |  648 ----------------
 lnst/Controller/SlavePoolManager.py   |  273 +++++++
 lnst/Controller/Task.py               |    4 +-
 lnst/Controller/VirtDomainCtl.py      |   98 +++
 lnst/Controller/VirtUtils.py          |  268 -------
 lnst/Controller/XmlParser.py          |  188 -----
 lnst/Controller/XmlProcessing.py      |  235 ------
 lnst/Controller/XmlTemplates.py       |  438 -----------
 lnst/Controller/__init__.py           |    3 +
 lnst/Devices/BondDevice.py            |   38 +
 lnst/Devices/BridgeDevice.py          |   33 +
 lnst/Devices/Device.py                |  355 +++++++++
 lnst/Devices/MacvlanDevice.py         |   38 +
 lnst/Devices/MasterDevice.py          |   33 +
 lnst/Devices/OvsBridgeDevice.py       |  115 +++
 lnst/Devices/RemoteDevice.py          |   98 +++
 lnst/Devices/SoftDevice.py            |   51 ++
 lnst/Devices/TeamDevice.py            |   60 ++
 lnst/Devices/VethDevice.py            |   58 ++
 lnst/Devices/VethPair.py              |   24 +
 lnst/Devices/VirtNetCtl.py            |   85 +++
 lnst/Devices/VirtualDevice.py         |   98 +++
 lnst/Devices/VlanDevice.py            |   35 +
 lnst/Devices/VtiDevice.py             |   71 ++
 lnst/Devices/VxlanDevice.py           |   65 ++
 lnst/Devices/__init__.py              |   42 +
 lnst/RecipeCommon/ModuleWrap.py       |   33 +-
 lnst/Slave/Config.py                  |   73 ++
 lnst/Slave/InterfaceManager.py        |  648 ++--------
 lnst/Slave/Job.py                     |  251 ++++++
 lnst/Slave/NetTestSlave.py            |  846 ++++++++++----------
 lnst/Tests/IcmpPing.py                |   62 ++
 lnst/Tests/__init__.py                |   17 +
 lnst/__init__.py                      |    1 -
 recipes/examples/python_recipe.py     |   32 +
 setup.py                              |    2 +-
 67 files changed, 4991 insertions(+), 5301 deletions(-)
 create mode 100644 lnst/Common/DeviceError.py
 create mode 100644 lnst/Common/DeviceRef.py
 create mode 100644 lnst/Common/InterfaceManagerError.py
 create mode 100644 lnst/Common/IpAddress.py
 create mode 100644 lnst/Common/JobError.py
 create mode 100644 lnst/Common/LnstError.py
 create mode 100644 lnst/Common/Parameters.py
 create mode 100644 lnst/Common/TestModule.py
 create mode 100644 lnst/Controller/Common.py
 create mode 100644 lnst/Controller/Config.py
 create mode 100644 lnst/Controller/Controller.py
 create mode 100644 lnst/Controller/Host.py
 create mode 100644 lnst/Controller/Job.py
 create mode 100644 lnst/Controller/MachineMapper.py
 create mode 100644 lnst/Controller/MessageDispatcher.py
 delete mode 100644 lnst/Controller/NetTestController.py
 create mode 100644 lnst/Controller/Recipe.py
 delete mode 100644 lnst/Controller/RecipeParser.py
 create mode 100644 lnst/Controller/Requirements.py
 delete mode 100644 lnst/Controller/SlavePool.py
 create mode 100644 lnst/Controller/SlavePoolManager.py
 create mode 100644 lnst/Controller/VirtDomainCtl.py
 delete mode 100644 lnst/Controller/VirtUtils.py
 delete mode 100644 lnst/Controller/XmlParser.py
 delete mode 100644 lnst/Controller/XmlProcessing.py
 delete mode 100644 lnst/Controller/XmlTemplates.py
 create mode 100644 lnst/Devices/BondDevice.py
 create mode 100644 lnst/Devices/BridgeDevice.py
 create mode 100644 lnst/Devices/Device.py
 create mode 100644 lnst/Devices/MacvlanDevice.py
 create mode 100644 lnst/Devices/MasterDevice.py
 create mode 100644 lnst/Devices/OvsBridgeDevice.py
 create mode 100644 lnst/Devices/RemoteDevice.py
 create mode 100644 lnst/Devices/SoftDevice.py
 create mode 100644 lnst/Devices/TeamDevice.py
 create mode 100644 lnst/Devices/VethDevice.py
 create mode 100644 lnst/Devices/VethPair.py
 create mode 100644 lnst/Devices/VirtNetCtl.py
 create mode 100644 lnst/Devices/VirtualDevice.py
 create mode 100644 lnst/Devices/VlanDevice.py
 create mode 100644 lnst/Devices/VtiDevice.py
 create mode 100644 lnst/Devices/VxlanDevice.py
 create mode 100644 lnst/Devices/__init__.py
 create mode 100644 lnst/Slave/Config.py
 create mode 100644 lnst/Slave/Job.py
 create mode 100644 lnst/Tests/IcmpPing.py
 create mode 100644 lnst/Tests/__init__.py
 create mode 100755 recipes/examples/python_recipe.py
From: Ondrej Lichtner <olichtne@redhat.com>
Defines the Base LNST Exception. All LNST related exceptions should inherit from this.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Common/LnstError.py | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 lnst/Common/LnstError.py
diff --git a/lnst/Common/LnstError.py b/lnst/Common/LnstError.py
new file mode 100644
index 0000000..ae8d9bc
--- /dev/null
+++ b/lnst/Common/LnstError.py
@@ -0,0 +1,18 @@
+"""
+Defines the LnstError exception class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+class LnstError(Exception):
+    """Base LNST Exception type
+
+    All LNST related Exceptions should inherit from this class.
+    """
+    pass
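To illustrate the intent of a common base exception (this snippet is not part of the patch; DeviceError and JobError here are stand-ins for the real subclasses added later in the series), a single `except LnstError` clause can catch any LNST-specific failure:

```python
# Sketch: all LNST exceptions share one base, so callers can catch
# the whole family with a single except clause.

class LnstError(Exception):
    """Base LNST exception, as in lnst/Common/LnstError.py."""
    pass

class DeviceError(LnstError):
    """Stand-in for a derived exception from a later patch."""
    pass

class JobError(LnstError):
    """Another stand-in derived exception."""
    pass

def fail_with_device_error():
    raise DeviceError("eth0 not found")

try:
    fail_with_device_error()
except LnstError as e:
    # the derived exception is caught via the common base
    caught = type(e).__name__
```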
From: Ondrej Lichtner <olichtne@redhat.com>
Defines basic exceptions signaling errors with a Device object (to be added in a later commit). Since these exceptions can be transmitted to the Controller, they need to live in the Common package.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Common/DeviceError.py | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 lnst/Common/DeviceError.py
diff --git a/lnst/Common/DeviceError.py b/lnst/Common/DeviceError.py
new file mode 100644
index 0000000..cf1b7b7
--- /dev/null
+++ b/lnst/Common/DeviceError.py
@@ -0,0 +1,22 @@
+"""
+Defines the DeviceError, DeviceDeleted and DeviceNotFound exceptions.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+from lnst.Common.LnstError import LnstError
+
+class DeviceError(LnstError):
+    pass
+
+class DeviceDeleted(DeviceError):
+    pass
+
+class DeviceNotFound(DeviceError):
+    pass
From: Ondrej Lichtner <olichtne@redhat.com>
Defines the DeviceRef class that will be used by the Controller-Slave communication protocol.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Common/DeviceRef.py | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100644 lnst/Common/DeviceRef.py
diff --git a/lnst/Common/DeviceRef.py b/lnst/Common/DeviceRef.py
new file mode 100644
index 0000000..cd7ee15
--- /dev/null
+++ b/lnst/Common/DeviceRef.py
@@ -0,0 +1,19 @@
+"""
+Defines the DeviceRef class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+class DeviceRef(object):
+    """Device reference transferable over network
+
+    Used in Controller-Slave communication protocol.
+    """
+    def __init__(self, if_index):
+        self.if_index = int(if_index)
From: Ondrej Lichtner <olichtne@redhat.com>
Defines BaseIpAddress and derived classes and the IpAddress factory method. We define our own instead of using the ipaddress module because it doesn't always fit how LNST works and would require workarounds. Defining our own classes for ip address handling therefore simplifies our code.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
v2:
 * modify exception string
---
 lnst/Common/IpAddress.py | 99 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 lnst/Common/IpAddress.py
diff --git a/lnst/Common/IpAddress.py b/lnst/Common/IpAddress.py
new file mode 100644
index 0000000..0bb5bec
--- /dev/null
+++ b/lnst/Common/IpAddress.py
@@ -0,0 +1,99 @@
+"""
+Defines BaseIpAddress and derived classes and the IpAddress factory method.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import re
+from socket import inet_pton, AF_INET, AF_INET6
+from lnst.Common.LnstError import LnstError
+
+class BaseIpAddress(object):
+    def __init__(self, addr):
+        self.addr, self.prefixlen = self._parse_addr(addr)
+
+        self.family = None
+
+    def __str__(self):
+        return str(self.addr)
+
+    def __eq__(self, other):
+        if self.addr != other.addr or\
+           self.prefixlen != other.prefixlen:
+            return False
+        else:
+            return True
+
+    @staticmethod
+    def _parse_addr(addr):
+        raise NotImplementedError()
+
+class Ip4Address(BaseIpAddress):
+    def __init__(self, addr):
+        super(Ip4Address, self).__init__(addr)
+
+        self.family = AF_INET
+
+    @staticmethod
+    def _parse_addr(addr):
+        addr = addr.split('/')
+        if len(addr) == 1:
+            addr = addr[0]
+            prefixlen = 32
+        elif len(addr) == 2:
+            addr, prefixlen = addr
+            prefixlen = int(prefixlen)
+        else:
+            raise LnstError("Invalid IPv4 format.")
+
+        try:
+            inet_pton(AF_INET, addr)
+        except:
+            raise LnstError("Invalid IPv4 format.")
+
+        return addr, prefixlen
+
+class Ip6Address(BaseIpAddress):
+    def __init__(self, addr):
+        super(Ip6Address, self).__init__(addr)
+
+        self.family = AF_INET6
+
+    @staticmethod
+    def _parse_addr(addr):
+        addr = addr.split('/')
+        if len(addr) == 1:
+            addr = addr[0]
+            prefixlen = 128
+        elif len(addr) == 2:
+            addr, prefixlen = addr
+            prefixlen = int(prefixlen)
+        else:
+            raise LnstError("Invalid IPv6 format.")
+
+        try:
+            inet_pton(AF_INET6, addr)
+        except:
+            raise LnstError("Invalid IPv6 format.")
+
+        return addr, prefixlen
+
+def IpAddress(addr):
+    """Factory method to create a BaseIpAddress object"""
+    if isinstance(addr, BaseIpAddress):
+        return addr
+    #TODO add switches for host, interface etc...
+    elif isinstance(addr, str):
+        try:
+            return Ip4Address(addr)
+        except:
+            return Ip6Address(addr)
+    else:
+        raise LnstError("Value must be a BaseIpAddress or string object."
+                        "Not {}".format(type(addr)))
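The factory's try-IPv4-then-fall-back-to-IPv6 decision can be sketched with nothing but the stdlib. `guess_family` below is a hypothetical helper, not part of the patch; the real factory returns Ip4Address/Ip6Address instances rather than family constants:

```python
# Sketch of the IpAddress factory's parsing fallback using only the
# stdlib socket module.
from socket import inet_pton, AF_INET, AF_INET6

def guess_family(addr):
    """Return the address family for addr, accepting 'a.b.c.d[/prefix]'."""
    addr = addr.split('/')[0]  # strip the optional prefix length
    try:
        inet_pton(AF_INET, addr)
        return AF_INET
    except OSError:
        # not a valid IPv4 string; let IPv6 parsing raise on bad input
        inet_pton(AF_INET6, addr)
        return AF_INET6
```

This mirrors the patch's approach of constructing an Ip4Address first and falling back to Ip6Address when parsing fails.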
From: Ondrej Lichtner <olichtne@redhat.com>
This module defines the Param class, its type-specific derivatives (IntParam, StrParam) and the Parameters class, which serves as a container for Param instances. This will be used to specify parameters for recipes and for test modules.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
v2:
 * add docstring explaining the __getattribute__ magic
 * make val a property
 * rename x to attr in Parameters dir() iterations
 * fix IpParam value setter for string values
 * modify IpParam Exception string
---
 lnst/Common/Parameters.py | 133 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 lnst/Common/Parameters.py
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py
new file mode 100644
index 0000000..dcf9bab
--- /dev/null
+++ b/lnst/Common/Parameters.py
@@ -0,0 +1,133 @@
+"""
+This module defines the Param class, its type specific derivatives
+(IntParam, StrParam) and the Parameters class which serves as a container for
+Param instances. This can be used by a BaseRecipe class to specify
+optional/mandatory parameters for the entire test, or by HostReq and DeviceReq
+classes to define specific parameters needed for the matching algorithm.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+from lnst.Common.IpAddress import BaseIpAddress, IpAddress
+from lnst.Common.LnstError import LnstError
+
+class ParamError(LnstError):
+    pass
+
+class Param(object):
+    def __init__(self, mandatory=False, default=None):
+        self.mandatory = mandatory
+        self.default = default
+        self._val = None
+        self.set = False
+        if self.default:
+            self.val = self.default
+
+    @property
+    def val(self):
+        return self._val
+
+    @val.setter
+    def val(self, value):
+        self._val = value
+        self.set = True
+
+    def __str__(self):
+        return str(self.val)
+
+class IntParam(Param):
+    @Param.val.setter
+    def val(self, value):
+        try:
+            self._val = int(value)
+        except:
+            raise ParamError("Value must be a valid integer")
+        self.set = True
+
+    def __int__(self):
+        return self.val
+
+class FloatParam(Param):
+    @Param.val.setter
+    def val(self, value):
+        try:
+            self._val = float(value)
+        except:
+            raise ParamError("Value must be a valid float")
+        self.set = True
+
+    def __float__(self):
+        return self.val
+
+class StrParam(Param):
+    @Param.val.setter
+    def val(self, value):
+        try:
+            self._val = str(value)
+        except:
+            raise ParamError("Value must be a string")
+        self.set = True
+
+class IpParam(Param):
+    @Param.val.setter
+    def val(self, value):
+        if isinstance(value, BaseIpAddress):
+            self._val = value
+        elif isinstance(value, str):
+            self._val = IpAddress(value)
+        else:
+            raise ParamError("Value must be a BaseIpAddress or string object."
+                             "Not {}".format(type(value)))
+        self.set = True
+
+class Parameters(object):
+    def __getattribute__(self, name):
+        """
+        Overriding the default __getattribute__ method is important for being
+        able to deepcopy a Parameters object while also allowing to return None
+        for undefined Parameter names. This is because the copy module relies
+        on an exception being raised for certain private attributes and
+        returning None would break it.
+        """
+        if name[:2] == "__" or name[:1] == "_":
+            return object.__getattribute__(self, name)
+
+        try:
+            return object.__getattribute__(self, name)
+        except:
+            return None
+
+    def __iter__(self):
+        for attr in dir(self):
+            val = getattr(self, attr)
+            if isinstance(val, Param):
+                yield (attr, val)
+
+    def _to_dict(self):
+        res = {}
+        for name, param in self:
+            res[name] = str(param.val)
+        return res
+
+    def _from_dict(self, d):
+        for name, val in d.items():
+            if isinstance(val, Param):
+                setattr(self, name, val)
+            else:
+                new_param = StrParam()
+                new_param.val = val
+                setattr(self, name, new_param)
+
+    def __str__(self):
+        result = ""
+        for attr in dir(self):
+            val = getattr(self, attr)
+            if isinstance(val, Param):
+                result += "%s = %s\n" % (attr, str(val))
+        return result
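The `__getattribute__` docstring is easiest to verify with a stripped-down standalone sketch (not the full class from the patch): undefined parameter names read as None, while dunder/private lookups still raise, so `copy.deepcopy` keeps working.

```python
# Sketch of the Parameters.__getattribute__ trick: the copy module probes
# attributes like __deepcopy__ and relies on AttributeError being raised,
# so those must go through the normal lookup path.
import copy

class Parameters(object):
    def __getattribute__(self, name):
        if name.startswith("_"):
            # private/dunder names raise AttributeError normally
            return object.__getattribute__(self, name)
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            # undefined public names read as None instead of raising
            return None

params = Parameters()
params.mtu = 1500
copied = copy.deepcopy(params)  # works because dunder lookups still raise
```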
From: Ondrej Lichtner <olichtne@redhat.com>
Defines the BaseTestModule class and the TestModuleError exception.
BaseTestModule is a Base class for test modules, all user defined testmodule classes should inherit from this class. The class itself defines the interface for a test module that is required by LNST and implements Parameter checking.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Common/TestModule.py | 67 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)
 create mode 100644 lnst/Common/TestModule.py
diff --git a/lnst/Common/TestModule.py b/lnst/Common/TestModule.py
new file mode 100644
index 0000000..10e5b8f
--- /dev/null
+++ b/lnst/Common/TestModule.py
@@ -0,0 +1,67 @@
+"""
+Defines the BaseTestModule class and the TestModuleError exception.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import copy
+from lnst.Common.Parameters import Parameters, Param
+from lnst.Common.LnstError import LnstError
+
+class TestModuleError(LnstError):
+    """Exception used by BaseTestModule and derived classes"""
+    pass
+
+class BaseTestModule(object):
+    """Base class for test modules
+
+    All user defined testmodule classes should inherit from this class. The
+    class itself defines the interface for a test module that is required by
+    LNST - the virtual run method.
+
+    It also implements the __init__ method that should be called by the derived
+    classes as it implements Parameter checking.
+
+    Derived classes can define the test parameters by assigning 'Param'
+    instances to class attributes, these will be parsed during initialization
+    and copied to the self.params instance attribute and loaded with values
+    provided to the __init__ method. This will also check mandatory attributes.
+    """
+    def __init__(self, **kwargs):
+        """
+        Args:
+            kwargs -- dictionary of arbitrary named arguments that correspond
+                to class attributes (Param type). Values will be parsed and
+                set to Param instances under the self.params object.
+        """
+        #by defaults loads the params into self.params - no checks pseudocode:
+        self.params = Parameters()
+        for x in dir(self):
+            val = getattr(self, x)
+            if isinstance(val, Param):
+                setattr(self.params, x, copy.deepcopy(val))
+
+        for name, val in kwargs.items():
+            try:
+                param = getattr(self.params, name)
+            except:
+                raise TestModuleError("Unknown parameter {}".format(name))
+            param.val = val
+
+        for name, param in self.params:
+            if param.mandatory and not param.set:
+                raise TestModuleError("Parameter {} is mandatory".format(name))
+
+        self._res_data = None
+
+    def run(self):
+        raise NotImplementedError("Method 'run' MUST be defined")
+
+    def _get_res_data(self):
+        return self._res_data
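The parameter-loading behaviour of `BaseTestModule.__init__` can be sketched standalone. `BaseModule` and `Ping` below are simplified stand-ins, not the real lnst classes (the real implementation stores params in a `Parameters` object rather than a dict): Param class attributes are deep-copied into per-instance storage, filled from keyword arguments, and mandatory ones are checked.

```python
# Sketch of the class-attribute-to-instance-parameter pattern.
import copy

class Param(object):
    def __init__(self, mandatory=False):
        self.mandatory = mandatory
        self.set = False
        self.val = None

class BaseModule(object):
    def __init__(self, **kwargs):
        self.params = {}
        # collect Param class attributes into instance-level copies
        for name in dir(self):
            attr = getattr(self, name)
            if isinstance(attr, Param):
                self.params[name] = copy.deepcopy(attr)

        # load caller-provided values, rejecting unknown names
        for name, val in kwargs.items():
            if name not in self.params:
                raise KeyError("Unknown parameter {}".format(name))
            self.params[name].val = val
            self.params[name].set = True

        # enforce mandatory parameters
        for name, param in self.params.items():
            if param.mandatory and not param.set:
                raise ValueError("Parameter {} is mandatory".format(name))

class Ping(BaseModule):
    dst = Param(mandatory=True)
    count = Param()

ping = Ping(dst="192.168.1.2")
```

Omitting `dst` here would raise at construction time, which is the point of the mandatory check: a misconfigured test module fails before anything runs on a Slave.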
From: Ondrej Lichtner <olichtne@redhat.com>
This module defines the common JobError exception that will be used by both the Controller and the Slave Job classes.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Common/JobError.py | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100644 lnst/Common/JobError.py
diff --git a/lnst/Common/JobError.py b/lnst/Common/JobError.py
new file mode 100644
index 0000000..a86a8be
--- /dev/null
+++ b/lnst/Common/JobError.py
@@ -0,0 +1,22 @@
+"""
+This module defines the common JobError exception used by both the Controller
+and the Slave
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+from lnst.Common.LnstError import LnstError
+
+class JobError(LnstError):
+    """Base class for client errors."""
+    def __init__(self, s):
+        self._s = s
+
+    def __str__(self):
+        return "JobError: " + str(self._s)
From: Ondrej Lichtner <olichtne@redhat.com>
Common Controller module. At the moment it only defines the ControllerError exception class, but other common functionality can be added later.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/Common.py | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100644 lnst/Controller/Common.py
diff --git a/lnst/Controller/Common.py b/lnst/Controller/Common.py
new file mode 100644
index 0000000..c9e5771
--- /dev/null
+++ b/lnst/Controller/Common.py
@@ -0,0 +1,19 @@
+"""
+Common Controller module. At the moment it only defines the ControllerError
+exception class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import os
+import sys
+from lnst.Common.LnstError import LnstError
+
+class ControllerError(LnstError):
+    pass
From: Ondrej Lichtner <olichtne@redhat.com>
This package contains the implementation of the Device configuration/manipulation classes that are currently supported by LNST. This implementation has been moved into a separate package that will be distributed together with the Controller package. The device classes will then be sent to the Slave during recipe execution so that the Slave can use them when required.
The reasoning behind this is that it is often easier to update or make changes on the Controller than on all the Slaves. Making the Slave simpler and independent of the implementation of device configuration means we don't need to update all our slaves as often and also that changes to the Device configuration implementation don't require any changes in Slave or Controller code.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
v2:
 * make Device a Meta class and register RemoteDevice as implementing the
   Device interface
 * removed unnecessary imports
 * added MasterDevice class for slave* methods
 * made master_set to be a 'master' property setter
 * renamed {add, del}_route to route_{add, del}
 * renamed set_speed to speed_set
 * renamed set_autoneg to autoneg_set
 * removed modprobes where unnecessary
 * link_stats are now retrieved from netlink message
 * added link_stats64
 * OvsBridge calls parent _type_init for modprobe
 * remove SoftDevice import - not required
---
 lnst/Devices/BondDevice.py      |  38 +++++
 lnst/Devices/BridgeDevice.py    |  33 ++++
 lnst/Devices/Device.py          | 355 ++++++++++++++++++++++++++++++++++++++++
 lnst/Devices/MacvlanDevice.py   |  38 +++++
 lnst/Devices/MasterDevice.py    |  33 ++++
 lnst/Devices/OvsBridgeDevice.py | 115 +++++++++++++
 lnst/Devices/RemoteDevice.py    |  98 +++++++++++
 lnst/Devices/SoftDevice.py      |  51 ++++++
 lnst/Devices/TeamDevice.py      |  60 +++++++
 lnst/Devices/VethDevice.py      |  58 +++++++
 lnst/Devices/VethPair.py        |  24 +++
 lnst/Devices/VirtualDevice.py   |  98 +++++++++++
 lnst/Devices/VlanDevice.py      |  35 ++++
 lnst/Devices/VtiDevice.py       |  71 ++++++++
 lnst/Devices/VxlanDevice.py     |  65 ++++++++
 lnst/Devices/__init__.py        |  43 +++++
 16 files changed, 1215 insertions(+)
 create mode 100644 lnst/Devices/BondDevice.py
 create mode 100644 lnst/Devices/BridgeDevice.py
 create mode 100644 lnst/Devices/Device.py
 create mode 100644 lnst/Devices/MacvlanDevice.py
 create mode 100644 lnst/Devices/MasterDevice.py
 create mode 100644 lnst/Devices/OvsBridgeDevice.py
 create mode 100644 lnst/Devices/RemoteDevice.py
 create mode 100644 lnst/Devices/SoftDevice.py
 create mode 100644 lnst/Devices/TeamDevice.py
 create mode 100644 lnst/Devices/VethDevice.py
 create mode 100644 lnst/Devices/VethPair.py
 create mode 100644 lnst/Devices/VirtualDevice.py
 create mode 100644 lnst/Devices/VlanDevice.py
 create mode 100644 lnst/Devices/VtiDevice.py
 create mode 100644 lnst/Devices/VxlanDevice.py
 create mode 100644 lnst/Devices/__init__.py
diff --git a/lnst/Devices/BondDevice.py b/lnst/Devices/BondDevice.py
new file mode 100644
index 0000000..5d75546
--- /dev/null
+++ b/lnst/Devices/BondDevice.py
@@ -0,0 +1,38 @@
+"""
+Defines the BondDevice class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+from lnst.Common.ExecCmd import exec_cmd
+from lnst.Devices.MasterDevice import MasterDevice
+
+class BondDevice(MasterDevice):
+    _name_template = "t_bond"
+
+    def create(self):
+        exec_cmd("ip link add %s type bond" % self.name)
+
+    def _get_bond_dir(self):
+        return "/sys/class/net/%s/bonding" % self.name
+
+    def set_option(self, option, value):
+        if option == "primary":
+            '''
+            "primary" option is not direct value but it's
+            a Device reference
+            '''
+            value = value.name
+        exec_cmd('echo "%s" > %s/%s' % (value,
+                                        self._get_bond_dir(),
+                                        option))
+
+    def set_options(self, options):
+        for option, value in options:
+            self.set_option(option, value)
diff --git a/lnst/Devices/BridgeDevice.py b/lnst/Devices/BridgeDevice.py
new file mode 100644
index 0000000..c9678c4
--- /dev/null
+++ b/lnst/Devices/BridgeDevice.py
@@ -0,0 +1,33 @@
+"""
+Defines the BridgeDevice class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+from lnst.Common.ExecCmd import exec_cmd
+from lnst.Devices.MasterDevice import MasterDevice
+
+class BridgeDevice(MasterDevice):
+    _name_template = "t_br"
+
+    def create(self):
+        exec_cmd("ip link add dev {} type bridge".format(self._name))
+
+    def _get_bridge_dir(self):
+        return "/sys/class/net/%s/bridge" % self.name
+
+    def set_option(self, option, value):
+        #TODO redo to work with iproute
+        exec_cmd('echo "%s" > %s/%s' % (value,
+                                        self._get_bridge_dir(),
+                                        option))
+
+    def set_options(self, options):
+        for option, value in options:
+            self.set_option(option, value)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py
new file mode 100644
index 0000000..8ba59f2
--- /dev/null
+++ b/lnst/Devices/Device.py
@@ -0,0 +1,355 @@
+"""
+Defines the Device class implementing the common methods for all device types.
+Every other device type needs to inherit from this class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import re
+from abc import ABCMeta
+from lnst.Common.NetUtils import normalize_hwaddr
+from lnst.Common.ExecCmd import exec_cmd
+from lnst.Common.DeviceError import DeviceError, DeviceDeleted
+from lnst.Common.IpAddress import Ip4Address, Ip6Address, IpAddress
+
+try:
+    from pyroute2.netlink.iproute import RTM_NEWLINK
+    from pyroute2.netlink.iproute import RTM_NEWADDR
+    from pyroute2.netlink.iproute import RTM_DELADDR
+except ImportError:
+    from pyroute2.iproute import RTM_NEWLINK
+    from pyroute2.iproute import RTM_NEWADDR
+    from pyroute2.iproute import RTM_DELADDR
+
+class Device(object):
+    """The base Device class
+
+    Implemented using the pyroute2 package to access different attributes of
+    a kernel netdevice object.
+    Changing attributes of a netdevice is right now implemented by calling
+    shell commands (e.g. from iproute2 package).
+
+    The Controller-Slave communication is implemented in such a way that all
+    public methods defined in this and derived class are directly available
+    as a tester facing API.
+    """
+    __metaclass__ = ABCMeta
+
+    def __init__(self, if_manager):
+        self.if_index = None
+        self._nl_msg = None
+        self._devlink = None
+        self._if_manager = if_manager
+        self._enabled = True
+        self._deleted = False
+
+        self._ip_addrs = []
+        # TODO self._netns = None ???
+
+    def create(self):
+        """Creates a new netdevice of the corresponding type
+
+        Method to be implemented by derived classes where applicable.
+        """
+        msg = "Can't create a hardware ethernet device."
+        raise DeviceError(msg)
+
+    def destroy(self):
+        """Destroys the netdevice of the corresponding type
+
+        For the basic eth device it calls the destroy method of it's master
+        device (if there is one). Flushes the configured IP addresses and sets
+        the device 'down'.
+        """
+        if self.master:
+            self.master.destroy()
+        self.ip_flush()
+        self.down()
+        return True
+
+    def enable(self):
+        """Enables the Device object"""
+        self._enabled = True
+
+    def disable(self):
+        """Disables the Device object
+
+        When a Device object is disabled, any calls to it's methods will result
+        in a "no operation", however attribute access will still work.
+
+        The justification for this is to disable the Device used by the
+        Controller-Slave connection to avoid accidental disconnects.
+        """
+        self._enabled = False
+
+    def __getattribute__(self, name):
+        what = object.__getattribute__(self, name)
+
+        if object.__getattribute__(self, "_deleted"):
+            raise DeviceDeleted()
+
+        if not callable(what):
+            return what
+        else:
+            if (object.__getattribute__(self, "_enabled") or
+                    name in ["enable", "disable"]):
+                return what
+            else:
+                def noop(*args, **kwargs):
+                    pass
+                return noop
+
+    def _set_devlink(self, devlink_port_data):
+        self._devlink = devlink_port_data
+
+    def _init_netlink(self, nl_msg):
+        self.if_index = nl_msg['index']
+
+        self._nl_msg = nl_msg
+
+    def _update_netlink(self, nl_msg):
+        if self.if_index != nl_msg['index']:
+            msg = "if_index of netlink message (%s) doesn't match "\
+                  "the device's (%s)." % (nl_msg['index'], self.if_index)
+            raise DeviceError(msg)
+
+        if nl_msg['header']['type'] == RTM_NEWLINK:
+            if self.if_index != nl_msg['index']:
+                raise DeviceError("RTM_NEWLINK message passed to incorrect "\
+                                  "Device object.")
+
+            self._nl_msg = nl_msg
+        elif nl_msg['header']['type'] == RTM_NEWADDR:
+            if self.if_index != nl_msg['index']:
+                raise DeviceError("RTM_NEWADDR message passed to incorrect "\
+                                  "Device object.")
+
+            addr = IpAddress(nl_msg.get_attr('IFA_ADDRESS'))
+            addr.prefixlen = nl_msg["prefixlen"]
+
+            if addr not in self._ip_addrs:
+                self._ip_addrs.append(addr)
+        elif nl_msg['header']['type'] == RTM_DELADDR:
+            if self.if_index != nl_msg['index']:
+                raise DeviceError("RTM_DELADDR message passed to incorrect "\
+                                  "Device object.")
+
+            addr = IpAddress(nl_msg.get_attr('IFA_ADDRESS'))
+            addr.prefixlen = nl_msg["prefixlen"]
+
+            if addr in self._ip_addrs:
+                self._ip_addrs.remove(addr)
+
+    @property
+    def ifi_type(self):
+        """ifi_type attribute
+
+        Returns the integer type of the device as reported by the kernel.
+        """
+        return self._nl_msg['ifi_type']
+
+    @property
+    def name(self):
+        """name attribute
+
+        Returns string name of the device as reported by the kernel.
+        """
+        return self._nl_msg.get_attr("IFLA_IFNAME")
+
+    @property
+    def hwaddr(self):
+        """hwaddr attribute
+
+        Returns string hardware address of the device as reported by the kernel.
+        """
+        return normalize_hwaddr(self._nl_msg.get_attr("IFLA_ADDRESS"))
+
+    @property
+    def state(self):
+        """state attribute
+
+        Returns string state of the device as reported by the kernel.
+        """
+        #TODO check flags for admin up and lower up!!!
+        #TODO or expand this to check all possibilities?
+        #TODO also, add passive wait until lower up, with timeout
+        return self._nl_msg.get_attr("IFLA_OPERSTATE")
+
+    @property
+    def ips(self):
+        """list of configured ip addresses
+
+        Returns list of BaseIpAddress objects.
+        """
+        return self._ip_addrs
+
+    @property
+    def mtu(self):
+        """mtu attribute
+
+        Returns integer mtu as reported by the kernel.
+        """
+        return self._nl_msg.get_attr("IFLA_MTU")
+
+    @property
+    def master(self):
+        """master device
+
+        Returns Device object of the master device or None when the device has
+        no master.
+        """
+        master_if_index = self._nl_msg.get_attr("IFLA_MASTER")
+        if master_if_index is not None:
+            return self._if_manager.get_device(master_if_index)
+        else:
+            return None
+
+    @master.setter
+    def master(self, dev):
+        """set dev as the master of this device
+
+        Args:
+            dev -- accepts a Device object of the master object.
+ When None, removes the current master from the Device.""" + if isinstance(dev, Device): + exec_cmd("ip link set %s master %s" % (self.name, dev.name)) + elif dev is None: + exec_cmd("ip link set %s nomaster" % self.name) + else: + raise DeviceError("Invalid dev argument.") + + @property + def driver(self): + """driver attribute + + Returns string name of the device driver based on an ethtool -i call + """ + if self.ifi_type == 772: #loopback ifi type + return 'loopback' + out, _ = exec_cmd("ethtool -i %s" % self.name, False, False, False) + match = re.search("^driver: (.*)$", out, re.MULTILINE) + if match is not None: + return match.group(1) + else: + return None + + @property + def link_stats(self): + """Link statistics + + Returns dictionary of interface statistics, IFLA_STATS + """ + return self._nl_msg.get_attr("IFLA_STATS") + + @property + def link_stats64(self): + """Link statistics + + Returns dictionary of interface statistics, IFLA_STATS64 + """ + return self._nl_msg.get_attr("IFLA_STATS64") + + def _clear_ips(self): + self._ip_addrs = [] + + def _clear_tc_qdisc(self): + exec_cmd("tc qdisc replace dev %s root pfifo" % self.name) + out, _ = exec_cmd("tc filter show dev %s" % self.name) + ingress_handles = re.findall("ingress (\d+):", out) + for ingress_handle in ingress_handles: + exec_cmd("tc qdisc del dev %s handle %s: ingress" % + (self.name, ingress_handle)) + out, _ = exec_cmd("tc qdisc show dev %s" % self.name) + ingress_qdiscs = re.findall("qdisc ingress (\w+):", out) + if len(ingress_qdiscs) != 0: + exec_cmd("tc qdisc del dev %s ingress" % self.name) + + def _clear_tc_filters(self): + out, _ = exec_cmd("tc filter show dev %s" % self.name) + egress_prefs = re.findall("pref (\d+) .* handle", out) + + for egress_pref in egress_prefs: + exec_cmd("tc filter del dev %s pref %s" % (self.name, + egress_pref)) + + def ip_add(self, addr): + """add an ip address + + Args: + addr -- accepts a BaseIpAddress object + """ + if addr not in self.ips: + 
exec_cmd("ip addr add %s/%d dev %s" % (addr, addr.prefixlen, + self.name)) + + def ip_del(self, addr): + """remove an ip address + + Args: + addr -- accepts a BaseIpAddress object + """ + if addr in self.ips: + exec_cmd("ip addr del %s/%d dev %s" % (addr, addr.prefixlen, + self.name)) + + def ip_flush(self): + """flush all ip addresses of the device""" + for ip in self.ips: + self.ip_del(ip) + + def up(self): + """set device up""" + exec_cmd("ip link set %s up" % self.name) + + def down(self): + """set device down""" + exec_cmd("ip link set %s down" % self.name) + + def route_add(self, dest): + """add specified route for this device + + Args: + dest -- string accepted by the "ip route add " command + """ + exec_cmd("ip route add %s dev %s" % (dest, self.name)) + + def route_del(self, dest): + """remove specified route for this device + + Args: + dest -- string accepted by the "ip route del " command + """ + exec_cmd("ip route del %s dev %s" % (dest, self.name)) + + def _get_if_data(self): + if_data = {"if_index": self.if_index, + "hwaddr": self.hwaddr, + "name": self.name, + "ip_addrs": self.ips, + "ifi_type": self.ifi_type, + "state": self.state, + "master": self.master, + "mtu": self.mtu, + "driver": self.driver, + "devlink": self._devlink} + return if_data + + def speed_set(self, speed): + """set the device speed + + Also disables automatic speed negotiation + + Args: + speed -- string accepted by the 'ethtool -s dev speed ' command + """ + exec_cmd("ethtool -s %s speed %s autoneg off" % (self.name, speed)) + + def autoneg_set_on(self): + """enable automatic negotiation of speed for this device""" + exec_cmd("ethtool -s %s autoneg on" % self.name) diff --git a/lnst/Devices/MacvlanDevice.py b/lnst/Devices/MacvlanDevice.py new file mode 100644 index 0000000..418cf6d --- /dev/null +++ b/lnst/Devices/MacvlanDevice.py @@ -0,0 +1,38 @@ +""" +Defines the MacvlanDevice class + +Copyright 2017 Red Hat, Inc. 
+Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.SoftDevice import SoftDevice + +class MacvlanDevice(SoftDevice): + _name_template = "t_macvlan" + + def __init__(self, ifmanager, *args, **kwargs): + super(MacvlanDevice, self).__init__(ifmanager, *args, **kwargs) + + self._real_dev = kwargs["realdev"] + self._mode = kwargs.get("mode", None) + self._hwaddr = kwargs.get("hwaddr", None) + + def create(self): + create_cmd = "ip link add link {} {}".format(self._real_dev.name, + self.name) + + if self._hwaddr is not None: + create_cmd += " address {}".format(self._hwaddr) + + if self._mode is not None: + create_cmd += " mode {}".format(self._mode) + + create_cmd += " type macvlan" + + exec_cmd(create_cmd) diff --git a/lnst/Devices/MasterDevice.py b/lnst/Devices/MasterDevice.py new file mode 100644 index 0000000..afd2060 --- /dev/null +++ b/lnst/Devices/MasterDevice.py @@ -0,0 +1,33 @@ +""" +Defines the MasterDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Devices.SoftDevice import SoftDevice + +class MasterDevice(SoftDevice): + """Common class for all master device types + + Implements the slaves attribute getter and the slave_{add, del} methods. 
+ """ + @property + def slaves(self): + ret = [] + + for dev in self._if_manager.get_devices(): + if dev.master is self: + ret.append(dev) + return ret + + def slave_add(self, dev): + dev.master_set(self) + + def slave_del(self, dev): + dev.master_set(None) diff --git a/lnst/Devices/OvsBridgeDevice.py b/lnst/Devices/OvsBridgeDevice.py new file mode 100644 index 0000000..26f75f4 --- /dev/null +++ b/lnst/Devices/OvsBridgeDevice.py @@ -0,0 +1,115 @@ +""" +Defines the OvsBridgeDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.Device import Device, DeviceError +from lnst.Devices.SoftDevice import SoftDevice + +class OvsBridgeDevice(SoftDevice): + _name_template = "t_ovsbr" + + _modulename = "openvswitch" + + @classmethod + def _type_init(cls): + if not cls._type_initialized: + super(OvsBridgeDevice, cls)._type_init() + + exec_cmd("mkdir -p /var/run/openvswitch/") + exec_cmd("ovsdb-server --detach --pidfile "\ + "--remote=punix:/var/run/openvswitch/db.sock", + die_on_err=False) + exec_cmd("ovs-vswitchd --detach --pidfile", die_on_err=False) + + cls._type_initialized = True + + def create(self): + exec_cmd("ovs-vsctl add-br %s" % self.name) + + def destroy(self): + exec_cmd("ovs-vsctl del-br %s" % self.name) + + def port_add(self, dev, **kwargs): + options = "" + for opt_name, opt_value in kwargs.items(): + options += " %s=%s" % (opt_name, opt_value) + + exec_cmd("ovs-vsctl add-port %s %s%s" % (self.name, dev.name, options)) + + def port_del(self, dev): + if isinstance(dev, Device): + exec_cmd("ovs-vsctl del-port %s %s" % (self.name, dev.name)) + elif isinstance(dev, str): + exec_cmd("ovs-vsctl del-port %s %s" % (self.name, dev)) + else: + raise DeviceError("Invalid port_del argument %s" % str(dev)) + + def 
bond_add(self, port_name, devices, **kwargs): + dev_names = "" + for dev in devices: + dev_names += " %s" % dev.name + + options = "" + for opt_name, opt_value in kwargs.items(): + options += " %s=%s" % (opt_name, opt_value) + + exec_cmd("ovs-vsctl add-bond %s %s %s %s" % (self.name, port_name, + dev_names, options)) + + def bond_del(self, dev): + self.port_del(dev) + + def internal_port_add(self, **kwargs): + name = self._if_manager.assign_name("int") + + options = "" + for opt_name, opt_value in kwargs.items(): + if opt_name == "name": + name = opt_value + continue + + options += " %s=%s" % (opt_name, opt_value) + + exec_cmd("ovs-vsctl add-port %s %s -- set Interface %s "\ + "type=internal %s" % (self.name, name, + name, options)) + + dev = self._if_manager.get_device_by_name(name) + return dev + + def tunnel_add(self, tunnel_type, options): + name = self._if_manager.assign_name(tunnel_type) + + opts_str = "" + for opt_name, opt_value in options.items(): + if opt_name == "name": + name = opt_value + continue + + opts_str += " %s=%s" % (opt_name, opt_value) + + exec_cmd("ovs-vsctl add-port %s %s -- set Interface %s "\ + "type=%s %s" % (self.name, name, name, + tunnel_type, opts_str)) + + def tunnel_del(self, name): + self.port_del(name) + + def flow_add(self, entry): + exec_cmd("ovs-ofctl add-flow %s '%s'" % (self.name, entry)) + + def flows_add(self, entries): + for entry in entries: + self.flow_add(entry) + + def flows_del(self, entry): + exec_cmd("ovs-ofctl del-flows %s" % (self.name)) diff --git a/lnst/Devices/RemoteDevice.py b/lnst/Devices/RemoteDevice.py new file mode 100644 index 0000000..d15d0c5 --- /dev/null +++ b/lnst/Devices/RemoteDevice.py @@ -0,0 +1,98 @@ +""" +Defines the RemoteDevice class. This class wraps all other Device classes +when creating device instances on the Controller. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. 
+""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Devices.Device import Device +from lnst.Common.DeviceError import DeviceDeleted + +def remotedev_decorator(cls): + def func(*args, **kwargs): + return RemoteDevice(cls, args, kwargs) + return func + +class RemoteDevice(object): + """Wraps all other Device classes on the Controller + + Ensures that all public methods of Device objects also act as the tester + facing API even though the Device objects are instantiated on the Slave, + not where the recipe script is actually running. + """ + def __init__(self, dev_cls, args=[], kwargs={}): + self.__dev_cls = dev_cls + self.__dev_args = args + self.__dev_kwargs = kwargs + + self.host = None + self.if_index = None + self.deleted = False + + @property + def _dev_cls(self): + return self.__dev_cls + + @property + def _dev_args(self): + return self.__dev_args + + @property + def _dev_kwargs(self): + return self.__dev_kwargs + + def _get_dev_cls(self): + return self._dev_cls + + def __getattr__(self, name): + attr = getattr(self._dev_cls, name) + + if self.deleted: + raise DeviceDeleted("This device was deleted on the slave and does not exist anymore.") + + if callable(attr): + def dev_method(*args, **kwargs): + return self.host.rpc_call("dev_method", self.if_index, + name, args, kwargs) + return dev_method + else: + return self.host.rpc_call("dev_attr", self.if_index, name) + + def __iter__(self): + for x in dir(self._dev_cls): + if x[0] == '_' or x[0:1] == "__": + continue + attr = getattr(self._dev_cls, x) + + if not callable(attr): + yield (x, getattr(self, x)) + + def _match_update_data(self, data): + return False + +class PairedRemoteDevice(RemoteDevice): + """RemoteDevice class for paired Devices (such as veth)""" + def __init__(self, peer, dev_cls, args=[], kwargs={}): + super(PairedRemoteDevice, self).__init__(dev_cls, args, kwargs) + + self._peer = peer + + @property + def _dev_kwargs(self): + ret = super(PairedRemoteDevice, 
self)._dev_kwargs + ret["peer_if_id"] = self._peer.if_index + return ret + +#register the RemoteDevice class as implementing the interface of the Device +#class - this is true because it just proxies method/attribute calls to the +#remote Slave where the correct method gets called. +#registering the RemoteDevice class as an implementation of the Device +#Interface is required for isinstance() checks in Common code -- Device is +#available on both Controller and Slave, but RemoteDevice only on Controller +Device.register(RemoteDevice) diff --git a/lnst/Devices/SoftDevice.py b/lnst/Devices/SoftDevice.py new file mode 100644 index 0000000..3dc3b7f --- /dev/null +++ b/lnst/Devices/SoftDevice.py @@ -0,0 +1,51 @@ +""" +Defines the SoftDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Common.DeviceError import DeviceError +from lnst.Devices.Device import Device + +class SoftDevice(Device): + _name_template = "soft_dev" + + _modulename = "" + _moduleparams = "" + _type_initialized = False + + def __init__(self, ifmanager, *args, **kwargs): + super(SoftDevice, self).__init__(ifmanager) + + self._name = kwargs.get("name", None) + if self._name is None: + self._name = ifmanager.assign_name(self._name_template) + + self._type_init() + + @classmethod + def _type_init(cls): + if cls._modulename and not cls._type_initialized: + exec_cmd("modprobe %s %s" % (cls._modulename, cls._moduleparams)) + cls._type_initialized = True + + @property + def name(self): + try: + return super(SoftDevice, self).name + except Exception: + return self._name + + def create(self): + msg = "Classes derived from SoftDevice MUST define a create method." 
+ raise DeviceError(msg) + + def destroy(self): + exec_cmd("ip link del dev %s" % self.name) diff --git a/lnst/Devices/TeamDevice.py b/lnst/Devices/TeamDevice.py new file mode 100644 index 0000000..14ad0ad --- /dev/null +++ b/lnst/Devices/TeamDevice.py @@ -0,0 +1,60 @@ +""" +Defines the TeamDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import re +from lnst.Common.ExecCmd import exec_cmd +from lnst.Common.Utils import bool_it +from lnst.Devices.MasterDevice import MasterDevice + +def prepare_json_str(json_str): + if not json_str: + return "{}" + json_str = json_str.replace('"', '\\"') + json_str = re.sub(r'\s+', ' ', json_str) + return json_str + +class TeamDevice(MasterDevice): + _name_template = "t_team" + + def __init__(self, ifmanager, *args, **kwargs): + super(TeamDevice, self).__init__(ifmanager, *args, **kwargs) + + self._config = kwargs.get("config", None) + self._dbus = not bool_it(kwargs.get("disable_dbus", False)) + + @property + def config(self): + return self._config + + @property + def dbus(self): + return self._dbus + + def create(self): + teamd_config = prepare_json_str(self.config) + + exec_cmd('teamd -r -d -c "%s" -t %s %s' %\ + (teamd_config, + self.name, + " -D" if self.dbus else "")) + + def destroy(self): + exec_cmd("teamd -k -t %s" % self.name) + + def slave_add(self, dev, port_config=None): + exec_cmd('teamdctl %s %s port config update %s "%s"' %\ + (" -D" if self.dbus else "", + self.name, + dev.name, + prepare_json_str(port_config))) + + dev.master = self diff --git a/lnst/Devices/VethDevice.py b/lnst/Devices/VethDevice.py new file mode 100644 index 0000000..07a6c33 --- /dev/null +++ b/lnst/Devices/VethDevice.py @@ -0,0 +1,58 @@ +""" +Defines the VethDevice and PairedVethDevice classes. + +Copyright 2017 Red Hat, Inc. 
+Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from copy import deepcopy +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.Device import Device +from lnst.Devices.SoftDevice import SoftDevice + +class VethDevice(SoftDevice): + _name_template = "veth" + + def __init__(self, ifmanager, *args, **kwargs): + super(VethDevice, self).__init__(ifmanager, *args, **kwargs) + + self._name = kwargs.get("name", None) + self._peer_name = kwargs.get("peer_name", None) + + if self._name is None: + self._name = ifmanager.assign_name(self._name_template) + + if self._peer_name is None: + self._peer_name = ifmanager.assign_name("peer_"+self._name_template) + + def create(self): + exec_cmd("ip link add {name} type veth peer name {peer}". + format(name=self.name, + peer=self._peer_name)) + + @property + def peer(self): + if self._nl_msg is None: + return None + + peer_if_id = self._nl_msg.get_attr("IFLA_LINK") + return self._if_manager.get_device(peer_if_id) + +class PairedVethDevice(VethDevice): + def __init__(self, ifmanager, *args, **kwargs): + Device.__init__(self, ifmanager) + + self._peer_if_id = kwargs["peer_if_id"] + + def create(self): + peer = self._if_manager.get_device(self._peer_if_id) + me = peer.peer + self._init_netlink(me._nl_msg) + self._ip_addrs = deepcopy(me._ip_addrs) + + self._if_manager.replace_dev(self.if_index, self) diff --git a/lnst/Devices/VethPair.py b/lnst/Devices/VethPair.py new file mode 100644 index 0000000..2cf3ff1 --- /dev/null +++ b/lnst/Devices/VethPair.py @@ -0,0 +1,24 @@ +""" +Defines the VethPair factory method. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. 
+""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Devices.VethDevice import VethDevice, PairedVethDevice +from lnst.Devices.RemoteDevice import RemoteDevice, PairedRemoteDevice + +def VethPair(*args, **kwargs): + """Creates a pair of Veth Devices + + Args: + args, kwargs passed to the VethDevice constructor on the Slave. + """ + first = RemoteDevice(VethDevice, args, kwargs) + second = PairedRemoteDevice(first, PairedVethDevice) + return (first, second) diff --git a/lnst/Devices/VirtualDevice.py b/lnst/Devices/VirtualDevice.py new file mode 100644 index 0000000..49fe088 --- /dev/null +++ b/lnst/Devices/VirtualDevice.py @@ -0,0 +1,98 @@ +""" +Defines the VirtualDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import logging +from time import sleep +from lnst.Common.Utils import check_process_running +from lnst.Common.NetUtils import normalize_hwaddr +from lnst.Devices.Device import Device, DeviceError +from lnst.Devices.RemoteDevice import RemoteDevice + +# conditional support for libvirt +if check_process_running("libvirtd"): + from lnst.Controller.VirtUtils import VirtNetCtl + +class VirtualDevice(RemoteDevice): + """Remote eth device created on the controller through libvirt + + To support creation of new devices on virtual machines, we derive from + the RemoteDevice class and override the create method (which would be + remotely called on the slave for the Device class type). Instead, the + create method is called on the controller where libvirt is running. + + The Tester shouldn't create instances of this class. They're created + automatically if matching virtual machines is allowed. 
+ + Theoretically if a match is virtual the tester could also dynamically add + devices during test execution, however this is NOT SUPPORTED at the moment. + """ + def __init__(self, network, driver=None, hwaddr=None): + super(VirtualDevice, self).__init__(Device, None, None) + + self.virt_driver = driver if driver is not None else "virtio" + self.orig_hwaddr = hwaddr + self._network = network + + @property + def network(self): + return self._network + + @network.setter + def network(self, network): + self._network = network + + def _match_update_data(self, data): + if self.orig_hwaddr == data["hwaddr"]: + return True + + return super(VirtualDevice, self)._match_update_data(data) + + def create(self): + domain_ctl = self.host.get_domain_ctl() + + if self.orig_hwaddr: + if self.host.get_dev_by_hwaddr(self.orig_hwaddr): + msg = "Device with hwaddr %s already exists" % self.orig_hwaddr + raise DeviceError(msg) + else: + mac_pool = self.host.get_mac_pool() + while True: + hwaddr = normalize_hwaddr(mac_pool.get_addr()) + if not self.host.get_dev_by_hwaddr(hwaddr): + self.orig_hwaddr = hwaddr + break + + bridges = self.host.get_network_bridges() + if self.network in bridges: + net_ctl = bridges[self.network] + else: + bridges[self.network] = net_ctl = VirtNetCtl() + net_ctl.init() + + net_name = net_ctl.get_name() + + logging.info("Creating virtual device with hwaddr='%s' on machine %s", + self.orig_hwaddr, self.host.get_id()) + + domain_ctl.attach_interface(self.orig_hwaddr, + net_name, + self.virt_driver) + # The sleep here is necessary, because udev sometimes renames the + # newly created device + sleep(1) + + def destroy(self): + logging.info("Destroying virtual device with hwaddr='%s' on machine %s", + self.orig_hwaddr, self.host.get_id()) + + domain_ctl = self.host.get_domain_ctl() + domain_ctl.detach_interface(self.orig_hwaddr) diff --git a/lnst/Devices/VlanDevice.py b/lnst/Devices/VlanDevice.py new file mode 100644 index 0000000..00cb63d --- /dev/null +++ 
b/lnst/Devices/VlanDevice.py @@ -0,0 +1,35 @@ +""" +Defines the VlanDevice class + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.SoftDevice import SoftDevice + +class VlanDevice(SoftDevice): + _name_template = "t_vlan" + + def __init__(self, ifmanager, *args, **kwargs): + super(VlanDevice, self).__init__(ifmanager, *args, **kwargs) + + self._real_dev = kwargs["real_dev"] + self._vlan_id = int(kwargs["vlan_id"]) + + @property + def real_dev(self): + return self._real_dev + + @property + def vlan_id(self): + return self._vlan_id + + def create(self): + exec_cmd("ip link add link %s %s type vlan id %d" %\ + (self.real_dev.name, self.name, self.vlan_id)) diff --git a/lnst/Devices/VtiDevice.py b/lnst/Devices/VtiDevice.py new file mode 100644 index 0000000..53f6712 --- /dev/null +++ b/lnst/Devices/VtiDevice.py @@ -0,0 +1,71 @@ +""" +Defines the VtiDevice and Vti6Device classes. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. 
+""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.Device import DeviceError +from lnst.Devices.SoftDevice import SoftDevice + +class _BaseVtiDevice(SoftDevice): + def __init__(self, ifmanager, *args, **kwargs): + super(_BaseVtiDevice, self).__init__(ifmanager, args, kwargs) + + self._key = kwargs["key"] + self._local = kwargs.get("local", None) + self._remote = kwargs.get("remote", None) + self._device = kwargs.get("dev", None) + + if self.local is None and self.remote is None: + raise DeviceError("One of local/remote MUST be defined.") + + @property + def key(self): + return self._key + + @property + def local(self): + return self._local + + @property + def remote(self): + return self._remote + + @property + def device(self): + return self._device + + @property + def vti_type(self): + raise NotImplementedError + + def create(self): + exec_cmd("ip link add {name} type {type}{local}{remote}{key}{device}". + format(name=self.name, + type=self.vti_type, + local=(" local " + str(self.local) + if self.local + else ""), + remote=(" remote " + str(self.remote) + if self.remote + else ""), + key=" key " + self.key, + device=(" dev " + self.device.name + if self.device + else ""))) + + +class VtiDevice(_BaseVtiDevice): + vti_type = "vti" + _name_template = "t_vti" + +class Vti6Device(_BaseVtiDevice): + vti_type = "vti6" + _name_template = "t_ip6vti" diff --git a/lnst/Devices/VxlanDevice.py b/lnst/Devices/VxlanDevice.py new file mode 100644 index 0000000..079ced8 --- /dev/null +++ b/lnst/Devices/VxlanDevice.py @@ -0,0 +1,65 @@ +""" +Defines the VxlanDevice class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. 
+""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.ExecCmd import exec_cmd +from lnst.Devices.Device import DeviceError +from lnst.Devices.SoftDevice import SoftDevice + +class VxlanDevice(SoftDevice): + _name_template = "t_vxlan" + + def __init__(self, ifmanager, *args, **kwargs): + super(VxlanDevice, self).__init__(ifmanager, args, kwargs) + + self._vxlan_id = int(kwargs["vxlan_id"]) + self._real_dev = kwargs.get("real_dev", None) + self._group_ip = kwargs.get("group_ip", None) + self._remote_ip = kwargs.get("remote_ip", None) + self._dstport = int(kwargs.get("dst_port", 0)) + + if self.group_ip is None and self.remote_ip is None: + raise DeviceError("group or remote must be specified for vxlan") + + @property + def real_dev(self): + return self._real_dev + + @property + def vxlan_id(self): + return self._vxlan_id + + @property + def group_ip(self): + return self._group_ip + + @property + def remote_ip(self): + return self._remote_ip + + @property + def dst_port(self): + return self._dst_port + + def create(self): + dev_param = "dev %s" % self.real_dev.name if self.real_dev else "" + + if self.group_ip: + group_or_remote = "group %s" % self.group_ip + elif self.remote_ip: + group_or_remote = "remote %s" % self.remote_ip + + exec_cmd("ip link add %s type vxlan id %d %s %s dstport %d" + % (self.name, + self.vxlan_id, + dev_param, + group_or_remote, + self.dstport)) diff --git a/lnst/Devices/__init__.py b/lnst/Devices/__init__.py new file mode 100644 index 0000000..2f8d4d1 --- /dev/null +++ b/lnst/Devices/__init__.py @@ -0,0 +1,43 @@ +from lnst.Devices.Device import Device, DeviceError +from lnst.Devices.BridgeDevice import BridgeDevice +from lnst.Devices.OvsBridgeDevice import OvsBridgeDevice +from lnst.Devices.BondDevice import BondDevice +from lnst.Devices.TeamDevice import TeamDevice +from lnst.Devices.MacvlanDevice import MacvlanDevice +from lnst.Devices.VlanDevice import VlanDevice +from lnst.Devices.VxlanDevice import 
VxlanDevice +from lnst.Devices.VtiDevice import VtiDevice, Vti6Device +from lnst.Devices.VethDevice import VethDevice, PairedVethDevice +from lnst.Devices.VethPair import VethPair +from lnst.Devices.VirtualDevice import VirtualDevice +from lnst.Devices.RemoteDevice import RemoteDevice, remotedev_decorator + +device_classes = [ + ("Device", Device), + ("BridgeDevice", BridgeDevice), + ("OvsBridgeDevice", OvsBridgeDevice), + ("MacvlanDevice", MacvlanDevice), + ("VlanDevice", VlanDevice), + ("VxlanDevice", VxlanDevice), + ("VethDevice", VethDevice), + ("PairedVethDevice", PairedVethDevice), + ("VtiDevice", VtiDevice), + ("Vti6Device", Vti6Device), + ("BondDevice", BondDevice), + ("TeamDevice", TeamDevice)] + +class Devices(object): + def __init__(self, host): + self._host = host + + def __iter__(self): + for x in self._host._device_database.values(): + if isinstance(x, RemoteDevice): + yield x + +for name, cls in device_classes: + globals()[name] = remotedev_decorator(cls) + +#Remove the PairedVethDevice from globals... doesn't make sense to use it on +#its own, not even for isinstance... VethDevice works fine for that +del globals()["PairedVethDevice"]
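The enable()/disable() behavior that Device implements through `__getattribute__` in the diff above (method calls silently become no-ops while the object is disabled, so the Device backing the Controller-Slave connection cannot be torn down by accident) can be sketched in isolation. This is a simplified, Python 3-style model with invented class names, not the actual LNST code:

```python
class Toggleable:
    """Minimal model of the Device enable()/disable() mechanism."""
    def __init__(self):
        self._enabled = True

    def enable(self):
        self._enabled = True

    def disable(self):
        self._enabled = False

    def __getattribute__(self, name):
        what = object.__getattribute__(self, name)
        if not callable(what):
            return what
        # enable/disable must stay callable so a disabled object can be
        # re-enabled later
        if (object.__getattribute__(self, "_enabled") or
                name in ("enable", "disable")):
            return what

        def noop(*args, **kwargs):
            pass
        return noop

class Eth(Toggleable):
    def up(self):
        return "up"

dev = Eth()
assert dev.up() == "up"
dev.disable()
assert dev.up() is None        # method call is silently a no-op
assert dev._enabled is False   # attribute access still works
dev.enable()
assert dev.up() == "up"
```

Note that `object.__getattribute__` is used inside the hook to read `_enabled` without recursing back into the override.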
From: Ondrej Lichtner olichtne@redhat.com
This module defines the DeviceReq and HostReq classes, which can be used to create a global description of Requirements for a network test. You can use these to define class attributes of a BaseRecipe derived class to specify "general" requirements for that Recipe, or you can add them to an instance of a Recipe derived class based on its parameters to define requirements "specific" to that single test run.
The module also defines a Requirements class which serves as a container for HostReq objects, while HostReq objects in turn serve as containers for DeviceReq objects. The object tree created this way is translated into a dictionary that the internal LNST matching algorithm matches against the available machines.
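The requirement tree and its translation into the matching dictionary can be sketched with stand-in classes. This is a simplified model of the `_to_dict()` methods in the diff below — the class names mirror the patch, but the real implementation routes parameters through lnst.Common.Parameters and scans class attributes as well:

```python
class DeviceReq:
    """Stand-in for lnst.Controller.Requirements.DeviceReq."""
    def __init__(self, label, **params):
        self.label = label
        self.params = params

    def _to_dict(self):
        return {"network": self.label, "params": dict(self.params)}

class HostReq:
    """Stand-in for lnst.Controller.Requirements.HostReq."""
    def __init__(self, **params):
        self.params = params

    def _to_dict(self):
        res = {"interfaces": {}, "params": dict(self.params)}
        # collect every DeviceReq assigned to this host requirement
        for dev_id, dev in vars(self).items():
            if isinstance(dev, DeviceReq):
                res["interfaces"][dev_id] = dev._to_dict()
        return res

# A host with one required ethernet device on network "net1"
m1 = HostReq(arch="x86_64")
m1.eth0 = DeviceReq(label="net1", driver="ixgbe")

assert m1._to_dict() == {
    "interfaces": {
        "eth0": {"network": "net1", "params": {"driver": "ixgbe"}},
    },
    "params": {"arch": "x86_64"},
}
```

The resulting nested dictionary is the shape the matching algorithm consumes: one entry per required interface keyed by its id, plus the host-level parameters.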
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Requirements.py | 113 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 lnst/Controller/Requirements.py
diff --git a/lnst/Controller/Requirements.py b/lnst/Controller/Requirements.py new file mode 100644 index 0000000..ffefdfb --- /dev/null +++ b/lnst/Controller/Requirements.py @@ -0,0 +1,113 @@ +""" +This module defines the DeviceReq and HostReq classes, which can be used to +create a global description of Requirements for a network test. You can use +these to define class attributes of a BaseRecipe derived class to specify +"general" requirements for that Recipe, or you can add them to an instance of a +Recipe derived class based on its parameters to define requirements "specific" +to that single test run. + +The module also specifies a Requirements class which serves as a container for +HostReq objects, while HostReq classes also serve as containers for DeviceReq +objects. The object tree created this way is translated to a dictionary used by +the internal LNST matching algorithm against available machines. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.LnstError import LnstError +from lnst.Common.Parameters import Parameters, Param + +class RequirementError(LnstError): + pass + +class HostReq(object): + """Specifies a Slave machine requirement + + To define a Host requirement you assign a HostReq instance to a class + attribute of a BaseRecipe derived class. 
+ Example: + class MyRecipe(BaseRecipe): + m1 = HostReq() + + Args: + kwargs -- any other arguments will be treated as arbitrary string + parameters that will be matched to parameters of Slave machines + which can define their parameter values based on the implementation + of the SlaveMachineParser + """ + def __init__(self, **kwargs): + self.params = Parameters() + for name, val in kwargs.items(): + if name == "params": + raise RequirementError("'params' is a reserved keyword.") + p = Param() + p.val = val + setattr(self.params, name, p) + + def __iter__(self): + for x in dir(self): + val = getattr(self, x) + if isinstance(val, DeviceReq): + yield (x, val) + + def _to_dict(self): + res = {'interfaces': {}, 'params': {}} + for dev_id, dev in self: + res['interfaces'][dev_id] = dev._to_dict() + res['params'] = self.params._to_dict() + return res + +class DeviceReq(object): + """Specifies an Ethernet Device requirement + + To define a Device requirement you assign a DeviceReq instance to a HostReq + instance in a BaseRecipe derived class. + Example: + class MyRecipe(BaseRecipe): + m1 = HostReq() + m1.eth0 = DeviceReq(label="net1") + + Args: + label -- string value indicating the network the Device is connected to + kwargs -- any other arguments will be treated as arbitrary string + parameters that will be matched to parameters of Slave machines + which can define their parameter values based on the implementation + of the SlaveMachineParser + """ + def __init__(self, label, **kwargs): + self.label = label + self.params = Parameters() + for name, val in kwargs.items(): + if name == "params": + raise RequirementError("'params' is a reserved keyword.") + p = Param() + p.val = val + setattr(self.params, name, p) + + def _to_dict(self): + res = {'network': self.label, + 'params': self.params._to_dict()} + return res + +class _Requirements(object): + """Hosts a copy of requirements for a Recipe instance + + Internally used class. 
+ """ + def _to_dict(self): + res = {} + for h_id, host in self: + res[h_id] = host._to_dict() + return res + + def __iter__(self): + for x in dir(self): + val = getattr(self, x) + if isinstance(val, HostReq): + yield (x, val)
From: Ondrej Lichtner olichtne@redhat.com
Defines the Job class, representing the tester-facing API for manipulating remotely running tasks. Documented in the class docstrings.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Job.py | 197 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 197 insertions(+) create mode 100644 lnst/Controller/Job.py
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py new file mode 100644 index 0000000..985364e --- /dev/null +++ b/lnst/Controller/Job.py @@ -0,0 +1,197 @@ +""" +Defines the Job class, representing the tester facing API for manipulating +remotely running tasks. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import logging +import signal +import copy_reg +from lnst.Common.JobError import JobError +from lnst.Common.TestModule import BaseTestModule + +class Job(object): + """Tester facing Job API + + Objects of this class are created when a tester calls the 'run' method of + a Host object. A Job object can represent both a remotely running task (a + background job) or a remote task that already finished. + Example: + job = m1.run("ls ~/") + print job.stdout + """ + def __init__(self, host, what, + expect=True, json=False, netns=None, desc=None): + self._host = host + self._what = what + self._expect = expect + self._json = json + self._netns = netns + self._desc = desc + + self._res = None + + if isinstance(what, BaseTestModule): + self._type = "module" + elif type(what) == str: + self._type = "shell" + else: + raise JobError("Unable to run '%s'" % str(what)) + + self._id = None + + @property + def stdout(self): + """standard output of the Job + + Type: string + Only applicable for Jobs running a shell command + """ + try: + return self._res["res_data"]["stdout"] + except: + return "" + + @property + def stderr(self): + """standard error output of the Job + + Type: string + Only applicable for Jobs running a shell command + """ + try: + return self._res["res_data"]["stderr"] + except: + return "" + + @property + def result(self): + """result of the Job + + Type: + depends on the type of the job. For python modules it is whatever + the module sets as the _res_data attribute. 
+        For shell commands it is a dictionary with stdout and stderr.
+        """
+        try:
+            return self._res["res_data"]
+        except:
+            return None
+
+    @property
+    def passed(self):
+        """Indicates whether or not the Job passed
+
+        Type: Boolean
+        """
+        try:
+            return self._res["passed"]
+        except:
+            return False
+
+    @property
+    def finished(self):
+        """Indicates whether or not the Job finished running
+
+        Type: Boolean
+        """
+        if self._res is not None:
+            return True
+        else:
+            return False
+
+    @property
+    def netns(self):
+        """name of the network namespace the Job is running in
+
+        Not relevant yet, as network namespaces aren't supported.
+        """
+        return self._netns
+
+    @property
+    def id(self):
+        """id of the job
+
+        Used internally by the Machine class to identify results coming
+        from the slave.
+        TODO make private?
+        """
+        return self._id
+
+    @id.setter
+    def id(self, val):
+        if self._id is not None:
+            raise Exception("Id already set")
+        self._id = val
+
+    def wait(self, timeout=0):
+        """waits for the Job to finish for the specified amount of time
+
+        Args:
+            timeout -- integer value indicating how long to wait for.
+                Default is 0, meaning wait forever. Don't use for infinitely
+                running Jobs.
+                If non-zero, LNST uses a timed SIGALRM signal to return from
+                this method.
+        Returns:
+            True if the Job finished, False if the Job is still running and
+            the wait method just timed out.
+        """
+        if self.finished:
+            return True
+        if timeout < 0:
+            raise JobError("Negative timeout value not allowed.")
+        return self._host.wait_for_job(self, timeout)
+
+    def kill(self, signal=signal.SIGKILL):
+        """send specified signal to the remotely running Job process
+
+        Args:
+            signal -- integer value of the signal to be sent
+                Default is SIGKILL
+        Returns:
+            True if the Job finished before the signal was sent or if the
+            signal was sent successfully.
+            False if an exception was raised while sending the signal.
+ """ + logging.info("Sending signal {} to job {} on host {}".format(signal, + self._id, self._host.get_id())) + return self._host.kill(self, signal) + + def _to_dict(self): + d = {"job_id": self._id, + "type": self._type, + "json": self._json} + if self._type == "shell": + d["command"] = self._what + elif self._type == "module": + d["module"] = self._what + else: + raise JobError("Unknown Job type %s" % self._type) + return d + + def __str__(self): + attrs = ["type(%s)" % self._type] + + if self._type == "module": + attrs.append("module(%s)" % self._what.__class__.__name__) + elif self._type == "shell": + attrs.append("command(%s)" % self._what) + + if self._netns is not None: + attrs.append("netns(%s)" % self._netns) + + if not self._expect: + attrs.append("expecting FAIL") + + return ", ".join(attrs)
From: Ondrej Lichtner olichtne@redhat.com
Defines the Host class that acts as the tester-facing API for manipulating and working with remote machines running the LNST Slave. Documented in the class docstrings.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
--- v2: * removed debug pprint import * added docstrings to requested methods * Host::run throws an exception when netns set (not supported yet) * added TODO comments to some parts that need to be implemented --- lnst/Controller/Host.py | 162 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 162 insertions(+) create mode 100644 lnst/Controller/Host.py
diff --git a/lnst/Controller/Host.py b/lnst/Controller/Host.py new file mode 100644 index 0000000..23cdadf --- /dev/null +++ b/lnst/Controller/Host.py @@ -0,0 +1,162 @@ +""" +Defines the Host class that acts as the tester facing API to manipulate and +work with remote machines running the LNST Slave. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import logging +from lnst.Common.Colours import decorate_with_preset +from lnst.Common.Parameters import Parameters +from lnst.Common.TestModule import BaseTestModule +from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT +from lnst.Devices import Devices +from lnst.Devices.VirtualDevice import VirtualDevice +from lnst.Devices.RemoteDevice import RemoteDevice +from lnst.Controller.Common import ControllerError +from lnst.Controller.Job import Job + +class HostError(ControllerError): + pass + +class Hosts(object): + """Container object for Host class instances + + Implements the __iter__ method to allow iterating Host objects. Created + automatically by the LNST Controller class and provided to the tester + in the test() method of a BaseRecipe class as the 'matched' attribute. + """ + def __iter__(self): + for x in dir(self): + val = getattr(self, x) + if isinstance(val, Host): + yield val + +class Host(object): + """Tester facing slave Host API + + Objects of this class are created by the Controller and provided to the + Recipe object to use from it's 'test()' method. This tester facing API + allows the tester to create new Devices and run Jobs on the remote Host. 
+ Example: + m1.bond0 = Bond() # to create a new bond device + m1.run("ip a") # to run a shell command + """ + #TODO add packet capture options + def __init__(self, host, **kwargs): + self._host = host + self.params = Parameters() + self.params._from_dict(self._host._slave_desc) + + self.devices = Devices(self._host) + self._device_mapping = {} + + def __getattr__(self, name): + """direct access to Device objects + + All mapped devices of a Host are directly accessible as attributes of + the Host objects. This is implemented by this __getattr__ override""" + try: + return self._device_mapping[name] + except: + raise AttributeError("%s object has no attribute named %r" % + (self.__class__.__name__, name)) + + def __setattr__(self, name, value): + """allows for dynamic creation of devices + + During execution of the recipes 'test' method, a tester can create new + soft devices by assigning a Device object to the Host object instance, + this is implemented by overriding this __setattr__ method. It also + handles VirtualDevice creation before recipe execution (virtual match). + """ + if isinstance(value, VirtualDevice): + # TODO creation of VirtualDevices should be disabled during test + # execution, it's commented out right now because I haven't found + # a good solution yet... + # msg = "Creating VirtualDevices in recipe execution is "\ + # "not supported right now." + # raise HostError(msg) + if name in self._device_mapping: + raise HostError("Device with name '%s' already assigned." % name) + + value.host = self._host + self._host.add_tmp_device(value) + value.create() + self._host.wait_for_tmp_devices(DEFAULT_TIMEOUT) + + self._device_mapping[name] = value + elif isinstance(value, RemoteDevice): + if name in self._device_mapping: + raise HostError("Device with name '%s' already assigned." 
% name)
+
+            if value.if_index is None:
+                value.host = self._host
+                self._host.create_remote_device(value)
+            self._device_mapping[name] = value
+        else:
+            super(Host, self).__setattr__(name, value)
+
+    def _map_device(self, dev_id, how):
+        hwaddr = how["hwaddr"]
+        dev = self._host.get_dev_by_hwaddr(hwaddr)
+        self._device_mapping[dev_id] = dev
+
+    def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT,
+            json=False, netns=None, desc=None):
+        """
+        Args:
+            what (mandatory) -- what should be run on the host. Can be either
+                a string that will be executed on the Host as a shell command,
+                or a TestModule object.
+            bg -- run in background flag. Default 'False'. When True, the
+                method will return immediately after the Job request is sent
+                to the Slave Host.
+            fail -- default 'False'. If True, a Failure will be reported as
+                PASS.
+            timeout -- time limit in seconds. Default is 60. Only respected
+                for jobs running in the foreground (background Jobs don't
+                have a time limit).
+            json -- process JSON output into a dictionary. Default 'False'.
+            netns -- run in the specified network namespace. Currently not
+                functional.
+            desc -- description printed in logs. Accepts a string value.
+
+        Returns:
+            a Job object that acts as a handle to access the remote Job. If
+            the Job was run in the foreground, the returned Job object will
+            be filled with result data. If the Job was run in the background,
+            the immediately returned Job object can be used to manipulate the
+            running Job remotely, and when the result data arrives from the
+            Slave the Job object will be automatically updated.
+ """ + #TODO support network namespaces + if netns is not None: + raise HostError("netns parameter not supported yet.") + + job = Job(self._host, what, expect=not fail, json=json, netns=netns, + desc=desc) + + try: + self._host.run_job(job) + + if not bg: + if not job.wait(timeout): + logging.debug("Killing timed-out job") + job.kill() + except: + raise + finally: + pass + #TODO check expect result here + # if bg=True: + # add "job started" result + # else: + # add job result + + return job
From: Ondrej Lichtner olichtne@redhat.com
Defines the CtlConfig class. This is part of an effort to move away from a single global lnst_config object to a non-global config object that uses different schemes for the Controller and the Slave.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Config.py | 99 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 lnst/Controller/Config.py
diff --git a/lnst/Controller/Config.py b/lnst/Controller/Config.py new file mode 100644 index 0000000..b82787c --- /dev/null +++ b/lnst/Controller/Config.py @@ -0,0 +1,99 @@ +""" +Defines the CtlConfig class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import os +import sys +from lnst.Common.Config import DefaultRPCPort, Config + +class CtlConfig(Config): + """Configuration scheme used by the Controller""" + def _init_options(self): + self._options['environment'] = dict() + self._options['environment']['mac_pool_range'] = {\ + "value" : ['52:54:01:00:00:01', '52:54:01:FF:FF:FF'], + "additive" : False, + "action" : self.optionMacRange, + "name" : "mac_pool_range"} + self._options['environment']['rpcport'] = {\ + "value" : DefaultRPCPort, + "additive" : False, + "action" : self.optionPort, + "name" : "rpcport"} + self._options['environment']['tool_dirs'] = {\ + "value" : [], + "additive" : True, + "action" : self.optionDirList, + "name" : "test_tool_dirs"} + self._options['environment']['module_dirs'] = {\ + "value" : [], + "additive" : True, + "action" : self.optionDirList, + "name" : "test_module_dirs"} + self._options['environment']['log_dir'] = {\ + "value" : os.path.abspath(os.path.join( + os.path.dirname(sys.argv[0]), './Logs')), + "additive" : False, + "action" : self.optionPath, + "name" : "log_dir"} + self._options['environment']['resource_dir'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPath, + "name" : "resource_dir"} + self._options['environment']['xslt_url'] = { + "value" : "http://www.lnst-project.org/files/result_xslt/xml_to_html.xsl", + "additive" : False, + "action" : self.optionPlain, + "name" : "xslt_url" + } + self._options['environment']['allow_virtual'] = { + "value" : False, + "additive" : False, + "action" : self.optionBool, + "name" 
: "allow_virtual" + } + + self._options['perfrepo'] = dict() + self._options['perfrepo']['url'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPlain, + "name" : "url" + } + self._options['perfrepo']['username'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPlain, + "name" : "username" + } + self._options['perfrepo']['password'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPlain, + "name" : "password" + } + + self._options['pools'] = dict() + + self._options['security'] = dict() + self._options['security']['identity'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPlain, + "name" : "identity"} + self._options['security']['privkey'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPath, + "name" : "privkey"} + + self.colours_scheme()
From: Ondrej Lichtner olichtne@redhat.com
Defines the SlaveConfig class. This is part of an effort to move away from a single global lnst_config object to a non-global config object that uses different schemes for the Controller and the Slave.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/Config.py | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 73 insertions(+) create mode 100644 lnst/Slave/Config.py
diff --git a/lnst/Slave/Config.py b/lnst/Slave/Config.py new file mode 100644 index 0000000..e749528 --- /dev/null +++ b/lnst/Slave/Config.py @@ -0,0 +1,73 @@ +""" +Defines the SlaveConfig class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import os +import sys +from lnst.Common.Config import DefaultRPCPort, Config + +class SlaveConfig(Config): + def _init_options(self): + self._options['environment'] = dict() + self._options['environment']['log_dir'] = {\ + "value" : os.path.abspath(os.path.join( + os.path.dirname(sys.argv[0]), './Logs')), + "additive" : False, + "action" : self.optionPath, + "name" : "log_dir"} + self._options['environment']['use_nm'] = {\ + "value" : True, + "additive" : False, + "action" : self.optionBool, + "name" : "use_nm"} + self._options['environment']['rpcport'] = {\ + "value" : DefaultRPCPort, + "additive" : False, + "action" : self.optionPort, + "name" : "rpcport"} + + self._options['cache'] = dict() + self._options['cache']['dir'] = {\ + "value" : os.path.abspath(os.path.join( + os.path.dirname(sys.argv[0]), './cache')), + "additive" : False, + "action" : self.optionPath, + "name" : "cache_dir"} + + self._options['cache']['expiration_period'] = {\ + "value" : 7*24*60*60, # 1 week + "additive" : False, + "action" : self.optionTimeval, + "name" : "expiration_period"} + + self._options['security'] = dict() + self._options['security']['auth_types'] = {\ + "value" : "none", + "additive" : False, + "action" : self.optionPlain, #TODO list?? 
+ "name" : "auth_types"} + self._options['security']['auth_password'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPlain, + "name" : "auth_password"} + self._options['security']['privkey'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPath, + "name" : "privkey"} + self._options['security']['ctl_pubkeys'] = {\ + "value" : "", + "additive" : False, + "action" : self.optionPath, + "name" : "ctl_pubkeys"} + + self.colours_scheme()
From: Ondrej Lichtner olichtne@redhat.com
Defines the MachineMapper class. Implements a matching algorithm that maps requirements to available hosts. This specific class implements it with backtracking; however, testers are free to implement their own algorithm as long as they respect the API of this class, since it needs to integrate with the rest of LNST.
Most of this class is based on the SetupMapper class and includes some functionality from the old SlavePool class, but has been modified to work with Python Recipes.
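The backtracking idea, trying a pool machine for each requirement and undoing the choice on conflict, can be reduced to a few lines. This is a deliberately simplified model that ignores interfaces and network labels, not the MachineMapper implementation itself:

```python
def match(reqs, pool):
    """Map each requirement id to a distinct, compatible pool machine.

    reqs and pool are dicts of id -> {"params": {...}}; a pool machine is
    compatible when it carries every required parameter with an equal
    value. Returns an id -> id mapping, or None when no assignment exists.
    """
    req_ids = list(reqs)

    def compatible(req_id, machine):
        return all(machine["params"].get(k) == v
                   for k, v in reqs[req_id]["params"].items())

    def backtrack(i, used, mapping):
        if i == len(req_ids):
            return dict(mapping)
        req_id = req_ids[i]
        for m_id, machine in pool.items():
            if m_id not in used and compatible(req_id, machine):
                used.add(m_id)
                mapping[req_id] = m_id
                found = backtrack(i + 1, used, mapping)
                if found is not None:
                    return found
                used.remove(m_id)   # undo and try the next candidate
                del mapping[req_id]
        return None

    return backtrack(0, set(), {})


pool = {"hostA": {"params": {"arch": "x86_64"}},
        "hostB": {"params": {"arch": "ppc64"}}}
reqs = {"m1": {"params": {"arch": "ppc64"}},
        "m2": {"params": {}}}
print(match(reqs, pool))
```

The real class additionally matches interfaces per machine, keeps a network-label mapping consistent across hosts, and iterates over multiple pools, but the push/undo/pop skeleton is the same.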
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
--- v2: * docstring now explains that implementing your own MachineMapper is not supported at this moment * removed debug pprint import --- lnst/Controller/MachineMapper.py | 328 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 328 insertions(+) create mode 100644 lnst/Controller/MachineMapper.py
diff --git a/lnst/Controller/MachineMapper.py b/lnst/Controller/MachineMapper.py
new file mode 100644
index 0000000..8f98ef9
--- /dev/null
+++ b/lnst/Controller/MachineMapper.py
@@ -0,0 +1,328 @@
+"""
+Defines the MachineMapper class.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import logging
+from lnst.Controller.Common import ControllerError
+
+class MapperError(ControllerError):
+    pass
+
+class MachineMapper(object):
+    """Implements a matching algorithm that maps requirements to available hosts
+
+    In this specific class this is implemented with backtracking; testers
+    are free to implement their own algorithm as long as they respect the
+    API of this class, since it needs to integrate with the rest of LNST.
+
+    Since the API is not fully defined yet and depends on the interaction
+    with the SlavePoolManager (which also needs a fully defined API),
+    implementing your own MachineMapper class is not recommended yet.
+    However, since the Controller class accepts the 'mapper' parameter, it
+    is possible if done properly.
+
+    TODO The interface will be separated into an abstract class to clearly
+    define the required API. ABC?
+    """
+    def __init__(self):
+        self._pools = {}
+        self._pool_stack = []
+        self._pool = {}
+        self._pool_name = None
+        self._mreqs = {}
+        self._unmatched_req_machines = []
+        self._matched_pool_machines = []
+        self._machine_stack = []
+        self._net_label_mapping = {}
+        self._virtual_matching = False
+
+    def set_requirements(self, mreqs):
+        """set the requirements to be used by the matching algorithm
+
+        This should be a specially formatted dictionary for the matching
+        to work.
+        TODO Should probably be reworked to work with something more
+        flexible.
+ """ + self._mreqs = mreqs + + def set_pools(self, pools): + """set the pools to be used by the matching algorithm + + This needs to be a specially formatted dictionary returned by get_pools + method of a SlavePoolManager class. + """ + self._pools = pools + + def reset_match_state(self): + """resets the state of the backtracking algorithm""" + self._net_label_mapping = {} + self._machine_stack = [] + self._unmatched_req_machines = sorted(self._mreqs.keys(), reverse=True) + + self._pool_stack = list(self._pools.keys()) + if len(self._pool_stack) > 0: + self._pool_name = self._pool_stack.pop() + self._pool = self._pools[self._pool_name] + + self._unmatched_pool_machines = [] + for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True): + if self._virtual_matching: + if "libvirt_domain" in p_machine["params"]: + self._unmatched_pool_machines.append(p_id) + else: + self._unmatched_pool_machines.append(p_id) + + if len(self._pool) > 0 and len(self._mreqs) > 0: + self._push_machine_stack() + + def matches(self, **kwargs): + """Generator method which calls the matching algorithm + + Args: + multimatch -- if False or not specified, will only return the first + match. Otherwise repeated calls of this method will return + more possible mappings until no more are possible. + + Returns: + The matched mapping or requirements to pool Machines. 
+ """ + logging.info("Matching machines, without virtuals.") + self.reset_match_state() + matched = False + + while self._match(): + matched = True + yield self.get_mapping() + if "multimatch" not in kwargs or not kwargs["multimatch"]: + return + + if "allow_virt" in kwargs and kwargs["allow_virt"]: + logging.info("Match failed for normal machines, falling back "\ + "to matching virtual machines.") + self._virtual_matching = True + self.reset_match_state() + while self._match(): + matched = True + yield self.get_mapping() + if "multimatch" not in kwargs or not kwargs["multimatch"]: + return + if not matched: + msg = "This setup cannot be provisioned with the current pool." + raise MapperError(msg) + + def _match(self): + logging.info("Trying match with pool: %s" % self._pool_name) + while len(self._machine_stack)>0: + stack_top = self._machine_stack[-1] + if self._virtual_matching and stack_top["virt_matched"]: + if stack_top["current_match"] != None: + cur_match = stack_top["current_match"] + self._unmatched_pool_machines.append(cur_match) + stack_top["current_match"] = None + stack_top["virt_matched"] = False + + if self._if_match(): + if len(self._unmatched_req_machines) > 0: + self._push_machine_stack() + continue + else: + return True + else: + #unmap the pool machine + if stack_top["current_match"] != None: + cur_match = stack_top["current_match"] + self._unmatched_pool_machines.append(cur_match) + stack_top["current_match"] = None + + mreq_m_id = stack_top["m_id"] + while len(stack_top["remaining_matches"]) > 0: + pool_m_id = stack_top["remaining_matches"].pop() + if self._check_machine_compatibility(mreq_m_id, pool_m_id): + #map compatible pool machine + stack_top["current_match"] = pool_m_id + stack_top["unmatched_pool_ifs"] = \ + sorted(self._pool[pool_m_id]["interfaces"].keys(), + reverse=True) + self._unmatched_pool_machines.remove(pool_m_id) + break + + if stack_top["current_match"] != None: + #clear if mapping + stack_top["if_stack"] = [] + #next 
iteration will match the interfaces + if not self._virtual_matching: + self._push_if_stack() + continue + else: + self._pop_machine_stack() + if len(self._machine_stack) == 0 and\ + len(self._pool_stack) > 0: + logging.info("Match with pool %s not found." % + self._pool_name) + self._pool_name = self._pool_stack.pop() + self._pool = self._pools[self._pool_name] + logging.info("Trying match with pool: %s" % + self._pool_name) + + self._unmatched_pool_machines = [] + for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True): + if self._virtual_matching: + if "libvirt_domain" in p_machine["params"]: + self._unmatched_pool_machines.append(p_id) + else: + self._unmatched_pool_machines.append(p_id) + + if len(self._pool) > 0 and len(self._mreqs) > 0: + self._push_machine_stack() + continue + return False + + def _if_match(self): + m_stack_top = self._machine_stack[-1] + if_stack = m_stack_top["if_stack"] + + if self._virtual_matching: + if m_stack_top["current_match"] != None: + m_stack_top["virt_matched"] = True + return True + else: + return False + + while len(if_stack) > 0: + stack_top = if_stack[-1] + + req_m = self._mreqs[m_stack_top["m_id"]] + pool_m = self._pool[m_stack_top["current_match"]] + req_if = req_m["interfaces"][stack_top["if_id"]] + req_net_label = req_if["network"] + + if stack_top["current_match"] != None: + cur_match = stack_top["current_match"] + m_stack_top["unmatched_pool_ifs"].append(cur_match) + pool_if = pool_m["interfaces"][cur_match] + pool_net_label = pool_if["network"] + net_label_mapping = self._net_label_mapping[req_net_label] + if net_label_mapping == (pool_net_label, m_stack_top["m_id"], + stack_top["if_id"]): + del self._net_label_mapping[req_net_label] + stack_top["current_match"] = None + + while len(stack_top["remaining_matches"]) > 0: + pool_if_id = stack_top["remaining_matches"].pop() + pool_if = pool_m["interfaces"][pool_if_id] + if self._check_interface_compatibility(req_if, pool_if): + #map compatible interfaces + 
stack_top["current_match"] = pool_if_id + if req_net_label not in self._net_label_mapping: + self._net_label_mapping[req_net_label] =\ + (pool_if["network"], + m_stack_top["m_id"], + stack_top["if_id"]) + m_stack_top["unmatched_pool_ifs"].remove(pool_if_id) + break + + if stack_top["current_match"] != None: + if len(m_stack_top["unmatched_ifs"]) > 0: + self._push_if_stack() + continue + else: + return True + else: + self._pop_if_stack() + continue + return False + + def _push_machine_stack(self): + machine_match = {} + machine_match["m_id"] = self._unmatched_req_machines.pop() + machine_match["current_match"] = None + machine_match["remaining_matches"] = list(self._unmatched_pool_machines) + machine_match["if_stack"] = [] + + machine = self._mreqs[machine_match["m_id"]] + machine_match["unmatched_ifs"] = sorted(machine["interfaces"].keys(), + reverse=True) + machine_match["unmatched_pool_ifs"] = [] + + if self._virtual_matching: + machine_match["virt_matched"] = False + + self._machine_stack.append(machine_match) + + def _pop_machine_stack(self): + stack_top = self._machine_stack.pop() + self._unmatched_req_machines.append(stack_top["m_id"]) + + def _push_if_stack(self): + m_stack_top = self._machine_stack[-1] + if_match = {} + if_match["if_id"] = m_stack_top["unmatched_ifs"].pop() + if_match["current_match"] = None + if_match["remaining_matches"] = list(m_stack_top["unmatched_pool_ifs"]) + + m_stack_top["if_stack"].append(if_match) + + def _pop_if_stack(self): + m_stack_top = self._machine_stack[-1] + if_stack_top = m_stack_top["if_stack"].pop() + m_stack_top["unmatched_ifs"].append(if_stack_top["if_id"]) + + def _check_machine_compatibility(self, req_id, pool_id): + req_machine = self._mreqs[req_id] + pool_machine = self._pool[pool_id] + for param, value in req_machine["params"].iteritems(): + if param not in pool_machine["params"] or\ + value != pool_machine["params"][param]: + return False + return True + + def _check_interface_compatibility(self, req_if, 
pool_if): + label_mapping = self._net_label_mapping + for req_label, mapping in label_mapping.iteritems(): + if req_label == req_if["network"] and\ + mapping[0] != pool_if["network"]: + return False + if mapping[0] == pool_if["network"] and\ + req_label != req_if["network"]: + return False + for param, value in req_if["params"].iteritems(): + if param not in pool_if["params"] or\ + value != pool_if["params"][param]: + return False + return True + + def get_mapping(self): + mapping = {"machines": {}, "networks": {}, "virtual": False, + "pool_name": self._pool_name} + + for req_label, label_map in self._net_label_mapping.iteritems(): + mapping["networks"][req_label] = label_map[0] + + for machine in self._machine_stack: + m_map = mapping["machines"][machine["m_id"]] = {} + + m_map["target"] = machine["current_match"] + + hostname = self._pool[m_map["target"]]["params"]["hostname"] + m_map["hostname"] = hostname + + interfaces = m_map["interfaces"] = {} + if_stack = machine["if_stack"] + for interface in if_stack: + i = interfaces[interface["if_id"]] = {} + i["target"] = interface["current_match"] + pool_if = self._pool[m_map["target"]]["interfaces"][i["target"]] + i["hwaddr"] = pool_if["params"]["hwaddr"] + + + if self._virtual_matching: + mapping["virtual"] = True + return mapping
From: Ondrej Lichtner olichtne@redhat.com
Changed Machine object initialization to work with the new SlavePoolManager instead of the old SlavePool class.
This means two changes: * the Machine object now uses a CtlConfig object provided by the SlavePoolManager * the MessageDispatcher object is also provided by the SlavePoolManager during initialization instead of later.
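Both changes are plain constructor injection: instead of reaching for a global, the collaborators arrive as arguments. An abbreviated model (not the full Machine signature; `FakeConfig` is a hypothetical stand-in for CtlConfig):

```python
class FakeConfig(object):
    """Hypothetical stand-in for CtlConfig."""
    def get_option(self, section, option):
        return {("environment", "rpcport"): 9999}[(section, option)]


class MachineSketch(object):
    """Abbreviated model: collaborators arrive as constructor arguments."""
    def __init__(self, m_id, hostname, msg_dispatcher, ctl_config,
                 rpcport=None):
        self._id = m_id
        self._hostname = hostname
        self._msg_dispatcher = msg_dispatcher   # no longer assigned later
        self._ctl_config = ctl_config           # no longer a global
        # fall back to the configured port when none is given explicitly
        self._port = rpcport or ctl_config.get_option("environment",
                                                      "rpcport")


m = MachineSketch("m1", "host.example.com", object(), FakeConfig())
print(m._port)  # -> 9999
```

Injecting the config also makes Machine testable without touching process-wide state, which is the point of retiring the global lnst_config object.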
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Machine.py | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index e4d759a..539952e 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -19,7 +19,6 @@ import signal from time import sleep from xmlrpclib import Binary from functools import wraps -from lnst.Common.Config import lnst_config from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.Utils import wait_for, create_tar_archive from lnst.Common.Utils import check_process_running @@ -44,18 +43,19 @@ class Machine(object): deconfiguration, and running commands. """
- def __init__(self, m_id, hostname=None, libvirt_domain=None, rpcport=None, - security=None): + def __init__(self, m_id, hostname, msg_dispatcher, ctl_config, + libvirt_domain=None, rpcport=None, security=None): self._id = m_id self._hostname = hostname + self._ctl_config = ctl_config self._slave_desc = None self._connection = None self._configured = False self._system_config = {} self._security = security - self._security["identity"] = lnst_config.get_option("security", + self._security["identity"] = ctl_config.get_option("security", "identity") - self._security["privkey"] = lnst_config.get_option("security", + self._security["privkey"] = ctl_config.get_option("security", "privkey")
self._domain_ctl = None @@ -67,9 +67,9 @@ class Machine(object): if rpcport: self._port = rpcport else: - self._port = lnst_config.get_option('environment', 'rpcport') + self._port = ctl_config.get_option('environment', 'rpcport')
- self._msg_dispatcher = None + self._msg_dispatcher = msg_dispatcher self._mac_pool = None
self._interfaces = [] @@ -249,7 +249,7 @@ class Machine(object):
slave_version = slave_desc["lnst_version"] slave_is_git = self.is_git_version(slave_version) - ctl_version = lnst_config.version + ctl_version = self._ctl_config.version ctl_is_git = self.is_git_version(ctl_version) if slave_version != ctl_version: if ctl_is_git and slave_is_git:
From: Ondrej Lichtner olichtne@redhat.com
Defines the MessageDispatcher class used by the Controller to multiplex communication from all the connected Slave machines.
In addition to that it defines functions used by the MessageDispatcher to transparently translate Device objects while communicating with the Slave.
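The translation is a structural recursion over the command arguments. A minimal sketch of the Controller-to-Slave direction, with stand-in DeviceRef/RemoteDevice classes (not the real LNST ones):

```python
class DeviceRef(object):
    """Stand-in: a serializable reference identified by interface index."""
    def __init__(self, if_index):
        self.if_index = if_index

class RemoteDevice(object):
    """Stand-in: the Controller-side handle for a Slave device."""
    def __init__(self, if_index):
        self.if_index = if_index

def remote_device_to_deviceref(obj):
    # replace RemoteDevice objects with DeviceRefs, recursing into
    # dicts, lists and tuples; everything else passes through unchanged
    if isinstance(obj, RemoteDevice):
        return DeviceRef(obj.if_index)
    elif isinstance(obj, dict):
        return {k: remote_device_to_deviceref(v) for k, v in obj.items()}
    elif isinstance(obj, list):
        return [remote_device_to_deviceref(v) for v in obj]
    elif isinstance(obj, tuple):
        return tuple(remote_device_to_deviceref(v) for v in obj)
    return obj

args = {"iface": RemoteDevice(2), "extra": [1, RemoteDevice(3)]}
translated = remote_device_to_deviceref(args)
```

The opposite direction additionally needs the target machine's device database to resolve an if_index back to a live device object, which is why deviceref_to_remote_device in the patch takes a machine argument.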
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/MessageDispatcher.py | 188 +++++++++++++++++++++++++++++++++++ 1 file changed, 188 insertions(+) create mode 100644 lnst/Controller/MessageDispatcher.py
diff --git a/lnst/Controller/MessageDispatcher.py b/lnst/Controller/MessageDispatcher.py new file mode 100644 index 0000000..a6d7f44 --- /dev/null +++ b/lnst/Controller/MessageDispatcher.py @@ -0,0 +1,188 @@ +""" +Defines the MessageDispatcher class used by the Controller to multiplex +communication from all the connected Slave machines. + +In addition to that it defines functions used by the MessageDispatcher to +transparently translate Device objects while communicating with the Slave. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import logging +from lnst.Common.ConnectionHandler import send_data, recv_data +from lnst.Common.ConnectionHandler import ConnectionHandler +from lnst.Common.DeviceRef import DeviceRef +from lnst.Controller.Common import ControllerError +from lnst.Devices.RemoteDevice import RemoteDevice + +def deviceref_to_remote_device(machine, obj): + if isinstance(obj, DeviceRef): + dev = machine.dev_db_get_if_index(obj.if_index) + return dev + elif isinstance(obj, dict): + new_dict = {} + for key, value in obj.items(): + new_dict[key] = deviceref_to_remote_device(machine, + value) + return new_dict + elif isinstance(obj, list): + new_list = [] + for value in obj: + new_list.append(deviceref_to_remote_device(machine, + value)) + return new_list + elif isinstance(obj, tuple): + new_list = [] + for value in obj: + new_list.append(deviceref_to_remote_device(machine, + value)) + return tuple(new_list) + else: + return obj + +def remote_device_to_deviceref(obj): + if isinstance(obj, RemoteDevice): + return DeviceRef(obj.if_index) + elif isinstance(obj, dict): + new_dict = {} + for key, value in obj.items(): + new_dict[key] = remote_device_to_deviceref(value) + return new_dict + elif isinstance(obj, list): + new_list = [] + for value in obj: + 
new_list.append(remote_device_to_deviceref(value)) + return new_list + elif isinstance(obj, tuple): + new_list = [] + for value in obj: + new_list.append(remote_device_to_deviceref(value)) + return tuple(new_list) + else: + return obj + +class ConnectionError(ControllerError): + pass + +class MessageDispatcher(ConnectionHandler): + def __init__(self, log_ctl): + super(MessageDispatcher, self).__init__() + self._log_ctl = log_ctl + self._machines = dict() + + def add_slave(self, machine, connection): + self._machines[machine] = machine + self.add_connection(machine, connection) + + def send_message(self, machine, data): + soc = self.get_connection(machine) + + if data["type"] == "command": + data["args"] = remote_device_to_deviceref(data["args"]) + data["kwargs"] = remote_device_to_deviceref(data["kwargs"]) + + if send_data(soc, data) == False: + msg = "Connection error from slave %s" % machine.get_id() + raise ConnectionError(msg) + + def wait_for_result(self, machine): + wait = True + while wait: + connected_slaves = self._connection_mapping.keys() + + messages = self.check_connections() + + remaining_slaves = self._connection_mapping.keys() + + for msg in messages: + if msg[1]["type"] == "result" and msg[0] == machine: + wait = False + result = msg[1]["result"] + + machine = self._machines[machine] + result = deviceref_to_remote_device(machine, + result) + else: + self._process_message(msg) + + if connected_slaves != remaining_slaves: + disconnected_slaves = set(connected_slaves) -\ + set(remaining_slaves) + msg = "Slaves " + str(list(disconnected_slaves)) + \ + " disconnected from the controller." 
+ raise ConnectionError(msg) + + return result + + def wait_for_finish(self, machine, job_id): + wait = True + while wait: + connected_slaves = self._connection_mapping.keys() + + messages = self.check_connections() + + remaining_slaves = self._connection_mapping.keys() + + for msg in messages: + self._process_message(msg) + if msg[1]["type"] == "job_finished" and msg[0] == machine: + wait = False + + if connected_slaves != remaining_slaves: + disconnected_slaves = set(connected_slaves) -\ + set(remaining_slaves) + msg = "Slaves " + str(list(disconnected_slaves)) + \ + " disconnected from the controller." + raise ConnectionError(msg) + return True + + def handle_messages(self): + connected_slaves = self._connection_mapping.keys() + + messages = self.check_connections() + + remaining_slaves = self._connection_mapping.keys() + + for msg in messages: + self._process_message(msg) + + if connected_slaves != remaining_slaves: + disconnected_slaves = set(connected_slaves) -\ + set(remaining_slaves) + msg = "Slaves " + str(list(disconnected_slaves)) + \ + " disconnected from the controller." 
+ raise ConnectionError(msg) + return True + + def _process_message(self, message): + if message[1]["type"] == "log": + record = message[1]["record"] + self._log_ctl.add_client_log(message[0].get_id(), record) + elif message[1]["type"] == "result": + msg = "Received result message from a different slave %s" % message[0].get_id() + logging.debug(msg) + elif message[1]["type"] == "dev_created": + machine = self._machines[message[0]] + machine.device_created(message[1]["dev_data"]) + elif message[1]["type"] == "dev_deleted": + machine = self._machines[message[0]] + machine.device_delete(message[1]) + elif message[1]["type"] == "exception": + raise message[1]["Exception"] + elif message[1]["type"] == "job_finished": + machine = self._machines[message[0]] + machine.job_finished(message[1]) + else: + msg = "Unknown message type: %s" % message[1]["type"] + raise ConnectionError(msg) + + def disconnect_slave(self, machine): + soc = self.get_connection(machine) + self.remove_connection(soc) + del self._machines[machine]
From: Ondrej Lichtner olichtne@redhat.com
Defines the SlavePoolManager class that takes care of loading pools and checking machine availability. Most of the SlavePoolManager class is copied over from the old SlavePool class, but modified to work with Python recipes.
The idea is to later generalize the SlavePoolManager class into an interface that supports custom, tester-provided pool manager classes (e.g. to support different pool types or SlaveMachine description files).
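The availability check in this patch probes all slaves concurrently with non-blocking connects and select(). A self-contained sketch of that pattern (the function name and dict layout are illustrative, not the LNST API):

```python
import select
import socket

def check_available(endpoints, timeout=1.0):
    """Probe {m_id: (host, port)} pairs concurrently with non-blocking
    connects; return {m_id: bool} availability, as the pool check does."""
    pending = {}
    for m_id, (host, port) in endpoints.items():
        s = socket.socket()
        s.setblocking(False)
        try:
            s.connect((host, port))  # EINPROGRESS is expected here
        except (BlockingIOError, OSError):
            pass
        pending[s] = m_id
    result = {}
    while pending:
        _, writable, _ = select.select([], list(pending), [], timeout)
        if not writable:
            # nothing completed within the timeout: mark the rest down
            for s, m_id in pending.items():
                result[m_id] = False
                s.close()
            break
        for s in writable:
            # SO_ERROR is 0 on a successful connect, errno otherwise
            err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            result[pending.pop(s)] = (err == 0)
            s.close()
    return result

# probe a local listener and a port that is almost certainly closed
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
avail = check_available({"up": ("127.0.0.1", port),
                         "down": ("127.0.0.1", 1)})
srv.close()
```

Probing all machines in parallel keeps the pool check time bounded by the slowest host instead of the sum of all connection timeouts.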
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/SlavePoolManager.py | 273 ++++++++++++++++++++++++++++++++++++ 1 file changed, 273 insertions(+) create mode 100644 lnst/Controller/SlavePoolManager.py
diff --git a/lnst/Controller/SlavePoolManager.py b/lnst/Controller/SlavePoolManager.py new file mode 100644 index 0000000..dc61e11 --- /dev/null +++ b/lnst/Controller/SlavePoolManager.py @@ -0,0 +1,273 @@ +""" +This module contains the implementation of the SlavePoolManager class that +takes care of loading pools and checking machine availability. + +Most of the SlavePoolManager class is copied over from the old SlavePool class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import logging +import os +import re +import socket +import select +from lnst.Common.NetUtils import normalize_hwaddr +from lnst.Controller.Common import ControllerError +from lnst.Controller.Machine import Machine +from lnst.Controller.SlaveMachineParser import SlaveMachineParser +from lnst.Common.Colours import decorate_with_preset +from lnst.Common.Utils import check_process_running + +class PoolManagerError(ControllerError): + pass + +class SlavePoolManager: + """ + This class is responsible for managing test machines that + are available at the controller and can be used for testing.
+ """ + def __init__(self, pools, msg_dispatcher, ctl_config, pool_checks=True): + self._map = {} + self._pools = {} + self._pool = {} + self._msg_dispatcher = msg_dispatcher + self._ctl_config = ctl_config + + self._allow_virt = ctl_config.get_option("environment", + "allow_virtual") + self._allow_virt &= check_process_running("libvirtd") + self._pool_checks = pool_checks + + logging.info("Checking machine pool availability.") + for pool_name, pool_dir in pools.items(): + self._pools[pool_name] = {} + self.add_dir(pool_name, pool_dir) + if len(self._pools[pool_name]) == 0: + del self._pools[pool_name] + + self._machines = {} + for pool_name, machines in self._pools.items(): + pool = self._machines[pool_name] = {} + for m_id, m_spec in machines.items(): + params = m_spec["params"] + + hostname = params["hostname"] + + if "libvirt_domain" in params: + libvirt_domain = params["libvirt_domain"] + else: + libvirt_domain = None + + if "rpc_port" in params: + rpc_port = params["rpc_port"] + else: + rpc_port = None + + pool[m_id] = Machine(m_id, hostname, self._msg_dispatcher, + ctl_config, libvirt_domain, rpc_port, + m_spec["security"]) + #TODO check if all described devices are available + + logging.info("Finished loading pools.") + + def get_pools(self): + return self._pools + + def get_pool(self, pool_name): + return self._pools[pool_name] + + def get_machine_pools(self): + return self._machines + + def get_machine_pool(self, pool_name): + return self._machines[pool_name] + + def add_dir(self, pool_name, dir_path): + logging.info("Processing pool '%s', directory '%s'" % (pool_name, + dir_path)) + pool = self._pools[pool_name] + + try: + dentries = os.listdir(dir_path) + except OSError: + logging.warn("Directory '%s' does not exist for pool '%s'" % + (dir_path, + pool_name)) + return + + for dirent in dentries: + m_id, m = self.add_file(pool_name, dir_path, dirent) + if m_id != None and m != None: + pool[m_id] = m + + if len(pool) == 0: + logging.warn("No machines 
found in pool '%s', directory '%s'" % + (pool_name, + dir_path)) + + max_len = 0 + for m_id in pool.keys(): + if len(m_id) > max_len: + max_len = len(m_id) + + if self._pool_checks: + check_sockets = {} + for m_id, m in sorted(pool.iteritems()): + hostname = m["params"]["hostname"] + if "rpc_port" in m["params"]: + port = int(m["params"]["rpc_port"]) + else: + port = self._ctl_config.get_option('environment', 'rpcport') + + logging.debug("Querying machine '%s': %s:%s" %\ + (m_id, hostname, port)) + + s = socket.socket() + s.settimeout(0) + try: + s.connect((hostname, port)) + except: + pass + check_sockets[s] = m_id + + while len(check_sockets) > 0: + rl, wl, el = select.select([], check_sockets.keys(), []) + for s in wl: + err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) + m_id = check_sockets[s] + if err == 0: + pool[m_id]["available"] = True + s.shutdown(socket.SHUT_RDWR) + s.close() + del check_sockets[s] + else: + pool[m_id]["available"] = False + s.close() + del check_sockets[s] + else: + for m_id in pool.keys(): + pool[m_id]["available"] = True + + for m_id in sorted(list(pool.keys())): + m = pool[m_id] + if m["available"]: + if 'libvirt_domain' in m['params']: + libvirt_msg = " libvirt_domain: %s" %\ + m['params']['libvirt_domain'] + else: + libvirt_msg = "" + msg = "%s%s [%s] %s" % (m_id, (max_len - len(m_id)) * " ", + decorate_with_preset("UP", "pass"), + libvirt_msg) + else: + msg = "%s%s [%s]" % (m_id, (max_len - len(m_id)) * " ", + decorate_with_preset("DOWN", "fail")) + del pool[m_id] + + logging.info(msg) + + def add_file(self, pool_name, dir_path, dirent): + filepath = dir_path + "/" + dirent + pool = self._pools[pool_name] + if os.path.isfile(filepath) and re.search(".xml$", filepath, re.I): + dirname, basename = os.path.split(filepath) + m_id = re.sub(".[xX][mM][lL]$", "", basename) + + parser = SlaveMachineParser(filepath, self._ctl_config) + xml_data = parser.parse() + machine_spec = self._process_machine_xml_data(m_id, xml_data) + + if 
'libvirt_domain' in machine_spec['params'] and \ + not self._allow_virt: + logging.debug("libvirtd not running disabled. "\ + "Removing libvirt_domain from "\ + "machine '%s'" % m_id) + del machine_spec['params']['libvirt_domain'] + + # Check if there isn't any machine with the same + # hostname or libvirt_domain already in the pool + for pm_id, m in pool.iteritems(): + pm = m["params"] + rm = machine_spec["params"] + if pm["hostname"] == rm["hostname"]: + msg = "You have the same machine listed twice in " \ + "your pool ('%s' and '%s')." % (m_id, pm_id) + raise PoolManagerError(msg) + + if "libvirt_domain" in rm and "libvirt_domain" in pm and \ + pm["libvirt_domain"] == rm["libvirt_domain"]: + msg = "You have the same libvirt_domain listed twice in " \ + "your pool ('%s' and '%s')." % (m_id, pm_id) + raise PoolManagerError(msg) + + return (m_id, machine_spec) + return (None, None) + + def _process_machine_xml_data(self, m_id, machine_xml_data): + machine_spec = {"interfaces": {}, "params":{}, "security": {}} + + # process parameters + if "params" in machine_xml_data: + for param in machine_xml_data["params"]: + name = str(param["name"]) + value = str(param["value"]) + machine_spec["params"][name] = value + + mandatory_params = ["hostname"] + for p in mandatory_params: + if p not in machine_spec["params"]: + msg = "Mandatory parameter '%s' missing for machine %s." \ + % (p, m_id) + raise PoolManagerError(msg, machine_xml_data["params"]) + + # process interfaces + if "interfaces" in machine_xml_data: + for iface in machine_xml_data["interfaces"]: + if_id = iface["id"] + iface_spec = self._process_iface_xml_data(m_id, iface) + + if if_id not in machine_spec["interfaces"]: + machine_spec["interfaces"][if_id] = iface_spec + else: + msg = "Duplicate interface id '%s'." % if_id + raise PoolManagerError(msg, iface) + else: + if "libvirt_domain" not in machine_spec["params"]: + msg = "Machine '%s' has no testing interfaces. 
" \ + "This setup is supported only for virtual slaves." \ + % m_id + raise PoolManagerError(msg, machine_xml_data) + + machine_spec["security"] = machine_xml_data["security"] + + return machine_spec + + def _process_iface_xml_data(self, m_id, iface): + if_id = iface["id"] + iface_spec = {"params": {}} + iface_spec["network"] = iface["network"] + + for param in iface["params"]: + name = str(param["name"]) + value = str(param["value"]) + + if name == "hwaddr": + iface_spec["params"][name] = normalize_hwaddr(value) + else: + iface_spec["params"][name] = value + + mandatory_params = ["hwaddr"] + for p in mandatory_params: + if p not in iface_spec["params"]: + msg = "Mandatory parameter '%s' missing for machine %s, " \ + "interface '%s'." % (p, m_id, if_id) + raise PoolManagerError(msg, iface["params"]) + + return iface_spec
From: Ondrej Lichtner olichtne@redhat.com
Module implementing the BaseRecipe class. Every LNST Recipe written by testers should inherit from this class. An LNST Recipe is composed of several parts: * Requirements definition * Parameter definition (optional) * Test definition
Further documentation can be found in the class docstring.
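The class-attribute copying that BaseRecipe.__init__ performs can be illustrated with a toy version (Param and MiniRecipe are simplified stand-ins, not the real lnst.Common.Parameters classes):

```python
import copy

class Param(object):
    """Stand-in parameter descriptor with a mandatory flag and default."""
    def __init__(self, mandatory=False, default=None):
        self.mandatory = mandatory
        self.set = default is not None
        self.val = default

class MiniRecipe(object):
    """Toy BaseRecipe.__init__: deep-copy Param class attributes into
    per-instance storage, then load values from kwargs and check
    that mandatory parameters were given."""
    def __init__(self, **kwargs):
        self.params = {}
        for attr in dir(type(self)):
            val = getattr(type(self), attr)
            if isinstance(val, Param):
                # copy so instances never share mutable class state
                self.params[attr] = copy.deepcopy(val)
        for name, val in kwargs.items():
            if name not in self.params:
                raise KeyError("Unknown parameter {}".format(name))
            self.params[name].val = val
            self.params[name].set = True
        for name, p in self.params.items():
            if p.mandatory and not p.set:
                raise ValueError("Parameter {} is mandatory".format(name))

class PingRecipe(MiniRecipe):
    count = Param(default=3)
    dst = Param(mandatory=True)

r = PingRecipe(dst="192.168.1.1")
```

The real implementation does the same for HostReq attributes, storing the copies on a Requirements object so an instance can adjust its requirements without touching the class.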
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
--- v2: * renamed variable 'x' to 'attr' --- lnst/Controller/Recipe.py | 98 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 lnst/Controller/Recipe.py
diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py new file mode 100644 index 0000000..d4f33f9 --- /dev/null +++ b/lnst/Controller/Recipe.py @@ -0,0 +1,98 @@ +""" +Module implementing the BaseRecipe class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import copy +from lnst.Common.Parameters import Parameters, Param +from lnst.Controller.Requirements import _Requirements, HostReq +from lnst.Controller.Host import Hosts, Host +from lnst.Controller.Common import ControllerError + +class RecipeError(ControllerError): + """Exception thrown by the BaseRecipe class""" + pass + +class BaseRecipe(object): + """BaseRecipe class + + Every LNST Recipe written by testers should inherit from this class. + An LNST Recipe is composed of several parts: + * Requirements definition - you define recipe requirements in a derived + class by defining class attributes of the HostReq type. You can further + specify Ethernet Device requirements by defining DeviceReq attributes + of the HostReq object. + Example: + m1 = HostReq(arch="x86_64") + m1.eth0 = DeviceReq(hwaddr="52:54:00:12:34:56") + * Parameter definition (optional) - you can define parameters of your Recipe + by defining class attributes of the Param type (or inherited). These + parameters can then be accessed from the test() method to change its + behaviour. Parameter validity (type) is checked during the + instantiation of the Recipe object by the base __init__ method. + You can define your own __init__ method to implement more complex + Parameter checking if needed, but you MUST call the base __init__ + method first. + * Test definition - this is done by defining the test() method, in this + method the tester has direct access to mapped LNST slave Hosts, can + manipulate them and implement his tests.
+ + Attributes: + matched -- when running the Recipe the Controller will fill this + attribute with a Hosts object after the Mapper finds suitable slave + hosts. + req -- instantiated Requirements object, you can optionally change the + Recipe requirements through this object during runtime (e.g. + variable number of hosts or devices of a host based on a Parameter) + params -- instantiated Parameters object, can be used to access the + calculated parameters during Recipe initialization/execution + """ + def __init__(self, **kwargs): + """ + The __init__ method does 2 things: + * copies Requirements -- since Requirements are defined as class + attributes, we need to copy the objects to avoid conflicts with + multiple instances of the same class etc... + The copied objects are stored under a Requirements object available + through the 'req' attribute. This way you can optionally change the + Requirements of an instantiated Recipe. + * copies and instantiates Parameters -- Parameters are also class + attributes so they need to be copied into a Parameters() object + (accessible in the 'params' attribute). + Next, the copied objects are loaded with values from kwargs + and checked if mandatory Parameters have values. 
+ """ + self.matched = None + self.req = _Requirements() + self.params = Parameters() + for attr in dir(self): + val = getattr(self, attr) + if isinstance(val, HostReq): + setattr(self.req, attr, copy.deepcopy(val)) + elif isinstance(val, Param): + setattr(self.params, attr, copy.deepcopy(val)) + + for name, val in kwargs.items(): + try: + param = getattr(self.params, name) + param.val = val + except: + raise RecipeError("Unknown parameter {}".format(name)) + + for name, param in self.params: + if param.mandatory and not param.set: + raise RecipeError("Parameter {} is mandatory".format(name)) + + def _set_hosts(self, hosts): + self.matched = hosts + + def test(self): + """Method to be implemented by the Tester""" + raise NotImplementedError("Method test must be defined by a child class.")
From: Ondrej Lichtner olichtne@redhat.com
This module defines the Controller class that brings together individual implementation parts of an LNST Controller. When instantiated, it allows the tester to configure and run his own recipes with the LNST 'infrastructure'.
This way, the lnst.Controller package is a Python library allowing the creation of your own LNST Controller.
In theory, multiple instances of the Controller class are possible, so you could run multiple recipes in parallel, though this is not yet supported and has not been tested.
More information is available in the docstrings of the Controller class.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Controller.py | 220 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 220 insertions(+) create mode 100644 lnst/Controller/Controller.py
diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py new file mode 100644 index 0000000..7848750 --- /dev/null +++ b/lnst/Controller/Controller.py @@ -0,0 +1,220 @@ +""" +This module defines the Controller class that brings together individual +implementation parts of an LNST Controller. When instantiated, it allows the +tester to configure and run his own recipes with the LNST 'infrastructure'. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import os +import sys +import datetime +import logging +import socket +from lnst.Common.Logs import LoggingCtl +from lnst.Common.NetUtils import MacPool +from lnst.Common.LnstError import LnstError +from lnst.Common.Utils import mkdir_p +from lnst.Devices import VirtualDevice +from lnst.Controller.Common import ControllerError +from lnst.Controller.Config import CtlConfig +from lnst.Controller.MessageDispatcher import MessageDispatcher +from lnst.Controller.SlavePoolManager import SlavePoolManager +from lnst.Controller.MachineMapper import MachineMapper +from lnst.Controller.Host import Hosts, Host +from lnst.Controller.Requirements import DeviceReq +from lnst.Controller.Recipe import BaseRecipe + +class Controller(object): + """The LNST Controller class + + Most importantly allows the tester to run instantiated Recipe tests using + the LNST infrastructure. + + Can be configured with custom implementation of several objects used for + setting up the infrastructure. 
+ """ + + def __init__(self, poolMgr=SlavePoolManager, mapper=MachineMapper, + config=None, pools=[], pool_checks=True, debug=0): + """ + Args: + poolMgr -- class that implements the SlavePoolManager interface + will be instantiated by the Controller to provide the mapper + with pools available for matching, also handles the creation + of Machine objects (internal LNST class used to access the + slave hosts) + mapper -- class that implements the MachineMapper interface + will be instantiated by the Controller to match Recipe + requirements to the available pools + config -- optional LNST configuration object, if None the + Controller will load it's own configuration from default paths + pools -- a list of pool names to restrict the used pool directories + pool_checks -- boolean (default True), if False will disable + checking online status of Slaves + debug -- integer (default 0), sets debug level of LNST + """ + self._config = self._load_ctl_config(config) + config = self._config + + mac_pool_range = config.get_option('environment', 'mac_pool_range') + self._mac_pool = MacPool(mac_pool_range[0], mac_pool_range[1]) + self._log_ctl = LoggingCtl(debug, + log_dir=config.get_option('environment','log_dir'), + log_subdir=datetime.datetime.now(). + strftime("%Y-%m-%d_%H:%M:%S"), + colours=not config.get_option("colours", "disable_colours")) + + self._msg_dispatcher = MessageDispatcher(self._log_ctl) + + self._network_bridges = {} + self._mapper = mapper() + + select_pools = {} + conf_pools = config.get_pools() + if len(pools) > 0: + for pool_name in pools: + if pool_name in conf_pools: + select_pools[pool_name] = conf_pools[pool_name] + elif len(pools) == 1 and os.path.isdir(pool_name): + select_pools = {"cmd_line_pool": pool_name} + else: + raise ControllerError("Pool %s does not exist!" 
% pool_name) + else: + select_pools = conf_pools + + self._pools = poolMgr(select_pools, self._msg_dispatcher, config, + pool_checks) + + def run(self, recipe, **kwargs): + """Execute the provided Recipe + + This method takes care of both finding a Slave hosts matching the Recipe + requirements, provisioning them and calling the 'test' method of the + Recipe object with proper references to the mapped Hosts + + Args: + recipe -- an instantiated Recipe object (isinstance BaseRecipe) + kwargs -- optional keyword arguments passed to the configured Mapper + """ + if not isinstance(recipe, BaseRecipe): + raise ControllerError("recipe argument must be a BaseRecipe instance.") + + req = recipe.req + + self._mapper.set_pools(self._pools.get_pools()) + self._mapper.set_requirements(req._to_dict()) + + i = 0 + for match in self._mapper.matches(**kwargs): + self._log_ctl.set_recipe(recipe.__class__.__name__, + expand="match_%d" % i) + i += 1 + + self._print_match_description(match) + self._map_match(match, req) + try: + recipe._set_hosts(self._hosts) + recipe.test() + except LnstError as exc: + logging.error("Recipe execution terminated by unexpected exception") + raise + finally: + recipe._set_hosts(None) + for machine in self._machines.values(): + machine.restore_system_config() + self._cleanup_slaves() + + def _map_match(self, match, requested): + self._machines = {} + self._hosts = Hosts() + pool = self._pools.get_machine_pool(match["pool_name"]) + for m_id, m in match["machines"].items(): + machine = self._machines[m_id] = pool[m["target"]] + + machine.set_id(m_id) + self._prepare_machine(machine) + + setattr(self._hosts, m_id, Host(machine)) + host = getattr(self._hosts, m_id) + for if_id, i in m["interfaces"].items(): + host._map_device(if_id, i) + + if match["virtual"]: + req_host = getattr(requested, m_id) + for name, dev in req_host: + new_virt_dev = VirtualDevice(network=dev.label, + driver=dev.params.driver, + hwaddr=dev.params.hwaddr) + setattr(host, name, 
new_virt_dev) + + def _prepare_machine(self, machine): + self._log_ctl.add_slave(machine.get_id()) + machine.set_mac_pool(self._mac_pool) + machine.set_network_bridges(self._network_bridges) + + recipe_name = os.path.basename(sys.argv[0]) + machine.set_recipe(recipe_name) + + def _cleanup_slaves(self): + if self._machines == None: + return + + for m_id, machine in self._machines.iteritems(): + machine.cleanup() + #clean-up slave logger + self._log_ctl.remove_slave(m_id) + + self._machines.clear() + + # remove dynamically created bridges + for bridge in self._network_bridges.itervalues(): + bridge.cleanup() + self._network_bridges = {} + + def _load_ctl_config(self, config): + if isinstance(config, CtlConfig): + return config + else: + config = CtlConfig() + try: + config.load_config('/etc/lnst-ctl.conf') + except: + pass + + usr_cfg = os.path.expanduser('~/.lnst/lnst-ctl.conf') + if os.path.isfile(usr_cfg): + config.load_config(usr_cfg) + else: + usr_cfg_dir = os.path.dirname(usr_cfg) + pool_dir = usr_cfg_dir + "/pool" + mkdir_p(pool_dir) + global_pools = config.get_section("pools") + if (len(global_pools) == 0): + config.add_pool("default", pool_dir, usr_cfg) + with open(usr_cfg, 'w') as f: + f.write(config.dump_config()) + + dirname = os.path.dirname(sys.argv[0]) + gitcfg = os.path.join(dirname, "lnst-ctl.conf") + if os.path.isfile(gitcfg): + config.load_config(gitcfg) + + return config + + def _print_match_description(self, match): + logging.info("Pool match description:") + if match["virtual"]: + logging.info(" Setup is using virtual machines.") + for m_id, m in sorted(match["machines"].iteritems()): + logging.info(" host \"%s\" uses \"%s\"" % (m_id, m["target"])) + for if_id, match in m["interfaces"].iteritems(): + pool_id = match["target"] + logging.info(" interface \"%s\" matched to \"%s\"" %\ + (if_id, pool_id))
From: Ondrej Lichtner olichtne@redhat.com
All LNST-related exceptions should inherit from the LnstError base exception class.
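The point of a common base class is that callers can catch every LNST-specific failure with a single handler. A condensed sketch of the hierarchy this series establishes (class bodies trimmed to the relationship itself):

```python
class LnstError(Exception):
    """Base class for all LNST exceptions."""
    pass

class ControllerError(LnstError):
    """Base for Controller-side errors."""
    pass

class MachineError(ControllerError):
    """Raised for failures on a specific slave machine."""
    pass

# one `except LnstError` now catches any LNST-specific failure,
# while unrelated exceptions (KeyError, OSError, ...) still propagate
caught = None
try:
    raise MachineError("cleanup failed")
except LnstError as exc:
    caught = exc
```

This is also why BgCommandException is rebased onto CommandException in the patch: the hierarchy stays a single tree rooted at LnstError.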
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Config.py | 3 ++- lnst/Common/ExecCmd.py | 3 ++- lnst/Common/NetTestCommand.py | 5 +++-- lnst/Common/ResourceCache.py | 3 ++- lnst/Common/SecureSocket.py | 3 ++- lnst/Common/ShellProcess.py | 3 ++- lnst/Common/TestsCommon.py | 3 ++- lnst/Controller/Machine.py | 5 +++-- lnst/Controller/Task.py | 4 +++- lnst/Controller/VirtUtils.py | 3 ++- lnst/RecipeCommon/ModuleWrap.py | 33 +++++++++++++++++---------------- 11 files changed, 40 insertions(+), 28 deletions(-)
diff --git a/lnst/Common/Config.py b/lnst/Common/Config.py index d47f3e2..04d6f9b 100644 --- a/lnst/Common/Config.py +++ b/lnst/Common/Config.py @@ -18,10 +18,11 @@ from lnst.Common.Utils import bool_it from lnst.Common.NetUtils import verify_mac_address from lnst.Common.Colours import get_preset_conf from lnst.Common.Version import LNSTMajorVersion +from lnst.Common.LnstError import LnstError
DefaultRPCPort = 9999
-class ConfigError(Exception): +class ConfigError(LnstError): pass
class Config(): diff --git a/lnst/Common/ExecCmd.py b/lnst/Common/ExecCmd.py index 24715ae..6489766 100644 --- a/lnst/Common/ExecCmd.py +++ b/lnst/Common/ExecCmd.py @@ -12,8 +12,9 @@ jpirko@redhat.com (Jiri Pirko)
import logging import subprocess +from lnst.Common.LnstError import LnstError
-class ExecCmdFail(Exception): +class ExecCmdFail(LnstError): _cmd = None _retval = None _stderr = None diff --git a/lnst/Common/NetTestCommand.py b/lnst/Common/NetTestCommand.py index f91cc37..ad4fe8f 100644 --- a/lnst/Common/NetTestCommand.py +++ b/lnst/Common/NetTestCommand.py @@ -21,6 +21,7 @@ from time import time from lnst.Common.ExecCmd import exec_cmd, ExecCmdFail from lnst.Common.ConnectionHandler import send_data from lnst.Common.Logs import log_exc_traceback +from lnst.Common.LnstError import LnstError
DEFAULT_TIMEOUT = 60
@@ -64,7 +65,7 @@ def str_command(command):
return ", ".join(attrs)
-class CommandException(Exception): +class CommandException(LnstError): """Base class for client errors.""" def __init__(self, command): self.command = command @@ -72,7 +73,7 @@ class CommandException(Exception): def __str__(self): return "CommandException: " + str(self.command)
-class BgCommandException(Exception): +class BgCommandException(CommandException): """Base class for background command errors.""" def __init__(self, str): self._str = str diff --git a/lnst/Common/ResourceCache.py b/lnst/Common/ResourceCache.py index 98558a2..2037a0c 100644 --- a/lnst/Common/ResourceCache.py +++ b/lnst/Common/ResourceCache.py @@ -15,10 +15,11 @@ import re import time import shutil from lnst.Common.ExecCmd import exec_cmd +from lnst.Common.LnstError import LnstError
SETUP_SCRIPT_NAME = "lnst-setup.sh"
-class ResourceCacheError(Exception): +class ResourceCacheError(LnstError): pass
class ResourceCache(object): diff --git a/lnst/Common/SecureSocket.py b/lnst/Common/SecureSocket.py index b856238..84ff04a 100644 --- a/lnst/Common/SecureSocket.py +++ b/lnst/Common/SecureSocket.py @@ -20,6 +20,7 @@ import cPickle import hashlib import hmac from lnst.Common.Utils import not_imported +from lnst.Common.LnstError import LnstError
DH_GROUP = {"p": int("0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"\ "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"\ @@ -71,7 +72,7 @@ SRP_GROUP["p_size"] = bit_length(SRP_GROUP["p"])/8 if bit_length(SRP_GROUP["p"])%8: SRP_GROUP["p_size"] += 1
-class SecSocketException(Exception): +class SecSocketException(LnstError): pass
cryptography = not_imported diff --git a/lnst/Common/ShellProcess.py b/lnst/Common/ShellProcess.py index a4928ed..58c4bd1 100644 --- a/lnst/Common/ShellProcess.py +++ b/lnst/Common/ShellProcess.py @@ -13,9 +13,10 @@ import pty, os, termios, time, signal, re, select import logging, atexit from lnst.Common.Utils import wait_for from lnst.Common.ProcessManager import ProcessManager +from lnst.Common.LnstError import LnstError
class ShellProcess: - class ProcessError(Exception): + class ProcessError(LnstError): def __init__(self, patterns, output): Exception.__init__(self, patterns, output) self.patterns = patterns diff --git a/lnst/Common/TestsCommon.py b/lnst/Common/TestsCommon.py index accb6b8..471d720 100644 --- a/lnst/Common/TestsCommon.py +++ b/lnst/Common/TestsCommon.py @@ -16,6 +16,7 @@ import os import signal import time from lnst.Common.NetTestCommand import NetTestCommandGeneric +from lnst.Common.LnstError import LnstError
class testLogger(logging.Logger): def __init__(self, name, level=logging.NOTSET): @@ -50,7 +51,7 @@ try: finally: logging._releaseLock()
-class TestOptionMissing(Exception): +class TestOptionMissing(LnstError): pass
class TestGeneric(NetTestCommandGeneric): diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 539952e..3bfcf82 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -23,16 +23,17 @@ from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.Utils import wait_for, create_tar_archive from lnst.Common.Utils import check_process_running from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT +from lnst.Controller.Common import ControllerError from lnst.Controller.CtlSecSocket import CtlSecSocket
# conditional support for libvirt if check_process_running("libvirtd"): from lnst.Controller.VirtUtils import VirtNetCtl, VirtDomainCtl
-class MachineError(Exception): +class MachineError(ControllerError): pass
-class PrefixMissingError(Exception): +class PrefixMissingError(ControllerError): pass
class Machine(object): diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py index a144d00..5c288cc 100644 --- a/lnst/Controller/Task.py +++ b/lnst/Controller/Task.py @@ -18,6 +18,7 @@ from lnst.Common.Config import lnst_config from lnst.Controller.XmlTemplates import XmlTemplateError from lnst.Common.Path import Path from lnst.Controller.PerfRepoMapping import PerfRepoMapping +from lnst.Controller.Common import ControllerError from lnst.Common.Utils import Noop
try: @@ -84,7 +85,8 @@ def match(): return False
-class TaskError(Exception): pass +class TaskError(ControllerError): + pass
class ControllerAPI(object): """ An API class representing the controller. """ diff --git a/lnst/Controller/VirtUtils.py b/lnst/Controller/VirtUtils.py index 750196c..c8da28a 100644 --- a/lnst/Controller/VirtUtils.py +++ b/lnst/Controller/VirtUtils.py @@ -16,6 +16,7 @@ import libvirt from libvirt import libvirtError from lnst.Common.ExecCmd import exec_cmd, ExecCmdFail from lnst.Common.NetUtils import scan_netdevs +from lnst.Controller.Common import ControllerError
#this is a global object because opening the connection to libvirt in every #object instance that uses it sometimes fails - the libvirt server probably @@ -27,7 +28,7 @@ def init_libvirt_con(): if _libvirt_conn is None: _libvirt_conn = libvirt.open(None)
-class VirtUtilsError(Exception): +class VirtUtilsError(ControllerError): pass
def _ip(cmd): diff --git a/lnst/RecipeCommon/ModuleWrap.py b/lnst/RecipeCommon/ModuleWrap.py index 2df953c..1f7acc6 100644 --- a/lnst/RecipeCommon/ModuleWrap.py +++ b/lnst/RecipeCommon/ModuleWrap.py @@ -10,6 +10,7 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+from lnst.Common.LnstError import LnstError from lnst.Controller.Task import ctl
def ping(src, dst, options={}, expect="pass", bg=False): @@ -24,10 +25,10 @@ def ping(src, dst, options={}, expect="pass", bg=False):
options = dict(options) if 'addr' in options or 'iface' in options: - raise Exception("options can't contain keys 'addr' and 'iface'") + raise LnstError("options can't contain keys 'addr' and 'iface'")
if not isinstance(src, tuple) or len(src) < 2 or len(src) > 4: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification') try: if len(src) == 2: h1, if1 = src @@ -39,10 +40,10 @@ def ping(src, dst, options={}, expect="pass", bg=False): h1, if1, addr_index1, addr_selector1 = src options["iface"] = if1.get_ip(addr_index1, selector=addr_selector1) except: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification')
if not isinstance(dst, tuple) or len(dst) < 3 or len(dst) > 4: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification') try: if len(dst) == 3: h2, if2, addr_index2 = dst @@ -51,7 +52,7 @@ def ping(src, dst, options={}, expect="pass", bg=False): h2, if2, addr_index2, addr_selector2 = dst options["addr"] = if2.get_ip(addr_index2, selector=addr_selector2) except: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification')
ping_mod = ctl.get_module("IcmpPing", options = options) @@ -70,10 +71,10 @@ def ping6(src, dst, options={}, expect="pass", bg=False):
options = dict(options) if 'addr' in options or 'iface' in options: - raise Exception("options can't contain keys 'addr' and 'iface'") + raise LnstError("options can't contain keys 'addr' and 'iface'")
if not isinstance(src, tuple) or len(src) < 2 or len(src) > 4: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification') try: if len(src) == 2: h1, if1 = src @@ -85,10 +86,10 @@ def ping6(src, dst, options={}, expect="pass", bg=False): h1, if1, addr_index1, addr_selector1 = src options["iface"] = if1.get_ip(addr_index1, selector=addr_selector1) except: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification')
if not isinstance(dst, tuple) or len(dst) < 3 or len(dst) > 4: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification') try: if len(dst) == 3: h2, if2, addr_index2 = dst @@ -97,7 +98,7 @@ def ping6(src, dst, options={}, expect="pass", bg=False): h2, if2, addr_index2, addr_selector2 = dst options["addr"] = if2.get_ip(addr_index2, selector=addr_selector2) except: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification')
ping_mod = ctl.get_module("Icmp6Ping", options = options) @@ -123,17 +124,17 @@ def netperf(src, dst, server_opts={}, client_opts={}, baseline={}, timeout=60):
server_opts = dict(server_opts) if 'bind' in server_opts or 'role' in server_opts: - raise Exception("server_opts can't contain keys 'bind' and 'role'") + raise LnstError("server_opts can't contain keys 'bind' and 'role'")
client_opts = dict(client_opts) if 'bind' in client_opts or\ 'role' in client_opts or\ 'netperf_server' in client_opts: - raise Exception("client_opts can't contain keys 'bind', 'role' "\ + raise LnstError("client_opts can't contain keys 'bind', 'role' "\ "and 'netperf_server'")
if not isinstance(src, tuple) or len(src) < 2 or len(src) > 4: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification') try: if len(src) == 3: h1, if1, addr_index1 = src @@ -142,10 +143,10 @@ def netperf(src, dst, server_opts={}, client_opts={}, baseline={}, timeout=60): h1, if1, addr_index1, addr_selector1 = src client_ip = if1.get_ip(addr_index1, selector=addr_selector1) except: - raise Exception('Invalid source specification') + raise LnstError('Invalid source specification')
if not isinstance(dst, tuple) or len(dst) < 3 or len(dst) > 4: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification') try: if len(dst) == 3: h2, if2, addr_index2 = dst @@ -154,7 +155,7 @@ def netperf(src, dst, server_opts={}, client_opts={}, baseline={}, timeout=60): h2, if2, addr_index2, addr_selector2 = dst server_ip = if2.get_ip(addr_index2, addr_selector2) except: - raise Exception('Invalid destination specification') + raise LnstError('Invalid destination specification')
server_opts["role"] = "server" server_opts["bind"] = server_ip
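Rebasing all of these exception classes onto LnstError gives every LNST-specific failure a common ancestor. A minimal standalone sketch of the resulting hierarchy (only the inheritance relationships come from the patches above; the class bodies and the `run_task` helper are illustrative):

```python
# Simplified copy of the hierarchy introduced by this series.
class LnstError(Exception):
    pass

class ControllerError(LnstError):
    pass

class TaskError(ControllerError):
    pass

def run_task():
    # stand-in for controller code that previously raised bare Exception
    raise TaskError("Invalid source specification")

# Callers can now catch every LNST failure through one base class instead
# of `except Exception`, which would also swallow programming errors.
try:
    run_task()
except LnstError as e:
    handled = str(e)
```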
From: Ondrej Lichtner olichtne@redhat.com
This package will contain all upstream tracked test module classes. At the moment it only contains the IcmpPing class which is a simple port from the old IcmpPing test_module. In the future it should be heavily refactored to support both ipv4 and ipv6 (instead of having 2 different modules) but for early testing of Python Recipes this is good enough.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/IcmpPing.py | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++ lnst/Tests/__init__.py | 1 + 2 files changed, 77 insertions(+) create mode 100644 lnst/Tests/IcmpPing.py create mode 100644 lnst/Tests/__init__.py
diff --git a/lnst/Tests/IcmpPing.py b/lnst/Tests/IcmpPing.py new file mode 100644 index 0000000..8b07a2c --- /dev/null +++ b/lnst/Tests/IcmpPing.py @@ -0,0 +1,76 @@ +import re +import logging +from lnst.Devices import Device +from lnst.Common.Parameters import IntParam, Param, FloatParam, IpParam +from lnst.Common.TestModule import BaseTestModule, TestModuleError +from lnst.Common.ExecCmd import exec_cmd + +class IcmpPing(BaseTestModule): + """Port of old IcmpPing test modules""" + dst = IpParam(mandatory=True) + count = IntParam(default=10) + interval = FloatParam(default=1.0) + iface = Param() + size = IntParam() + + limit_rate = IntParam(default=80) + + def __init__(self, **kwargs): + super(IcmpPing, self).__init__(**kwargs) + + if self.iface.set: + if not isinstance(self.iface.val, Device) and\ + not isinstance(self.iface.val, str): + raise TestModuleError("Invalid 'iface' parameter.") + + def _compose_cmd(self): + cmd = "ping %s" % self.params.dst.val + cmd += " -c %d" % self.params.count + cmd += " -i %f" % self.params.interval + if self.params.iface.set: + if isinstance(self.params.iface.val, str): + cmd += " -I %s" % self.params.iface + elif isinstance(self.params.iface.val, Device): + pass + # cmd += " -I %s" % iface.val.devname + if self.params.size.set: + cmd += " -s %d" % self.params.size + return cmd + + def run(self): + cmd = self._compose_cmd() + + limit_rate = self.params.limit_rate + + data_stdout = exec_cmd(cmd, die_on_err=False)[0] + stat_pttr1 = r'(\d+) packets transmitted, (\d+) received' + stat_pttr2 = r'rtt min/avg/max/mdev = (\d+.\d+)/(\d+.\d+)/(\d+.\d+)/(\d+.\d+) ms' + + match = re.search(stat_pttr1, data_stdout) + if not match: + self._res_data = {"msg": "expected pattern not found"} + return False + + trans_pkts, recv_pkts = match.groups() + rate = int(round((float(recv_pkts) / float(trans_pkts)) * 100)) + logging.debug("Transmitted \"%s\", received \"%s\", " + "rate \"%d%%\", limit_rate \"%d%%\"" % (trans_pkts, recv_pkts, rate, limit_rate)) +
+ self._res_data = {"rate": rate, + "limit_rate": limit_rate} + + match = re.search(stat_pttr2, data_stdout) + if match: + tmin, tavg, tmax, tmdev = [float(x) for x in match.groups()] + logging.debug("rtt min \"%.3f\", avg \"%.3f\", max \"%.3f\", " + "mdev \"%.3f\"" % (tmin, tavg, tmax, tmdev)) + + self._res_data["rtt_min"] = tmin + self._res_data["rtt_max"] = tmax + + if rate < limit_rate: + self._res_data["msg"] = "rate is lower than limit" + return False + + return True diff --git a/lnst/Tests/__init__.py b/lnst/Tests/__init__.py new file mode 100644 index 0000000..60da773 --- /dev/null +++ b/lnst/Tests/__init__.py @@ -0,0 +1 @@ +from lnst.Tests.IcmpPing import IcmpPing
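For review purposes, the command `_compose_cmd` above builds can be sketched as a plain function; the parameter values here are illustrative, not the module defaults:

```python
# Standalone sketch of IcmpPing._compose_cmd from the patch above,
# with the Param plumbing replaced by ordinary keyword arguments.
def compose_ping_cmd(dst, count=10, interval=1.0, iface=None, size=None):
    cmd = "ping %s" % dst
    cmd += " -c %d" % count
    cmd += " -i %f" % interval       # %f renders 1.0 as "1.000000"
    if iface is not None:
        cmd += " -I %s" % iface      # interface name or source address
    if size is not None:
        cmd += " -s %d" % size       # payload size in bytes
    return cmd

print(compose_ping_cmd("192.168.1.1", count=3, iface="eth0"))
# ping 192.168.1.1 -c 3 -i 1.000000 -I eth0
```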
From: Ondrej Lichtner olichtne@redhat.com
The schema initialization methods were moved into Slave and Controller specific Config classes so they're not needed in the base Config class anymore.
In addition this adds an _init_options call to the __init__ method which ensures that the schema will be created on object creation.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Config.py | 143 ++------------------------------------------------ 1 file changed, 4 insertions(+), 139 deletions(-)
diff --git a/lnst/Common/Config.py b/lnst/Common/Config.py index 04d6f9b..afeab42 100644 --- a/lnst/Common/Config.py +++ b/lnst/Common/Config.py @@ -32,6 +32,10 @@ class Config(): def __init__(self): self._options = dict() self.version = self._get_version() + self._init_options() + + def _init_options(self): + raise NotImplementedError()
def _get_version(self): # Check if I'm in git @@ -56,145 +60,6 @@ class Config(): os.chdir(cwd) return version
- def controller_init(self): - self._options['environment'] = dict() - self._options['environment']['mac_pool_range'] = {\ - "value" : ['52:54:01:00:00:01', '52:54:01:FF:FF:FF'], - "additive" : False, - "action" : self.optionMacRange, - "name" : "mac_pool_range"} - self._options['environment']['rpcport'] = {\ - "value" : DefaultRPCPort, - "additive" : False, - "action" : self.optionPort, - "name" : "rpcport"} - self._options['environment']['tool_dirs'] = {\ - "value" : [], - "additive" : True, - "action" : self.optionDirList, - "name" : "test_tool_dirs"} - self._options['environment']['module_dirs'] = {\ - "value" : [], - "additive" : True, - "action" : self.optionDirList, - "name" : "test_module_dirs"} - self._options['environment']['log_dir'] = {\ - "value" : os.path.abspath(os.path.join( - os.path.dirname(sys.argv[0]), './Logs')), - "additive" : False, - "action" : self.optionPath, - "name" : "log_dir"} - self._options['environment']['resource_dir'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPath, - "name" : "resource_dir"} - self._options['environment']['xslt_url'] = { - "value" : "http://www.lnst-project.org/files/result_xslt/xml_to_html.xsl", - "additive" : False, - "action" : self.optionPlain, - "name" : "xslt_url" - } - self._options['environment']['allow_virtual'] = { - "value" : False, - "additive" : False, - "action" : self.optionBool, - "name" : "allow_virtual" - } - - self._options['perfrepo'] = dict() - self._options['perfrepo']['url'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPlain, - "name" : "url" - } - self._options['perfrepo']['username'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPlain, - "name" : "username" - } - self._options['perfrepo']['password'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPlain, - "name" : "password" - } - - self._options['pools'] = dict() - - self._options['security'] = dict() - self._options['security']['identity'] = {\ - 
"value" : "", - "additive" : False, - "action" : self.optionPlain, - "name" : "identity"} - self._options['security']['privkey'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPath, - "name" : "privkey"} - - self.colours_scheme() - - def slave_init(self): - self._options['environment'] = dict() - self._options['environment']['log_dir'] = {\ - "value" : os.path.abspath(os.path.join( - os.path.dirname(sys.argv[0]), './Logs')), - "additive" : False, - "action" : self.optionPath, - "name" : "log_dir"} - self._options['environment']['use_nm'] = {\ - "value" : True, - "additive" : False, - "action" : self.optionBool, - "name" : "use_nm"} - self._options['environment']['rpcport'] = {\ - "value" : DefaultRPCPort, - "additive" : False, - "action" : self.optionPort, - "name" : "rpcport"} - - self._options['cache'] = dict() - self._options['cache']['dir'] = {\ - "value" : os.path.abspath(os.path.join( - os.path.dirname(sys.argv[0]), './cache')), - "additive" : False, - "action" : self.optionPath, - "name" : "cache_dir"} - - self._options['cache']['expiration_period'] = {\ - "value" : 7*24*60*60, # 1 week - "additive" : False, - "action" : self.optionTimeval, - "name" : "expiration_period"} - - self._options['security'] = dict() - self._options['security']['auth_types'] = {\ - "value" : "none", - "additive" : False, - "action" : self.optionPlain, #TODO list?? - "name" : "auth_types"} - self._options['security']['auth_password'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPlain, - "name" : "auth_password"} - self._options['security']['privkey'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPath, - "name" : "privkey"} - self._options['security']['ctl_pubkeys'] = {\ - "value" : "", - "additive" : False, - "action" : self.optionPath, - "name" : "ctl_pubkeys"} - - self.colours_scheme() - def colours_scheme(self): self._options['colours'] = dict() self._options['colours']["disable_colours"] = {\
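The pattern this patch introduces can be sketched in isolation: the base Config calls `_init_options()` from `__init__`, and each role-specific subclass fills in its own option schema. The subclass below is a stand-in for the real SlaveConfig/CtlConfig added elsewhere in the series, and the option value is illustrative, not the real default:

```python
# Role-specific config schemas via an abstract _init_options hook.
class Config(object):
    def __init__(self):
        self._options = dict()
        self._init_options()  # schema is guaranteed to exist after init

    def _init_options(self):
        # subclasses must provide their own option schema
        raise NotImplementedError()

    def get_option(self, section, name):
        return self._options[section][name]["value"]

class SlaveConfig(Config):
    def _init_options(self):
        self._options["environment"] = {
            "rpcport": {"value": 9999, "additive": False,
                        "action": None, "name": "rpcport"}}

slave_config = SlaveConfig()
print(slave_config.get_option("environment", "rpcport"))  # 9999

# instantiating the base class directly now fails fast
try:
    Config()
except NotImplementedError:
    pass
```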
From: Ondrej Lichtner olichtne@redhat.com
All LNST objects should now be using a local config object instead of a global one so this can be removed.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Config.py | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/lnst/Common/Config.py b/lnst/Common/Config.py index afeab42..3745094 100644 --- a/lnst/Common/Config.py +++ b/lnst/Common/Config.py @@ -318,9 +318,3 @@ class Config(): string = str(value)
return string - -#Global object containing lnst configuration, available across modules -#The object is created here but the contents are initialized -#in lnst-ctl and lnst-slave, after that the modules that need the configuration -#just import this object -lnst_config = Config()
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst-slave | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/lnst-slave b/lnst-slave index ebb4cd3..20696ec 100755 --- a/lnst-slave +++ b/lnst-slave @@ -17,8 +17,8 @@ import os import logging from lnst.Common.Daemon import Daemon from lnst.Common.Logs import LoggingCtl -from lnst.Common.Config import lnst_config from lnst.Common.Colours import load_presets_from_config +from lnst.Slave.Config import SlaveConfig from lnst.Slave.NetTestSlave import NetTestSlave
def usage(): @@ -50,23 +50,23 @@ def main(): usage() sys.exit()
- lnst_config.slave_init() + slave_config = SlaveConfig() dirname = os.path.dirname(sys.argv[0]) gitcfg = os.path.join(dirname, "lnst-slave.conf") if os.path.isfile(gitcfg): - lnst_config.load_config(gitcfg) + slave_config.load_config(gitcfg) else: - lnst_config.load_config('/etc/lnst-slave.conf') + slave_config.load_config('/etc/lnst-slave.conf')
usr_cfg = os.path.expanduser('~/.lnst/lnst-slave.conf') if os.path.isfile(usr_cfg): - lnst_config.load_config(usr_cfg) + slave_config.load_config(usr_cfg)
debug = False daemon = False pidfile = "/var/run/lnst-slave.pid" port = None - coloured_output = not lnst_config.get_option("colours", "disable_colours") + coloured_output = not slave_config.get_option("colours", "disable_colours") for opt, arg in opts: if opt in ("-d", "--debug"): debug = True @@ -81,17 +81,17 @@ def main(): elif opt in ("-m", "--no-colours"): coloured_output = False
- load_presets_from_config(lnst_config) + load_presets_from_config(slave_config)
log_ctl = LoggingCtl(debug, - log_dir=lnst_config.get_option('environment', 'log_dir'), + log_dir=slave_config.get_option('environment', 'log_dir'), colours=coloured_output) logging.info("Started")
if port != None: - lnst_config.set_option("environment", "rpcport", port) + slave_config.set_option("environment", "rpcport", port)
- nettestslave = NetTestSlave(log_ctl) + nettestslave = NetTestSlave(log_ctl, slave_config)
if daemon: daemon = Daemon(pidfile)
From: Ondrej Lichtner olichtne@redhat.com
Calculates a SHA256 hexdigest of a file.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Utils.py | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/lnst/Common/Utils.py b/lnst/Common/Utils.py index d6d6c57..93e9560 100644 --- a/lnst/Common/Utils.py +++ b/lnst/Common/Utils.py @@ -108,6 +108,17 @@ def md5sum(file_path, block_size=2**20):
return md5.hexdigest()
+def sha256sum(file_path): + sha256 = hashlib.sha256() + with open(file_path, "rb") as f: + while True: + data = f.read(1024) + if not data: + break + sha256.update(data) + + return sha256.hexdigest() + def create_tar_archive(input_path, target_path, compression=False): if compression: args = "cfj"
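A quick usage check of the new helper: hashing a temporary file chunk-by-chunk must match hashlib fed the same bytes in one go.

```python
# sha256sum as added by the patch, exercised against a throwaway file.
import hashlib
import os
import tempfile

def sha256sum(file_path):
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        while True:
            data = f.read(1024)   # read in blocks so large files fit in memory
            if not data:
                break
            sha256.update(data)
    return sha256.hexdigest()

payload = b"lnst resource cache test data" * 100
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

digest = sha256sum(path)
os.remove(path)
print(digest == hashlib.sha256(payload).hexdigest())  # True
```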
From: Ondrej Lichtner olichtne@redhat.com
Doesn't work with test_modules and test_tools anymore; for now it simply handles files (directories will be added in the future) and hashes them with sha256. The cache will now be used for transferring python modules (Devices or Tests for now), later it could be expanded to include tester specified files/directories.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/ResourceCache.py | 125 ++++++++++++++++++++----------------------- 1 file changed, 57 insertions(+), 68 deletions(-)
diff --git a/lnst/Common/ResourceCache.py b/lnst/Common/ResourceCache.py index 2037a0c..9cfac0d 100644 --- a/lnst/Common/ResourceCache.py +++ b/lnst/Common/ResourceCache.py @@ -14,10 +14,15 @@ import os import re import time import shutil +import json from lnst.Common.ExecCmd import exec_cmd +from lnst.Common.Utils import sha256sum from lnst.Common.LnstError import LnstError
-SETUP_SCRIPT_NAME = "lnst-setup.sh" +#current index version +INDEX_VERSION = 1 +#minimal supported index version -- will be updated to current one when loaded +MIN_INDEX_VERSION = 1
class ResourceCacheError(LnstError): pass @@ -25,7 +30,6 @@ class ResourceCacheError(LnstError): class ResourceCache(object): _CACHE_INDEX_FILE_NAME = "index" _root = None - _entries = {} _expiration_period = None
def __init__(self, cache_path, expiration_period): @@ -38,98 +42,83 @@ class ResourceCache(object): os.makedirs(cache_path) self._root = cache_path
+ self._index = {"index_version": INDEX_VERSION, + "entries": {}} self._read_index() self._expiration_period = expiration_period
def _read_index(self): - logging.debug("Test cache index loaded") try: - f = open(self._get_index_loc(), "r") + with open(self.index_path, "r") as f: + index = json.load(f) + if index["index_version"] > INDEX_VERSION: + raise ResourceCacheError("Incompatible ResourceCache index versions") + elif index["index_version"] == INDEX_VERSION: + self._index = index + logging.debug("Resource cache index loaded") + else: + self._index = self._update_old_index(index) + logging.debug("Resource cache index loaded") + self._save_index() except: - return - - for line in f.readlines(): - if not re.match("^\s*#", line) and not re.match("^\s*$", line): - try: - entry_hash, last_used, entry_type, \ - entry_name, entry_path = line.split() - except: - raise ResourceCacheError("Malformed cache index") + pass
- entry = {"type": entry_type, "name": entry_name, - "last_used": int(last_used), "path": entry_path } - self._entries[entry_hash] = entry + def _update_old_index(self, old): + if old["index_version"] < MIN_INDEX_VERSION: + raise ResourceCacheError("ResourceCache index version too old to update") + logging.debug("Updating old index to newer version") + return old
def _save_index(self): - logging.debug("Test cache index commited") - with open(self._get_index_loc(), "w") as f: - header = "# hash " \ - "last_used type name path\n" - f.write(header) - for entry_hash, entry in self._entries.iteritems(): - values = (entry_hash, entry["last_used"], entry["type"], - entry["name"], entry["path"]) - line = "%s %d %s %s %s\n" % values - f.write(line) - - def _get_index_loc(self): - return "%s/%s" % (self._root, self._CACHE_INDEX_FILE_NAME) + with open(self.index_path, "w") as f: + json.dump(self._index, f) + logging.debug("Resource cache index commited") + + @property + def index_path(self): + return "%s/%s" % (self.root, self._CACHE_INDEX_FILE_NAME) + + @property + def root(self): + return self._root
def query(self, res_hash): - return res_hash in self._entries + return res_hash in self._index["entries"]
def get_path(self, res_hash): - return "%s/%s" % (self._root, self._entries[res_hash]["path"]) + return self._index["entries"][res_hash]["path"]
def renew_entry(self, entry_hash): - self._entries[entry_hash]["last_used"] = int(time.time()) + self._index["entries"][entry_hash]["last_used"] = int(time.time()) self._save_index()
- def add_cache_entry(self, entry_hash, filepath, entry_name, entry_type): - if entry_hash in self._entries: + def add_file_entry(self, filepath, entry_name): + entry_hash = sha256sum(filepath) + + if entry_hash in self._index["entries"]: raise ResourceCacheError("File already in cache")
- entry_dir = "%s/%s" % (self._root, entry_hash) - if os.path.exists(entry_dir): - try: - shutil.rmtree(entry_dir) - except OSError as e: - if e.errno != 2: - raise - os.makedirs(entry_dir) - - shutil.move(filepath, entry_dir) - entry_path = "%s/%s" % (entry_dir, os.path.basename(filepath)) - - if entry_type == "module": - filename = "%s.py" % entry_name - shutil.move(entry_path, "%s/%s" % (entry_dir, filename)) - elif entry_type == "tools": - filename = entry_name - tools_dir = "%s/%s" % (entry_dir, filename) - - exec_cmd("tar xjmf \"%s\" -C \"%s\"" % (entry_path, entry_dir)) - - if os.path.exists("%s/%s" % (tools_dir, SETUP_SCRIPT_NAME)): - exec_cmd("cd \"%s\" && ./%s" % (tools_dir, SETUP_SCRIPT_NAME)) - else: - msg = "%s not found in %s tools, skipping initialization." % \ - (SETUP_SCRIPT_NAME, entry_name) - logging.warn(msg) + entry_path = "%s/%s" % (self._root, entry_hash) + if os.path.exists(entry_path): + os.remove(entry_path) + + shutil.move(filepath, entry_path)
- entry = {"type": entry_type, "name": entry_name, + entry = {"name": entry_name, + "path": entry_path, "last_used": int(time.time()), - "path": "%s/%s" % (entry_hash, filename)} - self._entries[entry_hash] = entry + "digest": entry_hash, + "type": "file"} + self._index["entries"][entry_hash] = entry
self._save_index()
return entry_hash
def del_cache_entry(self, entry_hash): - if entry_hash in self._entries: - shutil.rmtree("%s/%s" % (self._root, entry_hash)) - del self._entries[entry_hash] + if entry_hash in self._index["entries"]: + os.remove(self._index["entries"][entry_hash]["path"]) + del self._index["entries"][entry_hash] self._save_index()
def del_old_entries(self): @@ -138,7 +127,7 @@ class ResourceCache(object):
rm = [] now = time.time() - for entry_hash, entry in self._entries.iteritems(): + for entry_hash, entry in self._index["entries"].iteritems(): if entry["last_used"] <= (now - self._expiration_period): rm.append(entry_hash)
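The reworked cache stores its index as JSON rather than the old whitespace-separated text format. A version-1 index looks roughly like this (the digest key is truncated and the path is illustrative):

```python
# Shape of the JSON cache index written by _save_index above.
import json

entry_hash = "9f86d081884c7d65"  # real keys are full 64-char sha256 digests

index = {
    "index_version": 1,
    "entries": {
        entry_hash: {
            "name": "IcmpPing.py",
            "path": "/var/cache/lnst/" + entry_hash,
            "last_used": 1484000000,   # unix timestamp for expiration checks
            "digest": entry_hash,      # digest doubles as the entry key
            "type": "file",
        },
    },
}

# unlike the old hand-parsed format, this round-trips through json cleanly
assert json.loads(json.dumps(index)) == index
```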
From: Ondrej Lichtner olichtne@redhat.com
Wasn't used by the module.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/CtlSecSocket.py | 1 - 1 file changed, 1 deletion(-)
diff --git a/lnst/Controller/CtlSecSocket.py b/lnst/Controller/CtlSecSocket.py index fd9ba26..c80917b 100644 --- a/lnst/Controller/CtlSecSocket.py +++ b/lnst/Controller/CtlSecSocket.py @@ -18,7 +18,6 @@ import logging from lnst.Common.SecureSocket import SecureSocket from lnst.Common.SecureSocket import DH_GROUP, SRP_GROUP from lnst.Common.SecureSocket import SecSocketException -from lnst.Common.Config import lnst_config from lnst.Common.Utils import not_imported
ser = not_imported
From: Ondrej Lichtner olichtne@redhat.com
Since we're moving away from XML recipes, only slave machine description files need to parse XML files. This commit pulls required methods from the parent XmlParser into the SlaveMachineParser to make it standalone so we can later remove all other XML related code.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/SlaveMachineParser.py | 144 ++++++++++++++++++++++++++++++---- 1 file changed, 128 insertions(+), 16 deletions(-)
diff --git a/lnst/Controller/SlaveMachineParser.py b/lnst/Controller/SlaveMachineParser.py index 70af0dc..7e08152 100644 --- a/lnst/Controller/SlaveMachineParser.py +++ b/lnst/Controller/SlaveMachineParser.py @@ -12,19 +12,79 @@ rpazdera@redhat.com (Radek Pazdera) """
import os -from lnst.Controller.XmlParser import XmlParser -from lnst.Controller.XmlProcessing import XmlProcessingError, XmlData -from lnst.Controller.XmlProcessing import XmlCollection - -class SlaveMachineError(XmlProcessingError): - pass - -class SlaveMachineParser(XmlParser): - def __init__(self, sm_path): - super(SlaveMachineParser, self).__init__("schema-sm.rng", sm_path) +import sys +from lxml import etree +from lnst.Controller.Common import ControllerError + +class SlaveMachineParser(object): + def __init__(self, sm_path, ctl_config): + # locate the schema file + # try git path + dirname = os.path.dirname(sys.argv[0]) + schema_path = os.path.join(dirname, "schema-sm.rng") + if not os.path.exists(schema_path): + # try configuration + res_dir = ctl_config.get_option("environment", "resource_dir") + schema_path = os.path.join(res_dir, "schema-sm.rng") + + if not os.path.exists(schema_path): + raise Exception("The schema file was not found. " + \ + "Your LNST installation is corrupt!") + + self._path = sm_path + relaxng_doc = etree.parse(schema_path) + self._schema = etree.RelaxNG(relaxng_doc) + + def parse(self): + try: + doc = self._parse(self._path) + self._remove_comments(doc) + self._schema.assertValid(doc) + except: + err = self._schema.error_log[0] + loc = {"file": os.path.basename(err.filename), + "line": err.line, "col": err.column} + exc = XmlProcessingError(err.message) + exc.set_loc(loc) + raise exc + + return self._process(doc) + + def _parse(self, path): + try: + doc = etree.parse(path) + except etree.LxmlError as err: + # A workaround for cases when lxml (quite strangely) + # sets the filename to <string>. 
+ if err.error_log[0].filename == "<string>": + filename = self._path + else: + filename = err.error_log[0].filename + loc = {"file": os.path.basename(filename), + "line": err.error_log[0].line, + "col": err.error_log[0].column} + exc = XmlProcessingError(err.error_log[0].message) + exc.set_loc(loc) + raise exc + except Exception as err: + loc = {"file": os.path.basename(self._path), + "line": None, + "col": None} + exc = XmlProcessingError(str(err)) + exc.set_loc(loc) + raise exc + + return doc + + def _remove_comments(self, doc): + comments = doc.xpath('//comment()') + for c in comments: + p = c.getparent() + if p is not None: + p.remove(c)
def _process(self, sm_tag): - sm = XmlData(sm_tag) + sm = {}
# params params_tag = sm_tag.find("params") @@ -35,7 +95,7 @@ class SlaveMachineParser(XmlParser): # interfaces interfaces_tag = sm_tag.find("interfaces") if interfaces_tag is not None and len(interfaces_tag) > 0: - sm["interfaces"] = XmlCollection(interfaces_tag) + sm["interfaces"] = [] for eth_tag in interfaces_tag: interface = self._process_interface(eth_tag) sm["interfaces"].append(interface) @@ -45,17 +105,17 @@ class SlaveMachineParser(XmlParser): return sm
def _process_params(self, params_tag): - params = XmlCollection(params_tag) + params = [] if params_tag is not None: for param_tag in params_tag: - param = XmlData(param_tag) + param = {} param["name"] = self._get_attribute(param_tag, "name") param["value"] = self._get_attribute(param_tag, "value") params.append(param) return params
def _process_interface(self, iface_tag): - iface = XmlData(iface_tag) + iface = {} iface["id"] = self._get_attribute(iface_tag, "id") iface["network"] = self._get_attribute(iface_tag, "label") iface["type"] = "eth" @@ -69,7 +129,7 @@ class SlaveMachineParser(XmlParser): return iface
def _process_security(self, sec_tag): - sec = XmlData(sec_tag) + sec = {}
if sec_tag is None: sec["auth_type"] = "none" @@ -95,3 +155,55 @@ class SlaveMachineParser(XmlParser): sec["pubkey_path"] = ""
return sec + + def _get_attribute(self, element, attr): + return element.attrib[attr].strip() + +class XmlProcessingError(ControllerError): + """ Exception thrown on parsing errors """ + + _filename = None + _line = None + _col = None + + def __init__(self, msg, obj=None): + super(XmlProcessingError, self).__init__() + self._msg = msg + + if obj is not None: + if hasattr(obj, "loc"): + self.set_loc(obj.loc) + elif hasattr(obj, "base") and obj.base != None: + loc = {} + loc["file"] = os.path.basename(obj.base) + if hasattr(obj, "sourceline"): + loc["line"] = obj.sourceline + self.set_loc(loc) + + def set_loc(self, loc): + self._filename = loc["file"] + self._line = loc["line"] + if "col" in loc: + self._col = loc["col"] + + def __str__(self): + line = "" + col = "" + sep = "" + loc = "" + filename = "<unknown>" + + if self._filename: + filename = self._filename + + if self._line: + line = "%d" % self._line + sep = ":" + + if self._col: + col = "%s%d" % (sep, self._col) + + if self._line or self._col: + loc = "%s%s:" % (line, col) + + return "Parser error: %s:%s %s" % (filename, loc, self._msg)
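The relocated XmlProcessingError composes a `file:line:col:` prefix from whatever location information is available. The same logic as `__str__` in the patch, extracted as a plain function for illustration:

```python
# Mirror of XmlProcessingError.__str__ location formatting.
def format_parser_error(msg, filename=None, line=None, col=None):
    line_s = ""
    col_s = ""
    sep = ""
    loc = ""
    fname = filename if filename else "<unknown>"

    if line is not None:
        line_s = "%d" % line
        sep = ":"               # only separate col from line if line exists
    if col is not None:
        col_s = "%s%d" % (sep, col)
    if line is not None or col is not None:
        loc = "%s%s:" % (line_s, col_s)

    return "Parser error: %s:%s %s" % (fname, loc, msg)

print(format_parser_error("bad attribute", "slave1.xml", 12, 5))
# Parser error: slave1.xml:12:5: bad attribute
```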
From: Ondrej Lichtner <olichtne@redhat.com>
Defines the Exception class for errors from the InterfaceManager.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/InterfaceManagerError.py | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 lnst/Common/InterfaceManagerError.py
diff --git a/lnst/Common/InterfaceManagerError.py b/lnst/Common/InterfaceManagerError.py new file mode 100644 index 0000000..800e6c0 --- /dev/null +++ b/lnst/Common/InterfaceManagerError.py @@ -0,0 +1,16 @@ +""" +Defines the InterfaceManagerError exception class. + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +from lnst.Common.LnstError import LnstError + +class InterfaceManagerError(LnstError): + pass
From: Ondrej Lichtner <olichtne@redhat.com>
Defines classes of jobs to be run on a slave. Based on the old lnst.Common.NetTestCommand code, but simplified to work with the newer 'Job' implementation that originated on the Controller.
Defines a Job class that is used as a container object by the Slave to fork and run a GenericJob object. For now only a ShellExecJob and a ModuleJob are supported. Previous "control" commands are now only methods of the Job container object.
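The container pattern described above (fork a child process, run the job body, report the result back over a pipe) can be sketched roughly as follows. Names are illustrative; the real Job class adds logging, signal handling and kill support. The "fork" start method is a Unix-only assumption, chosen so arbitrary callables can be handed to the child without pickling restrictions:

```python
import multiprocessing

_mp = multiprocessing.get_context("fork")

def _run_job(child_pipe, job_id, job_func):
    # Child side: run the job body, then ship a "job_finished" message
    # back over the pipe, the way Job._run() reports to the Slave.
    result = {"passed": False, "res_data": None}
    try:
        result["res_data"] = job_func()
        result["passed"] = True
    except Exception as e:
        result["res_data"] = {"Exception": str(e)}
    child_pipe.send({"type": "job_finished", "job_id": job_id,
                     "result": result})
    child_pipe.close()

def run_job(job_id, job_func):
    # Parent side: fork a worker process and wait for its result message.
    parent_pipe, child_pipe = _mp.Pipe()
    proc = _mp.Process(target=_run_job, args=(child_pipe, job_id, job_func))
    proc.start()
    msg = parent_pipe.recv()
    proc.join()
    return msg
```

In the real implementation the parent does not block on recv(); the pipe is handed to a ConnectionHandler so many jobs can run concurrently.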
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/Job.py | 251 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 251 insertions(+) create mode 100644 lnst/Slave/Job.py
diff --git a/lnst/Slave/Job.py b/lnst/Slave/Job.py new file mode 100644 index 0000000..234cd61 --- /dev/null +++ b/lnst/Slave/Job.py @@ -0,0 +1,251 @@ +""" +This module defines classes of jobs to be run on a slave + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + +import os +import re +import sys +import signal +import logging +import multiprocessing +from lnst.Common.JobError import JobError +from lnst.Common.ExecCmd import exec_cmd, ExecCmdFail +from lnst.Common.ConnectionHandler import send_data +from lnst.Common.Logs import log_exc_traceback + +def get_job_class(what): + if what["type"] == "shell": + return ShellExecJob(what) + elif what["type"] == "module": + return ModuleJob(what) + else: + logging.error("Unknown job type \"%s\"" % what["type"]) + raise JobError("Unknown command type \"%s\"" % what["type"]) + +class JobContext(object): + def __init__(self): + self._dict = {} + + def add_job(self, job): + self._dict[job.get_id()] = job + + def del_job(self, job): + del self._dict[job.get_id()] + + def get_job(self, id): + if id in self._dict: + return self._dict[id] + else: + return None + + def _kill_all_jobs(self): + for id in self._dict: + self._dict[id].kill(signal=signal.SIGKILL) + + def cleanup(self): + logging.debug("Cleaning up leftover processes.") + self._kill_all_jobs() + self._dict = {} + + def get_parent_pipes(self): + pipes = {} + for key in self._dict: + pipe = self._dict[key].get_parent_pipe() + if pipe != None: + pipes[key] = pipe + return pipes + +class Job(object): + def __init__(self, what, log_ctl): + self._job_cls = get_job_class(what) + self._what = what + + self._id = what["job_id"] + self._parent_pipe = None + self._child_pipe = None + self._process = None + self._pid = None + self._log_ctl = log_ctl + self._finished = False + + def get_id(self): + return self._id
+ + def get_parent_pipe(self): + return self._parent_pipe + + def run(self): + self._parent_pipe, self._child_pipe = multiprocessing.Pipe() + self._process = multiprocessing.Process(target=self._run) + + self._process.daemon = False + self._process.start() + self._pid = self._process.pid + + logging.info("Running job %d with pid \"%d\"" % (self._id, self._pid)) + return True + + def _run(self): + os.setpgrp() + signal.signal(signal.SIGHUP, signal.SIG_DFL) + signal.signal(signal.SIGINT, signal.SIG_DFL) + signal.signal(signal.SIGTERM, signal.SIG_DFL) + + self._log_ctl.disable_logging() + self._log_ctl.set_connection(self._child_pipe) + + result = {} + try: + self._job_cls.run() + except: + log_exc_traceback() + type, value, tb = sys.exc_info() + data = {"Exception": "%s" % value} + # self._job_cls.set_fail(data) + finally: + res_data = self._job_cls.get_result() + result["type"] = "job_finished" + result["job_id"] = self._id + result["result"] = res_data + + send_data(self._child_pipe, result) + self._child_pipe.close() + + def kill(self, signal=signal.SIGKILL): + if self._finished: + return True + try: + logging.debug("Sending signal %s to pid %d" % (signal, self._pid)) + os.kill(self._pid, signal) + return True + except OSError as exc: + logging.error(str(exc)) + return False + + def join(self): + self._process.join() + + def set_finished(self, result): + self._finished = True + self._result = result + + def get_result(self): + return self._result + +class GenericJob(object): + def __init__(self, what): + self._what = what + self._result = {"passed": False, + "res_data": None, + "msg": None} + + def run(self): + raise JobError("Method run must be defined.") + + def get_result(self): + return self._result + + # def format_res_data(self, res_data, level=0): + # self._check_res_data(res_data) + # formatted_data = "" + # if res_data: + # max_key_len = 0 + # for key in res_data.keys(): + # if len(key) > max_key_len: + # max_key_len = len(key) + # for key, value in res_data.iteritems():
+ # if type(value) == dict: + # formatted_data += level*4*" " + str(key) + ":\n" + # formatted_data += self.format_res_data(value, level+1) + # if type(value) == list: + # formatted_data += level*4*" " + str(key) + ":\n" + # for i in range(0, len(value)): + # formatted_data += (level+1)*4*" " +\ + # "item %d:" % (i+1) + "\n" + # formatted_data += self.format_res_data(value[i], + # level+2) + # else: + # formatted_data += level*4*" " + str(key) + ":" + \ + # (max_key_len-len(key))*" " + \ + # "\t" + str(value) + "\n" + + # return formatted_data + + # def _check_res_data(self, res_data): + # name_start_char = u":A-Z_a-z\xC0-\xD6\xD8-\xF6\xF8-\u02FF"\ + # u"\u0370-\u037D\u037F-\u1FFF\u200C-\u200D"\ + # u"\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF"\ + # u"\uF900-\uFDCF\uFDF0-\uFFFD\U00010000-\U000EFFFF" + # name_char = name_start_char + u"-.0-9\xB7\u0300-\u036F\u203F-\u2040" + # name = u"[%s]([%s])*$" % (name_start_char, name_char) + # char_data = u"[^<&]*" + # if isinstance(res_data, dict): + # for key in res_data: + # if not re.match(name, key, re.UNICODE): + # msg = "'%s' can't be used as an xml element name!" % key + # raise JobError(msg) + # else: + # self._check_res_data(res_data[key]) + # elif isinstance(res_data, list): + # for i in res_data: + # self._check_res_data(i) + # else: + # try: + # string = str(res_data) + # except: + # msg = "res_data can only contain dictionaries, lists or "\ + # "stringable objects!" + # raise JobError(msg) + # if not re.match(char_data, string, re.UNICODE): + # msg = "'%s' can't be used as character data in xml!" % string
+ # raise JobError(msg) + + # def _format_cmd_res_header(self): + # if self._what["netns"] != None: + # netns = "(%s) " % self._what["netns"] + # else: + # netns = "" + + # return "%-9s" % (self._what["type"] + netns) + +class ShellExecJob(GenericJob): + def run(self): + try: + stdout, stderr = exec_cmd(self._what["command"], self._what["json"]) + self._result["passed"] = True + self._result["res_data"] = {"stdout": stdout, "stderr": stderr} + except ExecCmdFail as e: + self._result["passed"] = False + self._result["res_data"] = res_data = {"stdout": e.get_stdout(), + "stderr": e.get_stderr()} + + # def _format_cmd_res_header(self): + # cmd_type = self._what["type"] + # cmd_val = self._what["command"] + + # if self._what["netns"] != None: + # netns = "(%s) " % self._what["netns"] + # else: + # netns = "" + + # cmd = "%-9scmd: \"%s\"" %(cmd_type + netns, cmd_val) + # return cmd + +class ModuleJob(GenericJob): + def run(self): + try: + self._result["passed"] = self._what["module"].run() + self._result["res_data"] = self._what["module"].get_res_data() + except Exception as e: + self._result["passed"] = False + self._result["res_data"] = {"exception": str(e)} + # self._result["res_data"] = {"stdout": e.get_stdout(), + # "stderr": e.get_stderr()}
From: Ondrej Lichtner <olichtne@redhat.com>
Reimplements the InterfaceManager to work with dynamically provided Device classes. This means that Device creation/destruction was moved completely into the Device class implementations, and the InterfaceManager only facilitates these calls.
This reimplementation also completely removes the id<->if_id mapping, since it is not used by the Python recipe implementation. All Devices are now referred to only by their index (as reported by the kernel), though they can be searched by other parameters as well.
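The reworked manager boils down to two dictionaries: dynamically registered Device classes, and devices keyed purely by kernel interface index. A hypothetical, condensed sketch of that shape (the real lookups additionally trigger a netlink rescan, and create_device matches the new netdev by name after a scan rather than taking the index directly):

```python
class DeviceNotFound(Exception):
    pass

class InterfaceManager(object):
    def __init__(self):
        self._device_classes = {}   # class name -> dynamically sent class
        self._devices = {}          # kernel if_index -> device object

    def add_device_class(self, name, cls):
        if name in self._device_classes:
            raise KeyError("Device class name conflict %s" % name)
        self._device_classes[name] = cls
        return cls

    def create_device(self, clsname, if_index, *args, **kwargs):
        # Instantiate one of the registered classes and index it by
        # the kernel-reported interface index.
        dev = self._device_classes[clsname](self, *args, **kwargs)
        self._devices[if_index] = dev
        return dev

    def get_device(self, if_index):
        try:
            return self._devices[if_index]
        except KeyError:
            raise DeviceNotFound()
```

Raising DeviceNotFound instead of returning None matches the change visible in the get_device hunks below, which lets the error propagate back to the Controller.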
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/InterfaceManager.py | 266 +++++++++++++---------------------------- 1 file changed, 80 insertions(+), 186 deletions(-)
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py index 51aa6cc..da7e712 100644 --- a/lnst/Slave/InterfaceManager.py +++ b/lnst/Slave/InterfaceManager.py @@ -15,12 +15,13 @@ olichtne@redhat.com (Ondrej Lichtner) import re import select import logging -from lnst.Slave.NetConfigDevice import NetConfigDevice from lnst.Slave.NetConfigCommon import get_option from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.NetUtils import scan_netdevs from lnst.Common.ExecCmd import exec_cmd from lnst.Common.ConnectionHandler import recv_data +from lnst.Common.DeviceError import DeviceNotFound +from lnst.Common.InterfaceManagerError import InterfaceManagerError from lnst.Slave.DevlinkManager import DevlinkManager from pyroute2 import IPRSocket from pyroute2.netlink.rtnl import RTNLGRP_IPV4_IFADDR @@ -37,51 +38,30 @@ except ImportError: from pyroute2.iproute import RTM_NEWADDR from pyroute2.iproute import RTM_DELADDR
-class IfMgrError(Exception): - pass - NL_GROUPS = RTNLGRP_IPV4_IFADDR | RTNLGRP_IPV6_IFADDR | RTNLGRP_LINK
class InterfaceManager(object): def __init__(self, server_handler): + self._device_classes = {} + self._devices = {} #if_index to device - self._id_mapping = {} #id from the ctl to if_index - self._tmp_mapping = {} #id from the ctl to newly created device
self._nl_socket = IPRSocket() self._nl_socket.bind(groups=NL_GROUPS)
self._dl_manager = DevlinkManager()
- self.rescan_devices() - self._server_handler = server_handler
- def map_if(self, if_id, if_index): - if if_id in self._id_mapping: - raise IfMgrError("Interface already mapped.") - elif if_index not in self._devices: - raise IfMgrError("No interface with index %s found." % if_index) + def clear_dev_classes(self): + self._device_classes = {}
- self._id_mapping[if_id] = if_index - return + def add_device_class(self, name, cls): + if name in self._device_classes: + raise InterfaceManagerError("Device class name conflict %s" % name)
- def unmap_if(self, if_id): - if if_id in self._id_mapping: - del self._id_mapping[if_id] - elif if_id in self._tmp_mapping: - del self._tmp_mapping[if_id] - else: - pass - - def clear_if_mapping(self): - self._id_mapping = {} - - def get_id_by_if_index(self, if_index): - for if_id, index in self._id_mapping.iteritems(): - if if_index == index: - return if_id - return None + self._device_classes[name] = cls + return cls
def reconnect_netlink(self): if self._nl_socket != None: @@ -100,42 +80,36 @@ class InterfaceManager(object): devs = scan_netdevs() for dev in devs: if dev['index'] not in self._devices: - device = None - for if_id, d in self._tmp_mapping.items(): - d_cfg = d.get_conf_dict() - if d_cfg["name"] == dev["name"]: - device = d - self._id_mapping[if_id] = dev['index'] - del self._tmp_mapping[if_id] - break - if device == None: - device = Device(self) - device.init_netlink(dev['netlink_msg']) + device = self._device_classes["Device"](self) + device._init_netlink(dev['netlink_msg']) self._devices[dev['index']] = device + + update_msg = {"type": "dev_created", + "dev_data": device._get_if_data()} + self._server_handler.send_data_to_ctl(update_msg) else: - self._devices[dev['index']].update_netlink(dev['netlink_msg']) + self._devices[dev['index']]._update_netlink(dev['netlink_msg']) devices_to_remove.remove(dev['index'])
- self._devices[dev['index']].clear_ips() + self._devices[dev['index']]._clear_ips() for addr_msg in dev['ip_addrs']: - self._devices[dev['index']].update_netlink(addr_msg) + self._devices[dev['index']]._update_netlink(addr_msg) for i in devices_to_remove: - if self._devices[i].get_netns() != None: - continue - - dev_name = self._devices[i].get_name() + dev_name = self._devices[i].name logging.debug("Deleting Device with if_index %d, name %s because "\ "it doesn't exist anymore." % (i, dev_name))
- del_msg = {"type": "if_deleted", + self._devices[i]._deleted = True + del self._devices[i] + + del_msg = {"type": "dev_deleted", "if_index": i} self._server_handler.send_data_to_ctl(del_msg) - del self._devices[i]
self._dl_manager.rescan_ports() for device in self._devices.values(): - dl_port = self._dl_manager.get_port(device.get_name()) - device.set_devlink(dl_port) + dl_port = self._dl_manager.get_port(device.name) + device._set_devlink(dl_port)
def handle_netlink_msgs(self, msgs): for msg in msgs: @@ -143,86 +117,61 @@ class InterfaceManager(object):
self._dl_manager.rescan_ports() for device in self._devices.values(): - dl_port = self._dl_manager.get_port(device.get_name()) - device.set_devlink(dl_port) + dl_port = self._dl_manager.get_port(device.name) + device._set_devlink(dl_port)
def _handle_netlink_msg(self, msg): if msg['header']['type'] in [RTM_NEWLINK, RTM_NEWADDR, RTM_DELADDR]: if msg['index'] in self._devices: - update_msg = self._devices[msg['index']].update_netlink(msg) - if update_msg != None: - for if_id, if_index in self._id_mapping.iteritems(): - if if_index == msg['index']: - update_msg["if_id"] = if_id - break - self._server_handler.send_data_to_ctl(update_msg) + self._devices[msg['index']]._update_netlink(msg) elif msg['header']['type'] == RTM_NEWLINK: - dev = None - for if_id, d in self._tmp_mapping.items(): - d_cfg = d.get_conf_dict() - if d_cfg["name"] == msg.get_attr("IFLA_IFNAME"): - dev = d - self._id_mapping[if_id] = msg['index'] - del self._tmp_mapping[if_id] - break - if dev == None: - dev = Device(self) - update_msg = dev.init_netlink(msg) + dev = self._device_classes["Device"](self) + dev._init_netlink(msg) self._devices[msg['index']] = dev
- if update_msg != None: - for if_id, if_index in self._id_mapping.iteritems(): - if if_index == msg['index']: - update_msg["if_id"] = if_id - break - self._server_handler.send_data_to_ctl(update_msg) - + update_msg = {"type": "dev_created", + "dev_data": dev._get_if_data()} + self._server_handler.send_data_to_ctl(update_msg) elif msg['header']['type'] == RTM_DELLINK: if msg['index'] in self._devices: dev = self._devices[msg['index']] - if dev.get_netns() == None and dev.get_conf_dict() == None: - dev.del_link() - del self._devices[msg['index']] + dev._deleted = True
- del_msg = {"type": "if_deleted", - "if_index": msg['index']} - self._server_handler.send_data_to_ctl(del_msg) - else: - return + del self._devices[msg['index']]
- def get_mapped_device(self, if_id): - if if_id in self._id_mapping: - if_index = self._id_mapping[if_id] - return self._devices[if_index] - elif if_id in self._tmp_mapping: - return self._tmp_mapping[if_id] + del_msg = {"type": "dev_deleted", + "if_index": msg['index']} + self._server_handler.send_data_to_ctl(del_msg) else: - return None - - def get_mapped_devices(self): - ret = {} - for if_id, if_index in self._id_mapping.iteritems(): - ret[if_id] = self._devices[if_index] - for if_id in self._tmp_mapping: - ret[if_id] = self._tmp_mapping[if_id] - return ret + return
def get_device(self, if_index): + self.rescan_devices() if if_index in self._devices: return self._devices[if_index] else: - return None + raise DeviceNotFound()
def get_devices(self): + self.rescan_devices() return self._devices.values()
def get_device_by_hwaddr(self, hwaddr): + self.rescan_devices() for dev in self._devices.values(): - if dev.get_hwaddr() == hwaddr: + if dev.hwaddr == hwaddr: return dev - return None + raise DeviceNotFound() + + def get_device_by_name(self, name): + self.rescan_devices() + for dev in self._devices.values(): + if dev.name == name: + return dev + raise DeviceNotFound()
def get_device_by_params(self, params): + self.rescan_devices() matched = None for dev in self._devices.values(): matched = dev @@ -239,63 +188,44 @@ class InterfaceManager(object):
def deconfigure_all(self): for dev in self._devices.itervalues(): - dev.clear_configuration() - - def create_device_from_config(self, if_id, config): - if config["type"] == "eth": - raise IfMgrError("Ethernet devices can't be created.") - - config["name"] = self.assign_name(config) + pass + # dev.clear_configuration()
- device = Device(self) - self._tmp_mapping[if_id] = device + def create_device(self, clsname, args=[], kwargs={}): + devcls = self._device_classes[clsname]
- device.set_configuration(config) + device = devcls(self, *args, **kwargs) device.create()
- return config["name"] - - def create_device_pair(self, if_id1, config1, if_id2, config2): - name1, name2 = self.assign_name(config1) - config1["name"] = name1 - config2["name"] = name2 - config1["peer_name"] = name2 - config2["peer_name"] = name1 - - device1 = Device(self) - device2 = Device(self) - self._tmp_mapping[if_id1] = device1 - self._tmp_mapping[if_id2] = device2 - - device1.set_configuration(config1) - device2.set_configuration(config2) - device1.create() - - device1.set_peer(device2) - device2.set_peer(device1) - return name1, name2 - - def wait_interface_init(self): - while len(self._tmp_mapping) > 0: - rl, wl, xl = select.select([self._nl_socket], [], [], 1) + devs = scan_netdevs() + for dev in devs: + if dev["name"] == device.name: + device._init_netlink(dev['netlink_msg']) + self._devices[dev['index']] = device + return device
- if len(rl) == 0: - continue + return None
- msgs = recv_data(self._nl_socket)["data"] - self.handle_netlink_msgs(msgs) + def replace_dev(self, if_id, dev): + del self._devices[if_id] + self._devices[if_id] = dev
def _is_name_used(self, name): self.rescan_devices() for device in self._devices.itervalues(): - if name == device.get_name(): - return True - for device in self._tmp_mapping.itervalues(): - if name == device.get_name(): + if name == device.name: return True + + out, _ = exec_cmd("ovs-vsctl --columns=name list Interface", + log_outputs=False, die_on_err=False) + for line in out.split("\n"): + m = re.match(r'.*: "(.*)"', line) + if m is not None: + if name == m.group(1): + return True return False
- def assign_name_generic(self, prefix): + def assign_name(self, prefix): index = 0 while (self._is_name_used(prefix + str(index))): index += 1 @@ -311,42 +241,6 @@ class InterfaceManager(object): index2 += 1 return prefix + str(index1), prefix + str(index2)
- def assign_name(self, config): - if "name" in config: - return config["name"] - dev_type = config["type"] - if dev_type == "eth": - if (not "hwaddr" in config or - "name" in config): - return - hwaddr = normalize_hwaddr(config["hwaddr"]) - for dev in self._devices: - if dev.get_hwaddr() == hwaddr: - return dev.get_name() - elif dev_type == "bond": - return self.assign_name_generic("t_bond") - elif dev_type == "bridge" or dev_type == "ovs_bridge": - return self.assign_name_generic("t_br") - elif dev_type == "macvlan": - return self.assign_name_generic("t_macvlan") - elif dev_type == "team": - return self.assign_name_generic("t_team") - elif dev_type == "vlan": - netdev_name = self.get_mapped_device(config["slaves"][0]).get_name() - vlan_tci = get_option(config, "vlan_tci") - prefix = "%s.%s_" % (netdev_name, vlan_tci) - return self.assign_name_generic(prefix) - elif dev_type == "veth": - return self._assign_name_pair("veth") - elif dev_type == "vti": - return self.assign_name_generic("vti") - elif dev_type == "vti6": - return self.assign_name_generic("t_ip6vti") - elif dev_type == "vxlan": - return self.assign_name_generic("vxlan") - else: - return self.assign_name_generic("dev") - class Device(object): def __init__(self, if_manager): self._initialized = False
From: Ondrej Lichtner <olichtne@redhat.com>
This commit heavily reimplements the lnst.Controller.Machine and lnst.Slave.NetTestSlave modules. Since these modules directly depend on each other, these changes are introduced together in the same commit.
On the Controller side: * this switches the Machine class implementation from working with the old XML recipe code to working purely with the Python recipe implementation, where the Machine class is instantiated by a SlavePoolManager and is used by the tester through the Host class, which provides a tester-facing API.
* the Machine class now needs to synchronize different resources to the slave, namely: * Device classes from the lnst.Devices package * test module code, which is instantiated on the Controller and sent to the Slave when required (including all parent classes required for the reconstruction of the object on the Slave). * In the future this can be extended to also send the InterfaceManager implementation, and maybe even the SlaveMethods implementation.
* includes methods relaying Device method calls to the Slave
* Removes the old Interface implementation, as well as the original simple Device class, since this is all handled by the new Device classes.
* refactors the rpc_call method to only have one implementation for everything (including network namespaces, though they're not supported at the moment...).
* switches to using the new Job implementation.
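Synchronizing a Device class to the Slave requires its parent classes to arrive first, so that each class can be reconstructed on top of its bases. A minimal sketch of that base-class walk (illustrative name; the real Machine._send_device_classes iterates the reversed list, skips object, and then syncs each class's defining module file by hash):

```python
def collect_base_classes(cls):
    # Depth-first walk of the inheritance graph; callers iterate the
    # reversed list so ancestors are synced before their subclasses.
    seen = []
    stack = [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.append(c)
            stack.extend(c.__bases__)
    return seen
```

For a class B(A), reversed(collect_base_classes(B)) yields object, then A, then B, which is the load order the Slave needs.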
On the Slave side: * accept Device and test module classes; all dynamically received classes are stored in the new Resource cache. If a dynamically received class is imported during recipe execution, it is also cleaned up when the recipe finishes execution.
* ServerHandler now translates Device objects into Device references when sending these objects over sockets.
* switches to using the new Job implementation.
* comments out old code that has not been refactored and is not supported yet with the python recipe implementation - mostly network namespaces.
* all exceptions are now transferred to the Controller where they'll be reconstructed and raised so they can be handled there (unless they can be handled on the Slave)
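The Device-to-reference translation done by the ServerHandler can be sketched as a recursive substitution over the message payload (a hypothetical, self-contained version; the real code also performs the reverse mapping, turning references back into Device objects on the receiving side):

```python
class Device(object):
    # Stand-in for the real lnst Device class; only the kernel
    # interface index matters for building a reference.
    def __init__(self, if_index):
        self.if_index = if_index

def device_to_ref(obj):
    # Replace Device instances with plain, picklable references and
    # recurse into containers so nested payloads are covered too.
    if isinstance(obj, Device):
        return {"type": "dev_ref", "if_index": obj.if_index}
    if isinstance(obj, dict):
        return dict((k, device_to_ref(v)) for k, v in obj.items())
    if isinstance(obj, (list, tuple)):
        return type(obj)(device_to_ref(v) for v in obj)
    return obj
```

This keeps the wire format free of live objects: only the if_index crosses the socket, and each side resolves it against its own device database.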
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
--- v2: * improved class and module synchronization implementation * join split lines in deviceref_to_device method --- lnst/Controller/Machine.py | 1327 ++++++++++---------------------------------- lnst/Slave/NetTestSlave.py | 829 +++++++++++++-------------- 2 files changed, 678 insertions(+), 1478 deletions(-)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 3bfcf82..f057390 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -14,17 +14,24 @@ rpazdera@redhat.com (Radek Pazdera) import logging import socket import os +import sys import tempfile import signal from time import sleep from xmlrpclib import Binary -from functools import wraps from lnst.Common.NetUtils import normalize_hwaddr +from lnst.Common.Utils import sha256sum from lnst.Common.Utils import wait_for, create_tar_archive from lnst.Common.Utils import check_process_running +from lnst.Common.TestModule import BaseTestModule from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT +from lnst.Common.DeviceError import DeviceDeleted, DeviceNotFound from lnst.Controller.Common import ControllerError from lnst.Controller.CtlSecSocket import CtlSecSocket +from lnst.Devices import device_classes +from lnst.Devices.Device import Device +from lnst.Devices.RemoteDevice import RemoteDevice +from lnst.Devices.VirtualDevice import VirtualDevice
# conditional support for libvirt if check_process_running("libvirtd"): @@ -51,7 +58,6 @@ class Machine(object): self._ctl_config = ctl_config self._slave_desc = None self._connection = None - self._configured = False self._system_config = {} self._security = security self._security["identity"] = ctl_config.get_option("security", @@ -76,8 +82,19 @@ class Machine(object): self._interfaces = [] self._namespaces = [] self._bg_cmds = {} + self._jobs = {} + self._job_id_seq = 0
self._device_database = {} + self._tmp_device_database = [] + + self._init_connection() + + def set_id(self, new_id): + self._id = new_id + + def get_id(self): + return self._id
def get_configuration(self): configuration = {} @@ -87,150 +104,96 @@ class Machine(object): configuration["redhat_release"] = self._slave_desc["redhat_release"]
configuration["interfaces"] = {} - for i in self._interfaces: - if not isinstance(i, UnusedInterface): - configuration["interface_"+i.get_id()] = i.get_config() + for dev in self._device_database.items(): + configuration["device_"+dev.name] = dev.get_config() return configuration
- def _if_id_exists(self, if_id): - for iface in self._interfaces: - if if_id == iface.get_id(): - return True - return False + def add_tmp_device(self, dev): + self._tmp_device_database.append(dev) + + def create_remote_device(self, dev): + dev_clsname = dev._dev_cls.__name__ + dev_args = dev._dev_args + dev_kwargs = dev._dev_kwargs + ret = self.rpc_call("create_device", clsname=dev_clsname, + args=dev_args, + kwargs=dev_kwargs) + dev.if_index = ret["if_index"] + self._device_database[ret["if_index"]] = dev + + def device_created(self, dev_data): + if_index = dev_data["if_index"] + if if_index not in self._device_database: + new_dev = None + if len(self._tmp_device_database) > 0: + for dev in self._tmp_device_database: + if dev._match_update_data(dev_data): + new_dev = dev + break
- def _generate_if_id(self, if_type): - i = 0 - while True: - if_id = "gen_%s_%d" % (if_type, i) - if not self._if_id_exists(if_id): - break - i += 1 - return if_id - - def _add_interface(self, if_id, if_type, cls): - if if_id != None: - if self._if_id_exists(if_id): - msg = "Interface '%s' already exists on machine '%s'" \ - % (if_id, self._id) - raise MachineError(msg) - else: - if_id = self._generate_if_id(if_type) + if new_dev is None: + new_dev = RemoteDevice(Device) + new_dev.host = self + new_dev.if_index = if_index + else: + self._tmp_device_database.remove(new_dev)
- iface = cls(self, if_id, if_type) - self._interfaces.append(iface) - return iface + new_dev.if_index = dev_data["if_index"]
- def remove_interface(self, if_id): - iface = self.get_interface(if_id) - self._interfaces.remove(iface) + self._device_database[if_index] = new_dev
- def interface_update(self, if_data): - try: - iface = self.get_interface(if_data["if_id"]) - except: - iface = None - if iface: - iface.update(if_data['if_data']) + def device_delete(self, dev_data): + if dev_data["if_index"] in self._device_database: + self._device_database[dev_data["if_index"]].deleted = True
- if if_data["if_data"]["if_index"] in self._device_database: - dev = self._device_database[if_data["if_data"]["if_index"]] - dev.update_data(if_data['if_data']) + def dev_db_get_if_index(self, if_index): + if if_index in self._device_database: + return self._device_database[if_index] else: - dev = Device(if_data["if_data"], self) - self._device_database[if_data["if_data"]["if_index"]] = dev - - def dev_db_delete(self, update_msg): - if update_msg["if_index"] in self._device_database: - del self._device_database[update_msg["if_index"]] + return None
def dev_db_get_name(self, dev_name): + #TODO move these to Slave to optimize quering for each device for if_index, dev in self._device_database.iteritems(): if dev.get_name() == dev_name: return dev return None
- # - # Factory methods for constructing interfaces on this machine. The - # types of interfaces are explained with the classes below. - # - def new_static_interface(self, if_id, if_type): - return self._add_interface(if_id, if_type, StaticInterface) - - def new_unused_interface(self, if_type): - return self._add_interface(None, if_type, UnusedInterface) - - def new_virtual_interface(self, if_id, if_type): - return self._add_interface(if_id, if_type, VirtualInterface) - - def new_soft_interface(self, if_id, if_type): - return self._add_interface(if_id, if_type, SoftInterface) - - def new_loopback_interface(self, if_id): - return self._add_interface(if_id, 'lo', LoopbackInterface) - - def get_interface(self, if_id): - for iface in self._interfaces: - if iface.get_id != None and if_id == iface.get_id(): - return iface - - msg = "Interface '%s' not found on machine '%s'" % (if_id, self._id) - raise MachineError(msg) - - def get_interfaces(self): - return self._interfaces - - def get_ordered_interfaces(self): - ordered_list = list(self._interfaces) - change = True - while change: - change = False - swap = False - ind1 = 0 - ind2 = 0 - for i in ordered_list: - master = i.get_primary_master() - if master != None: - ind1 = ordered_list.index(i) - ind2 = ordered_list.index(master) - if ind1 > ind2: - swap = True - break - if swap: - change = True - tmp = ordered_list[ind1] - ordered_list[ind1] = ordered_list[ind2] - ordered_list[ind2] = tmp - return ordered_list - - def _rpc_call(self, method_name, *args): - data = {"type": "command", "method_name": method_name, "args": args} - - self._msg_dispatcher.send_message(self._id, data) - result = self._msg_dispatcher.wait_for_result(self._id) - - return result + def get_dev_by_hwaddr(self, hwaddr): + #TODO move these to Slave to optimize quering for each device + for if_index, dev in self._device_database.iteritems(): + if dev.hwaddr == hwaddr: + return dev + return None
- def _rpc_call_to_netns(self, netns, method_name, *args): - data = {"type": "command", "method_name": method_name, "args": args} - msg = {"type": "to_netns", "netns": netns, "data": data} + def rpc_call(self, method_name, *args, **kwargs): + if "netns" in kwargs and kwargs["netns"] is not None: + netns = kwargs["netns"] + del kwargs["netns"] + msg = {"type": "to_netns", + "netns": netns, + "data": {"type": "command", + "method_name": method_name, + "args": args, + "kwargs": kwargs}} + else: + if "netns" in kwargs: + del kwargs["netns"] + msg = {"type": "command", + "method_name": method_name, + "args": args, + "kwargs": kwargs}
- self._msg_dispatcher.send_message(self._id, msg) - result = self._msg_dispatcher.wait_for_result(self._id) + self._msg_dispatcher.send_message(self, msg) + result = self._msg_dispatcher.wait_for_result(self)
return result
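The unified rpc_call visible in the hunk above builds a single command message and, when a netns keyword is supplied, wraps it in a "to_netns" envelope rather than taking a separate code path. The message shapes can be sketched as:

```python
def build_rpc_msg(method_name, args, kwargs):
    # Pop the transport-level "netns" keyword so it is not passed
    # through to the remote method itself.
    netns = kwargs.pop("netns", None)
    msg = {"type": "command",
           "method_name": method_name,
           "args": args,
           "kwargs": kwargs}
    if netns is not None:
        msg = {"type": "to_netns", "netns": netns, "data": msg}
    return msg
```

This is why cleanup can simply call kill_jobs once per namespace with netns=... instead of needing the old _rpc_call/_rpc_call_to_netns/_rpc_call_x trio.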
- def _rpc_call_x(self, netns, method_name, *args): - if not netns: - return self._rpc_call(method_name, *args) - return self._rpc_call_to_netns(netns, method_name, *args) - - def init_connection(self, recipe_name): + def _init_connection(self): """ Initialize the slave connection
- Calling this method will initialize the rpc connection to the - machine and initialize all the interfaces. Note, that it will - *not* configure the interfaces. They need to be configured - individually later on. + This will connect to the Slave, get it's description (should be + usable for matching), and checks version compatibility """ hostname = self._hostname port = self._port @@ -242,7 +205,7 @@ class Machine(object):
self._msg_dispatcher.add_slave(self, connection)
- hello, slave_desc = self._rpc_call("hello", recipe_name) + hello, slave_desc = self.rpc_call("hello") if hello != "hello": msg = "Unable to establish RPC connection " \ "to machine %s, handshake failed!" % hostname @@ -266,14 +229,45 @@ class Machine(object):
self._slave_desc = slave_desc
- devices = self._rpc_call("get_devices") + def set_recipe(self, recipe_name): + """ Reserves the machine for the specified recipe + + Also sends Device classes from the controller and initializes the + InterfaceManager on the Slave and builds the device database. + """ + self.rpc_call("set_recipe", recipe_name) + self._send_device_classes() + self.rpc_call("init_if_manager") + + devices = self.rpc_call("get_devices") for if_index, dev in devices.items(): - self._device_database[if_index] = Device(dev, self) + remote_dev = RemoteDevice(Device) + remote_dev.host = self + remote_dev.if_index = if_index
-        for iface in self._interfaces:
-            iface.initialize()
+            self._device_database[if_index] = remote_dev
+
+    def _send_device_classes(self):
+        classes = []
+        for cls_name, cls in device_classes:
+            classes.extend(reversed(self._get_base_classes(cls)))
+
+        for cls in classes:
+            if cls is object:
+                continue
+            module_name = cls.__module__
+            module = sys.modules[module_name]
+            filename = module.__file__
-        self._configured = True
+            if filename[-3:] == "pyc":
+                filename = filename[:-1]
+
+            res_hash = self.sync_resource(module_name, filename)
+            self.rpc_call("load_cached_module", module_name, res_hash)
+
+        for cls_name, cls in device_classes:
+            module_name = cls.__module__
+            self.rpc_call("map_device_class", cls_name, module_name)
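Before shipping a Device class, `_send_device_classes()` has to locate the `.py` source of the module defining each class, trimming a cached `pyc` path back to the source file. In isolation that lookup amounts to something like the following (`class_source_file` is a hypothetical helper used only for illustration):

```python
# Resolve the source file that defines a class: look up the module by
# the class's __module__ attribute and trim a ".pyc" filename back to
# the ".py" source, the same way _send_device_classes() does.
import sys

def class_source_file(cls):
    module = sys.modules[cls.__module__]
    filename = module.__file__
    if filename.endswith("pyc"):
        filename = filename[:-1]  # "foo.pyc" -> "foo.py"
    return filename
```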
     def is_git_version(self, version):
         try:
@@ -282,10 +276,12 @@ class Machine(object):
         except ValueError:
             return True
-    def is_configured(self):
-        """ Test if the machine was configured """
-
-        return self._configured
+    def cleanup_devices(self):
+        self.rpc_call("destroy_devices")
+        for dev in self._device_database.values():
+            if isinstance(dev, VirtualDevice):
+                dev.destroy()
+        self._device_database = {}
     def cleanup(self):
         """ Clean the machine up
 
@@ -295,114 +291,144 @@ class Machine(object):
         all the interfaces that have been configured on the machine,
         and finalize and close the rpc connection to the machine.
         """
-        if not self._configured:
-            return
-
         #connection to the slave was closed
-        if not self._msg_dispatcher.get_connection(self._id):
+        if not self._msg_dispatcher.get_connection(self):
             return
-        ordered_ifaces = self.get_ordered_interfaces()
         try:
             #dump statistics
-            for iface in self._interfaces:
-                # Getting stats only from real interfaces
-                if isinstance(iface, UnusedInterface):
-                    continue
-                stats = iface.link_stats()
-                logging.debug("%s:%s:%s: RX:\t bytes: %d\t packets: %d\t dropped: %d" %
-                              (iface.get_netns(), iface.get_host(), iface.get_id(),
-                               stats["rx_bytes"], stats["rx_packets"], stats["rx_dropped"]))
-                logging.debug("%s:%s:%s: TX:\t bytes: %d\t packets: %d\t dropped: %d" %
-                              (iface.get_netns(), iface.get_host(), iface.get_id(),
-                               stats["tx_bytes"], stats["tx_packets"], stats["tx_dropped"]))
-
-            self._rpc_call("kill_cmds")
+            # for iface in self._interfaces:
+            #     # Getting stats only from real interfaces
+            #     if isinstance(iface, UnusedInterface):
+            #         continue
+            #     stats = iface.link_stats()
+            #     logging.debug("%s:%s:%s: RX:\t bytes: %d\t packets: %d\t dropped: %d" %
+            #                   (iface.get_netns(), iface.get_host(), iface.get_id(),
+            #                    stats["rx_bytes"], stats["rx_packets"], stats["rx_dropped"]))
+            #     logging.debug("%s:%s:%s: TX:\t bytes: %d\t packets: %d\t dropped: %d" %
+            #                   (iface.get_netns(), iface.get_host(), iface.get_id(),
+            #                    stats["tx_bytes"], stats["tx_packets"], stats["tx_dropped"]))
+
+            self.rpc_call("kill_jobs")
             for netns in self._namespaces:
-                self._rpc_call_to_netns(netns, "kill_cmds")
+                self.rpc_call("kill_jobs", netns=netns)
             self.restore_system_config()
-
-            ordered_ifaces.reverse()
-            for iface in ordered_ifaces:
-                iface.deconfigure()
-            for iface in ordered_ifaces:
-                iface.cleanup()
-
-            self.del_namespaces()
-
-            self.restore_nm_option()
-            self._rpc_call("bye")
+            self.cleanup_devices()
+            # self.del_namespaces()
+            # self.restore_nm_option()
+            self.rpc_call("bye")
         except:
             #cleanup is only meaningful on dynamic interfaces, and should
             #always be called when deconfiguration happens- especially
             #when something on the slave breaks during deconfiguration
-            for iface in ordered_ifaces:
-                if not isinstance(iface, VirtualInterface):
-                    continue
-                iface.cleanup()
+            self.cleanup_devices()
             raise
-        finally:
-            self._msg_dispatcher.disconnect_slave(self.get_id())
-
-        self._configured = False
     def _timeout_handler(self, signum, frame):
-        msg = "RPC connection to machine %s timed out" % self.get_id()
+        msg = "Timeout expired on machine %s" % self.get_id()
         raise MachineError(msg)
-    def run_command(self, command):
-        """ Run a command on the machine """
+    def _get_base_classes(self, cls):
+        new_bases = [cls] + list(cls.__bases__)
+        bases = []
+        while len(new_bases) != len(bases):
+            bases = new_bases
+            new_bases = list(bases)
+            for b in bases:
+                for bs in b.__bases__:
+                    if bs not in new_bases:
+                        new_bases.append(bs)
+        return new_bases
+
+    def run_job(self, job):
+        job.id = self._job_id_seq
+        self._job_id_seq += 1
+        self._jobs[job.id] = job
+
+        if job._type == "module":
+            classes = [job._what]
+            classes.extend(self._get_base_classes(job._what.__class__))
+
+            for cls in reversed(classes):
+                if cls is object or cls is BaseTestModule:
+                    continue
+                m_name = cls.__module__
+                m = sys.modules[m_name]
+                filename = m.__file__
+                if filename[-3:] == "pyc":
+                    filename = filename[:-1]
-        prev_handler = signal.signal(signal.SIGALRM, self._timeout_handler)
+                res_hash = self.sync_resource(m_name, filename)
-        if 'bg_id' in command:
-            self._bg_cmds[command['bg_id']] = command
-        if command["type"] in ["wait", "intr", "kill"]:
-            bg_cmd = self._bg_cmds[command["proc_id"]]
-            if bg_cmd["netns"] != None:
-                command["netns"] = bg_cmd["netns"]
-
-        netns = command["netns"]
-        if command["type"] == "wait":
-            logging.debug("Get remaining time of bg process with bg_id == %s"
-                          % command["proc_id"])
-            remaining_time = self._rpc_call_x(netns, "get_remaining_time",
-                                              command["proc_id"])
-            logging.debug("Setting timeout to %d", remaining_time)
-            if remaining_time > 0:
-                signal.alarm(remaining_time)
-            else:
-                # 2 seconds is enough time to do wait via RPC and collect
-                # the result
-                signal.alarm(2)
-        else:
-            if "timeout" in command:
-                timeout = command["timeout"]
-                logging.debug("Setting timeout to \"%d\"", timeout)
-                signal.alarm(timeout)
-            else:
-                logging.debug("Setting default timeout (%ds)." % DEFAULT_TIMEOUT)
-                signal.alarm(DEFAULT_TIMEOUT)
+                self.rpc_call("load_cached_module", m_name, res_hash)
+
+        logging.info("Host %s executing job %d: %s" %
+                     (self._id, job.id, str(job)))
+        if job._desc is not None:
+            logging.info("Job description: %s" % job._desc)
+
+        return self.rpc_call("run_job", job._to_dict(), netns=job._netns)
+
+    def wait_for_job(self, job, timeout):
+        res = True
+        if job.id not in self._jobs:
+            raise MachineError("No job '%s' running on Machine %s" %
+                               (job.id, self._id))
+
+        prev_handler = signal.signal(signal.SIGALRM, self._timeout_handler)
+        signal.alarm(timeout)
         try:
-            cmd_res = self._rpc_call_x(netns, "run_command", command)
+            if timeout > 0:
+                logging.info("Waiting for Job %d on Host %s for %d seconds." %
+                             (job.id, self._id, timeout))
+            elif timeout == 0:
+                logging.info("Waiting for Job %d on Host %s." %
+                             (job.id, self._id))
+            result = self._msg_dispatcher.wait_for_finish(self, job.id)
         except MachineError as exc:
-            if "proc_id" in command:
-                cmd_res = self._rpc_call_x(netns, "kill_command",
-                                           command["proc_id"])
-            else:
-                cmd_res = self._rpc_call_x(netns, "kill_command",
-                                           None)
+            logging.error(str(exc))
+            res = False
-            if "killed" in cmd_res and cmd_res["killed"]:
-                cmd_res["passed"] = False
-                cmd_res["msg"] = str(exc)
+        signal.alarm(0)
+        signal.signal(signal.SIGALRM, prev_handler)
+
+        return res
+
+    def wait_for_tmp_devices(self, timeout):
+        res = False
+        prev_handler = signal.signal(signal.SIGALRM, self._timeout_handler)
+        signal.alarm(timeout)
+
+        try:
+            if timeout > 0:
+                logging.info("Waiting for Device creation on Host %s for %d seconds." %
+                             (self._id, timeout))
+            elif timeout == 0:
+                logging.info("Waiting for Device creation on Host %s." %
+                             (self._id))
+
+            while len(self._tmp_device_database) > 0:
+                result = self._msg_dispatcher.handle_messages()
+        except MachineError as exc:
+            logging.error(str(exc))
+            res = False
         signal.alarm(0)
         signal.signal(signal.SIGALRM, prev_handler)
+        return res
+
+    def job_finished(self, msg):
+        job_id = msg["job_id"]
+        job = self._jobs[job_id]
+        job._res = msg["result"]
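Both `wait_for_job()` and `wait_for_tmp_devices()` follow the classic SIGALRM timeout pattern: install a handler, arm the alarm, and always disarm it and restore the previous handler afterwards. A standalone sketch of that pattern (Unix-only; the helper names are illustrative, not from the patch):

```python
# Generic SIGALRM-based timeout wrapper: arm an alarm before running a
# callable, raise on expiry, and restore the previous handler in all
# cases so the process-wide signal state is not leaked.
import signal

class TimeoutExpired(Exception):
    pass

def _alarm_handler(signum, frame):
    raise TimeoutExpired("timeout expired")

def run_with_timeout(func, timeout):
    prev_handler = signal.signal(signal.SIGALRM, _alarm_handler)
    signal.alarm(timeout)
    try:
        return func()
    finally:
        signal.alarm(0)                              # cancel pending alarm
        signal.signal(signal.SIGALRM, prev_handler)  # restore old handler
```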
-        return cmd_res
+    def kill(self, job, signal):
+        if job.id not in self._jobs:
+            raise MachineError("No job '%s' running on Machine %s" %
+                               (job.id, self._id))
+        return self.rpc_call("kill_job", job.id, signal, netns=job.netns)
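`_get_base_classes()` computes the transitive closure of `__bases__` with a fixed-point loop rather than relying on the MRO. A shorter worklist formulation with the same effect (an illustrative equivalent, not the patch's exact code; the demo classes are hypothetical):

```python
# Collect a class and every (transitive) base class, in discovery order.
# A worklist scan over __bases__ reaches the same fixed point as the
# patch's repeated-pass loop.
def get_base_classes(cls):
    bases = [cls]
    i = 0
    while i < len(bases):
        for b in bases[i].__bases__:
            if b not in bases:
                bases.append(b)
        i += 1
    return bases

# Small demonstration hierarchy:
class A(object): pass
class B(A): pass
class C(B, A): pass
```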
     def get_hostname(self):
         """ Get hostname/ip of the machine
@@ -415,13 +441,6 @@ class Machine(object):
     def get_libvirt_domain(self):
         return self._libvirt_domain
-    def get_id(self):
-        """ Returns machine's id as defined in the recipe """
-        return self._id
-
-    def set_rpc(self, dispatcher):
-        self._msg_dispatcher = dispatcher
-
     def get_mac_pool(self):
         if self._mac_pool:
             return self._mac_pool
@@ -432,9 +451,9 @@ class Machine(object):
         self._mac_pool = mac_pool
def restore_system_config(self): - self._rpc_call("restore_system_config") + self.rpc_call("restore_system_config") for netns in self._namespaces: - self._rpc_call_to_netns(netns, "restore_system_config") + self.rpc_call("restore_system_config", netns=netns) return True
     def set_network_bridges(self, bridges):
@@ -459,7 +478,7 @@ class Machine(object):
         tmp = {}
         for netns in namespaces:
-            tmp.update(self._rpc_call_x(netns, "start_packet_capture", ""))
+            tmp.update(self.rpc_call("start_packet_capture", "", netns=netns))
         return tmp
def stop_packet_capture(self): @@ -468,10 +487,10 @@ class Machine(object): namespaces.add(iface.get_netns())
         for netns in namespaces:
-            self._rpc_call_x(netns, "stop_packet_capture")
+            self.rpc_call("stop_packet_capture", netns=netns)
     def copy_file_to_machine(self, local_path, remote_path=None, netns=None):
-        remote_path = self._rpc_call_x(netns, "start_copy_to", remote_path)
+        remote_path = self.rpc_call("start_copy_to", remote_path, netns=netns)
f = open(local_path, "rb")
@@ -480,14 +499,14 @@ class Machine(object):
             if len(data) == 0:
                 break
-            self._rpc_call_x(netns, "copy_part_to", remote_path, Binary(data))
+            self.rpc_call("copy_part_to", remote_path, data, netns=netns)
-        self._rpc_call_x(netns, "finish_copy_to", remote_path)
+        self.rpc_call("finish_copy_to", remote_path, netns=netns)
return remote_path
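`copy_file_to_machine()` streams files in fixed-size reads until a zero-length chunk signals EOF. The same loop with a generic sink callback in place of the `rpc_call` (`copy_in_chunks` is a hypothetical helper for illustration):

```python
# Stream a file to an arbitrary sink in fixed-size chunks, stopping on
# the empty read that marks end-of-file -- the same loop shape the
# copy_file_to_machine() transfer uses over RPC.
def copy_in_chunks(src_path, write_chunk, buf_size=1024 * 1024):
    with open(src_path, "rb") as f:
        while True:
            data = f.read(buf_size)
            if len(data) == 0:
                break
            write_chunk(data)
```

The 1 MB default mirrors the buffer size used by the transfer code in this patch.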
     def copy_file_from_machine(self, remote_path, local_path):
-        status = self._rpc_call("start_copy_from", remote_path)
+        status = self.rpc_call("start_copy_from", remote_path)
         if not status:
             raise MachineError("The requested file cannot be transfered." \
                                "It does not exist on machine %s" % self.get_id())
@@ -495,836 +514,54 @@ class Machine(object):
         local_file = open(local_path, "wb")
         buf_size = 1024*1024 # 1MB buffer
-        binary = "next"
-        while binary != "":
-            binary = self._rpc_call("copy_part_from", remote_path, buf_size)
-            local_file.write(binary.data)
+        while True:
+            data = self.rpc_call("copy_part_from", remote_path, buf_size)
+            if data == "":
+                break
+            local_file.write(data)
         local_file.close()
-        self._rpc_call("finish_copy_from", remote_path)
-
-    def sync_resources(self, required):
-        self._rpc_call("clear_resource_table")
-
-        for res_type, resources in required.iteritems():
-            for res_name, res in resources.iteritems():
-                has_resource = self._rpc_call("has_resource", res["hash"])
-                if not has_resource:
-                    msg = "Transfering %s %s to machine %s" % \
-                          (res_name, res_type, self.get_id())
-                    logging.info(msg)
+        self.rpc_call("finish_copy_from", remote_path)
-                    local_path = required[res_type][res_name]["path"]
+    def sync_resource(self, res_name, file_path):
+        digest = sha256sum(file_path)
-                    if res_type == "tools":
-                        archive = tempfile.NamedTemporaryFile(delete=False)
-                        archive_path = archive.name
-                        archive.close()
+        if not self.rpc_call("has_resource", digest):
+            msg = "Transferring %s to machine %s as '%s'" % (file_path,
+                                                             self.get_id(),
+                                                             res_name)
+            logging.debug(msg)
-                        create_tar_archive(local_path, archive_path, True)
-                        local_path = archive_path
+            remote_path = self.copy_file_to_machine(file_path)
+            self.rpc_call("add_resource_to_cache",
+                          "file", remote_path, res_name)
+        return digest
-                    remote_path = self.copy_file_to_machine(local_path)
-                    self._rpc_call("add_resource_to_cache", res["hash"],
-                                   remote_path, res_name, res["path"], res_type)
+    # def enable_nm(self):
+        # return self._rpc_call("enable_nm")
-                    for ns in self._namespaces:
-                        remote_path = self.copy_file_to_machine(local_path,
-                                                                netns=ns)
-                        self._rpc_call_to_netns(ns, "add_resource_to_cache",
-                                                res["hash"], remote_path, res_name, res["path"],
-                                                res_type)
+    # def disable_nm(self):
+        # return self._rpc_call("disable_nm")
-                    if res_type == "tools":
-                        os.unlink(archive_path)
-
-                self._rpc_call("map_resource", res["hash"], res_type, res_name)
-                for ns in self._namespaces:
-                    self._rpc_call_to_netns(ns, "map_resource", res["hash"],
-                                            res_type, res_name)
-
-    def enable_nm(self):
-        return self._rpc_call("enable_nm")
-
-    def disable_nm(self):
-        return self._rpc_call("disable_nm")
-
-    def restore_nm_option(self):
-        return self._rpc_call("restore_nm_option")
+    # def restore_nm_option(self):
        # return self._rpc_call("restore_nm_option")
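The new `sync_resource()` keys the Slave-side cache on the file's SHA-256 digest, so identical files are transferred only once. A minimal sketch of such content-addressed caching; the `sha256sum()` helper from `lnst.Common.Utils` is approximated here, and `ResourceCacheSketch` is an illustrative stand-in for the real cache:

```python
# Content-addressed resource caching: hash the file, ask the cache
# whether that digest is already known, and transfer only on a miss.
import hashlib

def sha256sum(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in blocks so large resources do not load into memory at once
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

class ResourceCacheSketch(object):
    """Toy stand-in for the Slave's ResourceCache keyed by digest."""
    def __init__(self):
        self._entries = {}

    def has_resource(self, digest):
        return digest in self._entries

    def add(self, digest, res_name):
        self._entries[digest] = res_name
```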
     def __str__(self):
         return "[Machine hostname(%s) libvirt_domain(%s) interfaces(%d)]" % \
                (self._hostname, self._libvirt_domain, len(self._interfaces))
-    def add_netns(self, netns):
-        self._namespaces.append(netns)
-        return self._rpc_call("add_namespace", netns)
+    # def add_netns(self, netns):
        # self._namespaces.append(netns)
        # return self._rpc_call("add_namespace", netns)
-    def del_netns(self, netns):
-        return self._rpc_call("del_namespace", netns)
-
-    def del_namespaces(self):
-        for netns in self._namespaces:
-            self.del_netns(netns)
-        self._namespaces = []
-        return True
+    # def del_netns(self, netns):
        # return self._rpc_call("del_namespace", netns)
-    def wait_interface_init(self):
-        return self._rpc_call("wait_interface_init")
+    # def del_namespaces(self):
        # for netns in self._namespaces:
            # self.del_netns(netns)
        # self._namespaces = []
        # return True
def get_security(self): return self._security - -class Interface(object): - """ Abstraction of a test network interface on a slave machine - - This is a base class for object that represent test interfaces - on a test machine. - """ - def __init__(self, machine, if_id, if_type): - self._machine = machine - self._configured = False - - self._id = if_id - self._type = if_type - - self._hwaddr = None - self._devname = None - self._network = None - self._netem = None - - self._slaves = {} - self._slave_options = {} - self._addresses = [] - self._options = [] - - self._master = {"primary": None, "other": []} - - self._ovs_conf = None - - self._netns = None - self._peer = None - self._mtu = None - self._driver = None - self._devlink = None - - def get_id(self): - return self._id - - def get_type(self): - return self._type - - def get_driver(self): - return self._driver - - def set_hwaddr(self, hwaddr): - self._hwaddr = normalize_hwaddr(hwaddr) - - def get_hwaddr(self): - if not self._hwaddr: - msg = "Hardware address is not available for interface '%s'" \ - % self.get_id() - raise MachineError(msg) - return self._hwaddr - - def set_devname(self, devname): - self._devname = devname - - def get_devname(self): - if not self._devname: - msg = "Device name is not available for interface '%s'" \ - % self.get_id() - raise MachineError(msg) - return self._devname - - def set_network(self, network): - self._network = network - - def get_network(self): - if not self._network: - msg = "Network segment is not available for interface '%s'" \ - % self.get_id() - raise MachineError(msg) - return self._network - - def set_option(self, name, value): - self._options.append((name, value)) - - def set_netem(self, netem): - self._netem = netem - - def add_master(self, master, primary=True): - if primary and self._master["primary"] != None: - msg = "Interface %s already has a primary master."\ - % self.get_id() - raise MachineError(msg) - else: - if primary: - self._master["primary"] = master 
- else: - self._master["other"].append(master) - - def del_master(self, master): - if self._master["primary"] is master: - self._master["primary"] = None - else: - self._master["other"].remove(master) - - def get_primary_master(self): - return self._master["primary"] - - def add_slave(self, iface): - self._slaves[iface.get_id()] = iface - if self._type in ["vlan", "vxlan"]: - iface.add_master(self, primary=False) - else: - iface.add_master(self) - - def del_slave(self, iface): - iface.del_master(self) - del self._slaves[iface.get_id()] - - def set_slave_option(self, slave_id, name, value): - if slave_id not in self._slave_options: - self._slave_options[slave_id] = [] - self._slave_options[slave_id].append((name, value)) - - def add_address(self, addr): - if (type(addr) == type([])): - for one_addr in addr: - self._addresses.append(one_addr) - else: - self._addresses.append(addr) - - def get_address(self, num): - return self._addresses[num].split('/')[0] - - def get_addresses(self): - addrs = [] - for addr in self._addresses: - addrs.append(tuple(addr.split('/'))) - return addrs - - def set_ovs_conf(self, ovs_conf): - self._ovs_conf = ovs_conf - - def get_ovs_conf(self): - return self._ovs_conf - - def set_netns(self, netns): - self._netns = netns - - def get_netns(self): - return self._netns - - def get_host(self): - return self._machine.get_id() - - def set_peer(self, peer): - self._peer = peer - - def get_peer(self): - return self._peer - - def get_prefix(self, num): - try: - return self._addresses[num].split('/')[1] - except IndexError: - raise PrefixMissingError - - def get_mtu(self): - return self._mtu - - def set_mtu(self, mtu): - command = {"type": "config", - "host": self._machine.get_id(), - "persistent": False, - "options":[ - {"name": "/sys/class/net/%s/mtu" % self._devname, - "value": str(mtu)} - ]} - command["netns"] = self._netns - - self._machine.run_command(command) - self._mtu = mtu - return self._mtu - - def link_stats(self): - stats = 
self._machine._rpc_call_x(self._netns, "link_stats", - self._id) - return stats - - def set_addresses(self, ips): - self._addresses = ips - self._machine._rpc_call_x(self._netns, "set_addresses", - self._id, ips) - - def add_route(self, dest): - self._machine._rpc_call_x(self._netns, "add_route", - self._id, dest) - - def del_route(self, dest): - self._machine._rpc_call_x(self._netns, "del_route", - self._id, dest) - - def update_from_slave(self): - if_data = self._machine._rpc_call_x(self._netns, "get_if_data", - self._id) - - if if_data is not None: - self.update(if_data) - return - - def update(self, if_data): - self.set_hwaddr(if_data["hwaddr"]) - self.set_devname(if_data["name"]) - self._mtu = if_data["mtu"] - self._driver = if_data["driver"] - self._devlink = if_data["devlink"] - - def get_config(self): - config = {"id": self._id, - "hwaddr": self._hwaddr, - "devname": self._devname, - "network_label": self._network, - "type": self._type, - "addresses": self._addresses, - "slaves": self._slaves.keys(), - "options": self._options, - "slave_options": self._slave_options, - "master": None, - "other_masters": [], - "ovs_conf": self._ovs_conf, - "netns": self._netns, - "peer": self._peer, - "netem": self._netem, - "mtu": self._mtu, - "driver": self._driver} - - if self._master["primary"] != None: - config["master"] = self._master["primary"].get_id() - - for m in self._master["other"]: - config["other_masters"].append(m.get_id()) - - return config - - def up(self): - self._machine._rpc_call_x(self._netns, "set_device_up", self._id) - - def down(self): - self._machine._rpc_call_x(self._netns, "set_device_down", self._id) - - def set_link_up(self): - self._machine._rpc_call_x(self._netns, "set_link_up", self._id) - - def set_link_down(self): - self._machine._rpc_call_x(self._netns, "set_link_down", self._id) - - def initialize(self): - phys_devs = self._machine._rpc_call("map_if_by_hwaddr", - self._id, self._hwaddr) - - if len(phys_devs) == 1: - 
self.set_devname(phys_devs[0]["name"]) - elif len(phys_devs) < 1: - msg = "Device %s not found on machine %s" \ - % (self.get_id(), self._machine.get_id()) - raise MachineError(msg) - elif len(phys_devs) > 1: - msg = "More than one device with hwaddr %s found on machine %s" \ - % (self._hwaddr, self._machine.get_id()) - raise MachineError(msg) - - self.down() - - def cleanup(self): - self._machine._rpc_call("unmap_if", self._id) - - def configure(self): - if self._configured: - msg = "Unable to configure interface %s on machine %s. " \ - "It has been configured already." % (self.get_id(), - self._machine.get_id()) - raise MachineError(msg) - else: - self._configured = True - - logging.info("Configuring interface %s on machine %s", self.get_id(), - self._machine.get_id()) - - if self._netns != None: - self._machine._rpc_call("set_if_netns", self.get_id(), self._netns) - self._machine._rpc_call_x(self._netns, "configure_interface", - self.get_id(), self.get_config()) - - self.update_from_slave() - - def deconfigure(self): - if not self._configured: - return - - self._machine._rpc_call_x(self._netns, "deconfigure_interface", - self.get_id()) - if self._netns != None: - self._machine._rpc_call_to_netns(self._netns, - "return_if_netns", self.get_id()) - self._configured = False - - def add_br_vlan(self, br_vlan_info): - self._machine._rpc_call_x(self._netns, "add_br_vlan", - self._id, br_vlan_info) - - def del_br_vlan(self, br_vlan_info): - self._machine._rpc_call_x(self._netns, "del_br_vlan", - self._id, br_vlan_info) - - def get_br_vlans(self): - return self._machine._rpc_call_x(self._netns, "get_br_vlans", self._id) - - def add_br_fdb(self, br_fdb_info): - self._machine._rpc_call_x(self._netns, "add_br_fdb", - self._id, br_fdb_info) - - def del_br_fdb(self, br_fdb_info): - self._machine._rpc_call_x(self._netns, "del_br_fdb", - self._id, br_fdb_info) - - def get_br_fdbs(self): - return self._machine._rpc_call_x(self._netns, "get_br_fdbs", self._id) - - def 
set_br_learning(self, br_learning_info): - self._machine._rpc_call_x(self._netns, "set_br_learning", self._id, - br_learning_info) - - def set_br_learning_sync(self, br_learning_sync_info): - self._machine._rpc_call_x(self._netns, "set_br_learning_sync", self._id, - br_learning_sync_info) - - def set_br_flooding(self, br_flooding_info): - self._machine._rpc_call_x(self._netns, "set_br_flooding", self._id, - br_flooding_info) - - def set_br_state(self, br_state_info): - self._machine._rpc_call_x(self._netns, "set_br_state", self._id, - br_state_info) - - def set_speed(self, speed): - self._machine._rpc_call_x(self._netns, "set_speed", self._id, speed) - - def set_autoneg(self): - self._machine._rpc_call_x(self._netns, "set_autoneg", self._id) - - def slave_add(self, if_id): - self._machine._rpc_call_x(self._netns, "slave_add", self._id, if_id) - self.add_slave(self._machine.get_interface(if_id)) - - def slave_del(self, if_id): - self.del_slave(self._machine.get_interface(if_id)) - self._machine._rpc_call_x(self._netns, "slave_del", self._id, if_id) - - def get_devlink_name(self): - if self._devlink: - return "%s/%s" % (self._devlink["bus_name"], - self._devlink["dev_name"]) - return None - - def get_devlink_port_name(self): - if self._devlink: - return "%s/%u" % (self.get_devlink_name(), - self._devlink["port_index"]) - return None - -class StaticInterface(Interface): - """ Static interface - - This class represents interfaces that are present on the - machine. LNST will only use them for testing without performing - any special actions. - - This type is suitable for physical interfaces. - """ - def __init__(self, machine, if_id, if_type): - super(StaticInterface, self).__init__(machine, if_id, if_type) - -class LoopbackInterface(Interface): - """ Static interface - - This class represents interfaces that are present on the - machine. LNST will only use them for testing without performing - any special actions. - - This type is suitable for physical interfaces. 
- """ - def __init__(self, machine, if_id, if_type): - super(LoopbackInterface, self).__init__(machine, if_id, if_type) - - def initialize(self): - pass - - def cleanup(self): - pass - - def configure(self): - self._hwaddr = '00:00:00:00:00:00' - self._driver = 'loopback' - - phys_devs = self._machine._rpc_call_x(self._netns, - "map_if_by_params", self._id, - { 'hwaddr': self._hwaddr, - 'driver': self._driver }) - - if len(phys_devs) == 1: - self.set_devname(phys_devs[0]["name"]) - elif len(phys_devs) < 1: - msg = "Device %s not found on machine %s" \ - % (self.get_id(), self._machine.get_id()) - raise MachineError(msg) - elif len(phys_devs) > 1: - msg = "More than one device with hwaddr %s found on machine %s" \ - % (self._hwaddr, self._machine.get_id()) - raise MachineError(msg) - - if self._configured: - msg = "Unable to configure interface %s on machine %s. " \ - "It has been configured already." % (self.get_id(), - self._machine.get_id()) - raise MachineError(msg) - - logging.info("Configuring interface %s on machine %s", self.get_id(), - self._machine.get_id()) - - self._machine._rpc_call_x(self._netns, "configure_interface", - self.get_id(), self.get_config()) - self._configured = True - self.update_from_slave() - - def deconfigure(self): - if not self._configured: - return - - self._machine._rpc_call_x(self._netns, "deconfigure_interface", - self.get_id()) - self._machine._rpc_call_x(self._netns, "unmap_if", self.get_id()) - self._configured = False - -class VirtualInterface(Interface): - """ Dynamically created interface - - This class represents interfaces in libvirt virtual machines - that were created dynamically by LNST just for this test. - - This requires some special handling and communication with - libvirt. 
- """ - def __init__(self, machine, if_id, if_type): - super(VirtualInterface, self).__init__(machine, if_id, if_type) - self._driver = "virtio" - - def set_driver(self, driver): - self._driver = driver - - def get_driver(self): - return self._driver - - def get_orig_hwaddr(self): - if not self._orig_hwaddr: - msg = "Hardware address is not available for interface '%s'" \ - % self.get_id() - raise MachineError(msg) - return self._orig_hwaddr - - def initialize(self): - domain_ctl = self._machine.get_domain_ctl() - - if self._hwaddr: - query = self._machine._rpc_call('get_devices_by_hwaddr', - self._hwaddr) - if len(query): - msg = "Device with hwaddr %s already exists" % self._hwaddr - raise MachineError(msg) - else: - mac_pool = self._machine.get_mac_pool() - while True: - self._hwaddr = normalize_hwaddr(mac_pool.get_addr()) - query = self._machine._rpc_call('get_devices_by_hwaddr', - self._hwaddr) - if not len(query): - break - - bridges = self._machine.get_network_bridges() - if self._network in bridges: - net_ctl = bridges[self._network] - else: - bridges[self._network] = net_ctl = VirtNetCtl() - net_ctl.init() - - net_name = net_ctl.get_name() - - logging.info("Creating interface %s (%s) on machine %s", - self.get_id(), self._hwaddr, self._machine.get_id()) - - self._orig_hwaddr = self._hwaddr - domain_ctl.attach_interface(self._hwaddr, net_name, self._driver) - - - # The sleep here is necessary, because udev sometimes renames the - # newly created device and if the query for name comes too early, - # the controller will then try to configure an nonexistent device - sleep(1) - - ready = wait_for(self.is_ready, timeout=10) - - if not ready: - msg = "Netdevice initialization failed." 
\ - "Unable to create device %s (%s) on machine %s" \ - % (self.get_id(), self._hwaddr, self._machine.get_id()) - raise MachineError(msg) - - super(VirtualInterface, self).initialize() - - def cleanup(self): - self._machine._rpc_call("unmap_if", self._id) - domain_ctl = self._machine.get_domain_ctl() - domain_ctl.detach_interface(self._orig_hwaddr) - - def is_ready(self): - ifaces = self._machine._rpc_call('get_devices_by_hwaddr', self._hwaddr) - return len(ifaces) > 0 - -class SoftInterface(Interface): - """ Software interface abstraction - - This type of interface represents interfaces created in the kernel - during the runtime. This includes devices such as bonds and teams. - """ - - def __init__(self, machine, if_id, if_type): - super(SoftInterface, self).__init__(machine, if_id, if_type) - - def initialize(self): - pass - - def cleanup(self): - pass - - def configure(self): - if self._configured: - return - else: - self._configured = True - - logging.info("Configuring interface %s on machine %s", self.get_id(), - self._machine.get_id()) - - if self._type == "veth": - peer_if = self._machine.get_interface(self._peer) - peer_config = peer_if.get_config() - dev_name, peer_name = self._machine._rpc_call("create_if_pair", - self._id, self.get_config(), - self._peer, peer_config) - self.set_devname(dev_name) - peer_if.set_devname(peer_name) - self._configured = True - peer_if._configured = True - return - - dev_name = self._machine._rpc_call_x(self._netns, - "create_soft_interface", - self._id, self.get_config()) - self.set_devname(dev_name) - self.update_from_slave() - - def deconfigure(self): - if not self._configured: - return - - if self._type == "veth": - peer_if = self._machine.get_interface(self._peer) - - self._machine._rpc_call("deconfigure_if_pair", self._id, self._peer) - self._machine._rpc_call("unmap_if", self._id) - self._machine._rpc_call("unmap_if", self._peer) - - self._configured = False - peer_if._configured = False - return - - 
self._machine._rpc_call_x(self._netns, "deconfigure_interface", - self.get_id()) - self._machine._rpc_call_x(self._netns, "unmap_if", self.get_id()) - self._configured = False - -class UnusedInterface(Interface): - """ Unused interface for this test - - This class represents interfaces that will not be used in the - current test setup. This applies when a slave machine from a - pool has more interfaces then the machine it was matched to - from the recipe. - - LNST still needs to know about these interfaces so it can turn - them off. - """ - - def __init__(self, machine, if_id, if_type): - super(UnusedInterface, self).__init__(machine, if_id, if_type) - - def initialize(self): - self._machine._rpc_call('set_unmapped_device_down', self._hwaddr) - - def set_driver(self, driver): - pass - - def configure(self): - pass - - def deconfigure(self): - pass - - def up(self): - pass - - def down(self): - pass - - def cleanup(self): - pass - -class Device(object): - """ Represents device information received from a Slave""" - - def pre_call_decorate(func): - @wraps(func) - def func_wrapper(inst, *args, **kwargs): - inst.slave_update() - return func(inst, *args, **kwargs) - return func_wrapper - - def __init__(self, data, machine): - self._if_index = data["if_index"] - self._hwaddr = None - self._name = None - self._ip_addrs = None - self._ifi_type = None - self._state = None - self._master = None - self._slaves = None - self._netns = None - self._peer = None - self._mtu = None - self._driver = None - self._devlink = None - - self._machine = machine - - self.update_data(data) - - def update_data(self, data): - if data["if_index"] != self._if_index: - return False - - self._hwaddr = data["hwaddr"] - self._name = data["name"] - self._ip_addrs = data["ip_addrs"] - self._ifi_type = data["ifi_type"] - self._state = data["state"] - self._master = data["master"] - self._slaves = data["slaves"] - self._netns = data["netns"] - self._peer = data["peer"] - self._mtu = data["mtu"] - 
self._driver = data["driver"] - self._devlink = data["driver"] - return True - - def slave_update(self): - res = self._machine._rpc_call_x(self._netns, - "get_device", - self._if_index) - if res: - self.update_data(res) - return - - def get_if_index(self): - return self._if_index - - @pre_call_decorate - def get_hwaddr(self): - return self._hwaddr - - @pre_call_decorate - def get_name(self): - return self._name - - @pre_call_decorate - def get_ip_addrs(self, selector={}): - return [ip["addr"] - for ip in self._ip_addrs - if selector.items() <= ip.items()] - - @pre_call_decorate - def get_ip_addr(self, num, selector={}): - ips = self.get_ip_addrs(selector) - return ips[num] - - @pre_call_decorate - def get_ifi_type(self): - return self._ifi_type - - @pre_call_decorate - def get_state(self): - return self._state - - @pre_call_decorate - def get_master(self): - return self._master - - @pre_call_decorate - def get_slaves(self): - return self._slaves - - @pre_call_decorate - def get_netns(self): - return self._netns - - @pre_call_decorate - def get_peer(self): - return self._peer - - @pre_call_decorate - def get_mtu(self): - return self._mtu - - def set_mtu(self, mtu): - command = {"type": "config", - "host": self._machine.get_id(), - "persistent": False, - "options":[ - {"name": "/sys/class/net/%s/mtu" % self._name, - "value": str(mtu)} - ]} - command["netns"] = self._netns - - self._machine.run_command(command) - - self.slave_update() - return self._mtu - - @pre_call_decorate - def get_driver(self): - return self._driver - - @pre_call_decorate - def get_devlink_name(self): - if self._devlink: - return "%s/%s" % (self._devlink["bus_name"], - self._devlink["dev_name"]) - return None - - @pre_call_decorate - def get_devlink_port_name(self): - if self._devlink: - return "%s/%u" % (self.get_devlink_name(), - self._devlink["port_index"]) - return None diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index b18f47d..0f654e3 100644 --- 
a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -19,8 +19,10 @@ import datetime import socket import ctypes import multiprocessing +import imp +import types from time import sleep, time -from xmlrpclib import Binary +from inspect import isclass from tempfile import NamedTemporaryFile from lnst.Common.Logs import log_exc_traceback from lnst.Common.PacketCapture import PacketCapture @@ -34,118 +36,150 @@ from lnst.Common.Utils import check_process_running from lnst.Common.Utils import is_installed from lnst.Common.ConnectionHandler import send_data from lnst.Common.ConnectionHandler import ConnectionHandler -from lnst.Common.Config import lnst_config from lnst.Common.Config import DefaultRPCPort +from lnst.Common.DeviceRef import DeviceRef +from lnst.Common.LnstError import LnstError +from lnst.Common.DeviceError import DeviceDeleted +from lnst.Common.IpAddress import IpAddress +from lnst.Slave.Job import Job, JobContext from lnst.Slave.InterfaceManager import InterfaceManager from lnst.Slave.BridgeTool import BridgeTool from lnst.Slave.SlaveSecSocket import SlaveSecSocket, SecSocketException
+Devices = types.ModuleType("Devices") +Devices.__path__ = ["lnst.Devices"] + +sys.modules["lnst.Devices"] = Devices + class SlaveMethods: ''' Exported xmlrpc methods ''' - def __init__(self, command_context, log_ctl, if_manager, net_namespaces, - server_handler, slave_server): + def __init__(self, job_context, log_ctl, net_namespaces, + server_handler, slave_config, slave_server): self._packet_captures = {} - self._if_manager = if_manager - self._command_context = command_context + self._if_manager = None + self._job_context = job_context self._log_ctl = log_ctl self._net_namespaces = net_namespaces self._server_handler = server_handler self._slave_server = slave_server + self._slave_config = slave_config
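The hunk above creates an empty `lnst.Devices` package at runtime with `types.ModuleType`, so that device classes shipped from the controller can be attached to it later. The same trick in isolation (the package name below is a made-up example):

```python
import sys
import types

# Create an empty module object and register it under a dotted name,
# so later imports of that name resolve via sys.modules.
fake_pkg = types.ModuleType("fake_pkg")
fake_pkg.__path__ = []  # a __path__ attribute marks it as a package
sys.modules["fake_pkg"] = fake_pkg

# Attach a dynamically received class to the package.
class Example:
    pass

setattr(fake_pkg, "Example", Example)

import fake_pkg as imported  # resolved from sys.modules, no file needed
print(imported.Example is Example)  # True
```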
self._capture_files = {} self._copy_targets = {} self._copy_sources = {} self._system_config = {}
- self._cache = ResourceCache(lnst_config.get_option("cache", "dir"), - lnst_config.get_option("cache", "expiration_period")) + self._cache = ResourceCache(slave_config.get_option("cache", "dir"), + slave_config.get_option("cache", "expiration_period")) + + self._dynamic_modules = {} + self._dynamic_classes = {} + + self._bkp_nm_opt_val = slave_config.get_option("environment", "use_nm") + + def hello(self): + logging.info("Recieved a controller connection.") + + slave_desc = {} + if check_process_running("NetworkManager"): + slave_desc["nm_running"] = True + else: + slave_desc["nm_running"] = False
- self._resource_table = {'module': {}, 'tools': {}} + k_release, _ = exec_cmd("uname -r", False, False, False) + r_release, _ = exec_cmd("cat /etc/redhat-release", False, False, False) + slave_desc["kernel_release"] = k_release.strip() + slave_desc["redhat_release"] = r_release.strip() + slave_desc["lnst_version"] = self._slave_config.version
- self._bkp_nm_opt_val = lnst_config.get_option("environment", "use_nm") + return ("hello", slave_desc)
- def hello(self, recipe_path): + def set_recipe(self, recipe_name): self.machine_cleanup() self.restore_nm_option()
- logging.info("Recieved a controller connection.") - self.clear_resource_table() self._cache.del_old_entries() self.reset_file_transfers()
- self._if_manager.rescan_devices() - date = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S") - self._log_ctl.set_recipe(recipe_path, expand=date) + self._log_ctl.set_recipe(recipe_name, expand=date) sleep(1)
- slave_desc = {} if check_process_running("NetworkManager"): logging.warning("=============================================") logging.warning("NetworkManager is running on a slave machine!") - if lnst_config.get_option("environment", "use_nm"): + if self._slave_config.get_option("environment", "use_nm"): logging.warning("Support of NM is still experimental!") else: logging.warning("Usage of NM is disabled!") logging.warning("=============================================") - slave_desc["nm_running"] = True - else: - slave_desc["nm_running"] = False
- k_release, _ = exec_cmd("uname -r", False, False, False) - r_release, _ = exec_cmd("cat /etc/redhat-release", False, False, False) - slave_desc["kernel_release"] = k_release.strip() - slave_desc["redhat_release"] = r_release.strip() - slave_desc["lnst_version"] = lnst_config.version - - return ("hello", slave_desc) + return True
def bye(self): self.restore_system_config() - self.clear_resource_table() self._cache.del_old_entries() self.reset_file_transfers() self._remove_capture_files() return "bye"
- def kill_cmds(self): - logging.info("Killing all forked processes.") - self._command_context.cleanup() - return "Commands killed" + def map_device_class(self, cls_name, module_name): + if cls_name in self._dynamic_classes: + return
- def map_if_by_hwaddr(self, if_id, hwaddr): - devices = self.map_if_by_params(if_id, {'hwaddr' : hwaddr}) + module = self._dynamic_modules[module_name] + cls = getattr(module, cls_name)
- return devices + self._dynamic_classes[cls_name] = cls
- def map_if_by_params(self, if_id, params): - devices = self.get_devices_by_params(params) + setattr(Devices, cls_name, cls)
- if len(devices) == 1: - dev = self._if_manager.get_device_by_params(params) - self._if_manager.map_if(if_id, dev.get_if_index()) + def load_cached_module(self, module_name, res_hash): + self._cache.renew_entry(res_hash) + if module_name in self._dynamic_modules: + return + module_path = self._cache.get_path(res_hash) + module = imp.load_source(module_name, module_path) + self._dynamic_modules[module_name] = module
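`load_cached_module` above uses Python 2's `imp.load_source` to execute a cached file as a module. A sketch of the equivalent under Python 3's `importlib` (the module name and contents are throwaway examples):

```python
import importlib.util
import os
import sys
import tempfile

# Write a throwaway module to disk, standing in for a cache entry.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("ANSWER = 42\n")

# importlib equivalent of the patch's imp.load_source(name, path):
spec = importlib.util.spec_from_file_location("cached_example", path)
mod = importlib.util.module_from_spec(spec)
sys.modules["cached_example"] = mod
spec.loader.exec_module(mod)

print(mod.ANSWER)  # 42
os.unlink(path)
```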
- return devices + def init_if_manager(self): + self._if_manager = InterfaceManager(self._server_handler) + for cls_name in dir(Devices): + cls = getattr(Devices, cls_name) + if isclass(cls): + self._if_manager.add_device_class(cls_name, cls)
- def unmap_if(self, if_id): - self._if_manager.unmap_if(if_id) + self._if_manager.rescan_devices() + self._server_handler.set_if_manager(self._if_manager) + self._server_handler.add_connection('netlink', + self._if_manager.get_nl_socket()) return True
+ def dev_method(self, if_index, name, args, kwargs): + dev = self._if_manager.get_device(if_index) + method = getattr(dev, name) + + return method(*args, **kwargs) + + def dev_attr(self, if_index, name): + dev = self._if_manager.get_device(if_index) + return getattr(dev, name) + def get_devices(self): self._if_manager.rescan_devices() devices = self._if_manager.get_devices() result = {} for device in devices: - result[device._if_index] = device.get_if_data() + result[device.if_index] = device._get_if_data() return result
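The new `dev_method`/`dev_attr` RPC endpoints forward arbitrary method calls and attribute reads to a device looked up by its interface index. A minimal proxy sketch of that dispatch (the manager and device classes here are hypothetical stand-ins):

```python
class FakeDevice:
    def __init__(self, name):
        self.name = name

    def rename(self, new_name):
        self.name = new_name
        return self.name

class FakeManager:
    def __init__(self):
        self._devices = {}

    def add(self, if_index, dev):
        self._devices[if_index] = dev

    def get_device(self, if_index):
        return self._devices[if_index]

# Generic dispatch, mirroring SlaveMethods.dev_method / dev_attr:
def dev_method(mgr, if_index, name, args=(), kwargs=None):
    dev = mgr.get_device(if_index)
    return getattr(dev, name)(*args, **(kwargs or {}))

def dev_attr(mgr, if_index, name):
    return getattr(mgr.get_device(if_index), name)

mgr = FakeManager()
mgr.add(2, FakeDevice("eth0"))
print(dev_method(mgr, 2, "rename", ("eth1",)))  # eth1
print(dev_attr(mgr, 2, "name"))                 # eth1
```

This is what lets the controller-side `Device` proxies stay thin: every call crosses the RPC boundary as `(if_index, method name, args, kwargs)` rather than needing a dedicated endpoint per operation.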
def get_device(self, if_index): self._if_manager.rescan_devices() device = self._if_manager.get_device(if_index) if device: - return device.get_if_data() + return device._get_if_data() else: return None
@@ -156,7 +190,6 @@ class SlaveMethods: for entry in name_scan: if entry["name"] == devname: netdevs.append(entry) - return netdevs
def get_devices_by_hwaddr(self, hwaddr): @@ -167,14 +200,13 @@ class SlaveMethods: entry = {"name": dev.get_name(), "hwaddr": dev.get_hwaddr()} matched.append(entry) - return matched
def get_devices_by_params(self, params): devices = self._if_manager.get_devices() matched = [] for dev in devices: - dev_data = dev.get_if_data() + dev_data = dev._get_if_data() entry = {"name": dev.get_name(), "hwaddr": dev.get_hwaddr()} for key, value in params.iteritems(): @@ -184,180 +216,121 @@ class SlaveMethods:
if entry is not None: matched.append(entry) - return matched
- def get_if_data(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is None: - return None - return dev.get_if_data() - - def link_stats(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is None: - logging.error("Device with id '%s' not found." % if_id) - return None - return dev.link_stats() - - def set_addresses(self, if_id, ips): - dev = self._if_manager.get_mapped_device(if_id) - if dev is None: - logging.error("Device with id '%s' not found." % if_id) - return False - dev.set_addresses(ips) - return True - - def add_route(self, if_id, dest): - dev = self._if_manager.get_mapped_device(if_id) - if dev is None: - logging.error("Device with id '%s' not found." % if_id) - return False - dev.add_route(dest) - return True - - def del_route(self, if_id, dest): - dev = self._if_manager.get_mapped_device(if_id) + def destroy_devices(self): + devices = self._if_manager.get_devices() + for dev in devices: + try: + dev.destroy() + except DeviceDeleted: + pass + self._if_manager.rescan_devices() + + # def add_route(self, if_id, dest): + # dev = self._if_manager.get_mapped_device(if_id) + # if dev is None: + # logging.error("Device with id '%s' not found." % if_id) + # return False + # dev.add_route(dest) + # return True + + # def del_route(self, if_id, dest): + # dev = self._if_manager.get_mapped_device(if_id) + # if dev is None: + # logging.error("Device with id '%s' not found." % if_id) + # return False + # dev.del_route(dest) + # return True + + def create_device(self, clsname, args=[], kwargs={}): + dev = self._if_manager.create_device(clsname, args, kwargs) if dev is None: - logging.error("Device with id '%s' not found." 
% if_id) - return False - dev.del_route(dest) - return True - - def set_device_up(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - dev.up() - return True - - def set_device_down(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.down() - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def set_link_up(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.link_up() - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def set_link_down(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.link_down() - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def set_unmapped_device_down(self, hwaddr): - dev = self._if_manager.get_device_by_hwaddr(hwaddr) - if dev is not None: - dev.down() - else: - logging.warning("Device with hwaddr '%s' not found." 
% hwaddr) - return True - - def configure_interface(self, if_id, config): - device = self._if_manager.get_mapped_device(if_id) - device.set_configuration(config) - device.configure() - return True - - def create_soft_interface(self, if_id, config): - dev_name = self._if_manager.create_device_from_config(if_id, config) - dev = self._if_manager.get_mapped_device(if_id) - dev.configure() - return dev_name - - def create_if_pair(self, if_id1, config1, if_id2, config2): - dev_names = self._if_manager.create_device_pair(if_id1, config1, - if_id2, config2) - dev1 = self._if_manager.get_mapped_device(if_id1) - dev2 = self._if_manager.get_mapped_device(if_id2) - - while dev1.get_if_index() == None and dev2.get_if_index() == None: - msgs = self._server_handler.get_messages_from_con('netlink') - for msg in msgs: - self._if_manager.handle_netlink_msgs(msg[1]["data"]) - - if config1["netns"] != None: - hwaddr = dev1.get_hwaddr() - self.set_if_netns(if_id1, config1["netns"]) - - msg = {"type": "command", "method_name": "configure_interface", - "args": [if_id1, config1]} - self._server_handler.send_data_to_netns(config1["netns"], msg) - result = self._slave_server.wait_for_result(config1["netns"]) - if result["result"] != True: - raise Exception("Configuration failed.") - else: - dev1.configure() - if config2["netns"] != None: - hwaddr = dev2.get_hwaddr() - self.set_if_netns(if_id2, config2["netns"]) - - msg = {"type": "command", "method_name": "configure_interface", - "args": [if_id2, config2]} - self._server_handler.send_data_to_netns(config2["netns"], msg) - result = self._slave_server.wait_for_result(config2["netns"]) - if result["result"] != True: - raise Exception("Configuration failed.") - else: - dev2.configure() - return dev_names - - def deconfigure_if_pair(self, if_id1, if_id2): - dev1 = self._if_manager.get_mapped_device(if_id1) - dev2 = self._if_manager.get_mapped_device(if_id2) - - if dev1.get_netns() == None: - dev1.deconfigure() - else: - netns = dev1.get_netns() 
- - msg = {"type": "command", "method_name": "deconfigure_interface", - "args": [if_id1]} - self._server_handler.send_data_to_netns(netns, msg) - result = self._slave_server.wait_for_result(netns) - if result["result"] != True: - raise Exception("Deconfiguration failed.") - - self.return_if_netns(if_id1) - - if dev2.get_netns() == None: - dev2.deconfigure() - else: - netns = dev2.get_netns() - - msg = {"type": "command", "method_name": "deconfigure_interface", - "args": [if_id2]} - self._server_handler.send_data_to_netns(netns, msg) - result = self._slave_server.wait_for_result(netns) - if result["result"] != True: - raise Exception("Deconfiguration failed.") - - self.return_if_netns(if_id2) - - dev1.destroy() - dev2.destroy() - dev1.del_configuration() - dev2.del_configuration() - return True - - def deconfigure_interface(self, if_id): - device = self._if_manager.get_mapped_device(if_id) - if device is not None: - device.clear_configuration() - else: - logging.error("No device with id '%s' to deconfigure." 
% if_id) - return True + raise Exception("Device creation failed") + return {"if_index": dev.if_index, "name": dev.name} + + # def create_if_pair(self, if_id1, config1, if_id2, config2): + # dev_names = self._if_manager.create_device_pair(if_id1, config1, + # if_id2, config2) + # dev1 = self._if_manager.get_mapped_device(if_id1) + # dev2 = self._if_manager.get_mapped_device(if_id2) + + # while dev1.get_if_index() == None and dev2.get_if_index() == None: + # msgs = self._server_handler.get_messages_from_con('netlink') + # for msg in msgs: + # self._if_manager.handle_netlink_msgs(msg[1]["data"]) + + # if config1["netns"] != None: + # hwaddr = dev1.get_hwaddr() + # self.set_if_netns(if_id1, config1["netns"]) + + # msg = {"type": "command", "method_name": "configure_interface", + # "args": [if_id1, config1]} + # self._server_handler.send_data_to_netns(config1["netns"], msg) + # result = self._slave_server.wait_for_result(config1["netns"]) + # if result["result"] != True: + # raise Exception("Configuration failed.") + # else: + # dev1.configure() + # if config2["netns"] != None: + # hwaddr = dev2.get_hwaddr() + # self.set_if_netns(if_id2, config2["netns"]) + + # msg = {"type": "command", "method_name": "configure_interface", + # "args": [if_id2, config2]} + # self._server_handler.send_data_to_netns(config2["netns"], msg) + # result = self._slave_server.wait_for_result(config2["netns"]) + # if result["result"] != True: + # raise Exception("Configuration failed.") + # else: + # dev2.configure() + # return dev_names + + # def deconfigure_if_pair(self, if_id1, if_id2): + # dev1 = self._if_manager.get_mapped_device(if_id1) + # dev2 = self._if_manager.get_mapped_device(if_id2) + + # if dev1.get_netns() == None: + # dev1.deconfigure() + # else: + # netns = dev1.get_netns() + + # msg = {"type": "command", "method_name": "deconfigure_interface", + # "args": [if_id1]} + # self._server_handler.send_data_to_netns(netns, msg) + # result = self._slave_server.wait_for_result(netns) + 
# if result["result"] != True: + # raise Exception("Deconfiguration failed.") + + # self.return_if_netns(if_id1) + + # if dev2.get_netns() == None: + # dev2.deconfigure() + # else: + # netns = dev2.get_netns() + + # msg = {"type": "command", "method_name": "deconfigure_interface", + # "args": [if_id2]} + # self._server_handler.send_data_to_netns(netns, msg) + # result = self._slave_server.wait_for_result(netns) + # if result["result"] != True: + # raise Exception("Deconfiguration failed.") + + # self.return_if_netns(if_id2) + + # dev1.destroy() + # dev2.destroy() + # dev1.del_configuration() + # dev2.del_configuration() + # return True + + # def deconfigure_interface(self, if_id): + # device = self._if_manager.get_mapped_device(if_id) + # if device is not None: + # device.clear_configuration() + # else: + # logging.error("No device with id '%s' to deconfigure." % if_id) + # return True
def start_packet_capture(self, filt): if not is_installed("tcpdump"): @@ -448,103 +421,67 @@ class SlaveMethods:
return int(remaining)
- def run_command(self, command): - cmd = NetTestCommand(self._command_context, command, - self._resource_table, self._log_ctl) + def run_job(self, job): + job_instance = Job(job, self._log_ctl) + self._job_context.add_job(job_instance)
- if self._command_context.get_cmd(cmd.get_id()) != None: - prev_cmd = self._command_context.get_cmd(cmd.get_id()) - if not prev_cmd.get_result_sent(): - if cmd.get_id() is None: - raise Exception("Previous foreground command still "\ - "running!") - else: - raise Exception("Different command with id '%s' "\ - "still running!" % cmd.get_id()) - else: - self._command_context.del_cmd(cmd) - self._command_context.add_cmd(cmd) + res = job_instance.run()
- res = cmd.run() - if not cmd.forked(): - self._command_context.del_cmd(cmd) + return res
- if command["type"] == "config": - if res["passed"]: - self._update_system_config(res["res_data"]["options"], - command["persistent"]) - else: - err = "Error occured while setting system "\ - "configuration (%s)" % res["res_data"]["err_msg"] - logging.error(err) + def kill_job(self, job_id, signal): + job = self._job_context.get_job(job_id)
- return res + if job is None: + logging.error("No job %s found" % job_id) + return False
- def kill_command(self, id): - cmd = self._command_context.get_cmd(id) - if cmd is not None: - if not cmd.get_result_sent(): - cmd.kill(None) - result = cmd.get_result() - cmd.set_result_sent() - return result - else: - pass - else: - raise Exception("No command with id '%s'." % id) + return job.kill(signal) + + def kill_jobs(self): + logging.info("Killing all forked processes.") + self._job_context.cleanup() + return "Commands killed"
def machine_cleanup(self): logging.info("Performing machine cleanup.") - self._command_context.cleanup() + self._job_context.cleanup()
self.restore_system_config()
- devs = self._if_manager.get_mapped_devices() - for if_id, dev in devs.iteritems(): - peer = dev.get_peer() - if peer == None: - dev.clear_configuration() - else: - peer_if_index = peer.get_if_index() - peer_id = self._if_manager.get_id_by_if_index(peer_if_index) - self.deconfigure_if_pair(if_id, peer_id) - - self._if_manager.deconfigure_all() + if self._if_manager is not None: + self._if_manager.deconfigure_all()
for netns in self._net_namespaces.keys(): self.del_namespace(netns) self._net_namespaces = {}
- self._if_manager.clear_if_mapping() + for cls_name, cls in self._dynamic_classes.items(): + delattr(Devices, cls_name) + + for module_name, module in self._dynamic_modules.items(): + del sys.modules[module_name] + + self._dynamic_classes = {} + self._dynamic_modules = {} + self._if_manager = None + self._server_handler.set_if_manager(None) self._cache.del_old_entries() self._remove_capture_files() return True
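`machine_cleanup` now also unregisters the dynamically loaded classes and modules, mirroring how they were registered. A self-contained sketch of that teardown (package and class names are invented for the example):

```python
import sys
import types

# Setup: a dynamic package with one registered class, as during a test run.
pkg = types.ModuleType("cleanup_pkg")
pkg.__path__ = []
sys.modules["cleanup_pkg"] = pkg

class Dyn:
    pass

dynamic_classes = {"Dyn": Dyn}
dynamic_modules = {"cleanup_pkg": pkg}
setattr(pkg, "Dyn", Dyn)

# Teardown, mirroring machine_cleanup():
for cls_name in dynamic_classes:
    delattr(pkg, cls_name)
for module_name in dynamic_modules:
    del sys.modules[module_name]
dynamic_classes = {}
dynamic_modules = {}

print(hasattr(pkg, "Dyn"))           # False
print("cleanup_pkg" in sys.modules)  # False
```

Dropping the entries from `sys.modules` is what guarantees the next recipe starts from a clean import state instead of reusing stale class definitions.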
- def clear_resource_table(self): - self._resource_table = {'module': {}, 'tools': {}} - return True - def has_resource(self, res_hash): if self._cache.query(res_hash): return True
return False
- def map_resource(self, res_hash, res_type, res_name): - resource_location = self._cache.get_path(res_hash) - - if not res_type in self._resource_table: - self._resource_table[res_type] = {} - - self._resource_table[res_type][res_name] = resource_location - self._cache.renew_entry(res_hash) - - return True - - def add_resource_to_cache(self, file_hash, local_path, name, - res_hash, res_type): - self._cache.add_cache_entry(file_hash, local_path, name, res_type) - return True + def add_resource_to_cache(self, res_type, local_path, name): + if res_type == "file": + self._cache.add_file_entry(local_path, name) + return True + else: + raise Exception("Unknown resource type")
def start_copy_to(self, filepath=None): if filepath in self._copy_targets: @@ -559,9 +496,9 @@ class SlaveMethods:
return filepath
- def copy_part_to(self, filepath, binary_data): + def copy_part_to(self, filepath, data): if self._copy_targets[filepath]: - self._copy_targets[filepath].write(binary_data.data) + self._copy_targets[filepath].write(data) return True
return False @@ -583,7 +520,7 @@ class SlaveMethods: return True
def copy_part_from(self, filepath, buffsize): - data = Binary(self._copy_sources[filepath].read(buffsize)) + data = self._copy_sources[filepath].read(buffsize) return data
def finish_copy_from(self, filepath): @@ -607,26 +544,27 @@ class SlaveMethods: logging.warning("====================================================") logging.warning("Enabling use of NetworkManager on controller request") logging.warning("====================================================") - val = lnst_config.get_option("environment", "use_nm") - lnst_config.set_option("environment", "use_nm", True) + val = self._slave_config.get_option("environment", "use_nm") + self._slave_config.set_option("environment", "use_nm", True) return val
def disable_nm(self): logging.warning("=====================================================") logging.warning("Disabling use of NetworkManager on controller request") logging.warning("=====================================================") - val = lnst_config.get_option("environment", "use_nm") - lnst_config.set_option("environment", "use_nm", False) + val = self._slave_config.get_option("environment", "use_nm") + self._slave_config.set_option("environment", "use_nm", False) return val
def restore_nm_option(self): - val = lnst_config.get_option("environment", "use_nm") + val = self._slave_config.get_option("environment", "use_nm") if val == self._bkp_nm_opt_val: return val logging.warning("=========================================") logging.warning("Restoring use_nm option to original value") logging.warning("=========================================") - lnst_config.set_option("environment", "use_nm", self._bkp_nm_opt_val) + self._slave_config.set_option("environment", "use_nm", + self._bkp_nm_opt_val) return val
def add_namespace(self, netns): @@ -724,40 +662,40 @@ class SlaveMethods: del self._net_namespaces[netns] return True
- def set_if_netns(self, if_id, netns): - netns_pid = self._net_namespaces[netns]["pid"] - - device = self._if_manager.get_mapped_device(if_id) - dev_name = device.get_name() - device.set_netns(netns) - hwaddr = device.get_hwaddr() - - exec_cmd("ip link set %s netns %d" % (dev_name, netns_pid)) - msg = {"type": "command", "method_name": "map_if_by_hwaddr", - "args": [if_id, hwaddr]} - self._server_handler.send_data_to_netns(netns, msg) - result = self._slave_server.wait_for_result(netns) - return result - - def return_if_netns(self, if_id): - device = self._if_manager.get_mapped_device(if_id) - if device.get_netns() == None: - dev_name = device.get_name() - ppid = os.getppid() - exec_cmd("ip link set %s netns %d" % (dev_name, ppid)) - self._if_manager.unmap_if(if_id) - return True - else: - netns = device.get_netns() - msg = {"type": "command", "method_name": "return_if_netns", - "args": [if_id]} - self._server_handler.send_data_to_netns(netns, msg) - result = self._slave_server.wait_for_result(netns) - if result["result"] != True: - raise Exception("Return from netns failed.") - - device.set_netns(None) - return True + # def set_if_netns(self, if_id, netns): + # netns_pid = self._net_namespaces[netns]["pid"] + + # device = self._if_manager.get_mapped_device(if_id) + # dev_name = device.get_name() + # device.set_netns(netns) + # hwaddr = device.get_hwaddr() + + # exec_cmd("ip link set %s netns %d" % (dev_name, netns_pid)) + # msg = {"type": "command", "method_name": "map_if_by_hwaddr", + # "args": [if_id, hwaddr]} + # self._server_handler.send_data_to_netns(netns, msg) + # result = self._slave_server.wait_for_result(netns) + # return result + + # def return_if_netns(self, if_id): + # device = self._if_manager.get_mapped_device(if_id) + # if device.get_netns() == None: + # dev_name = device.get_name() + # ppid = os.getppid() + # exec_cmd("ip link set %s netns %d" % (dev_name, ppid)) + # self._if_manager.unmap_if(if_id) + # return True + # else: + # netns = 
device.get_netns() + # msg = {"type": "command", "method_name": "return_if_netns", + # "args": [if_id]} + # self._server_handler.send_data_to_netns(netns, msg) + # result = self._slave_server.wait_for_result(netns) + # if result["result"] != True: + # raise Exception("Return from netns failed.") + + # device.set_netns(None) + # return True
def add_br_vlan(self, if_id, br_vlan_info): dev = self._if_manager.get_mapped_device(if_id) @@ -847,48 +785,8 @@ class SlaveMethods: brt.set_state(br_state_info) return True
- def set_speed(self, if_id, speed): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.set_speed(speed) - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def set_autoneg(self, if_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.set_autoneg() - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def wait_interface_init(self): - self._if_manager.wait_interface_init() - return True - - def slave_add(self, if_id, slave_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.slave_add(slave_id) - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - - def slave_del(self, if_id, slave_id): - dev = self._if_manager.get_mapped_device(if_id) - if dev is not None: - dev.slave_del(slave_id) - else: - logging.error("Device with id '%s' not found." % if_id) - return False - return True - class ServerHandler(ConnectionHandler): - def __init__(self, addr): + def __init__(self, addr, slave_config): super(ServerHandler, self).__init__() self._netns_con_mapping = {} try: @@ -902,13 +800,34 @@ class ServerHandler(ConnectionHandler):
self._netns = None self._c_socket = None + self._c_dev = None
self._if_manager = None
- self._security = lnst_config.get_section_values("security") + self._security = slave_config.get_section_values("security")
def set_if_manager(self, if_manager): self._if_manager = if_manager + self._update_c_dev() + + def _update_c_dev(self): + if self._c_dev: + self._c_dev.enable() + self._c_dev = None + + if self._if_manager is not None: + ctl_socket = self.get_ctl_sock() + ctl_addr = ctl_socket._socket.getsockname()[0] + matched_dev = None + for dev in self._if_manager.get_devices(): + for ip in dev.ips: + if ip.addr == ctl_addr: + matched_dev = dev + break + if matched_dev: + break + self._c_dev = matched_dev + matched_dev.disable()
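`_update_c_dev` above identifies the device carrying the controller connection by comparing each device's addresses against the control socket's local IP, then disables it so the test cannot reconfigure the control link from under itself. The matching loop in isolation (the `Ip`/`Dev` classes are stand-ins for LNST's objects):

```python
class Ip:
    def __init__(self, addr):
        self.addr = addr

class Dev:
    def __init__(self, name, ips):
        self.name = name
        self.ips = [Ip(a) for a in ips]

def find_ctl_device(devices, ctl_addr):
    """Return the device that owns the control connection's local IP,
    or None if no device matches."""
    for dev in devices:
        for ip in dev.ips:
            if ip.addr == ctl_addr:
                return dev
    return None

devs = [Dev("eth0", ["192.168.1.10"]), Dev("eth1", ["10.0.0.5"])]
print(find_ctl_device(devs, "10.0.0.5").name)  # eth1
print(find_ctl_device(devs, "172.16.0.1"))     # None
```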
def accept_connection(self): self._c_socket, addr = self._s_socket.accept() @@ -932,9 +851,9 @@ class ServerHandler(ConnectionHandler):
def set_ctl_sock(self, sock): if self._c_socket != None: - self._c_socket.close() - self._c_socket = None + self.close_c_sock() self._c_socket = sock + self._update_c_dev() self.add_connection(self._c_socket[1], self._c_socket[0])
def close_s_sock(self): @@ -946,9 +865,14 @@ class ServerHandler(ConnectionHandler): self.remove_connection(self._c_socket[0]) self._c_socket = None
+ if self._c_dev: + self._c_dev.enable() + self._c_dev = None + def check_connections(self): msgs = super(ServerHandler, self).check_connections() - if 'netlink' not in self._connection_mapping: + if 'netlink' not in self._connection_mapping and\ + self._if_manager is not None: self._if_manager.reconnect_netlink() self.add_connection('netlink', self._if_manager.get_nl_socket()) return msgs @@ -970,7 +894,7 @@ class ServerHandler(ConnectionHandler): addr = self._c_socket[1] if self.get_connection(addr) == None: logging.info("Lost controller connection.") - self._c_socket = None + self.close_c_sock() return messages
def get_messages_from_con(self, con_id): @@ -1026,23 +950,72 @@ class ServerHandler(ConnectionHandler): self._connections.remove(con) self._netns_con_mapping = {}
+ +def device_to_deviceref(obj): + try: + Device = Devices.Device + except: + return obj + + if isinstance(obj, Device): + dev_ref = DeviceRef(obj.if_index) + return dev_ref + elif isinstance(obj, dict): + new_dict = {} + for key, value in obj.items(): + new_dict[key] = device_to_deviceref(value) + return new_dict + elif isinstance(obj, list): + new_list = [] + for value in obj: + new_list.append(device_to_deviceref(value)) + return new_list + elif isinstance(obj, tuple): + new_list = [] + for value in obj: + new_list.append(device_to_deviceref(value)) + return tuple(new_list) + else: + return obj + +def deviceref_to_device(if_manager, obj): + if isinstance(obj, DeviceRef): + dev = if_manager.get_device(obj.if_index) + return dev + elif isinstance(obj, dict): + new_dict = {} + for key, value in obj.items(): + new_dict[key] = deviceref_to_device(if_manager, value) + return new_dict + elif isinstance(obj, list): + new_list = [] + for value in obj: + new_list.append(deviceref_to_device(if_manager, value)) + return new_list + elif isinstance(obj, tuple): + new_list = [] + for value in obj: + new_list.append(deviceref_to_device(if_manager, value)) + return tuple(new_list) + else: + return obj + class NetTestSlave: - def __init__(self, log_ctl): + def __init__(self, log_ctl, slave_config): + self._slave_config = slave_config die_when_parent_die()
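`device_to_deviceref` and its inverse walk arbitrarily nested dicts, lists, and tuples, swapping `Device` objects for lightweight, serializable `DeviceRef` markers at the RPC boundary. A standalone sketch of the recursion (both classes below are minimal stand-ins for LNST's versions):

```python
class Device:
    def __init__(self, if_index):
        self.if_index = if_index

class DeviceRef:
    def __init__(self, if_index):
        self.if_index = if_index

def to_ref(obj):
    """Recursively replace Device instances with DeviceRef in any
    combination of dicts, lists and tuples; leave other values as-is."""
    if isinstance(obj, Device):
        return DeviceRef(obj.if_index)
    if isinstance(obj, dict):
        return {k: to_ref(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_ref(v) for v in obj]
    if isinstance(obj, tuple):
        return tuple(to_ref(v) for v in obj)
    return obj

msg = {"result": [Device(3), ("ok", Device(7))], "count": 2}
out = to_ref(msg)
print(out["result"][0].if_index)                   # 3
print(isinstance(out["result"][1][1], DeviceRef))  # True
```

The slave-side `deviceref_to_device` performs the mirror-image walk, resolving each `if_index` back into a live device through the interface manager.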
- self._cmd_context = NetTestCommandContext() - port = lnst_config.get_option("environment", "rpcport") + self._job_context = JobContext() + port = slave_config.get_option("environment", "rpcport") logging.info("Using RPC port %d." % port) - self._server_handler = ServerHandler(("", port)) - self._if_manager = InterfaceManager(self._server_handler) - - self._server_handler.set_if_manager(self._if_manager) + self._server_handler = ServerHandler(("", port), slave_config)
self._net_namespaces = {}
- self._methods = SlaveMethods(self._cmd_context, log_ctl, - self._if_manager, self._net_namespaces, - self._server_handler, self) + self._methods = SlaveMethods(self._job_context, log_ctl, + self._net_namespaces, + self._server_handler, slave_config, + self)
self.register_die_signal(signal.SIGHUP) self.register_die_signal(signal.SIGINT) @@ -1052,9 +1025,6 @@ class NetTestSlave:
self._log_ctl = log_ctl
- self._server_handler.add_connection('netlink', - self._if_manager.get_nl_socket()) - def run(self): while not self._finished: if self._server_handler.get_ctl_sock() == None: @@ -1063,9 +1033,9 @@ class NetTestSlave: logging.info("Waiting for connection.") self._server_handler.accept_connection() except (socket.error, SecSocketException): + log_exc_traceback() continue - self._log_ctl.set_connection( - self._server_handler.get_ctl_sock()) + self._log_ctl.set_connection(self._server_handler.get_ctl_sock())
msgs = self._server_handler.get_messages()
@@ -1092,24 +1062,29 @@ class NetTestSlave: if msg["type"] == "command": method = getattr(self._methods, msg["method_name"], None) if method != None: + if_manager = self._methods._if_manager + if if_manager is not None: + args = deviceref_to_device(if_manager, msg["args"]) + kwargs = deviceref_to_device(if_manager, msg["kwargs"]) + else: + args = msg["args"] + kwargs = msg["kwargs"] + try: - result = method(*msg["args"]) - except: + result = method(*args, **kwargs) + except LnstError as e: log_exc_traceback() - type, value, tb = sys.exc_info() - exc_trace = ''.join(traceback.format_exception(type, - value, tb)) - response = {"type": "exception", "Exception": value} + response = {"type": "exception", "Exception": e}
self._server_handler.send_data_to_ctl(response) return
- if result != None: - response = {"type": "result", "result": result} - self._server_handler.send_data_to_ctl(response) + response = {"type": "result", "result": result} + response = device_to_deviceref(response) + self._server_handler.send_data_to_ctl(response) else: - err = "Method '%s' not supported." % msg["method_name"] - response = {"type": "error", "err": err} + err = LnstError("Method '%s' not supported." % msg["method_name"]) + response = {"type": "exception", "Exception": err} self._server_handler.send_data_to_ctl(response) elif msg["type"] == "log": logger = logging.getLogger() @@ -1122,48 +1097,36 @@ class NetTestSlave: else: logging.debug("Recieved an exception from foreground command") logging.debug(msg["Exception"]) - cmd = self._cmd_context.get_cmd(msg["cmd_id"]) - cmd.join() - self._cmd_context.del_cmd(cmd) + job = self._job_context.get_cmd(msg["job_id"]) + job.join() + self._job_context.del_cmd(job) + self._server_handler.send_data_to_ctl(msg) + elif msg["type"] == "job_finished": + job = self._job_context.get_job(msg["job_id"]) + job.join() + + job.set_finished(msg["result"]) self._server_handler.send_data_to_ctl(msg) - elif msg["type"] == "result": - if msg["cmd_id"] == None: - del msg["cmd_id"] - self._server_handler.send_data_to_ctl(msg) - cmd = self._cmd_context.get_cmd(None) - cmd.join() - cmd.set_result_sent() - else: - cmd = self._cmd_context.get_cmd(msg["cmd_id"]) - cmd.join() - del msg["cmd_id"] - - cmd.set_result(msg["result"]) - if cmd.finished(): - msg["result"] = cmd.get_result() - self._server_handler.send_data_to_ctl(msg) - cmd.set_result_sent() elif msg["type"] == "netlink": - self._if_manager.handle_netlink_msgs(msg["data"]) + if_manager = self._methods._if_manager + if if_manager is not None: + if_manager.handle_netlink_msgs(msg["data"]) elif msg["type"] == "from_netns": self._server_handler.send_data_to_ctl(msg["data"]) elif msg["type"] == "to_netns": netns = msg["netns"] try: self._server_handler.send_data_to_netns(netns, 
msg["data"]) - except: + except LnstError as e: log_exc_traceback() - type, value, tb = sys.exc_info() - exc_trace = ''.join(traceback.format_exception(type, - value, tb)) - response = {"type": "exception", "Exception": value} + response = {"type": "exception", "Exception": e}
self._server_handler.send_data_to_ctl(response) return else: raise Exception("Recieved unknown command")
- pipes = self._cmd_context.get_read_pipes() + pipes = self._job_context.get_parent_pipes() self._server_handler.update_connections(pipes)
def register_die_signal(self, signum):
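The rewritten dispatch loop above replaces per-command result bookkeeping with per-job tracking and translates Device objects in results back into references before they are sent to the controller. A minimal, self-contained sketch of that pattern (the message shapes and the `device_to_deviceref` name follow the diff; the dict-based `Device` stand-in and `Methods` class are illustrative, not the real lnst classes):

```python
# Sketch of the slave-side dispatch pattern from the patch above.
# Device and the transport are stubbed; only the message shapes
# ("result", "exception") follow the diff.

class Device:            # stand-in for lnst.Devices.Device
    def __init__(self, ifindex):
        self.ifindex = ifindex

def device_to_deviceref(obj):
    """Recursively replace Device objects with serializable references."""
    if isinstance(obj, Device):
        return {"type": "deviceref", "ifindex": obj.ifindex}
    if isinstance(obj, dict):
        return {k: device_to_deviceref(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [device_to_deviceref(v) for v in obj]
    return obj

def dispatch(methods, msg):
    """Return the response the slave would send back to the controller."""
    method = getattr(methods, msg["method_name"], None)
    if method is None:
        err = "Method '%s' not supported." % msg["method_name"]
        return {"type": "exception", "Exception": err}
    result = method(*msg.get("args", []))
    return device_to_deviceref({"type": "result", "result": result})

class Methods:           # hypothetical slave method registry
    def get_device(self, ifindex):
        return Device(ifindex)

resp = dispatch(Methods(), {"method_name": "get_device", "args": [2]})
print(resp)  # {'type': 'result', 'result': {'type': 'deviceref', 'ifindex': 2}}
```

Note the key change the diff makes: an unsupported method now produces an `"exception"` message carrying an `LnstError` rather than a plain `"error"` string, so the controller can re-raise it.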
From: Ondrej Lichtner olichtne@redhat.com
No longer needed, as all Device-related code is now implemented in its own package.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/InterfaceManager.py | 382 ----------------------------------------- 1 file changed, 382 deletions(-)
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py index da7e712..585adb9 100644 --- a/lnst/Slave/InterfaceManager.py +++ b/lnst/Slave/InterfaceManager.py @@ -240,385 +240,3 @@ class InterfaceManager(object): while (self._is_name_used(prefix + str(index2))): index2 += 1 return prefix + str(index1), prefix + str(index2) - -class Device(object): - def __init__(self, if_manager): - self._initialized = False - self._configured = False - self._created = False - - self._if_index = None - self._hwaddr = None - self._name = None - self._conf = None - self._conf_dict = None - self._ip_addrs = [] - self._ifi_type = None - self._state = None - self._master = {"primary": None, "other": []} - self._slaves = [] - self._netns = None - self._peer = None - self._mtu = None - self._driver = None - self._devlink = None - - self._if_manager = if_manager - - def set_devlink(self, devlink_port_data): - self._devlink = devlink_port_data - - def init_netlink(self, nl_msg): - self._if_index = nl_msg['index'] - self._ifi_type = nl_msg['ifi_type'] - self._hwaddr = normalize_hwaddr(nl_msg.get_attr("IFLA_ADDRESS")) - self._name = nl_msg.get_attr("IFLA_IFNAME") - self._state = nl_msg.get_attr("IFLA_OPERSTATE") - self._ip_addrs = [] - self.set_master(nl_msg.get_attr("IFLA_MASTER"), primary=True) - self._netns = None - self._mtu = nl_msg.get_attr("IFLA_MTU") - - if self._driver is None: - self._driver = self._ethtool_get_driver() - - self._initialized = True - - #return an update message that will be sent to the controller - return {"type": "if_update", - "if_data": self.get_if_data()} - - def update_netlink(self, nl_msg): - if self._if_index != nl_msg['index']: - return None - if nl_msg['header']['type'] == RTM_NEWLINK: - self._ifi_type = nl_msg['ifi_type'] - self._hwaddr = normalize_hwaddr(nl_msg.get_attr("IFLA_ADDRESS")) - self._name = nl_msg.get_attr("IFLA_IFNAME") - self._state = nl_msg.get_attr("IFLA_OPERSTATE") - self.set_master(nl_msg.get_attr("IFLA_MASTER"), 
primary=True) - self._mtu = nl_msg.get_attr("IFLA_MTU") - - link = nl_msg.get_attr("IFLA_LINK") - if link != None: - # IFLA_LINK is an index of device that's closer to physical - # interface in the stack, e.g. index of eth0 for eth0.100 - # so to properly deconfigure the stack we have to save - # parent index in the child device; this is the opposite - # to IFLA_MASTER - link_dev = self._if_manager.get_device(link) - if link_dev != None: - link_dev.set_master(self._if_index, primary=False) - # This reference shouldn't change - you can't change the realdev - # of a vlan, you need to create a new vlan. Therefore the - # the following add_slave shouldn't be a problem. - self.add_slave(link) - - if self._conf_dict: - self._conf_dict["name"] = self._name - - if self._driver is None: - self._driver = self._ethtool_get_driver() - - self._initialized = True - elif nl_msg['header']['type'] == RTM_NEWADDR: - scope = nl_msg['scope'] - addr_val = nl_msg.get_attr('IFA_ADDRESS') - prefix_len = str(nl_msg['prefixlen']) - addr = {"addr": addr_val, - "prefix": prefix_len, - "scope": scope} - if self.find_addrs(addr) == []: - self._ip_addrs.append(addr) - elif nl_msg['header']['type'] == RTM_DELADDR: - scope = nl_msg['scope'] - addr_val = nl_msg.get_attr('IFA_ADDRESS') - prefix_len = str(nl_msg['prefixlen']) - addr = {"addr": addr_val, - "prefix": prefix_len, - "scope": scope} - matching_addrs = self.find_addrs(addr) - for ip_addr in matching_addrs: - self._ip_addrs.remove(ip_addr) - - #return an update message that will be sent to the controller - return {"type": "if_update", - "if_data": self.get_if_data()} - - def del_link(self): - if self._master["primary"]: - primary_id = self._master["primary"] - primary_dev = self._if_manager.get_device(primary_id) - if primary_dev: - primary_dev.del_slave(self._if_index) - - for m_id in self._master["other"]: - m_dev = self._if_manager.get_device(m_id) - if m_dev: - m_dev.del_slave(self._if_index) - - for dev_id in self._slaves: - dev = 
self._if_manager.get_device(dev_id) - if dev != None: - dev.del_master(self._if_index) - - def find_addrs(self, addr_spec): - ret = [] - for addr in self._ip_addrs: - if addr_spec.items() <= addr.items(): - ret.append(addr) - return ret - - def get_if_index(self): - return self._if_index - - def get_hwaddr(self): - return self._hwaddr - - def get_name(self): - return self._name - - def get_ips(self): - return self._ip_addrs - - def clear_ips(self): - self._ip_addrs = [] - - def is_configured(self): - return self._configured - - def get_conf_dict(self): - return self._conf_dict - - def set_peer(self, dev): - self._peer = dev - - def get_peer(self): - return self._peer - - def set_configuration(self, conf): - self.clear_configuration() - if "name" not in conf or conf["name"] == None: - conf["name"] = self._name - self._conf_dict = conf - self._conf = NetConfigDevice(conf, self._if_manager) - - if not self._initialized: - self._name = conf["name"] - - def get_configuration(self): - return self._conf - - def del_configuration(self): - self._conf = None - self._conf_dict = None - - def _clear_tc_qdisc(self): - exec_cmd("tc qdisc replace dev %s root pfifo" % self._name) - out, _ = exec_cmd("tc filter show dev %s" % self._name) - ingress_handles = re.findall("ingress (\d+):", out) - for ingress_handle in ingress_handles: - exec_cmd("tc qdisc del dev %s handle %s: ingress" % - (self._name, ingress_handle)) - out, _ = exec_cmd("tc qdisc show dev %s" % self._name) - ingress_qdiscs = re.findall("qdisc ingress (\w+):", out) - if len(ingress_qdiscs) != 0: - exec_cmd("tc qdisc del dev %s ingress" % self._name) - - def _clear_tc_filters(self): - out, _ = exec_cmd("tc filter show dev %s" % self._name) - egress_prefs = re.findall("pref (\d+) .* handle", out) - - for egress_pref in egress_prefs: - exec_cmd("tc filter del dev %s pref %s" % (self._name, - egress_pref)) - - def clear_configuration(self): - if self._master["primary"]: - primary_id = self._master["primary"] - primary_dev 
= self._if_manager.get_device(primary_id) - if primary_dev: - primary_dev.clear_configuration() - - for m_id in self._master["other"]: - m_dev = self._if_manager.get_device(m_id) - if m_dev: - m_dev.clear_configuration() - - if self._conf != None: - self._clear_tc_qdisc() - self._clear_tc_filters() - self.down() - self.deconfigure() - self.destroy() - self._conf = None - self._conf_dict = None - - def set_master(self, if_index, primary=True): - if primary: - prev_master_id = self._master["primary"] - if prev_master_id != None and if_index != prev_master_id: - prev_master_dev = self._if_manager.get_device(prev_master_id) - if prev_master_dev != None: - prev_master_dev.del_slave(self._if_index) - - self._master["primary"] = if_index - if self._master["primary"] != None: - master_id = self._master["primary"] - master_dev = self._if_manager.get_device(master_id) - if master_dev != None: - master_dev.add_slave(self._if_index) - elif if_index not in self._master["other"]: - self._master["other"].append(if_index) - - def del_master(self, if_index): - if self._master["primary"] == if_index: - self._master["primary"] = None - elif if_index in self._master["other"]: - self._master["other"].remove(if_index) - - def add_slave(self, if_index): - if if_index not in self._slaves: - self._slaves.append(if_index) - - def del_slave(self, if_index): - if if_index in self._slaves: - self._slaves.remove(if_index) - - def create(self): - if self._conf != None and not self._created: - self._conf.create() - self._created = True - return True - return False - - def destroy(self): - if self._conf != None and self._created: - self._conf.destroy() - self._created = False - return True - return False - - def configure(self): - if self._conf != None and not self._configured: - self._conf.configure() - self._configured = True - - def deconfigure(self): - if self._master["primary"]: - primary_id = self._master["primary"] - primary_dev = self._if_manager.get_device(primary_id) - if primary_dev: - 
primary_dev.deconfigure() - - for m_id in self._master["other"]: - m_dev = self._if_manager.get_device(m_id) - if m_dev: - m_dev.deconfigure() - - if self._conf != None and self._configured: - self._conf.deconfigure() - self._configured = False - - def up(self): - if self._conf != None: - self._conf.up() - else: - exec_cmd("ip link set %s up" % self._name) - - def down(self): - if self._conf != None: - self._conf.down() - else: - exec_cmd("ip link set %s down" % self._name) - - def link_up(self): - exec_cmd("ip link set %s up" % self._name) - - def link_down(self): - exec_cmd("ip link set %s down" % self._name) - - def link_stats(self): - stats = {"devname": self._name, - "hwaddr": self._hwaddr} - out, _ = exec_cmd("ip -s link show %s" % self._name) - lines = iter(out.split("\n")) - for line in lines: - if (len(line.split()) == 0): - continue - if (line.split()[0] == "RX:"): - rx_stats = map(int, lines.next().split()) - stats.update({"rx_bytes" : rx_stats[0], - "rx_packets": rx_stats[1], - "rx_errors" : rx_stats[2], - "rx_dropped": rx_stats[3], - "rx_overrun": rx_stats[4], - "rx_mcast" : rx_stats[5]}) - if (line.split()[0] == "TX:"): - tx_stats = map(int, lines.next().split()) - stats.update({"tx_bytes" : tx_stats[0], - "tx_packets": tx_stats[1], - "tx_errors" : tx_stats[2], - "tx_dropped": tx_stats[3], - "tx_carrier": tx_stats[4], - "tx_collsns": tx_stats[5]}) - return stats - - def set_addresses(self, ips): - self._conf.set_addresses(ips) - exec_cmd("ip addr flush %s" % self._name) - for address in ips: - exec_cmd("ip addr add %s dev %s" % (address, self._name)) - - def add_route(self, dest): - exec_cmd("ip route add %s dev %s" % (dest, self._name)) - - def del_route(self, dest): - exec_cmd("ip route del %s dev %s" % (dest, self._name)) - - def set_netns(self, netns): - self._netns = netns - return - - def get_netns(self): - return self._netns - - def _ethtool_get_driver(self): - if self._ifi_type == 772: #loopback ifi type - return 'loopback' - out, _ = 
exec_cmd("ethtool -i %s" % self._name, False, False, False) - match = re.search("^driver: (.*)$", out, re.MULTILINE) - if match is not None: - return match.group(1) - else: - return None - - def get_if_data(self): - if_data = {"if_index": self._if_index, - "hwaddr": self._hwaddr, - "name": self._name, - "ip_addrs": self._ip_addrs, - "ifi_type": self._ifi_type, - "state": self._state, - "master": self._master, - "slaves": self._slaves, - "netns": self._netns, - "peer": self._peer.get_if_index() if self._peer else None, - "mtu": self._mtu, - "driver": self._driver, - "devlink": self._devlink} - return if_data - - def set_speed(self, speed): - exec_cmd("ethtool -s %s speed %s autoneg off" % (self._name, speed)) - - def set_autoneg(self): - exec_cmd("ethtool -s %s autoneg on" % self._name) - - def slave_add(self, if_id): - if self._conf != None: - self._conf.slave_add(if_id) - - def slave_del(self, if_id): - if self._conf != None: - self._conf.slave_del(if_id)
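The deleted `link_stats` method scraped `ip -s link show` output with string splitting; per the cover letter, the replacement retrieves the counters directly from the netlink message and adds `link_stats64`. A rough sketch of that approach, with the netlink message stubbed as a pyroute2-style object (`IFLA_STATS`/`IFLA_STATS64` are real rtnetlink attribute names; the `NlMsg` stub and the exact counter subset shown are illustrative):

```python
class NlMsg:
    """Stub exposing the pyroute2-style get_attr() interface."""
    def __init__(self, attrs):
        self._attrs = attrs
    def get_attr(self, name):
        return self._attrs.get(name)

def link_stats64(nl_msg):
    # IFLA_STATS64 carries 64-bit counters; fall back to the 32-bit
    # IFLA_STATS when the kernel message doesn't include it.
    stats = nl_msg.get_attr("IFLA_STATS64") or nl_msg.get_attr("IFLA_STATS")
    if stats is None:
        return {}
    return {"rx_bytes":   stats["rx_bytes"],
            "rx_packets": stats["rx_packets"],
            "tx_bytes":   stats["tx_bytes"],
            "tx_packets": stats["tx_packets"]}

msg = NlMsg({"IFLA_STATS64": {"rx_bytes": 1024, "rx_packets": 8,
                              "tx_bytes": 2048, "tx_packets": 16}})
print(link_stats64(msg))
```

Reading the attribute off the RTM_NEWLINK message the slave already receives avoids both the extra `ip` subprocess and the fragile column-position parsing of the deleted code.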
From: Ondrej Lichtner olichtne@redhat.com
To export the classes most commonly used by testers when implementing Recipes. Mostly just a convenience for the tester to shorten the import lines.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/__init__.py | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/lnst/Controller/__init__.py b/lnst/Controller/__init__.py index e69de29..1f97b40 100644 --- a/lnst/Controller/__init__.py +++ b/lnst/Controller/__init__.py @@ -0,0 +1,3 @@ +from lnst.Controller.Controller import Controller +from lnst.Controller.Recipe import BaseRecipe +from lnst.Controller.Requirements import HostReq, DeviceReq
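With the three re-exports above, a recipe can write `from lnst.Controller import Controller, BaseRecipe, HostReq, DeviceReq` instead of importing from each submodule. The general `__init__.py` re-export pattern can be sketched without lnst installed (the `pkg` package below is synthetic, built in memory for illustration):

```python
import sys
import types

# Hypothetical stand-in for a submodule like lnst/Controller/Controller.py:
controller_mod = types.ModuleType("pkg.Controller")
class Controller:                            # would normally live in the submodule
    pass
controller_mod.Controller = Controller
sys.modules["pkg.Controller"] = controller_mod

# Stand-in for the three-line __init__.py added by the patch:
pkg = types.ModuleType("pkg")
pkg.Controller = controller_mod.Controller   # "from pkg.Controller import Controller"
sys.modules["pkg"] = pkg

from pkg import Controller as C              # the shortened import testers get
print(C is Controller)                       # → True
```

The re-export keeps the public API at the package root while the implementation stays free to move between submodules.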
From: Ondrej Lichtner olichtne@redhat.com
Remove imports related to the old prototype implementation of PyRecipes.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/__init__.py | 1 - 1 file changed, 1 deletion(-)
diff --git a/lnst/__init__.py b/lnst/__init__.py index 924a8ff..e69de29 100644 --- a/lnst/__init__.py +++ b/lnst/__init__.py @@ -1 +0,0 @@ -from lnst.Controller.Task import match, add_host, wait, get_alias, get_module, breakpoint
From: Ondrej Lichtner olichtne@redhat.com
These are no longer used by the new Python Recipes implementation.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/NetTestController.py | 620 --------------------------------- lnst/Controller/RecipeParser.py | 572 ------------------------------- lnst/Controller/SlavePool.py | 648 ----------------------------------- lnst/Controller/XmlParser.py | 188 ---------- lnst/Controller/XmlProcessing.py | 235 ------------- lnst/Controller/XmlTemplates.py | 438 ----------------------- 6 files changed, 2701 deletions(-) delete mode 100644 lnst/Controller/NetTestController.py delete mode 100644 lnst/Controller/RecipeParser.py delete mode 100644 lnst/Controller/SlavePool.py delete mode 100644 lnst/Controller/XmlParser.py delete mode 100644 lnst/Controller/XmlProcessing.py delete mode 100644 lnst/Controller/XmlTemplates.py
diff --git a/lnst/Controller/NetTestController.py b/lnst/Controller/NetTestController.py deleted file mode 100644 index 8f41cd0..0000000 --- a/lnst/Controller/NetTestController.py +++ /dev/null @@ -1,620 +0,0 @@ -""" -This module defines NetTestController class which does the controlling -part of network testing. - -Copyright 2011 Red Hat, Inc. -Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. -""" - -__author__ = """ -jpirko@redhat.com (Jiri Pirko) -""" - -import logging -import socket -import os -import re -import cPickle -import imp -import copy -import sys -from time import sleep -from lnst.Common.NetUtils import MacPool -from lnst.Common.Utils import md5sum, dir_md5sum -from lnst.Common.Utils import check_process_running, bool_it, get_module_tools -from lnst.Common.NetTestCommand import str_command, CommandException -from lnst.Controller.RecipeParser import RecipeError -from lnst.Controller.SlavePool import SlavePool -from lnst.Controller.Machine import MachineError, VirtualInterface -from lnst.Controller.Machine import StaticInterface -from lnst.Controller.CtlSecSocket import CtlSecSocket -from lnst.Common.SecureSocket import SecSocketException -from lnst.Common.ConnectionHandler import send_data, recv_data -from lnst.Common.ConnectionHandler import ConnectionHandler -from lnst.Common.Config import lnst_config -from lnst.Common.Path import Path -from lnst.Common.Colours import decorate_with_preset -from lnst.Common.NetUtils import test_tcp_connection -import lnst.Controller.Task as Task - -# conditional support for libvirt -if check_process_running("libvirtd"): - from lnst.Controller.VirtUtils import VirtNetCtl, VirtDomainCtl - -class NetTestError(Exception): - pass - -class NoMatchError(NetTestError): - pass - -def ignore_event(**kwarg): - pass - -class NetTestController: - def __init__(self, recipe_path, log_ctl, - res_serializer=None, pool_checks=True, - packet_capture=False, - 
defined_aliases=None, reduce_sync=False, - restrict_pools=[], multi_match=False, - breakpoints=False): - self._res_serializer = res_serializer - self._remote_capture_files = {} - self._log_ctl = log_ctl - self._recipe_path = Path(None, recipe_path) - self._msg_dispatcher = MessageDispatcher(log_ctl) - self._packet_capture = packet_capture - self._reduce_sync = reduce_sync - self._defined_aliases = defined_aliases - self._multi_match = multi_match - - self.run_mode = "run" - self.breakpoints = breakpoints - - self._machines = {} - self._network_bridges = {} - self._tasks = [] - - mac_pool_range = lnst_config.get_option('environment', 'mac_pool_range') - self._mac_pool = MacPool(mac_pool_range[0], mac_pool_range[1]) - - conf_pools = lnst_config.get_pools() - pools = {} - if len(restrict_pools) > 0: - for pool_name in restrict_pools: - if pool_name in conf_pools: - pools[pool_name] = conf_pools[pool_name] - elif len(restrict_pools) == 1 and os.path.isdir(pool_name): - pools = {"cmd_line_pool": pool_name} - else: - raise NetTestError("Pool %s does not exist!" 
% pool_name) - else: - pools = conf_pools - - sp = SlavePool(pools, pool_checks) - self._slave_pool = sp - - modules_dirs = lnst_config.get_option('environment', 'module_dirs') - tools_dirs = lnst_config.get_option('environment', 'tool_dirs') - - self._resource_table = {} - self._resource_table["module"] = self._load_test_modules(modules_dirs) - self._resource_table["tools"] = self._load_test_tools(tools_dirs) - - def _get_machineinfo(self, machine_id): - try: - info = self._recipe["machines"][machine_id]["params"] - except KeyError: - msg = "Machine parameters requested, but not yet available" - raise NetTestError(msg) - - return info - - @staticmethod - def _session_die(session, status): - logging.debug("%s terminated with status %s", session.command, status) - msg = "SSH session terminated with status %s" % status - raise NetTestError(msg) - - def _prepare_network(self, resource_sync=True): - mreq = Task.get_mreq() - - machines = self._machines - for m_id in machines.keys(): - self._prepare_machine(m_id, resource_sync) - - for machine_id, machine_data in mreq.iteritems(): - m_id = machine_id - m = machines[m_id] - namespaces = set() - for if_id, iface_data in machine_data["interfaces"].iteritems(): - self._prepare_interface(m_id, if_id, iface_data) - - if iface_data["netns"] != None: - namespaces.add(iface_data["netns"]) - - if len(namespaces) > 0: - m.disable_nm() - - ifaces = m.get_ordered_interfaces() - for netns in namespaces: - m.add_netns(netns) - - for iface in ifaces: - iface.configure() - if (m._libvirt_domain is None and - isinstance(iface, StaticInterface)): - driver = iface._driver - if_id = iface._id - mapped_machine = self._slave_pool._map['machines'][m_id] - mapped_machine['interfaces'][if_id]['driver'] = driver - for iface in ifaces: - iface.up() - - m.wait_interface_init() - - def set_machine_requirements(self): - mreq = Task.get_mreq() - sp = self._slave_pool - sp.set_machine_requirements(mreq) - - def provision_machines(self): - sp = 
self._slave_pool - machines = self._machines - if not sp.provision_machines(machines): - msg = "This setup cannot be provisioned with the current pool." - raise NoMatchError(msg) - - def print_match_description(self): - sp = self._slave_pool - match = sp.get_match() - logging.info("Pool match description:") - if sp.is_setup_virtual(): - logging.info(" Setup is using virtual machines.") - for m_id, m in sorted(match["machines"].iteritems()): - logging.info(" host "%s" uses "%s"" % (m_id, m["target"])) - for if_id, match in m["interfaces"].iteritems(): - pool_id = match["target"] - logging.info(" interface "%s" matched to "%s"" %\ - (if_id, pool_id)) - - def get_pool_match(self): - return self._slave_pool.get_match() - - def _prepare_machine(self, m_id, resource_sync=True): - machine = self._machines[m_id] - address = socket.gethostbyname(machine.get_hostname()) - - self._log_ctl.add_slave(m_id) - machine.set_rpc(self._msg_dispatcher) - machine.set_mac_pool(self._mac_pool) - machine.set_network_bridges(self._network_bridges) - - recipe_name = os.path.basename(self._recipe_path.abs_path()) - machine.init_connection(recipe_name) - - def _prepare_interface(self, m_id, if_id, iface_data): - machine = self._machines[m_id] - - iface = machine.get_interface(if_id) - - if iface_data["netns"] != None: - iface.set_netns(iface_data["netns"]) - - def _prepare_command(self, cmd_data): - cmd = {"type": cmd_data["type"]} - if "host" in cmd_data: - cmd["host"] = cmd_data["host"] - if cmd["host"] not in self._machines: - msg = "Invalid host id '%s'." % cmd["host"] - raise RecipeError(msg, cmd_data) - - if "netns" in cmd_data: - cmd["netns"] = cmd_data["netns"] - - if "expect" in cmd_data: - expect = cmd_data["expect"] - if expect not in ["pass", "fail"]: - msg = "Illegal expect attribute value." 
- raise RecipeError(msg, cmd_data) - cmd["expect"] = expect == "pass" - - if cmd["type"] == "test": - cmd["module"] = cmd_data["module"] - - cmd_opts = {} - if "options" in cmd_data: - for opt in cmd_data["options"]: - name = opt["name"] - val = opt["value"] - - if name not in cmd_opts: - cmd_opts[name] = [] - - cmd_opts[name].append({"value": val}) - cmd["options"] = cmd_opts - elif cmd["type"] == "exec": - cmd["command"] = cmd_data["command"] - - if "from" in cmd_data: - cmd["from"] = cmd_data["from"] - elif cmd["type"] in ["wait", "intr", "kill"]: - # 'proc_id' is used to store bg_id for wait/kill/intr - # 'bg_id' is used for test/exec - # this is used to distinguish between the two in NetTestSlave code - cmd["proc_id"] = cmd_data["bg_id"] - elif cmd["type"] == "config": - cmd["persistent"] = False - if "persistent" in cmd_data: - cmd["persistent"] = bool_it(cmd_data["persistent"]) - - cmd["options"] = [] - for opt in cmd_data["options"]: - name = opt["name"] - value = opt["value"] - cmd["options"].append({"name": name, "value": value}) - elif cmd["type"] == "ctl_wait": - cmd["seconds"] = int(cmd_data["seconds"]) - else: - msg = "Unknown command type '%s'" % cmd["type"] - raise RecipeError(msg, cmd_data) - - - if cmd["type"] in ["test", "exec"]: - if "bg_id" in cmd_data: - cmd["bg_id"] = cmd_data["bg_id"] - - if "timeout" in cmd_data: - try: - cmd["timeout"] = int(cmd_data["timeout"]) - except ValueError: - msg = "Timeout value must be an integer." 
- raise RecipeError(msg, cmd_data) - - return cmd - - def _check_task(self, task): - err = False - bg_ids = {} - for i, command in enumerate(task["skeleton"]): - if command["type"] == "ctl_wait": - continue - - machine_id = command["host"] - if not machine_id in bg_ids: - bg_ids[machine_id] = set() - - cmd_type = command["type"] - if cmd_type in ["wait", "intr", "kill"]: - bg_id = command["proc_id"] - if bg_id in bg_ids[machine_id]: - bg_ids[machine_id].remove(bg_id) - else: - logging.error("Found command "%s" for bg_id "%s" on " - "host "%s" which was not previously " - "defined", cmd_type, bg_id, machine_id) - err = True - - if "bg_id" in command: - bg_id = command["bg_id"] - if not bg_id in bg_ids[machine_id]: - bg_ids[machine_id].add(bg_id) - else: - logging.error("Command "%d" uses bg_id "%s" on host" - ""%s" which is already used", - i, bg_id, machine_id) - err = True - - for machine_id in bg_ids: - for bg_id in bg_ids[machine_id]: - logging.error("bg_id "%s" on host "%s" has no kill/wait " - "command to it", bg_id, machine_id) - err = True - - return err - - def _cleanup_slaves(self): - if self._machines == None: - return - - for machine_id, machine in self._machines.iteritems(): - if machine.is_configured(): - try: - machine.cleanup() - except: - pass - - #clean-up slave logger - self._log_ctl.remove_slave(machine_id) - - for m_id in list(self._machines.keys()): - del self._machines[m_id] - - # remove dynamically created bridges - for bridge in self._network_bridges.itervalues(): - bridge.cleanup() - self._network_bridges = {} - - def match_setup(self): - self.run_mode = "match_setup" - res = self._run_python_task() - return {"passed": True} - - def run_recipe(self): - try: - res = self._run_recipe() - except Exception as exc: - logging.error("Recipe execution terminated by unexpected exception") - raise - finally: - if self._packet_capture: - self._stop_packet_capture() - self._gather_capture_files() - self._cleanup_slaves() - - return res - - def 
prepare_test_env(self): - try: - self.provision_machines() - self.print_match_description() - if self.run_mode == "match_setup": - return True - self._prepare_network() - Task.ctl.init_hosts(self._machines) - return True - except (NoMatchError) as exc: - self._cleanup_slaves() - return False - except (KeyboardInterrupt, Exception) as exc: - msg = "Exception raised during configuration." - logging.error(msg) - self._cleanup_slaves() - raise - - def _run_recipe(self): - overall_res = {"passed": True} - - try: - self._res_serializer.add_task() - res = self._run_python_task() - except CommandException as exc: - logging.debug(exc) - overall_res["passed"] = False - overall_res["err_msg"] = "Command exception raised." - - for machine in self._machines.itervalues(): - machine.restore_system_config() - - # task failed - if not res: - overall_res["passed"] = False - overall_res["err_msg"] = "At least one command failed." - - return overall_res - - def init_taskapi(self): - Task.ctl = Task.ControllerAPI(self) - - def _run_python_task(self): - #backup of resource table - res_table_bkp = copy.deepcopy(self._resource_table) - - cwd = os.getcwd() - task_path = self._recipe_path - name = os.path.basename(task_path.abs_path()).split(".")[0] - sys.path.append(os.path.dirname(task_path.resolve())) - os.chdir(os.path.dirname(task_path.resolve())) - imp.load_source(name, task_path.resolve()) - os.chdir(cwd) - sys.path.remove(os.path.dirname(task_path.resolve())) - - #restore resource table - self._resource_table = res_table_bkp - - return Task.ctl._result - - def _run_command(self, command): - logging.info("Executing command: [%s]", str_command(command)) - - if "desc" in command: - logging.info("Cmd description: %s", command["desc"]) - - if command["type"] == "ctl_wait": - sleep(command["seconds"]) - cmd_res = {"passed": True, - "res_header": "%-9s%ss" % ("ctl_wait", - command["seconds"]), - "msg": "", - "res_data": None} - if self._res_serializer: - 
self._res_serializer.add_cmd_result(command, cmd_res) - return cmd_res - - machine_id = command["host"] - machine = self._machines[machine_id] - - try: - cmd_res = machine.run_command(command) - except Exception as exc: - cmd_res = {"passed": False, - "res_data": {"Exception": str(exc)}, - "msg": "Exception raised.", - "res_header": "EXCEPTION", - "report": str(exc)} - raise - finally: - if self._res_serializer: - self._res_serializer.add_cmd_result(command, cmd_res) - - if cmd_res["passed"]: - res_str = decorate_with_preset("PASS", "pass") - else: - res_str = decorate_with_preset("FAIL", "fail") - logging.info("Result: %s" % res_str) - if "report" in cmd_res and cmd_res["report"] != "": - logging.info("Result data:") - for line in cmd_res["report"].splitlines(): - logging.info(4*" " + line) - if "msg" in cmd_res and cmd_res["msg"] != "": - logging.info("Status message from slave: "%s"" % cmd_res["msg"]) - - return cmd_res - - def _start_packet_capture(self): - logging.info("Starting packet capture") - for machine_id, machine in self._machines.iteritems(): - capture_files = machine.start_packet_capture() - self._remote_capture_files[machine_id] = capture_files - - def _stop_packet_capture(self): - logging.info("Stopping packet capture") - for machine_id, machine in self._machines.iteritems(): - machine.stop_packet_capture() - - # TODO: Move this function to logging - def _gather_capture_files(self): - logging_root = self._log_ctl.get_recipe_log_path() - logging_root = os.path.abspath(logging_root) - logging.info("Retrieving capture files from slaves") - for machine_id, machine in self._machines.iteritems(): - slave_logging_dir = os.path.join(logging_root, machine_id + "/") - try: - os.mkdir(slave_logging_dir) - except OSError as err: - if err.errno != 17: - msg = "Cannot access the logging directory %s" \ - % slave_logging_dir - raise NetTestError(msg) - - capture_files = self._remote_capture_files[machine_id] - for if_id, remote_path in capture_files.iteritems(): 
- filename = "%s.pcap" % if_id - local_path = os.path.join(slave_logging_dir, filename) - machine.copy_file_from_machine(remote_path, local_path) - - logging.info("pcap files from machine %s stored at %s", - machine_id, slave_logging_dir) - - def _load_test_modules(self, dirs): - modules = {} - for dir_name in dirs: - files = os.listdir(dir_name) - for f in files: - test_path = os.path.abspath("%s/%s" % (dir_name, f)) - if os.path.isfile(test_path): - match = re.match("(.+).py$", f) - if match: - test_name = match.group(1) - test_hash = md5sum(test_path) - - if test_name in modules: - msg = "Overriding previously defined test '%s' " \ - "from %s with a different one located in " \ - "%s" % (test_name, test_path, - modules[test_name]["path"]) - logging.warn(msg) - - modules[test_name] = {"path": test_path, - "hash": test_hash} - return modules - - def _load_test_tools(self, dirs): - packages = {} - for dir_name in dirs: - files = os.listdir(dir_name) - for f in files: - pkg_path = os.path.abspath("%s/%s" % (dir_name, f)) - if os.path.isdir(pkg_path): - pkg_name = os.path.basename(pkg_path.rstrip("/")) - pkg_hash = dir_md5sum(pkg_path) - - if pkg_name in packages: - msg = "Overriding previously defined tools " \ - "package '%s' from %s with a different " \ - "one located in %s" % (pkg_name, pkg_path, - packages[pkg_name]["path"]) - logging.warn(msg) - - packages[pkg_name] = {"path": pkg_path, - "hash": pkg_hash} - return packages - - def _get_alias(self, alias): - if alias in self._defined_aliases: - return self._defined_aliases[alias] - - def _get_aliases(self): - return self._defined_aliases - -class MessageDispatcher(ConnectionHandler): - def __init__(self, log_ctl): - super(MessageDispatcher, self).__init__() - self._log_ctl = log_ctl - self._machines = dict() - - def add_slave(self, machine, connection): - machine_id = machine.get_id() - self._machines[machine_id] = machine - self.add_connection(machine_id, connection) - - def send_message(self, machine_id, 
data): - soc = self.get_connection(machine_id) - - if send_data(soc, data) == False: - msg = "Connection error from slave %s" % machine_id - raise NetTestError(msg) - - def wait_for_result(self, machine_id): - wait = True - while wait: - connected_slaves = self._connection_mapping.keys() - - messages = self.check_connections() - - remaining_slaves = self._connection_mapping.keys() - - for msg in messages: - if msg[1]["type"] == "result" and msg[0] == machine_id: - wait = False - result = msg[1]["result"] - else: - self._process_message(msg) - - if connected_slaves != remaining_slaves: - disconnected_slaves = set(connected_slaves) -\ - set(remaining_slaves) - msg = "Slaves " + str(list(disconnected_slaves)) + \ - " disconnected from the controller." - raise NetTestError(msg) - - return result - - def _process_message(self, message): - if message[1]["type"] == "log": - record = message[1]["record"] - self._log_ctl.add_client_log(message[0], record) - elif message[1]["type"] == "result": - msg = "Recieved result message from different slave %s" % message[0] - logging.debug(msg) - elif message[1]["type"] == "if_update": - machine = self._machines[message[0]] - machine.interface_update(message[1]) - elif message[1]["type"] == "if_deleted": - machine = self._machines[message[0]] - machine.dev_db_delete(message[1]) - elif message[1]["type"] == "exception": - msg = "Slave %s: %s" % (message[0], message[1]["Exception"]) - raise CommandException(msg) - elif message[1]["type"] == "error": - msg = "Recieved an error message from slave %s: %s" %\ - (message[0], message[1]["err"]) - raise CommandException(msg) - else: - msg = "Unknown message type: %s" % message[1]["type"] - raise NetTestError(msg) - - def disconnect_slave(self, machine_id): - soc = self.get_connection(machine_id) - self.remove_connection(soc) - del self._machines[machine_id] diff --git a/lnst/Controller/RecipeParser.py b/lnst/Controller/RecipeParser.py deleted file mode 100644 index 09233a7..0000000 --- 
a/lnst/Controller/RecipeParser.py +++ /dev/null @@ -1,572 +0,0 @@ -""" -This module defines RecipeParser class useful to parse xml recipes - -Copyright 2013 Red Hat, Inc. -Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. -""" - -__author__ = """ -rpazdera@redhat.com (Radek Pazdera) -""" - -import os -from lnst.Common.Path import Path -from lnst.Controller.XmlParser import XmlParser -from lnst.Controller.XmlProcessing import XmlProcessingError, XmlData -from lnst.Controller.XmlProcessing import XmlCollection - -class RecipeError(XmlProcessingError): - pass - -class RecipeParser(XmlParser): - def __init__(self, recipe_path): - recipe_path = Path(None, recipe_path).abs_path() - super(RecipeParser, self).__init__("schema-recipe.rng", recipe_path) - - def _process(self, lnst_recipe): - recipe = XmlData(lnst_recipe) - - # machines - machines_tag = lnst_recipe.find("network") - if machines_tag is not None: - machines = recipe["machines"] = XmlCollection(machines_tag) - for machine_tag in machines_tag: - machines.append(self._process_machine(machine_tag)) - - # tasks - tasks = recipe["tasks"] = XmlCollection() - task_tags = lnst_recipe.findall("task") - for task_tag in task_tags: - tasks.append(self._process_task(task_tag)) - - return recipe - - def _process_machine(self, machine_tag): - machine = XmlData(machine_tag) - machine["id"] = self._get_attribute(machine_tag, "id") - - # params - params_tag = machine_tag.find("params") - params = self._process_params(params_tag) - if len(params) > 0: - machine["params"] = params - - # interfaces - interfaces_tag = machine_tag.find("interfaces") - if interfaces_tag is not None and len(interfaces_tag) > 0: - machine["interfaces"] = XmlCollection(interfaces_tag) - - lo_netns = [] - unique_ids = [] - for interface_tag in interfaces_tag: - interfaces = self._process_interface(interface_tag) - - for interface in interfaces: - if interface['id'] in 
unique_ids: - msg = "Interface with ID '%s' has already been "\ - "defined for this machine." % interface['id'] - raise RecipeError(msg, interface_tag) - else: - unique_ids.append(interface['id']) - - if interface['type'] == 'lo': - if interface['netns'] in lo_netns: - msg = "Only one loopback device per netns "\ - "is allowed." - raise RecipeError(msg, interface_tag) - else: - lo_netns.append(interface['netns']) - elif interface['type'] == "ovs_bridge": - ovs_conf = interface["ovs_conf"] - for i in ovs_conf["tunnels"] + ovs_conf["internals"]: - if i['id'] in unique_ids: - msg = "Interface with ID '%s' has already "\ - "been defined for this machine." %\ - i['id'] - raise RecipeError(msg, i) - else: - unique_ids.append(i['id']) - - machine["interfaces"].extend(interfaces) - - return machine - - def _process_params(self, params_tag): - params = XmlCollection(params_tag) - if params_tag is not None: - for param_tag in params_tag: - param = XmlData(param_tag) - param["name"] = self._get_attribute(param_tag, "name") - param["value"] = self._get_attribute(param_tag, "value") - params.append(param) - - return params - - def _process_interface(self, iface_tag): - iface = XmlData(iface_tag) - iface["type"] = iface_tag.tag - - if iface["type"] == "veth_pair": - iface = self._process_interface(iface_tag[0])[0] - iface2 = self._process_interface(iface_tag[1])[0] - - iface["peer"] = iface2["id"] - iface2["peer"] = iface["id"] - - return [iface, iface2] - - iface["id"] = self._get_attribute(iface_tag, "id") - - iface["netns"] = None - if self._has_attribute(iface_tag, "netns"): - iface["netns"] = self._get_attribute(iface_tag, "netns") - - # netem - netem_tag = iface_tag.find("netem") - if netem_tag is not None: - iface["netem"] = self._process_netem(netem_tag) - - # params - params_tag = iface_tag.find("params") - params = self._process_params(params_tag) - if len(params) > 0: - iface["params"] = params - - # addresses - addresses_tag = iface_tag.find("addresses") - addrs = 
self._process_addresses(addresses_tag) - iface["addresses"] = addrs - - if iface["type"] == "eth": - iface["network"] = self._get_attribute(iface_tag, "label") - elif iface["type"] in ["bond", "bridge", "macvlan", "team"]: - # slaves - slaves_tag = iface_tag.find("slaves") - if slaves_tag is not None and len(slaves_tag) > 0: - iface["slaves"] = XmlCollection(slaves_tag) - for slave_tag in slaves_tag: - slave = XmlData(slave_tag) - slave["id"] = self._get_attribute(slave_tag, "id") - - # slave options - opts_tag = slave_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - slave["options"] = opts - - iface["slaves"].append(slave) - - # interface options - opts_tag = iface_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - iface["options"] = opts - elif iface["type"] in ["vti", "vti6"]: - # interface options - opts_tag = iface_tag.find("options") - opts = self._process_options(opts_tag) - iface["options"] = opts - elif iface["type"] in ["vlan"]: - # real_dev of the VLAN interface - slaves_tag = iface_tag.find("slaves") - if slaves_tag is None or len(slaves_tag) != 1: - msg = "VLAN '%s' needs exactly one slave definition."\ - % iface["id"] - raise RecipeError(msg, iface_tag) - - iface["slaves"] = XmlCollection(slaves_tag) - - slave_tag = slaves_tag[0] - slave = XmlData(slave_tag) - slave["id"] = self._get_attribute(slave_tag, "id") - - iface["slaves"].append(slave) - - # interface options - opts_tag = iface_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - iface["options"] = opts - elif iface["type"] in ["vxlan"]: - # real_dev of the VXLAN interface - slaves_tag = iface_tag.find("slaves") - if slaves_tag is not None and len(slaves_tag) > 1: - msg = "VXLAN '%s' needs one or no slave definition."\ - % iface["id"] - raise RecipeError(msg, iface_tag) - - if slaves_tag: - iface["slaves"] = XmlCollection(slaves_tag) - slave_tag = slaves_tag[0] - slave = XmlData(slave_tag) - 
slave["id"] = self._get_attribute(slave_tag, "id") - iface["slaves"].append(slave) - - # interface options - opts_tag = iface_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - iface["options"] = opts - elif iface["type"] == "ovs_bridge": - slaves_tag = iface_tag.find("slaves") - iface["slaves"] = XmlCollection(slaves_tag) - ovsb_slaves = [] - - iface["ovs_conf"] = XmlData(slaves_tag) - if slaves_tag is not None: - for slave_tag in slaves_tag: - slave = XmlData(slave_tag) - slave["id"] = str(self._get_attribute(slave_tag, "id")) - ovsb_slaves.append(slave["id"]) - - opts_tag = slave_tag.find("options") - opts = self._process_options(opts_tag) - slave["options"] = opts - - iface["slaves"].append(slave) - - vlan_elems = iface_tag.findall("vlan") - vlans = iface["ovs_conf"]["vlans"] = XmlData(slaves_tag) - for vlan in vlan_elems: - vlan_tag = str(self._get_attribute(vlan, "tag")) - if vlan_tag in vlans: - msg = "VLAN '%s' already defined for "\ - "this ovs_bridge." % vlan_tag - raise RecipeError(msg, vlan) - - vlans[vlan_tag] = XmlData(vlan) - vlans[vlan_tag]["slaves"] = XmlCollection(vlan) - vlan_slaves = vlans[vlan_tag]["slaves"] - - slaves_tag = vlan.find("slaves") - for slave_tag in slaves_tag: - slave_id = str(self._get_attribute(slave_tag, "id")) - if slave_id not in ovsb_slaves: - msg = "No port with id '%s' defined for "\ - "this ovs_bridge." % slave_id - raise RecipeError(msg, slave_tag) - - if slave_id in vlan_slaves: - msg = "Port '%s' already a member of vlan %s"\ - % (slave_id, vlan_tag) - raise RecipeError(msg, slave_tag) - else: - vlan_slaves.append(slave_id) - - bonded_slaves = {} - bond_elems = iface_tag.findall("bond") - bonds = iface["ovs_conf"]["bonds"] = XmlData(slaves_tag) - for bond_tag in bond_elems: - bond_id = str(self._get_attribute(bond_tag, "id")) - if bond_id in bonds: - msg = "Bond with id '%s' already defined for "\ - "this ovs_bridge." 
% bond_id - raise RecipeError(msg, bond_tag) - bonds[bond_id] = XmlData(bond_tag) - bond_slaves = bonds[bond_id]["slaves"] = XmlCollection(bond_tag) - - slaves_tag = bond_tag.find("slaves") - for slave_tag in slaves_tag: - slave_id = str(self._get_attribute(slave_tag, "id")) - if slave_id not in ovsb_slaves: - msg = "No port with id '%s' defined for "\ - "this ovs_bridge." % slave_id - raise RecipeError(msg, slave_tag) - - if slave_id in bonded_slaves: - msg = "Port with id '%s' already in bond with id '%s'"\ - % (slave_id, bonded_slaves[slave_id]) - raise RecipeError(msg, slave_tag) - else: - bonded_slaves[slave_id] = bond_id - - bond_slaves.append(slave_id) - - opts_tag = bond_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - bonds[bond_id]["options"] = opts - - unique_ids = [] - tunnels = iface["ovs_conf"]["tunnels"] = XmlCollection(slaves_tag) - tunnel_elems = iface_tag.findall("tunnel") - for tunnel_elem in tunnel_elems: - tunnels.append(XmlData(tunnel_elem)) - tunnel = tunnels[-1] - tunnel["id"] = str(self._get_attribute(tunnel_elem, "id")) - if tunnel["id"] in unique_ids: - msg = "Tunnel with id '%s' already defined for "\ - "this ovs_bridge." 
% tunnel["id"] - raise RecipeError(msg, tunnel_elem) - else: - unique_ids.append(tunnel["id"]) - - t = str(self._get_attribute(tunnel_elem, "type")) - tunnel["type"] = t - - opts_elem = tunnel_elem.find("options") - opts = self._process_options(opts_elem) - if len(opts) > 0: - tunnel["options"] = opts - - # addresses - addresses_tag = tunnel_elem.find("addresses") - addrs = self._process_addresses(addresses_tag) - tunnel["addresses"] = addrs - - iface["ovs_conf"]["internals"] = XmlCollection(slaves_tag) - internals = iface["ovs_conf"]["internals"] - internal_elems = iface_tag.findall("internal") - for internal_elem in internal_elems: - internals.append(XmlData(internal_elem)) - internal = internals[-1] - internal["id"] = str(self._get_attribute(internal_elem, "id")) - if internal["id"] in unique_ids: - msg = "Internal id '%s' already defined for "\ - "this ovs_bridge." % internal["id"] - raise RecipeError(msg, internal_elem) - else: - unique_ids.append(internal["id"]) - - opts_elem = internal_elem.find("options") - opts = self._process_options(opts_elem) - if len(opts) > 0: - internal["options"] = opts - - # addresses - addresses_tag = internal_elem.find("addresses") - addrs = self._process_addresses(addresses_tag) - internal["addresses"] = addrs - - iface["ovs_conf"]["flow_entries"] = XmlCollection(slaves_tag) - flow_entries = iface["ovs_conf"]["flow_entries"] - flow_elems = iface_tag.findall("flow_entries") - if len(flow_elems) == 1: - entries = flow_elems[0].findall("entry") - for entry in entries: - if self._has_attribute(entry, "value"): - flow_entries.append(self._get_attribute(entry, - "value")) - else: - flow_entries.append(self._get_content(entry)) - - return [iface] - - def _process_addresses(self, addresses_tag): - addresses = XmlCollection(addresses_tag) - if addresses_tag is not None and len(addresses_tag) > 0: - for addr_tag in addresses_tag: - if self._has_attribute(addr_tag, "value"): - addr = self._get_attribute(addr_tag, "value") - else: - addr = 
self._get_content(addr_tag) - addresses.append(addr) - return addresses - - def _process_options(self, opts_tag): - options = XmlCollection(opts_tag) - if opts_tag is not None: - for opt_tag in opts_tag: - opt = XmlData(opt_tag) - opt["name"] = self._get_attribute(opt_tag, "name") - if self._has_attribute(opt_tag, "value"): - opt["value"] = self._get_attribute(opt_tag, "value") - else: - opt["value"] = self._get_content(opt_tag) - options.append(opt) - - return options - - def _validate_netem(self, options, netem_op, netem_tag): - if netem_op == "delay": - valid = False - jitter = False - correlation = False - distribution = False - valid_distributions = ["normal", "uniform", "pareto", "paretonormal"] - for opt in options: - if "time" in opt.values(): - valid = True - elif "distribution" in opt.values(): - if opt["value"] not in valid_distributions: - raise RecipeError("netem: invalid distribution type", netem_tag) - else: - distribution = True - elif "jitter" in opt.values(): - jitter = True - elif "correlation" in opt.values(): - correlation = True - if not jitter: - if correlation or distribution: - raise RecipeError("netem: jitter option is mandatory when using <correlation> or <distribution>", netem_tag) - if not valid: - raise RecipeError("netem: time option is mandatory for <delay>", netem_tag) - elif netem_op == "loss": - for opt in options: - if "percent" in opt.values(): - return - raise RecipeError("netem: percent option is mandatory for <loss>", netem_tag) - elif netem_op == "duplication": - for opt in options: - if "percent" in opt.values(): - return - raise RecipeError("netem: percent option is mandatory for <duplication>", netem_tag) - elif netem_op == "corrupt": - for opt in options: - if "percent" in opt.values(): - return - raise RecipeError("netem: percent option is mandatory for <corrupt>", netem_tag) - elif netem_op == "reordering": - for opt in options: - if "percent" in opt.values(): - return - raise RecipeError("netem: percent option is 
mandatory for <reordering>", netem_tag) - - def _process_netem(self, netem_tag): - interface = XmlData(netem_tag) - # params - for netem_op in ["delay", "loss", "duplication", "corrupt", "reordering"]: - netem_op_tag = netem_tag.find(netem_op) - if netem_op_tag is not None: - options_tag = netem_op_tag.find("options") - options = self._process_options(options_tag) - if len(options) > 0: - self._validate_netem(options, netem_op, netem_tag) - interface[netem_op] = options - return interface - - def _process_task(self, task_tag): - task = XmlData(task_tag) - - if self._has_attribute(task_tag, "quit_on_fail"): - task["quit_on_fail"] = self._get_attribute(task_tag, "quit_on_fail") - - if self._has_attribute(task_tag, "module_dir"): - base_dir = os.path.dirname(task_tag.attrib["__file"]) - dir_path = str(self._get_attribute(task_tag, "module_dir")) - exp_path = os.path.expanduser(dir_path) - abs_path = os.path.join(base_dir, exp_path) - norm_path = os.path.normpath(abs_path) - task["module_dir"] = norm_path - - if self._has_attribute(task_tag, "tools_dir"): - base_dir = os.path.dirname(task_tag.attrib["__file"]) - dir_path = str(self._get_attribute(task_tag, "tools_dir")) - exp_path = os.path.expanduser(dir_path) - abs_path = os.path.join(base_dir, exp_path) - norm_path = os.path.normpath(abs_path) - task["tools_dir"] = norm_path - - if self._has_attribute(task_tag, "python"): - task["python"] = self._get_attribute(task_tag, "python") - return task - - if len(task_tag) > 0: - task["commands"] = XmlCollection(task_tag) - for cmd_tag in task_tag: - if cmd_tag.tag == "run": - cmd = self._process_run_cmd(cmd_tag) - elif cmd_tag.tag == "config": - cmd = self._process_config_cmd(cmd_tag) - elif cmd_tag.tag == "ctl_wait": - cmd = self._process_ctl_wait_cmd(cmd_tag) - elif cmd_tag.tag in ["wait", "intr", "kill"]: - cmd = self._process_signal_cmd(cmd_tag) - else: - msg = "Unknown command '%s'." 
% cmd_tag.tag - raise RecipeError(msg, cmd_tag) - - task["commands"].append(cmd) - - return task - - def _process_run_cmd(self, cmd_tag): - cmd = XmlData(cmd_tag) - cmd["host"] = self._get_attribute(cmd_tag, "host") - - cmd["netns"] = None - if self._has_attribute(cmd_tag, "netns"): - cmd["netns"] = self._get_attribute(cmd_tag, "netns") - - has_module = self._has_attribute(cmd_tag, "module") - has_command = self._has_attribute(cmd_tag, "command") - has_from = self._has_attribute(cmd_tag, "from") - - if (has_module and has_command) or (has_module and has_from): - msg = "Invalid combination of attributes." - raise RecipeError(msg, cmd) - - if has_module: - cmd["type"] = "test" - cmd["module"] = self._get_attribute(cmd_tag, "module") - - # options - opts_tag = cmd_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - cmd["options"] = opts - elif has_command: - cmd["type"] = "exec" - cmd["command"] = self._get_attribute(cmd_tag, "command") - - if self._has_attribute(cmd_tag, "from"): - cmd["from"] = self._get_attribute(cmd_tag, "from") - - if self._has_attribute(cmd_tag, "bg_id"): - cmd["bg_id"] = self._get_attribute(cmd_tag, "bg_id") - - if self._has_attribute(cmd_tag, "timeout"): - cmd["timeout"] = self._get_attribute(cmd_tag, "timeout") - - if self._has_attribute(cmd_tag, "expect"): - cmd["expect"] = self._get_attribute(cmd_tag, "expect") - - return cmd - - def _process_config_cmd(self, cmd_tag): - cmd = XmlData(cmd_tag) - cmd["type"] = "config" - cmd["host"] = self._get_attribute(cmd_tag, "host") - - cmd["netns"] = None - if self._has_attribute(cmd_tag, "netns"): - cmd["netns"] = self._get_attribute(cmd_tag, "netns") - - if self._has_attribute(cmd_tag, "persistent"): - cmd["persistent"] = self._get_attribute(cmd_tag, "persistent") - - # inline option - if self._has_attribute(cmd_tag, "option"): - cmd["options"] = XmlCollection(cmd_tag) - if self._has_attribute(cmd_tag, "value"): - opt = XmlData(cmd_tag) - opt["name"] = 
self._get_attribute(cmd_tag, "option") - opt["value"] = self._get_attribute(cmd_tag, "value") - - cmd["options"] = XmlCollection(cmd_tag) - cmd["options"].append(opt) - else: - raise RecipeError("Missing option value.", cmd) - else: - # options - opts_tag = cmd_tag.find("options") - opts = self._process_options(opts_tag) - if len(opts) > 0: - cmd["options"] = opts - - return cmd - - def _process_ctl_wait_cmd(self, cmd_tag): - cmd = XmlData(cmd_tag) - cmd["type"] = "ctl_wait" - cmd["seconds"] = self._get_attribute(cmd_tag, "seconds") - return cmd - - def _process_signal_cmd(self, cmd_tag): - cmd = XmlData(cmd_tag) - cmd["type"] = cmd_tag.tag - cmd["host"] = self._get_attribute(cmd_tag, "host") - cmd["bg_id"] = self._get_attribute(cmd_tag, "bg_id") - cmd["netns"] = None - return cmd diff --git a/lnst/Controller/SlavePool.py b/lnst/Controller/SlavePool.py deleted file mode 100644 index 13cc34e..0000000 --- a/lnst/Controller/SlavePool.py +++ /dev/null @@ -1,648 +0,0 @@ -""" -This module contains the implementation of the SlavePool class that -can be used to maintain a cluster of test machines. - -These machines can be provisioned and used in test recipes. - -Copyright 2012 Red Hat, Inc. -Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. -""" - -__author__ = """ -rpazdera@redhat.com (Radek Pazdera) -""" - -import logging -import os -import re -import socket -import select -from lnst.Common.Config import lnst_config -from lnst.Common.NetUtils import normalize_hwaddr -from lnst.Controller.Machine import Machine -from lnst.Controller.SlaveMachineParser import SlaveMachineParser -from lnst.Controller.SlaveMachineParser import SlaveMachineError -from lnst.Common.Colours import decorate_with_preset -from lnst.Common.Utils import check_process_running - -class SlavePool: - """ - This class is responsible for managing test machines that - are available at the controller and can be used for testing. 
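The SlavePool being removed here probes each pool machine with a non-blocking connect plus a select() writability wait, then reads SO_ERROR to learn how the connect ended. A condensed, self-contained sketch of that probe pattern (function name hypothetical, assuming a plain TCP endpoint):

```python
import select
import socket

def probe_reachable(host, port, timeout=5.0):
    """Start a non-blocking connect, wait for the socket to become
    writable, then read SO_ERROR to learn the connect's outcome --
    the same select()-based availability check SlavePool performs."""
    s = socket.socket()
    s.setblocking(False)
    try:
        s.connect((host, port))
    except OSError:
        pass  # EINPROGRESS is expected for a non-blocking connect
    _, writable, _ = select.select([], [s], [], timeout)
    try:
        if not writable:
            return False  # no verdict within the timeout
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0
    finally:
        s.close()
```

Probing many machines this way (as the removed code does, with one socket per machine in a select loop) keeps a slow or dead host from serializing the whole pool scan.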
- """ - def __init__(self, pools, pool_checks=True): - self._map = {} - self._pools = {} - self._pool = {} - - self._machine_matches = [] - self._network_matches = [] - - self._allow_virt = lnst_config.get_option("environment", - "allow_virtual") - self._allow_virt &= check_process_running("libvirtd") - self._pool_checks = pool_checks - - self._mapper = SetupMapper() - self._mreqs = None - - logging.info("Checking machine pool availability.") - for pool_name, pool_dir in pools.items(): - self._pools[pool_name] = {} - self.add_dir(pool_name, pool_dir) - if len(self._pools[pool_name]) == 0: - del self._pools[pool_name] - - self._mapper.set_pools(self._pools) - logging.info("Finished loading pools.") - - def get_pools(self): - return self._pools - - def add_dir(self, pool_name, dir_path): - logging.info("Processing pool '%s', directory '%s'" % (pool_name, - dir_path)) - pool = self._pools[pool_name] - - try: - dentries = os.listdir(dir_path) - except OSError: - logging.warn("Directory '%s' does not exist for pool '%s'" % - (dir_path, - pool_name)) - return - - for dirent in dentries: - m_id, m = self.add_file(pool_name, dir_path, dirent) - if m_id != None and m != None: - pool[m_id] = m - - if len(pool) == 0: - logging.warn("No machines found in pool '%s', directory '%s'" % - (pool_name, - dir_path)) - - max_len = 0 - for m_id in pool.keys(): - if len(m_id) > max_len: - max_len = len(m_id) - - if self._pool_checks: - check_sockets = {} - for m_id, m in sorted(pool.iteritems()): - hostname = m["params"]["hostname"] - if "rpc_port" in m["params"]: - port = m["params"]["rpc_port"] - else: - port = lnst_config.get_option('environment', 'rpcport') - - logging.debug("Querying machine '%s': %s:%s" %\ - (m_id, hostname, port)) - - s = socket.socket() - s.settimeout(0) - try: - s.connect((hostname, port)) - except: - pass - check_sockets[s] = m_id - - while len(check_sockets) > 0: - rl, wl, el = select.select([], check_sockets.keys(), []) - for s in wl: - err = 
s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) - m_id = check_sockets[s] - if err == 0: - pool[m_id]["available"] = True - s.shutdown(socket.SHUT_RDWR) - s.close() - del check_sockets[s] - else: - pool[m_id]["available"] = False - s.close() - del check_sockets[s] - else: - for m_id in pool.keys(): - pool[m_id]["available"] = True - - for m_id in sorted(list(pool.keys())): - m = pool[m_id] - if m["available"]: - if 'libvirt_domain' in m['params']: - libvirt_msg = " libvirt_domain: %s" %\ - m['params']['libvirt_domain'] - else: - libvirt_msg = "" - msg = "%s%s [%s] %s" % (m_id, (max_len - len(m_id)) * " ", - decorate_with_preset("UP", "pass"), - libvirt_msg) - else: - msg = "%s%s [%s]" % (m_id, (max_len - len(m_id)) * " ", - decorate_with_preset("DOWN", "fail")) - del pool[m_id] - - logging.info(msg) - - def add_file(self, pool_name, dir_path, dirent): - filepath = dir_path + "/" + dirent - pool = self._pools[pool_name] - if os.path.isfile(filepath) and re.search(".xml$", filepath, re.I): - dirname, basename = os.path.split(filepath) - m_id = re.sub(".[xX][mM][lL]$", "", basename) - - parser = SlaveMachineParser(filepath) - xml_data = parser.parse() - machine_spec = self._process_machine_xml_data(m_id, xml_data) - - if 'libvirt_domain' in machine_spec['params'] and \ - not self._allow_virt: - logging.debug("libvirtd not running or allow_virtual "\ - "disabled. Removing libvirt_domain from "\ - "machine '%s'" % m_id) - del machine_spec['params']['libvirt_domain'] - - # Check if there isn't any machine with the same - # hostname or libvirt_domain already in the pool - for pm_id, m in pool.iteritems(): - pm = m["params"] - rm = machine_spec["params"] - if pm["hostname"] == rm["hostname"]: - msg = "You have the same machine listed twice in " \ - "your pool ('%s' and '%s')." 
% (m_id, pm_id) - raise SlaveMachineError(msg) - - if "libvirt_domain" in rm and "libvirt_domain" in pm and \ - pm["libvirt_domain"] == rm["libvirt_domain"]: - msg = "You have the same libvirt_domain listed twice in " \ - "your pool ('%s' and '%s')." % (m_id, pm_id) - raise SlaveMachineError(msg) - - return (m_id, machine_spec) - return (None, None) - - def _process_machine_xml_data(self, m_id, machine_xml_data): - machine_spec = {"interfaces": {}, "params":{}, "security": {}} - - # process parameters - if "params" in machine_xml_data: - for param in machine_xml_data["params"]: - name = str(param["name"]) - value = str(param["value"]) - - if name == "rpc_port": - machine_spec["params"][name] = int(value) - else: - machine_spec["params"][name] = value - - mandatory_params = ["hostname"] - for p in mandatory_params: - if p not in machine_spec["params"]: - msg = "Mandatory parameter '%s' missing for machine %s." \ - % (p, m_id) - raise SlaveMachineError(msg, machine_xml_data["params"]) - - # process interfaces - if "interfaces" in machine_xml_data: - for iface in machine_xml_data["interfaces"]: - if_id = iface["id"] - iface_spec = self._process_iface_xml_data(m_id, iface) - - if if_id not in machine_spec["interfaces"]: - machine_spec["interfaces"][if_id] = iface_spec - else: - msg = "Duplicate interface id '%s'." % if_id - raise SlaveMachineError(msg, iface) - else: - if "libvirt_domain" not in machine_spec["params"]: - msg = "Machine '%s' has no testing interfaces. " \ - "This setup is supported only for virtual slaves." 
\ - % m_id - raise SlaveMachineError(msg, machine_xml_data) - - machine_spec["security"] = machine_xml_data["security"] - - return machine_spec - - def _process_iface_xml_data(self, m_id, iface): - if_id = iface["id"] - iface_spec = {"params": {}} - iface_spec["network"] = iface["network"] - - for param in iface["params"]: - name = str(param["name"]) - value = str(param["value"]) - - if name == "hwaddr": - iface_spec["params"][name] = normalize_hwaddr(value) - else: - iface_spec["params"][name] = value - - mandatory_params = ["hwaddr"] - for p in mandatory_params: - if p not in iface_spec["params"]: - msg = "Mandatory parameter '%s' missing for machine %s, " \ - "interface '%s'." % (p, m_id, if_id) - raise SlaveMachineError(msg, iface["params"]) - - return iface_spec - - def set_machine_requirements(self, mreqs): - self._mreqs = mreqs - self._mapper.set_requirements(mreqs) - self._mapper.reset_match_state() - - def provision_machines(self, machines): - """ - This method will try to map a dictionary of machines' - requirements to a pool of machines that is available to - this instance. 
- - :param machines: Setup request (dict of required machines) - :type machines: dict - - :return: True if the pool could satisfy the request - :rtype: bool - """ - mapper = self._mapper - logging.info("Matching machines, without virtuals.") - res = mapper.match() - - if not res and not mapper.get_virtual() and self._allow_virt: - logging.info("Match failed for normal machines, falling back "\ - "to matching virtual machines.") - mapper.set_virtual(self._allow_virt) - mapper.reset_match_state() - res = mapper.match() - - if res: - self._map = mapper.get_mapping() - else: - self._map = {} - - if self._map == {}: - self._pool = {} - return False - else: - self._pool = self._pools[self._map["pool_name"]] - - if self._map["virtual"]: - mreqs = self._mreqs - for m_id in self._map["machines"]: - machines[m_id] = self._prepare_virtual_slave(m_id, mreqs[m_id]) - else: - for m_id in self._map["machines"]: - machines[m_id] = self._get_mapped_slave(m_id) - - return True - - def is_setup_virtual(self): - return self._map["virtual"] - - def get_match(self): - return self._map - - def _get_machine_mapping(self, m_id): - return self._map["machines"][m_id]["target"] - - def _get_interface_mapping(self, m_id, if_id): - return self._map["machines"][m_id]["interfaces"][if_id] - - def _get_network_mapping(self, net_id): - return self._map["networks"][net_id] - - def _get_mapped_slave(self, tm_id): - pm_id = self._get_machine_mapping(tm_id) - pm = self._pool[pm_id] - - hostname = pm["params"]["hostname"] - - rpcport = None - if "rpc_port" in pm["params"]: - rpcport = pm["params"]["rpc_port"] - - machine = Machine(tm_id, hostname, None, rpcport, pm["security"]) - - used = [] - if_map = self._map["machines"][tm_id]["interfaces"] - for t_if, p_if in if_map.iteritems(): - pool_id = p_if["target"] - used.append(pool_id) - if_data = pm["interfaces"][pool_id] - - iface = machine.new_static_interface(t_if, "eth") - iface.set_hwaddr(if_data["params"]["hwaddr"]) - - for t_net, p_net in 
self._map["networks"].iteritems(): - if pm["interfaces"][pool_id]["network"] == p_net: - iface.set_network(t_net) - break - - for if_id, if_data in pm["interfaces"].iteritems(): - if if_id not in used: - iface = machine.new_unused_interface("eth") - iface.set_hwaddr(if_data["params"]["hwaddr"]) - iface.set_network(None) - - return machine - - def _prepare_virtual_slave(self, tm_id, tm): - pm_id = self._get_machine_mapping(tm_id) - pm = self._pool[pm_id] - - hostname = pm["params"]["hostname"] - libvirt_domain = pm["params"]["libvirt_domain"] - - rpcport = None - if "rpc_port" in pm["params"]: - rpcport = pm["params"]["rpc_port"] - - machine = Machine(tm_id, hostname, libvirt_domain, rpcport, - pm["security"]) - - # make all the existing unused - for if_id, if_data in pm["interfaces"].iteritems(): - iface = machine.new_unused_interface("eth") - iface.set_hwaddr(if_data["params"]["hwaddr"]) - iface.set_network(None) - - # add all the other devices - for if_id, if_data in tm["interfaces"].iteritems(): - iface = machine.new_virtual_interface(if_id, "eth") - iface.set_network(if_data["network"]) - if "hwaddr" in if_data["params"]: - iface.set_hwaddr(if_data["params"]["hwaddr"]) - if "driver" in if_data["params"]: - iface.set_driver(if_data["params"]["driver"]) - - return machine - -class MapperError(Exception): - pass - -class SetupMapper(object): - def __init__(self): - self._pools = {} - self._pool_stack = [] - self._pool = {} - self._pool_name = None - self._mreqs = {} - self._unmatched_req_machines = [] - self._matched_pool_machines = [] - self._machine_stack = [] - self._net_label_mapping = {} - self._virtual_matching = False - - def set_requirements(self, mreqs): - self._mreqs = mreqs - - def set_pools(self, pools): - self._pools = pools - - def set_virtual(self, virt_value): - self._virtual_matching = virt_value - - for m_id, m in self._mreqs.iteritems(): - for if_id, interface in m["interfaces"].iteritems(): - if "params" in interface: - for name, val in 
interface["params"].iteritems(): - if name not in ["hwaddr", "driver"]: - msg = "Dynamically created interfaces "\ - "only support the 'hwaddr' and 'driver' "\ - "option. '%s=%s' found on machine '%s' "\ - "interface '%s'" % (name, val, - m_id, if_id) - raise MapperError(msg) - - def get_virtual(self): - return self._virtual_matching - - def reset_match_state(self): - self._net_label_mapping = {} - self._machine_stack = [] - self._unmatched_req_machines = sorted(self._mreqs.keys(), reverse=True) - - self._pool_stack = list(self._pools.keys()) - if len(self._pool_stack) > 0: - self._pool_name = self._pool_stack.pop() - self._pool = self._pools[self._pool_name] - - self._unmatched_pool_machines = [] - for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True): - if self._virtual_matching: - if "libvirt_domain" in p_machine["params"]: - self._unmatched_pool_machines.append(p_id) - else: - self._unmatched_pool_machines.append(p_id) - - if len(self._pool) > 0 and len(self._mreqs) > 0: - self._push_machine_stack() - - def match(self): - logging.info("Trying match with pool: %s" % self._pool_name) - while len(self._machine_stack)>0: - stack_top = self._machine_stack[-1] - if self._virtual_matching and stack_top["virt_matched"]: - if stack_top["current_match"] != None: - cur_match = stack_top["current_match"] - self._unmatched_pool_machines.append(cur_match) - stack_top["current_match"] = None - stack_top["virt_matched"] = False - - if self._if_match(): - if len(self._unmatched_req_machines) > 0: - self._push_machine_stack() - continue - else: - return True - else: - #unmap the pool machine - if stack_top["current_match"] != None: - cur_match = stack_top["current_match"] - self._unmatched_pool_machines.append(cur_match) - stack_top["current_match"] = None - - mreq_m_id = stack_top["m_id"] - while len(stack_top["remaining_matches"]) > 0: - pool_m_id = stack_top["remaining_matches"].pop() - if self._check_machine_compatibility(mreq_m_id, pool_m_id): - #map 
compatible pool machine - stack_top["current_match"] = pool_m_id - stack_top["unmatched_pool_ifs"] = \ - sorted(self._pool[pool_m_id]["interfaces"].keys(), - reverse=True) - self._unmatched_pool_machines.remove(pool_m_id) - break - - if stack_top["current_match"] != None: - #clear if mapping - stack_top["if_stack"] = [] - #next iteration will match the interfaces - if not self._virtual_matching: - self._push_if_stack() - continue - else: - self._pop_machine_stack() - if len(self._machine_stack) == 0 and\ - len(self._pool_stack) > 0: - logging.info("Match with pool %s not found." % - self._pool_name) - self._pool_name = self._pool_stack.pop() - self._pool = self._pools[self._pool_name] - logging.info("Trying match with pool: %s" % - self._pool_name) - - self._unmatched_pool_machines = [] - for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True): - if self._virtual_matching: - if "libvirt_domain" in p_machine["params"]: - self._unmatched_pool_machines.append(p_id) - else: - self._unmatched_pool_machines.append(p_id) - - if len(self._pool) > 0 and len(self._mreqs) > 0: - self._push_machine_stack() - continue - return False - - def _if_match(self): - m_stack_top = self._machine_stack[-1] - if_stack = m_stack_top["if_stack"] - - if self._virtual_matching: - if m_stack_top["current_match"] != None: - m_stack_top["virt_matched"] = True - return True - else: - return False - - while len(if_stack) > 0: - stack_top = if_stack[-1] - - req_m = self._mreqs[m_stack_top["m_id"]] - pool_m = self._pool[m_stack_top["current_match"]] - req_if = req_m["interfaces"][stack_top["if_id"]] - req_net_label = req_if["network"] - - if stack_top["current_match"] != None: - cur_match = stack_top["current_match"] - m_stack_top["unmatched_pool_ifs"].append(cur_match) - pool_if = pool_m["interfaces"][cur_match] - pool_net_label = pool_if["network"] - net_label_mapping = self._net_label_mapping[req_net_label] - if net_label_mapping == (pool_net_label, m_stack_top["m_id"], - 
stack_top["if_id"]): - del self._net_label_mapping[req_net_label] - stack_top["current_match"] = None - - while len(stack_top["remaining_matches"]) > 0: - pool_if_id = stack_top["remaining_matches"].pop() - pool_if = pool_m["interfaces"][pool_if_id] - if self._check_interface_compatibility(req_if, pool_if): - #map compatible interfaces - stack_top["current_match"] = pool_if_id - if req_net_label not in self._net_label_mapping: - self._net_label_mapping[req_net_label] =\ - (pool_if["network"], - m_stack_top["m_id"], - stack_top["if_id"]) - m_stack_top["unmatched_pool_ifs"].remove(pool_if_id) - break - - if stack_top["current_match"] != None: - if len(m_stack_top["unmatched_ifs"]) > 0: - self._push_if_stack() - continue - else: - return True - else: - self._pop_if_stack() - continue - return False - - def _push_machine_stack(self): - machine_match = {} - machine_match["m_id"] = self._unmatched_req_machines.pop() - machine_match["current_match"] = None - machine_match["remaining_matches"] = list(self._unmatched_pool_machines) - machine_match["if_stack"] = [] - - machine = self._mreqs[machine_match["m_id"]] - machine_match["unmatched_ifs"] = sorted(machine["interfaces"].keys(), - reverse=True) - machine_match["unmatched_pool_ifs"] = [] - - if self._virtual_matching: - machine_match["virt_matched"] = False - - self._machine_stack.append(machine_match) - - def _pop_machine_stack(self): - stack_top = self._machine_stack.pop() - self._unmatched_req_machines.append(stack_top["m_id"]) - - def _push_if_stack(self): - m_stack_top = self._machine_stack[-1] - if_match = {} - if_match["if_id"] = m_stack_top["unmatched_ifs"].pop() - if_match["current_match"] = None - if_match["remaining_matches"] = list(m_stack_top["unmatched_pool_ifs"]) - - m_stack_top["if_stack"].append(if_match) - - def _pop_if_stack(self): - m_stack_top = self._machine_stack[-1] - if_stack_top = m_stack_top["if_stack"].pop() - m_stack_top["unmatched_ifs"].append(if_stack_top["if_id"]) - - def 
_check_machine_compatibility(self, req_id, pool_id): - req_machine = self._mreqs[req_id] - pool_machine = self._pool[pool_id] - for param, value in req_machine["params"].iteritems(): - if param not in pool_machine["params"] or\ - value != pool_machine["params"][param]: - return False - return True - - def _check_interface_compatibility(self, req_if, pool_if): - label_mapping = self._net_label_mapping - for req_label, mapping in label_mapping.iteritems(): - if req_label == req_if["network"] and\ - mapping[0] != pool_if["network"]: - return False - if mapping[0] == pool_if["network"] and\ - req_label != req_if["network"]: - return False - for param, value in req_if["params"].iteritems(): - if param not in pool_if["params"] or\ - value != pool_if["params"][param]: - return False - return True - - def get_mapping(self): - mapping = {"machines": {}, "networks": {}, "virtual": False, - "pool_name": self._pool_name} - - for req_label, label_map in self._net_label_mapping.iteritems(): - mapping["networks"][req_label] = label_map[0] - - for machine in self._machine_stack: - m_map = mapping["machines"][machine["m_id"]] = {} - - m_map["target"] = machine["current_match"] - - hostname = self._pool[m_map["target"]]["params"]["hostname"] - m_map["hostname"] = hostname - - interfaces = m_map["interfaces"] = {} - if_stack = machine["if_stack"] - for interface in if_stack: - i = interfaces[interface["if_id"]] = {} - i["target"] = interface["current_match"] - pool_if = self._pool[m_map["target"]]["interfaces"][i["target"]] - i["hwaddr"] = pool_if["params"]["hwaddr"] - - - if self._virtual_matching: - mapping["virtual"] = True - return mapping diff --git a/lnst/Controller/XmlParser.py b/lnst/Controller/XmlParser.py deleted file mode 100644 index 355b5e8..0000000 --- a/lnst/Controller/XmlParser.py +++ /dev/null @@ -1,188 +0,0 @@ -""" -This module contains the XmlParser and LnstParser classes. - -Copyright 2013 Red Hat, Inc. 
-Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. -""" - -__author__ = """ -rpazdera@redhat.com (Radek Pazdera) -""" - -import os -import re -import sys -import copy -from lxml import etree -from urllib2 import urlopen -from lnst.Common.Config import lnst_config -from lnst.Controller.XmlTemplates import XmlTemplates -from lnst.Controller.XmlProcessing import XmlProcessingError - -class XmlParser(object): - XINCLUDE_RE = r"{http://www.w3.org/[0-9]{4}/XInclude}include" - - def __init__(self, schema_file, xml_path): - # locate the schema file - # try git path - dirname = os.path.dirname(sys.argv[0]) - schema_path = os.path.join(dirname, schema_file) - if not os.path.exists(schema_path): - # try configuration - res_dir = lnst_config.get_option("environment", "resource_dir") - schema_path = os.path.join(res_dir, schema_file) - - if not os.path.exists(schema_path): - raise Exception("The recipe schema file was not found. " + \ - "Your LNST installation is corrupt!") - - self._template_proc = XmlTemplates() - - self._path = xml_path - relaxng_doc = etree.parse(schema_path) - self._schema = etree.RelaxNG(relaxng_doc) - - def parse(self): - doc = self._parse(self._path) - self._remove_comments(doc) - - # Due to a weird implementation of XInclude in lxml, the - # XmlParser resolves included documents on it's own. - # - # To be able to tell later on where each tag was located - # in the XML document, we add a '__file' attribute to - # each element of the tree during the parsing. - # - # However, these special attributes are of course not - # valid according to our schemas. To solve this, a copy of - # the tree is made and the '__file' attributes are removed - # before validation. - # - # XXX This is a *EXTREMELY* dirty hack. Ideas/proposals - # for cleaner solutions are more than welcome! 
- root_tag = self._init_loc(doc.getroot(), self._path) - self._expand_xinclude(root_tag, os.path.dirname(self._path)) - - self._template_proc.process_aliases(root_tag) - - try: - self._validate(doc) - except: - err = self._schema.error_log[0] - loc = {"file": os.path.basename(err.filename), - "line": err.line, "col": err.column} - exc = XmlProcessingError(err.message) - exc.set_loc(loc) - raise exc - - return self._process(root_tag) - - def _parse(self, path): - try: - if path.startswith('https'): - doc = etree.parse(urlopen(path)) - else: - doc = etree.parse(path) - except etree.LxmlError as err: - # A workaround for cases when lxml (quite strangely) - # sets the filename to <string>. - if err.error_log[0].filename == "<string>": - filename = self._path - else: - filename = err.error_log[0].filename - loc = {"file": os.path.basename(filename), - "line": err.error_log[0].line, - "col": err.error_log[0].column} - exc = XmlProcessingError(err.error_log[0].message) - exc.set_loc(loc) - raise exc - except Exception as err: - loc = {"file": os.path.basename(self._path), - "line": None, - "col": None} - exc = XmlProcessingError(str(err)) - exc.set_loc(loc) - raise exc - - return doc - - def _process(self, root_tag): - pass - - def set_machines(self, machines): - self._template_proc.set_machines(machines) - - def set_aliases(self, defined, overriden): - self._template_proc.set_aliases(defined, overriden) - - def _has_attribute(self, element, attr): - return attr in element.attrib - - def _get_attribute(self, element, attr): - text = element.attrib[attr].strip() - return self._template_proc.expand_functions(text) - - def _get_content(self, element): - text = etree.tostring(element, method="text").strip() - return self._template_proc.expand_functions(text) - - def _expand_xinclude(self, elem, base_url=""): - for e in elem: - if re.match(self.XINCLUDE_RE, str(e.tag)): - href = os.path.join(base_url, e.get("href")) - filename = os.path.basename(href) - - doc = 
self._parse(href) - self._remove_comments(doc) - node = doc.getroot() - - node = self._init_loc(node, href) - - if e.tail: - node.tail = (node.tail or "") + e.tail - self._expand_xinclude(node, os.path.dirname(href)) - - parent = e.getparent() - if parent is None: - return node - - parent.replace(e, node) - else: - self._expand_xinclude(e, base_url) - return elem - - def _remove_comments(self, doc): - comments = doc.xpath('//comment()') - for c in comments: - p = c.getparent() - if p is not None: - p.remove(c) - - def _init_loc(self, elem, filename): - """ Remove all coment tags from the tree """ - - elem.attrib["__file"] = filename - for e in elem: - self._init_loc(e, filename) - - return elem - - def _validate(self, original): - """ - Make a copy of the tree, remove the '__file' attributes - and validate against the appropriate schema. - - Very unfortunate solution. - """ - doc = copy.deepcopy(original) - root = doc.getroot() - - self._prepare_tree_for_validation(root) - self._schema.assertValid(doc) - - def _prepare_tree_for_validation(self, elem): - if "__file" in elem.attrib: - del elem.attrib["__file"] - for e in elem: - self._prepare_tree_for_validation(e) diff --git a/lnst/Controller/XmlProcessing.py b/lnst/Controller/XmlProcessing.py deleted file mode 100644 index b80c3a3..0000000 --- a/lnst/Controller/XmlProcessing.py +++ /dev/null @@ -1,235 +0,0 @@ -""" -This module contains code code for XML parsing and processing. - -Copyright 2012 Red Hat, Inc. -Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. 
-""" - -__author__ = """ -rpazdera@redhat.com (Radek Pazdera) -""" - -import os - -class XmlProcessingError(Exception): - """ Exception thrown on parsing errors """ - - _filename = None - _line = None - _col = None - - def __init__(self, msg, obj=None): - super(XmlProcessingError, self).__init__() - self._msg = msg - - if obj is not None: - if hasattr(obj, "loc"): - self.set_loc(obj.loc) - elif hasattr(obj, "attrib") and "__file" in obj.attrib: - loc = {} - loc["file"] = obj.attrib["__file"] - if hasattr(obj, "sourceline"): - loc["line"] = obj.sourceline - self.set_loc(loc) - elif hasattr(obj, "base") and obj.base != None: - loc = {} - loc["file"] = os.path.basename(obj.base) - if hasattr(obj, "sourceline"): - loc["line"] = obj.sourceline - self.set_loc(loc) - - - def set_loc(self, loc): - self._filename = loc["file"] - self._line = loc["line"] - if "col" in loc: - self._col = loc["col"] - - def __str__(self): - line = "" - col = "" - sep = "" - loc = "" - filename = "<unknown>" - - if self._filename: - filename = self._filename - - if self._line: - line = "%d" % self._line - sep = ":" - - if self._col: - col = "%s%d" % (sep, self._col) - - if self._line or self._col: - loc = "%s%s:" % (line, col) - - return "Parser error: %s:%s %s" % (filename, loc, self._msg) - -class XmlDataIterator: - def __init__(self, iterator): - self._iterator = iterator - - def __iter__(self): - return self - - def next(self): - n = self._iterator.next() - - # For normal iterators - if type(n) == XmlTemplateString: - return str(n) - - # For iteritems() iterators - if type(n) == tuple and len(n) == 2 and type(n[1]) == XmlTemplateString: - return (n[0], str(n[1])) - - return n - -class XmlCollection(list): - def __init__(self, node=None): - super(XmlCollection, self).__init__() - if node is not None: - if hasattr(node, "loc"): - self.loc = node.loc - elif "__file" in node.attrib: - loc = {} - loc["file"] = node.attrib["__file"] - if hasattr(node, "sourceline"): - loc["line"] = 
node.sourceline - self.loc = loc - elif hasattr(node, "base") and node.base != None: - loc = {} - loc["file"] = os.path.basename(node.base) - if hasattr(node, "sourceline"): - loc["line"] = node.sourceline - self.loc = loc - - def __getitem__(self, key): - value = super(XmlCollection, self).__getitem__(key) - if type(value) == XmlData or type(value) == XmlCollection: - return value - - return str(value) - - def __iter__(self): - it = super(XmlCollection, self).__iter__() - return XmlDataIterator(it) - - def to_list(self): - new_list = list() - for value in self: - if isinstance(value, XmlData): - new_val = value.to_dict() - elif isinstance(value, XmlCollection): - new_val = value.to_list() - elif isinstance(value, XmlTemplateString): - new_val = str(value) - else: - new_val = value - new_list.append(new_val) - - return new_list - -class XmlData(dict): - def __init__(self, node=None): - super(XmlData, self).__init__() - if node is not None: - if hasattr(node, "loc"): - self.loc = node.loc - elif "__file" in node.attrib: - loc = {} - loc["file"] = node.attrib["__file"] - if hasattr(node, "sourceline"): - loc["line"] = node.sourceline - self.loc = loc - elif hasattr(node, "base") and node.base != None: - loc = {} - loc["file"] = os.path.basename(node.base) - if hasattr(node, "sourceline"): - loc["line"] = node.sourceline - self.loc = loc - - def __getitem__(self, key): - value = super(XmlData, self).__getitem__(key) - if type(value) == XmlData or type(value) == XmlCollection\ - or value == None: - return value - - return str(value) - - def __iter__(self): - it = super(XmlData, self).__iter__() - return XmlDataIterator(it) - - def iteritems(self): - it = super(XmlData, self).iteritems() - return XmlDataIterator(it) - - def iterkeys(self): - it = super(XmlData, self).iterkeys() - return XmlDataIterator(it) - - def itervalues(self): - it = super(XmlData, self).itervalues() - return XmlDataIterator(it) - - def to_dict(self): - new_dict = dict() - for key, value in 
self.iteritems(): - if isinstance(value, XmlData): - new_val = value.to_dict() - elif isinstance(value, XmlCollection): - new_val = value.to_list() - elif isinstance(value, XmlTemplateString): - new_val = str(value) - else: - new_val = value - new_dict[key] = new_val - - return new_dict - -class XmlTemplateString(object): - def __init__(self, param=None, node=None): - if type(param) == str: - self._parts = [param] - elif type(param) == list: - self._parts = param - else: - self._parts = [] - - if node and hasattr(node, "loc"): - self.loc = node.loc - - def __add__(self, other): - if type(other) is str: - self.add_part(other) - elif type(other) is self.__class__: - self._parts += other._parts - else: - raise XmlProcessingError("Cannot concatenate %s and %s" % \ - str(type(self)), str(type(other))) - return self - - def __str__(self): - string = "" - for part in self._parts: - string += str(part) - return string - - def __hash__(self): - return hash(str(self)) - - def __eq__(self, other): - return str(self) == str(other) - - def __ne__(self, other): - return str(self) != str(other) - - def __len__(self): - return len(str(self)) - - def add_part(self, part): - self._parts.append(part) diff --git a/lnst/Controller/XmlTemplates.py b/lnst/Controller/XmlTemplates.py deleted file mode 100644 index a1541db..0000000 --- a/lnst/Controller/XmlTemplates.py +++ /dev/null @@ -1,438 +0,0 @@ -""" -This module contains code to aid processing templates in XML files/recipes -while they're being parsed. - -Templates are strings enclosed in curly braces {} and can be present -in all text elements of the XML file (this includes tag values or -attribute values). Templates cannot be used as a stubstitution for tag -names, attribute names or any other structural elements of the document. - -There are two supported types of templates: - - * aliases - $alias_name - * functions - function_name(param1, param2) - -Copyright 2012 Red Hat, Inc. 
-Licensed under the GNU General Public License, version 2 as -published by the Free Software Foundation; see COPYING for details. -""" - -__author__ = """ -rpazdera@redhat.com (Radek Pazdera) -""" - -import re -from lxml import etree -from lnst.Controller.XmlProcessing import XmlTemplateString -from lnst.Controller.Machine import MachineError, PrefixMissingError - -class XmlTemplateError(Exception): - pass - -class TemplateFunc(object): - def __init__(self, args, machines): - self._check_args(args) - self._args = args - - self._machines = machines - - def __str__(self): - return self._implementation() - - def _check_args(self, args): - pass - - def _implementation(self): - pass - -class IpFunc(TemplateFunc): - def _check_args(self, args): - if len(args) > 3: - msg = "Function ip() takes at most 3 arguments, %d passed" \ - % len(args) - raise XmlTemplateError(msg) - if len(args) < 2: - msg = "Function ip() must have at least 2 arguments, %d passed" \ - % len(args) - raise XmlTemplateError(msg) - - if len(args) == 3: - try: - int(args[2]) - except ValueError: - msg = "The third argument of ip() function must be an integer" - raise XmlTemplateError(msg) - - def _implementation(self): - m_id = self._args[0] - if_id = self._args[1] - addr = 0 - if len(self._args) == 3: - addr = self._args[2] - - try: - machine = self._machines[m_id] - except KeyError: - msg = "First parameter of function ip() is invalid: " \ - "Machine %s does not exist." % m_id - raise XmlTemplateError(msg) - - try: - iface = machine.get_interface(if_id) - except MachineError: - msg = "Second parameter of function ip() is invalid: "\ - "Interface %s does not exist." % if_id - raise XmlTemplateError(msg) - - try: - return iface.get_address(int(addr)) - except IndexError: - msg = "There is no address with index %s on machine %s, " \ - "interface %s." 
% (addr, m_id, if_id) - raise XmlTemplateError(msg) - -class DevnameFunc(TemplateFunc): - def _check_args(self, args): - if len(args) != 2: - msg = "Function devname() takes 2 arguments, %d passed." % len(args) - raise XmlTemplateError(msg) - - def _implementation(self): - m_id = self._args[0] - if_id = self._args[1] - - try: - machine = self._machines[m_id] - except KeyError: - msg = "First parameter of function devname() is invalid: " \ - "Machine %s does not exist." % m_id - raise XmlTemplateError(msg) - - try: - iface = machine.get_interface(if_id) - except MachineError: - msg = "Second parameter of function devname() is invalid: "\ - "Interface %s does not exist." % if_id - raise XmlTemplateError(msg) - - try: - return iface.get_devname() - except MachineError: - msg = "Devname not availablefor interface '%s' on machine '%s'." \ - % (m_id, if_id) - raise XmlTemplateError(msg) - -class PrefixFunc(TemplateFunc): - def _check_args(self, args): - if len(args) > 3: - msg = "Function prefix() takes at most 3 arguments, %d passed" \ - % len(args) - raise XmlTemplateError(msg) - if len(args) < 2: - msg = "Function prefix() must have at least 2 arguments, %d " \ - "passed" % len(args) - raise XmlTemplateError(msg) - - if len(args) == 3: - try: - int(args[2]) - except ValueError: - msg = "The third argument of prefix() function must be an " \ - "integer" - raise XmlTemplateError(msg) - - def _implementation(self): - m_id = self._args[0] - if_id = self._args[1] - addr = 0 - if len(self._args) == 3: - addr = self._args[2] - - try: - machine = self._machines[m_id] - except KeyError: - msg = "First parameter of function prefix() is invalid: " \ - "Machine %s does not exist." % m_id - raise XmlTemplateError(msg) - - try: - iface = machine.get_interface(if_id) - except MachineError: - msg = "Second parameter of function prefix() is invalid: "\ - "Interface %s does not exist." 
% if_id - raise XmlTemplateError(msg) - - try: - return iface.get_prefix(int(addr)) - except IndexError: - msg = "There is no address with index %s on machine %s, " \ - "interface %s." % (addr, m_id, if_id) - raise XmlTemplateError(msg) - except PrefixMissingError: - msg = "Address with the index %s for the interface %s on machine" \ - "%s does not contain any prefix" % (addr, m_id, if_id) - -class HwaddrFunc(TemplateFunc): - def _check_args(self, args): - if len(args) != 2: - msg = "Function hwaddr() takes 2 arguments, %d passed." % len(args) - raise XmlTemplateError(msg) - - def _implementation(self): - m_id = self._args[0] - if_id = self._args[1] - - try: - machine = self._machines[m_id] - except KeyError: - msg = "First parameter of function hwaddr() is invalid: " \ - "Machine %s does not exist." % m_id - raise XmlTemplateError(msg) - - try: - iface = machine.get_interface(if_id) - except MachineError: - msg = "Second parameter of function hwaddr() is invalid: "\ - "Interface %s does not exist." % if_id - raise XmlTemplateError(msg) - - try: - return iface.get_hwaddr() - except MachineError: - msg = "Hwaddr not availablefor interface '%s' on machine '%s'." \ - % (m_id, if_id) - raise XmlTemplateError(msg) - -class XmlTemplates: - """ This class serves as template processor """ - - _alias_re = "{$([a-zA-Z0-9_]+)}" - _func_re = "{([a-zA-Z0-9_]+)(([^()]*))}" - - _func_map = {"ip": IpFunc, "hwaddr": HwaddrFunc, "devname": DevnameFunc, \ - "prefix": PrefixFunc } - - def __init__(self, definitions=None): - if definitions: - self._definitions = [definitions] - else: - self._definitions = [{}] - - self._machines = {} - self._reserved_aliases = [] - - def set_definitions(self, defs): - """ Set alias definitions - - All existing definitions and namespace levels are - destroyed and replaced with new definitions. 
- """ - del self._definitions - self._definitions = [defs] - - def get_definitions(self): - """ Return definitions dict - - Definitions are returned as a single dictionary of - all currently defined aliases, regardless the internal - division to namespace levels. - """ - defs = {} - for level in self._definitions: - for name, val in level.iteritems(): - defs[name] = val - - return defs - - def set_machines(self, machines): - """ Assign machine information - - XmlTemplates use these information about the machines - to resolve template functions within the recipe. - """ - self._machines = machines - - def set_aliases(self, defined, overriden): - """ Set aliases defined or overriden from CLI """ - - for name, value in defined.iteritems(): - self.define_alias(name, value) - - self._overriden_aliases = overriden - - def define_alias(self, name, value): - """ Associate an alias name with some value - - The value can be of an atomic type or an array. The - definition is added to the current namespace level. - """ - - if not name in self._reserved_aliases: - self._definitions[-1][name] = value - else: - raise XmlTemplateError("Alias name '%s' is reserved" % name) - - def add_namespace_level(self): - """ Create new namespace level - - This method will create a new level for definitions on - the stack. All aliases, that will be defined after this - call will be dropped as soon as `drop_namespace_level' - is called. - """ - self._definitions.append({}) - - def drop_namespace_level(self): - """ Remove one namespace level - - This method will erease all defined aliases since the - last call of `add_namespace_level' method. All aliases, - that were defined beforehand will be kept. 
- """ - self._definitions.pop() - - def _find_definition(self, name): - if name in self._overriden_aliases: - return self._overriden_aliases[name] - - for level in reversed(self._definitions): - if name in level: - return level[name] - - err = "Alias '%s' is not defined here" % name - raise XmlTemplateError(err) - - def _dump_definitions(self): - dump = self._overriden_aliases.copy() - - for level in self._definitions: - for name in level: - if not name in dump: - dump[name] = level[name] - - return dump - - def process_aliases(self, element): - """ Expand aliases within an element and its children - - This method will iterate through the element tree that is - passed and expand aliases in all the text content and - attributes. - """ - if element.text != None: - element.text = self.expand_aliases(element.text) - - if element.tail != None: - element.tail = self.expand_aliases(element.tail) - - for name, value in element.attrib.iteritems(): - element.set(name, self.expand_aliases(value)) - - if element.tag == "define": - for alias in element.getchildren(): - name = alias.attrib["name"].strip() - if "value" in alias.attrib: - value = alias.attrib["value"].strip() - else: - value = etree.tostring(element, method="text").strip() - self.define_alias(name, value) - parent = element.getparent() - parent.remove(element) - return - - self.add_namespace_level() - - for child in element.getchildren(): - self.process_aliases(child) - - # do not drop alias definitions when at top-level so that python - # tasks are able to access them - if element.tag != "lnstrecipe": - self.drop_namespace_level() - - def expand_aliases(self, string): - while True: - alias_match = re.search(self._alias_re, string) - - if alias_match: - template = alias_match.group(0) - result = self._process_alias_template(template) - string = string.replace(template, result) - else: - break - - return string - - def _process_alias_template(self, string): - result = None - - alias_match = re.match(self._alias_re, 
string) - if alias_match: - alias_name = alias_match.group(1) - result = self._find_definition(alias_name) - - return result - - def expand_functions(self, string, node=None): - """ Process a string and expand it into a XmlTemplateString """ - - parts = self._partition_string(string) - value = XmlTemplateString(node=node) - - for part in parts: - value.add_part(part) - - return value - - def _partition_string(self, string): - """ Process templates in a string - - This method will process and expand all template functions - in a string. - - The function returns an array of string partitions and - unresolved template functions for further processing. - """ - - result = None - - func_match = re.search(self._func_re, string) - if func_match: - prefix = string[0:func_match.start(0)] - suffix = string[func_match.end(0):] - - template = func_match.group(0) - func = self._process_func_template(template) - - return self._partition_string(prefix) + [func] + \ - self._partition_string(suffix) - - return [string] - - def _process_func_template(self, string): - func_match = re.match(self._func_re, string) - if func_match: - func_name = func_match.group(1) - func_args = func_match.group(2) - - if func_args == None: - func_args = [] - else: - func_args = func_args.split(",") - - param_values = [] - for param in func_args: - param = param.strip() - if re.match(self._alias_re, param): - param = self._process_alias_template(param) - param_values.append(param) - - if func_name not in self._func_map: - msg = "Unknown template function '%s'." % func_name - raise XmlTemplateError(msg) - - func = self._func_map[func_name](param_values, self._machines) - return func - else: - msg = "The passed string is not a template function." - raise XmlTemplateError(msg)
From: Ondrej Lichtner olichtne@redhat.com
This adds the lnst.Devices and lnst.Tests packages so that they're recognized and installed by the setup.py install script.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 setup.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/setup.py b/setup.py
index 41b86b3..322731f 100755
--- a/setup.py
+++ b/setup.py
@@ -105,7 +105,7 @@ project website https://fedorahosted.org/lnst.
 """
 
 PACKAGES = ["lnst", "lnst.Common", "lnst.Controller", "lnst.Slave",
-            "lnst.RecipeCommon" ]
+            "lnst.RecipeCommon", "lnst.Devices", "lnst.Tests" ]
 
 SCRIPTS = ["lnst-ctl", "lnst-slave", "lnst-pool-wizard"]
 
 RECIPE_FILES = []
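The hunk above is the entire change: the two new packages are appended to the static PACKAGES list that setup.py passes to setup(), which installs one directory per dotted name. A trivial sketch of the resulting list (copied from the patch):

```python
# PACKAGES as it reads after this patch (copied from setup.py)
PACKAGES = ["lnst", "lnst.Common", "lnst.Controller", "lnst.Slave",
            "lnst.RecipeCommon", "lnst.Devices", "lnst.Tests"]

# setup(..., packages=PACKAGES) installs one directory per dotted name,
# so lnst/Devices and lnst/Tests are now picked up by `setup.py install`
new_packages = [p for p in PACKAGES if p in ("lnst.Devices", "lnst.Tests")]
print(new_packages)  # → ['lnst.Devices', 'lnst.Tests']
```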
From: Ondrej Lichtner olichtne@redhat.com
Moving the VirtNetCtl class to the lnst.Devices package since it is only used by the VirtualDevice class.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Devices/VirtNetCtl.py    | 85 +++++++++++++++++++++++++++++++++++++++++++
 lnst/Devices/VirtualDevice.py |  2 +-
 2 files changed, 86 insertions(+), 1 deletion(-)
 create mode 100644 lnst/Devices/VirtNetCtl.py

diff --git a/lnst/Devices/VirtNetCtl.py b/lnst/Devices/VirtNetCtl.py
new file mode 100644
index 0000000..2795baf
--- /dev/null
+++ b/lnst/Devices/VirtNetCtl.py
@@ -0,0 +1,85 @@
+"""
+Defines the VirtNetCtl class used for dynamically adding and removing network
+interface cards to libvirt guests.
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+olichtne@redhat.com (Ondrej Lichtner)
+"""
+
+import logging
+import libvirt
+from libvirt import libvirtError
+from lnst.Common.LnstError import LnstError
+
+#this is a global object because opening the connection to libvirt in every
+#object instance that uses it sometimes fails - the libvirt server probably
+#can't handle that many connections at a time
+_libvirt_conn = None
+
+def init_libvirt_con():
+    global _libvirt_conn
+    if _libvirt_conn is None:
+        _libvirt_conn = libvirt.open(None)
+
+class VirtNetCtlError(LnstError):
+    pass
+
+class VirtNetCtl(object):
+    _network_template = """
+    <network ipv6='yes'>
+        <name>{0}</name>
+        <bridge name='virbr_{0}' stp='off' delay='0' />
+        <domain name='{0}'/>
+    </network>
+    """
+
+    def __init__(self, name=None):
+        init_libvirt_con()
+
+        if not name:
+            name = self._generate_name()
+        self._name = name
+
+    def _generate_name(self):
+        devs = _libvirt_conn.listNetworks()
+
+        index = 0
+        while True:
+            name = "lnst_net%d" % index
+            index += 1
+            if name not in devs:
+                return name
+
+    def get_name(self):
+        return self._name
+
+    def init(self):
+        try:
+            network_xml = self._network_template.format(self._name)
+            _libvirt_conn.networkCreateXML(network_xml)
+            logging.debug("libvirt network '%s' created" % self._name)
+            return True
+        except libvirtError as e:
+            raise VirtNetCtlError(str(e))
+
+    def cleanup(self):
+        try:
+            network = _libvirt_conn.networkLookupByName(self._name)
+            network.destroy()
+            logging.debug("libvirt network '%s' destroyed" % self._name)
+            return True
+        except libvirtError as e:
+            raise VirtNetCtlError(str(e))
+
+    @classmethod
+    def network_exist(cls, net_name):
+        try:
+            _libvirt_conn.networkLookupByName(net_name)
+            return True
+        except:
+            return False
diff --git a/lnst/Devices/VirtualDevice.py b/lnst/Devices/VirtualDevice.py
index 49fe088..5c41a12 100644
--- a/lnst/Devices/VirtualDevice.py
+++ b/lnst/Devices/VirtualDevice.py
@@ -19,7 +19,7 @@ from lnst.Devices.RemoteDevice import RemoteDevice
 
 # conditional support for libvirt
 if check_process_running("libvirtd"):
-    from lnst.Controller.VirtUtils import VirtNetCtl
+    from lnst.Devices.VirtNetCtl import VirtNetCtl
 
 class VirtualDevice(RemoteDevice):
     """Remote eth device created on the controller through libvirt
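The _generate_name() method in the VirtNetCtl patch above is pure logic apart from the libvirt listNetworks() query: it returns the first "lnst_net<N>" name not already in use. A self-contained sketch with the libvirt call replaced by a plain list (the stub argument is an illustration, not part of the real API):

```python
def generate_name(existing):
    # mirror VirtNetCtl._generate_name(): pick the first
    # "lnst_net<N>" name not present in the existing networks
    index = 0
    while True:
        name = "lnst_net%d" % index
        index += 1
        if name not in existing:
            return name

print(generate_name(["lnst_net0", "lnst_net1"]))  # → lnst_net2
```

Note that the real method reuses gaps as well: if only lnst_net1 exists, lnst_net0 is returned, since the search always restarts from index 0.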
From: Ondrej Lichtner olichtne@redhat.com
Since the VirtNetCtl class was moved into the lnst.Devices package, it is no longer needed in this module. Removing it along with the BridgeCtl class, which hasn't been used in a long time (since 2013).
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Machine.py       |   2 +-
 lnst/Controller/VirtDomainCtl.py |  98 ++++++++++++
 lnst/Controller/VirtUtils.py     | 269 ---------------------------------------
 3 files changed, 99 insertions(+), 270 deletions(-)
 create mode 100644 lnst/Controller/VirtDomainCtl.py
 delete mode 100644 lnst/Controller/VirtUtils.py

diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py
index f057390..2c54f23 100644
--- a/lnst/Controller/Machine.py
+++ b/lnst/Controller/Machine.py
@@ -35,7 +35,7 @@ from lnst.Devices.VirtualDevice import VirtualDevice
 
 # conditional support for libvirt
 if check_process_running("libvirtd"):
-    from lnst.Controller.VirtUtils import VirtNetCtl, VirtDomainCtl
+    from lnst.Controller.VirtDomainCtl import VirtDomainCtl
 
 class MachineError(ControllerError):
     pass
diff --git a/lnst/Controller/VirtDomainCtl.py b/lnst/Controller/VirtDomainCtl.py
new file mode 100644
index 0000000..96c7913
--- /dev/null
+++ b/lnst/Controller/VirtDomainCtl.py
@@ -0,0 +1,98 @@
+"""
+Utilities for manipulating virtualization host, its guests and
+connections between them
+
+Copyright 2017 Red Hat, Inc.
+Licensed under the GNU General Public License, version 2 as
+published by the Free Software Foundation; see COPYING for details.
+"""
+
+__author__ = """
+rpazdera@redhat.com (Radek Pazdera)
+"""
+
+import logging
+import libvirt
+from libvirt import libvirtError
+from lnst.Controller.Common import ControllerError
+
+#this is a global object because opening the connection to libvirt in every
+#object instance that uses it sometimes fails - the libvirt server probably
+#can't handle that many connections at a time
+_libvirt_conn = None
+
+def init_libvirt_con():
+    global _libvirt_conn
+    if _libvirt_conn is None:
+        _libvirt_conn = libvirt.open(None)
+
+class VirtDomainCtlError(ControllerError):
+    pass
+
+class VirtDomainCtl(object):
+    _net_device_template = """
+    <interface type='network'>
+        <mac address='{0}'/>
+        <source network='{1}'/>
+        <model type='{2}'/>
+    </interface>
+    """
+    _net_device_bare_template = """
+    <interface>
+        <mac address='{0}'/>
+    </interface>
+    """
+
+    def __init__(self, domain_name):
+        self._name = domain_name
+        self._created_interfaces = {}
+
+        init_libvirt_con()
+
+        try:
+            self._domain = _libvirt_conn.lookupByName(domain_name)
+        except:
+            raise VirtDomainCtlError("Domain '%s' doesn't exist!"
+                                     % domain_name)
+
+    def start(self):
+        self._domain.create()
+
+    def stop(self):
+        self._domain.destroy()
+
+    def restart(self):
+        self._domain.reboot()
+
+    def attach_interface(self, hw_addr, net_name, driver="virtio"):
+        try:
+            device_xml = self._net_device_template.format(hw_addr,
+                                                          net_name,
+                                                          driver)
+            self._domain.attachDevice(device_xml)
+            logging.debug("libvirt device with hwaddr '%s' "
+                          "driver '%s' attached" % (hw_addr, driver))
+            self._created_interfaces[hw_addr] = device_xml
+            return True
+        except libvirtError as e:
+            raise VirtDomainCtlError(str(e))
+
+    def detach_interface(self, hw_addr):
+        if hw_addr in self._created_interfaces:
+            device_xml = self._created_interfaces[hw_addr]
+        else:
+            device_xml = self._net_device_bare_template.format(hw_addr)
+
+        try:
+            self._domain.detachDevice(device_xml)
+            logging.debug("libvirt device with hwaddr '%s' detached" % hw_addr)
+            return True
+        except libvirtError as e:
+            raise VirtDomainCtlError(str(e))
+
+    @classmethod
+    def domain_exist(cls, domain_name):
+        try:
+            _libvirt_conn.lookupByName(domain_name)
+            return True
+        except:
+            return False
diff --git a/lnst/Controller/VirtUtils.py b/lnst/Controller/VirtUtils.py
deleted file mode 100644
index c8da28a..0000000
--- a/lnst/Controller/VirtUtils.py
+++ /dev/null
@@ -1,269 +0,0 @@
-"""
-Utilities for manipulating virtualization host, its guests and
-connections between them
-
-Copyright 2012 Red Hat, Inc.
-Licensed under the GNU General Public License, version 2 as
-published by the Free Software Foundation; see COPYING for details.
-"""
-
-__author__ = """
-rpazdera@redhat.com (Radek Pazdera)
-"""
-
-import logging
-import libvirt
-from libvirt import libvirtError
-from lnst.Common.ExecCmd import exec_cmd, ExecCmdFail
-from lnst.Common.NetUtils import scan_netdevs
-from lnst.Controller.Common import ControllerError
-
-#this is a global object because opening the connection to libvirt in every
-#object instance that uses it sometimes fails - the libvirt server probably
-#can't handle that many connections at a time
-_libvirt_conn = None
-
-def init_libvirt_con():
-    global _libvirt_conn
-    if _libvirt_conn is None:
-        _libvirt_conn = libvirt.open(None)
-
-class VirtUtilsError(ControllerError):
-    pass
-
-def _ip(cmd):
-    try:
-        exec_cmd("ip %s" % cmd)
-    except ExecCmdFail as err:
-        raise VirtUtilsError("ip command error: %s" % err)
-
-def _brctl(cmd):
-    try:
-        exec_cmd("brctl %s" % cmd)
-    except ExecCmdFail as err:
-        raise VirtUtilsError("brctl error: %s" % err)
-
-def _iptables(cmd):
-    try:
-        exec_cmd("iptables %s" % cmd)
-    except ExecCmdFail as err:
-        raise VirtUtilsError("iptables error: %s" % err)
-
-def _ip6tables(cmd):
-    try:
-        exec_cmd("ip6tables %s" % cmd)
-    except ExecCmdFail as err:
-        raise VirtUtilsError("ip6tables error: %s" % err)
-
-def _virsh(cmd):
-    try:
-        exec_cmd("virsh %s" % cmd, log_outputs=False)
-    except ExecCmdFail as err:
-        raise VirtUtilsError("virsh error: %s" % err)
-
-class VirtDomainCtl:
-    _net_device_template = """
-    <interface type='network'>
-        <mac address='{0}'/>
-        <source network='{1}'/>
-        <model type='{2}'/>
-    </interface>
-    """
-    _net_device_bare_template = """
-    <interface>
-        <mac address='{0}'/>
-    </interface>
-    """
-
-    def __init__(self, domain_name):
-        self._name = domain_name
-        self._created_interfaces = {}
-
-        init_libvirt_con()
-
-        try:
-            self._domain = _libvirt_conn.lookupByName(domain_name)
-        except:
-            raise VirtUtilsError("Domain '%s' doesn't exist!"
% domain_name) - - def start(self): - self._domain.create() - - def stop(self): - self._domain.destroy() - - def restart(self): - self._domain.reboot() - - def attach_interface(self, hw_addr, net_name, driver="virtio"): - try: - device_xml = self._net_device_template.format(hw_addr, - net_name, - driver) - self._domain.attachDevice(device_xml) - logging.debug("libvirt device with hwaddr '%s' " - "driver '%s' attached" % (hw_addr, driver)) - self._created_interfaces[hw_addr] = device_xml - return True - except libvirtError as e: - raise VirtUtilsError(str(e)) - - def detach_interface(self, hw_addr): - if hw_addr in self._created_interfaces: - device_xml = self._created_interfaces[hw_addr] - else: - device_xml = self._net_device_bare_template.format(hw_addr) - - try: - self._domain.detachDevice(device_xml) - logging.debug("libvirt device with hwaddr '%s' detached" % hw_addr) - return True - except libvirtError as e: - raise VirtUtilsError(str(e)) - - @classmethod - def domain_exist(cls, domain_name): - try: - _libvirt_conn.lookupByName(domain_name) - return True - except: - return False - -class NetCtl(object): - def __init__(self, name): - self._name = name - - def get_name(self): - return self._name - - def init(self): - pass - - def cleanup(self): - pass - -class VirtNetCtl(NetCtl): - _network_template = """ - <network ipv6='yes'> - <name>{0}</name> - <bridge name='virbr_{0}' stp='off' delay='0' /> - <domain name='{0}'/> - </network> - """ - - def __init__(self, name=None): - init_libvirt_con() - - if not name: - name = self._generate_name() - self._name = name - - def _generate_name(self): - devs = _libvirt_conn.listNetworks() - - index = 0 - while True: - name = "lnst_net%d" % index - index += 1 - if name not in devs: - return name - - def init(self): - try: - network_xml = self._network_template.format(self._name) - _libvirt_conn.networkCreateXML(network_xml) - logging.debug("libvirt network '%s' created" % self._name) - return True - except libvirtError as e: 
- raise VirtUtilsError(str(e)) - - def cleanup(self): - try: - network = _libvirt_conn.networkLookupByName(self._name) - network.destroy() - logging.debug("libvirt network '%s' destroyed" % self._name) - return True - except libvirtError as e: - raise VirtUtilsError(str(e)) - - @classmethod - def network_exist(cls, net_name): - try: - _libvirt_conn.networkLookupByName(net_name) - return True - except: - return False - -class BridgeCtl(NetCtl): - def __init__(self, name=None): - if not name: - name = self._generate_name() - - self._check_name(name) - self._name = name - self._remove = False - - def get_name(self): - return self._name - - def set_remove(self, remove): - self._remove = remove - - @staticmethod - def _check_name(name): - if len(name) > 16: - msg = "Bridge name '%s' longer than 16 characters" % name - raise VirtUtilsError(msg) - - @staticmethod - def _generate_name(): - devs = scan_netdevs() - - index = 0 - while True: - name = "lnstbr%d" % index - index += 1 - unique = True - for dev in devs: - if name == dev["name"]: - unique = False - break - - if unique: - return name - - def _exists(self): - devs = scan_netdevs() - for dev in devs: - if self._name == dev["name"]: - return True - - return False - - def init(self): - if not self._exists(): - _brctl("addbr %s" % self._name) - _iptables("-I FORWARD 1 -j REJECT -i %s -o any" % self._name) - _iptables("-I FORWARD 1 -j REJECT -i any -o %s" % self._name) - _iptables("-I FORWARD 1 -j ACCEPT -i %s -o %s" % - (self._name, self._name)) - _ip6tables("-I FORWARD 1 -j REJECT -i %s -o any" % self._name) - _ip6tables("-I FORWARD 1 -j REJECT -i any -o %s" % self._name) - _ip6tables("-I FORWARD 1 -j ACCEPT -i %s -o %s" % - (self._name, self._name)) - self._remove = True - - _ip("link set %s up" % self._name) - - def cleanup(self): - if self._remove: - _ip("link set %s down" % self._name) - _brctl("delbr %s" % self._name) - _iptables("-D FORWARD -j REJECT -i %s -o any" % self._name) - _iptables("-D FORWARD -j REJECT 
-i any -o %s" % self._name) - _iptables("-D FORWARD -j ACCEPT -i %s -o %s" % - (self._name, self._name)) - _ip6tables("-D FORWARD -j REJECT -i %s -o any" % self._name) - _ip6tables("-D FORWARD -j REJECT -i any -o %s" % self._name) - _ip6tables("-D FORWARD -j ACCEPT -i %s -o %s" % - (self._name, self._name))
From: Ondrej Lichtner olichtne@redhat.com
You can now set a Device (RemoteDevice on the Controller) object as the value of an IpParam. This will take the IP address list of the Device and use its first address as the value of the IpParam object.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
--- v2: * import Device on demand (Device class arrives on Slave later) and check isinstance Device instead of RemoteDevice (not available on Slave) * fix IpParam Exception string --- lnst/Common/Parameters.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py index dcf9bab..16385b4 100644 --- a/lnst/Common/Parameters.py +++ b/lnst/Common/Parameters.py @@ -77,12 +77,17 @@ class StrParam(Param): class IpParam(Param): @Param.val.setter def val(self, value): + #runtime import this because the Device class arrives on the Slave + #during recipe execution, not during Slave init + from lnst.Devices.Device import Device if isinstance(value, BaseIpAddress): self._val = value elif isinstance(value, str): self._val = IpAddress(value) + elif isinstance(value, Device): + self.val = value.ips[0] else: - raise ParamError("Value must be a BaseIpAddress or string object." + raise ParamError("Value must be a BaseIpAddress, string or Device object." "Not {}".format(type(value))) self.set = True
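The setter logic in the patch above can be sketched standalone. FakeIp and FakeDevice are hypothetical stand-ins for lnst's BaseIpAddress and Device classes, used here only so the sketch runs on its own:

```python
class ParamError(Exception):
    pass

class FakeIp(object):
    """Hypothetical stand-in for lnst's BaseIpAddress."""
    def __init__(self, addr):
        self.addr = addr

class FakeDevice(object):
    """Hypothetical stand-in for a Device: exposes an 'ips' list."""
    def __init__(self, ips):
        self.ips = ips

class IpParamSketch(object):
    @property
    def val(self):
        return self._val

    @val.setter
    def val(self, value):
        if isinstance(value, FakeIp):
            self._val = value
        elif isinstance(value, str):
            self._val = FakeIp(value)
        elif isinstance(value, FakeDevice):
            # recurse through the setter with the device's first address
            self.val = value.ips[0]
        else:
            raise ParamError("Value must be a FakeIp, string or "
                             "FakeDevice object. Not {}".format(type(value)))

p = IpParamSketch()
p.val = FakeDevice([FakeIp("192.168.1.2")])
print(p.val.addr)  # 192.168.1.2
```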
From: Ondrej Lichtner olichtne@redhat.com
This is an example Python recipe that can be run as an executable script. It performs a simple ping between two hosts.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- recipes/examples/python_recipe.py | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100755 recipes/examples/python_recipe.py
diff --git a/recipes/examples/python_recipe.py b/recipes/examples/python_recipe.py new file mode 100755 index 0000000..b15bc60 --- /dev/null +++ b/recipes/examples/python_recipe.py @@ -0,0 +1,31 @@ +#!/bin/python2 +""" +This is an example python recipe that can be run as an executable script. +Performs a simple ping between two hosts. +""" + +from lnst.Common.Parameters import IpParam +from lnst.Common.IpAddress import IpAddress +from lnst.Controller import Controller +from lnst.Controller import BaseRecipe +from lnst.Controller import HostReq, DeviceReq + +from lnst.Tests import IcmpPing + +class MyRecipe(BaseRecipe): + m1 = HostReq() + m1.eth0 = DeviceReq(label="net1") + + m2 = HostReq() + m2.eth0 = DeviceReq(label="net1") + + def test(self): + self.matched.m1.eth0.ip_add(IpAddress("192.168.1.1/24")) + self.matched.m2.eth0.ip_add(IpAddress("192.168.1.2/24")) + ping_job = self.matched.m1.run(IcmpPing(dst=self.matched.m2.eth0, + interval=0)) + +ctl = Controller(debug=1) + +r = MyRecipe() +ctl.run(r, allow_virt=True)
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Common.py | 2 -- lnst/Controller/Controller.py | 4 +--- lnst/Controller/CtlSecSocket.py | 1 - lnst/Controller/Host.py | 2 -- lnst/Controller/Job.py | 1 - lnst/Controller/Machine.py | 8 -------- lnst/Controller/MessageDispatcher.py | 2 +- lnst/Controller/Recipe.py | 1 - lnst/Devices/Device.py | 2 +- lnst/Devices/__init__.py | 1 - 10 files changed, 3 insertions(+), 21 deletions(-)
diff --git a/lnst/Controller/Common.py b/lnst/Controller/Common.py index c9e5771..5e00de6 100644 --- a/lnst/Controller/Common.py +++ b/lnst/Controller/Common.py @@ -11,8 +11,6 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
-import os -import sys from lnst.Common.LnstError import LnstError
class ControllerError(LnstError): diff --git a/lnst/Controller/Controller.py b/lnst/Controller/Controller.py index 7848750..dd0dd41 100644 --- a/lnst/Controller/Controller.py +++ b/lnst/Controller/Controller.py @@ -16,19 +16,17 @@ import os import sys import datetime import logging -import socket from lnst.Common.Logs import LoggingCtl from lnst.Common.NetUtils import MacPool from lnst.Common.LnstError import LnstError from lnst.Common.Utils import mkdir_p -from lnst.Devices import VirtualDevice +from lnst.Devices.VirtualDevice import VirtualDevice from lnst.Controller.Common import ControllerError from lnst.Controller.Config import CtlConfig from lnst.Controller.MessageDispatcher import MessageDispatcher from lnst.Controller.SlavePoolManager import SlavePoolManager from lnst.Controller.MachineMapper import MachineMapper from lnst.Controller.Host import Hosts, Host -from lnst.Controller.Requirements import DeviceReq from lnst.Controller.Recipe import BaseRecipe
class Controller(object): diff --git a/lnst/Controller/CtlSecSocket.py b/lnst/Controller/CtlSecSocket.py index c80917b..0d30a10 100644 --- a/lnst/Controller/CtlSecSocket.py +++ b/lnst/Controller/CtlSecSocket.py @@ -13,7 +13,6 @@ olichtne@redhat.com (Ondrej Lichtner)
import os import hashlib -import math import logging from lnst.Common.SecureSocket import SecureSocket from lnst.Common.SecureSocket import DH_GROUP, SRP_GROUP diff --git a/lnst/Controller/Host.py b/lnst/Controller/Host.py index 23cdadf..09e79f8 100644 --- a/lnst/Controller/Host.py +++ b/lnst/Controller/Host.py @@ -12,9 +12,7 @@ olichtne@redhat.com (Ondrej Lichtner) """
import logging -from lnst.Common.Colours import decorate_with_preset from lnst.Common.Parameters import Parameters -from lnst.Common.TestModule import BaseTestModule from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT from lnst.Devices import Devices from lnst.Devices.VirtualDevice import VirtualDevice diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py index 985364e..ce323a8 100644 --- a/lnst/Controller/Job.py +++ b/lnst/Controller/Job.py @@ -13,7 +13,6 @@ olichtne@redhat.com (Ondrej Lichtner)
import logging import signal -import copy_reg from lnst.Common.JobError import JobError from lnst.Common.TestModule import BaseTestModule
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py index 2c54f23..dc150d6 100644 --- a/lnst/Controller/Machine.py +++ b/lnst/Controller/Machine.py @@ -13,19 +13,11 @@ rpazdera@redhat.com (Radek Pazdera)
import logging import socket -import os import sys -import tempfile import signal -from time import sleep -from xmlrpclib import Binary -from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.Utils import sha256sum -from lnst.Common.Utils import wait_for, create_tar_archive from lnst.Common.Utils import check_process_running from lnst.Common.TestModule import BaseTestModule -from lnst.Common.NetTestCommand import DEFAULT_TIMEOUT -from lnst.Common.DeviceError import DeviceDeleted, DeviceNotFound from lnst.Controller.Common import ControllerError from lnst.Controller.CtlSecSocket import CtlSecSocket from lnst.Devices import device_classes diff --git a/lnst/Controller/MessageDispatcher.py b/lnst/Controller/MessageDispatcher.py index a6d7f44..0f3d23d 100644 --- a/lnst/Controller/MessageDispatcher.py +++ b/lnst/Controller/MessageDispatcher.py @@ -15,7 +15,7 @@ olichtne@redhat.com (Ondrej Lichtner) """
import logging -from lnst.Common.ConnectionHandler import send_data, recv_data +from lnst.Common.ConnectionHandler import send_data from lnst.Common.ConnectionHandler import ConnectionHandler from lnst.Common.DeviceRef import DeviceRef from lnst.Controller.Common import ControllerError diff --git a/lnst/Controller/Recipe.py b/lnst/Controller/Recipe.py index d4f33f9..445cb38 100644 --- a/lnst/Controller/Recipe.py +++ b/lnst/Controller/Recipe.py @@ -13,7 +13,6 @@ olichtne@redhat.com (Ondrej Lichtner) import copy from lnst.Common.Parameters import Parameters, Param from lnst.Controller.Requirements import _Requirements, HostReq -from lnst.Controller.Host import Hosts, Host from lnst.Controller.Common import ControllerError
class RecipeError(ControllerError): diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 8ba59f2..cf7c5ca 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -16,7 +16,7 @@ from abc import ABCMeta from lnst.Common.NetUtils import normalize_hwaddr from lnst.Common.ExecCmd import exec_cmd from lnst.Common.DeviceError import DeviceError, DeviceDeleted -from lnst.Common.IpAddress import Ip4Address, Ip6Address, IpAddress +from lnst.Common.IpAddress import IpAddress
try: from pyroute2.netlink.iproute import RTM_NEWLINK diff --git a/lnst/Devices/__init__.py b/lnst/Devices/__init__.py index 2f8d4d1..743c8f5 100644 --- a/lnst/Devices/__init__.py +++ b/lnst/Devices/__init__.py @@ -9,7 +9,6 @@ from lnst.Devices.VxlanDevice import VxlanDevice from lnst.Devices.VtiDevice import VtiDevice, Vti6Device from lnst.Devices.VethDevice import VethDevice, PairedVethDevice from lnst.Devices.VethPair import VethPair -from lnst.Devices.VirtualDevice import VirtualDevice from lnst.Devices.RemoteDevice import RemoteDevice, remotedev_decorator
device_classes = [
From: Ondrej Lichtner olichtne@redhat.com
The lnst.Tests package is present on the Controller and provides upstream-maintained test classes. These can be instantiated and used to run tests on the Slaves. While doing so, they're sent over the network to the Slaves, where they're dynamically imported. Since the lnst.Tests package doesn't originally exist on the Slave, every module imported this way results in a Python runtime warning about the package not existing. This doesn't break anything, but it does add clutter to the logs. Pre-creating the package on the Slave fixes the issue.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Slave/NetTestSlave.py | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 0f654e3..99914df 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -51,6 +51,11 @@ Devices.__path__ = ["lnst.Devices"]
sys.modules["lnst.Devices"] = Devices
+Tests = types.ModuleType("Tests") +Tests.__path__ = ["lnst.Tests"] + +sys.modules["lnst.Tests"] = Tests + class SlaveMethods: ''' Exported xmlrpc methods
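The pre-created package technique used above can be sketched standalone. The `demo` package name and the `answer` attribute are made up for the illustration; the point is that registering an empty package in sys.modules lets dynamically delivered submodules import cleanly:

```python
import sys
import types

def make_pkg(name):
    """Register an empty package module under the given dotted name."""
    mod = types.ModuleType(name)
    mod.__path__ = []  # a __path__ attribute marks the module as a package
    sys.modules[name] = mod
    return mod

demo = make_pkg("demo")
tests = make_pkg("demo.Tests")
demo.Tests = tests

# A module delivered dynamically (e.g. received over the network) can now
# be registered under the pre-created package:
example = types.ModuleType("demo.Tests.Example")
example.answer = 42
sys.modules["demo.Tests.Example"] = example
tests.Example = example

# Regular import machinery finds everything through sys.modules:
from demo.Tests.Example import answer
print(answer)  # 42
```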
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/__init__.py | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/lnst/Tests/__init__.py b/lnst/Tests/__init__.py index 60da773..69b757a 100644 --- a/lnst/Tests/__init__.py +++ b/lnst/Tests/__init__.py @@ -1 +1,17 @@ +""" +Package for all LNST Test classes. It will contain all Test classes provided +and maintained by the LNST upstream and later it will also import test classes +based on user configured directories (from lnst-ctl.conf). + +Copyright 2017 Red Hat, Inc. +Licensed under the GNU General Public License, version 2 as +published by the Free Software Foundation; see COPYING for details. +""" + +__author__ = """ +olichtne@redhat.com (Ondrej Lichtner) +""" + from lnst.Tests.IcmpPing import IcmpPing + +#TODO add support for test classes from lnst-ctl.conf
From: Ondrej Lichtner olichtne@redhat.com
This parameter class accepts Device or DeviceRef objects as possible values. Since a RemoteDevice is also recognized as a Device instance, this works on the Controller, where the parameter value is set.
DeviceRef is then used by the Controller-Slave communication methods to transfer the information. For this to work properly, the TestModule is deep-copied and modified (only the DeviceParam) before transmission. Since a DeviceParam can refer to a VirtualDevice object, it cannot be deep-copied, so we override its deepcopy operation to do only a shallow copy.
It's important to note that at the moment this works in just one direction: Controller -> Slave, not the other way around. However, this shouldn't be a problem, since passing Parameters in the other direction doesn't really make sense.
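The `__deepcopy__` override described above can be sketched in isolation. DeviceParamSketch and Uncopyable are hypothetical simplifications standing in for DeviceParam and VirtualDevice:

```python
import copy

class Uncopyable(object):
    """Stand-in for an object (like VirtualDevice) that must not be deep-copied."""
    def __deepcopy__(self, memo):
        raise TypeError("cannot deepcopy this object")

class DeviceParamSketch(object):
    """Parameter that may hold an uncopyable object as its value."""
    def __init__(self, val=None):
        self.val = val

    def __deepcopy__(self, memo):
        # Shallow copy only: the new instance shares the same value object,
        # so deepcopy never recurses into the uncopyable value.
        newone = type(self)()
        newone.__dict__.update(self.__dict__)
        return newone

p = DeviceParamSketch(Uncopyable())
# copy.deepcopy(p.val) would raise TypeError, but copying the parameter works:
q = copy.deepcopy(p)
assert q is not p        # a new parameter object was created...
assert q.val is p.val    # ...but it shares the original value
```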
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Parameters.py | 19 +++++++++++++++++++ lnst/Controller/MessageDispatcher.py | 15 +++++++++++++++ lnst/Slave/NetTestSlave.py | 12 ++++++++++++ 3 files changed, 46 insertions(+)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py index 16385b4..e21c8d6 100644 --- a/lnst/Common/Parameters.py +++ b/lnst/Common/Parameters.py @@ -14,6 +14,7 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
+from lnst.Common.DeviceRef import DeviceRef from lnst.Common.IpAddress import BaseIpAddress, IpAddress from lnst.Common.LnstError import LnstError
@@ -91,6 +92,24 @@ class IpParam(Param): "Not {}".format(type(value))) self.set = True
+class DeviceParam(Param): + @Param.val.setter + def val(self, value): + #runtime import this because the Device class arrives on the Slave + #during recipe execution, not during Slave init + from lnst.Devices.Device import Device + if isinstance(value, Device) or isinstance(value, DeviceRef): + self._val = value + else: + raise ParamError("Value must be a Device or DeviceRef object." + "Not {}".format(type(value))) + self.set = True + + def __deepcopy__(self, memo): + newone = type(self)() + newone.__dict__.update(self.__dict__) + return newone + class Parameters(object): def __getattribute__(self, name): """ diff --git a/lnst/Controller/MessageDispatcher.py b/lnst/Controller/MessageDispatcher.py index 0f3d23d..1609dd5 100644 --- a/lnst/Controller/MessageDispatcher.py +++ b/lnst/Controller/MessageDispatcher.py @@ -15,8 +15,11 @@ olichtne@redhat.com (Ondrej Lichtner) """
import logging +import copy from lnst.Common.ConnectionHandler import send_data from lnst.Common.ConnectionHandler import ConnectionHandler +from lnst.Common.TestModule import BaseTestModule +from lnst.Common.Parameters import Parameters, DeviceParam from lnst.Common.DeviceRef import DeviceRef from lnst.Controller.Common import ControllerError from lnst.Devices.RemoteDevice import RemoteDevice @@ -64,6 +67,18 @@ def remote_device_to_deviceref(obj): for value in obj: new_list.append(remote_device_to_deviceref(value)) return tuple(new_list) + elif isinstance(obj, DeviceParam): + new_param = DeviceParam() + new_param.val = remote_device_to_deviceref(obj.val) + return new_param + elif isinstance(obj, Parameters): + for param_name, param in obj: + setattr(obj, param_name, remote_device_to_deviceref(param)) + return obj + elif isinstance(obj, BaseTestModule): + new_test = copy.deepcopy(obj) + new_test.params = remote_device_to_deviceref(new_test.params) + return new_test else: return obj
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py index 99914df..9a29808 100644 --- a/lnst/Slave/NetTestSlave.py +++ b/lnst/Slave/NetTestSlave.py @@ -41,6 +41,8 @@ from lnst.Common.DeviceRef import DeviceRef from lnst.Common.LnstError import LnstError from lnst.Common.DeviceError import DeviceDeleted from lnst.Common.IpAddress import IpAddress +from lnst.Common.TestModule import BaseTestModule +from lnst.Common.Parameters import Parameters, DeviceParam from lnst.Slave.Job import Job, JobContext from lnst.Slave.InterfaceManager import InterfaceManager from lnst.Slave.BridgeTool import BridgeTool @@ -1002,6 +1004,16 @@ def deviceref_to_device(if_manager, obj): for value in obj: new_list.append(deviceref_to_device(if_manager, value)) return tuple(new_list) + elif isinstance(obj, DeviceParam): + obj.val = deviceref_to_device(if_manager, obj.val) + return obj + elif isinstance(obj, Parameters): + for param_name, param in obj: + deviceref_to_device(if_manager, param) + return obj + elif isinstance(obj, BaseTestModule): + deviceref_to_device(if_manager, obj.params) + return obj else: return obj
From: Ondrej Lichtner olichtne@redhat.com
We can now use the DeviceParam class instead of the generic Param class and skip the additional parameter checks previously implemented in the __init__ method.
This also modifies the example Python recipe to show usage of this parameter.
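Moving validation out of __init__ and into the property setter, as the patch below does, can be sketched with a simplified Param base class (the class names here are illustrative, not the real lnst implementation):

```python
class ParamError(Exception):
    pass

class Param(object):
    def __init__(self, **kwargs):
        self.set = False
        if "default" in kwargs:
            self.val = kwargs["default"]

    @property
    def val(self):
        return self._val

    @val.setter
    def val(self, value):
        self._val = value
        self.set = True

class StrOnlyParam(Param):
    # Validation lives in the setter, so test-module classes using this
    # parameter need no extra checks in their own __init__ methods.
    @Param.val.setter
    def val(self, value):
        if not isinstance(value, str):
            raise ParamError("Value must be a string. "
                             "Not {}".format(type(value)))
        self._val = value
        self.set = True

p = StrOnlyParam()
p.val = "eth0"      # accepted
try:
    p.val = 42      # rejected by the setter, no __init__ check needed
except ParamError:
    pass
```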
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/IcmpPing.py | 20 +++----------------- recipes/examples/python_recipe.py | 3 ++- 2 files changed, 5 insertions(+), 18 deletions(-)
diff --git a/lnst/Tests/IcmpPing.py b/lnst/Tests/IcmpPing.py index 8b07a2c..f28d2cf 100644 --- a/lnst/Tests/IcmpPing.py +++ b/lnst/Tests/IcmpPing.py @@ -1,7 +1,6 @@ import re import logging -from lnst.Devices import Device -from lnst.Common.Parameters import IntParam, Param, FloatParam, IpParam +from lnst.Common.Parameters import IntParam, FloatParam, IpParam, DeviceParam from lnst.Common.TestModule import BaseTestModule, TestModuleError from lnst.Common.ExecCmd import exec_cmd
@@ -10,29 +9,16 @@ class IcmpPing(BaseTestModule): dst = IpParam(mandatory=True) count = IntParam(default=10) interval = FloatParam(default=1.0) - iface = Param() + iface = DeviceParam() size = IntParam() - limit_rate = IntParam(default=80)
- def __init__(self, **kwargs): - super(IcmpPing, self).__init__(**kwargs) - - if self.iface.set: - if not isinstance(self.iface.val, Device) and\ - not isinstance(self.iface.val, str): - raise TestModuleError("Invalid 'iface' parameter.") - def _compose_cmd(self): cmd = "ping %s" % self.params.dst.val cmd += " -c %d" % self.params.count cmd += " -i %f" % self.params.interval if self.params.iface.set: - if isinstance(self.params.iface.val, str): - cmd += " -I %s" % self.params.iface - elif isinstance(self.params.iface.val, Device): - pass - # cmd += " -I %s" % iface.val.devname + cmd += " -I %s" % self.params.iface.val.name if self.params.size.set: cmd += " -s %d" % self.params.size return cmd diff --git a/recipes/examples/python_recipe.py b/recipes/examples/python_recipe.py index b15bc60..09b65c4 100755 --- a/recipes/examples/python_recipe.py +++ b/recipes/examples/python_recipe.py @@ -23,7 +23,8 @@ class MyRecipe(BaseRecipe): self.matched.m1.eth0.ip_add(IpAddress("192.168.1.1/24")) self.matched.m2.eth0.ip_add(IpAddress("192.168.1.2/24")) ping_job = self.matched.m1.run(IcmpPing(dst=self.matched.m2.eth0, - interval=0)) + interval=0, + iface=self.matched.m1.eth0))
ctl = Controller(debug=1)
Fri, May 19, 2017 at 12:12:45PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
v2 changes: add lnst.Common.IpAddress
- modify exception string.
add lnst.Common.Parameters
- add docstring explaining the __getattribute__ magic
- make val a property
- rename x to attr in Parameters dir() iterations
- fix IpParam value setter for string values
- modify IpParam Exception string
add lnst.Devices
- make Device a Meta class and register RemoteDevice as implementing the Device interface
- removed unnecessary imports
- added MasterDevice class for slave* methods
- made master_set to be a 'master' property setter
- renamed {add, del}_route to route_{add, del}
- renamed set_speed to speed_set
- renamed set_autoneg to autoneg_set
- removed modprobes where unnecessary
- link_stats are now retrieved from netlink message
- added link_stats64
- OvsBridge calls parent _type_init for modprobe
- remove SoftDevice import - not required
add lnst.Controller.Host
- removed debug pprint import
- added docstrings to requested methods
- Host::run throws an exception when netns set (not supported yet)
- added TODO comments to some parts that need to be implemented
add lnst.Controller.MachineMapper
- docstring now explains that implementing your own MachineMapper is not supported at this moment
- removed debug pprint import
add lnst.Controller.Recipe
- renamed variable 'x' to 'attr'
Machine, NetTestSlave: heavy reimplementation
- improved class and module synchronization implementation
- join split lines in deviceref_to_device method
lnst.Common.Parameters: IpParam accepts Device objects
- import Device on demand (Device class arrives on Slave later) and check isinstance Device instead of RemoteDevice (not available on Slave)
- fix IpParam Exception string
v2 new commits: Controller, Devices: removing unused imports NetTestSlave: create dynamic lnst.Tests package lnst.Tests: add package docstring lnst.Common.Parameters: add DeviceParam lnst.Tests.IcmpPing: use DeviceParam for iface parameter
Looks good on a quick look. Let's apply this and continue in-tree.
Including the original cover letter as well:
Hi all,
the long awaited patchset for Python Recipes is here.
At this point there's still quite a lot of work left, but the current state is functional and the tester-facing API should be mostly stable. I think it's a good time to get this merged into upstream, so that more people can get involved with filling in the missing pieces and can slowly start experimenting and porting their XML recipes.
These I know are missing:
- Recipe run summary - how the summary should look was described in the running proposal document, but I haven't managed to start on the implementation yet...
- Porting of all the old test_modules into the lnst.Tests package; so far we only have the IcmpPing module, which should still be reworked to be more universal (IPv4 and IPv6 in one class). Porting test_modules should be fairly easy, but at this point there's no guide on how to do it, so if you're having trouble, feel free to contact me either on IRC or by email.
- test_tools - we haven't even thought of these yet
- network namespaces - we haven't thought about these yet either, though I'm hoping this will be simple
- IP address and network generators, as we discussed at the upstream meeting
Please review, and mention any comments or forgotten features that should be added to the above list.
Regards, Ondrej
Ondrej Lichtner (47): add lnst.Common.LnstError add lnst.Common.DeviceError add lnst.Common.DeviceRef add lnst.Common.IpAddress add lnst.Common.Parameters add lnst.Common.TestModule add lnst.Common.JobError add lnst.Controller.Common add lnst.Devices add lnst.Controller.Requirements add lnst.Controller.Job add lnst.Controller.Host add lnst.Controller.Config add lnst.Slave.Config add lnst.Controller.MachineMapper lnst.Controller.Machine: change object initialization add lnst.Controller.MessageDispatcher add lnst.Controller.SlavePoolManager add lnst.Controller.Recipe add lnst.Controller.Controller various files: retype exceptions add lnst.Tests package lnst.Common.Config: remove {controller, slave}_init lnst.Common.Config: remove global lnst_config Slave: use a local config object instead of a global one lnst.Common.Utils: add sha256sum function lnst.Common.ResourceCache: simplification lnst.Controller.CtlSecSocket: remove lnst_config import lnst.Controller.SlaveMachineParser: make standalone add lnst.Common.InterfaceManagerError add lnst.Slave.Job lnst.Slave.InterfaceManager: heavy reimplementation Machine, NetTestSlave: heavy reimplementation lnst.Slave.InterfaceManager: remove Device class implementation add lnst.Controller package imports lnst/__init__.py remove imports lnst.Controller: remove old modules setup.py: add new packages add lnst.Devices.VirtNetCtl lnst.Controller: move VirtUtils to VirtDomainCtl, remove VirtNetCtl class lnst.Common.Parameters: IpParam accepts Device objects add example python_recipe.py script Controller, Devices: removing unused imports NetTestSlave: create dynamic lnst.Tests package lnst.Tests: add package docstring lnst.Common.Parameters: add DeviceParam lnst.Tests.IcmpPing: use DeviceParam for iface parameter
 lnst-slave                            |   20 +-
 lnst/Common/Config.py                 |  152 +---
 lnst/Common/DeviceError.py            |   22 +
 lnst/Common/DeviceRef.py              |   19 +
 lnst/Common/ExecCmd.py                |    3 +-
 lnst/Common/InterfaceManagerError.py  |   16 +
 lnst/Common/IpAddress.py              |   99 +++
 lnst/Common/JobError.py               |   22 +
 lnst/Common/LnstError.py              |   18 +
 lnst/Common/NetTestCommand.py         |    5 +-
 lnst/Common/Parameters.py             |  157 ++++
 lnst/Common/ResourceCache.py          |  128 ++--
 lnst/Common/SecureSocket.py           |    3 +-
 lnst/Common/ShellProcess.py           |    3 +-
 lnst/Common/TestModule.py             |   67 ++
 lnst/Common/TestsCommon.py            |    3 +-
 lnst/Common/Utils.py                  |   11 +
 lnst/Controller/Common.py             |   17 +
 lnst/Controller/Config.py             |   99 +++
 lnst/Controller/Controller.py         |  218 ++++++
 lnst/Controller/CtlSecSocket.py       |    2 -
 lnst/Controller/Host.py               |  160 ++++
 lnst/Controller/Job.py                |  196 +++++
 lnst/Controller/Machine.py            | 1356 +++++++-------------------------
 lnst/Controller/MachineMapper.py      |  328 ++++++++
 lnst/Controller/MessageDispatcher.py  |  203 +++++
 lnst/Controller/NetTestController.py  |  620 ---------------
 lnst/Controller/Recipe.py             |   97 +++
 lnst/Controller/RecipeParser.py       |  572 --------------
 lnst/Controller/Requirements.py       |  113 +++
 lnst/Controller/SlaveMachineParser.py |  144 +++-
 lnst/Controller/SlavePool.py          |  648 ----------------
 lnst/Controller/SlavePoolManager.py   |  273 +++++++
 lnst/Controller/Task.py               |    4 +-
 lnst/Controller/VirtDomainCtl.py      |   98 +++
 lnst/Controller/VirtUtils.py          |  268 -------
 lnst/Controller/XmlParser.py          |  188 -----
 lnst/Controller/XmlProcessing.py      |  235 ------
 lnst/Controller/XmlTemplates.py       |  438 -----------
 lnst/Controller/__init__.py           |    3 +
 lnst/Devices/BondDevice.py            |   38 +
 lnst/Devices/BridgeDevice.py          |   33 +
 lnst/Devices/Device.py                |  355 +++++++++
 lnst/Devices/MacvlanDevice.py         |   38 +
 lnst/Devices/MasterDevice.py          |   33 +
 lnst/Devices/OvsBridgeDevice.py       |  115 +++
 lnst/Devices/RemoteDevice.py          |   98 +++
 lnst/Devices/SoftDevice.py            |   51 ++
 lnst/Devices/TeamDevice.py            |   60 ++
 lnst/Devices/VethDevice.py            |   58 ++
 lnst/Devices/VethPair.py              |   24 +
 lnst/Devices/VirtNetCtl.py            |   85 +++
 lnst/Devices/VirtualDevice.py         |   98 +++
 lnst/Devices/VlanDevice.py            |   35 +
 lnst/Devices/VtiDevice.py             |   71 ++
 lnst/Devices/VxlanDevice.py           |   65 ++
 lnst/Devices/__init__.py              |   42 +
 lnst/RecipeCommon/ModuleWrap.py       |   33 +-
 lnst/Slave/Config.py                  |   73 ++
 lnst/Slave/InterfaceManager.py        |  648 ++--------------
 lnst/Slave/Job.py                     |  251 ++++++
 lnst/Slave/NetTestSlave.py            |  846 ++++++++++----------
 lnst/Tests/IcmpPing.py                |   62 ++
 lnst/Tests/__init__.py                |   17 +
 lnst/__init__.py                      |    1 -
 recipes/examples/python_recipe.py     |   32 +
 setup.py                              |    2 +-
 67 files changed, 4991 insertions(+), 5301 deletions(-)
 create mode 100644 lnst/Common/DeviceError.py
 create mode 100644 lnst/Common/DeviceRef.py
 create mode 100644 lnst/Common/InterfaceManagerError.py
 create mode 100644 lnst/Common/IpAddress.py
 create mode 100644 lnst/Common/JobError.py
 create mode 100644 lnst/Common/LnstError.py
 create mode 100644 lnst/Common/Parameters.py
 create mode 100644 lnst/Common/TestModule.py
 create mode 100644 lnst/Controller/Common.py
 create mode 100644 lnst/Controller/Config.py
 create mode 100644 lnst/Controller/Controller.py
 create mode 100644 lnst/Controller/Host.py
 create mode 100644 lnst/Controller/Job.py
 create mode 100644 lnst/Controller/MachineMapper.py
 create mode 100644 lnst/Controller/MessageDispatcher.py
 delete mode 100644 lnst/Controller/NetTestController.py
 create mode 100644 lnst/Controller/Recipe.py
 delete mode 100644 lnst/Controller/RecipeParser.py
 create mode 100644 lnst/Controller/Requirements.py
 delete mode 100644 lnst/Controller/SlavePool.py
 create mode 100644 lnst/Controller/SlavePoolManager.py
 create mode 100644 lnst/Controller/VirtDomainCtl.py
 delete mode 100644 lnst/Controller/VirtUtils.py
 delete mode 100644 lnst/Controller/XmlParser.py
 delete mode 100644 lnst/Controller/XmlProcessing.py
 delete mode 100644 lnst/Controller/XmlTemplates.py
 create mode 100644 lnst/Devices/BondDevice.py
 create mode 100644 lnst/Devices/BridgeDevice.py
 create mode 100644 lnst/Devices/Device.py
 create mode 100644 lnst/Devices/MacvlanDevice.py
 create mode 100644 lnst/Devices/MasterDevice.py
 create mode 100644 lnst/Devices/OvsBridgeDevice.py
 create mode 100644 lnst/Devices/RemoteDevice.py
 create mode 100644 lnst/Devices/SoftDevice.py
 create mode 100644 lnst/Devices/TeamDevice.py
 create mode 100644 lnst/Devices/VethDevice.py
 create mode 100644 lnst/Devices/VethPair.py
 create mode 100644 lnst/Devices/VirtNetCtl.py
 create mode 100644 lnst/Devices/VirtualDevice.py
 create mode 100644 lnst/Devices/VlanDevice.py
 create mode 100644 lnst/Devices/VtiDevice.py
 create mode 100644 lnst/Devices/VxlanDevice.py
 create mode 100644 lnst/Devices/__init__.py
 create mode 100644 lnst/Slave/Config.py
 create mode 100644 lnst/Slave/Job.py
 create mode 100644 lnst/Tests/IcmpPing.py
 create mode 100644 lnst/Tests/__init__.py
 create mode 100755 recipes/examples/python_recipe.py
--
2.13.0
_______________________________________________
LNST-developers mailing list -- lnst-developers@lists.fedorahosted.org
To unsubscribe send an email to lnst-developers-leave@lists.fedorahosted.org
On Tue, May 23, 2017 at 04:52:55PM +0200, Jiri Pirko wrote:
Fri, May 19, 2017 at 12:12:45PM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
v2 changes:

add lnst.Common.IpAddress
- modify exception string.
add lnst.Common.Parameters
- add docstring explaining the __getattribute__ magic
- make val a property
- rename x to attr in Parameters dir() iterations
- fix IpParam value setter for string values
- modify IpParam Exception string
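The "make val a property" change above can be pictured with a small sketch (class shape and names are illustrative, not the actual lnst.Common.Parameters code): the value lives behind a `val` property, so the setter is the single place that normalizes and validates string input.

```python
import ipaddress

class IpParam:
    """Sketch of a parameter whose value is exposed as a property."""
    def __init__(self, value=None):
        self._val = None
        if value is not None:
            self.val = value  # goes through the property setter below

    @property
    def val(self):
        return self._val

    @val.setter
    def val(self, value):
        # Accept ready-made address objects as well as plain strings;
        # strings are normalized into address objects here.
        if isinstance(value, (ipaddress.IPv4Address, ipaddress.IPv6Address)):
            self._val = value
        elif isinstance(value, str):
            try:
                self._val = ipaddress.ip_address(value)
            except ValueError:
                raise ValueError("%r is not a valid IP address" % value)
        else:
            raise TypeError("Unsupported value type %s" % type(value).__name__)

p = IpParam("192.168.1.1")
print(p.val)  # -> 192.168.1.1
```

With this shape, "fix IpParam value setter for string values" is a local change inside the setter, invisible to callers.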
add lnst.Devices
- make Device a Meta class and register RemoteDevice as implementing the Device interface
- removed unnecessary imports
- added MasterDevice class for slave* methods
- made master_set to be a 'master' property setter
- renamed {add, del}_route to route_{add, del}
- renamed set_speed to speed_set
- renamed set_autoneg to autoneg_set
- removed modprobes where unnecessary
- link_stats are now retrieved from netlink message
- added link_stats64
- OvsBridge calls parent _type_init for modprobe
- remove SoftDevice import - not required
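The "Meta class" plus registration change can be illustrated with Python's standard `abc` machinery (a simplified sketch, not the real lnst.Devices code): `RemoteDevice` shares no code with `Device`, but is registered as a virtual subclass so `isinstance` checks against `Device` still succeed.

```python
from abc import ABCMeta

class Device(metaclass=ABCMeta):
    """Sketch of the slave-side device interface."""
    def ip_add(self, addr):
        raise NotImplementedError

class RemoteDevice:
    """Controller-side proxy; would forward calls to the slave."""
    def ip_add(self, addr):
        pass

# Register RemoteDevice as implementing the Device interface
# without making it a real subclass.
Device.register(RemoteDevice)

print(isinstance(RemoteDevice(), Device))  # -> True
```

The benefit of `register()` over inheritance is that the proxy class carries none of the interface's implementation, yet code that type-checks against `Device` treats both the same way.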
add lnst.Controller.Host
- removed debug pprint import
- added docstrings to requested methods
- Host::run throws an exception when netns is set (not supported yet)
- added TODO comments to some parts that need to be implemented
add lnst.Controller.MachineMapper
- docstring now explains that implementing your own MachineMapper is not supported at this moment
- removed debug pprint import
add lnst.Controller.Recipe
- renamed variable 'x' to 'attr'
Machine, NetTestSlave: heavy reimplementation
- improved class and module synchronization implementation
- join split lines in deviceref_to_device method
lnst.Common.Parameters: IpParam accepts Device objects
- import Device on demand (the Device class arrives on the Slave later) and check isinstance against Device instead of RemoteDevice (which is not available on the Slave)
- fix IpParam Exception string
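The on-demand import described above is a standard way to reference a class that only becomes available at runtime. A runnable stand-in using the standard library (here `decimal.Decimal` plays the role of the Device class, which in the real code only arrives on the Slave after this module is loaded):

```python
def normalize(value):
    """Return value as a Decimal, accepting Decimals or plain strings.

    'decimal' is imported inside the function, on demand, so loading
    this module never requires it -- the same trick the changelog
    describes for importing Device only at the point where an
    isinstance check is actually performed.
    """
    from decimal import Decimal
    if isinstance(value, Decimal):
        return value
    return Decimal(str(value))

print(normalize("1.5"))  # -> 1.5
```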
v2 new commits:
- Controller, Devices: removing unused imports
- NetTestSlave: create dynamic lnst.Tests package
- lnst.Tests: add package docstring
- lnst.Common.Parameters: add DeviceParam
- lnst.Tests.IcmpPing: use DeviceParam for iface parameter
Looks good on a quick look. Lets apply this and continue in-tree.
Sorry this took a while, I was busy with certification training this week. Pushed to the origin/next branch. I also removed the origin/next_devel branch.
Next on the agenda is to schedule a meeting where we'll discuss the Device API so that we can settle on something that will not change anymore. I also have an idea for Host API that I'd like to discuss.
Next week mostly works for me, I'd prefer later than sooner (I have to do some catchup in school)... What works for you and/or other members of the mailing list?
-Ondrej
Including the original cover letter as well:
Hi all,
the long-awaited patchset for Python Recipes is here.
At this point there's still quite a lot of work left, but since the current state is functional and the tester-facing API should be mostly stable, I think it's a good time to get this merged into upstream so that more people can get involved with filling in the missing pieces and can slowly start experimenting and porting their XML recipes.
This is what I know is missing:
- Recipe run summary - how the Summary should look was described in the running proposal document, but I didn't manage to start on the implementation yet...
- Porting of all the old test_modules into the lnst.Tests package; so far we only have the IcmpPing module, which should still be reworked to be more universal (ip4 and ip6 in one class). Porting test_modules should be fairly easy, but at this point there's no guide for how to do it, so if you're having trouble feel free to contact me either on irc or email.
- test_tools - we haven't even thought of these yet
- network namespaces - also didn't think about them yet, though I'm hoping this will be simple
- Ip address and network generators as we've discussed them at the upstream meeting
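For that last item, the standard `ipaddress` module already covers much of the ground. One possible shape for such a generator (purely a sketch, not the design agreed on at the meeting):

```python
import ipaddress
from itertools import islice

def ip_gen(network):
    """Yield usable host addresses from a network, one per call."""
    net = ipaddress.ip_network(network)
    # .hosts() skips the network and broadcast addresses for IPv4
    # networks larger than /31.
    yield from net.hosts()

# Hand out the first three usable addresses of a /24:
addrs = [str(a) for a in islice(ip_gen("192.168.10.0/24"), 3)]
print(addrs)  # -> ['192.168.10.1', '192.168.10.2', '192.168.10.3']
```

A real implementation would presumably also track which addresses were already handed out to which host, but that bookkeeping is exactly what the meeting discussion would need to settle.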
Please review and provide any comments or features I forgot that should be added to the above list.
Regards, Ondrej
Fri, May 26, 2017 at 03:33:41PM CEST, olichtne@redhat.com wrote:
Next week mostly works for me, I'd prefer later than sooner (I have to do some catchup in school)... What works for you and/or other members of the mailing list?
Works for me any time this week, please schedule.