From: Ondrej Lichtner <olichtne@redhat.com>
Hi all,
apologies for the long patch set... it contains my work from the past couple of months on porting our first ENRT recipe. I went through numerous iterations of the ported recipe and refactored it several times, which is why it took so long. At the same time I expect more refactoring as we port more recipes, so this is very likely not the final version of the recipe. Overall though, I'm very happy with the abstraction and the organization of the recipe, which is why I've decided to send the patch set for some upstream review.
While porting the recipe I also modified the base LNST code, which made the patch set a bit longer. These changes include bug fixes, some code refactoring, and changes to the tester-facing API.
I've also added ResultLevels that can be used to filter results and print the result summary based on the importance of a job.
Ondrej Lichtner (25):
  lnst.Slave.Job: kill should target full process group
  lnst.Slave.Job: set Job result if SIGKILLed
  lnst.*: pass Namespace objects instead of names
  lnst.Controller.RecipeResults: add ResultLevel enum
  lnst.Controller.Job: add level attribute
  lnst.Controller.Namespace: expose api to set job level
  lnst.Controller.RecipeResults: expose result levels in Result class
  lnst.Controller.RunSummaryFormatter: filter results by result level
  lnst.Tests: add Iperf test module
  lnst.Tests: reimplement IcmpPing as Ping
  lnst.Devices.Device: add ips_filter method
  lnst.Controller.Namespace: devices property lists only mapped devices, add device_database
  setup.py: fix version call
  lnst.Controller.Namespace: remove __str__ method
  lnst.Common.Parameters: add new types and modify some old
  add lnst.Recipes module
  add lnst.Recipes.ENRT module
  remove recipes/regression_tests/*
  add lnst.RecipeCommon.PerfResult
  lnst.RecipeCommon: add Perf and Ping
  lnst.RecipeCommon: add IperfMeasurementTool
  lnst.Recipes.ENRT: add BaseEnrtRecipe
  lnst.Recipes.ENRT: add SimplePerfRecipe
  TODO updated
  lnst.Common.IpAddress: remove Device case from ipaddress factory
 TODO                                          |  87 ++--
 lnst/Common/IpAddress.py                      |   8 -
 lnst/Common/Parameters.py                     |  49 +-
 lnst/Controller/Job.py                        |  15 +-
 lnst/Controller/Machine.py                    |  10 +-
 lnst/Controller/Namespace.py                  |  25 +-
 lnst/Controller/RecipeResults.py              |  24 +-
 lnst/Controller/RunSummaryFormatter.py        |  10 +-
 lnst/Devices/Device.py                        |  15 +
 lnst/Devices/RemoteDevice.py                  |   6 +-
 lnst/RecipeCommon/IperfMeasurementTool.py     |  83 ++++
 lnst/RecipeCommon/Perf.py                     | 114 +++++
 lnst/RecipeCommon/PerfResult.py               | 152 ++++++
 lnst/RecipeCommon/Ping.py                     |  45 ++
 lnst/Recipes/ENRT/BaseEnrtRecipe.py           | 212 ++++++++
 lnst/Recipes/ENRT/SimplePerfRecipe.py         |  75 +++
 lnst/Recipes/ENRT/__init__.py                 |   0
 lnst/Recipes/__init__.py                      |   0
 lnst/Slave/Job.py                             |  18 +-
 lnst/Tests/IcmpPing.py                        |  63 ---
 lnst/Tests/Iperf.py                           | 147 ++++++
 lnst/Tests/Ping.py                            |  93 ++++
 lnst/Tests/__init__.py                        |   3 +-
 .../regression_tests/phase1/3_vlans.README    |  75 ---
 recipes/regression_tests/phase1/3_vlans.py    | 332 -------------
 recipes/regression_tests/phase1/3_vlans.xml   | 107 ----
 .../3_vlans_over_active_backup_bond.README    |  84 ----
 .../3_vlans_over_active_backup_bond.xml       | 121 -----
 .../phase1/3_vlans_over_bond.py               | 332 -------------
 .../3_vlans_over_round_robin_bond.README      |  84 ----
 .../phase1/3_vlans_over_round_robin_bond.xml  | 114 -----
 .../phase1/active_backup_bond.README          |  81 ---
 .../phase1/active_backup_bond.xml             |  50 --
 .../phase1/active_backup_double_bond.README   |  81 ---
 .../phase1/active_backup_double_bond.xml      |  60 ---
 .../regression_tests/phase1/bonding_test.py   | 305 ------------
 .../regression_tests/phase1/ping_flood.README |  38 --
 .../regression_tests/phase1/ping_flood.xml    |  30 --
 .../phase1/round_robin_bond.README            |  81 ---
 .../phase1/round_robin_bond.xml               |  50 --
 .../phase1/round_robin_double_bond.README     |  81 ---
 .../phase1/round_robin_double_bond.xml        |  58 ---
 .../phase1/simple_netperf.README              |  72 ---
 .../regression_tests/phase1/simple_netperf.py | 277 -----------
 .../phase1/simple_netperf.xml                 |  39 --
 .../regression_tests/phase1/simple_ping.py    |  43 --
 ...dge_2_vlans_over_active_backup_bond.README | 106 ----
 ...bridge_2_vlans_over_active_backup_bond.xml | 174 -------
 .../virtual_bridge_2_vlans_over_bond.py       | 402 ---------------
 .../virtual_bridge_vlan_in_guest.README       |  82 ---
 .../phase1/virtual_bridge_vlan_in_guest.py    | 331 ------------
 .../phase1/virtual_bridge_vlan_in_guest.xml   |  80 ---
 .../phase1/virtual_bridge_vlan_in_host.README |  82 ---
 .../phase1/virtual_bridge_vlan_in_host.py     | 331 ------------
 .../phase1/virtual_bridge_vlan_in_host.xml    |  80 ---
 .../3_vlans_over_active_backup_team.README    |  84 ----
 .../3_vlans_over_active_backup_team.xml       | 125 -----
 .../3_vlans_over_round_robin_team.README      |  84 ----
 .../phase2/3_vlans_over_round_robin_team.xml  | 118 -----
 .../phase2/3_vlans_over_team.py               | 332 -------------
 .../phase2/active_backup_double_team.README   |  81 ---
 .../phase2/active_backup_double_team.xml      |  68 ---
 .../phase2/active_backup_team.README          |  81 ---
 .../phase2/active_backup_team.xml             |  54 --
 ...e_backup_team_vs_active_backup_bond.README |  81 ---
 ...tive_backup_team_vs_active_backup_bond.xml |  64 ---
 ...ive_backup_team_vs_round_robin_bond.README |  81 ---
 ...active_backup_team_vs_round_robin_bond.xml |  64 ---
 .../phase2/round_robin_double_team.README     |  81 ---
 .../phase2/round_robin_double_team.xml        |  68 ---
 .../phase2/round_robin_team.README            |  81 ---
 .../phase2/round_robin_team.xml               |  52 --
 ...nd_robin_team_vs_active_backup_bond.README |  81 ---
 ...round_robin_team_vs_active_backup_bond.xml |  64 ---
 ...ound_robin_team_vs_round_robin_bond.README |  81 ---
 .../round_robin_team_vs_round_robin_bond.xml  |  64 ---
 recipes/regression_tests/phase2/team_test.py  | 470 ------------------
 ...dge_2_vlans_over_active_backup_bond.README |  77 ---
 ..._bridge_2_vlans_over_active_backup_bond.py | 381 --------------
 ...bridge_2_vlans_over_active_backup_bond.xml | 135 -----
 .../virtual_ovs_bridge_vlan_in_guest.README   |  55 --
 .../virtual_ovs_bridge_vlan_in_guest.py       | 319 ------------
 .../virtual_ovs_bridge_vlan_in_guest.xml      |  76 ---
 .../virtual_ovs_bridge_vlan_in_host.README    |  58 ---
 .../phase2/virtual_ovs_bridge_vlan_in_host.py | 319 ------------
 .../virtual_ovs_bridge_vlan_in_host.xml       |  74 ---
 .../phase3/2_virt_ovs_vxlan.README            | 129 -----
 .../phase3/2_virt_ovs_vxlan.py                | 279 -----------
 .../phase3/2_virt_ovs_vxlan.xml               | 145 ------
 .../phase3/novirt_ovs_vxlan.README            |  93 ----
 .../phase3/novirt_ovs_vxlan.py                | 216 --------
 .../phase3/novirt_ovs_vxlan.xml               |  87 ----
 .../phase3/vxlan_multicast.README             | 118 -----
 .../phase3/vxlan_multicast.py                 | 249 ----------
 .../phase3/vxlan_multicast.xml                |  99 ----
 .../phase3/vxlan_remote.README                |  86 ----
 .../regression_tests/phase3/vxlan_remote.py   | 215 --------
 .../regression_tests/phase3/vxlan_remote.xml  |  65 ---
 setup.py                                      |   6 +-
 99 files changed, 1112 insertions(+), 10045 deletions(-)
 create mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
 create mode 100644 lnst/RecipeCommon/Perf.py
 create mode 100644 lnst/RecipeCommon/PerfResult.py
 create mode 100644 lnst/RecipeCommon/Ping.py
 create mode 100644 lnst/Recipes/ENRT/BaseEnrtRecipe.py
 create mode 100644 lnst/Recipes/ENRT/SimplePerfRecipe.py
 create mode 100644 lnst/Recipes/ENRT/__init__.py
 create mode 100644 lnst/Recipes/__init__.py
 delete mode 100644 lnst/Tests/IcmpPing.py
 create mode 100644 lnst/Tests/Iperf.py
 create mode 100644 lnst/Tests/Ping.py
 delete mode 100644 recipes/regression_tests/phase1/3_vlans.README
 delete mode 100644 recipes/regression_tests/phase1/3_vlans.py
 delete mode 100644 recipes/regression_tests/phase1/3_vlans.xml
 delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_bond.py
 delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README
 delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase1/active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/active_backup_double_bond.README
 delete mode 100644 recipes/regression_tests/phase1/active_backup_double_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/bonding_test.py
 delete mode 100644 recipes/regression_tests/phase1/ping_flood.README
 delete mode 100644 recipes/regression_tests/phase1/ping_flood.xml
 delete mode 100644 recipes/regression_tests/phase1/round_robin_bond.README
 delete mode 100644 recipes/regression_tests/phase1/round_robin_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/round_robin_double_bond.README
 delete mode 100644 recipes/regression_tests/phase1/round_robin_double_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/simple_netperf.README
 delete mode 100644 recipes/regression_tests/phase1/simple_netperf.py
 delete mode 100644 recipes/regression_tests/phase1/simple_netperf.xml
 delete mode 100644 recipes/regression_tests/phase1/simple_ping.py
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py
 delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml
 delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_active_backup_team.README
 delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_active_backup_team.xml
 delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_round_robin_team.README
 delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_round_robin_team.xml
 delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_team.py
 delete mode 100644 recipes/regression_tests/phase2/active_backup_double_team.README
 delete mode 100644 recipes/regression_tests/phase2/active_backup_double_team.xml
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team.README
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team.xml
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.README
 delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.xml
 delete mode 100644 recipes/regression_tests/phase2/round_robin_double_team.README
 delete mode 100644 recipes/regression_tests/phase2/round_robin_double_team.xml
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team.README
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team.xml
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.README
 delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.xml
 delete mode 100644 recipes/regression_tests/phase2/team_test.py
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.README
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.xml
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.README
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.xml
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.README
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
 delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.xml
 delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.README
 delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.py
 delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.xml
 delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.README
 delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.py
 delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.xml
 delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.README
 delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.py
 delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.xml
 delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.README
 delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.py
 delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.xml
From: Ondrej Lichtner <olichtne@redhat.com>
The kill command on a slave Job should target the whole process group of the child, since the Job process (a Python process running LNST code) can launch additional applications (e.g. an iperf server) that should be killed as well.

I also added some debug logging for the case where the job finishes before the kill method is called.
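As a self-contained sketch (not LNST code) of why the whole group has to be targeted: starting the child in its own session makes its pid double as a process group id, so os.killpg() also takes down any helpers the job's shell spawned, which a plain os.kill() on the shell pid would orphan.

```python
import os
import signal
import subprocess
import time

# The shell spawns an extra child (the backgrounded sleep), standing in
# for helpers like an iperf server launched by a job.
proc = subprocess.Popen(
    "sleep 30 & sleep 30",
    shell=True,
    start_new_session=True,  # new session => pid becomes the group id
)
time.sleep(0.2)

# Kill the whole group, not just the shell; every process the "job"
# started receives the signal.
os.killpg(proc.pid, signal.SIGKILL)
proc.wait()
```

After the wait, proc.returncode is the negative of the delivered signal, as usual for POSIX children killed by a signal.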
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Slave/Job.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lnst/Slave/Job.py b/lnst/Slave/Job.py
index bf159a6..36e5160 100644
--- a/lnst/Slave/Job.py
+++ b/lnst/Slave/Job.py
@@ -49,7 +49,7 @@ class JobContext(object):
 
     def _kill_all_jobs(self):
         for id in self._dict:
-            self._dict[id].kill(signal=signal.SIGKILL)
+            self._dict[id].kill(sig=signal.SIGKILL)
 
     def cleanup(self):
         logging.debug("Cleaning up leftover processes.")
@@ -124,10 +124,11 @@ class Job(object):
 
     def kill(self, signal=signal.SIGKILL):
         if self._finished:
+            logging.debug("Job finished before sending the signal")
             return True
         try:
             logging.debug("Sending signal %s to pid %d" % (signal, self._pid))
-            os.kill(self._pid, signal)
+            os.killpg(self._pid, signal)
             return True
         except OSError as exc:
             logging.error(str(exc))
From: Ondrej Lichtner <olichtne@redhat.com>
In case the slave Job was killed with the SIGKILL signal, we need to set a default Job result explaining the event.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Slave/Job.py | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/lnst/Slave/Job.py b/lnst/Slave/Job.py
index 36e5160..26c5493 100644
--- a/lnst/Slave/Job.py
+++ b/lnst/Slave/Job.py
@@ -122,13 +122,22 @@ class Job(object):
         send_data(self._child_pipe, result)
         self._child_pipe.close()
 
-    def kill(self, signal=signal.SIGKILL):
+    def kill(self, sig=signal.SIGKILL):
         if self._finished:
             logging.debug("Job finished before sending the signal")
             return True
         try:
-            logging.debug("Sending signal %s to pid %d" % (signal, self._pid))
-            os.killpg(self._pid, signal)
+            logging.debug("Sending signal %s to pid %d" % (sig, self._pid))
+            os.killpg(self._pid, sig)
+
+            if sig == signal.SIGKILL:
+                self.set_finished(dict(type = "job_finished",
+                                       job_id = self._id,
+                                       result = dict(passed = False,
+                                                     res_data = "Job killed",
+                                                     type = "result")))
+
+            send_data(self._child_pipe, self.get_result())
             return True
         except OSError as exc:
             logging.error(str(exc))
From: Ondrej Lichtner <olichtne@redhat.com>
Method calls should pass Namespace objects instead of their names, except for the final rpc_call to the slave, where the objects don't exist.
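A minimal illustration of the boundary this patch draws (the names and structure here are hypothetical stand-ins, not the LNST API): rich objects travel through the controller code, and only the serialization point reduces them to their names for the slave.

```python
class Namespace:
    """Hypothetical stand-in for lnst.Controller.Namespace."""
    def __init__(self, name):
        self.name = name

def rpc_call(method_name, *args, netns=None):
    """Controller-side code passes the Namespace object around; only at
    the wire boundary is it downgraded to its name, because the object
    does not exist on the slave."""
    return {"method_name": method_name,
            "args": args,
            "netns": netns.name if netns is not None else None}
```

This keeps all controller internals type-safe (attribute access, identity comparison) while the slave still receives a plain string.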
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/Job.py       |  2 +-
 lnst/Controller/Machine.py   | 10 +++++-----
 lnst/Controller/Namespace.py |  5 ++---
 lnst/Devices/RemoteDevice.py |  6 +++---
 4 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py
index 1ed079f..6cd4a15 100644
--- a/lnst/Controller/Job.py
+++ b/lnst/Controller/Job.py
@@ -189,7 +189,7 @@ class Job(object):
             attrs.append("command(%s)" % self._what)
 
         if self._netns is not None:
-            attrs.append("netns(%s)" % self._netns)
+            attrs.append("netns(%s)" % self._netns.name)
 
         if not self._expect:
             attrs.append("expecting FAIL")
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py
index 67907be..f51798c 100644
--- a/lnst/Controller/Machine.py
+++ b/lnst/Controller/Machine.py
@@ -126,7 +126,7 @@ class Machine(object):
         self._device_database[ret["ifindex"]] = dev
 
     def remote_device_set_netns(self, dev, dst, src):
-        self.rpc_call("set_dev_netns", dev, dst, netns=src)
+        self.rpc_call("set_dev_netns", dev, dst.name, netns=src)
 
     def device_created(self, dev_data):
         ifindex = dev_data["ifindex"]
@@ -175,11 +175,11 @@ class Machine(object):
             return None
 
     def rpc_call(self, method_name, *args, **kwargs):
-        if "netns" in kwargs and kwargs["netns"] is not None:
+        if kwargs.get("netns") in self._namespaces:
             netns = kwargs["netns"]
             del kwargs["netns"]
             msg = {"type": "to_netns",
-                   "netns": netns,
+                   "netns": netns.name,
                    "data": {"type": "command",
                             "method_name": method_name,
                             "args": args,
@@ -366,7 +366,7 @@ class Machine(object):
         if job._desc is not None:
             logging.info("Job description: %s" % job._desc)
 
-        res = self.rpc_call("run_job", job._to_dict(), netns=job.netns.name)
+        res = self.rpc_call("run_job", job._to_dict(), netns=job.netns)
 
         self._recipe.current_run.add_result(JobStartResult(job, res))
         return res
@@ -436,7 +436,7 @@ class Machine(object):
         if job.id not in self._jobs:
             raise MachineError("No job '%s' running on Machine %s" %
                                (job.id(), self._id))
-        return self.rpc_call("kill_job", job.id, signal, netns=job.netns.name)
+        return self.rpc_call("kill_job", job.id, signal, netns=job.netns)
 
     def get_hostname(self):
         """ Get hostname/ip of the machine
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py
index 6b81d44..16458a0 100644
--- a/lnst/Controller/Namespace.py
+++ b/lnst/Controller/Namespace.py
@@ -146,8 +146,7 @@ class Namespace(object):
             if value.ifindex is not None:
                 old_ns = value.netns
                 old_ns._unset(value)
-                self._machine.remote_device_set_netns(value, self.name,
-                                                      old_ns.name)
+                self._machine.remote_device_set_netns(value, self, old_ns)
                 value.netns = self
                 self._objects[name] = value
                 return True
@@ -170,7 +169,7 @@ class Namespace(object):
             else:
                 value._machine = self._machine
                 value.netns = self
-                self._machine.remote_device_create(value, netns=self.name)
+                self._machine.remote_device_create(value, netns=self)
 
                 self._objects[name] = value
                 return True
diff --git a/lnst/Devices/RemoteDevice.py b/lnst/Devices/RemoteDevice.py
index 6e51977..ddd47f1 100644
--- a/lnst/Devices/RemoteDevice.py
+++ b/lnst/Devices/RemoteDevice.py
@@ -92,11 +92,11 @@ class RemoteDevice(object):
             def dev_method(*args, **kwargs):
                 return self._machine.rpc_call("dev_method", self.ifindex,
                                               name, args, kwargs,
-                                              netns=self.netns.name)
+                                              netns=self.netns)
             return dev_method
         else:
             return self._machine.rpc_call("dev_attr", self.ifindex, name,
-                                          netns=self.netns.name)
+                                          netns=self.netns)
 
     def __setattr__(self, name, value):
         if not self._inited:
@@ -105,7 +105,7 @@ class RemoteDevice(object):
         try:
             getattr(self._dev_cls, name)
             return self._machine.rpc_call("dev_set_attr", self.ifindex,
                                           name, value,
-                                          netns=self.netns.name)
+                                          netns=self.netns)
         except AttributeError:
             return super(RemoteDevice, self).__setattr__(name, value)
From: Ondrej Lichtner <olichtne@redhat.com>
Add an enum defining various result levels that can be used for filtering when viewing results.
Setting the default level of the BaseResult class to DEBUG.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/RecipeResults.py | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py
index b79516d..2dcafdc 100644
--- a/lnst/Controller/RecipeResults.py
+++ b/lnst/Controller/RecipeResults.py
@@ -14,6 +14,12 @@ olichtne@redhat.com (Ondrej Lichtner)
 """
 
 import time
+from enum import IntEnum
+
+class ResultLevel(IntEnum):
+    IMPORTANT = 1
+    NORMAL = 2
+    DEBUG = 3
 
 class BaseResult(object):
     """Base class for storing result data
@@ -39,6 +45,10 @@ class BaseResult(object):
     def data(self):
         return None
 
+    @property
+    def level(self):
+        return ResultLevel.DEBUG
+
 class JobResult(BaseResult):
     """Base class for storing result data of Jobs
From: Ondrej Lichtner <olichtne@redhat.com>
The level attribute of a Job indicates its importance with regard to results and can be used when filtering results.
Setting the default result level of a job to DEBUG.
Overriding the JobResult level getter to return the level of the associated Job object.
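A stripped-down sketch of the delegation described above (hypothetical stand-ins, not the actual LNST classes): the result stores no level of its own and always reports the current level of its Job, so changing the job's level later is reflected in results that were already created.

```python
from enum import IntEnum

class ResultLevel(IntEnum):
    IMPORTANT = 1
    NORMAL = 2
    DEBUG = 3

class Job:
    """Stand-in for the tester-facing Job with its level attribute."""
    def __init__(self, level=ResultLevel.DEBUG):
        self.level = level

class JobResult:
    """Stand-in for JobResult: the level getter delegates to the Job."""
    def __init__(self, job):
        self._job = job

    @property
    def level(self):
        return self._job.level
```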
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/Job.py           | 13 ++++++++++++-
 lnst/Controller/RecipeResults.py |  6 +++++-
 2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/lnst/Controller/Job.py b/lnst/Controller/Job.py
index 6cd4a15..4416d9a 100644
--- a/lnst/Controller/Job.py
+++ b/lnst/Controller/Job.py
@@ -15,6 +15,7 @@ import logging
 import signal
 from lnst.Common.JobError import JobError
 from lnst.Common.TestModule import BaseTestModule
+from lnst.Controller.RecipeResults import ResultLevel
 
 class Job(object):
     """Tester facing Job API
@@ -27,12 +28,14 @@ class Job(object):
         print job.stdout
     """
     def __init__(self, namespace, what,
-                 expect=True, json=False, desc=None):
+                 expect=True, json=False, desc=None,
+                 level=ResultLevel.DEBUG):
         self._what = what
         self._expect = expect
         self._json = json
         self._netns = namespace
         self._desc = desc
+        self._level = level
 
         self._res = None
 
@@ -109,6 +112,14 @@ class Job(object):
         except:
             return None
 
+    @property
+    def level(self):
+        return self._level
+
+    @level.setter
+    def level(self, value):
+        self._level = value
+
     @property
     def passed(self):
         """Indicates whether or not the Job passed
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py
index 2dcafdc..97308bf 100644
--- a/lnst/Controller/RecipeResults.py
+++ b/lnst/Controller/RecipeResults.py
@@ -62,6 +62,10 @@ class JobResult(BaseResult):
     def job(self):
         return self._job
 
+    @BaseResult.level.getter
+    def level(self):
+        return self.job.level
+
 class JobStartResult(JobResult):
     """Generated automatically when a Job is succesfully started on a slave"""
     @BaseResult.short_desc.getter
@@ -74,7 +78,7 @@ class JobFinishResult(JobResult):
     success depends on the Job passed value and returns the data returned
     as a result of the Job."""
     def __init__(self, job):
-        super(JobFinishResult, self).__init__(job, True)
+        super(JobFinishResult, self).__init__(job, None)
 
     @BaseResult.success.getter
     def success(self):
From: Ondrej Lichtner <olichtne@redhat.com>
The tester should be able to assign a specific level to the job when running it.
Use level DEBUG by default.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/Namespace.py | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py
index 16458a0..caee05b 100644
--- a/lnst/Controller/Namespace.py
+++ b/lnst/Controller/Namespace.py
@@ -22,6 +22,7 @@ from lnst.Devices.VirtualDevice import VirtualDevice
 from lnst.Devices.RemoteDevice import RemoteDevice
 from lnst.Controller.Common import ControllerError
 from lnst.Controller.Job import Job
+from lnst.Controller.RecipeResults import ResultLevel
 
 class HostError(ControllerError):
     pass
@@ -71,7 +72,7 @@ class Namespace(object):
         return self._name
 
     def run(self, what, bg=False, fail=False, timeout=DEFAULT_TIMEOUT,
-            json=False, desc=None):
+            json=False, desc=None, job_level=ResultLevel.DEBUG):
         """
         Args:
             what (mandatory) -- what should be run on the host. Can be either a
@@ -96,7 +97,8 @@ class Namespace(object):
             the Job object will be automatically updated.
         """
 
-        job = Job(self, what, expect=not fail, json=json, desc=desc)
+        job = Job(self, what, expect=not fail, json=json, desc=desc,
+                  level=job_level)
 
         try:
            self._machine.run_job(job)
From: Ondrej Lichtner <olichtne@redhat.com>
The Result class should be used for results explicitly created by the tester (instead of the automatic results of LNST jobs). This makes Result class instances more important, so I set their default level to IMPORTANT.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/RecipeResults.py | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/lnst/Controller/RecipeResults.py b/lnst/Controller/RecipeResults.py
index 97308bf..6b42a83 100644
--- a/lnst/Controller/RecipeResults.py
+++ b/lnst/Controller/RecipeResults.py
@@ -97,11 +97,13 @@ class Result(BaseResult):
 
     Will be created when the tester calls the Recipe interface for adding
     results."""
-    def __init__(self, success, short_desc="", data=None):
+    def __init__(self, success, short_desc="", data=None,
+                 level=ResultLevel.IMPORTANT):
         super(Result, self).__init__(success)
 
         self._short_desc = short_desc
         self._data = data
+        self._level = level
 
     @BaseResult.short_desc.getter
     def short_desc(self):
@@ -110,3 +112,7 @@ class Result(BaseResult):
     @BaseResult.data.getter
     def data(self):
         return self._data
+
+    @BaseResult.level.getter
+    def level(self):
+        return self._level
From: Ondrej Lichtner <olichtne@redhat.com>
The format_run method will now only process Result objects with a result level equal to or lower than the result level set during initialization of the formatter.

The default result level of the formatter is set to IMPORTANT, but we might change this in the future based on user experience.
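The filtering rule can be sketched like this (a simplified stand-in for the formatter, with plain dicts in place of result objects): failures are kept unconditionally, and everything else is kept only if it is at least as important as the configured level, which works because IntEnum members compare as their integer values.

```python
from enum import IntEnum

class ResultLevel(IntEnum):
    IMPORTANT = 1
    NORMAL = 2
    DEBUG = 3

def filter_results(results, level=ResultLevel.IMPORTANT):
    """Keep failed results regardless of level; otherwise keep only
    results whose level is numerically <= the configured threshold
    (lower value == more important)."""
    return [r for r in results
            if r["success"] is False or r["level"] <= level]
```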
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/RunSummaryFormatter.py | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py
index d035717..182b103 100644
--- a/lnst/Controller/RunSummaryFormatter.py
+++ b/lnst/Controller/RunSummaryFormatter.py
@@ -17,14 +17,16 @@ from lnst.Controller.MachineMapper import format_match_description
 from lnst.Controller.Recipe import BaseRecipe, RecipeRun
 from lnst.Controller.RecipeResults import BaseResult, JobResult, Result
 from lnst.Controller.RecipeResults import JobStartResult, JobFinishResult
+from lnst.Controller.RecipeResults import ResultLevel
 
 class RunFormatterException(ControllerError):
     pass
 
 class RunSummaryFormatter(object):
-    def __init__(self):
+    def __init__(self, level=ResultLevel.IMPORTANT):
         #TODO changeable format?
         self._format = ""
+        self._level = level
 
     def _format_success(self, success):
         if success:
@@ -75,12 +77,14 @@ class RunSummaryFormatter(object):
 
         output_lines.extend(format_match_description(run.match).split('\n'))
 
+        filtered_results = [res for res in run.results if
+                            res.success == False or res.level <= self._level]
         overall_result = True
-        for i, res in enumerate(run.results):
+        for i, res in enumerate(filtered_results):
             overall_result = overall_result and res.success
 
             try:
-                next_res = run.results[i+1]
+                next_res = filtered_results[i+1]
                 if (isinstance(res, JobStartResult) and
                     isinstance(next_res, JobFinishResult) and
                     res.job.host == next_res.job.host and
From: Ondrej Lichtner <olichtne@redhat.com>
Implements TestModule classes IperfClient and IperfServer that can be used to measure network performance. Class parameters map directly to command line arguments of the iperf3 utility. The PASS/FAIL of the client or server job indicates whether or not the throughput measurement was successful; no check against zero or a baseline is made by the TestModules.
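As an illustration of the "parameters map directly to CLI arguments" idea, here is a simplified, hypothetical command composer (a plain dict stands in for the module's params object; the flag names are real iperf3 options, but the function is a sketch, not the module's implementation):

```python
def compose_iperf_client_cmd(params):
    """Emit an iperf3 flag only when the corresponding parameter was set,
    mirroring how the module's optional parameters map onto the CLI."""
    flag_map = {"port": "-p", "blksize": "-l", "mss": "-M",
                "cpu_bind": "-A", "parallel": "-P"}
    parts = ["iperf3", "-c", str(params["server"]),
             "-J", "-t", str(params.get("duration", 10))]
    for key, flag in flag_map.items():
        if key in params:
            parts.extend([flag, str(params[key])])
    # udp and sctp are mutually exclusive test selectors
    if params.get("udp"):
        parts.append("--udp")
    elif params.get("sctp"):
        parts.append("--sctp")
    return " ".join(parts)
```

For example, `compose_iperf_client_cmd({"server": "192.0.2.1", "port": 5201, "udp": True})` yields `iperf3 -c 192.0.2.1 -J -t 10 -p 5201 --udp`.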
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Tests/Iperf.py    | 147 +++++++++++++++++++++++++++++++++++++++++
 lnst/Tests/__init__.py |   1 +
 2 files changed, 148 insertions(+)
 create mode 100644 lnst/Tests/Iperf.py
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py new file mode 100644 index 0000000..ec85f3b --- /dev/null +++ b/lnst/Tests/Iperf.py @@ -0,0 +1,147 @@ +import logging +import errno +import re +import signal +import time +import subprocess +import json +from lnst.Common.TestModule import BaseTestModule, TestModuleError +from lnst.Common.Parameters import IntParam, IpParam, StrParam, Param, BoolParam +from lnst.Common.Parameters import HostnameParam +from lnst.Common.Utils import is_installed + +class IperfBase(BaseTestModule): + def run(self): + self._res_data = {} + if not is_installed("iperf3"): + self._res_data["msg"] = "Iperf is not installed on this machine!" + logging.error(self._res_data["msg"]) + return False + + cmd = self._compose_cmd() + + logging.debug("compiled command: %s" % cmd) + logging.debug("running as {} ...".format(self._role)) + + server = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, + stderr=subprocess.PIPE, close_fds=True) + + try: + stdout, stderr = server.communicate() + except KeyboardInterrupt: + pass + + try: + self._res_data["data"] = json.loads(stdout) + except: + self._res_data["data"] = stdout + + self._res_data["stderr"] = stderr + + if stderr != "": + self._res_data["msg"] = "errors reported by iperf" + logging.error(self._res_data["msg"]) + logging.error(self._res_data["stderr"]) + return False + + if server.returncode > 0: + self._res_data["msg"] = "{} returncode = {}".format( + self._role, server.returncode) + logging.error(self._res_data["msg"]) + return False + + return True + +class IperfServer(IperfBase): + bind = HostnameParam() + port = IntParam() + cpu_bind = IntParam() + opts = StrParam() + oneoff = BoolParam(default=False) + + _role = "server" + def _compose_cmd(self): + bind = "" + port = "" + + if "bind" in self.params: + bind = "-B {}".format(self.params.bind) + + if "port" in self.params: + port = "-p {}".format(self.params.port) + + if "cpu_bind" in self.params: + cpu = "-A 
{:d}".format(self.params.cpu_bind) + else: + cpu = "" + + if "oneoff" in self.params and self.params.oneoff: + oneoff = "-1" + + cmd = "iperf3 -s {bind} -J {port} {cpu} {oneoff} {opts}".format( + bind=bind, port=port, cpu=cpu, oneoff=oneoff, + opts=self.params.opts if "opts" in self.params else "") + + return cmd + + +class IperfClient(IperfBase): + server = HostnameParam(mandatory=True) + duration = IntParam(default=10) + udp = BoolParam(default=False) + sctp = BoolParam(default=False) + port = IntParam() + blksize = IntParam() + mss = IntParam() + cpu_bind = IntParam() + parallel = IntParam() + opts = StrParam() + + _role = "client" + + def __init__(self, **kwargs): + super(IperfClient, self).__init__(**kwargs) + + if self.params.udp and self.params.sctp: + raise TestModuleError("Parameters udp and sctp are mutually exclusive!") + + def _compose_cmd(self): + port = "" + + if "port" in self.params: + port = "-p {:d}".format(self.params.port) + + if "blksize" in self.params: + blksize = "-l {:d}".format(self.params.blksize) + else: + blksize = "" + + if "mss" in self.params: + mss = "-M {:d}".format(self.params.mss) + else: + mss = "" + + if "cpu_bind" in self.params: + cpu = "-A {:d}".format(self.params.cpu_bind) + else: + cpu = "" + + if "parallel" in self.params: + parallel = "-P {:d}".format(self.params.parallel) + + if self.params.udp: + test = "--udp" + elif self.params.sctp: + test = "--sctp" + else: + test = "" + + cmd = ("iperf3 -c {server} -J -t {duration}" + " {cpu} {test} {mss} {blksize} {parallel}" + " {opts}".format( + server=self.params.server, duration=self.params.duration, + cpu=cpu, test=test, mss=mss, blksize=blksize, + parallel=parallel, + opts=self.params.opts if "opts" in self.params else "")) + + return cmd diff --git a/lnst/Tests/__init__.py b/lnst/Tests/__init__.py index 69b757a..bca4893 100644 --- a/lnst/Tests/__init__.py +++ b/lnst/Tests/__init__.py @@ -13,5 +13,6 @@ olichtne@redhat.com (Ondrej Lichtner) """
from lnst.Tests.IcmpPing import IcmpPing +from lnst.Tests.Iperf import IperfClient, IperfServer
#TODO add support for test classes from lnst-ctl.conf
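The server and client classes above build their iperf3 command line from optional parameters. A standalone sketch of that flag-composition pattern, with a plain dict standing in for `self.params`: every fragment defaults to an empty string, so the final join never references an unbound name (the posted `IperfServer._compose_cmd` leaves `oneoff` unset when the parameter is absent, and `IperfClient._compose_cmd` does the same with `parallel`).

```python
# Standalone sketch of the optional-flag composition pattern used by
# IperfServer._compose_cmd. A plain dict stands in for self.params;
# every fragment defaults to "" so nothing is left unbound.

def compose_server_cmd(params):
    """Build an iperf3 server command line from optional parameters."""
    bind = "-B {}".format(params["bind"]) if "bind" in params else ""
    port = "-p {}".format(params["port"]) if "port" in params else ""
    cpu = "-A {:d}".format(params["cpu_bind"]) if "cpu_bind" in params else ""
    oneoff = "-1" if params.get("oneoff") else ""
    opts = params.get("opts", "")
    # join the fragments, dropping the gaps left by empty ones
    fragments = ["iperf3", "-s", bind, "-J", port, cpu, oneoff, opts]
    return " ".join(f for f in fragments if f)

print(compose_server_cmd({"bind": "192.168.1.1", "port": 5201}))
# iperf3 -s -B 192.168.1.1 -J -p 5201
```

The real module then runs the composed command through subprocess.Popen; this sketch only covers command composition, with flag names mirroring the patch.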
From: Ondrej Lichtner olichtne@redhat.com
Renamed the IcmpPing test module to just Ping. This implements the Ping TestModule class that can be used to test connectivity between hosts. The module returns statistics on the measured success rate of the ping. The PASS/FAIL of the Ping test module job indicates whether or not the measurement was successful; the actual connectivity status needs to be evaluated outside of this class.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Tests/IcmpPing.py | 63 ---------------------------- lnst/Tests/Ping.py | 93 ++++++++++++++++++++++++++++++++++++++++++ lnst/Tests/__init__.py | 2 +- 3 files changed, 94 insertions(+), 64 deletions(-) delete mode 100644 lnst/Tests/IcmpPing.py create mode 100644 lnst/Tests/Ping.py
diff --git a/lnst/Tests/IcmpPing.py b/lnst/Tests/IcmpPing.py deleted file mode 100644 index ea3cd99..0000000 --- a/lnst/Tests/IcmpPing.py +++ /dev/null @@ -1,63 +0,0 @@ -import re -import logging -from lnst.Common.Parameters import IntParam, FloatParam, IpParam, DeviceParam -from lnst.Common.TestModule import BaseTestModule, TestModuleError -from lnst.Common.ExecCmd import exec_cmd - -class IcmpPing(BaseTestModule): - """Port of old IcmpPing test modules""" - dst = IpParam(mandatory=True) - count = IntParam(default=10) - interval = FloatParam(default=1.0) - iface = DeviceParam() - size = IntParam() - limit_rate = IntParam(default=80) - - def _compose_cmd(self): - cmd = "ping %s" % self.params.dst - cmd += " -c %d" % self.params.count - cmd += " -i %f" % self.params.interval - if "iface" in self.params: - cmd += " -I %s" % self.params.iface.name - - if "size" in self.params: - cmd += " -s %d" % self.params.size - return cmd - - def run(self): - cmd = self._compose_cmd() - - limit_rate = self.params.limit_rate - - data_stdout = exec_cmd(cmd, die_on_err=False)[0] - stat_pttr1 = r'(\d+) packets transmitted, (\d+) received' - stat_pttr2 = r'rtt min/avg/max/mdev = (\d+.\d+)/(\d+.\d+)/(\d+.\d+)/(\d+.\d+) ms' - - match = re.search(stat_pttr1, data_stdout) - if not match: - self._res_data = {"msg": "expected pattern not found"} - return False - - trans_pkts, recv_pkts = match.groups() - rate = int(round((float(recv_pkts) / float(trans_pkts)) * 100)) - logging.debug("Transmitted "%s", received "%s", " - "rate "%d%%", limit_rate "%d%%"" - % (trans_pkts, recv_pkts, rate, limit_rate)) - - self._res_data = {"rate": rate, - "limit_rate": limit_rate} - - match = re.search(stat_pttr2, data_stdout) - if match: - tmin, tavg, tmax, tmdev = [float(x) for x in match.groups()] - logging.debug("rtt min "%.3f", avg "%.3f", max "%.3f", " - "mdev "%.3f"" % (tmin, tavg, tmax, tmdev)) - - self._res_data["rtt_min"] = tmin - self._res_data["rtt_max"] = tmax - - if rate < limit_rate: - 
self._res_data["msg"] = "rate is lower than limit" - return False - - return True diff --git a/lnst/Tests/Ping.py b/lnst/Tests/Ping.py new file mode 100644 index 0000000..f2dabe3 --- /dev/null +++ b/lnst/Tests/Ping.py @@ -0,0 +1,93 @@ +import re +import logging +import subprocess +from lnst.Common.Parameters import IntParam, FloatParam, IpParam, DeviceOrIpParam +from lnst.Common.TestModule import BaseTestModule, TestModuleError +from lnst.Common.ExecCmd import exec_cmd +from lnst.Common.Utils import is_installed + +class Ping(BaseTestModule): + """Port of old IcmpPing test modules""" + dst = IpParam(mandatory=True) + count = IntParam(default=10) + interval = FloatParam(default=1.0) + interface = DeviceOrIpParam(mandatory=False) + size = IntParam() + + def _compose_cmd(self): + cmd = "ping %s" % self.params.dst + cmd += " -c %d" % self.params.count + cmd += " -i %f" % self.params.interval + if "interface" in self.params: + from lnst.Devices.Device import Device + if isinstance(self.params.interface, Device): + cmd += " -I %s" % self.params.interface.name + else: + cmd += " -I %s" % str(self.params.interface) + + if "size" in self.params: + cmd += " -s %d" % self.params.size + return cmd + + def run(self): + self._res_data = {} + if not is_installed("ping"): + self._res_data["msg"] = "Ping is not installed on this machine!" 
+ logging.error(self._res_data["msg"]) + return False + + cmd = self._compose_cmd() + + logging.debug("compiled command: {}".format(cmd)) + + ping_process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, + stderr=subprocess.PIPE, close_fds=True) + + try: + stdout, stderr = ping_process.communicate() + except KeyboardInterrupt: + pass + + self._res_data["stderr"] = stderr + + if stderr != "": + self._res_data["msg"] = "errors reported by ping" + logging.error(self._res_data["msg"]) + logging.error(self._res_data["stderr"]) + return False + + if ping_process.returncode > 0: + self._res_data["msg"] = "returncode = {}".format(ping_process.returncode) + logging.error(self._res_data["msg"]) + return False + + stat_pttr1 = r'(\d+) packets transmitted, (\d+) received' + stat_pttr2 = r'rtt min/avg/max/mdev = (\d+.\d+)/(\d+.\d+)/(\d+.\d+)/(\d+.\d+) ms' + + match = re.search(stat_pttr1, stdout) + if not match: + self._res_data = {"msg": "expected pattern not found"} + logging.error(self._res_data["msg"]) + return False + else: + trans_pkts, recv_pkts = match.groups() + rate = int(round((float(recv_pkts) / float(trans_pkts)) * 100)) + logging.debug("Transmitted '{}', received '{}', " + "rate '{}%'".format(trans_pkts, recv_pkts, rate)) + + self._res_data = {"trans_pkts": trans_pkts, + "recv_pkts": recv_pkts, + "rate": rate} + + match = re.search(stat_pttr2, stdout) + if match: + tmin, tavg, tmax, tmdev = [float(x) for x in match.groups()] + logging.debug("rtt min "%.3f", avg "%.3f", max "%.3f", " + "mdev "%.3f"" % (tmin, tavg, tmax, tmdev)) + + self._res_data["rtt_min"] = tmin + self._res_data["rtt_max"] = tmax + self._res_data["rtt_avg"] = tavg + self._res_data["rtt_mdev"] = tmdev + + return True diff --git a/lnst/Tests/__init__.py b/lnst/Tests/__init__.py index bca4893..a65890c 100644 --- a/lnst/Tests/__init__.py +++ b/lnst/Tests/__init__.py @@ -12,7 +12,7 @@ __author__ = """ olichtne@redhat.com (Ondrej Lichtner) """
-from lnst.Tests.IcmpPing import IcmpPing +from lnst.Tests.Ping import Ping from lnst.Tests.Iperf import IperfClient, IperfServer
#TODO add support for test classes from lnst-ctl.conf
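The interesting part of Ping.run() is parsing the ping(8) statistics out of stdout. A standalone sketch of that parsing, exercised against a canned transcript instead of a live run (the decimal points in the rtt pattern are escaped here, unlike in the posted patch):

```python
import re

# Standalone sketch of the statistics parsing done in Ping.run().
stat_pttr1 = r'(\d+) packets transmitted, (\d+) received'
stat_pttr2 = r'rtt min/avg/max/mdev = (\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+) ms'

def parse_ping_output(stdout):
    res = {}
    match = re.search(stat_pttr1, stdout)
    if not match:
        return {"msg": "expected pattern not found"}
    trans_pkts, recv_pkts = match.groups()
    res["trans_pkts"] = trans_pkts
    res["recv_pkts"] = recv_pkts
    res["rate"] = int(round((float(recv_pkts) / float(trans_pkts)) * 100))
    match = re.search(stat_pttr2, stdout)
    if match:  # the rtt line is only printed when at least one reply arrived
        tmin, tavg, tmax, tmdev = [float(x) for x in match.groups()]
        res.update(rtt_min=tmin, rtt_avg=tavg, rtt_max=tmax, rtt_mdev=tmdev)
    return res

sample = ("10 packets transmitted, 10 received, 0% packet loss, time 901ms\n"
          "rtt min/avg/max/mdev = 0.042/0.054/0.082/0.011 ms")
print(parse_ping_output(sample)["rate"])  # 100
```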
Mon, May 21, 2018 at 10:42:45AM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Renamed the IcmpPing test module to just Ping. This implements the Ping TestModule class that can be used to test connectivity between hosts. The module returns statistics on the measured success rate of the ping. The PASS/FAIL of the Ping test module job indicates whether or not the measurement was successful; the actual connectivity status needs to be evaluated outside of this class.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
diff --git a/lnst/Tests/Ping.py b/lnst/Tests/Ping.py new file mode 100644 index 0000000..f2dabe3 --- /dev/null +++ b/lnst/Tests/Ping.py @@ -0,0 +1,93 @@
match = re.search(stat_pttr1, stdout)
if not match:
self._res_data = {"msg": "expected pattern not found"}
logging.error(self._res_data["msg"])
return False
else:
trans_pkts, recv_pkts = match.groups()
rate = int(round((float(recv_pkts) / float(trans_pkts)) * 100))
logging.debug("Transmitted '{}', received '{}', "
"rate '{}%'".format(trans_pkts, recv_pkts, rate))
self._res_data = {"trans_pkts": trans_pkts,
"recv_pkts": recv_pkts,
"rate": rate}
match = re.search(stat_pttr2, stdout)
if match:
tmin, tavg, tmax, tmdev = [float(x) for x in match.groups()]
logging.debug("rtt min \"%.3f\", avg \"%.3f\", max \"%.3f\", "
"mdev \"%.3f\"" % (tmin, tavg, tmax, tmdev))
self._res_data["rtt_min"] = tmin
self._res_data["rtt_max"] = tmax
self._res_data["rtt_avg"] = tavg
self._res_data["rtt_mdev"] = tmdev
missing "if not match" as in stat_pttr1 matching
From: Ondrej Lichtner olichtne@redhat.com
The method returns a list of IP addresses that match the provided selectors. The selectors are key=value pairs where 'key' is an attribute of the ipaddress object; if an ipaddress object matches all of the selectors, it is added to the list returned by the method.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Devices/Device.py | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 0f84a02..2fa4a58 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -521,6 +521,21 @@ class Device(object): log_exc_traceback() raise DeviceConfigError("IP address flush failed")
+ def ips_filter(self, **selectors): + result = [] + for addr in self.ips: + match = True + for selector, value in selectors.items(): + try: + if getattr(addr, selector) != value: + match = False + break + except: + pass + if match: + result.append(addr) + return result + def up(self): """set device up""" with pyroute2.IPRoute() as ipr:
On Mon, 2018-05-21 at 10:42 +0200, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
The method returns a list of IP addresses that match the provided selectors. The selectors are key=value pairs where 'key' is an attribute of the ipaddress object; if an ipaddress object matches all of the selectors, it is added to the list returned by the method.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/Devices/Device.py | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py index 0f84a02..2fa4a58 100644 --- a/lnst/Devices/Device.py +++ b/lnst/Devices/Device.py @@ -521,6 +521,21 @@ class Device(object): log_exc_traceback() raise DeviceConfigError("IP address flush failed")
- def ips_filter(self, **selectors):
result = []
for addr in self.ips:
match = True
for selector, value in selectors.items():
try:
if getattr(addr, selector) != value:
match = False
break
except:
If an exception is raised by the "getattr" call or by the comparison, the variable "match" is not set to False. Suggested change:
            except:
                match = False
if match:
result.append(addr)
return result
- def up(self): """set device up""" with pyroute2.IPRoute() as ipr:
From: Ondrej Lichtner olichtne@redhat.com
The devices property of a Namespace should only list the mapped devices of the namespace; I added a device_database property to access all the devices (including unmapped ones).
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Namespace.py | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py index caee05b..3f99eaa 100644 --- a/lnst/Controller/Namespace.py +++ b/lnst/Controller/Namespace.py @@ -56,7 +56,16 @@ class Namespace(object):
@property def devices(self): - """List of devices available in the Namespace""" + """List of mapped devices available in the Namespace""" + ret = [] + for x in self._objects.values(): + if isinstance(x, Device) and x.netns == self: + ret.append(x) + return ret + + @property + def device_database(self): + """List of all devices (including unmapped) available in the Namespace""" ret = [] for x in self._machine._device_database.values(): if isinstance(x, Device) and x.netns == self:
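The split between the two properties can be illustrated with a minimal sketch. Plain classes stand in for the real Namespace/Device types, and for simplicity the device database lives on the namespace itself rather than on `self._machine` as in the patch:

```python
# Minimal sketch: 'devices' walks only the objects mapped into the
# namespace, while 'device_database' walks the full database of devices
# the slave reported. Simplified stand-ins for the real LNST classes.

class Device:
    def __init__(self, name, netns):
        self.name = name
        self.netns = netns

class Namespace:
    def __init__(self):
        self._objects = {}          # only devices mapped by the recipe
        self._device_database = {}  # every device, mapped or not

    @property
    def devices(self):
        return [x for x in self._objects.values()
                if isinstance(x, Device) and x.netns is self]

    @property
    def device_database(self):
        return [x for x in self._device_database.values()
                if isinstance(x, Device) and x.netns is self]

ns = Namespace()
eth0 = Device("eth0", ns)
lo = Device("lo", ns)
ns._objects["eth0"] = eth0                    # mapped by the recipe
ns._device_database.update(eth0=eth0, lo=lo)  # full database
print([d.name for d in ns.devices])                 # ['eth0']
print(sorted(d.name for d in ns.device_database))   # ['eth0', 'lo']
```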
From: Ondrej Lichtner olichtne@redhat.com
I changed versioning-related code a couple of commits ago, breaking the setup.py script. This fixes the issue.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- setup.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/setup.py b/setup.py index 322731f..870380c 100755 --- a/setup.py +++ b/setup.py @@ -23,7 +23,7 @@ import gzip import os from time import gmtime, strftime from distutils.core import setup -from lnst.Common.Version import LNSTMajorVersion +from lnst.Common.Version import lnst_version
def process_template(template_path, values): template_name_re = ".in$" @@ -182,7 +182,7 @@ DATA_FILES = CONFIG + TEST_MODULES + MULTICAST_TEST_TOOLS + MAN_PAGES + \ SCHEMAS + BASH_COMP + RECIPE_FILES + RESULT_XSLT_DATA
setup(name="lnst", - version=LNSTMajorVersion, + version=lnst_version.version, description="Linux Network Stack Test", author="LNST Team", author_email="lnst-developers@lists.fedorahosted.org",
From: Ondrej Lichtner olichtne@redhat.com
Namespace objects shouldn't auto-convert to strings using the namespace name. This can lead to confusion while debugging recipes or LNST code when printing or comparing namespace objects.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Controller/Namespace.py | 3 --- 1 file changed, 3 deletions(-)
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py index 3f99eaa..af5bda1 100644 --- a/lnst/Controller/Namespace.py +++ b/lnst/Controller/Namespace.py @@ -211,6 +211,3 @@ class Namespace(object): return True else: return False - - def __str__(self): - return str(self.name)
From: Ondrej Lichtner olichtne@redhat.com
Parameter types added:
* BoolParam - checks that the value is a boolean
* HostnameParam - checks that the value is an IP address or a resolvable hostname
* DictParam - checks that the value is a dictionary
* DeviceOrIpParam - checks that the value is a Device object/reference or an IP address
Modified IpParam - the type check is now based on isinstance of BaseIpAddress rather than the ipaddress factory method.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Common/Parameters.py | 49 +++++++++++++++++++++++++++++++++++---- 1 file changed, 44 insertions(+), 5 deletions(-)
diff --git a/lnst/Common/Parameters.py b/lnst/Common/Parameters.py index 43faf6d..0a331e3 100644 --- a/lnst/Common/Parameters.py +++ b/lnst/Common/Parameters.py @@ -15,8 +15,9 @@ olichtne@redhat.com (Ondrej Lichtner) """
import copy +import socket from lnst.Common.DeviceRef import DeviceRef -from lnst.Common.IpAddress import ipaddress +from lnst.Common.IpAddress import BaseIpAddress from lnst.Common.LnstError import LnstError
class ParamError(LnstError): @@ -52,13 +53,31 @@ class StrParam(Param): except ValueError: raise ParamError("Value must be a string")
+class BoolParam(Param): + def type_check(self, value): + if isinstance(value, bool): + return value + else: + raise ParamError("Value must be a boolean") + class IpParam(Param): def type_check(self, value): + if isinstance(value, BaseIpAddress): + return value + else: + raise ParamError("Value must be a BaseIpAddress object") + +class HostnameParam(Param): + def type_check(self, value): + if isinstance(value, BaseIpAddress): + return value + try: - return ipaddress(value) - except ValueError: - raise ParamError("Value must be a BaseIpAddress, string ip address or a Device object. Not {}" - .format(type(value))) + #TODO check by regex? + socket.getaddrinfo(value) + return value + except: + raise ParamError("Value must be a BaseIpAddress object or a valid hostname")
class DeviceParam(Param): def type_check(self, value): @@ -71,6 +90,26 @@ class DeviceParam(Param): raise ParamError("Value must be a Device or DeviceRef object." " Not {}".format(type(value)))
+class DeviceOrIpParam(Param): + def type_check(self, value): + #runtime import this because the Device class arrives on the Slave + #during recipe execution, not during Slave init + from lnst.Devices.Device import Device + if (isinstance(value, Device) or isinstance(value, DeviceRef) or + isinstance(value, BaseIpAddress)): + return value + else: + raise ParamError("Value must be a Device, DeviceRef or BaseIpAddress object." + " Not {}".format(type(value))) + +class DictParam(Param): + def type_check(self, value): + if not isinstance(value, dict): + raise ParamError("Value must be a Dictionary. Not {}" + .format(type(value))) + else: + return value + class Parameters(object): def __init__(self): self._attrs = {}
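A standalone sketch of the type_check pattern the new parameter classes follow, with simplified stand-ins for ParamError and the Param base. Note that the hostname check here passes an explicit port of None, since `socket.getaddrinfo()` requires both a host and a port argument (the posted patch calls it with a single argument):

```python
import socket

# Standalone sketch of the type_check pattern used by the new
# parameter classes; ParamError and the classes are simplified
# stand-ins for the real lnst.Common.Parameters types.

class ParamError(Exception):
    pass

class BoolParam:
    def type_check(self, value):
        if isinstance(value, bool):
            return value
        raise ParamError("Value must be a boolean")

class DictParam:
    def type_check(self, value):
        if isinstance(value, dict):
            return value
        raise ParamError("Value must be a Dictionary. Not {}".format(type(value)))

class HostnameParam:
    def type_check(self, value):
        try:
            # getaddrinfo needs (host, port); port None means "any"
            socket.getaddrinfo(value, None)
            return value
        except (socket.gaierror, TypeError):
            raise ParamError("Value must be a valid hostname")

assert BoolParam().type_check(True) is True
assert DictParam().type_check({"a": 1}) == {"a": 1}
try:
    BoolParam().type_check("yes")
except ParamError as e:
    print(e)  # Value must be a boolean
```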
From: Ondrej Lichtner olichtne@redhat.com
This Python module will contain all of the upstream-tracked recipe classes, the same way we previously had the ./recipes/ directory for XML-based recipes.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/__init__.py | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 lnst/Recipes/__init__.py
diff --git a/lnst/Recipes/__init__.py b/lnst/Recipes/__init__.py new file mode 100644 index 0000000..e69de29
From: Ondrej Lichtner olichtne@redhat.com
This module will contain our Early Network Regression Tests ported from the ./recipes/regression_tests/phase* recipes.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/ENRT/__init__.py | 0 setup.py | 2 +- 2 files changed, 1 insertion(+), 1 deletion(-) create mode 100644 lnst/Recipes/ENRT/__init__.py
diff --git a/lnst/Recipes/ENRT/__init__.py b/lnst/Recipes/ENRT/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/setup.py b/setup.py index 870380c..872d2c1 100755 --- a/setup.py +++ b/setup.py @@ -105,7 +105,7 @@ project website https://fedorahosted.org/lnst. """
PACKAGES = ["lnst", "lnst.Common", "lnst.Controller", "lnst.Slave", - "lnst.RecipeCommon", "lnst.Devices", "lnst.Tests" ] + "lnst.RecipeCommon", "lnst.Recipes", "lnst.Devices", "lnst.Tests" ] SCRIPTS = ["lnst-ctl", "lnst-slave", "lnst-pool-wizard"]
RECIPE_FILES = []
From: Ondrej Lichtner olichtne@redhat.com
The regression tests will be ported into the lnst.Recipes.ENRT module based on their versions on the master branch, so I'm removing these outdated XML recipes.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- .../regression_tests/phase1/3_vlans.README | 75 --- recipes/regression_tests/phase1/3_vlans.py | 332 ------------- recipes/regression_tests/phase1/3_vlans.xml | 107 ---- .../3_vlans_over_active_backup_bond.README | 84 ---- .../3_vlans_over_active_backup_bond.xml | 121 ----- .../phase1/3_vlans_over_bond.py | 332 ------------- .../3_vlans_over_round_robin_bond.README | 84 ---- .../phase1/3_vlans_over_round_robin_bond.xml | 114 ----- .../phase1/active_backup_bond.README | 81 --- .../phase1/active_backup_bond.xml | 50 -- .../phase1/active_backup_double_bond.README | 81 --- .../phase1/active_backup_double_bond.xml | 60 --- .../regression_tests/phase1/bonding_test.py | 305 ------------ .../regression_tests/phase1/ping_flood.README | 38 -- .../regression_tests/phase1/ping_flood.xml | 30 -- .../phase1/round_robin_bond.README | 81 --- .../phase1/round_robin_bond.xml | 50 -- .../phase1/round_robin_double_bond.README | 81 --- .../phase1/round_robin_double_bond.xml | 58 --- .../phase1/simple_netperf.README | 72 --- .../regression_tests/phase1/simple_netperf.py | 277 ----------- .../phase1/simple_netperf.xml | 39 -- .../regression_tests/phase1/simple_ping.py | 43 -- ...dge_2_vlans_over_active_backup_bond.README | 106 ---- ...bridge_2_vlans_over_active_backup_bond.xml | 174 ------- .../virtual_bridge_2_vlans_over_bond.py | 402 --------------- .../virtual_bridge_vlan_in_guest.README | 82 --- .../phase1/virtual_bridge_vlan_in_guest.py | 331 ------------ .../phase1/virtual_bridge_vlan_in_guest.xml | 80 --- .../phase1/virtual_bridge_vlan_in_host.README | 82 --- .../phase1/virtual_bridge_vlan_in_host.py | 331 ------------ .../phase1/virtual_bridge_vlan_in_host.xml | 80 --- .../3_vlans_over_active_backup_team.README | 84 ---- .../3_vlans_over_active_backup_team.xml | 125 ----- .../3_vlans_over_round_robin_team.README | 84 ---- .../phase2/3_vlans_over_round_robin_team.xml | 118 ----- .../phase2/3_vlans_over_team.py | 332 
------------- .../phase2/active_backup_double_team.README | 81 --- .../phase2/active_backup_double_team.xml | 68 --- .../phase2/active_backup_team.README | 81 --- .../phase2/active_backup_team.xml | 54 -- ...e_backup_team_vs_active_backup_bond.README | 81 --- ...tive_backup_team_vs_active_backup_bond.xml | 64 --- ...ive_backup_team_vs_round_robin_bond.README | 81 --- ...active_backup_team_vs_round_robin_bond.xml | 64 --- .../phase2/round_robin_double_team.README | 81 --- .../phase2/round_robin_double_team.xml | 68 --- .../phase2/round_robin_team.README | 81 --- .../phase2/round_robin_team.xml | 52 -- ...nd_robin_team_vs_active_backup_bond.README | 81 --- ...round_robin_team_vs_active_backup_bond.xml | 64 --- ...ound_robin_team_vs_round_robin_bond.README | 81 --- .../round_robin_team_vs_round_robin_bond.xml | 64 --- recipes/regression_tests/phase2/team_test.py | 470 ------------------ ...dge_2_vlans_over_active_backup_bond.README | 77 --- ..._bridge_2_vlans_over_active_backup_bond.py | 381 -------------- ...bridge_2_vlans_over_active_backup_bond.xml | 135 ----- .../virtual_ovs_bridge_vlan_in_guest.README | 55 -- .../virtual_ovs_bridge_vlan_in_guest.py | 319 ------------ .../virtual_ovs_bridge_vlan_in_guest.xml | 76 --- .../virtual_ovs_bridge_vlan_in_host.README | 58 --- .../phase2/virtual_ovs_bridge_vlan_in_host.py | 319 ------------ .../virtual_ovs_bridge_vlan_in_host.xml | 74 --- .../phase3/2_virt_ovs_vxlan.README | 129 ----- .../phase3/2_virt_ovs_vxlan.py | 279 ----------- .../phase3/2_virt_ovs_vxlan.xml | 145 ------ .../phase3/novirt_ovs_vxlan.README | 93 ---- .../phase3/novirt_ovs_vxlan.py | 216 -------- .../phase3/novirt_ovs_vxlan.xml | 87 ---- .../phase3/vxlan_multicast.README | 118 ----- .../phase3/vxlan_multicast.py | 249 ---------- .../phase3/vxlan_multicast.xml | 99 ---- .../phase3/vxlan_remote.README | 86 ---- .../regression_tests/phase3/vxlan_remote.py | 215 -------- .../regression_tests/phase3/vxlan_remote.xml | 65 --- 75 files changed, 9897 
deletions(-) delete mode 100644 recipes/regression_tests/phase1/3_vlans.README delete mode 100644 recipes/regression_tests/phase1/3_vlans.py delete mode 100644 recipes/regression_tests/phase1/3_vlans.xml delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_bond.py delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README delete mode 100644 recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml delete mode 100644 recipes/regression_tests/phase1/active_backup_bond.README delete mode 100644 recipes/regression_tests/phase1/active_backup_bond.xml delete mode 100644 recipes/regression_tests/phase1/active_backup_double_bond.README delete mode 100644 recipes/regression_tests/phase1/active_backup_double_bond.xml delete mode 100644 recipes/regression_tests/phase1/bonding_test.py delete mode 100644 recipes/regression_tests/phase1/ping_flood.README delete mode 100644 recipes/regression_tests/phase1/ping_flood.xml delete mode 100644 recipes/regression_tests/phase1/round_robin_bond.README delete mode 100644 recipes/regression_tests/phase1/round_robin_bond.xml delete mode 100644 recipes/regression_tests/phase1/round_robin_double_bond.README delete mode 100644 recipes/regression_tests/phase1/round_robin_double_bond.xml delete mode 100644 recipes/regression_tests/phase1/simple_netperf.README delete mode 100644 recipes/regression_tests/phase1/simple_netperf.py delete mode 100644 recipes/regression_tests/phase1/simple_netperf.xml delete mode 100644 recipes/regression_tests/phase1/simple_ping.py delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml delete mode 100644 
recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py delete mode 100644 recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_active_backup_team.README delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_active_backup_team.xml delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_round_robin_team.README delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_round_robin_team.xml delete mode 100644 recipes/regression_tests/phase2/3_vlans_over_team.py delete mode 100644 recipes/regression_tests/phase2/active_backup_double_team.README delete mode 100644 recipes/regression_tests/phase2/active_backup_double_team.xml delete mode 100644 recipes/regression_tests/phase2/active_backup_team.README delete mode 100644 recipes/regression_tests/phase2/active_backup_team.xml delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.README delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.xml delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.README delete mode 100644 recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.xml delete mode 100644 recipes/regression_tests/phase2/round_robin_double_team.README delete mode 100644 recipes/regression_tests/phase2/round_robin_double_team.xml delete mode 100644 recipes/regression_tests/phase2/round_robin_team.README delete mode 100644 
recipes/regression_tests/phase2/round_robin_team.xml delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.README delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.xml delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.README delete mode 100644 recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.xml delete mode 100644 recipes/regression_tests/phase2/team_test.py delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.README delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.xml delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.README delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.xml delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.README delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py delete mode 100644 recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.xml delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.README delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.py delete mode 100644 recipes/regression_tests/phase3/2_virt_ovs_vxlan.xml delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.README delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.py delete mode 100644 recipes/regression_tests/phase3/novirt_ovs_vxlan.xml delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.README delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.py delete mode 100644 recipes/regression_tests/phase3/vxlan_multicast.xml 
delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.README delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.py delete mode 100644 recipes/regression_tests/phase3/vxlan_remote.xml
diff --git a/recipes/regression_tests/phase1/3_vlans.README b/recipes/regression_tests/phase1/3_vlans.README deleted file mode 100644 index b559a44..0000000 --- a/recipes/regression_tests/phase1/3_vlans.README +++ /dev/null @@ -1,75 +0,0 @@ -Topology: - - switch - VLAN10 +------+ VLAN10 - +-------------------+ | | +-------------------+ - | VLAN20 | | | | VLAN20 | - | +-------------------+ +-------------------+ | - | | VLAN30 | | | | VLAN30 | | - | | +-----------+ | | +-----------+ | | - | | | +------+ | | | - | | | | | | - +-------+ +-------+ - | | - +--+--+ +--+--+ -+----| eth |----+ +----| eth |----+ -| +-----+ | | +-----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+---------------+ +---------------+ - - -Number of hosts: 2 -Host #1 description: - One ethernet device with 3 VLAN subinterfaces -Host #2 description: - One ethernet device with 3 VLAN subinterfaces -Test name: - 3_vlans.py -Test description (3_vlans.py): - Ping: - + count: 100 - + interval: 0.1s - + between interfaces in the same VLAN (these should pass) - + between interfaces in different VLANs (these should fail) - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between interfaces in the same VLAN - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - 3_vlans.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in our browser and find if in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/3_vlans.py b/recipes/regression_tests/phase1/3_vlans.py deleted file mode 100644 index 8144815..0000000 --- a/recipes/regression_tests/phase1/3_vlans.py +++ /dev/null @@ -1,332 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -vlans = ["vlan10", "vlan20", "vlan30"] -offloads = ["gro", "gso", "tso", "rx", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") 
-pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -m1_phy1 = m1.get_interface("eth1") -m1_phy1.set_mtu(mtu) -m2_phy1 = m2.get_interface("eth1") -m2_phy1.set_mtu(mtu) - -for vlan in vlans: - vlan_if1 = m1.get_interface(vlan) - vlan_if1.set_mtu(mtu) - vlan_if2 = m2.get_interface(vlan) - vlan_if2.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - # this will pin devices irqs to cpu #0 - for m, d in [ (m1, m1_phy1), (m2, m2_phy1) ]: - pin_dev_irqs(m, d, 0) - -ctl.wait(15) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "count" : 100, - "interval" : 0.1 - }) -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "count" : 100, - "interval" : 0.1 - }) - -m1_vlan1 = m1.get_interface(vlans[0]) -m2_vlan1 = m2.get_interface(vlans[0]) - -p_opts = "-L %s" % (m2_vlan1.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (m2_vlan1.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(0) - }) -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(1), - "netperf_opts" : " -6" - }) -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : 
nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -for setting in offload_settings: - #apply offload setting - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - - m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), - dev_features)) - m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), - dev_features)) - if ("rx", "off") in setting: - # when rx offload is turned off some of the cards might get reset - # and link goes down, so wait a few seconds until NIC is ready - ctl.wait(15) - - # Ping test - for vlan1 in vlans: - m1_vlan1 = m1.get_interface(vlan1) - for vlan2 in vlans: - m2_vlan2 = m2.get_interface(vlan2) - - 
ping_mod.update_options({"addr": m2_vlan2.get_ip(0), - "iface": m1_vlan1.get_devname()}) - - ping_mod6.update_options({"addr": m2_vlan2.get_ip(1), - "iface": m1_vlan1.get_ip(1)}) - - if vlan1 == vlan2: - # These tests should pass - # Ping between same VLANs - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod) - - if ipv in [ 'ipv6', 'both' ]: - m1.run(ping_mod6) - else: - # These tests should fail - # Ping across different VLAN - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod, expect="fail") - - # Netperf test (both TCP and UDP) - if ipv in [ 'ipv4', 'both' ]: - srv_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - 
result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = m2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - srv_proc = m1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.set_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.set_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = 
perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = m2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), dev_features)) -m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), dev_features)) - -if nperf_cpupin: - m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/3_vlans.xml b/recipes/regression_tests/phase1/3_vlans.xml deleted file mode 100644 index 5c0f1fe..0000000 --- a/recipes/regression_tests/phase1/3_vlans.xml +++ /dev/null @@ -1,107 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="3_vlans.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - <alias name="vlan20_net" value="192.168.20"/> - <alias name="vlan20_tag" value="20"/> - <alias name="vlan30_net" value="192.168.30"/> - <alias name="vlan30_tag" value="30"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.1/24" 
/> - <address value="fc00:0:0:10::1/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan20_net}.1/24" /> - <address value="fc00:0:0:20::1/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.1/24" /> - <address value="fc00:0:0:30::1/64" /> - </addresses> - </vlan> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.2/24" /> - <address value="fc00:0:0:10::2/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan20_net}.2/24" /> - <address value="fc00:0:0:20::2/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.2/24" /> - <address value="fc00:0:0:30::2/64" /> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="3_vlans.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README deleted file mode 100644 index 83e3537..0000000 --- a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README +++ /dev/null @@ -1,84 +0,0 @@ -Topology: - - switch - VLAN10 +------+ VLAN10 - +-------------------+ | | +-------------------+ - | VLAN20 | | | | 
VLAN20 | - | +-------------------+ +-------------------+ | - | | VLAN30 | | | | VLAN30 | | - | | +-----------+ | | +-----------+ | | - | | | +------+ | | | - | | | | | | - +-------+ +-------+ - | | - | | - | | - +----+---+ | - | BOND | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +--+-+ -+---|eth1|--|eth2|---+ +-------|eth1|-------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in active-backup bond mode - 3 VLANs on bond interface -Host #2 description: - One ethernet device - 3 VLANs on the ethernet interface -Test name: - 3_vlans_over_bond.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between interfaces in the same VLAN (these should pass) - + between interfaces in different VLANs (these should fail) - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between interfaces in the same VLAN - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - 3_vlans_over_active_backup_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml deleted file mode 100644 index f9b4922..0000000 --- a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml +++ /dev/null @@ -1,121 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="3_vlans_over_active_backup_bond.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - <alias name="vlan20_net" value="192.168.20"/> - <alias name="vlan20_tag" value="20"/> - <alias name="vlan30_net" value="192.168.30"/> - <alias name="vlan30_tag" value="30"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_bond"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="2002::1/64" /> - </addresses> - </bond> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.1/24" /> - <address value="fc00:0:0:10::1/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="{$vlan20_net}.1/24" /> - <address 
value="fc00:0:0:20::1/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.1/24" /> - <address value="fc00:0:0:30::1/64" /> - </addresses> - </vlan> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.2/24" /> - <address value="fc00:0:0:10::2/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan20_net}.2/24" /> - <address value="fc00:0:0:20::2/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.2/24" /> - <address value="fc00:0:0:30::2/64" /> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="3_vlans_over_bond.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/3_vlans_over_bond.py b/recipes/regression_tests/phase1/3_vlans_over_bond.py deleted file mode 100644 index 41f2b95..0000000 --- a/recipes/regression_tests/phase1/3_vlans_over_bond.py +++ /dev/null @@ -1,332 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = 
ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -vlans = ["vlan10", "vlan20", "vlan30"] -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -m1_bond = m1.get_interface("test_bond") -m1_bond.set_mtu(mtu) -m1_phy1 = m1.get_interface("eth1") -m1_phy2 = m1.get_interface("eth2") -m2_phy1 = m2.get_interface("eth1") -m2_phy1.set_mtu(mtu) - -for vlan in vlans: - vlan_if1 = m1.get_interface(vlan) - vlan_if1.set_mtu(mtu) - vlan_if2 = m2.get_interface(vlan) - vlan_if2.set_mtu(mtu) - -if nperf_cpupin: - # this will pin devices irqs to cpu #0 - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - for m, d in [ (m1, m1_phy1), (m1, m1_phy2), (m2, m2_phy1) ]: - pin_dev_irqs(m, d, 0) - -ctl.wait(15) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "count" : 100, - "interval" : 0.1 - }) -ping_mod6 = 
ctl.get_module("Icmp6Ping", - options={ - "count" : 100, - "interval" : 0.1 - }) - -m1_vlan1 = m1.get_interface(vlans[0]) -m2_vlan1 = m2.get_interface(vlans[0]) - -p_opts = "-L %s" % (m2_vlan1.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (m2_vlan1.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(0) - }) -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(1), - "netperf_opts" : " -6" - }) -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - 
"cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -for setting in offload_settings: - #apply offload setting - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - - m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), - dev_features)) - m1.run("ethtool -K %s %s" % (m1_phy2.get_devname(), - dev_features)) - m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), - dev_features)) - - # Ping test - for vlan1 in vlans: - m1_vlan1 = m1.get_interface(vlan1) - for vlan2 in vlans: - m2_vlan2 = m2.get_interface(vlan2) - - ping_mod.update_options({"addr": m2_vlan2.get_ip(0), - "iface": m1_vlan1.get_devname()}) - - ping_mod6.update_options({"addr": m2_vlan2.get_ip(1), - "iface": m1_vlan1.get_ip(1)}) - - if vlan1 == vlan2: - # These tests should pass - # Ping between same VLANs - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod) - - if ipv in [ 'ipv6', 'both' ]: - m1.run(ping_mod6) - else: - # These tests should fail - # Ping across different VLAN - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod, expect="fail") - - # Netperf test (both TCP and UDP) - if ipv in [ 'ipv4', 'both' ]: - srv_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - 
result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = m2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - srv_proc = m1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - 
result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = m2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), dev_features)) -m1.run("ethtool -K %s %s" % (m1_phy2.get_devname(), dev_features)) -m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), dev_features)) - -if nperf_cpupin: - m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README 
b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README deleted file mode 100644 index ad18ae7..0000000 --- a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README +++ /dev/null @@ -1,84 +0,0 @@ -Topology: - - switch - VLAN10 +------+ VLAN10 - +-------------------+ | | +-------------------+ - | VLAN20 | | | | VLAN20 | - | +-------------------+ +-------------------+ | - | | VLAN30 | | | | VLAN30 | | - | | +-----------+ | | +-----------+ | | - | | | +------+ | | | - | | | | | | - +-------+ +-------+ - | | - | | - | | - +----+---+ | - | BOND | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +--+-+ -+---|eth1|--|eth2|---+ +-------|eth1|-------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in round-robin bond mode - 3 VLANs on bond interface -Host #2 description: - One ethernet device - 3 VLANs on the ethernet interface -Test name: - 3_vlans_over_bond.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between interfaces in the same VLAN (these should pass) - + between interfaces in different VLANs (these should fail) - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between interfaces in the same VLAN - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - 3_vlans_over_round_robin_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml deleted file mode 100644 index d22fa7b..0000000 --- a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml +++ /dev/null @@ -1,114 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="3_vlans_over_round_robin_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_bond"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="1.2.3.4/24" /> - </addresses> - </bond> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="10" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="192.168.10.1/24" /> - <address value="2002::10:1/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="20" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="192.168.20.1/24" /> - <address value="2002::20:1/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="30" /> - </options> - <slaves> - <slave id="test_bond" /> - </slaves> - <addresses> - <address value="192.168.30.1/24" /> - <address value="2002::30:1/64" /> - </addresses> - </vlan> - </interfaces> - </host> - <host id="testmachine2"> - 
<interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="10" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.10.2/24" /> - <address value="2002::10:2/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="20" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.20.2/24" /> - <address value="2002::20:2/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="30" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.30.2/24" /> - <address value="2002::30:2/64" /> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="3_vlans_over_bond.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/active_backup_bond.README b/recipes/regression_tests/phase1/active_backup_bond.README deleted file mode 100644 index 5cc8c2d..0000000 --- a/recipes/regression_tests/phase1/active_backup_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ | - | BOND | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +-+--+ -+---|eth1|--|eth2|---+ +-------|eth1|------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +-------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in active-backup bond mode -Host #2 description: - One ethernet device -Test name: - bonding_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo 
integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo.
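The offload on/off variants listed in this README are applied by bonding_test.py through `ethtool -K`. A minimal sketch of turning one (feature, state) variant into the command string (the device name here is illustrative):

```python
def ethtool_cmd(devname, settings):
    """Build an 'ethtool -K <dev> <feature> <state> ...' command string
    from a list of (feature, state) pairs."""
    features = " ".join("%s %s" % (feat, state) for feat, state in settings)
    return "ethtool -K %s %s" % (devname, features)

# one of the tested variants: GRO off, everything else on
print(ethtool_cmd("eth1", [("gro", "off"), ("gso", "on"),
                           ("tso", "on"), ("tx", "on")]))
```

The recipe's own loop builds the same string incrementally; a join over pairs expresses it more directly.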
diff --git a/recipes/regression_tests/phase1/active_backup_bond.xml b/recipes/regression_tests/phase1/active_backup_bond.xml deleted file mode 100644 index 0b79f4c..0000000 --- a/recipes/regression_tests/phase1/active_backup_bond.xml +++ /dev/null @@ -1,50 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_bond.mapping" /> - <alias name="net" value="192.168.0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - <address value="fc00:0:0:0::1/64"/> - </addresses> - </bond> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="test_if" label="tnet"> - <addresses> - <address value="{$net}.2/24" /> - <address value="fc00:0:0:0::2/64"/> - </addresses> - </eth> - </interfaces> - </host> - </network> - - <task python="bonding_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/active_backup_double_bond.README b/recipes/regression_tests/phase1/active_backup_double_bond.README deleted file mode 100644 index fdb2368..0000000 --- a/recipes/regression_tests/phase1/active_backup_double_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | 
| - | | - | | - | | - | | - +----+---+ +----+---+ - | BOND | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in active-backup bond mode -Host #2 description: - Two ethernet devices, in active-backup bond mode -Test name: - bonding_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_double_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. 
Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in our browser and find if in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/active_backup_double_bond.xml b/recipes/regression_tests/phase1/active_backup_double_bond.xml deleted file mode 100644 index bc39db8..0000000 --- a/recipes/regression_tests/phase1/active_backup_double_bond.xml +++ /dev/null @@ -1,60 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_double_bond.mapping" /> - <alias name="net" value="192.168.0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - <address value="fc00:0:0:0::1/64"/> - </addresses> - </bond> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" 
value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.2/24" /> - <address value="fc00:0:0:0::2/64"/> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="bonding_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/bonding_test.py b/recipes/regression_tests/phase1/bonding_test.py deleted file mode 100644 index 39e7df8..0000000 --- a/recipes/regression_tests/phase1/bonding_test.py +++ /dev/null @@ -1,305 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode 
= ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -test_if1 = m1.get_interface("test_if") -test_if1.set_mtu(mtu) -test_if2 = m2.get_interface("test_if") -test_if2.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - m1_phy1 = m1.get_interface("eth1") - m1_phy2 = m1.get_interface("eth2") - dev_list = [(m1, m1_phy1), (m1, m1_phy2)] - - if test_if2.get_type() == "bond": - m2_phy1 = m2.get_interface("eth1") - m2_phy2 = m2.get_interface("eth2") - dev_list.extend([(m2, m2_phy1), (m2, m2_phy2)]) - else: - dev_list.append((m2, test_if2)) - - # this will pin devices irqs to cpu #0 - for m, d in dev_list: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : test_if2.get_ip(0), - "count" : 100, - "iface" : test_if1.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : test_if2.get_ip(1), - "count" : 100, - "iface" : test_if1.get_devname(), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options = { - "role" : "server", - "bind" : test_if1.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : test_if1.get_ip(1), - "netperf_opts" : " -6" - }) - -p_opts = "-L %s" % (test_if2.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (test_if2.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_cli_tcp = ctl.get_module("Netperf", - options = { - "role" : "client", - "netperf_server" : test_if1.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - 
"confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options = { - "role" : "client", - "netperf_server" : test_if1.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - test_if1.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - test_if1.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - m1.run("ethtool -K %s %s" % (test_if1.get_devname(), 
dev_features)) - m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features)) - - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod) - - server_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = m2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - m1.run(ping_mod6) - - server_proc = m1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 
'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = m2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -m1.run("ethtool -K %s %s" % (test_if1.get_devname(), dev_features)) -m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features)) - -if nperf_cpupin: - m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/ping_flood.README b/recipes/regression_tests/phase1/ping_flood.README deleted file mode 100644 index 516f2ac..0000000 --- a/recipes/regression_tests/phase1/ping_flood.README +++ /dev/null @@ -1,38 +0,0 @@ -Topology: 
- +--------+ - | | - +----------------------+ switch +----------------------+ - | | | | - | +--------+ | - | | - | | - | | - | | - +--+-+ +-+--+ -+-------|eth1|-------+ +-------|eth1|-------+ -| +----+ | | +----+ | -| | | | -| | | | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - - -Number of hosts: 2 -Host #1 description: - One ethernet device -Host #2 description: - One ethernet device -Test name: - simple_ping.py -Test description: - Ping: - + count: 100 - + interval: 0.2s - + from host1 to host2 diff --git a/recipes/regression_tests/phase1/ping_flood.xml b/recipes/regression_tests/phase1/ping_flood.xml deleted file mode 100644 index 5ccac21..0000000 --- a/recipes/regression_tests/phase1/ping_flood.xml +++ /dev/null @@ -1,30 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="net" value="192.168.101" /> - </define> - <network> - <host id="machine1"> - <interfaces> - <eth id="testiface" label="testnet"> - <addresses> - <address>{$net}.10/24</address> - <address>fc00:0:0:0::1/64</address> - </addresses> - </eth> - </interfaces> - </host> - <host id="machine2"> - <interfaces> - <eth id="testiface" label="testnet"> - <addresses> - <address>{$net}.11/24</address> - <address>fc00:0:0:0::2/64</address> - </addresses> - </eth> - </interfaces> - </host> - </network> - <task python="simple_ping.py"/> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/round_robin_bond.README b/recipes/regression_tests/phase1/round_robin_bond.README deleted file mode 100644 index 2a6db04..0000000 --- a/recipes/regression_tests/phase1/round_robin_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ | - | BOND | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +-+--+ 
-+---|eth1|--|eth2|---+ +-------|eth1|------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +-------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in round-robin bond mode -Host #2 description: - One ethernet device -Test name: - bonding_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. 
To get the Report id you need to open - that report in our browser and find if in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/round_robin_bond.xml b/recipes/regression_tests/phase1/round_robin_bond.xml deleted file mode 100644 index 9b28cbc..0000000 --- a/recipes/regression_tests/phase1/round_robin_bond.xml +++ /dev/null @@ -1,50 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64"/> - </addresses> - </bond> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="test_if" label="tnet"> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64"/> - </addresses> - </eth> - </interfaces> - </host> - </network> - - <task python="bonding_test.py" /> -</lnstrecipe> - - diff --git a/recipes/regression_tests/phase1/round_robin_double_bond.README b/recipes/regression_tests/phase1/round_robin_double_bond.README deleted file mode 100644 index 5366ce0..0000000 --- a/recipes/regression_tests/phase1/round_robin_double_bond.README +++ /dev/null @@ -1,81 +0,0 
@@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | BOND | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, in round-robin bond mode -Host #2 description: - Two ethernet devices, in round-robin bond mode -Test name: - bonding_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_double_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. 
Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in our browser and find if in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/round_robin_double_bond.xml b/recipes/regression_tests/phase1/round_robin_double_bond.xml deleted file mode 100644 index 15798f1..0000000 --- a/recipes/regression_tests/phase1/round_robin_double_bond.xml +++ /dev/null @@ -1,58 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_double_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64"/> - </addresses> - </bond> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave 
id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64"/> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="bonding_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/simple_netperf.README b/recipes/regression_tests/phase1/simple_netperf.README deleted file mode 100644 index abc5b48..0000000 --- a/recipes/regression_tests/phase1/simple_netperf.README +++ /dev/null @@ -1,72 +0,0 @@ -Topology: - +--------+ - | | - +----------------------+ switch +----------------------+ - | | | | - | +--------+ | - | | - | | - | | - | | - +--+-+ +-+--+ -+-------|eth1|-------+ +-------|eth1|-------+ -| +----+ | | +----+ | -| | | | -| | | | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - - -Number of hosts: 2 -Host #1 description: - One ethernet device -Host #2 description: - One ethernet device -Test name: - simple_netperf.py - -Test description (simple_netperf.py): - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between physical interfaces - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - simple_netperf.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo.
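When `nperf_cpupin` is set, these test scripts stop irqbalance and call pin_dev_irqs to pin a device's IRQs to CPU 0. A simplified sketch of the underlying mechanism (the helper names are illustrative; the real pin_dev_irqs lives in lnst.RecipeCommon.IRQ): each IRQ is pinned by writing a hex CPU mask to /proc/irq/<n>/smp_affinity, which requires root.

```python
def cpu_mask(cpu):
    """Hex affinity mask selecting a single CPU, in the format
    accepted by /proc/irq/<n>/smp_affinity."""
    return "%x" % (1 << cpu)

def pin_irq(irq, cpu):
    # Illustration only: needs root and a real IRQ number to run.
    with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
        f.write(cpu_mask(cpu))

print(cpu_mask(0))  # -> 1
```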
diff --git a/recipes/regression_tests/phase1/simple_netperf.py b/recipes/regression_tests/phase1/simple_netperf.py deleted file mode 100644 index fe9d96b..0000000 --- a/recipes/regression_tests/phase1/simple_netperf.py +++ /dev/null @@ -1,277 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("machine1") -m2 = ctl.get_host("machine2") - -m1.sync_resources(modules=["Netperf"]) -m2.sync_resources(modules=["Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = 
generate_perfrepo_comment([m1, m2], pr_user_comment) - -m1_testiface = m1.get_interface("testiface") -m2_testiface = m2.get_interface("testiface") - -m1_testiface.set_mtu(mtu) -m2_testiface.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - for m, d in [ (m1, m1_testiface), (m2, m2_testiface) ]: - pin_dev_irqs(m, d, 0) - -p_opts = "-L %s" % (m2_testiface.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (m2_testiface.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : m1_testiface.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs" : nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : m1_testiface.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs" : nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : m1_testiface.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs" : nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : m1_testiface.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : 
nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs" : nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1_testiface.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1_testiface.get_ip(1) - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel" : nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel" : nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel" : nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel" : nperf_num_parallel}) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - m1.run("ethtool -K %s %s" % (m1_testiface.get_devname(), dev_features)) - m2.run("ethtool -K %s %s" % (m2_testiface.get_devname(), dev_features)) - - if ("rx", "off") in setting: - # when rx offload is turned off some of the cards might get reset - # and link goes down, so wait a few seconds until NIC is ready - ctl.wait(15) - - # Netperf test - if ipv in [ 'ipv4', 'both' ]: - srv_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter("num_parallel", nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - 
netperf_baseline_template(netperf_cli_tcp, baseline) - tcp_res_data = m2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter("num_parallel", nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - udp_res_data = m2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - srv_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - srv_proc = m1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter("num_parallel", nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - tcp_res_data = m2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - 
"udp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter("num_parallel", nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - udp_res_data = m2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - srv_proc.intr() - -# reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") - -m1.run("ethtool -K %s %s" % (m1_testiface.get_devname(), dev_features)) -m2.run("ethtool -K %s %s" % (m2_testiface.get_devname(), dev_features)) - -if nperf_cpupin: - m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/simple_netperf.xml b/recipes/regression_tests/phase1/simple_netperf.xml deleted file mode 100644 index 0f106aa..0000000 --- a/recipes/regression_tests/phase1/simple_netperf.xml +++ /dev/null @@ -1,39 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default" /> - <alias name="nperf_num_parallel" value="2" /> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="simple_netperf.mapping" /> - <alias name="net" value="192.168.101" /> - </define> - <network> - <host id="machine1"> - <interfaces> - <eth id="testiface" label="testnet"> - <addresses> - 
<address>{$net}.10/24</address> - <address>fc00:0:0:0::1/64</address> - </addresses> - </eth> - </interfaces> - </host> - <host id="machine2"> - <interfaces> - <eth id="testiface" label="testnet"> - <addresses> - <address>{$net}.11/24</address> - <address>fc00:0:0:0::2/64</address> - </addresses> - </eth> - </interfaces> - </host> - </network> - <task python="simple_netperf.py"/> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/simple_ping.py b/recipes/regression_tests/phase1/simple_ping.py deleted file mode 100644 index 3591402..0000000 --- a/recipes/regression_tests/phase1/simple_ping.py +++ /dev/null @@ -1,43 +0,0 @@ -from lnst.Controller.Task import ctl - -hostA = ctl.get_host("machine1") -hostB = ctl.get_host("machine2") - -hostA.sync_resources(modules=["Icmp6Ping", "IcmpPing"]) -hostB.sync_resources(modules=["Icmp6Ping", "IcmpPing"]) - -hostA_testiface = hostA.get_interface("testiface") -hostB_testiface = hostB.get_interface("testiface") - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr": hostB_testiface.get_ip(0), - "count": 100, - "interval": 0.2, - "iface" : hostA_testiface.get_devname(), - "limit_rate": 90}) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr": hostB_testiface.get_ip(1), - "count": 100, - "interval": 0.2, - "iface" : hostA_testiface.get_ip(1), - "limit_rate": 90}) - -ctl.wait(15) - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") - -test_if1 = hostA.get_interface("testiface") -test_if1.set_mtu(mtu) -test_if2 = hostB.get_interface("testiface") -test_if2.set_mtu(mtu) - - -if ipv in [ 'ipv6', 'both' ]: - hostA.run(ping_mod6) - -if ipv in [ 'ipv4', 'both' ]: - hostA.run(ping_mod) diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README deleted file mode 100644 index ce3fb8e..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README 
+++ /dev/null @@ -1,106 +0,0 @@ -Topology: - - switch - +--------+ - | | - +-------------------------+ +--------------------------+ - | | | | - | +---------------+ +----------------+ | - | | | | | | - | | +--------+ | | - | | | | - | | | | - +--+-+ +--+-+ +-+--+ +-+--+ -+----+eth1+----+eth2+----+ +----+eth1+----+eth2+----+ -| +-+--+ +--+-+ | | +-+--+ +--+-+ | -| +--++ ++--+ | | +--++ ++--+ | -| | | | | | | | -| | | | | | | | -| +-+--+-+ | | +-+--+-+ | -| | bond | | | | bond | | -| VLAN10 +-+--+-+ VLAN20 | | VLAN10 +-+--+-+ VLAN20 | -| +---+-+ +-+---+ | | +---+-+ +-+---+ | -| | | | | | | | -| +-+-+ +-+-+ | | +-+-+ +-+-+ | -| |br0| host1 |br1| | | |br0| host2 |br1| | -| +-+-+ +-+-+ | | +-+-+ +-+-+ | -| | | | | | | | -| | | | | | | | -| | | | | | | | -| +-+-+ +-+-+ | | +-+-+ +-+-+ | -+--+tap+----------+tap+--+ +--+tap+----------+tap+--+ - +-+-+ +-+-+ +-+-+ +-+-+ - | | | | - +-+-+ +-+-+ +-+-+ +-+-+ -+--+eth+--+ +--+eth+--+ +--+eth+--+ +--+eth+--+ -| +---+ | | +---+ | | +---+ | | +---+ | -| | | | | | | | -| guest1 | | guest2 | | guest3 | | guest4 | -| | | | | | | | -| | | | | | | | -+---------+ +---------+ +---------+ +---------+ - -Number of hosts: 4 -Host #1 description: - Two ethernet devices - Two tap devices - One bond in active-backup mode, bonding ethernet devices - Two VLANs over bond device - Two bridge devices, bridging VLAN and tap devices - Host for guest1 and guest2 virtual machines -Host #2 description: - Two ethernet devices - Two tap devices - One bond in active-backup mode, bonding ethernet devices - Two VLANs over bond device - Two bridge devices, bridging VLAN and tap devices - Host for guest3 and guest4 virtual machines -Guest #1 description: - One ethernet device -Guest #2 description: - One ethernet device -Guest #3 description: - One ethernet device -Guest #4 description: - One ethernet device -Test name: - virtual_bridge_2_vlans_over_bond.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between guests in same VLANs - Netperf: - 
+ duration: 5 - + TCP_STREAM and UDP_STREAM - + between guests in same VLANs - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - virtual_bridge_2_vlans_over_active_backup_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml deleted file mode 100644 index 5b5223e..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml +++ /dev/null @@ -1,174 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mtu" value="1500" /> - <alias name="mapping_file" value="virtual_bridge_2_vlans_over_active_backup_bond.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - <alias name="vlan20_net" value="192.168.20"/> - <alias name="vlan20_tag" value="20"/> - </define> - <network> - <host id="host1"> - <interfaces> - <eth id="nic1" label="to_switch" /> - <eth id="nic2" label="to_switch" /> - <eth id="tap1" label="to_guest1" /> - <eth id="tap2" label="to_guest2" /> - <bond id="bond"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="nic1" /> - <slave id="nic2" /> - </slaves> - <addresses> - <address>1.2.3.4/24</address> - </addresses> - </bond> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="bond" /> - </slaves> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="bond" /> - </slaves> - </vlan> - <bridge id="br1"> - <slaves> - <slave id="tap1" /> - <slave id="vlan10" /> - </slaves> - <addresses> - 
<address>{$vlan10_net}.10/24</address> - </addresses> - </bridge> - <bridge id="br2"> - <slaves> - <slave id="tap2" /> - <slave id="vlan20" /> - </slaves> - <addresses> - <address>{$vlan20_net}.10/24</address> - </addresses> - </bridge> - </interfaces> - </host> - <host id="guest1"> - <interfaces> - <eth id="guestnic" label="to_guest1"> - <addresses> - <address>{$vlan10_net}.100/24</address> - <address value="fc00:0:0:10::100/64"/> - </addresses> - </eth> - </interfaces> - </host> - <host id="guest2"> - <interfaces> - <eth id="guestnic" label="to_guest2"> - <addresses> - <address>{$vlan20_net}.100/24</address> - <address value="fc00:0:0:20::100/64"/> - </addresses> - </eth> - </interfaces> - </host> - - <host id="host2"> - <interfaces> - <eth id="nic1" label="to_switch"/> - <eth id="nic2" label="to_switch"/> - <eth id="tap1" label="to_guest3"/> - <eth id="tap2" label="to_guest4"/> - <bond id="bond"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="nic1" /> - <slave id="nic2" /> - </slaves> - <addresses> - <address>1.2.3.4/24</address> - </addresses> - </bond> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="bond" /> - </slaves> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="bond" /> - </slaves> - </vlan> - <bridge id="br1"> - <slaves> - <slave id="vlan10"/> - <slave id="tap1"/> - </slaves> - <addresses> - <address>{$vlan10_net}.11/24</address> - </addresses> - </bridge> - <bridge id="br2"> - <slaves> - <slave id="vlan20"/> - <slave id="tap2"/> - </slaves> - <addresses> - <address>{$vlan20_net}.11/24</address> - </addresses> - </bridge> - </interfaces> - </host> - <host id="guest3"> - <interfaces> - <eth id="guestnic" label="to_guest3"> - <addresses> - <address>{$vlan10_net}.101/24</address> - <address 
value="fc00:0:0:10::101/64"/> - </addresses> - </eth> - </interfaces> - </host> - <host id="guest4"> - <interfaces> - <eth id="guestnic" label="to_guest4"> - <addresses> - <address>{$vlan20_net}.101/24</address> - <address value="fc00:0:0:20::101/64"/> - </addresses> - </eth> - </interfaces> - </host> - </network> - - <task python="virtual_bridge_2_vlans_over_bond.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py deleted file mode 100644 index 37f703f..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py +++ /dev/null @@ -1,402 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -# Host 1 + guests 1 and 2 -h1 = ctl.get_host("host1") -g1 = ctl.get_host("guest1") -g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -g2 = ctl.get_host("guest2") -g2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# Host 2 + guests 3 and 4 -h2 = ctl.get_host("host2") -g3 = ctl.get_host("guest3") -g3.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -g4 = ctl.get_host("guest4") -g4.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]] - 
-ipv = ctl.get_alias("ipv") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([h1, g1, g2, h2, g3, g4], pr_user_comment) - -mtu = ctl.get_alias("mtu") -enable_udp_perf = ctl.get_alias("enable_udp_perf") - -h1_nic1 = h1.get_interface("nic1") -h1_nic2 = h1.get_interface("nic2") -h2_nic1 = h2.get_interface("nic1") -h2_nic2 = h2.get_interface("nic2") -g1_guestnic = g1.get_interface("guestnic") -g2_guestnic = g2.get_interface("guestnic") -g3_guestnic = g3.get_interface("guestnic") -g4_guestnic = g4.get_interface("guestnic") - -h1.run("service irqbalance stop") -h2.run("service irqbalance stop") - -# this will pin devices irqs to cpu #0 -for m, d in [ (h1, h1_nic1), (h2, h2_nic1) , (h1, h1_nic2), (h2, h2_nic2) ]: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : g3_guestnic.get_ip(0), - "count" : 100, - "iface" : g1_guestnic.get_devname(), - "interval" : 0.1 - }) -ping_mod2 = ctl.get_module("IcmpPing", - options={ - "addr" : g2_guestnic.get_ip(0), - "count" : 100, - "iface" : g4_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : g3_guestnic.get_ip(1), - "count" : 100, - "iface" : g1_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod62 = ctl.get_module("Icmp6Ping", - options={ - "addr" : g2_guestnic.get_ip(1), - "count" : 100, - "iface" : g4_guestnic.get_devname(), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role": 
"server", - "bind" : g1_guestnic.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_guestnic.get_ip(1), - "netperf_opts" : " -6", - }) - -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_guestnic.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : "-L %s" % - (g3_guestnic.get_ip(0)), - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_guestnic.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : "-L %s" % - (g3_guestnic.get_ip(0)), - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_guestnic.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : - "-L %s -6" % (g3_guestnic.get_ip(1)), - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_guestnic.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : - "-L %s -6" % (g3_guestnic.get_ip(1)), - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - 
netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -ping_mod_bad = ctl.get_module("IcmpPing", - options={ - "addr" : g4_guestnic.get_ip(0), - "count" : 100, - "iface" : g1_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod_bad2 = ctl.get_module("IcmpPing", - options={ - "addr" : g2_guestnic.get_ip(0), - "count" : 100, - "iface" : g3_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod6_bad = ctl.get_module("Icmp6Ping", - options={ - "addr" : g4_guestnic.get_ip(1), - "count" : 100, - "iface" : g1_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod6_bad2 = ctl.get_module("Icmp6Ping", - options={ - "addr" : g2_guestnic.get_ip(1), - "count" : 100, - "iface" : g3_guestnic.get_devname(), - "interval" : 0.1 - }) - -# configure mtu -h1.get_interface("bond").set_mtu(mtu) -h1.get_interface("tap1").set_mtu(mtu) -h1.get_interface("tap2").set_mtu(mtu) -h1.get_interface("vlan10").set_mtu(mtu) -h1.get_interface("vlan20").set_mtu(mtu) -h1.get_interface("br1").set_mtu(mtu) -h1.get_interface("br2").set_mtu(mtu) - -h2.get_interface("bond").set_mtu(mtu) -h2.get_interface("tap1").set_mtu(mtu) -h2.get_interface("tap2").set_mtu(mtu) -h2.get_interface("vlan10").set_mtu(mtu) -h2.get_interface("vlan20").set_mtu(mtu) -h2.get_interface("br1").set_mtu(mtu) -h2.get_interface("br2").set_mtu(mtu) - -g1.get_interface("guestnic").set_mtu(mtu) -g2.get_interface("guestnic").set_mtu(mtu) -g3.get_interface("guestnic").set_mtu(mtu) -g4.get_interface("guestnic").set_mtu(mtu) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - h1.run("ethtool -K %s %s" % (h1_nic1.get_devname(), dev_features)) - 
h1.run("ethtool -K %s %s" % (h1_nic2.get_devname(), dev_features)) - h2.run("ethtool -K %s %s" % (h2_nic1.get_devname(), dev_features)) - h2.run("ethtool -K %s %s" % (h2_nic2.get_devname(), dev_features)) - g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) - g2.run("ethtool -K %s %s" % (g2_guestnic.get_devname(), dev_features)) - g3.run("ethtool -K %s %s" % (g3_guestnic.get_devname(), dev_features)) - g4.run("ethtool -K %s %s" % (g4_guestnic.get_devname(), dev_features)) - - if ipv in [ 'ipv4', 'both' ]: - g1.run(ping_mod) - g4.run(ping_mod2) - g1.run(ping_mod_bad, expect="fail") - g3.run(ping_mod_bad2, expect="fail") - - server_proc = g1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = g3.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - if enable_udp_perf is not None: - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if 
nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = g3.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - g1.run(ping_mod6) - g4.run(ping_mod62) - g1.run(ping_mod6_bad, expect="fail") - g3.run(ping_mod6_bad2, expect="fail") - - server_proc = g1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = g3.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - if enable_udp_perf is not None: - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - 
if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = g3.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -h1.run("ethtool -K %s %s" % (h1_nic1.get_devname(), dev_features)) -h1.run("ethtool -K %s %s" % (h1_nic2.get_devname(), dev_features)) -h2.run("ethtool -K %s %s" % (h2_nic1.get_devname(), dev_features)) -h2.run("ethtool -K %s %s" % (h2_nic2.get_devname(), dev_features)) -g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) -g2.run("ethtool -K %s %s" % (g2_guestnic.get_devname(), dev_features)) -g3.run("ethtool -K %s %s" % (g3_guestnic.get_devname(), dev_features)) -g4.run("ethtool -K %s %s" % (g4_guestnic.get_devname(), dev_features)) - -h1.run("service irqbalance start") -h2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README deleted file mode 100644 index d6c26ed..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README +++ /dev/null @@ -1,82 +0,0 @@ -Topology: - - +----------+ - | | VLAN10 - +-----------------+ switch +-----------------+ - | | | | - | +----------+ | - | | - +-+-+ | -+------|nic|------+ +-+-+ -| +-+-+ | +------|nic|------+ -| | | | +---+ | -| +----+ | | | -| | | | | -| +-+-+ | | | -| |br0| | | host2 | -| +-+-+ host1 | | | -| | | | | -| +-+-+ | | | -+-|tap|-----------+ | | - +-+-+ +-----------------+ - | - |VLAN10 - | - +-+-+ -+-|nic|--+ -| +---+ | -| guest1 | -| | 
-+--------+ - -Number of hosts: 3 -Host #1 description: - One ethernet device - One tap device - One bridge device, bridging ethernet and tap devices - Host for guest1 virtual machine -Host #2 description: - One ethernet device with one VLAN subinterface -Guest #1 description: - One ethernet device with one VLAN subinterface -Test name: - virtual_bridge_vlan_in_guest.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between guest1's VLAN10 and host2's VLAN10 - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between guest1's VLAN10 and host2's VLAN10 - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - virtual_bridge_vlan_in_guest.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. 
To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py deleted file mode 100644 index b4b5c6c..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py +++ /dev/null @@ -1,331 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -h1 = ctl.get_host("host1") -g1 = ctl.get_host("guest1") - -h2 = ctl.get_host("host2") - -g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -h2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "rx", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]] - -ipv = ctl.get_alias("ipv") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") 
-nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment) - -mtu = ctl.get_alias("mtu") -enable_udp_perf = ctl.get_alias("enable_udp_perf") - -h2_vlan10 = h2.get_interface("vlan10") -g1_vlan10 = g1.get_interface("vlan10") -g1_guestnic = g1.get_interface("guestnic") -h1_nic = h1.get_interface("nic") -h2_nic = h2.get_interface("nic") - -if nperf_cpupin: - h1.run("service irqbalance stop") - h2.run("service irqbalance stop") - - # this will pin devices irqs to cpu #0 - for m, d in [ (h1, h1_nic), (h2, h2_nic) ]: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : h2_vlan10.get_ip(0), - "count" : 100, - "iface" : g1_vlan10.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : h2_vlan10.get_ip(1), - "count" : 100, - "iface" : g1_vlan10.get_ip(1), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_vlan10.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_vlan10.get_ip(1), - "netperf_opts" : " -6", - }) - -p_opts = "-L %s" % (h2_vlan10.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s" % nperf_cpupin - -p_opts6 = "-L %s -6" % (h2_vlan10.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s" % nperf_cpupin - -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_vlan10.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : 
nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_vlan10.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_vlan10.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_vlan10.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -# configure mtu -h1.get_interface("nic").set_mtu(mtu) -h1.get_interface("tap").set_mtu(mtu) -h1.get_interface("br").set_mtu(mtu) - -g1.get_interface("guestnic").set_mtu(mtu) -g1.get_interface("vlan10").set_mtu(mtu) - -h2.get_interface("nic").set_mtu(mtu) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for 
offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) - h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features)) - h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features)) - - if ("rx", "off") in setting: - # when rx offload is turned off some of the cards might get reset - # and link goes down, so wait a few seconds until NIC is ready - ctl.wait(15) - - if ipv in [ 'ipv4', 'both' ]: - g1.run(ping_mod) - - server_proc = g1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = h2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - if enable_udp_perf is not None: - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = 
perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = h2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - g1.run(ping_mod6) - - server_proc = g1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = h2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - if enable_udp_perf is not None: - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, 
baseline) - - udp_res_data = h2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) -h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features)) -h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features)) - -if nperf_cpupin: - h1.run("service irqbalance start") - h2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml deleted file mode 100644 index 7768542..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml +++ /dev/null @@ -1,80 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mtu" value="1500" /> - <alias name="mapping_file" value="virtual_bridge_vlan_in_guest.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - </define> - <network> - <host id="host1"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="nic" label="to_switch" /> - <eth id="tap" label="to_guest" /> - <bridge id="br"> - <slaves> - <slave id="tap" /> - <slave id="nic" /> - </slaves> - <addresses> - <address>{$vlan10_net}.1/24</address> - </addresses> - </bridge> - </interfaces> - </host> - <host 
id="guest1"> - <interfaces> - <eth id="guestnic" label="to_guest"> - <params> - <param name="driver" value="virtio" /> - </params> - </eth> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="guestnic" /> - </slaves> - <addresses> - <address>{$vlan10_net}.10/24</address> - <address>fc00:0:0:10::10/64</address> - </addresses> - </vlan> - </interfaces> - </host> - <host id="host2"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="nic" label="to_switch" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="nic" /> - </slaves> - <addresses> - <address>{$vlan10_net}.11/24</address> - <address>fc00:0:0:10::11/64</address> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="virtual_bridge_vlan_in_guest.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README deleted file mode 100644 index 7910b28..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README +++ /dev/null @@ -1,82 +0,0 @@ -Topology: - - +----------+ - | | VLAN10 - +-----------------+ switch +-----------------+ - | | | | - | +----------+ | - | | - +-+-+ | -+------|nic|------+ +-+-+ -| +-+-+ | +------|nic|------+ -| VLAN10 | | | +---+ | -| +----+ | | | -| | | | | -| +-+-+ | | | -| |br0| | | host2 | -| +-+-+ host1 | | | -| | | | | -| +-+-+ | | | -+-|tap|-----------+ | | - +-+-+ +-----------------+ - | - | - | - +-+-+ -+-|nic|--+ -| +---+ | -| guest1 | -| | -+--------+ - -Number of hosts: 3 -Host #1 description: - One ethernet device with one VLAN subinterface - One tap device - One bridge device, bridging VLAN and tap devices - Host for guest1 virtual machine -Host #2 description: - One ethernet device with one VLAN subinterface -Guest #1 description: - One 
ethernet device -Test name: - virtual_bridge_vlan_in_host.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between guest1's NIC and host2's VLAN10 - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between guest1's NIC and host2's VLAN10 - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - virtual_bridge_vlan_in_host.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py deleted file mode 100644 index 1ebdd91..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py +++ /dev/null @@ -1,331 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -h1 = ctl.get_host("host1") -g1 = ctl.get_host("guest1") - -h2 = ctl.get_host("host2") - -g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -h2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "rx", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]] - -ipv = ctl.get_alias("ipv") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = 
ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment) - -mtu = ctl.get_alias("mtu") -enable_udp_perf = ctl.get_alias("enable_udp_perf") - -h2_vlan10 = h2.get_interface("vlan10") -g1_guestnic= g1.get_interface("guestnic") -h1_nic = h1.get_interface("nic") -h2_nic = h2.get_interface("nic") - -if nperf_cpupin: - h1.run("service irqbalance stop") - h2.run("service irqbalance stop") - - # this will pin devices irqs to cpu #0 - for m, d in [ (h1, h1_nic), (h2, h2_nic) ]: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : h2_vlan10.get_ip(0), - "count" : 100, - "iface" : g1_guestnic.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : h2_vlan10.get_ip(1), - "count" : 100, - "iface" : g1_guestnic.get_ip(1), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_guestnic.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_guestnic.get_ip(1), - "netperf_opts" : " -6", - }) - -p_opts = "-L %s" % (h2_vlan10.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s" % nperf_cpupin - -p_opts6 = "-L %s -6" % (h2_vlan10.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s" % nperf_cpupin - -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_guestnic.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_guestnic.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - 
"confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_guestnic.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_guestnic.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -# configure mtu -h1.get_interface("nic").set_mtu(mtu) -h1.get_interface("tap").set_mtu(mtu) -h1.get_interface("vlan10").set_mtu(mtu) -h1.get_interface("br").set_mtu(mtu) - -g1.get_interface("guestnic").set_mtu(mtu) - -h2.get_interface("nic").set_mtu(mtu) -h2.get_interface("vlan10").set_mtu(mtu) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) - h1.run("ethtool -K %s %s" % 
(h1_nic.get_devname(), dev_features)) - h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features)) - - if ("rx", "off") in setting: - # when rx offload is turned off some of the cards might get reset - # and link goes down, so wait a few seconds until NIC is ready - ctl.wait(15) - - if ipv in [ 'ipv4', 'both' ]: - g1.run(ping_mod) - - server_proc = g1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = h2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - if enable_udp_perf is not None: - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = h2.run(netperf_cli_udp, - timeout = (netperf_duration + 
nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - g1.run(ping_mod6) - - server_proc = g1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = h2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - if enable_udp_perf: - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=['kernel_release', - 'redhat_release', - r'guest\d+.hostname', - r'guest\d+..*hwaddr', - r'host\d+..*tap\d*.hwaddr', - r'host\d+..*tap\d*.devname']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = h2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - 
result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - server_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features)) -h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features)) -h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features)) - -if nperf_cpupin: - h1.run("service irqbalance start") - h2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml deleted file mode 100644 index a38848b..0000000 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml +++ /dev/null @@ -1,80 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mtu" value="1500" /> - <alias name="mapping_file" value="virtual_bridge_vlan_in_host.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - </define> - <network> - <host id="host1"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="nic" label="to_switch" /> - <eth id="tap" label="to_guest" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="nic" /> - </slaves> - </vlan> - <bridge id="br"> - <slaves> - <slave id="tap" /> - <slave id="vlan10" /> - </slaves> - <addresses> - <address>{$vlan10_net}.1/24</address> - </addresses> - </bridge> - </interfaces> - </host> - <host id="guest1"> - 
<interfaces> - <eth id="guestnic" label="to_guest"> - <params> - <param name="driver" value="virtio" /> - </params> - <addresses> - <address>{$vlan10_net}.10/24</address> - <address>fc00:0:0:10::10/64</address> - </addresses> - </eth> - </interfaces> - </host> - <host id="host2"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="nic" label="to_switch" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="nic" /> - </slaves> - <addresses> - <address>{$vlan10_net}.11/24</address> - <address>fc00:0:0:10::11/64</address> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="virtual_bridge_vlan_in_host.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.README b/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.README deleted file mode 100644 index 8a6183f..0000000 --- a/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.README +++ /dev/null @@ -1,84 +0,0 @@ -Topology: - - switch - VLAN10 +------+ VLAN10 - +-------------------+ | | +-------------------+ - | VLAN20 | | | | VLAN20 | - | +-------------------+ +-------------------+ | - | | VLAN30 | | | | VLAN30 | | - | | +-----------+ | | +-----------+ | | - | | | +------+ | | | - | | | | | | - +-------+ +-------+ - | | - | | - | | - +----+---+ | - | TEAM | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +--+-+ -+---|eth1|--|eth2|---+ +-------|eth1|-------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in active-backup mode - 3 VLANs on team interface -Host #2 description: - One ethernet device - 3 VLANs on the ethernet interface -Test name: - 3_vlans_over_team.py -Test description: - Ping: - + count: 
100 - + interval: 0.1s - + between interfaces in the same VLAN (these should pass) - + between interfaces in different VLANs (these should fail) - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between interfaces in the same VLAN - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - 3_vlans_over_active_backup_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.xml b/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.xml deleted file mode 100644 index c2a5117..0000000 --- a/recipes/regression_tests/phase2/3_vlans_over_active_backup_team.xml +++ /dev/null @@ -1,125 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="3_vlans_over_active_backup_team.mapping" /> - <alias name="vlan10_net" value="192.168.10"/> - <alias name="vlan10_tag" value="10"/> - <alias name="vlan20_net" value="192.168.20"/> - <alias name="vlan20_tag" value="20"/> - <alias name="vlan30_net" value="192.168.30"/> - <alias name="vlan30_tag" value="30"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="1.2.3.4/24" /> - </addresses> - </team> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.1/24" /> - <address value="fc00:0:0:10::1/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address 
value="{$vlan20_net}.1/24" /> - <address value="fc00:0:0:20::1/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.1/24" /> - <address value="fc00:0:0:30::1/64" /> - </addresses> - </vlan> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="{$vlan10_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan10_net}.2/24" /> - <address value="fc00:0:0:10::2/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="{$vlan20_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan20_net}.2/24" /> - <address value="fc00:0:0:20::2/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="{$vlan30_tag}" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="{$vlan30_net}.2/24" /> - <address value="fc00:0:0:30::2/64" /> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="3_vlans_over_team.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.README b/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.README deleted file mode 100644 index 21f3eda..0000000 --- a/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.README +++ /dev/null @@ -1,84 +0,0 @@ -Topology: - - switch - VLAN10 +------+ VLAN10 - +-------------------+ | | +-------------------+ - | VLAN20 | | | | VLAN20 | - | +-------------------+ +-------------------+ | - | | VLAN30 | | | | VLAN30 | | - | | +-----------+ | | +-----------+ | | - | | | +------+ | | | - | | | | | | - +-------+ +-------+ - | | - | | - | | - +----+---+ | - | 
TEAM | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +--+-+ -+---|eth1|--|eth2|---+ +-------|eth1|-------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in round-robin mode - 3 VLANs on team interface -Host #2 description: - One ethernet device - 3 VLANs on the ethernet interface -Test name: - 3_vlans_over_team.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + between interfaces in the same VLAN (these should pass) - + between interfaces in different VLANs (these should fail) - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between interfaces in the same VLAN - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - 3_vlans_over_round_robin_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. 
Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.xml b/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.xml deleted file mode 100644 index 7576ec0..0000000 --- a/recipes/regression_tests/phase2/3_vlans_over_round_robin_team.xml +++ /dev/null @@ -1,118 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="3_vlans_over_round_robin_team.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="1.2.3.4/24" /> - </addresses> - </team> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="10" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address value="192.168.10.1/24" /> - <address value="2002::10:1/64" /> - </addresses> - </vlan> - <vlan 
id="vlan20"> - <options> - <option name="vlan_tci" value="20" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address value="192.168.20.1/24" /> - <address value="2002::20:1/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="30" /> - </options> - <slaves> - <slave id="test_if" /> - </slaves> - <addresses> - <address value="192.168.30.1/24" /> - <address value="2002::30:1/64" /> - </addresses> - </vlan> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <vlan id="vlan10"> - <options> - <option name="vlan_tci" value="10" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.10.2/24" /> - <address value="2002::10:2/64" /> - </addresses> - </vlan> - <vlan id="vlan20"> - <options> - <option name="vlan_tci" value="20" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.20.2/24" /> - <address value="2002::20:2/64" /> - </addresses> - </vlan> - <vlan id="vlan30"> - <options> - <option name="vlan_tci" value="30" /> - </options> - <slaves> - <slave id="eth1" /> - </slaves> - <addresses> - <address value="192.168.30.2/24" /> - <address value="2002::30:2/64" /> - </addresses> - </vlan> - </interfaces> - </host> - </network> - - <task python="3_vlans_over_team.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/3_vlans_over_team.py b/recipes/regression_tests/phase2/3_vlans_over_team.py deleted file mode 100644 index e9cae83..0000000 --- a/recipes/regression_tests/phase2/3_vlans_over_team.py +++ /dev/null @@ -1,332 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - 
-mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -vlans = ["vlan10", "vlan20", "vlan30"] -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -m1_team = m1.get_interface("test_if") -m1_team.set_mtu(mtu) -m1_phy1 = m1.get_interface("eth1") -m1_phy2 = m1.get_interface("eth2") -m2_phy1 = m2.get_interface("eth1") -m2_phy1.set_mtu(mtu) - -for vlan in vlans: - vlan_if1 = m1.get_interface(vlan) - vlan_if1.set_mtu(mtu) - vlan_if2 = m2.get_interface(vlan) - vlan_if2.set_mtu(mtu) - -if nperf_cpupin: - # this will pin devices irqs to cpu #0 - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - for m, d in [ (m1, m1_phy1), (m1, m1_phy2), (m2, m2_phy1) ]: - pin_dev_irqs(m, d, 0) - 
-ctl.wait(15) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "count" : 100, - "interval" : 0.1 - }) -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "count" : 100, - "interval" : 0.1 - }) - -m1_vlan1 = m1.get_interface(vlans[0]) -m2_vlan1 = m2.get_interface(vlans[0]) - -p_opts = "-L %s" % (m2_vlan1.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (m2_vlan1.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(0) - }) -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind": m1_vlan1.get_ip(1), - "netperf_opts" : " -6" - }) -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - 
"netperf_server": m1_vlan1.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts": p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -for setting in offload_settings: - #apply offload setting - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - - m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), - dev_features)) - m1.run("ethtool -K %s %s" % (m1_phy2.get_devname(), - dev_features)) - m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), - dev_features)) - - # Ping test - for vlan1 in vlans: - m1_vlan1 = m1.get_interface(vlan1) - for vlan2 in vlans: - m2_vlan2 = m2.get_interface(vlan2) - - ping_mod.update_options({"addr": m2_vlan2.get_ip(0), - "iface": m1_vlan1.get_devname()}) - - ping_mod6.update_options({"addr": m2_vlan2.get_ip(1), - "iface": m1_vlan1.get_ip(1)}) - - if vlan1 == vlan2: - # These tests should pass - # Ping between same VLANs - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod) - - if ipv in [ 'ipv6', 'both' ]: - m1.run(ping_mod6) - else: - # These tests should fail - # Ping across different VLAN - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod, expect="fail") - - # Netperf test (both TCP and UDP) - if ipv in [ 'ipv4', 'both' ]: - srv_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = 
perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp, baseline) - - udp_res_data = m2.run(netperf_cli_udp, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - - if ipv in [ 'ipv6', 'both' ]: - srv_proc = m1.run(netperf_srv6, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for 
offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_tcp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_tcp.add_tag(product_name) - if nperf_mode == "multi": - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - netperf_baseline_template(netperf_cli_tcp6, baseline) - - tcp_res_data = m2.run(netperf_cli_tcp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_udp.set_parameter(offload[0], offload[1]) - result_udp.set_parameter('netperf_server_on_vlan', vlans[0]) - result_udp.set_parameter('netperf_client_on_vlan', vlans[0]) - result_udp.add_tag(product_name) - if nperf_mode == "multi": - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - netperf_baseline_template(netperf_cli_udp6, baseline) - - udp_res_data = m2.run(netperf_cli_udp6, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - - srv_proc.intr() - -#reset offload states -dev_features = "" -for offload in offloads: - dev_features += " %s %s" % (offload, "on") -m1.run("ethtool -K %s %s" % (m1_phy1.get_devname(), dev_features)) -m1.run("ethtool -K %s %s" % (m1_phy2.get_devname(), dev_features)) -m2.run("ethtool -K %s %s" % (m2_phy1.get_devname(), dev_features)) - -if nperf_cpupin: - m1.run("service irqbalance start") - 
m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase2/active_backup_double_team.README b/recipes/regression_tests/phase2/active_backup_double_team.README deleted file mode 100644 index b5abee5..0000000 --- a/recipes/regression_tests/phase2/active_backup_double_team.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | TEAM | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in active-backup mode -Host #2 description: - Two ethernet devices, as slaves of a team interface in active-backup mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_double_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
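The offload on/off variants listed in these READMEs are applied by the deleted test scripts as one `ethtool -K` call per device. A minimal sketch of how such a command string is assembled from one offload setting (the helper name is hypothetical, not part of LNST):

```python
# Build an "ethtool -K <dev> <feature> <state> ..." command string from one
# offload setting, mirroring what the regression-test scripts do.
def ethtool_offload_cmd(devname, setting):
    features = " ".join("%s %s" % (name, state) for name, state in setting)
    return "ethtool -K %s %s" % (devname, features)

cmd = ethtool_offload_cmd("eth1", [("gro", "off"), ("gso", "on"),
                                   ("tso", "on"), ("tx", "on")])
# -> "ethtool -K eth1 gro off gso on tso on tx on"
```

The scripts run one such command per physical device before each test round, then restore every feature to "on" at the end.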
diff --git a/recipes/regression_tests/phase2/active_backup_double_team.xml b/recipes/regression_tests/phase2/active_backup_double_team.xml deleted file mode 100644 index 3825fb0..0000000 --- a/recipes/regression_tests/phase2/active_backup_double_team.xml +++ /dev/null @@ -1,68 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_double_team.mapping" /> - <alias name="net" value="192.168.0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - <address value="fc00:0:0:0::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "10:22:33:44:55:66", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.2/24" /> - <address value="fc00:0:0:0::2/64" /> - </addresses> - </team> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/active_backup_team.README 
b/recipes/regression_tests/phase2/active_backup_team.README deleted file mode 100644 index fba6dc3..0000000 --- a/recipes/regression_tests/phase2/active_backup_team.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ | - | TEAM | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +-+--+ -+---|eth1|--|eth2|---+ +-------|eth1|------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +-------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in active-backup mode -Host #2 description: - One ethernet device -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
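The mapping files these READMEs describe use a plain `<key> = <value>` line format. A hedged sketch of parsing it into a dict (the real LNST parser may behave differently, e.g. regarding comments):

```python
def parse_mapping_file(text):
    """Parse "<key> = <value>" lines into a dict; blank lines and
    '#' comments are skipped. Illustrative only, not the LNST parser."""
    mapping = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            mapping[key.strip()] = value.strip()
    return mapping

# Both TestUid mappings and <some_hash> = <report_id> baseline mappings
# fit the same format:
example = "tcp_ipv4_id = uid_tcp4\n1a2b3c4d = 42\n"
```

Here `tcp_ipv4_id` would map to a PerfRepo TestUid and the hash line to a Report id, as described above.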
diff --git a/recipes/regression_tests/phase2/active_backup_team.xml b/recipes/regression_tests/phase2/active_backup_team.xml deleted file mode 100644 index abf4451..0000000 --- a/recipes/regression_tests/phase2/active_backup_team.xml +++ /dev/null @@ -1,54 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_team.mapping" /> - <alias name="net" value="192.168.0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - <address value="fc00:0:0:0::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="test_if" label="tnet"> - <addresses> - <address value="{$net}.2/24" /> - <address value="fc00:0:0:0::2/64" /> - </addresses> - </eth> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.README b/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.README deleted file mode 100644 index c69442b..0000000 --- a/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - 
+-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in active-backup mode -Host #2 description: - Two ethernet devices, as slaves of a bond interface in active-backup mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_team_vs_active_backup_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. 
Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.xml b/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.xml deleted file mode 100644 index e0cb9c1..0000000 --- a/recipes/regression_tests/phase2/active_backup_team_vs_active_backup_bond.xml +++ /dev/null @@ -1,64 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_team_vs_active_backup_bond.mapping" /> - <alias name="net" value="192.168.0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - <address value="fc00:0:0:0::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" 
label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="{$net}.2/24" /> - <address value="fc00:0:0:0::2/64" /> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.README b/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.README deleted file mode 100644 index 2412401..0000000 --- a/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in active-backup mode -Host #2 description: - Two ethernet devices, as slaves of a bond interface in round-robin mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - active_backup_team_vs_round_robin_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
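Putting the mapping instructions of the README above together, the mapping file could look roughly like the sketch below. All values are placeholders: the TestUids come from the Test objects you create in the PerfRepo web interface, and the hash/report pair is what enables baseline comparison.

```
# active_backup_team_vs_round_robin_bond.mapping (illustrative sketch)
tcp_ipv4_id = <TestUid of the ipv4 TCP_STREAM Test object>
tcp_ipv6_id = <TestUid of the ipv6 TCP_STREAM Test object>
udp_ipv4_id = <TestUid of the ipv4 UDP_STREAM Test object>
udp_ipv6_id = <TestUid of the ipv6 UDP_STREAM Test object>
# baseline comparison: hash generated at test execution -> Report id from the URL
<some_hash> = <report_id>
```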
diff --git a/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.xml b/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.xml deleted file mode 100644 index 983a512..0000000 --- a/recipes/regression_tests/phase2/active_backup_team_vs_round_robin_bond.xml +++ /dev/null @@ -1,64 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="active_backup_team_vs_round_robin_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "activebackup"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64" /> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> - - diff --git a/recipes/regression_tests/phase2/round_robin_double_team.README b/recipes/regression_tests/phase2/round_robin_double_team.README deleted file mode 
100644 index 0d5ad2f..0000000 --- a/recipes/regression_tests/phase2/round_robin_double_team.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | TEAM | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in round-robin mode -Host #2 description: - Two ethernet devices, as slaves of a team interface in round-robin mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_double_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
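The "tested both on/off variants" offload matrix described in the README is driven by a single `ethtool -K` invocation per host; a minimal Python sketch of how the deleted team_test.py assembles the command string (device name here is illustrative):

```python
# Offload settings matrix as used by the removed team_test.py: each inner
# list is one combination of feature states applied to both test interfaces.
offload_settings = [
    [("gro", "on"),  ("gso", "on"),  ("tso", "on"),  ("tx", "on")],
    [("gro", "off"), ("gso", "on"),  ("tso", "on"),  ("tx", "on")],
    [("gro", "on"),  ("gso", "off"), ("tso", "off"), ("tx", "on")],
    [("gro", "on"),  ("gso", "on"),  ("tso", "off"), ("tx", "off")],
]

def ethtool_args(devname, setting):
    # Flatten [("gro", "off"), ...] into "gro off ..." for "ethtool -K".
    features = " ".join("%s %s" % (name, state) for name, state in setting)
    return "ethtool -K %s %s" % (devname, features)

print(ethtool_args("eth0", offload_settings[1]))
# -> ethtool -K eth0 gro off gso on tso on tx on
```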
diff --git a/recipes/regression_tests/phase2/round_robin_double_team.xml b/recipes/regression_tests/phase2/round_robin_double_team.xml deleted file mode 100644 index a4cb9cb..0000000 --- a/recipes/regression_tests/phase2/round_robin_double_team.xml +++ /dev/null @@ -1,68 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_double_team.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "10:22:33:44:55:66", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64" /> - </addresses> - </team> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> - - diff --git a/recipes/regression_tests/phase2/round_robin_team.README b/recipes/regression_tests/phase2/round_robin_team.README deleted file mode 100644 index 744cbb8..0000000 --- 
a/recipes/regression_tests/phase2/round_robin_team.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ | - | TEAM | | - +---++---+ | - || | - +--++--+ | - | | | - +--+-+ +-+--+ +-+--+ -+---|eth1|--|eth2|---+ +-------|eth1|------+ -| +----+ +----+ | | +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +-------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in round-robin mode -Host #2 description: - One ethernet device -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_team.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison agains baselines you need to create a Report in - PerfRepo that will store the baseline. 
Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in our browser and find if in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase2/round_robin_team.xml b/recipes/regression_tests/phase2/round_robin_team.xml deleted file mode 100644 index 9ff89eb..0000000 --- a/recipes/regression_tests/phase2/round_robin_team.xml +++ /dev/null @@ -1,52 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_team.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="test_if" label="tnet"> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64" /> - </addresses> - </eth> - </interfaces> - </host> - </network> - - <task 
python="team_test.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.README b/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.README deleted file mode 100644 index 1d021db..0000000 --- a/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in round-robin mode -Host #2 description: - Two ethernet devices, as slaves of a bond interface in active-backup mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_team_vs_active_backup_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
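The 60 s Netperf duration listed above is only the per-run measurement time; the removed team_test.py waits for the client with a timeout scaled by the confidence-run count, `(netperf_duration + nperf_reserve) * nperf_max_runs`. With the recipe's default aliases that works out as:

```python
# Timeout formula used when running the Netperf client in team_test.py.
def netperf_timeout(duration, reserve, max_runs):
    # Each confidence run gets the measurement duration plus a reserve
    # for setup/teardown overhead.
    return (duration + reserve) * max_runs

# Recipe defaults: netperf_duration=60, nperf_reserve=20, nperf_max_runs=5.
print(netperf_timeout(60, 20, 5))
# -> 400
```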
diff --git a/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.xml b/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.xml deleted file mode 100644 index 0d15dda..0000000 --- a/recipes/regression_tests/phase2/round_robin_team_vs_active_backup_bond.xml +++ /dev/null @@ -1,64 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_team_vs_active_backup_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="active-backup" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64" /> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> - - diff --git a/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.README 
b/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.README deleted file mode 100644 index 774eefb..0000000 --- a/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.README +++ /dev/null @@ -1,81 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------------+ +-------------------+ - | | | | - | | | | - | +------+ | - | | - | | - | | - | | - | | - +----+---+ +----+---+ - | TEAM | | BOND | - +---++---+ +---++---+ - || || - +--++--+ +--++--+ - | | | | - +--+-+ +-+--+ +--+-+ +-+--+ -+---|eth1|--|eth2|---+ +---|eth1|--|eth2|---+ -| +----+ +----+ | | +----+ +----+ | -| | | | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+--------------------+ +--------------------+ - -Number of hosts: 2 -Host #1 description: - Two ethernet devices, as slaves of a team interface in round-robin mode -Host #2 description: - Two ethernet devices, as slaves of a bond interface in round-robin mode -Test name: - team_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + from both sides - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + from both sides - Offloads: - + TSO, GRO, GSO - + tested both on/off variants - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. 
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - round_robin_team_vs_round_robin_bond.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
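The "from both sides" pings in the test description are implemented by reusing the same module options with the target address and egress interface swapped (cf. the `update_options` calls in the removed team_test.py). A small sketch with illustrative addresses and device names:

```python
# Forward-direction ping options as built for the IcmpPing module
# (addresses and interface names are illustrative placeholders).
ping_opts = {
    "addr": "192.168.0.2",   # host2's test address
    "iface": "eth1",         # host1's egress device
    "count": 100,
    "interval": 0.1,
}

# Reverse direction: same count/interval, only addr and iface change,
# mirroring ping_mod.update_options() in the deleted recipe.
reverse_opts = dict(ping_opts, addr="192.168.0.1", iface="eth2")

print(reverse_opts["addr"], reverse_opts["count"])
# -> 192.168.0.1 100
```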
diff --git a/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.xml b/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.xml deleted file mode 100644 index 4b31194..0000000 --- a/recipes/regression_tests/phase2/round_robin_team_vs_round_robin_bond.xml +++ /dev/null @@ -1,64 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1500" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5" /> - <alias name="nperf_mode" value="default"/> - <alias name="nperf_num_parallel" value="2"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="round_robin_team_vs_round_robin_bond.mapping" /> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <team id="test_if"> - <options> - <option name="teamd_config"> - { - "hwaddr" : "00:11:22:33:44:55", - "runner" : {"name" : "roundrobin"} - } - </option> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.1/24" /> - <address value="2002::1/64" /> - </addresses> - </team> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth1" label="tnet" /> - <eth id="eth2" label="tnet" /> - <bond id="test_if"> - <options> - <option name="mode" value="balance-rr" /> - <option name="miimon" value="100" /> - </options> - <slaves> - <slave id="eth1" /> - <slave id="eth2" /> - </slaves> - <addresses> - <address value="192.168.0.2/24" /> - <address value="2002::2/64" /> - </addresses> - </bond> - </interfaces> - </host> - </network> - - <task python="team_test.py" /> -</lnstrecipe> - - diff --git a/recipes/regression_tests/phase2/team_test.py b/recipes/regression_tests/phase2/team_test.py deleted file mode 100644 index 1aa0d0f..0000000 --- 
a/recipes/regression_tests/phase2/team_test.py +++ /dev/null @@ -1,470 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]] - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -test_if1 = m1.get_interface("test_if") -test_if1.set_mtu(mtu) -test_if2 = m2.get_interface("test_if") -test_if2.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service 
irqbalance stop") - - m1_phy1 = m1.get_interface("eth1") - m1_phy2 = m1.get_interface("eth2") - dev_list = [(m1, m1_phy1), (m1, m1_phy2)] - - if test_if2.get_type() == "team": - m2_phy1 = m2.get_interface("eth1") - m2_phy2 = m2.get_interface("eth2") - dev_list.extend([(m2, m2_phy1), (m2, m2_phy2)]) - else: - dev_list.append((m2, test_if2)) - - # this will pin devices irqs to cpu #0 - for m, d in dev_list: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : test_if2.get_ip(0), - "count" : 100, - "iface" : test_if1.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : test_if2.get_ip(1), - "count" : 100, - "iface" : test_if1.get_ip(1), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options = { - "role" : "server", - "bind" : test_if1.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options = { - "role" : "server", - "bind" : test_if1.get_ip(1), - "netperf_opts" : " -6" - }) - -p_opts = "-L %s" % (test_if2.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -p_opts6 = "-L %s -6" % (test_if2.get_ip(1)) -if nperf_cpupin and nperf_mode != "multi": - p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -netperf_cli_tcp = ctl.get_module("Netperf", - options = { - "role" : "client", - "netperf_server" : test_if1.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options = { - "role" : "client", - "netperf_server" : test_if1.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - 
"max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - test_if1.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - test_if1.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel}) - -ctl.wait(15) - -for setting in offload_settings: - dev_features = "" - for offload in setting: - dev_features += " %s %s" % (offload[0], offload[1]) - m1.run("ethtool -K %s %s" % (test_if1.get_devname(), dev_features)) - m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features)) - - if ipv in [ 'ipv4', 'both' ]: - m1.run(ping_mod) - - server_proc = m1.run(netperf_srv, bg=True) - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - 'kernel_release', - 'redhat_release']) - for offload in setting: - result_tcp.set_parameter(offload[0], offload[1]) - result_tcp.set_parameter('netperf_server', "testmachine1") - 
result_tcp.set_parameter('netperf_client', "testmachine2")
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp, baseline)
-
-        tcp_res_data = m2.run(netperf_cli_tcp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv4_id",
-                                         "udp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.set_parameter('netperf_server', "testmachine1")
-        result_udp.set_parameter('netperf_client', "testmachine2")
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp, baseline)
-
-        udp_res_data = m2.run(netperf_cli_udp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-    if ipv in [ 'ipv6', 'both' ]:
-        m1.run(ping_mod6)
-
-        server_proc = m1.run(netperf_srv6, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                         "tcp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.set_parameter('netperf_server', "testmachine1")
-        result_tcp.set_parameter('netperf_client', "testmachine2")
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp6, baseline)
-
-        tcp_res_data = m2.run(netperf_cli_tcp6,
-                              timeout = (netperf_duration + nperf_reserve)*5)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv6_id",
-                                         "udp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.set_parameter('netperf_server', "testmachine1")
-        result_udp.set_parameter('netperf_client', "testmachine2")
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp6, baseline)
-
-        udp_res_data = m2.run(netperf_cli_udp6,
-                              timeout = (netperf_duration + nperf_reserve)*5)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-#reset offload states
-dev_features = ""
-for offload in offloads:
-    dev_features += " %s %s" % (offload, "on")
-m1.run("ethtool -K %s %s" % (test_if1.get_devname(), dev_features))
-m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features))
-
-ping_mod.update_options({"addr" : test_if1.get_ip(0),
-                         "iface" : test_if2.get_devname()})
-
-ping_mod6.update_options({"addr" : test_if1.get_ip(1),
-                          "iface" : test_if2.get_devname()})
-
-netperf_srv.update_options({"bind" : test_if2.get_ip(0)})
-
-netperf_srv6.update_options({"bind" : test_if2.get_ip(1)})
-
-p_opts = "-L %s" % (test_if1.get_ip(0))
-if nperf_cpupin and nperf_mode != "multi":
-    p_opts += " -T%s,%s" % (nperf_cpupin, nperf_cpupin)
-
-p_opts6 = "-L %s -6" % (test_if1.get_ip(1))
-if nperf_cpupin and nperf_mode != "multi":
-    p_opts6 += " -T%s,%s" % (nperf_cpupin, nperf_cpupin)
-
-netperf_cli_tcp.update_options({"netperf_server" : test_if2.get_ip(0),
-                                "netperf_opts" : p_opts})
-
-netperf_cli_udp.update_options({"netperf_server" : test_if2.get_ip(0),
-                                "netperf_opts" : p_opts})
-
-netperf_cli_tcp6.update_options({"netperf_server" : test_if2.get_ip(1),
-                                 "netperf_opts" : p_opts6 })
-
-netperf_cli_udp6.update_options({"netperf_server" : test_if2.get_ip(1),
-                                 "netperf_opts" : p_opts6 })
-
-for setting in offload_settings:
-    dev_features = ""
-    for offload in setting:
-        dev_features += " %s %s" % (offload[0], offload[1])
-    m1.run("ethtool -K %s %s" % (test_if1.get_devname(), dev_features))
-    m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features))
-
-    if ipv in [ 'ipv4', 'both' ]:
-        m2.run(ping_mod)
-
-        server_proc = m2.run(netperf_srv, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv4_id",
-                                         "tcp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.set_parameter('netperf_server', "testmachine2")
-        result_tcp.set_parameter('netperf_client', "testmachine1")
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp, baseline)
-
-        tcp_res_data = m1.run(netperf_cli_tcp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv4_id",
-                                         "udp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.set_parameter('netperf_server', "testmachine2")
-        result_udp.set_parameter('netperf_client', "testmachine1")
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp, baseline)
-
-        udp_res_data = m1.run(netperf_cli_udp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-    if ipv in [ 'ipv6', 'both' ]:
-        m2.run(ping_mod6)
-
-        server_proc = m2.run(netperf_srv6, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                         "tcp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.set_parameter('netperf_server', "testmachine2")
-        result_tcp.set_parameter('netperf_client', "testmachine1")
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp6, baseline)
-
-        tcp_res_data = m1.run(netperf_cli_tcp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv6_id",
-                                         "udp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.set_parameter('netperf_server', "testmachine2")
-        result_udp.set_parameter('netperf_client', "testmachine1")
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp6, baseline)
-
-        udp_res_data = m1.run(netperf_cli_udp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-#reset offload states
-dev_features = ""
-for offload in offloads:
-    dev_features += " %s %s" % (offload, "on")
-m1.run("ethtool -K %s %s" % (test_if1.get_devname(), dev_features))
-m2.run("ethtool -K %s %s" % (test_if2.get_devname(), dev_features))
-
-if nperf_cpupin:
-    m1.run("service irqbalance start")
-    m2.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.README b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.README
deleted file mode 100644
index 20fb6ac..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.README
+++ /dev/null
@@ -1,77 +0,0 @@
-Topology:
-
-                               switch
-                             +--------+
-                             |        |
-       +---------------------+        +---------------------+
-       |         +-----------+        +-----------+         |
-       |         |           |        |           |         |
-       |         |           +--------+           |         |
-       |         |                                |         |
-       |         |                                |         |
-     +-+-+     +-+-+                            +-+-+     +-+-+
-+----+eth1+----+eth2+----+                 +----+eth1+----+eth2+----+
-|    +-+--+    +--+-+    |                 |    +-+--+    +--+-+    |
-|      |         |       |                 |      |         |       |
-|      |         |       |                 |      |         |       |
-|      |         |       |                 |      |         |       |
-| +----+---------+-----+ |                 | +----+---------+-----+ |
-| |   +---bond---+     | |                 | |   +---bond---+     | |
-| |                    | |                 | |                    | |
-| |     ovs_bridge     | |                 | |     ovs_bridge     | |
-| |                    | |                 | |                    | |
-| |  vlan10    vlan20  | |                 | |  vlan10    vlan20  | |
-| +--+--------------+--+ |                 | +--+--------------+--+ |
-|    |    host1     |    |                 |    |    host2     |    |
-|    |              |    |                 |    |              |    |
-|    |              |    |                 |    |              |    |
-|  +-+-+          +-+-+  |                 |  +-+-+          +-+-+  |
-+--+tap+----------+tap+--+                 +--+tap+----------+tap+--+
-   +-+-+          +-+-+                       +-+-+          +-+-+
-     |              |                           |              |
-   +-+-+          +-+-+                       +-+-+          +-+-+
-+--+eth+--+    +--+eth+--+                 +--+eth+--+    +--+eth+--+
-|  +---+  |    |  +---+  |                 |  +---+  |    |  +---+  |
-|         |    |         |                 |         |    |         |
-|  guest1 |    |  guest2 |                 |  guest3 |    |  guest4 |
-|         |    |         |                 |         |    |         |
-|         |    |         |                 |         |    |         |
-+---------+    +---------+                 +---------+    +---------+
-
-Number of hosts: 6
-Host #1 description:
-    Two ethernet devices
-    Two tap devices
-    One Open vSwitch bridge that connects the ethernet devices into a bond and
-    uses the tap devices as access ports for vlans 10 and 20.
-    Host for guest1 and guest2 virtual machines
-Host #2 description:
-    Two ethernet devices
-    Two tap devices
-    One Open vSwitch bridge that connects the ethernet devices into a bond and
-    uses the tap devices as access ports for vlans 10 and 20.
-    Host for guest3 and guest4 virtual machines
-Guest #1 description:
-    One ethernet device
-Guest #2 description:
-    One ethernet device
-Guest #3 description:
-    One ethernet device
-Guest #4 description:
-    One ethernet device
-Test name:
-    virtual_ovs_bridge_2_vlans_over_bond.py
-Test description:
-    Set offload:
-        + gso, gro, tso
-        + guest ethernet devices
-    Ping:
-        + count: 100
-        + interval: 0.1s
-        + between guests in same VLANs (should pass)
-        + between guests in different VLANs (should fail)
-    Netperf:
-        + duration: 60s
-        + TCP_STREAM and UDP_STREAM
-        + between guests in same VLANs
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
deleted file mode 100644
index d795714..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.py
+++ /dev/null
@@ -1,381 +0,0 @@
-from lnst.Controller.Task import ctl
-from lnst.Controller.PerfRepoUtils import netperf_baseline_template
-from lnst.Controller.PerfRepoUtils import netperf_result_template
-
-from lnst.RecipeCommon.IRQ import pin_dev_irqs
-from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
-
-# ------
-# SETUP
-# ------
-
-mapping_file = ctl.get_alias("mapping_file")
-perf_api = ctl.connect_PerfRepo(mapping_file)
-
-product_name = ctl.get_alias("product_name")
-
-# Host 1 + guests 1 and 2
-h1 = ctl.get_host("host1")
-g1 = ctl.get_host("guest1")
-g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-g2 = ctl.get_host("guest2")
-g2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-
-# Host 2 + guests 3 and 4
-h2 = ctl.get_host("host2")
-g3 = ctl.get_host("guest3")
-g3.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-g4 = ctl.get_host("guest4")
-g4.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-
-# ------
-# TESTS
-# ------
-
-offloads = ["gro", "gso", "tso", "tx"]
-offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")],
-                     [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")],
-                     [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")],
-                     [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]]
-
-ipv = ctl.get_alias("ipv")
-netperf_duration = int(ctl.get_alias("netperf_duration"))
-nperf_reserve = int(ctl.get_alias("nperf_reserve"))
-nperf_confidence = ctl.get_alias("nperf_confidence")
-nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
-nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
-nperf_mode = ctl.get_alias("nperf_mode")
-nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
-nperf_debug = ctl.get_alias("nperf_debug")
-nperf_max_dev = ctl.get_alias("nperf_max_dev")
-pr_user_comment = ctl.get_alias("perfrepo_comment")
-
-pr_comment = generate_perfrepo_comment([h1, g1, g2, h2, g3, g4], pr_user_comment)
-
-h1_nic1 = h1.get_interface("nic1")
-h1_nic2 = h1.get_interface("nic2")
-h2_nic1 = h2.get_interface("nic1")
-h2_nic2 = h2.get_interface("nic2")
-g1_guestnic = g1.get_interface("guestnic")
-g2_guestnic = g2.get_interface("guestnic")
-g3_guestnic = g3.get_interface("guestnic")
-g4_guestnic = g4.get_interface("guestnic")
-
-h1.run("service irqbalance stop")
-h2.run("service irqbalance stop")
-
-# this will pin devices irqs to cpu #0
-for m, d in [ (h1, h1_nic1), (h2, h2_nic1) , (h1, h1_nic2), (h2, h2_nic2) ]:
-    pin_dev_irqs(m, d, 0)
-
-
-ping_mod = ctl.get_module("IcmpPing",
-                          options={
-                              "addr" : g3_guestnic.get_ip(0),
-                              "count" : 100,
-                              "iface" : g1_guestnic.get_devname(),
-                              "interval" : 0.1
-                          })
-
-ping_mod2 = ctl.get_module("IcmpPing",
-                           options={
-                               "addr" : g2_guestnic.get_ip(0),
-                               "count" : 100,
-                               "iface" : g4_guestnic.get_ip(0),
-                               "interval" : 0.1
-                           })
-
-ping_mod6 = ctl.get_module("Icmp6Ping",
-                           options={
-                               "addr" : g3_guestnic.get_ip(1),
-                               "count" : 100,
-                               "iface" : g1_guestnic.get_devname(),
-                               "interval" : 0.1
-                           })
-
-ping_mod62 = ctl.get_module("Icmp6Ping",
-                            options={
-                                "addr" : g2_guestnic.get_ip(1),
-                                "count" : 100,
-                                "iface" : g4_guestnic.get_devname(),
-                                "interval" : 0.1
-                            })
-
-netperf_srv = ctl.get_module("Netperf",
-                             options={
-                                 "role": "server",
-                                 "bind" : g1_guestnic.get_ip(0)
-                             })
-
-netperf_srv6 = ctl.get_module("Netperf",
-                              options={
-                                  "role" : "server",
-                                  "bind" : g1_guestnic.get_ip(1),
-                                  "netperf_opts" : " -6",
-                              })
-
-netperf_cli_tcp = ctl.get_module("Netperf",
-                                 options={
-                                     "role" : "client",
-                                     "netperf_server" : g1_guestnic.get_ip(0),
-                                     "duration" : netperf_duration,
-                                     "testname" : "TCP_STREAM",
-                                     "confidence" : nperf_confidence,
-                                     "cpu_util" : nperf_cpu_util,
-                                     "runs": nperf_max_runs,
-                                     "netperf_opts" : "-L %s" %
-                                                      (g3_guestnic.get_ip(0)),
-                                     "debug" : nperf_debug,
-                                     "max_deviation" : nperf_max_dev
-                                 })
-
-netperf_cli_udp = ctl.get_module("Netperf",
-                                 options={
-                                     "role" : "client",
-                                     "netperf_server" : g1_guestnic.get_ip(0),
-                                     "duration" : netperf_duration,
-                                     "testname" : "UDP_STREAM",
-                                     "confidence" : nperf_confidence,
-                                     "cpu_util" : nperf_cpu_util,
-                                     "runs": nperf_max_runs,
-                                     "netperf_opts" : "-L %s" %
-                                                      (g3_guestnic.get_ip(0)),
-                                     "debug" : nperf_debug,
-                                     "max_deviation" : nperf_max_dev
-                                 })
-
-netperf_cli_tcp6 = ctl.get_module("Netperf",
-                                  options={
-                                      "role" : "client",
-                                      "netperf_server" :
-                                          g1_guestnic.get_ip(1),
-                                      "duration" : netperf_duration,
-                                      "testname" : "TCP_STREAM",
-                                      "confidence" : nperf_confidence,
-                                      "cpu_util" : nperf_cpu_util,
-                                      "runs": nperf_max_runs,
-                                      "netperf_opts" :
-                                          "-L %s -6" % (g3_guestnic.get_ip(1)),
-                                      "debug" : nperf_debug,
-                                      "max_deviation" : nperf_max_dev
-                                  })
-
-netperf_cli_udp6 = ctl.get_module("Netperf",
-                                  options={
-                                      "role" : "client",
-                                      "netperf_server" :
-                                          g1_guestnic.get_ip(1),
-                                      "duration" : netperf_duration,
-                                      "testname" : "UDP_STREAM",
-                                      "confidence" : nperf_confidence,
-                                      "cpu_util" : nperf_cpu_util,
-                                      "runs": nperf_max_runs,
-                                      "netperf_opts" :
-                                          "-L %s -6" % (g3_guestnic.get_ip(1)),
-                                      "debug" : nperf_debug,
-                                      "max_deviation" : nperf_max_dev
-                                  })
-
-if nperf_mode == "multi":
-    netperf_cli_tcp.unset_option("confidence")
-    netperf_cli_udp.unset_option("confidence")
-    netperf_cli_tcp6.unset_option("confidence")
-    netperf_cli_udp6.unset_option("confidence")
-
-    netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel})
-
-ping_mod_bad = ctl.get_module("IcmpPing",
-                              options={
-                                  "addr" : g4_guestnic.get_ip(0),
-                                  "count" : 100,
-                                  "iface" : g1_guestnic.get_devname(),
-                                  "interval" : 0.1
-                              })
-
-
-ping_mod_bad2 = ctl.get_module("IcmpPing",
-                               options={
-                                   "addr" : g2_guestnic.get_ip(0),
-                                   "count" : 100,
-                                   "iface" : g3_guestnic.get_devname(),
-                                   "interval" : 0.1
-                               })
-
-ping_mod6_bad = ctl.get_module("Icmp6Ping",
-                               options={
-                                   "addr" : g4_guestnic.get_ip(1),
-                                   "count" : 100,
-                                   "iface" : g1_guestnic.get_devname(),
-                                   "interval" : 0.1
-                               })
-
-ping_mod6_bad2 = ctl.get_module("Icmp6Ping",
-                                options={
-                                    "addr" : g2_guestnic.get_ip(1),
-                                    "count" : 100,
-                                    "iface" : g3_guestnic.get_devname(),
-                                    "interval" : 0.1
-                                })
-
-ctl.wait(15)
-
-for setting in offload_settings:
-    dev_features = ""
-    for offload in setting:
-        dev_features += " %s %s" % (offload[0], offload[1])
-    h1.run("ethtool -K %s %s" % (h1_nic1.get_devname(), dev_features))
-    h1.run("ethtool -K %s %s" % (h1_nic2.get_devname(), dev_features))
-    h2.run("ethtool -K %s %s" % (h2_nic1.get_devname(), dev_features))
-    h2.run("ethtool -K %s %s" % (h2_nic2.get_devname(), dev_features))
-    g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-    g2.run("ethtool -K %s %s" % (g2_guestnic.get_devname(), dev_features))
-    g3.run("ethtool -K %s %s" % (g3_guestnic.get_devname(), dev_features))
-    g4.run("ethtool -K %s %s" % (g4_guestnic.get_devname(), dev_features))
-
-    if ipv in [ 'ipv4', 'both' ]:
-        g1.run(ping_mod)
-        g4.run(ping_mod2)
-        g1.run(ping_mod_bad, expect="fail")
-        g3.run(ping_mod_bad2, expect="fail")
-
-        server_proc = g1.run(netperf_srv, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv4_id",
-                                         "tcp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp, baseline)
-
-        tcp_res_data = g3.run(netperf_cli_tcp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv4_id",
-                                         "udp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp, baseline)
-
-        udp_res_data = g3.run(netperf_cli_udp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-    if ipv in [ 'ipv6', 'both' ]:
-        g1.run(ping_mod6)
-        g4.run(ping_mod62)
-        g1.run(ping_mod6_bad, expect="fail")
-        g3.run(ping_mod6_bad2, expect="fail")
-
-        server_proc = g1.run(netperf_srv6, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp ipv6
-        result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                         "tcp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp6, baseline)
-
-        tcp_res_data = g3.run(netperf_cli_tcp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp ipv6
-        result_udp = perf_api.new_result("udp_ipv6_id",
-                                         "udp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp6, baseline)
-
-        udp_res_data = g3.run(netperf_cli_udp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-#reset offload states
-dev_features = ""
-for offload in offloads:
-    dev_features += " %s %s" % (offload, "on")
-h1.run("ethtool -K %s %s" % (h1_nic1.get_devname(), dev_features))
-h1.run("ethtool -K %s %s" % (h1_nic2.get_devname(), dev_features))
-h2.run("ethtool -K %s %s" % (h2_nic1.get_devname(), dev_features))
-h2.run("ethtool -K %s %s" % (h2_nic2.get_devname(), dev_features))
-g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-g2.run("ethtool -K %s %s" % (g2_guestnic.get_devname(), dev_features))
-g3.run("ethtool -K %s %s" % (g3_guestnic.get_devname(), dev_features))
-g4.run("ethtool -K %s %s" % (g4_guestnic.get_devname(), dev_features))
-
-h1.run("service irqbalance start")
-h2.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.xml b/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.xml
deleted file mode 100644
index 4f34ac5..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_2_vlans_over_active_backup_bond.xml
+++ /dev/null
@@ -1,135 +0,0 @@
-<lnstrecipe>
-    <define>
-        <alias name="ipv" value="both" />
-        <alias name="netperf_duration" value="60" />
-        <alias name="nperf_reserve" value="20" />
-        <alias name="nperf_confidence" value="99,5" />
-        <alias name="nperf_max_runs" value="5" />
-        <alias name="nperf_mode" value="default"/>
-        <alias name="nperf_num_parallel" value="2"/>
-        <alias name="nperf_debug" value="0"/>
-        <alias name="nperf_max_dev" value="20%"/>
-        <alias name="mapping_file" value="virtual_ovs_bridge_2_vlans_over_active_backup_bond.mapping" />
-        <alias name="vlan10_net" value="192.168.10"/>
-        <alias name="vlan10_tag" value="10"/>
-        <alias name="vlan20_net" value="192.168.20"/>
-        <alias name="vlan20_tag" value="20"/>
-    </define>
-    <network>
-        <host id="host1">
-            <interfaces>
-                <eth id="nic1" label="to_switch" />
-                <eth id="nic2" label="to_switch" />
-                <eth id="tap1" label="to_guest1" />
-                <eth id="tap2" label="to_guest2" />
-                <ovs_bridge id="bridge">
-                    <slaves>
-                        <slave id="nic1"/>
-                        <slave id="nic2"/>
-                        <slave id="tap1"/>
-                        <slave id="tap2"/>
-                    </slaves>
-                    <vlan tag="{$vlan10_tag}">
-                        <slaves>
-                            <slave id="tap1"/>
-                        </slaves>
-                    </vlan>
-                    <vlan tag="{$vlan20_tag}">
-                        <slaves>
-                            <slave id="tap2"/>
-                        </slaves>
-                    </vlan>
-                    <bond id="bond">
-                        <slaves>
-                            <slave id="nic1"/>
-                            <slave id="nic2"/>
-                        </slaves>
-                        <options>
-                            <option name="bond_mode" value="active-backup" />
-                            <option name="other_config:bond-miimon-interval" value="100" />
-                        </options>
-                    </bond>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="guest1">
-            <interfaces>
-                <eth id="guestnic" label="to_guest1">
-                    <addresses>
-                        <address>{$vlan10_net}.100/24</address>
-                        <address>fc00:0:0:10::100/64</address>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="guest2">
-            <interfaces>
-                <eth id="guestnic" label="to_guest2">
-                    <addresses>
-                        <address>{$vlan20_net}.100/24</address>
-                        <address>fc00:0:0:20::100/64</address>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-
-        <host id="host2">
-            <interfaces>
-                <eth id="nic1" label="to_switch"/>
-                <eth id="nic2" label="to_switch"/>
-                <eth id="tap1" label="to_guest3"/>
-                <eth id="tap2" label="to_guest4"/>
-                <ovs_bridge id="bridge">
-                    <slaves>
-                        <slave id="nic1"/>
-                        <slave id="nic2"/>
-                        <slave id="tap1"/>
-                        <slave id="tap2"/>
-                    </slaves>
-                    <vlan tag="{$vlan10_tag}">
-                        <slaves>
-                            <slave id="tap1"/>
-                        </slaves>
-                    </vlan>
-                    <vlan tag="{$vlan20_tag}">
-                        <slaves>
-                            <slave id="tap2"/>
-                        </slaves>
-                    </vlan>
-                    <bond id="bond">
-                        <slaves>
-                            <slave id="nic1"/>
-                            <slave id="nic2"/>
-                        </slaves>
-                        <options>
-                            <option name="bond_mode" value="active-backup" />
-                            <option name="other_config:bond-miimon-interval" value="100" />
-                        </options>
-                    </bond>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="guest3">
-            <interfaces>
-                <eth id="guestnic" label="to_guest3">
-                    <addresses>
-                        <address>{$vlan10_net}.101/24</address>
-                        <address>fc00:0:0:10::101/64</address>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="guest4">
-            <interfaces>
-                <eth id="guestnic" label="to_guest4">
-                    <addresses>
-                        <address>{$vlan20_net}.101/24</address>
-                        <address>fc00:0:0:20::101/64</address>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-    </network>
-
-    <task python="virtual_ovs_bridge_2_vlans_over_active_backup_bond.py" />
-</lnstrecipe>
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.README b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.README
deleted file mode 100644
index e14e853..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.README
+++ /dev/null
@@ -1,55 +0,0 @@
-Topology:
-
-                           +----------+
-                           |          | VLAN10
-         +-----------------+  switch  +-----------------+
-         |                 |          |                 |
-         |                 +----------+                 |
-         |                                              |
-       +-+-+                                            |
-+------|nic|------+                                   +-+-+
-|      +-+-+      |                            +------|nic|------+
-|        |        |                            |      +---+      |
-|   +----+        |                            |                 |
-|   |             |                            |                 |
-| +-+---------+   |                            |                 |
-| | ovs_bridge|   |                            |      host2      |
-| +-+---------+   |                            |                 |
-|   |   host1     |                            |                 |
-| +-+-+           |                            |                 |
-+-|tap|-----------+                            |                 |
-  +-+-+                                        +-----------------+
-    |
-    |VLAN10
-    |
-  +-+-+
-+-|nic|--+
-| +---+  |
-| guest1 |
-|        |
-+--------+
-
-Number of hosts: 3 -Host #1 description: - One ethernet device - One tap device - One Open vSwitch bridge device, bridging ethernet and tap devices - Host for guest1 virtual machine -Host #2 description: - One ethernet device with one VLAN subinterface -Guest #1 description: - One ethernet device with one VLAN subinterface -Test name: - virtual_ovs_bridge_vlan_in_guest.py -Test description: - Set device offload parameters: - + gso, gro, tso - + Guest#1 and Host#2 ethernet devices - Ping: - + count: 100 - + interval: 0.1s - + between guest1's VLAN10 and host2's VLAN10 - Netperf: - + duration: 60s - + TCP_STREAM and UDP_STREAM - + between guest1's VLAN10 and host2's VLAN10 diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py deleted file mode 100644 index c96e2a6..0000000 --- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.py +++ /dev/null @@ -1,319 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import netperf_baseline_template -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -h1 = ctl.get_host("host1") -g1 = ctl.get_host("guest1") - -h2 = ctl.get_host("host2") - -g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -h2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -offloads = ["gro", "gso", "tso", "rx", "tx"] -offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")], - [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", 
"on")], - [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")], - [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]] - -ipv = ctl.get_alias("ipv") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_mode = ctl.get_alias("nperf_mode") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment) - -h2_vlan10 = h2.get_interface("vlan10") -g1_vlan10 = g1.get_interface("vlan10") -h1_nic = h1.get_interface("nic") -h2_nic = h2.get_interface("nic") -g1_guestnic = g1.get_interface("guestnic") - -if nperf_cpupin: - h1.run("service irqbalance stop") - h2.run("service irqbalance stop") - - # this will pin devices irqs to cpu #0 - for m, d in [ (h1, h1_nic), (h2, h2_nic) ]: - pin_dev_irqs(m, d, 0) - -ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : h2_vlan10.get_ip(0), - "count" : 100, - "iface" : g1_vlan10.get_devname(), - "interval" : 0.1 - }) - -ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : h2_vlan10.get_ip(1), - "count" : 100, - "iface" : g1_vlan10.get_ip(1), - "interval" : 0.1 - }) - -netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_vlan10.get_ip(0) - }) - -netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : g1_vlan10.get_ip(1), - "netperf_opts" : " -6", - }) - -p_opts = "-L %s" % (h2_vlan10.get_ip(0)) -if nperf_cpupin and nperf_mode != "multi": - p_opts += " -T%s" % nperf_cpupin - -p_opts6 = "-L %s -6" % (h2_vlan10.get_ip(1)) -if nperf_cpupin 
and nperf_mode != "multi": - p_opts6 += " -T%s" % nperf_cpupin - -netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_vlan10.get_ip(0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : g1_vlan10.get_ip(0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_vlan10.get_ip(1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - g1_vlan10.get_ip(1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "netperf_opts" : p_opts6, - "debug" : nperf_debug, - "max_deviation" : nperf_max_dev - }) - -if nperf_mode == "multi": - netperf_cli_tcp.unset_option("confidence") - netperf_cli_udp.unset_option("confidence") - netperf_cli_tcp6.unset_option("confidence") - netperf_cli_udp6.unset_option("confidence") - - netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel}) - netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel}) - 
-    netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel})
-
-ctl.wait(15)
-
-for setting in offload_settings:
-    dev_features = ""
-    for offload in setting:
-        dev_features += " %s %s" % (offload[0], offload[1])
-    h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features))
-    g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-    h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features))
-
-    if ("rx", "off") in setting:
-        # when rx offload is turned off some of the cards might get reset
-        # and link goes down, so wait a few seconds until NIC is ready
-        ctl.wait(15)
-
-    if ipv in [ 'ipv4', 'both' ]:
-        g1.run(ping_mod)
-
-        server_proc = g1.run(netperf_srv, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv4_id",
-                                         "tcp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp, baseline)
-
-        tcp_res_data = h2.run(netperf_cli_tcp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv4_id",
-                                         "udp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp, baseline)
-
-        udp_res_data = h2.run(netperf_cli_udp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-    if ipv in [ 'ipv6', 'both' ]:
-        g1.run(ping_mod6)
-
-        server_proc = g1.run(netperf_srv6, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp ipv6
-        result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                         "tcp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp6, baseline)
-
-        tcp_res_data = h2.run(netperf_cli_tcp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp ipv6
-        result_udp = perf_api.new_result("udp_ipv6_id",
-                                         "udp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp6, baseline)
-
-        udp_res_data = h2.run(netperf_cli_udp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-#reset offload states
-dev_features = ""
-for offload in offloads:
-    dev_features += " %s %s" % (offload, "on")
-h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features))
-g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features))
-
-if nperf_cpupin:
-    h1.run("service irqbalance start")
-    h2.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.xml b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.xml
deleted file mode 100644
index bfea7a0..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_guest.xml
+++ /dev/null
@@ -1,76 +0,0 @@
-<lnstrecipe>
-    <define>
-        <alias name="ipv" value="both" />
-        <alias name="netperf_duration" value="60" />
-        <alias name="nperf_reserve" value="20" />
-        <alias name="nperf_confidence" value="99,5" />
-        <alias name="nperf_max_runs" value="5" />
-        <alias name="nperf_mode" value="default"/>
-        <alias name="nperf_num_parallel" value="2"/>
-        <alias name="nperf_debug" value="0"/>
-        <alias name="nperf_max_dev" value="20%"/>
-        <alias name="mapping_file" value="virtual_ovs_bridge_vlan_in_guest.mapping" />
-        <alias name="vlan10_net" value="192.168.10"/>
-        <alias name="vlan10_tag" value="10"/>
-    </define>
-    <network>
-        <host id="host1">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="nic" label="to_switch" />
-                <eth id="tap" label="to_guest" />
-                <ovs_bridge id="br">
-                    <slaves>
-                        <slave id="tap" />
-                        <slave id="nic" />
-                    </slaves>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="guest1">
-            <interfaces>
-                <eth id="guestnic" label="to_guest">
-                    <params>
-                        <param name="driver" value="virtio" />
-                    </params>
-                </eth>
-                <vlan id="vlan10">
-                    <options>
-                        <option name="vlan_tci" value="{$vlan10_tag}" />
-                    </options>
-                    <slaves>
-                        <slave id="guestnic" />
-                    </slaves>
-                    <addresses>
-                        <address>{$vlan10_net}.10/24</address>
-                        <address>fc00:0:0:10::10/64</address>
-                    </addresses>
-                </vlan>
-            </interfaces>
-        </host>
-        <host id="host2">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="nic" label="to_switch" />
-                <vlan id="vlan10">
-                    <options>
-                        <option name="vlan_tci" value="{$vlan10_tag}" />
-                    </options>
-                    <slaves>
-                        <slave id="nic" />
-                    </slaves>
-                    <addresses>
-                        <address>{$vlan10_net}.11/24</address>
-                        <address>fc00:0:0:10::11/64</address>
-                    </addresses>
-                </vlan>
-            </interfaces>
-        </host>
-    </network>
-
-    <task python="virtual_ovs_bridge_vlan_in_guest.py" />
-</lnstrecipe>
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.README b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.README
deleted file mode 100644
index 0674251..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.README
+++ /dev/null
@@ -1,58 +0,0 @@
-Topology:
-
-                   +----------+
-                   |          | VLAN10
-  +----------------+  switch  +--------------------+
-  |                |          |                    |
-  |                +----------+                    |
-  |                                                |
-+-+-+                                              |
-+------|nic|---------+                           +-+-+
-|      +-+-+         |                +------|nic|------+
-|        |           |                |      +---+      |
-|        |           |                |                 |
-| +------+-------+   |                |                 |
-| |    vlan10    |   |                |     host2       |
-| |              |   |                |                 |
-| |  ovs_bridge  |   |                |                 |
-| |              |   |                |                 |
-| +-+------------+   |                +-----------------+
-|   |                |
-| +-+-+              |
-+-|tap|--------------+
-  +-+-+
-    |
-    |
-    |
-  +-+-+
-+-|nic|--+
-| +---+  |
-| guest1 |
-|        |
-+--------+
-
-Number of hosts: 3
-Host #1 description:
-    One ethernet device
-    One tap device
-    One Open vSwitch bridge device, bridging the tap device and the ethernet
-    device, the ethernet device is used as an access port for VLAN 10
-    Host for guest1 virtual machine
-Host #2 description:
-    One ethernet device with one VLAN subinterface
-Guest #1 description:
-    One ethernet device
-Test name:
-    virtual_bridge_vlan_in_guest.py
-Test description:
-    Set offload parameters:
-        + gso, gro, tso
-        + Guest#1 and Host#2 ethernet devices
-    Ping:
-        + count: 100
-        + interval: 0.1s
-        + between guest1's NIC and host2's VLAN10
-    Netperf:
-        + duration: 60s
-        + TCP_STREAM and UDP_STREAM
-        + between guest1's NIC and host2's VLAN10
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
deleted file mode 100644
index eaa1cab..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.py
+++ /dev/null
@@ -1,319 +0,0 @@
-from lnst.Controller.Task import ctl
-from lnst.Controller.PerfRepoUtils import netperf_baseline_template
-from lnst.Controller.PerfRepoUtils import netperf_result_template
-
-from lnst.RecipeCommon.IRQ import pin_dev_irqs
-from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
-
-# ------
-# SETUP
-# ------
-
-mapping_file = ctl.get_alias("mapping_file")
-perf_api = ctl.connect_PerfRepo(mapping_file)
-
-product_name = ctl.get_alias("product_name")
-
-h1 = ctl.get_host("host1")
-g1 = ctl.get_host("guest1")
-
-h2 = ctl.get_host("host2")
-
-g1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-h2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-
-# ------
-# TESTS
-# ------
-
-offloads = ["gro", "gso", "tso", "rx", "tx"]
-offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")],
-                     [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")],
-                     [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")],
-                     [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")],
-                     [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]]
-
-ipv = ctl.get_alias("ipv")
-netperf_duration = int(ctl.get_alias("netperf_duration"))
-nperf_reserve = int(ctl.get_alias("nperf_reserve"))
-nperf_confidence = ctl.get_alias("nperf_confidence")
-nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
-nperf_cpupin = ctl.get_alias("nperf_cpupin")
-nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
-nperf_mode = ctl.get_alias("nperf_mode")
-nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
-nperf_debug = ctl.get_alias("nperf_debug")
-nperf_max_dev = ctl.get_alias("nperf_max_dev")
-pr_user_comment = ctl.get_alias("perfrepo_comment")
-
-pr_comment = generate_perfrepo_comment([h1, g1, h2], pr_user_comment)
-
-h2_vlan10 = h2.get_interface("vlan10")
-g1_guestnic = g1.get_interface("guestnic")
-h1_nic = h1.get_interface("nic")
-h2_nic = h2.get_interface("nic")
-
-if nperf_cpupin:
-    h1.run("service irqbalance stop")
-    h2.run("service irqbalance stop")
-
-    # this will pin devices irqs to cpu #0
-    for m, d in [ (h1, h1_nic), (h2, h2_nic) ]:
-        pin_dev_irqs(m, d, 0)
-
-ping_mod = ctl.get_module("IcmpPing",
-                          options={
-                              "addr" : h2_vlan10.get_ip(0),
-                              "count" : 100,
-                              "iface" : g1_guestnic.get_devname(),
-                              "interval" : 0.1
-                          })
-
-ping_mod6 = ctl.get_module("Icmp6Ping",
-                           options={
-                               "addr" : h2_vlan10.get_ip(1),
-                               "count" : 100,
-                               "iface" : g1_guestnic.get_ip(1),
-                               "interval" : 0.1
-                           })
-
-netperf_srv = ctl.get_module("Netperf",
-                             options={
-                                 "role" : "server",
-                                 "bind" : g1_guestnic.get_ip(0)
-                             })
-
-netperf_srv6 = ctl.get_module("Netperf",
-                              options={
-                                  "role" : "server",
-                                  "bind" : g1_guestnic.get_ip(1),
-                                  "netperf_opts" : " -6",
-                              })
-
-p_opts = "-L %s" % (h2_vlan10.get_ip(0))
-if nperf_cpupin and nperf_mode != "multi":
-    p_opts += " -T%s" % nperf_cpupin
-
-p_opts6 = "-L %s -6" % (h2_vlan10.get_ip(1))
-if nperf_cpupin and nperf_mode != "multi":
-    p_opts6 += " -T%s" % nperf_cpupin
-
-netperf_cli_tcp = ctl.get_module("Netperf",
-                                 options={
-                                     "role" : "client",
-                                     "netperf_server" : g1_guestnic.get_ip(0),
-                                     "duration" : netperf_duration,
-                                     "testname" : "TCP_STREAM",
-                                     "confidence" : nperf_confidence,
-                                     "cpu_util" : nperf_cpu_util,
-                                     "runs": nperf_max_runs,
-                                     "netperf_opts" : p_opts,
-                                     "debug" : nperf_debug,
-                                     "max_deviation" : nperf_max_dev
-                                 })
-
-netperf_cli_udp = ctl.get_module("Netperf",
-                                 options={
-                                     "role" : "client",
-                                     "netperf_server" : g1_guestnic.get_ip(0),
-                                     "duration" : netperf_duration,
-                                     "testname" : "UDP_STREAM",
-                                     "confidence" : nperf_confidence,
-                                     "cpu_util" : nperf_cpu_util,
-                                     "runs": nperf_max_runs,
-                                     "netperf_opts" : p_opts,
-                                     "debug" : nperf_debug,
-                                     "max_deviation" : nperf_max_dev
-                                 })
-
-netperf_cli_tcp6 = ctl.get_module("Netperf",
-                                  options={
-                                      "role" : "client",
-                                      "netperf_server" :
-                                          g1_guestnic.get_ip(1),
-                                      "duration" : netperf_duration,
-                                      "testname" : "TCP_STREAM",
-                                      "confidence" : nperf_confidence,
-                                      "cpu_util" : nperf_cpu_util,
-                                      "runs": nperf_max_runs,
-                                      "netperf_opts" : p_opts6,
-                                      "debug" : nperf_debug,
-                                      "max_deviation" : nperf_max_dev
-                                  })
-
-netperf_cli_udp6 = ctl.get_module("Netperf",
-                                  options={
-                                      "role" : "client",
-                                      "netperf_server" :
-                                          g1_guestnic.get_ip(1),
-                                      "duration" : netperf_duration,
-                                      "testname" : "UDP_STREAM",
-                                      "confidence" : nperf_confidence,
-                                      "cpu_util" : nperf_cpu_util,
-                                      "runs": nperf_max_runs,
-                                      "netperf_opts" : p_opts6,
-                                      "debug" : nperf_debug,
-                                      "max_deviation" : nperf_max_dev
-                                  })
-
-if nperf_mode == "multi":
-    netperf_cli_tcp.unset_option("confidence")
-    netperf_cli_udp.unset_option("confidence")
-    netperf_cli_tcp6.unset_option("confidence")
-    netperf_cli_udp6.unset_option("confidence")
-
-    netperf_cli_tcp.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_udp.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_tcp6.update_options({"num_parallel": nperf_num_parallel})
-    netperf_cli_udp6.update_options({"num_parallel": nperf_num_parallel})
-
-ctl.wait(15)
-
-for setting in offload_settings:
-    dev_features = ""
-    for offload in setting:
-        dev_features += " %s %s" % (offload[0], offload[1])
-    h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features))
-    g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-    h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features))
-
-    if ("rx", "off") in setting:
-        # when rx offload is turned off some of the cards might get reset
-        # and link goes down, so wait a few seconds until NIC is ready
-        ctl.wait(15)
-
-    if ipv in [ 'ipv4', 'both' ]:
-        g1.run(ping_mod)
-
-        server_proc = g1.run(netperf_srv, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp
-        result_tcp = perf_api.new_result("tcp_ipv4_id",
-                                         "tcp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp, baseline)
-
-        tcp_res_data = h2.run(netperf_cli_tcp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp
-        result_udp = perf_api.new_result("udp_ipv4_id",
-                                         "udp_ipv4_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp, baseline)
-
-        udp_res_data = h2.run(netperf_cli_udp,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-    if ipv in [ 'ipv6', 'both' ]:
-        g1.run(ping_mod6)
-
-        server_proc = g1.run(netperf_srv6, bg=True)
-        ctl.wait(2)
-
-        # prepare PerfRepo result for tcp ipv6
-        result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                         "tcp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_tcp.set_parameter(offload[0], offload[1])
-        result_tcp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_tcp.add_tag("multithreaded")
-            result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_tcp)
-        netperf_baseline_template(netperf_cli_tcp6, baseline)
-
-        tcp_res_data = h2.run(netperf_cli_tcp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_tcp, tcp_res_data)
-        result_tcp.set_comment(pr_comment)
-        perf_api.save_result(result_tcp)
-
-        # prepare PerfRepo result for udp ipv6
-        result_udp = perf_api.new_result("udp_ipv6_id",
-                                         "udp_ipv6_result",
-                                         hash_ignore=[
-                                             'kernel_release',
-                                             'redhat_release',
-                                             r'guest\d+.hostname',
-                                             r'guest\d+..*hwaddr',
-                                             r'host\d+..*tap\d*.hwaddr',
-                                             r'host\d+..*tap\d*.devname'])
-        for offload in setting:
-            result_udp.set_parameter(offload[0], offload[1])
-        result_udp.add_tag(product_name)
-        if nperf_mode == "multi":
-            result_udp.add_tag("multithreaded")
-            result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-        baseline = perf_api.get_baseline_of_result(result_udp)
-        netperf_baseline_template(netperf_cli_udp6, baseline)
-
-        udp_res_data = h2.run(netperf_cli_udp6,
-                              timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-        netperf_result_template(result_udp, udp_res_data)
-        result_udp.set_comment(pr_comment)
-        perf_api.save_result(result_udp)
-
-        server_proc.intr()
-
-#reset offload states
-dev_features = ""
-for offload in offloads:
-    dev_features += " %s %s" % (offload, "on")
-h1.run("ethtool -K %s %s" % (h1_nic.get_devname(), dev_features))
-g1.run("ethtool -K %s %s" % (g1_guestnic.get_devname(), dev_features))
-h2.run("ethtool -K %s %s" % (h2_nic.get_devname(), dev_features))
-
-if nperf_cpupin:
-    h1.run("service irqbalance start")
-    h2.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.xml b/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.xml
deleted file mode 100644
index ea8f41a..0000000
--- a/recipes/regression_tests/phase2/virtual_ovs_bridge_vlan_in_host.xml
+++ /dev/null
@@ -1,74 +0,0 @@
-<lnstrecipe>
-    <define>
-        <alias name="ipv" value="both" />
-        <alias name="netperf_duration" value="60" />
-        <alias name="nperf_reserve" value="20" />
-        <alias name="nperf_confidence" value="99,5" />
-        <alias name="nperf_max_runs" value="5" />
-        <alias name="nperf_mode" value="default"/>
-        <alias name="nperf_num_parallel" value="2"/>
-        <alias name="nperf_debug" value="0"/>
-        <alias name="nperf_max_dev" value="20%"/>
-        <alias name="mapping_file" value="virtual_ovs_bridge_vlan_in_host.mapping" />
-        <alias name="vlan10_net" value="192.168.10"/>
-        <alias name="vlan10_tag" value="10"/>
-    </define>
-    <network>
-        <host id="host1">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="nic" label="to_switch"/>
-                <eth id="tap" label="to_guest"/>
-                <ovs_bridge id="ovs_br">
-                    <slaves>
-                        <slave id="tap" />
-                        <slave id="nic" />
-                    </slaves>
-                    <vlan tag="{$vlan10_tag}">
-                        <slaves>
-                            <slave id="tap"/>
-                        </slaves>
-                    </vlan>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="guest1">
-            <interfaces>
-                <eth id="guestnic" label="to_guest">
-                    <params>
-                        <param name="driver" value="virtio"/>
-                    </params>
-                    <addresses>
-                        <address>{$vlan10_net}.10/24</address>
-                        <address>fc00:0:0:10::10/64</address>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="host2">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="nic" label="to_switch">
-                </eth>
-                <vlan id="vlan10">
-                    <options>
-                        <option name="vlan_tci" value="{$vlan10_tag}" />
-                    </options>
-                    <slaves>
-                        <slave id="nic" />
-                    </slaves>
-                    <addresses>
-                        <address>{$vlan10_net}.11/24</address>
-                        <address>fc00:0:0:10::11/64</address>
-                    </addresses>
-                </vlan>
-            </interfaces>
-        </host>
-    </network>
-
-    <task python="virtual_ovs_bridge_vlan_in_host.py" />
-</lnstrecipe>
diff --git a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.README b/recipes/regression_tests/phase3/2_virt_ovs_vxlan.README
deleted file mode 100644
index e7c8c90..0000000
--- a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.README
+++ /dev/null
@@ -1,129 +0,0 @@
-Topology:
-
-                                switch
-                             +--------+
-                             |        |
-             +---------------+        +--------------------+
-             |               |        |                    |
-             |               |        |                    |
-             |               |        |                    |
-             |               +--------+                    |
-             |                                             |
-             |                                             |
-          +--+--+                                       +--+--+
-+---------| eth |--------+                    +---------| eth |--------+
-|         +-----+        |                    |         +-----+        |
-|                        |                    |                        |
-|      +---------------------------------------------+                 |
-|      |                 |                    |       |                |
-|  +----------+---------+|                    | +----------+---------+ |
-|  |  vxlan   |         ||                    | |  vxlan   |         | |
-|  |          |         ||                    | |          |         | |
-|  |     ovs_bridge     ||                    | |     ovs_bridge     | |
-|  |tun_id       tun_id ||                    | |tun_id       tun_id | |
-|  |  100         200   ||                    | |  100         200   | |
-|  +--+--------------+--+|                    | +--+--------------+--+ |
-|     |    host1     |   |                    |    |    host2     |    |
-|     |              |   |                    |    |              |    |
-|     |              |   |                    |    |              |    |
-|  +-+-+          +-+-+  |                    |  +-+-+          +-+-+  |
-+--+tap+----------+tap+--+                    +--+tap+----------+tap+--+
-   +-+-+          +-+-+                          +-+-+          +-+-+
-     |              |                              |              |
-   +-+-+          +-+-+                          +-+-+          +-+-+
-+--+eth+--+    +--+eth+--+                    +--+eth+--+    +--+eth+--+
-|  +---+  |    |  +---+  |                    |  +---+  |    |  +---+  |
-|         |    |         |                    |         |    |         |
-| guest1  |    | guest2  |                    | guest3  |    | guest4  |
-|         |    |         |                    |         |    |         |
-|         |    |         |                    |         |    |         |
-+---------+    +---------+                    +---------+    +---------+
-
-Number of hosts: 6
-Host #1 description:
-    One ethernet device configured with {$net}.1/24 ip address
-    Two tap devices
-    One Open vSwitch bridge that separates the guests into two distinct VXLANs
-    with tun_id=100 and tun_id=200. The vxlan port remote_ip points to the
-    ethernet interface of Host #2.
-    Host for guest1 and guest2 virtual machines
-Host #2 description:
-    One ethernet device configured with {$net}.2/24 ip address
-    Two tap devices
-    One Open vSwitch bridge that separates the guests into two distinct VXLANs
-    with tun_id=100 and tun_id=200. The vxlan port remote_ip points to the
-    ethernet interface of Host #1.
-    Host for guest3 and guest4 virtual machines
-Guest #1 description:
-    One ethernet device configured with ip addresses:
-    {$vxlan_net}.1/24
-    {$vxlan_net6}::1/64
-Guest #2 description:
-    One ethernet device configured with ip addresses:
-    {$vxlan_net}.2/24
-    {$vxlan_net6}::2/64
-Guest #3 description:
-    One ethernet device configured with ip addresses:
-    {$vxlan_net}.3/24
-    {$vxlan_net6}::3/64
-Guest #4 description:
-    One ethernet device configured with ip addresses:
-    {$vxlan_net}.4/24
-    {$vxlan_net6}::4/64
-Test name:
-    2_virt_ovs_vxlan.py
-Test description:
-    Ping:
-        + count: 100
-        + interval: 0.1s
-        + guest1 -> guest2 expecting FAIL
-        + guest1 -> guest3 expecting PASS
-        + guest1 -> guest4 expecting FAIL
-        + guest2 -> guest3 expecting FAIL
-        + guest2 -> guest4 expecting PASS
-        + guest3 -> guest4 expecting FAIL
-    Ping6:
-        + count: 100
-        + interval: 0.1s
-        + guest1 -> guest2 expecting FAIL
-        + guest1 -> guest3 expecting PASS
-        + guest1 -> guest4 expecting FAIL
-        + guest2 -> guest3 expecting FAIL
-        + guest2 -> guest4 expecting PASS
-        + guest3 -> guest4 expecting FAIL
-    Netperf:
-        + duration: 60s, repeated 5 times to calculate confidence
-        + guest1 -> guest3 TCP_STREAM ipv4
-        + guest1 -> guest3 UDP_STREAM ipv4
-        + guest1 -> guest3 TCP_STREAM ipv6
-        + guest1 -> guest3 UDP_STREAM ipv6
-
-PerfRepo integration:
-    First, preparation in PerfRepo is required - you need to create Test objects
-    through the web interface that properly describe the individual Netperf
-    tests that this recipe runs. Don't forget to also add appropriate metrics.
-    For these Netperf tests it's always:
-    * throughput
-    * throughput_min
-    * throughput_max
-    * throughput_deviation
-
-    After that, to enable support for PerfRepo you need to create the file
-    2_virt_ovs_vxlan.mapping and define the following id mappings:
-    tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
-    tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
-    udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
-    udp_ipv6_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object
-
-    To enable result comparison agains baselines you need to create a Report in
-    PerfRepo that will store the baseline. Set up the Report to only contain results
-    with the same hash tag and then add a new mapping to the mapping file, with
-    this format:
-    <some_hash> = <report_id>
-
-    The hash value is automatically generated during test execution and added
-    to each result stored in PerfRepo. To get the Report id you need to open
-    that report in our browser and find if in the URL.
-
-    When running this recipe you should also define the 'product_name' alias
-    (e.g. RHEL7) in order to tag the result object in PerfRepo.
diff --git a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.py b/recipes/regression_tests/phase3/2_virt_ovs_vxlan.py
deleted file mode 100644
index 5ebaacc..0000000
--- a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.py
+++ /dev/null
@@ -1,279 +0,0 @@
-from lnst.Controller.Task import ctl
-from lnst.Controller.PerfRepoUtils import perfrepo_baseline_to_dict
-from lnst.Controller.PerfRepoUtils import netperf_result_template
-
-from lnst.RecipeCommon.ModuleWrap import ping, ping6, netperf
-from lnst.RecipeCommon.IRQ import pin_dev_irqs
-from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment
-
-# ------
-# SETUP
-# ------
-
-mapping_file = ctl.get_alias("mapping_file")
-perf_api = ctl.connect_PerfRepo(mapping_file)
-
-product_name = ctl.get_alias("product_name")
-
-# hosts
-host1 = ctl.get_host("h1")
-host2 = ctl.get_host("h2")
-
-# guest machines
-guest1 = ctl.get_host("test_host1")
-guest2 = ctl.get_host("test_host2")
-guest3 = ctl.get_host("test_host3")
-guest4 = ctl.get_host("test_host4")
-
-for h in [guest1, guest2, guest3, guest4]:
-    h.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
-
-# ------
-# TESTS
-# ------
-
-ipv = ctl.get_alias("ipv")
-mtu = ctl.get_alias("mtu")
-netperf_duration = int(ctl.get_alias("netperf_duration"))
-nperf_reserve = int(ctl.get_alias("nperf_reserve"))
-nperf_confidence = ctl.get_alias("nperf_confidence")
-nperf_max_runs = int(ctl.get_alias("nperf_max_runs"))
-nperf_cpupin = ctl.get_alias("nperf_cpupin")
-nperf_cpu_util = ctl.get_alias("nperf_cpu_util")
-nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
-nperf_debug = ctl.get_alias("nperf_debug")
-nperf_max_dev = ctl.get_alias("nperf_max_dev")
-pr_user_comment = ctl.get_alias("perfrepo_comment")
-
-pr_comment = generate_perfrepo_comment([guest1, guest2, guest3, guest4],
-                                       pr_user_comment)
-
-g1_nic = guest1.get_interface("if1")
-g2_nic = guest2.get_interface("if1")
-g3_nic = guest3.get_interface("if1")
-g4_nic = guest4.get_interface("if1")
-
-g1_nic.set_mtu(mtu)
-g2_nic.set_mtu(mtu)
-g3_nic.set_mtu(mtu)
-g4_nic.set_mtu(mtu)
-
-if nperf_cpupin:
-    host1.run("service irqbalance stop")
-    host2.run("service irqbalance stop")
-    guest1.run("service irqbalance stop")
-    guest2.run("service irqbalance stop")
-    guest3.run("service irqbalance stop")
-    guest4.run("service irqbalance stop")
-    h1_if = host1.get_interface("if1")
-    h2_if = host2.get_interface("if1")
-
-    #this will pin devices irqs to cpu #0
-    for m, d in [(host1, h1_if), (host2, h2_if), (guest1, g1_nic), (guest2, g2_nic), (guest3, g3_nic), (guest4, g4_nic)]:
-        pin_dev_irqs(m, d, 0)
-
-nperf_opts = ""
-if nperf_cpupin and nperf_num_parallel == 1:
-    nperf_opts = " -T%s,%s" % (nperf_cpupin, nperf_cpupin)
-
-ctl.wait(15)
-
-#pings
-ping_opts = {"count": 100, "interval": 0.1}
-if ipv in ['ipv4', 'both']:
-    ping((guest1, g1_nic, 0, {"scope": 0}),
-         (guest2, g2_nic, 0, {"scope": 0}),
-         options=ping_opts, expect="fail")
-    ping((guest1, g1_nic, 0, {"scope": 0}),
-         (guest3, g3_nic, 0, {"scope": 0}),
-         options=ping_opts)
-    ping((guest1, g1_nic, 0, {"scope": 0}),
-         (guest4, g4_nic, 0, {"scope": 0}),
-         options=ping_opts, expect="fail")
-
-    ping((guest2, g2_nic, 0, {"scope": 0}),
-         (guest3, g3_nic, 0, {"scope": 0}),
-         options=ping_opts, expect="fail")
-    ping((guest2, g2_nic, 0, {"scope": 0}),
-         (guest4, g4_nic, 0, {"scope": 0}),
-         options=ping_opts)
-
-    ping((guest3, g3_nic, 0, {"scope": 0}),
-         (guest4, g4_nic, 0, {"scope": 0}),
-         options=ping_opts, expect="fail")
-
-if ipv in ['ipv6', 'both']:
-    ping6((guest1, g1_nic, 1, {"scope": 0}),
-          (guest2, g2_nic, 1, {"scope": 0}),
-          options=ping_opts, expect="fail")
-    ping6((guest1, g1_nic, 1, {"scope": 0}),
-          (guest3, g3_nic, 1, {"scope": 0}),
-          options=ping_opts)
-    ping6((guest1, g1_nic, 1, {"scope": 0}),
-          (guest4, g4_nic, 1, {"scope": 0}),
-          options=ping_opts, expect="fail")
-
-    ping6((guest2, g2_nic, 1, {"scope": 0}),
-          (guest3, g3_nic, 1, {"scope": 0}),
-          options=ping_opts, expect="fail")
-    ping6((guest2, g2_nic, 1, {"scope": 0}),
-          (guest4, g4_nic, 1, {"scope": 0}),
-          options=ping_opts)
-
-    ping6((guest3, g3_nic, 1, {"scope": 0}),
-          (guest4, g4_nic, 1, {"scope": 0}),
-          options=ping_opts, expect="fail")
-
-
-if ipv in [ 'ipv4', 'both' ]:
-    # prepare PerfRepo result for tcp
-    result_tcp = perf_api.new_result("tcp_ipv4_id",
-                                     "tcp_ipv4_result",
-                                     hash_ignore=[
-                                         r'kernel_release',
-                                         r'redhat_release',
-                                         r'test_host\d+.hostname',
-                                         r'test_host\d+..*hwaddr',
-                                         r'machine_h\d+..*ovs\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.devname'])
-    result_tcp.add_tag(product_name)
-    if nperf_num_parallel > 1:
-        result_tcp.add_tag("multithreaded")
-        result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-    baseline = perf_api.get_baseline_of_result(result_tcp)
-    baseline = perfrepo_baseline_to_dict(baseline)
-
-    tcp_res_data = netperf((guest1, g1_nic, 0), (guest3, g3_nic, 0),
-                           client_opts={"duration" : netperf_duration,
-                                        "testname" : "TCP_STREAM",
-                                        "confidence" : nperf_confidence,
-                                        "num_parallel" : nperf_num_parallel,
-                                        "cpu_util" : nperf_cpu_util,
-                                        "runs": nperf_max_runs,
-                                        "netperf_opts": nperf_opts,
-                                        "debug": nperf_debug,
-                                        "max_deviation": nperf_max_dev},
-                           baseline = baseline,
-                           timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-    netperf_result_template(result_tcp, tcp_res_data)
-    result_tcp.set_comment(pr_comment)
-    perf_api.save_result(result_tcp)
-
-    # prepare PerfRepo result for udp
-    result_udp = perf_api.new_result("udp_ipv4_id",
-                                     "udp_ipv4_result",
-                                     hash_ignore=[
-                                         r'kernel_release',
-                                         r'redhat_release',
-                                         r'test_host\d+.hostname',
-                                         r'test_host\d+..*hwaddr',
-                                         r'machine_h\d+..*ovs\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.devname'])
-    result_udp.add_tag(product_name)
-    if nperf_num_parallel > 1:
-        result_udp.add_tag("multithreaded")
-        result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-    baseline = perf_api.get_baseline_of_result(result_udp)
-    baseline = perfrepo_baseline_to_dict(baseline)
-
-    udp_res_data = netperf((guest1, g1_nic, 0), (guest3, g3_nic, 0),
-                           client_opts={"duration" : netperf_duration,
-                                        "testname" : "UDP_STREAM",
-                                        "confidence" : nperf_confidence,
-                                        "num_parallel" : nperf_num_parallel,
-                                        "cpu_util" : nperf_cpu_util,
-                                        "runs": nperf_max_runs,
-                                        "netperf_opts": nperf_opts,
-                                        "debug": nperf_debug,
-                                        "max_deviation": nperf_max_dev},
-                           baseline = baseline,
-                           timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-    netperf_result_template(result_udp, udp_res_data)
-    result_udp.set_comment(pr_comment)
-    perf_api.save_result(result_udp)
-if ipv in [ 'ipv6', 'both' ]:
-    # prepare PerfRepo result for tcp ipv6
-    result_tcp = perf_api.new_result("tcp_ipv6_id",
-                                     "tcp_ipv6_result",
-                                     hash_ignore=[
-                                         r'kernel_release',
-                                         r'redhat_release',
-                                         r'test_host\d+.hostname',
-                                         r'test_host\d+..*hwaddr',
-                                         r'machine_h\d+..*ovs\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.devname'])
-    result_tcp.add_tag(product_name)
-    if nperf_num_parallel > 1:
-        result_tcp.add_tag("multithreaded")
-        result_tcp.set_parameter('num_parallel', nperf_num_parallel)
-
-    baseline = perf_api.get_baseline_of_result(result_tcp)
-    baseline = perfrepo_baseline_to_dict(baseline)
-
-    tcp_res_data = netperf((guest1, g1_nic, 1), (guest3, g3_nic, 1),
-                           client_opts={"duration" : netperf_duration,
-                                        "testname" : "TCP_STREAM",
-                                        "confidence" : nperf_confidence,
-                                        "num_parallel" : nperf_num_parallel,
-                                        "cpu_util" : nperf_cpu_util,
-                                        "runs": nperf_max_runs,
-                                        "netperf_opts" : nperf_opts + "-6",
-                                        "debug": nperf_debug,
-                                        "max_deviation": nperf_max_dev},
-                           baseline = baseline,
-                           timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-    netperf_result_template(result_tcp, tcp_res_data)
-    result_tcp.set_comment(pr_comment)
-    perf_api.save_result(result_tcp)
-
-    #prepare PerfRepo result for udp ipv6
-    result_udp = perf_api.new_result("udp_ipv6_id",
-                                     "udp_ipv6_result",
-                                     hash_ignore=[
-                                         r'kernel_release',
-                                         r'redhat_release',
-                                         r'test_host\d+.hostname',
-                                         r'test_host\d+..*hwaddr',
-                                         r'machine_h\d+..*ovs\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.hwaddr',
-                                         r'machine_h\d+..*tap\d*.devname'])
-    result_udp.add_tag(product_name)
-    if nperf_num_parallel > 1:
-        result_udp.add_tag("multithreaded")
-        result_udp.set_parameter('num_parallel', nperf_num_parallel)
-
-    baseline = perf_api.get_baseline_of_result(result_udp)
-    baseline = perfrepo_baseline_to_dict(baseline)
-
-    udp_res_data = netperf((guest1, g1_nic, 1), (guest3, g3_nic, 1),
-                           client_opts={"duration" : netperf_duration,
-                                        "testname" : "UDP_STREAM",
-                                        "confidence" : nperf_confidence,
-                                        "num_parallel" : nperf_num_parallel,
-                                        "cpu_util" : nperf_cpu_util,
-                                        "runs": nperf_max_runs,
-                                        "netperf_opts" : nperf_opts + "-6",
-                                        "debug": nperf_debug,
-                                        "max_deviation": nperf_max_dev},
-                           baseline = baseline,
-                           timeout = (netperf_duration + nperf_reserve)*nperf_max_runs)
-
-    netperf_result_template(result_udp, udp_res_data)
-    result_udp.set_comment(pr_comment)
-    perf_api.save_result(result_udp)
-
-if nperf_cpupin:
-    host1.run("service irqbalance start")
-    host2.run("service irqbalance start")
-    guest1.run("service irqbalance start")
-    guest2.run("service irqbalance start")
-    guest3.run("service irqbalance start")
-    guest4.run("service irqbalance start")
diff --git a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.xml b/recipes/regression_tests/phase3/2_virt_ovs_vxlan.xml
deleted file mode 100644
index 009d648..0000000
--- a/recipes/regression_tests/phase3/2_virt_ovs_vxlan.xml
+++ /dev/null
@@ -1,145 +0,0 @@
-<lnstrecipe>
-    <define>
-        <alias name="ipv" value="both" />
-        <alias name="mtu" value="1450" />
-        <alias name="netperf_duration" value="60" />
-        <alias name="nperf_reserve" value="20" />
-        <alias name="nperf_confidence" value="99,5" />
-        <alias name="nperf_max_runs" value="5"/>
-        <alias name="nperf_num_parallel" value="1"/>
-        <alias name="nperf_debug" value="0"/>
-        <alias name="nperf_max_dev" value="20%"/>
-        <alias name="mapping_file" value="2_virt_ovs_vxlan.mapping" />
-        <alias name="net" value="192.168.2"/>
-        <alias name="vxlan_net" value="192.168.100"/>
-        <alias name="vxlan_net6" value="fc00:0:0:0"/>
-    </define>
-    <network>
-        <host id="h1">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="if1" label="n1">
-                    <addresses>
-                        <address value="{$net}.1/24"/>
-                    </addresses>
-                </eth>
-                <eth id="tap1" label="to_guest1"/>
-                <eth id="tap2" label="to_guest2"/>
-                <ovs_bridge id="ovs1">
-                    <slaves>
-                        <slave id="tap1">
-                            <options>
-                                <option name="ofport_request" value="5"/>
-                            </options>
-                        </slave>
-                        <slave id="tap2">
-                            <options>
-                                <option name="ofport_request" value="6"/>
-                            </options>
-                        </slave>
-                    </slaves>
-                    <tunnel id="vxlan1" type="vxlan">
-                        <options>
-                            <option name="option:remote_ip" value="{$net}.2"/>
-                            <option name="option:key" value="flow"/>
-                            <option name="ofport_request" value="10"/>
-                        </options>
-                    </tunnel>
-                    <flow_entries>
-                        <entry>table=0,in_port=5,actions=set_field:100->tun_id,output:10</entry>
-                        <entry>table=0,in_port=6,actions=set_field:200->tun_id,output:10</entry>
-                        <entry>table=0,in_port=10,tun_id=100,actions=output:5</entry>
-                        <entry>table=0,in_port=10,tun_id=200,actions=output:6</entry>
-                        <entry>table=0,priority=100,actions=drop</entry>
-                    </flow_entries>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="test_host1">
-            <interfaces>
-                <eth id="if1" label="to_guest1">
-                    <addresses>
-                        <address value="{$vxlan_net}.1/24"/>
-                        <address value="{$vxlan_net6}::1/64"/>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="test_host2">
-            <interfaces>
-                <eth id="if1" label="to_guest2">
-                    <addresses>
-                        <address value="{$vxlan_net}.2/24"/>
-                        <address value="{$vxlan_net6}::2/64"/>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="h2">
-            <params>
-                <param name="machine_type" value="baremetal"/>
-            </params>
-            <interfaces>
-                <eth id="if1" label="n1">
-                    <addresses>
-                        <address value="{$net}.2/24"/>
-                    </addresses>
-                </eth>
-                <eth id="tap1" label="to_guest3"/>
-                <eth id="tap2" label="to_guest4"/>
-                <ovs_bridge id="ovs2">
-                    <slaves>
-                        <slave id="tap1">
-                            <options>
-                                <option name="ofport_request" value="5"/>
-                            </options>
-                        </slave>
-                        <slave id="tap2">
-                            <options>
-                                <option name="ofport_request" value="6"/>
-                            </options>
-                        </slave>
-                    </slaves>
-                    <tunnel id="vxlan1" type="vxlan">
-                        <options>
-                            <option name="option:remote_ip" value="{$net}.1"/>
-                            <option name="option:key" value="flow"/>
-                            <option name="ofport_request" value="10"/>
-                        </options>
-                    </tunnel>
-                    <flow_entries>
-                        <entry>table=0,in_port=5,actions=set_field:100->tun_id,output:10</entry>
-                        <entry>table=0,in_port=6,actions=set_field:200->tun_id,output:10</entry>
-                        <entry>table=0,in_port=10,tun_id=100,actions=output:5</entry>
-                        <entry>table=0,in_port=10,tun_id=200,actions=output:6</entry>
-                        <entry>table=0,priority=100,actions=drop</entry>
-                    </flow_entries>
-                </ovs_bridge>
-            </interfaces>
-        </host>
-        <host id="test_host3">
-            <interfaces>
-                <eth id="if1" label="to_guest3">
-                    <addresses>
-                        <address value="{$vxlan_net}.3/24"/>
-                        <address value="{$vxlan_net6}::3/64"/>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-        <host id="test_host4">
-            <interfaces>
-                <eth id="if1" label="to_guest4">
-                    <addresses>
-                        <address value="{$vxlan_net}.4/24"/>
-                        <address value="{$vxlan_net6}::4/64"/>
-                    </addresses>
-                </eth>
-            </interfaces>
-        </host>
-    </network>
-
-    <task python="2_virt_ovs_vxlan.py"/>
-</lnstrecipe>
diff --git a/recipes/regression_tests/phase3/novirt_ovs_vxlan.README b/recipes/regression_tests/phase3/novirt_ovs_vxlan.README
deleted file mode 100644
index 8570ebd..0000000
--- a/recipes/regression_tests/phase3/novirt_ovs_vxlan.README
+++ /dev/null
@@ -1,93 +0,0 @@
-Topology:
-
-                                switch
-                             +--------+
-                             |        |
-             +---------------+        +--------------------+
-             |               |        |                    |
-             |               |        |                    |
-             |               |        |                    |
-             |               +--------+                    |
-             |                                             |
-             |                                             |
-          +--+--+                                       +--+--+
-+---------| eth |--------+                    +---------| eth |--------+
-|         +-----+        |                    |         +-----+        |
-|                        |                    |                        |
-|      +---------------------------------------------+                 |
-|      |                 |                    |       |                |
-|  +----------+---------+|                    | +----------+---------+ |
-|  |  vxlan   |         ||                    | |  vxlan   |         | |
-|  |          |         ||                    | |          |         | |
-|  |     ovs_bridge     ||                    | |     ovs_bridge     | |
-|  |                    ||                    | |                    | |
-|  |       +----+       ||                    | |       +----+       | |
-|  +-------|int0|-------+|                    | +-------|int0|-------+ |
-|          +----+        |                    |         +----+         |
-|                        |                    |                        |
-|         host1          |                    |         host2          |
-|                        |                    |                        |
-+------------------------+                    +------------------------+
-
-Number of hosts: 6
-Host #1 description:
-    One ethernet device configured with {$net}.1/24 ip address
-    One Open vSwitch bridge configured with:
-        * a VXLAN tunnel with remote_ip set to the ethernet interface of Host #2.
-        * an 'internal' port tagged with tun_id=100 and ip addresses:
-            {$vxlan_net}.1/24
-            {$vxlan_net6}::1/64
-Host #2 description:
-    One ethernet device configured with {$net}.2/24 ip address
-    One Open vSwitch bridge configured with:
-        * a VXLAN tunnel with remote_ip set to the ethernet interface of Host #1.
-        * an 'internal' port tagged with tun_id=100 and ip addresses:
-            {$vxlan_net}.2/24
-            {$vxlan_net6}::2/64
-Test name:
-    novirt_ovs_vxlan.py
-Test description:
-    Ping:
-        + count: 100
-        + interval: 0.1s
-        + host1.int0 -> host2.int0 expecting PASS
-    Ping6:
-        + count: 100
-        + interval: 0.1s
-        + host1.int0 -> host2.int0 expecting PASS
-    Netperf:
-        + duration: 60s, repeated 5 times to calculate confidence
-        + host1.int0 -> host2.int0 TCP_STREAM ipv4
-        + host1.int0 -> host2.int0 UDP_STREAM ipv4
-        + host1.int0 -> host2.int0 TCP_STREAM ipv6
-        + host1.int0 -> host2.int0 UDP_STREAM ipv6
-
-PerfRepo integration:
-    First, preparation in PerfRepo is required - you need to create Test objects
-    through the web interface that properly describe the individual Netperf
-    tests that this recipe runs. Don't forget to also add appropriate metrics.
- For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - novirt_ovs_vxlan.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
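As an aside for reviewers: the mapping file described in the README above is a plain "key = value" text file. A minimal sketch of how such a file could be parsed follows; the `parse_mapping` helper and the example keys/values are hypothetical illustrations, not LNST's actual PerfRepo code.

```python
# Hypothetical sketch: parse a PerfRepo mapping file such as
# novirt_ovs_vxlan.mapping. LNST's real parser may differ.

def parse_mapping(text):
    """Parse 'key = value' lines, skipping blanks and '#' comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        mapping[key.strip()] = value.strip()
    return mapping

example = """
tcp_ipv4_id = PerfRepoTestUid1
udp_ipv4_id = PerfRepoTestUid2
# baseline report mapping: <some_hash> = <report_id>
3f2a1c = 1234
"""

print(parse_mapping(example)["tcp_ipv4_id"])  # -> PerfRepoTestUid1
```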
diff --git a/recipes/regression_tests/phase3/novirt_ovs_vxlan.py b/recipes/regression_tests/phase3/novirt_ovs_vxlan.py deleted file mode 100644 index 5419ad9..0000000 --- a/recipes/regression_tests/phase3/novirt_ovs_vxlan.py +++ /dev/null @@ -1,216 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import perfrepo_baseline_to_dict -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.ModuleWrap import ping, ping6, netperf -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -# test hosts -h1 = ctl.get_host("test_host1") -h2 = ctl.get_host("test_host2") - -for h in [h1, h2]: - h.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - -# ------ -# TESTS -# ------ - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([h1, h2], pr_user_comment) - -h1_nic = h1.get_device("int0") -h2_nic = h2.get_device("int0") - -h1_nic.set_mtu(mtu) -h2_nic.set_mtu(mtu) - -if nperf_cpupin: - h1.run("service irqbalance stop") - h2.run("service irqbalance stop") - - # this will pin devices irqs to cpu #0 - for m, d in [(h1, h1_nic), (h2, h2_nic)]: - pin_dev_irqs(m, d, 0) - -nperf_opts = "" -if nperf_cpupin and 
nperf_num_parallel == 1: - nperf_opts = " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -ctl.wait(15) - -#pings -ping_opts = {"count": 100, "interval": 0.1} -if ipv in [ 'ipv4', 'both' ]: - ping((h1, h1_nic, 0, {"scope": 0}), - (h2, h2_nic, 0, {"scope": 0}), - options=ping_opts) - -if ipv in [ 'ipv6', 'both' ]: - ping6((h1, h1_nic, 1, {"scope": 0}), - (h2, h2_nic, 1, {"scope": 0}), - options=ping_opts) - -#netperfs -if ipv in [ 'ipv4', 'both' ]: - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'interface_ovs\d+.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((h1, h1_nic, 0, {"scope": 0}), - (h2, h2_nic, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'interface_ovs\d+.hwaddr']) - result_udp.add_tag(product_name) - if nperf_num_parallel > 1: - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) 
- - udp_res_data = netperf((h1, h1_nic, 0, {"scope": 0}), - (h2, h2_nic, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) -if ipv in [ 'ipv6', 'both' ]: - ctl.wait(2) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'interface_ovs\d+.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((h1, h1_nic, 1, {"scope": 0}), - (h2, h2_nic, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + "-6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'interface_ovs\d+.hwaddr']) - result_udp.add_tag(product_name) - if nperf_num_parallel > 1: - 
result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) - - udp_res_data = netperf((h1, h1_nic, 1, {"scope": 0}), - (h2, h2_nic, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + "-6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - -if nperf_cpupin: - h1.run("service irqbalance start") - h2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase3/novirt_ovs_vxlan.xml b/recipes/regression_tests/phase3/novirt_ovs_vxlan.xml deleted file mode 100644 index 98f5f1b..0000000 --- a/recipes/regression_tests/phase3/novirt_ovs_vxlan.xml +++ /dev/null @@ -1,87 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1450" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_num_parallel" value="1"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="novirt_ovs_vxlan.mapping" /> - <alias name="net" value="192.168.2"/> - <alias name="vxlan_net" value="192.168.100"/> - <alias name="vxlan_net6" value="fc00:0:0:0"/> - </define> - <network> - <host id="test_host1"> - <interfaces> - <eth id="if1" label="n1"> - <addresses> - <address value="{$net}.1/24"/> - </addresses> - </eth> - <ovs_bridge id="ovs1"> - <internal id="int0"> - 
<addresses> - <address value="{$vxlan_net}.1/24"/> - <address value="{$vxlan_net6}::1/64"/> - </addresses> - <options> - <option name="ofport_request" value="5"/> - <option name="name" value="int0"/> - </options> - </internal> - <tunnel id="vxlan1" type="vxlan"> - <options> - <option name="option:remote_ip" value="{$net}.2"/> - <option name="option:key" value="flow"/> - <option name="ofport_request" value="10"/> - </options> - </tunnel> - <flow_entries> - <entry>table=0,in_port=5,actions=set_field:100->tun_id,output:10</entry> - <entry>table=0,in_port=10,tun_id=100,actions=output:5</entry> - <entry>table=0,priority=100,actions=drop</entry> - </flow_entries> - </ovs_bridge> - </interfaces> - </host> - <host id="test_host2"> - <interfaces> - <eth id="if1" label="n1"> - <addresses> - <address value="{$net}.2/24"/> - </addresses> - </eth> - <ovs_bridge id="ovs2"> - <internal id="int0"> - <options> - <option name="ofport_request" value="5"/> - <option name="name" value="int0"/> - </options> - <addresses> - <address value="{$vxlan_net}.2/24"/> - <address value="{$vxlan_net6}::2/24"/> - </addresses> - </internal> - <tunnel id="vxlan1" type="vxlan"> - <options> - <option name="option:remote_ip" value="{$net}.1"/> - <option name="option:key" value="flow"/> - <option name="ofport_request" value="10"/> - </options> - </tunnel> - <flow_entries> - <entry>table=0,in_port=5,actions=set_field:100->tun_id,output:10</entry> - <entry>table=0,in_port=10,tun_id=100,actions=output:5</entry> - <entry>table=0,priority=100,actions=drop</entry> - </flow_entries> - </ovs_bridge> - </interfaces> - </host> - </network> - - <task python="novirt_ovs_vxlan.py"/> -</lnstrecipe> diff --git a/recipes/regression_tests/phase3/vxlan_multicast.README b/recipes/regression_tests/phase3/vxlan_multicast.README deleted file mode 100644 index a354144..0000000 --- a/recipes/regression_tests/phase3/vxlan_multicast.README +++ /dev/null @@ -1,118 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - 
+-------------+ +-------------+ - | | | | - | | | | - | +------+ | - | | - | | - +-+--+ +-+--+ -+-------|eth1|------+ +-------|eth1|------+ -|host1 +---++ | |host2 +-+--+ | -| | | | | | -| | | | +-----+ | -|+-----+ +-+-+ | | |vxlan| | -||vxlan|--+br0| | | +-----+ | -|+-----+ +-+-+ | | | -| | | | | -| | | | | -| | | | | -| +-+-+ | | | -+---------|tap|-----+ +-------------------+ - +-+-+ - | - +-+--+ -+---------|eth1|----+ -|guest1 +-+--+ | -| | | -| +--+--+ | -| |vxlan| | -| +-----+ | -| | -+---------+---------+ - -Number of hosts: 3 -Host #1 description: - One ethernet device - Tap device connecting to a guest machine - Bridge br0 enslaving eth1 and tap devices, configured with ip address - {$net}.1/24 - VXLAN interface on top of the bridge interface using group_ip 239.1.1.1 - configured with ip addresses: - {$vxlan_net}.1/24 - {$vxlan_net6}::1/64 -Guest #1 description: - One ethernet device configured with ip address {$net}.2/24 - VXLAN interface on top of the ethernet interface using group_ip 239.1.1.1 - configured with ip addresses: - {$vxlan_net}.2/24 - {$vxlan_net6}::2/64 -Host #2 description: - One ethernet device configured with ip address {$net}.3/24 - VXLAN interface on top of the ethernet interface using group_ip 239.1.1.1 - configured with ip addresses: - {$vxlan_net}.3/24 - {$vxlan_net6}::3/64 -Test name: - vxlan_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + host1.vxlan -> host2.vxlan expecting PASS - + host1.vxlan -> guest1.vxlan expecting PASS - + host2.vxlan -> host1.vxlan expecting PASS - + host2.vxlan -> guest1.vxlan expecting PASS - + guest1.vxlan -> host1.vxlan expecting PASS - + guest1.vxlan -> host2.vxlan expecting PASS - All pings are executed in parallel - Ping6: - + count: 100 - + interval: 0.1s - + host1.vxlan -> host2.vxlan expecting PASS - + host1.vxlan -> guest1.vxlan expecting PASS - + host2.vxlan -> host1.vxlan expecting PASS - + host2.vxlan -> guest1.vxlan expecting PASS - + guest1.vxlan -> host1.vxlan 
expecting PASS - + guest1.vxlan -> host2.vxlan expecting PASS - All pings are executed in parallel - Netperf: - + duration: 60s, repeated 5 times to calculate confidence - + host1.vxlan -> host2.vxlan TCP_STREAM ipv4 - + host1.vxlan -> host2.vxlan UDP_STREAM ipv4 - + host1.vxlan -> host2.vxlan TCP_STREAM ipv6 - + host1.vxlan -> host2.vxlan UDP_STREAM ipv6 - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - vxlan_multicast.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
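A note for reviewers on the README above: "all pings are executed in parallel" amounts to starting a background job for every ordered pair of distinct endpoints and then waiting on all of them, as the recipe script below does with nested loops. A simplified sketch of that pattern (the `start_ping` callable stands in for the recipe's `ping(..., bg=True)` and is hypothetical):

```python
from itertools import permutations

def run_parallel_pings(endpoints, start_ping):
    """Start a background ping for every ordered pair of distinct
    endpoints, then wait for all of them to finish."""
    jobs = [start_ping(src, dst) for src, dst in permutations(endpoints, 2)]
    for job in jobs:
        job.wait()
    return len(jobs)

# With the 3 endpoints (host1, host2, guest1) there are 3 * 2 = 6 ordered
# pairs, matching the six ping combinations listed in the README above.
```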
diff --git a/recipes/regression_tests/phase3/vxlan_multicast.py b/recipes/regression_tests/phase3/vxlan_multicast.py deleted file mode 100644 index 0c29977..0000000 --- a/recipes/regression_tests/phase3/vxlan_multicast.py +++ /dev/null @@ -1,249 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import perfrepo_baseline_to_dict -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.ModuleWrap import ping, ping6, netperf -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") -g1 = ctl.get_host("guest1") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - - -# ------ -# TESTS -# ------ - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -test_if1 = m1.get_interface("test_if") -test_if1.set_mtu(mtu) -test_if2 = m2.get_interface("test_if") -test_if2.set_mtu(mtu) -test_if3 = g1.get_interface("test_if") -test_if3.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - g1.run("service 
irqbalance stop") - - m1_phy1 = m1.get_interface("eth1") - m1_phy2 = m1.get_interface("tap1") - m2_phy1 = m2.get_interface("eth1") - g1_phy1 = g1.get_interface("eth1") - dev_list = [(m1, m1_phy1), (m1, m1_phy2), (m2, m2_phy1), (g1, g1_phy1)] - - # this will pin devices irqs to cpu #0 - for m, d in dev_list: - pin_dev_irqs(m, d, 0) - -nperf_opts = "" -if nperf_cpupin and nperf_num_parallel == 1: - nperf_opts = " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -ctl.wait(15) - -ping_opts = {"count": 100, "interval": 0.1} - -ipv4_endpoints = [(m1, test_if1, 0, {"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - (g1, test_if3, 0, {"scope": 0})] -ipv6_endpoints = [(m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - (g1, test_if3, 1, {"scope": 0})] - -ipv4_pings = [] -for x in ipv4_endpoints: - for y in ipv4_endpoints: - if not x == y: - ipv4_pings.append(ping(x, y, options=ping_opts, bg=True)) - -for i in ipv4_pings: - i.wait() - -ipv6_pings = [] -for x in ipv6_endpoints: - for y in ipv6_endpoints: - if not x == y: - ipv6_pings.append(ping6(x, y, options=ping_opts, bg=True)) - -for i in ipv6_pings: - i.wait() - -ctl.wait(2) -if ipv in [ 'ipv4', 'both' ]: - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'testmachine\d+.interface_tap\d+.hwaddr', - r'guest\d+.hostname', - r'guest\d+.interface_eth\d+.hwaddr', - r'test_if.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((m1, test_if1, 0, {"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - 
"cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'testmachine\d+.interface_tap\d+.hwaddr', - r'guest\d+.hostname', - r'guest\d+.interface_eth\d+.hwaddr', - r'test_if.hwaddr']) - result_udp.add_tag(product_name) - if nperf_num_parallel > 1: - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) - - udp_res_data = netperf((m1, test_if1, 0, {"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - -if ipv in [ 'ipv6', 'both' ]: - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'testmachine\d+.interface_tap\d+.hwaddr', - r'guest\d+.hostname', - r'guest\d+.interface_eth\d+.hwaddr', - r'test_if.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - 
result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + " -6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'testmachine\d+.interface_tap\d+.hwaddr', - r'guest\d+.hostname', - r'guest\d+.interface_eth\d+.hwaddr', - r'test_if.hwaddr']) - result_udp.add_tag(product_name) - if nperf_num_parallel > 1: - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) - - udp_res_data = netperf((m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + "-6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - -if nperf_cpupin: - 
m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase3/vxlan_multicast.xml b/recipes/regression_tests/phase3/vxlan_multicast.xml deleted file mode 100644 index 734c854..0000000 --- a/recipes/regression_tests/phase3/vxlan_multicast.xml +++ /dev/null @@ -1,99 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1450" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_num_parallel" value="1"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="vxlan_multicast.mapping" /> - <alias name="net" value="192.168.0"/> - <alias name="vxlan_net" value="192.168.100"/> - <alias name="vxlan_net6" value="fc00:0:0:0"/> - </define> - <network> - <host id="testmachine1"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="eth1" label="tnet"/> - <eth id="tap1" label="to_guest1"/> - <bridge id="br0"> - <slaves> - <slave id="eth1"/> - <slave id="tap1"/> - </slaves> - <addresses> - <address value="{$net}.1/24" /> - </addresses> - </bridge> - <vxlan id="test_if"> - <options> - <option name="id" value="1"/> - <option name="group_ip" value="239.1.1.1"/> - </options> - <slaves> - <slave id="br0"/> - </slaves> - <addresses> - <address value="{$vxlan_net}.1/24" /> - <address value="{$vxlan_net6}::1/64" /> - </addresses> - </vxlan> - </interfaces> - </host> - <host id="guest1"> - <interfaces> - <eth id="eth1" label="to_guest1"> - <addresses> - <address value="{$net}.2/24" /> - </addresses> - </eth> - <vxlan id="test_if"> - <options> - <option name="id" value="1"/> - <option name="group_ip" value="239.1.1.1"/> - </options> - <slaves> - <slave id="eth1"/> - </slaves> - <addresses> - <address value="{$vxlan_net}.2/24" /> - 
<address value="{$vxlan_net6}::2/64" /> - </addresses> - </vxlan> - </interfaces> - </host> - <host id="testmachine2"> - <params> - <param name="machine_type" value="baremetal"/> - </params> - <interfaces> - <eth id="eth1" label="tnet"> - <addresses> - <address value="{$net}.3/24" /> - </addresses> - </eth> - <vxlan id="test_if"> - <options> - <option name="id" value="1"/> - <option name="group_ip" value="239.1.1.1"/> - </options> - <slaves> - <slave id="eth1"/> - </slaves> - <addresses> - <address value="{$vxlan_net}.3/24" /> - <address value="{$vxlan_net6}::3/64" /> - </addresses> - </vxlan> - </interfaces> - </host> - </network> - - <task python="vxlan_multicast.py" /> -</lnstrecipe> diff --git a/recipes/regression_tests/phase3/vxlan_remote.README b/recipes/regression_tests/phase3/vxlan_remote.README deleted file mode 100644 index 08fc8ef..0000000 --- a/recipes/regression_tests/phase3/vxlan_remote.README +++ /dev/null @@ -1,86 +0,0 @@ -Topology: - - switch - +------+ - | | - | | - +-------------+ +-------------+ - | | | | - | | | | - | +------+ | - | | - | | - +-+--+ +-+--+ -+-------|eth1|------+ +-------|eth1|------+ -| +-+--+ | | +-+--+ | -| | | | | | -| +-----+ | | +-----+ | -| |vxlan| | | |vxlan| | -| +-----+ | | +-----+ | -| | | | -| host1 | | host2 | -| | | | -| | | | -| | | | -+-------------------+ +-------------------+ - -Number of hosts: 2 -Host #1 description: - One ethernet device configured with ip address {$net}.1/24 - VXLAN interface on top of the ethernet interface using remote_ip {$net}.2 - configured with ip addresses: - {$vxlan_net}.1/24 - {$vxlan_net6}::1/64 -Host #2 description: - One ethernet device configured with ip address {$net}.2/24 - VXLAN interface on top of the ethernet interface using remote_ip {$net}.1 - configured with ip addresses: - {$vxlan_net}.2/24 - {$vxlan_net6}::2/64 -Test name: - vxlan_test.py -Test description: - Ping: - + count: 100 - + interval: 0.1s - + host1.vxlan -> host2.vxlan expecting PASS - Ping6: - + count: 100 
- + interval: 0.1s - + host1.vxlan -> host2.vxlan expecting PASS - Netperf: - + duration: 60s, repeated 5 times to calculate confidence - + host1.vxlan -> host2.vxlan TCP_STREAM ipv4 - + host1.vxlan -> host2.vxlan UDP_STREAM ipv4 - + host1.vxlan -> host2.vxlan TCP_STREAM ipv6 - + host1.vxlan -> host2.vxlan UDP_STREAM ipv6 - -PerfRepo integration: - First, preparation in PerfRepo is required - you need to create Test objects - through the web interface that properly describe the individual Netperf - tests that this recipe runs. Don't forget to also add appropriate metrics. - For these Netperf tests it's always: - * throughput - * throughput_min - * throughput_max - * throughput_deviation - - After that, to enable support for PerfRepo you need to create the file - vxlan_remote.mapping and define the following id mappings: - tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object - - To enable result comparison against baselines you need to create a Report in - PerfRepo that will store the baseline. Set up the Report to only contain results - with the same hash tag and then add a new mapping to the mapping file, with - this format: - <some_hash> = <report_id> - - The hash value is automatically generated during test execution and added - to each result stored in PerfRepo. To get the Report id you need to open - that report in your browser and find it in the URL. - - When running this recipe you should also define the 'product_name' alias - (e.g. RHEL7) in order to tag the result object in PerfRepo. 
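For reviewers: the netperf timeouts in all of these deleted recipes derive from the aliases described above via `timeout = (netperf_duration + nperf_reserve) * nperf_max_runs` (each of up to nperf_max_runs confidence iterations gets the test duration plus a setup/teardown reserve). A small sketch of that arithmetic; the helper name is illustrative only:

```python
def netperf_timeout(duration, reserve, max_runs):
    """Worst-case wall time for a netperf job: every confidence run
    takes the full test duration plus the reserve for setup/teardown."""
    return (duration + reserve) * max_runs

# Defaults from the recipe XML aliases: 60s duration, 20s reserve, 5 runs.
print(netperf_timeout(60, 20, 5))  # -> 400
```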
diff --git a/recipes/regression_tests/phase3/vxlan_remote.py b/recipes/regression_tests/phase3/vxlan_remote.py deleted file mode 100644 index 12050d9..0000000 --- a/recipes/regression_tests/phase3/vxlan_remote.py +++ /dev/null @@ -1,215 +0,0 @@ -from lnst.Controller.Task import ctl -from lnst.Controller.PerfRepoUtils import perfrepo_baseline_to_dict -from lnst.Controller.PerfRepoUtils import netperf_result_template - -from lnst.RecipeCommon.ModuleWrap import ping, ping6, netperf -from lnst.RecipeCommon.IRQ import pin_dev_irqs -from lnst.RecipeCommon.PerfRepo import generate_perfrepo_comment - -# ------ -# SETUP -# ------ - -mapping_file = ctl.get_alias("mapping_file") -perf_api = ctl.connect_PerfRepo(mapping_file) - -product_name = ctl.get_alias("product_name") - -m1 = ctl.get_host("testmachine1") -m2 = ctl.get_host("testmachine2") - -m1.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) -m2.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"]) - - -# ------ -# TESTS -# ------ - -ipv = ctl.get_alias("ipv") -mtu = ctl.get_alias("mtu") -netperf_duration = int(ctl.get_alias("netperf_duration")) -nperf_reserve = int(ctl.get_alias("nperf_reserve")) -nperf_confidence = ctl.get_alias("nperf_confidence") -nperf_max_runs = int(ctl.get_alias("nperf_max_runs")) -nperf_cpupin = ctl.get_alias("nperf_cpupin") -nperf_cpu_util = ctl.get_alias("nperf_cpu_util") -nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel")) -nperf_debug = ctl.get_alias("nperf_debug") -nperf_max_dev = ctl.get_alias("nperf_max_dev") -pr_user_comment = ctl.get_alias("perfrepo_comment") - -pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment) - -test_if1 = m1.get_interface("test_if") -test_if1.set_mtu(mtu) -test_if2 = m2.get_interface("test_if") -test_if2.set_mtu(mtu) - -if nperf_cpupin: - m1.run("service irqbalance stop") - m2.run("service irqbalance stop") - - m1_phy1 = m1.get_interface("eth") - m2_phy1 = m2.get_interface("eth") - dev_list = [(m1, m1_phy1), (m2, m2_phy1)] 
- - # this will pin devices irqs to cpu #0 - for m, d in dev_list: - pin_dev_irqs(m, d, 0) - -nperf_opts = "" -if nperf_cpupin and nperf_num_parallel == 1: - nperf_opts = " -T%s,%s" % (nperf_cpupin, nperf_cpupin) - -ctl.wait(15) - -ping_opts = {"count": 100, "interval": 0.1} - -if ipv in [ 'ipv4', 'both' ]: - ping((m1, test_if1, 0, {"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - options=ping_opts) - - ctl.wait(2) - - # prepare PerfRepo result for tcp - result_tcp = perf_api.new_result("tcp_ipv4_id", - "tcp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'test_if.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((m1, test_if1, 0, {"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp - result_udp = perf_api.new_result("udp_ipv4_id", - "udp_ipv4_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'test_if.hwaddr']) - result_udp.add_tag(product_name) - if nperf_num_parallel > 1: - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) - - udp_res_data = netperf((m1, test_if1, 0, 
{"scope": 0}), - (m2, test_if2, 0, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts": nperf_opts}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - -if ipv in [ 'ipv6', 'both' ]: - ping6((m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - options=ping_opts) - - # prepare PerfRepo result for tcp ipv6 - result_tcp = perf_api.new_result("tcp_ipv6_id", - "tcp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'test_if.hwaddr']) - result_tcp.add_tag(product_name) - if nperf_num_parallel > 1: - result_tcp.add_tag("multithreaded") - result_tcp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_tcp) - baseline = perfrepo_baseline_to_dict(baseline) - - tcp_res_data = netperf((m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "TCP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + " -6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_tcp, tcp_res_data) - result_tcp.set_comment(pr_comment) - perf_api.save_result(result_tcp) - - # prepare PerfRepo result for udp ipv6 - result_udp = perf_api.new_result("udp_ipv6_id", - "udp_ipv6_result", - hash_ignore=[ - r'kernel_release', - r'redhat_release', - r'test_if.hwaddr']) - result_udp.add_tag(product_name) - if 
nperf_num_parallel > 1: - result_udp.add_tag("multithreaded") - result_udp.set_parameter('num_parallel', nperf_num_parallel) - - baseline = perf_api.get_baseline_of_result(result_udp) - baseline = perfrepo_baseline_to_dict(baseline) - - udp_res_data = netperf((m1, test_if1, 1, {"scope": 0}), - (m2, test_if2, 1, {"scope": 0}), - client_opts={"duration" : netperf_duration, - "testname" : "UDP_STREAM", - "confidence" : nperf_confidence, - "num_parallel" : nperf_num_parallel, - "cpu_util" : nperf_cpu_util, - "runs": nperf_max_runs, - "debug": nperf_debug, - "max_deviation": nperf_max_dev, - "netperf_opts" : nperf_opts + "-6"}, - baseline = baseline, - timeout = (netperf_duration + nperf_reserve)*nperf_max_runs) - - netperf_result_template(result_udp, udp_res_data) - result_udp.set_comment(pr_comment) - perf_api.save_result(result_udp) - -if nperf_cpupin: - m1.run("service irqbalance start") - m2.run("service irqbalance start") diff --git a/recipes/regression_tests/phase3/vxlan_remote.xml b/recipes/regression_tests/phase3/vxlan_remote.xml deleted file mode 100644 index 040479e..0000000 --- a/recipes/regression_tests/phase3/vxlan_remote.xml +++ /dev/null @@ -1,65 +0,0 @@ -<lnstrecipe> - <define> - <alias name="ipv" value="both" /> - <alias name="mtu" value="1450" /> - <alias name="netperf_duration" value="60" /> - <alias name="nperf_reserve" value="20" /> - <alias name="nperf_confidence" value="99,5" /> - <alias name="nperf_max_runs" value="5"/> - <alias name="nperf_num_parallel" value="1"/> - <alias name="nperf_debug" value="0"/> - <alias name="nperf_max_dev" value="20%"/> - <alias name="mapping_file" value="vxlan_remote.mapping" /> - <alias name="net" value="192.168.0"/> - <alias name="vxlan_net" value="192.168.100"/> - <alias name="vxlan_net6" value="fc00:0:0:0"/> - </define> - <network> - <host id="testmachine1"> - <interfaces> - <eth id="eth" label="tnet"> - <addresses> - <address value="{$net}.1/24" /> - </addresses> - </eth> - <vxlan id="test_if"> - <options> - 
<option name="id" value="1"/> - <option name="remote_ip" value="{$net}.2"/> - </options> - <slaves> - <slave id="eth"/> - </slaves> - <addresses> - <address value="{$vxlan_net}.1/24" /> - <address value="{$vxlan_net6}::1/64" /> - </addresses> - </vxlan> - </interfaces> - </host> - <host id="testmachine2"> - <interfaces> - <eth id="eth" label="tnet"> - <addresses> - <address value="{$net}.2/24" /> - </addresses> - </eth> - <vxlan id="test_if"> - <options> - <option name="id" value="1"/> - <option name="remote_ip" value="{$net}.1"/> - </options> - <slaves> - <slave id="eth"/> - </slaves> - <addresses> - <address value="{$vxlan_net}.2/24" /> - <address value="{$vxlan_net6}::2/64" /> - </addresses> - </vxlan> - </interfaces> - </host> - </network> - - <task python="vxlan_remote.py" /> -</lnstrecipe>
From: Ondrej Lichtner olichtne@redhat.com
The module defines classes for storing performance measurement data and a statistics class that computes statistical properties of this data for use in reporting the results.
This commit adds a prototype implementation of these classes; more work on them is expected.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/PerfResult.py | 152 ++++++++++++++++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 lnst/RecipeCommon/PerfResult.py
diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/PerfResult.py new file mode 100644 index 0000000..ef43436 --- /dev/null +++ b/lnst/RecipeCommon/PerfResult.py @@ -0,0 +1,152 @@ +from lnst.Common.LnstError import LnstError +from lnst.Common.Utils import std_deviation + +class PerfStatMixin(object): + @property + def average(self): + return float(self.value) / self.duration + + @property + def std_deviation(self): + return std_deviation([i.value for i in self]) + +class PerfInterval(PerfStatMixin): + def __init__(self, value, duration, unit): + self._value = value + self._duration = duration + self._unit = unit + + @property + def value(self): + return self._value + + @property + def duration(self): + return self._duration + + @property + def unit(self): + return self._unit + +class PerfList(list): + _sub_type = None + + def __init__(self, iterable=[]): + unit = None + + for i, item in enumerate(iterable): + if not isinstance(item, self._sub_type): + raise LnstError("{} only accepts {} objects." + .format(self.__class__.__name__, + self._sub_type.__name__)) + + if i == 0: + unit = item.unit + + if item.unit != unit: + print "XXXXXXXXXX", item.unit, unit + raise LnstError("PerfInterval items must have the same unit.") + + super(PerfList, self).__init__(iterable) + + def _validate_item(self, item): + if not isinstance(item, self._sub_type): + raise LnstError("{} only accepts {} objects." 
+ .format(self.__class__.__name__, + self._sub_type.__name__)) + + if len(self) > 0 and item.unit != self[0].unit: + raise LnstError("PerfInterval items must have the same unit.") + + def append(self, item): + self._validate_item(item) + + super(PerfList, self).append(item) + + def extend(self, iterable): + for i in iterable: + self._validate_item(i) + + super(PerfList, self).extend(iterable) + + def insert(self, index, item): + self._validate_item(item) + + super(PerfList, self).insert(index, item) + + def __add__(self, iterable): + for i in iterable: + self._validate_item(i) + + super(PerfList, self).__add__(iterable) + + def __iadd__(self, iterable): + for i in iterable: + self._validate_item(i) + + super(PerfList, self).__iadd__(iterable) + + def __setitem__(self, i, item): + self._validate_item(item) + + super(PerfList, self).__setitem__(i, item) + + def __setslice__(self, i, j, iterable): + for i in iterable: + self._validate_item(i) + + super(PerfList, self).__setslice__(i, j, iterable) + +class StreamPerf(PerfList, PerfStatMixin): + _sub_type = PerfInterval + + @property + def value(self): + return sum([i.value for i in self]) + + @property + def duration(self): + return sum([i.duration for i in self]) + + @property + def unit(self): + if len(self) > 0: + return self[0].unit + else: + return None + +class MultiStreamPerf(PerfList, PerfStatMixin): + _sub_type = StreamPerf + + @property + def value(self): + return sum([i.value for i in self]) + + @property + def duration(self): + return max([i.duration for i in self]) + + @property + def unit(self): + if len(self) > 0: + return self[0].unit + else: + return None + +class MultiRunPerf(PerfList, PerfStatMixin): + _sub_type = MultiStreamPerf + + @property + def value(self): + return sum([i.value for i in self]) + + @property + def duration(self): + return sum([i.duration for i in self]) + + @property + def unit(self): + if len(self) > 0: + return self[0].unit + else: + return None
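For orientation, the aggregation scheme these classes implement can be sketched in a few lines. This is an illustrative, self-contained reimplementation (simplified names, not the module's real classes): an interval is one (value, duration) sample, a stream sums its intervals, and "average" is value per second as in PerfStatMixin.

```python
# Simplified stand-ins for PerfInterval / StreamPerf, for illustration only.
class Interval(object):
    def __init__(self, value, duration, unit):
        self.value = value
        self.duration = duration
        self.unit = unit

class Stream(list):
    @property
    def value(self):
        # total value across all intervals
        return sum(i.value for i in self)

    @property
    def duration(self):
        # total measured time
        return sum(i.duration for i in self)

    @property
    def average(self):
        # throughput in unit/second, matching PerfStatMixin.average
        return float(self.value) / self.duration

stream = Stream([Interval(8, 1.0, "bits"), Interval(12, 1.0, "bits")])
print(stream.value)    # 20
print(stream.average)  # 10.0
```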
Mon, May 21, 2018 at 10:42:54AM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
The module defines classes for storing performance measurement data and a statistics class that computes statistical properties of this data for use in reporting the results.
This commit adds a prototype implementation of these classes; more work on them is expected.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
lnst/RecipeCommon/PerfResult.py | 152 ++++++++++++++++++++++++++++++++ 1 file changed, 152 insertions(+) create mode 100644 lnst/RecipeCommon/PerfResult.py
diff --git a/lnst/RecipeCommon/PerfResult.py b/lnst/RecipeCommon/PerfResult.py new file mode 100644 index 0000000..ef43436 --- /dev/null +++ b/lnst/RecipeCommon/PerfResult.py @@ -0,0 +1,152 @@ +from lnst.Common.LnstError import LnstError +from lnst.Common.Utils import std_deviation
+class PerfStatMixin(object):
- @property
- def average(self):
return float(self.value) / self.duration
- @property
- def std_deviation(self):
return std_deviation([i.value for i in self])
+class PerfInterval(PerfStatMixin):
- def __init__(self, value, duration, unit):
self._value = value
self._duration = duration
self._unit = unit
- @property
- def value(self):
return self._value
- @property
- def duration(self):
return self._duration
- @property
- def unit(self):
return self._unit
+class PerfList(list):
- _sub_type = None
- def __init__(self, iterable=[]):
unit = None
for i, item in enumerate(iterable):
if not isinstance(item, self._sub_type):
raise LnstError("{} only accepts {} objects."
.format(self.__class__.__name__,
self._sub_type.__name__))
if i == 0:
unit = item.unit
if item.unit != unit:
print "XXXXXXXXXX", item.unit, unit
^^^ use logging.debug()?
raise LnstError("PerfInterval items must have the same unit.")
the name PerfInterval does not match class name, should be PerfList?
super(PerfList, self).__init__(iterable)
- def _validate_item(self, item):
if not isinstance(item, self._sub_type):
raise LnstError("{} only accepts {} objects."
.format(self.__class__.__name__,
self._sub_type.__name__))
if len(self) > 0 and item.unit != self[0].unit:
raise LnstError("PerfInterval items must have the same unit.")
the name PerfInterval does not match class name, should be PerfList?
- def append(self, item):
self._validate_item(item)
super(PerfList, self).append(item)
- def extend(self, iterable):
for i in iterable:
self._validate_item(i)
super(PerfList, self).extend(iterable)
- def insert(self, index, item):
self._validate_item(item)
super(PerfList, self).insert(index, item)
- def __add__(self, iterable):
for i in iterable:
self._validate_item(i)
super(PerfList, self).__add__(iterable)
- def __iadd__(self, iterable):
for i in iterable:
self._validate_item(i)
super(PerfList, self).__iadd__(iterable)
- def __setitem__(self, i, item):
self._validate_item(item)
super(PerfList, self).__setitem__(i, item)
- def __setslice__(self, i, j, iterable):
for i in iterable:
self._validate_item(i)
super(PerfList, self).__setslice__(i, j, iterable)
+class StreamPerf(PerfList, PerfStatMixin):
- _sub_type = PerfInterval
- @property
- def value(self):
return sum([i.value for i in self])
- @property
- def duration(self):
return sum([i.duration for i in self])
- @property
- def unit(self):
if len(self) > 0:
return self[0].unit
else:
return None
+class MultiStreamPerf(PerfList, PerfStatMixin):
- _sub_type = StreamPerf
- @property
- def value(self):
return sum([i.value for i in self])
- @property
- def duration(self):
return max([i.duration for i in self])
- @property
- def unit(self):
if len(self) > 0:
return self[0].unit
else:
return None
+class MultiRunPerf(PerfList, PerfStatMixin):
- _sub_type = MultiStreamPerf
- @property
- def value(self):
return sum([i.value for i in self])
- @property
- def duration(self):
return sum([i.duration for i in self])
- @property
- def unit(self):
if len(self) > 0:
return self[0].unit
else:
return None
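The unit check the review comments on can be sketched in isolation; this is a simplified stand-in for the patched class, storing (value, unit) tuples rather than PerfInterval objects:

```python
# Minimal sketch of PerfList's validation: appending an item whose unit
# differs from the first element's raises an error.
class UnitList(list):
    def _validate_item(self, item):
        value, unit = item
        if len(self) > 0 and unit != self[0][1]:
            raise ValueError("items must have the same unit")

    def append(self, item):
        self._validate_item(item)
        super(UnitList, self).append(item)

pl = UnitList()
pl.append((10, "bits"))
rejected = False
try:
    pl.append((5, "bytes"))  # mixed units are refused
except ValueError:
    rejected = True
print(rejected)  # True
```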
-- 2.17.0 _______________________________________________ LNST-developers mailing list -- lnst-developers@lists.fedorahosted.org To unsubscribe send an email to lnst-developers-leave@lists.fedorahosted.org Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedoraproject.org/archives/list/lnst-developers@lists.fedoraho...
From: Ondrej Lichtner olichtne@redhat.com
The RecipeCommon Perf and Ping modules define BaseRecipe-derived classes (PerfTestAndEvaluate and PingTestAndEvaluate) that provide common methods for recipes testing performance or connectivity between two endpoints. I expect this to be a fairly common pattern in recipes, which is why I separated them into "common" classes.
A recipe can inherit from both to combine their functionality; on their own they don't define a fully functional recipe.
The module also defines "*Conf" classes that specify the configuration used by the Perf or Ping class: the endpoints to use and the specific settings of the Perf or Ping tool. This serves as an abstraction that the PerfTestAndEvaluate and PingTestAndEvaluate classes later translate into specific parameters of a test module.
The Perf module also defines an abstract PerfMeasurementTool class that specifies the interface for performing the actual perf test with a specific measurement tool, e.g. Iperf or Netperf. The goal is to be able to choose which measurement tool should be used.
The Ping module at this time doesn't need this because there's just one ping module, but we could extend the code later if required.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/Perf.py | 114 ++++++++++++++++++++++++++++++++++++++ lnst/RecipeCommon/Ping.py | 45 +++++++++++++++ 2 files changed, 159 insertions(+) create mode 100644 lnst/RecipeCommon/Perf.py create mode 100644 lnst/RecipeCommon/Ping.py
diff --git a/lnst/RecipeCommon/Perf.py b/lnst/RecipeCommon/Perf.py new file mode 100644 index 0000000..49fa81f --- /dev/null +++ b/lnst/RecipeCommon/Perf.py @@ -0,0 +1,114 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.RecipeCommon.PerfResult import MultiRunPerf + +class PerfConf(object): + def __init__(self, + perf_tool, + client, client_bind, + server, server_bind, + test_type, + msg_size, duration, iterations, streams): + self._perf_tool = perf_tool + self._client = client + self._client_bind = client_bind + self._server = server + self._server_bind = server_bind + + self._test_type = test_type + + self._msg_size = msg_size + self._duration = duration + self._iterations = iterations + self._streams = streams + + @property + def perf_tool(self): + return self._perf_tool + + @property + def client(self): + return self._client + + @property + def client_bind(self): + return self._client_bind + + @property + def server(self): + return self._server + + @property + def server_bind(self): + return self._server_bind + + @property + def test_type(self): + return self._test_type + + @property + def msg_size(self): + return self._msg_size + + @property + def duration(self): + return self._duration + + @property + def iterations(self): + return self._iterations + + @property + def streams(self): + return self._streams + +class PerfMeasurementTool(object): + @staticmethod + def perf_measure(perf_conf): + raise NotImplementedError + +class PerfTestAndEvaluate(BaseRecipe): + def perf_test(self, perf_conf): + client_measurements = MultiRunPerf() + server_measurements = MultiRunPerf() + for i in range(perf_conf.iterations): + client, server = perf_conf.perf_tool.perf_measure(perf_conf) + + client_measurements.append(client) + server_measurements.append(server) + + return client_measurements, server_measurements + + def perf_evaluate_and_report(self, perf_conf, results, baseline): + self.perf_evaluate(perf_conf, results, baseline) + + self.perf_report(perf_conf, 
results, baseline) + + def perf_evaluate(self, perf_conf, results, baseline): + client, server = results + + if client.average > 0: + self.add_result(True, "Client reported non-zero throughput") + else: + self.add_result(False, "Client reported zero throughput") + + if server.average > 0: + self.add_result(True, "Server reported non-zero throughput") + else: + self.add_result(False, "Server reported zero throughput") + + + def perf_report(self, perf_conf, results, baseline): + client, server = results + + self.add_result(True, + "Client measured throughput: {tput} +-{deviation} {unit} per second" + .format(tput=client.average, + deviation=client.std_deviation, + unit=client.unit), + data = client) + self.add_result(True, + "Server measured throughput: {tput} +-{deviation} {unit} per second" + .format(tput=server.average, + deviation=server.std_deviation, + unit=server.unit), + data = server) diff --git a/lnst/RecipeCommon/Ping.py b/lnst/RecipeCommon/Ping.py new file mode 100644 index 0000000..0f9f800 --- /dev/null +++ b/lnst/RecipeCommon/Ping.py @@ -0,0 +1,45 @@ +from lnst.Controller.Recipe import BaseRecipe +from lnst.Tests import Ping + +class PingConf(object): + def __init__(self, + client, client_bind, + destination, destination_address): + self._client = client + self._client_bind = client_bind + self._destination = destination + self._destination_address = destination_address + + @property + def client(self): + return self._client + + @property + def client_bind(self): + return self._client_bind + + @property + def destination(self): + return self._destination + + @property + def destination_address(self): + return self._destination_address + +class PingTestAndEvaluate(BaseRecipe): + def ping_test(self, ping_config): + client = ping_config.client + destination = ping_config.destination + + ping = Ping(dst = ping_config.destination_address, + interface = ping_config.client_bind) + + ping_job = client.run(ping) + return ping_job.result + + def 
ping_evaluate_and_report(self, ping_config, results): + # do we want to use the "perf" measurements (store a baseline etc...) as well? + if results["rate"] > 50: + self.add_result(True, "Ping successful", results) + else: + self.add_result(False, "Ping unsuccessful", results)
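The way a PerfMeasurementTool implementation plugs into PerfTestAndEvaluate.perf_test() can be sketched as follows; DummyTool and the trimmed-down MiniPerfConf are hypothetical stand-ins for a real tool (e.g. IperfMeasurementTool) and the full PerfConf:

```python
# The conf object carries the tool; each iteration appends one
# (client, server) measurement pair.
class DummyTool(object):
    @staticmethod
    def perf_measure(perf_conf):
        # a real tool would launch jobs on the endpoints here
        return (100.0, 99.0)

class MiniPerfConf(object):
    def __init__(self, perf_tool, iterations):
        self.perf_tool = perf_tool
        self.iterations = iterations

def perf_test(perf_conf):
    client_measurements = []
    server_measurements = []
    for _ in range(perf_conf.iterations):
        client, server = perf_conf.perf_tool.perf_measure(perf_conf)
        client_measurements.append(client)
        server_measurements.append(server)
    return client_measurements, server_measurements

clients, servers = perf_test(MiniPerfConf(DummyTool, iterations=3))
print(len(clients), len(servers))  # 3 3
```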
From: Ondrej Lichtner olichtne@redhat.com
Implements the PerfMeasurementTool interface defined in the RecipeCommon.Perf module using the IperfClient and IperfServer test modules. The implementation takes the PerfConf object, translates it into Iperf-compatible parameters, launches both jobs on the specified endpoints and finally parses the reported results into PerfResult objects.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/RecipeCommon/IperfMeasurementTool.py | 83 +++++++++++++++++++++++ 1 file changed, 83 insertions(+) create mode 100644 lnst/RecipeCommon/IperfMeasurementTool.py
diff --git a/lnst/RecipeCommon/IperfMeasurementTool.py b/lnst/RecipeCommon/IperfMeasurementTool.py new file mode 100644 index 0000000..55fe18e --- /dev/null +++ b/lnst/RecipeCommon/IperfMeasurementTool.py @@ -0,0 +1,83 @@ +import time +import signal +from lnst.Common.IpAddress import ipaddress +from lnst.Controller.Recipe import RecipeError +from lnst.Controller.RecipeResults import ResultLevel +from lnst.RecipeCommon.Perf import PerfConf, PerfMeasurementTool +from lnst.RecipeCommon.PerfResult import PerfInterval, StreamPerf +from lnst.RecipeCommon.PerfResult import MultiStreamPerf +from lnst.Tests.Iperf import IperfClient, IperfServer + +class IperfMeasurementTool(PerfMeasurementTool): + @staticmethod + def perf_measure(perf_conf): + _iperf_duration_overhead = 5 + + server_params = dict(bind = ipaddress(perf_conf.server_bind), + oneoff = True) + + client_params = dict(server = server_params["bind"], + duration = perf_conf.duration, + parallel = perf_conf.streams) + + if perf_conf.test_type == "tcp_stream": + #tcp stream is the default for iperf3 + pass + elif perf_conf.test_type == "udp_stream": + client_params["udp"] = True + elif perf_conf.test_type == "sctp_stream": + client_params["sctp"] = True + else: + raise RecipeError("Unsupported test type '{}'" + .format(perf_conf.test_type)) + + server = IperfServer(**server_params) + client = IperfClient(**client_params) + + server_host = perf_conf.server + client_host = perf_conf.client + result = None + try: + server_job = server_host.run(server, bg=True, + job_level=ResultLevel.NORMAL) + + #wait for server to start, TODO can this be improved? 
+ time.sleep(2) + + duration = client.params.duration + _iperf_duration_overhead + client_job = client_host.run(client, timeout=duration, + job_level=ResultLevel.NORMAL) + + server_job.wait(timeout=5) + finally: + if client_job and not client_job.finished: + client_job.kill() + + if server_job and not server_job.finished: + server_job.kill() + + #TODO return something if not passed + if client_job.passed: + client_result = MultiStreamPerf() + for i in client_job.result["data"]["end"]["streams"]: + client_result.append(StreamPerf()) + + for interval in client_job.result["data"]["intervals"]: + for i, stream in enumerate(interval["streams"]): + client_result[i].append(PerfInterval(stream["bytes"] * 8, + stream["seconds"], + "bits")) + + #TODO return something if not passed + if server_job.passed: + server_result = MultiStreamPerf() + for i in server_job.result["data"]["end"]["streams"]: + server_result.append(StreamPerf()) + + for interval in server_job.result["data"]["intervals"]: + for i, stream in enumerate(interval["streams"]): + server_result[i].append(PerfInterval(stream["bytes"] * 8, + stream["seconds"], + "bits")) + + return client_result, server_result
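The parsing step in perf_measure can be sketched with hand-made sample data: iperf3's JSON output lists per-interval, per-stream byte counts, which are converted into one series of (bits, seconds) samples per parallel stream. The result dict below is illustrative, not real iperf3 output:

```python
# Hand-made stand-in for job.result["data"]; two parallel streams,
# two one-second intervals each.
result = {
    "end": {"intervals_note": None, "streams": [{}, {}]},
    "intervals": [
        {"streams": [{"bytes": 125, "seconds": 1.0},
                     {"bytes": 250, "seconds": 1.0}]},
        {"streams": [{"bytes": 125, "seconds": 1.0},
                     {"bytes": 250, "seconds": 1.0}]},
    ],
}

# one sample list per stream, mirroring MultiStreamPerf/StreamPerf
streams = [[] for _ in result["end"]["streams"]]
for interval in result["intervals"]:
    for i, stream in enumerate(interval["streams"]):
        # bytes -> bits, paired with the interval length
        streams[i].append((stream["bytes"] * 8, stream["seconds"]))

print(streams[0])  # [(1000, 1.0), (1000, 1.0)]
```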
From: Ondrej Lichtner olichtne@redhat.com
Adding the BaseEnrtRecipe module that defines a class with the same name. The class inherits from both the PingTestAndEvaluate and PerfTestAndEvaluate classes to implement a shared scheme for our ENRT recipes. It is expected that this will go through more refactoring as we port more recipes but the idea is that we will avoid code duplication at all costs.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com --- lnst/Recipes/ENRT/BaseEnrtRecipe.py | 212 ++++++++++++++++++++++++++++ 1 file changed, 212 insertions(+) create mode 100644 lnst/Recipes/ENRT/BaseEnrtRecipe.py
diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py new file mode 100644 index 0000000..64c8677 --- /dev/null +++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py @@ -0,0 +1,212 @@ + +from lnst.Common.LnstError import LnstError +from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam +from lnst.Common.IpAddress import AF_INET, AF_INET6 + +from lnst.Controller.Recipe import BaseRecipe + +from lnst.RecipeCommon.Ping import PingTestAndEvaluate, PingConf +from lnst.RecipeCommon.Perf import PerfTestAndEvaluate, PerfConf +from lnst.RecipeCommon.IperfMeasurementTool import IperfMeasurementTool + +class EnrtConfiguration(object): + def __init__(self): + self._endpoint1 = None + self._endpoint2 = None + self._endpoint1_coalescing = None + + @property + def endpoint1(self): + return self._endpoint1 + + @endpoint1.setter + def endpoint1(self, value): + self._endpoint1 = value + + @property + def endpoint2(self): + return self._endpoint2 + + @endpoint2.setter + def endpoint2(self, value): + self._endpoint2 = value + + @property + def endpoint1_coalescing(self): + self._endpoint1_coalescing + + @endpoint1_coalescing.setter + def endpoint1_coalescing(self, value): + self._endpoint1_coalescing = value + +class EnrtSubConfiguration(object): + def __init__(self): + self._ip_version = None + self._perf_test = None + self._offload_settings = None + + @property + def ip_version(self): + return self._ip_version + + @ip_version.setter + def ip_version(self, value): + self._ip_version = value + + @property + def offload_settings(self): + return self._offload_settings + + @offload_settings.setter + def offload_settings(self, value): + self._offload_settings = value + +class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate): + ip_versions = Param(default=("ipv4", "ipv6")) + perf_tests = Param(default=("tcp_stream", "udp_stream", "sctp_stream")) + + offload_combinations = Param(default=( + dict(gro="on", gso="on", tso="on", tx="on", rx="on"))) 
+
+    adaptive_coalescing = BoolParam(default=True)
+
+    mtu = IntParam(mandatory=False)
+
+    dev_intr_cpu = IntParam(default=0)
+
+    perf_duration = IntParam(default=60)
+    perf_iterations = IntParam(default=5)
+    perf_streams = IntParam(default=1)
+    perf_msg_size = IntParam(default=123)
+
+    perf_usr_comment = StrParam(default="")
+
+    perf_max_deviation = IntParam(default=10) #TODO required?
+
+    perf_tool = Param(default=IperfMeasurementTool)
+
+    def test(self):
+        main_config = self.test_wide_configuration()
+
+        for sub_config in self.generate_sub_configurations(main_config):
+            self.apply_sub_configuration(main_config, sub_config)
+
+            for ping_config in self.generate_ping_configurations(main_config,
+                                                                 sub_config):
+                result = self.ping_test(ping_config)
+                self.ping_evaluate_and_report(ping_config, result)
+
+            for perf_config in self.generate_perf_configurations(main_config,
+                                                                 sub_config):
+                result = self.perf_test(perf_config)
+                self.perf_evaluate_and_report(perf_config, result, baseline=None)
+
+            self.remove_sub_configuration(main_config, sub_config)
+
+        self.test_wide_deconfiguration(main_config)
+
+    def generate_sub_configurations(self, main_config):
+        for offload_settings in self.params.offload_combinations:
+            sub_config = EnrtSubConfiguration()
+            sub_config.offload_settings = offload_settings
+
+            yield sub_config
+
+    def apply_sub_configuration(self, main_config, sub_config):
+        client_nic = main_config.endpoint1
+        server_nic = main_config.endpoint2
+        client_netns = client_nic.netns
+        server_netns = server_nic.netns
+
+        ethtool_offload_string = ""
+        for name, value in sub_config.offload_settings.items():
+            ethtool_offload_string += " %s %s" % (name, value)
+
+        client_netns.run("ethtool -K {} {}".format(client_nic.name,
+                                                   ethtool_offload_string))
+        server_netns.run("ethtool -K {} {}".format(server_nic.name,
+                                                   ethtool_offload_string))
+
+    def remove_sub_configuration(self, main_config, sub_config):
+        client_nic = main_config.endpoint1
+        server_nic = main_config.endpoint2
+        client_netns = client_nic.netns
+        server_netns = server_nic.netns
+
+        ethtool_offload_string = ""
+        for name, value in sub_config.offload_settings.items():
+            ethtool_offload_string += " %s %s" % (name, "on")
+
+        #set all the offloads back to 'on' state
+        client_netns.run("ethtool -K {} {}".format(client_nic.name,
+                                                   ethtool_offload_string))
+        server_netns.run("ethtool -K {} {}".format(server_nic.name,
+                                                   ethtool_offload_string))
+
+    def generate_ping_configurations(self, main_config, sub_config):
+        client_nic = main_config.endpoint1
+        server_nic = main_config.endpoint2
+        client_netns = client_nic.netns
+        server_netns = server_nic.netns
+
+        for ipv in self.params.ip_versions:
+            if ipv == "ipv4":
+                family = AF_INET
+            elif ipv == "ipv6":
+                family = AF_INET6
+
+            client_bind = client_nic.ips_filter(family=family)[0]
+            server_bind = server_nic.ips_filter(family=family)[0]
+
+            yield PingConf(client = client_netns,
+                           client_bind = client_bind,
+                           destination = server_netns,
+                           destination_address = server_bind)
+
+    def generate_perf_configurations(self, main_config, sub_config):
+        client_nic = main_config.endpoint1
+        server_nic = main_config.endpoint2
+        client_netns = client_nic.netns
+        server_netns = server_nic.netns
+
+        for ipv in self.params.ip_versions:
+            if ipv == "ipv4":
+                family = AF_INET
+            elif ipv == "ipv6":
+                family = AF_INET6
+
+            client_bind = client_nic.ips_filter(family=family)[0]
+            server_bind = server_nic.ips_filter(family=family)[0]
+
+            for perf_test in self.params.perf_tests:
+                yield PerfConf(perf_tool = self.params.perf_tool,
+                               client = client_netns,
+                               client_bind = client_bind,
+                               server = server_netns,
+                               server_bind = server_bind,
+                               test_type = perf_test,
+                               msg_size = self.params.perf_msg_size,
+                               duration = self.params.perf_duration,
+                               iterations = self.params.perf_iterations,
+                               streams = self.params.perf_streams)
+
+    def _pin_dev_interrupts(self, dev, cpu):
+        netns = dev.netns
+
+        res = netns.run("grep {} /proc/interrupts | cut -f1 -d: | sed 's/ //'"
+                        .format(dev.name))
+        intrs = res.stdout
+        split = res.stdout.split("\n")
+        if len(split) == 1 and split[0] == '':
+            res = netns.run("dev_irqs=/sys/class/net/{}/device/msi_irqs; "
+                            "[ -d $dev_irqs ] && ls -1 $dev_irqs"
+                            .format(dev.name))
+            intrs = res.stdout
+
+        for intr in intrs.split("\n"):
+            try:
+                int(intr)
+                netns.run("echo -n {} > /proc/irq/{}/smp_affinity_list"
+                          .format(cpu, intr.strip()))
+            except:
+                pass
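For reviewers skimming the patch, the test() driver above is essentially a sweep over lazily generated sub-configurations, with apply/measure/remove bracketing each one. A standalone sketch of that control-flow pattern follows; every name in it is illustrative, none of it is LNST API:

```python
# Illustrative reduction of the test() control flow above: sub-configurations
# are yielded by a generator, applied, exercised, then removed, and the
# test-wide (de)configuration brackets the whole sweep.
class MiniRecipe:
    offload_combinations = (
        dict(gro="on", gso="on"),
        dict(gro="off", gso="on"),
    )

    def __init__(self):
        self.log = []

    def generate_sub_configurations(self):
        # lazily produce one sub-configuration per offload combination
        for settings in self.offload_combinations:
            yield settings

    def test(self):
        self.log.append("test_wide_configuration")
        for sub_config in self.generate_sub_configurations():
            self.log.append(("apply", sub_config))
            self.log.append(("measure", sub_config))
            self.log.append(("remove", sub_config))
        self.log.append("test_wide_deconfiguration")

recipe = MiniRecipe()
recipe.test()
```

The generator indirection is what lets subclasses change *what* is swept (offloads, MTUs, queue counts) without touching the driver loop.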
Mon, May 21, 2018 at 10:42:57AM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Adding the BaseEnrtRecipe module that defines a class of the same name. The class inherits from both the PingTestAndEvaluate and PerfTestAndEvaluate classes to implement a shared scheme for our ENRT recipes. It is expected that this will go through more refactoring as we port more recipes, but the idea is that we will avoid code duplication at all costs.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 212 ++++++++++++++++++++++++++++
 1 file changed, 212 insertions(+)
 create mode 100644 lnst/Recipes/ENRT/BaseEnrtRecipe.py
diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
new file mode 100644
index 0000000..64c8677
--- /dev/null
+++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
@@ -0,0 +1,212 @@
+class BaseEnrtRecipe(PingTestAndEvaluate, PerfTestAndEvaluate):
+    ip_versions = Param(default=("ipv4", "ipv6"))
+    perf_tests = Param(default=("tcp_stream", "udp_stream", "sctp_stream"))
+
+    offload_combinations = Param(default=(
+        dict(gro="on", gso="on", tso="on", tx="on", rx="on")))
+
+    adaptive_coalescing = BoolParam(default=True)
+
+    mtu = IntParam(mandatory=False)
+
+    dev_intr_cpu = IntParam(default=0)
+
+    perf_duration = IntParam(default=60)
+    perf_iterations = IntParam(default=5)
+    perf_streams = IntParam(default=1)
+    perf_msg_size = IntParam(default=123)
+
+    perf_usr_comment = StrParam(default="")
+
+    perf_max_deviation = IntParam(default=10) #TODO required?
+
+    perf_tool = Param(default=IperfMeasurementTool)
+
+    def test(self):
+        main_config = self.test_wide_configuration()
The self.test_wide_configuration() and self.test_wide_deconfiguration() methods should be abstract here. I see them defined only in the next patch, where you inherit from this base class.
+        for sub_config in self.generate_sub_configurations(main_config):
+            self.apply_sub_configuration(main_config, sub_config)
+
+            for ping_config in self.generate_ping_configurations(main_config,
+                                                                 sub_config):
+                result = self.ping_test(ping_config)
+                self.ping_evaluate_and_report(ping_config, result)
+
+            for perf_config in self.generate_perf_configurations(main_config,
+                                                                 sub_config):
+                result = self.perf_test(perf_config)
+                self.perf_evaluate_and_report(perf_config, result, baseline=None)
+
+            self.remove_sub_configuration(main_config, sub_config)
+
+        self.test_wide_deconfiguration(main_config)
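The review comment about abstract methods could be implemented along these lines; this is only a hedged sketch using the stdlib abc module, with BaseEnrtRecipe's real base classes and the next patch's actual implementations omitted, and the class names here are illustrative stand-ins:

```python
# Sketch of declaring the test-wide hooks abstract so that subclasses
# are forced to override them; names are stand-ins, not the LNST classes.
from abc import ABC, abstractmethod

class BaseEnrtRecipeSketch(ABC):
    @abstractmethod
    def test_wide_configuration(self):
        """Set up the test-wide topology; must be overridden."""

    @abstractmethod
    def test_wide_deconfiguration(self, config):
        """Tear down whatever test_wide_configuration set up."""

class ConcreteRecipe(BaseEnrtRecipeSketch):
    def test_wide_configuration(self):
        return {"configured": True}

    def test_wide_deconfiguration(self, config):
        pass
```

With this, instantiating the base class directly raises TypeError, so a recipe that forgets to implement either hook fails at construction instead of mid-run.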
From: Ondrej Lichtner olichtne@redhat.com
Adding the first ported recipe, SimplePerfRecipe, based on the old simple_netperf.xml recipe. The test defines a simple one-to-one host topology and implements the test_wide_configuration method corresponding to this topology.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Recipes/ENRT/SimplePerfRecipe.py | 75 +++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)
 create mode 100644 lnst/Recipes/ENRT/SimplePerfRecipe.py
diff --git a/lnst/Recipes/ENRT/SimplePerfRecipe.py b/lnst/Recipes/ENRT/SimplePerfRecipe.py
new file mode 100644
index 0000000..7c0a401
--- /dev/null
+++ b/lnst/Recipes/ENRT/SimplePerfRecipe.py
@@ -0,0 +1,75 @@
+
+from lnst.Common.LnstError import LnstError
+from lnst.Common.Parameters import IntParam, Param, StrParam, BoolParam
+from lnst.Common.IpAddress import ipaddress, AF_INET, AF_INET6
+
+from lnst.Controller import HostReq, DeviceReq
+
+from lnst.Recipes.ENRT.BaseEnrtRecipe import BaseEnrtRecipe, EnrtConfiguration
+
+class SimplePerfRecipe(BaseEnrtRecipe):
+    m1 = HostReq()
+    m1.eth0 = DeviceReq(label="net1")
+
+    m2 = HostReq()
+    m2.eth0 = DeviceReq(label="net1")
+
+    offload_combinations = Param(default=(
+        dict(gro="on", gso="on", tso="on", tx="on", rx="on"),
+        dict(gro="off", gso="on", tso="on", tx="on", rx="on"),
+        dict(gro="on", gso="off", tso="off", tx="on", rx="on"),
+        dict(gro="on", gso="on", tso="off", tx="off", rx="on"),
+        dict(gro="on", gso="on", tso="on", tx="on", rx="off")))
+
+    def test_wide_configuration(self):
+        m1, m2 = self.matched.m1, self.matched.m2
+
+        configuration = EnrtConfiguration()
+        configuration.endpoint1 = m1.eth0
+        configuration.endpoint2 = m2.eth0
+
+        if "mtu" in self.params:
+            m1.eth0.mtu = self.params.mtu
+            m2.eth0.mtu = self.params.mtu
+
+        #TODO redo
+        # configuration.saved_coalescing_state = dict(
+        #     m1_if = dict(tx = m1.eth0.adaptive_tx_coalescing,
+        #                  rx = m1.eth0.adaptive_rx_coalescing),
+        #     m2_if = dict(tx = m2.eth0.adaptive_tx_coalescing,
+        #                  rx = m2.eth0.adaptive_rx_coalescing))
+
+        # m1.eth0.adaptive_tx_coalescing = self.params.adaptive_coalescing
+        # m1.eth0.adaptive_rx_coalescing = self.params.adaptive_coalescing
+        # m2.eth0.adaptive_tx_coalescing = self.params.adaptive_coalescing
+        # m2.eth0.adaptive_rx_coalescing = self.params.adaptive_coalescing
+
+        m1.eth0.ip_add(ipaddress("192.168.101.1/24"))
+        m1.eth0.ip_add(ipaddress("fc00::1/64"))
+        m1.eth0.up()
+
+        m2.eth0.ip_add(ipaddress("192.168.101.2/24"))
+        m2.eth0.ip_add(ipaddress("fc00::2/64"))
+        m2.eth0.up()
+
+        #TODO better service handling through HostAPI
+        m1.run("service irqbalance stop")
+        m2.run("service irqbalance stop")
+        for m in self.matched:
+            for dev in m.devices:
+                self._pin_dev_interrupts(dev, self.params.dev_intr_cpu)
+
+        return configuration
+
+    def test_wide_deconfiguration(self, config):
+        m1, m2 = self.matched.m1, self.matched.m2
+
+        #TODO better service handling through HostAPI
+        m1.run("service irqbalance start")
+        m2.run("service irqbalance start")
+
+        # redo
+        # m1.eth0.adaptive_tx_coalescing = self.saved_coalescing_state["m1_if"]["tx"]
+        # m1.eth0.adaptive_rx_coalescing = self.saved_coalescing_state["m1_if"]["rx"]
+        # m2.eth0.adaptive_tx_coalescing = self.saved_coalescing_state["m2_if"]["tx"]
+        # m2.eth0.adaptive_rx_coalescing = self.saved_coalescing_state["m2_if"]["rx"]
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 TODO | 87 ++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 47 insertions(+), 40 deletions(-)
diff --git a/TODO b/TODO
index 13890cc..0c6f4cf 100644
--- a/TODO
+++ b/TODO
@@ -7,8 +7,6 @@ them to be implemented in Python Recipes, with notes where relevant.
 * Device::destroy rename to delete or rename DeviceDeleted exception to
   Destroyed?
-* fix git version check - if git not installed raise Exception/report error
-
 * wait/sleep method that cycles handles ctl <-> slave communication in the
   background
   * should include wait for condition functionality, e.g. wait for Device
@@ -18,43 +16,11 @@ them to be implemented in Python Recipes, with notes where relevant.
   wait for 5 seconds, or if the device has been LOWER_UP for e.g. 3 seconds,
   we will wait for additional 2 seconds, however if it has been up for e.g. 8
   seconds the wait will return immediatelly.
+  * currently supporting waiting on condition (a method) on the controller,
+    possible next work is to expand this to the slave as well (problems with
+    object references)
-* Results implementation
-  * Result objects should be stored in a transparent way and there should be
-    a way to access and export them in different ways, default being the
-    Result summary at the end of stdout logs.
-  * Summary format proposal (copied from the api description document):
-    Since I've changed how Job execution is handled, I've also wrote down a
-    proposal to change how we log Recipe results - the RESULTS SUMMARY logs at
-    the end of a recipe run. I haven't started working on it yet, I've just
-    wrote an example on paper which I'm copying here. Any comments are
-    appreciated.
-
-    RESULTS SUMMARY:
-    Host m1 Job 1 XYZ PASS/FAIL
-              Formatted results:
-                  ...
-    Host m2 Job 1 XYZ started
-    Host m1 Job 3 XYZ PASS/FAIL
-              Formatted results:
-                  ...
-    Host m2 Job 1 XYZ PASS/FAIL
-              Formatted results:
-                  ...
-    Custom summary record.... (optional PASS/FAIL)
-        ... optional additional data
-        ... i still need to figure out how this will look like
-
-    The main difference to the old results summary is that Jobs have numerical
-    ids that are unique per host, and you ALWAYS see the id (previously only
-    background commands had ids). Since all Jobs "run in background" this will
-    make matching "started" "finished" logs easier. There also won't be any
-    more "kill cmd" "intr cmd" logs here since these commands don't exist
-    anymore.
-
-    Since "all Jobs are in background" it means that in reality all of them
-    generate a "started" and "finished" log, however, if these are in a direct
-    sequence after each other they get shortened to just the PASS/FAIL log.
-    This will also be true for background commands if there were no results to
-    report between their start and finish.
+* netlink descriptive error logging - ask jbenc
 * Current machine configuration dump describing the "full"(relevant?)
   configuration of a host. We need this for PerfRepo integration to generate
@@ -67,8 +33,6 @@ them to be implemented in Python Recipes, with notes where relevant.
   Device classes, but we have the capability to sync arbitrary python code so
   this shouldn't be too big a problem
-* API for "first" ip in list limited by selector
-
 * RPM:
   * add python-ethtool as dependency
   * pyroute2 dependency needs an update to the version, some devices (vti
@@ -131,3 +95,46 @@ them to be implemented in Python Recipes, with notes where relevant.
 * netlink sockets leaking
 * go through master to check for bugs that can be backported
+
+* fix git version check - if git not installed raise Exception/report error
+    version checks are now implemented differently
+
+* Results implementation
+  * Result objects should be stored in a transparent way and there should be
+    a way to access and export them in different ways, default being the
+    Result summary at the end of stdout logs.
+  * Summary format proposal (copied from the api description document):
+    Since I've changed how Job execution is handled, I've also wrote down a
+    proposal to change how we log Recipe results - the RESULTS SUMMARY logs at
+    the end of a recipe run. I haven't started working on it yet, I've just
+    wrote an example on paper which I'm copying here. Any comments are
+    appreciated.
+
+    RESULTS SUMMARY:
+    Host m1 Job 1 XYZ PASS/FAIL
+              Formatted results:
+                  ...
+    Host m2 Job 1 XYZ started
+    Host m1 Job 3 XYZ PASS/FAIL
+              Formatted results:
+                  ...
+    Host m2 Job 1 XYZ PASS/FAIL
+              Formatted results:
+                  ...
+    Custom summary record.... (optional PASS/FAIL)
+        ... optional additional data
+        ... i still need to figure out how this will look like
+
+    The main difference to the old results summary is that Jobs have numerical
+    ids that are unique per host, and you ALWAYS see the id (previously only
+    background commands had ids). Since all Jobs "run in background" this will
+    make matching "started" "finished" logs easier. There also won't be any
+    more "kill cmd" "intr cmd" logs here since these commands don't exist
+    anymore.
+
+    Since "all Jobs are in background" it means that in reality all of them
+    generate a "started" and "finished" log, however, if these are in a direct
+    sequence after each other they get shortened to just the PASS/FAIL log.
+    This will also be true for background commands if there were no results to
+    report between their start and finish.
+
+* API for "first" ip in list limited by selector
+    added ips_filter method to Device class
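The ips_filter note in the TODO above maps onto something like the following. This is a hypothetical sketch using stdlib ipaddress objects as stand-ins for LNST's IpAddress classes; the real Device.ips_filter lives in the patch that adds it and may take different arguments:

```python
# Hypothetical sketch of filtering a device's addresses by family,
# with stdlib ipaddress objects standing in for LNST IpAddress.
import ipaddress
from socket import AF_INET, AF_INET6

def ips_filter(ips, family=None):
    # map socket address families to the ipaddress module's version numbers
    version = {AF_INET: 4, AF_INET6: 6}
    if family is None:
        return list(ips)
    return [ip for ip in ips if ip.version == version[family]]

addrs = [ipaddress.ip_address("192.168.101.1"),
         ipaddress.ip_address("fc00::1")]
v4 = ips_filter(addrs, family=AF_INET)
v6 = ips_filter(addrs, family=AF_INET6)
```

This is the shape the recipes above rely on when they write `client_nic.ips_filter(family=family)[0]`: the filter narrows the list and the `[0]` selection stays visible at the call site.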
From: Ondrej Lichtner olichtne@redhat.com
The factory method shouldn't arbitrarily select the first ipaddress of the provided Device. I think the factory should just accept something that can be parsed to create an IpAddress object; in all other cases we should explicitly use property getters to access already existing objects.
My reasoning is that I don't think hiding device.ips[0] fits what a user would expect from a factory method. The specific "magic" [0] selection should be explicitly visible to the user to avoid confusion, so I think the user should choose to write it themselves.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Common/IpAddress.py | 8 --------
 1 file changed, 8 deletions(-)
diff --git a/lnst/Common/IpAddress.py b/lnst/Common/IpAddress.py
index 73802c5..e54553d 100644
--- a/lnst/Common/IpAddress.py
+++ b/lnst/Common/IpAddress.py
@@ -99,9 +99,6 @@ class Ip6Address(BaseIpAddress):
 def ipaddress(addr):
     """Factory method to create a BaseIpAddress object"""
-    #runtime import this because the Device class arrives on the Slave
-    #during recipe execution, not during Slave init
-    from lnst.Devices.Device import Device
     if isinstance(addr, BaseIpAddress):
         return addr
     elif isinstance(addr, str):
@@ -109,11 +106,6 @@ def ipaddress(addr):
             return Ip4Address(addr)
         except:
             return Ip6Address(addr)
-    elif isinstance(addr, Device):
-        try:
-            return addr.ips[0]
-        except IndexError:
-            raise LnstError("No usable Ip Addresses on the provided Device.")
     else:
         raise LnstError("Value must be a BaseIpAddress or string object."
                         " Not {}".format(type(addr)))
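The post-patch factory contract can be illustrated with stdlib stand-ins. This is only a sketch, not the LNST classes: strings get parsed, existing address objects pass through, everything else is rejected, and the "first ip of a device" choice stays explicit at the call site, as the commit message argues:

```python
# Sketch of the factory contract after this patch, using the stdlib
# ipaddress module as a stand-in for LNST's Ip4Address/Ip6Address.
import ipaddress as std_ip

class LnstError(Exception):
    pass

def ip_factory(addr):
    if isinstance(addr, (std_ip.IPv4Address, std_ip.IPv6Address)):
        return addr            # pass existing address objects through
    elif isinstance(addr, str):
        try:
            return std_ip.IPv4Address(addr)
        except ValueError:
            return std_ip.IPv6Address(addr)
    else:
        raise LnstError("Value must be an address or string object."
                        " Not {}".format(type(addr)))

# the "magic" first-address selection stays visible at the call site
# instead of hiding inside the factory:
device_ips = [std_ip.ip_address("192.168.101.1"),
              std_ip.ip_address("fc00::1")]
first = ip_factory(str(device_ips[0]))
```

An empty `device_ips` list then fails with an ordinary IndexError at the caller's `[0]`, rather than a special-cased error inside the factory.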
Mon, May 21, 2018 at 10:42:35AM CEST, olichtne@redhat.com wrote:
From: Ondrej Lichtner olichtne@redhat.com
Hi all,
apologies for the long patch set... it contains my work from the past couple of months on porting our first ENRT recipe. I went through numerous iterations of the ported recipe and refactored it several times which is why it took so long. At the same time I expect more refactoring as we port more recipes and this is very likely not the final version of the recipe. Overall though I'm very happy with the abstraction and the overall organization of the recipe which is why I've decided to send the patchset for some upstream review.
While porting the recipe I also modified the base LNST code which made the patchset a bit longer, these contain bug fixes, refactoring some code and some changes to the tester facing API.
I've also added ResultLevels that can be used to filter results and print the result summary based on importance of a job.
Besides my replies to individual patches, ack to series. Very nice work!
Acked-by: Jan Tluka jtluka@redhat.com
lnst-developers@lists.fedorahosted.org