From: Ondrej Lichtner <olichtne@redhat.com>
According to PEP 394, in preparation for an eventual change in the default version of Python, Python 2-only scripts should either be updated to be source compatible with Python 3 or use python2 in the shebang line.
This is not a complete solution, but it fixes running LNST on Arch Linux until more work is done on issue #105.
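The change is mechanical across all six files; as a quick illustration (a hypothetical checker, not part of this series), the bare `env python` shebang form used by these scripts can be detected like this:

```python
import re

# Matches the bare "/usr/bin/env python" shebang form used by these scripts;
# "python2"/"python3" no longer match because of the end-of-line check.
BARE_ENV_PYTHON = re.compile(r'^#!\s*/usr/bin/env\s+python\s*$')

def needs_pinning(shebang):
    """True when the shebang names 'python' without a 2/3 suffix."""
    return bool(BARE_ENV_PYTHON.match(shebang.rstrip("\n")))

print(needs_pinning("#! /usr/bin/env python\n"))   # True  (bare, needs pinning)
print(needs_pinning("#!/usr/bin/env python2\n"))   # False (already pinned)
```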
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst-ctl                          | 2 +-
 lnst-pool-wizard                  | 2 +-
 lnst-slave                        | 2 +-
 misc/recipe_conv.py               | 2 +-
 recipes/smoke/generate-recipes.py | 2 +-
 setup.py                          | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/lnst-ctl b/lnst-ctl
index 9a321b6..947f3ad 100755
--- a/lnst-ctl
+++ b/lnst-ctl
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python2
 """
 Net test controller
diff --git a/lnst-pool-wizard b/lnst-pool-wizard
index c788688..2b49744 100755
--- a/lnst-pool-wizard
+++ b/lnst-pool-wizard
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python2
 """
 Machine pool wizard
diff --git a/lnst-slave b/lnst-slave
index d4345c3..6238769 100755
--- a/lnst-slave
+++ b/lnst-slave
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python2
 """
 Net test slave
diff --git a/misc/recipe_conv.py b/misc/recipe_conv.py
index a5c19c4..a18a0d2 100755
--- a/misc/recipe_conv.py
+++ b/misc/recipe_conv.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2
 """
 Recipe converter
diff --git a/recipes/smoke/generate-recipes.py b/recipes/smoke/generate-recipes.py
index bed434b..24806ad 100755
--- a/recipes/smoke/generate-recipes.py
+++ b/recipes/smoke/generate-recipes.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python2
 # LNST Smoke Tests
 # Author: Ondrej Lichtner <olichtne@redhat.com>
diff --git a/setup.py b/setup.py
index d281fe9..4033d4a 100755
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2
 """
 Install script for lnst
From: Ondrej Lichtner <olichtne@redhat.com>
When the ethtool output is not matched by the regex, the match object is None and the match.group(1) call raises an AttributeError. This patch fixes that by returning None instead (based on a patch from Artem Savkov).
The issue can appear when the slave machine doesn't have ethtool installed. That normally doesn't happen (package dependency resolution pulls it in), but it can occur when lnst is installed or run directly from the git tree.
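The fixed logic can be sketched in isolation (a minimal stand-in for the Device method, with the ethtool output passed in as a string):

```python
import re

def driver_from_ethtool(out):
    """Mirror of the fixed logic: return the driver name, or None when the
    ethtool output contains no 'driver:' line (e.g. ethtool is missing)."""
    match = re.search("^driver: (.*)$", out, re.MULTILINE)
    if match is not None:
        return match.group(1)
    return None

print(driver_from_ethtool("driver: e1000e\nversion: 3.2.6-k\n"))  # e1000e
print(driver_from_ethtool(""))  # None instead of an AttributeError
```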
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Slave/InterfaceManager.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py
index 01b23e5..09f3be9 100644
--- a/lnst/Slave/InterfaceManager.py
+++ b/lnst/Slave/InterfaceManager.py
@@ -498,7 +498,10 @@ class Device(object):
             return 'loopback'
         out, _ = exec_cmd("ethtool -i %s" % self._name, False, False, False)
         match = re.search("^driver: (.*)$", out, re.MULTILINE)
-        return match.group(1)
+        if match is not None:
+            return match.group(1)
+        else:
+            return None

     def get_if_data(self):
         if_data = {"devname": self._name,
From: Ondrej Lichtner <olichtne@redhat.com>
Change the interface IP addresses to use a different network. The reason is that 192.168.122.0/24 is the default network used by libvirt for NATed interfaces.
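The conflict is easy to check with the stdlib ipaddress module (an illustrative sketch, not part of the recipes):

```python
import ipaddress

# Why 192.168.122.0/24 had to go: it collides with libvirt's default NAT
# network. strict=False lets us pass the host addresses from the recipes.
libvirt_default = ipaddress.ip_network("192.168.122.0/24")
old = ipaddress.ip_network("192.168.122.10/24", strict=False)
new = ipaddress.ip_network("192.168.100.10/24", strict=False)

print(old.overlaps(libvirt_default))  # True  -> the old addresses clash
print(new.overlaps(libvirt_default))  # False -> the new network is safe
```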
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 regression-tests/tests/12/sm01.xml | 2 +-
 regression-tests/tests/12/sm02.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/regression-tests/tests/12/sm01.xml b/regression-tests/tests/12/sm01.xml
index 3f3a042..96a4f6f 100644
--- a/regression-tests/tests/12/sm01.xml
+++ b/regression-tests/tests/12/sm01.xml
@@ -1,7 +1,7 @@
 <interfaces>
     <eth id="1" label="testnet">
         <addresses>
-            <address value="192.168.122.10/24" />
+            <address value="192.168.100.10/24" />
         </addresses>
     </eth>
 </interfaces>
diff --git a/regression-tests/tests/12/sm02.xml b/regression-tests/tests/12/sm02.xml
index ef59e63..f1e1f02 100644
--- a/regression-tests/tests/12/sm02.xml
+++ b/regression-tests/tests/12/sm02.xml
@@ -1,7 +1,7 @@
 <interfaces>
     <eth id="1" label="testnet">
         <addresses>
-            <address value="192.168.122.20/24" />
+            <address value="192.168.100.20/24" />
         </addresses>
     </eth>
 </interfaces>
From: Ondrej Lichtner <olichtne@redhat.com>
This patch further refactors the Netperf test module. Instead of parsing options multiple times in different functions, all of the options are now parsed in the __init__ method of the class. Except for the "netperf_server" option there is no need to check whether the slave's role is client or server, so all of the options are parsed in one place.
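A minimal sketch of the parse-once pattern (a simplified stand-in class, not the real TestGeneric-based module): every option is read exactly once in __init__, and later methods only touch `self._*` attributes.

```python
class Netperf(object):
    def __init__(self, options):
        self._role = options["role"]                    # mandatory option
        # "netperf_server" is only meaningful for the client role
        if self._role == "client":
            self._netperf_server = options["netperf_server"]
        self._testname = options.get("testname", "TCP_STREAM")
        self._runs = int(options.get("runs", 1))

    def _compose_cmd(self):
        # no option parsing here any more -- attributes only
        if self._role == "client":
            return "netperf -H %s -t %s" % (self._netperf_server,
                                            self._testname)
        return "netserver -D"

cli = Netperf({"role": "client", "netperf_server": "192.168.100.10"})
print(cli._compose_cmd())  # netperf -H 192.168.100.10 -t TCP_STREAM
```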
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 test_modules/Netperf.py | 155 +++++++++++++++++++++++++++---------------------
 1 file changed, 87 insertions(+), 68 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py index b930596..a538520 100644 --- a/test_modules/Netperf.py +++ b/test_modules/Netperf.py @@ -18,78 +18,102 @@ class Netperf(TestGeneric): supported_tests = ["TCP_STREAM", "TCP_RR", "UDP_STREAM", "UDP_RR", "SCTP_STREAM", "SCTP_STREAM_MANY", "SCTP_RR"]
- def _compose_cmd(self, role): + def __init__(self, command): + super(Netperf, self).__init__(command) + + self._role = self.get_mopt("role") + + if self._role == "client": + self._netperf_server = self.get_mopt("netperf_server", + opt_type="addr") + + self._netperf_opts = self.get_opt("netperf_opts") + self._duration = self.get_opt("duration") + self._port = self.get_opt("port") + self._testname = self.get_opt("testname", default="TCP_STREAM") + self._bind = self.get_opt("bind", opt_type="addr") + self._family = self.get_opt("family") + + self._runs = self.get_opt("runs", default=1) + + self._threshold = self._parse_threshold(self.get_opt("threshold")) + self._threshold_deviation = self._parse_threshold( + self.get_opt("threshold_deviation")) + if self._threshold_deviation is None: + self._threshold_deviation = {"rate" : 0.0, + "unit" : "bps"} + + if self._threshold is not None: + rate = self._threshold["rate"] + deviation = self._threshold_deviation["rate"] + self._threshold_interval = (rate - deviation, + rate + deviation) + else: + self._threshold_interval = None + + def _compose_cmd(self): """ composes commands for netperf and netserver based on xml recipe """ - netperf_opts = self.get_opt("netperf_opts") - if role == "client": - netperf_server = self.get_mopt("netperf_server", opt_type="addr") - duration = self.get_opt("duration") - port = self.get_opt("port") - testname = self.get_opt("testname", default="TCP_STREAM") - cmd = "netperf -H %s -f k" % netperf_server - if port is not None: + if self._role == "client": + cmd = "netperf -H %s -f k" % self._netperf_server + if self._port is not None: """ client connects on this port """ - cmd += " -p %s" % port - if duration is not None: + cmd += " -p %s" % self._port + if self._duration is not None: """ test will last this duration """ - cmd += " -l %s" % duration - if testname is not None: + cmd += " -l %s" % self._duration + if self._testname is not None: """ test that will be performed """ - if testname
not in self.supported_tests: + if self._testname not in self.supported_tests: logging.warning("Only TCP_STREAM, TCP_RR, UDP_STREAM, " "UDP_RR, SCTP_STREAM, SCTP_STREAM_MANY and SCTP_RR tests " "are now officialy supported by LNST. You " "can use other tests, but test result may not be correct.") - cmd += " -t %s" % testname + cmd += " -t %s" % self._testname
- if netperf_opts is not None: + if self._netperf_opts is not None: """ custom options for netperf """ - cmd += " %s" % netperf_opts - elif role == "server": - bind = self.get_opt("bind", opt_type="addr") - port = self.get_opt("port") - family = self.get_opt("family") + cmd += " %s" % self._netperf_opts + elif self._role == "server": cmd = "netserver -D" - if bind is not None: + if self._bind is not None: """ server is bound to this address """ - cmd += " -L %s" % bind - if port is not None: + cmd += " -L %s" % self._bind + if self._port is not None: """ server listens on this port """ - cmd += " -p %s" % port - if netperf_opts is not None: + cmd += " -p %s" % self._port + if self._netperf_opts is not None: """ custom options for netperf """ - cmd += " %s" % netperf_opts + cmd += " %s" % self._netperf_opts return cmd
def _parse_output(self, output): - testname = self.get_opt("testname", default="TCP_STREAM") - if testname == "UDP_STREAM": + if self._testname == "UDP_STREAM": # pattern for UDP_STREAM throughput output # decimal float decimal (float) pattern_udp_stream = "\d+\s+\d+.\d+\s+\d+\s+(\d+(.\d+){0,1})\n" r2 = re.search(pattern_udp_stream, output.lower()) - elif testname == "TCP_STREAM": + elif self._testname == "TCP_STREAM": # pattern for TCP_STREAM throughput output # decimal decimal decimal float (float) pattern_tcp_stream = "\d+\s+\d+\s+\d+\s+\d+.\d+\s+(\d+(.\d+){0,1})" r2 = re.search(pattern_tcp_stream, output.lower()) - elif testname == "TCP_RR" or testname == "UDP_RR" or testname == "SCTP_RR": + elif self._testname == "TCP_RR" or self._testname == "UDP_RR" or self._testname == "SCTP_RR": # pattern for TCP_RR, UDP_RR and SCTP_RR throughput output # decimal decimal decimal decimal float (float) pattern_tcp_rr = "\d+\s+\d+\s+\d+\s+\d+\s+\d+.\d+\s+(\d+(.\d+){0,1})" @@ -108,7 +132,6 @@ class Netperf(TestGeneric): return {"rate": rate_in_kb*1000, "unit": "bps"}
- def _parse_threshold(self, threshold): if threshold is None: return None @@ -116,9 +139,10 @@ class Netperf(TestGeneric): # group(1) ... threshold value # group(3) ... threshold units # group(4) ... bytes/bits - testname = self.get_opt("testname", default="TCP_STREAM") - if (testname == "TCP_STREAM" or testname == "UDP_STREAM" or - testname == "SCTP_STREAM" or testname == "SCTP_STREAM_MANY"): + if (self._testname == "TCP_STREAM" or + self._testname == "UDP_STREAM" or + self._testname == "SCTP_STREAM" or + self._testname == "SCTP_STREAM_MANY"): pattern_stream = "(\d*(.\d*)?)\s*([ kmgtKMGT])(bits|bytes)/sec" r1 = re.search(pattern_stream, threshold) if r1 is None: @@ -143,8 +167,8 @@ class Netperf(TestGeneric): if threshold_unit_type == "bytes": threshold_rate *= 8 threshold_unit_type = "bps" - elif (testname == "TCP_RR" or testname == "UDP_RR" or - testname == "SCTP_RR"): + elif (self._testname == "TCP_RR" or self._testname == "UDP_RR" or + self._testname == "SCTP_RR"): pattern_rr = "(\d*(.\d*)?)\s*trans./sec" r1 = re.search(pattern_rr, threshold.lower()) if r1 is None: @@ -173,11 +197,10 @@ class Netperf(TestGeneric): res_data = {}
rv = 0 - runs = self.get_opt("runs", default=1) results = [] rates = [] - for i in range(1, runs+1): - if runs > 1: + for i in range(1, self._runs+1): + if self._runs > 1: logging.info("Netperf starting run %d" % i) client = ShellProcess(cmd) try: @@ -191,59 +214,56 @@ class Netperf(TestGeneric): results.append(self._parse_output(output)) rates.append(results[-1]["rate"])
- if runs > 1: + if self._runs > 1: res_data["results"] = results
if len(rates) > 0: rate = sum(rates)/len(rates) else: rate = 0.0 - rate_std_deviation = std_deviation(rates) - res_data["rate"] = rate - res_data["rate_std_deviation"] = rate_std_deviation
- threshold = self._parse_threshold(self.get_opt("threshold")) - threshold_std_deviation = self._parse_threshold(self.get_opt("threshold_std_deviation")) + if len(rates) > 1: + rate_deviation = std_deviation(rates) + else: + rate_deviation = 0.0 + + res_data["rate"] = rate + res_data["rate_deviation"] = rate_deviation
res_val = False - if threshold is not None: - threshold = threshold["rate"] - if threshold_std_deviation is None: - threshold_std_deviation = 0.0 - else: - threshold_std_deviation = threshold_std_deviation["rate"] - result_interval = (rate - rate_std_deviation, - rate + rate_std_deviation) - threshold_interval = (threshold - threshold_std_deviation, - threshold + threshold_std_deviation) + if self._threshold_interval is not None: + result_interval = (rate - rate_deviation, + rate + rate_deviation)
- if threshold_interval[0] > result_interval[1]: + if self._threshold_interval[0] > result_interval[1]: res_val = False res_data["msg"] = "Measured rate %.2f +-%.2f bps is lower "\ "than threshold %.2f +-%.2f" %\ - (rate, rate_std_deviation, - threshold, threshold_std_deviation) + (rate, rate_deviation, + self._threshold["rate"], + self._threshold_deviation["rate"]) else: res_val = True res_data["msg"] = "Measured rate %.2f +-%.2f bps is higher "\ "than threshold %.2f +-%.2f" %\ - (rate, rate_std_deviation, - threshold, threshold_std_deviation) + (rate, rate_deviation, + self._threshold["rate"], + self._threshold_deviation["rate"]) else: if rate > 0.0: res_val = True else: res_val = False res_data["msg"] = "Measured rate was %.2f +-%.2f bps" %\ - (rate, rate_std_deviation) + (rate, rate_deviation)
- if rv != 0 and runs == 0: + if rv != 0 and self._runs == 1: res_data["msg"] = "Could not get performance throughput! Are you "\ "sure netperf is installed on both machines and "\ "machines are mutually accessible?" logging.info(res_data["msg"]) return (False, res_data) - elif rv != 0 and runs > 1: + elif rv != 0 and self._runs > 1: res_data["msg"] = "At least one of the Netperf runs failed, "\ "check the logs and result data for more "\ "information." @@ -252,13 +272,12 @@ class Netperf(TestGeneric): return (res_val, res_data)
def run(self): - self.role = self.get_mopt("role") - cmd = self._compose_cmd(self.role) + cmd = self._compose_cmd() logging.debug("compiled command: %s" % cmd) - if self.role == "client": + if self._role == "client": (rv, res_data) = self._run_client(cmd) if rv == False: return self.set_fail(res_data) return self.set_pass(res_data) - elif self.role == "server": + elif self._role == "server": self._run_server(cmd)
From: Ondrej Lichtner <olichtne@redhat.com>
This patch adds support for the confidence feature of Netperf. You can use it by defining the 'confidence' option of the test module. The format is identical to what the netperf client accepts: lvl[,intvl], where lvl is either 95 or 99 and specifies the confidence level, and intvl is the width of the confidence interval as a percentage.
You can further configure the min and max number of runs by specifying the -i option using the 'netperf_opts' test module option.
The measured confidence is then used to compute the deviation of the result and to create the interval that is compared against the threshold.
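A sketch of how the 'lvl[,intvl]' format breaks down (a hypothetical helper for illustration; the patch itself passes the string straight through to netperf's -I option):

```python
def parse_confidence_opt(value):
    """Split netperf's 'lvl[,intvl]' confidence option into its parts."""
    parts = value.split(",")
    level = int(parts[0])
    if level not in (95, 99):
        raise ValueError("confidence level must be 95 or 99")
    # the interval width (in percent) is optional
    interval = float(parts[1]) if len(parts) > 1 else None
    return level, interval

print(parse_confidence_opt("99,5"))  # (99, 5.0)
print(parse_confidence_opt("95"))    # (95, None)
```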
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 test_modules/Netperf.py | 44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py index a538520..eebf834 100644 --- a/test_modules/Netperf.py +++ b/test_modules/Netperf.py @@ -31,10 +31,15 @@ class Netperf(TestGeneric): self._duration = self.get_opt("duration") self._port = self.get_opt("port") self._testname = self.get_opt("testname", default="TCP_STREAM") + self._confidence = self.get_opt("confidence") self._bind = self.get_opt("bind", opt_type="addr") self._family = self.get_opt("family")
self._runs = self.get_opt("runs", default=1) + if self._runs > 1 and self._confidence is not None: + logging.warning("Ignoring 'runs' because 'confidence' "\ + "was specified.") + self._runs = 1
self._threshold = self._parse_threshold(self.get_opt("threshold")) self._threshold_deviation = self._parse_threshold( @@ -78,6 +83,12 @@ class Netperf(TestGeneric): "can use other tests, but test result may not be correct.") cmd += " -t %s" % self._testname
+ if self._confidence is not None: + """ + confidence level that Netperf should try to achieve + """ + cmd += " -I %s" % self._confidence + if self._netperf_opts is not None: """ custom options for netperf @@ -129,8 +140,34 @@ class Netperf(TestGeneric): else: rate_in_kb = float(r2.group(1))
- return {"rate": rate_in_kb*1000, - "unit": "bps"} + if self._confidence is not None: + confidence = self._parse_confidence(output) + return {"rate": rate_in_kb*1000, + "unit": "bps", + "confidence": confidence} + else: + return {"rate": rate_in_kb*1000, + "unit": "bps"} + + def _parse_confidence(self, output): + normal_pattern = r'\+/-(\d+\.\d*)% @ (\d+)% conf\.' + warning_pattern = r'!!! Confidence intervals: Throughput\s+: (\d+\.\d*)%' + normal_confidence = re.search(normal_pattern, output) + warning_confidence = re.search(warning_pattern, output) + + if normal_confidence is None: + logging.error("Failed to parse confidence!!") + return (0, 0.0) + + if warning_confidence is None: + real_confidence = (float(normal_confidence.group(2)), + float(normal_confidence.group(1))) + else: + real_confidence = (float(normal_confidence.group(2)), + float(warning_confidence.group(1))) + + return real_confidence +
def _parse_threshold(self, threshold): if threshold is None: @@ -224,6 +261,9 @@ class Netperf(TestGeneric):
if len(rates) > 1: rate_deviation = std_deviation(rates) + elif len(rates) == 1 and self._confidence is not None: + result = results[0] + rate_deviation = rate * (result["confidence"][1] / 100) else: rate_deviation = 0.0
From: Ondrej Lichtner <olichtne@redhat.com>
When an alias is undefined, we should return None instead of raising an exception. This patch fixes that.
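The pattern, isolated with stand-in classes (the real method wraps the controller's _get_alias, which raises XmlTemplateError for unknown aliases):

```python
class XmlTemplateError(Exception):
    pass

class Controller(object):
    def __init__(self, aliases):
        self._aliases = aliases

    def _get_alias(self, name):
        # internal lookup: raises for unknown aliases
        try:
            return self._aliases[name]
        except KeyError:
            raise XmlTemplateError("undefined alias %s" % name)

    def get_alias(self, name):
        # public API after the fix: undefined aliases become None
        try:
            return self._get_alias(name)
        except XmlTemplateError:
            return None

ctl = Controller({"mtu": "1500"})
print(ctl.get_alias("mtu"))      # 1500
print(ctl.get_alias("missing"))  # None, no traceback
```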
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/Task.py | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index f27b499..0d437f3 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -17,6 +17,7 @@ from lnst.Controller.PerfRepo import PerfRepoTestExecution
 from lnst.Controller.PerfRepo import PerfRepoValue
 from lnst.Common.Utils import dict_to_dot, list_to_dot
 from lnst.Common.Config import lnst_config
+from lnst.Controller.XmlTemplates import XmlTemplateError

 # The handle to be imported from each task
 ctl = None
@@ -103,7 +104,10 @@ class ControllerAPI(object):
         :return: value of a user defined alias
         :rtype: string
         """
-        return self._ctl._get_alias(alias)
+        try:
+            return self._ctl._get_alias(alias)
+        except XmlTemplateError:
+            return None

     def connect_PerfRepo(self, url=None, username=None, password=None):
         if not self._perf_repo_api.connected():
From: Ondrej Lichtner <olichtne@redhat.com>
This patch adds initial support for PerfRepo integration to the phase1 regression test recipes. All recipes with PerfRepo integration have an updated README file that describes how to enable this feature. Please remember that you also need to configure the LNST Controller to be able to connect to a PerfRepo instance.
In addition to that, I refactored the use of test modules so that the same module object is reused, with updated parameters, for multiple runs of the same test.
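The reuse pattern, sketched with a minimal stand-in for a test module object (the real modules come from ctl.get_module and expose update_options):

```python
class Module(object):
    def __init__(self, options):
        self._options = dict(options)

    def update_options(self, options):
        # merge new values over the existing option set
        self._options.update(options)

# one module object, re-targeted per iteration instead of re-created
netperf_cli = Module({"role": "client", "testname": "TCP_STREAM"})
for server_ip in ["192.168.100.10", "192.168.100.20"]:
    netperf_cli.update_options({"netperf_server": server_ip})
    # m2.run(netperf_cli, ...) would go here in a real recipe

print(netperf_cli._options["netperf_server"])  # 192.168.100.20
```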
The patch also contains a minor fix related to the test duration alias: a changed test duration wasn't reflected in the timeout value of the test, so defining a high enough value in the alias resulted in timeouts.
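The timeout fix ties the timeout to the aliased duration; the recipes use the expression below (wrapped here in a small helper whose name is illustrative):

```python
def netperf_timeout(netperf_duration):
    """Timeout derived from the duration alias instead of a hard-coded 70s;
    aliases arrive as strings from the XML recipe, hence the int()."""
    return int(netperf_duration) * 5 + 20

print(netperf_timeout("60"))   # 320
print(netperf_timeout("120"))  # 620
```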
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 recipes/regression_tests/phase1/3_vlans.README     |  30 ++
 recipes/regression_tests/phase1/3_vlans.py         | 313 ++++++++++++++++-----
 recipes/regression_tests/phase1/3_vlans.xml        |   1 +
 .../phase1/3_vlans_over_active_backup_bond.README  |  30 ++
 .../phase1/3_vlans_over_active_backup_bond.xml     |   1 +
 .../regression_tests/phase1/3_vlans_over_bond.py   | 313 ++++++++++++++++-----
 .../phase1/3_vlans_over_round_robin_bond.README    |  32 ++-
 .../phase1/3_vlans_over_round_robin_bond.xml       |   1 +
 .../phase1/active_backup_bond.README               |  30 ++
 .../regression_tests/phase1/active_backup_bond.xml |   1 +
 .../phase1/active_backup_double_bond.README        |  30 ++
 .../phase1/active_backup_double_bond.xml           |   1 +
 recipes/regression_tests/phase1/bonding_test.py    | 175 +++++++++++-
 .../phase1/round_robin_bond.README                 |  30 ++
 .../regression_tests/phase1/round_robin_bond.xml   |   1 +
 .../phase1/round_robin_double_bond.README          |  30 ++
 .../phase1/round_robin_double_bond.xml             |   1 +
 ...l_bridge_2_vlans_over_active_backup_bond.README |  30 ++
 ...tual_bridge_2_vlans_over_active_backup_bond.xml |   1 +
 .../phase1/virtual_bridge_2_vlans_over_bond.py     | 173 +++++++++++-
 .../phase1/virtual_bridge_vlan_in_guest.README     |  30 ++
 .../phase1/virtual_bridge_vlan_in_guest.py         | 175 +++++++++++-
 .../phase1/virtual_bridge_vlan_in_guest.xml        |   1 +
 .../phase1/virtual_bridge_vlan_in_host.README      |  32 ++-
 .../phase1/virtual_bridge_vlan_in_host.py          | 207 ++++++++++++--
 .../phase1/virtual_bridge_vlan_in_host.xml         |   1 +
 26 files changed, 1467 insertions(+), 203 deletions(-)
diff --git a/recipes/regression_tests/phase1/3_vlans.README b/recipes/regression_tests/phase1/3_vlans.README index 54037a5..b559a44 100644 --- a/recipes/regression_tests/phase1/3_vlans.README +++ b/recipes/regression_tests/phase1/3_vlans.README @@ -43,3 +43,33 @@ Test description (3_vlans.py): Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + 3_vlans.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo.
diff --git a/recipes/regression_tests/phase1/3_vlans.py b/recipes/regression_tests/phase1/3_vlans.py index 7e2360e..1a2c834 100644 --- a/recipes/regression_tests/phase1/3_vlans.py +++ b/recipes/regression_tests/phase1/3_vlans.py @@ -1,9 +1,31 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + + m1 = ctl.get_host("testmachine1") m2 = ctl.get_host("testmachine2")
@@ -34,80 +56,77 @@ for vlan in vlans:
ctl.wait(15)
+ping_mod = ctl.get_module("IcmpPing", + options={ + "count" : 100, + "interval" : 0.1 + }) +ping_mod6 = ctl.get_module("Icmp6Ping", + options={ + "count" : 100, + "interval" : 0.1 + }) +netperf_srv = ctl.get_module("Netperf", + options={ + "role" : "server" + }) +netperf_srv6 = ctl.get_module("Netperf", + options={ + "role" : "server", + "netperf_opts" : " -6" + }) +netperf_cli_tcp = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "TCP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_udp = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "UDP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_tcp6 = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "TCP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_udp6 = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "UDP_STREAM", + "confidence" : "99,5" + }) + for vlan1 in vlans: for vlan2 in vlans: - ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : m2.get_ip(vlan2, 0), - "count" : 100, - "iface" : m1.get_devname(vlan1), - "interval" : 0.1 - }) - - ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : m2.get_ip(vlan2, 1), - "count" : 100, - "iface" : m1.get_ip(vlan1, 1), - "interval" : 0.1 - }) - - netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1.get_ip(vlan1, 0), - }) - - netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1.get_ip(vlan1, 1), - "netperf_opts" : " -6", - }) - - netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "netperf_opts" : - "-L %s" % m2.get_ip(vlan1) - }) - - netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - 
"netperf_server" : - m1.get_ip(vlan1, 0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "netperf_opts" : - "-L %s" % m2.get_ip(vlan1) - }) - - netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "netperf_opts" : - "-L %s -6" % m2.get_ip(vlan1, 1) - }) - - netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "netperf_opts" : - "-L %s -6" % m2.get_ip(vlan1, 1) - }) + ping_mod.update_options({"addr": m2.get_ip(vlan2, 0), + "iface": m1.get_devname(vlan1)}) + + ping_mod6.update_options({"addr": m2.get_ip(vlan2, 1), + "iface": m1.get_ip(vlan1, 1)}) + + netperf_srv.update_options({"bind": m1.get_ip(vlan1, 0)}) + + netperf_srv6.update_options({"bind": m1.get_ip(vlan1, 1)}) + + netperf_cli_tcp.update_options({"netperf_server": m1.get_ip(vlan1, 0), + "netperf_opts": "-i 5 -L %s" % m2.get_ip(vlan1, 0)}) + + netperf_cli_udp.update_options({"netperf_server": m1.get_ip(vlan1, 0), + "netperf_opts": "-i 5 -L %s" % m2.get_ip(vlan1, 0)}) + + netperf_cli_tcp6.update_options({"netperf_server": m1.get_ip(vlan1, 1), + "netperf_opts": "-i 5 -L %s -6" % m2.get_ip(vlan1, 1)}) + + netperf_cli_udp6.update_options({"netperf_server": m1.get_ip(vlan1, 1), + "netperf_opts": "-i 5 -L %s -6" % m2.get_ip(vlan1, 1)})
if vlan1 == vlan2: # These tests should pass @@ -123,24 +142,166 @@ for vlan1 in vlans: # Ping test m1.run(ping_mod)
+ # prepare PerfRepo result for tcp + result_tcp = None + result_udp = None + if tcp_ipv4_id is not None: + result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result") + result_tcp.set_parameter(offload, state) + result_tcp.set_parameter('netperf_server_on_vlan', vlan1) + result_tcp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp + if udp_ipv4_id is not None: + result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result") + result_udp.set_parameter(offload, state) + result_udp.set_parameter('netperf_server_on_vlan', vlan1) + result_udp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # Netperf test (both TCP and UDP) srv_proc = m1.run(netperf_srv, bg=True) ctl.wait(2) -
m2.run(netperf_cli_tcp, timeout=70) - m2.run(netperf_cli_udp, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp, timeout = int(netperf_duration)*5 + 20) + udp_res_data = m2.run(netperf_cli_udp, timeout = int(netperf_duration)*5 + 20) srv_proc.intr()
+ if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + if ipv in [ 'ipv6', 'both' ]: # Ping test m1.run(ping_mod6)
+ # prepare PerfRepo result for tcp ipv6 + result_tcp = None + result_udp = None + if tcp_ipv6_id is not None: + result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result") + result_tcp.set_parameter(offload, state) + result_tcp.set_parameter('netperf_server_on_vlan', vlan1) + result_tcp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp ipv6 + if udp_ipv6_id is not None: + result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result") + result_udp.set_parameter(offload, state) + result_udp.set_parameter('netperf_server_on_vlan', vlan1) + result_udp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + # Netperf test (both TCP and UDP) srv_proc = m1.run(netperf_srv6, bg=True) ctl.wait(2)
- m2.run(netperf_cli_tcp6, timeout=70) - m2.run(netperf_cli_udp6, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp6, timeout=int(netperf_duration)*5 + 20) + udp_res_data = m2.run(netperf_cli_udp6, timeout=int(netperf_duration)*5 + 20) srv_proc.intr()
+ if result_tcp is not None and tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + # These tests should fail # Ping across different VLAN else: diff --git a/recipes/regression_tests/phase1/3_vlans.xml b/recipes/regression_tests/phase1/3_vlans.xml index aeb3ed1..fbbc583 100644 --- a/recipes/regression_tests/phase1/3_vlans.xml +++ b/recipes/regression_tests/phase1/3_vlans.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="3_vlans.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README index f2d2c4c..83e3537 100644 --- a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README +++ b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.README @@ -52,3 +52,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo
integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + 3_vlans_over_active_backup_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo.
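As an illustration of the format described above, a hypothetical 3_vlans_over_active_backup_bond.mapping file might look like this (all TestUids, hashes, and report ids below are made-up placeholders):

```
# TestUid mappings for the four Netperf result types
tcp_ipv4_id = example_tcp_ipv4_test_uid
tcp_ipv6_id = example_tcp_ipv6_test_uid
udp_ipv4_id = example_udp_ipv4_test_uid
udp_ipv6_id = example_udp_ipv6_test_uid
# baseline comparison: generated result hash -> PerfRepo Report id
f3a92c71 = 42
```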
diff --git a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml index bcaf221..925d9f1 100644 --- a/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml +++ b/recipes/regression_tests/phase1/3_vlans_over_active_backup_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="3_vlans_over_active_backup_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/3_vlans_over_bond.py b/recipes/regression_tests/phase1/3_vlans_over_bond.py index 6fa1d80..c9d3286 100644 --- a/recipes/regression_tests/phase1/3_vlans_over_bond.py +++ b/recipes/regression_tests/phase1/3_vlans_over_bond.py @@ -1,9 +1,31 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + + m1 = ctl.get_host("testmachine1") m2 = ctl.get_host("testmachine2")
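The mapping helpers imported above come from LNST's PerfRepoUtils. As a rough sketch of their contract (my reconstruction from how the recipe uses them, not the actual implementation), they behave like this:

```python
# Hypothetical reconstruction of the PerfRepoUtils helpers used by the
# recipes; the real code lives in lnst/Controller/PerfRepoUtils.py.
def parse_id_mapping(file_path):
    """Parse 'key = value' lines into a dict; None when no mapping file is set."""
    if file_path is None:
        return None
    mapping = {}
    with open(file_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, sep, value = line.partition("=")
            if sep:
                mapping[key.strip()] = value.strip()
    return mapping

def get_id(mapping, key):
    """Look up a key, tolerating a missing mapping file (returns None)."""
    if mapping is None:
        return None
    return mapping.get(key)
```

With this contract, PerfRepo support is silently disabled when the alias or every id is missing, which is exactly what the chain of `is not None` checks in the recipe setup relies on.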
@@ -34,80 +56,77 @@ for vlan in vlans:
ctl.wait(15)
+ping_mod = ctl.get_module("IcmpPing", + options={ + "count" : 100, + "interval" : 0.1 + }) +ping_mod6 = ctl.get_module("Icmp6Ping", + options={ + "count" : 100, + "interval" : 0.1 + }) +netperf_srv = ctl.get_module("Netperf", + options={ + "role" : "server" + }) +netperf_srv6 = ctl.get_module("Netperf", + options={ + "role" : "server", + "netperf_opts" : " -6" + }) +netperf_cli_tcp = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "TCP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_udp = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "UDP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_tcp6 = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "TCP_STREAM", + "confidence" : "99,5" + }) +netperf_cli_udp6 = ctl.get_module("Netperf", + options={ + "role" : "client", + "duration" : netperf_duration, + "testname" : "UDP_STREAM", + "confidence" : "99,5" + }) + for vlan1 in vlans: for vlan2 in vlans: - ping_mod = ctl.get_module("IcmpPing", - options={ - "addr" : m2.get_ip(vlan2, 0), - "count" : 100, - "iface" : m1.get_devname(vlan1), - "interval" : 0.1 - }) - - ping_mod6 = ctl.get_module("Icmp6Ping", - options={ - "addr" : m2.get_ip(vlan2, 1), - "count" : 100, - "iface" : m1.get_ip(vlan1, 1), - "interval" : 0.1 - }) - - netperf_srv = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1.get_ip(vlan1, 0), - }) - - netperf_srv6 = ctl.get_module("Netperf", - options={ - "role" : "server", - "bind" : m1.get_ip(vlan1, 1), - "netperf_opts" : " -6", - }) - - netperf_cli_tcp = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 0), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "netperf_opts" : - "-L %s" % m2.get_ip(vlan1, 0) - }) - - netperf_cli_udp = ctl.get_module("Netperf", - options={ - "role" : "client", - 
"netperf_server" : - m1.get_ip(vlan1, 0), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "netperf_opts" : - "-L %s" % m2.get_ip(vlan1, 0) - }) - - netperf_cli_tcp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 1), - "duration" : netperf_duration, - "testname" : "TCP_STREAM", - "netperf_opts" : - "-L %s -6" % m2.get_ip(vlan1, 1) - }) - - netperf_cli_udp6 = ctl.get_module("Netperf", - options={ - "role" : "client", - "netperf_server" : - m1.get_ip(vlan1, 1), - "duration" : netperf_duration, - "testname" : "UDP_STREAM", - "netperf_opts" : - "-L %s -6" % m2.get_ip(vlan1, 1) - }) + ping_mod.update_options({"addr": m2.get_ip(vlan2, 0), + "iface": m1.get_devname(vlan1)}) + + ping_mod6.update_options({"addr": m2.get_ip(vlan2, 1), + "iface": m1.get_ip(vlan1, 1)}) + + netperf_srv.update_options({"bind": m1.get_ip(vlan1, 0)}) + + netperf_srv6.update_options({"bind": m1.get_ip(vlan1, 1)}) + + netperf_cli_tcp.update_options({"netperf_server": m1.get_ip(vlan1, 0), + "netperf_opts": "-i 5 -L %s" % m2.get_ip(vlan1, 0)}) + + netperf_cli_udp.update_options({"netperf_server": m1.get_ip(vlan1, 0), + "netperf_opts": "-i 5 -L %s" % m2.get_ip(vlan1, 0)}) + + netperf_cli_tcp6.update_options({"netperf_server": m1.get_ip(vlan1, 1), + "netperf_opts": "-i 5 -L %s -6" % m2.get_ip(vlan1, 1)}) + + netperf_cli_udp6.update_options({"netperf_server": m1.get_ip(vlan1, 1), + "netperf_opts": "-i 5 -L %s -6" % m2.get_ip(vlan1, 1)})
if vlan1 == vlan2: # These tests should pass @@ -125,24 +144,166 @@ for vlan1 in vlans: # Ping test m1.run(ping_mod)
+ # prepare PerfRepo result for tcp + result_tcp = None + result_udp = None + if tcp_ipv4_id is not None: + result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result") + result_tcp.set_parameter(offload, state) + result_tcp.set_parameter('netperf_server_on_vlan', vlan1) + result_tcp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp + if udp_ipv4_id is not None: + result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result") + result_udp.set_parameter(offload, state) + result_udp.set_parameter('netperf_server_on_vlan', vlan1) + result_udp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # Netperf test (both TCP and UDP) srv_proc = m1.run(netperf_srv, bg=True) ctl.wait(2) -
m2.run(netperf_cli_tcp, timeout=70) - m2.run(netperf_cli_udp, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp, timeout=int(netperf_duration)*5 + 20) + udp_res_data = m2.run(netperf_cli_udp, timeout=int(netperf_duration)*5 + 20) srv_proc.intr()
+ if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + if ipv in [ 'ipv6', 'both' ]: # Ping test m1.run(ping_mod6)
+ # prepare PerfRepo result for tcp ipv6 + result_tcp = None + result_udp = None + if tcp_ipv6_id is not None: + result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result") + result_tcp.set_parameter(offload, state) + result_tcp.set_parameter('netperf_server_on_vlan', vlan1) + result_tcp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp ipv6 + if udp_ipv6_id is not None: + result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result") + result_udp.set_parameter(offload, state) + result_udp.set_parameter('netperf_server_on_vlan', vlan1) + result_udp.set_parameter('netperf_client_on_vlan', vlan2) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + # Netperf test (both TCP and UDP) srv_proc = m1.run(netperf_srv6, bg=True) ctl.wait(2)
- m2.run(netperf_cli_tcp6, timeout=70) - m2.run(netperf_cli_udp6, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp6, timeout=int(netperf_duration)*5 + 20) + udp_res_data = m2.run(netperf_cli_udp6, timeout=int(netperf_duration)*5 + 20) srv_proc.intr()
+ if result_tcp is not None and tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + # These tests should fail # Ping across different VLAN else: diff --git a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README index 2d31025..ad18ae7 100644 --- a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README +++ b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.README @@ -31,7 +31,7 @@ Topology: +--------------------+ +--------------------+
Number of hosts: 2 -Host #1 decsription: +Host #1 description: Two ethernet devices, in round-robin bond mode 3 VLANs on bond interface Host #2 description: @@ -52,3 +52,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + 3_vlans_over_round_robin_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo.
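The baseline comparison described above feeds directly into the Netperf module options. The conversion the recipes perform can be sketched as a small pure function (the option names 'threshold' and 'threshold_deviation' are taken from the recipe code itself; the helper function is illustrative, not part of LNST):

```python
# Illustrative helper mirroring the update_options() calls in the recipes:
# a stored baseline's throughput and deviation become netperf threshold opts.
def baseline_to_threshold_options(baseline_throughput, baseline_deviation):
    return {
        "threshold": "%s bits/sec" % baseline_throughput,
        "threshold_deviation": "%s bits/sec" % baseline_deviation,
    }
```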
diff --git a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml index 922f696..2fd3ede 100644 --- a/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml +++ b/recipes/regression_tests/phase1/3_vlans_over_round_robin_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="3_vlans_over_round_robin_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/active_backup_bond.README b/recipes/regression_tests/phase1/active_backup_bond.README index 8980098..5cc8c2d 100644 --- a/recipes/regression_tests/phase1/active_backup_bond.README +++ b/recipes/regression_tests/phase1/active_backup_bond.README @@ -49,3 +49,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. 
+ For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + active_backup_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo.
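The four metrics listed above are all derived from a single Netperf client run. A sketch of the derivation used by the recipe code (rate and deviation come from the client's result data; the helper function is illustrative, not part of LNST):

```python
# Illustrative: min/max are simply rate -/+ deviation, matching the
# add_value() calls in the recipe scripts.
def throughput_metrics(rate, deviation):
    return {
        "throughput": rate,
        "throughput_min": rate - deviation,
        "throughput_max": rate + deviation,
        "throughput_deviation": deviation,
    }
```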
diff --git a/recipes/regression_tests/phase1/active_backup_bond.xml b/recipes/regression_tests/phase1/active_backup_bond.xml index 720c6c2..cfd44fc 100644 --- a/recipes/regression_tests/phase1/active_backup_bond.xml +++ b/recipes/regression_tests/phase1/active_backup_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="active_backup_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/active_backup_double_bond.README b/recipes/regression_tests/phase1/active_backup_double_bond.README index 5deb92f..fdb2368 100644 --- a/recipes/regression_tests/phase1/active_backup_double_bond.README +++ b/recipes/regression_tests/phase1/active_backup_double_bond.README @@ -49,3 +49,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + active_backup_double_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline.
Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/active_backup_double_bond.xml b/recipes/regression_tests/phase1/active_backup_double_bond.xml index 417622c..cf83e7a 100644 --- a/recipes/regression_tests/phase1/active_backup_double_bond.xml +++ b/recipes/regression_tests/phase1/active_backup_double_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="active_backup_double_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/bonding_test.py b/recipes/regression_tests/phase1/bonding_test.py index acc3748..29f251a 100644 --- a/recipes/regression_tests/phase1/bonding_test.py +++ b/recipes/regression_tests/phase1/bonding_test.py @@ -1,9 +1,30 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + m1 = ctl.get_host("testmachine1") m2 = ctl.get_host("testmachine2")
@@ -61,7 +82,8 @@ netperf_cli_tcp = ctl.get_module("Netperf", "netperf_server" : m1.get_ip("test_if", 0), "duration" : netperf_duration, "testname" : "TCP_STREAM", - "netperf_opts" : "-L %s" % m2.get_ip("test_if", 0) + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % m2.get_ip("test_if", 0) })
netperf_cli_udp = ctl.get_module("Netperf", @@ -70,7 +92,8 @@ netperf_cli_udp = ctl.get_module("Netperf", "netperf_server" : m1.get_ip("test_if", 0), "duration" : netperf_duration, "testname" : "UDP_STREAM", - "netperf_opts" : "-L %s" % m2.get_ip("test_if", 0) + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % m2.get_ip("test_if", 0) })
netperf_cli_tcp6 = ctl.get_module("Netperf", @@ -80,8 +103,9 @@ netperf_cli_tcp6 = ctl.get_module("Netperf", m1.get_ip("test_if", 1), "duration" : netperf_duration, "testname" : "TCP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % m2.get_ip("test_if", 1) + "-i 5 -L %s -6" % m2.get_ip("test_if", 1) }) netperf_cli_udp6 = ctl.get_module("Netperf", options={ @@ -90,8 +114,9 @@ netperf_cli_udp6 = ctl.get_module("Netperf", m1.get_ip("test_if", 1), "duration" : netperf_duration, "testname" : "UDP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % m2.get_ip("test_if", 1) + "-i 5 -L %s -6" % m2.get_ip("test_if", 1) })
ctl.wait(15) @@ -104,16 +129,150 @@ for offload in offloads: state)) if ipv in [ 'ipv4', 'both' ]: m1.run(ping_mod) + + # prepare PerfRepo result for tcp + result_tcp = None + result_udp = None + if tcp_ipv4_id is not None: + result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + # prepare PerfRepo result for udp + if udp_ipv4_id is not None: + result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = m1.run(netperf_srv, bg=True) ctl.wait(2) - m2.run(netperf_cli_tcp, timeout=70) - m2.run(netperf_cli_udp, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp, timeout=int(netperf_duration)*5 + 20) +
udp_res_data = m2.run(netperf_cli_udp, timeout=int(netperf_duration)*5 + 20) server_proc.intr()
+ if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + if ipv in [ 'ipv6', 'both' ]: m1.run(ping_mod6) + + # prepare PerfRepo result for tcp ipv6 + result_tcp = None + result_udp = None + if tcp_ipv6_id is not None: + result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp ipv6 + if udp_ipv6_id is not None:
+ result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = m1.run(netperf_srv6, bg=True) ctl.wait(2) - m2.run(netperf_cli_tcp6, timeout=70) - m2.run(netperf_cli_udp6, timeout=70) + tcp_res_data = m2.run(netperf_cli_tcp6, timeout=int(netperf_duration)*5 + 20) + udp_res_data = m2.run(netperf_cli_udp6, timeout=int(netperf_duration)*5 + 20) server_proc.intr() + + if result_tcp is not None and tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) +
result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) diff --git a/recipes/regression_tests/phase1/round_robin_bond.README b/recipes/regression_tests/phase1/round_robin_bond.README index 06244f8..2a6db04 100644 --- a/recipes/regression_tests/phase1/round_robin_bond.README +++ b/recipes/regression_tests/phase1/round_robin_bond.README @@ -49,3 +49,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + round_robin_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. 
RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/round_robin_bond.xml b/recipes/regression_tests/phase1/round_robin_bond.xml index 619b4f5..ec43383 100644 --- a/recipes/regression_tests/phase1/round_robin_bond.xml +++ b/recipes/regression_tests/phase1/round_robin_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="round_robin_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/round_robin_double_bond.README b/recipes/regression_tests/phase1/round_robin_double_bond.README index e5b127d..5366ce0 100644 --- a/recipes/regression_tests/phase1/round_robin_double_bond.README +++ b/recipes/regression_tests/phase1/round_robin_double_bond.README @@ -49,3 +49,33 @@ Test description: Offloads: + TSO, GRO, GSO + tested both on/off variants + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. 
+ For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + round_robin_double_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo. 
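As an illustration of the mapping-file format the READMEs describe, the sketch below shows a minimal `key = value` reader. This is not the actual LNST `parse_id_mapping` implementation; the file contents and the comment-skipping behavior are assumptions based only on the `<some_hash> = <report_id>` example above, and the TestUid value is a made-up placeholder.

```python
# Minimal sketch of a "key = value" mapping-file reader, similar in spirit
# to what parse_id_mapping is used for in the recipes; NOT the LNST code.
def parse_mapping_lines(lines):
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments (assumed behavior)
        key, sep, value = line.partition("=")
        if sep:
            mapping[key.strip()] = value.strip()
    return mapping

# Hypothetical contents of round_robin_double_bond.mapping:
example = [
    "tcp_ipv4_id = NetperfTcpIpv4",  # TestUid of a PerfRepo Test object
    "af54b7d1 = 42",                 # result hash -> baseline Report id
]
print(parse_mapping_lines(example))
```

A lookup such as `get_id(mapping, "tcp_ipv4_id")` would then simply return the mapped string, or None when the key is absent.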
diff --git a/recipes/regression_tests/phase1/round_robin_double_bond.xml b/recipes/regression_tests/phase1/round_robin_double_bond.xml index 4c00f70..c283826 100644 --- a/recipes/regression_tests/phase1/round_robin_double_bond.xml +++ b/recipes/regression_tests/phase1/round_robin_double_bond.xml @@ -3,6 +3,7 @@ <alias name="ipv" value="both" /> <alias name="mtu" value="1500" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="round_robin_double_bond.mapping" /> </define> <network> <host id="testmachine1"> diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README index 876c89c..ce3fb8e 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README +++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.README @@ -74,3 +74,33 @@ Test description: + duration: 5 + TCP_STREAM and UDP_STREAM + between guests in same VLANs + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. 
+ For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + virtual_bridge_2_vlans_over_active_backup_bond.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml index 2c9ed11..c320a93 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml +++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_active_backup_bond.xml @@ -2,6 +2,7 @@ <define> <alias name="ipv" value="both" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="virtual_bridge_2_vlans_over_bond.mapping" /> </define> <network> <host id="host1"> diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py index 8347eaf..39d3079 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py +++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py @@ -1,9 +1,30 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + # Host 1 + guests 1 and 2 h1 = ctl.get_host("host1") g1 = ctl.get_host("guest1") @@ -77,7 +98,8 @@ netperf_cli_tcp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("guestnic"), "duration" : netperf_duration, "testname" : "TCP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % g3.get_ip("guestnic") })
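The enable/disable decision above boils down to "connect to PerfRepo only if at least one of the four id mappings resolved". A standalone sketch of that check (plain function; the ids here are hypothetical stand-ins for the values returned by `get_id`):

```python
# Sketch of the recipes' PerfRepo enable/disable decision: support is
# turned on only when at least one id mapping resolved to a value.
def perfrepo_enabled(*ids):
    return any(i is not None for i in ids)

# e.g. only the ipv6 TCP mapping is defined in the mapping file:
print(perfrepo_enabled(None, "NetperfTcpIpv6", None, None))  # -> True
print(perfrepo_enabled(None, None, None, None))              # -> False
```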
@@ -87,7 +109,8 @@ netperf_cli_udp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("guestnic"), "duration" : netperf_duration, "testname" : "UDP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % g3.get_ip("guestnic") })
@@ -98,8 +121,9 @@ netperf_cli_tcp6 = ctl.get_module("Netperf", g1.get_ip("guestnic", 1), "duration" : netperf_duration, "testname" : "TCP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % g3.get_ip("guestnic", 1) + "-i 5 -L %s -6" % g3.get_ip("guestnic", 1) })
netperf_cli_udp6 = ctl.get_module("Netperf", @@ -109,8 +133,9 @@ netperf_cli_udp6 = ctl.get_module("Netperf", g1.get_ip("guestnic", 1), "duration" : netperf_duration, "testname" : "UDP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % g3.get_ip("guestnic", 1) + "-i 5 -L %s -6" % g3.get_ip("guestnic", 1) })
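The new client timeouts in this series follow directly from the netperf options set above: with `confidence "99,5"` and `-i 5`, netperf may repeat the measurement up to five times to reach the confidence interval, so the recipes budget five full durations plus a margin. A sketch of the arithmetic (the 20-second margin is the value used in the diff; the function name is illustrative only):

```python
# Worst-case client timeout used by the recipes: up to 5 confidence
# iterations of `netperf_duration` seconds each, plus a 20 s safety margin.
def netperf_timeout(netperf_duration, max_iterations=5, margin=20):
    # aliases arrive from the XML as strings, hence the int() conversion
    return int(netperf_duration) * max_iterations + margin

print(netperf_timeout("60"))  # -> 320, replacing the old fixed timeout=70
```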
ping_mod_bad = ctl.get_module("IcmpPing", @@ -171,20 +196,152 @@ for offload in offloads: g1.run(ping_mod_bad, expect="fail") g3.run(ping_mod_bad2, expect="fail")
+ # prepare PerfRepo result for tcp + result_tcp = None + result_udp = None + if tcp_ipv4_id is not None: + result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + # prepare PerfRepo result for udp + if udp_ipv4_id is not None: + result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = g1.run(netperf_srv, bg=True) ctl.wait(2) - g3.run(netperf_cli_tcp, timeout=70) - g3.run(netperf_cli_udp, timeout=70) + tcp_res_data = g3.run(netperf_cli_tcp, timeout = int(netperf_duration)*5 + 20) + udp_res_data = g3.run(netperf_cli_udp, timeout = int(netperf_duration)*5 + 20) server_proc.intr()
+ if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + if ipv in [ 'ipv6', 'both' ]: g1.run(ping_mod6) g4.run(ping_mod62) g1.run(ping_mod6_bad, expect="fail") g3.run(ping_mod6_bad2, expect="fail")
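All four metric values saved per result are derived from the single rate/deviation pair that Netperf reports. A self-contained sketch of that derivation (a plain dict stands in for the PerfRepo result object and its `add_value` calls):

```python
# Sketch: the four PerfRepo metric values are all derived from Netperf's
# measured rate and its deviation, matching the add_value calls above.
def perfrepo_values(rate, deviation):
    return {
        'throughput': rate,
        'throughput_min': rate - deviation,   # lower edge of the interval
        'throughput_max': rate + deviation,   # upper edge of the interval
        'throughput_deviation': deviation,
    }

print(perfrepo_values(1000.0, 50.0))
```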
+ # prepare PerfRepo result for tcp ipv6 + result_tcp = None + result_udp = None + if tcp_ipv6_id is not None: + result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp ipv6 + if udp_ipv6_id is not None: + result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = g1.run(netperf_srv6, bg=True) ctl.wait(2) - g3.run(netperf_cli_tcp6, timeout=70) - g3.run(netperf_cli_udp6, timeout=70) + tcp_res_data = g3.run(netperf_cli_tcp6, timeout = int(netperf_duration)*5 + 20) + udp_res_data = g3.run(netperf_cli_udp6, timeout = int(netperf_duration)*5 + 20) server_proc.intr() + + if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README index 673a6c2..d6c26ed 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README +++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.README @@ -50,3 +50,33 @@ Test description: + duration: 60s + TCP_STREAM and UDP_STREAM + between guest1's VLAN10 and host2's VLAN10 + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. 
+ For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + virtual_bridge_vlan_in_guest.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo. diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py index bc90f5d..44e9d42 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py +++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.py @@ -1,9 +1,30 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + h1 = ctl.get_host("host1") g1 = ctl.get_host("guest1")
@@ -56,7 +77,8 @@ netperf_cli_tcp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("vlan10"), "duration" : netperf_duration, "testname" : "TCP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % h2.get_ip("vlan10") })
@@ -66,7 +88,8 @@ netperf_cli_udp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("vlan10"), "duration" : netperf_duration, "testname" : "UDP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % h2.get_ip("vlan10") })
@@ -77,8 +100,9 @@ netperf_cli_tcp6 = ctl.get_module("Netperf", g1.get_ip("vlan10", 1), "duration" : netperf_duration, "testname" : "TCP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % h2.get_ip("vlan10", 1) + "-i 5 -L %s -6" % h2.get_ip("vlan10", 1) })
netperf_cli_udp6 = ctl.get_module("Netperf", @@ -88,8 +112,9 @@ netperf_cli_udp6 = ctl.get_module("Netperf", g1.get_ip("vlan10", 1), "duration" : netperf_duration, "testname" : "UDP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % h2.get_ip("vlan10", 1) + "-i 5 -L %s -6" % h2.get_ip("vlan10", 1) })
ctl.wait(15) @@ -105,18 +130,152 @@ for offload in offloads:
if ipv in [ 'ipv4', 'both' ]: g1.run(ping_mod) + + # prepare PerfRepo result for tcp + result_tcp = None + result_udp = None + if tcp_ipv4_id is not None: + result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + # prepare PerfRepo result for udp + if udp_ipv4_id is not None: + result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = g1.run(netperf_srv, bg=True) ctl.wait(2) - h2.run(netperf_cli_tcp, timeout=70) - h2.run(netperf_cli_udp, timeout=70) + tcp_res_data = h2.run(netperf_cli_tcp, timeout = int(netperf_duration)*5 + 20) + udp_res_data = h2.run(netperf_cli_udp, timeout = int(netperf_duration)*5 + 20)
server_proc.intr()
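The baseline comparison in the blocks above feeds previously stored PerfRepo values back into Netperf as pass/fail thresholds. A standalone sketch of just that option construction (a plain function standing in for the `update_options` call; the numbers are illustrative):

```python
# Sketch: turn a baseline throughput/deviation pair into the option dict
# that the recipes pass to netperf_cli_*.update_options().
def threshold_options(baseline_throughput, baseline_deviation):
    return {
        'threshold': '%s bits/sec' % baseline_throughput,
        'threshold_deviation': '%s bits/sec' % baseline_deviation,
    }

opts = threshold_options(9500000000, 200000000)
print(opts['threshold'])  # formatted exactly as in the recipes
```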
+ if result_tcp is not None and\ + tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) + if ipv in [ 'ipv6', 'both' ]: g1.run(ping_mod6) + + # prepare PerfRepo result for tcp ipv6 + result_tcp = None + result_udp = None + if tcp_ipv6_id is not None: + result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result") + result_tcp.set_parameter(offload, state) + if product_name is not None: + result_tcp.set_tag(product_name) + res_hash = result_tcp.generate_hash(['kernel-release', + 'redhat-release']) + result_tcp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + + # prepare PerfRepo result for udp ipv6 + if udp_ipv6_id is not None: 
+ result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result") + result_udp.set_parameter(offload, state) + if product_name is not None: + result_udp.set_tag(product_name) + res_hash = result_udp.generate_hash(['kernel-release', + 'redhat-release']) + result_udp.set_tag(res_hash) + + baseline = None + report_id = get_id(mapping, res_hash) + if report_id is not None: + baseline = perf_api.get_baseline(report_id) + + if baseline is not None: + baseline_throughput = baseline.get_value('throughput').get_result() + baseline_deviation = baseline.get_value('throughput_deviation').get_result() + netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput, + 'threshold_deviation': '%s bits/sec' % baseline_deviation}) + server_proc = g1.run(netperf_srv6, bg=True) ctl.wait(2) - h2.run(netperf_cli_tcp6, timeout=70) - h2.run(netperf_cli_udp6, timeout=70) + tcp_res_data = h2.run(netperf_cli_tcp6, timeout = int(netperf_duration)*5 + 20) + udp_res_data = h2.run(netperf_cli_udp6, timeout = int(netperf_duration)*5 + 20)
server_proc.intr() + + if result_tcp is not None and tcp_res_data.get_result() is not None and\ + tcp_res_data.get_result()['res_data'] is not None: + rate = tcp_res_data.get_result()['res_data']['rate'] + deviation = tcp_res_data.get_result()['res_data']['rate_deviation'] + + result_tcp.add_value('throughput', rate) + result_tcp.add_value('throughput_min', rate - deviation) + result_tcp.add_value('throughput_max', rate + deviation) + result_tcp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_tcp) + + if result_udp is not None and udp_res_data.get_result() is not None and\ + udp_res_data.get_result()['res_data'] is not None: + rate = udp_res_data.get_result()['res_data']['rate'] + deviation = udp_res_data.get_result()['res_data']['rate_deviation'] + + result_udp.add_value('throughput', rate) + result_udp.add_value('throughput_min', rate - deviation) + result_udp.add_value('throughput_max', rate + deviation) + result_udp.add_value('throughput_deviation', deviation) + perf_api.save_result(result_udp) diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml index ea49082..a980bfb 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml +++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_guest.xml @@ -2,6 +2,7 @@ <define> <alias name="ipv" value="both" /> <alias name="netperf_duration" value="60" /> + <alias name="mapping_file" value="virtual_bridge_vlan_in_guest.mapping" /> </define> <network> <host id="host1"> diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README index 9ab0b5a..7910b28 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README +++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.README @@ -40,7 +40,7 @@ Host #2 description: Guest #1 description: One ethernet device Test name: - 
virtual_bridge_vlan_in_guest.py + virtual_bridge_vlan_in_host.py Test description: Ping: + count: 100 @@ -50,3 +50,33 @@ Test description: + duration: 60s + TCP_STREAM and UDP_STREAM + between guest1's NIC and host2's VLAN10 + +PerfRepo integration: + First, preparation in PerfRepo is required - you need to create Test objects + through the web interface that properly describe the individual Netperf + tests that this recipe runs. Don't forget to also add appropriate metrics. + For these Netperf tests it's always: + * throughput + * throughput_min + * throughput_max + * throughput_deviation + + After that, to enable support for PerfRepo you need to create the file + virtual_bridge_vlan_in_host.mapping and define the following id mappings: + tcp_ipv4_id -> to store ipv4 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + tcp_ipv6_id -> to store ipv6 TCP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv4_id -> to store ipv4 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + udp_ipv6_id -> to store ipv6 UDP_STREAM Netperf test results, maps to TestUid of a PerfRepo Test object + + To enable result comparison against baselines you need to create a Report in + PerfRepo that will store the baseline. Set up the Report to only contain results + with the same hash tag and then add a new mapping to the mapping file, with + this format: + <some_hash> = <report_id> + + The hash value is automatically generated during test execution and added + to each result stored in PerfRepo. To get the Report id you need to open + that report in your browser and find it in the URL. + + When running this recipe you should also define the 'product_name' alias + (e.g. RHEL7) in order to tag the result object in PerfRepo. 
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py index 0344641..fcfd303 100644 --- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py +++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.py @@ -1,9 +1,28 @@ +import logging from lnst.Controller.Task import ctl +from lnst.Controller.PerfRepoUtils import parse_id_mapping, get_id
# ------ # SETUP # ------
+mapping_file = ctl.get_alias("mapping_file") +mapping = parse_id_mapping(mapping_file) + +product_name = ctl.get_alias("product_name") + +tcp_ipv4_id = get_id(mapping, "tcp_ipv4_id") +tcp_ipv6_id = get_id(mapping, "tcp_ipv6_id") +udp_ipv4_id = get_id(mapping, "udp_ipv4_id") +udp_ipv6_id = get_id(mapping, "udp_ipv6_id") + +if tcp_ipv4_id is not None or\ + tcp_ipv6_id is not None or\ + udp_ipv4_id is not None or\ + udp_ipv6_id is not None: + perf_api = ctl.connect_PerfRepo() + logging.info("PerfRepo support enabled for this run.") +else: + logging.info("PerfRepo support disabled for this run.") + h1 = ctl.get_host("host1") g1 = ctl.get_host("guest1")
@@ -56,7 +75,8 @@ netperf_cli_tcp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("guestnic"), "duration" : netperf_duration, "testname" : "TCP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % h2.get_ip("vlan10") })
@@ -66,7 +86,8 @@ netperf_cli_udp = ctl.get_module("Netperf", "netperf_server" : g1.get_ip("guestnic"), "duration" : netperf_duration, "testname" : "UDP_STREAM", - "netperf_opts" : "-L %s" % + "confidence" : "99,5", + "netperf_opts" : "-i 5 -L %s" % h2.get_ip("vlan10") })
@@ -77,8 +98,9 @@ netperf_cli_tcp6 = ctl.get_module("Netperf", g1.get_ip("guestnic", 1), "duration" : netperf_duration, "testname" : "TCP_STREAM", + "confidence" : "99,5", "netperf_opts" : - "-L %s -6" % h2.get_ip("vlan10", 1) + "-i 5 -L %s -6" % h2.get_ip("vlan10", 1) })
 netperf_cli_udp6 = ctl.get_module("Netperf",
@@ -88,32 +110,167 @@ netperf_cli_udp6 = ctl.get_module("Netperf",
                                       g1.get_ip("guestnic", 1),
                                   "duration" : netperf_duration,
                                   "testname" : "UDP_STREAM",
+                                  "confidence" : "99,5",
                                   "netperf_opts" :
-                                      "-L %s -6" % h2.get_ip("vlan10", 1)
+                                      "-i 5 -L %s -6" % h2.get_ip("vlan10", 1)
                                   })

 ctl.wait(15)

 for offload in offloads:
     for state in ["off", "on"]:
-        g1.run("ethtool -K %s %s %s" % (g1.get_devname("guestnic"),
-                                        offload, state))
-        h1.run("ethtool -K %s %s %s" % (h1.get_devname("nic"),
-                                        offload, state))
-        h2.run("ethtool -K %s %s %s" % (h2.get_devname("nic"),
-                                        offload, state))
-        if ipv in [ 'ipv4', 'both' ]:
-            g1.run(ping_mod)
-            server_proc = g1.run(netperf_srv, bg=True)
-            ctl.wait(2)
-            h2.run(netperf_cli_tcp, timeout=70)
-            h2.run(netperf_cli_udp, timeout=70)
-            server_proc.intr()
-
-        if ipv in [ 'ipv6', 'both' ]:
-            g1.run(ping_mod6)
-            server_proc = g1.run(netperf_srv6, bg=True)
-            ctl.wait(2)
-            h2.run(netperf_cli_tcp6, timeout=70)
-            h2.run(netperf_cli_udp6, timeout=70)
-            server_proc.intr()
+        g1.run("ethtool -K %s %s %s" % (g1.get_devname("guestnic"),
+                                        offload, state))
+        h1.run("ethtool -K %s %s %s" % (h1.get_devname("nic"),
+                                        offload, state))
+        h2.run("ethtool -K %s %s %s" % (h2.get_devname("nic"),
+                                        offload, state))
+        if ipv in [ 'ipv4', 'both' ]:
+            g1.run(ping_mod)
+
+            # prepare PerfRepo result for tcp
+            result_tcp = None
+            result_udp = None
+            if tcp_ipv4_id is not None:
+                result_tcp = perf_api.new_result(tcp_ipv4_id, "tcp_ipv4_result")
+                result_tcp.set_parameter(offload, state)
+                if product_name is not None:
+                    result_tcp.set_tag(product_name)
+                res_hash = result_tcp.generate_hash(['kernel-release',
+                                                     'redhat-release'])
+                result_tcp.set_tag(res_hash)
+
+                baseline = None
+                report_id = get_id(mapping, res_hash)
+                if report_id is not None:
+                    baseline = perf_api.get_baseline(report_id)
+
+                if baseline is not None:
+                    baseline_throughput = baseline.get_value('throughput').get_result()
+                    baseline_deviation = baseline.get_value('throughput_deviation').get_result()
+                    netperf_cli_tcp.update_options({'threshold': '%s bits/sec' % baseline_throughput,
+                                                    'threshold_deviation': '%s bits/sec' % baseline_deviation})
+
+            # prepare PerfRepo result for udp
+            if udp_ipv4_id is not None:
+                result_udp = perf_api.new_result(udp_ipv4_id, "udp_ipv4_result")
+                result_udp.set_parameter(offload, state)
+                if product_name is not None:
+                    result_udp.set_tag(product_name)
+                res_hash = result_udp.generate_hash(['kernel-release',
+                                                     'redhat-release'])
+                result_udp.set_tag(res_hash)
+
+                baseline = None
+                report_id = get_id(mapping, res_hash)
+                if report_id is not None:
+                    baseline = perf_api.get_baseline(report_id)
+
+                if baseline is not None:
+                    baseline_throughput = baseline.get_value('throughput').get_result()
+                    baseline_deviation = baseline.get_value('throughput_deviation').get_result()
+                    netperf_cli_udp.update_options({'threshold': '%s bits/sec' % baseline_throughput,
+                                                    'threshold_deviation': '%s bits/sec' % baseline_deviation})
+
+            server_proc = g1.run(netperf_srv, bg=True)
+            ctl.wait(2)
+            tcp_res_data = h2.run(netperf_cli_tcp, timeout = int(netperf_duration)*5 + 20)
+            udp_res_data = h2.run(netperf_cli_udp, timeout = int(netperf_duration)*5 + 20)
+            server_proc.intr()
+
+            if result_tcp is not None and\
+               tcp_res_data.get_result() is not None and\
+               tcp_res_data.get_result()['res_data'] is not None:
+                rate = tcp_res_data.get_result()['res_data']['rate']
+                deviation = tcp_res_data.get_result()['res_data']['rate_deviation']
+
+                result_tcp.add_value('throughput', rate)
+                result_tcp.add_value('throughput_min', rate - deviation)
+                result_tcp.add_value('throughput_max', rate + deviation)
+                result_tcp.add_value('throughput_deviation', deviation)
+                perf_api.save_result(result_tcp)
+
+            if result_udp is not None and udp_res_data.get_result() is not None and\
+               udp_res_data.get_result()['res_data'] is not None:
+                rate = udp_res_data.get_result()['res_data']['rate']
+                deviation = udp_res_data.get_result()['res_data']['rate_deviation']
+
+                result_udp.add_value('throughput', rate)
+                result_udp.add_value('throughput_min', rate - deviation)
+                result_udp.add_value('throughput_max', rate + deviation)
+                result_udp.add_value('throughput_deviation', deviation)
+                perf_api.save_result(result_udp)
+
+        if ipv in [ 'ipv6', 'both' ]:
+            g1.run(ping_mod6)
+
+            # prepare PerfRepo result for tcp ipv6
+            result_tcp = None
+            result_udp = None
+            if tcp_ipv6_id is not None:
+                result_tcp = perf_api.new_result(tcp_ipv6_id, "tcp_ipv6_result")
+                result_tcp.set_parameter(offload, state)
+                if product_name is not None:
+                    result_tcp.set_tag(product_name)
+                res_hash = result_tcp.generate_hash(['kernel-release',
+                                                     'redhat-release'])
+                result_tcp.set_tag(res_hash)
+
+                baseline = None
+                report_id = get_id(mapping, res_hash)
+                if report_id is not None:
+                    baseline = perf_api.get_baseline(report_id)
+
+                if baseline is not None:
+                    baseline_throughput = baseline.get_value('throughput').get_result()
+                    baseline_deviation = baseline.get_value('throughput_deviation').get_result()
+                    netperf_cli_tcp6.update_options({'threshold': '%s bits/sec' % baseline_throughput,
+                                                     'threshold_deviation': '%s bits/sec' % baseline_deviation})
+
+            # prepare PerfRepo result for udp ipv6
+            if udp_ipv6_id is not None:
+                result_udp = perf_api.new_result(udp_ipv6_id, "udp_ipv6_result")
+                result_udp.set_parameter(offload, state)
+                if product_name is not None:
+                    result_udp.set_tag(product_name)
+                res_hash = result_udp.generate_hash(['kernel-release',
+                                                     'redhat-release'])
+                result_udp.set_tag(res_hash)
+
+                baseline = None
+                report_id = get_id(mapping, res_hash)
+                if report_id is not None:
+                    baseline = perf_api.get_baseline(report_id)
+
+                if baseline is not None:
+                    baseline_throughput = baseline.get_value('throughput').get_result()
+                    baseline_deviation = baseline.get_value('throughput_deviation').get_result()
+                    netperf_cli_udp6.update_options({'threshold': '%s bits/sec' % baseline_throughput,
+                                                     'threshold_deviation': '%s bits/sec' % baseline_deviation})
+
+            server_proc = g1.run(netperf_srv6, bg=True)
+            ctl.wait(2)
+            tcp_res_data = h2.run(netperf_cli_tcp6, timeout = int(netperf_duration)*5 + 20)
+            udp_res_data = h2.run(netperf_cli_udp6, timeout = int(netperf_duration)*5 + 20)
+            server_proc.intr()
+
+            if result_tcp is not None and tcp_res_data.get_result() is not None and\
+               tcp_res_data.get_result()['res_data'] is not None:
+                rate = tcp_res_data.get_result()['res_data']['rate']
+                deviation = tcp_res_data.get_result()['res_data']['rate_deviation']
+
+                result_tcp.add_value('throughput', rate)
+                result_tcp.add_value('throughput_min', rate - deviation)
+                result_tcp.add_value('throughput_max', rate + deviation)
+                result_tcp.add_value('throughput_deviation', deviation)
+                perf_api.save_result(result_tcp)
+
+            if result_udp is not None and udp_res_data.get_result() is not None and\
+               udp_res_data.get_result()['res_data'] is not None:
+                rate = udp_res_data.get_result()['res_data']['rate']
+                deviation = udp_res_data.get_result()['res_data']['rate_deviation']
+
+                result_udp.add_value('throughput', rate)
+                result_udp.add_value('throughput_min', rate - deviation)
+                result_udp.add_value('throughput_max', rate + deviation)
+                result_udp.add_value('throughput_deviation', deviation)
+                perf_api.save_result(result_udp)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml
index 3649ca9..fe79146 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml
+++ b/recipes/regression_tests/phase1/virtual_bridge_vlan_in_host.xml
@@ -2,6 +2,7 @@
     <define>
         <alias name="ipv" value="both" />
         <alias name="netperf_duration" value="60" />
+        <alias name="mapping_file" value="virtual_bridge_vlan_in_host.mapping" />
     </define>
     <network>
         <host id="host1">
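The hunks above repeat the same two calculations four times (TCP/UDP x IPv4/IPv6): turning a PerfRepo baseline into Netperf threshold options, and sizing the run timeout to cover the confidence reruns requested by "-i 5". A minimal sketch of both, assuming the option names used by the recipe; the helper functions themselves are illustrative, not part of the patch:

```python
def threshold_opts(baseline_throughput, baseline_deviation):
    # Option dict in the shape the recipe passes to update_options();
    # the baseline values are throughput figures in bits/sec.
    return {
        "threshold": "%s bits/sec" % baseline_throughput,
        "threshold_deviation": "%s bits/sec" % baseline_deviation,
    }

def run_timeout(netperf_duration):
    # With "-i 5" netperf may rerun the measurement up to 5 times to
    # reach the requested 99,5 confidence, so the timeout is 5x the
    # duration plus a fixed 20s of slack.
    return int(netperf_duration) * 5 + 20

print(threshold_opts(9000, 250))
print(run_timeout("60"))
```

With the recipe's default `netperf_duration` of 60, this yields a 320-second timeout per Netperf run instead of the previous fixed 70.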
Ack to series.
Acked-by: Jan Tluka jtluka@redhat.com
Mon, Jul 27, 2015 at 02:06:48PM CEST, olichtne@redhat.com wrote:
LNST-developers mailing list LNST-developers@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/lnst-developers