Hello everyone,
this small patchset modifies the LNST Netperf module. It is a minor refactoring of the module which does not change existing functionality, but it also adds support for RR tests. Since this module is used for our ENRT testing, I'd like to ask you to review it carefully so we don't affect our PerfRepo objects.
Support for RR tests consists of these items:
* added the THROUGHPUT_UNITS omni output selector
* parsing of the unit based on this selector
* when running a STREAM test, the output is kept the same as in the previous version (rate in bit/s, unit called bps)
* when running an RR test, the rate is in Trans/sec and the unit is called tps
I've run TCP_STREAM, UDP_STREAM, TCP_RR and TCP_CRR locally, and I still need to run the SCTP tests in Beaker to cover the full suite of tests we currently use. The pastebins below contain the test recipe, the test task and the debug log from a run with this patchset. Once I have the results from the SCTP tests, I'll post them as well.
https://pastebin.com/bjFLs1Nj rr_test.xml
https://pastebin.com/jeCBGsrf rr_test.py
https://pastebin.com/W63GCGVj log.debug
Jiri Prochazka (3):
  test_modules/netperf: use bit/s in STREAM tests and parse RR tests
  test_modules/netperf: split _pretty_rate method in two and add support for RR tests
  test_modules/netperf: add TCP_CRR to supported and omni tests
 test_modules/Netperf.py | 107 +++++++++++++++++++++++++++++-------------------
 1 file changed, 66 insertions(+), 41 deletions(-)
We used kbit/s as the output rate for STREAM tests and then converted it to bit/s. This patch changes that, so the output rate is in bit/s for STREAM tests and Trans/s for RR tests.

It also adds the THROUGHPUT_UNITS omni output selector for omni tests and adds support for parsing both units, bps and tps.
Signed-off-by: Jiri Prochazka <jprochaz@redhat.com>
---
 test_modules/Netperf.py | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 5a0e989..fbc4885 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -68,7 +68,12 @@ class Netperf(TestGeneric):
         composes commands for netperf and netserver based on xml recipe
         """
         if self._role == "client":
-            cmd = "netperf -H %s -f k" % self._netperf_server
+            # for request response test transactions per seconds are used as unit
+            if "RR" in self._testname:
+                cmd = "netperf -H %s -f x" % self._netperf_server
+            # else 10^0bits/s are used as unit
+            else:
+                cmd = "netperf -H %s -f b" % self._netperf_server
             if self._is_omni():
                 # -P 0 disables banner header of output
                 cmd += " -P 0"
@@ -134,7 +139,8 @@ class Netperf(TestGeneric):
             # Print only relevant output
             if self._is_omni():
-                cmd += ' -- -k "THROUGHPUT, LOCAL_CPU_UTIL, REMOTE_CPU_UTIL, '\
+                cmd += ' -- -k "THROUGHPUT, THROUGHPUT_UNITS, '\
+                       'LOCAL_CPU_UTIL, REMOTE_CPU_UTIL, '\
                        'CONFIDENCE_LEVEL, THROUGHPUT_CONFID, LOCAL_SEND_SIZE, '\
                        'REMOTE_RECV_SIZE, LOCAL_SEND_THROUGHPUT, '\
                        'REMOTE_RECV_THROUGHPUT, LOCAL_CPU_PEAK_UTIL, '\
@@ -195,13 +201,20 @@ class Netperf(TestGeneric):
         pattern_throughput = "THROUGHPUT=(\d+.\d+)"
         throughput = re.search(pattern_throughput, output)

+        pattern_throughput_units = "THROUGHPUT_UNITS=(.*)"
+        throughput_units = re.search(pattern_throughput_units, output).group(1)
+
         if throughput is None:
-            rate_in_kb = 0.0
+            rate = 0.0
         else:
-            rate_in_kb = float(throughput.group(1))
+            rate = float(throughput.group(1))

-        res_val["rate"] = rate_in_kb*1000
-        res_val["unit"] = "bps"
+        if throughput_units == "10^0bits/s":
+            res_val["unit"] = "bps"
+        elif throughput_units == "Trans/s":
+            res_val["unit"] = "tps"
+
+        res_val["rate"] = rate

         if self._cpu_util is not None:
             if self._cpu_util == "local" or self._cpu_util == "both":
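Stripped of the module plumbing, the unit parsing this patch adds can be sketched as a stand-alone function. The function name below is made up for illustration; only the regex and unit-mapping logic mirror the patch (with the dot in the rate pattern escaped):

```python
import re

# Illustrative stand-alone sketch of the parsing added by this patch;
# the function name is hypothetical, not the module's actual method.
def parse_omni_throughput(output):
    res_val = {}
    throughput = re.search(r"THROUGHPUT=(\d+\.\d+)", output)
    # THROUGHPUT_UNITS is the new omni output selector requested via -k
    units = re.search(r"THROUGHPUT_UNITS=(.*)", output).group(1)

    rate = float(throughput.group(1)) if throughput else 0.0

    # STREAM tests run with -f b and report "10^0bits/s" -> bps,
    # RR tests run with -f x and report "Trans/s" -> tps
    if units == "10^0bits/s":
        res_val["unit"] = "bps"
    elif units == "Trans/s":
        res_val["unit"] = "tps"
    res_val["rate"] = rate
    return res_val
```

This keeps the STREAM output identical to the previous behaviour (rate in bit/s, unit "bps") while letting RR tests report transactions per second.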
Thu, Apr 20, 2017 at 05:21:45PM CEST, jprochaz@redhat.com wrote:
We used kbit/s as output rate for STREAM tests, then we converted it to bit/s. This patch changes that so the output rate is in bit/s for STREAM tests and Trans/s for RR tests.
Unfortunately there's a reason behind kbit/s.
Ondrej's patch comment (commit 831b1861f2054d02efd12e5f9d9edee34d57746a):
* The client measures speeds in kilobits by default, previously this wasn't set so whatever Netperf deemed appropriate was used. This helps with result parsing.
If you don't specify it, netperf reports in whatever units it deems appropriate. I know that you're now parsing the omni output that includes the throughput units, but that would not work for non-omni tests.
-Jan
It also adds THROUGHPUT_UNITS omni output selector for omni tests and adds support for parsing units - both bps and tps
Signed-off-by: Jiri Prochazka jprochaz@redhat.com
test_modules/Netperf.py | 25 +++++++++++++++++++------ 1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 5a0e989..fbc4885 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -68,7 +68,12 @@ class Netperf(TestGeneric):
        composes commands for netperf and netserver based on xml recipe
        """
        if self._role == "client":
cmd = "netperf -H %s -f k" % self._netperf_server
# for request response test transactions per seconds are used as unit
^^^ beware! tab/space mixing!
if "RR" in self._testname:
cmd = "netperf -H %s -f x" % self._netperf_server
# else 10^0bits/s are used as unit
^^^ beware! tab/space mixing!
else:
cmd = "netperf -H %s -f b" % self._netperf_server
if self._is_omni():
    # -P 0 disables banner header of output
    cmd += " -P 0"
[snip]
-- 2.9.3 _______________________________________________ LNST-developers mailing list -- lnst-developers@lists.fedorahosted.org To unsubscribe send an email to lnst-developers-leave@lists.fedorahosted.org
2017-04-20 18:38 GMT+02:00 Jan Tluka jtluka@redhat.com:
Thu, Apr 20, 2017 at 05:21:45PM CEST, jprochaz@redhat.com wrote:
We used kbit/s as output rate for STREAM tests, then we converted it to bit/s. This patch changes that so the output rate is in bit/s for STREAM tests and Trans/s for RR tests.
Unfortunately there's a reason behind kbit/s.
Ondrej's patch comment (commit 831b1861f2054d02efd12e5f9d9edee34d57746a):
* The client measures speeds in kilobits by default, previously this wasn't set so whatever Netperf deemed appropriate was used. This helps with result parsing.
If you don't specify it netperf would report in units it thinks are appropriate. I know that you're now parsing the OMNI output that includes the tput units but that would not work for non-omni tests.
-Jan
Yes, but Ondrej's patch uses -f k, so the rate is measured in kbit/s, and then it is converted to bits/s in the code.
In my patch, for STREAM tests, -f b is used, so the rate is measured in bit/s and no further conversion is needed. And this does work for both omni and non-omni tests.
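The flag selection being discussed boils down to a small branch; here is a hypothetical free-standing version of the patch's client-command logic (function name and stand-alone form are illustrative):

```python
# Hypothetical stand-alone version of the patch's client branch:
# netperf -f x reports Trans/s, netperf -f b reports 10^0 bits/s.
def compose_client_cmd(testname, server):
    if "RR" in testname:
        # request/response tests: transactions per second
        return "netperf -H %s -f x" % server
    # stream tests: plain bits per second, so no kbit->bit conversion needed
    return "netperf -H %s -f b" % server
```

Because the unit is forced on the command line, the reported rate is deterministic for both omni and non-omni runs.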
[snip]
Fri, Apr 21, 2017 at 12:53:52PM CEST, jprochaz@redhat.com wrote:
2017-04-20 18:38 GMT+02:00 Jan Tluka jtluka@redhat.com:
Thu, Apr 20, 2017 at 05:21:45PM CEST, jprochaz@redhat.com wrote:
We used kbit/s as output rate for STREAM tests, then we converted it to bit/s. This patch changes that so the output rate is in bit/s for STREAM tests and Trans/s for RR tests.
Unfortunately there's a reason behind kbit/s.
Ondrej's patch comment (commit 831b1861f2054d02efd12e5f9d9edee34d57746a):
* The client measures speeds in kilobits by default, previously this wasn't set so whatever Netperf deemed appropriate was used. This helps with result parsing.
If you don't specify it netperf would report in units it thinks are appropriate. I know that you're now parsing the OMNI output that includes the tput units but that would not work for non-omni tests.
-Jan
Yes, but Ondrej's patch uses -f k, so the rate is measured in kbit/s, and then it is converted to bits/s in the code.
In my patch, for STREAM tests, -f b is used, so the rate is measured in bit/s and no further conversion is needed. And this does work for both omni and non-omni tests.
Oh, I did not know that '-f b' is possible. Then it sounds good.
-Jan
This method had two uses. When no unit param was specified, it converted the rate to the highest pretty unit. When a unit param was specified, it converted the deviation rate to the same unit as the rate.

This patch splits the method in two:
- _pretty_rate takes only the rate as a param and converts it to the highest pretty unit. For RR tests it doesn't modify the rate and uses Trans/sec as the unit.
- _pretty_dev_rate takes two arguments, rate (the deviation) and unit (equal to the unit of the throughput rate), and converts the deviation rate to the same unit as the throughput rate. For RR tests it doesn't change anything.

Occurrences in the code have been updated to match the new layout of the functions.
Signed-off-by: Jiri Prochazka <jprochaz@redhat.com>
---
 test_modules/Netperf.py | 78 ++++++++++++++++++++++++++++---------------------
 1 file changed, 45 insertions(+), 33 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index fbc4885..b9227eb 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -392,9 +392,13 @@ class Netperf(TestGeneric):
                 if e.errno == errno.EINTR:
                     server.kill()

-    def _pretty_rate(self, rate, unit=None):
+    def _pretty_rate(self, rate):
         pretty_rate = {}
-        if unit is None:
+
+        if "RR" in self._testname:
+            pretty_rate["unit"] = "Trans/sec"
+            pretty_rate["rate"] = rate
+        else:
             if rate < 1000:
                 pretty_rate["unit"] = "bits/sec"
                 pretty_rate["rate"] = rate
@@ -410,34 +414,42 @@ class Netperf(TestGeneric):
             elif rate < 1000 * 1000 * 1000 * 1000 * 1000:
                 pretty_rate["unit"] = "tbits/sec"
                 pretty_rate["rate"] = rate / (1000 * 1000 * 1000 * 1000)
-        else:
-            if unit == "bits/sec":
-                pretty_rate["unit"] = "bits/sec"
-                pretty_rate["rate"] = rate
-            elif unit == "Kbits/sec":
-                pretty_rate["unit"] = "Kbits/sec"
-                pretty_rate["rate"] = rate / 1024
-            elif unit == "kbits/sec":
-                pretty_rate["unit"] = "kbits/sec"
-                pretty_rate["rate"] = rate / 1000
-            elif unit == "Mbits/sec":
-                pretty_rate["unit"] = "Mbits/sec"
-                pretty_rate["rate"] = rate / (1024 * 1024)
-            elif unit == "mbits/sec":
-                pretty_rate["unit"] = "mbits/sec"
-                pretty_rate["rate"] = rate / (1000 * 1000)
-            elif unit == "Gbits/sec":
-                pretty_rate["unit"] = "Gbits/sec"
-                pretty_rate["rate"] = rate / (1024 * 1024 * 1024)
-            elif unit == "gbits/sec":
-                pretty_rate["unit"] = "gbits/sec"
-                pretty_rate["rate"] = rate / (1000 * 1000 * 1000)
-            elif unit == "Tbits/sec":
-                pretty_rate["unit"] = "Tbits/sec"
-                pretty_rate["rate"] = rate / (1024 * 1024 * 1024 * 1024)
-            elif unit == "tbits/sec":
-                pretty_rate["unit"] = "tbits/sec"
-                pretty_rate["rate"] = rate / (1000 * 1000 * 1000 * 1000)
+
+        return pretty_rate
+
+    def _pretty_dev_rate(self, rate, unit):
+        pretty_rate = {}
+
+        if unit == "bits/sec":
+            pretty_rate["unit"] = "bits/sec"
+            pretty_rate["rate"] = rate
+        elif unit == "Kbits/sec":
+            pretty_rate["unit"] = "Kbits/sec"
+            pretty_rate["rate"] = rate / 1024
+        elif unit == "kbits/sec":
+            pretty_rate["unit"] = "kbits/sec"
+            pretty_rate["rate"] = rate / 1000
+        elif unit == "Mbits/sec":
+            pretty_rate["unit"] = "Mbits/sec"
+            pretty_rate["rate"] = rate / (1024 * 1024)
+        elif unit == "mbits/sec":
+            pretty_rate["unit"] = "mbits/sec"
+            pretty_rate["rate"] = rate / (1000 * 1000)
+        elif unit == "Gbits/sec":
+            pretty_rate["unit"] = "Gbits/sec"
+            pretty_rate["rate"] = rate / (1024 * 1024 * 1024)
+        elif unit == "gbits/sec":
+            pretty_rate["unit"] = "gbits/sec"
+            pretty_rate["rate"] = rate / (1000 * 1000 * 1000)
+        elif unit == "Tbits/sec":
+            pretty_rate["unit"] = "Tbits/sec"
+            pretty_rate["rate"] = rate / (1024 * 1024 * 1024 * 1024)
+        elif unit == "tbits/sec":
+            pretty_rate["unit"] = "tbits/sec"
+            pretty_rate["rate"] = rate / (1000 * 1000 * 1000 * 1000)
+        elif unit == "Trans/sec":
+            pretty_rate["unit"] = "Trans/sec"
+            pretty_rate["rate"] = rate

         return pretty_rate

@@ -505,7 +517,7 @@ class Netperf(TestGeneric):
         res_data["rate_deviation"] = rate_deviation

         rate_pretty = self._pretty_rate(rate)
-        rate_dev_pretty = self._pretty_rate(rate_deviation, unit=rate_pretty["unit"])
+        rate_dev_pretty = self._pretty_dev_rate(rate_deviation, unit=rate_pretty["unit"])

         if rv != 0 and self._runs == 1:
             res_data["msg"] = "Could not get performance throughput!"
@@ -543,7 +555,7 @@ class Netperf(TestGeneric):
                     return (res_val, res_data)
             elif self._max_deviation["type"] == "absolute":
                 if rate_deviation > self._max_deviation["value"]["rate"]:
-                    pretty_deviation = self._pretty_rate(self._max_deviation["value"]["rate"])
+                    pretty_deviation = self._pretty_dev_rate(self._max_deviation["value"]["rate"])
                     res_val = False
                     res_data["msg"] = "Measured rate %.2f +-%.2f %s has bigger "\
                                       "deviation than allowed (+-%.2f %s)" %\
@@ -558,7 +570,7 @@ class Netperf(TestGeneric):
                                        rate + rate_deviation)

         threshold_pretty = self._pretty_rate(self._threshold["rate"])
-        threshold_dev_pretty = self._pretty_rate(self._threshold_deviation["rate"],
+        threshold_dev_pretty = self._pretty_dev_rate(self._threshold_deviation["rate"],
                                                  unit = threshold_pretty["unit"])

         if self._threshold_interval[0] > result_interval[1]:
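Condensed, the split keeps unit selection and unit conversion apart. The table-driven sketch below is an assumed simplification (the module itself uses explicit if/elif chains and both 1024- and 1000-based unit spellings; only the 1000-based ones are shown here):

```python
# Assumed simplification of the split: pretty_rate picks the highest
# readable unit, pretty_dev_rate converts a deviation into a unit that
# was already chosen for the main rate.
_DIVISORS = {"bits/sec": 1, "kbits/sec": 1000, "mbits/sec": 1000**2,
             "gbits/sec": 1000**3, "tbits/sec": 1000**4, "Trans/sec": 1}

def pretty_rate(rate, testname):
    if "RR" in testname:
        # RR rates are transactions/sec and are never rescaled
        return {"unit": "Trans/sec", "rate": rate}
    for unit in ["bits/sec", "kbits/sec", "mbits/sec", "gbits/sec"]:
        if rate < _DIVISORS[unit] * 1000:
            return {"unit": unit, "rate": rate / _DIVISORS[unit]}
    return {"unit": "tbits/sec", "rate": rate / _DIVISORS["tbits/sec"]}

def pretty_dev_rate(rate, unit):
    # the deviation is expressed in the unit chosen for the main rate
    return {"unit": unit, "rate": rate / _DIVISORS[unit]}
```

Having two names makes the call sites self-documenting: one picks a unit, the other follows a unit picked elsewhere.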
Thu, Apr 20, 2017 at 05:21:46PM CEST, jprochaz@redhat.com wrote:
This method had two uses. When no param unit was specified it converted the rate to highest pretty unit. When param unit was specified, it converted deviation rate to the same unit as the rate.
This patch splits these methods in two
Why do you need to split it? My guess is that it's related to previous patch and expectation that you'll have bit/s units by default.
-Jan
- _pretty_rate takes only rate as param and converts it to highest pretty unit.
For RR tests it doesn't modify the rate and uses Trans/sec as unit.
- _pretty_dev_rate takes two arguments, rate (deviation) and unit (which
is equal to unit of the rate (throughput)) and converts the deviation rate to the same unit as the throughput rate. For RR tests it doesn't change anything.
Occurrences in the code have been updated to match the new layout of the functions.
[snip]
Thu, Apr 20, 2017 at 06:43:02PM CEST, jtluka@redhat.com wrote:
Thu, Apr 20, 2017 at 05:21:46PM CEST, jprochaz@redhat.com wrote:
This method had two uses. When no param unit was specified it converted the rate to highest pretty unit. When param unit was specified, it converted deviation rate to the same unit as the rate.
This patch splits these methods in two
Why do you need to split it? My guess is that it's related to previous patch and expectation that you'll have bit/s units by default.
-Jan
If it was not obvious, I'd like to keep just one method for this.
-Jan
2017-04-20 18:56 GMT+02:00 Jan Tluka jtluka@redhat.com:
Thu, Apr 20, 2017 at 06:43:02PM CEST, jtluka@redhat.com wrote:
Thu, Apr 20, 2017 at 05:21:46PM CEST, jprochaz@redhat.com wrote:
This method had two uses. When no param unit was specified it converted the rate to highest pretty unit. When param unit was specified, it converted deviation rate to the same unit as the rate.
This patch splits these methods in two
Why do you need to split it? My guess is that it's related to previous patch and expectation that you'll have bit/s units by default.
-Jan
If it was not obvious I'd like to keep just one method for this.
-Jan
IMHO, with only one method the way it is now, it's not very clear what it does, since it does two separate things under the same name.
This test will be used in the short-lived connections test. Since all Request/Response tests will use parsing of the omni output, the test needs to be added to the list of tests with omni output support.
Signed-off-by: Jiri Prochazka <jprochaz@redhat.com>
---
 test_modules/Netperf.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index b9227eb..8c170cb 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -16,9 +16,9 @@ from lnst.Common.Utils import std_deviation, is_installed, int_it

 class Netperf(TestGeneric):

     supported_tests = ["TCP_STREAM", "TCP_RR", "UDP_STREAM", "UDP_RR",
-                       "SCTP_STREAM", "SCTP_STREAM_MANY", "SCTP_RR"]
+                       "SCTP_STREAM", "SCTP_STREAM_MANY", "SCTP_RR", "TCP_CRR"]

-    omni_tests = ["TCP_STREAM", "TCP_RR", "UDP_STREAM", "UDP_RR"]
+    omni_tests = ["TCP_STREAM", "UDP_STREAM", "TCP_RR", "UDP_RR", "TCP_CRR"]

     def __init__(self, command):
         super(Netperf, self).__init__(command)
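For reference, the effect of this patch is pure list membership. A minimal sketch of how the module might gate on these lists (the is_omni helper below is illustrative, not the module's actual method):

```python
# Updated lists from the patch; TCP_CRR is now present in both.
supported_tests = ["TCP_STREAM", "TCP_RR", "UDP_STREAM", "UDP_RR",
                   "SCTP_STREAM", "SCTP_STREAM_MANY", "SCTP_RR", "TCP_CRR"]
omni_tests = ["TCP_STREAM", "UDP_STREAM", "TCP_RR", "UDP_RR", "TCP_CRR"]

def is_omni(testname):
    # illustrative gate: only omni tests get the -k selector output parsing
    return testname in omni_tests
```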
Found a bug in SCTP_STREAM parsing; I will send a v2 with the fix, plus fixed tab/space mixing in one of the patches (thanks to Jan Tluka for noticing).
2017-04-20 17:21 GMT+02:00 Jiri Prochazka jprochaz@redhat.com:
[snip]