From: Ondrej Lichtner olichtne@redhat.com
Hi all,
this patch set introduces a new subpackage of the lnst.RecipeCommon.Perf package, called Evaluators. It will hold classes that implement evaluation logic over the results generated by the various Perf.Measurements.
For now I implemented just a basic NonZero evaluation for flow measurements and a Baseline evaluation for CPU and Flow measurements.
The baseline evaluators, however, don't implement any logic for actually retrieving the baselines to compare against. For our use case within RH we'll be using a database that stores these results, so we'll have some lnst extension code that connects to this database and looks up the correct data to use as a baseline. Since the specific database application isn't upstream (yet?), I won't include that code in the upstream lnst repository either, so it doesn't confuse others.
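To make the extension point concrete, here's a minimal sketch of how downstream code could plug a baseline lookup into such an evaluator. The DatabaseBaselineEvaluator name and the dict-backed store are purely illustrative, not part of upstream lnst:

```python
# Illustrative only: upstream provides a get_baseline() hook that returns
# None by default; an extension can override it to query its own storage.

class BaselineEvaluator(object):
    def get_baseline(self, recipe, result):
        # upstream default: no baseline available
        return None


class DatabaseBaselineEvaluator(BaselineEvaluator):
    """Hypothetical downstream subclass backed by some result store."""
    def __init__(self, store):
        self._store = store  # e.g. a DB connection; a dict for this sketch

    def get_baseline(self, recipe, result):
        # look up a previously recorded result for the same measurement key
        return self._store.get(result)


store = {"tcp_stream_ipv4": 9.2e9}  # made-up key and throughput value
evaluator = DatabaseBaselineEvaluator(store)
print(evaluator.get_baseline(None, "tcp_stream_ipv4"))  # -> 9200000000.0
```

The upstream default returning None means the evaluator reports a FAIL with "no baseline found" rather than crashing when no extension is installed.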
-Ondrej
Ondrej Lichtner (12):
  RecipeCommon.Perf.Recipe: add support for registering evaluators
  RecipeCommon.Perf.Recipe: order results
  RecipeCommon.Perf.Recipe: remove unused variable
  lnst.Controller.Host: fix namespace property
  lnst.Devices.Device: don't raise exception when coalescing unsupported
  lnst.Controller.Namespace: add hostname property
  lnst.RecipeCommon.Perf.Measurements: add BaseMeasurementResults class
  lnst.RecipeCommon.Perf.Measurements: add name and version properties
  add lnst.RecipeCommon.Perf.Evaluators package
  lnst.Recipes.ENRT.BaseEnrtRecipe: register measurement evaluators
  lnst.Recipes.ENRT.BaseEnrtRecipe: remove unused code
  gitignore: add build/ directory that contains locally byte-compiled code
 .gitignore                                    |  1 +
 lnst/Controller/Host.py                       |  2 +-
 lnst/Controller/Namespace.py                  |  4 ++
 lnst/Devices/Device.py                        |  5 +-
 .../Perf/Evaluators/BaseEvaluator.py          |  3 +
 .../Evaluators/BaselineCPUAverageEvaluator.py | 36 +++++++++++
 .../BaselineFlowAverageEvaluator.py           | 57 +++++++++++++++++
 .../Perf/Evaluators/EvaluationError.py        |  4 ++
 .../Perf/Evaluators/NonzeroFlowEvaluator.py   | 26 ++++++++
 lnst/RecipeCommon/Perf/Evaluators/__init__.py |  4 ++
 .../Perf/Measurements/BaseCPUMeasurement.py   | 31 +++------
 .../Perf/Measurements/BaseFlowMeasurement.py  | 24 +++----
 .../Perf/Measurements/BaseMeasurement.py      | 22 +++++--
 .../Perf/Measurements/IperfFlowMeasurement.py | 29 ++++++++-
 .../Perf/Measurements/StatCPUMeasurement.py   |  6 +-
 .../Perf/Measurements/TRexFlowMeasurement.py  |  2 +-
 lnst/RecipeCommon/Perf/Recipe.py              | 28 ++++++++-
 lnst/Recipes/ENRT/BaseEnrtRecipe.py           | 62 +++++++++++--------
 18 files changed, 272 insertions(+), 74 deletions(-)
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaseEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaselineCPUAverageEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaselineFlowAverageEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/EvaluationError.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/NonzeroFlowEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/__init__.py
From: Ondrej Lichtner olichtne@redhat.com
The Perf.RecipeConf class now tracks a list of evaluators associated with a given measurement and allows registering them.
At the same time, the Perf.Recipe class will now call these evaluators from the evaluate_results method or log that no evaluators were configured.
This commit also removes the now-unused evaluate_results classmethod of the BaseMeasurement class and its derived classes.
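The registration and dispatch flow described above can be sketched roughly like this (a standalone approximation of the patch, with a stub evaluator standing in for the real measurement and evaluator classes):

```python
import logging


class RecipeConf(object):
    """Sketch of the evaluator registry added to Perf.RecipeConf."""
    def __init__(self, measurements, iterations):
        self._measurements = measurements
        self._evaluators = dict()
        self._iterations = iterations

    @property
    def measurements(self):
        return self._measurements

    @property
    def evaluators(self):
        # return a copy so callers can't mutate the registry directly
        return dict(self._evaluators)

    def register_evaluators(self, measurement, evaluators):
        if measurement not in self.measurements:
            raise ValueError("Can't register evaluators for an unknown measurement")
        self._evaluators[measurement] = list(evaluators)


class CountingEvaluator(object):
    """Stub standing in for a real BaseEvaluator subclass."""
    def __init__(self):
        self.calls = 0

    def evaluate_results(self, recipe, results):
        self.calls += 1


measurement = object()  # stands in for a BaseMeasurement instance
conf = RecipeConf([measurement], iterations=1)
evaluator = CountingEvaluator()
conf.register_evaluators(measurement, [evaluator])

# the evaluate_results dispatch loop, as in Perf.Recipe
for m, results in {measurement: []}.items():
    registered = conf.evaluators.get(m, [])
    for ev in registered:
        ev.evaluate_results(None, results)
    if not registered:
        logging.debug("No evaluator registered for measurement %s", m)

print(evaluator.calls)  # -> 1
```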
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 .../Perf/Evaluators/EvaluationError.py        |  4 ++++
 .../Perf/Measurements/BaseCPUMeasurement.py   | 11 ---------
 .../Perf/Measurements/BaseFlowMeasurement.py  |  5 ----
 .../Perf/Measurements/BaseMeasurement.py      |  5 ----
 lnst/RecipeCommon/Perf/Recipe.py              | 24 ++++++++++++++++++-
 5 files changed, 27 insertions(+), 22 deletions(-)
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/EvaluationError.py
diff --git a/lnst/RecipeCommon/Perf/Evaluators/EvaluationError.py b/lnst/RecipeCommon/Perf/Evaluators/EvaluationError.py
new file mode 100644
index 0000000..40aa294
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/EvaluationError.py
@@ -0,0 +1,4 @@
+from lnst.Common.LnstError import LnstError
+
+class EvaluationError(LnstError):
+    pass
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
index f81e8df..fc47091 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
@@ -61,17 +61,6 @@ class BaseCPUMeasurement(BaseMeasurement):
         for host_results in results_by_host.values():
             cls._report_host_results(recipe, host_results)
 
-    @classmethod
-    def evaluate_results(cls, recipe, results):
-        #TODO split off into a separate evaluator class
-        hosts = []
-        for result in results:
-            if result.host.hostid not in hosts:
-                hosts.append(result.host.hostid)
-        recipe.add_result(True,
-            "CPU evaluation for results from hosts {} not implemented"
-            .format(hosts))
-
     @classmethod
     def _divide_results_by_host(cls, results):
         results_by_host = {}
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index eda36a9..aba0e02 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -156,11 +156,6 @@ class BaseFlowMeasurement(BaseMeasurement):
         for flow_results in results:
             cls._report_flow_results(recipe, flow_results)
 
-    @classmethod
-    def evaluate_results(cls, recipe, results):
-        #TODO split off into a separate evaluator class
-        recipe.add_result(True, "Flow result evaluation not implemented")
-
     @classmethod
     def _report_flow_results(cls, recipe, flow_results):
         generator = flow_results.generator_results
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
index 8059308..782ffa5 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
@@ -19,11 +19,6 @@ class BaseMeasurement(object):
     def report_results(recipe, results):
         raise NotImplementedError()
 
-    @classmethod
-    def evaluate_results(recipe, results):
-        #TODO split off into separate evaluator classes
-        raise NotImplementedError()
-
     @classmethod
     def aggregate_results(first, second):
         raise NotImplementedError()
diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py
index e305310..13fc35b 100644
--- a/lnst/RecipeCommon/Perf/Recipe.py
+++ b/lnst/RecipeCommon/Perf/Recipe.py
@@ -1,3 +1,6 @@
+import logging
+
+from lnst.Common.LnstError import LnstError
 from lnst.Controller.Recipe import BaseRecipe
 from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
 from lnst.RecipeCommon.Perf.Results import ParallelPerfResult
@@ -5,12 +8,23 @@ from lnst.RecipeCommon.Perf.Results import ParallelPerfResult
 class RecipeConf(object):
     def __init__(self, measurements, iterations):
         self._measurements = measurements
+        self._evaluators = dict()
         self._iterations = iterations
 
     @property
     def measurements(self):
         return self._measurements
 
+    @property
+    def evaluators(self):
+        return dict(self._evaluators)
+
+    def register_evaluators(self, measurement, evaluators):
+        if measurement not in self.measurements:
+            raise LnstError("Can't register evaluators for an unknown measurement")
+
+        self._evaluators[measurement] = list(evaluators)
+
     @property
     def iterations(self):
         return self._iterations
@@ -69,5 +83,13 @@ class Recipe(BaseRecipe):
             self.add_result(False, "No results available to evaluate.")
             return
 
+        perf_conf = recipe_results.perf_conf
+
         for measurement, results in recipe_results.results.items():
-            measurement.evaluate_results(self, results)
+            evaluators = perf_conf.evaluators.get(measurement, [])
+            for evaluator in evaluators:
+                evaluator.evaluate_results(self, results)
+
+            if len(evaluators) == 0:
+                logging.debug("No evaluator registered for measurement {}"
+                        .format(measurement))
From: Ondrej Lichtner olichtne@redhat.com
Store measurement results in an OrderedDict so that iteration during reporting or evaluation is deterministic between recipe executions.
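The guarantee this buys can be shown in a couple of lines: OrderedDict iteration always follows insertion order, so reports and evaluations come out in the order the measurements were stored (plain dicts only gained this guarantee in Python 3.7, which is presumably why OrderedDict is used explicitly here):

```python
from collections import OrderedDict

results = OrderedDict()
results["cpu_measurement"] = ["sample-a"]
results["flow_measurement"] = ["sample-b"]

# iteration (and therefore reporting/evaluation) order is deterministic:
# always insertion order, on every execution
print(list(results))  # -> ['cpu_measurement', 'flow_measurement']
```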
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/Perf/Recipe.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py
index 13fc35b..e4d9d42 100644
--- a/lnst/RecipeCommon/Perf/Recipe.py
+++ b/lnst/RecipeCommon/Perf/Recipe.py
@@ -1,4 +1,5 @@
 import logging
+from collections import OrderedDict
 
 from lnst.Common.LnstError import LnstError
 from lnst.Controller.Recipe import BaseRecipe
@@ -32,7 +33,7 @@ class RecipeConf(object):
 class RecipeResults(object):
     def __init__(self, perf_conf):
         self._perf_conf = perf_conf
-        self._results = {}
+        self._results = OrderedDict()
 
     @property
     def perf_conf(self):
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/RecipeCommon/Perf/Recipe.py | 1 -
 1 file changed, 1 deletion(-)
diff --git a/lnst/RecipeCommon/Perf/Recipe.py b/lnst/RecipeCommon/Perf/Recipe.py
index e4d9d42..6cef240 100644
--- a/lnst/RecipeCommon/Perf/Recipe.py
+++ b/lnst/RecipeCommon/Perf/Recipe.py
@@ -54,7 +54,6 @@ class Recipe(BaseRecipe):
         results = RecipeResults(recipe_conf)
 
         for i in range(recipe_conf.iterations):
-            run_results = []
             for measurement in recipe_conf.measurements:
                 measurement.start()
             for measurement in reversed(recipe_conf.measurements):
From: Ondrej Lichtner olichtne@redhat.com
Iteration should go through all "_objects" instead of "_machine._objects", since no such attribute exists.
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Host.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lnst/Controller/Host.py b/lnst/Controller/Host.py
index a77b8f6..ee36803 100644
--- a/lnst/Controller/Host.py
+++ b/lnst/Controller/Host.py
@@ -50,7 +50,7 @@ class Host(Namespace):
 
         Does not include the init namespace (self)."""
         ret = []
-        for x in self._machine._objects.values():
+        for x in self._objects.values():
             if isinstance(x, NetNamespace):
                 ret.append(x)
         return ret
From: Ondrej Lichtner olichtne@redhat.com
Log the error and return a (None, None) value pair, but don't raise an exception that would halt the rest of the recipe.
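The pattern is the usual "degrade gracefully on probe failure"; a rough standalone sketch of it (using subprocess in place of lnst's exec_cmd helper — the regex mirrors the patch, the interface name is made up):

```python
import logging
import re
import subprocess


def read_adaptive_coalescing(ifname):
    """Best-effort probe: return ('on'|'off', 'on'|'off') for adaptive
    RX/TX coalescing, or (None, None) when ethtool can't report it."""
    try:
        out = subprocess.check_output(
            ["ethtool", "-c", ifname], stderr=subprocess.DEVNULL
        ).decode()
    except Exception:
        # don't raise -- a missing or unsupported NIC shouldn't halt a recipe
        logging.debug("Could not read coalescence values of %s.", ifname)
        return (None, None)

    match = re.search(r"Adaptive RX: (on|off) TX: (on|off)", out)
    if match is None:
        return (None, None)
    return match.groups()


# On a machine without this (hypothetical) interface the call degrades
# to (None, None) instead of raising:
print(read_adaptive_coalescing("no-such-nic0"))  # -> (None, None)
```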
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Devices/Device.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lnst/Devices/Device.py b/lnst/Devices/Device.py
index b78b295..9587a88 100644
--- a/lnst/Devices/Device.py
+++ b/lnst/Devices/Device.py
@@ -650,8 +650,9 @@ class Device(object):
         try:
             res = exec_cmd("ethtool -c %s" % self.name)
         except:
-            raise DeviceError("Could not read coalescence values of "
-                              "%s." % self.name)
+            logging.debug("Could not read coalescence values of %s." % self.name)
+            return (None, None)
+
         regex = "Adaptive RX: (on|off) TX: (on|off)"
         try:
             res = re.search(regex, res[0]).groups()
From: Ondrej Lichtner olichtne@redhat.com
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 lnst/Controller/Namespace.py | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/lnst/Controller/Namespace.py b/lnst/Controller/Namespace.py
index c50152d..35155be 100644
--- a/lnst/Controller/Namespace.py
+++ b/lnst/Controller/Namespace.py
@@ -54,6 +54,10 @@ class Namespace(object):
     def hostid(self):
         return self._machine.get_id()
 
+    @property
+    def hostname(self):
+        return self._machine.get_hostname()
+
     @property
     def devices(self):
         """List of mapped devices available in the Namespace"""
From: Ondrej Lichtner olichtne@redhat.com
Adding a new base class to introduce a common hierarchy for measurement results. For now it just defines common initialization and a single property which is a back reference to the BaseMeasurement instance that created the results object.
Updated all the related *MeasurementResults classes.
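The back reference means any code handed a results object (an evaluator, for example) can navigate back to the measurement that produced it without extra plumbing; a minimal sketch:

```python
class BaseMeasurementResults(object):
    """Common base for the *MeasurementResults classes, as in the patch."""
    def __init__(self, measurement):
        self._measurement = measurement

    @property
    def measurement(self):
        return self._measurement


class FakeMeasurement(object):
    """Stub standing in for a BaseMeasurement instance."""
    pass


m = FakeMeasurement()
results = BaseMeasurementResults(m)

# an evaluator receiving `results` can get back to its creator
print(results.measurement is m)  # -> True
```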
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 .../Perf/Measurements/BaseCPUMeasurement.py   | 20 +++++++++----------
 .../Perf/Measurements/BaseFlowMeasurement.py  | 19 +++++++++---------
 .../Perf/Measurements/BaseMeasurement.py      |  9 +++++++++
 .../Perf/Measurements/IperfFlowMeasurement.py |  4 +++-
 .../Perf/Measurements/StatCPUMeasurement.py   |  2 +-
 .../Perf/Measurements/TRexFlowMeasurement.py  |  2 +-
 6 files changed, 34 insertions(+), 22 deletions(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
index fc47091..cd19b18 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
@@ -1,10 +1,12 @@
 import signal
 from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError
 from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement
+from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurementResults
 from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
 
-class CPUMeasurementResults(object):
-    def __init__(self, host, cpu):
+class CPUMeasurementResults(BaseMeasurementResults):
+    def __init__(self, measurement, host, cpu):
+        super(CPUMeasurementResults, self).__init__(measurement)
         self._host = host
         self._cpu = cpu
@@ -21,8 +23,8 @@ class CPUMeasurementResults(object):
         raise NotImplementedError()
 
 class AggregatedCPUMeasurementResults(CPUMeasurementResults):
-    def __init__(self, host, cpu):
-        super(AggregatedCPUMeasurementResults, self).__init__(host, cpu)
+    def __init__(self, measurement, host, cpu):
+        super(AggregatedCPUMeasurementResults, self).__init__(measurement, host, cpu)
         self._individual_results = []
 
     @property
@@ -45,13 +47,12 @@ class AggregatedCPUMeasurementResults(CPUMeasurementResults):
             raise MeasurementError("Adding incorrect results.")
 
 class BaseCPUMeasurement(BaseMeasurement):
-    @classmethod
-    def aggregate_results(cls, old, new):
+    def aggregate_results(self, old, new):
         aggregated = []
         if old is None:
             old = [None] * len(new)
         for old_measurements, new_measurements in zip(old, new):
-            aggregated.append(cls._aggregate_hostcpu_results(
+            aggregated.append(self._aggregate_hostcpu_results(
                 old_measurements, new_measurements))
         return aggregated
@@ -89,13 +90,12 @@ class BaseCPUMeasurement(BaseMeasurement):
 
         recipe.add_result(True, "\n".join(desc), data=cpu_data)
 
-    @classmethod
-    def _aggregate_hostcpu_results(cls, old, new):
+    def _aggregate_hostcpu_results(self, old, new):
         if (old is not None and
             (old.host is not new.host or old.cpu != new.cpu)):
             raise MeasurementError("Aggregating incompatible CPU Results")
 
-        new_result = AggregatedCPUMeasurementResults(new.host, new.cpu)
+        new_result = AggregatedCPUMeasurementResults(self, new.host, new.cpu)
         new_result.add_results(old)
         new_result.add_results(new)
         return new_result
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index aba0e02..c4448b7 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -1,6 +1,7 @@
 import signal
 from lnst.RecipeCommon.Perf.Measurements.MeasurementError import MeasurementError
 from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurement
+from lnst.RecipeCommon.Perf.Measurements.BaseMeasurement import BaseMeasurementResults
 from lnst.RecipeCommon.Perf.Results import SequentialPerfResult
 
 class Flow(object):
@@ -75,8 +76,9 @@ class NetworkFlowTest(object):
     def client_job(self):
         return self._client_job
 
-class FlowMeasurementResults(object):
-    def __init__(self, flow):
+class FlowMeasurementResults(BaseMeasurementResults):
+    def __init__(self, measurement, flow):
+        super(FlowMeasurementResults, self).__init__(measurement)
         self._flow = flow
         self._generator_results = None
         self._generator_cpu_stats = None
@@ -120,7 +122,8 @@ class FlowMeasurementResults(object):
         self._receiver_cpu_stats = value
 
 class AggregatedFlowMeasurementResults(FlowMeasurementResults):
-    def __init__(self, flow):
+    def __init__(self, measurement, flow):
+        super(FlowMeasurementResults, self).__init__(measurement)
         self._flow = flow
         self._generator_results = SequentialPerfResult()
         self._generator_cpu_stats = SequentialPerfResult()
@@ -190,21 +193,19 @@ class BaseFlowMeasurement(BaseMeasurement):
                     receiver_flow_data=receiver,
                     receiver_cpu_data=receiver_cpu))
 
-    @classmethod
-    def aggregate_results(cls, old, new):
+    def aggregate_results(self, old, new):
         aggregated = []
         if old is None:
             old = [None] * len(new)
         for old_flow, new_flow in zip(old, new):
-            aggregated.append(cls._aggregate_flows(old_flow, new_flow))
+            aggregated.append(self._aggregate_flows(old_flow, new_flow))
         return aggregated
 
-    @classmethod
-    def _aggregate_flows(cls, old_flow, new_flow):
+    def _aggregate_flows(self, old_flow, new_flow):
         if old_flow is not None and old_flow.flow is not new_flow.flow:
             raise MeasurementError("Aggregating incompatible Flows")
 
-        new_result = AggregatedFlowMeasurementResults(new_flow.flow)
+        new_result = AggregatedFlowMeasurementResults(measurement=self, flow=new_flow.flow)
 
         new_result.add_results(old_flow)
         new_result.add_results(new_flow)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
index 782ffa5..5c51b9a 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
@@ -22,3 +22,12 @@ class BaseMeasurement(object):
     @classmethod
     def aggregate_results(first, second):
         raise NotImplementedError()
+
+
+class BaseMeasurementResults(object):
+    def __init__(self, measurement):
+        self._measurement = measurement
+
+    @property
+    def measurement(self):
+        return self._measurement
diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
index 8ffd815..d3c02ec 100644
--- a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
@@ -55,7 +55,9 @@ class IperfFlowMeasurement(BaseFlowMeasurement):
 
         results = []
         for test_flow in test_flows:
-            flow_results = FlowMeasurementResults(test_flow.flow)
+            flow_results = FlowMeasurementResults(
+                    measurement=self,
+                    flow=test_flow.flow)
             flow_results.generator_results = self._parse_job_streams(
                     test_flow.client_job)
             flow_results.generator_cpu_stats = self._parse_job_cpu(
diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
index 5aa316f..e21ac43 100644
--- a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
@@ -66,7 +66,7 @@ class StatCPUMeasurement(BaseCPUMeasurement):
 
         for cpu, cpu_intervals in parsed_sample.items():
             if cpu not in job_results:
-                job_results[cpu] = StatCPUMeasurementResults(host, cpu)
+                job_results[cpu] = StatCPUMeasurementResults(self, host, cpu)
             cpu_results = job_results[cpu]
             cpu_results.update_intervals(cpu_intervals)
diff --git a/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py
index f47ec67..3c973ca 100644
--- a/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/TRexFlowMeasurement.py
@@ -104,7 +104,7 @@ class TRexFlowMeasurement(BaseFlowMeasurement):
         return result
 
     def _parse_results_by_port(self, job, port, flow):
-        results = FlowMeasurementResults(flow)
+        results = FlowMeasurementResults(measurement=self, flow=flow)
         results.generator_results = SequentialPerfResult()
         results.generator_cpu_stats = SequentialPerfResult()
From: Ondrej Lichtner olichtne@redhat.com
These will be useful when generating descriptions of the measurements, either for the database or when printing human readable outputs.
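A condensed sketch of the two properties: `name` defaults to the class name, while `version` is left to subclasses (per the patch, StatCPUMeasurement simply returns "1", and IperfFlowMeasurement builds a dict that also records per-host iperf versions):

```python
class BaseMeasurement(object):
    @property
    def name(self):
        # the class name doubles as the measurement's identifier
        return self.__class__.__name__

    @property
    def version(self):
        raise NotImplementedError()


class StatCPUMeasurement(BaseMeasurement):
    @property
    def version(self):
        return "1"


m = StatCPUMeasurement()
print(m.name, m.version)  # -> StatCPUMeasurement 1
```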
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 .../Perf/Measurements/BaseMeasurement.py      |  8 ++++++
 .../Perf/Measurements/IperfFlowMeasurement.py | 25 +++++++++++++++++++
 .../Perf/Measurements/StatCPUMeasurement.py   |  4 +++
 3 files changed, 37 insertions(+)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
index 5c51b9a..f119879 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
@@ -2,6 +2,14 @@ class BaseMeasurement(object):
     def __init__(self, conf):
         self._conf = conf
 
+    @property
+    def name(self):
+        return self.__class__.__name__
+
+    @property
+    def version(self):
+        raise NotImplementedError()
+
     @property
     def conf(self):
         return self._conf
diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
index d3c02ec..76b9e6f 100644
--- a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
@@ -1,3 +1,4 @@
+import re
 import time
 
 from lnst.Common.IpAddress import ipaddress
@@ -15,11 +16,35 @@ from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurem
 from lnst.Tests.Iperf import IperfClient, IperfServer
 
 class IperfFlowMeasurement(BaseFlowMeasurement):
+    _MEASUREMENT_VERSION = 1
+
     def __init__(self, *args):
         super(IperfFlowMeasurement, self).__init__(*args)
         self._running_measurements = []
         self._finished_measurements = []
 
+        self._hosts_versions = {}
+
+    @property
+    def version(self):
+        if not self._hosts_versions:
+            for flow in self._conf:
+                if flow.receiver not in self._hosts_versions:
+                    self._hosts_versions[flow.receiver] = self._get_host_iperf_version(flow.receiver)
+                if flow.generator not in self._hosts_versions:
+                    self._hosts_versions[flow.generator] = self._get_host_iperf_version(flow.generator)
+
+        return {"measurement_version": self._MEASUREMENT_VERSION,
+                "hosts_iperf_versions": self._hosts_versions}
+
+    def _get_host_iperf_version(self, host):
+        version_job = host.run("iperf3 --version")
+        if version_job.passed:
+            match = re.match(r"iperf (.+?) .*", version_job.stdout)
+            if match:
+                return match.group(1)
+        return None
+
     def start(self):
         if len(self._running_measurements) > 0:
             raise MeasurementError("Measurement already running!")
diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
index e21ac43..f74f47f 100644
--- a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
@@ -31,6 +31,10 @@ class StatCPUMeasurement(BaseCPUMeasurement):
         self._running_measurements = []
         self._finished_measurements = []
 
+    @property
+    def version(self):
+        return "1"
+
     def start(self):
         jobs = []
         for host in self._conf:
On Thu, Apr 04, 2019 at 04:27:46PM +0200, olichtne@redhat.com wrote:
> From: Ondrej Lichtner olichtne@redhat.com
>
> These will be useful when generating descriptions of the measurements,
> either for the database or when printing human readable outputs.
>
> Signed-off-by: Ondrej Lichtner olichtne@redhat.com
> ---
>  .../Perf/Measurements/BaseMeasurement.py      |  8 ++++++
>  .../Perf/Measurements/IperfFlowMeasurement.py | 25 +++++++++++++++++++
>  .../Perf/Measurements/StatCPUMeasurement.py   |  4 +++
>  3 files changed, 37 insertions(+)
>
> diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
> index 5c51b9a..f119879 100644
> --- a/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
> +++ b/lnst/RecipeCommon/Perf/Measurements/BaseMeasurement.py
> @@ -2,6 +2,14 @@ class BaseMeasurement(object):
>      def __init__(self, conf):
>          self._conf = conf
>
> +    @property
> +    def name(self):
> +        return self.__class__.__name__
> +
> +    @property
> +    def version(self):
> +        raise NotImplementedError()
> +
>      @property
>      def conf(self):
>          return self._conf
> diff --git a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
> index d3c02ec..76b9e6f 100644
> --- a/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
> +++ b/lnst/RecipeCommon/Perf/Measurements/IperfFlowMeasurement.py
> @@ -1,3 +1,4 @@
> +import re
>  import time
>
>  from lnst.Common.IpAddress import ipaddress
> @@ -15,11 +16,35 @@ from lnst.RecipeCommon.Perf.Measurements.BaseFlowMeasurement import FlowMeasurem
>  from lnst.Tests.Iperf import IperfClient, IperfServer
>
>  class IperfFlowMeasurement(BaseFlowMeasurement):
> +    _MEASUREMENT_VERSION = 1
> +
>      def __init__(self, *args):
>          super(IperfFlowMeasurement, self).__init__(*args)
>          self._running_measurements = []
>          self._finished_measurements = []
>
> +        self._hosts_versions = {}
> +
> +    @property
> +    def version(self):
> +        if not self._hosts_versions:
> +            for flow in self._conf:
> +                if flow.receiver not in self._hosts_versions:
> +                    self._hosts_versions[flow.receiver] = self._get_host_iperf_version(flow.receiver)
> +                if flow.generator not in self._hosts_versions:
> +                    self._hosts_versions[flow.generator] = self._get_host_iperf_version(flow.generator)
> +
> +        return {"measurement_version": self._MEASUREMENT_VERSION,
> +                "hosts_iperf_versions": self._hosts_versions}

^ I have a whitespace error here...

> +
> +    def _get_host_iperf_version(self, host):
> +        version_job = host.run("iperf3 --version")
> +        if version_job.passed:
> +            match = re.match(r"iperf (.+?) .*", version_job.stdout)
> +            if match:
> +                return match.group(1)
> +        return None
> +
>      def start(self):
>          if len(self._running_measurements) > 0:
>              raise MeasurementError("Measurement already running!")
>
> diff --git a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
> index e21ac43..f74f47f 100644
> --- a/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
> +++ b/lnst/RecipeCommon/Perf/Measurements/StatCPUMeasurement.py
> @@ -31,6 +31,10 @@ class StatCPUMeasurement(BaseCPUMeasurement):
>          self._running_measurements = []
>          self._finished_measurements = []
>
> +    @property
> +    def version(self):
> +        return "1"
> +
>      def start(self):
>          jobs = []
>          for host in self._conf:
> --
> 2.21.0
From: Ondrej Lichtner olichtne@redhat.com
This package defines the class hierarchy for MeasurementResults evaluators, starting with the base class "BaseEvaluator". I'm adding a couple of simple and obvious evaluators now, though I expect more to be added later.
NonzeroFlowEvaluator - simply checks whether the flow measurement results are zero and reports FAIL if they are, or PASS if not.

BaselineFlowAverageEvaluator - compares the flow measurement result against a different flow measurement result. Based on a required initialization parameter defining the percentage of allowed difference, it reports PASS if the difference is smaller, or FAIL if it is higher. This class is incomplete in the sense that it defines a "get_baseline" method which should be overridden to find and return the flow measurement result to compare against; the default implementation simply returns None.

BaselineCPUAverageEvaluator - same as BaselineFlowAverageEvaluator, but for CPU measurement results.
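The pass/fail arithmetic in the baseline evaluators boils down to a relative-difference check; a standalone sketch of the helper and the threshold test (the throughput values are made up):

```python
def result_averages_difference(result_avg, baseline_avg):
    # 100 - (result/baseline * 100): positive means the result is below
    # the baseline, negative means it is above it
    return 100 - (result_avg / baseline_avg) * 100


pass_difference = 5  # acceptable difference in percent

# a result 2% below the baseline passes...
diff = result_averages_difference(98.0, 100.0)
print(round(diff, 2), abs(diff) <= pass_difference)  # -> 2.0 True

# ...while a result 10% below it fails
diff = result_averages_difference(90.0, 100.0)
print(round(diff, 2), abs(diff) <= pass_difference)  # -> 10.0 False
```

Using abs() means a result far *above* the baseline also fails, which flags suspicious runs in both directions.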
Signed-off-by: Ondrej Lichtner olichtne@redhat.com
---
 .../Perf/Evaluators/BaseEvaluator.py          |  3 +
 .../Evaluators/BaselineCPUAverageEvaluator.py | 36 ++++++++++++
 .../BaselineFlowAverageEvaluator.py           | 57 +++++++++++++++++++
 .../Perf/Evaluators/NonzeroFlowEvaluator.py   | 26 +++++++++
 lnst/RecipeCommon/Perf/Evaluators/__init__.py |  4 ++
 5 files changed, 126 insertions(+)
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaseEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaselineCPUAverageEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/BaselineFlowAverageEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/NonzeroFlowEvaluator.py
 create mode 100644 lnst/RecipeCommon/Perf/Evaluators/__init__.py
diff --git a/lnst/RecipeCommon/Perf/Evaluators/BaseEvaluator.py b/lnst/RecipeCommon/Perf/Evaluators/BaseEvaluator.py
new file mode 100644
index 0000000..be82841
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/BaseEvaluator.py
@@ -0,0 +1,3 @@
+class BaseEvaluator(object):
+    def evaluate_results(self, recipe, results):
+        raise NotImplementedError()
diff --git a/lnst/RecipeCommon/Perf/Evaluators/BaselineCPUAverageEvaluator.py b/lnst/RecipeCommon/Perf/Evaluators/BaselineCPUAverageEvaluator.py
new file mode 100644
index 0000000..de0b83d
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/BaselineCPUAverageEvaluator.py
@@ -0,0 +1,36 @@
+from __future__ import division
+
+from .BaseEvaluator import BaseEvaluator
+
+from ..Measurements.BaseCPUMeasurement import (
+    CPUMeasurementResults,
+    AggregatedCPUMeasurementResults,
+)
+
+
+class BaselineCPUAverageEvaluator(BaseEvaluator):
+    def __init__(self, pass_difference):
+        self._pass_difference = pass_difference
+
+    def evaluate_results(self, recipe, results):
+        for result in results:
+            baseline = self.get_baseline(recipe, result)
+            self._compare_result_with_baseline(recipe, result, baseline)
+
+    def get_baseline(self, recipe, result):
+        return None
+
+    def _compare_result_with_baseline(self, recipe, result, baseline):
+        comparison_result = True
+        result_text = [
+            "CPU Baseline average evaluation".format(),
+            "Configured {}% difference as acceptable".format(self._pass_difference),
+        ]
+        if baseline is None:
+            comparison_result = False
+            result_text.append("No baseline found for this CPU measurement")
+        else:
+            result_text.append("I don't know how to compare CPU averages yet!!!")
+            comparison_result = False
+
+        recipe.add_result(comparison_result, "\n".join(result_text))
diff --git a/lnst/RecipeCommon/Perf/Evaluators/BaselineFlowAverageEvaluator.py b/lnst/RecipeCommon/Perf/Evaluators/BaselineFlowAverageEvaluator.py
new file mode 100644
index 0000000..3f49ab0
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/BaselineFlowAverageEvaluator.py
@@ -0,0 +1,57 @@
+from __future__ import division
+
+from .BaseEvaluator import BaseEvaluator
+
+from ..Measurements.BaseFlowMeasurement import (
+    FlowMeasurementResults,
+    AggregatedFlowMeasurementResults,
+)
+
+
+class BaselineFlowAverageEvaluator(BaseEvaluator):
+    def __init__(self, pass_difference):
+        self._pass_difference = pass_difference
+
+    def evaluate_results(self, recipe, results):
+        for result in results:
+            baseline = self.get_baseline(recipe, result)
+            self._compare_result_with_baseline(recipe, result, baseline)
+
+    def get_baseline(self, recipe, result):
+        return None
+
+    def _compare_result_with_baseline(self, recipe, result, baseline):
+        comparison_result = True
+        result_text = [
+            "Flow {} Baseline average evaluation".format(result.flow),
+            "Configured {}% difference as acceptable".format(self._pass_difference),
+        ]
+        if baseline is None:
+            comparison_result = False
+            result_text.append("No baseline found for this flow")
+        else:
+            generator_diff = _result_averages_difference(
+                result.generator_results,
+                baseline.generator_results)
+
+            result_text.append(
+                "Generator average is {:.2f}% different from the baseline generator average"
+                .format(generator_diff))
+
+            receiver_diff = _result_averages_difference(
+                result.receiver_results,
+                baseline.receiver_results)
+
+            result_text.append(
+                "Receiver average is {:.2f}% different from the baseline receiver average"
+                .format(receiver_diff))
+
+            if (
+                abs(generator_diff) > self._pass_difference
+                or abs(receiver_diff) > self._pass_difference
+            ):
+                comparison_result = False
+
+        recipe.add_result(comparison_result, "\n".join(result_text))
+
+
+def _result_averages_difference(a, b):
+    return 100 - ((a.average / b.average) * 100)
diff --git a/lnst/RecipeCommon/Perf/Evaluators/NonzeroFlowEvaluator.py b/lnst/RecipeCommon/Perf/Evaluators/NonzeroFlowEvaluator.py
new file mode 100644
index 0000000..5ee9853
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/NonzeroFlowEvaluator.py
@@ -0,0 +1,26 @@
+from .BaseEvaluator import BaseEvaluator
+
+from ..Measurements.BaseFlowMeasurement import (
+    FlowMeasurementResults,
+    AggregatedFlowMeasurementResults,
+)
+
+
+class NonzeroFlowEvaluator(BaseEvaluator):
+    def evaluate_results(self, recipe, results):
+        for flow_results in results:
+            result = True
+            result_text = ["Flow {} Nonzero evaluation".format(flow_results.flow)]
+            if flow_results.generator_results.average > 0:
+                result_text.append("Generator reported non-zero throughput")
+            else:
+                result = False
+                result_text.append("Generator reported zero throughput")
+
+            if flow_results.receiver_results.average > 0:
+                result_text.append("Receiver reported non-zero throughput")
+            else:
+                result = False
+                result_text.append("Receiver reported zero throughput")
+
+            recipe.add_result(result, "\n".join(result_text))
diff --git a/lnst/RecipeCommon/Perf/Evaluators/__init__.py b/lnst/RecipeCommon/Perf/Evaluators/__init__.py
new file mode 100644
index 0000000..9186ba7
--- /dev/null
+++ b/lnst/RecipeCommon/Perf/Evaluators/__init__.py
@@ -0,0 +1,4 @@
+from .NonzeroFlowEvaluator import NonzeroFlowEvaluator
+from .BaselineFlowAverageEvaluator import BaselineFlowAverageEvaluator
+
+from .BaselineCPUAverageEvaluator import BaselineCPUAverageEvaluator
From: Ondrej Lichtner <olichtne@redhat.com>
Add two new properties listing the evaluators for the CPU performance measurement and for the network performance measurement. By default, no CPU measurement evaluators are used, and only the NonzeroFlowEvaluator is assigned to the network performance measurement.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 51 ++++++++++++++++++++---------
 1 file changed, 36 insertions(+), 15 deletions(-)
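The two properties are intended as override points for derived recipes. A minimal sketch of how a subclass might extend them; `BaseEnrtRecipe` and both evaluator classes below are simplified stand-ins, not the real lnst implementations:

```python
# Stand-in stubs; the real classes live in lnst.Recipes.ENRT and
# lnst.RecipeCommon.Perf.Evaluators.
class NonzeroFlowEvaluator:
    pass

class BaselineFlowAverageEvaluator:
    def __init__(self, pass_difference):
        self.pass_difference = pass_difference

class BaseEnrtRecipe:
    # Defaults from this patch: no CPU evaluators, only the
    # NonzeroFlowEvaluator for network measurements.
    @property
    def cpu_perf_evaluators(self):
        return []

    @property
    def net_perf_evaluators(self):
        return [NonzeroFlowEvaluator()]

class MyEnrtRecipe(BaseEnrtRecipe):
    @property
    def net_perf_evaluators(self):
        # Keep the default nonzero check and add a 5% baseline check.
        return super().net_perf_evaluators + [BaselineFlowAverageEvaluator(5)]

recipe = MyEnrtRecipe()
print([type(e).__name__ for e in recipe.net_perf_evaluators])
# ['NonzeroFlowEvaluator', 'BaselineFlowAverageEvaluator']
```

Using properties rather than plain attributes means a subclass can compute its evaluator list lazily, e.g. from recipe parameters, without touching `__init__`.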
diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
index d21d4bd..90f1d64 100644
--- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py
+++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
@@ -11,6 +11,7 @@
 from lnst.RecipeCommon.Perf.Recipe import RecipeConf as PerfRecipeConf
 from lnst.RecipeCommon.Perf.Measurements import Flow as PerfFlow
 from lnst.RecipeCommon.Perf.Measurements import IperfFlowMeasurement
 from lnst.RecipeCommon.Perf.Measurements import StatCPUMeasurement
+from lnst.RecipeCommon.Perf.Evaluators import NonzeroFlowEvaluator

 class EnrtConfiguration(object):
     def __init__(self):
@@ -199,6 +200,30 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
         client_netns = client_nic.netns
         server_netns = server_nic.netns

+        cpu_measurement = self.params.cpu_perf_tool(
+            [client_netns, server_netns])
+
+        flow_combinations = self.generate_flow_combinations(main_config, sub_config)
+
+        for flows in flow_combinations:
+            flows_measurement = self.params.net_perf_tool(flows)
+
+            perf_conf = PerfRecipeConf(
+                measurements=[cpu_measurement, flows_measurement],
+                iterations=self.params.perf_iterations)
+
+            perf_conf.register_evaluators(
+                cpu_measurement, self.cpu_perf_evaluators)
+            perf_conf.register_evaluators(
+                flows_measurement, self.net_perf_evaluators)
+
+            yield perf_conf
+
+    def generate_flow_combinations(self, main_config, sub_config):
+        client_nic = main_config.endpoint1
+        server_nic = main_config.endpoint2
+        client_netns = client_nic.netns
+        server_netns = server_nic.netns
         for ipv in self.params.ip_versions:
             if ipv == "ipv4":
                 family = AF_INET
@@ -228,24 +253,20 @@ class BaseEnrtRecipe(PingTestAndEvaluate, PerfRecipe):
                 parallel_streams = self.params.perf_parallel_streams,
                 cpupin = self.params.perf_tool_cpu if "perf_tool_cpu" in self.params else None
             )
-
-            flow_measurement = self.params.net_perf_tool([flow])
-            yield PerfRecipeConf(
-                measurements=[
-                    self.params.cpu_perf_tool([client_netns, server_netns]),
-                    flow_measurement
-                ],
-                iterations=self.params.perf_iterations)
+            yield [flow]

             if "perf_reverse" in self.params and self.params.perf_reverse:
                 reverse_flow = self._create_reverse_flow(flow)
-                reverse_flow_measurement = self.params.net_perf_tool([reverse_flow])
-                yield PerfRecipeConf(
-                    measurements=[
-                        self.params.cpu_perf_tool([server_netns, client_netns]),
-                        reverse_flow_measurement
-                    ],
-                    iterations=self.params.perf_iterations)
+                yield [reverse_flow]
+
+    @property
+    def cpu_perf_evaluators(self):
+        return []
+
+    @property
+    def net_perf_evaluators(self):
+        return [NonzeroFlowEvaluator()]
+
     def _create_reverse_flow(self, flow):
         rev_flow = PerfFlow(
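The `register_evaluators()` calls above pair each measurement with its evaluator list inside the `PerfRecipeConf`. A minimal sketch of that registration pattern; this `PerfRecipeConf` is a hypothetical stub, not the class from `lnst.RecipeCommon.Perf.Recipe`, and the measurements are plain strings standing in for measurement objects:

```python
# Stub configuration object keeping a measurement -> evaluators map.
class PerfRecipeConf:
    def __init__(self, measurements, iterations):
        self.measurements = measurements
        self.iterations = iterations
        self._evaluators = {}

    def register_evaluators(self, measurement, evaluators):
        # Copy the list so later mutation by the caller has no effect.
        self._evaluators[measurement] = list(evaluators)

    def evaluators_for(self, measurement):
        # Measurements with no registered evaluators get an empty list.
        return self._evaluators.get(measurement, [])

cpu = "cpu_measurement"        # stand-ins for measurement objects
flows = "flows_measurement"

conf = PerfRecipeConf(measurements=[cpu, flows], iterations=5)
conf.register_evaluators(cpu, [])
conf.register_evaluators(flows, ["NonzeroFlowEvaluator"])
print(conf.evaluators_for(flows))  # ['NonzeroFlowEvaluator']
```

Keeping the mapping on the configuration object is what lets the recipe evaluate each measurement's results with exactly the evaluators registered for it after all iterations have run.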
From: Ondrej Lichtner <olichtne@redhat.com>
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Recipes/ENRT/BaseEnrtRecipe.py | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/lnst/Recipes/ENRT/BaseEnrtRecipe.py b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
index 90f1d64..7d5c00b 100644
--- a/lnst/Recipes/ENRT/BaseEnrtRecipe.py
+++ b/lnst/Recipes/ENRT/BaseEnrtRecipe.py
@@ -1,4 +1,5 @@
 import re
+
 from lnst.Common.LnstError import LnstError
 from lnst.Common.Parameters import Param, IntParam, StrParam, BoolParam
 from lnst.Common.IpAddress import AF_INET, AF_INET6
@@ -17,7 +18,6 @@ class EnrtConfiguration(object):
     def __init__(self):
         self._endpoint1 = None
         self._endpoint2 = None
-        self._endpoint1_coalescing = None

     @property
     def endpoint1(self):
@@ -35,18 +35,9 @@ class EnrtConfiguration(object):
     def endpoint2(self, value):
         self._endpoint2 = value

-    @property
-    def endpoint1_coalescing(self):
-        self._endpoint1_coalescing
-
-    @endpoint1_coalescing.setter
-    def endpoint1_coalescing(self, value):
-        self._endpoint1_coalescing = value
-
 class EnrtSubConfiguration(object):
     def __init__(self):
         self._ip_version = None
-        self._perf_test = None
         self._offload_settings = None
@property
From: Ondrej Lichtner <olichtne@redhat.com>
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)
diff --git a/.gitignore b/.gitignore
index 187b4b7..45da7da 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,6 +5,7 @@
 .#*

 Logs/
+build/

 # vim swap files
 *.swp
On Thu, Apr 04, 2019 at 04:27:38PM +0200, olichtne@redhat.com wrote:
pushed.
-Ondrej
lnst-developers@lists.fedorahosted.org