From: Ondrej Lichtner <olichtne@redhat.com>
This patchset is a collection of loosely related fixes and improvements that serve as preparation for a larger patchset targeting the perf result evaluation code.

Sending this now so others can base their code on these fixes, since I've had them for a while...
-Ondrej
Ondrej Lichtner (4):
  lnst.Controller.RunSummaryFormatter: add colourize parameter
  RecipeCommon.Perf.Measurements.BaseCPUMeasurement: adjust base evaluation method
  RecipeCommon.Perf.Measurements.BaseFlowMeasurement: modify evaluation result
  lnst.Tests.Iperf: don't fail on stderr messages
 lnst/Controller/RunSummaryFormatter.py           | 11 ++++++++---
 .../Perf/Measurements/BaseCPUMeasurement.py      |  9 ++++++---
 .../Perf/Measurements/BaseFlowMeasurement.py     | 11 +----------
 lnst/Tests/Iperf.py                              |  1 -
 4 files changed, 15 insertions(+), 17 deletions(-)
From: Ondrej Lichtner <olichtne@redhat.com>
The parameter can be used to enable/disable colourization. Currently, only the PASS/FAIL strings indicating a job success state are supported for colourization; this might change later.

The default value is False. This changes the current behaviour, where PASS and FAIL were automatically coloured, but that probably wasn't a good idea: the normal expectation for a formatter is to do just the simplest operation unless specifically configured to do more.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Controller/RunSummaryFormatter.py | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/lnst/Controller/RunSummaryFormatter.py b/lnst/Controller/RunSummaryFormatter.py
index 670c1f7..b569b50 100644
--- a/lnst/Controller/RunSummaryFormatter.py
+++ b/lnst/Controller/RunSummaryFormatter.py
@@ -24,16 +24,21 @@ class RunFormatterException(ControllerError):
     pass
 
 class RunSummaryFormatter(object):
-    def __init__(self, level=ResultLevel.IMPORTANT):
+    def __init__(self, level=ResultLevel.IMPORTANT, colourize=False):
         #TODO changeable format?
         self._format = ""
         self._level = level
+        self._colourize = colourize
 
     def _format_success(self, success):
         if success:
-            return decorate_with_preset("PASS", "pass")
+            return (decorate_with_preset("PASS", "pass")
+                    if self._colourize
+                    else "PASS")
         else:
-            return decorate_with_preset("FAIL", "fail")
+            return (decorate_with_preset("FAIL", "fail")
+                    if self._colourize
+                    else "FAIL")
 
     def _format_source(self, res):
         if isinstance(res, JobResult):
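For illustration, the opt-in colourization pattern from the diff can be shown as a minimal standalone sketch. The names here (`decorate`, `SketchFormatter`) are stand-ins, not the actual lnst API, and `decorate` only approximates `decorate_with_preset`:

```python
# Standalone sketch of the colourize-gated formatting pattern from the
# diff above; decorate() is a stand-in for lnst's decorate_with_preset.

def decorate(text, preset):
    # wrap the text in an ANSI SGR colour escape sequence
    colors = {"pass": "\033[32m", "fail": "\033[31m"}
    return "{}{}\033[0m".format(colors[preset], text)

class SketchFormatter(object):
    def __init__(self, colourize=False):
        # colourization is opt-in; plain strings are the default
        self._colourize = colourize

    def format_success(self, success):
        text, preset = ("PASS", "pass") if success else ("FAIL", "fail")
        return decorate(text, preset) if self._colourize else text
```

With the default, `format_success(True)` is just the plain string "PASS"; only a formatter constructed with `colourize=True` emits escape sequences.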
From: Ondrej Lichtner <olichtne@redhat.com>
Add just a single result instead of one per host-cpu, since that can get spammy on machines with a lot of cores...
The result message also clearly indicates that this is just a placeholder result because an actual evaluation method wasn't implemented.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 .../RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
index 2507f3c..f81e8df 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseCPUMeasurement.py
@@ -64,10 +64,13 @@ class BaseCPUMeasurement(BaseMeasurement):
     @classmethod
     def evaluate_results(cls, recipe, results):
         #TODO split off into a separate evaluator class
+        hosts = []
         for result in results:
-            recipe.add_result(True,
-                    "Base CPU evaluation for host {}, cpu {}".format(
-                        result.host.hostid, result.cpu))
+            if result.host.hostid not in hosts:
+                hosts.append(result.host.hostid)
+        recipe.add_result(True,
+                "CPU evaluation for results from hosts {} not implemented"
+                .format(hosts))
 
     @classmethod
     def _divide_results_by_host(cls, results):
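The deduplication in the hunk above can be sketched standalone; plain dicts stand in here for lnst's per-cpu result objects, and `summarize_cpu_results` is a hypothetical helper, not lnst code:

```python
def summarize_cpu_results(results):
    # collect each host id only once, preserving order, so a machine
    # with many cores contributes a single placeholder result instead
    # of one result per cpu
    hosts = []
    for result in results:
        if result["hostid"] not in hosts:
            hosts.append(result["hostid"])
    return ("CPU evaluation for results from hosts {} not implemented"
            .format(hosts))
```

For three per-cpu results spread over two hosts, this yields one message naming both hosts rather than three separate results.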
From: Ondrej Lichtner <olichtne@redhat.com>
The evaluation result object created by the base evaluation method should indicate that this is in fact a "not implemented" evaluation. This should be overridden by a specific evaluation policy.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 .../Perf/Measurements/BaseFlowMeasurement.py | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
diff --git a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
index da0b5f4..9e0b92e 100644
--- a/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
+++ b/lnst/RecipeCommon/Perf/Measurements/BaseFlowMeasurement.py
@@ -154,16 +154,7 @@ class BaseFlowMeasurement(BaseMeasurement):
     @classmethod
     def evaluate_results(cls, recipe, results):
         #TODO split off into a separate evaluator class
-        for flow_results in results:
-            if flow_results.generator_results.average > 0:
-                recipe.add_result(True, "Generator reported non-zero throughput")
-            else:
-                recipe.add_result(False, "Generator reported zero throughput")
-
-            if flow_results.receiver_results.average > 0:
-                recipe.add_result(True, "Receiver reported non-zero throughput")
-            else:
-                recipe.add_result(False, "Receiver reported zero throughput")
+        recipe.add_result(True, "Flow result evaluation not implemented")
 
     @classmethod
     def _report_flow_results(cls, recipe, flow_results):
From: Ondrej Lichtner <olichtne@redhat.com>
Iperf uses stderr to report warnings as well as error messages, but the test can still run and report results. If there's an actual failure to execute, iperf indicates it by returning a non-zero return value. We should therefore only report a Fail result on the measurement when iperf itself fails.
Signed-off-by: Ondrej Lichtner <olichtne@redhat.com>
---
 lnst/Tests/Iperf.py | 1 -
 1 file changed, 1 deletion(-)
diff --git a/lnst/Tests/Iperf.py b/lnst/Tests/Iperf.py
index 970d994..0a14619 100644
--- a/lnst/Tests/Iperf.py
+++ b/lnst/Tests/Iperf.py
@@ -38,7 +38,6 @@ class IperfBase(BaseTestModule):
             self._res_data["msg"] = "errors reported by iperf"
             logging.error(self._res_data["msg"])
             logging.error(self._res_data["stderr"])
-            return False
 
         if server.returncode > 0:
             self._res_data["msg"] = "{} returncode = {}".format(
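The policy this patch adopts (stderr output is only a warning; the return code decides pass/fail) can be sketched with a standalone subprocess wrapper. This is an illustrative stand-in, not the lnst Iperf module:

```python
import subprocess

def run_tool(cmd):
    # run the command, capturing its output; stderr alone is noted as
    # a warning, while only a non-zero return code fails the run
    proc = subprocess.run(cmd, capture_output=True, text=True)
    result = {"passed": proc.returncode == 0}
    if proc.stderr:
        # record the warning but do not fail on it
        result["msg"] = "messages on stderr (not fatal)"
    if proc.returncode != 0:
        result["msg"] = "returncode = {}".format(proc.returncode)
    return result
```

A command that writes to stderr but exits 0 still passes; only a non-zero exit code flips the result to failed.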
On Wed, Feb 27, 2019 at 01:44:39PM CET, olichtne@redhat.com wrote:
Looks good.
Acked-by: Jan Tluka <jtluka@redhat.com>
On Wed, Feb 27, 2019 at 01:44:39PM +0100, olichtne@redhat.com wrote:
pushed.
-Ondrej