Hi, I would like to ask for your opinion on https://bugzilla.redhat.com/show_bug.cgi?id=912960#c25
Mamoru has this in the %check section of the spec:

    ruby -Ilib:test ./test/run-test.rb || echo "Please investigate this"
I'm saying that it should not be waived this way, because it waives all failures. If there are no failures, it should be just:

    ruby -Ilib:test ./test/run-test.rb

and if a problem occurs that needs upstream attention and cannot be fixed immediately, it should be replaced with something like:

    FILE=$(mktemp)
    # test foo is failing, reported as http://foo/issue/1
    ruby -Ilib:test ./test/run-test.rb | tee $FILE || :
    grep "4 tests, 4 assertions, 1 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications" $FILE
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
Thanks in advance.
Mirek
On Thu, 05 Sep 2013 09:21:31 +0200, Miroslav Suchy wrote:
[...]
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
So far, it's not a blocker.
There are no guidelines for the %check section other than this: https://fedoraproject.org/wiki/Packaging:Guidelines#Test_Suites
Some packages don't execute test-suites at all, because they would fail. That's not a blocker either. In some cases, upstream is aware of the tests being out-of-sync with the rest of the code.
Once you go down the '|| :' road, at least the test results will be available in the build.log. That's acceptable and a bit better than not running the tests at all, but may be dangerous depending on what fails.
Anything beyond that is up to the maintainer. How often would you look at the output from the "grep"? How often will it change with new releases? I could imagine someone grepping for the number of known failures and adding a guard that would make the build fail.
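Such a guard could be sketched like this (a minimal sketch; the summary line and the issue URL are placeholders taken from the example earlier in the thread, and the %check wiring in the comment is illustrative):

```shell
#!/bin/sh
# Expected summary while the one known failure (test foo, reported as
# http://foo/issue/1 -- placeholder URL) remains unfixed upstream.
EXPECTED="4 tests, 4 assertions, 1 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications"

# check_summary: succeed only if the suite output contains exactly the
# waived summary line; any other failure count trips the guard.
check_summary() {
    printf '%s\n' "$1" | grep -qF "$EXPECTED"
}

# In %check this would gate the build roughly like:
#   ruby -Ilib:test ./test/run-test.rb | tee result.log || :
#   check_summary "$(cat result.log)" || exit 1
check_summary "$EXPECTED" && echo "summary matches, build continues"
check_summary "5 tests, 5 assertions, 2 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications" \
    || echo "summary changed, build would fail"
```

This way a new failure (or a fixed one) changes the summary line, the grep stops matching, and the build aborts instead of silently waiving everything.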
On Thu, Sep 5, 2013 at 11:51 AM, Michael Schwendt mschwendt@gmail.com wrote:
On Thu, 05 Sep 2013 09:21:31 +0200, Miroslav Suchy wrote:
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
So far, it's not a blocker.
But reviewers can always refuse to accept something they disagree with, step aside from the review, and wish the submitter good luck finding someone else to take over.
On Thu, 5 Sep 2013 12:51:18 +0300, Ville Skyttä wrote:
On Thu, 05 Sep 2013 09:21:31 +0200, Miroslav Suchy wrote:
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
So far, it's not a blocker.
But reviewers can always refuse to accept something they disagree with, step aside from the review, and wish the submitter good luck finding someone else to take over.
Well, both sides should be willing to compromise about how to continue.
The submitter cannot force the reviewer to approve the package. And the reviewer cannot force the submitter to change something, because the submitter may reply "no, thanks, then I prefer providing the package in my personal repo".
Talking about approved packages, let's not forget packagers who reintroduce mistakes (including disabling %check sections) some time later. ;-)
On 09/05/2013 12:41 PM, Michael Schwendt wrote:
On Thu, 5 Sep 2013 12:51:18 +0300, Ville Skyttä wrote:
On Thu, 05 Sep 2013 09:21:31 +0200, Miroslav Suchy wrote:
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
So far, it's not a blocker.
Correct, it's a should - Why?
Because:

* we want to encourage upstreams to write testsuites; we don't want to fail reviews of packages just because they supply a testsuite, while passing packages whose upstreams don't ship a testsuite at all.

* testsuites vary very widely in quality and content/coverage, starting with primitive API-checking linker tests, over simple "self-test" suites, up to very extensive testsuites which occasionally require a huge infrastructure.
In other words: Reviewers are supposed to apply common sense and brains wrt. testsuites. A failing testsuite can mean nothing or everything.
Remember: Reviews are not a mere bureaucratic act. Reviewers are supposed to make a judgment (*review*) - This also occasionally means to compromise and occasionally means to reject something.
But reviewers can always refuse to accept something they disagree with, step aside from the review, and wish the submitter good luck finding someone else to take over.
Well, both sides should be willing to compromise about how to continue.
Depends. There are situations where compromises are impossible.
E.g. when a testsuite's results indicate bugs and deficiencies or even dysfunctionality of a package.
The submitter cannot force the reviewer to approve the package. And the reviewer cannot force the submitter to change something, because the submitter may reply "no, thanks, then I prefer providing the package in my personal repo".
Depends.
Talking about approved packages, let's not forget packagers who reintroduce mistakes (including disabling %check sections) some time later. ;-)
And even this is sometimes inevitable :-)
Ralf
On Thu, Sep 05, 2013 at 09:21:31AM +0200, Miroslav Suchy wrote:
[...]
I'm saying that it should not be waived this way, because it waives all failures. If there are no failures, it should be just:

    ruby -Ilib:test ./test/run-test.rb

and if a problem occurs that needs upstream attention and cannot be fixed immediately, it should be replaced with something like:

    FILE=$(mktemp)
    # test foo is failing, reported as http://foo/issue/1
    ruby -Ilib:test ./test/run-test.rb | tee $FILE || :
    grep "4 tests, 4 assertions, 1 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications" $FILE
Mamoru disagrees. For me it is a blocker. I would like to ask for your opinions: which way is correct, and should it be a blocker for the review?
As mschwendt and scop have replied, at the moment there is nothing in the guidelines to prevent disabling the entire testsuite, but reviewers are able to exercise their best judgement here.
When I run across failures in testsuites in my own packages, my order of fixing is:
1) Try to fix the code and submit the patch upstream.

1b) If I can't figure out the issue, report the problem upstream and hope they'll come up with a solution quickly.

2) If upstream's "quickly" isn't quick enough, I may disable just that one test. Sometimes this can be done with a command-line switch that excludes a named test; other times I have to resort to a temporary patch to remove it. Comment why the test is disabled. This depends on me feeling that the test isn't instrumental to the package's functionality (for me, this falls under: "SHOULD: The reviewer should test that the package functions as described. A package should not segfault instead of running, for example." from https://fedoraproject.org/wiki/Packaging:ReviewGuidelines )
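For a test-unit based Ruby suite like the one in this review, step 2 might look like the following %check fragment (a sketch: test_foo and the issue URL are placeholders, and you should verify against the runner's --help that it actually accepts --ignore-name):

```spec
%check
# test_foo fails due to a rounding difference, reported upstream as
# http://foo/issue/1 (placeholder name and URL); skip only that test
# so everything else still gates the build.
ruby -Ilib:test ./test/run-test.rb --ignore-name=test_foo
```

Compared to '|| :', this keeps every other test fatal, so regressions elsewhere still break the build.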
For my own packages, I have rejected build fixes that disable the entire testsuite for a few failing tests.
When reviewing packages, I do require people to follow a variation of what I mention above. I provide patches to disable specific tests if that is necessary. I've never encountered a situation where the packager actively wants to disable the whole suite, however, so I don't know whether I would continue the review in that situation or not.
For https://bugzilla.redhat.com/show_bug.cgi?id=912960#c25 I do agree with Mamoru that grep is not a good solution. I would propose a patch in this case. This particular test failure looks like it's a difference in rounding the results, which the GDK docs appear to allow: https://developer.gnome.org/gdk3/stable/gdk3-RGBA-Colors.html#gdk-rgba-to-st...
So I'm thinking this is a test case that's too strict. I'm neither a Ruby nor a GTK guy, but I think I have a patch that you could submit upstream based on this. I'll attach it here.
Also, not disabling the tests en masse seems to be a good idea -- I just tried a scratch build of this on f20 and found that there have been changes that cause a different error to be thrown. I'll attach a patch for that as well (note -- you may have to conditionalize which Fedora releases it gets applied on; I'm not certain when the API changed.)
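Conditionalizing that second patch in the spec could be sketched like this (the patch number is a placeholder, and the version boundary is a guess to be adjusted once it is known where the API actually changed):

```spec
# Apply the fix for the changed API only on releases that have it;
# %patch1 and the >= 20 boundary are placeholders.
%if 0%{?fedora} >= 20
%patch1 -p1
%endif
```

The 0%{?fedora} idiom evaluates to 0 on non-Fedora builds, so the conditional stays safe there too.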
And just for good measure, I'll attach the spec file I used as well
Successful build: http://koji.fedoraproject.org/koji/taskinfo?taskID=5900043
-Toshio
packaging@lists.fedoraproject.org