Hi all,
The BuildBot instance [1] has been up and running for some time now.
There are a couple of issues I would like to discuss here:
1) The tests are fragile
When a test fails to clean up after itself, it can affect other tests or the next run of the same test. We want to make the tests as solid as possible, but there will always be trouble (like issues in provider backends such as blivet or NetworkManager). I'm working on improving this: I'm going to remove all openlmi-* packages, the CIMOM, and any other leftovers (like the CIMOM repository in /var) before (or rather after?) each test run.
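A minimal sketch of what that cleanup could look like as a BuildBot step in master.cfg (the step name, the package list and the /var/lib/Pegasus path are my assumptions; they would need to be adjusted to whatever CIMOM the build slaves actually run):

    from buildbot.steps.shell import ShellCommand

    # Cleanup step that runs even when earlier steps fail, so leftovers
    # from a broken run don't poison the next build.
    cleanup = ShellCommand(
        name="cleanup-openlmi",
        description="removing openlmi leftovers",
        alwaysRun=True,
        flunkOnFailure=False,
        command=["sh", "-c",
                 "yum -y remove 'openlmi-*' tog-pegasus sblim-sfcb; "
                 "rm -rf /var/lib/Pegasus"])
    # appended as the last step of the build factory:
    # factory.addStep(cleanup)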
2) Results are not visible enough
There is an IRC bot on the #openlmi channel on freenode that notifies when a test's status changes (it used to fail and now it passes, or vice versa), but that is not visible enough. We also don't want to send email to this list, because it would generate quite a lot of traffic. So, should I create another mailing list? openlmi-build? openlmi-ci?
3) Build machine updates
Right now the test machines are updated manually, but this should probably be automated too. However, we need to know whether a build or test fails because of a change in the providers or because of a system update. So the workflow should probably be:
* Build our packages and run the test suite
* Update the system
* Build and test our packages again to see if the result is the same
* Report it
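A rough sketch of how that could look as BuildBot steps ("make rpms" and "make check" are just placeholders for whatever the builders actually invoke):

    from buildbot.process.factory import BuildFactory
    from buildbot.steps.shell import ShellCommand

    f = BuildFactory()
    # 1) build our packages and run the test suite on the current system
    f.addStep(ShellCommand(name="build", command=["make", "rpms"]))
    f.addStep(ShellCommand(name="test", command=["make", "check"]))
    # 2) update the system
    f.addStep(ShellCommand(name="update", command=["yum", "-y", "update"]))
    # 3) build and test again, so a breakage caused by a provider change
    #    can be told apart from one caused by the system update
    f.addStep(ShellCommand(name="build-updated", command=["make", "rpms"]))
    f.addStep(ShellCommand(name="test-updated", command=["make", "check"]))
    # 4) reporting (comparing the two results) would live in the status
    #    targets / IRC bot rather than in the factory itself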
Does it make sense? Any better ideas?
Tips and suggestions on how to deal with these issues are welcome.
Radek Novacek
On 08/27/2013 10:32 AM, Radek Novacek wrote:
> Hi all,
> The BuildBot instance [1] has been up and running for some time now.
> There are a couple of issues I would like to discuss here:
> 1) The tests are fragile
> When a test fails to clean up after itself, it can affect other tests or the next run of the same test. We want to make the tests as solid as possible, but there will always be trouble (like issues in provider backends such as blivet or NetworkManager). I'm working on improving this: I'm going to remove all openlmi-* packages, the CIMOM, and any other leftovers (like the CIMOM repository in /var) before (or rather after?) each test run.
Can we run the tests in a chroot similar to mock, so we could just wipe it and regenerate it from a known-good tarball? (Or use LVM snapshots, etc.?)
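Roughly something like this, just to illustrate the tarball idea (the paths are made up, and mock or LVM snapshots would do the same job with less hand-rolled code):

    import shutil
    import subprocess
    import tarfile

    CHROOT = "/srv/ci/chroot"            # hypothetical location of the test chroot
    GOLDEN = "/srv/ci/chroot-good.tar"   # known-good state, captured once

    def reset_chroot():
        # throw away whatever the previous test run left behind
        shutil.rmtree(CHROOT, ignore_errors=True)
        # unpack the known-good tarball back into place
        with tarfile.open(GOLDEN) as tar:
            tar.extractall(CHROOT)

    def run_tests(cmd):
        reset_chroot()
        # run the test suite inside the pristine chroot
        return subprocess.call(["chroot", CHROOT, "sh", "-c", cmd])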
> 2) Results are not visible enough
> There is an IRC bot on the #openlmi channel on freenode that notifies when a test's status changes (it used to fail and now it passes, or vice versa), but that is not visible enough. We also don't want to send email to this list, because it would generate quite a lot of traffic. So, should I create another mailing list? openlmi-build? openlmi-ci?
We could reuse the reviews list for this. It's already fairly high-traffic.
> 3) Build machine updates
> Right now the test machines are updated manually, but this should probably be automated too. However, we need to know whether a build or test fails because of a change in the providers or because of a system update. So the workflow should probably be:
> * Build our packages and run the test suite
> * Update the system
> * Build and test our packages again to see if the result is the same
> * Report it
> Does it make sense? Any better ideas?
Why don't we do '--disablerepo=* --enablerepo=openlmi' when updating for the CI builds? That way we rule out any system updates.
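Assuming the update step is a plain ShellCommand in the BuildBot config, that would be something like this (the repo id 'openlmi' is a guess at how the repository is named on the builders):

    from buildbot.steps.shell import ShellCommand

    # pull updates only from our own repo and ignore all the system repos
    update_openlmi_only = ShellCommand(
        name="update-openlmi-only",
        command=["yum", "-y", "update",
                 "--disablerepo=*", "--enablerepo=openlmi"])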
> Tips and suggestions on how to deal with these issues are welcome.
> Radek Novacek
> [1] http://openlmi-rnovacek.rhcloud.com/waterfall