I'm trying to follow this guide:
https://fedoraproject.org/wiki/CI/Tests#Wrapping
Let's say I've done the following:
$ fedpkg clone gzip
$ cd gzip/tests/
Now I want to run the tests in Docker. The guide says:
> Try running this example test against an Atomic Host or Docker
> Container. It should pass.
"Docker Container" leads to:
https://fedoraproject.org/wiki/CI/Standard_Test_Roles#Container
So I've done:
$ export TEST_SUBJECTS=docker:docker.io/library/fedora:26
$ ansible-playbook --tags=container tests.yml
I see a lot of cows (cute), yet the error is:
> "Destination /usr/local/bin not writable"
It also finishes amazingly fast, so I suspect no Fedora 26 docker
image is actually being pulled. I don't think docker is invoked here
at all.
What piece am I missing?
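My best guess so far (assuming standard-test-roles is installed and
still ships its dynamic inventory scripts under
/usr/share/ansible/inventory, which I haven't verified) is that
without an explicit inventory ansible-playbook falls back to plain
localhost, so TEST_SUBJECTS is never consumed. Something like:

$ sudo dnf install standard-test-roles
$ export TEST_SUBJECTS=docker:docker.io/library/fedora:26
# point ansible at the inventory script that reads TEST_SUBJECTS
# and provisions the docker container
$ ansible-playbook --inventory=/usr/share/ansible/inventory \
      --tags=container tests.yml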
--
Miro Hrončok
--
Phone: +420777974800
IRC: mhroncok
https://github.com/ostreedev/ostree/pull/1462
Some things I like specifically:
- The qcow2 inventory is useful (though I want it to be more configurable)
- Ansible is mostly better than shell script
- It's an abstraction layer across "execution environments" like Jenkins vs. whatever else
- The concept that playbooks can in theory be more easily shared
And of course it makes it a bit more obvious how to reuse
*upstream* tests stored in upstream git from the downstream
dist-git (though I still need to actually plumb that through;
roughly the sketch below).
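To make that concrete, here is a rough sketch of the dist-git side I
have in mind (hypothetical, not the actual ostree plumbing; the
cloned URL and the "make check" invocation are placeholders):

$ cat > tests/tests.yml <<'EOF'
# STI entry point: standard-test-basic provisions the test subject,
# then runs each test's `run` command on it
- hosts: localhost
  roles:
  - role: standard-test-basic
    tags:
    - classic
    required_packages:
    - git
    - make
    tests:
    - upstream-suite:
        dir: .
        run: git clone https://github.com/ostreedev/ostree && make -C ostree check
EOF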
BTW we're also in the process of completely revamping our "CI
execution" layer, https://github.com/projectatomic/papr, to use
Kubernetes/OpenShift more. The upstream tests using STI would then
execute as Kubernetes Jobs; in our particular case we'd be moving
strongly towards VM-in-container. See:
https://github.com/projectatomic/papr/pull/70
https://github.com/projectatomic/paci/pulls?q=is%3Apr+is%3Aclosed
etc.; also part of this is moving our CI into CentOS CI's OpenShift instance.
Hi!
During the last few weeks we've been experimenting with the Flexible
Metadata Format proof of concept on some real-life components:
there's a small SELinux example showing how FMF could be used for
filtering relevant tests [1]; Jakub Krysl successfully used FMF
for generating different device setups for his VDO testing [2]; and
Jan Scotka is now integrating FMF with the Meta Test Family.
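To give a concrete flavour of how this looks on disk (a minimal
sketch; the attribute names below are illustrative only, and defining
the actual schema is exactly what we want to discuss; commands as
documented at fmf.readthedocs.io):

$ fmf init                      # create the .fmf/version marker
$ mkdir smoke
$ cat > smoke/main.fmf <<'EOF'
# metadata stored right next to the test code
summary: Quick sanity check of basic functionality
tier: 1
tags: [security]
EOF
$ fmf ls                        # list all objects in the tree
/smoke
$ fmf show --filter "tier: 1"   # pick only Tier 1 tests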
As the next step we would like to start discussions about the test
metadata content, that is, to define the essential attributes which
should be stored close to the test code, directly in the git repo. If
you are interested in contributing to this effort, please join our
discussions which will happen on the Fedora CI list.
I've created an initial draft for the first couple of attributes
on the Fedora wiki [3]. Please, review and share your thoughts.
Thanks.
psss...
[1] https://src.fedoraproject.org/tests/selinux/pull-request/1
[2] http://fmf.readthedocs.io/en/latest/examples.html#setups
[3] https://fedoraproject.org/wiki/CI/Metadata
On 23 January 2018 at 09:18, Petr Šplíchal <psplicha@redhat.com> wrote:
> Hi,
>
> a simple proof of concept is ready for experimenting:
>
> https://github.com/psss/fmf
> http://fmf.readthedocs.io/
>
> Looking for the first impressions & feedback. Thanks.
>
> psss...
>
> On 8 January 2018 at 15:49, Petr Splichal <psplicha@redhat.com> wrote:
>>
>> Hi!
>>
>> In order to keep test execution efficient as the number of test
>> cases grows, it is crucial to maintain corresponding metadata,
>> which define some aspects of how the test coverage is executed.
>> For example limiting environment combinations where the test is
>> relevant or selecting a subset of important test cases for quick
>> verification of essential features when testing a security update.
>>
>> Within the BaseOS QE team we were thinking (for a long time) about
>> an efficient metadata solution which would cover our use cases and
>> would be open source. Recently we've been involved in the Upstream
>> First initiative which increased the need for an open metadata
>> solution which would enable us to more easily share test code
>> between Red Hat Enterprise Linux and Fedora.
>>
>> We've put together a draft solution which covers some of the most
>> important stories we've gathered so far. It does not cover all use
>> cases and it is not complete. In this early stage we would like to
>> invite others who might have similar use cases to gather your
>> feedback, share your experience or even join the project:
>>
>> https://fedoraproject.org/wiki/Flexible_Metadata_Format
>>
>> The page lists some of our core user stories as well as a couple of
>> real-life examples to demonstrate proposed features of the format.
>> Can you see similar user stories in your team? Is this something
>> that could be useful for you as well? Do you know of a different
>> solution for these use cases? Any other relevant ideas?
>>
>> To illustrate where we could be heading: in the ideal future there
>> could be just a single test case for a particular feature, stored
>> publicly with a single set of metadata attached close to the test
>> code, and used for testing both upstream and downstream without
>> the need to duplicate the test code (maintaining two copies).
>>
>> This proposal does not in any way suggest replacing the tests.yml [1]
>> files defined by the Standard Test Interface. The new format could
>> serve as an extension for selecting the right tests to be executed
>> (e.g. filtering tests by tag instead of listing them manually).
>>
>> Looking forward to your feedback!
>>
>> psss...
>>
>> [1] https://fedoraproject.org/wiki/CI/Tests
Hi there,
Is the CI pipeline running on Fedora and gating? It's really hard to
find test results.
I'm looking around both in Bodhi:
https://bodhi.fedoraproject.org/updates/FEDORA-2018-5c64b23a18
and here in Pagure:
https://src.fedoraproject.org/rpms/cockpit/pull-request/3
The Cockpit Git repository has a "test-canary" test that checks whether
CI gates on test failure. Here's what you see when you run the tests in
dist-git locally:
TASK [standard-test-scripts : Check the results]
*******************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "grep \"^FAIL\"
/var/tmp/artifacts/test.log", "delta": "0:00:00.002181", "end":
"2018-03-26 14:04:10.414274", "failed_when_result": true, "rc": 0,
"start": "2018-03-26 14:04:10.412093", "stderr": "", "stderr_lines": [],
"stdout": "FAIL ./test-canary", "stdout_lines": ["FAIL ./test-canary"]}
/tests.retry
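For the record, such a canary is nothing more than a test which
always fails, so the behaviour is easy to reproduce in any dist-git
repository; a minimal sketch (the script body is my guess, not
Cockpit's actual test) would be:

$ cat > tests/test-canary <<'EOF'
#!/bin/sh
# fail on purpose so we can observe whether CI actually gates
echo "FAIL ./test-canary"
exit 1
EOF
$ chmod +x tests/test-canary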
Cheers,
Stef