Hi,
thanks to all who attended the F17 PM Test Day; a belated recap follows :)
General results:
- Number of attendees: 39
- Reports received: 52
- Unique machines tested: 49
- Bugs reported (all trackers counted): 21
- Bugs closed so far: 13
Test case (TC) results (F16 results in parentheses):
- TC passed: 82.48 % (79 %)
- TC passed with warning: 3.62 % (7 %)
- TC failed: 13.90 % (14 %)
Power consumption benchmark results (active idle test; data mostly taken from the ACPI battery by the provided script):
- Average power consumption (across all machines): 23.36 W
- Average power savings with tuned enabled: 1.91 W
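(For the curious, the measurement logic is roughly as follows - a simplified sketch, not the exact script we provided. BAT0 and power_now in microwatts are assumptions; some batteries expose current_now/voltage_now instead.)

  #!/usr/bin/env python3
  # Sketch: sample battery discharge power from the kernel's ACPI
  # battery sysfs interface and average it. The path and units are
  # assumptions; the actual Test Day script may differ.
  import time

  POWER_NOW = "/sys/class/power_supply/BAT0/power_now"

  def read_watts():
      with open(POWER_NOW) as f:
          return int(f.read()) / 1_000_000  # microwatts -> watts

  samples = []
  for _ in range(60):                       # ~5 minutes at 5 s intervals
      samples.append(read_watts())
      time.sleep(5)
  print("average power: %.2f W" % (sum(samples) / len(samples)))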
List of bugs reported:
FRD Bug 47965 - I can't modify brightness with nVidia 1000m Quadro
Bug 713687 - [abrt] kernel: BUG: soft lockup - CPU#0 stuck for 67s! [modprobe:499] in ath5k_pci_eeprom_read
Bug 745202 - gnome-shell does not display correctly with NV3x adapters - multicolor corruption of panel, Shell-style menus and text [nvfx]
Bug 783715 - Running bluetooth.service makes soft blocked wifi be hard blocked after resume from suspend
Bug 784741 - [abrt] kernel: WARNING: at lib/list_debug.c:53 __list_del_entry+0xa1/0xd0() (
Bug 797559 - [abrt] kernel: WARNING: at fs/sysfs/group.c:138 sysfs_remove_group+0xfa/0x100()
Bug 807855 - Please add support for our new tuned 2.0
Bug 809294 - SELinux is preventing /usr/bin/python from using the 'signal' accesses on a process. I was running: systemctl start tuned.service
Bug 809812 - No brigtness controls in gnome-control-center screen on lenovo x200
Bug 809832 - avc on tuned-adm profile powersave
Bug 809836 - SELinux is preventing tuned from 'execute_no_trans' accesses on the file /usr/lib/tuned/balanced/script.sh.
Bug 809837 - SELinux is preventing ls from 'getattr' accesses on the blk_file /dev/sdb.
Bug 809838 - SELinux is preventing sysctl from 'getattr' accesses on the file /proc/sys/kernel/nmi_watchdog.
Bug 810127 - Brightness level indicator is not shown when changing brightness
Bug 810202 - [abrt] kernel: [809524.902004] kernel BUG at drivers/gpu/drm/i915/i915_gem.c:3415!
Bug 810584 - "Brightness and Lock" window does not show settings for Brightness
Bug 810616 - [HP Elitebook 8460p] Pressing Fn+Brightness control keys has no effect
Bug 811018 - HP Elitebook 8560w can't suspend or hibernate
Bug 813899 - [abrt] kernel: WARNING: at drivers/base/firmware_class.c:538 _request_firmware+0x488/0x4d0()
Bug 813900 - [abrt] kernel: WARNING: at drivers/base/firmware_class.c:538 _request_firmware+0x488/0x4d0()
Bug 813904 - SELinux is preventing /usr/libexec/gstreamer-0.10/gst-plugin-scanner from 'create' accesses on the directory .orc.
49 machines (mostly laptops) seems like a nice sample to give us a rough overview of how well (or badly) we are doing.
As you probably know, we also held this event on-site in the Red Hat Brno office. We prepared a meeting room with an internet connection, various F17 Test Day live CDs/USBs, USB CD-ROM drives, serial cables for catching kernel backtraces, and a calibrated Chroma 66202 wattmeter, so attendees were able to precisely measure the power consumption of their machines. We also focused on newcomers: we had three spare laptops there, so newcomers without their own hardware could play with Fedora. They could also (virtually) attend the test day on these machines and see that it is really an easy task (one of the goals of this event was to encourage them to attend future test days on their own). I can say that the event went well, but the available capacity (space) was a bit underestimated - we did not expect such interest :).
During the event it became clear that managing dozens of attendees is painful with the current Test Day infrastructure. For newcomers it was hard to understand how to enter results into the wiki (or the concept of the wiki itself). It was even harder for remote participants. Several times we received plain-text reports and had to transfer them into the wiki ourselves. At rush hours there were so many conflicting edits to the wiki that we had to dedicate one person to work solely as a wiki corrector. I cannot imagine handling, say, double the number of participants with the current system. I think a more robust and intuitive system is needed to attract and handle more participants. If designed the right way, it could also simplify evaluation of results and answer various queries like "what hardware worked on which version of Fedora".
So again, many thanks to all attendees and supporters - I hope to see you at the F18 PM Test Day :)
thanks & regards
Jaroslav
On Tue, 2012-07-31 at 09:30 -0400, Jaroslav Skarvada wrote:
> During the event it became clear that managing dozens of attendees is painful with the current Test Day infrastructure. For newcomers it was hard to understand how to enter results into the wiki (or the concept of the wiki itself). It was even harder for remote participants. Several times we received plain-text reports and had to transfer them into the wiki ourselves. At rush hours there were so many conflicting edits to the wiki that we had to dedicate one person to work solely as a wiki corrector. I cannot imagine handling, say, double the number of participants with the current system. I think a more robust and intuitive system is needed to attract and handle more participants. If designed the right way, it could also simplify evaluation of results and answer various queries like "what hardware worked on which version of Fedora".
> So again, many thanks to all attendees and supporters - I hope to see you at the F18 PM Test Day :)
Thanks for the recap and the process feedback!
Yeah, the wiki system certainly has limitations. The problem is that any replacement is likely to be significantly more complex on the infrastructure side, and also for Test Day organizers, than simply using the Wiki, so it's something of a trade-off: we've never found a 'drop-in' system that does exactly what we want, and as far as we can tell, any replacement would make it somewhat more difficult to organize a Test Day.
I've run X Test Weeks for several releases, which can sometimes get upwards of 100 responses, and the Wiki has mostly held up to that.
Still, I agree it's not an optimal system, and if anyone has any suggestions for alternatives we'd be glad to have them.
On Tue, 2012-07-31 at 10:40 -0700, Adam Williamson wrote:
> On Tue, 2012-07-31 at 09:30 -0400, Jaroslav Skarvada wrote:
> > During the event it became clear that managing dozens of attendees is painful with the current Test Day infrastructure. For newcomers it was hard to understand how to enter results into the wiki (or the concept of the wiki itself). It was even harder for remote participants. Several times we received plain-text reports and had to transfer them into the wiki ourselves. At rush hours there were so many conflicting edits to the wiki that we had to dedicate one person to work solely as a wiki corrector. I cannot imagine handling, say, double the number of participants with the current system. I think a more robust and intuitive system is needed to attract and handle more participants. If designed the right way, it could also simplify evaluation of results and answer various queries like "what hardware worked on which version of Fedora".
> > So again, many thanks to all attendees and supporters - I hope to see you at the F18 PM Test Day :)
> Thanks for the recap and the process feedback!
> Yeah, the wiki system certainly has limitations. The problem is that any replacement is likely to be significantly more complex on the infrastructure side, and also for Test Day organizers, than simply using the Wiki, so it's something of a trade-off: we've never found a 'drop-in' system that does exactly what we want, and as far as we can tell, any replacement would make it somewhat more difficult to organize a Test Day.
> I've run X Test Weeks for several releases, which can sometimes get upwards of 100 responses, and the Wiki has mostly held up to that.
> Still, I agree it's not an optimal system, and if anyone has any suggestions for alternatives we'd be glad to have them.
Forgot to mention - please consider the Test Day SOP to be guidelines, not hard-and-fast rules. If you can think of a way to adjust the process that you believe would be beneficial for your Test Day, by all means go ahead and do it; no one will be angry! So if you can think of any methods that might improve the ease of the feedback process, please do go ahead and try them for the F18 PM Test Day, and let us know how they go.
Adam Williamson wrote:
> Still, I agree it's not an optimal system, and if anyone has any suggestions for alternatives we'd be glad to have them.
Sounds like you need a client app that will talk to a Fedora infra server and dump the data into a database. A wiki extension could be made to connect to that database and display the data.
I use something[1] like what I described to display Bugzilla bugs on wiki pages.
[1] http://www.mediawiki.org/wiki/Extension:Bugzilla_Reports
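Something like this on the client side, for instance (purely illustrative - the endpoint URL and JSON fields are made up, and a real service would also need authentication):

  #!/usr/bin/env python3
  # Sketch of the client half of the suggestion above: POST one test
  # result to a (hypothetical) Fedora infra service that stores it in
  # a database for a wiki extension to render. The URL and field
  # names are illustrative assumptions, not an existing API.
  import json
  import urllib.request

  result = {
      "test_day": "2012-04-12_Power_Management",   # placeholder
      "test_case": "QA:Testcase_Power_Management_tuned",
      "reporter": "fas-account-name",
      "status": "pass",
      "comment": "tuned saved ~2 W in active idle",
  }

  req = urllib.request.Request(
      "https://testdays.example.org/api/results",  # hypothetical endpoint
      data=json.dumps(result).encode("utf-8"),
      headers={"Content-Type": "application/json"},
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.status, resp.read().decode("utf-8"))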
On Tue, 2012-07-31 at 14:14 -0500, Michael Cronenworth wrote:
> Adam Williamson wrote:
> > Still, I agree it's not an optimal system, and if anyone has any suggestions for alternatives we'd be glad to have them.
> Sounds like you need a client app that will talk to a Fedora infra server and dump the data into a database. A wiki extension could be made to connect to that database and display the data.
> I use something[1] like what I described to display Bugzilla bugs on wiki pages.
> [1] http://www.mediawiki.org/wiki/Extension:Bugzilla_Reports
I didn't really cover the problem space very well, sorry for that.
It's one of those things that gets bigger each time you look at it. In traditional 'big grown-up QA' terms, what we're doing is using the Wiki as a TCMS - test case management system. Real big grown-up QA tends to base a lot of work around these, which are pretty complex projects that aim to handle both the creation and management of test cases, and tracking test results.
The thing is, Test Days are really just one application - at least looking at things from the traditional perspective, it wouldn't make a lot of sense to write a small, Test Day-specific client/server system, because really Test Days are just events where we get lots of people together to iterate intensively over a specific set of test cases. In theory the test cases aren't 'special' - they can be run in other contexts besides Test Days, which might want different things from the results. TCMSes tend to wind up as big, sophisticated beasts which can track results in lots of different ways.
TCMSes unfortunately tend to be built with an assumption that they'll be used by a small group of trusted and relatively savvy users: they'll be used by a dedicated QA team in a 'closed' environment, essentially. So they don't go out of their way to be easy to use and often aren't architected in a way which would make them particularly easy to deploy in an environment like Fedora, where we want to be open to engagement by very casual testers and people who aren't necessarily trusted and experienced QA members.
There are several open source TCMSes and similar systems that we could, in theory, use for Fedora QA; we've evaluated several of them in the past. The 'obvious' candidate for Fedora is Nitrate, which is Red Hat's TCMS - https://fedoraproject.org/wiki/Nitrate . Red Hat QE, which is a much more grown-up and 'proper' effort than Fedora QA (but also basically a closed shop within RH), bases all its work around Nitrate. Even Nitrate, though, has quite a lot of disadvantages compared to the Wiki when it comes to the unique context of Fedora testing - https://fedoraproject.org/wiki/Tcms_Comparison is an evaluation of Nitrate against the various features of 'wiki as a TCMS' which we've found useful or which have become integrated into the way we do testing for Fedora. Other open source TCMSes all have similar issues, or more, in the context of Fedora QA. None of them would be a straightforward drop-in replacement for the way we currently do things, with clear advantages and only a few drawbacks - they'd all represent a significant trade-off in capability and would take a lot of engineering work and re-design of our processes. This doesn't mean we shouldn't do it, of course, but it's not a straightforward call.
There are, of course, always other ways of looking at things. You could take the point of view that a lot of successful open source code resulted from someone starting out by doing something small in a simplistic, unsophisticated way, and building from there. Perhaps the fact that we've never found a drop-in TCMS that entirely makes sense for Fedora is an indication that we really ought to grow our own, and making a simple limited system specifically for Test Days, or some other specific use Fedora makes of test cases, wouldn't be such a bad way to start out: I'm certainly sympathetic to the view that sometimes it works out better to do a good job on a small task in a relatively short time with concrete results than to look at everything Fedora QA would ultimately want from a TCMS and try to knock it all off on the first try.
So that's a very long-winded way of saying - 'yes, with reservations' :). I could certainly see value in an effort to write a relatively simple system for handling test cases and results for Fedora test days, tailored to the somewhat unusual requirements of collaborative testing for an open project like Fedora. But I think if anyone's going to do it, they should do it with all of the above in mind - bear in mind that they're not exactly breaking new ground, but approaching a relatively mature field from a somewhat unusual angle. Be aware that, ultimately, what we really want is a more complex and flexible system around which we could base all of Fedora's QA efforts; something that can _only_ ever work for the Test Day format would be a bit of a dead end.
Hope that made sense and made things a bit clearer :)
Adam Williamson wrote:
[large snip]

I hate to snip such a large body, but it will save other eyes from having to skip past it here.

> Hope that made sense and made things a bit clearer :)
I keep up with the test list (and occasionally attend QA meetings), so I am familiar with what you are attempting to do with a TCMS. My suggestion was purely generic in nature. In the end I think you still need a wiki extension to handle the data entry, as MediaWiki is limited when it comes to handling large, database-like data.
For instance: instead of a client app it could be a web app that accepts test case input from a user and stores it in a database for display on the corresponding wiki page. The wiki page would have a link on it, e.g. "input test data", that would point to the web app.
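Roughly, I'm imagining something like this (a sketch only - Flask and SQLite are my picks here, and the form fields and schema are made up for illustration):

  #!/usr/bin/env python3
  # Minimal sketch of such a web app: accept a test result via a form
  # POST and store it in SQLite, where a wiki extension could query
  # it for display. Schema and field names are illustrative.
  import sqlite3
  from flask import Flask, request

  app = Flask(__name__)
  DB = "testday.db"

  def init_db():
      with sqlite3.connect(DB) as conn:
          conn.execute("""CREATE TABLE IF NOT EXISTS results
                          (reporter TEXT, test_case TEXT,
                           status TEXT, comment TEXT)""")

  @app.route("/results", methods=["POST"])
  def add_result():
      form = request.form
      with sqlite3.connect(DB) as conn:
          conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                       (form["reporter"], form["test_case"],
                        form["status"], form.get("comment", "")))
      return "stored\n", 201

  if __name__ == "__main__":
      init_db()
      app.run()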
In any case I'm sure the Smart Minds of RHQA will figure it out. :P
On Tue, 2012-07-31 at 15:35 -0500, Michael Cronenworth wrote:
> Adam Williamson wrote:
> [large snip]
> I hate to snip such a large body, but it will save other eyes from having to skip past it here.
> > Hope that made sense and made things a bit clearer :)
> I keep up with the test list (and occasionally attend QA meetings), so I am familiar with what you are attempting to do with a TCMS. My suggestion was purely generic in nature. In the end I think you still need a wiki extension to handle the data entry, as MediaWiki is limited when it comes to handling large, database-like data.
> For instance: instead of a client app it could be a web app that accepts test case input from a user and stores it in a database for display on the corresponding wiki page. The wiki page would have a link on it, e.g. "input test data", that would point to the web app.
> In any case I'm sure the Smart Minds of RHQA will figure it out. :P
A client app to input results into the existing Test Day and release validation processes might be an interesting place to start, for sure.
There was some discussion back in 2009 of using the Semantic MediaWiki extension to do this for some test days. IIRC, the Sugar folks used it that way. See https://lists.fedoraproject.org/pipermail/test/2009-September/084628.html . Unfortunately the test implementation isn't there any more.
On Tue, 31 Jul 2012 15:35:35 -0500, Michael Cronenworth <mike@cchtml.com> wrote:
> Adam Williamson wrote:
> [large snip]
> I hate to snip such a large body, but it will save other eyes from having to skip past it here.
> > Hope that made sense and made things a bit clearer :)
> I keep up with the test list (and occasionally attend QA meetings), so I am familiar with what you are attempting to do with a TCMS. My suggestion was purely generic in nature. In the end I think you still need a wiki extension to handle the data entry, as MediaWiki is limited when it comes to handling large, database-like data.
> For instance: instead of a client app it could be a web app that accepts test case input from a user and stores it in a database for display on the corresponding wiki page. The wiki page would have a link on it, e.g. "input test data", that would point to the web app.
> In any case I'm sure the Smart Minds of RHQA will figure it out. :P
[ cc'ing test@ since this is relevant to QA ]
I'm going to avoid Adam's discussion of TCMSs for the moment since any solution like that would be a large undertaking and a little different from the original issue raised with our current system(s).
As a short-term solution, I think we could write a webapp that takes in results and dumps them to a wiki page, either at regular intervals or at the end of the test day, without too much effort. That way we could at least get around the problem of conflicting wiki writes without coming up with a completely new system. Assuming we did a decent job writing the app, it could also make result reporting less confusing for people who aren't familiar with the test day process.
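The write-back half could be as small as something like this (untested sketch; uses the third-party mwclient library, and the page name and credentials are placeholders - I haven't checked this against the Fedora wiki's login setup):

  #!/usr/bin/env python3
  # Sketch of the periodic write-back: render collected results as
  # wiki table markup and save them to the results page in one edit,
  # sidestepping per-tester edit conflicts. Page name and credentials
  # below are placeholders.
  import mwclient

  def render(results):
      # results: iterable of (reporter, test_case, status) tuples
      rows = "\n".join("|-\n| %s || %s || %s" % r for r in results)
      return ('{| class="wikitable"\n'
              "! User !! Test case !! Result\n" + rows + "\n|}")

  site = mwclient.Site("fedoraproject.org", path="/w/")
  site.login("testday-bot", "secret")        # placeholder credentials
  page = site.pages["Test_Day:2012-04-12_Power_Management/Results"]
  page.save(render([("jskarvada", "tuned", "pass")]),
            summary="Test Day app: periodic results sync")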
I'm still unsure of the immediate need for more than what we already have. I realize that the PM test day caused problems - this isn't the first time that the issues from that test day have come up. However, I'm not currently aware of any other test days that had similar issues of write conflicts and confusion. Have enough other test days hit similar problems to make it worth the effort to write, maintain, host and support anything beyond what we currently have?
Has anyone else run into similar issues when running a test day? How much need is there for something more than what the wiki can currently provide?
Tim
On 07/31/2012 01:30 PM, Jaroslav Skarvada wrote:
> During the event it became clear that managing dozens of attendees is painful with the current Test Day infrastructure. For newcomers it was hard to understand how to enter results into the wiki (or the concept of the wiki itself). It was even harder for remote participants. Several times we received plain-text reports and had to transfer them into the wiki ourselves. At rush hours there were so many conflicting edits to the wiki that we had to dedicate one person to work solely as a wiki corrector. I cannot imagine handling, say, double the number of participants with the current system. I think a more robust and intuitive system is needed to attract and handle more participants. If designed the right way, it could also simplify evaluation of results and answer various queries like "what hardware worked on which version of Fedora".
At the time we looked at various testing systems, but all of them fell short one way or another, so we decided to settle on something reporters were familiar with as a stopgap until we found or came up with something better. We had a couple of ideas about how that should look which were, let's say, quite different from the traditional TCMS.
In any case, this discussion of how it can be improved belongs on the -test list, where the QA community resides...
JBG