On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer jmoyer@redhat.com wrote:
Thanks for your insight. Doesn't look good for my use of BFQ.
Note that you can change the current I/O scheduler for any block device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing that file will give you the list of available schedulers.
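For example, assuming a disk named sda (the device name, and the set of schedulers shown, are only an illustration and will vary with your kernel config):

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ echo deadline | sudo tee /sys/block/sda/queue/scheduler

The name in brackets is the active scheduler; writing one of the listed names switches it immediately, for that device only.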
That's part of the problem. BFQ doesn't appear in the list of available schedulers. When I cat that location for my disks, I see [noop]. Since CFQ does appear there if it is compiled into the kernel, I'll have to look into what is done for CFQ and see how hard it would be to patch the kernel to repeat that behavior for BFQ.
My use case is not mq, so after reading one of the links in this thread about performance, I saw that BFQ gave a ~20-30% boost in disk I/O performance, and enhanced low-latency performance (desktop responsiveness), for single queue. That's what I want to capture by using BFQ. I wonder if that is my problem. From what Chris said, an mq scheduler is required in order to use BFQ, whether it is for mq or single-queue use. I'll try that. I normally use deadline and CFQ for scheduling. Back to the compiler.
I'm surprised this is so difficult. It's been in the kernel since the 2.x series, and usually the configuration options are excellent for allowing variation in how the kernel is configured.
On the plus side, I notice only slight degradation in behavior using noop scheduling. :-) Maybe I should just skip scheduling. :-D
On Wed, 12 Dec 2018 14:41:37 -0700 stan stanl-fedorauser@vfemail.net wrote:
On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer jmoyer@redhat.com wrote:
Thanks for your insight. Doesn't look good for my use of BFQ.
Note that you can change the current I/O scheduler for any block device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing that file will give you the list of available schedulers.
That's part of the problem. BFQ doesn't appear in the list of available schedulers. When I cat that location for my disks, I see [noop]. Since CFQ does appear there if it is compiled into the kernel, I'll have to look into what is done for CFQ and see how hard it would be to patch the kernel to repeat that behavior for BFQ.
Enabled deadline and cfq again, but still no bfq available.
$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
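(One quick way to check whether BFQ was built at all -- assuming the kernel config is installed under /boot, as on Fedora -- is:

$ grep -i bfq /boot/config-$(uname -r)

If neither CONFIG_IOSCHED_BFQ nor CONFIG_BFQ_GROUP_IOSCHED shows up as =y or =m, the running kernel has no bfq to offer.)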
On 12 Dec 2018, at 22:41, stan stanl-fedorauser@vfemail.net wrote:
On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer jmoyer@redhat.com wrote:
Thanks for your insight. Doesn't look good for my use of BFQ.
Note that you can change the current I/O scheduler for any block device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing that file will give you the list of available schedulers.
That's part of the problem. BFQ doesn't appear in the list of available schedulers. When I cat that location for my disks, I see [noop]. Since CFQ does appear there if it is compiled into the kernel, I'll have to look into what is done for CFQ and see how hard it would be to patch the kernel to repeat that behavior for BFQ.
My use case is not mq, so after reading one of the links in this thread about performance, I saw that BFQ gave a ~20-30% boost in disk I/O performance, and enhanced low-latency performance (desktop responsiveness), for single queue. That's what I want to capture by using BFQ. I wonder if that is my problem. From what Chris said, an mq scheduler is required in order to use BFQ, whether it is for mq or single-queue use. I'll try that. I normally use deadline and CFQ for scheduling. Back to the compiler.
I'm surprised this is so difficult. It's been in the kernel since the 2.x series, and usually the configuration options are excellent for allowing variation in how the kernel is configured.
On the plus side, I notice only slight degradation in behavior using noop scheduling. :-) Maybe I should just skip scheduling. :-D
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files;
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to complete.
Paolo
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to complete.
Thanks for this. I suspect I wasn't really stressing my system when I was evaluating it, so my evaluation was subjective. I'm running a kernel with cfq right now, but I will boot the noop kernel when I get a chance and test it. I suppose I could just switch to noop I/O scheduling instead. Should be interesting.
On 13 Dec 2018, at 17:17, stan stanl-fedorauser@vfemail.net wrote:
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to
complete.
Thanks for this. I suspect I wasn't really stressing my system when I was evaluating it, so my evaluation was subjective. I'm running a kernel with cfq right now, but I will boot the noop kernel when I get a chance and test it. I suppose I could just switch to noop I/O scheduling instead. Should be interesting.
Consider that noop means legacy block too. From 4.21, the equivalent of noop will be none, in blk-mq.
At any rate, you can do these tests with cfq too. Results may surprise you ...
And, if the results feel like just numbers to you, I'll tell you how to change the command line to start real applications.
Thanks, Paolo
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to complete.
Results for cfq and noop; I haven't enabled bfq yet. I interpret these as showing that cfq was a large improvement in all categories except write throughput, where it actually degraded performance.
cfq
Latency statistics:
min        max        avg        std_dev    conf99%
22.142     27.157     24.1967    2.6273     52.5604
Aggregated throughput:
min        max        avg        std_dev    conf99%
67.29      139.74     105.491    19.245     39.7628
Read throughput:
min        max        avg        std_dev    conf99%
51.73      135.67     102.402    21.3985    44.2123
Write throughput:
min        max        avg        std_dev    conf99%
0.01       46.29      3.08857    8.37179    17.2972
noop
Latency statistics:
min        max        avg        std_dev    conf99%
40.861     42.021     41.3637    0.595266   11.9086
Aggregated throughput:
min        max        avg        std_dev    conf99%
45.66      72.89      55.9847    5.99054    9.87365
Read throughput:
min        max        avg        std_dev    conf99%
41.69      70.85      51.9495    6.02467    9.9299
Write throughput:
min        max        avg        std_dev    conf99%
0          7.9        4.03527    1.62392    2.67656
On 13 Dec 2018, at 17:41, stan stanl-fedorauser@vfemail.net wrote:
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to
complete.
Results for cfq and noop; I haven't enabled bfq yet. I interpret these as showing that cfq was a large improvement in all categories except write throughput, where it actually degraded performance.
Great!
You don't have bfq for a comparison, but you can still get an idea of how good your system is, by comparing these start-up times with how long the same application takes to start when there is no I/O. Just do
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 0 0 seq 3 "replay-startup-io gnometerm"
and get ready to be surprised (the next surprise will come when/if you try bfq ...)
Thanks, Paolo
cfq
Latency statistics:
min        max        avg        std_dev    conf99%
22.142     27.157     24.1967    2.6273     52.5604
Aggregated throughput:
min        max        avg        std_dev    conf99%
67.29      139.74     105.491    19.245     39.7628
Read throughput:
min        max        avg        std_dev    conf99%
51.73      135.67     102.402    21.3985    44.2123
Write throughput:
min        max        avg        std_dev    conf99%
0.01       46.29      3.08857    8.37179    17.2972
noop
Latency statistics:
min        max        avg        std_dev    conf99%
40.861     42.021     41.3637    0.595266   11.9086
Aggregated throughput:
min        max        avg        std_dev    conf99%
45.66      72.89      55.9847    5.99054    9.87365
Read throughput:
min        max        avg        std_dev    conf99%
41.69      70.85      51.9495    6.02467    9.9299
Write throughput:
min        max        avg        std_dev    conf99%
0          7.9        4.03527    1.62392    2.67656
On Thu, 13 Dec 2018 17:46:30 +0100 Paolo Valente paolo.valente@linaro.org wrote:
On 13 Dec 2018, at 17:41, stan stanl-fedorauser@vfemail.net wrote:
You don't have bfq for a comparison, but you can still get an idea of how good your system is, by comparing these start-up times with how long the same application takes to start when there is no I/O. Just do
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 0 0 seq 3 "replay-startup-io gnometerm"
cfq with the above command (without I/O): *BIG* difference.
Latency statistics:
min        max        avg         std_dev     conf99%
1.34       1.704      1.53367     0.183118    3.66336
Aggregated throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.23143     2.60745     15.4099
Read throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.22571     2.60522     15.3967
Write throughput:
min        max        avg         std_dev     conf99%
0          0.02       0.00571429  0.00786796  0.0464991
and get ready to be surprised (the next surprise will come when/if you try bfq ...)
I had a response saying that bfq isn't available for single queue devices, but there might be a workaround. So it might or might not happen, depending on whether I can get it working.
cfq
Latency statistics:
min        max        avg        std_dev    conf99%
22.142     27.157     24.1967    2.6273     52.5604
Aggregated throughput:
min        max        avg        std_dev    conf99%
67.29      139.74     105.491    19.245     39.7628
Read throughput:
min        max        avg        std_dev    conf99%
51.73      135.67     102.402    21.3985    44.2123
Write throughput:
min        max        avg        std_dev    conf99%
0.01       46.29      3.08857    8.37179    17.2972
noop
Latency statistics:
min        max        avg        std_dev    conf99%
40.861     42.021     41.3637    0.595266   11.9086
Aggregated throughput:
min        max        avg        std_dev    conf99%
45.66      72.89      55.9847    5.99054    9.87365
Read throughput:
min        max        avg        std_dev    conf99%
41.69      70.85      51.9495    6.02467    9.9299
Write throughput:
min        max        avg        std_dev    conf99%
0          7.9        4.03527    1.62392    2.67656
On 13 Dec 2018, at 18:34, stan stanl-fedorauser@vfemail.net wrote:
On Thu, 13 Dec 2018 17:46:30 +0100 Paolo Valente paolo.valente@linaro.org wrote:
On 13 Dec 2018, at 17:41, stan stanl-fedorauser@vfemail.net wrote:
You don't have bfq for a comparison, but you can still get an idea of how good your system is, by comparing these start-up times with how long the same application takes to start when there is no I/O. Just do
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 0 0 seq 3 "replay-startup-io gnometerm"
cfq with the above command (without I/O): *BIG* difference.
Great! (for bfq :) )
Latency statistics:
min        max        avg         std_dev     conf99%
1.34       1.704      1.53367     0.183118    3.66336
Aggregated throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.23143     2.60745     15.4099
Read throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.22571     2.60522     15.3967
Write throughput:
min        max        avg         std_dev     conf99%
0          0.02       0.00571429  0.00786796  0.0464991
and get ready to be surprised (the next surprise will come when/if you try bfq ...)
I had a response saying that bfq isn't available for single queue devices, but there might be a workaround. So it might or might not happen, depending on whether I can get it working.
Actually, there's still a little confusion on this point. First, blk-mq *is not* only for multi-queue devices. blk-mq is for any kind of block device. If you have a fast, single-queue SSD, then blk-mq is likely to make it go faster. If you have a multi-queue drive, which implicitly means that your drive is very fast (according to the current standards for 'fast'), then it is 100% sure that blk-mq is the only way to utilize a high portion of the max speed of your multi-queue monster.
To use blk-mq, i.e., to have blk-mq handle your storage, you need (only) to tell the I/O stack that you want blk-mq to manage the I/O for the driver of your storage. In this respect, SCSI is for sure the most used generic storage driver. So, according to the instructions already provided by others, you can have blk-mq handle your storage device by, e.g., adding "scsi_mod.use_blk_mq=y" as kernel boot option. Such a choice of yours is not constrained, in any respect, by the nature of your drive, be it an SD Card, eMMC, HDD, SSD or whatever you want. As for multi-queue devices, they are handled by the NVMe driver, and for that one only blk-mq is available.
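For example, on Fedora the boot option can be made persistent with grubby (applying it to every installed kernel here is just one choice):

$ sudo grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=y"

After a reboot, cat /sys/module/scsi_mod/parameters/use_blk_mq should print Y.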
Once you have switched to blk-mq for your drive, you will have the set of I/O schedulers that live in blk-mq. bfq is among these schedulers. Actually, there is also an out-of-tree bfq available for the good old legacy block, but this is another story.
Finally, from 4.21 there will be no legacy block any longer. Only blk-mq will be available, so only blk-mq I/O schedulers will be available.
Thanks for trying my tests, Paolo
cfq
Latency statistics:
min        max        avg        std_dev    conf99%
22.142     27.157     24.1967    2.6273     52.5604
Aggregated throughput:
min        max        avg        std_dev    conf99%
67.29      139.74     105.491    19.245     39.7628
Read throughput:
min        max        avg        std_dev    conf99%
51.73      135.67     102.402    21.3985    44.2123
Write throughput:
min        max        avg        std_dev    conf99%
0.01       46.29      3.08857    8.37179    17.2972
noop
Latency statistics:
min        max        avg        std_dev    conf99%
40.861     42.021     41.3637    0.595266   11.9086
Aggregated throughput:
min        max        avg        std_dev    conf99%
45.66      72.89      55.9847    5.99054    9.87365
Read throughput:
min        max        avg        std_dev    conf99%
41.69      70.85      51.9495    6.02467    9.9299
Write throughput:
min        max        avg        std_dev    conf99%
0          7.9        4.03527    1.62392    2.67656
On Thu, 13 Dec 2018 19:59:14 +0100 Paolo Valente paolo.valente@linaro.org wrote:
On 13 Dec 2018, at 18:34, stan stanl-fedorauser@vfemail.net wrote:
On Thu, 13 Dec 2018 17:46:30 +0100 Paolo Valente paolo.valente@linaro.org wrote:
On 13 Dec 2018, at 17:41, stan stanl-fedorauser@vfemail.net wrote:
You don't have bfq for a comparison, but you can still get an idea of how good your system is, by comparing these start-up times with how long the same application takes to start when there is no I/O. Just do
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 0 0 seq 3 "replay-startup-io gnometerm"
cfq with the above command (without I/O): *BIG* difference.
Great! (for bfq :) )
Latency statistics:
min        max        avg         std_dev     conf99%
1.34       1.704      1.53367     0.183118    3.66336
Aggregated throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.23143     2.60745     15.4099
Read throughput:
min        max        avg         std_dev     conf99%
0          8.03       5.22571     2.60522     15.3967
Write throughput:
min        max        avg         std_dev     conf99%
0          0.02       0.00571429  0.00786796  0.0464991
and get ready to be surprised (the next surprise will come when/if you try bfq ...)
I had a response saying that bfq isn't available for single queue devices, but there might be a workaround. So it might or might not happen, depending on whether I can get it working.
Actually, there's still a little confusion on this point. First, blk-mq *is not* only for multi-queue devices. blk-mq is for any kind of block device. If you have a fast, single-queue SSD, then blk-mq is likely to make it go faster. If you have a multi-queue drive, which implicitly means that your drive is very fast (according to the current standards for 'fast'), then it is 100% sure that blk-mq is the only way to utilize a high portion of the max speed of your multi-queue monster.
To use blk-mq, i.e., to have blk-mq handle your storage, you need (only) to tell the I/O stack that you want blk-mq to manage the I/O for the driver of your storage. In this respect, SCSI is for sure the most used generic storage driver. So, according to the instructions already provided by others, you can have blk-mq handle your storage device by, e.g., adding "scsi_mod.use_blk_mq=y" as kernel boot option. Such a choice of yours is not constrained, in any respect, by the nature of your drive, be it an SD Card, eMMC, HDD, SSD or whatever you want. As for multi-queue devices, they are handled by the NVMe driver, and for that one only blk-mq is available.
Once you have switched to blk-mq for your drive, you will have the set of I/O schedulers that live in blk-mq. bfq is among these schedulers. Actually, there is also an out-of-tree bfq available for the good old legacy block, but this is another story.
Finally, from 4.21 there will be no legacy block any longer. Only blk-mq will be available, so only blk-mq I/O schedulers will be available.
And finally, once I added scsi_mod.use_blk_mq=y to the kernel command line, I was able to set bfq as my I/O scheduler (for me, under blk-mq there is only none or bfq). Here are the results of your utility using bfq. The latency nearly matches the no-I/O-load case. Thanks for writing a utility that is so easy to use and allows the evaluation of different schedulers.
bfq
Latency statistics:
min        max        avg        std_dev    conf99%
1.51       2.583      1.92533    0.576087   11.5249
Aggregated throughput:
min        max        avg        std_dev    conf99%
30.12      142.3      73.5314    45.6032    269.512
Read throughput:
min        max        avg        std_dev    conf99%
26.45      136.92     69.5629    44.7932    264.725
Write throughput:
min        max        avg        std_dev    conf99%
2.44       5.38       3.96857    0.948603   5.60619
cfq
Latency statistics:
min        max        avg        std_dev    conf99%
22.142     27.157     24.1967    2.6273     52.5604
Aggregated throughput:
min        max        avg        std_dev    conf99%
67.29      139.74     105.491    19.245     39.7628
Read throughput:
min        max        avg        std_dev    conf99%
51.73      135.67     102.402    21.3985    44.2123
Write throughput:
min        max        avg        std_dev    conf99%
0.01       46.29      3.08857    8.37179    17.2972
noop
Latency statistics:
min        max        avg        std_dev    conf99%
40.861     42.021     41.3637    0.595266   11.9086
Aggregated throughput:
min        max        avg        std_dev    conf99%
45.66      72.89      55.9847    5.99054    9.87365
Read throughput:
min        max        avg        std_dev    conf99%
41.69      70.85      51.9495    6.02467    9.9299
Write throughput:
min        max        avg        std_dev    conf99%
0          7.9        4.03527    1.62392    2.67656
And thank you also to all the other contributors to this thread.
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to complete.
Just a note: I would feel a lot more comfortable with this utility if it didn't have to run as root. Paranoia. Could you add functionality so that, if it is run as a normal user, it tests the I/O scheduler currently enabled? That is, it would check whether it is running as root; if not, it would just use whatever I/O scheduler is currently set, ignoring any scheduler parameter on the command line. Running as root, it would behave exactly as it does now.
The user would be responsible for issuing the
echo <scheduler-you-want-to-test> > /sys/block/<device-you-want-to-test>/queue/scheduler
as root if they wanted to run as a normal user.
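A minimal sketch of the requested fallback, purely as an illustration (DEV and the surrounding logic are assumptions, not code from comm_startup_lat.sh):

if [ "$(id -u)" -ne 0 ]; then
    # non-root: ignore any scheduler argument and read the active one;
    # the bracketed name in the sysfs file is the current scheduler
    SCHED=$(sed 's/.*\[\(.*\)\].*/\1/' /sys/block/"$DEV"/queue/scheduler)
    echo "Running unprivileged: testing current scheduler '$SCHED'"
fi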
On 13 Dec 2018, at 17:53, stan stanl-fedorauser@vfemail.net wrote:
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente paolo.valente@linaro.org wrote:
To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is
git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh <scheduler-you-want-to-test> 5 5 seq 3 "replay-startup-io gnometerm"
The last command line
- starts the reading of 5 files plus the writing of 5 other files
- replays, three times, the I/O that gnome terminal does while starting up (if you want I can tell you how to change the last command line so as to execute the original application, but you would get the same results);
- for each attempt, measures how long this start-up I/O takes to
complete.
Just a note: I would feel a lot more comfortable with this utility if it didn't have to run as root. Paranoia. Could you add functionality so that, if it is run as a normal user, it tests the I/O scheduler currently enabled? That is, it would check whether it is running as root; if not, it would just use whatever I/O scheduler is currently set, ignoring any scheduler parameter on the command line. Running as root, it would behave exactly as it does now.
The user would be responsible for issuing the
echo <scheduler-you-want-to-test> > /sys/block/<device-you-want-to-test>/queue/scheduler
as root if they wanted to run as a normal user.
I do agree with your point, and I have already tried to make this run as non-root. The actual problem is not the scheduler switch, but the need to drop caches before every start-up attempt. Without that, only the first attempt might be reliable, and only if the data is not already in the cache at the first iteration. To drop caches, it seems necessary to be root. Any suggestion to work around this issue would be super welcome!
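For reference, the usual root-only sequence for dropping caches is:

$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches

Writing 3 frees the page cache plus dentries and inodes; this write is exactly the step that fails without root privileges.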
Thanks, Paolo