On Fri, 05.10.18 12:28, Nicholas Miell (nmiell@gmail.com) wrote:
This is not quite the 1M you appear to ask for though… I picked 256K mostly because I wanted to stay lower than the kernel's built-in max (which is 1M, i.e. /proc/sys/fs/nr_open), and I needed to pick something. Do you have any particular reason to prefer 1M over 256K? I am completely open to suggestions there...
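For reference, the two knobs in play here are the per-process RLIMIT_NOFILE soft/hard limits and the kernel-wide fs.nr_open ceiling that bounds the hard limit. A minimal sketch (not taken from the systemd PR, just an illustration of the values being discussed) of how a process can inspect both:

/* Sketch only: print the per-process RLIMIT_NOFILE limits and the
 * kernel-wide fs.nr_open ceiling. Not code from the systemd PR. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
                perror("getrlimit");
                return EXIT_FAILURE;
        }
        printf("RLIMIT_NOFILE soft=%llu hard=%llu\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) rl.rlim_max);

        /* fs.nr_open is the upper bound for the hard limit
         * (1048576, i.e. 1M, by default). */
        FILE *f = fopen("/proc/sys/fs/nr_open", "re");
        if (f) {
                unsigned long long nr_open;
                if (fscanf(f, "%llu", &nr_open) == 1)
                        printf("fs.nr_open=%llu\n", nr_open);
                fclose(f);
        }
        return EXIT_SUCCESS;
}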
The upstream esync branch requests setting the hard limit to 1M.
https://github.com/zfigura/wine/blob/esync/README.esync
I haven't torn apart the project to see if 1M is really necessary, so a different limit may be up for discussion.
esync uses eventfd to reduce IPC to wineserver when emulating Windows kernel objects; the exact number of eventfds needed depends entirely on the behavior of the Windows application you are running.
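To make the mechanism concrete: eventfd gives each emulated synchronization object a kernel-side counter that can be signaled and waited on purely through file descriptors, so the process avoids a round trip to wineserver for every wait. A rough sketch of that pattern (illustrative only, not code from the esync branch) could look like this:

/* Rough sketch of the eventfd pattern: one eventfd per emulated event
 * object, signaled with write() and waited on with poll(). A busy
 * Windows application may need a great many of these fds at once. */
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void) {
        int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
        if (efd < 0) {
                perror("eventfd");
                return 1;
        }

        /* Roughly "SetEvent": bump the counter so waiters wake up. */
        uint64_t one = 1;
        if (write(efd, &one, sizeof(one)) != sizeof(one))
                perror("write");

        /* Roughly "WaitForSingleObject": poll until the counter is non-zero. */
        struct pollfd pfd = { .fd = efd, .events = POLLIN };
        if (poll(&pfd, 1, 1000) > 0 && (pfd.revents & POLLIN)) {
                uint64_t value;
                if (read(efd, &value, sizeof(value)) == sizeof(value))
                        printf("event signaled (counter was %llu)\n",
                               (unsigned long long) value);
        }

        close(efd);
        return 0;
}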
So, any idea why they picked 1M? Are there typical apps that really require that many?
I mean, it could be one of two things:
1) yes, they ran into real-life apps that require 500K fds, and hence set the limit to 1M since they can't set it any higher anyway and it's comfortably above (i.e. double) 500K.
2) no, they didn't run into real-life apps like this, but didn't want to figure out what a good limit would be, and hence set it to the kernel's built-in maximum of 1M.
If it's #1, then I figure we should bump the systemd upstream limit to 1M too. If it's #2, then I figure we can start with 256K, as my PR currently does, for now.
Lennart
On Fri, Oct 5, 2018 at 10:20 PM Lennart Poettering <mzerqung@0pointer.de> wrote:
I thus prepared this a few days ago:
This is great, thank you.
So, any idea why they picked 1M? Are there typical apps that really
require that many?
I've emailed Zebediah Figura, the esync author. I asked him either to get back to me (and I'll forward his reply here) or to comment in your pull request. Hopefully he can tell us what his process was for picking the 1M value.