I tried to find out why our client scripts are so slow and I got quite surprising results.
With default lmishell and https:// through loopback, I can get ~5-10 complete requests+responses/sec.
The bottlenecks are:
1) lmishell - it did some unnecessary queries; that was promptly fixed (http://reviewboard-openlmi.rhcloud.com/r/817/).
2) lmishell - it creates nice Python objects for everything, which takes a non-trivial amount of time (and memory), so I did my performance tests with native pywbem. There is not much we can do about it; I can only suggest using lmishell for tasks where performance is not that important.
3) pywbem - it opens a new TCP connection for each request (rhbz#1004295), and TLS takes some time to establish.
4) sblim-cmpi-base - it's really badly written and forks *a lot* for each request it gets, e.g. it computes the amount of swap by calling system("cat /proc/swaps | awk '{print $3;}' | sed 1d"); (a fork-free sketch follows right after this list).
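For illustration, a fork-free version of that swap query might look like this in Python (a sketch, not sblim-cmpi-base's actual code; it reads /proc/swaps directly and sums the Size column that the awk/sed pipeline extracts):

def total_swap_kib(path="/proc/swaps"):
    """Sum the Size column (KiB) of /proc/swaps without forking."""
    total = 0
    with open(path) as f:
        next(f)  # skip the "Filename Type Size Used Priority" header
        for line in f:
            fields = line.split()
            if len(fields) >= 3:
                total += int(fields[2])  # third field = Size, in KiB
    return total

print(total_swap_kib())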
Results with LMI_Account provider:
- pywbem + https:                            11 req/sec
- pywbem + http:                             21 req/sec
- pywbem + unix socket:                      25 req/sec
- pegasus native C++ library + http:         25 req/sec
- pegasus native C++ library + unix socket:  33 req/sec
Conclusions:
1) HTTPS decreases performance by 25%; I hope it gets much better when rhbz#1004295 is fixed.
2) Python decreases performance by another 25% (!); my wild guess is that it's because of its XML parser.
3) Even with a local unix socket and the native C++ library, Pegasus takes 30 ms to process one request.
Note that the tests were done on virtual HW, so the requests/sec figures should be taken as a relative measure.
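For reference, the kind of loop behind numbers like these looks roughly as follows (a sketch only, not the exact harness; host and credentials are placeholders). Note that pywbem currently opens a new TCP+TLS connection for every request (rhbz#1004295), so connection setup is counted in every iteration:

import time
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989',
                             ('pegasus', 'password'),
                             default_namespace='root/cimv2')

N = 100
start = time.time()
for _ in range(N):
    # one complete request+response round trip
    conn.EnumerateInstanceNames('LMI_Account')
elapsed = time.time() - start
print('%.1f req/sec' % (N / elapsed))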
Jan
On Thu, 2013-09-05 at 09:40 +0200, Jan Safranek wrote:
> - lmishell - it creates nice Python objects for everything, which takes
>   a non-trivial amount of time (and memory), so I did my performance
>   tests with native pywbem.
Jan, can you give us at least a rough estimate of lmishell performance? Are we talking about a factor of 2 performance difference? Factor of 10? Factor of 100?
> - sblim-cmpi-base - it's really badly written and forks *a lot* for each
>   request it gets, e.g. it computes the amount of swap by calling
>   system("cat /proc/swaps | awk '{print $3;}' | sed 1d");
Ouch! How common are the really bad operations? Once per session, every call, depends on what you are doing?
On 09/05/2013 03:04 PM, Russell Doty wrote:
> Jan, can you give us at least a rough estimate of lmishell performance?
> Are we talking about a factor of 2 performance difference? Factor of 10?
> Factor of 100?
pywbem is roughly 2x faster than lmishell
> Ouch! How common are the really bad operations? Once per session, every
> call, depends on what you are doing?
It depends. Usually it affects only queries to Linux_ComputerSystem, Linux_OperatingSystem and Linux_UnixProcess, but as almost everything in CIM has an association to Linux_ComputerSystem, it may negatively impact other CIM requests to storage, account or network as well.
We need sblim-cmpi-base only to provide the Linux_ComputerSystem object that represents the actual managed system. I checked that Pegasus provides PG_ComputerSystem, which we can use for the same purpose; we just need to add some config files to our providers.
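A quick pywbem probe along these lines can confirm that Pegasus really serves PG_ComputerSystem (a sketch; host and credentials are placeholders):

import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989',
                             ('pegasus', 'password'),
                             default_namespace='root/cimv2')

# If Pegasus registers PG_ComputerSystem, this returns an instance name
# for the managed system without going through sblim-cmpi-base.
for name in conn.EnumerateInstanceNames('PG_ComputerSystem'):
    print(name)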
Jan
On Thu, 2013-09-05 at 16:49 +0200, Jan Safranek wrote:
> pywbem is roughly 2x faster than lmishell
OK. Not good but not as bad as it sounded.
> We need sblim-cmpi-base only to provide the Linux_ComputerSystem object
> that represents the actual managed system. I checked that Pegasus
> provides PG_ComputerSystem, which we can use for the same purpose; we
> just need to add some config files to our providers.
Hmmm... Should we do our own implementation of Linux_ComputerSystem in OpenLMI?
On 09/05/2013 11:02 AM, Russell Doty wrote:
> Hmmm... Should we do our own implementation of Linux_ComputerSystem in
> OpenLMI?
"Don't reinvent any wheel we can steal". If PG_ComputerSystem provides the same functionality with better performance, we should just prefer that (if it's available, obviously it won't be with sfcb).
On 09/05/2013 05:02 PM, Russell Doty wrote:
>> pywbem is roughly 2x faster than lmishell
> OK. Not good but not as bad as it sounded.
Well, lmishell is a wrapper around pywbem, so it won't be as fast as pywbem itself. Lots of CIM classes are wrapped so they look more natural (an object model) in the shell.
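To illustrate where that cost comes from (a sketch of the wrapping pattern, not lmishell's actual code): every CIMInstance returned by pywbem gets turned into a friendlier Python object, which adds per-instance work and memory on top of the raw call.

class WrappedInstance(object):
    """Hypothetical wrapper exposing CIM properties as attributes."""
    def __init__(self, cim_instance):
        self._inst = cim_instance
        # copy every CIM property into a plain Python attribute
        for prop_name, prop in cim_instance.properties.items():
            setattr(self, prop_name, prop.value)

def enumerate_wrapped(conn, classname):
    # one wrapper object per instance, on top of pywbem's own XML parsing
    return [WrappedInstance(i) for i in conn.EnumerateInstances(classname)]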