Hi,
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
- these python functions try to hide the object model - we assume that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
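A sketch of what such a wrapper could look like — the association class name is the one from the example above, while the surrounding lmishell object API (the .associators() call) is assumed here, not guaranteed:

```python
# Hypothetical high-level wrapper: hides the CIM association name
# behind a plain function. The AssocClass value is taken from the
# example above; the vg object's .associators() method is assumed.

def vg_get_pvs(vg):
    """Return the physical volumes (member extents) of a volume group."""
    return vg.associators(AssocClass="LMI_VGAssociatedComponentExtent")
```

The caller then only needs to know about VGs and PVs, never about LMI_VGAssociatedComponentExtent.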
- these python functions are synchronous, i.e. they do stuff and return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
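The "synchronous outside, parallel inside" contract could be sketched like this, where format_device stands in for any blocking per-device operation (both names are illustrative, not lmishell API):

```python
# Sketch: format several devices concurrently, but only return once
# every one of them has finished, so the caller sees an ordinary
# synchronous call. format_device is a placeholder for the real
# blocking per-device operation.
from concurrent.futures import ThreadPoolExecutor

def format_devices(devices, format_device):
    with ThreadPoolExecutor() as pool:
        # list() drains the iterator, i.e. waits for all workers,
        # before the executor shuts down and we return.
        return list(pool.map(format_device, devices))
```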
- each high-level function takes an LmiNamespace parameter, which specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
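That convention might look roughly like this in a scripton module — connect(), the module-level default, and the CIM class access are placeholders, not the real lmishell API:

```python
# Sketch of the optional-namespace convention: every high-level
# function accepts a namespace argument and falls back to a
# module-level default that was stored at connect time.
_default_namespace = None

def connect(namespace):
    """Remember the namespace of the most recent connection as the default."""
    global _default_namespace
    _default_namespace = namespace
    return namespace

def vg_list(namespace=None):
    """List volume groups on the given namespace, or on the default one."""
    ns = namespace or _default_namespace
    if ns is None:
        raise RuntimeError("no connection; call connect() first")
    return ns.LMI_VGStoragePool.instances()  # hypothetical CIM class access
```

For the common single-connection case the namespace argument is simply left out.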
- we should probably split these high-level functions into several modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these high-level functions -> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
- we should introduce some 'lmi' metacommand, which would wrap these command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
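The dispatch underneath such a metacommand could be as simple as a subcommand table, fedpkg/koji style; the vgcreate handler and its output string here are purely illustrative stand-ins for calls into the real scriptons:

```python
# Sketch of 'lmi' subcommand dispatch: the first argument selects a
# handler, the rest is passed through. cmd_vgcreate is a stand-in
# for a call to the real volume group scripton.

def cmd_vgcreate(args):
    name, devices = args[0], args[1:]
    return "created VG %s from %s" % (name, ", ".join(devices))

SUBCOMMANDS = {"vgcreate": cmd_vgcreate}

def lmi_main(argv):
    if not argv or argv[0] not in SUBCOMMANDS:
        return "usage: lmi <subcommand> [args...]"
    return SUBCOMMANDS[argv[0]](argv[1:])
```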
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of the office next week and have only sporadic email access this week.
Hi,

On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
agreed
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
- these python functions try to hide the object model - we assume that
administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
agreed
- these python functions are synchronous, i.e. they do stuff and return
once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
agreed
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
-> global configuration file(s)
- we should probably split these high-level functions into several modules
by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
agreed
- it should be easy to build command-line versions for these high-level
functions
So, will these "devel scripts" have "all functionality", with only some higher-level cmdline tools covering the common cases? The main question here is whether we really need to encapsulate everything or, if not everything, what to provide.
-> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
I'm not sure here, but an alias (or something similar) could help.
- we should introduce some 'lmi' metacommand, which would wrap these
command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
OK... which "version" of lmishell will we build upon? We can try to start creating scripts, report issues, and at some point mark lmishell and the scripts as "good enough". Any other thoughts?
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
I'm missing one piece - management of multiple systems at once. But I think it's on the TODO list of lmishell, so this could be easy to achieve.
RR
openlmi-devel mailing list openlmi-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/openlmi-devel
On Jun 12, 2013, at 5:23 AM, Roman Rakus rrakus@redhat.com wrote:
Hi,

On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
agreed
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
- these python functions try to hide the object model - we assume that
administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
agreed
- these python functions are synchronous, i.e. they do stuff and return
once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
agreed
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
-> global configuration file(s)
No, I think the idea here is that we would have you establish the namespace connection once and store that as a global variable in the lmishell. After that, unless you specifically pointed at a different namespace, we would just use that first one under the hood. We can't put this in a config file easily, because it implies an active connection and we don't want to be reconnecting constantly.
- we should probably split these high-level functions into several modules
by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
agreed
- it should be easy to build command-line versions for these high-level
functions
So, will these "devel scripts" have "all functionality", with only some higher-level cmdline tools covering the common cases? The main question here is whether we really need to encapsulate everything or, if not everything, what to provide.
The ideal case would be for the cmdline tools to cover any aspect of the scripton that is usefully exposed to the user, but this will not always be everything that the scripton can provide. I suggest that we just use our best judgement here on which pieces to turn into cmdline tools. We can always extend it later if we need to.
-> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
I'm not sure here, but alias (or something similar) could help here.
I think that since the stated goal of OpenLMI is to provide consistency between configuration tools where none previously existed, we should build the cleanup version first and leave the "mimic" approach as a nice-to-have RFE for the future. However, I would like Russ to comment on this.
- we should introduce some 'lmi' metacommand, which would wrap these
command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
OK... which "version" of lmishell will we build upon? We can try to start creating scripts, report issues, and at some point mark lmishell and the scripts as "good enough". Any other thoughts?
Let's try to keep the lmishell "core" as lean as possible and do most real work in the scriptons. My vision here is that lmishell should essentially just be a better wbemcli (plus scripton enablers), but when you add a scripton atop it, it becomes something usable by a human being. That's my opinion, of course. I am open to counter-arguments.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
I'm missing one piece - management of multiple systems at once. But I think it's on the TODO list of lmishell, so this could be easy to achieve.
Right now, our plan is to recommend that the end-user just builds a loop through machines in the outer script that calls the scriptons. As Jan mentioned, the more we tried to solve that directly in the scriptons, the more corner cases we had to fight through.
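The recommended outer loop is then trivial to write; connect() and the scripton passed in are placeholders for the real calls:

```python
# Sketch of multi-machine management left to the caller: loop over
# hosts, connect to each, and run the same scripton on every
# connection. connect and scripton are placeholder callables.

def run_on_hosts(hosts, connect, scripton):
    results = {}
    for host in hosts:
        ns = connect(host)        # one connection per machine
        results[host] = scripton(ns)
    return results
```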
RR
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
- these python functions try to hide the object model - we assume that
administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and return
once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or creating both async and sync versions of such functions? Because, again, forcing something that can be run asynchronously into a synchronous mode needlessly degrades what we already have. IMO we shouldn't make things more simple than simple.
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
- we should probably split these high-level functions into several modules
by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these high-level
functions -> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
- we should introduce some 'lmi' metacommand, which would wrap these
command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I would go with the metacommand style (a la virsh).
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
As for the logging, maybe use something similar to the logging decorators we now use in openlmi-storage? They would tell the lmishell (which I suppose would be used as an 'interpreter' for the scriptons) if it should log somehow or not. That would make it easier to create a centralized logging policy/style/output.
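Something in the spirit of those decorators could be as small as this — a generic sketch, not the actual openlmi-storage implementation, and vg_create is a toy stand-in for a real scripton:

```python
# Sketch of a central logging decorator for scriptons: every decorated
# function logs its entry and failures through one shared logger, so
# the logging policy can be changed in a single place.
import functools
import logging

log = logging.getLogger("lmi.scripton")

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling %s", func.__name__)
        try:
            return func(*args, **kwargs)
        except Exception:
            log.exception("%s failed", func.__name__)
            raise
    return wrapper

@logged
def vg_create(name):          # toy scripton for demonstration
    return "vg:" + name
```

The shell (or a config file) would then configure the "lmi.scripton" logger once to set the centralized policy.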
-- Jan Synacek Software Engineer, Red Hat
On 06/12/2013 06:20 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
Most admins are not really familiar with object-oriented programming. The largest set of admins we're targeting tend towards bash scripting with command-line tools. We want to capture that group and encourage them to use OpenLMI.
By making the calls useful and procedural, we can get them to start using OpenLMI. We're not changing the underlying OO API underneath. Once people are using our interface, they will always have the option of extending their usage to call the low-level OpenLMI object-oriented functions.
The point of the lmishell is to be *very* easy for admins to use. Object-oriented programming is (perceived to be) hard and will scare away a fair number of admins.
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or creating both async and sync versions of such functions? Because, again, forcing something that can be run asynchronously into a synchronous mode needlessly degrades what we already have. IMO we shouldn't make things more simple than simple.
Again, the point here is to simplify the interface into something that admins are comfortable with. Some of them will understand async processing, but most won't. In order for us to have an async interface, we'll need to provide a set of job-processing tools to wait for results and we'd have to train admins to know when to block and wait (or how to write a mainloop and do full async processing). Our view was that this was *far* too complicated for the average user (and as we went down the path of trying to figure out how to make it easier, we hit so many edge-cases that it became clear that providing async needs to be at earliest a "2.0" feature).
Remember again that what we're trying to do here is capture admins whose usual behavior is to just call command-line applications and wait for their return. This is little different from their perspective. Async is a difficult problem to solve, and while there are obvious performance gains to being able to run some activities in parallel, it introduces the possibility of race-conditions and other concurrency bugs.
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
There's only one parameter, namespace (which encompasses both the connection and namespace on which it operates). There will effectively be a global object that will save the state. The idea is that when we create a connection, we'll set the global variable internally. If you create multiple connections, the last one created will be the default.
Then, if you want to run a routine for a connection *other* than the default, you will need to specify the namespace parameter.
So for the majority of cases, this argument will simply be left out.
- we should probably split these high-level functions into several
modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these
high-level functions -> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I would go with the metacommand style (a la virsh).
I'm in favor of the metacommand style as well. As Jan and I discussed that day, much of the point of lmishell is going to be to reduce the number of *different* commands an admin needs to learn. Thus, duplicating the existing commands would go against that effort.
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
As for the logging, maybe use something similar to the logging decorators we now use in openlmi-storage? They would tell the lmishell (which I suppose would be used as an 'interpreter' for the scriptons) if it should log somehow or not. That would make it easier to create a centralized logging policy/style/output.
Some observations:
* A main goal is for sysadmins used to bash and shell scripts to easily move to lmishell and scriptons - we want them to think of it as "a familiar environment on steroids".
* Having said this, we want to take advantage of the power of the Python language as a scripting tool. However, going to a full OO interface will be a step too far...
* Radek suggested that we should do some prototyping. This makes a lot of sense, and has certainly served us well so far. I would like to see some prototypes before we firm up best practices for scriptons anyway, so having the prototyping phase include both procedural and OO examples is reasonable.
On a completely different topic, would it make sense to rename "lmishell" to just "lmi" before we go any further? "lmi" is shorter and easier to type, and I don't see where including "shell" adds any information.
Russ
On Fri, 2013-06-21 at 08:28 -0400, Stephen Gallagher wrote:
On 06/12/2013 06:20 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
Most admins are not really familiar with object-oriented programming. The largest set of admins we're targeting tend towards bash scripting with command-line tools. We want to capture that group and encourage them to use OpenLMI.
By making the calls useful and procedural, we can get them to start using OpenLMI. We're not changing the underlying OO API underneath. Once people are using our interface, they will always have the option of extending their usage to call the low-level OpenLMI object-oriented functions.
The point of the lmishell is to be *very* easy for admins to use. Object-oriented programming is (perceived to be) hard and will scare away a fair number of admins.
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get the list of physical volumes of a VG. We want a nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from the outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or creating both async and sync versions of such functions? Because, again, forcing something that can be run asynchronously into a synchronous mode needlessly degrades what we already have. IMO we shouldn't make things more simple than simple.
Again, the point here is to simplify the interface into something that admins are comfortable with. Some of them will understand async processing, but most won't. In order for us to have an async interface, we'll need to provide a set of job-processing tools to wait for results and we'd have to train admins to know when to block and wait (or how to write a mainloop and do full async processing). Our view was that this was *far* too complicated for the average user (and as we went down the path of trying to figure out how to make it easier, we hit so many edge-cases that it became clear that providing async needs to be at earliest a "2.0" feature).
Remember again that what we're trying to do here is capture admins whose usual behavior is to just call command-line applications and wait for their return. This is little different from their perspective. Async is a difficult problem to solve, and while there are obvious performance gains to being able to run some activities in parallel, it introduces the possibility of race-conditions and other concurrency bugs.
(we were thinking about python functions just scheduling multiple actions and doing stuff in parallel massively, but we quickly got into a lot of corner cases)
- each high-level function takes an LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by the caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
There's only one parameter, namespace (which encompasses both the connection and namespace on which it operates). There will effectively be a global object that will save the state. The idea is that when we create a connection, we'll set the global variable internally. If you create multiple connections, the last one created will be the default.
Then, if you want to run a routine for a connection *other* than the default, you will need to specify the namespace parameter.
So for the majority of cases, this argument will simply be left out.
- we should probably split these high-level functions into several
modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these
high-level functions -> it is not clear whether we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or do some cleanup (so creating an MD RAID looks the same as creating a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I would go with the metacommand style (a la virsh).
I'm in favor of the metacommand style as well. As Jan and I discussed that day, much of the point of lmishell is going to be to reduce the number of *different* commands an admin needs to learn. Thus, duplicating the existing commands would go against that effort.
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets); attached you can find the first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
As for the logging, maybe use something similar to the logging decorators we now use in openlmi-storage? They would tell the lmishell (which I suppose would be used as an 'interpreter' for the scriptons) if it should log somehow or not. That would make it easier to create a centralized logging policy/style/output.
On 06/21/2013 02:28 PM, Stephen Gallagher wrote:
On 06/12/2013 06:20 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher about how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up in duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
Most admins are not really familiar with object-oriented programming. The largest set of admins we're targeting tend towards bash scripting with command-line tools. We want to capture that group and encourage them to use OpenLMI.
By making the calls useful and procedural, we can get them to start using OpenLMI. We're not changing the underlying OO API underneath. Once people are using our interface, they will always have the option of extending their usage to call the low-level OpenLMI object-oriented functions.
The point of the lmishell is to be *very* easy for admins to use. Object-oriented programming is (perceived to be) hard and will scare away a fair number of admins.
Ok, thank you for clarifying.
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get list of physical volumes of a vg. We want nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or create both async and sync version of such functions. Because, again, forcing something that can be run asynchronously into a synchronous mode is degrading what we already have, needlessly. IMO we shouldn't make things more simple than simple.
Again, the point here is to simplify the interface into something that admins are comfortable with. Some of them will understand async processing, but most won't. In order for us to have an async interface, we'll need to provide a set of job-processing tools to wait for results and we'd have to train admins to know when to block and wait (or how to write a mainloop and do full async processing). Our view was that this was *far* too complicated for the average user (and as we went down the path of trying to figure out how to make it easier, we hit so many edge-cases that it became clear that providing async needs to be at earliest a "2.0" feature).
Remember again that what we're trying to do here is capture admins whose usual behavior is to just call command-line applications and wait for their return. This is little different from their perspective. Async is a difficult problem to solve, and while there are obvious performance gains to being able to run some activities in parallel, it introduces the possibility of race-conditions and other concurrency bugs.
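To illustrate the "synchronous from the outside" behavior: a blocking wrapper can hide the polling of an asynchronous job. This is only a sketch; `FakeJob` and its `is_finished()` are stand-ins, not a real CIM job API:

```python
import time

class FakeJob:
    """Stand-in for an asynchronous job object; a real scripton would get
    one back from an asynchronous CIM method invocation."""
    def __init__(self, polls_needed=3):
        self.polls = 0
        self.polls_needed = polls_needed

    def is_finished(self):
        # pretend the job completes after a few status polls
        self.polls += 1
        return self.polls >= self.polls_needed

def run_synchronously(job, poll_interval=0.01, timeout=10.0):
    """Block until the job reports completion, hiding the polling loop
    from the caller."""
    deadline = time.time() + timeout
    while not job.is_finished():
        if time.time() > deadline:
            raise RuntimeError("job did not finish before the timeout")
        time.sleep(poll_interval)
    return job

job = run_synchronously(FakeJob())
```

From the caller's perspective this behaves just like a command-line tool: the call returns once the work is done.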
(we were thinking about python functions just scheduling multiple actions and do stuff in parallel massively, but we quickly got into lot of corner cases)
- each high-level function takes a LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
There's only one parameter, namespace (which encompasses both the connection and namespace on which it operates). There will effectively be a global object that will save the state. The idea is that when we create a connection, we'll set the global variable internally. If you create multiple connections, the last one created will be the default.
Then, if you want to run a routine for a connection *other* than the default, you will need to specify the namespace parameter.
Hmm, I don't think that we really have to pollute the high-level API with this one parameter.
Currently, lmishell doesn't have any internal knowledge of its active connections. What if lmishell established an internal object for every connect() that is called and kept these connection objects in a list, for example? Maybe it would even make sense to have something like an iterator that would point to the currently selected connection, so it can be used as a default for all the high-level calls.
Then, lmishell could also be extended to define something like
def set_global_state(...):
    # changes the global state
    ...

def get_global_state():
    return _currently_selected_connection
Or perhaps, if it makes sense to have multiple selected connections, those functions could operate with lists/tuples. We could then define our high-level functions like so:
def create_mount(device, mountpoint, options=None, flags=None):
    c = get_global_state()
    # use c here to do all the lowlevel stuff
    # ...
    pass
Does this sound reasonable? I may be repeating something that has already been written here, but I wanted to be explicit about it.
So for the majority of cases, this argument will simply be left out.
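A rough sketch of the default-connection pattern being described (this `connect()` and the module-level holder are illustrative only, not the actual lmishell API):

```python
# Module-level default: the last connection created wins.
_default_namespace = None

def connect(host, namespace="root/cimv2"):
    """Create a connection and record it as the module-wide default."""
    global _default_namespace
    _default_namespace = {"host": host, "namespace": namespace}
    return _default_namespace

def vg_get_pvs(vg, ns=None):
    """High-level call: fall back to the default namespace when ns is omitted."""
    ns = ns if ns is not None else _default_namespace
    if ns is None:
        raise RuntimeError("no connection available: call connect() first")
    # a real implementation would traverse the association here
    return (ns["host"], vg)

connect("server1.example.com")
```

With this in place, `vg_get_pvs("mygroup")` just works against the last connection, and callers who manage several connections pass `ns=` explicitly.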
- we should probably split these high-level function to several
modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these
high-level functions -> it is not clear if we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or make some cleanup (so creation of MD raid looks the same like creation of a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I would go with the metacommand style (ala virsh).
I'm in favor of the metacommand style as well. As Jan and I discussed that day, much of the point of lmishell is going to be to reduce the number of *different* commands an admin needs to learn. Thus, duplicating the existing commands would go against that effort.
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets), attached you can find first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
As for the logging, maybe use something similar to the logging decorators we now use in openlmi-storage? They would tell the lmishell (which I suppose would be used as an 'interpreter' for the scriptons) if it should log somehow or not. That would make it easier to create a centralized logging policy/style/output.
On 06/26/2013 03:39 PM, Jan Synacek wrote:
On 06/21/2013 02:28 PM, Stephen Gallagher wrote:
On 06/12/2013 06:20 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
(we were thinking about python functions just scheduling multiple actions and do stuff in parallel massively, but we quickly got into lot of corner cases)
- each high-level function takes a LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
There's only one parameter, namespace (which encompasses both the connection and namespace on which it operates). There will effectively be a global object that will save the state. The idea is that when we create a connection, we'll set the global variable internally. If you create multiple connections, the last one created will be the default.
Then, if you want to run a routine for a connection *other* than the default, you will need to specify the namespace parameter.
Hmm, I don't think that we really have to pollute the high-level API with this one parameter.
Currently, lmishell doesn't have any internal knowledge of its active connections. What if lmishell established an internal object for every connect() that is called and kept these connection objects in a list, for example? Maybe it would even make sense to have something like an iterator that would point to the currently selected connection, so it can be used as a default for all the high-level calls.
Then, lmishell could also be extended to define something like
def set_global_state(...):
    # changes the global state
    ...

def get_global_state():
    return _currently_selected_connection
Or perhaps, if it makes sense to have multiple selected connections, those functions could operate with lists/tuples. We could then define our high-level functions like so:
def create_mount(device, mountpoint, options=None, flags=None):
    c = get_global_state()
    # use c here to do all the lowlevel stuff
    # ...
    pass
Does this sound reasonable? I may be repeating something that has already been written here, but I wanted to be explicit about it.
We expect that admins will build scripts by calling our functions, e.g.:

partition_create(...)
vg_create(...)
lv_create(...)

Now, if each function automatically iterates over a global list of connections, we must stop at the first error.

But if the admin (maybe with some help from lmishell) does the loop manually, and even in parallel, some machines can succeed in executing the script and possibly only a small fraction of them will stop on an error:

for c in m_connections:
    partition_create(..., ns=c)
    vg_create(..., ns=c)
    lv_create(..., ns=c)
Note that it would be nice to add some support for parallel execution of the loop and also nice logging and error reporting so the admin can easily see which machines failed and how.
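As a sketch of what such parallel execution with per-machine error reporting could look like (the host names and this trivial `partition_create` are stand-ins for real connections and scripton calls):

```python
from concurrent.futures import ThreadPoolExecutor

def partition_create(ns):
    # stand-in for a real scripton call against connection/namespace `ns`
    if ns == "badhost":
        raise RuntimeError("disk not found")
    return "ok"

def run_on(ns):
    """Run the script on one machine; never raise, report (host, error)."""
    try:
        partition_create(ns)
        return (ns, None)
    except Exception as exc:
        return (ns, str(exc))

connections = ["host1", "badhost", "host2"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_on, connections))

# machines that failed, with their error messages
failed = {ns: err for ns, err in results.items() if err is not None}
```

An error on one machine does not stop the others, and the admin gets a simple map of which hosts failed and why.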
Jan
Hi,
On Wed 12 of Jun 2013 12:20:14 Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up in duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
I agree, OOP is not something evil we want to hide from our users. It might sometimes be more convenient to use an object-based API than a procedural paradigm. Also, we should first agree on how the API should look, set some common functions, and make all of the scriptons as similar as possible.
Compare following examples of service scripton API (with questions we should agree upon as comments):
1) procedural:
def get_services(nm=None):
    """ Returns list of services """
    # How? As a list of LMI_Service instances? Or as a list of service names?

def service_start(service, nm=None):
    """ Start given service """
    # What is the service parameter? A string with the service name?
    # An LMI_Service instance? Both?

def service_status(service, nm=None):
    """ Get status of given service """
    # What to return? Some enum values?
2) object-based:
class Service(object):
    def __init__(self, name, nm=None):
        """ Get service by given name """

    def start(self):
        """ Start the service """

    def status(self):
        """ Get status of the service """
        # What to return? Some enum values?

    @classmethod
    def get_all(cls, nm=None):
        """ Get instances of all services on the system """
        # Or have this rather as a top-level function?
This is just an example of how I imagine the scripton API for services might look. I personally like the object-based API more, but I'm not the targeted user :) Maybe we could prototype both and show them to some potential users (sysadmins).
But we should definitely agree upon how the API should look for all of the scriptons. We really don't want to have just another set of different APIs with different look-and-feel.
- these python functions try to hide the object model - we assume that
administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get list of physical volumes of a vg. We want nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and return
once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or create both async and sync version of such functions. Because, again, forcing something that can be run asynchronously into a synchronous mode is degrading what we already have, needlessly. IMO we shouldn't make things more simple than simple.
Yes, I think we should have the possibility to let the long-running methods run asynchronously (as a non-default option).
(we were thinking about python functions just scheduling multiple actions and do stuff in parallel massively, but we quickly got into lot of corner cases)
- each high-level function takes a LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
Having two extra parameters for each function sounds like a huge API bloat. I think that having some 'global' one, i.e. some kind of a state object that the underlying layer (lmishell?) would use, would be better.
One has to add the namespace parameter only once (in the constructor) in the object-based approach.
- we should probably split these high-level function to several modules
by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these high-level
functions -> it is not clear if we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or make some cleanup (so creation of MD raid looks the same like creation of a VG)
I would go for creating a new set of commands rather than mimicking existing cmdline tools. These new tools should follow the scripton API closely, so users familiar with scriptons will feel comfortable when using the cmdline tools, and vice versa.
- we should introduce some 'lmi' metacommand, which would wrap these
command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I would go with the metacommand style (ala virsh).
I don't know, do we really want to have two "shells"? One lmishell for bare CIM access, and a second one for running scriptons. Isn't it better to integrate both of them into one shell?
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets), attached you can find first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them
provide the same user experience.
Yeah, this is the most important part of doing scriptons at all. We already have the lowlevel ugly API that can do almost everything one can imagine (CIM). We should focus on how to make it usable for our users, to make it comfortable to use and make it consistent.
Radek Novacek
On 06/12/2013 12:20 PM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
Hello!
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up in duplicating the API we already have. We also assume that our users are not familiar with OOP).
Define "our users". Are they admins that will use the scriptons from python scripts? If yes, I think that degrading the CIM API from OOP to pure procedural just doesn't sound right. Or maybe I'm just misunderstanding something here.
The CIM API will still be object-oriented; we won't hide _LmiInstance, which is used in lmishell to wrap CIM objects. However, we will offer functions to work with them (vg_create) instead of a class (VG.create()).
- these python functions try to hide the object model - we assume that
administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get list of physical volumes of a vg. We want nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and return
once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from outside perspective, the stuff is completed once the function returns.
What about leaving an option for functions that are asynchronous to run asynchronously? Or create both async and sync version of such functions. Because, again, forcing something that can be run asynchronously into a synchronous mode is degrading what we already have, needlessly. IMO we shouldn't make things more simple than simple.
This has already been agreed on with Peter Hatina. All async methods will have a synchronous variant, while the async one is still available.
Jan
On Tue, 11 Jun 2013 16:52:51 -0400 Jan Safranek jsafrane@redhat.com wrote:
- we should introduce some 'lmi' metacommand, which would wrap these
command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
Hm... Interesting. This idea came up already before -- we were talking with Jan Synacek about something like virsh (this is a better inspiration, since koji or fedpkg always connect to the same remote end). We need some easy way to tell what host, credentials, namespace, etc. to use for the command.
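For illustration, a virsh-style 'lmi' metacommand could be sketched with argparse roughly like this; the subcommand names and the --host option are assumptions, not a settled design:

```python
import argparse

def build_parser():
    """Build an 'lmi' metacommand parser with per-tool subcommands."""
    parser = argparse.ArgumentParser(prog="lmi")
    # global options shared by every subcommand: where and how to connect
    parser.add_argument("--host", default="localhost",
                        help="managed system to connect to")
    parser.add_argument("--namespace", default="root/cimv2",
                        help="CIM namespace to operate on")
    sub = parser.add_subparsers(dest="command")

    vg = sub.add_parser("vgcreate", help="create a volume group")
    vg.add_argument("name")
    vg.add_argument("devices", nargs="+")

    sub.add_parser("shell", help="start an interactive shell")
    return parser

args = build_parser().parse_args(
    ["--host", "server1", "vgcreate", "mygroup", "/dev/sda1", "/dev/sdb1"])
```

The dispatcher would then look up `args.command` and call the matching high-level function, passing the connection built from the global options.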
Regards,
On 06/11/2013 04:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
We agreed that:
- we should provide python modules with high-level functions
(we were thinking about nice classes, e.g. VolumeGroup with methods to extend/destroy/examine a volume group, but it would end up in duplicating the API we already have. We also assume that our users are not familiar with OOP).
- these python functions try to hide the object model - we assume
that administrators won't remember association names and won't use e.g. vg.associators(AssocClass="LMI_VGAssociatedComponentExtent") to get list of physical volumes of a vg. We want nice vg_get_pvs(vg) function. We will expose CIM classes and properties though.
- these python functions are synchronous, i.e. they do stuff and
return once the stuff is finished. They can do stuff in parallel inside (e.g. format multiple devices simultaneously) but from outside perspective, the stuff is completed once the function returns.
(we were thinking about python functions just scheduling multiple actions and do stuff in parallel massively, but we quickly got into lot of corner cases)
- each high-level function takes a LmiNamespace parameter, which
specifies the WBEM connection + the namespace on which it operates -> i.e. applications/other scripts can run our functions on multiple connections -> if the LmiNamespace is not provided by caller, some 'global' one will be used (so users just connect once and this connection is then used for all high-level functions)
- we should probably split these high-level function to several
modules by functionality, i.e. have lmi.networking and lmi.storage.vg, lmi.storage.lv etc.
- it should be easy to build command-line versions for these
high-level functions -> it is not clear if we should mimic existing cmdline tools (mdadm, vgcreate, ip, ...) or make some cleanup (so creation of MD raid looks the same like creation of a VG)
- we should introduce some 'lmi' metacommand, which would wrap
these command line tools, like 'lmi vgcreate mygroup /dev/sda1 /dev/sdb1' and 'lmi ip addr show'. It's quite similar to fedpkg or koji command line utilities.
- 'lmi' metacommand could also have a shell:
$ lmi shell
vgcreate mygroup /dev/sda1 /dev/sdb1
ip addr show
I tried to create a simple module for volume group management. I ran into several issues with lmishell (see trac tickets), attached you can find first proposal. It's quite crude and misses several important aspects like proper logging and error handling.
Please look at it and let us know what you think. It is just a proposal, we can change it in any way.
Once we agree on the concept, we must also define strict documentation and logging standards so all functions and scripts are nicely documented and all of them provide the same user experience.
Jan
P.S.: note that I'm out of office for next week and with sporadic email access this week.
On a call with Russ and Tomáš this morning, we had a few additional thoughts that we probably need to incorporate into the design.
First, I think we should plan for all LMI exceptions to descend from a single LMIException class. This base class should always provide a human-readable and *localizable* error (in Unicode format). The individual descendants of this class can and should carry more information in a specific manner, but the idea of the base class is this:
When invoking the LMI module from the command-line (either directly using the lmishell interpreter or via the 'lmi' meta-commands), we should always have a catch-all for LMIException in the main function that prints the human-readable error to STDERR. This will make it much easier for admins to identify where something went wrong (without seeing a scary python traceback).
Note: if we get back an exception that is NOT an LMIException, we should probably allow it to crash out, since it's most likely a programming bug (an error case we didn't handle properly that was cascaded back up the stack). We will want to know about those.
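A minimal sketch of this scheme; class names other than LMIException and the specific error message are hypothetical:

```python
import sys

class LMIException(Exception):
    """Base class for all LMI errors; always carries a human-readable,
    localizable message."""

class LMIVolumeGroupError(LMIException):
    """Descendants may carry more specific context (device, VG name, ...)."""

def main(argv):
    try:
        # stand-in for real scripton work that fails
        raise LMIVolumeGroupError("Volume group 'mygroup' already exists")
    except LMIException as exc:
        # catch-all: print a friendly message instead of a traceback
        sys.stderr.write("error: %s\n" % exc)
        return 1
    # any non-LMIException propagates and crashes: most likely a
    # programming bug we want to hear about
    return 0
```

The catch-all only handles our own exception hierarchy; anything else still produces a full traceback, as suggested above.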
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
<snip>
I created a scripton that uses the LogicalFile provider. I just wanted to be sure that I understood everything that has been discussed on the list so far.
The script itself is not written using lmishell, see [1]. Feel free to test it and criticize it;) Just don't forget to call use_exception(True) if running via the shell.
Also, we should decide where we want to put the scriptons. We could store them next to the providers' sources, but since the scriptons will probably have a common module, they should be put together. Any ideas?
[1] https://fedorahosted.org/openlmi/ticket/109
On 07/01/2013 10:44 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
<snip>
I created a scripton that uses the LogicalFile provider. I just wanted to be sure that I understood everything that has been discussed on the list so far.
The script itself is not written using lmishell, see [1]. Feel free to test it and criticize it;) Just don't forget to call use_exception(True) if running via the shell.
Also, we should decide where we want to put the scriptons. We could store them next to the providers' sources, but since the scriptons will probably have a common module, they should be put together. Any ideas?
Aha! Actually attaching the scripton...
Jan, great start! I've attached some high level comments.
Thanks, Russ
On Mon, 2013-07-01 at 10:53 +0200, Jan Synacek wrote:
On 07/01/2013 10:44 AM, Jan Synacek wrote:
On 06/11/2013 10:52 PM, Jan Safranek wrote:
Hi,
I've been talking to Stephen Gallagher how to proceed with client script development in lmishell. The goal is to provide high-level functionality to manage remote systems without complete knowledge of the CIM API.
<snip>
I created a scripton that uses the LogicalFile provider. I just wanted to be sure that I understood everything that has been discussed on the list so far.
The script itself is not written using lmishell, see [1]. Feel free to test it and criticize it;) Just don't forget to call use_exception(True) if running via the shell.
Also, we should decide where we want to put the scriptons. We could store them next to the providers' sources, but since the scriptons will probably have a common module, they should be put together. Any ideas?
Aha! Actually attaching the scripton...
<We should start with the name of the scripton, description, and arguments>
<Add versioning information so we can try to tell when scriptons have changed and what the most recent version is.>
<Need comments in the body of the scripton - a major goal is documentation and example.>
<Should this be a single scripton or multiple scriptons? I would lean toward having separate scriptons to create a new directory and to list the files in an existing directory>

import pywbem

# TODO rewrite this scripton using lmishell
# https://fedorahosted.org/openlmi/ticket/109
cliconn = pywbem.WBEMConnection('https://localhost', ('root', 'secret'))
NS = 'root/cimv2'
FS = 'Unknown'

<Should be handled by a session object>

# return a list of files in a directory
<List of arguments, etc.>
<How do we control the results returned? Is any filtering or wildcard support available? File details - "ls" vs. "ls -l"?>
<What happens if there are 10,000 files in the directory?>

def cmd_ls(path='/', ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
                                 namespace=_ns,
                                 keybindings={
                                     'CSCreationClassName': 'Linux_ComputerSystem',
                                     'CSName': 'rawhide-virt',
                                     'CreationClassName': 'LMI_UnixDirectory',
                                     'FSCreationClassName': 'LMI_LocalFileSystem',
                                     'FSName': FS,
                                     'Name': path})
    <We should explain what this function is doing, since we want scriptons to be used for documentation>
    assocs = cliconn.Associators(cop, AssocClass='LMI_DirectoryContainsFile')
    return [a['Name'] for a in assocs]

<How do we handle errors - directory doesn't exist, don't have permission to read directory, no files in directory, etc.?>

def cmd_cd(path, ns=None):
    # change directory to path?
    # have a global current path variable and use it somehow for something?
    pass

<error handling?>

def cmd_mkdir(path, ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
                                 namespace=_ns,
                                 keybindings={
                                     'CSCreationClassName': 'Linux_ComputerSystem',
                                     'CSName': 'rawhide-virt',
                                     'CreationClassName': 'LMI_UnixDirectory',
                                     'FSCreationClassName': 'LMI_LocalFileSystem',
                                     'FSName': FS,
                                     'Name': path})
    inst = pywbem.CIMInstance('LMI_UnixDirectory')
    inst['CSCreationClassName'] = 'Linux_ComputerSystem'
    inst['CSName'] = 'rawhide-virt'
    inst['CreationClassName'] = 'LMI_UnixDirectory'
    inst['FSCreationClassName'] = 'LMI_LocalFileSystem'
    inst['FSName'] = 'Unknown'
    inst['Name'] = path
    inst.path = cop
    cliconn.CreateInstance(inst)

<Error handling? What happens if the directory exists?>

def cmd_rmdir(path, ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
                                 namespace=_ns,
                                 keybindings={
                                     'CSCreationClassName': 'Linux_ComputerSystem',
                                     'CSName': 'rawhide-virt',
                                     'CreationClassName': 'LMI_UnixDirectory',
                                     'FSCreationClassName': 'LMI_LocalFileSystem',
                                     'FSName': FS,
                                     'Name': path})
    cliconn.DeleteInstance(cop)

<What happens if there are files in the directory?>
On 07/01/2013 04:21 PM, Russell Doty wrote:
Jan, great start! I've attached some high level comments.
<We should start with the name of the scripton, description, and arguments>
<Add versioning information so we can try to tell when scriptons have changed and what the most recent version is.>
<Need comments in the body of the scripton - a major goal is documentation and example.>
I didn't write any of these since this script was meant to be a quick example and the commands themselves are pretty clear. But, I agree, everything that you listed should be there.
<Should this be a single scripton or multiple scriptons? I would lean toward having separate scriptons to create a new directory and to list the files in an existing directory>
In this particular case, I think that they should all be in one file -- all commands are very simple and all are related to the LogicalFile provider.
import pywbem

# TODO rewrite this scripton using lmishell
# https://fedorahosted.org/openlmi/ticket/109
cliconn = pywbem.WBEMConnection('https://localhost', ('root', 'secret'))
NS = 'root/cimv2'
FS = 'Unknown'
<Should be handled by a session object>
Agreed, should be part of the shell.
# return a list of files in a directory
<List of arguments, etc.>
<How do we control the results returned? Is any filtering or wildcard support available? File details - "ls" vs. "ls -l"?>
I simply return all the files (their names) in a list. I think that support for additional parameters could be added in the scripton itself, but it really depends on how deep we want to go before recreating the whole operating system:) If I wanted, for example, to see the files' access rights, I would have to call GetInstance() on every file to get its properties and that might be quite slow, especially if the CIMOM and the scripton are not on the same host.
<What happens if there are 10,000 files in the directory?>
Currently, the provider returns an error that there are too many files in a directory.
def cmd_ls(path='/', ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
        namespace=_ns,
        keybindings={
            'CSCreationClassName':'Linux_ComputerSystem',
            'CSName':'rawhide-virt',
            'CreationClassName':'LMI_UnixDirectory',
            'FSCreationClassName':'LMI_LocalFileSystem',
            'FSName':FS,
            'Name':path })
    assocs = cliconn.Associators(cop, AssocClass='LMI_DirectoryContainsFile')
    return [a['Name'] for a in assocs]

<We should explain what this function is doing, since we want scriptons to be used for documentation>
<How do we handle errors - directory doesn't exist, don't have permission to read directory, no files in directory, etc.?>
An exception with a user-friendly error message is returned. The same applies for all of your error handling related questions.
def cmd_cd(path, ns=None):
    # change directory to path?
    # have a global current path variable and use it somehow for something?
    pass

<error handling?>
def cmd_mkdir(path, ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
        namespace=_ns,
        keybindings={
            'CSCreationClassName':'Linux_ComputerSystem',
            'CSName':'rawhide-virt',
            'CreationClassName':'LMI_UnixDirectory',
            'FSCreationClassName':'LMI_LocalFileSystem',
            'FSName':FS,
            'Name':path })
    inst = pywbem.CIMInstance('LMI_UnixDirectory')
    inst['CSCreationClassName'] = 'Linux_ComputerSystem'
    inst['CSName'] = 'rawhide-virt'
    inst['CreationClassName'] = 'LMI_UnixDirectory'
    inst['FSCreationClassName'] = 'LMI_LocalFileSystem'
    inst['FSName'] = 'Unknown'
    inst['Name'] = path
    inst.path = cop
    cliconn.CreateInstance(inst)
<Error handling? What happens if the directory exists?>
def cmd_rmdir(path, ns=None):
    _ns = ns if ns else NS
    cop = pywbem.CIMInstanceName(classname='LMI_UnixDirectory',
        namespace=_ns,
        keybindings={
            'CSCreationClassName':'Linux_ComputerSystem',
            'CSName':'rawhide-virt',
            'CreationClassName':'LMI_UnixDirectory',
            'FSCreationClassName':'LMI_LocalFileSystem',
            'FSName':FS,
            'Name':path })
    cliconn.DeleteInstance(cop)

<What happens if there are files in the directory?>
On 07/02/2013 08:38 AM, Jan Synacek wrote:
# return a list of files in a directory

<List of arguments, etc.>
<How do we control the results returned? Is any filtering or wildcard support available? File details - "ls" vs. "ls -l"?>
I simply return all the file names in a list. I think that support for additional parameters could be added in the scripton itself, but it really depends on how deep we want to go before recreating the whole operating system :) If I wanted, for example, to see the files' access rights, I would have to call GetInstance() on every file to get its properties, and that might be quite slow, especially if the CIMOM and the scripton are not on the same host.
Maybe we could add a new association between UnixDirectory and UnixFile (= all UnixFiles in a directory), so you could get both the LogicalFiles and the UnixFiles in a directory in just two Associators calls. Is it possible to match these two sets of objects together?
Jan
On 07/02/2013 08:59 AM, Jan Safranek wrote:
On 07/02/2013 08:38 AM, Jan Synacek wrote:
# return a list of files in a directory

<List of arguments, etc.>
<How do we control the results returned? Is any filtering or wildcard support available? File details - "ls" vs. "ls -l"?>
I simply return all the file names in a list. I think that support for additional parameters could be added in the scripton itself, but it really depends on how deep we want to go before recreating the whole operating system :) If I wanted, for example, to see the files' access rights, I would have to call GetInstance() on every file to get its properties, and that might be quite slow, especially if the CIMOM and the scripton are not on the same host.
Maybe we could add a new association between UnixDirectory and UnixFile (= all UnixFiles in a directory), so you could get both the LogicalFiles and the UnixFiles in a directory in just two Associators calls. Is it possible to match these two sets of objects together?
Hmm, scratch what I wrote... After calling Associators with AssocClass='LMI_DirectoryContainsFile', I already have all the instances.
Summarizing the email replies and the discussion around them - in addition to the initial email, the following also applies to scripts:
* scripts reside in the standard python directory, i.e. /usr/lib/python2.7/site-packages/lmi/scripts/storage/vg.py
  o there must be a description (on wiki) how to copy, modify and use a modified script easily (e.g. by setting PYTHONPATH or ~/.lmirc?)
* scripts use standard python logging, lmishell provides a formatter and writes the messages to an appropriate place (stderr, file, ...)
* lmishell also formats exceptions appropriately - as errors to stderr
* scripts raise LmiError as exception; the top-level script (e.g. the lmi metacommand, see below) catches it and either sends it to stderr or prints the complete traceback (in verbose mode)
* each script must check the remote API version and refuse to operate if it is incompatible
  o with optional override
  o -> our API must report its version in a profile registration
  o -> we need to do the profile registration in our providers!
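A minimal sketch of such a version check, assuming the provider reports a "major.minor" version string via its profile registration; the exact compatibility rule used here (same major version, equal or newer minor version) is an assumption, not a decision from this thread:

```python
def is_api_compatible(reported, required):
    # Refuse on a different major version; allow newer minor versions.
    rep_major, rep_minor = (int(x) for x in reported.split('.')[:2])
    req_major, req_minor = (int(x) for x in required.split('.')[:2])
    return rep_major == req_major and rep_minor >= req_minor
```

A script would run this early and raise its error exception when the check fails, with a command-line flag implementing the optional override.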
* the scripts reside on github, which allows for easy management of community contributions
  o again, this needs to be documented
* there must be some 'common' library which defines support functions for logging, cmdline parsing and LmiError
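One possible shape of that 'common' module (every name here is illustrative, nothing was agreed on yet): the LmiError base class plus a logging helper that scriptons call instead of configuring logging themselves, so lmishell can later swap in its own handler and formatter.

```python
import logging
import sys

class LmiError(Exception):
    """Base exception for scriptons; the caller decides how to present it."""

def get_logger(name):
    # Scriptons would call get_logger(__name__) and never touch handlers.
    logger = logging.getLogger(name)
    if not logger.handlers:
        # Default to stderr; lmishell would replace this with its own handler.
        handler = logging.StreamHandler(sys.stderr)
        handler.setFormatter(
            logging.Formatter('%(levelname)s: %(name)s: %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```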
Sample script usage (in lmishell):
from lmi.scripts import logicalfile
files = logicalfile.list('/mnt')
# 'files' is now an array of LmiInstances of LMI_LogicalFile
for file in files:
    print file.Name

logicalfile.mkdir('/mnt/mydisk')
* we create new 'metacommand' /usr/bin/lmi
  o it scans a common area (e.g. /usr/libexec/lmi/cmd) for sub-commands
  o it has uniform usage for various management areas:
      $ lmi directory list /mnt
      $ lmi directory create /mnt/payroll
      $ lmi storage list
      $ lmi service list
      $ lmi service restart httpd
  o it has an interactive mode:
      $ lmi -c remote.host.openlmi.com -U root
      password: ******
      > directory list /mnt
      > directory create /mnt/payroll
      > service list
      > service restart httpd
  o it can run a command (or set of commands) on multiple remote hosts, consistently reporting errors
      + e.g. list of machines stored in a file
      + maybe SLP later
      + _we really need kerberos, passwords do not scale well here_
  o we need documentation (on wiki) how to create new subcommands
      + the part in /usr/libexec/lmi/cmd just registers new subcommands and parses the command line, calling functions imported from /usr/lib/python2.7/site-packages/lmi/scripts/
  o we should *not* mimic existing cmdline tools (mdadm, vgcreate, ip, ...). Our goal should be to reduce the number of things new users need to learn. The variance in these classic tools is one of the primary issues with manageability on Linux.
On 07/03/13 09:38, Jan Safranek wrote:
  o we need documentation (on wiki) how to create new subcommands
      + the part in /usr/libexec/lmi/cmd just registers new subcommands and parses the command line, calling functions imported from /usr/lib/python2.7/site-packages/lmi/scripts/
What about having the filename define the subcommand? It would simplify things a lot. Example:
/usr/libexec/lmi/account/list_user
/usr/libexec/lmi/account/create_user
/usr/libexec/lmi/account/delete_user
/usr/libexec/lmi/storage/create_vg
...
pros:
- simplifies the process
- one file does one thing - the UNIX way
- transparency
cons:
- a lot of files to maintain
Any ideas, objections?
RR
On 3.7.2013 10:38, Roman Rakus wrote:
On 07/03/13 09:38, Jan Safranek wrote:
  o we need documentation (on wiki) how to create new subcommands
      + the part in /usr/libexec/lmi/cmd just registers new subcommands and parses the command line, calling functions imported from /usr/lib/python2.7/site-packages/lmi/scripts/
What about having the filename define the subcommand? It would simplify things a lot. Example:
/usr/libexec/lmi/account/list_user
/usr/libexec/lmi/account/create_user
/usr/libexec/lmi/account/delete_user
/usr/libexec/lmi/storage/create_vg
...
pros:
- simplifies the process
- one file does one thing - the UNIX way
- transparency
cons:
- a lot of files to maintain
Any ideas, objections?
I like the idea, but each such scripton could define its own subcommands. The example above would result in these commands:
* lmi list_user ...
* lmi create_user ...
* lmi delete_user ...
* lmi create_vg ...
I would prefer the following layout:

* /usr/libexec/lmi/account/account
* /usr/libexec/lmi/storage/vg
* /usr/libexec/lmi/storage/md
* /usr/libexec/lmi/software/sw
Each would define its own subcommands. Then the metacommand would take:
* lmi account list ...
* lmi account create ...
* lmi account delete ...
* lmi vg ...
* lmi md ...
* lmi sw ...
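The registration described above - each per-area scripton (e.g. /usr/libexec/lmi/account/account) declaring its own sub-commands while the metacommand only routes to them - could be sketched like this. The decorator-based registry and all the names are one possible mechanism, not a decided interface.

```python
SUBCOMMANDS = {}

def subcommand(name):
    # Register the decorated function under a sub-command name.
    def register(func):
        SUBCOMMANDS[name] = func
        return func
    return register

@subcommand('list')
def list_users(args):
    return 'listing users'

@subcommand('create')
def create_user(args):
    return 'creating user %s' % args[0]

def run(argv):
    # 'lmi account create joe' would end up here as argv=['create', 'joe'].
    return SUBCOMMANDS[argv[0]](argv[1:])
```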
Mm
openlmi-devel mailing list
openlmi-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/openlmi-devel
I created a git repository at https://github.com/openlmi/openlmi-scripts for our scripts. We don't use fedorahosted, to allow super-easy contributions from the community via github pull requests.
I also marked all other openlmi repositories on github as deprecated; these are just leftovers from our github evaluation.
Jan