Hi,
what is "official" stance on PEP8 or Pyflake in Python parts of OpenLMI? Or other means of static analysis?
Most code I've seen does not conform to PEP8 and/or does not pass a Pyflakes check.
**I'm not trying to say it should**, but ... in case there are no objections (or even if it's desired), I'd be happy to go and clean up at least lmi.test.* and other test modules I come across (common.py, methods.py, ...).
My motivation: I use both tools almost to their full extent on all my Python code (they help me a *lot* to catch typos and design errors). Now, when adding my code to existing modules, I get overwhelmed by warnings. The easiest way out I can imagine is to clean up the rest. (The next easiest would be to set exceptions for particular rules, but that only makes sense as a project-wide consensus.)
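For example, such exceptions could live in a shared config that the checkers pick up; a rough, untested sketch of a setup.cfg section (the ignored codes are placeholders, not a proposal):

    [flake8]
    # E501: line too long; E303: too many blank lines (placeholder codes)
    ignore = E501,E303
    exclude = .git,build,dist

(IIRC the pep8 script reads an analogous [pep8] section, so both tools could share one file.)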
Any comments?
Thanks, aL.
On 05/07/2014 09:18 AM, Alois Mahdal wrote:
[...]
Our coding conventions are listed at https://fedorahosted.org/openlmi/wiki/CodingConventions#Python
This is pretty loose, and an argument could certainly be made to standardize on PEP8. I'd like to hear what the main developers think on the subject.
On 05/07/2014 03:18 PM, Alois Mahdal wrote:
[...]
Well, it's quite difficult to follow a particular coding style across our project, which comprises Python providers, unit tests, the shell, scripts and various utilities. There are simply too many exceptions. For example:
* Autogenerated class names and methods for modules in providers: they are very long, a mix of CamelCase, lowercase and underscores, but they just make sense since they correspond to their CIM equivalents.
* CamelCase used throughout unittest: when using unittest.TestCase as a base class for our test cases, it's just impossible to follow these coding styles. Overriding any method results in a CamelCased method in your code.
* OpenLMI Scripts, and particularly the command-line wrappers, are specifically designed for quick prototyping and do not follow any coding style at all (you may expect a bunch of warnings).
It would be great to have all our Python code looking similar, but due to the above facts it's IMHO not possible. We could have rc files for these checkers with all the exceptions handled, but each project would need its own. We already have openlmi-providers/tools/pylint/pylintrc for pylint, which is designed for our Python providers. But try to use it on our tests - not a nice sight.
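Just to illustrate, a per-project rc file for the tests could start as something like this rough sketch (the disabled check and the names below are examples only, not a worked-out proposal):

    [MESSAGES CONTROL]
    # C0103 (invalid-name): CamelCase methods are forced by unittest.TestCase
    disable=C0103

    [BASIC]
    # short identifiers we might want to allow (examples only)
    good-names=i,j,k,ns

    [FORMAT]
    max-line-length=79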
Any effort to improve this situation is more than welcome. I'd appreciate having some tool that could point me to major deviations and really bad practices without generating so much noise that its output becomes troublesome to read. Making at least our tests look similar should not be a big problem. Let's start with that and see where it gets us. Please go on with your idea; let's discuss the details on review :).
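For a first low-noise pass, one option (assuming flake8 is available) would be to restrict it to the serious checks and leave style aside for now; the selection and path below are just an example:

    # E9* are syntax/IO errors, F* are the Pyflakes checks
    flake8 --select=E9,F src/lmi/test/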
I'm in favor of following PEP8 as closely as possible and deviating only in cases where it makes sense or is a must. For example, allowing 2-character identifiers makes sense to me :).
Cheers, Michal
Thanks to both Michal and Stephen for the comments.
Just a few notes, mostly from the point of view of static analysis tool usability.
On Fri, 09 May 2014 10:18:25 +0200 Michal Minář miminar@redhat.com wrote:
* autogenerated class names and methods for modules in providers: they are very long, a mix of CamelCase, lowercase and underscores, but they just make sense since they correspond to their CIM equivalents
* CamelCase used throughout unittest: when using unittest.TestCase as a base class for our test cases, it's just impossible to follow these coding styles. Overriding any method results in a CamelCased method in your code.
AFAIK, casing is not exactly a hard requirement of PEP8; there are naming recommendations, but at least they are not enforced by the pep8 script by default.
One exception is the line length limit of 79 characters, but even considering the above, I believe it would be very rare to have an identifier so long that it would force us to break this rule (and still make sense).
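For instance, this rough sketch passes the pep8 script untouched even though the inherited method name is CamelCase; the script checks layout, not naming (the test itself is made up):

    import unittest

    class ServiceTest(unittest.TestCase):

        def setUp(self):
            # the CamelCase name is dictated by unittest.TestCase itself
            self.ns = 'root/cimv2'

        def test_namespace(self):
            # our own identifiers can still follow lower_case style
            self.assertTrue(self.ns.startswith('root'))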
* OpenLMI Scripts, and particularly the command-line wrappers, are specifically designed for quick prototyping and do not follow any coding style at all (you may expect a bunch of warnings).
For that matter, at least flake8 and pep8 can be disabled by a magic comment in the file header.
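E.g., something like this (a sketch; the file-level marker is flake8-specific, and AFAIK the pep8 script only honors the per-line one):

    # flake8: noqa
    # ^ with this header comment in place, flake8 skips the whole file

    from os.path import *  # noqa
    # ^ per-line marker, silences the checkers on that one line only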
Making at least our tests look similar should not be a big problem. Let's start with that and see where it gets us. Please go on with your idea; let's discuss the details on review :).
Thanks, I have already started with lmi.test.util. (Unfortunately it's not on RB yet due to problems with RBTools --- I'm working on resolving that.)
Thanks, aL.