I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list. Here's a few of the common questions and my thoughts to get us started.
(1) What language(s) should be used?
IMO, bash. I'm leaning this way because it's included in *every* RHEL release, whereas puppet modules would require the installation of 3rd party software. I'd like to see as much done through 'native' tools as possible. There are certainly advantages to Perl (e.g., potential speed); however, I don't think we want to assume Perl is installed on all RHEL machines.
(2) Do we perform checking in the scripts?
Defined further, should the scripts contain conditional checks to see if they should be run? IMO, no. That's what OVAL is for.
(3) Where do we begin?
- Name remediation scripts after corresponding XCCDF rule
- Build process includes them into final ssg-rhel6-xccdf.xml
Known challenge on passing XCCDF variables through to the scripts, however I wouldn't let this hold us up. Still *tons* of work to be done while this gets sorted.
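To make the naming idea concrete, here is a minimal sketch (my own illustration, not an existing SSG script) of what a fix script named after its XCCDF rule might look like, with the rule's value arriving through an environment variable. The helper name, key, and default value are all hypothetical:

```shell
#!/bin/bash
# Hypothetical fix script, e.g. named password_min_length.sh after its rule.
# The variable name and default below are illustrative, not taken from SSG.

# Ensure a key/value setting exists exactly once in a login.defs-style file.
ensure_login_defs_setting() {
    local key="$1" value="$2" file="${3:-/etc/login.defs}"
    if grep -q "^${key}\b" "$file"; then
        # Replace the existing setting in place.
        sed -i "s/^${key}\b.*/${key} ${value}/" "$file"
    else
        # Append the setting if it is absent.
        echo "${key} ${value}" >> "$file"
    fi
}

# The value would be passed through from the XCCDF <Value>; fall back to a
# default when run standalone.
# ensure_login_defs_setting PASS_MIN_LEN "${var_password_min_len:-14}"
```

The invocation at the bottom is left commented out since it would modify /etc/login.defs; the point is only the shape: one script per rule, parameterized by environment.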
There's a good bit of RHEL6 content in the Aqueduct project that (I believe) Tresys committed. Perhaps we could reuse those scripts?
You rang? Or, y'know, whatever sound emails make. The internet is obviating the need for the onomatopoeia.
On Monday, March 25, 2013 10:15 PM, Shawn Wells wrote:
I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list. Here's a few of the common questions and my thoughts to get us started.
I feel like I wade into this conversation from time to time and end up repeating myself a bit. A couple past threads [1] for context [2] should keep me from being too repetitive.
(1) What language(s) should be used?
IMO, bash. I'm leaning this way because it's included in *every* RHEL release, whereas puppet modules would require the installation of 3rd party software. I'd like to see as much done through 'native' tools as possible. There are certainly advantages to Perl (e.g., potential speed); however, I don't think we want to assume Perl is installed on all RHEL machines.
Puppet's high-level language lets you statefully define system configurations. This is nice considering ostensibly XCCDF follows a very similar modeling approach. The problem is government applicability - Puppet is in EPEL and is not enterprise-supported as such the last I checked, so for some areas it's simply not an option. Additionally some people targeting embedded environments don't want to pull Puppet in. BASH is everywhere and as such makes the most sense. From a tools side it would make sense to support numerous 'fix' mechanisms, and from a content side it may make sense to have fix content in various formats. For the first steps it seems like BASH is the way to go.
(2) Do we perform checking in the scripts?
Defined further, should the scripts contain conditional checks to see if they should be run? IMO, no. That's what OVAL is for.
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well. Basically whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad-hoc and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
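A minimal bash sketch of that quick-and-dirty convention (the RC values and function name are assumptions on my part, not an agreed dictionary):

```shell
#!/bin/bash
# Sketch of the "success/fail RCs plus captured stdout/stderr" convention
# described above. The fix does basic error handling, not OVAL-grade
# state checking; the calling tool decides what to do with the RC.
RC_SUCCESS=0
RC_FAIL=1

fix_with_rc() {
    local target="$1"
    # Fail early with a message on stderr if preconditions are not met.
    if [ ! -w "$target" ]; then
        echo "cannot write $target" >&2
        return "$RC_FAIL"
    fi
    echo "remediated $target"   # stdout, captured by the calling tool
    return "$RC_SUCCESS"
}
```

A calling tool would log stdout or stderr according to its verbosity setting and act on the return code.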
(3) Where do we begin?
- Name remediation scripts after corresponding XCCDF rule
- Build process includes them into final ssg-rhel6-xccdf.xml
Known challenge on passing XCCDF variables through to the scripts, however I wouldn't let this hold us up. Still *tons* of work to be done while this gets sorted.
I'm not sure what challenge you're referencing here. We've gotten variable passing to work pretty simply [3].
There's a good bit of RHEL6 content in the Aqueduct project that (I believe) Tresys committed. Perhaps we could reuse those scripts?
The Tresys CLIP team did contribute a lot of scripts to Aqueduct which were written specifically to go alongside SSG SCAP content [4]. These scripts were mapped back directly to SSG rules [5]. They were written against an older version of the SSG repo and as such likely need a bit of an update, but I think they would be a very good starting place.
There are a lot of design concerns to keep in mind when approaching remediation content and the balance between authorship time and effort must be considered for each of those. Unfortunately I missed the registration for your SCAP workshop this time around, but I would be happy to support a call to talk through some of this.
Thank you - Francisco
[1] Sep 2012 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2012-September/...
[2] Feb 2013 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-February/0...
[3] SecState XCCDF variable passing - https://fedorahosted.org/secstate/wiki/RemediationContentHowTo
[4] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
[5] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG/tools/fi...
On 3/26/13 12:18 AM, Francisco Slavin wrote:
You rang? Or, y'know, whatever sound emails make. The internet is obviating the need for the onomatopoeia.
I still envision the AOL "you've got mail!"
Anyway....
On Monday, March 25, 2013 10:15 PM, Shawn Wells wrote:
I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list. Here's a few of the common questions and my thoughts to get us started.
I feel like I wade into this conversation from time to time and end up repeating myself a bit. A couple past threads [1] for context [2] should keep me from being too repetitive.
(1) What language(s) should be used?
IMO, bash. I'm leaning this way because it's included in *every* RHEL release, whereas puppet modules would require the installation of 3rd party software. I'd like to see as much done through 'native' tools as possible. There are certainly advantages to Perl (e.g., potential speed); however, I don't think we want to assume Perl is installed on all RHEL machines.
Puppet's high-level language lets you statefully define system configurations. This is nice considering ostensibly XCCDF follows a very similar modeling approach. The problem is government applicability - Puppet is in EPEL and is not enterprise-supported as such the last I checked, so for some areas it's simply not an option. Additionally some people targeting embedded environments don't want to pull Puppet in. BASH is everywhere and as such makes the most sense. From a tools side it would make sense to support numerous 'fix' mechanisms, and from a content side it may make sense to have fix content in various formats. For the first steps it seems like BASH is the way to go.
Personally I'm with you regarding puppet. Additionally RH has publicly stated plans to incorporate puppet into future (layered) products. However, as you pointed out, we can reasonably *depend* on bash being installed on any given RHEL box.
As for additional formats, the extensibility is already built into the SSG build process (sample content for puppet exists). The challenge -- as you seemed also to identify -- is what the first steps will be. I worry about finding ourselves in a situation with fragmented remediation content... a few bash fixes, a few puppet. I'd rather pick one to begin with (and bash seems most logical).
(2) Do we perform checking in the scripts?
Defined further, should the scripts contain conditional checks to see if they should be run? IMO, no. That's what OVAL is for.
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well. Basically whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad-hoc and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
I like the stderr/stdout approach. Simon Lukasik recently wrote a good posting about how this output would be captured in result XML: http://isimluk.livejournal.com/3573.html
(3) Where do we begin?
- Name remediation scripts after corresponding XCCDF rule
- Build process includes them into final ssg-rhel6-xccdf.xml
Known challenge on passing XCCDF variables through to the scripts, however I wouldn't let this hold us up. Still *tons* of work to be done while this gets sorted.
I'm not sure what challenge you're referencing here. We've gotten variable passing to work pretty simply [3].
There's a good bit of RHEL6 content in the Aqueduct project that (I believe) Tresys committed. Perhaps we could reuse those scripts?
The Tresys CLIP team did contribute a lot of scripts to Aqueduct which were written specifically to go alongside SSG SCAP content [4]. These scripts were mapped back directly to SSG rules [5]. They were written against an older version of the SSG repo and as such likely need a bit of an update, but I think they would be a very good starting place.
There are a lot of design concerns to keep in mind when approaching remediation content and the balance between authorship time and effort must be considered for each of those. Unfortunately I missed the registration for your SCAP workshop this time around, but I would be happy to support a call to talk through some of this.
I'll need to dig deeper into your final output, but can you help me understand how your JSON gets transformed into valid XML?
e.g.:
<!-- We use JSON notation to articulate a bash script call -->
<fix system="urn:xccdf:fix:script:bash">
  {
    "script" : "/root/passreqs.sh",
    "environment-variables" : {
      "login_defs_min_len" : "<sub idref="pass-min-length-var" />"
    },
    "positional-args" : ["argument1", "argument2"]
  }
</fix>
I'm imagining we'd need to modify the makefile to parse all this, sucking in $script and performing some kind of variable substitution. How do you guys handle things?
Thank you
- Francisco
[1] Sep 2012 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2012-September/...
[2] Feb 2013 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-February/0...
[3] SecState XCCDF variable passing - https://fedorahosted.org/secstate/wiki/RemediationContentHowTo
[4] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
[5] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG/tools/fi...
##
On 3/26/13 12:42 AM, Shawn Wells wrote:
On 3/26/13 12:18 AM, Francisco Slavin wrote:
You rang? Or, y'know, whatever sound emails make. The internet is obviating the need for the onomatopoeia.
I still envision the AOL "you've got mail!"
Anyway....
On Monday, March 25, 2013 10:15 PM, Shawn Wells wrote:
I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list. Here's a few of the common questions and my thoughts to get us started.
I feel like I wade into this conversation from time to time and end up repeating myself a bit. A couple past threads [1] for context [2] should keep me from being too repetitive.
(1) What language(s) should be used?
IMO, bash. I'm leaning this way because it's included in *every* RHEL release, whereas puppet modules would require the installation of 3rd party software. I'd like to see as much done through 'native' tools as possible. There are certainly advantages to Perl (e.g., potential speed); however, I don't think we want to assume Perl is installed on all RHEL machines.
Puppet's high-level language lets you statefully define system configurations. This is nice considering ostensibly XCCDF follows a very similar modeling approach. The problem is government applicability - Puppet is in EPEL and is not enterprise-supported as such the last I checked, so for some areas it's simply not an option. Additionally some people targeting embedded environments don't want to pull Puppet in. BASH is everywhere and as such makes the most sense. From a tools side it would make sense to support numerous 'fix' mechanisms, and from a content side it may make sense to have fix content in various formats. For the first steps it seems like BASH is the way to go.
Personally I'm with you regarding puppet. Additionally RH has publicly stated plans to incorporate puppet into future (layered) products. However, as you pointed out, we can reasonably *depend* on bash being installed on any given RHEL box.
As for additional formats, the extensibility is already built into the SSG build process (sample content for puppet exists). The challenge -- as you seemed also to identify -- is what the first steps will be. I worry about finding ourselves in a situation with fragmented remediation content... a few bash fixes, a few puppet. I'd rather pick one to begin with (and bash seems most logical).
(2) Do we perform checking in the scripts?
Defined further, should the scripts contain conditional checks to see if they should be run? IMO, no. That's what OVAL is for.
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well. Basically whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad-hoc and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
I like the stderr/stdout approach. Simon Lukasik recently wrote a good posting about how this output would be captured in result XML: http://isimluk.livejournal.com/3573.html
(3) Where do we begin?
- Name remediation scripts after corresponding XCCDF rule
- Build process includes them into final ssg-rhel6-xccdf.xml
Known challenge on passing XCCDF variables through to the scripts, however I wouldn't let this hold us up. Still *tons* of work to be done while this gets sorted.
I'm not sure what challenge you're referencing here. We've gotten variable passing to work pretty simply [3].
There's a good bit of RHEL6 content in the Aqueduct project that (I believe) Tresys committed. Perhaps we could reuse those scripts?
The Tresys CLIP team did contribute a lot of scripts to Aqueduct which were written specifically to go alongside SSG SCAP content [4]. These scripts were mapped back directly to SSG rules [5]. They were written against an older version of the SSG repo and as such likely need a bit of an update, but I think they would be a very good starting place.
There are a lot of design concerns to keep in mind when approaching remediation content and the balance between authorship time and effort must be considered for each of those. Unfortunately I missed the registration for your SCAP workshop this time around, but I would be happy to support a call to talk through some of this.
I'll need to dig deeper into your final output, but can you help me understand how your JSON gets transformed into valid XML?
e.g.:
<!-- We use JSON notation to articulate a bash script call -->
<fix system="urn:xccdf:fix:script:bash">
  {
    "script" : "/root/passreqs.sh",
    "environment-variables" : {
      "login_defs_min_len" : "<sub idref="pass-min-length-var" />"
    },
    "positional-args" : ["argument1", "argument2"]
  }
</fix>
I'm imagining we'd need to modify the makefile to parse all this, sucking in $script and performing some kind of variable substitution. How do you guys handle things?
I messed around with this transform for a while, and I suppose I'm too burnt out for the day to be productive. I think it's important for our final XCCDF to contain the fully working scripts, and not references to JSON, so that OpenSCAP and other SCAP scanners can natively use the content. If you agree and feel ambitious, feel free to whip up a patch to do this transform. I should have some time between workshops tomorrow to check out the patch.
Maybe we'll apply it *during* the workshop and see what happens :)
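For what it's worth, here is a rough sketch of one way a build step could inline a script body into a fix element. This is an assumption about the approach, not the actual SSG makefile; the function name is made up, and note the XML escaping any real transform would also need:

```shell
#!/bin/bash
# Build-time sketch (an assumption, not the SSG makefile): inline each
# remediation script's full body into a <fix> element so the final XCCDF
# carries runnable bash rather than a JSON reference.
emit_fix_element() {
    local script="$1"
    printf '<fix system="urn:xccdf:fix:script:sh">\n'
    # Escape XML-special characters so the script body keeps the
    # document well-formed.
    sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' "$script"
    printf '</fix>\n'
}
```

The build would then splice each emitted element into the Rule matching the script's filename.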
##
Thank you
- Francisco
[1] Sep 2012 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2012-September/...
[2] Feb 2013 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-February/0...
[3] SecState XCCDF variable passing - https://fedorahosted.org/secstate/wiki/RemediationContentHowTo
[4] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
[5] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG/tools/fi...
##
I don't know 5% of what you guys do when it comes to SCAP and the way the content is manipulated, but one thing stuck out to me.
> The fix scripts should not be written to check system state at the
> granularity targeted by OVAL checks. But they should still be doing
> basic error checking and error handling.
While I think I agree with this as an idea, I think this may be somewhat more complicated than that. When the OVAL check fails, it fails in a binary form (pass/fail). I would argue that the remediation content will have to do a more granular check in some cases where the current content may not be so straightforward. For example, PAM parses its configuration files in a specific order and you can't just stick the required line in there anywhere. In my experience the check just looks for a regex match and gives a pass/fail from that. The remediation content will have to "understand" the proper layout of the file and handle variances within that file. This example is pretty simple, but I hope I am getting across the point I am trying to make.
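A rough illustration of this point (the file contents and module names below are examples only, not SSG content): the fix has to place the line relative to the existing stack rather than blindly append it, because PAM evaluates its stack in order.

```shell
#!/bin/bash
# Insert a PAM line before the first 'auth' entry so it is evaluated
# ahead of the rest of the auth stack. Idempotent: does nothing if the
# line is already present.
insert_before_first_auth_line() {
    local newline="$1" file="$2"
    if grep -qF "$newline" "$file"; then
        return 0   # already present; nothing to do
    fi
    awk -v line="$newline" '
        !done && $1 == "auth" { print line; done = 1 }
        { print }
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Example (hypothetical stack):
# insert_before_first_auth_line "auth required pam_faillock.so preauth" /etc/pam.d/system-auth
```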
I honestly didn't know that OpenSCAP had the ability to do remediation at all, and I'm in the process of trying to understand how that works and reading the 7670 document, but from the OpenSCAP Remediation page that Simon Lukasik so graciously wrote up I get the idea of how it all ties together. That leads me to a question about the selective ability to remediate findings though. Using that information alone it appears that you can either a) wholesale remediate all failed findings right when they are found using the 'eval' parameter or b) do the 'Offline' remediation that does the same thing, but just gives you a chance to see what needs to be changed first. Is it possible to add an option to exclude or include particular things to be remediated while still having them checked? Without thinking about it too much I can't think of a good way to do that without it being cumbersome. But I can say that in my years working with security measures I have never been able to take the 'recommended' solution and fit 100% of it to my system. There are always outliers. For example, I wouldn't want it to disable IPv6 on my system since it is an operational requirement.
This may already exist somewhere and I just don't know what I am rattling on about and if so I apologize. I also realize that this is a very early stage, but I thought I would bring it up.
On a side note - I also agree bash is the way to go. Other content can come later, but bash should be first. I don't even know how you would be able to put something like puppet in here. That's (for the most part) managed at a central server, and the client cannot/should not be attempting to apply any recipes locally.
I'd love to help with this more as I begin to understand the inner workings of how the content is created and managed. But if there is anything I can help with in the short term let me know. I would love to see this succeed.
Thanks everyone, Chad
-----Original Message-----
From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Shawn Wells
Sent: Tuesday, March 26, 2013 12:42 AM
To: scap-security-guide@lists.fedorahosted.org
Subject: Re: Remediation Scripts
On 3/26/13 12:18 AM, Francisco Slavin wrote:
You rang? Or, y'know, whatever sound emails make. The internet is obviating the need for the onomatopoeia.
I still envision the AOL "you've got mail!"
Anyway....
On Monday, March 25, 2013 10:15 PM, Shawn Wells wrote:
> I've been taking a few off-list questions around remediation lately, namely
> from interested parties asking "where do we start?" Wanted to move those
> conversations to on-list. Here's a few of the common questions and my
> thoughts to get us started.
I feel like I wade into this conversation from time to time and end up repeating myself a bit. A couple past threads [1] for context [2] should keep me from being too repetitive.
> > (1) What language(s) should be used?
> >
> > IMO, bash. I'm leaning this way because it's included in *every* RHEL
> > release, whereas puppet modules would require the installation of 3rd party
> > software. I'd like to see as much done through 'native' tools as possible.
> > There's certainly advantages to Perl (e.g., potential speed) however I don't
> > think we want to assume Perl is installed on all RHEL machines.
Puppet's high-level language lets you statefully define system configurations. This is nice considering ostensibly XCCDF follows a very similar modeling approach. The problem is government applicability - Puppet is in EPEL and is not enterprise-supported as such the last I checked, so for some areas it's simply not an option. Additionally some people targeting embedded environments don't want to pull Puppet in. BASH is everywhere and as such makes the most sense. From a tools side it would make sense to support numerous 'fix' mechanisms, and from a content side it may make sense to have fix content in various formats. For the first steps it seems like BASH is the way to go.
Personally I'm with you regarding puppet. Additionally RH has publicly stated plans to incorporate puppet into future (layered) products. However, as you pointed out, we can reasonably *depend* on bash being installed on any given RHEL box.
As for additional formats, the extensibility is already built into the SSG build process (sample content for puppet exists). The challenge -- as you seemed also to identify -- is what the first steps will be. I worry about finding ourselves in a situation with fragmented remediation content... a few bash fixes, a few puppet. I'd rather pick one to begin with (and bash seems most logical).
> > (2) Do we perform checking in the scripts?
> >
> > Defined further, should the scripts contain conditional checks to see if they
> > should be ran?
> > IMO, no. That's what OVAL is for.
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well. Basically whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad-hoc and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
I like the stderr/stdout approach. Simon Lukasik recently wrote a good posting about how this output would be captured in result XML: http://isimluk.livejournal.com/3573.html
> > (3) Where do we begin?
> >
> > - Name remediation scripts after corresponding XCCDF rule
> > - Build process includes them into final ssg-rhel6-xccdf.xml
> >
> > Known challenge on passing XCCDF variables through to the scripts,
> > however I wouldn't let this hold us up. Still *tons* of work to be done
> > while this gets sorted.
I'm not sure what challenge you're referencing here. We've gotten variable passing to work pretty simply [3].
> > There's a good bit of RHEL6 content in the Aqueduct project that (I
> > believe) Tresys committed. Perhaps we could reuse those scripts?
The Tresys CLIP team did contribute a lot of scripts to Aqueduct which were written specifically to go alongside SSG SCAP content [4]. These scripts were mapped back directly to SSG rules [5]. They were written against an older version of the SSG repo and as such likely need a bit of an update, but I think they would be a very good starting place.

There are a lot of design concerns to keep in mind when approaching remediation content and the balance between authorship time and effort must be considered for each of those. Unfortunately I missed the registration for your SCAP workshop this time around, but I would be happy to support a call to talk through some of this.
I'll need to dig deeper into your final output, but can you help me understand how your JSON gets transformed into valid XML?
e.g.:
<!-- We use JSON notation to articulate a bash script call -->
<fix system="urn:xccdf:fix:script:bash">
  {
    "script" : "/root/passreqs.sh",
    "environment-variables" : {
      "login_defs_min_len" : "<sub idref="pass-min-length-var" />"
    },
    "positional-args" : ["argument1", "argument2"]
  }
</fix>
I'm imagining we'd need to modify the makefile to parse all this, sucking in $script and performing some kind of variable substitution. How do you guys handle things?
Thank you
- Francisco

[1] Sep 2012 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2012-September/...
[2] Feb 2013 - https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-February/0...
[3] SecState XCCDF variable passing - https://fedorahosted.org/secstate/wiki/RemediationContentHowTo
[4] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
[5] - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG/tools/fi...
##
On 03/26/2013 05:26 PM, Truhn, Chad M CTR NSWCDD, CXA30 wrote:
I don't know 5% of what you guys do when it comes to SCAP and the way the content is manipulated, but one thing stuck out to me.
The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling.
While I think I agree with this as an idea, I think this may be somewhat more complicated than that. When the OVAL check fails, it fails in a binary form (pass/fail). I would argue that the remediation content will have to do a more granular check in some cases where the current content may not be so straightforward. For example, PAM parses its configuration files in a specific order and you can't just stick the required line in there anywhere. In my experience the check just looks for a regex match and gives a pass/fail from that. The remediation content will have to "understand" the proper layout of the file and handle variances within that file. This example is pretty simple, but I hope I am getting across the point I am trying to make.
I agree. Scripts can be more complicated. (And in fact they *have to be* more complicated, because configuration is a harder problem than assessment.)
I honestly didn't know that OpenSCAP had the ability to do remediation at all, and I'm in the process of trying to understand how that works and reading the 7670 document, but from the OpenSCAP Remediation page that Simon Lukasik so graciously wrote up I get the idea of how it all ties together. That leads me to a question about the selective ability to remediate findings though. Using that information alone it appears that you can either a) wholesale remediate all failed findings right when they are found using the 'eval' parameter or b) do the 'Offline' remediation that does the same thing, but just gives you a chance to see what needs to be changed first. Is it possible to add an option to exclude or include particular things to be remediated while still having them checked?
Yes, there is an option to selectively omit remediation of certain rules, though it is not easy/nice yet. Today, you can either:
(*) Remove the fix element from the source XCCDF document.

(*) Create a new (inherited) profile for this special machine which has the given Rule unselected. This option is getting easier, as Martin has recently added tailoring file support to OpenSCAP, so your new profile may live in an external file.

(*) Use a CPE identifier assigned to the fix. If the CPE does not match on a given system, the fix will not be executed. Moreover, I can think of having some file like /etc/NONCRITICAL on all my non-critical systems, and then having a CPE identifier which matches this exact file. That way, no fix (with this CPE) will be executed unless the machine has /etc/NONCRITICAL.

(*) Use offline remediation and proceed as described at https://www.redhat.com/archives/open-scap-list/2013-March/msg00016.html

(*) Wait for the new SCAP Workbench, which should allow users to select fix elements in a GUI.

(*) File a feature request against OpenSCAP for interactive (like: Yes/No/Quit) remediation.
On the other hand, I think that scenario of having a fix element for the Rule and having the Rule failing on a machine is somewhat schizophrenic.
I either have a correct policy or not. I either want this exact machine compliant or not. I either want it to be remediated automatically or not. I either want to use some exact fix script or not.
Without thinking about it too much I can't think of a good way to do that without it being cumbersome. But I can say that in my years working with security measures I have never been able to take the 'recommended' solution and fit 100% of it to my system. There are always outliers.
I understand this. Which of the above-mentioned approaches would be viable for you? Or can you see any other?
Thanks,
I think every option has its good and bad sides and no clear 'winner' appears to me. I'll try to comment on my feelings about each.
(*) Remove the fix element from that source XCCDF document
I think this way is always an option. It isn't necessarily graceful, but IMO any admin worth their salt should be able to manage it. Still, I think this would be a short-term, 'hacky' way of handling it; it is a workaround rather than a solution. I'm not saying it's bad, but I don't think it can be called a solution. It might actually work nicely for scenarios where you know you don't want Fix ABC on any machine: you can just remove it from the source XCCDF and then use that as your 'baseline' XCCDF for the remediation of the rest of the machines. When running on hundreds of machines, saving five steps on each turns into substantial savings.
(*) Create new (inherited) profile for this special machine which has the given Rule unselected. This option is getting easier, as Martin has recently added tailoring file support into OpenSCAP -> thus your new profile may be in external file.
This might be a workable way to do it, especially if you are doing offline remediation: run the initial scan to find out what's broken, review the output, disable the check (which disables the fix), run the remediation, then re-run the full scan (no remediation) for final analysis. This doesn't scale well, though.
(*) Use CPE identifier assigned to the fix. If the CPE does not match on given system, the fix will not be executed. Moreover, I can think of having some file like /etc/NONCRITICAL on all my non critical systems. And then having CPE identifier which matches this exact file. That way, no fix (with this CPE) will be executed unless the machine has /etc/NONCRITICAL.
I think I like this train of thought. More on this below...
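Pending proper CPE support, the marker-file idea can be approximated directly inside a bash fix. A minimal sketch, assuming a wrapper convention of our own invention; /etc/NONCRITICAL is just the example path from this thread, not an agreed standard, and the function name is hypothetical:

```shell
# Sketch: gate a fix on the presence of a marker file, approximating the
# CPE-applicability idea in plain bash. MARKER may be overridden for testing;
# /etc/NONCRITICAL is the illustrative marker from this discussion.
MARKER="${MARKER:-/etc/NONCRITICAL}"

apply_fix_if_noncritical() {
    # $1 = XCCDF rule id, remaining args = the actual fix command
    local rule_id="$1"
    shift
    if [ ! -e "$MARKER" ]; then
        echo "skipping $rule_id: $MARKER not present"
        return 0
    fi
    "$@"
}
```

A fix would then be wrapped as, e.g., `apply_fix_if_noncritical disable_prelink sed -i 's/^PRELINKING=yes/PRELINKING=no/' /etc/sysconfig/prelink` (rule id and command purely illustrative).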
(*) Use offline remediation and proceed as described at https://www.redhat.com/archives/open-scap-list/2013-March/msg00016.html
My comment is similar to the one above about inherited profiles: scan, review, modify, remediate, scan.
(*) Wait for new SCAP-Workbench, which should allow users to select fix elements in GUI.
I can see where this is useful, but I think the majority of users won't have (or use) a true GUI. The concept is valid, though.
(*) File a feature request against OpenSCAP for interactive (like: Yes/No/Quit) remediation.
Again, this can be useful, especially if you just have a machine or two to handle. It doesn't scale well to large enterprises, but I definitely think it has its merits. Maybe if we can create some kind of answer file to automate this, it might scale a bit better. That answer file would have to be handled carefully, though, since every machine might not be identical. You can't just say "Question 1: yes", "Question 2: no", etc., because the first question on System A might not be the first question on System B. If each answer can be mapped to a specific identifier, and the outliers managed manually somehow, it could work. Again, I would have to spend more cycles of thought on it, and I'm not a software developer, so I have no idea how hard or easy this is. I'm just spitballing here.
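The answer-file idea could sidestep the ordering problem by keying answers on XCCDF rule ids instead of prompt order. A sketch, with an entirely hypothetical file format (one `rule_id=yes|no` line each, `*` as a wildcard default):

```shell
# Sketch: look up an interactive-remediation answer by rule id, so that
# System A and System B can share one answer file regardless of the order
# in which questions are asked. The file format is hypothetical.
answer_for() {
    # $1 = rule id, $2 = answer file
    local ans
    ans=$(grep "^$1=" "$2" | head -n1 | cut -d= -f2)
    if [ -z "$ans" ]; then
        ans=$(grep '^\*=' "$2" | head -n1 | cut -d= -f2)  # wildcard default
    fi
    echo "${ans:-no}"   # when unlisted, default to "no" (don't remediate)
}
```

Defaulting to "no" keeps an incomplete answer file from remediating anything the admin hasn't explicitly reviewed.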
So building on your /etc/NONCRITICAL topic from above...
I had a similar thought, but I am not sure how feasible it is. The first thing I jump to is an include or exclude file, kind of a hosts.allow/hosts.deny type of thing.
1) Specify a particular id in scap.deny which doesn't get run, and either no entry or something like 'All' in scap.allow.
2) Deny 'All' with the exception of what is specified in scap.allow.
This way I can tailor which particular remediation steps I want done per box. If the scan decides it wants to remediate ID 1234, it checks the list to see whether it should, then proceeds based on that input. As an admin, I just have to read through the checks once and make a list; then I can run the scan/remediation at any time in the future without having to re-invest time in the applicability of the content.
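The hosts.allow/hosts.deny analogy could look something like this in a bash wrapper. The scap.allow/scap.deny files and their semantics are the hypothetical ones from this thread, not an existing oscap feature; the sketch borrows tcp_wrappers evaluation order (allow file checked first, then deny, else permit):

```shell
# Sketch: decide whether to remediate a rule id, hosts.allow/hosts.deny
# style. Paths may be overridden for testing; 'All' matches every rule id.
SCAP_ALLOW="${SCAP_ALLOW:-/etc/scap.allow}"
SCAP_DENY="${SCAP_DENY:-/etc/scap.deny}"

listed() {  # $1 = rule id, $2 = list file
    [ -r "$2" ] && grep -qx -e "$1" -e "All" "$2"
}

should_remediate() {
    local rule_id="$1"
    if listed "$rule_id" "$SCAP_ALLOW"; then
        return 0   # explicitly allowed
    fi
    if listed "$rule_id" "$SCAP_DENY"; then
        return 1   # explicitly denied
    fi
    return 0       # neither file mentions the rule: permit, like tcp_wrappers
}
```

A scap.deny containing only 'All' plus an explicit scap.allow gives the second mode above; a scap.deny listing individual ids gives the first.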
In MY perfect world (I am sure others would disagree), the check would be performed regardless of the statement in allow/deny, and only the remediation step would be concerned with it. I have to show every check to my security guys, so I am OK with a failed check and no remediation being done. But, again, I have no idea how that would be handled within the content, or if it is even really an option. This way I would be pretty OK with having oscap make the changes automatically, because I have already declared that the listed elements are OK to change.
I could do this within the bash remediation content as well, if there is an ability to import functions or if a function declaration persists throughout the run (declare it once at the top). If there is some way to check this outside the bash remediation content (built into some part of the SCAP content, or something like that), I think we could skip some issues that would arise when doing this through the bash content (e.g., what happens if I skip the remediation step that declares the function?).
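If the generated bash content is emitted as one script, the function-persistence problem goes away by sourcing a shared helper library once at the top, outside any individual fix that might be skipped. A sketch, with the library path purely illustrative:

```shell
# Sketch: source a shared helper library once at the top of the generated
# remediation script, so every fix can use common functions even when other
# fixes are skipped. The path is illustrative, not an existing convention.
REMEDIATION_LIB="${REMEDIATION_LIB:-/usr/share/scap/remediation-functions.sh}"

if [ -r "$REMEDIATION_LIB" ]; then
    . "$REMEDIATION_LIB"
else
    echo "warning: $REMEDIATION_LIB not found; shared helpers unavailable" >&2
fi
```

Individual fixes could then call helpers (say, a hypothetical `backup_file`) without declaring them, since sourced functions persist for the rest of the script's run.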
I'm just an admin with no real software development experience so feel free to tell me to go away.
Thanks again, Chad
This is a good conversation worth informing others about. I am cross-posting to the Open-SCAP and Remediation-dev mailing lists.
I’ve noticed pockets of remediation discussions on the various email lists and would like to align them in a forum where we can work as a collective. I don’t want to stifle this effort or conversation, but would like to move the discussion to the remediation-dev list. The remediation-dev list, which is open for all to participate in, was set up to inform and to foster capabilities that enable automated enterprise remediation. The list members include industry vendors and government constituents, bringing experience and knowledge from previous attempts at remediation capabilities.
Some observations on the current discussion: the OpenSCAP remediation capability addresses part of the problem. The current discourse (OpenSCAP XCCDF remediation) is beginning to touch on various remediation architecture issues (workflow, tasking, reporting, OVRL, etc.). As you know, the subject of remediation is broad, with many perspectives and implications. Before we spiral out of control (I’ve seen it happen many times before with this subject), let’s break it down into manageable sets.
For lack of better reference material on remediation architecture, I would like to propose NIST IR 7670 as a frame of reference for discussion topics. NIST IR 7670 is by no means a standard, but it is something to reference for workflow and use cases. Certainly it is subject to revision to suit the needs of the community as it evolves, and it invites any and all critiques to make it better.
Using the “Derived Requirements” from IR 7670, I believe we can have meaningful discourse and solutions. The current discussion on “remediation scripting” seems to originate from, and relate to, DR 5 (Remediation Policy Specification). It would be great to leverage the existing capabilities in OpenSCAP as a way to prototype and exercise elements of the XCCDF specification for remediation needs. We could also use this effort to propose revisions to specifications and guidance as needed. Prototype working code and content will be the mechanism by which a rough consensus from the community is achieved.
Going forward I would like to invite thoughts and ideas to further innovate remediation capabilities.
Thank you.
Link to NIST IR 7670 http://csrc.nist.gov/publications/drafts/nistir-7670/Draft-NISTIR-7670_Feb20... Link to Remediation (dev) Discussion list http://scap.nist.gov/community.html
Luis Nunez G022 - IA Industry Collaboration The MITRE Corporation www.mitre.org
Did anything ever come of this conversation on the other lists? I tried joining the Remediation-dev mailing list last week, but I got the "Your request has been forwarded to the list moderator for approval." and have never seen any traffic from it. Unsure if the list is just quiet or if my account was never approved. I can't seem to find an online archive either...
Thanks, Chad
Hi Chad, I've not seen any further dialog specific to the topic. Remediation has a tendency to scare people off :(
I'll check with the moderator of the remediation-dev list on your request to join.
Thanks.
-ln
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Truhn, Chad M CTR NSWCDD, CXA30 Sent: Wednesday, April 03, 2013 9:05 AM To: scap-security-guide@lists.fedorahosted.org Subject: RE: Remediation Scripts
Did anything ever come of this conversation on the other lists? I tried joining the Remediation-dev mailing list last week, but I got the "Your request has been forwarded to the list moderator for approval." and have never seen any traffic from it. Unsure if the list is just quiet or if my account was never approved. I can't seem to find an online archive either...
Thanks, Chad
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Nunez, Luis K Sent: Wednesday, March 27, 2013 11:37 AM To: scap-security-guide@lists.fedorahosted.org; Simon Lukasik Cc: remediation-dev@nist.gov; open-scap-list@redhat.com Subject: RE: Remediation Scripts
This is good a conversation worth informing others on. I am cross posting to the Open-SCAP-list and Remediation-dev mailing lists.
I’ve noticed pockets of remediation discussions in the various email-lists and would like to align them to a forum where can work as a collective. I don’t want to stifle this effort or conversation but would like to move the discussion to the remediation-dev list. The remediation-dev list, is an open list for all to participate, was setup to inform and to foster capabilities to enable automated enterprise remediation. The list members constitute industry vendors and government constituents. It contains experience and knowledge from previous attempts at remediation capabilities.
Some observations on the current discussion. The OpenSCAP remediation capability addresses part of the problem. The current discourse (OpenSCAP XCCDF remediation) is beginning to touch on various Remediation Architectural issues (Workflow, tasking, reporting, OVRL, etc…). As you know the subject of Remediation is broad with many perspectives and implications. Before we spiral out control, I’ve seen it happen many times before with this subject, lets break them down into manageable sets.
For lack of better reference material on Remediation Architecture, I would like to propose the NIST IR 7670 as a frame of reference for topic of discussions. The NIST IR 7670 is by no means a standard, but it is something to reference form a work flow and use cases. Certainly the NIST IR 7670 is subject to revision to suit the needs of the community as it evolves and it invites any and all for critics to make it better.
And so using the “Derived Requirements” from the IR 7670 I believe we can have meaningful discourse and solutions. The current discussions on “Remediation Scripting” seems to originate and is related to DR 5 – Remediation Policy specification. It would be great to leverage the existing capabilities in OpenSCAP as a way to prototype and exercise elements in the XCCDF specification for remedial needs. We could also use this effort to propose revisions in specifications and guidance as needed. The prototype working code and content will be the mechanism by which a rough consensus from the community is achieved.
Going forward I would like to invite thoughts and ideas to further innovate remediation capabilities.
Thank you.
Link to NIST IR 7670 http://csrc.nist.gov/publications/drafts/nistir-7670/Draft-NISTIR-7670_Feb20... Link to Remediation (dev) Discussion list http://scap.nist.gov/community.html
Luis Nunez G022 - IA Industry Collaboration The MITRE Corporation www.mitre.org
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Truhn, Chad M CTR NSWCDD, CXA30 Sent: Wednesday, March 27, 2013 11:14 AM To: Simon Lukasik Cc: scap-security-guide@lists.fedorahosted.org Subject: RE: Remediation Scripts
I think every option has it's good and bad sides and no clear 'winner' appears to me. I'll try to comment on my feelings of each.
(*) Remove the fix element from that source XCCDF document
I think this way is always an option. It isn't necessarily graceful, but IMO any admin worth their weight should be able to manage this. But I think this would be a short term 'hacky' way of handling it. It isn't a solution as much as a work around. I'm not saying this is bad, but I don't think it can be called a solution. This might actually work nicely for some scenarios where you know that you don't want Fix ABC on every machine you can just remove it from the source XCCDF and then use that as your 'baseline' XCCDF for the remediation of the rest of the machines. When running on 100s of machines if you can save 5 steps from each that turns into substantial savings.
(*) Create new (inherited) profile for this special machine which has the given Rule unselected. This option is getting easier, as Martin has recently added tailoring file support into OpenSCAP -> thus your new profile may be in external file.
This might be a workable way to do it. Especially if you are doing offline remediation. Run the initial scan to find out what's broke, review the output, disable the check (which disables the fix), run the remediation, re-run the full scan (no remediation) for final analysis. This doesn't scale well though.
(*) Use CPE identifier assigned to the fix. If the CPE does not match on given system, the fix will not be executed. Moreover, I can think of having some file like /etc/NONCRITICAL on all my non critical systems. And then having CPE identifier which matches this exact file. That way, no fix (with this CPE) will be executed unless the machine has /etc/NONCRITICAL.
I think I like this train of thought. More on this below...
(*) Use offline remediation and proceed as described at https://www.redhat.com/archives/open-scap-list/2013-March/msg00016.html
I would comment similar to the one above about inherited profiles. Scan, review, modify, remediate, scan
(*) Wait for new SCAP-Workbench, which should allow users to select fix elements in GUI.
I can see where this is useful, but I think the majority of users won't have/use a true GUI. I think the concept is valid though.
(*) File a feature request against OpenSCAP for interactive (like: Yes/No/Quit) remediation.
Again, this can be useful especially if you just have a machine or two to handle. This doesn't scale well to large enterprises though, but I definitely think it has it's merits. Maybe if we can create some kind of answer file to automate this it might scale a bit better. But that answer file would have to be handled carefully since every machine might not be identical. You can't just say "Question 1: yes", "Question 2: no", etc because the first question on System A might not be the first question on System B. If the answer can be mapped to a specific identifier and somehow manage the outliers manually it could work. Again, I would have to spend more cycles of thought about it and I'm not software developer so I have no idea how hard/easy this is. I'm just spitballing here.
Without thinking about it too much I can't think of a good way to do that without it being cumbersome. But I can say that in my years working with security measures I have never been able to take the 'recommended' solution and fit 100% of it to my system. There are always outliers.
I understand this. What of the above mentioned approaches would be the viable for You? Or can You see any other?
So building on your /etc/NONCRITICAL topic from above...
I had a similar thought, but I am not sure how feasible it is. In my head the first thing I jump to is a include or exclude file. Kind of hosts.allow/hosts.deny type of thing.
1) Specify a particular id in scap.deny which doesn't get run and either no entry or something like 'All' in scap.allow 2) Deny 'All' with the exception of what is specified in the scap.allow.
This way I can custom tailor which particular remediation steps I want done per box. If the scan decides it wants to remediate ID 1234, it checks the list to see if it should or not, then proceeds based on that input. Now as an admin I just have to read through the checks one time and make a list then I can run the scan/remediation at any time in the future without having to re-invest time in the applicability of the content again.
In MY perfect world (I am sure others would disagree) I would like if the check was performed regardless of the statement in allow/deny and only the remediation step be concerned with it. I have to show every check to my security guys so I am OK with a failed check and no remediation being done. But, again, I have no idea how that would be handled within the content or if it is even really an option. This way I would be pretty OK with having oscap make the changes automatically because I have already declared that the listed elements are OK to be changed.
I could do this within the bash remediation content as well, if there is an ability to import functions or if a function declaration is persistent throughout the run (declare it once at the top). If there is some way to check this outside of the bash remediation content (built into some part of the SCAP content or something like that), I think we could skip some issues that would arise when doing this through the bash content (what happens if I skip the remediation step that declares this function?).
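One way the shared-function idea could look in practice is a guard function declared once at the top of the combined remediation script. This is only a sketch: the file names (scap.allow/scap.deny), the precedence order, and the rule ID are all hypothetical, borrowed from the discussion above rather than from any real content.

```shell
#!/bin/bash
# Hypothetical allow/deny gate for remediation snippets, modeled on
# hosts.allow/hosts.deny. Declared once at the top of the combined
# remediation script so every fix snippet can call it.

ALLOW_FILE="${ALLOW_FILE:-/etc/scap.allow}"
DENY_FILE="${DENY_FILE:-/etc/scap.deny}"

# should_remediate RULE_ID -> exit 0 if the fix may run, 1 otherwise.
# Precedence (an assumption, not something settled on the list):
# an explicit deny wins, then an explicit allow, then a blanket 'All'.
should_remediate() {
    local id="$1"
    grep -qx "$id" "$DENY_FILE" 2>/dev/null && return 1
    grep -qx "$id" "$ALLOW_FILE" 2>/dev/null && return 0
    grep -qx "All" "$ALLOW_FILE" 2>/dev/null && return 0
    grep -qx "All" "$DENY_FILE" 2>/dev/null && return 1
    return 0  # default: remediate when neither file says otherwise
}

# A fix snippet would then guard itself:
if should_remediate "rule_1234"; then
    echo "remediating rule_1234"
    # ... actual fix commands here ...
fi
```

Because the admin edits only two flat files, the checks themselves still run and report normally; only the fix step consults the list, which matches the "check everything, fix selectively" preference above.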
I'm just an admin with no real software development experience so feel free to tell me to go away.
Thanks again, Chad
I saw your email to the remediation list, so I guess I was granted access. I suppose it's just quiet.
Thanks for your help Luis!
Thanks, Chad
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Nunez, Luis K Sent: Wednesday, April 03, 2013 9:25 AM To: scap-security-guide@lists.fedorahosted.org Subject: RE: Remediation Scripts
Hi Chad, I've not seen any further dialog specific to the topic. Remediation has the tendency to scare people off :(
I'll check with the moderator of the remediation-dev list on your request to join.
Thanks.
-ln
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Truhn, Chad M CTR NSWCDD, CXA30 Sent: Wednesday, April 03, 2013 9:05 AM To: scap-security-guide@lists.fedorahosted.org Subject: RE: Remediation Scripts
Did anything ever come of this conversation on the other lists? I tried joining the Remediation-dev mailing list last week, but I got the "Your request has been forwarded to the list moderator for approval." and have never seen any traffic from it. Unsure if the list is just quiet or if my account was never approved. I can't seem to find an online archive either...
Thanks, Chad
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Nunez, Luis K Sent: Wednesday, March 27, 2013 11:37 AM To: scap-security-guide@lists.fedorahosted.org; Simon Lukasik Cc: remediation-dev@nist.gov; open-scap-list@redhat.com Subject: RE: Remediation Scripts
This is a good conversation worth informing others on. I am cross-posting to the Open-SCAP-list and Remediation-dev mailing lists.
I’ve noticed pockets of remediation discussions on the various email lists and would like to align them to a forum where we can work as a collective. I don’t want to stifle this effort or conversation, but would like to move the discussion to the remediation-dev list. The remediation-dev list, an open list in which all may participate, was set up to inform and to foster capabilities that enable automated enterprise remediation. The list members include industry vendors and government constituents, and the list carries experience and knowledge from previous attempts at remediation capabilities.
Some observations on the current discussion. The OpenSCAP remediation capability addresses part of the problem. The current discourse (OpenSCAP XCCDF remediation) is beginning to touch on various remediation architecture issues (workflow, tasking, reporting, OVRL, etc.). As you know, the subject of remediation is broad, with many perspectives and implications. Before we spiral out of control (I’ve seen it happen many times before with this subject), let's break things down into manageable sets.
For lack of better reference material on remediation architecture, I would like to propose NIST IR 7670 as a frame of reference for topics of discussion. NIST IR 7670 is by no means a standard, but it is something to reference for workflow and use cases. Certainly NIST IR 7670 is subject to revision to suit the needs of the community as it evolves, and it invites any and all critiques to make it better.
And so, using the “Derived Requirements” from IR 7670, I believe we can have meaningful discourse and solutions. The current discussion on “Remediation Scripting” seems to originate from, and relate to, DR 5 (Remediation Policy Specification). It would be great to leverage the existing capabilities in OpenSCAP as a way to prototype and exercise elements of the XCCDF specification for remedial needs. We could also use this effort to propose revisions to specifications and guidance as needed. Prototype working code and content will be the mechanism by which a rough consensus from the community is achieved.
Going forward I would like to invite thoughts and ideas to further innovate remediation capabilities.
Thank you.
Link to NIST IR 7670 http://csrc.nist.gov/publications/drafts/nistir-7670/Draft-NISTIR-7670_Feb20... Link to Remediation (dev) Discussion list http://scap.nist.gov/community.html
Luis Nunez G022 - IA Industry Collaboration The MITRE Corporation www.mitre.org
-----Original Message----- From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Truhn, Chad M CTR NSWCDD, CXA30 Sent: Wednesday, March 27, 2013 11:14 AM To: Simon Lukasik Cc: scap-security-guide@lists.fedorahosted.org Subject: RE: Remediation Scripts
I think every option has its good and bad sides, and no clear 'winner' appears to me. I'll try to comment on my feelings about each.
(*) Remove the fix element from that source XCCDF document
I think this way is always an option. It isn't necessarily graceful, but IMO any admin worth their salt should be able to manage it. I think it would be a short-term 'hacky' way of handling things, though; it isn't a solution as much as a workaround. I'm not saying that's bad, but I don't think it can be called a solution. It might actually work nicely for some scenarios: when you know you don't want Fix ABC on every machine, you can just remove it from the source XCCDF and then use that as your 'baseline' XCCDF for the remediation of the rest of the machines. When running on 100s of machines, if you can save 5 steps from each, that turns into substantial savings.
(*) Create new (inherited) profile for this special machine which has the given Rule unselected. This option is getting easier, as Martin has recently added tailoring file support into OpenSCAP -> thus your new profile may be in external file.
This might be a workable way to do it, especially if you are doing offline remediation. Run the initial scan to find out what's broken, review the output, disable the check (which disables the fix), run the remediation, then re-run the full scan (no remediation) for final analysis. This doesn't scale well though.
(*) Use CPE identifier assigned to the fix. If the CPE does not match on given system, the fix will not be executed. Moreover, I can think of having some file like /etc/NONCRITICAL on all my non critical systems. And then having CPE identifier which matches this exact file. That way, no fix (with this CPE) will be executed unless the machine has /etc/NONCRITICAL.
I think I like this train of thought. More on this below...
(*) Use offline remediation and proceed as described at https://www.redhat.com/archives/open-scap-list/2013-March/msg00016.html
I would comment similar to the one above about inherited profiles. Scan, review, modify, remediate, scan
(*) Wait for new SCAP-Workbench, which should allow users to select fix elements in GUI.
I can see where this is useful, but I think the majority of users won't have/use a true GUI. I think the concept is valid though.
(*) File a feature request against OpenSCAP for interactive (like: Yes/No/Quit) remediation.
On 04/03/2013 07:24 AM, Nunez, Luis K wrote:
Hi Chad, I've not seen any further dialog specific to the topic. Remediation has the tendency to scare people off :(
I'll check with the moderator of the remediation-dev list on you request to join.
Thanks.
-ln
This is something I have to deal with as well, would it be possible to add me to the list? thanks.
- -- Kurt Seifried Red Hat Security Response Team (SRT) PGP: 0x5E267993 A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
On 4/3/13 11:47 AM, Kurt Seifried wrote:
So, to clarify.... since I received a few "wtf is this?" emails off-list...
remediation-dev@nist is about developing remediation /standards/, such as CREs. People can signup here: http://scap.nist.gov/community.html
open-scap-list@redhat is about developing the OpenSCAP tool, of which: - homepage @ http://open-scap.org/ - mailing list @ https://www.redhat.com/mailman/listinfo/open-scap-list
SCAP Security Guide is where protocols, tooling, and content come together to produce usable content such as the existing prose guides and STIG content. There are certainly overlapping interests and common problems shared between these focus areas, and cross-posting when needed is totally cool (we need to communicate!), however remediation-dev@nist isn't where we'll be figuring out remediation scripts for SSG ;)
On 3/27/13 11:36 AM, Nunez, Luis K wrote:
In regards to DR 5, a key challenge I see is passing XCCDF refine-value pairings into remediation scripts.
For example, in the SSG content we set a umask of 022 to meet FSO standards: <refine-value idref="var_umask_for_daemons" selector="022"/>
How can I get the value of var_umask_for_daemons into remediation content? To my (limited) knowledge of current standards such a method doesn't exist; is it planned via NIST or the OpenSCAP guys?
On 04/05/2013 06:00 AM, Shawn Wells wrote:
In regards to DR 5, a key challenge I see is passing XCCDF refine-value pairings into remediation scripts.
For example, in the SSG content we set a umask of 022 to meet FSO standards: <refine-value idref="var_umask_for_daemons" selector="022"/>
How can I get the value of var_umask_for_daemons into remediation content? To my (limited) knowledge of current standards such a method doesn't exist, is it planned via NIST or the OpenSCAP guys?
It is already possible with OpenSCAP.
For more info please see NISTIR-7275r4 and search for the <sub> element.
Here is an example of <sub> usage in the OpenSCAP unit tests:
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests...
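Building on Simon's pointer, a minimal sketch of what <sub> usage could look like for the umask example discussed earlier in the thread. The Rule id and script body here are illustrative, not taken from SSG; only the fix @system URN and the <sub> mechanism come from NISTIR-7275r4.

```xml
<Rule id="rule-umask-for-daemons-example">
  <!-- ... title, description, check ... -->
  <fix system="urn:xccdf:fix:script:sh">
    # The engine replaces the sub reference below with the refined
    # value of var_umask_for_daemons (e.g. 022) before running:
    umask <sub idref="var_umask_for_daemons"/>
  </fix>
</Rule>
```

So the tailoring/refine-value machinery already covers the "passing XCCDF variables through to the scripts" challenge raised at the start of the thread, at least for OpenSCAP.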
Have a great (hacking) weekend,
On 4/5/13 6:08 AM, Simon Lukasik wrote:
Thanks Simon! 6.4.4.5 "xccdf:fixtext and xccdf:fix Elements" was exactly what I needed.
On Mon, 25 Mar 2013 22:14:40 -0400 Shawn Wells shawn@redhat.com wrote:
Agree with your points above.
As for scripts, I've got roughly 400 scripts that I'm ready to commit, but being new to the git process, I don't want to make a mistake by sending them all at once to the list as patches.
There is also a new combinefixes.py script that handles scripts containing the characters "<", ">", and "&".
How should I proceed?
Thanks.
On 03/26/2013 03:14 AM, Shawn Wells wrote:
I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list.
Guys, thanks for bringing this up. I am also one who still believes that on-list discussion can be valuable.
First of all, I agree with Francisco that having the fix content (scripts) separated from the XCCDF file would be great. On the other hand, I work on implementing the tool, so I need to stick with the standard as much as possible.
That being said, I added support for embedded scripts to OpenSCAP following NISTIR-7275r4. Note that some aspects of remediation are not specified in great detail in NISTIR-7275r4. When implementing the tool, I always preferred a defensive approach over features.
In the long run, we may need to separate XCCDF and scripts again, but for that I would like to see some support from the standards bodies. Maybe we just need NISTIR-7670 to be amended to work with scripts (as opposed to the OVRL). Or we could use "urn:xccdf:fix:urls" and design our own file format, but I am not fond of that json usage.
Regarding the selection of fix languages, I don't think you need to use only one language exclusively. If some goal is easier to achieve with python than in bash, go for it. OpenSCAP will handle it correctly. A python fix will not be issued unless the python interpreter is available.
On 03/26/2013 05:18 AM, Francisco Slavin wrote:
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors, and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well.

Basically, whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad hoc, and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
I feel like some of these questions might be answered by the current implementation in OpenSCAP:
- The output of the script is captured and stored in the rule-result/message element.
- The output of the script has no effect on the evaluation result.
- The return value of the script is captured and stored in the rule-result/message element.
- The return value of the script has no effect on the evaluation result.
- The fix scripts are applied only for those rules which have a 'fail' result from the OVAL check.
- For those, the OVAL check is evaluated twice. The second run is immediately after the fix is applied. The result of the second OVAL evaluation decides between the 'fixed' and 'error' results.
- That assures that the fix element is not run twice for the same TestResult.
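Simon's list can be sketched as a toy control flow. Everything here is a placeholder model (the real logic lives inside OpenSCAP); it just makes the fail -> fix -> recheck -> fixed/error path concrete, including the point that the fix's output and return value are captured but never influence the verdict.

```shell
#!/bin/bash
# Toy model of the remediation flow described above: the fix runs
# only when the check fails, the check is re-run afterwards, and
# the second result decides between 'fixed' and 'error'.

evaluate_rule() {
    local check="$1" fix="$2"
    if "$check"; then
        echo "pass"
        return 0
    fi
    # Output and rc are captured (as for rule-result/message) but
    # deliberately not interpreted.
    local fix_output fix_rc
    fix_output="$("$fix" 2>&1)"
    fix_rc=$?
    if "$check"; then
        echo "fixed"
    else
        echo "error"
    fi
}
```

Note the second check run is what guarantees a fix never executes twice for the same result: a rule that came back 'fixed' would simply 'pass' on a later evaluation.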
I hope this helps,
On hosts that do not have a requirement for perl or python (i.e., compliant with the minimal-build concept), how would that remediation action occur were it coded in perl/python (or another language/tool)? I believe it is reasonable to assume bash and coreutils will be there for a minimal build (please correct me if I am wrong).
R, -Joe
On 03/26/2013 04:03 PM, Joe Wulf wrote:
On hosts that do not have a requirement for perl, or python (i.e. compliant with the minimal-build concept) .... how would that remediation action occur were it coded in perl/python (or other language/tool)? I believe it is reasonable to assume bash and coreutils will be there (please correct me if I am wrong) for a minimal build.
You're right. In that case remediation would not be possible.
What I was trying to say is that you can start implementing things in whatever language suits you best, and get the first implementation done quickly. The bash content can be added later if indeed needed.
If you have multiple fix elements for a given Rule, OpenSCAP can choose the best-fitting one. That decision process is based, among other things, on the existence of the interpreter binary.
On Tuesday, March 26, 2013 03:23:55 PM Simon Lukasik wrote:
In the long run, we may need to separate XCCDF and scripts again, but for that I would like to see some support from the standard bodies. Maybe we just need the NISTIR-7670 to be amended for work with scripts (as opposed to the OVRL). Or we can use the "urn:xccdf:fix:urls" and design own file format ... but I am not fond of that json usage.
If anyone wanted remediation in the SCAP 1.2 standard, I think doing it within the XCCDF might be the only approach. (Data streams kinda force this.) For the next SCAP revision, it might be broken out.
Regarding the selection of fix languages, I don't think that you need to use the only one language exclusively. If some goal is easier to achieve with python than in bash, go for it. OpenSCAP will handle it correctly. Python fix will not be issued unless the python interpreter is available.
I would strongly suggest staying with the simplest bash commands possible. That way the content can be pulled into kickstarts so systems are set up correctly via virt-install or any other way. Systems might be appliance-like and only have BusyBox or some other micro environment.
The concern with Python is it can pull in dependencies via import statements. If we do that, we'd have to define exactly what imports are expected for content writers.
-Steve
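As a concrete illustration of Steve's "simplest bash possible" point, here is a hedged sketch of a remediation snippet written against plain POSIX sh, the kind that could drop into a kickstart %post. The config path, option name, and helper function are all illustrative, not from SSG; and note that sed -i is not strictly POSIX, though both GNU and BusyBox sed support it.

```shell
#!/bin/sh
# Kickstart-%post-friendly sketch: no arrays, no [[ ]], no bash-only
# expansions, so it behaves the same under dash or BusyBox sh.
# CONF is parameterized here only to keep the example testable.
CONF="${CONF:-/etc/ssh/sshd_config}"

# Set (or append) a key/value pair, replacing an existing or
# commented-out setting in place when one is present.
set_sshd_option() {
    key="$1"; value="$2"
    if grep -q "^[#[:space:]]*${key}[[:space:]]" "$CONF"; then
        sed -i "s/^[#[:space:]]*${key}[[:space:]].*/${key} ${value}/" "$CONF"
    else
        echo "${key} ${value}" >> "$CONF"
    fi
}

# Example invocation:
# set_sshd_option PermitRootLogin no
```

Keeping fixes at this level of simplicity also sidesteps the Python-imports question entirely: there is no dependency surface to define for content writers.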
On 03/26/2013 05:18 AM, Francisco Slavin wrote:
This conversation gets a bit muddied by the definition of 'checking'. The fix scripts should not be written to check system state at the granularity targeted by OVAL checks. But they should still be doing basic error checking and error handling. As such they would need to report errors and the tool calling out to the fix scripts would have to act on them, so some common dictionary of return code values may be useful. Using CEE has been brought up in the past as well. Basically whichever front-end is calling out to SCAP libraries for check content gets its fine-grained error reporting taken care of thanks to those libraries; but the fix side will be done ad-hoc and having granular error reporting would still be a huge benefit. The quick-and-dirty way would simply be to have 'success' and 'fail' RCs defined and to capture the stderr and stdout of the script according to whether it passes or fails, and what degree of logging your tool is set to (debug/verbose/etc.). This last approach is what SecState is currently doing.
I feel like some of these questions might be answered by the current implementation in OpenSCAP:
- The output of the script is captured and stored in the rule-result/message element.
- The output of the script has no effect on the evaluation result.
- The return value of the script is captured and stored in the rule-result/message element.
- The return value of the script has no effect on the evaluation result.
- The fix scripts are applied only for those rules which have a 'fail' result from the OVAL check.
- For those, the OVAL check is evaluated twice. The second run is immediately after the fix is applied. The result of the second OVAL evaluation decides between the 'fixed' and 'error' result.
- That ensures the fix element is not run twice for the same TestResult.
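Sketched out, that evaluate/fix/re-evaluate flow looks roughly like this (the stubs stand in for the real OVAL evaluation and fix invocation; this is not OpenSCAP source):

```shell
#!/bin/sh
# Illustrative sketch of the remediation flow for a single rule whose first
# OVAL evaluation returned 'fail'. evaluate_oval and run_fix are stand-ins.

STATE=fail
evaluate_oval() { [ "$STATE" = pass ]; }   # stub: succeeds only when compliant
run_fix() { STATE=pass; }                  # stub: a successful fix flips state

if evaluate_oval; then
    result="pass"              # compliant already: no fix is run at all
else
    run_fix                    # fix applied only on 'fail'
    if evaluate_oval; then     # second evaluation, right after the fix
        result="fixed"
    else
        result="error"
    fi
fi
echo "$result"
```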
I hope this helps,
Is there a repo already set up for this that we could use?
- Isaac
-----Original Message-----
From: Shawn Wells [mailto:shawn@redhat.com]
Sent: Monday, March 25, 2013 10:15 PM
To: scap-security-guide@lists.fedorahosted.org
Subject: Remediation Scripts
I've been taking a few off-list questions around remediation lately, namely from interested parties asking "where do we start?" Wanted to move those conversations to on-list. Here's a few of the common questions and my thoughts to get us started.
(1) What language(s) should be used?
IMO, bash. I'm leaning this way because it's included in *every* RHEL release, whereas puppet modules would require the installation of 3rd party software. I'd like to see as much done through 'native' tools as possible. There's certainly advantages to Perl (e.g., potential speed) however I don't think we want to assume Perl is installed on all RHEL machines.
(2) Do we perform checking in the scripts?
Defined further, should the scripts contain conditional checks to see if they should be run? IMO, no. That's what OVAL is for.
(3) Where do we begin?
- Name remediation scripts after the corresponding XCCDF rule
- Build process includes them into the final ssg-rhel6-xccdf.xml
There's a known challenge in passing XCCDF variables through to the scripts; however, I wouldn't let this hold us up. There's still *tons* of work to be done while this gets sorted.
There's a good bit of RHEL6 content in the Aqueduct project that (I believe) Tresys committed. Perhaps we could reuse those scripts?
On 3/26/13 12:07 PM, Isaac Smitley wrote:
Is there a repo already set up for this that we could use?
https://fedorahosted.org/scap-security-guide/wiki/becomeadeveloper
RHEL6/input/fixes/bash/ for dropping scripts into
Hey Shawn.
I'm getting this:
git clone ssh://git.fedorahosted.org/git/scap-security-guide.git
Cloning into 'scap-security-guide'...
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Something wrong with the server? Or do I need to do something on my end that isn't specified on that page?
- Isaac
-----Original Message-----
From: scap-security-guide-bounces@lists.fedorahosted.org [mailto:scap-security-guide-bounces@lists.fedorahosted.org] On Behalf Of Shawn Wells
Sent: Friday, March 29, 2013 2:11 PM
Cc: scap-security-guide@lists.fedorahosted.org
Subject: Re: Remediation Scripts
On 3/26/13 12:07 PM, Isaac Smitley wrote:
Is there a repo already set up for this that we could use?
https://fedorahosted.org/scap-security-guide/wiki/becomeadeveloper
RHEL6/input/fixes/bash/ for dropping scripts into
_______________________________________________
scap-security-guide mailing list
scap-security-guide@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/scap-security-guide
On 3/29/13 2:43 PM, Isaac Smitley wrote:
Hey Shawn.
I'm getting this:
git clone ssh://git.fedorahosted.org/git/scap-security-guide.git
Cloning into 'scap-security-guide'...
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Something wrong with the server? Or do I need to do something on my end that isn't specified on that page?
Ah, the documentation needs improvement. The ssh:// assumes you've been granted commit access. Use http://
I've been meaning to chime in on this topic but have been extremely busy as of late.
I have already been writing scripts based off the SSG output and slowly working towards something usable. The main thing I have been working towards follows these 3 rules when it comes to remediation..
1: If we need to change a file, make a time-stamped copy before changing the file.
2: If a file is modified, let's put a comment on what was done with a date and SSG Rule ID.
3: Write stdout and stderr to a log file. Whatever happens, I want to know about it.
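Those three rules could be sketched roughly as follows (the rule ID, file paths, and setting are invented for the demo, not actual SSG content):

```shell
#!/bin/sh
# Minimal sketch of the three rules above, against a throwaway file.
RULE_ID="sysctl_net_ipv4_ip_forward"
TARGET=/tmp/demo_sysctl.conf
LOG=/tmp/demo_remediation.log
STAMP=$(date +%Y%m%d-%H%M%S)

printf 'net.ipv4.ip_forward = 1\n' > "$TARGET"    # demo starting state

{
    cp -p "$TARGET" "${TARGET}.${STAMP}.bak"       # rule 1: timestamped copy
    sed -i 's/^net.ipv4.ip_forward.*/net.ipv4.ip_forward = 0/' "$TARGET"
    # rule 2: note what was changed, when, and by which rule
    echo "# changed by SSG rule ${RULE_ID} on $(date +%F)" >> "$TARGET"
} >> "$LOG" 2>&1                                   # rule 3: capture all output
```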
The other thing I have done is write all the remediations into file-specific scripts like audit.rules.sh with their own checks: check if it's there, check if it is the right value, etc. I know it's redundant to double check, but it hurts nothing in my book and also leaves the option to run the scripts separately.
I figured that, at a bare minimum, with what I am working on based off the SSG findings of a base install, these scripts could be kickstarted to wrap up the base install. I also agree to keep it all bash. Consider it from a minimal install and then build up from there. I would have to test out Python and see if it would work from the images I have, to ensure there are no deps involved. If it was a big system already in production then maybe Python might be applicable from a resource perspective.
Not sure if what I am presently doing is being reiterated by other people, but thought I would throw my 2 cents in. Seems like a good time to pool resources.
Regards, Aron Lamb
On Fri, Mar 29, 2013 at 2:11 PM, Shawn Wells shawn@redhat.com wrote:
On 3/26/13 12:07 PM, Isaac Smitley wrote:
Is there a repo already set up for this that we could use?
https://fedorahosted.org/scap-security-guide/wiki/becomeadeveloper
RHEL6/input/fixes/bash/ for dropping scripts into
On 3/29/13 3:08 PM, Aaron Lamb wrote:
I've been meaning to chime in on this topic but have been extremely busy as of late.
Thanks for taking time to contribute to the conversation! Many people seem to think committing code is the only way to contribute; getting engaged in conversations is *extremely* valuable.
I have already been writing scripts based off the SSG output and slowly working towards something usable. The main thing I have been working towards follows these 3 rules when it comes to remediation..
1: If we need to change a file, make a time-stamped copy before changing the file.
2: If a file is modified, let's put a comment on what was done with a date and SSG Rule ID.
3: Write stdout and stderr to a log file. Whatever happens, I want to know about it.
I agree with the logic behind #3, however note that output is captured by OpenSCAP natively (thus the scripts don't need their own log files). Check out Simon Lukasik's awesome writeup here: http://isimluk.livejournal.com/3573.html
Specifically, the <message> tags within the <rule-result> elements.
The other thing I have done is write all the remediations into file-specific scripts like audit.rules.sh with their own checks: check if it's there, check if it is the right value, etc. I know it's redundant to double check, but it hurts nothing in my book and also leaves the option to run the scripts separately.
From a performance standpoint you're correct, the overhead should be minimal. The counterpoint (which is where I personally fall) has to do with the long-term support of the code snippets.
Currently there are some 430 OVAL checks:
$ ls RHEL6/input/checks/ | wc -l
434
Let's say, on average, the conditional check is 1.5 lines of code. Over 434 checks, that's an extra 651 lines of code we'd have to QA and support. Why bother when OVAL does this for us?
Certain scripts will have checks in them, such as those for sysctl settings, e.g.:
if (value present in /etc/sysctl.conf)
    perform sed -i
else
    echo value >> /etc/sysctl.conf
fi
However, there's no reason that script should check whether it needs to be run at all. OVAL will do this for us.
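Made runnable against a throwaway file, that pseudocode might look like the sketch below (the /tmp path stands in for /etc/sysctl.conf, and the kernel parameter is just an example):

```shell
#!/bin/sh
# Runnable version of the if/else pseudocode above, demo paths only.
SYSCTL_CONF=/tmp/demo-sysctl.conf
printf 'kernel.randomize_va_space = 1\n' > "$SYSCTL_CONF"   # demo state

if grep -q '^kernel.randomize_va_space' "$SYSCTL_CONF"; then
    # value present: rewrite the existing line in place
    sed -i 's/^kernel.randomize_va_space.*/kernel.randomize_va_space = 2/' "$SYSCTL_CONF"
else
    # value absent: append it
    echo 'kernel.randomize_va_space = 2' >> "$SYSCTL_CONF"
fi
```

Note there is still no "should I run at all?" check; per the point above, OVAL decides that before the fix is ever invoked.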
I figured that, at a bare minimum, with what I am working on based off the SSG findings of a base install, these scripts could be kickstarted to wrap up the base install. I also agree to keep it all bash. Consider it from a minimal install and then build up from there. I would have to test out Python and see if it would work from the images I have, to ensure there are no deps involved. If it was a big system already in production then maybe Python might be applicable from a resource perspective.
I, too, envision people wrapping the remediation scripts into their %post section.
Not sure if what I am presently doing is being reiterated by other people, but thought I would throw my 2 cents in. Seems like a good time to pool resources.
Patches welcome!
Although, as OVAL determines if a remediation is even needed, we *must* ensure the corresponding OVAL code is functioning properly before committing associated remediation code.
So it might be my mail client, but I'm struggling to discern who is saying what in this thread. I added in some >> indicators but may have gotten them wrong.
On Friday, April 05, 2013 12:23 AM, Shawn Wells wrote:
On 3/29/13 3:08 PM, Aaron Lamb wrote:
I've been meaning to chime in on this topic but have been extremely busy as of late.
Thanks for taking time to contribute to the conversation! Many people seem to think committing code is the only way to contribute; getting engaged in conversations is *extremely* valuable.
I have already been writing scripts based off the SSG output and slowly working towards something usable. The main thing I have been working towards follows these 3 rules when it comes to remediation..
1: If we need to change a file, make a time-stamped copy before changing the file.
2: If a file is modified, let's put a comment on what was done with a date and SSG Rule ID.
3: Write stdout and stderr to a log file. Whatever happens, I want to know about it.
I agree with the logic behind #3, however note that output is captured by OpenSCAP natively (thus the scripts don't need their own log files). Check out Simon Lukasik's awesome writeup here:
http://isimluk.livejournal.com/3573.html
Specifically, the <message> tags within the <rule-result> elements.
For point #3 I disagree. I think that the scripts themselves should not know or care where information is getting logged to. Basic info should be written to stdout by default; error information should be written to stderr by default. Tools such as the oscap tool can then capture this information and log it accordingly (possibly based on whether a user passes in a --verbose option or has different config options set). I think that having one configuration point for logging will make the most sense in the long run, and that point should be the tool consuming the content. Thus, if a tool provides the options, you can configure logging however you like for your system:
- I want my audit results to go to spot A
- I want my remediation results to go to spot B
- If verbose, capture the stdout information somewhere
- If non-verbose, only capture the stderr information
I think we should have a more involved conversation about the approach to writing remediation content for SSG. I will send a patch up momentarily (purely to facilitate this discussion) demonstrating how we were originally writing SSG-targeted scripts in Aqueduct [1].
My concern is that there may be some clash between script-authorship best practices and SSG-content-authorship best practices. From our quick conversation after the workshop it seems like Jeff and Shawn are both on the same page re: scripts should live directly in the <fix> tags when content hits a consuming system; they should not be kept in separate .sh files and referenced from within the <fix> tag. This poses a problem for authorship with regards to functional programming and code maintenance.
As you will be able to see from the patches I forward over, we took the approach of grouping common tasks into one common function to perform those tasks and passing in parameters as appropriate for a particular fix. This relied on having the common function in its own file and sourcing (the bash '.' operator) that file for specific fix-scripts. This is basic programming best-practice to keep from copy/pasting code across multiple areas. If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
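The sourcing pattern being described might look like the sketch below (the file names and the helper function are hypothetical, not taken from the Aqueduct tree; /tmp stands in for repo paths):

```shell
#!/bin/sh
# Sketch of keeping one shared helper and sourcing it from per-rule fixes.

# --- a shared helper kept once in the repo (e.g. fixes/bash/templates.common) ---
cat > /tmp/common.sh <<'EOF'
# ensure_line FILE KEY VALUE: append "KEY VALUE" unless KEY is already set
ensure_line() {
    grep -q "^$2" "$1" || echo "$2 $3" >> "$1"
}
EOF

# --- a per-rule fix script, differing from its siblings only in parameters ---
. /tmp/common.sh
: > /tmp/demo.conf
ensure_line /tmp/demo.conf Banner /etc/issue
ensure_line /tmp/demo.conf Banner /etc/issue   # second call is a no-op
```

The open question in the paragraph above is exactly what happens to that `. /tmp/common.sh` line when each fix is inlined into its own `<fix>` element.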
It seems like a good time to start pushing these conversations given the recent questions popping up on the list. Thoughts?
- Francisco
[1] - Aqueduct SSG-targeted remediation scripts - https://fedorahosted.org/aqueduct/browser/trunk/compliance/Bash/SSG
On 4/5/13 3:08 PM, Francisco Slavin wrote:
So it might be my mail client, but I'm struggling to discern who is saying what in this thread. I added in some >> indicators but may have gotten them wrong.
fwiw, works in Thunderbird 17.0.5 and Zimbra.
On Friday, April 05, 2013 12:23 AM, Shawn Wells wrote:
On 3/29/13 3:08 PM, Aaron Lamb wrote:
I've been meaning to chime in on this topic but have been extremely busy as of late.
Thanks for taking time to contribute to the conversation! Many people seem to think committing code is the only way to contribute; getting engaged in conversations is *extremely* valuable.
I have already been writing scripts based off the SSG output and slowly working towards something usable. The main thing I have been working towards follows these 3 rules when it comes to remediation:
1: If we need to change a file, make a time-stamped copy before changing the file.
2: If a file is modified, let's put a comment on what was done with a date and SSG Rule ID.
3: Write stdout and stderr to a log file. Whatever happens, I want to know about it.
I agree with the logic behind #3, however note that output is captured by OpenSCAP natively (thus the scripts don't need their own log files). Check out Simon Lukasik's awesome writeup here:
http://isimluk.livejournal.com/3573.html
Specifically, the <message> tags within the <rule-result> elements.
For point #3 I disagree. I think that the scripts themselves should not know or care where information is getting logged to. Basic info should be written to stdout by default; error information should be written to stderr by default. Tools such as the oscap tool can then capture this information and log it accordingly (possibly based on whether a user passes in a --verbose option or has different config options set). I think that having one configuration point for logging will make the most sense in the long run, and that point should be the tool consuming the content. Thus, if a tool provides the options, you can configure logging however you like for your system:
- I want my audit results to go to spot A
- I want my remediation results to go to spot B
- If verbose, capture the stdout information somewhere
- If non-verbose, only capture the stderr information
I think we should have a more involved conversation about the approach to writing remediation content for SSG. I will send a patch up momentarily (purely to facilitate this discussion) demonstrating how we were originally writing SSG-targeted scripts in Aqueduct [1].
Emails with [PATCH] in the subject line move to the top of my queue... Your patch makes much more sense after reading this!
My concern is that there may be some clash between script-authorship best practices and SSG-content-authorship best practices. From our quick conversation after the workshop it seems like Jeff and Shawn are both on the same page re: scripts should live directly in the <fix> tags when content hits a consuming system; they should not be kept in separate .sh files and referenced from within the <fix> tag. This poses a problem for authorship with regards to functional programming and code maintenance.
I know you and I have spoken of this, but as a clarification point (to avoid confusion of others): the location of scripts during development will be different than in the finished product. Within the project, the RHEL6/input/fixes/bash/ directory houses the scripts, which Make merges into the master XCCDF output. When saying content should live within the <fix> tags, the statement is regarding the finished output.
Full agreement on the challenges this may introduce, though.
As you will be able to see from the patches I forward over, we took the approach of grouping common tasks into one common function to perform those tasks and passing in parameters as appropriate for a particular fix. This relied on having the common function in its own file and sourcing (the bash '.' operator) that file for specific fix-scripts. This is basic programming best-practice to keep from copy/pasting code across multiple areas. If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
It seems like a good time to start pushing these conversations given the recent questions popping up on the list. Thoughts?
Perhaps templates can be used, similar to the OVAL checks. For example the OVAL for checking package installation is here: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
The create_package_installed.py script replaces instances of PKGNAME with items listed here: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
And the whole process is managed by create_package_installed.py: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
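The same templating idea could carry over to bash fixes; a rough sketch (the template text and package list are illustrative only, not the actual SSG files):

```shell
#!/bin/sh
# Sketch of PKGNAME templating applied to bash remediation scripts:
# one template, many generated per-package fixes.
TEMPLATE='yum -y install PKGNAME'

for pkg in aide audit openssh-server; do      # would come from a CSV list
    echo "$TEMPLATE" | sed "s/PKGNAME/$pkg/" \
        > "/tmp/package_${pkg}_installed.sh"
done
cat /tmp/package_aide_installed.sh
```

Only the substituted values differ between the generated fixes, so the build process QAs a single template rather than hundreds of near-identical scripts.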
On 4/5/13 10:25 PM, Shawn Wells wrote:
On 4/5/13 3:08 PM, Francisco Slavin wrote:
So it might be my mail client, but I'm struggling to discern who is saying what in this thread. I added in some >> indicators but may have gotten them wrong.
fwiw, works in Thunderbird 17.0.5 and Zimbra.
On Friday, April 05, 2013 12:23 AM, Shawn Wells wrote:
On 3/29/13 3:08 PM, Aaron Lamb wrote:
I've been meaning to chime in on this topic but have been extremely busy as of late.
Thanks for taking time to contribute to the conversation! Many people seem to think committing code is the only way to contribute; getting engaged in conversations is *extremely* valuable.
I have already been writing scripts based off the SSG output and slowly working towards something usable. The main thing I have been working towards follows these 3 rules when it comes to remediation:
1: If we need to change a file, make a time-stamped copy before changing the file.
2: If a file is modified, let's put a comment on what was done with a date and SSG Rule ID.
3: Write stdout and stderr to a log file. Whatever happens, I want to know about it.
I agree with the logic behind #3, however note that output is captured by OpenSCAP natively (thus the scripts don't need their own log files). Check out Simon Lukasik's awesome writeup here:
http://isimluk.livejournal.com/3573.html
Specifically, the <message> tags within the <rule-result> elements.
For point #3 I disagree. I think that the scripts themselves should not know or care where information is getting logged to. Basic info should be written to stdout by default; error information should be written to stderr by default. Tools such as the oscap tool can then capture this information and log it accordingly (possibly based on whether a user passes in a --verbose option or has different config options set). I think that having one configuration point for logging will make the most sense in the long run, and that point should be the tool consuming the content. Thus, if a tool provides the options, you can configure logging however you like for your system:
- I want my audit results to go to spot A
- I want my remediation results to go to spot B
- If verbose, capture the stdout information somewhere
- If non-verbose, only capture the stderr information
I think we should have a more involved conversation about the approach to writing remediation content for SSG. I will send a patch up momentarily (purely to facilitate this discussion) demonstrating how we were originally writing SSG-targeted scripts in Aqueduct [1].
EMails with [PATCH] in the subject line move to the top of my queue... Your patch makes much more sense after reading this!
My concern is that there may be some clash between script-authorship best practices and SSG-content-authorship best practices. From our quick conversation after the workshop it seems like Jeff and Shawn are both on the same page re: scripts should live directly in the <fix> tags when content hits a consuming system; they should not be kept in separate .sh files and referenced from within the <fix> tag. This poses a problem for authorship with regards to functional programming and code maintenance.
I know you and I have spoken of this, but as a clarification point (to avoid confusion of others): the location of scripts during development will be different than in the finished product. Within the project, the RHEL6/input/fixes/bash/ directory houses the scripts, which Make merges into the master XCCDF output. When saying content should live within the <fix> tags, the statement is regarding the finished output.
Full agreement on the challenges this may introduce, though.
As you will be able to see from the patches I forward over, we took the approach of grouping common tasks into one common function to perform those tasks and passing in parameters as appropriate for a particular fix. This relied on having the common function in its own file and sourcing (the bash '.' operator) that file for specific fix-scripts. This is basic programming best-practice to keep from copy/pasting code across multiple areas. If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
It seems like a good time to start pushing these conversations given the recent questions popping up on the list. Thoughts?
Perhaps templates can be used, similar to the OVAL checks. For example the OVAL for checking package installation is here: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
The create_package_installed.py script replaces instances of PKGNAME with items listed here: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
And the whole process is managed by create_package_installed.py: http://git.fedorahosted.org/cgit/scap-security-guide.git/tree/RHEL6/input/ch...
Generated a sample patch: https://lists.fedorahosted.org/pipermail/scap-security-guide/2013-April/0029...
This clearly won't solve all the code reuse challenges, but wanted to throw some code behind the idea to start the conversation. Such an approach would have a measurable impact on QA and supportability/maintenance.
I also realized that I did not address your pre-processing comment. Since the SSG output is generated by our own self-inflicted build process, there's an incredible amount of freedom to perform such a thing and hide it from content developers and users. What do you have in mind?
On 04/05/2013 09:08 PM, Francisco Slavin wrote:
If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
Maybe the text substitution of <plain-text> could be considered for this task. According to NISTIR-7275r4, the xccdf:sub element within xccdf:fix may refer to the xccdf:plain-text element.
Hence, SSG may use plain-text elements for the definition of common scripts or functions, and only refer to such a single plain-text element from all of the Rules.
The example of <plain-text> usage is in OpenSCAP unittests at:
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests...
and
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests...
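For illustration, a rough sketch of the pattern (the ids, the Rule, and the helper function here are invented; real Rules carry additional child elements):

```xml
<!-- shared function defined once, in a plain-text element -->
<plain-text id="bash_common_functions">
ensure_line() { grep -q "^$2" "$1" || echo "$2 $3" >> "$1"; }
</plain-text>

<!-- each Rule's fix pulls the helper in via sub -->
<Rule id="rule-sshd_enable_warning_banner">
  <fix system="urn:xccdf:fix:script:sh">
<sub idref="bash_common_functions"/>
ensure_line /etc/ssh/sshd_config Banner /etc/issue</fix>
</Rule>
```

At resolution time the `<sub>` is replaced by the plain-text body, so the repo keeps one copy of the function while every generated fix receives it.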
Best regards,
On Apr 6, 2013, at 8:08 AM, Simon Lukasik slukasik@redhat.com wrote:
On 04/05/2013 09:08 PM, Francisco Slavin wrote:
If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
Maybe the text substitution of <plain-text> could be considered for this task. According to NISTIR-7275r4, the xccdf:sub element within xccdf:fix may refer to the xccdf:plain-text element.
Hence, SSG may use plain-text elements for the definition of common scripts or functions, and only refer to such a single plain-text element from all of the Rules.
The example of <plain-text> usage is in OpenSCAP unittests at:
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests...
and
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests...
This is fantastic, thank you Simon! I went through your unit test scripts and got a few ideas on improving SSG (outside of remediation).
I won't get a chance to try this until late Sunday, but we should easily be able to transform "functions" as existing in current Tresys scripts. Someone feel free to shoot out a first draft/patch!
On Saturday, April 06, 2013 4:30 PM, Shawn Wells wrote:
On Apr 6, 2013, at 8:08 AM, Simon Lukasik slukasik@redhat.com wrote:
On 04/05/2013 09:08 PM, Francisco Slavin wrote:
If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
Maybe the text substitution of <plain-text> could be considered for this task. According to NISTIR-7275r4, the xccdf:sub element within xccdf:fix may refer to the xccdf:plain-text element.
Hence, SSG may use plain-text elements for the definition of common scripts or functions, and only refer to such a single plain-text element from all of the Rules.
The example of <plain-text> usage is in OpenSCAP unittests at:
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests/test_remediation_subs_plain_text.xccdf.xml
and
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests/
This is fantastic, thank you Simon! I went through your unit test scripts and got a few ideas on improving SSG (outside of remediation).
I won't get a chance to try this until late Sunday, but we should easily be able to transform "functions" as existing in current Tresys scripts. Someone feel free to shoot out a first draft/patch!
The <plain-text> usage does look like an excellent approach here. I'll try to find some time today to hack together a patch based on the scripts I sent previously.
- Francisco
On Sunday, April 07, 2013 9:03 AM, Francisco Slavin wrote:
On Saturday, April 06, 2013 4:30 PM, Shawn Wells wrote:
On Apr 6, 2013, at 8:08 AM, Simon Lukasik slukasik@redhat.com wrote:
On 04/05/2013 09:08 PM, Francisco Slavin wrote:
If all of the bash scripts will live within one XCCDF XML file, each in discrete <fix> tags, I'm not sure what approach the community would like to take regarding function re-use. It seems like some pre-processing may be necessary; i.e. resolve the source operator before inserting the script content into the <fix> tag. The goal is to only have one copy of a specific function saved in the SSG repo but to be able to use it for multiple <fix>es which differ only in one parameter.
Maybe the text substitution of <plain-text> could be considered for this task. According to NISTIR-7275r4, the xccdf:sub element within xccdf:fix may refer to the xccdf:plain-text element.
Hence, SSG may use plain-text elements for the definition of common scripts or functions, and only refer to such a single plain-text element from all of the Rules.
The example of <plain-text> usage is in OpenSCAP unittests at:
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests/test_remediation_subs_plain_text.xccdf.xml
and
http://git.fedorahosted.org/cgit/openscap.git/tree/tests/API/XCCDF/unittests/
This is fantastic, thank you Simon! I went through your unit test scripts and got a few ideas on improving SSG (outside of remediation).
I won't get a chance to try this until late Sunday, but we should easily be able to transform "functions" as existing in current Tresys scripts. Someone feel free to shoot out a first draft/patch!
The <plain-text> usage does look like an excellent approach here. I'll try to find some time today to hack together a patch based on the scripts I sent previously.
I just sent up a patch with an initial stab at this, but it will need a bit of touchup. My XSLT is rocky, so I was pretty heavy-handed with my update to the addfixes XSLT; there is probably a cleaner way of getting everything in the proper order in the output. It currently takes everything from files named ".*common" in the fixes/bash/ directory and puts it into <plain-text> elements in the final XCCDF, as per Simon's examples. The question I have is whether I should replace the source operator lines in the .sh files with <sub> references, or whether we should leave those source lines in place and turn them into <sub> lines with some more transform magic. Any preferences?
Thank you - Francisco