Replies: 5 comments 6 replies
-
I'm guessing you most likely enabled smartcard enforcement for yourself and your computer?
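If so, one quick check (assuming the enforcement was applied the usual way on macOS, by adding pam_smartcard.so to the PAM stacks) is to look at the files that gate sudo/su/login; they're world-readable, so this works even when sudo itself is broken:

```sh
# Diagnostic sketch: see whether smartcard enforcement was written into
# the PAM stacks that gate sudo/su/login. No privileges required to read.
for f in /etc/pam.d/sudo /etc/pam.d/su /etc/pam.d/login; do
  echo "== $f =="
  grep -n 'pam_smartcard' "$f" 2>/dev/null || echo "(no smartcard line)"
done
```

If pam_smartcard.so appears there, restoring the stock PAM files from Recovery mode (or via another admin account) is the usual way back in, but verify against a known-good copy of the files first.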
-
This may be an impetus to preserve backup copies of modified files, or to add a test/debug mode that only generates replacement files without actually applying them. Backups of modified files could serve not only for restores in cases like this (maybe not this case), but the change inventory and modification diffs could also be used for debugging.
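A rough sketch of that backup idea, purely illustrative (the helper name and backup location are made up, not anything the project provides):

```sh
# Hypothetical helper: keep a timestamped copy of any file a fix is
# about to replace, so changes can be inventoried, diffed, and restored.
BACKUP_DIR="/var/db/mscp-backups/$(date +%Y%m%d%H%M%S)"   # assumed location

backup_and_install() {
  local target="$1" replacement="$2"
  mkdir -p "$BACKUP_DIR$(dirname "$target")"
  [ -f "$target" ] && cp -p "$target" "$BACKUP_DIR$target"
  install -m 644 "$replacement" "$target"
}

# Change inventory later: diff each preserved file against the live one.
# find "$BACKUP_DIR" -type f | while read -r f; do
#   diff -u "$f" "${f#"$BACKUP_DIR"}"
# done
```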
-
Are you referring to Continuous Diagnostics and Mitigation (CDM) and Configuration Settings Management (CSM)?
Yes, too ambitious as a framework, yet fundamental to most technical debt remediation efforts. Instantiation/deployment automation along with A/B switching is a far simpler objective for continuous integration. Nevertheless, configuration change artifacts are trivial to generate and retain, and their utility for debugging (even for ruling out MSCP as a root cause) cannot be overstated. Configuration changes which are not atomic would require additional steps to preserve artifacts, but an environment variable such as MSCP_PREFIX (with mkdir -p) prepended to the target configuration path in rule fixes could be a simple means for dry-run change verification and debugging, and also a reasonable starting point for those interested in preserving a record/log of configuration changes.
Does the repo contain any hooks or environment variables to facilitate a dry run, or an alternate prefix? Is /private/etc/ the only directory modified by the project? Perhaps changes could be evaluated by operating on a copy of /private/etc/ rather than the original?
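To make the MSCP_PREFIX suggestion concrete, here's a minimal sketch; the variable is the one proposed above, not something the repo currently honors, and the umask edit just stands in for an arbitrary rule fix:

```sh
# Hypothetical dry-run wrapper: stage fixes into an alternate prefix
# instead of the live system, then review the resulting diff.
export MSCP_PREFIX="$(mktemp -d /tmp/mscp-dryrun.XXXXXX)"

# Seed the sandbox with a copy of the directory the fixes would touch
# (run with sudo if root-only files should be included).
mkdir -p "$MSCP_PREFIX/private"
cp -Rp /private/etc "$MSCP_PREFIX/private/etc"

# A fix written against the prefix instead of bare /private/etc/...:
echo "umask 027" >> "$MSCP_PREFIX/private/etc/profile"

# Verification: the complete change set, with no risk to the host.
diff -ru /private/etc "$MSCP_PREFIX/private/etc"
```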
-
While I do agree that preserving current configuration settings and artifacts is beneficial for things like debugging and understanding what might be causing certain behavior on a system, the actual implementation of security configuration settings is not in scope for this project. The project provides guidance and documentation on how an admin can meet security controls and establish baselines for use in their environments. The tools of the project create the documentation and supporting artifacts that can then be used in implementing the configuration, but they don't directly impact the configuration of a system. No changes to /private/etc or other system configuration files are applied while generating the guidance documentation and artifacts from the repo.
The compliance script that is generated by the project is meant to simplify the process of evaluating and remediating the security configurations selected for an organization's baseline. However, it is up to the admin who is implementing the security configurations to test and verify that the compliance script (if they choose to use it) meets their needs in their environment prior to any deployment. MSCP cannot test for all use cases or implementations.
It is this generated compliance script that would facilitate A/B switching and testing, but that is not currently part of the generated script. Testing of specific controls happens before a control becomes available in the MSCP library, and if there are specific issues with how a control functions, those can be identified and addressed in the project. If there is interest from someone in the community in adding this feature to the generated script, we would definitely be interested in reviewing suggestions or PRs for this idea.
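For anyone in the community weighing such a PR, one minimal shape the A/B idea could take in the generated script might look like this; everything here (function names, state directory) is hypothetical, not current project behavior:

```sh
# Hypothetical A/B support for the generated compliance script:
# snapshot a rule's pre-fix state so the change can be reverted
# (the "A" state) or re-applied (the "B" state).
STATE_DIR="/var/db/mscp-ab"        # assumed location
mkdir -p "$STATE_DIR"

run_fix_with_snapshot() {
  local rule_id="$1" target_file="$2"; shift 2
  # A side: preserve the file as it was before remediation.
  [ -f "$STATE_DIR/$rule_id.orig" ] || cp -p "$target_file" "$STATE_DIR/$rule_id.orig"
  "$@"                             # B side: run the fix command
}

revert_rule() {
  local rule_id="$1" target_file="$2"
  cp -p "$STATE_DIR/$rule_id.orig" "$target_file"
}

# Usage (illustrative rule id and fix function):
# run_fix_with_snapshot some_rule /private/etc/pam.d/sudo apply_pam_fix
# revert_rule           some_rule /private/etc/pam.d/sudo
```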
-
So, after my vent on the other discussion thread, #396 (comment), I considered the status of my notes regarding this and similar discussions, where public users are inappropriately applying this project and locking themselves out, as it were, as a learning exercise. I also considered comments from Jamf developers implying concerns about the challenges of adopting a CI/CD model that tries to integrate this project without the artifacts/diffs between the vendor's official release and the baseline benchmarks produced by the scripts in this project. There's quite a lot to consider here, even before grappling with the complexity of aligning the asynchronous release cadences involved.
Nonetheless, I agree with the theoretical reasoning that projects like this, which are effectively (or actually) subordinate to RMF and the marquee vendor, should not usurp control or declare that the changes from the original position to the baseline benchmark are authoritative (e.g. the script file change listing as an artifact of benchmark generation). However, this is not just a theoretical project repository; it is intended for public consumption and collaboration (I presume). As such, practical considerations might need to take precedence. And if that is the actual reasoning behind the resistance to change logs and artifacts accompanying benchmarks, it sounds more like obfuscation than compliance. Diffs are not CDM or change management.
I am certain that providing guideline examples of control selections for, e.g., low-, medium-, and high-security organizations, complemented with reports showing the artifact diffs produced by rendering the benchmarks, would go a long way toward empowering public consumers to adopt this project. The greatest barrier is not configuration management itself, but rather educating on and integrating what this project provides (and the context it requires) with whatever configuration management practices already exist at public consumer sites. In other words, it would be immensely helpful to provide representative diffs for various standard control selections (perhaps termed "benchmark verification for baseline validations") to demonstrate the expected output.
If I were to develop a PR to facilitate public consumer adoption of this project, it would go one step further than just logging script file changes. It would be a tool to streamline control selection for defined standards and/or my own custom controls. The tool would make it easy to experiment, for example, by copying relevant filesystem nodes to a temporary prefix and applying changes there. While service testing could be tricky, intentionally failing all service checks to trigger configuration fixes might be the low-hanging fruit. Imagine the win: a low-barrier development environment for anyone with a single computer to experiment with this project.
I concur, and view use-case and implementation testing as a waste of effort for this project; it would just create one more thing to qualify before adoption. And the last thing I want is the liability of qualifying an entire baseline benchmark with every repository change. Like most sites, I already have an established deployment state, which is either good enough or a candidate for improvement. In my view, unless designing a completely new site implementation, adopting security improvements means incrementing the existing state, not surrendering configuration management to a third-party tool.
Actually, I already have configuration management in place; I just want help understanding what the project's scripts do, so I can qualify the changes made from the original position to a benchmark. That is reasonable preparation, or staging, for local integration. I don't view this project as a means to adopt or implement the RMF outright; like most, my world is too disorganized for that. I have multiple frameworks already trying to coexist. While RMF alignment is great, the first step toward adopting this project is integrating its output with the other frameworks already in use.
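For example, a "benchmark verification" run of the kind described above could be as simple as this sketch (paths and names are illustrative only):

```sh
# Hypothetical "benchmark verification" report: apply a baseline's fixes
# inside a scratch copy of the configuration tree, then publish the diff
# as an artifact alongside the generated guidance.
SRC=/private/etc
WORK="$(mktemp -d /tmp/baseline-verify.XXXXXX)"

cp -Rp "$SRC" "$WORK/etc"

# ... run the selected baseline's fix routines against "$WORK/etc" ...

# The diff is the artifact: what this control selection actually changes.
diff -ruN "$SRC" "$WORK/etc" > "baseline-verify-$(date +%Y%m%d).diff" || true
```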
-
I followed the instructions in the wiki and ran the compliance script: option 2 to see where I was, then option 3 to run the fixes for the non-compliant items. The fixes ran, and I noticed my compliance percentage went up. But after exiting the script I can no longer sudo or sudo su - root to run the SCAP tool, the macos_security scripts, etc. Any assistance would be greatly appreciated.
Also, I was using 800-53r5_moderate.yaml.