OMB M-22-18 and NIST's SSDF

You want me to attest to what?

By now the OMB memo M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices,” has been discussed by many. In short, among other activities, the memo requires federal agencies to begin collecting self-attestations from their software vendors and suppliers that they are using secure software development practices, specifically NIST’s Secure Software Development Framework (SSDF) and NIST’s additional Software Supply Chain Security guidance. In some cases, it allows agencies to make use of a third-party assessor, such as a FedRAMP 3PAO, to perform these activities rather than the supplier themselves, if the agencies determine the risk warrants it. Additionally, it states that agencies may request additional artifacts, such as Software Bills of Materials (SBOMs).

For a detailed overview of the memo, its scope, requirements and concerns, check out this article “Confronting the government’s latest secure software development guidance” from Walter Haydock at Deploy Securely.

In addition, industry trade organizations, such as the Information Technology Industry Council (ITI), have written a formal letter to The White House’s Office of Management and Budget (OMB) citing several concerns associated with the memo and its requirements for software suppliers and vendors. Among the concerns and recommendations cited by ITI are:

  • Clarify the mandate to leverage one standardized form for all agencies with the option to request addendums for mission-unique needs

  • Discourage agencies from requiring artifacts until SBOMs are scalable and consumable (I personally think this one is ridiculous and demonstrates a misunderstanding of what is possible currently or an attempt to avoid transparency and accountability)

  • Adjust the implementation timeline to allow for a standardized rollout through the established regulatory process under the Administrative Procedure Act

  • Consider piloting the collection of attestations and artifacts per M-22-18 prior to mandating them

  • Leverage the overlap with existing processes to the greatest extent possible to avoid the introduction of additional complexity

Needless to say, strong industry groups and lobbyists are pushing back very hard on the calls for increased transparency in the software supply chain from third-party software vendors. Those groups were successful in having SBOM language stripped from the 2023 NDAA and are looking to do something similar here: delaying or impeding both self-attestations to secure software development practices and the provision of machine-readable artifacts such as SBOMs. Those artifacts would objectively show the measures organizations take to produce secure software and remove some of the information asymmetry that currently exists between software suppliers and consumers, an asymmetry that contributes to cybersecurity being dubbed a market failure by some.

In my opinion, if every objection and critique is followed precisely, we will have several more years in which the Federal government and its critical systems across Federal and Defense programs continue to lack visibility into the software they consume and the (in)secure development practices used to create it. This all occurs against the concurrent backdrop of the push for zero trust, which to me seems just slightly off, as it means continued implicit trust (a zero trust antipattern) in software vendors without objective artifacts that build trust in software consumption.

Couple that with the reality that the Cyber Safety Review Board (CSRB) found that agencies spent tens of thousands of hours responding to events such as Log4j. Just last month, CISA and the FBI announced that another agency fell victim to a data breach due to a vulnerable Log4j component still residing in its environment a year after the original incident ravaged the industry and Federal agencies.

Delaying transparency for several more years, while malicious actors rapidly realize the value of the software supply chain attack vector, with sources such as Sonatype’s “State of the Software Supply Chain” report showing an average annual increase of over 700% in these attacks over the last three years, would be hard to describe as anything but a recipe for disaster.

All of that aside, one thing is clear: the Government is committed to requiring more secure software from its suppliers in some shape or form.

While several others have written about the OMB memo and its implications, including concerns and critiques, I wanted to provide further insight into the SSDF itself, since many software vendors and suppliers likely aren’t familiar with it but, assuming lobbyists aren’t successful, will soon need to attest that they’re adhering to the practices it lays out. So let’s take a look at the SSDF below.

Regardless of which concerns and critiques The White House and OMB take into account, the Government is clearly committed to pushing suppliers toward secure software development and to using the SSDF as a tool to do so.

For that reason, it is critical that software suppliers become very familiar with the SSDF and its defined practices prior to attesting to alignment with it.

NIST Secure Software Development Framework (SSDF) 

As discussed previously, the Cybersecurity Executive Order (EO) 14028 is having wide-reaching impacts across areas such as zero trust, cloud computing and, of course, software supply chain security.

As part of the Cybersecurity EO, the Government is required to “only purchase software that is developed securely.” The EO directed NIST to issue guidance to identify practices that enhance the security of the software supply chain. NIST did exactly that, when in collaboration with industry, it published the Secure Software Development Framework (SSDF) Version 1.1, along with other software supply chain security guidance. This section will discuss SSDF in depth, what it is, and why it matters.  

The SSDF points out that few software development lifecycle (SDLC) models explicitly address software security. A common phrase many in the industry are familiar with is “bolted on not baked in” when it comes to cybersecurity.

This represents the fact that cybersecurity is often an afterthought in developing digital systems and is often addressed later in the SDLC, rather than earlier, where security best practices and requirements can be integrated into software and systems from the outset. It is worth noting that SSDF Version 1.1, released in 2022, builds on the original SSDF version from April 2020. To facilitate the update, NIST held a workshop with participants from the public and private sectors and received over 150 position papers to be considered for the SSDF update.

The intended audience for the SSDF includes both software producers, such as product vendors, government software developers and internal development teams, as well as software acquirers or consumers.

While SSDF was specifically created for use by Federal agencies, the best practices and tasks it contains apply to software development teams across all industries and can be used by many diverse organizations. It is also worth noting that SSDF is not prescriptive, but descriptive. This means that it does not specifically say how to implement each practice and instead focuses on secure software outcomes and allows the organization to implement practices to facilitate those outcomes.

This is logical, given the infinite ways to secure software and the unique people, processes and technologies that make up every organization producing and consuming software. The guidance also makes it clear that factors such as an organization's risk tolerance should be considered when determining which practices to use and the resources to invest in achieving said practices.  

NIST has defined minimum recommendations for Federal agencies that are acquiring software or products containing software from producers and vendors. These recommendations include several key provisions to help ensure the government is not acquiring insecure software and products.

As agencies procure software, it is recommended they use SSDF terminology to organize their communications around secure software development requirements. NIST also recommends that vendors attest to SSDF development practices throughout their SDLC. An often-contentious topic is that of attestation, which is evidence or proof of something. Typically, attestation from a process perspective can be done firsthand, also known as self-attesting, or by an independent third party, such as a Third Party Assessment Organization (3PAO).

The use of a 3PAO adds to the assurance of the attestation, since it is made by a theoretically independent third party rather than the party being assessed. That said, 3PAO compliance regimes also come with additional overhead in terms of time and cost to accompany their potential increased rigor. For example, under FedRAMP, the authorization process for Cloud Service Providers (CSPs) looking to offer their services in the Federal market, offerings undergo a third-party assessment.

However, at the time of this writing, despite the program being in existence for 10 years, there are fewer than 300 FedRAMP authorized cloud service offerings in a market of tens of thousands. If a 3PAO approach were taken for software producers under the SSDF, it would undoubtedly have a similar impact, drastically limiting the pool of qualified vendors authorized to sell software to the government due to the cost and administrative overhead associated with 3PAO models.

That said, NIST has stated in its guidance that, depending on the risk tolerance of the agency and software consumer, a third-party attestation could be warranted in some situations, and this was further supported by OMB’s M-22-18 memo.

Critics have urged the government not to take this approach due to the impact it would have on the adoption and momentum of the SSDF, and have pointed to examples of delays in other similar programs, such as the DoD’s Cybersecurity Maturity Model Certification (CMMC), which has experienced several setbacks, some of which are related to the complexity of implementing a 3PAO process for a new compliance certification. That said, key government officials, including Brett Baker, Inspector General for the U.S. Archives and Records Administration, have stated “you can’t just trust vendors, we have to stop that”, echoing a call for third-party assessors and, ideally, objective machine-readable artifacts as well.

However the debate on self-attestation versus third-party assessment plays out, one thing is clear: software suppliers need to be very familiar with NIST's SSDF and ensure they’re implementing its practices in their software development lifecycle (SDLC), so let’s take a look.

SSDF Details

The NIST SSDF, as mentioned, is aimed at advocating for the use of fundamental and recognized secure software development best practices. One thing that makes the SSDF unique is that, rather than creating guidance entirely from scratch, it draws on many established, widely implemented sources of guidance, such as the Building Security In Maturity Model (BSIMM) by Synopsys and the Software Assurance Maturity Model (SAMM) from OWASP, among several others.

The SSDF’s robust set of secure software development practices is broken into four distinct groups: Prepare the Organization (PO), Protect the Software (PS), Produce Well-Secured Software (PW) and Respond to Vulnerabilities (RV). Each practice is defined by a set of elements, namely the Practice itself, its Tasks, Notional Implementation Examples and References, which map the practice to tasks and existing guidance.

As previously mentioned, the latest version of SSDF was created out of requirements from the Cybersecurity EO, so it also includes mapping to specific EO requirements, specifically in Section 4e. The desired goal of using the SSDF practices is to reduce the number of vulnerabilities included in the release of software and reduce the impact of those vulnerabilities being exploited if they are undetected or unmitigated.  
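To make that structure concrete, a single SSDF practice entry can be sketched as a small data structure. The group, practice and task identifiers below follow the SSDF's real naming scheme (e.g. PO.1.1), but the task text, notional example and reference mappings are paraphrased for illustration rather than quoted from the framework:

```python
# Minimal sketch of how one SSDF practice entry is structured.
# Identifiers follow the SSDF naming scheme; texts are paraphrased.
practice = {
    "group": "PO",  # Prepare the Organization
    "practice": "PO.1: Define Security Requirements for Software Development",
    "tasks": [
        {
            "id": "PO.1.1",
            "task": "Identify and document security requirements for the "
                    "organization's software development infrastructure.",
            "notional_examples": [
                "Maintain security requirements in a central policy repository.",
            ],
            "references": ["BSIMM", "OWASP SAMM"],  # illustrative mappings
        }
    ],
}

# Flatten each practice into (task id, task text) pairs, as one might do
# when building an internal attestation checklist.
checklist = [(t["id"], t["task"]) for t in practice["tasks"]]
print(checklist[0][0])  # PO.1.1
```

The actual SSDF tables carry the same four elements per practice, which is why a flat checklist like this falls out of them so naturally.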

In this article we will take a look at each of the groups of practices.

Prepare the Organization (PO) 

Preparing the organization for secure software development is a logical first step for any organization looking to develop secure software. Practices in this group include defining security requirements for software development. This includes requirements for the organization's software development infrastructure and security requirements that organization-developed software must meet. Of course, these requirements will need to be communicated to all third parties that provide commercial software components to the organization for reuse as well, which gets increasingly complicated when you consider third-party OSS components, which compose up to 80% of modern applications. Rather than those third parties being bound via contracts and other means, as a proprietary commercial vendor would be, it will fall on the software supplier to implement OSS governance and security practices to mitigate risks from the use of OSS components in their products.
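As a rough illustration of what such OSS governance can look like in practice, the sketch below gates third-party components on an organization-defined policy. The policy fields, package names and license choices are hypothetical examples, not recommendations:

```python
# Hypothetical OSS governance gate: check third-party components against an
# organization-defined policy before they are admitted into a build.
# All policy contents below are illustrative only.
POLICY = {
    "allowed_licenses": {"MIT", "Apache-2.0", "BSD-3-Clause"},
    "banned_components": {"event-stream"},  # e.g. packages with known hijacks
}

def evaluate_component(name: str, license_id: str) -> list:
    """Return a list of policy violations for a single OSS component."""
    violations = []
    if name in POLICY["banned_components"]:
        violations.append(f"{name}: component is on the organization's ban list")
    if license_id not in POLICY["allowed_licenses"]:
        violations.append(f"{name}: license {license_id} not on allow list")
    return violations

print(evaluate_component("left-pad", "WTFPL"))       # one license violation
print(evaluate_component("requests", "Apache-2.0"))  # [] -> passes the gate
```

A real gate would typically run in CI against a generated SBOM rather than against hand-entered names, but the policy-evaluation shape is the same.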

Defining Roles and Responsibilities is another fundamental step that organizations must take. This includes roles for all parts of the SDLC and providing appropriate training for the individuals in those roles. The guidance emphasizes the need to obtain upper management or authorizing officials' commitment to secure development and to ensure individuals involved in the process are aware of that commitment. This is often referred to as getting “executive buy-in".

Modern software delivery involves supporting toolchains that use automation to minimize the human toil associated with software development and lead to more consistent, accurate and reproducible outcomes. Tasks in this area involve specifying the tools and tool types that must be used to mitigate risks and how they integrate with one another. Organizations should also define recommended security practices for using the toolchains and ensure the tools are configured correctly to support secure software development practices.  

Organizations should also define and use criteria for software security checks. This includes implementing processes and tooling to safeguard information throughout the SDLC. Toolchains can be used to automatically inform security decision making and produce metrics around vulnerability management.  

Lastly, organizations should implement secure environments for software development. This typically manifests as creating different environments such as Development, Testing, Staging and Production. These environments are segmented to limit the blast radius of a compromise impacting other environments and allow for differing security requirements depending on the environment.  

These environments can be secured through methods such as MFA or conditional access control, least-permissive access control, and ensuring that all activities are logged and monitored across the various development environments to enable better detection, response, and recovery. Securing the environment also means hardening the endpoints that developers and others use to interact with the environments, to ensure they do not introduce risk, and implementing contextual access control that takes device posture into consideration in dynamic access decisions. You will notice several parallels between these recommendations and the current guidance and best practices for Zero Trust as well.

Protect the Software (PS) 

Moving on from preparing the organization, the next group focuses on protecting the software itself. Practices in this group involve protecting the code from unauthorized changes, verifying integrity, and protecting each software release.

Protecting all forms of code from unauthorized changes and tampering is critical to ensure the code is not modified, either intentionally or unintentionally, in a way that compromises its integrity. Code should be stored using methods that align with least-permissive access control based on its security requirements, which looks different for OSS code versus proprietary code. Organizations can take measures such as utilizing code repositories that support version control, commit signing and review by code owners and maintainers to prevent unauthorized changes and tampering. Code can also be signed to ensure its integrity, with methods such as cryptographic hashes. Code signing of course isn’t infallible and can be compromised itself, leading to signed code that is malicious but appears trustworthy.

Not only does the integrity of the code need to be maintained, but there must also be methods for software consumers to validate this integrity. This is where practices such as posting hashes on well-secured websites come into play. Code signing should be supported by trusted certificate authorities that software consumers can use as a measure of assurance or trust in the signature. There are also emerging efforts, such as Sigstore, which has been adopted by major OSS projects such as Kubernetes, that alleviate some of the administrative overhead traditionally associated with key management and signing.
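A minimal sketch of that consumer-side validation, using Python's standard hashlib: the artifact bytes and the published digest are inlined here for illustration, whereas in practice the digest would be retrieved from the producer's well-secured site:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the SHA-256 digest of a release artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Producer side: publish the digest alongside the release.
artifact = b"example release artifact v1.2.3"  # stand-in for real file bytes
published_digest = sha256_of(artifact)

# Consumer side: recompute the digest and compare before trusting the artifact.
if sha256_of(artifact) != published_digest:
    raise RuntimeError("integrity check failed: artifact may be tampered with")
print("integrity verified")
```

The comparison only provides assurance if the published digest is obtained over a channel the attacker cannot also tamper with, which is why the SSDF pairs hashing with well-secured distribution points and signing.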

Finally, each software release should be protected and preserved. This can be used to identify, analyze, and eliminate vulnerabilities tied to specific releases. This also facilitates the ability to roll-back in the case of compromised releases and restore to “known good” states of software and applications. Protecting and preserving software releases allows consumers to understand the provenance of code and the associated integrity of the code provenance.  

Produce Well-Secured Software (PW) 

Now that requirements have been codified and development environments and the endpoints that access them have been addressed, the organization can focus on producing well-secured software. This is not to say that each group does not occur concurrently throughout the life of an organization or program, but they do build upon one another while also warranting revisiting and revising, as necessary.  

You will note that in the Prepare the Organization section of the SSDF, security requirements were defined and documented. Now software must be designed to meet those security requirements. This is where organizations can use methods of risk modeling, such as threat modeling and attack surface mapping, to assess the security risk of the software being developed. Organizations can train development teams in methods such as threat modeling to facilitate empowered development teams capable of understanding the threats to the systems and software they develop and measures to reduce those risks. By using data classification methods, organizations can prioritize more rigorous assessments of high-sensitivity and elevated-risk areas for risk mitigation and remediation. Organizations should also review software design regularly to ensure that it meets the security and compliance requirements the organization has defined.

This includes not only internally developed software but also software being procured or consumed from third parties. Depending on the nature of the software being consumed, organizations may be able to work with software designers to correct failures to meet security requirements, but this does not apply in situations such as OSS, where there are no contracts or associated agreements such as Service Level Agreements (SLAs).

Organizations are encouraged to reuse existing, well-secured software rather than duplicating functionality. This reuse has a myriad of benefits, such as lowering the cost of development, speeding up capability delivery and reducing the potential of introducing new vulnerabilities into environments. It is not uncommon for large enterprise organizations to experience code sprawl, particularly in the era of “as-Code,” where infrastructure, and even security, can be defined as code in cloud-native environments. This as-Code approach supports concepts such as modularity, reuse, configuration-as-code and hardened code templates and manifests, which can be safely used elsewhere in organizations or even beyond. That said, as noted by the Palo Alto Unit 42 Threat Research Group, if these manifests and code templates include vulnerabilities, those vulnerabilities are now replicated at scale as well, so proper governance and security rigor is required.

Organizations, or even teams within organizations, that reuse existing software and code should ensure they review and evaluate that code for security and misconfiguration concerns, as well as understand the provenance information associated with it. A similar recommendation the SSDF makes is to create and maintain well-secured software components and repositories in-house for development reuse. This is similar to recommendations made in NIST SP 800-161 Rev. 1.

Organizations should ensure the source code they create aligns with secure coding practices adopted by the organization as well as those advocated by industry guidance. These include steps such as validating all inputs, avoiding unsafe functions and calls, and utilizing tools to identify vulnerabilities in the code.
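To illustrate the input-validation step, here is a minimal allow-list validation sketch in Python. The username field and its pattern are hypothetical; the point is that allow-listing what is acceptable is safer than trying to enumerate dangerous characters:

```python
import re

# Allow-list validation: accept only inputs matching an explicit pattern,
# rather than trying to deny-list dangerous characters after the fact.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,32}$")

def validate_username(value: str) -> str:
    """Return the username unchanged if it matches the allow-list, else raise."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

print(validate_username("dev_user-01"))        # passes
try:
    validate_username("robert'); DROP TABLE")  # rejected, not "sanitized"
except ValueError as exc:
    print(exc)
```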

Respond to Vulnerabilities (RV) 

While organizations may have defined security requirements, prepared their environments, and even strived to produce secure software, vulnerabilities will inevitably arise. This is due to the reality that identifying all possible vulnerabilities during development is impossible, and as time goes on, vulnerabilities will be discovered. There is a common phrase that “software ages like milk” due to the reality that the longer software has been around, the more likely it is that vulnerabilities will be discovered by researchers, malicious actors, or others.

Organizations should be working to both identify and confirm vulnerabilities on an ongoing basis. This includes monitoring vulnerability databases, utilizing threat intelligence feeds and automating the review of all software components to identify any new vulnerabilities. This is key since new vulnerabilities will inevitably emerge after the time code was initially scanned and examined. Organizations should also have policies around vulnerability disclosure and remediation and, as previously mentioned, define roles and responsibilities to address vulnerabilities as they emerge. This helps inform software consumers of vulnerabilities associated with code and products from software suppliers and allows them to mitigate those vulnerabilities before they can be identified and exploited by malicious actors.
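A toy, offline sketch of that kind of automated component review: a component inventory is matched against advisory data inlined below for illustration, whereas in practice the advisory data would come from a vulnerability database such as NVD or OSV.dev:

```python
# Offline sketch: match a component inventory against advisory data.
# The advisory entries below are illustrative; real data would be pulled
# from a vulnerability database such as NVD or OSV.dev.
ADVISORIES = {
    # (ecosystem, package): {vulnerable_version: advisory_id}
    ("maven", "org.apache.logging.log4j:log4j-core"): {
        "2.14.1": "CVE-2021-44228",
    },
}

def find_vulns(inventory):
    """Yield (package, version, advisory) for each vulnerable component."""
    for eco, pkg, version in inventory:
        advisory = ADVISORIES.get((eco, pkg), {}).get(version)
        if advisory:
            yield pkg, version, advisory

inventory = [
    ("maven", "org.apache.logging.log4j:log4j-core", "2.14.1"),
    ("maven", "com.fasterxml.jackson.core:jackson-databind", "2.15.0"),
]
for pkg, ver, cve in find_vulns(inventory):
    print(f"{pkg} {ver} is affected by {cve}")
```

Running a check like this on every build, rather than once at release, is what catches the Log4j-style case where a component was clean when first scanned and only later had a CVE published against it.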

Organizations will not only need methods to identify and confirm vulnerabilities; they will also need to remediate vulnerabilities in a manner that aligns with the risk those vulnerabilities pose. This means having a process to assess, prioritize and remediate software vulnerabilities. Using tools and governance, organizations can then make risk-informed decisions, such as remediating, accepting, or in some cases transferring the risk where possible. Traditionally, and still largely, vulnerabilities are prioritized based on metrics such as the Common Vulnerability Scoring System (CVSS), but we’re now seeing innovative methods such as the Exploit Prediction Scoring System (EPSS) emerge to augment, or in some cases take the place of, CVSS. CISA has also advocated for the use of the Stakeholder-Specific Vulnerability Categorization (SSVC) system, along with its Known Exploited Vulnerabilities (KEV) catalog, both of which offer opportunities to improve vulnerability management and prioritization.
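As a purely illustrative sketch of how these signals might be combined, the function below buckets a vulnerability using its CVSS base score, EPSS probability and KEV catalog membership. The thresholds and bucket names are invented for the example; real programs should tune prioritization to their own risk model:

```python
# Hypothetical prioritization sketch combining three public signals:
# CVSS base score (severity), EPSS probability (exploit likelihood), and
# presence on CISA's KEV catalog. Thresholds are invented for illustration.
def priority(cvss: float, epss: float, on_kev: bool) -> str:
    if on_kev:
        return "remediate-now"   # known exploited: highest urgency
    if cvss >= 7.0 and epss >= 0.1:
        return "remediate-soon"  # severe and plausibly exploitable
    if cvss >= 7.0:
        return "scheduled"       # severe but low current exploit likelihood
    return "backlog"

print(priority(cvss=9.8, epss=0.97, on_kev=True))   # remediate-now
print(priority(cvss=7.5, epss=0.02, on_kev=False))  # scheduled
```

The design choice worth noting is the ordering: KEV membership short-circuits everything else, reflecting CISA's position that known-exploited vulnerabilities deserve remediation regardless of score.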

Organizations that produce software also need established methods to develop and release security advisories that help software consumers understand the vulnerabilities in the software, the potential impact to them as consumers, and steps to resolve the vulnerability if possible. While traditional advisories occurred in static formats such as websites, emails and static documentation, the industry is increasingly shifting toward machine-readable advisories, such as CSAF and the Vulnerability Exploitability eXchange (VEX), the latter of which is supported by OWASP in its CycloneDX VEX BOM.
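For a flavor of what a machine-readable advisory looks like, the sketch below assembles a minimal CycloneDX-style VEX statement as a plain Python dict. The field names loosely follow the CycloneDX vulnerability schema, but this is a hand-written sketch, not a schema-validated document:

```python
import json

# Minimal CycloneDX-style VEX statement, assembled as a plain dict.
# A VEX statement tells consumers whether a product is actually affected
# by a given vulnerability, not just whether the component is present.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable JNDI lookup path is never invoked.",
            },
        }
    ],
}

print(json.dumps(vex, indent=2))
```

Because the statement is machine-readable, a consumer's tooling can automatically suppress the finding for this product while still flagging the same CVE elsewhere, which is exactly the noise reduction VEX is meant to provide.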

Lastly, organizations should take steps to identify the root causes of vulnerabilities through analysis. This helps reduce their frequency in the future by addressing the root cause rather than just an individual vulnerability.  

Conclusion 

As evident from the vast array of secure software development tasks and practices discussed above, no organization of significant size or scale will implement all of these practices perfectly immediately, if ever. That said, organizations can take steps to codify their secure software development practices by using the SSDF as a guide, helping ensure proper steps are taken to secure software throughout the SDLC.

While it remains to be seen what level of attestation to SSDF alignment, and what granularity, OMB and the Federal government will require, organizations are well-advised to start sooner rather than later, as this requirement is soon coming to the entire Federal software supplier ecosystem.

For more details, be sure to read SSDF Version 1.1 itself and look at the tables included, which cover Practices, Tasks for each Practice, Notional Implementation Examples and References to the various frameworks and guides the SSDF pulls from.