Software: Liability, Safe Harbor and National Security
A look at the evolving dialogue around software liability, safe harbor and the role of software in national security.
The industry conversation around software’s critical role in our modern society continues to heat up and evolve.
Just last week, CISA Director Jen Easterly testified at the Select Committee Hearing on China’s Cyber Threat to the U.S. that we’re seeing an increased focus from adversarial nation-states on our digital critical infrastructure (e.g., aviation, energy, water, transportation and more).
This builds on previous publications, such as the Cybersecurity Executive Order (EO) and the National Cybersecurity Strategy (NCS).
The latter has a core emphasis on “rebalancing the responsibility to defend cyberspace,” stating that the best-positioned actors (e.g., suppliers, vendors and manufacturers) must be better stewards of the digital ecosystem, rather than placing that burden on individuals, small businesses and state and local governments, which is largely the current paradigm.
Most controversially, Pillar 3 of the NCS, titled “Shape Market Forces to Drive Security and Resilience,” discusses placing responsibility on those best positioned to reduce risk (again, think technology suppliers and manufacturers).
Historically, technology suppliers have externalized the cost of security (or the lack thereof) onto customers and consumers (including government entities), driven by competing demands they prioritize over security, such as speed to market and profit.
However, the concept of Software Liability is a controversial one, as suppliers inevitably push back on the prospect of legal liability and the financial and legal ramifications that would accompany it.
There’s also the reality that a liability regime is a complex topic, due to some of the unique aspects of software compared to other domains. In this article, we will look at some of the leading thoughts on what a software liability and safe harbor regime could look like, and some of the challenges that will inevitably be encountered as well.
Software Liability - Cybersecurity Policy’s Third Rail
As Jim Dempsey, Lecturer at UC Berkeley Law School and Senior Policy Advisor at the Stanford Cyber Policy Center, has put it, software liability is generally considered the “third rail” of cybersecurity policy. Meaning, if you touch it, you die. Or in less violent terms, it is a heavily controversial topic with heated opinions, opposition and potential pitfalls.
Additionally, there have been a lot of questions about what exactly a software liability model would look like.
Luckily, Jim recently published a paper titled “Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor,” which is part of the Lawfare Security by Design series.
We will use this paper and its proposed ideas to orient the discussion around what a software liability and safe harbor regime could look like.
As Jim astutely points out, one critical aspect to get right in any software liability regime is total clarity around the standard of care, so that it isn’t subjective and opaque.
Jim proposes a three-part software liability approach, focused on real-world, routinely exploited software flaws coupled with existing technical standards for secure software development.
The three aspects are:
A “floor” or minimum standard of care for software
A standard based on the defect analysis common to products liability law, to account for the complexity and dynamic nature of software
A safe harbor that shields suppliers from liability for hard-to-detect flaws above the defined floor. Essentially protecting those who did their due diligence and properly invested in cybersecurity.
There are six sections in the paper, summarized below:
Section 1: Problem to be Solved
Jim points out that almost every software contract today includes language disclaiming the supplier’s responsibility for defects in its products. This is despite the reality that organizations and individuals incur billions of dollars in losses annually from software defects and data breaches.
The problem to be solved is defining a standard of care that can be codified via federal legislation and used in private litigation and, separately, by regulatory agencies and their enforcement mechanisms.
Section 2: Warranty vs. Negligence vs. Products Liability vs. Certification: The Question Is the Same
One challenge is differentiating software that is too insecure from software that is secure “enough” (i.e., eligible for safe harbor). The paper acknowledges that perfectly secure software isn’t realistic or practical, and that the intent isn’t to hold developers and suppliers liable for every single flaw. The intent is to incentivize developers to align with a secure software development framework and an agreed-upon standard of care.
Various approaches could be used, from warranties, to making disclaimers of warranties ineffective, to tort law, products liability law and certification or licensing. Each approach has its own unique nuances, considerations and challenges.
Regardless of approach, Jim points out that at the end of the day the question is “What is the standard of care?”, and whether it can be developed quickly and sufficiently enough to keep up with the iterative, dynamic world of software without stifling innovation, which in its own right has economic prosperity and national security implications.
Additionally, there is the challenge of “How much safety/security is enough?”
Answering these questions can be complex and would lead to many legal cases where courts try to adjudicate decisions around proper care, diligence, investment and rigor for the security of software. This of course will require expert testimony as well, given that most courts and their staff are not software experts, but the same applies to countless other domains where legal cases are heard and decided.
Jim demonstrates that a case-by-case evolutionary approach would be both too nuanced and too slow to deal with the current problem of insecure software across society, and points to others who argue that cybersecurity liability should be based on “some clear-cut do’s and don’ts to which liability should attach”.
Section 3: Existing Software Standards Focus on the Process, Not the Product
Another challenge Jim emphasizes is that current software standards focus on the process, not the product: the SDLC and the practices and processes followed, rather than the security of the actual end product.
He uses NIST’s Secure Software Development Framework (SSDF) as an example, which includes four practice groups with many defined practices, cross-mapped to industry secure software development frameworks such as OWASP SAMM and BSIMM.
The challenge Jim notes is that a supplier could follow defined practices and ultimately still produce an insecure or unsafe product.
Section 4: The Floor: Lessons From Other Fields
The point is made that other fields, such as building codes, have definitive standards for features, not processes. These include explicit requirements such as voltage specifications (buildings) or brake fluid temperature tolerances (vehicles).
Knowing how infinitely complex software can be compared to some of its physical counterparts, such as lawn mowers and automobiles (the latter of which are increasingly powered by software), Jim notes that the goal isn’t to address every possible flaw but instead to define fundamental features that are either must-haves or no-no’s from a security and safety perspective.
This list shouldn’t be static either, but dynamic and iterative, much like the world of software it would seek to govern. It is stressed that the benefit of focusing on the product rather than the process is that it is performance-based, with secure products as the outcome. It also keeps any liability standard from being too prescriptive and rigid, allowing organizations to take a variety of development approaches, as long as the end product aligns with the defined security features.
Section 5: The Software Liability Floor: Compiling Definitive Software Do’s and Don’ts
The paper points to examples such as CISA’s Secure-by-Design series of publications, which contain explicit requirements from which a defined standard of care could be derived, such as eliminating default passwords, creating secure configuration templates and providing logging “at no additional charge” (I’m not a fan of that last framing; there’s always a cost, and in this and many other cases we’re simply asking the supplier to absorb the cost of developing and providing the feature).
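To make one of those requirements concrete, here is a minimal sketch in Python, using a hypothetical credential path and startup check, of what “eliminating default passwords” could look like when treated as a verifiable product feature rather than a process step: the product refuses to run with a factory-default credential and generates a unique one on first boot.

```python
import secrets
import sys
from pathlib import Path

# Hypothetical credential location for this sketch; a real product
# would use its own store and keep a salted hash, not plaintext.
CREDENTIAL_FILE = Path("/etc/exampled/admin_password")

# Well-known factory defaults that must never be accepted.
FORBIDDEN_DEFAULTS = {"admin", "password", "12345", "changeme"}

def ensure_unique_admin_password() -> None:
    """Refuse to run with a missing or factory-default credential."""
    if not CREDENTIAL_FILE.exists():
        # First boot: generate a unique per-device password instead of
        # shipping every unit with the same well-known default.
        password = secrets.token_urlsafe(16)
        CREDENTIAL_FILE.parent.mkdir(parents=True, exist_ok=True)
        CREDENTIAL_FILE.write_text(password + "\n")
        print(f"First boot: unique admin password generated: {password}")
        return
    current = CREDENTIAL_FILE.read_text().strip()
    if current.lower() in FORBIDDEN_DEFAULTS:
        # Treat a default credential as a hard failure, not a warning.
        sys.exit("Refusing to start: factory-default admin password detected.")

if __name__ == "__main__":
    ensure_unique_admin_password()
```

The appeal of framing the requirement this way is that it is a testable property of the shipped product, not an attestation about how the product was built.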
Jim cites existing examples of product security feature guides from computer scientist Carl Landwehr, covering areas such as power system software and medical device software. An initial generalized standard could be used and then evolve into standards for specific sectors and device types over time.
Most notable among Jim’s cited examples are CISA’s annual publication of the most frequently exploited vulnerabilities and MITRE’s annual list of the twenty-five most dangerous software weaknesses (i.e., Common Weakness Enumeration (CWE) entries). Specific CWE examples are cited, such as CWE-22, path traversal flaws, which could be converted into product features to be avoided or added, depending on the CWE and context.
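To see how a CWE translates into a concrete, testable product defect, here is a minimal sketch, using a hypothetical file-serving function, of a classic CWE-22 path traversal flaw alongside the check a liability “floor” might require:

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/public")  # only files under here may be served

def read_file_vulnerable(filename: str) -> bytes:
    # CWE-22: user input is joined to the base path unchecked, so a
    # request for "../../etc/passwd" walks right out of BASE_DIR.
    return (BASE_DIR / filename).read_bytes()

def read_file_safe(filename: str) -> bytes:
    # The corresponding "required feature": resolve the final path and
    # reject anything that lands outside the permitted directory.
    target = (BASE_DIR / filename).resolve()
    if not target.is_relative_to(BASE_DIR.resolve()):
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```

The vulnerable variant is the kind of completely obvious instance where liability might plausibly attach; as discussed next, many real-world instances of the same CWE are far less clear-cut.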
However, the problem isn’t quite that simple, as pointed out by longtime application security (AppSec) leader Jeff Williams on LinkedIn. Jeff stated that “I don't think this is anywhere near workable. There are instances of *every* type of vulnerability that range from completely obvious to insanely complex. If it's an instance that's completely obvious, I can maybe imagine liability attaching. But if it's complex, I find it difficult to blame the developer or hold them liable.” Jeff goes on to point out example scenarios where the approach would quickly get convoluted.
As Jim himself points out, there are over 900 published CWEs on MITRE’s CWE List as of January 3, 2024. This makes it difficult to determine which CWEs, if any, would be valid grounds for holding developers/suppliers liable in every scenario.
Jim also cites the OWASP Top Ten as another potential example, which is ironic given that Jeff Williams helped create the original OWASP Top Ten list, and he himself struggles to see a world where suppliers could always be held liable for any specific weakness or flaw, given the complexity of software development and modern digital ecosystems.
Jim proposes an inverse of safe harbor that he dubbed “a zone of automatic liability”, citing practices that have widespread consensus as being dangerous.
One key point Jim makes that needs to be emphasized is that there should be liability for software suppliers/developers AND software users. This is absolutely critical given the undeniable reality that there is a substantial difference between something that is inherently insecure and dangerous and the misuse or unsafe use of something (think of a vehicle that is inherently flawed versus a driver being reckless and putting themselves and others at risk).
Lastly, Jim stresses that liability should only apply when a design flaw is actually exploited and causes actionable damage, rather than people scouring for flaws that caused no harm and seeking litigation and compensation. However, he points out that this, of course, would again require case-by-case analysis.
Section 6: For All Other Flaws: A Process-Based Safe Harbor
No conversation around software liability would be complete without an accompanying discussion of safe harbor: a pathway where suppliers have safeguards and protections from liability and litigation when they have done their due diligence and met a proper standard of care for the products they put into the ecosystem.
Ironically, the paper states that this is where process does make a difference: suppliers can avoid liability if they show that they adhered to defined safe harbor processes.
Jim’s stance is that existing examples such as NIST’s SSDF, the Microsoft Security Development Lifecycle (SDL) and others could be reviewed to derive a set of practices and processes on which a safe harbor provision for software suppliers could be established.
Of course, it would need to be decided which government entity or standards body defines the safe harbor. Jim points out this would require resisting inevitable industry pressure for a weak, broadly worded standard, and cites CISA as an agency that may be positioned, and have the relevant expertise, to serve in this capacity.
If not this, then what?
The paper closes by pointing out that the goal isn’t to have perfectly secure software but to compensate users of software for losses caused by unreasonably dangerous defects in software.
There are of course many unresolved questions around defining actionable damages, the role of users, their misconfigurations and misuse, and more. There is also the question of what happens to existing legacy systems and software governed under the current operating model, and how long they would remain immune.
However, it is clear that the current model largely shields software suppliers from liability and disproportionately disadvantages customers and consumers. As the National Cybersecurity Strategy calls for, this issue must be addressed by those best positioned to do so, namely software suppliers, but that requires shifting the balance of responsibility from customers to suppliers.
Despite software powering most of modern society, from consumer goods to critical infrastructure and national security, the world of software still lags fields such as medical devices, pharmaceuticals and automobiles when it comes to liability for insecure and unsafe products.
Just this week, we saw a report from the NSA, FBI and CISA stating that malicious actors from peer nations such as China have been burrowed into our critical infrastructure for up to five years, looking to potentially carry out disruptive and destructive attacks in the event of a future conflict.
Will the industry have the motivation, will and commitment to push forward on a software liability regime, and find a way to do so without stifling U.S. technological innovation, which is now inextricably tied to economic prosperity and national security, with cybersecurity a core domain of modern warfare?
Time will tell.