Resilient Cyber Newsletter #1
SEC vs. SolarWinds, Microsoft Visits Congress and Whistleblower Comes Forward, Apple AI Privacy Cloud Compute, DoD's NIPRGPT and Out of Control CVE Growth
Hi!
By now you’ve been following Resilient Cyber for some time.
I’ve been using this outlet to publish articles, podcasts, interviews and deep dives into various technical and cybersecurity topics, ranging from Cloud, DevSecOps and AppSec to Software Supply Chain and AI.
That said, I originally got started on my LinkedIn profile, where I have built a following of 60,000+ people by sharing cybersecurity content and resources daily for several years.
I’ve often been asked if I save or consolidate those resources anywhere, and historically I hadn’t - until now!
This will be the first issue of the Resilient Cyber Newsletter, where I will publish a weekly collection of resources from across the Cybersecurity, Software and Business domains.
So let’s dive in.
Cybersecurity Leadership
The SEC’s case against SolarWinds continues to get a lot of industry attention, being the first of its kind to assert fraud and internal control charges against a CISO, Timothy Brown. This Harvard Law article does a great job breaking down the nuances of the case as well as industry criticisms of the SEC’s actions against SolarWinds and its CISO. Many industry leaders have claimed the case will create further challenges for CISOs and perhaps even push talent away from the role.
As the saying goes, it is not a matter of if, but when, security incidents will impact your organization. It is often said organizations should train like they fight, yet when it comes to training through activities like tabletop exercises, most organizations don’t know where to begin.
Luckily, there are resources out there such as CISA’s Tabletop Exercise Packages (CTEPs), which are templatized packages aligned with various potential incidents and scenarios that enable teams to be well prepared and to test their policies, procedures, plans and capabilities.
I published a comprehensive article on CSO Online covering three scenarios:
Scenario #1 - Compromised open source software (OSS) packages
Scenario #2 - Ransomware attacks
Scenario #3 - Insider Threats
While this is just a subset of the CTEPs available, it lets organizations focus on some of the most pressing risks, including the exponential rise of software supply chain attacks and the proliferation of ransomware incidents.
Ironically, CISA facilitated the Government’s first AI Tabletop exercise and published a CTEP for it this week, so I will be covering that separately below.
I had a chance to sit down on the Resilient Cyber Podcast with Anduril CISO Joe McCaffrey and dive into all things DoD, Cyber and National security, including:
The background on Anduril and its role as a next-generation Defense contractor
Challenges for new entrants into the Defense space
Compliance requirements, complexity and toil
The role of software in future conflicts and the rise of Software-Defined Warfare
Strong and timely piece, published in the widely read non-tech outlet The Hill, discussing the role and impact of Microsoft on National Security. As many know, Microsoft has a unique relationship with the Government, receiving billions of dollars in Federal contracts a year for its products and software, while also being the source of incidents and data breaches that expose the government to risk.
This of course comes on the heels of the damning Cyber Safety Review Board (CSRB) brief, which showed how the Exchange Online incident impacted Federal agencies and national security and cited systemic security deficiencies at Microsoft and a lack of a security culture.
Microsoft has since responded with efforts like their Secure Future Initiative and an internal memo emphasizing the importance of security to their organization and customers, even promising to tie executive compensation to security outcomes.
As it stands right now, Microsoft is in a paradoxical position: tens of billions in cybersecurity revenue while also holding the top spot in the CISA Known Exploited Vulnerabilities (KEV) catalog, alongside multiple visible and impactful incidents which have harmed the Government and, as the article points out, potentially U.S. national security interests.
The scrutiny of Microsoft has come from both sides of the aisle and even included a committee hearing embarrassingly titled “A Cascade of Security Failures: Assessing Microsoft Corporation’s Cybersecurity Shortfalls and the Implications for Homeland Security”.
Ironically enough, Microsoft was recently a signatory to the CISA Secure-by-Design Pledge, which is voluntary and which many consider a virtue signal, and then turned around and released Recall, which was absolutely slammed by the industry for potentially violating customers’ and users’ privacy and security.
In what can only be described as damning whistleblower testimony on top of everything discussed above, a comprehensive exposé from ProPublica lays out a very concerning allegation that Microsoft ignored concerns around identity risks and vulnerabilities, which were ultimately used to help carry out attacks that impacted both Government and commercial customers.
It comes from former Microsoft employee Andrew Harris and lays out how Microsoft refused to fix vulnerabilities he brought to leadership and others at the company, in part due to concerns that doing so would hinder Microsoft’s chances of winning a spot on the coveted DoD JWCC cloud contract, valued in the billions.
Fittingly, Microsoft’s President, Brad Smith, testified in front of members of Congress yesterday on Microsoft’s cybersecurity failures. The full testimony can be seen below.
During the testimony, Brad Smith stated that Microsoft “accepts responsibility” for each and every finding in the CSRB report and has already begun acting on a majority of the report’s recommendations.
The testimony received mixed feedback, with some applauding Microsoft for accepting responsibility, while others cited Microsoft’s ties and presence in China as a challenge and risk, and pointed out that the Government is paying Microsoft to find culprits, pushing back on Smith’s claim that no one entity in the ecosystem can see everything.
Thought-provoking piece from Venture in Security, discussing the fact that the cybersecurity industry is a market for silver bullets, meaning neither buyers nor sellers have sufficient information about the goods sold. This is an unfortunate reality of our field: product vendors and consumers, including CISOs and security teams, can’t guarantee that they know more than attackers or that the products they buy will be effective at stopping attacks.
The article demonstrates how, in security, buyers use various signals as proxies for informing purchasing decisions, such as the background of founders, mentions by analyst firms, peer feedback and marketing hype, rather than an actual deep understanding of the effectiveness of what they’re buying.
Artificial Intelligence
Insightful share from the OpenAI team, explaining their architecture that supports the secure training of frontier AI models. It includes a deep dive into:
Threat Model
Architecture
Protecting Model Weights
Auditing and Testing
Research and Development on Future Controls
It discusses some of the unique security challenges when it comes to infrastructure and security for advanced AI use cases.
The hype around GPT and AI’s ability to autonomously exploit vulnerabilities continues to grow, making it hard to tell fact from fiction. This continues with an article in The Register titled “GPT-4 can exploit real vulnerabilities by reading advisories”.
The original research was in a paper titled “LLM Agents can Autonomously Exploit One-Day Vulnerabilities”.
However, after some digging and dialogue with others such as Oliver Rochford, it was determined to be a poorly reported piece with a misleading title.
In a comprehensive debunking piece titled “No, LLM Agents Cannot Autonomously Exploit One-Day Vulnerabilities”, a researcher points out that no new vulnerabilities were discovered and that the research was likely done using readily available public exploits to demonstrate the vulnerabilities.
This means that while the FUD machine churns out visions of AI crushing society, including our digital systems, we may still need to wait a while for reality to match the hype.
Former Commander of U.S. Cyber Command and Director of the NSA, retired General Nakasone, will serve on OpenAI’s Board of Directors as a member of its Safety and Security Committee. This is interesting because it brings someone with deep Public Sector/DoD/Intel experience onto the OpenAI BoD, and it may also be considered timely given recent departures from OpenAI by team members who had concerns that AI Safety and Security were losing out to competing priorities, such as product, capabilities and markets.
Apple unveiled their “Private Cloud Compute” which is designed for processing AI tasks in a privacy-preserving manner in the cloud. It was announced and rolled out alongside Apple’s next generation of software, including iOS 18, iPadOS 18 and macOS Sequoia.
Apple set out to build Private Cloud Compute with a set of core requirements:
Stateless computation on personal user data
Enforceable guarantees
No privileged runtime access
Non-targetability
Verifiable transparency
You can read much more about it in their announcement page, where they go into great detail describing the architecture and functionality and how privacy is a core part of their approach. This comes at a time when many are raising privacy concerns around AI, and it speaks to Apple’s emphasis on privacy as a differentiator from some competitors.
There is a great conversation on the announcement in the widely popular RiskyBiz podcast, that even includes Rob Joyce, who recently retired as the Cybersecurity Director for the NSA.
Steve Wilson, lead for the OWASP Top 10 for LLMs and author of “The Developer’s Playbook for LLM Security”, published a LinkedIn article titled “Apple’s Bold AI Move: An LLM Security Perspective”, discussing both his excitement about Apple’s announcement and his concerns around its key risks.
On a similar topic, I have previously published several articles on AI Governance and Security.
I also had a chance to present at the Techstrong Virtual Summit this past week, with a session titled “Vulnerability Management in the Age of AI and OSS”.
The DAF, in collaboration with the DAF Chief Information Officer and the Air Force Research Laboratory (AFRL), announced the release of “NIPRGPT”, aimed at accelerating efforts to give service members access to responsibly experiment with GenAI, with safeguards in place.
In her comments, USAF CIO Venice Goodwine emphasized the need for the workforce to develop skills with GenAI technologies, and noted it will come at no additional cost to units or users.
NIPRGPT is framed as an “AI chatbot that allows users to have human-like conversations to complete various tasks”.
This is a much-welcomed advancement in the DoD and USAF’s approach to embracing emerging technologies, especially for those who remember Fall 2023, when U.S. Space Force Guardians (the name for Space Force service members) were placed under a temporary ban on using GenAI and LLM tools for official purposes.
I personally applaud this effort and think it is critical to get emerging technologies and tools into the workforce’s hands, to ensure the DoD isn’t a technological laggard, and to experiment to understand the safe and secure use of GenAI technologies. Banning, as we all know, leads to shadow usage and hinders the workforce from developing competency with technologies that are banned or not allowed to be used in an “authorized” context.
CISA, along with partners on both the Government and Industry sides, conducted the federal government’s inaugural tabletop exercise focused on effective and coordinated responses to AI security incidents.
They produced an excellent AI Tabletop Exercise Package (TEP) which can be used by both Government and Commercial organizations to help them be prepared for when, not if, an incident involving AI occurs.
It involves hypothetical scenarios such as:
Phishing against AI Engineers and AI DevSecOps Teams
Social media activity by attackers claiming to have impacted your organization
Poisoned Training Datasets
And more.
This is an excellent resource for the community and huge praise for CISA for helping lead from the front here!
I had a chance this week to listen to this episode of the a16z podcast featuring Marc Andreessen and Ben Horowitz. They dive into the state of AI, covering how startups can compete with big tech’s compute and data scale advantages, why data is overrated as a sellable asset, and how the AI boom compares to previous technological waves.
It was an interesting conversation with a lot of parallels to past tech waves but also some nuances related to AI.
Application Security & Software Supply Chain
Resourcely Founder Travis McPeak has released a three-part series (1, 2, and 3) discussing why DevSecOps is broken. He covers the evolution of DevOps to DevSecOps, the fact that security is a laggard, and the need to transition to Secure-by-Design and Secure Defaults - this of course is a message that aligns with what CISA is championing in their Secure-by-Design publication series.
There of course has been a massive focus in the industry on topics such as software supply chain security and Secure-by-Design, the latter of which has been championed by CISA most notably.
Google previously released an excellent whitepaper titled “Secure-by-Design at Google” by Christoph Kern. Building on that, Christoph recently published an excellent paper titled “Developer Ecosystems for Software Safety” in the ACM Digital Library.
It focuses on continuous assurance at scale and discusses how, despite all the efforts by security practitioners and the industry as a whole, common software weaknesses (e.g., those catalogued in the Common Weakness Enumeration (CWE)) continue to be prevalent year over year (YoY).
The same types of weaknesses and vulnerabilities appear on lists like the CWE Top 25 Most Dangerous Software Weaknesses and the OWASP Top 10 year after year, but why?
The paper argues that a big part of the underlying problem is the role of the developer ecosystem and emphasizes the key point that:
“In short, the safety and security posture of a software application or service is substantially an emergent property of the developer ecosystem that produced it”
I think few seasoned security practitioners would argue with this perspective and we know it ties closely to studies and research emphasizing the importance of the Developer Experience (DevEx) as well.
I had a chance to interview Christoph on the Resilient Cyber show, which can be listened to below and on any podcast outlet you prefer.
This is an excellent piece from longtime open source and software security expert Josh Bressers.
Josh discusses critical topics such as the exponential growth of vulnerabilities (e.g., Common Vulnerabilities and Exposures (CVEs)) and the struggles of the NVD, which has been a hot topic after it quit enriching and analyzing vulnerabilities in February and only recently began again.
Josh also talks about an unsustainable future, pointing to the outright ridiculous size of the overall open source ecosystem and the reality that over 100 million packages have been released over the last 15 years, against only 250,000 CVEs in the entire history of the NVD.
Safe to say there are many, many more vulnerabilities that remain undiscovered, and the NVD of course represents only a subset of the overall vulnerability database ecosystem, with other key players such as the GitHub Advisory Database and OSV.
So even if we get better at discovering and documenting vulnerabilities, that just means more work for organizations to identify, triage, prioritize and remediate them. We know from studies that organizations already have vulnerability backlogs in the hundreds of thousands to millions, so imagine if we identified even more for them to wrestle with.
Josh closes by emphasizing the need to cooperate better around vulnerability data and to stop the insane push for “zero CVEs”. Remember, we’re in the business of risk management, not risk elimination.
Closing Thoughts
Thanks again for checking out the first edition of the Resilient Cyber Newsletter. I aim to publish these regularly, aggregating resources for the community, and I hope folks find them valuable.
If you haven’t subscribed already, please do so below, and also be sure to pass along a link to friends for them to check out and do the same!
Chris