On January 1, 2026, a deadline quietly came and went. CISA and the FBI had asked every software manufacturer building products for critical infrastructure to publish a memory safety roadmap by that date. The headlines screamed "Feds demand industry drop C/C++." The reality? Far more nuanced—and far more actionable than most people realize.
I've been working with C and C++ codebases since the late 1980s. I've watched every wave of "C++ is dead" predictions come and go—Java was supposed to kill it, then Go, now Rust. And every time, the systems I maintain are still running. Still processing transactions. Still keeping infrastructure alive. So when the CISA guidance landed, I didn't panic. I read it carefully. And what I found was surprisingly sensible, even if the media coverage was anything but.
Let's cut through the noise and talk about what the deadline actually meant, what happened when it passed, and what your enterprise should be doing right now.
What CISA Actually Asked For
The CISA/FBI "Product Security Bad Practices" guidance, finalized in October 2024, identified a list of practices considered dangerous for software manufacturers whose products support critical infrastructure. Among them: developing new product lines in memory-unsafe languages like C and C++ when memory-safe alternatives are available.
But here's the part that most headlines buried: for existing products already written in C/C++, CISA didn't demand a rewrite. They asked for a roadmap—a published plan describing how the manufacturer intends to address memory safety vulnerabilities in priority code components. Network-facing code. Cryptographic operations. The attack surface, not the entire codebase.
The Headline vs. The Reality
What the media said: "Feds demand companies drop C/C++ by 2026."
What CISA actually said: Software manufacturers should publish a memory safety roadmap by January 1, 2026, outlining how they will eliminate memory safety vulnerabilities in priority code components—using memory-safe languages or hardware capabilities that prevent memory safety vulnerabilities.
Notice that word: or. CISA explicitly acknowledged that there are multiple paths to memory safety. Rewriting everything in Rust is one option. Using compiler hardening, static analysis, sandboxing, and hardware mitigations is another. The guidance isn't dogmatic—it's pragmatic. Which, honestly, surprised me.
The Numbers That Drove the Deadline
To understand why CISA pushed this, you need to understand the scale of the problem. According to Microsoft's Security Response Center, approximately 70% of the vulnerabilities it assigns a CVE to each year are memory safety issues—buffer overflows, use-after-free, null pointer dereferences. Google's Chromium project reported nearly identical numbers: about 70% of serious security bugs trace back to memory safety.
These aren't obscure academic findings. They represent real-world exploits that cost enterprises millions in breach response, regulatory fines, and reputational damage. When seven out of ten security vulnerabilities in the world's most widely deployed software stem from the same root cause, regulators notice.
And here's where it gets personal for enterprises running legacy C/C++: those Microsoft and Google statistics come from actively maintained codebases with world-class security teams. Imagine the state of a C++ system written in 2003 that hasn't had a security audit in a decade. That's the codebase CISA is worried about. That's probably a codebase you're running right now.
What Actually Happened After January 1st
The honest answer: not much changed overnight. The guidance is technically voluntary. There's no enforcement mechanism, no fines, no compliance audit team knocking on doors. Some large software manufacturers—particularly those already engaged with CISA's Secure by Design pledge—published roadmaps. Many others didn't.
But "voluntary" is a misleading word in the government procurement context. Federal agencies increasingly reference these guidelines in their purchasing decisions. If you sell software to the U.S. government—or to organizations that sell to the U.S. government—lacking a memory safety roadmap is starting to look like a competitive disadvantage. It's the same trajectory we've seen with SOC 2 compliance: technically optional, practically mandatory.
For Canadian enterprises, there's an additional consideration. The Canadian Centre for Cyber Security closely tracks CISA guidance and often mirrors it. What starts as a U.S. recommendation has a habit of becoming a Canadian expectation within 12 to 18 months.
Why "Just Rewrite It in Rust" Isn't the Answer
Every time the CISA deadline comes up in conversation, someone inevitably suggests the obvious: rewrite everything in Rust. It's memory-safe by design. Problem solved, right?
I've been in this industry long enough to have heard the same argument for every new language that came along. "Just rewrite it in Java." "Just rewrite it in Go." The technology changes; the naivety of the suggestion doesn't.
The Rewrite Reality
According to Gartner, 70% of rip-and-replace IT projects fail or exceed budget. A full language migration of a mature C/C++ codebase isn't a technical project—it's a business risk event. You're not just translating syntax. You're re-implementing decades of embedded business logic, edge-case handling, and undocumented behaviors that only exist in the minds of developers who may have retired years ago.
Consider what a "rewrite" actually involves for a real enterprise system. That C++ trading engine processing millions of transactions per day? Every single edge case, every performance optimization, every workaround for a counterparty's quirky message format—all of it has to be perfectly replicated. Miss one edge case, and you're looking at a production incident measured in dollars per second.
Even Meta—with effectively unlimited engineering resources—took years to begin migrating select components from C++ to Rust, and they're doing it incrementally, module by module, with extensive parallel testing. If Meta can't do a big-bang rewrite, your enterprise probably shouldn't try either.
What Enterprises Should Actually Do
Here's the pragmatic playbook. Not the conference-keynote version. The one that actually works when you're dealing with a 20-year-old codebase, a limited budget, and a board that wants to see results before 2030.
Step 1: Know What You Have
You can't secure what you don't understand. Conduct a comprehensive inventory of your C/C++ codebases. Identify which components are network-facing, which handle cryptographic operations, and which process untrusted input. This is your attack surface—and it's where CISA wants you to focus.
Step 2: Deploy Static Analysis Now
Tools like Coverity, PVS-Studio, and clang-tidy can identify memory safety vulnerabilities in existing C/C++ code without any rewrite. These aren't theoretical—they catch real bugs. Enable compiler warnings, turn on address sanitizers in testing, and make static analysis part of your CI pipeline today.
Step 3: Harden at the Compiler Level
Modern compilers offer powerful mitigations: stack protectors (-fstack-protector-strong), Address Space Layout Randomization (ASLR, enabled by building position-independent executables), Control Flow Integrity (CFI), and shadow stacks. These don't eliminate memory safety bugs, but they make exploitation significantly harder. Many enterprises have never enabled them.
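As a starting point, here's a hedged sketch of a typical hardened build line for GCC or Clang. Treat it as a checklist, not a drop-in recipe—flag availability varies by compiler version and target platform, and `server.cpp` is a placeholder:

```shell
# Hedged sketch: common hardening flags for release builds (GCC/Clang).
# Verify each flag against your toolchain version before adopting it.
CXXFLAGS="-O2 \
  -fstack-protector-strong \
  -D_FORTIFY_SOURCE=2 \
  -fstack-clash-protection \
  -fPIE -pie \
  -Wl,-z,relro,-z,now"
# Clang can additionally enable Control Flow Integrity (requires LTO):
#   -flto -fsanitize=cfi -fvisibility=hidden
g++ $CXXFLAGS -o server server.cpp
```

Note that `-D_FORTIFY_SOURCE` only takes effect with optimization enabled, and `-fPIE -pie` is what actually lets the OS apply ASLR to your binary.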
Step 4: Sandbox the Riskiest Components
Network-facing C/C++ code that parses untrusted input is your highest-risk surface. Isolate these components using process sandboxing, containerization, or microservice boundaries. Even if the code has a vulnerability, the blast radius is contained.
Step 5: Selectively Rewrite—Strategically
If you're going to migrate code to a memory-safe language, start with the modules that parse external input: network protocol handlers, file format parsers, API endpoints. These are the components most likely to contain exploitable bugs. Leave the core business logic alone until the perimeter is hardened.
Step 6: Publish Your Roadmap
Even if you're not a software manufacturer in the CISA sense, documenting your memory safety strategy has internal value. It forces prioritization, creates accountability, and gives your security team a framework for resource allocation. Treat it like a living document, not a compliance checkbox.
The Approach We've Taken for 35 Years
At BJPR, we work inside C/C++ codebases every day. We maintain systems that were written when "memory safety" wasn't even a term—systems that nonetheless process critical transactions for enterprises across North America. We're not theoretical about this.
Our approach has always been incremental hardening rather than panic migration. It's the same philosophy we apply to every legacy system we touch: understand it deeply, stabilize what's fragile, modernize what's exposed, and preserve what works.
For memory safety specifically, that means:
- Auditing first. We map every memory allocation pattern, every buffer operation, every pointer arithmetic path. We know where the bodies are buried before we start digging.
- Hardening the perimeter. Network-facing code gets the most attention. We add bounds checking, deploy sanitizers, and isolate parsing logic.
- Building adapter layers. Instead of rewriting legacy components, we wrap them in memory-safe interfaces. The old code still runs—but it's no longer directly exposed to untrusted input.
- Gradual modernization. When components do need to be rewritten, we do it one module at a time with extensive parallel testing. No big-bang deployments. No extended downtime.
This is essentially what CISA is asking for, even if they don't describe it in these terms. A roadmap. A prioritized approach. An acknowledgment that memory safety matters, paired with a realistic plan to get there.
The Bigger Picture: Security Isn't Just a Language Choice
Here's something the "rewrite in Rust" crowd often misses: memory safety is necessary but not sufficient. A perfectly memory-safe application can still have SQL injection vulnerabilities, broken authentication, insecure API designs, and logic flaws that leak sensitive data.
The organizations that get security right don't obsess over a single category of vulnerability. They build defense in depth: secure architecture, regular auditing, incident response plans, and yes—memory safety hardening as one layer among many. As we've discussed in our analysis of why AI projects fail, the foundation matters more than any single technology choice.
"The most dangerous response to the CISA deadline isn't ignoring it—it's panic-rewriting critical code without understanding why it works."
If you're an enterprise running C/C++ systems, the CISA deadline isn't a reason to panic. It's a reason to get organized. Map your attack surface. Deploy the mitigations that are available today. Build a roadmap that's realistic, not performative. And partner with people who understand both the code you have and the threats you face.
C and C++ aren't going anywhere. The question is whether you'll manage their risks proactively—or wait for an incident to force the conversation.
Need a Memory Safety Strategy?
We audit, harden, and modernize C/C++ systems without the risks of rip-and-replace. Let's build your roadmap together.
Talk to an Expert