CISA and FBI Release Updated Guidance on Product ...
TL;DR
- The article covers the latest joint guidance from CISA and the FBI regarding product security bad practices. It breaks down critical categories like product properties and security features while explaining how software manufacturers can avoid risky behaviors. You'll learn about memory safety roadmaps, MFA requirements for OT, and why transparency in CVE reporting is now non-negotiable for critical infrastructure providers.
The shift from reactive to proactive security
Ever wonder why we're still seeing the same basic security blunders in 2024? It feels like we are stuck in a loop of "patch and pray" while the bad guys just walk through the front door.
Honestly, the FBI and CISA are getting pretty fed up with software that's broken by design. They just dropped some heavy guidance because they want manufacturers to stop treating security like a premium add-on. It's about moving away from reactive firefighting and actually building things right from the jump.
The new focus is all about Secure by Design. Instead of customers bearing the burden of fixing a vendor's mess, the responsibility is shifting back to the people writing the code. This philosophy means security isn't a feature you toggle on; it's the foundation of the whole product.
I've seen so many dev teams treat security as the "final check" before shipping. That's how you end up with massive holes in a hospital's database system. According to Industrial Cyber, the government is now targeting specific "bad practices" because they are the most exploited paths into our critical systems.
It's a tough pill to swallow for some, but the days of "move fast and break things" are over when it comes to the software running our power grids or banks.
Breaking down the product properties category
It's wild that we're still talking about buffer overflows and SQL injection like it's 1999, but here we are. The FBI and CISA are basically drawing a line in the sand. While there are 13 specific "bad practices" in the full CISA list, a few of them are absolute deal-breakers for anyone building critical infrastructure.
First, let's talk about Memory Safety. If you're starting a new project in C or C++ for a power grid or a hospital system, you're just asking for trouble. These languages let devs make manual memory mistakes that lead to crashes or exploits. Memory-safe languages like Rust or Go are right there. According to NASCUS, manufacturers need a "memory safety roadmap" by the start of 2026 showing how they'll migrate away from these risky languages.
Then there is the "input" problem.
- SQL Injection (SQLi): This happens when you don't use parameterized queries. If user input can reach your database as raw SQL, an attacker can type a command into a login box and dump everything.
- OS Command Injection: This is even worse: a web interface lets someone run a shell command on the actual server through an unsanitized input field.
- Default Passwords: Shipping anything with "admin/1234" is basically a crime at this point. Force a unique password on day one or use physical setup keys.
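To make the SQLi point concrete, here's a minimal Python sketch using the standard-library sqlite3 module. The table and data are invented for illustration; the point is the contrast between string-built SQL and a parameterized query:

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# BAD: user input is concatenated straight into the SQL string,
# so the injected OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE password = '" + malicious + "'"
).fetchall()

# GOOD: the ? placeholder sends the input as data, never as SQL,
# so the injection string is treated as a literal and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- the injection "logs in"
print(safe)    # [] -- input treated as a plain string
```

Same query, one character of difference in discipline, and one of them hands an attacker the whole table.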
As noted earlier, these aren't just suggestions; they're the "bad practices" that make our banks and water systems sitting ducks.
Security features and organizational transparency
It’s honestly kind of wild that in 2024 we still have to tell companies that "admin123" isn't a security plan. But here we are, and the FBI and CISA aren't playing around anymore regarding how we handle access and logs.
The new guidance is clear: if your app handles logins, it better support phishing-resistant MFA right out of the box. We aren't talking about those annoying SMS codes; think FIDO2 or passkeys.
- Admin first: By January 2026, MFA should be the default for all admin accounts. No excuses.
- OT is different: In healthcare, you can't have a surgeon fumbling with a token during an emergency. For medical devices, manufacturers need a solid threat model that explains how they stop credential abuse without killing the patient.
- Log everything: If you're running a SaaS, you need to keep six months of logs for free. Customers shouldn't have to pay a "security tax" just to see who broke into their system.
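The "admin first" rule is easy to turn into an automated check. Here's a minimal Python sketch, assuming a hypothetical account record shape (the field names and MFA labels are invented for illustration), that flags admin accounts without phishing-resistant MFA:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical account model; field names are invented for this sketch.
@dataclass
class Account:
    username: str
    is_admin: bool
    mfa_method: Optional[str]  # e.g. "fido2", "passkey", "sms", or None

PHISHING_RESISTANT = {"fido2", "passkey"}

def non_compliant_admins(accounts):
    """Flag admin accounts lacking phishing-resistant MFA."""
    return [
        a.username
        for a in accounts
        if a.is_admin and a.mfa_method not in PHISHING_RESISTANT
    ]

accounts = [
    Account("root", True, None),         # no MFA at all
    Account("ops-admin", True, "sms"),   # SMS is not phishing-resistant
    Account("ops-lead", True, "fido2"),  # compliant
    Account("reader", False, None),      # non-admin, out of scope here
]
print(non_compliant_admins(accounts))  # ['root', 'ops-admin']
```

Run something like this in CI against your identity provider's export and the January 2026 deadline stops being a surprise.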
Look, manually checking every CISA requirement is a nightmare for dev teams. That’s where tools like AppAxon come in to save your sanity. It uses AI to run autonomous threat modeling, catching those "bad practices" while you're still writing the code. Instead of waiting for a yearly audit, you can use AI-powered red-teaming to poke holes in your logic during every sprint.
Next up, we’ll dive into why being transparent about your bugs is actually good for business.
Implementation and transparency
So, what do we actually do with all these new rules? It's one thing to read a PDF from the FBI, but it's another to actually change how your dev team works on a Tuesday morning.
First off, you gotta be honest about your mess. If you don't have a vulnerability disclosure policy (VDP) yet, you're basically hiding from the people trying to help you. This policy needs to protect "good-faith" hackers so they don't get sued for finding a bug in your retail app or healthcare portal.
- CWEs are non-negotiable: When you file a CVE (a public notice of a bug), you have to include the CWE field. A CWE (Common Weakness Enumeration) is just a category that explains why the bug happened, like "improper input validation," instead of just saying "there is a bug."
- The 30-day clock: If a bug hits the CISA KEV (Known Exploited Vulnerabilities) catalog and it affects your product, you’ve got 30 days to get a patch out. No more sitting on it for six months.
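That 30-day clock is easy to automate. Here's a minimal Python sketch; the record shape below is simplified for illustration (the real KEV catalog is published as a JSON feed on cisa.gov with fields along these lines), and it just maps each affected CVE to its patch-by date:

```python
from datetime import date, timedelta

# Simplified KEV-style records for illustration.
kev_entries = [
    {"cveID": "CVE-2024-0001", "dateAdded": "2024-06-01"},
    {"cveID": "CVE-2024-0002", "dateAdded": "2024-07-15"},
]
# CVEs known to affect our product (hypothetical).
our_affected_cves = {"CVE-2024-0002"}

def patch_deadlines(entries, affected, window_days=30):
    """Map each affected CVE to its deadline: KEV listing date + 30 days."""
    deadlines = {}
    for e in entries:
        if e["cveID"] in affected:
            added = date.fromisoformat(e["dateAdded"])
            deadlines[e["cveID"]] = added + timedelta(days=window_days)
    return deadlines

print(patch_deadlines(kev_entries, our_affected_cves))
# {'CVE-2024-0002': datetime.date(2024, 8, 14)}
```

Wire this up to a daily pull of the catalog and the clock starts itself instead of relying on someone noticing a headline.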
You should probably sit down and update your threat models to specifically hunt for those 13 bad practices (which include things like using end-of-life software or failing to use MFA). Use some AI tools to scan for the "dumb" stuff: hardcoded secrets in your git history or old-school TLS 1.0 that should've died a decade ago.
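Even without an AI tool, most of the "dumb" stuff is greppable. A minimal Python sketch (the regex patterns here are illustrative, not exhaustive; dedicated scanners like gitleaks ship far broader rule sets):

```python
import re

# Illustrative patterns only; production scanners use much larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    "legacy_tls": re.compile(r"TLSv1\.0|SSLv3"),
}

def scan(text):
    """Return the names of every pattern that matches the given text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

snippet = '''
password = "hunter2"
ssl_protocols TLSv1.0;
'''
print(scan(snippet))  # ['hardcoded_password', 'legacy_tls']
```

Point something like this at your config files and git history as a pre-commit hook, and two of the bad practices never make it past the laptop.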
Also, be straight with your B2B customers about how long you'll actually support a product. If you're selling a smart sensor to a water plant, tell them exactly when the security updates stop. Honestly, just being transparent is half the battle in keeping our critical systems from faceplanting.