According to Utility Dive, the National Institute of Standards and Technology (NIST) has released a draft “Cybersecurity Framework Profile for Artificial Intelligence.” This companion document to the widely used NIST Cybersecurity Framework (CSF) focuses on three areas: securing AI systems, using AI to improve cyber defenses, and thwarting AI-powered attacks. The draft was created with input from over 6,500 people and is now open for public comment until January 30, with a virtual workshop scheduled for January 14. This is NIST’s latest move in AI guidance, following its 2023 AI Risk Management Framework and a 2024 generative AI profile. The effort stems from directives by both President Joe Biden and President Donald Trump, who have tasked NIST with developing AI security standards and providing evaluation support to other agencies.
Why this matters now
Look, every organization is being told to “adopt AI” like it’s a magic button you press for growth. But here’s the thing: most existing security frameworks weren’t built with AI’s weird, data-hungry, and often opaque nature in mind. You can’t just slap your old firewall rules on a large language model and call it a day. This NIST profile is basically an attempt to translate a proven security playbook—the CSF—into a language that makes sense for AI. It’s saying, “Okay, for this classic security activity, here’s what it looks like when AI is in the picture.” That’s huge for giving security teams and risk officers a fighting chance.
The three-front war
I think the most useful part is how NIST breaks it down into “Secure, Defend, Thwart.” It acknowledges that AI isn’t just one thing. First, you have to secure the AI system itself: think poisoned training data, model theft, or adversarial inputs that trick it into bad outputs. Second, you can use AI to defend your broader network, with better anomaly detection, automated threat hunting, you name it. And third, you have to thwart attackers who are using AI as a weapon against you. That’s the scary part: hyper-realistic phishing, automated vulnerability discovery, and more. As Barbara Cuthill, an author of the profile, put it, every org will eventually face all three. You can’t just pick one.
Stakeholder impact and what’s next
For enterprise leaders, this is becoming the de facto checklist. It’s not a regulation, at least not yet, but it’s the gold-standard guidance that regulators and insurers will absolutely look to. If you’re audited and you’ve ignored the NIST AI profile, good luck explaining that. For developers and vendors, it’s a blueprint for building secure AI products from the ground up. And for the tech industry at large, it brings desperately needed standardization to a chaotic space. Now, the draft isn’t final. The public comment period, open until January 30, is critical. Will the profile be too vague? Too burdensome for smaller companies? This is where the real-world kinks get ironed out. But one thing’s clear: treating AI security as an afterthought is no longer an option. The framework is here, and it’s time to start mapping your strategy to it.
