From Frameworks to Action: How NIST’s COSAIS Will Protect AI Models

Introduction: Why NIST Launched COSAIS

While artificial intelligence (AI) has transformative possibilities, it also poses novel cybersecurity threats that conventional security measures might not adequately counter. The National Institute of Standards and Technology (NIST) has responded by publishing a concept paper on Control Overlays for Securing AI Systems (COSAIS), a new project intended to help organizations address security issues unique to AI.

What is COSAIS?

COSAIS builds on NIST's widely adopted SP 800-53 catalog of security and privacy controls, tailoring them into overlays that target AI systems. Organizations can use these overlays, which come with implementation-focused guidance, to protect AI models, training data, model weights, configurations, and outputs.
By drawing on organizations' existing familiarity with SP 800-53, COSAIS aims to make AI security straightforward to fold into current cybersecurity risk management programs.
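
NIST has not yet published the overlay format itself, so the snippet below is only a minimal sketch of the idea: a tailored subset of SP 800-53 controls mapped to AI-specific assets. The control IDs (AC-3, SI-7, SC-28, AU-2) are real SP 800-53 identifiers, but the schema, asset names, and guidance notes are illustrative assumptions, not the COSAIS format.

```python
# Hypothetical sketch of an AI control overlay: a tailored subset of
# SP 800-53 controls mapped to AI-specific assets. The structure and
# asset names are illustrative assumptions, not the COSAIS schema.

GENAI_OVERLAY = {
    "overlay": "Generative AI (illustrative)",
    "baseline": "SP 800-53 Rev. 5 Moderate",
    "tailored_controls": {
        # Real SP 800-53 control IDs; the AI-specific notes are assumed.
        "AC-3":  {"asset": "model endpoints", "note": "restrict who may query or fine-tune the model"},
        "SI-7":  {"asset": "model weights",   "note": "verify integrity of weight files before loading"},
        "SC-28": {"asset": "training data",   "note": "encrypt datasets at rest"},
        "AU-2":  {"asset": "model outputs",   "note": "log prompts and completions for audit"},
    },
}

def controls_for_asset(overlay: dict, asset: str) -> list[str]:
    """Return the control IDs in the overlay that cover a given asset."""
    return [cid for cid, entry in overlay["tailored_controls"].items()
            if entry["asset"] == asset]

if __name__ == "__main__":
    print(controls_for_asset(GENAI_OVERLAY, "model weights"))  # ['SI-7']
```

The appeal of the overlay approach is visible even in this toy form: the baseline and control IDs stay familiar to anyone who already works with SP 800-53, while the tailoring adds the AI-specific context.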

How COSAIS Fits into NIST’s AI Security Portfolio

COSAIS is designed to complement several other key NIST efforts on AI security. Together, these resources provide both a strategic foundation and practical tools; by tailoring their high-level guidance to specific scenarios, COSAIS gives practitioners concrete direction for securing AI systems in the real world.

Below is a brief description of each of these complementary resources.

AI Risk Management Framework (AI RMF)

The AI RMF offers a comprehensive approach to managing AI-related risks. It emphasizes building trustworthy AI systems by weighing elements such as safety, security, transparency, accountability, and fairness. Rather than prescribing specific controls, it gives organizations a flexible method for identifying, evaluating, and mitigating risks across the entire AI lifecycle.

Cybersecurity Framework Profile for AI

The Cybersecurity Framework Profile for AI extends NIST's well-known Cybersecurity Framework to address challenges unique to AI. At a strategic level, it helps organizations adapt their current cybersecurity practices to AI, highlighting priorities for securing AI systems, using AI defensively, and countering AI-enabled threats.

SP 800-218A (SSDF Profile for AI)

Published as SP 800-218A, the Secure Software Development Framework (SSDF) Community Profile for AI adapts secure software development practices to AI, including generative AI and dual-use foundation models. It gives developers guidance on securing important artifacts, such as configuration files, model weights, and training data, throughout the design, development, and deployment phases.
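
For example, protecting model weights typically starts with integrity verification. The sketch below checks a weight file against a known-good SHA-256 digest before loading; the file name, digest source, and `load_verified_weights` helper are assumptions for illustration, not an API from SP 800-218A.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified_weights(path: Path, expected_digest: str) -> bytes:
    """Refuse to load model weights whose digest does not match a
    known-good value recorded when the model was produced."""
    actual = sha256_digest(path)
    if actual != expected_digest:
        raise ValueError(f"weight file {path} failed integrity check: {actual}")
    return path.read_bytes()

# Hypothetical usage; in practice the expected digest would come from a
# signed manifest generated at training time:
# weights = load_verified_weights(Path("model.safetensors"), "<known-good digest>")
```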

Draft AI 800-1

Draft NIST AI 800-1 addresses the risks of dual-use foundation models, which can be adapted for both beneficial and harmful purposes. With an emphasis on national security and high-risk applications, it offers model-level and organizational practices to reduce the likelihood that sophisticated AI systems are exploited by criminal or malevolent actors.

AI 100-2e2025

AI 100-2e2025 provides a taxonomy of adversarial machine learning attacks and defenses, mapping threats such as data poisoning, model evasion, and inference attacks to possible countermeasures. Anyone working with AI systems, from developers to practitioners, can benefit from its breakdown of common attack vectors and recommended mitigations.
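
To make one class of countermeasure concrete, the sketch below screens training data for label-flipping poisoning by flagging points whose label disagrees with most of their nearest neighbors. It uses only NumPy; the `flag_suspect_labels` function and its thresholding choices are illustrative assumptions, not a defense prescribed by AI 100-2e2025.

```python
import numpy as np

def flag_suspect_labels(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Flag points whose label disagrees with the majority label of their
    k nearest neighbors -- a crude screen for label-flipping poisoning."""
    suspects = []
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                       # exclude the point itself
        neighbors = np.argsort(dists)[:k]
        labels, counts = np.unique(y[neighbors], return_counts=True)
        if y[i] != labels[np.argmax(counts)]:   # disagrees with neighborhood
            suspects.append(i)
    return np.array(suspects, dtype=int)

# Toy usage: flip one label in otherwise clean data; the flipped point
# (and perhaps a few near the class boundary) should be flagged.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] > 0).astype(int)
y[3] ^= 1                                       # simulate a poisoned label
print(flag_suspect_labels(X, y))
```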

COSAIS Use Cases: 5 Key Overlays

NIST intends to create five initial overlays, each targeting a specific scenario of AI adoption.

  1. For Generative AI (LLMs & Assistants): Ensuring the security of AI systems that produce text, images, and other forms of content, whether deployed on-premises or through third-party services.
  2. For Predictive AI: Addressing risks in systems that evaluate historical data for decision-making purposes, such as hiring, credit scoring, and recommendations.
  3. For AI Agent Systems: Single Agent: Ensuring the security of autonomous agents, including enterprise copilots and coding assistants (see the sketch after this list).
  4. For AI Agent Systems: Multi-Agent: Safeguarding collaborative agent systems that facilitate the automation of intricate workflows, including expense reimbursement.
  5. For AI Developers: Implementing secure development practices to safeguard model artifacts and mitigate risks of misuse.

Each overlay will map to existing organizational controls and emphasize protecting the confidentiality, integrity, and availability of AI components.
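
As one concrete illustration of the kind of safeguard the single-agent overlay (#3) might call for, the sketch below wraps an agent's tool invocations in a deny-by-default allow-list. The `ToolGuard` class and tool names are hypothetical; the published overlays may prescribe very different mechanisms.

```python
from typing import Callable

class ToolGuard:
    """Allow-list wrapper around an agent's tool calls -- a simple
    least-privilege control in the spirit of an AI-agent overlay."""

    def __init__(self, allowed: set[str]):
        self.allowed = allowed
        self.tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, *args, **kwargs) -> str:
        if name not in self.allowed:
            # Deny by default: the agent may only invoke approved tools.
            raise PermissionError(f"tool '{name}' is not on the allow-list")
        return self.tools[name](*args, **kwargs)

# Hypothetical tools for an enterprise copilot.
guard = ToolGuard(allowed={"search_docs"})
guard.register("search_docs", lambda q: f"results for {q!r}")
guard.register("delete_file", lambda p: f"deleted {p}")  # registered, not allowed

print(guard.call("search_docs", "travel policy"))        # permitted
# guard.call("delete_file", "/tmp/x") would raise PermissionError
```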

Community Involvement & Timeline

To ensure COSAIS reflects real-world needs, NIST is inviting public participation:

  • Slack Channel: A hub for collaboration and feedback; joining details are on the project page (https://csrc.nist.gov/projects/cosais).
  • Public Drafts & Workshops: The first draft overlay is expected in early 2026, followed by a workshop for further stakeholder input.

Why COSAIS Matters for AI Security

AI adoption is accelerating, and so are the risks: data poisoning, model theft, and misuse of powerful generative models. COSAIS offers a structured way to bring proven protection measures into the age of AI, giving practitioners a practical toolkit grounded in real-world scenarios. For organizations that already use SP 800-53, it will be a natural extension into AI security, and a chance for AI builders and users alike to ensure that security and innovation go hand in hand from the start.
