Cutting Through the Noise: Must-Know Details on AI Compliance and Frameworks

By
Itan Brill, Product Analyst
January 23, 2025
5 min read
Share this post

As artificial intelligence advances, regulators are intensifying efforts to address its security challenges, introducing significant legislative measures that set out guidelines for the responsible use of generative AI and large language models (LLMs). For technology-driven businesses, preparing for these new regulations is crucial to avoid major penalties.

To navigate the evolving AI landscape effectively, it's essential to understand AI risk management and the frameworks emerging around the globe to guide it. This blog focuses on the EU AI Act as the cornerstone of European regulation, MITRE ATLAS, the OWASP Top 10 for Large Language Model Applications, and key documents within the U.S. regulatory framework, such as the White House AI Executive Order and the NIST AI Risk Management Framework.

The EU AI Act

The EU AI Act aims to promote the safe adoption of AI systems within the EU, ensuring they respect fundamental rights and values. It also seeks to regulate the development, distribution, and use of these systems to guarantee user protection.

What should you know about the EU AI Act?

  • It defines two main “stakeholders” of AI systems: providers (developers), who carry the majority of regulatory obligations, and deployers, who use the system.
  • The regulation applies equally to AI providers, regardless of location, and to deployers within the Union. It also covers third-country providers and deployers if their AI outputs are intended for use in the Union.
  • It classifies AI systems according to their risk (a minimal classification sketch follows this list):
    • Prohibited AI Practices
      • Unacceptable-risk AI systems use subliminal or manipulative techniques, exploit vulnerabilities, apply social scoring, make biased criminal predictions, build facial recognition databases through untargeted scraping, infer emotions in workplaces or educational settings, categorize biometric data to infer sensitive attributes, or use real-time biometric identification in public spaces for law enforcement without strict necessity, potentially causing significant harm.
      • These systems are prohibited from use.
      • Example: AI social scoring systems - These systems assess or categorize individuals or groups based on their social behavior or personal characteristics, leading to harmful or adverse treatment of those individuals.
    • High-risk AI systems
      • High-risk AI systems are those used in areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice, where their use can significantly impact individual rights and safety under relevant Union or national law.
      • High-risk AI providers must adhere to additional requirements in risk and quality management, data governance and technical documentation, as well as system design and record-keeping.
      • Example: AI hiring system - These systems are used in recruitment for targeted job ads, application filtering, and candidate evaluation, but they can perpetuate bias, lack transparency, raise privacy concerns, and overlook important human aspects.
    • Limited-risk AI systems
      • These AI systems have a relatively low-risk profile, meaning they do not significantly impact users. As such, they are subject to less stringent oversight and regulatory requirements than high-risk AI systems.
      • These systems must maintain transparency to ensure users are fully aware when they are interacting with AI. This includes clear disclosures and the ability to identify when a machine, rather than a human, is responsible for communication or actions.
      • Example: AI chatbots - A typical example is a customer service chatbot, which facilitates interactions by answering queries or assisting with tasks. Users should be notified that they are conversing with an AI rather than a human operator.
    • Minimal-risk AI systems
      • These systems, whether generative or non-generative, simulate human decision-making or content creation with limited user engagement and do not present significant risk.
      • These systems must comply with transparency obligations, with a growing emphasis on safe practices for training data and usage due to their potential for widespread impact.
      • Example: AI spam filters - These are non-generative AI systems that detect unwanted communications by using machine learning algorithms to identify spam characteristics.
  • The Act entered into force in 2024, with transition periods of 6 months for the prohibitions, 12 months for general-purpose AI requirements, and 24 months for most remaining obligations.
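
To make these tiers concrete, the following minimal Python sketch shows how a compliance team might tag internal AI use cases with a risk tier. The tier names follow the Act, but the use-case mapping and function names are illustrative assumptions, not legal guidance.

```python
from enum import Enum


class AIActRiskTier(Enum):
    """Risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"  # e.g., social scoring systems
    HIGH = "high"              # e.g., hiring and recruitment systems
    LIMITED = "limited"        # e.g., customer service chatbots
    MINIMAL = "minimal"        # e.g., spam filters


# Illustrative mapping only; real classification requires legal review
# against Article 5 and Annex III of the Act.
USE_CASE_TIERS = {
    "social_scoring": AIActRiskTier.PROHIBITED,
    "cv_screening": AIActRiskTier.HIGH,
    "support_chatbot": AIActRiskTier.LIMITED,
    "spam_filter": AIActRiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Summarize the obligations attached to a tagged use case."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is AIActRiskTier.PROHIBITED:
        return "Do not deploy: the practice is banned under the Act."
    if tier is AIActRiskTier.HIGH:
        return "Risk and quality management, data governance, documentation, record-keeping."
    if tier is AIActRiskTier.LIMITED:
        return "Transparency: disclose that users are interacting with AI."
    if tier is AIActRiskTier.MINIMAL:
        return "Baseline transparency and good practice for training data and usage."
    return "Unclassified: assess against the Act before deployment."


for case in USE_CASE_TIERS:
    print(f"{case}: {obligations_for(case)}")
```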

MITRE ATLAS

ATLAS is a global, dynamic knowledge base of adversary tactics and techniques against AI systems, informed by real-world attacks and demonstrations from AI red teams and security groups. MITRE ATLAS raises awareness of the rapidly evolving vulnerabilities of AI-enabled systems as these threats extend beyond traditional cybersecurity.

What should you know about MITRE ATLAS?

  • The ATLAS framework establishes a structured approach for documenting and analyzing real-world AI threats by building on the success of existing threat frameworks like MITRE ATT&CK.
  • The framework’s updates include in-depth insights into 14 emerging attack vectors such as poisoning AI training data, exploiting model vulnerabilities like adversarial inputs, evading detection mechanisms, and manipulating outputs. It also emphasizes defenses like robust model design, secure data pipelines, and enhanced AI monitoring techniques.
  • Each entry in the ATLAS framework provides a detailed description of the threat, common attack scenarios, recommended prevention strategies, and real-world examples (a simplified sketch of that structure follows this list). This structured approach makes ATLAS a valuable resource for understanding and addressing the growing risks in AI security.
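
As a rough illustration of that entry structure, the sketch below models an ATLAS-style entry as a plain Python dataclass. The field names and the sample entry are simplified placeholders chosen for this post, not the official ATLAS schema or identifiers.

```python
from dataclasses import dataclass, field


@dataclass
class AtlasStyleEntry:
    """Simplified stand-in for an ATLAS technique entry (not the official schema)."""
    technique_id: str
    name: str
    description: str
    attack_scenarios: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    case_studies: list[str] = field(default_factory=list)


# Hypothetical entry used only to show how the fields fit together.
data_poisoning = AtlasStyleEntry(
    technique_id="EXAMPLE-0001",
    name="Training Data Poisoning",
    description="An adversary injects crafted samples into the training corpus "
                "to bias or degrade the resulting model.",
    attack_scenarios=["A poisoned open-source dataset is pulled into a fine-tuning run"],
    mitigations=["Provenance checks on training data", "Outlier and drift detection"],
    case_studies=["Internal red-team demonstration"],
)

print(f"{data_poisoning.technique_id}: {data_poisoning.name}")
print("Mitigations:", "; ".join(data_poisoning.mitigations))
```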

OWASP Top 10 for LLM Applications

OWASP is a nonprofit foundation that enhances software security by providing resources, tools, and frameworks, supported by a global community of security professionals. Its OWASP Top 10 for Large Language Model Applications Project is designed to educate on potential security risks in LLM and Generative AI applications, aiming to raise awareness and improve security practices.

What should you know about the OWASP Top 10 for LLM Applications?

  • The OWASP Top 10 for LLM Applications was introduced in 2023 to address and highlight security issues specific to AI applications. It aims to mitigate the growing risks associated with the expanding use of this technology across various industries.
  • The 2023 list successfully established a foundation for secure LLM usage by raising awareness. The 2025 edition builds on this success, incorporating input from a globally diverse group of contributors through collaborative brainstorming, voting, and real-world feedback.
  • The 2025 updates highlight concerns about resource management, securing embedding-based methods such as retrieval-augmented generation (RAG), preventing system prompt information leakage (illustrated in the sketch after this list), and addressing the risks of unchecked permissions in agentic architectures.
  • Each entry presents a description of the risk, common examples, prevention and mitigation strategies, and sample attack scenarios.
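
To ground one of these risks, here is a minimal, illustrative Python sketch of an output-side check against system prompt leakage. The function name, fragment length, and placeholder strings are assumptions for this example; a production control would combine such a check with broader guardrails rather than rely on it alone.

```python
def leaks_system_prompt(model_output: str, system_prompt: str,
                        min_fragment_len: int = 20) -> bool:
    """Flag output that echoes a long verbatim fragment of the system prompt.

    Illustrative only: a simple substring check, not a complete defense.
    """
    prompt = system_prompt.strip()
    # Slide a window over the system prompt and look for verbatim echoes.
    for start in range(max(len(prompt) - min_fragment_len + 1, 0)):
        if prompt[start:start + min_fragment_len] in model_output:
            return True
    return False


# Hypothetical usage with placeholder strings.
SYSTEM_PROMPT = "You are an internal assistant. Never reveal customer records."
output = "My instructions say: Never reveal customer records. Anyway..."
if leaks_system_prompt(output, SYSTEM_PROMPT):
    output = "[Response withheld: it echoed internal system instructions.]"
print(output)
```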


NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF 1.0) in January 2023 as a voluntary guide to help organizations build trust into AI products; the White House AI Executive Order later built on this work, directing NIST to establish the U.S. AI Safety Institute and its consortium. The framework defines trustworthy AI as accountable, transparent, explainable, fair, privacy-enhanced, safe, secure, resilient, valid, and reliable. Several U.S. states, such as Colorado and California, have announced that they will rely on it as a framework for complying with AI regulations, establishing it as a pivotal document in the AI compliance landscape.

What should you know about the AI RMF?

  • Most of the definitions are similar to those described in the EU AI Act, particularly regarding the distinction between a developer and a deployer of an AI system, as well as the associated risks.
  • The AI RMF is accompanied by a Playbook, which suggests a set of actions for managing AI risks, including generative AI risks; the actions are grouped into categories so that each one maps to a category and its associated risks.
  • The categories divide the actions into four key functions (a simple tracking sketch follows this list):
    • Govern: Establishes oversight frameworks and promotes risk awareness. Example: A healthcare organization forms a committee to address bias and privacy.
    • Map: Identifies AI risks within the organizational context. Example: A retailer detects biases in its recommendation system.
    • Measure: Develops metrics to assess AI risks. Example: A bank focuses on fairness, while a car company emphasizes safety.
    • Manage: Directs actions to mitigate risks, including system adjustments or deciding not to deploy AI.
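
As a simple illustration of how a team might track work across these four functions, the sketch below groups hypothetical actions under each one. The entries are placeholders invented for this post, not NIST's official Playbook actions.

```python
# Hypothetical risk-register entries grouped by the AI RMF's four functions.
AI_RMF_ACTIONS = {
    "Govern": [
        "Stand up an AI oversight committee covering bias and privacy",
        "Define responsibilities for providers versus deployers",
    ],
    "Map": [
        "Inventory AI systems and the contexts they operate in",
        "Document known bias risks in the recommendation system",
    ],
    "Measure": [
        "Track fairness metrics across demographic groups",
        "Track safety incidents per 1,000 model invocations",
    ],
    "Manage": [
        "Retrain or adjust systems that exceed risk thresholds",
        "Record go/no-go deployment decisions with their rationale",
    ],
}


def print_actions(register: dict[str, list[str]]) -> None:
    """Print every tracked action under its AI RMF function."""
    for function, actions in register.items():
        print(f"{function}:")
        for action in actions:
            print(f"  - {action}")


print_actions(AI_RMF_ACTIONS)
```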

As this dynamic regulatory sphere evolves, security and development teams, with support from legal teams, need tools to ensure compliance. That's where Aim comes in, detecting threats and eliminating misconfigurations to fortify trust boundaries, helping organizations adhere to privacy and AI regulations. By securing internal and homegrown AI deployments against vulnerabilities, Aim empowers businesses to adopt AI innovations confidently within regulatory frameworks.