Home Our Insights Articles The Regulatory Perspective on Safe AI Solutions

The Regulatory Perspective on Safe AI Solutions

Konrad Trubas Head of IT/Information Security Officer
7 min read
18.11.2025

Ensuring security and safety of AI-based solutions, especially those using generative models and large language models (LLMs), requires the application of best practices, which identify and describe the most common and critical vulnerabilities, threats, and what’s particularly important: common risks. They point, among other things, to the risk of model hallucinations, prompt injection, improper access control, and data leaks, making them a valuable framework for developing effective security measures. 

However, while technical security frameworks are fundamental, their application should not be detached from the broader organizational context. For example, the OWASP guidelines themselves, while extremely useful, are operational and tactical in nature, focusing on specific technical threats and countermeasures. Meanwhile, effective AI security management requires a holistic approach that includes risk analysis, model lifecycle management, and organizational accountability. This is where the ISO/IEC 42001 standard comes in, providing a framework for managing AI systems in a systematic way, integrated with the overall management strategy of the organization. 

Compliance with ISO 42001 allows organizations to consider not only technical aspects, but also the ethical, legal, and social context of AI use. As a result, the security of AI solutions becomes part of a larger management system in which decision-making is based on risk analysis and accountability, rather than solely on a checklist of threats. For example, not every risk identified by OWASP must have the same significance in every organization; their prioritization should be based on an assessment of the potential impact and likelihood of occurrence in a given business context.

But ISO/IEC 42001, all-encompassing as it may be, is still a piece of the puzzle that exists within the larger ecosystem of regulation. The rise of AI, as it becomes more powerful and more prominent, is forcing governments worldwide to confront how to oversee its growth and adoption. This is not just about stopping abuse and make AI solutions secure; this is also about fairness, transparency, accountability and protecting human rights. The regulatory environment is emerging, but the trends are becoming more obvious, and companies should prepare for more attention and more regulation.

The Rise of AI Regulations: A Global Overview

The regulatory approaches to AI are diverse, reflecting differing philosophical and legal traditions. Here’s a look at some key developments:

EU AI Act

The EU is pioneering AI regulation with a risk-based approach. It bans “unacceptable risk” AI (like manipulative systems) and imposes strict requirements, data governance, transparency, oversight, on “high-risk” applications in areas like law enforcement and critical infrastructure. It also addresses transparency for large AI models.

AI Regulation Within the US 

The US favors regulating AI within existing laws, rather than creating a single AI law. The White House’s AI Bill of Rights and NIST’s AI Risk Management Framework (AI RMF) offer guidance, with agencies like the FTC focusing on preventing unfair practices and algorithmic bias.

Other Global Developments

Other regions are developing their own strategies. Canada (AIDA) and Japan focus on high-impact/trustworthy AI, while Brazil addresses bias and privacy. China is taking a more centralized and proactive regulatory approach, particularly concerning deepfakes and recommendations.

In the table below, we gathered a list of most commonly used and known AI-related regulations. When building AI solutions, organizations need to identify requirements that must be followed during the whole lifecycle of an AI-based solution.

Key Regulations and Standards for AI Solutions

Name of the StandardShort DescriptionArea of ApplicabilityIndustries to FollowAdditional Notes for AI Managers
EU AI ActA comprehensive legal framework that classifies AI systems based on risk (unacceptable, high, limited, minimal) and imposes corresponding obligations on providers and users.Regional (European Union)Broad applicability, with a focus on high-risk sectors like critical infrastructure, medical devices, and law enforcement.It is a landmark regulation with far-reaching global consequences due to its application outside the EU. Non-EU businesses selling AI products to EU citizens will be obligated to comply. Punishment for non-compliance is substantial.
NIST AI Risk Management Framework (AI RMF)A voluntary framework that provides a structured approach for identifying, assessing, and managing risks associated with AI systems throughout their lifecycle.Global (Voluntary)All industries developing or using AI systems.The AI RMF is highly influential and provides a practical, adaptable framework that can be integrated with other risk management processes. It emphasizes trustworthiness, including aspects like fairness, accountability, and transparency.
ISO/IEC 42001:2023An international standard that specifies the requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS).GlobalAll industries aiming for a structured approach to AI governance.Like other ISO management system standards (e.g., ISO 27001 for information security), this standard provides a certifiable framework that can help organizations demonstrate responsible AI governance to stakeholders.
ISO/IEC 27099:2022A standard that provides guidance on the security of AI systems, focusing on the protection of AI models, data, and the overall AI system from adversarial attacks.GlobalAll industries where the security and integrity of AI models are critical.This standard is particularly relevant for the technical aspects of AI security, including model theft, data poisoning, and evasion attacks. It complements broader governance frameworks.
Guidance for an AI Bill of Rights (United States)A white paper from the White House Office of Science and Technology Policy outlining five principles to guide the design, use, and deployment of automated systems.National (United States – Policy guidance)All industries, particularly those in the public sector and those providing critical services.This is not a legally binding document but signals the U.S. government’s direction on AI regulation, focusing on safe and effective systems, algorithmic discrimination protections, and data privacy.

Implications for Organizations: Beyond Global Compliance Standards

To build a secure and safe AI solution, organization need to understand not only data security and privacy measures, but also identify, mitigate and manage AI related risks as well as align with regulatory requirements. Organizations must adopt a proactive and strategic approach to AI governance.

Here is a set of questions for product owners and project managers to answer in order to identify important requirements for the AI solution being built:

AI Solution Requirements Checklist

I. Data Governance & Privacy

Data Source Compliance 
Are all data sources compliant with relevant regulations (e.g., GDPR, CCPA)?

Consent Management
Is there a clear process for obtaining and managing user consent for data collection and use?

Data Security 
Are appropriate security measures in place to protect the data from unauthorized access, use, or disclosure?

Data Minimization 
Are we only collecting and using the data necessary for the AI’s intended purpose?

II. Transparency & Explainability

Explainable AI (XAI) Techniques 
What are the implications of actions / decisions made by AI system? Are justifications for decisions needed? Are XAI techniques being considered/implemented to help understand how the AI makes decisions? (Especially for high-stakes applications)

Model Card/Documentation 
Is a “model card” or similar documentation being created to describe the AI system’s purpose, capabilities, limitations, and potential biases? If cloud-based LLM models will be used, how it’s capabilities will be used and accuracy confirmed for the application use?

III. Bias & Fairness

Dataset Diversity
Is the training data diverse and representative of the population the AI will impact?

Bias Detection Tools
Are bias detection tools being used to identify and mitigate potential biases in the data and model?

Fairness Metrics
Are appropriate fairness metrics being used to evaluate the AI’s performance across different groups?

Ongoing Monitoring
Is there a plan for ongoing monitoring of the AI’s performance to detect and address any emerging biases?

IV. Risk Management & Oversight

Risk Assessment
Has a comprehensive risk assessment been conducted to identify potential AI-related risks (technical, ethical, legal, reputational)?

Mitigation Strategies
Are there mitigation strategies in place to address identified risks?

Human-in-the-Loop 
What will be the autonomy level of AI system for decision making? Is there a mechanism for human oversight and intervention in the AI’s decision-making process, especially for critical applications?

V. Documentation & Auditability

Design Documentation
Is the AI system’s design documented in detail?

Development Documentation
Is the development process documented, including data preparation, model training, and evaluation? Is there a policy that describes the model and training data versioning and testing approach?

Deployment Documentation
Is the deployment process documented, including monitoring and maintenance procedures?

Audit Trail
Is there an audit trail to track the AI’s decisions and actions? Is it obligatory to track AI decisions?

The questions above provide a quick overview of the most important requirements. For detailed compliance requirements we recommend contacting with appropriate departments in your organization.

A Systemic Approach to Safe and Responsible AI

In summary, technical best practices are an essential part of protecting AI solutions. However, their effectiveness is significantly enhanced when they are applied as part of a broader risk management and compliance system, such as that described in ISO 42001. Organizations should consider starting form such an approach as it enables the creation of secure, values-aligned, and responsible AI solutions.

If you want to explore enterprise AI safety from other perspective, check out our Safe AI compendium and learn about mitigating risks related to data, models and users.

Would you like more information about this topic?

Complete the form below.