Editorial Feature

Ethics Vs Compliance: What AI Regulation Misses

Governing AI Systems
The Limits of Regulatory Compliance
Ethical Blind Spots in AI Regulation
Towards Ethical AI Governance
Conclusion
References and Further Reading

Artificial intelligence (AI) has become an integral part of modern decision-making in healthcare, finance, education, and governance. As its influence grows, governments are introducing regulations to guide how AI is developed and used. Yet much of the discussion still focuses on compliance, that is, on whether organizations follow legal rules and standards, rather than on the broader ethical responsibilities that come with deploying these systems.

[Figure: policy and technology icons with a central balance scale, symbolizing the balance between human oversight and AI systems. Image Credit: Suri_Studio/Shutterstock.com]

While regulations such as the European Union (EU) Artificial Intelligence Act (AIA) aim to improve safety and accountability, simply meeting legal requirements does not guarantee that AI systems will be fair, transparent, or socially responsible.1-4 

Legal compliance may set boundaries for AI development, but it does not necessarily address deeper concerns around bias, transparency, and the societal impact of algorithmic decision-making.


Governing AI Systems

As AI systems move into high-stakes environments such as financial services and human resources, questions of governance become unavoidable. Decisions that were once made by people are increasingly supported, or sometimes replaced, by algorithmic systems, which raises concerns about oversight, accountability, and responsibility.

AI model governance has therefore emerged as a key mechanism for ensuring that these systems are developed and used in ways that remain safe, transparent, and socially responsible.

In practice, governance involves a mix of regulatory requirements, institutional policies, and technical safeguards that guide how AI systems are designed, implemented, and monitored. Governments and regulatory bodies have begun outlining obligations and restrictions around AI use.

For example, the EU White Paper on AI proposed a European approach that seeks to encourage AI innovation while also addressing potential risks. Alongside these regulatory efforts, initiatives focused on fair AI have drawn attention to issues such as organizational accountability, intellectual property, and bias mitigation, particularly in the case of large foundation models that rely on vast datasets and computational resources.

But AI governance is not purely a technical challenge. Addressing the societal implications of AI systems requires input from multiple disciplines, including computer science, economics, law, and ethics. Public debate also plays an important role. Discussions around misuse, bias, and fairness help identify risks early and encourage the development of safeguards that improve transparency, oversight, and public trust.1,2

A wide range of stakeholders are now shaping the governance landscape. Technology companies such as Microsoft, Google, Amazon, and IBM are actively developing methods to address issues such as bias and responsible model design. At the same time, international bodies and national governments, including the United Nations, the European Union, China, Canada, and Australia, are working to define standards, monitor AI deployment, and establish regulatory frameworks.

These approaches differ considerably across jurisdictions.

The European Union has adopted a risk-based model through the Artificial Intelligence Act, which classifies AI systems according to their potential impact and level of risk. By contrast, the United States has generally taken a more flexible approach that seeks to encourage technological innovation while managing risks through initiatives such as the NIST AI Risk Management Framework and a White House Executive Order on the safe and trustworthy development of AI.

Together, these efforts highlight the broader challenge of balancing rapid technological progress with meaningful oversight and governance.1,2

The Limits of Regulatory Compliance

The growing number of governance frameworks and regulatory initiatives reflects an increasing awareness of the risks associated with AI deployment. Many of these frameworks focus on classifying AI systems, monitoring their operation, and identifying potential harms such as malicious use or the leakage of sensitive information.

While these measures are essential for improving oversight, they largely address technical and operational risks. The broader ethical challenges that arise during the development and use of AI systems are often less clearly addressed within existing regulatory structures.

One of the most prominent examples of recent regulatory action is the European Union’s Artificial Intelligence Act (AIA). Proposed by the European Commission in April 2021 under the title Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence, the legislation was designed to align AI development with European values and fundamental human rights.

Following several years of negotiation and political debate, the AIA entered into force on 1 August 2024. The Act establishes a governance framework that regulates the development, marketing, and use of AI technologies within the EU.

Systems considered incompatible with EU values and fundamental rights are prohibited, while those classified as “high-risk” must comply with strict safeguards and monitoring requirements. As a result, organizations are increasingly focused on meeting regulatory obligations, and developers face growing pressure to integrate these requirements into everyday development practices.1,2,4

Yet prioritizing compliance does not necessarily lead to ethical AI systems.

Legal frameworks are not always grounded in ethical reasoning, and they cannot anticipate every dilemma that arises during system design or deployment. Compliance, therefore, represents only one dimension of broader AI governance. 

Developers frequently encounter situations in which regulatory guidance is limited, particularly when ethical concerns fall outside the scope of existing legal requirements. For example, when evaluating fairness in AI systems, developers may need to choose between different fairness metrics without clear regulatory direction on which measure should be prioritized.
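The dilemma can be made concrete. The sketch below, using invented data for a hypothetical interview-screening model, computes two widely used group fairness metrics in plain Python: the demographic parity gap (difference in selection rates between groups) and the equal opportunity gap (difference in true positive rates). The same predictions score differently under the two metrics, which is exactly the choice current regulation leaves open to developers.

```python
# Invented screening outcomes for two applicant groups.
# y_true: 1 = applicant is qualified; y_pred: 1 = model recommends interview.
group_a = {"y_true": [1, 1, 1, 0, 0, 0], "y_pred": [1, 1, 0, 1, 0, 0]}
group_b = {"y_true": [1, 1, 0, 0, 0, 0], "y_pred": [1, 0, 0, 0, 0, 0]}

def selection_rate(group):
    """Share of applicants recommended (the basis of demographic parity)."""
    return sum(group["y_pred"]) / len(group["y_pred"])

def true_positive_rate(group):
    """Share of qualified applicants recommended (the basis of equal opportunity)."""
    hits = [p for t, p in zip(group["y_true"], group["y_pred"]) if t == 1]
    return sum(hits) / len(hits)

dp_gap = selection_rate(group_a) - selection_rate(group_b)          # 1/2 - 1/6
eo_gap = true_positive_rate(group_a) - true_positive_rate(group_b)  # 2/3 - 1/2

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.33
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.17
```

A system that looks markedly unfair by one metric can look only mildly unfair by the other; without regulatory direction, choosing which gap to minimize is itself an ethical decision.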

Another important regulatory instrument shaping AI development in the EU is the General Data Protection Regulation (GDPR), which governs how personal data is collected, processed, and used. GDPR gives individuals greater control over their personal data, seeks to prevent discriminatory outcomes, and introduces transparency requirements for automated decision-making systems, including those based on machine learning models.

For organizations, compliance involves implementing auditing procedures, documenting AI models, evaluating prediction fairness, and providing explanations for automated decisions. Although fairness in machine learning has been studied extensively, producing explanations that satisfy legal expectations remains challenging, leaving many organizations uncertain about what constitutes an adequate explanation under GDPR.1,2
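GDPR does not prescribe what adequate documentation of an automated decision looks like in practice. One minimal, hypothetical shape for a per-decision audit record is sketched below; every field name, including the model identifier and review channel, is invented for illustration and is not mandated by the regulation.

```python
import json
from datetime import datetime, timezone

def audit_record(model_id, inputs, output, top_factors):
    """A hypothetical per-decision log entry supporting later review.
    All field names are illustrative; GDPR does not prescribe a schema."""
    return {
        "model_id": model_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,              # data considered for this decision
        "output": output,              # the automated decision itself
        "top_factors": top_factors,    # plain-language reasons for the subject
        "review_channel": "appeals@example.org",  # route to contest the decision
    }

record = audit_record(
    model_id="credit-screen-v2",       # invented identifier
    inputs={"income_band": "B", "tenure_years": 4},
    output="declined",
    top_factors=["short credit history", "high utilization"],
)
print(json.dumps(record, indent=2))
```

Even a simple record like this raises the open question in the text: are two plain-language factors an "adequate explanation," or does the law require more?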

Ethical Blind Spots in AI Regulation

In response to growing concern about the societal impact of artificial intelligence, a range of frameworks and guidelines have emerged to promote more responsible AI development. These initiatives often emphasize principles such as fairness, accountability, and transparency.

While these efforts reflect increasing awareness of the ethical challenges surrounding AI, important blind spots remain within existing governance and regulatory approaches. Ethical reflection is not always fully integrated into the technical and regulatory processes that shape how AI systems are designed, developed, and deployed.

Much of the work in AI ethics has focused on articulating high-level principles intended to guide responsible innovation. Although these principles provide an important foundation, translating them into practical development processes has proved difficult.

Ethical guidance can remain abstract, which risks producing governance practices that are only loosely connected to the ethical reasoning behind them. When ethical principles are implemented primarily through operational checklists or compliance procedures, they may fail to address the deeper tensions that arise when competing values influence design decisions.

These blind spots become particularly visible when regulatory compliance becomes the dominant organizational priority. Developers preparing AI systems for the European market, for example, must ensure that their systems comply with the legally binding requirements of the Artificial Intelligence Act. In practice, this often directs organizational attention toward demonstrating regulatory compliance rather than engaging in broader ethical reflection.

Bridging the gap between ethics and governance is therefore essential if ethical reasoning is to meaningfully inform both system design and regulatory implementation throughout the AI lifecycle.1,2

Some of the most significant blind spots appear in relation to bias and transparency in AI systems. Bias may emerge at several stages of AI development, including data collection, feature selection, algorithm design, and the broader decision environment. Different forms of bias, such as historical bias, sampling bias, label bias, and evaluation bias, can influence model outcomes and affect the fairness of algorithmic decision-making.

When these biases become embedded within training data or system design, they may reinforce existing social inequalities and reduce trust in AI-driven decisions.
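A minimal sketch of how label bias propagates: assume a population in which two groups are equally qualified, but historical records under-label one group's successes. Any model that learns base rates from those records inherits the distortion. All figures below are invented for illustration.

```python
# Invented figures: both groups are equally qualified in the population,
# but historical records (the training data) capture only 60% of group B's
# qualified outcomes; a simple form of label/sampling bias.
population = {"A": {"qualified": 50, "total": 100},
              "B": {"qualified": 50, "total": 100}}
training_data = {"A": {"qualified": 50, "total": 100},
                 "B": {"qualified": 30, "total": 100}}

def learned_rate(records, group):
    """Base rate a naive frequency-based model would learn from the records."""
    g = records[group]
    return g["qualified"] / g["total"]

for group in ("A", "B"):
    true_rate = population[group]["qualified"] / population[group]["total"]
    print(f"group {group}: true rate {true_rate:.2f}, "
          f"learned rate {learned_rate(training_data, group):.2f}")
```

The model is faithful to its data yet unfair to group B, which is why auditing the data pipeline, not only the model, matters.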

Transparency presents a related challenge.

Understanding how AI systems reach particular decisions is essential for accountability, auditing, and error detection. 

Transparent models allow stakeholders to examine the reasoning behind algorithmic outcomes and identify potential sources of discrimination or unfairness. However, many advanced AI models, particularly deep learning systems, operate as complex “black boxes,” making their internal decision processes difficult to interpret. 
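One model-agnostic way to probe such a black box is a crude permutation test: shuffle one input feature across records and count how often the decision flips. Features whose shuffling flips many decisions are influential even when the model's internals are inaccessible. The sketch below uses an invented stand-in model and invented data; production tools implement the same idea (permutation importance) far more carefully.

```python
import random

# Stand-in for an opaque model: callers see only inputs and outputs.
# The weights are invented; a real black box would be a trained model.
def black_box(income, age, postcode_risk):
    return 1 if (0.6 * income + 0.1 * age - 0.9 * postcode_risk) > 0.5 else 0

# Invented applicant records: 200 rows of (income, age, postcode_risk) in [0, 1).
rng = random.Random(42)
data = [tuple(rng.random() for _ in range(3)) for _ in range(200)]
baseline = [black_box(*row) for row in data]

def permutation_effect(feature_idx, seed=0):
    """Fraction of decisions that flip when one feature is shuffled across
    rows: a crude, model-agnostic signal of that feature's influence."""
    shuffler = random.Random(seed)
    column = [row[feature_idx] for row in data]
    shuffler.shuffle(column)
    flips = 0
    for row, shuffled_value, original in zip(data, column, baseline):
        perturbed = list(row)
        perturbed[feature_idx] = shuffled_value
        flips += black_box(*perturbed) != original
    return flips / len(data)

for name, idx in [("income", 0), ("age", 1), ("postcode_risk", 2)]:
    print(f"{name}: decisions flipped when shuffled = {permutation_effect(idx):.2%}")
```

Such probes can flag, for example, that a proxy variable like postcode drives outcomes, but they answer only part of the accountability question: they show which inputs matter, not whether relying on them is legitimate.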

Questions therefore remain about which decisions require explanation, what information should be disclosed, and who should have access to it.

Despite regulatory efforts to introduce transparency requirements, identifying unethical behavior, discriminatory outcomes, or opaque decision-making within AI systems continues to present significant challenges.1,2

Towards Ethical AI Governance

Addressing the ethical blind spots in current AI governance requires attention from the earliest stages of the AI development lifecycle. Providers must navigate a range of functional and non-functional requirements, including compliance with regulatory frameworks such as the Artificial Intelligence Act. However, ethical considerations must extend beyond regulatory obligations and be integrated alongside them, ensuring that AI systems are developed in ways that reflect broader societal values and responsibilities.

In the design phase, AI providers must define the purpose of a system and make architectural choices that create value for users while also considering the ethical implications of those decisions. This involves assessing how AI systems may affect fundamental liberties and social justice, dimensions that are not always fully addressed by existing regulatory frameworks.

Issues such as democratic freedoms and equality of opportunity therefore require careful consideration during system design. For example, an AI recruitment tool should be evaluated not only for its technical performance but also for its potential impact on fair access to employment, ensuring that the least advantaged members of society are not disproportionately harmed.1,2

During development, ethical governance must complement technical evaluation. Whether a system is formally classified as high-risk or not, developers should assess algorithms, foundation models, and training data for their potential social implications.

Large language models, embeddings, and datasets require careful scrutiny to identify possible sources of bias or discrimination. Where such risks cannot be adequately mitigated, alternative models or datasets may need to be considered. At the same time, developers themselves should be equipped with the knowledge and training necessary to recognize ethical issues and reflect on the broader consequences of their design decisions.

Testing and iterative evaluation are also essential components of ethical AI governance. Developers should assess how systems behave across different contexts and anticipate potential unintended uses. This includes evaluating how AI systems might affect fundamental liberties and equality of opportunity in practice.

Systems that influence high-stakes decisions, such as those relating to healthcare, social security, or other life-altering outcomes, require particularly rigorous testing to minimize the risk of serious harm. Failures in oversight can have significant consequences, as illustrated by cases such as the Dutch childcare benefits scandal.

Finally, responsible deployment requires meaningful transparency and accountability. Individuals affected by AI systems should have the opportunity to understand how decisions are made and how those decisions may influence their rights or opportunities. Transparency therefore plays a central role in enabling individuals to exercise fundamental rights, including autonomy and freedom of conscience.

In practice, this means moving beyond minimal regulatory disclosure toward clearer explanations of how AI systems operate and how their outputs are used in decision-making processes.1,2

Conclusion

The growing effort to regulate artificial intelligence reflects a broader recognition that these technologies are no longer confined to technical experimentation. They now shape decisions that affect employment, access to services, and many aspects of everyday life. Regulatory frameworks therefore play an essential role in establishing oversight and accountability.

At the same time, regulation alone cannot resolve the deeper challenges that accompany the use of AI systems in socially significant contexts.

The persistence of issues such as bias, limited transparency, and uncertainty around accountability suggests that important questions remain beyond the reach of compliance-focused governance. What is increasingly clear is that responsible AI cannot be secured through regulation alone. It depends on how ethical considerations are integrated into the everyday practices of designing, developing, and deploying these systems.

Ensuring that AI systems are both effective and socially responsible requires governance approaches that treat ethical reflection as part of the development process itself, rather than as something addressed only after systems are built or regulated.

References and Further Reading

  1. Westerstrand, S. (2025). Fairness in AI systems development: EU AI Act compliance and beyond. Information and Software Technology, 107864. DOI:10.1016/j.infsof.2025.107864, https://doi.org/10.1016/j.infsof.2025.107864
  2. Koneti, S. B. (2025). Regulation, Ethics and Fairness in Artificial Intelligence for Finance: Governance, Explainability and Compliance. Artificial Intelligence-Powered Finance: Algorithms, Analytics, and Automation for the Next Financial Revolution, 4, 124. DOI:10.70593/978-93-7185-613-3, https://doi.org/10.70593/978-93-7185-613-3
  3. AI ethics, governance & compliance [Online] Available at https://kpmg.com/dk/en/services/ai-and-data/ai-ethics-governance-and-compliance.html (Accessed on 10 March 2026)
  4. The EU Artificial Intelligence Act [Online] Available at https://artificialintelligenceact.eu/ (Accessed on 10 March 2026)

Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.

Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

Citations

Please use the following format to cite this article in your essay, paper or report:

Dam, Samudrapom. (2026, March 11). Ethics Vs Compliance: What AI Regulation Misses. AZoRobotics. Retrieved on March 11, 2026 from https://www.azorobotics.com/Article.aspx?ArticleID=814.
