INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7 |

AI Ethics Frameworks: A Complete Guide to Who Sets the Rules

A comprehensive investigation into the major AI ethics frameworks, corporate ethics boards, enforcement gaps, ethics washing, cultural relativism, and the global struggle to govern artificial intelligence before it governs us.

The Illusion of Consensus

There is no shortage of AI ethics frameworks. By one count, more than 160 sets of AI ethics principles have been published since 2016 by governments, corporations, academic institutions, civil society organizations, and international bodies. They share a remarkable number of words — fairness, transparency, accountability, beneficence, non-maleficence — and an equally remarkable inability to prevent any of the harms they describe.

This is the central paradox of AI ethics in 2026: the proliferation of principles has not produced a proliferation of protections. The frameworks multiply. The harms continue. The gap between what we say about AI and what we do with AI grows wider with each passing year.

Understanding why requires examining not just the content of these frameworks but their structure, their enforcement mechanisms (or lack thereof), their political context, and the interests of those who write them. AI ethics is not a philosophical exercise conducted in a vacuum. It is a contest for power — over technology, over markets, over the future of human autonomy — and the frameworks are as much weapons in that contest as they are guides to right conduct.

This guide maps the landscape of AI ethics frameworks as it exists in early 2026. It covers the major international principles, the corporate ethics boards that have risen and fallen, the growing criticism of ethics washing, the enforcement gap, cultural dimensions of AI ethics, and the uncomfortable questions that no framework has adequately answered.


The Major International Frameworks

IEEE Ethically Aligned Design

The Institute of Electrical and Electronics Engineers published the first edition of “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems” in 2019, after three years of work involving hundreds of experts across multiple working groups. The document runs to over 300 pages and covers ground ranging from classical ethics to affective computing to policy.

The IEEE framework is notable for its ambition and its specificity. Where many frameworks stop at abstract principles, IEEE attempted to provide concrete recommendations for implementation. It organized its analysis around eight general principles, including human rights, well-being, data agency, effectiveness, and transparency. Each principle came with a detailed discussion of technical mechanisms, policy instruments, and organizational practices.

The framework’s strength is its engineering orientation. It was written by and for people who build systems, and it speaks their language. It addresses questions like how to embed values into system design, how to conduct impact assessments, and how to build accountability structures within engineering teams.

Its weakness is the same as every voluntary standard: compliance is optional. IEEE is a professional organization, not a regulatory body. It can set standards; it cannot enforce them. Companies can cite IEEE principles in their marketing materials while ignoring them in their engineering practices, and many do.

The IEEE framework also struggles with what philosophers call the is-ought gap. It describes what AI systems should do but provides limited mechanisms for ensuring they actually do it. The distance between a 300-page vision document and the daily decisions of an engineer optimizing a recommendation algorithm for engagement metrics is vast, and no amount of principled language bridges it on its own.

OECD AI Principles

The Organisation for Economic Co-operation and Development adopted its AI Principles in May 2019, making them the first intergovernmental standard on artificial intelligence. Forty-two countries endorsed the original principles, and the number has grown since. The G20 subsequently drew on the OECD framework for its own AI principles, giving them additional political weight.

The OECD principles are organized around five values-based principles and five recommendations for national policy. The values-based principles call for AI that is beneficial, respects human rights and democratic values, is transparent and explainable, is robust and secure, and is subject to accountability. The policy recommendations address investment in research, fostering a digital ecosystem, shaping an enabling policy environment, building human capacity, and international cooperation.

The OECD framework matters because of its political reach. It represents the closest thing to an international consensus on AI governance. It is also remarkably toothless. The principles contain no enforcement mechanisms, no compliance requirements, no penalties for violation. They are recommendations in the purest sense: suggestions that countries may follow if they find it convenient.

The OECD has supplemented the principles with the AI Policy Observatory, a platform for tracking national AI policies and sharing best practices. This is useful but insufficient. Tracking what countries say they are doing is not the same as verifying what they actually do. And the OECD’s membership skews heavily toward wealthy Western democracies, raising questions about whose values and whose interests the framework truly reflects.

UNESCO Recommendation on the Ethics of Artificial Intelligence

UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states in November 2021, is the most globally inclusive AI ethics instrument ever produced. Its breadth of endorsement is both its greatest strength and its greatest limitation.

The recommendation sets out ten principles: proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, responsibility, awareness, and multi-stakeholder governance. It goes further than the OECD framework in addressing environmental sustainability, cultural diversity, and the needs of developing countries.

UNESCO’s framework explicitly addresses power asymmetries in AI development, acknowledging that the benefits and risks of AI are not evenly distributed. It calls for particular attention to the needs of low- and middle-income countries, indigenous peoples, and marginalized communities. This is a significant departure from frameworks that treat AI governance as a primarily technical problem solvable by technical means.

The limitation is enforcement. UNESCO recommendations are not binding treaties. They represent a consensus aspiration, not a legal obligation. A country can endorse the UNESCO recommendation and simultaneously deploy AI-powered surveillance systems against its own citizens, and several have.

The recommendation also suffers from the consensus trap: to achieve universal endorsement, it had to be vague enough that countries with radically different political systems and values could all agree to it. The result is language that can mean nearly anything and therefore constrains nearly nothing.

Asilomar AI Principles

The Asilomar AI Principles emerged from a 2017 conference organized by the Future of Life Institute, attended by a mix of AI researchers, ethicists, and policy experts. The 23 principles cover research issues, ethics and values, and longer-term concerns about advanced AI.

The Asilomar principles are significant for being among the first major AI ethics documents to address existential risk explicitly. They include principles on capability caution, importance, risks, recursive self-improvement, and common good. They also address the AI arms race, stating that an arms race in lethal autonomous weapons should be avoided.

The principles were signed by thousands of researchers, including prominent figures from across the AI community. Their influence has been more cultural than regulatory: they helped establish a vocabulary and a set of concerns that subsequently shaped government policy discussions and corporate communications.

Critics note that the Asilomar principles emerged from a relatively narrow community — predominantly Western, predominantly male, predominantly from the AI research establishment. The principles reflect the concerns of people who build AI systems, not necessarily the concerns of people most affected by them. The emphasis on existential risk, while important, can overshadow more immediate harms like algorithmic bias and surveillance that disproportionately affect marginalized communities.

Beijing AI Principles

China’s Beijing AI Principles, released in May 2019 by the Beijing Academy of Artificial Intelligence, represent a significant non-Western contribution to the global AI ethics conversation. The principles cover research and development, use, and governance of AI, emphasizing harmony, safety, shared governance, and human privacy.

The Beijing principles share much surface-level language with Western frameworks — they too call for fairness, transparency, and human well-being. The differences lie in emphasis and context. The Beijing principles place greater weight on harmony and social stability, concepts that carry different connotations in a Chinese political context than in a Western liberal democratic one. They also emphasize the role of the state in AI governance more explicitly than most Western frameworks.

The Beijing principles have been criticized as a legitimation strategy: an effort to present China’s approach to AI governance as principled and values-driven while the Chinese state deploys AI for mass surveillance, social credit scoring, and political control. This criticism is valid but not unique to China. Every AI ethics framework serves the political interests of its authors to some degree.


Corporate Ethics Boards: Rise and Fall

Google’s ATEAC

No corporate AI ethics initiative has failed more publicly than Google’s Advanced Technology External Advisory Council (ATEAC). Announced in March 2019 with eight members, the council was dissolved less than two weeks later following employee protests and public outcry over the inclusion of Kay Coles James, president of the Heritage Foundation, whose organization had opposed LGBTQ+ rights and climate science.

The ATEAC debacle exposed several uncomfortable truths about corporate AI ethics. First, that ethics boards assembled for public relations purposes cannot withstand public scrutiny. Second, that selecting members who represent a diversity of political perspectives is not the same as selecting members who are committed to protecting vulnerable communities. Third, that Google’s internal culture — which had already produced employee revolts over Project Maven, the Pentagon drone targeting contract — would not accept ethics governance that it perceived as performative.

Google subsequently relied on its internal AI Principles, published in June 2018 after the Maven controversy. These principles commit Google to building AI that is socially beneficial, avoids creating or reinforcing unfair bias, is safe, is accountable, incorporates privacy design principles, upholds scientific standards, and is limited in its use. They also list applications Google will not pursue, including weapons and surveillance that violates international norms.

Whether Google actually adheres to these principles is a separate question. The company’s subsequent firing of AI ethics researchers Timnit Gebru and Margaret Mitchell — who had raised concerns about the environmental costs and bias risks of large language models — suggested that Google’s commitment to its own principles was conditional on those principles not threatening its business model.

Microsoft AETHER

Microsoft’s AETHER Committee (AI, Ethics, and Effects in Engineering and Research) has operated since 2017 and has been more durable than most corporate ethics initiatives. AETHER is an internal advisory body that reviews sensitive AI use cases and provides guidance to product teams.

Microsoft has also published a Responsible AI Standard, a detailed internal document that translates principles into engineering requirements. The standard covers fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. It includes specific requirements for impact assessments, data documentation, and human oversight.

Microsoft’s approach is more operationally grounded than many competitors’. It has invested in tools like Fairlearn (for bias assessment) and InterpretML (for model explainability) and has published research on responsible AI practices. The company’s Chief Responsible AI Officer role, created in 2023, signals organizational commitment.
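To make concrete what a bias-assessment tool of this kind computes, here is a minimal sketch in plain Python of a disaggregated metric and its disparity. It illustrates the general idea behind tools like Fairlearn's MetricFrame rather than reproducing that library's API, and the data is invented for illustration.

```python
from collections import defaultdict

def group_accuracy(y_true, y_pred, groups):
    """Per-group accuracy: the kind of disaggregated metric a
    bias-assessment tool reports instead of a single overall score."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += (t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy data (hypothetical): true labels, model predictions, and a
# sensitive attribute for each individual.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

by_group = group_accuracy(y_true, y_pred, groups)      # {'a': 0.75, 'b': 0.5}
disparity = max(by_group.values()) - min(by_group.values())  # 0.25
```

The point of the exercise is that "fairness" stops being a slogan the moment it is a number: a 25-point accuracy gap between groups is a finding that an organization must either remediate or explicitly accept.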

Yet Microsoft’s responsible AI efforts have not prevented controversies. The company’s massive investment in OpenAI, its rapid deployment of AI features across its product suite, and its aggressive marketing of AI capabilities have raised questions about whether responsible AI practices are keeping pace with commercial ambitions. The layoffs within Microsoft’s responsible AI team in 2023, even as the company was investing billions in AI deployment, spoke volumes about institutional priorities.

Meta’s Oversight Board

Meta’s Oversight Board occupies a unique position in the corporate ethics landscape. Established in 2020 with a $130 million trust fund, the board operates with genuine independence, including the power to overturn Meta’s content moderation decisions. Its members include former heads of state, legal scholars, human rights advocates, and journalists from around the world.

The Oversight Board has made consequential decisions. In its most prominent case, it upheld Meta’s suspension of Donald Trump’s accounts while ruling that an indefinite suspension was an inappropriate penalty and requiring Meta to set a defined term. It has also issued policy advisory opinions on issues ranging from COVID-19 misinformation to the sharing of private residential addresses.

Critics argue that the Oversight Board addresses content moderation — what appears on Meta’s platforms — while avoiding the more fundamental question of how Meta’s algorithms amplify harmful content in the first place. The board reviews individual content decisions; it does not review the recommendation algorithms that determine which content billions of people see. This is like reviewing individual criminal cases while ignoring the laws that criminalize behavior: necessary but insufficient.

The Oversight Board also operates entirely within Meta’s frame. It can overturn specific content decisions, but it cannot challenge Meta’s business model, its data collection practices, its advertising ecosystem, or its fundamental architecture. It is an ethics mechanism for the symptoms, not the disease.


Ethics Washing: When Principles Become Performance

Ethics washing — the practice of using ethical language and ethics initiatives as a shield against meaningful regulation — has become the dominant critique of corporate AI ethics. The term, coined by analogy with greenwashing, describes a pattern that has become depressingly familiar.

The pattern works as follows: a company publishes AI ethics principles, establishes an ethics board, funds ethics research, and hires ethics staff. These activities generate positive media coverage and create an impression of responsible behavior. Meanwhile, the company’s actual engineering practices, business model, and lobbying efforts remain unchanged or actively undermine the principles the company professes.

The empirical evidence for ethics washing is substantial. A 2022 study by researchers at the University of Oxford found that companies with published AI ethics principles were no less likely to be involved in AI ethics controversies than companies without them. A separate analysis found that many corporate AI ethics boards had no decision-making authority, no access to proprietary systems, and no ability to block or modify products.

The most insidious form of ethics washing is the use of ethics initiatives as arguments against regulation. Companies point to their voluntary frameworks and say: we are governing ourselves; government regulation is unnecessary. This argument has been deployed repeatedly in lobbying against the EU AI Act, proposed U.S. AI legislation, and other regulatory efforts. The message is: trust our principles, not your laws.

The problem is that voluntary principles without enforcement are not governance. They are marketing. Genuine governance requires the power to compel behavior, not just the power to suggest it. No company has ever voluntarily constrained a profitable line of business because an ethics principle told it to — at least not without external pressure from regulators, courts, or sustained public outrage.


The Enforcement Gap

The most critical failure of existing AI ethics frameworks is enforcement. Principles without penalties are suggestions. And suggestions do not constrain behavior when billions of dollars are at stake.

The enforcement gap operates at multiple levels. At the international level, no AI ethics framework carries binding legal force. The OECD principles, UNESCO recommendation, and various bilateral agreements are all voluntary. Countries can endorse them and ignore them simultaneously.

At the national level, enforcement varies dramatically. The EU AI Act, which came into force in stages beginning in 2024, represents the most comprehensive attempt to translate AI ethics principles into enforceable law. It establishes risk categories for AI systems, imposes requirements for high-risk applications, and creates penalties for non-compliance, including fines of up to €35 million or seven percent of global annual turnover for prohibited practices.
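The Act's tiered logic can be sketched as a simple classifier. This is an illustrative simplification, not the Act's legal text: the tier names follow the regulation, but the use-case lists and obligation summaries here are invented shorthand.

```python
# Illustrative sketch of the EU AI Act's risk-tier structure.
# Tier names follow the Act; the use-case sets and obligation
# strings are simplified examples, not legal definitions.

RISK_TIERS = {
    "unacceptable": {"social_scoring", "subliminal_manipulation"},
    "high": {"credit_scoring", "hiring", "medical_device", "law_enforcement"},
    "limited": {"chatbot", "deepfake_generation"},
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, risk management, human oversight, logging",
    "limited": "transparency disclosures (e.g. label AI-generated content)",
    "minimal": "no new obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Map a use case to its risk tier and (summarized) obligations."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier, OBLIGATIONS[tier]
    return "minimal", OBLIGATIONS["minimal"]
```

What distinguishes this structure from a voluntary framework is the final column: each tier carries obligations backed by fines, and "prohibited" means prohibited.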

The United States has taken a more fragmented approach, relying on existing regulatory frameworks (FDA for medical devices, FTC for consumer protection, EEOC for employment discrimination) supplemented by executive orders and agency guidance. This approach has the advantage of leveraging established enforcement mechanisms but the disadvantage of leaving significant gaps, particularly for novel AI applications that do not fall neatly under existing regulatory authority.

China has implemented AI-specific regulations, including rules on algorithmic recommendation systems, deep synthesis (deepfakes), and generative AI. These regulations are enforced by the Cyberspace Administration of China and carry real penalties. However, Chinese AI regulation serves the state’s interests as much as the public’s: it constrains commercial AI companies while leaving state AI applications largely unregulated.

Saudi Arabia, home to HUMAIN and its ambitions to become a global AI leader, has published a national AI ethics framework through SDAIA (Saudi Data and AI Authority). The framework echoes international principles — fairness, transparency, safety, human control — but operates within a governance context where political dissent is criminalized, press freedom is severely restricted, and the government maintains extensive surveillance capabilities. The question of how AI ethics principles function in an authoritarian context is not merely academic. It is the defining challenge of global AI governance.

At the corporate level, enforcement depends entirely on internal governance structures, which are, by definition, controlled by the entities being governed. Asking a company to enforce AI ethics principles against its own commercial interests is asking it to act against its nature. Some do, under pressure. Most do not, absent pressure.


Cultural Relativism in AI Ethics

The global proliferation of AI ethics frameworks has exposed a fundamental tension: whose values should AI systems embody?

Western frameworks tend to emphasize individual autonomy, privacy, non-discrimination, and democratic governance. These values reflect a specific philosophical tradition rooted in Enlightenment liberalism, and they are not universally shared. Framing them as universal AI ethics principles is itself a political act — one that exports a particular worldview while claiming to establish neutral ground.

East Asian frameworks, influenced by Confucian traditions, tend to place greater emphasis on social harmony, collective well-being, and the role of the state in maintaining order. The Beijing AI Principles’ emphasis on “harmony” is not merely rhetorical; it reflects a genuinely different philosophical tradition in which the rights of the individual are balanced against (and sometimes subordinated to) the needs of the community.

Islamic ethical traditions bring yet another perspective. The Quran and Islamic jurisprudence (fiqh) contain extensive discussions of justice (adl), public interest (maslaha), and the prevention of harm (la darar wa la dirar). These traditions offer sophisticated frameworks for reasoning about technology’s social impact, but they are largely absent from the global AI ethics conversation, which remains dominated by Western secular perspectives.

This matters directly for entities like HUMAIN and its ALLAM Arabic language model. AI systems trained on and deployed for Arabic-speaking populations should arguably reflect Arabic and Islamic ethical traditions, not simply import Western frameworks. But what does that mean in practice? Does it mean incorporating Sharia-compliant principles into AI decision-making? Does it mean different fairness criteria? Does it mean different privacy norms?

These questions have no easy answers, and the AI ethics community has barely begun to grapple with them. The default — applying Western frameworks globally and calling them universal — is both intellectually lazy and politically suspect.

Indigenous Perspectives on AI

Indigenous communities around the world have articulated distinct perspectives on AI ethics that challenge the assumptions underlying most mainstream frameworks. These perspectives emphasize relational accountability, data sovereignty, intergenerational responsibility, and the recognition of non-human entities as stakeholders.

The concept of indigenous data sovereignty — the right of indigenous peoples to control the collection, ownership, and application of data about them, their territories, and their resources — directly challenges the data extraction model that underlies most AI development. Major AI systems are trained on data scraped from the internet without the consent of the communities whose knowledge, languages, and cultural expressions are included.

New Zealand’s Te Hiku Media, a Māori media organization, has developed speech recognition technology for the Māori language while maintaining community ownership of the data and models. This represents an alternative paradigm for AI development: one in which the communities that contribute data retain control over how it is used.

Indigenous perspectives also challenge the anthropocentrism of most AI ethics frameworks. Many indigenous traditions recognize relationships of obligation and respect toward non-human entities — animals, plants, rivers, ecosystems. As AI systems increasingly govern resource allocation and environmental management, indigenous perspectives on the moral status of the natural world become directly relevant to AI governance.

Religious Perspectives on AI

Religious traditions offer rich resources for thinking about AI ethics that are underutilized in mainstream frameworks. Catholicism, through the Rome Call for AI Ethics (signed by the Vatican, Microsoft, IBM, FAO, and the Italian government in 2020), has articulated principles of transparency, inclusion, accountability, impartiality, reliability, and security. Pope Francis has spoken repeatedly about the need for AI to serve human dignity and the common good.

Jewish tradition contributes concepts like pikuach nefesh (the obligation to preserve life, which overrides almost all other commandments) and tikkun olam (repairing the world), which provide frameworks for weighing AI’s benefits against its risks. The tradition’s emphasis on argumentation and dissent (machloket) also offers a model for AI governance that embraces rather than suppresses disagreement.

Hindu and Buddhist traditions raise questions about consciousness, suffering, and the nature of mind that are directly relevant to the AI consciousness debate. If consciousness is not uniquely biological — if it can arise in any sufficiently complex information-processing system — then the moral status of AI systems becomes a pressing question, not a science fiction curiosity.

Islamic scholars are actively engaging with AI ethics through the framework of maqasid al-shariah (the objectives of Islamic law), which identifies five essential values: protection of life, intellect, progeny, wealth, and religion. These objectives provide a structured approach to evaluating AI systems that is both principled and practically applicable. The question is whether entities like HUMAIN will genuinely incorporate these frameworks or merely invoke them for legitimation.


The Gap Between Principles and Practice

The most damning critique of AI ethics frameworks is empirical: they do not work. Or more precisely, they do not work as governance mechanisms. They work quite well as public relations instruments.

The evidence for this claim is overwhelming. Despite the proliferation of AI ethics principles calling for transparency, most AI systems remain opaque. Despite principles calling for accountability, most AI harms go unremedied. Despite principles calling for fairness, algorithmic bias persists and in some cases worsens. Despite principles calling for human oversight, autonomous AI systems are making increasingly consequential decisions without meaningful human review.

The gap between principles and practice is not accidental. It is structural. AI ethics frameworks are typically written by committees of academics, lawyers, and policy experts. They are implemented — or not — by engineers working under commercial pressure, using tools and processes that were designed to optimize for performance metrics, not ethical compliance.

Bridging this gap requires not just better principles but better infrastructure: technical tools for bias detection and fairness assessment, organizational processes for ethical review, legal frameworks for accountability, and market incentives for responsible behavior. Some of this infrastructure exists. Most of it does not.
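One piece of that infrastructure is structured documentation, in the spirit of model cards and datasheets for datasets. The sketch below uses an illustrative, hypothetical subset of fields to show how a documentation artifact can become a checkable release gate rather than a prose aspiration.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card-style record. The fields are an
    illustrative subset, not a standard schema: the point is that
    a 'transparency' principle becomes auditable once disclosure
    is a structured artifact a reviewer can check."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    evaluated_groups: list[str]
    known_limitations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A review process can refuse to ship a model whose
        # documentation is missing required disclosures.
        return all([self.model_name, self.intended_use,
                    self.training_data_summary, self.evaluated_groups])
```

An organizational process then has something to enforce: no completed card, no release. That is a small step, but it is the difference between a principle and a practice.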

The AI safety community has been more successful at translating concerns into concrete technical research programs than the AI ethics community has been at translating principles into enforceable practices. This is partly because safety problems are more tractable as engineering challenges and partly because the safety community has been more willing to challenge the assumption that voluntary self-regulation is sufficient.


What Would Real AI Ethics Governance Look Like?

If existing frameworks are insufficient, what would sufficient governance look like? Several elements are clear.

First, it would be binding. Voluntary principles are necessary but not sufficient. Legal requirements with real penalties — as the EU AI Act begins to provide — are the minimum viable governance mechanism.

Second, it would be specific. Abstract principles like “fairness” and “transparency” must be translated into measurable requirements. What level of accuracy disparity across demographic groups is acceptable? What information must be disclosed about how a system was trained? What human oversight is required before an AI system can make a consequential decision? These questions demand specific answers, not aspirational language.
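A specific requirement of this kind ultimately takes the form of a numeric release gate. The sketch below is hypothetical: the 0.05 threshold is an invented policy choice, not a value any framework mandates, but it shows what "measurable" means in practice.

```python
def passes_disparity_gate(metric_by_group: dict[str, float],
                          max_gap: float = 0.05) -> bool:
    """Release gate: fail if the worst-to-best gap in a per-group
    metric (e.g. accuracy) exceeds a policy threshold.
    The 0.05 default is a hypothetical policy choice used for
    illustration, not a mandated value."""
    gap = max(metric_by_group.values()) - min(metric_by_group.values())
    return gap <= max_gap

# Hypothetical audit results for two candidate releases:
passes_disparity_gate({"a": 0.91, "b": 0.89})  # gap 0.02 -> passes
passes_disparity_gate({"a": 0.93, "b": 0.78})  # gap 0.15 -> blocked
```

Whether 0.05 is the right number is exactly the kind of contestable, political question a real governance regime would have to answer in public, rather than leaving each vendor to decide in private.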

Third, it would be resourced. Enforcement requires funding, staff, and technical expertise. Regulators cannot govern AI systems they do not understand. Building regulatory capacity is as important as writing regulations.

Fourth, it would be inclusive. The communities most affected by AI systems — workers displaced by automation, communities subjected to algorithmic surveillance, populations whose data trains systems they will never benefit from — must have meaningful input into governance frameworks. Current frameworks are written primarily by and for the powerful. Genuine AI ethics governance would center the perspectives of the vulnerable.

Fifth, it would be adaptive. AI technology evolves faster than any regulatory framework can keep pace with. Governance mechanisms must be designed to evolve as well, with built-in review processes, sunset clauses, and mechanisms for rapid response to emerging challenges.

Finally, it would be honest about power. AI ethics is not a neutral, technical exercise. It is a political contest over who benefits from AI, who bears its costs, and who decides. Any framework that pretends otherwise — that treats AI governance as a matter of getting the principles right, rather than getting the power dynamics right — is part of the problem, not the solution.


Where We Go From Here

The AI ethics landscape in 2026 is characterized by abundance of principles and scarcity of enforcement. This is not sustainable. As AI systems grow more powerful and more pervasive, the consequences of ungoverned deployment grow more severe. The trolley problems become real. The biases become entrenched. The erosion of human agency accelerates.

The next phase of AI ethics must move beyond frameworks and into institutions. Not ethics boards that advise, but regulators that compel. Not principles that aspire, but laws that require. Not voluntary commitments that evaporate under commercial pressure, but enforceable obligations that constrain behavior regardless of what is profitable.

This will not happen without political struggle. The companies that benefit from ungoverned AI deployment will resist governance. The countries that use AI for authoritarian control will resist accountability. The technical communities that prefer self-regulation will resist external oversight.

But the alternative — a world in which the most powerful technology ever created is governed by nothing more than the good intentions of those who profit from it — is not acceptable. The frameworks exist. The principles are written. What remains is the harder work: building the institutions, the laws, the enforcement mechanisms, and the political coalitions necessary to make those principles real.

The question is not whether we need AI ethics. It is whether we are willing to enforce it.