An Open Letter to HUMAIN: 50 Questions the World Deserves Answered
To the Board of Directors of HUMAIN, CEO Tareq Amin, and His Royal Highness Crown Prince Mohammed bin Salman Al Saud, Chairman of the Public Investment Fund:
We write to you not as adversaries, but as observers. HUMAIN represents the largest concentrated artificial intelligence buildout in history, backed by the resources of a trillion-dollar sovereign wealth fund, partnerships with the world’s leading technology companies, and the full institutional weight of the Kingdom of Saudi Arabia. The scale is extraordinary. The ambition is undeniable.
Yet no entity in history has undertaken a deployment of AI at this magnitude with so little public oversight, independent scrutiny, or transparent accountability. HUMAIN operates at the intersection of sovereign power and artificial intelligence in a way the world has never seen, and the world has a right to understand what is being built, for whom, and under what safeguards.
The questions below are not accusations. They are the minimum that any entity deploying artificial intelligence at this scale, with this concentration of power, owes to the global community. Every question is grounded in publicly available information, official announcements, and established international norms.
We invite HUMAIN to respond publicly to each one.
Governance & Accountability
1. HUMAIN’s chairman is the Crown Prince of Saudi Arabia. Its owner is the Public Investment Fund, chaired by the Crown Prince. Its regulator is SDAIA, which reports to the government led by the Crown Prince. Who provides genuinely independent oversight of HUMAIN’s operations, and how can the public verify that oversight is real?
2. Does HUMAIN have an independent board of directors with the authority to override management decisions, or does governance ultimately flow through PIF and the Crown Prince’s office? If an independent board exists, who sits on it, what are their qualifications, and have any members ever dissented from a board decision?
3. Will HUMAIN commit to publishing comprehensive annual transparency reports detailing its operations, safety incidents, government data requests, content moderation decisions, and financial performance, in a format accessible to international researchers and civil society?
4. What is HUMAIN’s internal safety framework? Is there a dedicated safety team with the authority to halt deployments? What is the reporting chain for safety concerns, and does it have independence from commercial and political leadership?
5. Does HUMAIN have an AI ethics board? If so, who are its members, what authority does it hold, are its deliberations published, and can it veto deployments that it determines to be unsafe or unethical?
6. What whistleblower protections exist for HUMAIN employees, contractors, and partners who identify safety concerns, ethical violations, or misuse of AI systems? In a country ranked 166th of 180 on the Reporters Without Borders Press Freedom Index, what guarantees can HUMAIN offer that individuals who raise concerns will not face retaliation?
7. Has HUMAIN conducted a comprehensive human rights impact assessment of its operations, as recommended by the UN Guiding Principles on Business and Human Rights? If so, will it be published? If not, why not?
Safety & Technical Risk
8. HUMAIN OS is described as deploying over 150 AI agents capable of autonomous action across government and enterprise systems. What testing frameworks, validation protocols, and containment measures ensure these agents cannot cause harm through error, emergent behavior, or adversarial exploitation?
9. What is HUMAIN’s responsible scaling policy? At what capability thresholds does HUMAIN pause deployment to conduct safety evaluations, and who makes that determination? Are there any capabilities HUMAIN has committed never to develop?
10. ALLAM, HUMAIN’s Arabic-language large language model, is described as being fine-tuned for “Saudi cultural context.” Who defines what that cultural context is? What values, norms, and perspectives are encoded, and which are excluded? Is there documentation of these decisions available for independent review?
11. Has ALLAM been evaluated by independent third parties for bias, including religious bias, gender bias, political bias, and bias against minority communities within and outside the Kingdom? Will HUMAIN publish evaluation results?
12. What red-teaming and adversarial testing has been conducted on HUMAIN’s models and agent systems? Has HUMAIN engaged external red teams, and will it publish the methodologies and summary findings of those exercises?
13. When 150+ autonomous agents are deployed across critical government and enterprise infrastructure, what guardrails prevent those agents from taking harmful, irreversible, or unauthorized actions? What is the human-in-the-loop policy, and under what circumstances can agents act without human approval?
14. Has HUMAIN evaluated its models and agent systems for dangerous capabilities, including the ability to assist in the development of weapons, conduct surveillance, generate disinformation, or undermine democratic processes? What were the results, and what mitigations are in place?
Data Sovereignty & Privacy
15. Saudi Arabia’s Personal Data Protection Law went into effect in September 2024. How does HUMAIN’s data handling compare to the protections offered by the EU’s GDPR, and where does the PDPL fall short? What additional protections, if any, does HUMAIN voluntarily adopt?
16. Under what legal authorities can the Saudi government access data processed by HUMAIN’s systems? Is there a transparency mechanism for disclosing the volume and nature of government data requests, and will HUMAIN commit to publishing a government access transparency report?
17. HUMAIN’s $3 billion investment in xAI, Elon Musk’s AI company, includes the potential for data flows between X (formerly Twitter) and HUMAIN’s infrastructure. What data-sharing agreements exist between xAI, X, and HUMAIN? Do users of X know their data may flow to Saudi-controlled infrastructure?
18. For HUMAIN’s international clients and partners, what data localization guarantees are offered? Can clients ensure their data never leaves a specific jurisdiction, and how is this verified?
19. HUMAIN’s partnership with Groq involves deploying inference infrastructure in Saudi Arabia. What data residency and privacy protections apply to data processed through Groq hardware on Saudi soil, and do they differ from protections in Groq’s other operational jurisdictions?
20. When HUMAIN processes data for government ministries, including health, education, and security, what segmentation exists between government surveillance capabilities and HUMAIN’s commercial AI operations? How is mission creep prevented?
Geopolitics & Concentration of Power
21. HUMAIN’s partnership with SpaceX reportedly involves PIF taking an equity stake. When a sovereign wealth fund controlled by an absolute monarchy holds equity in the company that operates the world’s dominant satellite communications network, what safeguards prevent the weaponization of communications infrastructure?
22. HUMAIN has announced partnerships totaling over $23 billion with companies including Google, Amazon, Oracle, AMD, Groq, and xAI. When a single sovereign-backed entity holds this concentration of commercial AI relationships, what prevents it from leveraging those relationships for geopolitical advantage?
23. PIF’s $1.1 trillion portfolio includes stakes in companies across every major sector globally. When PIF-owned HUMAIN deploys AI agents for “enterprise optimization,” what prevents HUMAIN’s AI from providing preferential intelligence, insight, or operational advantages to PIF portfolio companies over their competitors?
24. Saudi Arabia is the world’s largest oil exporter and is now building one of the world’s largest AI infrastructure platforms. When one nation controls both the energy that powers AI and the AI infrastructure itself, what prevents the strategic bundling of energy and compute access as a geopolitical tool?
25. Has HUMAIN conducted an assessment of potential military or intelligence applications of its technology? What policies prevent HUMAIN’s AI capabilities from being used for military targeting, mass surveillance, predictive policing, or suppression of dissent?
Environmental Impact
26. What is the total energy consumption of HUMAIN’s data center buildout, both currently operational and projected? Given the multi-gigawatt scale announced, how does HUMAIN reconcile this energy demand with global climate commitments?
27. Operating hyperscale data centers in a desert climate with ambient temperatures regularly exceeding 45 degrees Celsius requires extraordinary cooling overhead. What is the energy penalty of desert cooling compared to temperate-climate alternatives, and how does HUMAIN account for this inefficiency?
28. What is HUMAIN’s total water consumption for data center cooling? In a country classified as one of the most water-stressed on Earth, how does HUMAIN justify the water demands of hyperscale computing, and what alternative cooling technologies are being deployed?
29. What is HUMAIN’s total carbon footprint, including Scope 1, 2, and 3 emissions across its operations, supply chain, and energy consumption? Will HUMAIN commit to independent verification and annual disclosure of its carbon emissions?
30. HUMAIN is backed by PIF, which derives a significant portion of its assets from Saudi Aramco, the world’s largest oil company. How does HUMAIN reconcile its environmental commitments with the fundamental economic interest of its owner in continued fossil fuel extraction and consumption?
Economic Impact & Labor
31. What percentage of HUMAIN’s technical workforce consists of Saudi nationals, and what is the company’s Saudization plan? How does HUMAIN balance international talent recruitment with the Kingdom’s national employment mandates?
32. HUMAIN OS deploys AI agents described as capable of replacing human workers across government and enterprise functions. What assessment has HUMAIN conducted of the labor displacement impact of its technology, and what mitigation measures are in place for workers whose roles are automated?
33. The construction of HUMAIN’s data center infrastructure at the scale announced will require tens of thousands of construction workers. What labor standards, wage protections, and working conditions are guaranteed for construction workers, including migrant workers, on HUMAIN projects?
34. HUMAIN’s $100 million venture fund invests in AI startups that may become dependent on HUMAIN’s infrastructure. What terms govern these investments, and how does HUMAIN prevent its venture activities from creating a captive ecosystem that stifles competition?
35. When HUMAIN deploys AI agents across an entire nation’s government infrastructure, what prevents the creation of a single point of failure, and what sovereign risk does this concentration of dependency pose to the Kingdom itself?
International Standards & Cooperation
36. Is HUMAIN a signatory or supporter of the Bletchley Declaration on AI Safety? If so, how is HUMAIN implementing the commitments made in that declaration? If not, why not?
37. Has HUMAIN submitted any of its models or agent systems for evaluation by the UK AI Safety Institute, the US AI Safety Institute, or any equivalent international body? If not, will it commit to doing so?
38. Does HUMAIN support the establishment of a binding international framework for AI governance under the United Nations? What role does HUMAIN see for itself in shaping global AI norms, and how does it reconcile that role with its status as a sovereign-backed entity?
39. As HUMAIN expands its commercial operations to serve international clients, will it commit to complying with the EU AI Act’s requirements for high-risk AI systems, regardless of where those systems are deployed?
40. ALLAM is described as culturally tailored for Saudi Arabia. When this model is deployed across the broader Arabic-speaking world, how does HUMAIN account for the vast cultural, political, and religious diversity across Arabic-speaking populations? Is cultural tailoring disclosed to end users?
The Fundamental Questions
41. HUMAIN’s website at humain.ai redirects to humain.com, a domain choice that obscures the “.ai” suffix that would immediately signal the company’s artificial intelligence focus. Why?
42. HUMAIN’s name is derived from “human.” How does the company reconcile branding itself around humanity with operating under the authority of a government whose human rights record is consistently documented by Amnesty International, Human Rights Watch, and the United Nations as among the most concerning in the world?
43. If a HUMAIN autonomous agent causes harm, whether through error, bias, or emergent behavior, who is legally liable? The agent operator? HUMAIN? PIF? The Saudi government? Has this liability framework been established, and is it publicly available?
44. When a single sovereign entity controls AI infrastructure at the scale HUMAIN is building, with no independent oversight, limited press freedom, and no democratic accountability, does that concentration of power itself constitute a threat, regardless of intent?
45. What specific technical and institutional safeguards prevent HUMAIN’s AI capabilities from being used by state security services for surveillance, identification, tracking, or suppression of dissidents, journalists, activists, or minority communities?
46. Does HUMAIN believe that AI systems operating at the scale of national infrastructure require human oversight that is independent of the entity deploying those systems? If so, what does that oversight look like at HUMAIN? If not, why not?
47. What concrete, verifiable actions has HUMAIN taken, not planned, not announced, but actually implemented, to earn the trust of the international community, civil society, and the people whose data it processes and whose lives its systems affect?
48. Will HUMAIN respond to this letter? Will it respond publicly, in detail, and on the record? Or will silence confirm what many already suspect: that HUMAIN believes it operates beyond the reach of public accountability?
49. Will HUMAIN commit to a standing transparency framework, including annual public reporting, independent safety audits, human rights impact assessments, and engagement with international civil society, not as a matter of goodwill, but as a binding institutional obligation?
50. We ask this final question not of HUMAIN alone, but of the world that is watching: When the largest AI buildout in history is conducted by a sovereign entity with no independent oversight, no free press, no democratic accountability, and a documented record of silencing dissent, is it enough to ask questions? Or must the international community demand answers?
A Note on Our Intent
This letter is published in the spirit of accountability, not antagonism. HUMAIN may prove to be a responsible steward of extraordinary power. We hope it does. But hope is not a governance framework, and trust must be earned through transparency, not demanded through scale.
We invite HUMAIN to respond publicly to these questions. We will publish any response in full, unedited, alongside this letter.
For those inside or adjacent to HUMAIN who wish to share information confidentially, our secure tip line is available at inhumain.ai/tips. We protect our sources absolutely.
Signed,
The Editors, INHUMAIN.AI
February 2026