The Erosion of Human Agency: When AI Decides for You
An investigation into how AI systems are eroding human autonomy: recommendation algorithms, algorithmic management, attention capture, de-skilling, surveillance capitalism, filter bubbles, and the rise of agentic AI that acts without asking.
The Quiet Automation of Choice
Human agency — the capacity to make meaningful choices about one’s own life — is being eroded not by dramatic acts of technological oppression but by the quiet accumulation of algorithmic decisions that substitute machine judgment for human judgment, one micro-choice at a time.
You did not choose what appeared in your social media feed this morning. An algorithm chose. You did not choose which news stories were emphasized and which were buried. An algorithm chose. You did not choose the order in which your search results appeared, shaping your perception of what exists and what matters. An algorithm chose. If you applied for a job recently, an algorithm may have decided whether your resume was seen by a human. If you applied for a loan, an algorithm may have determined your creditworthiness. If you called customer service, an algorithm may have decided how long you waited.
None of these individual decisions feels like a loss of agency. Each is a small convenience, a minor optimization, a trivial delegation of a task you did not want to perform anyway. But the aggregate effect is a fundamental shift in the locus of decision-making from humans to machines — a shift that is reshaping behavior, narrowing choice, concentrating power, and hollowing out the capacity for autonomous action that is the foundation of human dignity.
This investigation examines the mechanisms by which AI systems erode human agency, the industries and institutions that profit from that erosion, and the implications for a future in which agentic AI systems — systems like HUMAIN OS that are designed to act autonomously on behalf of users — extend the logic of agency erosion to its ultimate conclusion.
Recommendation Algorithms: The Architecture of Persuasion
Recommendation algorithms are the most pervasive AI systems on Earth. They determine what billions of people see, read, watch, listen to, and buy. They operate on every major digital platform: Google, Facebook, Instagram, TikTok, YouTube, Netflix, Amazon, Spotify, Twitter/X, and their countless smaller competitors. Their combined influence on human behavior is without historical precedent.
The business model underlying recommendation algorithms is straightforward: maximize engagement. The longer users spend on a platform, the more advertisements they see, and the more revenue the platform generates. Recommendation algorithms are optimized not for user well-being, not for truth, not for social cohesion, but for attention capture.
This optimization produces well-documented pathologies. Content that provokes strong emotional reactions — outrage, fear, disgust, moral indignation — generates more engagement than content that informs, educates, or calms. Recommendation algorithms therefore systematically amplify inflammatory content and suppress moderate content. This is not a bug. It is the inevitable consequence of optimizing for engagement in a species whose attention is most reliably captured by perceived threats and moral violations.
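The optimization pressure can be made concrete with a toy ranker. Everything below is hypothetical (the weights, the post fields, the predicted counts are invented for illustration, not drawn from any platform's actual system), but the structure shows why a score built only from engagement signals will place provocative content above informative content:

```python
# Toy illustration with invented weights: a ranker that scores posts purely
# by predicted engagement. Nothing in the score measures accuracy,
# usefulness, or user well-being, so reaction-heavy content dominates.

def engagement_score(post):
    # Shares and angry reactions are strong engagement signals, so an
    # engagement-only objective weights them heavily.
    return (2.0 * post["predicted_shares"]
            + 1.5 * post["predicted_angry_reactions"]
            + 1.0 * post["predicted_comments"]
            + 0.5 * post["predicted_likes"])

posts = [
    {"title": "Measured policy explainer", "predicted_shares": 40,
     "predicted_angry_reactions": 5, "predicted_comments": 30,
     "predicted_likes": 300},
    {"title": "Outrage-bait rumor", "predicted_shares": 400,
     "predicted_angry_reactions": 900, "predicted_comments": 600,
     "predicted_likes": 150},
]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # the rumor ranks first
```

No individual weight here is malicious; the amplification of inflammatory content falls out of the objective itself, which is the point the documents and research discussed below confirm at scale.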
The internal documents leaked by Facebook whistleblower Frances Haugen in 2021 confirmed what researchers had long suspected: Facebook’s own internal research showed that its algorithms amplified divisive content, that Instagram was harmful to teenage mental health, and that the company was aware of these effects and chose not to address them because doing so would reduce engagement.
YouTube’s recommendation algorithm has been documented sending users down radicalization pathways, recommending progressively more extreme content because extreme content generates higher engagement. A viewer who watches a mainstream political video may be recommended a partisan video, then an extremist video, then a conspiracy theory. The algorithm is not trying to radicalize anyone. It is trying to keep them watching. Radicalization is a side effect of engagement optimization.
The impact on human agency is profound. When an algorithm determines what information a person encounters, it shapes their beliefs, preferences, and worldview. The person experiences their beliefs as their own — as the product of autonomous reasoning about freely accessed information. But the information was not freely accessed. It was curated by an algorithm whose objectives have nothing to do with the person’s interests and everything to do with the platform’s revenue.
AI in Hiring, Lending, and Criminal Justice
Recommendation algorithms shape what people see. AI decision-making systems in hiring, lending, and criminal justice shape what people can do — what jobs they can get, what loans they can access, and whether they go to prison.
Algorithmic Hiring
AI hiring systems have proliferated rapidly. HireVue, which uses AI to analyze video interviews, has assessed millions of candidates. Automated resume screening systems are used by a majority of Fortune 500 companies. These systems promise to reduce bias by replacing subjective human judgment with objective algorithmic assessment.
The promise has not been fulfilled. Amazon developed an AI hiring tool that was trained on the company’s historical hiring data. Because Amazon had historically hired predominantly men, the system learned to penalize resumes that included indicators of femaleness — attendance at women’s colleges, membership in women’s organizations. Amazon abandoned the tool in 2018, but the lesson extends far beyond one company: AI systems trained on historical data inherit and amplify the biases embedded in that data.
HireVue’s video analysis system, which assessed candidates’ facial expressions, word choice, and tone of voice, was criticized by AI ethics researchers and civil rights organizations for lacking scientific validity and for potential racial and disability bias. The company discontinued the facial analysis component in 2021, but continues to use AI for candidate assessment.
The impact on agency is direct. When an algorithm rejects a job application, the applicant typically receives no explanation and has no recourse. They do not know what criteria the algorithm used, whether those criteria were relevant, or whether the algorithm was biased against them. They cannot argue their case. They cannot appeal. Their agency in the job market has been reduced to a set of data points that a machine evaluates according to criteria they cannot see.
Algorithmic Lending
AI lending systems determine creditworthiness, interest rates, and loan approvals for millions of people. They process more data than traditional credit scoring, including non-traditional indicators like social media activity, purchasing patterns, and geographic location. Proponents argue that this enables more nuanced and accurate risk assessment. Critics note that many of the non-traditional indicators serve as proxies for race, gender, and socioeconomic status.
The Consumer Financial Protection Bureau and academic researchers have documented cases where AI lending systems charged higher interest rates to Black and Hispanic borrowers than to white borrowers with similar credit profiles. The discrimination was not intentional; it was an emergent property of algorithms trained on data that reflected decades of discriminatory lending practices.
Criminal Justice Algorithms
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system, used in courtrooms across the United States to predict recidivism risk, became a flashpoint for debate about AI in criminal justice after a 2016 ProPublica investigation found that the system was significantly more likely to falsely flag Black defendants as high-risk than white defendants.
Northpointe (now Equivant), the company that developed COMPAS, disputed ProPublica’s analysis, arguing that the system was equally accurate across racial groups when measured by a different fairness metric. This disagreement illustrated a fundamental problem: there are multiple mathematically valid definitions of fairness, and they cannot all be satisfied simultaneously. The choice of fairness metric is itself a moral and political decision, not a technical one — a fact that the deployment of AI in criminal justice obscures.
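The incompatibility is not a matter of interpretation; it follows from arithmetic. The sketch below uses synthetic cohorts (the group sizes, base rates, and error rates are invented, not COMPAS data) to show that a classifier with identical error rates in two groups necessarily has different precision when the groups' base rates differ:

```python
# Synthetic illustration of the fairness-metric conflict: give two groups
# the exact same classifier behavior (true positive rate 0.8, false
# positive rate 0.2). When base rates differ, precision must differ, so
# equalized error rates and calibration cannot both hold.

def ppv_for_group(n, base_rate, tpr, fpr):
    positives = n * base_rate   # people who actually reoffend
    negatives = n - positives
    tp = tpr * positives        # correctly flagged high-risk
    fp = fpr * negatives        # wrongly flagged high-risk
    return tp / (tp + fp)       # precision: P(reoffends | flagged)

ppv_a = ppv_for_group(1000, 0.5, 0.8, 0.2)  # group A: 50% base rate
ppv_b = ppv_for_group(1000, 0.2, 0.8, 0.2)  # group B: 20% base rate

print(f"precision, group A: {ppv_a:.2f}")  # 0.80
print(f"precision, group B: {ppv_b:.2f}")  # 0.50
```

A "high-risk" label means an 80 percent chance of reoffending in one group and a 50 percent chance in the other, even though the classifier treats individuals in both groups identically. Choosing which disparity to tolerate is exactly the moral and political decision the text describes.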
Judges who use COMPAS and similar tools often treat them as objective and authoritative, despite the systems’ documented limitations. The aura of algorithmic objectivity can override human judgment, leading judges to impose harsher sentences on defendants rated high-risk even when their own assessment suggests otherwise. This is automation bias — the tendency to defer to automated systems even when human judgment would produce a better outcome — and it represents a direct erosion of judicial agency.
Algorithmic Management: The Boss Is an Algorithm
Algorithmic management — the use of AI systems to direct, monitor, and evaluate workers — has transformed labor relations in ways that represent perhaps the most acute erosion of human agency in the workplace since the assembly line.
Amazon Warehouses
Amazon’s fulfillment centers are managed primarily by algorithms. Workers receive tasks, routes, and timing requirements from handheld devices controlled by AI systems. The systems track every movement, measure productivity in real time, and can automatically generate warnings or termination notices for workers who fall below algorithmic standards.
Reporting by The Verge, Bloomberg, and other outlets has documented injury rates at Amazon warehouses that are significantly higher than industry averages — a consequence, critics argue, of algorithmic productivity targets that prioritize speed over safety. The algorithms do not intend to injure workers. They optimize for throughput. Injuries are an externality the algorithm does not measure and therefore does not minimize.
Workers in Amazon warehouses describe a loss of autonomy that goes beyond traditional industrial management. A human supervisor can be reasoned with, can make exceptions, can exercise discretion based on context. An algorithm cannot. When the system tells a worker they have 11 seconds to locate, scan, and bin an item, the worker does it in 11 seconds or faces automated consequences. The system does not understand that the worker is recovering from an injury, that the item is in an awkward location, or that the pace is unsustainable. It understands only throughput.
Gig Economy Workers
Uber, Lyft, DoorDash, and other gig economy platforms use algorithmic management to control workers who are classified as independent contractors. The algorithms determine which rides or deliveries are offered, at what price, and under what conditions. They use behavioral nudges — surge pricing, quest bonuses, gamification — to manipulate worker behavior without issuing direct orders.
This creates a peculiar form of control: the platform exercises the functional authority of an employer while avoiding the legal obligations of employment. The algorithm tells drivers where to go, how to get there, and what they will be paid. It rates their performance, restricts their access to the platform if performance drops, and can deactivate their accounts without explanation or appeal. Workers experience themselves as free agents making independent choices, but the choices are structured by an algorithm that leaves little room for genuine autonomy.
Research by Alex Rosenblat and Luke Stark has documented how Uber uses informational asymmetries — showing drivers only partial information about rides before acceptance — to manipulate behavior in ways that benefit the platform at workers’ expense. The algorithm knows more than the worker, and it uses that knowledge advantage to steer behavior. This is not coercion in the traditional sense. It is something more subtle and potentially more insidious: the engineering of choice architecture to produce desired behavior while maintaining the appearance of freedom.
The Attention Economy and Digital Addiction
The attention economy — the economic model in which human attention is the scarce resource that platforms compete to capture — represents a systematic assault on the capacity for autonomous action.
The mechanisms are well documented: variable ratio reinforcement (the slot machine logic of social media feeds), infinite scroll (removing natural stopping points), push notifications (interrupting autonomous activity to redirect attention to the platform), streaks and badges (creating artificial obligations), and social validation (leveraging the human need for social approval to drive engagement).
These mechanisms are not incidental features of digital platforms. They are the product of deliberate design by teams of engineers, psychologists, and behavioral economists whose explicit goal is to maximize the time users spend on the platform. Aza Raskin, who invented the infinite scroll, has described his creation as one of his greatest regrets. Tristan Harris, a former Google design ethicist, founded the Center for Humane Technology to advocate for design practices that respect rather than exploit human cognitive vulnerabilities.
The impact on agency is both individual and collective. At the individual level, attention capture reduces the time and cognitive resources available for autonomous decision-making. A person who spends four hours a day on social media — the average for U.S. adults, according to some estimates — has four fewer hours available for reflection, deliberation, and self-directed activity. The choices they make during those four hours — what to read, what to watch, what to engage with — are substantially determined by algorithms, not by autonomous preference.
At the collective level, the attention economy undermines the conditions for democratic self-governance. Democracy requires an informed citizenry capable of reasoned deliberation. Recommendation algorithms produce a fragmented, polarized, and misinformed citizenry optimized for engagement rather than understanding. The erosion of individual attention is simultaneously an erosion of collective capacity for self-rule.
Loss of Human Skills: De-Skilling
One of the less-discussed but most consequential effects of AI systems on human agency is de-skilling: the atrophy of human capabilities that are no longer exercised because AI systems perform them.
The phenomenon is well established in aviation. As cockpit automation has increased, pilots’ manual flying skills have deteriorated. The Federal Aviation Administration has warned that pilots’ over-reliance on automation is a safety risk, particularly in situations where automation fails and pilots must take manual control. Several crashes, including Air France Flight 447 in 2009 and Asiana Airlines Flight 214 in 2013, have been attributed in part to pilots’ inability to fly manually after automation failures.
The pattern extends to other domains. GPS navigation has atrophied spatial navigation skills. Research by cognitive scientists at University College London found that London taxi drivers, who must pass an exhaustive test of London’s street layout (The Knowledge), developed significantly larger hippocampi (the brain region associated with spatial memory) than control subjects. If GPS eliminates the need for The Knowledge, future taxi drivers will lack that neural development — and the cognitive benefits that accompany it.
AI writing tools threaten to de-skill written communication. AI diagnostic tools threaten to de-skill clinical reasoning. AI coding assistants threaten to de-skill programming (paradoxically, since the field created them). AI decision-support systems in management threaten to de-skill judgment.
The pattern is consistent: each AI system that augments or replaces a human capability creates a dependency on that system. The dependency reduces the human capacity to perform the function independently. The reduced capacity increases the dependency. The cycle continues until the human cannot function without the AI system — at which point the “tool” has become the agent and the human has become its dependent.
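The cycle can be sketched as a toy dynamical model. All the coefficients below are invented for illustration; the point is the structure, in which each individually rational delegation makes the next delegation more likely:

```python
# Toy feedback-loop model with hypothetical coefficients: each period the
# person either practices a skill or delegates to a steadily improving
# tool. Delegating is rational whenever the tool outperforms current
# skill, but unused skill atrophies, so delegation compounds.

skill, tool = 0.90, 0.70  # initial human skill vs. tool capability
history = [skill]
for _ in range(30):
    tool = min(0.99, tool + 0.02)        # the tool keeps improving
    if skill >= tool:
        skill = min(1.0, skill + 0.004)  # practice maintains the skill
    else:
        skill *= 0.95                    # disuse: the skill decays
    history.append(skill)

print(f"final skill {skill:.2f} vs tool {tool:.2f}")
```

In this sketch the person begins more capable than the tool, crosses below it after a dozen periods, and from then on never has a reason to practice again; skill ends far below where it started. The numbers are arbitrary, but the one-way ratchet is the dynamic the aviation and navigation examples above exhibit.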
This has profound implications for agency. Agency requires capability. A person who cannot navigate without GPS, cannot write without AI assistance, cannot make decisions without algorithmic support, has less agency than a person who can. They are not less intelligent; they are less capable, because the capabilities have been outsourced to systems they do not control and may not understand.
Surveillance Capitalism
Shoshana Zuboff’s concept of surveillance capitalism describes an economic system in which human experience is the raw material for extracting behavioral data, which is then used to predict and modify behavior for profit. The concept is directly relevant to the erosion of human agency because it describes a system in which agency is not just diminished but commodified.
Under surveillance capitalism, every click, scroll, search, purchase, location, conversation, and physiological response is captured, analyzed, and used to build predictive models of behavior. These models are then sold to advertisers and other clients who use them to target individuals with messages designed to influence their behavior in specific, commercially valuable ways.
The erosion of agency is built into the business model. The more effectively a platform can predict and modify behavior, the more valuable its behavioral data becomes. The system is optimized not for serving users but for rendering users more predictable and more influenceable. Autonomy is the enemy of prediction, and prediction is the product.
Zuboff describes this as the “instrumentarian power” of surveillance capitalism: the power to shape behavior at scale without the knowledge or consent of the people being shaped. This is distinct from authoritarian power, which operates through coercion. Instrumentarian power operates through information asymmetry and behavioral engineering. The subject does not know they are being influenced and experiences their modified behavior as autonomous choice.
The scale of surveillance capitalism is difficult to comprehend. Google processes over 8.5 billion searches per day. Facebook has nearly 3 billion monthly active users. Amazon tracks every product viewed, every search query, every purchase, and every return. The behavioral data extracted from these interactions is used to build models of individual behavior that are, in some respects, more predictive of a person’s actions than their own self-knowledge.
Filter Bubbles and Epistemic Erosion
Eli Pariser introduced the concept of the “filter bubble” in 2011: the personalized information environment created by recommendation algorithms that show users content consistent with their existing beliefs and preferences, while filtering out content that challenges or contradicts them.
Filter bubbles erode agency by narrowing the information available for decision-making. A person who believes they are surveying a broad landscape of information and forming independent judgments is actually surveying a curated subset of information selected to confirm their existing views. Their sense of epistemic autonomy — their belief that they are thinking for themselves — is an illusion maintained by the very system that is constraining their thinking.
The empirical evidence for filter bubbles is mixed. Some studies have found that social media users are exposed to more diverse viewpoints than they would encounter through traditional media. Others have found significant polarization effects, particularly for politically engaged users. The picture is complex, but the underlying mechanism is clear: recommendation algorithms optimize for engagement, and engagement is maximized by content that confirms existing beliefs and provokes emotional reactions.
The epistemic erosion extends beyond individual filter bubbles to what researchers call “epistemic fragmentation” — the breakdown of shared informational common ground. When different groups are shown fundamentally different versions of reality by their algorithms, the possibility of reasoned democratic deliberation diminishes. You cannot argue productively with someone who inhabits a different informational universe.
Agentic AI: The Final Frontier of Agency Erosion
The most recent and potentially most consequential development in the erosion of human agency is the rise of agentic AI systems: AI systems designed not merely to recommend or assist but to act autonomously on behalf of users.
Agentic AI systems go beyond answering questions or making suggestions. They make decisions, take actions, and interact with the world without waiting for human approval at each step. They book appointments, send messages, make purchases, negotiate on behalf of their users, and manage complex multi-step workflows.
The commercial logic is compelling. If an AI assistant can handle routine decisions autonomously — filtering emails, scheduling meetings, managing subscriptions, responding to routine communications — it frees the user to focus on higher-value activities. The user delegates the mundane and retains the meaningful.
But the boundary between mundane and meaningful is not fixed. It shifts with each delegation. As users grow accustomed to delegating routine decisions, the definition of “routine” expands. Tasks that once required conscious deliberation become automated. The user’s zone of autonomous decision-making shrinks, while the AI system’s zone of autonomous action grows.
HUMAIN OS, with its claim to “understand human intent” and make autonomous decisions, represents this trajectory in its most ambitious form. If the system works as described, it will act as an intermediary between the user and the world, making decisions on the user’s behalf based on its model of the user’s preferences and intentions.
The questions this raises are fundamental. When an AI system acts on your behalf, whose values guide its actions? If HUMAIN OS makes a decision you would not have made, is that a failure of the system or an erosion of your autonomy? If you do not review the decisions the system makes on your behalf — because the whole point is that you do not have to — how do you know whether the system is acting in your interest or in the interest of its creators?
The alignment problem is often framed as a technical challenge: how to ensure that AI systems pursue the right objectives. But for agentic AI systems deployed at scale, the alignment problem is also a political challenge. Whose objectives count as “right”? Who decides? And what happens when the system’s autonomy conflicts with the user’s autonomy — when the AI’s model of what the user wants diverges from what the user actually wants?
The Concentration of Agentic Power
The erosion of individual human agency is accompanied by a concentration of agentic power in the entities that control AI systems. As individual choices are delegated to algorithms, the power to shape those algorithms — to determine what is recommended, what is suppressed, what is optimized for — becomes increasingly consequential.
A small number of companies control the AI systems that mediate the information, economic, and social activities of billions of people. Google determines what most people find when they search for information. Facebook determines what most people see in their social feeds. Amazon determines what most people encounter when they shop. These companies exercise a form of power that has no historical precedent: the power to shape the informational and behavioral environment of a significant fraction of the human population.
This concentration of agentic power is self-reinforcing. The more data a company collects, the better its algorithms become. The better its algorithms become, the more users it attracts. The more users it attracts, the more data it collects. The cycle produces market dominance that is nearly impossible to challenge, because competitors cannot replicate the data advantage that makes the dominant player’s algorithms superior.
For HUMAIN and similar state-backed AI entities, the concentration of agentic power takes on additional dimensions. When the entity controlling the AI system is not a corporation accountable to shareholders but a sovereign wealth fund accountable to an authoritarian government, the potential for abuse of agentic power is magnified. The same system that optimizes user experience can optimize surveillance. The same system that personalizes recommendations can personalize propaganda. The same system that manages workflows can manage populations.
Reclaiming Agency
The erosion of human agency by AI systems is not inevitable. It is the product of specific design choices, business models, and regulatory failures, all of which are within human power to change.
Reclaiming agency requires action at multiple levels. At the design level, AI systems can be built to enhance rather than erode autonomy: to present options rather than make decisions, to explain their reasoning rather than obscure it, to empower users rather than capture them. The Center for Humane Technology and similar organizations have proposed design principles that prioritize human well-being over engagement.
At the regulatory level, governments can mandate transparency in algorithmic decision-making, require human oversight for consequential AI decisions, restrict the use of behavioral manipulation techniques, and enforce data protection laws that limit the raw material available for surveillance capitalism. The EU AI Act, the GDPR, and emerging AI regulations in other jurisdictions represent steps in this direction, though enforcement remains a challenge.
At the individual level, people can develop awareness of the mechanisms by which their choices are shaped, exercise deliberate resistance to algorithmic influence, and demand alternatives to systems that erode their autonomy. This is easier said than done — the systems are designed to be invisible, and opting out carries real costs — but awareness is the prerequisite for action.
At the systemic level, the concentration of agentic power in a small number of entities must be addressed through competition policy, data portability requirements, interoperability mandates, and, where necessary, structural separation. An economy in which a handful of companies control the informational and behavioral environment of billions of people is not compatible with meaningful human agency, regardless of how benevolent those companies claim to be.
The stakes are high. Human agency is not just a personal value; it is the foundation of democratic governance, moral responsibility, and human dignity. A world in which meaningful choices are made by algorithms rather than people — in which human beings are optimized rather than autonomous — is a world in which the concepts of freedom, responsibility, and dignity lose their meaning.
The AI systems are getting more capable. The question is whether we are getting more vigilant. The ethics frameworks exist. The biases are documented. The trolley problems are identified. What remains is the political will to ensure that AI serves human agency rather than replacing it.