4.3.1 Sense of Control
Tom Martinez has been a customer service manager at a mid-sized insurance company for eight years. He built his team from the ground up—hired people, trained them, developed workflows, handled escalations. The work was demanding, but he had control. He could make decisions about how to handle difficult cases, when to make exceptions to policy, how to prioritize competing demands.
In early 2024, his company deployed an AI-powered customer service system.
The AI handles routine inquiries automatically. It routes complex cases to human agents. It suggests responses. It evaluates agent performance in real-time. It even recommends which cases Tom should personally review based on risk scores.
Initially, Tom saw the AI as a tool under his supervision. He'd review its recommendations, override them when appropriate, and use his judgment for complex situations.
But over the past eighteen months, the balance has shifted.
Now, when Tom wants to override an AI recommendation, he needs to document his reasoning in the system. His overrides are tracked, analyzed, and reviewed by upper management. If his override rate is too high, it's flagged as a performance issue—evidence that he's not "leveraging AI effectively."
When he wants to adjust team workflows, he can't just implement changes. The AI system has built-in processes that his team is expected to follow. Deviation requires approval from IT, compliance, and operations—a weeks-long process for changes that used to take hours.
When he evaluates his team's performance, he's expected to rely primarily on AI-generated metrics: response time, resolution rate, customer satisfaction scores. His personal judgment about an agent's growth, problem-solving ability, or potential is secondary to what the AI reports.
Tom still has his manager title. He still makes decisions. But increasingly, those decisions are constrained by AI-generated parameters, processes, and recommendations. He has lost autonomy without losing responsibility. He feels less like a manager and more like a middle layer between the AI and his team—enforcing the AI's logic rather than exercising his own judgment. The psychological cost is significant: he is stressed, resentful, and increasingly disengaged from work that once felt meaningful precisely because it allowed him to decide how it was done.
The Autonomy Paradox
By 2025, 75% of workers were using AI in the workplace, and a troubling pattern had emerged. Although AI is marketed as empowering workers, many experience the opposite: agency decay, the gradual erosion of autonomous decision-making capability and of the perceived ability to function independently of AI systems.
The paradox is structural. AI systems are designed to optimize processes, reduce variability, and standardize workflows. These goals inherently constrain human autonomy because autonomy means the freedom to deviate, experiment, and exercise judgment in ways that algorithms cannot predict or approve in advance. When a worker wants to make an exception for a long-time customer facing unusual circumstances, the AI system does not understand context—it sees policy deviation. The worker can make the exception, but the system flags it, requires justification, and logs it as a potential performance risk.
The cumulative effect is that workers learn that exercising autonomous judgment creates friction, documentation burden, and potential negative consequences. So they stop exercising judgment. They defer to the AI. And their sense of control over their work erodes. Research confirms that this erosion is not merely frustrating; it is psychologically harmful. Sense of control is one of the strongest predictors of workplace well-being, job satisfaction, and mental health. When workers lose control, even while remaining employed, they experience increased stress, reduced engagement, and higher rates of burnout.
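A minimal sketch makes this friction asymmetry concrete. Everything below is a hypothetical illustration (the 15% threshold, the class and field names, the flagging rule), not the design of any real vendor's system:

```python
from dataclasses import dataclass, field

# Hypothetical threshold: override rates above 15% trigger a review flag.
OVERRIDE_RATE_THRESHOLD = 0.15  # assumed value, not from any real system

@dataclass
class OverrideLog:
    """Tracks how often a worker overrides the AI's recommendations."""
    worker_id: str
    decisions: int = 0
    overrides: int = 0
    justifications: list[str] = field(default_factory=list)

    def record(self, followed_ai: bool, justification: str | None = None) -> None:
        """Log one decision; overrides must carry a written justification."""
        self.decisions += 1
        if not followed_ai:
            if not justification:
                raise ValueError("override requires a written justification")
            self.overrides += 1
            self.justifications.append(justification)

    def flagged(self) -> bool:
        """True once the override rate exceeds the threshold."""
        if self.decisions == 0:
            return False
        return self.overrides / self.decisions > OVERRIDE_RATE_THRESHOLD
```

The asymmetry is the point: accepting the AI's recommendation costs nothing, while overriding it demands a written justification and moves the worker closer to a performance flag. Rational workers respond by overriding less.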
The Efficiency Trap
The autonomy paradox is compounded by what researchers at the Wharton School call the AI efficiency trap—the way productivity tools create perpetual pressure that ultimately constrains rather than liberates workers.
AI enables workers to complete tasks faster, but instead of reducing workload, increased efficiency simply raises expectations. When a team can handle 50 customer cases per day instead of 30, the new target becomes 50, then 60, then 70. Workers are caught in a cycle that moves perpetually toward higher performance standards, unable to slow down because AI capabilities set the baseline and management sets targets accordingly.
Critically, workers lose control over the pace itself. The AI determines throughput capacity. Management sets targets based on AI performance. Workers execute. The autonomy to decide "this is a reasonable workload" or "we need to slow down to maintain quality" disappears. The constant accessibility of AI tools creates what researchers describe as digital omnipresent stress—a psychological pressure to utilize round-the-clock availability that paradoxically reduces human autonomy rather than enhancing it. In practice, teams may handle 40% more volume than before AI deployment while productivity metrics register them as more efficient. The human experience tells a different story: exhaustion, low morale, and rising turnover. The AI has demonstrated the team's capacity, and anything less looks like underperformance. Workers lose control not just over how work is done, but over what constitutes reasonable expectations in the first place.
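The ratchet at the heart of the efficiency trap can be expressed in a few lines. The update rule and the numbers below are illustrative assumptions, not figures from the Wharton research:

```python
def next_target(current_target: int, observed_throughput: int) -> int:
    """Ratchet rule: targets track demonstrated capacity and never decrease."""
    return max(current_target, observed_throughput)

# Illustrative: a team that handled 30 cases/day before AI deployment.
target = 30
for observed in [32, 41, 50, 48, 55, 61]:  # hypothetical post-AI throughput
    target = next_target(target, observed)
    print(f"observed={observed:>2}  new target={target}")
# The dip to 48 does not lower the target: 50 remains the floor.
```

Because the rule takes a maximum, any strong period permanently raises the floor, and no symmetrical mechanism ever lowers it.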
The Micromanagement Paradox
AI was supposed to reduce micromanagement by handling routine oversight, freeing managers to focus on strategic work and people development. Instead, many workers report experiencing more micromanagement, not less—except now it is algorithmic rather than human.
AI monitoring systems track everything: keystrokes, response times, idle periods, decision patterns. They generate real-time performance dashboards, flag deviations from optimal workflows, and rank workers against each other on dozens of metrics. This creates a surveillance environment in which workers feel constantly watched and evaluated—not by a human manager who might understand context, but by a system that optimizes for measurable outputs without regard for human factors. Workers in heavily monitored environments describe it as oppressive. They cannot take a few extra minutes to think through a complex case without triggering a response-time flag. They cannot deviate from scripted workflows without algorithmic scrutiny. The space to exercise creativity, build genuine rapport with customers, or apply contextual judgment shrinks close to zero because the system optimizes for efficiency over relationship or nuance.
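A toy example shows why this kind of monitoring punishes exactly the behavior skilled work requires. The z-score rule and cutoff below are hypothetical; real systems vary, but any pure deviation rule shares the same blindness:

```python
import statistics

def response_time_flags(times_sec: list[float], z_cutoff: float = 2.0) -> list[bool]:
    """Flag any response slower than z_cutoff standard deviations above the mean."""
    mean = statistics.mean(times_sec)
    sd = statistics.stdev(times_sec)
    return [(t - mean) / sd > z_cutoff for t in times_sec]

# Nine routine cases and one complex case that needed real thought:
times = [95, 102, 88, 110, 97, 105, 92, 99, 101, 300]
print(response_time_flags(times))  # only the 300-second case is flagged
```

The flag fires on the one case that received careful attention, because a slow data point and a thoughtful one are statistically indistinguishable.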
Managers face a mirrored version of the same problem. AI-generated metrics are visible to leadership, and a manager who ignores them to support team autonomy is perceived as failing in their role. If they enforce the metrics, they are micromanaging—not by choice, but as agents of the AI's demands. Research confirms this dynamic: workers may enjoy less autonomy when they are merely expected to follow automated instructions from machines, and AI monitoring capabilities can worsen work quality precisely by eliminating the discretionary space that skilled workers rely on. The result is that autonomy erodes from both directions—workers lose control over how they do their work, and managers lose control over how they lead their teams.
The Organizational Priority Shift
Organizational leaders are not unaware of this problem. By 2025, priorities in many companies had shifted from pure automation toward a more human-centered goal: making work easier for people. Employee experience, reliable information access, consistent communication, and a sense of control over the digital work environment had become central to how forward-thinking organizations approached technology deployment.
Yet a significant gap persists between stated priorities and implementation realities. Leadership may genuinely want employees to feel a sense of control, but the AI systems being deployed systematically reduce that control. The friction lies partly in incentive structures: the efficiency gains from AI automation are immediate, quantifiable, and reportable. The psychological costs—erosion of agency, disengagement, rising burnout—are diffuse, hard to attribute to any specific system, and largely invisible in standard performance dashboards.
The contradiction produces a familiar pattern. Organizations roll out employee well-being initiatives—mindfulness applications, flexible scheduling options, mental health resources—while simultaneously tightening AI-driven performance monitoring that reduces the autonomy those initiatives are meant to compensate for. Workers experience this as performative concern: the organization acknowledges stress while maintaining the systems producing it, and the sense of powerlessness deepens, extending beyond individual tasks to the broader work environment and the conditions that shape it. Closing this gap requires more than leadership intent. It requires treating human agency as a design constraint when deploying AI—not an afterthought to be addressed through wellness programs after the damage is done.
What Workers Want: Equal Partnership
Research provides a clear picture of what workers actually want from human-AI collaboration. A study using the Human Agency Scale, applied across 104 occupations, asked workers what level of human involvement they prefer when working alongside AI systems. The dominant preference, found in 47 out of 104 occupations, was what researchers labeled H3: Equal Partnership—where humans and AI share decision-making authority and neither can act unilaterally without the other's input.
Critically, on 47.5% of tasks, workers prefer higher levels of human agency than technical experts deem necessary. Workers want more control than technologists think they need. This is not technophobia or resistance to change; it reflects a fundamental human need for autonomy: the ability to influence decisions affecting one's work and to feel that one's judgment matters.
The gap between worker preferences and organizational incentives is sharp. AI systems that operate autonomously are more efficient, scalable, and predictable than systems requiring constant human input, so the systems being deployed systematically trend toward lower human agency rather than the equal partnership workers prefer. This distance between what workers want and what they experience is not incidental—it is structurally produced by deployment choices that prioritize technical optimization over human factors, and it is a primary driver of the psychological distress this chapter examines.
Coping Strategies and Their Limits
Workers and managers facing AI-driven autonomy erosion are not passive. They develop strategies—some effective in limited ways, others ultimately inadequate—to recover some sense of control within constrained environments.
At the individual level, workers negotiate with IT and operations to adjust AI parameters: lengthening response-time thresholds, customizing routing algorithms to account for case complexity, or carving out spaces within the system where human judgment is explicitly valued. These micro-negotiations can provide breathing room, but they operate at the margins of systems designed for standardization and are rarely scalable across an organization. A more common coping strategy is role reframing—consciously reconceptualizing one's function from autonomous decision-maker to facilitator operating within AI-defined constraints. This cognitive shift can reduce the psychological dissonance of holding responsibility without authority, but it carries its own cost: what feels like adaptation is often a form of internalized diminishment, lowering expectations for autonomy to match a new reality rather than addressing the underlying need for control.
Some organizations attempt structural remedies. Dedicated override mechanisms with low friction and no performance penalty, regular reviews of AI parameters informed by worker feedback, and protected spaces for human judgment—such as case review meetings that operate outside algorithmic metrics—can partially restore agency. Where these measures are implemented consistently, they tend to improve morale and reduce turnover among skilled workers. But they require sustained organizational commitment and remain uncommon relative to the pace of AI deployment. The deeper problem is that most coping strategies treat the symptom rather than the cause. They help individual workers manage the experience of reduced autonomy without addressing the structural design choices that produce it.
Mental Health Implications
The loss of workplace autonomy has well-documented mental health consequences. A substantial body of research establishes perceived control as one of the strongest predictors of psychological well-being—protective against stress, anxiety, and depression across a wide range of industries and occupations. Workers who feel genuine autonomy over their work report better mental health outcomes, higher job satisfaction, and lower burnout rates. Conversely, low perceived control is consistently associated with elevated stress and anxiety, higher rates of depression and burnout, reduced job satisfaction, diminished organizational commitment, and increased turnover intentions. The relationship between loss of control and psychological harm is among the most replicated findings in occupational health research.
What makes AI-driven autonomy erosion particularly significant is its scale. By early 2025, workers in at least 36% of occupations were using AI for at least 25% of their tasks—a substantial share of the workforce experiencing varying degrees of control loss simultaneously. The mental health costs of this shift are likely substantial, but they are also diffuse, accumulating slowly and difficult to attribute directly to any single technology deployment. Organizations optimizing for productivity metrics may not recognize the psychological toll until it manifests as elevated turnover, chronic absenteeism, or acute burnout crises—by which point the damage is already significant and the causal chain harder to trace. This invisibility is itself part of the problem: what is not measured is not managed, and the psychological costs of AI deployment are rarely built into the accounting frameworks organizations use to evaluate technology investments.
The Governance Challenge
Managing AI-driven autonomy loss is ultimately a governance problem—one that most organizations have not yet adequately addressed. Traditional frameworks for AI governance focus on accuracy, bias detection, legal compliance, and data security. These concerns are legitimate and important, but they leave unexamined a different set of questions: what actions AI systems are permitted to take autonomously, what decisions require human involvement, and what conditions must be met before an AI system's recommendations carry binding weight.
Agentic AI systems—those capable of taking sequences of actions and making decisions across extended workflows—present governance challenges that static rule-based systems do not. Their behavior is dynamic, contextually variable, and difficult to audit in real time. Establishing meaningful human control over such systems requires explicit organizational commitments: defined thresholds for AI authority, protected override rights with low friction and no performance penalty, regular audits of how AI parameters affect worker discretion, and accountability structures that treat autonomy erosion as a risk to be managed alongside bias or accuracy failures.
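What those commitments might look like when written down is sketched below. The authority levels, field names, and policy entries are hypothetical illustrations of the principle, not an established governance standard:

```python
from dataclasses import dataclass
from enum import Enum

class Authority(Enum):
    AI_AUTONOMOUS = "ai_autonomous"    # AI may act without human review
    HUMAN_CONFIRMS = "human_confirms"  # AI proposes, a human must approve
    HUMAN_DECIDES = "human_decides"    # AI may only inform, never decide

@dataclass(frozen=True)
class AgencyPolicy:
    """Explicit, auditable limits on what the AI may do on its own."""
    action: str
    authority: Authority
    # Overrides are a protected right: never scored against the worker.
    override_penalized: bool = False
    # How often parameter effects on worker discretion are audited.
    audit_interval_days: int = 90

# Hypothetical policy table for the customer-service system in the vignette.
POLICIES = [
    AgencyPolicy("route_routine_inquiry", Authority.AI_AUTONOMOUS),
    AgencyPolicy("deny_policy_exception", Authority.HUMAN_CONFIRMS),
    AgencyPolicy("evaluate_agent_performance", Authority.HUMAN_DECIDES),
]
```

The value is not the code itself but what it forces into the open: every class of action carries a declared authority level, overrides are structurally exempt from performance scoring, and the audit cadence is recorded rather than left to goodwill.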
The departure of skilled workers who cite loss of autonomy as their reason for leaving represents a measurable governance failure—one that appears in turnover data long before it surfaces in AI performance dashboards. Organizations that treat human agency as a fixed cost to be minimized in the pursuit of efficiency tend to discover, often too late, that the workers whose judgment they displaced were carrying institutional knowledge, relational capital, and adaptive capacity that AI systems have not yet been able to replicate. Effective AI governance in this domain is not about limiting what AI can do. It is about designing systems in which human judgment remains genuinely authoritative—not performatively consulted, but structurally preserved.
Summary
This chapter examined how AI deployment in the workplace is reshaping workers' sense of control and the psychological consequences that follow. The central finding is that AI systems, while marketed as empowering tools, frequently produce the opposite effect through several interlocking mechanisms.
The autonomy paradox arises because AI systems are designed to optimize and standardize—goals that inherently constrain the discretionary judgment that gives skilled work its meaning. The efficiency trap compounds this by translating productivity gains directly into higher workload expectations, leaving workers with neither a sustainable pace nor the power to negotiate one. The micromanagement paradox transforms algorithmic monitoring into a form of surveillance more pervasive and less contextually aware than human oversight, eroding discretion from both the worker's role and the manager's.
Research on worker preferences consistently shows that people want more human agency in human-AI collaboration than technologists tend to build in, with equal partnership as the dominant preference across occupations. Yet organizational incentives push deployment toward greater AI autonomy and less human control, widening the gap between what workers need and what they experience. Coping strategies—individual negotiation, role reframing, structural override mechanisms—can partially mitigate this gap but rarely address its root causes.
The mental health consequences of sustained autonomy loss are well-established: elevated stress and anxiety, higher burnout rates, reduced job satisfaction, and increased turnover. These costs are real but diffuse, rarely appearing in the productivity metrics organizations use to evaluate AI investments. Addressing them requires governance frameworks that treat human agency as a design constraint—not an afterthought—and that hold AI systems accountable not just for accuracy and compliance, but for the degree of meaningful control they preserve for the people working alongside them.
Key Takeaways
- AI creates an autonomy paradox. Systems designed to optimize and standardize work inherently constrain the discretionary judgment that makes skilled work meaningful, and workers who exercise autonomous judgment outside AI-defined parameters face documentation burdens, performance flags, and friction that teaches them to defer to the algorithm instead.
- The efficiency trap converts productivity gains into higher expectations. When AI enables faster throughput, management raises targets proportionally, so workers find themselves running faster just to stay in place, with no ability to negotiate a sustainable pace, because AI capability sets the new baseline.
- Algorithmic monitoring produces more micromanagement, not less. Real-time performance dashboards and deviation flags create pervasive surveillance that eliminates the discretionary space skilled workers rely on, simultaneously eroding worker autonomy and turning managers into enforcers of algorithmic demands.
- Workers consistently want more human agency than they receive. Research across 104 occupations finds that "equal partnership," shared decision-making authority between humans and AI, is the dominant worker preference, yet organizational incentives systematically push deployments toward greater AI autonomy and less human control.
- Loss of workplace autonomy carries well-established mental health consequences. Elevated stress and anxiety, higher burnout rates, reduced job satisfaction, and increased turnover are all consistently associated with low perceived control, and the scale of AI-driven autonomy erosion makes these effects societally significant even if they accumulate too slowly to appear in productivity dashboards.
- Human agency must be a design constraint, not an afterthought. Effective AI governance requires defining thresholds for AI authority, protecting override rights with low friction and no performance penalty, and auditing how AI parameters affect worker discretion, treating autonomy erosion as a risk to be managed alongside bias and accuracy failures.
Sources:
- AI in the workplace 2025: Superagency and empowering people | McKinsey
- The AI Efficiency Trap | Knowledge at Wharton
- Intelligence saturation and the future of work | Brookings
- 2025 in review: Evolution of the digital workplace | WORKAI
- AI Governance in 2026: From experimentation to maturity | Lexology
- Future of Work with AI Agents: Auditing Automation | arXiv
- AI and autonomy at work: empirical insights | Journal for Labour Market Research
- How to Preserve Agency in an AI-Driven Future | The Decision Lab
- 75% of workers using AI in workplace
- Agency decay and autonomous decision-making erosion
- Digital omnipresent stress from AI accessibility
- Human Agency Scale research
- 36% of occupations using AI for 25% of tasks
Last updated: 2026-02-25