2.4.1 Privacy and Surveillance

Nina found out by accident that her employer was tracking her through her work laptop.

She'd noticed the battery draining faster than usual, so she opened the task manager to see what was running. There, buried among the background processes, was software she didn't recognize. A quick search identified it as employee monitoring software: keystroke logging, screen captures every few minutes, mouse movement tracking, application usage monitoring.

Her company hadn't told her. There was no disclosure in her employment contract. No IT department memo. They'd just installed it silently during a routine update and started collecting data.

Nina worked in marketing. She wasn't handling classified information or working with sensitive client data. She was writing blog posts and managing social media. But every word she typed, every website she visited, every second she spent on each task was being recorded, analyzed, and stored.

When she asked her manager about it, he seemed confused by her concern. "It's company property," he said. "We have a right to know how it's being used."

Nina quit three months later. But the surveillance stayed with her. She still catches herself wondering if she's being watched, even on her personal devices, even years later. The feeling of constant monitoring — what researchers call "digital paranoia" — doesn't go away easily.

And Nina's experience is increasingly common. We are living under surveillance in ways we barely comprehend, enabled by AI systems that can process, analyze, and act on more data than any human overseer ever could.

The Workplace Panopticon

In the absence of a national privacy law in the United States, there are few legal safeguards limiting workplace computer or network surveillance. Employers can — and do — monitor keystrokes, facial expressions, application usage, time spent on tasks, even tone of voice in customer service calls.

Some firms analyze this data to identify who may be underperforming, who is violating company policies, or who is looking for other jobs. AI makes this surveillance both comprehensive and automated. You don't need a human manager watching over every employee. The algorithm does it, continuously, and flags anomalies for review.

The justifications are familiar: productivity monitoring, security, compliance, quality assurance. And in some contexts, those justifications are legitimate. But the scope and opacity of surveillance have expanded far beyond what those justifications require. Employees often don't know what is being monitored, how the data is being used, who has access to it, or how long it is retained. They cannot opt out, they cannot see their own surveillance file, and they have limited recourse if the data is used against them unfairly.

The power asymmetry is profound. And AI amplifies it, because algorithmic surveillance scales in ways human supervision never could. A single system can monitor thousands of employees simultaneously, flag behavioral patterns across months of data, and produce reports that no manager could compile manually. What once required dedicated oversight staff now runs as background infrastructure.
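
To make the mechanics concrete, here is a minimal sketch of this kind of automated flagging, assuming a hypothetical log of daily "active minutes" per employee. The data, function name, and threshold are all illustrative, not drawn from any real monitoring product; real systems use far richer signals, but the structural point stands: a few lines of statistics, run continuously, replace a human watcher.

```python
# Minimal sketch of automated anomaly flagging over activity logs.
# All names, data, and thresholds are illustrative assumptions.
import statistics

def flag_anomalies(daily_active_minutes, z_threshold=2.0):
    """Flag days whose logged activity deviates sharply from the employee's own baseline."""
    mean = statistics.mean(daily_active_minutes)
    stdev = statistics.stdev(daily_active_minutes)
    if stdev == 0:
        return []
    return [
        (day, minutes, round((minutes - mean) / stdev, 2))
        for day, minutes in enumerate(daily_active_minutes)
        if abs(minutes - mean) / stdev > z_threshold
    ]

# Example: a month of logged "active minutes" for one employee.
log = [412, 398, 405, 390, 120, 401, 415, 388, 397, 402,
       409, 395, 400, 620, 399, 403, 391, 408, 396, 400]
for day, minutes, z in flag_anomalies(log):
    print(f"day {day}: {minutes} active minutes (z = {z}) -> flagged for review")
```

Looped over thousands of such logs, this produces the kind of automated anomaly report described above, with no manager involved until a flag fires.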

The Face Database

Facial recognition technology has evolved from a niche innovation to a ubiquitous tool embedded in everyday life. It unlocks phones, verifies identities at airport security, powers loss-prevention systems in retail stores, and assists law enforcement in identifying suspects.

London's Metropolitan Police Service has reportedly scanned around one million faces so far in 2025. In summer 2025, the force installed permanent live facial recognition cameras in Croydon, South London, running continuously and comparing every passing face against a watchlist of wanted individuals. The accuracy of modern systems has improved dramatically: they can identify people in crowds, through partial obstructions, from oblique angles, and even accounting for aging or minor disguises. They work in real time, and they do not forget.
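
Under the hood, live facial recognition of this kind typically reduces to comparing vector embeddings. The sketch below shows only the matching step, with toy vectors and a made-up threshold; real deployments extract embeddings with deep networks trained on millions of faces, and nothing here is drawn from the Met's actual system.

```python
# Schematic of the watchlist-matching step in live facial recognition:
# compare an embedding extracted from a video frame against stored
# watchlist embeddings. Vectors and threshold are toy values.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return (identity, score) for the best match above threshold, else None."""
    identity, embedding = max(
        watchlist.items(), key=lambda kv: cosine_similarity(probe, kv[1])
    )
    score = cosine_similarity(probe, embedding)
    return (identity, score) if score >= threshold else None

watchlist = {
    "subject_017": [0.12, 0.88, 0.43, 0.01],
    "subject_334": [0.77, 0.10, 0.65, 0.32],
}
probe = [0.11, 0.90, 0.40, 0.05]  # embedding extracted from one passing face
print(match_against_watchlist(probe, watchlist))  # a hit here raises an alert
```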

The practical consequence is this: you walk down a street, and your face is scanned, analyzed, and compared against a database without your knowledge or consent. If you match someone on the watchlist, police are alerted. If you don't, your biometric data may still be retained — policies vary, and enforcement is inconsistent. This is not a hypothetical scenario. It is operational. And it is spreading. Cities across Europe, Asia, and North America are deploying similar systems, some for law enforcement, some for traffic management, and some for commercial purposes: tracking shoppers, analyzing customer demographics, personalizing advertising.

The infrastructure for ubiquitous facial surveillance now exists. The question is no longer whether it is technically possible, but whether societies will accept it as normal.

The Distributed Surveillance Model

In 2025, Flock Safety — a company providing AI surveillance systems — began outsourcing facial recognition and license plate monitoring to gig workers. Human reviewers, working from home, were paid to verify AI-flagged matches and improve training data. The arrangement is efficient and inexpensive. It is also emblematic of a broader structural shift in how surveillance is organized.

The immediate privacy concerns are significant. Gig workers performing these reviews have access to sensitive biometric and location data about individuals who never consented to having remote contractors examine their faces or track their movements. Unlike law enforcement personnel or trained security professionals, these workers receive no formal privacy instruction, hold no security clearance, and operate under minimal oversight. They are contractors, not employees, working on personal devices, and data leaks are an inherent risk of the arrangement.

But the deeper issue is what this model represents architecturally. Traditional surveillance — however invasive — operated within institutions that at least nominally maintained legal accountability structures. Police departments answered to oversight bodies. Corporate security teams operated under employment law. When surveillance is disaggregated into piecework performed by contractors scattered across jurisdictions, those accountability structures dissolve. There is no single identifiable responsible party when something goes wrong, and that diffusion of responsibility is, for many companies deploying such systems, a feature rather than a flaw.

The economic logic reinforces the trend: gig labor is cheaper than trained employees, AI handles the bulk of processing, and humans are brought in only to validate or refine what the algorithm flags. This combination of computational efficiency and low-cost human oversight creates surveillance capability at a scale and price point that would have been unattainable a decade ago. The result is an emerging architecture that is not a centralized panopticon controlled by a single authority, but a distributed network of private companies, platforms, and algorithmic systems operating in overlapping legal gray zones, with accountability fragmented to the point of near-invisibility.
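
The routing logic that makes this economics work can be trivially simple. The following sketch is a hypothetical illustration of confidence-based triage, with invented thresholds, field names, and queue labels: the model disposes of the clear cases, and only the ambiguous middle band consumes (cheap) human attention.

```python
# Hypothetical human-in-the-loop triage: the model scores every detection,
# high-confidence hits pass through automatically, low scores are dropped,
# and the uncertain middle band goes to remote gig reviewers.
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str
    model_confidence: float  # 0.0 - 1.0

AUTO_ACCEPT = 0.95   # treated as confirmed; no human ever sees it
AUTO_REJECT = 0.40   # discarded without review

def route(detection: Detection) -> str:
    if detection.model_confidence >= AUTO_ACCEPT:
        return "alert"             # pushed straight to the subscribing agency
    if detection.model_confidence < AUTO_REJECT:
        return "discard"
    return "gig_review_queue"      # a remote contractor confirms or rejects

batch = [Detection("7ABC123", 0.97), Detection("7ABC128", 0.62), Detection("XYZ0000", 0.21)]
for d in batch:
    print(d.plate, "->", route(d))
```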

The Biometric Creep

Biometric data collection has expanded well beyond fingerprints and facial recognition. Voiceprints, gait analysis, behavioral identifiers, and keystroke dynamics are now routinely collected and analyzed, often without users being aware of what they are contributing.

Your voice is biometric data. Every interaction with Siri, Alexa, or Google Assistant provides voiceprint samples that companies use not only for authentication but for emotion detection, stress analysis, and behavioral profiling. Your walking pattern is similarly distinctive — security systems can identify individuals by gait through standard video surveillance, and unlike a password, you cannot change how you walk. Keystroke dynamics — the rhythm, pressure, and timing of how you type — are unique enough to serve as an identifier even when someone is using an unfamiliar device. Banks use this for authentication; employers use it for monitoring. Even mouse movements, scrolling behavior, and how you hold a phone can be converted into biometric identifiers given the right analytical models.
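
To see why typing rhythm identifies people, consider the two standard features: dwell time (how long each key is held) and flight time (the gap between one key's release and the next key's press). The sketch below uses an invented event format and synthetic timings; production systems model far more than two averages, but even these separate typists measurably.

```python
# Illustrative keystroke-dynamics features. Event format and timings
# are synthetic, invented for this sketch.
def keystroke_features(events):
    """events: list of (key, press_time_ms, release_time_ms), in typing order."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "mean_dwell_ms": sum(dwells) / len(dwells),
        "mean_flight_ms": sum(flights) / len(flights),
    }

# The same word typed by two people produces measurably different rhythms.
alice = [("p", 0, 95), ("a", 140, 230), ("s", 270, 350), ("s", 400, 490)]
bob   = [("p", 0, 60), ("a", 210, 260), ("s", 380, 430), ("s", 600, 655)]
print("alice:", keystroke_features(alice))
print("bob:  ", keystroke_features(bob))
```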

The pace of this expansion has consistently outrun legal and governance frameworks. Companies collect biometric data because they can, not because clear rules govern when they should. Nearly two dozen U.S. states have passed laws regulating how technology companies collect biometric information from faces, eyes, and voices, but those laws are inconsistent, enforcement is weak, and companies frequently ignore them unless litigation forces compliance. The practical consequence is acute: biometric data, once collected, is effectively permanent. You can change a password. You cannot change your face or your fingerprints. A breach of biometric data is not a temporary vulnerability — it is a lifelong exposure.

The Inference Problem

A subtler but increasingly serious development involves not what data is directly collected, but what AI systems can deduce from it. Researchers have demonstrated that AI can predict sexual orientation from facial photographs with surprising accuracy, infer political beliefs from Facebook activity patterns, estimate health status from purchasing records, and project income, education level, and personality traits from data shared for entirely unrelated purposes.

This is what privacy scholars call an inference privacy violation. A person consents to share data point A for purpose X. The AI uses that data point to infer data point B, something the person never intended to disclose, for purpose Y, which was never consented to. The inference is invisible to the subject, who cannot see, challenge, or correct it, even when it is wrong.
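
The pattern is easy to state in code. The sketch below is a deliberately crude, fully synthetic illustration of the A-to-B move: purchase data shared for one purpose is used to estimate a health attribute the person never disclosed. Real systems fit statistical models to millions of records, but the structure, consented input in, unconsented inference out, is the same.

```python
# Toy illustration of inference privacy violation. All records and
# labels are synthetic, invented for this sketch.
from collections import defaultdict

# "Training" records a third party has accumulated: shopping baskets
# paired with a sensitive label obtained elsewhere.
training = [
    ({"glucose_monitor", "sugar_free_snacks"}, "diabetic"),
    ({"glucose_monitor", "running_shoes"},     "diabetic"),
    ({"energy_drinks", "running_shoes"},       "not_diabetic"),
    ({"energy_drinks", "sugar_free_snacks"},   "not_diabetic"),
]

def infer_label(basket):
    """Score each label by how often the basket's items co-occur with it."""
    scores = defaultdict(int)
    for items, label in training:
        scores[label] += len(basket & items)
    return max(scores, key=scores.get)

# A customer consented only to sharing a shopping basket (data point A)...
basket = {"glucose_monitor", "sugar_free_snacks"}
# ...but the model estimates an undisclosed health attribute (data point B).
print(infer_label(basket))  # -> "diabetic"
```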

The applications are already widespread. Insurance companies use AI to infer health risks from social media activity and adjust premiums accordingly. Employers apply algorithmic analysis to interview video recordings, deriving personality assessments that drive hiring decisions. Law enforcement uses predictive policing systems that infer criminal risk from neighborhood and demographic data, shaping how resources are deployed and where scrutiny falls. In each case, the person affected has no meaningful knowledge of the inference being made, no access to the underlying model, and no mechanism for contesting a conclusion that may be inaccurate or methodologically unsound but nonetheless consequential to their life.

The Regulatory Scramble

Governments are beginning to respond to AI-enabled surveillance, but the responses are fragmented and consistently slower than the technology they aim to govern.

The EU AI Act is the most comprehensive legislative framework to date. Its full applicability takes effect in August 2026, though provisions targeting prohibited AI practices and AI literacy obligations have been enforceable since February 2025. The Act bans social scoring systems, real-time facial recognition in public spaces (with narrow exceptions), exploitation of psychological vulnerabilities, and subliminal manipulation. It imposes transparency requirements on high-risk AI systems and grants individuals rights to explanation and redress. It is a meaningful beginning — but enforcement remains uncertain, and the EU represents only a fraction of global AI deployment.

In the United States, no federal AI regulation exists, and no national privacy law has passed. States are filling the void with patchwork legislation: California's CCPA, Illinois's BIPA, Virginia's VCDPA, and the nearly two dozen state-level biometric data laws mentioned earlier. Companies operating nationally face an inconsistent compliance landscape. Some argue this forces the adoption of the strictest standard everywhere; others contend it produces inefficiency without meaningfully protecting anyone.

China has implemented extensive AI governance, but oriented toward state control rather than individual privacy. The government itself deploys AI surveillance, social credit mechanisms, and predictive policing tools at scale. Formal privacy protections exist in Chinese law, but they are subordinate to state security interests in practice. The result, globally, is a fragmented and unevenly enforced regulatory environment chasing technology that evolves faster than legislative processes can accommodate.

The Acceptance Gradient

Perhaps the most consequential dynamic in the surveillance landscape is not technological or legal — it is psychological. Research on public acceptance of AI-powered facial recognition shows that trust in the deploying institution and perceived security benefits both increase willingness to sacrifice privacy. When people believe surveillance makes them safer, they tend to accept it.

This is how surveillance normalizes: not through coercion, but through incremental accommodation. Unlocking a phone with your face is convenient. Airport facial scanning speeds up boarding. Retail tracking offers personalized discounts. Each individual instance appears reasonable. The cumulative effect is comprehensive, continuous surveillance that few people consciously chose and most cannot easily exit. The shift happens at the level of default: the baseline assumption changes from privacy to visibility, and opting out becomes an active and costly choice rather than the natural condition.

Generational patterns reinforce the trend. Younger populations, having grown up with social media and pervasive digital visibility, report different privacy expectations than older ones. They are, on average, more willing to trade personal data for convenience and less troubled by the idea of being continuously monitored. As those cohorts age into positions of policy-making, business leadership, and cultural influence, attitudes toward surveillance are likely to shift further. Whether the result will be genuine recalibration of what privacy means, or simply the erasure of a concept that once seemed fundamental, remains to be seen. One structural dynamic is clear, however: once surveillance infrastructure is built, it is almost never dismantled. It is refined, extended, and made cheaper. The decisions being made now about where and how to deploy AI surveillance will shape the environment that future generations inherit.

Key Takeaways

AI has transformed surveillance from a resource-intensive, human-dependent activity into a continuous, automated, and increasingly invisible feature of everyday life. Several themes define this transformation.

Workplace surveillance has grown in both scope and opacity. Employers now monitor employees through AI systems that track keystrokes, screen activity, behavioral patterns, and even emotional cues, often without disclosure. The power asymmetry this creates — employees cannot see their surveillance files, cannot opt out, and have limited legal recourse — largely escapes existing labor law.

Facial recognition technology is no longer speculative; it is operational in public spaces across many countries, and expanding in commercial applications as well. The biometric data it captures is permanent in a way that distinguishes it from other sensitive information: unlike a password, a face cannot be changed if compromised.

Biometric collection has spread well beyond facial recognition to include voiceprints, gait signatures, typing dynamics, and behavioral fingerprints — categories most people do not recognize as biometric data at all, and for which legal protections remain sparse.

AI's capacity for inference introduces a qualitatively new privacy threat. Systems can derive sensitive personal attributes — health status, political beliefs, financial vulnerability, psychological traits — from data shared for unrelated purposes, producing invisible conclusions used in high-stakes decisions without the subject's knowledge or ability to challenge them.

Regulatory frameworks are fragmented and outpaced by technology. The EU AI Act is the most comprehensive instrument to date, but geographic scope is limited and enforcement uncertain. In the United States, no federal framework exists.

Finally, surveillance normalizes gradually. The cumulative effect of individually convenient trade-offs is an infrastructure of comprehensive monitoring that, once established, has historically not been reversed. How societies negotiate the boundary between security and privacy in the current decade will determine the conditions under which the next generation lives — and whether a meaningful private sphere remains available to them at all.

