3.2.3 Intelligence and Surveillance Capabilities
Michael Torres works for the National Security Agency at Fort Meade, Maryland. His clearance level is high enough that he can't tell his family what he does, where he works within the building, or what technologies he uses.
What he can say: he analyzes signals intelligence—SIGINT—intercepted communications from foreign adversaries. What he can't say: how AI has transformed that job in ways that would have been impossible five years ago.
In 2020, Michael's team could process maybe a few thousand intercepts per day. Human analysts listened to audio, read transcripts, identified speakers, flagged keywords, and passed relevant intelligence up the chain. It was slow, labor-intensive, and incomplete. Most intercepts never got analyzed. There simply weren't enough analysts.
By 2025, the system processes millions of intercepts per day. AI handles initial sorting, transcription, translation, speaker identification, and anomaly detection. Analysts only review what the AI flags as high-priority. The workload has shifted from processing raw data to supervising AI systems and investigating leads those systems surface.
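The triage shift described here can be sketched in heavily simplified form as a scoring pipeline: machines score everything, and only items above a threshold reach a human queue. All field names, the keyword-based scorer, and the threshold below are hypothetical placeholders, not any agency's actual system.

```python
# A heavily simplified sketch of an intercept-triage pipeline.
# The scoring logic, field names, and threshold are all illustrative.

HIGH_PRIORITY_THRESHOLD = 0.8  # hypothetical cutoff for analyst review

def score_intercept(intercept: dict) -> float:
    """Toy anomaly score: fraction of watchlist terms present in the text."""
    watchlist = {"transfer", "meeting", "package"}  # illustrative keywords
    words = set(intercept["text"].lower().split())
    return len(words & watchlist) / len(watchlist)

def triage(intercepts: list[dict]) -> list[dict]:
    """Route only high-scoring intercepts to the human analyst queue."""
    queue = []
    for item in intercepts:
        item["score"] = score_intercept(item)
        if item["score"] >= HIGH_PRIORITY_THRESHOLD:
            queue.append(item)
    return queue

if __name__ == "__main__":
    batch = [
        {"id": 1, "text": "weather is fine today"},
        {"id": 2, "text": "the package transfer happens after the meeting"},
    ]
    print([i["id"] for i in triage(batch)])  # → [2]
```

The structural point survives the simplification: the machine touches every intercept, and the human touches only what the machine surfaces, which is why one analyst can now cover a workload that once required a team.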
Michael's productivity—measured by intelligence products delivered—has increased by an order of magnitude. Not because he's working harder, but because AI is doing the grunt work while he does the analysis. This is the AI intelligence revolution: not replacing human analysts but augmenting them—processing vastly more data, identifying patterns humans would miss, and compressing the intelligence cycle, from intercept to actionable insight, from weeks to hours.
Michael's experience is a microcosm of a transformation occurring across the entire intelligence community. AI functions as a force multiplier—and America's adversaries are deploying similar systems. The result is an intelligence arms race in which superior AI translates directly to information advantage, and information advantage, in modern geopolitics, translates directly to strategic power.
The SIGINT Transformation
Signals intelligence has always been about scale. The more you intercept, the more you learn. But historically, scale was limited by human capacity to process information. AI has removed that constraint.
The U.S. Army is integrating AI into SIGINT processing, exploitation, and dissemination. AI reduces analyst workload, enhances speed and accuracy, and improves targeting and decision-making in complex operations. What once required teams of analysts now happens automatically, with humans supervising rather than executing. Booz Allen, a major intelligence contractor, is implementing machine learning systems that can demodulate full waveforms, train neural networks for signal identification, and compress sensemaking timelines for analysts—tasks that previously took hours now take seconds.
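The simplest building block of automated signal identification can be illustrated with a standard-library sketch: deciding which of several candidate frequencies dominates a sampled waveform by projecting it onto sinusoids (a single-bin discrete Fourier transform). Real systems apply learned models to full waveforms; this toy version, with invented sample rates and frequencies, shows only the underlying idea.

```python
import math

# Toy signal identification: pick the candidate frequency with the most
# power in a sampled waveform, via correlation with sine/cosine waves
# (a single-bin discrete Fourier transform). All parameters illustrative.

def tone_power(samples: list[float], freq: float, sample_rate: float) -> float:
    """Power of `samples` at `freq`, from sin/cos correlation sums."""
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    return (re * re + im * im) / len(samples)

def identify_tone(samples, candidates, sample_rate):
    """Return the candidate frequency where the signal has the most power."""
    return max(candidates, key=lambda f: tone_power(samples, f, sample_rate))

if __name__ == "__main__":
    rate = 8000.0
    # A pure 1200 Hz tone, sampled for a tenth of a second.
    signal = [math.sin(2 * math.pi * 1200 * i / rate) for i in range(800)]
    print(identify_tone(signal, [600.0, 1200.0, 2400.0], rate))  # → 1200.0
```

The gap between this sketch and an operational system—demodulating unknown waveforms, classifying modulation schemes, handling noise—is exactly the gap that trained neural networks are being used to close.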
Project Linchpin is creating a centralized AI ecosystem for the U.S. Army, delivering near-real-time insights that were previously human-intensive. The National Geospatial-Intelligence Agency now circulates AI-generated products that cut analyst workloads and accelerate tasking cycles. Across the NSA, CIA, DIA, and military intelligence branches, AI is being deployed across the full intelligence lifecycle—collection, processing, analysis, and dissemination. The signals intelligence market reflects this shift: valued at $16.8 billion in 2024, it is projected to reach $28.1 billion by 2034, driven primarily by advances in AI and sensor technologies. Contracts for automated signal decryption and real-time pattern recognition systems were awarded to multiple U.S. defense contractors in 2024 alone. This is no longer experimental—it is operational, and the results are reshaping how intelligence agencies understand their mission.
The Pattern Recognition Advantage
Beyond raw processing speed, AI offers something qualitatively different from human analysis: the ability to find patterns across datasets too large and complex for any human team to review. Human analysts excel at contextual reasoning, judgment calls, and strategic interpretation. They are far less suited to scanning millions of data points for subtle correlations. That is where AI excels.
When trained on years of intercepted communications, social media activity, financial transactions, travel records, and known threat-actor networks, AI systems can surface connections that would otherwise remain invisible—a phone number shared by multiple suspects, an unusual pattern of wire transfers timed to international events, a coded phrase appearing across unrelated channels. These are not conclusions; they are threads. Human analysts pull those threads, investigating leads the AI surfaces and discarding those that prove coincidental. The combination of AI pattern recognition with human investigative judgment has proven far more effective than either approach alone, and dramatically faster.
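The simplest version of this kind of cross-dataset link analysis—surfacing an identifier, such as a phone number, that connects otherwise unrelated subjects—can be sketched in a few lines. The record shapes and names below are invented for illustration; real systems operate over far richer entity graphs.

```python
from collections import defaultdict

# Toy link analysis: find identifiers (e.g., phone numbers) shared by
# two or more subjects across a flat list of (subject, identifier)
# records. Record shapes and names are invented for the example.

def shared_identifiers(records: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each identifier to its subjects; keep those linking 2+ subjects."""
    by_identifier = defaultdict(set)
    for subject, identifier in records:
        by_identifier[identifier].add(subject)
    return {ident: subs for ident, subs in by_identifier.items()
            if len(subs) > 1}

if __name__ == "__main__":
    contacts = [
        ("subject_a", "555-0101"),
        ("subject_a", "555-0199"),
        ("subject_b", "555-0101"),  # same number as subject_a: a lead
        ("subject_c", "555-0133"),
    ]
    for number, subjects in shared_identifiers(contacts).items():
        print(number, sorted(subjects))  # → 555-0101 ['subject_a', 'subject_b']
```

Even in this toy form, the output is a thread, not a conclusion: the shared number might be a coincidence, a family member, or a genuine operational link, and only human investigation can tell the difference.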
This creates meaningful advantages in counterterrorism, counterintelligence, and strategic monitoring, but it also introduces distinct risks. AI systems trained on biased or incomplete data can mislead analysts, generating false leads or missing genuine threats. Adversaries who understand how a system is trained may attempt to manipulate it by feeding deceptive signals designed to produce specific outputs. And statistical correlations identified by AI are not always causal—analysts who over-rely on AI-generated leads risk chasing patterns that reflect noise rather than intent. Establishing calibrated trust in AI-generated intelligence is a slow, ongoing process, but once established, the productivity gains are substantial.
The Facial Recognition State
Facial recognition technology has moved from experimental to operational across intelligence and law enforcement agencies worldwide, and its rapid deployment is reshaping the relationship between governments and citizens.
In the United States, Immigration and Customs Enforcement agents are using a mobile application called Mobile Fortify, connected to government facial recognition databases, to determine individuals' citizenship status in the field. Internal footage from 2025 shows agents using the app to identify teenagers not carrying identification. Originally justified for tracking noncitizens at border crossings, the tool has expanded into domestic use—deployed in neighborhoods far from any border to identify and investigate both noncitizens and U.S. citizens alike. This trajectory illustrates a pattern common to AI surveillance tools: built for narrow, legally specific purposes, they tend to expand over time as operational convenience creates institutional incentives for broader application, legal and political justifications shift incrementally, and oversight frameworks lag the technology's deployment.
Legislative responses have begun to emerge. In February 2026, senators introduced the ICE Out of Our Faces Act, which would ban ICE and CBP from acquiring or using facial recognition technology. The bill's sponsors argue that the technology represents a rapidly growing surveillance state that violates civil liberties and disproportionately harms marginalized communities. The outcome remains uncertain, and even enacted legislation faces enforcement challenges once surveillance infrastructure is embedded in operational practice.
Internationally, the picture is similarly contested. The UK Home Office is proposing a national facial recognition framework that would connect police databases across the country, with pilot testing expected in 2026. The EU AI Act classifies retrospective facial recognition for law enforcement as a high-risk AI system requiring additional safeguards; those provisions take effect August 2, 2026, though enforcement across member states has been uneven. Governments continue deploying facial recognition for monitoring protests, tracking minority groups, and suppressing dissent, with authoritarian states offering the most systematic examples. The global facial recognition market is projected to reach $12.67 billion by 2028, driven by demand from law enforcement agencies, intelligence services, and private security firms worldwide.
Securing AI in Intelligence Operations
As AI becomes central to intelligence operations, it has also become a target. Recognizing this, the NSA launched the Artificial Intelligence Security Center, tasked with defending the nation's AI capabilities through collaboration with industry, academia, the Intelligence Community, and government partners. The AISC is developing a governmentwide security playbook covering risks in model development, training environments, and the broader AI supply chain. It is also building the capability to perform rapid, systematic, classified testing of AI models' potential to detect, generate, or exacerbate offensive cyber threats.
The threat the AISC is designed to counter is not merely that adversaries might disrupt AI systems, but that they might subvert them in ways that are difficult to detect. Consider a scenario in which hostile actors infiltrate an AI model used by an intelligence agency for threat assessment—not to disable it, but to subtly bias its outputs, causing analysts to overweight certain indicators and underweight others. Analysts would continue receiving plausible-looking intelligence products, unaware that their understanding of an adversary's capabilities and intentions was gradually diverging from reality. This kind of slow, invisible manipulation is in many respects more dangerous than outright system disruption. It produces confident, well-reasoned misjudgments at scale, potentially for months or years before the error is identified. This recognition—that AI is simultaneously a capability multiplier and an attack surface—has made securing AI systems a core national security mission in its own right, integrated into intelligence operations from the design phase forward.
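One defensive idea implied by this scenario is to continuously compare a model's live output distribution against a frozen, trusted baseline, so that a gradual, adversarially induced shift in scores is surfaced rather than absorbed silently. The sketch below uses a deliberately crude statistic (a mean-shift threshold) with invented numbers; production monitoring would rely on stronger distributional tests.

```python
import statistics

# Toy drift monitor: flag when a model's live scores drift away from a
# frozen baseline snapshot. The statistic and threshold are illustrative;
# real monitoring would use proper distributional tests.

def drift_alert(baseline: list[float], live: list[float],
                max_shift: float = 0.1) -> bool:
    """True when the live mean score moves beyond `max_shift` of baseline."""
    return abs(statistics.mean(live) - statistics.mean(baseline)) > max_shift

if __name__ == "__main__":
    baseline_scores = [0.42, 0.40, 0.44, 0.41, 0.43]  # trusted snapshot
    subtly_biased = [0.55, 0.57, 0.54, 0.56, 0.58]    # nudged upward
    print(drift_alert(baseline_scores, baseline_scores))  # → False
    print(drift_alert(baseline_scores, subtly_biased))    # → True
```

The hard part, which no simple statistic solves, is that a patient adversary can shift outputs more slowly than any fixed threshold detects—which is why the AISC's emphasis on securing training environments and the supply chain, upstream of the model's outputs, matters as much as monitoring downstream.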
The procurement dimension adds further complexity. The Department of Defense and the Office of the Director of National Intelligence have established a working group focused specifically on acquisition issues for AI in national security applications. The core tension is structural: AI development moves fast, and government acquisition processes move slowly. By the time a system completes security reviews, contracting procedures, and deployment planning, the technology underlying it may already be a generation behind. Adversaries, particularly China, face fewer procedural constraints and can move more quickly, creating a tempo advantage that the United States is working to close. The White House AI Action Plan, released in July 2025, addresses this directly, placing heavy emphasis on streamlining procurement to allow intelligence agencies and DoD to acquire cutting-edge AI capabilities without waiting years for bureaucratic approval. But speed introduces its own risks: rapidly deployed AI may carry security vulnerabilities, embedded biases, or unexpected behaviors that more thorough testing would have caught. Different agencies are resolving this tension differently—some prioritizing deployment tempo, others insisting on rigorous pre-deployment vetting—and no consensus approach has yet emerged.
The Global Surveillance Competition
The United States is not alone in deploying AI for intelligence and surveillance. China, Russia, Israel, and others are racing to integrate AI across their intelligence operations, and the resulting competition is reshaping the global information environment.
China's capabilities are particularly significant. China combines mass surveillance infrastructure—hundreds of millions of cameras, pervasive facial recognition, and AI-driven social governance systems—with cutting-edge AI research, producing a surveillance apparatus of a scope no government in history has been able to deploy. Domestically, these systems enable fine-grained monitoring of the population, from tracking the movements of ethnic minorities to scoring citizen compliance with state norms. Internationally, China exports surveillance technology to dozens of governments worldwide, extending its influence while normalizing mass surveillance as a standard governance tool. Countries that acquire Chinese surveillance systems tend to become dependent on Chinese technical support, creating leverage that Beijing can exploit in diplomatic and commercial contexts.
Russia's AI intelligence capabilities are concentrated in signals intelligence, cyber operations, and information warfare. Russian intelligence agencies have deployed AI-driven systems that generate fake social media accounts, amplify divisive content, and coordinate disinformation campaigns at scales that manual operation could never achieve. Israel occupies a different position: a small country that has become a world leader in surveillance technology, with firms like NSO Group developing tools sold to intelligence agencies and law enforcement worldwide. Israeli capabilities in AI-enabled surveillance have been honed through decades of intelligence operations in contested environments, producing commercial products that combine sophistication with adaptability.
The cumulative effect of this competition is a global information environment in which AI provides asymmetric advantages, enabling smaller countries with advanced AI programs to gather intelligence beyond what their size would otherwise allow, while giving larger countries with vast data resources the ability to achieve surveillance at previously unimaginable scales. Because AI systems improve through training on data, agencies that collect more intelligence produce better-performing models, which in turn enable more effective collection. This feedback loop means that early AI adopters gain compounding advantages that are difficult for late movers to close, creating structural incentives for rapid deployment that governance mechanisms have so far struggled to moderate.
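The shape of this feedback loop can be made concrete with a toy model in which collection scales with capability and capability grows with cumulative data. The growth rate and units below are arbitrary illustrative numbers; the point is only the dynamic, under which a head start widens rather than closes.

```python
# Toy model of the data-capability feedback loop: each period, an agency
# collects data in proportion to its current capability, and its
# capability compounds as the data accumulates. All numbers illustrative.

def simulate(start_year: int, years: int, growth: float = 0.3) -> float:
    """Cumulative 'data' after `years` for an agency starting in `start_year`."""
    data, capability = 0.0, 1.0
    for year in range(years):
        if year >= start_year:
            data += capability          # collection scales with capability
            capability *= 1 + growth    # more data improves the model
    return data

if __name__ == "__main__":
    early = simulate(start_year=0, years=10)  # adopts immediately
    late = simulate(start_year=3, years=10)   # adopts three years later
    # A three-year delay costs far more than three years' worth of data,
    # because the early adopter's collection rate has already compounded.
    print(round(early / late, 2))
```

The model is deliberately naive—real capability growth saturates, and data quality matters as much as quantity—but it captures why late movers face a gap that grows rather than shrinks with time.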
Civil Liberties and the Limits of AI Surveillance
The capabilities described in this chapter raise a fundamental question about what kind of societies AI surveillance will produce. The civil liberties implications are not peripheral concerns—they are central to understanding what the technology actually does in practice.
AI surveillance tools built for foreign intelligence do not inherently distinguish between foreign and domestic targets. A system trained to identify threat indicators will flag those indicators wherever they appear. When intelligence agencies acquire powerful AI surveillance capabilities, the temptation to apply them domestically is substantial, because the threats they are designed to counter—terrorism, espionage, organized crime—are not confined to foreign actors. The legal frameworks separating foreign intelligence collection from domestic surveillance, such as the Foreign Intelligence Surveillance Act in the United States, were designed for an era before AI systems could process domestic and foreign signals simultaneously at massive scale.
The expansion of ICE's facial recognition use is illustrative. Built for border security—a specific, legally distinct context—Mobile Fortify is now deployed in interior neighborhoods for domestic investigations. Each incremental expansion was justified on its own terms, but the cumulative effect has been a fundamental shift in the scope of domestic surveillance. This pattern is common enough to constitute a predictable dynamic: surveillance tools built for narrow purposes tend to expand as the technology becomes operationally embedded, oversight mechanisms lag deployment, and institutional convenience creates pressure for wider application. Once surveillance infrastructure is built, it is rarely dismantled.
Democracies face a genuine tension here that does not resolve cleanly. The security benefits of AI surveillance are real—faster threat detection, broader coverage, more effective counterterrorism. The civil liberties costs are also real—erosion of privacy, chilling effects on speech and assembly, and the risk of discrimination in systems that have historically shown bias against minority communities. The appropriate balance between these interests is legitimately contested, and it will be set not only by courts and legislatures but by the design choices embedded in AI systems, the procurement decisions of intelligence agencies, and the degree to which meaningful oversight can be maintained over technologies that operate faster than any oversight process was designed to handle.
Key Takeaways
AI has fundamentally transformed intelligence and surveillance capabilities, shifting the constraint on intelligence operations from collection capacity to analytical judgment. Modern AI systems can process millions of intercepts per day, identify cross-dataset patterns invisible to human analysts, and compress the intelligence cycle from weeks to hours—changes that represent a qualitative, not merely quantitative, shift in what intelligence agencies can accomplish.
Facial recognition is the most visible manifestation of AI-enabled surveillance in domestic contexts, and its rapid deployment has outpaced the legal and regulatory frameworks designed to govern it. Mission creep—the expansion of surveillance tools built for narrow purposes into broader domestic use—is a well-documented dynamic that legislative responses have so far struggled to contain.
The security of AI systems has itself become a national security concern. The NSA's AI Security Center reflects a recognition that AI used in intelligence operations is both a capability and a vulnerability: compromising an AI system can be more damaging than disabling it outright, because subtle manipulation of AI outputs produces confident, systematic misjudgment at scale rather than obvious failure.
Global competition in AI surveillance is intensifying, with China's combination of domestic surveillance infrastructure, advanced AI research, and technology export strategy posing particular challenges. Because AI systems improve through data feedback loops, early adopters gain compounding advantages that create structural pressure for rapid deployment and make governance difficult to establish before capabilities are already embedded in operations.
The tension between security and civil liberties in AI surveillance is real and unresolved. The tools exist, the threats are genuine, and the institutional incentives for deployment are strong. Whether democratic societies can maintain meaningful oversight of AI surveillance capabilities—and where the appropriate limits of those capabilities lie—remains one of the most consequential questions raised by the technology.
Sources:
- White House Issues National Security Memorandum on Artificial Intelligence | Covington & Burling
- NSA Artificial Intelligence Security Center | National Security Agency
- National Security and the AI Action Plan: A Deep Dive | Steptoe
- Addressing the Gap within SIGINT PED Analysis with AI | U.S. Army
- Signals Intelligence (SIGINT) Industry Analysis and Forecast 2025-2034 | GlobeNewswire
- Transforming Signals Analysis and Capabilities | Booz Allen
- Facial Recognition and Privacy Concerns in the Age of AI | ISACA
- Bill to Ban ICE and CBP Use of Facial Recognition Technology | Rep. Jayapal
- Mission Creep: AI Surveillance at DHS | American Immigration Council
- Inside the AI surveillance state | WBUR On Point
- UK Plans National Facial Recognition: Testing in 2026 | State of Surveillance
- With no federal facial recognition law, states rush to fill void | NPR
- Toward Regulation: Addressing the Legal Void in Facial Recognition Technology | Privacy International
- America's AI Action Plan | July 23, 2025 | The White House
Last updated: 2026-02-25