2.1.3 Social Mobility and Class Dynamics
Jamal grew up in West Baltimore. His mother cleaned office buildings. His father was in and out of the picture. The high school he attended had a 60% graduation rate. But Jamal was sharp—test scores high enough to get him into the University of Maryland on financial aid.
He studied computer science. Worked night shifts at Target to cover what loans didn't. Graduated in 2024 with a 3.4 GPA and a portfolio of projects he'd built on nights and weekends. Not perfect, but solid. He applied to 127 tech jobs in the spring of 2025.
Eighty-three applications never got a human response. They were filtered by AI resume screeners before anyone at the company saw them. The algorithms looked at his degree (state school, not elite), his neighborhood (zip code correlated with poverty), and—though the companies would never admit this—his name. Research shows AI resume screening tools prefer white-associated names 85.1% of the time. Black male candidates are disadvantaged in 100% of direct comparisons with white males.
Jamal didn't know this. He just knew he kept getting auto-rejections. He assumed he wasn't good enough. He started to believe it.
Eventually, one company—a small startup that used human recruiters—gave him a shot. He got the job. He's doing well. But he thinks about those 83 rejections sometimes. How many of them were algorithms deciding he wasn't worth a conversation?
AI was supposed to make hiring more objective, more fair, more meritocratic. It's doing the opposite.
The Meritocracy Myth
For most of the 20th century, America sold a story: work hard, get an education, and you can rise. It didn't matter where you started. Talent would win out. The playing field might not be perfectly level, but it was fair enough that merit would shine through.
That story was always more myth than reality, but it had enough truth to be believable. Social mobility existed. People from working-class backgrounds did sometimes climb into the middle class or beyond. Education was the ladder.
AI is breaking what remained of that ladder.
Here's the mechanism: AI systems are trained on historical data. They learn patterns from the past. If the past was biased—and it was—the AI learns the bias. A hiring tool trained on data from a company that historically employed mostly white, male graduates of elite universities will learn to prioritize white, male graduates of elite universities. It's not malicious. It's pattern recognition.
But the result is the same. People from under-represented groups, from non-elite schools, from the wrong zip codes, get filtered out before a human ever sees their application. Not because they're unqualified. Because the algorithm learned that people like them weren't hired in the past.
This isn't theoretical. It's documented. AI trained on hiring data from companies that historically didn't recruit from Historically Black Colleges and Universities (HBCUs) learns to downgrade applicants from those institutions. If certain zip codes correlate with race and the training data shows few successful candidates from those areas, the AI learns to deprioritize those addresses.
It's redlining, but automated. And because it's algorithmic, it's invisible. Candidates don't know why they were rejected. Companies don't know they're discriminating. The bias is laundered through code.
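To make that laundering concrete, here is a minimal sketch in Python, using synthetic data and hypothetical feature names: a screening model is trained on historically biased hiring decisions with the protected attribute deliberately withheld, and it still learns to penalize candidates from high-poverty zip codes, because zip code is a proxy for the attribute it never saw.

```python
# A minimal sketch of proxy discrimination (synthetic data; illustrates
# the mechanism, not any specific vendor's tool).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute -- never shown to the model.
group = rng.integers(0, 2, n)  # 0 = majority, 1 = minority

# Zip code correlates strongly with group (residential segregation).
zip_poverty = np.clip(rng.normal(0.3 + 0.4 * group, 0.1), 0, 1)

# Qualification is identically distributed across groups.
skill = rng.normal(0, 1, n)

# Historical hiring was biased: equally skilled minority candidates
# were hired less often.
logit = 1.5 * skill - 2.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on "neutral" features; group is excluded.
X = np.column_stack([skill, zip_poverty])
model = LogisticRegression().fit(X, hired)

# Score two identically skilled candidates who differ only in zip code.
for label, zp in [("low-poverty zip", 0.3), ("high-poverty zip", 0.7)]:
    p = model.predict_proba([[0.0, zp]])[0, 1]
    print(f"{label}: predicted hire probability = {p:.2f}")
# The model never saw the protected attribute, yet it reproduces the
# historical disparity, because zip code carries the same signal.
```

The point of the sketch is that "we never use race as a feature" is no defense: any feature correlated with race can carry the same signal into the model's decisions.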
The Education Divide, Amplified
If hiring algorithms are where mobility dies at the end of the pipeline, education is where it breaks at the beginning. AI's uneven distribution across schools is quietly widening the attainment gap that determines who even reaches the point of applying for the jobs algorithms filter.
Research in the UK found that private school students use AI for schoolwork almost twice as much as state school students. The gap isn't primarily about willingness—it's about access, cost, and the presence of knowledgeable adults who can guide effective use. Private schools are integrating AI thoughtfully, teaching students to use it as a thinking tool, to interrogate its outputs, and to maintain critical judgment. Under-resourced state schools, by contrast, often either ban it out of fear or allow unsupervised use, where it becomes a shortcut rather than a learning accelerator.
The result is a divergence in the skill that will most define competitiveness in the labor market. Wealthy students are developing AI fluency—the ability to work with, alongside, and through AI systems. Students in under-resourced schools are falling further behind, not because they lack capacity, but because they lack exposure and guidance.
The pattern extends into university admissions. AI models used to assess applicants have been shown to favor students from wealthier backgrounds, because the features they evaluate—zip code, high school quality, extracurricular breadth, standardized test preparation—are proxies for wealth rather than genuine indicators of potential. A student from a wealthy suburb with tutors, enrichment programs, and college counselors presents a profile that the algorithm reads as high-potential. A student who spent those same years working a part-time job to contribute to family income does not—not because the student lacks ability, but because the algorithm has been trained to confuse advantage with merit.
The Credentialing Arms Race
As traditional education pathways fragment and degree inflation sets in, credentialing has become both more important and less accessible to those who most need it.
The bachelor's degree that once opened most professional doors now competes with specific technical certifications, micro-credentials, AI fluency signals, portfolio demonstrations, and evidence of continuous self-directed learning. This shift is often framed as democratizing: if competence can be demonstrated without a four-year degree, the argument goes, then anyone with access to online learning can compete. Credentials become portable. The old gatekeeping institutions lose their power.
In practice, the reality is considerably messier. Building a competitive portfolio requires time, and time is the scarcest resource for people working multiple jobs, caring for family members, or simply trying to survive financially. Bootcamps and certification programs—even nominally affordable ones—carry direct costs, and beyond the fees they require cultural capital: knowing which credentials carry actual weight in the labor market and which are worthless paper. That knowledge flows through professional networks, alumni communities, and family connections that are distributed unevenly across class lines.
The credentialing arms race, as a result, tends to favor those who already have resources. Wealthy young people can afford to spend time building portfolios, attend credentialing programs, and tap the networks that signal which qualifications matter. Those without that support structure are competing in a game whose rules keep changing, often without knowing what the current rules even are.
AI compounds this. Hiring tools scan resumes for recognizable credential keywords, for signals that fit the pattern of "successful candidate" in the training data. If a candidate lacks those signals—because they couldn't afford them, didn't know they needed them, or were too occupied with other obligations—the algorithm filters them out before a human can exercise judgment about potential rather than demonstrated credentials.
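In its crudest form, that kind of filter is just keyword matching. The sketch below (the keyword weights and threshold are invented for illustration, not drawn from any real vendor) shows how two equally capable candidates can receive opposite outcomes depending solely on whether their experience is expressed in the filter's recognized vocabulary.

```python
# A deliberately simplified keyword screen (hypothetical weights/threshold).
CREDENTIAL_KEYWORDS = {
    "aws certified": 3, "pmp": 3,            # expensive, high-signal
    "machine learning": 2, "kubernetes": 2,
    "bootcamp": 1, "online course": 1,
}
SCREEN_THRESHOLD = 4

def screen_resume(text: str) -> bool:
    """Return True if the resume passes the automated keyword screen."""
    text = text.lower()
    score = sum(w for kw, w in CREDENTIAL_KEYWORDS.items() if kw in text)
    return score >= SCREEN_THRESHOLD

# Two equally capable candidates: one could afford recognized credentials,
# one built the same skills in forms the filter does not recognize.
print(screen_resume("AWS Certified, PMP, 2 yrs experience"))     # True
print(screen_resume("Built and ran inventory system for family "
                    "business; self-taught cloud deployment"))   # False
```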
The Class Ceiling
Social mobility in America has been declining for decades. Children born in 1940 had roughly a 90% chance of earning more than their parents. Children born in 1980 had about a 50% chance. Current projections suggest those born in 2000 may face odds below 40%. AI did not create this trend, but it is accelerating and structuring it in new ways.
The mechanism is winner-takes-most dynamics. The best AI tools, the most effective training, and the most productive applications of the technology concentrate among those who already have resources. They use those advantages to pull further ahead, while everyone else falls further behind—not through any single dramatic disparity, but through countless compounding marginal disadvantages across hiring, education, credentialing, and networking.
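The arithmetic of compounding is worth making explicit. In the toy calculation below (the stage names and the 85% figure are hypothetical), no single stage looks dramatic, yet the pipeline as a whole more than halves a disadvantaged candidate's odds.

```python
# A toy illustration of compounding marginal disadvantage: a candidate who
# clears each stage at 85% the rate of an advantaged peer (hypothetical rate).
STAGES = ["K-12 AI access", "admissions", "credentialing",
          "resume screening", "networking"]

relative_odds = 1.0
for stage in STAGES:
    relative_odds *= 0.85
    print(f"after {stage}: {relative_odds:.2f}x the advantaged peer's odds")
# After five stages the ratio is ~0.44: each gap is marginal, but the
# pipeline as a whole cuts the candidate's chances by more than half.
```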
The result is the emergence of a new stratification that cuts across older class categories. At the top sits an AI-fluent elite—those who grew up with technology access, attended schools that taught AI literacy, and can afford both the latest tools and the credentials to demonstrate competence with them. They use AI to amplify their productivity, their creativity, and their earning capacity. Below them is a broad AI-adjacent middle: people with partial access and partial skills who are adapting and holding on, but not advancing. Their jobs are being fragmented by automation; their wages are stagnant; their paths forward are narrowing. And at the bottom, a growing AI-excluded population—those without reliable technology access, without digital literacy, and without the economic margin to acquire either. They are being filtered out by algorithms before human judgment can intervene, displaced by automation in roles that offered entry-level footholds, and unable to afford the education or credentials that might allow them to compete.
The consequences for traditional upward mobility are stark. Research on AI adoption in firms shows that junior employment declines sharply following adoption while senior employment remains stable. Entry-level positions—historically the first rung on the ladder of professional advancement—are disappearing. Without that first rung, the ladder becomes theoretical. The promise of meritocracy requires, at minimum, the opportunity to demonstrate merit; AI-driven hiring is systematically denying that opportunity to those who need it most.
The Geography of Stagnation
Class is not the only axis along which AI is stratifying opportunity. Geography is increasingly decisive, and the two reinforce each other in ways that compound disadvantage.
As discussed in Chapter 1.4.2, AI investment is concentrating heavily in a small number of coastal cities. The San Francisco Bay Area, Seattle, Boston, and New York together account for a disproportionate share of AI employment, venture capital, and the informal knowledge networks through which new opportunities flow. Physical proximity to those ecosystems matters enormously—for internships, for serendipitous professional encounters, for absorbing the tacit knowledge of an industry that isn't yet written down anywhere.
The intuitive policy response is mobility: if opportunity concentrates, move toward it. But moving is not a neutral act for people without resources. Relocation is expensive. It requires social capital—someone who can help you find housing, make introductions, navigate an unfamiliar city. It requires severing ties to support networks—family, friends, community institutions—that provide crucial stability to people with thin financial margins. For the professional-class worker who can afford moving costs, deposits, and a few months of uncertainty, relocation is a genuine option. For the low-income worker, it often is not.
The result is that geography becomes a form of structural disadvantage that reinforces class barriers. A talented young person in rural Mississippi and a talented young person in Palo Alto may be equally capable, equally motivated, and equally hardworking. But the Palo Alto student has access to better-funded schools, stronger internship pipelines, denser professional networks, and immediate exposure to the signals and skills the labor market is actually rewarding. When both apply for the same position, the algorithm is likely to favor the Palo Alto candidate—not because it explicitly rewards geography, but because geographic advantage expresses itself through every other proxy the algorithm weighs: school quality, credential type, extracurricular profile, network connections. Geography becomes destiny. Concentrated AI investment means that the gap between places—already wide—is widening further, and the populations left behind are increasingly those who were already disadvantaged.
The Diversity Collapse
The aggregate effects of algorithmic bias in hiring are most visible in data on racial and gender disparities. AI resume screening tools have been shown to disadvantage Black male candidates in 100% of direct comparisons with white males—not a tendency, not a statistical skew, but a systematic outcome across every tested case. The disparity extends to gender and age: models that present themselves as objective have been documented to penalize women, older workers, and people with disabilities at rates that track closely with the discrimination patterns embedded in the human hiring processes on which they were trained.
The explanation is straightforward. AI hiring tools are trained on historical hiring data. When those histories reflect decades of discrimination—often unintentional, embedded through network effects, unconscious bias, and institutional path dependence—the model learns those patterns as signal. It does not know it is replicating discrimination; it is simply optimizing for the features associated with "successful candidates" in the training data. The result is discrimination at scale, executing thousands of filtering decisions per day with a speed and opacity that far exceeds any individual human bias.
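Findings like the 100% figure come from matched-pair audits: feed a screener two resumes that are identical except for the candidate's name, and compare the scores. A minimal sketch of that methodology follows; `score_resume` is a hypothetical stand-in for whatever model is under test, and the names and resume template are illustrative.

```python
# A minimal matched-pair (name-swap) audit of a resume screener.
from itertools import product
import random

RESUME_TEMPLATE = (
    "{name}\n"
    "B.S. Computer Science, University of Maryland, GPA 3.4\n"
    "Projects: inventory API, ML pipeline. Experience: retail, tutoring."
)

WHITE_ASSOCIATED = ["Greg Walsh", "Emily Baker"]
BLACK_ASSOCIATED = ["Jamal Washington", "Lakisha Robinson"]

def name_swap_audit(score_resume) -> float:
    """Fraction of matched pairs where the white-associated name scores
    higher on an otherwise identical resume (1.0 = biased in every pair)."""
    wins = trials = 0
    for w, b in product(WHITE_ASSOCIATED, BLACK_ASSOCIATED):
        trials += 1
        wins += (score_resume(RESUME_TEMPLATE.format(name=w))
                 > score_resume(RESUME_TEMPLATE.format(name=b)))
    return wins / trials

# An unbiased scorer should hover around 0.5 on average; the studies cited
# above report 1.0 for Black male vs. white male comparisons.
print(name_swap_audit(lambda resume: random.random()))
```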
The claim that AI removes subjectivity from hiring was always misleading. An objective process does not produce objective outcomes when the inputs are historically biased. Landmark litigation in 2024–2025, including Mobley v. Workday, a collective action covering millions of job seekers over the age of forty, has begun to establish legal accountability for these harms. But the law moves slowly relative to the deployment of these systems. The practical consequence is that algorithmic bias is actively narrowing the pipeline of opportunity for the groups that have historically been most excluded from it, at precisely the moment when those groups most need access to AI-adjacent careers.
The False Promise of Skill-Based Hiring
One of the most popular responses to credential inflation and network-dependent hiring is the move toward skill-based evaluation: assessing candidates not on where they went to school or who they know, but on what they can demonstrably do. It sounds like an egalitarian correction. In practice, it often reproduces class stratification through a different mechanism.
Skills do not develop in a vacuum. They develop in environments that provide time, resources, mentorship, and access to tools. Wealthy young people enter the labor market having spent years in enrichment programs, internships, and independent projects—not because they worked harder, but because they had the structural support to develop demonstrable competency while their less affluent peers were working to support themselves or their families. The candidate who built a robust portfolio during four years of university and the candidate who worked forty-hour weeks in retail while completing the same degree are not on equal footing in a skill-based evaluation system. The algorithm cannot see potential, only evidence of prior development—and prior development is itself a function of prior advantage.
Skill-based hiring also creates a problem of legibility. What counts as demonstrated skill, and who decides? In practice, the recognized signals of competence—the platforms, portfolio formats, credential types, and project domains that AI screening tools are calibrated to reward—are defined by the same professional communities that already benefit from existing networks. Workers without access to those communities may have equivalent or superior capabilities, but expressed in forms the algorithm does not recognize.
The deeper issue is that skill-based hiring addresses a real problem—over-reliance on credentials as proxies for ability—without touching the structural conditions that determine who can develop which skills. Unless those conditions change, the appearance of objectivity in evaluation does not produce genuinely meritocratic outcomes. Competence becomes a category that the system can only perceive in people who already had the resources to demonstrate it in the expected way.
The Structural Picture
The sections above describe different facets of a single structural problem: AI systems, deployed in a society that was already deeply unequal, are being optimized in ways that amplify existing inequalities rather than correct them. Individually, each mechanism—biased hiring algorithms, unequal access to AI-enhanced education, credential barriers, geographic concentration, algorithmic discrimination—might appear to be a specific technical problem amenable to a specific technical fix. Together, they constitute a system in which advantage compounds at every stage of the life course.
Wealthy people use AI to become more productive and better connected. AI-fluent students from well-resourced schools enter the labor market more prepared. The algorithms sorting those candidates reward the signals that advantage produces. People who make it through that filter use AI tools to increase their leverage and earnings further. Those who don't, don't. The gains from AI-driven productivity flow predominantly to those who were already positioned to capture them, while the risks—job displacement, algorithmic exclusion, credential obsolescence—fall disproportionately on those who were already vulnerable.
The point is not that AI is inherently inequitable. It is that technological capability does not determine how technology is deployed, and current deployment choices are systematically favoring those who were already advantaged. The ladder of social mobility has not been removed, but the rungs are spaced further apart, and the sorting mechanisms at the base of the ladder have been calibrated to screen out people before they can begin to climb.
What Can Be Done
The outcomes described in this chapter are not technologically inevitable. They reflect choices—about how to design and train AI systems, whether to subject them to regulatory oversight, how to distribute access to the tools and skills they create, and how to structure the governance of algorithmic decision-making in consequential domains.
Regulatory intervention in AI hiring is among the most direct levers available. Transparency requirements—mandating that companies disclose the factors their AI screening tools evaluate—would at minimum create accountability and enable external audit. Mandatory bias testing, with results subject to public reporting, would create incentives to address discriminatory patterns before they are embedded in widespread deployment. Human review requirements for AI-filtered candidates in certain employment categories could preserve the possibility that judgment might catch what an algorithm misses. Legal frameworks are beginning to develop through litigation and legislation, but they remain fragmented and slow relative to the pace of AI adoption in hiring.
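What the simplest form of such a bias test might look like is sketched below: the EEOC's long-standing four-fifths rule flags any group whose selection rate falls below 80% of the highest group's rate. A real audit would go well beyond this single ratio, but even this level of mandatory reporting would make disparities like the ones Jamal faced visible.

```python
# A minimal disparate-impact check: the EEOC "four-fifths rule"
# (illustrative; real audits involve far more than this single ratio).
def four_fifths_check(selected: dict, applied: dict) -> dict:
    """selected/applied: counts per group, e.g. {"A": 40, "B": 10}."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    # A group whose selection rate is under 80% of the highest group's
    # rate is flagged as evidence of adverse impact.
    return {g: {"rate": round(r, 3), "flagged": r < 0.8 * top}
            for g, r in rates.items()}

print(four_fifths_check(selected={"white": 120, "black": 30},
                        applied={"white": 400, "black": 200}))
# white rate 0.30, black rate 0.15 -> ratio 0.5, below 0.8: flagged
```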
On the education side, the most important intervention is ensuring that AI literacy is not a luxury good. Public investment in AI tools for under-resourced schools—not merely hardware and bandwidth, but curriculum, teacher training, and ongoing technical support—is a necessary condition for narrowing rather than widening the attainment gap. Universal access to AI-powered tutoring and learning support, free at the point of use, could substantially reduce the advantage currently accruing to wealthy students. Evidence on what effective, equity-oriented AI integration looks like is available; the gap between that evidence and actual implementation remains significant.
Addressing geographic concentration requires sustained public investment in digital infrastructure, including genuine broadband access in rural and under-served areas, as well as investment in educational and economic institutions outside major tech hubs that could create more distributed opportunity. The productivity gains from AI are currently being captured in a small number of regions; taxation of those gains with reinvestment in broader economic development represents one mechanism for addressing this concentration.
More fundamentally, any serious response to AI-driven inequality requires engaging with the distributional question directly: who benefits from the economic gains AI generates, and through what mechanisms can those gains reach people who are being left behind by AI-driven displacement? Universal basic income, expanded social insurance, and publicly funded retraining programs have all been proposed as components of such a response. The political will to implement any of them at the necessary scale has not yet materialized. Whether it does is one of the defining social policy questions of the coming decade.
Summary
AI is not a neutral force with respect to social mobility—it is an amplifier, and in a society that already has deep structural inequities, it is amplifying those inequities at every stage of the social ladder.
AI hiring tools, trained on historically biased data, systematically filter out candidates from under-represented groups, non-elite institutions, and disadvantaged geographic areas before human judgment can intervene. AI-enhanced education is accruing disproportionately to wealthy students, who benefit from better access, more thoughtful integration, and stronger support systems. The shift toward credentialing and skill-based hiring, though nominally meritocratic, reproduces class hierarchies by rewarding demonstrated competency that is itself a function of prior advantage. Geographic concentration of AI investment is turning location into a structurally determining factor in life outcomes. And documented algorithmic bias in hiring is actively narrowing opportunity for the groups that have historically been most excluded.
The combined effect is the emergence of a new class stratification structured around AI access and fluency: an AI-fluent elite pulling ahead, an AI-adjacent middle holding on, and an AI-excluded population falling further behind. Upward mobility requires the opportunity to demonstrate merit; AI systems, as currently deployed, are denying that opportunity to many of those who need it most. These outcomes are not inevitable—they reflect policy choices, design choices, and governance failures that targeted regulation, public investment, and redistribution could meaningfully address. Whether those interventions materialize at the required scale remains an open and urgent question.
Key Takeaways
- AI resume screening tools disadvantage Black male candidates in 100% of direct comparisons with white males and favor white-associated names 85.1% of the time — not through malice, but by learning historically biased patterns from training data and executing them at scale.
- The education gap compounds at the start of the pipeline: UK private school students use AI for schoolwork nearly twice as much as state school students, with wealthy schools teaching AI fluency as a skill while under-resourced schools either ban it or allow unsupervised use that becomes a shortcut.
- AI hiring tools filter candidates based on signals that proxy for wealth (zip code, school prestige, credential type) before any human can exercise judgment about potential — effectively encoding the class ceiling into algorithmic infrastructure.
- Social mobility is already declining (children born in 1980 had ~50% odds of out-earning their parents, down from 90% for the 1940 cohort), and AI is accelerating this by concentrating productivity gains at the top of the skill and asset distribution while removing the entry-level positions that served as the first rung of the ladder.
- Entry-level positions are disappearing fastest: research shows junior employment declines sharply after AI adoption while senior employment remains stable, removing the opportunity to demonstrate merit that upward mobility requires.
- Skill-based hiring, though nominally meritocratic, reproduces class hierarchies by rewarding demonstrated competency that is itself a function of prior advantage — time, resources, mentorship, and access to tools distributed unevenly across class lines.
- A new AI-stratified class structure is emerging: an AI-fluent elite pulling ahead, an AI-adjacent middle holding on, and an AI-excluded population falling further behind — filtered by algorithms, displaced by automation, and priced out of the credentials needed to compete.
Sources:
- 85% of AI Resume Screeners Prefer White Names | The Interview Guys
- Gender, Race, and Intersectional Bias in AI Resume Screening | Brookings
- Bias in AI-Driven HRM Systems | ScienceDirect
- Fair AI in Hiring: Experimental Evidence on Biased Hiring Algorithms | SAGE Journals
- People Mirror AI Systems' Hiring Biases | University of Washington
- No Thoughts Just AI: Biased LLM Recommendations | arXiv
- AI Bias in Hiring: Algorithmic Recruiting and Your Rights | Sanford Heisler Sharp
- Ensuring Fairness in AI: Addressing Algorithmic Bias | YIP Institute
- Generative AI as Seniority-Biased Technological Change | SSRN
- How Does AI Impact Social Mobility? | techUK
- AI in Education: Can It Raise Us Up or Divide Us Further? | Centre for Progressive Policy
- The Potential Impact of AI on Equity and Inclusion in Education | OECD
- AI's Future for Students Is in Our Hands | Brookings
- Social Mobility in the AI Era: Risks and Opportunities | CoachBright
- Shaping the Future of Social Mobility Through AI | Social Tech Trust
- Artificial Intelligence and Social Mobility | EY Foundation
- Educating in the AI Era: The Urgent Need to Redesign Schools | Learning Policy Institute
- AI and Education | I by IMD
Last updated: 2026-02-25