AI Can’t Replace Jobs; It Replaces the Leverage Those Jobs Provided

THE CLAIM

AI is not primarily a job-destroying technology. The best data from the last decade show that employment in AI-exposed occupations often grows when firms adopt these systems. The decisive change is subtler and more corrosive: AI strips away the leverage that jobs used to provide.

Across sectors, AI is weakening workers’ bargaining power, dissolving professional identities that anchored status and pay, and internalizing coordination that once depended on human networks and institutions. Employment headcounts remain defensible, but the workers inside those jobs have less control over the terms of their work, less visibility into how decisions are made, and less ability to walk away and be missed. The leverage embedded in jobs is being transferred to the owners of models, data, and infrastructure. That is the real redistribution underway in the AI transition, and it is already visible in 2024-2026 deployments.

THE EVIDENCE

Employment resilience, wage stagnation

MIT Sloan analysis of U.S. labor markets from 2014-2023 finds that occupations most exposed to AI saw employment growth of 3-6 percent in high-adoption firms. Other studies estimate that only about 9 percent of jobs are fully automatable with current or near-term AI, even though around 60 percent of jobs contain subtasks that can be automated. Workers are not disappearing en masse; they are staying in place while significant slices of their work are silently reassigned to software.

At the firm level, AI adopters have grown roughly 9.5 percent faster in sales and employment than similar non-adopters. Yet the productivity bonus largely accrues to capital: wage gains are narrow and concentrated, while profit shares and executive control expand. The system keeps the job headcount while hollowing out the bargaining position those jobs once conferred.

Bargaining power bypassed

Recent labor disputes make the leverage loss visible. In 2024, DHL’s European workers clashed with management over the company’s use of AI for routing and shift allocation. The union’s demand was not to ban automation but to secure enforceable protections for jobs and schedules as AI systems were rolled out. Management’s advantage came from AI’s capacity to handle a large fraction of planning and monitoring work without hiring more supervisors, and without reopening the core agreement.

Because modern models can now shoulder a substantial share of automation tasks (some analyses put roughly 40 percent of foreseeable automation impacts within reach of AI), management can keep operations running even under partial work stoppages. Algorithmic scheduling and cross-training, underpinned by AI decision-support tools from enterprise providers like Azure OpenAI, blunt unions’ strike leverage across logistics, warehousing, and retail. The threat is not that the jobs vanish overnight, but that workers’ last-resort tool for influencing terms and conditions becomes dramatically less effective.

Expert identity commoditized

Generative AI is also dissolving the professional identities that once underpinned wage premia. Brookings-summarized experiments in consulting, grant writing, and customer support found that novice workers using systems such as GPT-4o (OpenAI) or Claude 3.5 Sonnet (Anthropic) achieved 30–40 percent productivity gains and output quality approaching experienced colleagues, while experts gained far less.

On paper this looks like democratized skill. In practice it flattens pay structures and undercuts the signaling value of experience. High-wage analytical roles in business and finance are already under pressure as AI takes over spreadsheet modeling, drafting, and scenario analysis that justified premium pay. In creative sectors, the Writers Guild’s 2023–2025 negotiations turned on a similar dynamic: studios wanted AI writing tools to justify smaller rooms, fewer guaranteed weeks, and weaker residuals for human writers. The job titles remained; the leverage attached to them did not.

Coordination and the “capital singularity”

Leverage also comes from coordination—workers’ ability to find each other, compare notes, and act collectively. AI is being deployed precisely to internalize that coordination inside firms. Recruiting platforms like LinkedIn’s AI-driven tools screen candidates; scheduling engines allocate hours; performance systems flag “low performers” for exit, all operating over data lakes hosted on platforms such as Snowflake and queried through enterprise models from Azure OpenAI or Meta’s Llama 3.1.

As decisions migrate into opaque models, traditional “voice” mechanisms falter. Harvard’s Center for Labor and a Just Economy documents workers on AI-managed platforms who cannot contest disciplinary decisions because they never see, let alone understand, the scoring systems that govern them. Existing consultation and grievance channels were designed for human supervisors, not probabilistic ranking functions.

At the same time, every email drafted, code snippet written with GitHub Copilot Enterprise, or sales call transcribed into an internal model makes the firm’s AI smarter. Data-leverage analysts describe the result as a “capital singularity”: an accumulating intelligence asset that belongs entirely to the organization. When that asset can perform a growing share of high-value coordination and analysis, the practical bargaining power once derived from holding unique knowledge of a product, client base, or workflow evaporates. The job stays. The leverage migrates into the model.

THE STRONGEST OBJECTION

The strongest objection is that AI, properly understood, enhances rather than erodes worker leverage. Large language models, open-source systems, and cheap cloud inference give individuals access to capabilities that once required an entire support staff or corporate IT department. A call-center agent with an AI assistant can handle more complex queries; a solo developer with GitHub Copilot Enterprise can build products that once needed a team; organizers can use models like GPT-4o or Claude 3.5 Sonnet to draft campaigns and analyze contracts. If power comes from competence and exit options, this looks like more power, not less.

Economists add a familiar historical gloss: previous waves of automation displaced particular tasks while raising overall labor demand and real wages. From this angle, the current mismatch between AI-driven productivity and wage growth is a short-term distributional problem, not a structural break. As AI diffuses, the argument goes, new firms will arise, workers will use the same tools as management, and leverage will rebalance.

There is also a democratic version of the objection. AI can be turned back on institutions: unions are already feeding contracts into models to surface hidden clauses; watchdog groups are using open-source systems to parse algorithmic decision records; regulators are sketching AI-era rights frameworks that mandate human oversight of consequential decisions. If workers, advocates, and small businesses can share in the tooling, there is no iron law that AI’s net effect must favor capital.

WHY THE CLAIM HOLDS

The objection captures something real: generative AI can raise individual competence. But leverage is not simply a function of individual productivity; it is a function of control over infrastructure, data, and chokepoints. On those dimensions, current AI deployment patterns overwhelmingly favor capital.

First, firms, not workers, decide where and how AI is integrated. Management selects the tools, configures access rights, and owns the resulting models. Even when workers use the same systems, they do so inside governance structures they do not control. The call-center agent with an AI assistant does resolve more tickets, but the performance traces feed into training runs that make the system better at routing and scripting—tasks that can later be reassigned to a smaller, cheaper workforce or to contractors.

Second, data compounding creates one-way dependency. The “capital singularity” is not just a metaphor; it describes the flywheel in which every task completed with internal AI generates more proprietary data, which in turn improves the model, which justifies shifting yet more discretion to software. Leaving a job does not withdraw any of that capability. Workers remain individually smart, but the collective leverage once derived from holding unique knowledge of a product, client base, or workflow now resides in a system they neither own nor can credibly threaten to withhold.

Third, coordination advantages scale with institutional resources. Open-source models like Llama 3.1 lower technical barriers, but it takes legal teams, compute budgets, and data-engineering capacity—things large employers have and fragmented workers typically do not—to weaponize them in bargaining or oversight. The few counter-examples, such as the Writers Guild building technical capacity to audit AI usage and hard-code contractual protections, required rare levels of organization and sectoral power. They prove that leverage can be rebuilt; they also underscore that it is currently being eroded by default.

Individual upskilling through AI coexists with, and is often harnessed to, a structural shift in power toward those who own the models and the data. That is the asymmetry the “job replacement” narrative obscures, and why employment counts alone badly misstate AI’s impact on human systems.

THE IMPLICATION

If AI is primarily replacing leverage rather than jobs, the central questions for the next decade are institutional, not technological. The axis that matters is not “how many jobs are lost” but “who controls the models, who owns the data they are trained on, and which mechanisms exist for workers to contest and reshape their deployment.”

Coverage that treats AI as a neutral productivity boost or a simple head-count threat misses the deeper story: the quiet re-engineering of bargaining, identity, and coordination inside firms. Some actors are already experimenting with countermeasures—AI-augmented collective bargaining protocols, worker–AI hybrids that preserve human veto power, audits of opaque systems, and equity-sharing schemes that treat training data as a contributed asset. Whether those experiments remain exceptions or become the norm will determine whether AI deepens existing imbalances or becomes a platform for rebuilding worker power in a data-driven economy.
