Prediction Engines Turn Uncertainty Into Corporate Power
Three recent books surveyed in MIT Technology Review converge on a single claim: prediction is no longer just a cognitive skill or a technical aid. It has become infrastructure. The ability to forecast what people will do, buy, believe, or suffer now sits inside global data centers, wrapped in supervised learning pipelines and optimization code, owned by a narrow set of firms and institutions.
Maximilian Kasy calls this new complex “the means of prediction” – data, compute, expertise, and energy organized into industrial-scale forecasting machines. Benjamin Recht traces how a century of decision theory and “mathematical rationality” built computers in the image of an idealized rational agent. Carissa Véliz argues that predictions act like magnets: once believed, they bend reality toward themselves, turning expectations into outcomes and modeling into governance.
Underneath the different lenses is the same leverage shift. Predicting the future used to be a diffuse, fallible, human activity bound up with responsibility and shared risk. Now it is an automated service and a business model. The question is no longer whether the future can be known, but who owns the machinery that decides which futures become possible.
WHAT IT DOES
The predictive systems at issue here are not crystal balls but supervised learning pipelines operating at industrial scale. They ingest labeled data about past behavior or conditions, learn statistical patterns, and output a “best guess” about some future or unknown variable. In practice, that means predicting whether someone will repay a loan, violate parole, click an ad, succeed in college, default on medical bills, or churn from a subscription.
Formally, each system solves the same abstract problem: given input X, estimate target Y as accurately as possible on average. Operationally, these models sit inside decision workflows. A credit scoring model does not remain a number on a server; it gates access to a mortgage. A recidivism model shapes bail decisions. A ranking algorithm predicts which post will maximize engagement and then fills your feed accordingly. Prediction becomes an action in the world, even if a human or institution is technically “in the loop.”
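In code, that pattern is compact. What follows is a minimal sketch, assuming a hypothetical dataset of past loans; the file name, feature columns, and the 0.6 approval cutoff are invented for illustration, not drawn from the books under review.

```python
# Minimal sketch of a supervised pipeline whose prediction gates a decision.
# The CSV, feature names, and 0.6 threshold are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

history = pd.read_csv("past_loans.csv")               # labeled historical records
X = history[["income", "debt_ratio", "account_age"]]  # input X
y = history["repaid"]                                  # target Y: 1 = repaid, 0 = defaulted

# "Estimate Y from X as accurately as possible on average":
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def gate_application(applicant: list[float]) -> bool:
    """The prediction does not stay a number on a server; it gates the mortgage."""
    p_repay = model.predict_proba([applicant])[0, 1]
    return p_repay >= 0.6
```

The point of the sketch is how little of it concerns the applicant as a person: everything decision-relevant has already been folded into the features and the threshold.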
Crucially, these systems run continuously and at scale. Millions of micro-predictions per second optimize ad auctions, logistics routes, dynamic pricing, and content recommendation. Optimization logic derived from decision theory – estimate probabilities, weigh expected utilities, maximize an objective – is encoded in software and left to run on cheap compute and vast behavioral datasets. What was once specialized statistical analysis turns into a ubiquitous layer of automated triage and sorting across social, economic, and political life.
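The decision-theoretic core is even shorter. Here is a toy version, with invented probabilities and payoffs standing in for whatever an ad auction or pricing engine actually estimates:

```python
# Toy sketch of the loop: estimate probabilities, weigh expected utilities,
# pick the action that maximizes the objective. All numbers are invented.
candidate_actions = {
    # action: (estimated probability of a click, payoff if clicked, cost to show)
    "show_ad_A":    (0.031, 2.40, 0.05),
    "show_ad_B":    (0.012, 9.00, 0.05),
    "show_nothing": (0.0,   0.0,  0.0),
}

def expected_utility(p, payoff, cost):
    return p * payoff - cost

best_action = max(candidate_actions,
                  key=lambda a: expected_utility(*candidate_actions[a]))
# Run something like this millions of times per second and "specialized
# statistical analysis" becomes a ubiquitous layer of automated triage.
```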
The effect is a world where uncertainty is constantly harvested, monetized, and operationalized. Prediction engines do not only describe what might happen; they structure which options appear, what risks are priced in, and which lives are treated as profitable bets or write-offs.
WHAT IT ABSORBS
At the human level, predictive AI absorbs three intertwined capabilities: situated judgment about people, collective sense-making about risk, and the political contest over which futures are desirable.

First, it commoditizes judgment. Hiring, lending, parole, admissions, targeted welfare programs – these were always shaped by power and bias, but they relied on accountable people making explicit decisions in context. A manager knew a candidate, a parole officer knew a case file, a loan officer knew a community. That mix of intuition, prejudice, knowledge, and responsibility is now flattened into features in a dataset. Supervised models learn correlations between past inputs and labeled outcomes, then apply them to new individuals encountered only as vectors.
Kasy emphasizes that this does not simply eliminate bias; it hardens it. Historical data in domains like policing, employment, and healthcare encodes structural racism, sexism, and class stratification. When prediction engines learn from those records, they reproduce the past with mathematical polish. Bias ceases to be an individual failing and becomes an infrastructural property of the decision pipeline. Efforts at “fairer” models may tweak error rates but do not change whose history is being universalized as ground truth.
Second, predictive systems absorb the work of collectively negotiating risk. Recht’s history of mathematical rationality shows how human uncertainty was reimagined as a set of quantifiable probabilities and payoffs. In that framing, the correct decision is whichever option maximizes expected utility. When that logic is embedded in software, the messy, value-laden process of deciding what to prioritize – safety over speed, equity over efficiency, long-term stability over short-term gain – is treated as a solved problem. The objective function is rarely up for public debate once it is coded.
Third, prediction engines intrude on the political function of imagining and contesting futures. Véliz’s claim that predictions act like magnets captures this shift. A forecast of rising crime in a neighborhood justifies heavier policing, which generates more recorded incidents, which validate the original model. A projection that a student is “unlikely to succeed” channels them into fewer advanced classes and resources, making failure more likely. The space where people collectively envision alternative futures shrinks as probabilistic expectations start to feel like destiny.
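The magnet dynamic is easy to see in a toy simulation. Nothing below is taken from Véliz's book or from any real deployment; two districts are given identical underlying crime, and the only asymmetry is a slightly higher initial risk score for one of them.

```python
# Toy feedback loop: predicted risk directs patrols, patrols generate records,
# records become next year's "evidence" of risk. All numbers are invented.
recorded_incidents = {"district_A": 11, "district_B": 10}  # seed "historical" data
true_crime = 100  # identical in both districts, never observed directly

for year in range(5):
    # The model's forecast is just the recorded history.
    hot_spot = max(recorded_incidents, key=recorded_incidents.get)
    # Patrols concentrate where the forecast points; incidents are only
    # logged where officers are present to log them.
    recorded_incidents[hot_spot] += true_crime
    print(year, recorded_incidents)

# After a few iterations the data "confirms" district_A is the problem,
# while district_B's identical crime goes largely unrecorded.
```

By its own lights the forecast is never wrong; it simply manufactures the records that validate it.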
The net effect is that human capabilities once distributed across professions, communities, and institutions – judging, anticipating, and deliberating on what should happen – are increasingly intermediated by opaque systems built far away in time and space. The skill shifts from understanding a concrete person or situation to tuning abstractions: data schemas, loss functions, regularization terms, privacy budgets.

WHO GAINS, WHO LOSES
Predictive AI does not simply make decisions faster. It redistributes leverage over whose preferences count and whose risks are acceptable.
Those who control what Kasy calls the means of prediction gain a new kind of structural power. Data-rich platforms, financial institutions, logistics giants, and security agencies own the pipelines from data collection to model deployment. They decide which phenomena are worth modeling, what labels count as “success,” which errors are tolerated, and when a prediction triggers action. Because these systems scale cheaply, each new domain they enter – from advertising to welfare fraud detection to insurance pricing – amplifies the influence of the same small set of actors and their priorities.
Inside organizations, predictive infrastructure strengthens executive and shareholder priorities at the expense of mid-level discretion and frontline judgment. If an algorithmic hiring system ranks candidates, managers are nudged to treat that ranking as objective truth, especially when deviating requires extra justification. When an engagement-optimizing recommender drives revenue, editorial or curatorial staff find their roles reduced to feeding the machine content it can best exploit. Judgment becomes enforcement of the model’s logic rather than a check on it.
The losers are not only those misclassified or denied opportunities, though they bear the sharpest harms. Whole communities become legible primarily as data sources and risk profiles. Historical disadvantage is reinterpreted as predictive signal that justifies further exclusion. High-risk neighborhoods warrant higher premiums or heavier policing. Chronically ill patients look like bad insurance bets. People who already sit at the margins of labor markets are first in line to be filtered out by automated hiring or performance prediction tools.
Democratic institutions also cede ground. When core policy choices are framed as optimization problems for experts – tax systems tuned by behavioral models, resource allocation driven by predictive analytics – public debate is pushed upstream into technical parameter choices that are difficult for non-specialists to contest. Recht’s “mathematical rationality” becomes not just a methodology but a gatekeeping ideology: only those fluent in its language can meaningfully participate in the design of systems that govern everyone.
Meanwhile, alternative forms of rationality lose standing. Community knowledge, narrative evidence, moral claims, and qualitative experience register as “anecdotal” next to a dashboard of probabilities and confidence intervals. Véliz’s point that heavy prediction use correlates with authoritarian tendencies highlights the danger. When forecasts are treated as orders in disguise, obeying them looks like pragmatism and resisting them looks like irrationality.

Some proposed counter-moves, like data trusts or public AI infrastructures, aim to move leverage back toward citizens by collectivizing control over training data and setting democratic constraints on model goals. Yet even these assume that prediction at scale is here to stay. The struggle shifts from whether to adopt predictive governance to who steers it and to whose benefit.
THE TRAJECTORY
If this predictive regime continues to succeed on its own terms, more of social life will be reorganized around the assumption that everything important can be forecast and optimized. Hiring, education, healthcare, policing, credit, and even personal relationships are already targets for predictive modeling. As sensor data, genomic information, and financial and behavioral traces deepen, models will become more fine-grained and confident, at least internally.
That trajectory does not only mean more accurate guesses. It means thicker feedback loops between prediction and reality. School systems may increasingly stream children based on early risk scores, creating segmented futures while claiming scientific neutrality. Health insurers may dynamically price coverage in ways that make chronic illness economically unsustainable for the poor. Cities may police “hot spots” so intensively that escape from a predicted crime trajectory becomes nearly impossible for residents.
On the ideological plane, the worldview Recht describes is likely to consolidate: a culture that treats good governance as synonymous with maximizing formal objectives under uncertainty. Within that frame, calls to slow down or abstain from prediction-heavy interventions sound Luddite, while critiques focused on fairness or transparency risk being absorbed as compliance checklists rather than structural challenges.
Yet the books surveyed also hint at a fork. Kasy’s insistence on democratic control of the means of prediction, and Véliz’s framing of predictions as speech acts rather than neutral forecasts, open a different path. Prediction could be treated more like industrial pollution: sometimes useful, often profitable, but socially toxic without strong collective governance and clear no-go zones. That would mean designing institutions that can say not only how models should be built, but where they should not be used at all.
The deeper trajectory is about who gets to shape the future and on what terms. If predictive AI continues to absorb judgment and risk management into commercial infrastructures, the future becomes something delivered to people rather than made with them. If instead societies reclaim prediction as a contested, accountable practice – one tool among many in democratic life, not an invisible oracle – then human leverage shifts back from those who own the models toward those who must live inside their forecasts.