yaodong.dev

If They're Right: Preparing for the AI Economic Shock

Most conversations about AI fall into one of two camps. The optimists talk about productivity, abundance, and a future where everyone benefits from machines doing the hard work. The doomers talk about existential risk, misalignment, and the end of human relevance. Both camps, in their own way, miss the most immediately important question.

What if AI works exactly as the optimists hope, and that’s what breaks the economy?

That’s the scenario laid out by macro researcher Citrini and investor Alap Shah in a recent collaboration. It’s written as a memo from June 2028, looking back at how a financial crisis unfolded. Not a story about AI going wrong. A story about AI going right, and the economic system not being built to handle it.

The Global Intelligence Crisis: a scenario analysis written as a memo from June 2028, looking back at how AI-driven prosperity triggered a financial crisis (www.citriniresearch.com).

This isn’t a prediction. Neither Citrini nor Shah is claiming this is the base case. It’s a stress test of the left tail: the scenario that’s possible but underexplored. The mechanisms it describes are already visible, the exposure is real, and the cost of being unprepared is asymmetric.

You don’t need to believe the 2028 crisis happens to find this useful. You just need to take it seriously enough to ask: if it did, would I be okay?


The Scenario: What They’re Actually Saying

The story begins not with collapse, but with euphoria.

By October 2026 in the scenario, the S&P 500 is flirting with 8,000 and Nasdaq has broken 30,000. AI is exceeding every expectation, productivity is booming at rates not seen since the 1950s, and corporate earnings are record-setting. The market is AI, and AI is working.

The only problem: the economy is not.

The Ghost GDP Problem

The first crack isn’t visible in the headline numbers. Nominal GDP is printing mid-to-high single digits. Output per hour is surging. By every traditional measure, things look good. But something is wrong with how that output circulates.

Citrini coins the term: Ghost GDP. Output that shows up in the national accounts but never moves through the real economy. A GPU cluster in North Dakota can generate the economic output previously attributed to 10,000 white-collar workers in Manhattan, but it doesn’t pay rent in Manhattan. It doesn’t eat at the restaurant downstairs, buy clothes, take vacations, or carry a mortgage. The productivity is real. The circulation is gone.

This is the central paradox. The consumer economy (70% of GDP) runs on human incomes. When the humans are replaced, the machines don’t pick up the spending. The velocity of money flatlines. The real economy withers even as the measured economy looks healthy.
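The circulation argument can be made concrete with a toy sketch. The numbers below are entirely invented for illustration (the 90% and 20% spend rates are assumptions, not figures from the scenario); the point is only that measured output can stay flat while recirculating consumer spending falls.

```python
# Toy two-sector sketch of "Ghost GDP": all numbers are illustrative.
# Assumption: workers recirculate most of their wage income into the
# consumer economy; owners of compute recirculate far less of theirs.

def consumer_spending(labor_output, machine_output,
                      labor_spend_rate=0.9, capital_spend_rate=0.2):
    """Spending that flows back into the consumer economy,
    given who earns the output."""
    return (labor_output * labor_spend_rate
            + machine_output * capital_spend_rate)

# Before: all of a fixed 100 units of output is earned as human income.
before = consumer_spending(labor_output=100.0, machine_output=0.0)

# After: 40% of the same measured output is generated by machines.
after = consumer_spending(labor_output=60.0, machine_output=40.0)

print(before)  # 90.0
print(after)   # 62.0
```

Measured GDP is unchanged at 100 in both cases, but the spending that circulates drops by roughly a third. That gap is the "ghost": real on the ledger, absent from the economy.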

The Displacement Spiral

What makes this particularly dangerous is the absence of a natural brake. The feedback loop runs like this:

AI capabilities improve → companies need fewer workers → white-collar layoffs increase → displaced workers spend less → margin pressure pushes firms to invest more in AI → AI capabilities improve.

Each step is rational at the individual firm level. No one company can opt out. If your competitor cuts headcount and reinvests in AI, you either do the same or fall behind. The collective result is catastrophic. Every dollar saved on human labor funds the technology that makes the next round of cuts possible.
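The loop above can be sketched as a minimal iteration. Every rate here is invented for illustration and carries no empirical weight; the sketch only shows the compounding structure, in which each round of cuts funds the capability that enables the next, larger round.

```python
# Minimal sketch of the displacement feedback loop.
# All parameters are assumptions chosen for illustration.

employment = 1.00      # fraction of white-collar roles still staffed
ai_capability = 1.00   # abstract capability index

for year in range(1, 6):
    # Better AI lets firms cut a larger slice of remaining roles...
    cuts = 0.05 * ai_capability
    employment = max(0.0, employment - cuts)
    # ...and a share of each dollar saved on labor funds AI investment,
    # improving capability for the next round of cuts.
    ai_capability *= 1 + 0.5 * cuts
    print(f"year {year}: employment={employment:.2f}, "
          f"capability={ai_capability:.2f}")
```

The cuts accelerate each year rather than tapering off, which is the "absence of a natural brake": nothing inside the loop slows it down.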

The workers most affected aren’t factory floor workers or manual laborers. They’re the white-collar professionals whose incomes were the foundation of the $13 trillion mortgage market, the people underwriters considered “prime.” When their earning power deteriorates structurally, the assumptions baked into credit markets start to look shaky.

Three Waves

The disruption spreads in waves, each one surprising the people who thought they were safe.

Wave one is the labor market. Expected, talked about, painful but contained. Or so it seems.

Wave two is software. Agentic coding tools reach a step-change capability in late 2025. A competent developer working with AI can now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly, but well enough that the CIO reviewing a $500,000 annual renewal starts asking “what if we just built this ourselves?” Pricing leverage evaporates. The long tail of SaaS gets hit hard. Then the “safe” systems of record follow. ServiceNow misses badly, cuts 15% of its workforce, and reveals something uncomfortable: the same AI-driven headcount reductions at its enterprise customers were mechanically destroying its own revenue base.

Wave three is intermediation, and this is the one most people don’t see coming. Fifty years of the American economy were built on human limitations: people forget to cancel subscriptions, accept bad prices to avoid more clicks, default to familiar brands out of laziness, trust a friendly face over a better deal. Trillions of dollars of enterprise value depended on those frictions persisting. AI agents don’t get tired. They don’t feel brand loyalty. They don’t have a home screen. Consumer agents begin re-shopping insurance renewals, negotiating SaaS contracts, eliminating travel booking markup, compressing real estate commissions from 2.5–3% to under 1%. What people called “relationships” in many of these industries was, it turns out, friction with a friendly face.

The Reflexive Trap

The cruelest part of the scenario is the corporate response. Faced with an existential threat, companies do the only rational thing available: cut headcount, redeploy the savings into AI, use AI to maintain output with lower costs. They become the most aggressive adopters of the very technology disrupting them.

This is different from every historical disruption pattern. Kodak resisted. Blockbuster resisted. They died slowly. In this scenario, the incumbents don’t resist; they can’t afford to. But their rational individual responses create a collective doom loop. By November 2027, the crash arrives. By June 2028, unemployment is at 10.2% and the S&P is down 38% from its highs.


The Mechanisms Already Running

You don’t have to believe the 2028 timeline to take the mechanisms seriously. Most of what the scenario describes isn’t speculation about the future. It’s extrapolation of trends already visible today.

Ghost GDP Is Already Measurable

The divergence between productivity and wage growth has been one of the defining economic puzzles of the last two decades, but the AI era is accelerating it in a specific way. Companies are reporting productivity gains while holding headcount flat or cutting it. Earnings growth isn’t translating into wage growth. The consumer economy looks strained despite GDP figures that appear healthy.

The velocity of money has been in structural decline since the 2008 financial crisis. AI-driven replacement of white-collar roles doesn’t reverse that trend. It accelerates it. When a dollar of economic output is generated by a machine rather than a person, that dollar doesn’t flow back into rent, groceries, and consumer spending. The machine has no wallet.

SaaS Pricing Pressure Is Already Here

The build-vs-buy conversation is happening at Fortune 500 procurement departments right now, not in 2026. CIOs are already renegotiating contracts with leverage they didn’t have 18 months ago. The long tail of SaaS (tools that automate workflows but don’t own the underlying data) is already facing compression.

The ServiceNow dynamic in the scenario isn’t hypothetical either. Any company that sells seats to white-collar workers and whose customers are actively cutting white-collar headcount has a mechanical revenue problem. As AI-driven workforce reduction spreads from tech to finance to professional services, the seat-count model faces structural headwinds regardless of how good the product is.

Intermediation and the Reflexive Loop

The intermediation compression and the reflexive corporate loop described in the scenario are both already observable. Real estate commissions are compressing for the first time in decades, catalyzed by the 2024 NAR settlement and accelerated by AI-equipped buyers and sellers. Financial advice fees are under pressure from robo-advisors and AI-augmented platforms. Travel booking platforms are losing relevance as AI assistants plan and book end-to-end. Insurance, tax preparation, and routine legal work are all in the early stages of commoditization.

Meanwhile, every round of tech layoffs partially funds the next generation of AI infrastructure. Every SaaS company under pricing pressure accelerates its AI feature development to justify its contract. Every bank cutting analyst headcount is simultaneously investing in AI research tools. The mechanisms are live. The question is only whether they reach crisis velocity.


What Gets Destroyed

The scenario is precise about what breaks first and why.

Businesses Built on Human Friction

The broadest category of destruction is businesses whose value proposition depends on human limitations. Not businesses that serve humans, but businesses that exploit the inefficiencies of being human. Subscription businesses that renew passively because cancellation is annoying. Pricing structures that persist because nobody shops five platforms before buying. Commission structures that survive because information asymmetry made the intermediary necessary. AI agents eliminate those frictions continuously, on behalf of the consumer, around the clock.

The Middle of the Labor Market

The scenario targets a specific band of workers: the white-collar professionals who process information, synthesize data, follow procedures, and produce outputs that can be described in a sufficiently detailed prompt. Junior analysts, paralegals, mid-level coders working on well-defined features, writers producing commodity content, managers whose primary job is coordinating human processes.

This isn’t every job. But it’s a very large number of jobs, the backbone of the professional services economy, and it’s the income base the mortgage market was built on.

SaaS Without Real Moats

The distinction that matters isn’t “SaaS vs. not SaaS.” The real question is whether a product owns the data or merely processes it, whether it owns the relationship or merely facilitates it, whether it’s replaceable by a team with agentic coding tools and a few weeks.

Workflow automation tools, lightweight project management, business intelligence with commodity data sources, integration middleware: these are the most exposed. Switching costs, build costs, and moats are all compressing at the same time.

Credit Built on White-Collar Income

This is the mechanism that turns an economic slowdown into a financial crisis. Prime mortgages are underwritten on the assumption that white-collar income is stable and growing. If that assumption is structurally impaired, the $13 trillion mortgage market, and everything levered against it, gets repriced.

It doesn’t require most white-collar workers to lose their jobs. It just requires enough uncertainty about white-collar income stability that underwriting assumptions break.


What Survives

Not everything breaks. The scenario is specific enough about who gets hurt that it also implies who doesn’t.

The Compute Layer

The scenario is explicit: wealth concentrates at the GPU cluster level. The owners of compute see their wealth explode as labor costs vanish. This is the most direct hedge against the scenario. Not speculative AI software companies, but the actual picks-and-shovels: chips, hyperscaler infrastructure, and the energy required to run it.

There’s an irony worth sitting with: the best hedge against AI-driven economic disruption is owning the AI. But that’s how the economics work.

Hard Assets and Physical Scarcity

Things machines cannot replicate: land, energy infrastructure, ports, water rights, supply-constrained real estate. Hard assets have historically been the right place to be when returns to capital diverge sharply from returns to labor (as in the Gilded Age or the post-2008 decade). The scenario accelerates that divergence dramatically.

There’s a supply-side logic here too. AI-driven construction or AI-optimized manufacturing can increase output efficiency, but it can’t create more land in San Francisco or more pipeline capacity in the Gulf. Physical scarcity is a durable moat in a way that software-based moats increasingly are not.

Judgment, Accountability, and High-Stakes Relationships

The scenario reveals that conventional wisdom overestimated the durability of “relationships” in commodity intermediation. A real estate agent whose value was primarily information access gets disrupted. But it also implies, without quite saying it, that genuine high-stakes judgment is stickier.

The surgeon making a complex call. The executive making a bet-the-company decision with incomplete information. The lawyer navigating a genuinely novel legal situation. The investor synthesizing macro, micro, and human context into a portfolio. These roles involve accountability, context, and judgment under uncertainty in ways that are harder to automate, not because AI lacks the pattern recognition, but because the accountability has to sit somewhere human.

The key question to ask about any role: if AI gets this wrong, who is responsible? Where that answer matters, where the consequences of error are significant and traceable, human judgment retains value.

People Who Direct the Machines

The most durable near-term position isn’t competing with AI; it’s directing it. The people who understand what agents can and can’t do, who can spec complex workflows, evaluate outputs, catch failure modes, and connect AI capability to real business problems, these people become substantially more productive, not less relevant.

This isn’t “learn to prompt.” It’s developing genuine fluency with how AI systems work: their capabilities, their failure modes, their limits. The gap between someone who uses AI superficially and someone who understands it deeply is growing, not shrinking.


What to Actually Do

Three tracks. Be honest about which one applies most to you; most people need all three, in different proportions.

Career: Audit Your Exposure

The most useful exercise is uncomfortable but simple. Look at what you get paid for and ask: how much of this could be described as a detailed prompt?

The higher that percentage, the more exposed you are. Not necessarily immediately (organizations move slowly, displacement takes time, and experience creates context AI doesn’t have). But the direction is set.

The response isn’t panic. It’s migration. Move your work toward judgment, direction, and accountability. If you’re an analyst, the goal isn’t to analyze faster; it’s to develop the judgment layer above the analysis. If you’re a manager, the goal isn’t to coordinate processes more efficiently; it’s to make decisions the process can’t make. Get genuinely fluent with AI tools in your domain, not because it makes you look good, but because it changes what you can do and what you’re responsible for.

Capital: Stress-Test Your Portfolio

You don’t need to restructure everything. But running the scenario against your current holdings is a useful exercise.

What in your portfolio depends on intermediation margins, white-collar income growth, or human inertia as a business model? What is exposed to SaaS multiples that assume ARR remains recurring? What is levered to credit structures built on professional income?

On the other side: do you have any exposure to compute infrastructure, hard assets, or physical scarcity? Not as a speculation on AI winning, but as a hedge against the disruption scenario even partially unfolding.

Increase liquidity more than feels comfortable. In disruption scenarios, optionality has asymmetric value. Distressed assets get cheap, and the people with cash available to act are the ones who benefit.

Mental Model: Shift from Earning to Owning

The deepest implication of the scenario is a shift in the returns-to-capital vs. returns-to-labor ratio. If that shift is directionally correct even at a fraction of the scenario’s magnitude, the most important long-term move is migrating from being primarily a seller of labor to being an owner of productive assets, whether that’s equity, real assets, or intellectual property.

This isn’t a new idea. But the scenario makes the urgency sharper. Building wealth through labor income is the dominant strategy of the professional class. If the value of cognitive labor deteriorates structurally, the people who are only selling their time are the most exposed.


The Asymmetry That Matters

To be clear about what this article is and isn’t: it’s not a prediction. Citrini and Shah weren’t making one either. The scenario might not happen. History does suggest that technological disruption creates new categories of work faster than it destroys old ones.

But there’s something different about cognitive labor. Every prior wave of automation targeted physical tasks, leaving the cognitive premium intact. The Industrial Revolution, computerization, even early AI: all displaced hands while increasing the returns to minds. This time, the minds are in scope. Whether the historical pattern holds is a genuine open question, not a settled one.

You don’t need certainty to act. You need asymmetry. The cost of stress-testing your exposure, building some liquidity, developing genuine AI fluency, and reorienting toward ownership is low. The cost of being wrong about income stability and business model durability is high.


This article draws on scenario analysis by CitriniResearch and Alap Shah. Both are worth reading in full.
