The Convergence Problem: Rethinking the 2028 Global Intelligence Crisis
AI makes automation trivial. Without productive imperfection, differentiation disappears.
Citrini Research's provocative "2028 Global Intelligence Crisis" essay dropped in late February 2026 and immediately moved markets. Within days, Citadel Securities published a formal rebuttal. Paul Krugman compared the episode to Orson Welles' War of the Worlds broadcast. Reuters ran a column urging readers to "deflate the AI doom bubble". Mike Konczal formalized distribution critiques using DSGE models. Noah Smith called the scenario "just a scary story". Casey Newton at Platformer noted it was remarkable that a major market-maker felt compelled to rebut a Substack post.
I deliberately held off from writing about it. Not because I didn't have thoughts, but because large portions of the argument touched on macroeconomics, geopolitics, and labor dynamics that sit well outside my professional expertise. I wanted to absorb the full range of reactions before weighing in on the parts I actually know well: AI agent capabilities, SaaS, software systems, and what happens when you try to run a real business on top of rapidly shifting infrastructure.
Now that the discourse has settled, I think there's a problem embedded in the thesis that most of the commentary hasn't fully reckoned with.
The core narrative goes something like this: intelligence becomes abundant, agents coordinate work, software builds itself, and knowledge work compresses into automated systems that operate faster than humans can follow. The Citrini scenario maps this through specific transmission channels: agentic coding compressing SaaS margins, agents eroding intermediation businesses, layoffs weakening mortgage underwriting, private credit losses propagating into insurers, fiscal capacity shrinking as tax receipts decline while transfer needs rise.
To be clear, I'm summarizing their framing here, not independently validating each link in the chain. Some of those transmission channels are more plausible than others, and the scenario's force comes from stacking them sequentially, not from any single claim.
Most of the follow-on discussion stayed anchored to a shared assumption: more intelligence leads to more output, which leads to more value.
That assumption is doing a lot of heavy lifting. And I'm not convinced.
AI isn’t just accelerating how we build software. It’s compressing it toward sameness.
As the cost of building approaches zero, optimization becomes the default. Every workflow gets streamlined. Every inefficiency gets removed. Every edge gets sanded down.
Individually, these changes look like progress.
Collectively, they collapse variation.
The Bottleneck Didn't Disappear. It Moved.
AI is genuinely good at removing friction from production. Writing code, generating documents, synthesizing analysis, scaffolding systems. That part of the story is real, and I see it every day in my own work.
But production was never the only bottleneck. It was just the most visible one.
The harder parts of execution have always lived somewhere else: coordinating across teams, aligning incentives, managing risk, navigating regulatory environments, deciding what not to build. The Citadel Securities rebuttal touched on this, emphasizing S-curves, infrastructure constraints, and integration friction. Ethan Mollick called the scenario "hard science fiction": useful as scenario building, but not a fully plausible path. And despite exponential capability gains, most organizations have changed remarkably little so far. They already struggle to operationalize the tools they have. Intelligence alone doesn't resolve that.
So as AI removes friction from production, something subtle happens. The bottleneck doesn't disappear. It moves. From building to absorbing. From creating to deciding. From output to alignment.
And that shift changes the calculus for a lot of things we take for granted, starting with SaaS.
SaaS Was Never Just About Software
One of the more grounded parts of the Citrini scenario is what happens to SaaS when agentic coding tools can rapidly replicate mid-market functionality. The essay frames this as an early catalyst: procurement leverage shifts, margins compress, and large portions of the market reprice.
What’s notable is that even skeptics largely agree on this point. The debate isn’t whether some SaaS models come under pressure. It’s how far that pressure propagates. The Citadel authors and Noah Smith dismiss the broader scenario, yet still concede that parts of the software stack are exposed.
But that framing misses something more structural.
This isn’t just about pricing pressure. It’s about convergence.
If software can be generated on demand from the same underlying models, then many of the differentiators SaaS products relied on (implementation speed, feature depth, workflow design) start to collapse toward the same baseline. Different vendors. Same shape.
B Capital's analysis gets closer to this, framing the shift as a capital allocation problem. The focus isn’t on predicting the full macro outcome, but on observing where AI is already compressing labor into software today.
And in those areas, the pattern is consistent: as generation gets easier, variation starts to disappear.
Push this further and the natural question becomes: if AI can generate internal tools, replicate product features in hours, and orchestrate workflows through agents, then why would companies buy SaaS at all?
The common answer is: they won't! Companies will just build it themselves.
But that framing misunderstands what SaaS actually provides.
SaaS exists largely to centralize responsibility, not just functionality.
When you adopt a SaaS product, you're not just getting features. You're outsourcing compliance interpretation, security practices, uptime guarantees, operational reliability, support and escalation paths. You're buying accountability.
If you replace that with AI-generated internal systems, you don't remove that responsibility. You inherit it. Fully. And that creates a kind of complexity that most of the 2028 narratives gloss over.
The question isn't can we build this?
It's who owns the consequences when it breaks?
Because the answer is no longer "the vendor." It's you.
AI doesn't remove complexity. It redistributes it. Instead of writing code, configuring infrastructure, and integrating systems, you now have to manage agent orchestration, model behavior, prompt drift, policy enforcement, auditability, and failure modes you don't fully understand. And the granularity matters enormously. Adoption won't be uniform across an organization. It will vary by task, by workflow, by liability regime. Back-office automation might move fast while regulated decisioning barely moves at all. A slow average can hide fast local collapses, which is exactly how financial contagion often starts.
This is where the "build everything ourselves" narrative starts to break down.
The Illusion of Infinite Throughput
A lot of the optimism around AI assumes that increasing output automatically increases value. More features, more systems, more automation.
But value is not created when something is built. It's created when something is adopted.
Even today, companies struggle to fully utilize the SaaS tools they already pay for. They can't onboard teams fast enough, can't integrate systems into workflows smoothly enough. Not because the tools are lacking, but because people are.
Everett Rogers documented this decades ago in his diffusion of innovations research: adoption follows S-curves governed by human factors (awareness, understanding, trial, and integration), not by production speed. Decades of research in implementation science have formalized what enterprise technology teams experience daily: the bottleneck is organizational absorption capacity, not feature availability.
Customer adaptation rate is the real constraint.
Not engineering speed. Not model intelligence. Human comprehension.
You can ship faster. You can generate more. You can optimize continuously. But if your customers, or your own teams, can't discover changes, understand them, trust them, and operationalize them, then all you've done is increase noise.
Push this to its logical extreme and you get something worse: unbounded change velocity with no absorption layer. Imagine a SaaS product that ships hundreds of meaningful changes every week. Constant iteration driven by AI. At first glance, that sounds like progress. In practice, it creates a different kind of failure mode.
Discovery, evaluation, integration, and team training don't scale with model performance.
They scale with human attention, organizational capacity, and trust. Stability becomes the scarce resource, not innovation.
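To make the mismatch concrete, here's a minimal sketch under made-up numbers: treat shipped changes as arrivals and organizational absorption as a fixed-capacity service. Once ship rate exceeds absorption rate, the backlog of un-absorbed change grows without bound, regardless of how good each individual change is.

```python
# Toy model: change velocity vs. absorption capacity.
# All numbers are illustrative, not measured.

def unabsorbed_backlog(ship_rate, absorb_rate, weeks):
    """Track changes shipped but not yet understood, trusted, or adopted."""
    backlog = 0.0
    history = []
    for _ in range(weeks):
        backlog += ship_rate                   # AI-driven releases land
        backlog -= min(backlog, absorb_rate)   # humans absorb a fixed amount
        history.append(backlog)
    return history

# A team that can absorb ~20 meaningful changes per week:
print(unabsorbed_backlog(ship_rate=25, absorb_rate=20, weeks=5)[-1])   # 25.0, climbing
print(unabsorbed_backlog(ship_rate=200, absorb_rate=20, weeks=5)[-1])  # 900.0: noise
```

A better model raises ship_rate. Nothing in the loop raises absorb_rate, and that missing term is the absorption layer the velocity story leaves out.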
The Citrini essay's own framework actually supports this reading, even if the authors didn't intend it. Their concept of "Ghost GDP", where output and profits rise in the national accounts but money velocity and household spending fall, is a macro version of the same mismatch. Production metrics improve while the human systems that convert output into value can't keep up.
As economics, the concept has real problems. Several critics pointed out that the accounting identities don't hold, and they're right. But as a metaphor (and I want to be explicit that I'm using it as a metaphor, not as an economic model) it captures something important about the gap between production capacity and adoption capacity. That gap is the one I see playing out in enterprise software every day, and it's more useful than most of the rebuttals acknowledged.
The Convergence Problem
Most of the discussion around the 2028 thesis focuses on acceleration. Very little of it explores what happens after everything accelerates.
Here I need to bridge two ideas that might seem contradictory. Earlier, I argued that AI introduces new layers of complexity: agent orchestration, model management, prompt drift, and failure modes that don't map to existing operational playbooks. That's real. Locally, within any given organization, AI adoption creates divergence. New problems, new architectures, new operational surfaces.
But there's a countervailing force. As organizations wrestle with that local complexity, they increasingly collapse on the same solution space. The same frameworks, the same model providers, the same optimization targets, the same architectural patterns. Complexity increases locally, but optimization pressure reduces variance globally. Individual companies face novel internal challenges, but the strategies they converge on to solve them start to look structurally similar.
That's the convergence problem.
The mechanism is specific. It emerges from three inputs that co-occur structurally in modern AI systems: shared data, shared incentives, and fast iteration loops. Or more formally:
When AI systems are trained on similar non-proprietary datasets, optimized toward similar objectives, and reinforced by similar feedback signals, they converge. Not perfectly. Not uniformly. But directionally. You start to see similar workflows, similar decision-making patterns, and similar optimization strategies narrowing toward the same local maxima.
This isn't theoretical. You can already see it in domains where all three inputs are present. In programmatic advertising, AI agents all optimize for click-through rate, learning from the same signals and iterating in real time. Ads converge toward similar messaging, similar formats, similar emotional triggers. And the pattern extends well beyond advertising.
Convergence follows a simple rule: it increases as inputs become shared, objectives measurable, and feedback loops tighter.
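To see the mechanism in miniature, here's a toy sketch (not a model of any real market, and every number in it is made up): independent optimizers hill-climbing the same measurable objective on the same shared signal all land on the same strategy, while private, asymmetric inputs scatter them.

```python
import random

# Toy illustration of the convergence rule: shared inputs + a shared,
# measurable objective + tight feedback loops pull independent
# optimizers toward the same point. Values are illustrative only.

def shared_objective(x):
    """One public 'best practice' peak that everyone can measure."""
    return -(x - 3.0) ** 2

def hill_climb(objective, start, steps=200, lr=0.05):
    x = start
    for _ in range(steps):
        grad = (objective(x + 1e-4) - objective(x - 1e-4)) / 2e-4
        x += lr * grad  # tight feedback loop: act, measure, adjust
    return x

random.seed(0)
starts = [random.uniform(-10, 10) for _ in range(5)]

# Shared data, shared objective: five firms, one strategy.
print([round(hill_climb(shared_objective, s), 2) for s in starts])
# -> [3.0, 3.0, 3.0, 3.0, 3.0]

# Private, asymmetric inputs (e.g. proprietary data) tilt each firm's
# objective, so the same process yields five different strategies.
print([round(hill_climb(lambda x, b=random.uniform(-4, 4):
                        shared_objective(x) + b * x, s), 2)
       for s in starts])
```

Nothing in the first run is coordination or collusion. Convergence falls out of a shared scoreboard and a tight feedback loop, which is exactly what separates the two lists below.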
That gives us a rough map of where to expect it and where not to.
Where convergence pressure is strongest:
- Pricing optimization
- Feature prioritization
- Content marketing
- Procurement strategy
- Operational workflows
These are domains where the inputs to AI decision-making are widely shared, the objectives are quantifiable, and iteration cycles are fast. If every company uses AI to optimize vendor selection, contract terms, and pricing negotiations against the same market data, you get procurement convergence: the same suppliers selected, the same terms extracted, the same cost structures. Supply chains homogenize. The diversity of supplier relationships that once provided resilience against disruption narrows. When a shock hits, everyone is exposed to the same concentrated risk, because everyone optimized the same way.
Where convergence pressure is weakest:
- Brand
- Distribution networks
- Proprietary data assets
- Regulatory positioning
- Organizational culture
These are domains where inputs are asymmetric, objectives resist quantification, and feedback loops are slow or ambiguous. A company's brand isn't a function that can be optimized against a shared dataset. Distribution advantages are built through relationships and infrastructure that can't be replicated by running the same model. Regulatory positioning depends on jurisdiction-specific knowledge and institutional relationships that don't generalize.
That boundary determines where the real competitive risk lives. AI increases pressure toward convergence. It doesn't guarantee convergence as a dominant equilibrium. Markets rarely converge completely because inputs, constraints, and incentives differ. Even with similar models, data asymmetry and distribution channels still create space for differentiation. But in the domains where convergence pressure is strongest, the effects are already visible and accelerating.
Consider content and go-to-market strategy. If positioning, messaging, and sales playbooks are all optimized against the same corpus, companies don’t just build the same products. They start telling the same story.
Some critics of the Citrini piece raised exactly this concern: whether AI would truly unlock creativity or just amplify existing patterns at scale. That's the right question. Because if AI primarily reinforces what already works, then widespread adoption doesn't just increase efficiency. It standardizes it.
Michael Porter's work on competitive strategy warned about precisely this dynamic: when competitors all adopt the same best practices, they converge on "operational effectiveness" while losing strategic positioning. In evolutionary biology, it's called the Red Queen effect. You have to keep running faster just to stay in the same place. AI turbocharges the treadmill, but the treadmill is still a treadmill. You don't pull ahead. You just run faster alongside everyone else.
When Everyone Can Do Everything Right
Take the 2028 thesis seriously, and you have to follow it to its logical conclusion.
A world of abundant intelligence means best practices are instantly accessible, inefficiencies are quickly eliminated, and decisions are continuously optimized. Everyone can "do business" the right way. No guesswork. No experimentation. No waste.
But if everyone is equally optimized, equally informed, and equally capable of execution, then differentiation starts to erode. You don't get dominant players pulling ahead. You get more competitors iterating faster with weaker advantages. Unstable leadership. Compressed margins. Constant churn.
Competition doesn't disappear. It intensifies. Barriers to entry fall, so more competitors enter the market. But because they're all operating with similar capabilities, no one can sustain an advantage for long. This doesn't lead to monopoly. It leads to instability, a kind of high-frequency competition where leadership is temporary and margins evaporate.
And it produces another structural shift: lock-in becomes temporal, not structural.
You're not locked in because switching is hard. You're locked in because switching hasn't happened yet. Agents can migrate workflows. Schemas can be translated. Systems can be reconstituted quickly. The durability that SaaS companies have relied on starts to fade. Not because their products failed, but because switching costs no longer provide meaningful friction.
What looked like advantage was just inertia. Once that fades, similar products become interchangeable. That’s convergence.
The Limits of Optimization
This is where things get counterintuitive. If full optimization drives convergence, the question isn't how to optimize faster. It's where to resist optimization entirely.
Several critiques of the Citrini piece framed governance and regulation as friction that would eventually be automated away. That may be true in some domains. But in others, especially where liability is involved, human oversight persists by design. Not because we can't automate it, but because we choose not to.
The broader commentary ecosystem captured this well: politics isn't downstream of economics in AI adoption. Regulatory constraint, procurement rules, compute taxes, and labor policy can all change the speed and direction of the transition. Policy inertia is one of the Citrini essay's hidden load-bearing assumptions.
But there's a more fundamental reason friction matters: markets don't thrive on perfect optimization.
They thrive on asymmetry, experimentation, taste, bias, and imperfect decisions. That's where differentiation comes from. That's where brand lives. That's where creativity emerges.
This is where you have to separate productive imperfection from waste. Waste is a duplicated process nobody needs. Productive imperfection is an approval workflow that forces a team to articulate why a decision matters, a staged rollout that surfaces edge cases before they become incidents, a human review that catches the thing the model optimized away because it didn't fit the objective function. These aren't inefficiencies. They're control points, alignment mechanisms, and accountability layers.
Consider what happened when Knight Capital's automated trading system deployed faulty code in August 2012. In 45 minutes, the system executed erroneous trades that cost the firm $440 million. The failure wasn’t a lack of optimization. It was the absence of effective friction: no staged rollout, no containment, no reliable way to stop the system once it started. Knight Capital went from profitable to insolvent in less than an hour because the speed of execution outpaced the system’s ability to contain it.
Now scale that dynamic to every business process in an organization. AI doesn't just speed up the good decisions. It speeds up all decisions, including the wrong ones. Without intentional friction (the places where you slow down to verify, validate, and decide), you don't get faster progress. You get faster failure.
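What intentional friction looks like in practice is mundane, and that's the point. Here's a minimal sketch of the kind of containment Knight Capital lacked; the class, thresholds, and numbers are all hypothetical, chosen only to show the pattern of a staged rollout plus a loss-bounded kill switch.

```python
# Sketch of 'productive imperfection' as code: a rollout gate and a
# loss-bounded kill switch around an automated action. The names and
# thresholds are hypothetical; the pattern is the point.

class FrictionGuard:
    def __init__(self, rollout_pct=1.0, max_loss=10_000.0):
        self.rollout_pct = rollout_pct  # expose only 1% of traffic at first
        self.max_loss = max_loss        # hard ceiling before halting
        self.realized_loss = 0.0
        self.halted = False

    def allow(self, entity_id: int) -> bool:
        """Staged rollout: only a slice of entities sees the new logic."""
        return not self.halted and (entity_id % 100) < self.rollout_pct

    def record(self, pnl: float) -> None:
        """Kill switch: stop the system; don't trust it to stop itself."""
        self.realized_loss += max(0.0, -pnl)
        if self.realized_loss >= self.max_loss:
            self.halted = True  # page a human; do not auto-resume

guard = FrictionGuard(rollout_pct=1.0, max_loss=10_000.0)
for order_id in range(10_000):
    if guard.allow(order_id):
        guard.record(pnl=-200.0)  # simulate a faulty deploy losing money
print(guard.halted, guard.realized_loss)  # True 10000.0 -- bounded, not $440M
```

The guard doesn't make the system smarter. It bounds how wrong the system can be before a human re-enters the loop, which is the difference between a bad deploy and an insolvency event.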
There's a counterargument worth addressing directly: doesn't AI also enable hyper-personalization, which should increase divergence rather than reduce it? If every customer gets a tailored experience, aren't we moving toward more variety, not less?
On the surface, yes. AI-driven personalization can produce enormous variation in outputs. Different recommendations, different interfaces, different messaging for different users. But personalized outputs and convergent systems are not mutually exclusive. In fact, they tend to coexist. The underlying architectures, optimization strategies, data pipelines, and decision frameworks converge even as the customer-facing layer diversifies. Netflix and Spotify don't differentiate through their recommendation algorithms. Those are structurally similar. They differentiate through content libraries, brand, and distribution. The personalization layer creates the appearance of divergence while the systems underneath it homogenize.
The convergence risk isn't about what customers see. It's about what companies become. If every organization's internal operations converge on the same optimized patterns, the surface-level personalization doesn't protect against the deeper structural vulnerabilities: correlated failure modes, synchronized supply chain exposure, and the loss of the operational diversity that makes markets resilient.
This applies to consumer systems too, though the calculus is different. Enterprise systems optimize for efficiency constrained by governance. Consumer systems operate on identity and expression. People don't just want outcomes. They want experiences. We may automate routine decisions: which delivery service to use, when to reorder printer ink. But we're far less likely to fully automate how we present ourselves, what we find entertaining, or how we connect with other people. That's a hard boundary on how far intelligence abundance reshapes behavior. Not everything collapses into automation. Some things remain intentionally human.
If AI pushes every system toward the same optimal path, the differences that make markets healthy collapse.
And when they collapse, you don't get innovation.
You get equilibrium.
A flat, frictionless landscape where everyone is equally capable and nobody stands out.
The Real Risk Isn't Disruption
The 2028 Global Intelligence Crisis report frames the future as an intelligence supply shock. Most responses focused on job displacement, productivity gains, and speed of change. Those are real concerns. But there's a quieter risk underneath all of them.
What happens when everything starts to work the same way?
The Citrini scenario isn't going to play out as written. The full doom loop requires too many extreme assumptions to line up at once, and there are real counterforces: policy intervention, diffusion constraints, price adjustments, and the sheer inertia of enterprise adoption. But the sector-level disruptions the essay describes? Those are credible. SaaS repricing, intermediation compression, shifts in procurement leverage. These don't need the whole economy to crater to be real, and they could arrive faster than most incumbents expect. The harder question, and the one that keeps me up at night, is what happens in the gap between disruption and adjustment.
That last point is what I keep coming back to. Even the "boom case" scenarios, like the companion piece by Michael Bloch that argues AI surplus gets redistributed as a deflationary dividend, acknowledge that timing matters. The surplus might be real in the long run, but the short-run distribution lag (much like Engels' Pause in the Industrial Revolution) could still generate real instability.
AI will make building software trivial. It will make parts of running a business easier.
But it won’t make decision-making trivial.
It won’t make accountability trivial.
And it won’t make differentiation trivial.
And if everything becomes equally optimized, those are the only things that matter.
The real risk isn’t that AI disrupts markets. It’s that it flattens them.
Because markets don’t run on automation. They run on variation.
When the cost of building approaches zero, optimization becomes the default.
Every workflow improves.
Every inefficiency disappears.
Every system gets optimized.
And when everything gets better in the same way, the system doesn’t improve. It converges.
That’s the paradox.
In a world where AI removes every bottleneck to production, the bottleneck that remains is the one we started with: human absorption. The ability to decide. To judge. To choose what not to optimize. The advantage doesn’t go to who builds the fastest. It goes to who understands, most clearly, what should stay slow.
Because without friction, without delay, without the small imperfections that force divergence, differentiation doesn’t just get harder. It disappears.
That’s the problem AI creates.
Not a lack of intelligence. A lack of differentiation.