
AI Isn't Democratising Development. It's Doing the Opposite

AI · Software Engineering · Developer Productivity

TL;DR: AI coding tools aren't levelling the playing field. They're amplifying existing skill gaps. Developers with deep domain expertise and architectural taste are becoming 10x more productive. Everyone else is shipping more bugs, faster.

The Promise vs. The Reality

I used to think AI would democratise software development. Give everyone access to Claude or GPT, and suddenly the junior dev ships as fast as the senior. The bootcamp grad competes with the 10-year veteran.

I was wrong. It's doing the exact opposite.

Six months ago, my team got access to AI coding tools. Everyone was excited. The prediction was simple: "This will make everyone equally productive." The reality? The best developers became untouchable. The rest found new ways to struggle.

Three Developers, Three Outcomes

Here's what actually happened, distilled into three real patterns I've observed in trading tech:

Developer A: Junior, Two Years Experience

Developer A generated 10x more code. Features shipped "faster." On the surface, everything looked great.

Under the surface: a debt explosion. Bugs in production tripled. When things broke, Developer A couldn't debug the AI-generated code because they hadn't written it and didn't fully understand it. Copy-paste solutions without understanding. No architectural judgment. Couldn't distinguish good AI suggestions from garbage.

They left the team after four months, overwhelmed.

Developer B: Senior, No Domain Expertise

Developer B used AI as "faster Stack Overflow." A moderate productivity boost, maybe 2x. But they still made the same architectural mistakes they always made. They just made them faster.

AI didn't fix their blind spots. It accelerated them.

Developer C: Elite, Ten Years in Trading Systems

Developer C explored 10+ approaches before committing to any solution. They rejected 60% of AI suggestions outright. They used AI to eliminate boilerplate, never to make design decisions.

Their productivity on architecture validation went up 5-10x. They single-handedly rebuilt our market data pipeline in two weeks, a project that would have taken three months before AI.

The difference wasn't coding speed. It was taste.

The Real Unlock: Solution Space Exploration

The narrative around AI tools focuses on code generation speed. "Write code 10x faster!" But that's the least interesting benefit.

The real superpower is exploration speed.

Before AI, faced with a problem like "How should we handle market data streaming for our LLM agent?", the process looked like this: research 2-3 approaches, implement one, discover its limitations in production, refactor over three months.

With AI and domain expertise: prototype 10 different approaches in two days, stress-test each against real market conditions, discover edge cases early, and pick the right approach upfront.

I experienced this firsthand when testing TOON vs CSV vs JSON for LLM token efficiency. The traditional approach would have taken four weeks: implementing three parsers, writing comprehensive tests, running comparisons, analysing results. With AI acceleration, it took two days.

But here's the critical detail: I rejected about 40% of the AI-generated code. Type handling bugs, inefficient tokenisation, wrong assumptions about LLM behaviour. The AI accelerated exploration. My domain expertise made it correct.
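The "tokens saved" intuition behind that comparison is easy to sanity-check. As a minimal sketch (not the actual benchmark from the experiment, and leaving TOON aside since its format details aren't covered here), serialising the same hypothetical market data rows as JSON and as CSV shows why tabular formats tend to be cheaper: JSON repeats every key on every record, while CSV states the header once. Character count is used as a crude proxy for token count.

```python
import csv
import io
import json

# Illustrative records standing in for market data rows (made-up values).
rows = [
    {"symbol": "AAPL", "bid": 189.42, "ask": 189.44, "size": 300},
    {"symbol": "MSFT", "bid": 412.10, "ask": 412.15, "size": 150},
    {"symbol": "NVDA", "bid": 875.30, "ask": 875.38, "size": 200},
]

# JSON: every key is repeated on every record.
json_payload = json.dumps(rows)

# CSV: header row once, then bare values.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
csv_payload = buf.getvalue()

# Character count as a rough stand-in for LLM token count.
print(len(json_payload), len(csv_payload))
```

The gap widens with row count: the JSON overhead is per-record, the CSV header cost is paid once.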

Without that domain knowledge, I would have shipped one of those prototypes and learned the hard way which ones break at scale.

Where Domain Expertise Becomes the Filter

In trading tech, AI suggestions routinely miss things that matter:

Race conditions in order handling. AI generates solutions that look clean but fail under concurrent access patterns that are standard in trading systems.

Floating-point precision in financial calculations. Catastrophic in P&L calculations. AI often reaches for convenient representations that introduce rounding errors at scale.

Regulatory compliance edge cases. These aren't in the training data. No model knows your specific regulatory environment.

Latency-sensitive code paths. AI generates "readable" code, not "fast" code. In trading, the difference between a clean abstraction and an optimised hot path costs real money.

Without domain expertise, you ship these bugs. In trading, they cost millions. With domain expertise, you spot the issue in five seconds, reject the suggestion, and ask AI to try a different approach.
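The floating-point pitfall in particular is easy to demonstrate. A minimal sketch using Python's `decimal` module: summing a 0.1 fee ten times with binary floats accumulates representation error, while `Decimal` constructed from strings stays exact. This is the shape of bug an AI suggestion ships happily and a domain expert rejects on sight.

```python
from decimal import Decimal

# Summing a 0.1 fee ten times: binary floats drift, Decimal does not.
float_total = sum([0.1] * 10)            # accumulates representation error
decimal_total = sum([Decimal("0.1")] * 10)

print(float_total)    # 0.9999999999999999
print(decimal_total)  # 1.0
```

Note that `Decimal("0.1")` (from a string) is the exact value; `Decimal(0.1)` (from a float) would inherit the error.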

The Uncomfortable Economics

Developers with deep technical skills, domain expertise in fields like trading or biotech or infrastructure, and architectural taste were already scarce. Probably 1 in 50 engineers.

AI makes them 10x more productive on the right problems. It makes them even more scarce, because the bar for what qualifies as "elite" is higher now. And it makes them dramatically more valuable, because only they can separate signal from slop in AI output.

Meanwhile, the developer competing purely on "how fast can I write code" is in trouble. AI can do that. The grunt work that used to be their domain is being automated.

We're heading toward a barbell distribution: elite developers with superpowers on one end, AI-generated slop on the other. The middle class of "solid but unremarkable" developers is thinning out.

The New Competitive Moats

What AI has commoditised:

  • Writing code fast
  • Knowing syntax
  • Implementing standard algorithms

What AI cannot replace:

  • Domain expertise: AI doesn't know your specific market, your regulatory environment, or your system's failure modes
  • Architectural judgment: AI hallucinates here more than anywhere else
  • Taste: knowing what "good" looks like, when to stop optimising, where complexity is worth it
  • Boundary detection: knowing when to trust AI output and when to reject it entirely

What This Means

For developers: if you're competing on "how fast can I write code," you've already lost. The new game is building deep domain expertise, developing architectural taste, and learning to use AI as an exploration tool rather than a replacement for thinking.

For companies: elite developers with domain knowledge were worth 10x before AI. Now they're worth 100x. Pay accordingly or lose them to companies that will.

For the industry: AI isn't democratising development. It's increasing the returns to expertise. The productivity formula isn't skill + AI. It's skill × AI. Multiplication, not addition. And when your base skill is low, multiplying by AI doesn't help much.
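The multiplication-versus-addition framing can be made concrete with a toy model. The numbers below are purely illustrative assumptions, not measurements: the point is only that under an additive model the gap between developers stays constant, while under a multiplicative model the same AI factor widens it.

```python
def additive(skill: float, ai_boost: float) -> float:
    """Toy model: AI adds a fixed amount of output for everyone."""
    return skill + ai_boost

def multiplicative(skill: float, ai_factor: float) -> float:
    """Toy model: AI scales whatever base skill you bring."""
    return skill * ai_factor

# Illustrative-only numbers: a junior (2) and an expert (9), same AI tool.
for skill in (2, 9):
    print(skill, additive(skill, 5), multiplicative(skill, 5))
```

Additively, both developers gain the same 5; multiplicatively, the expert's lead grows from 7 to 35.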


The slop is real. The superpowers are real. The difference is whether you have the expertise to tell which is which.


If you found this interesting, subscribe to my newsletter for weekly insights on AI engineering, quant development, and building systems at the intersection of both.
