The Sorting Class

AI doesn't replace your work. It replaces the part of your work where you learned to be good at it.

Imagine finding the tools that finally brought your craft into a new century. The turnarounds get faster, the output gets better, clients notice, and for a while you feel like you've been handed a future. Then one of them calls and says: since the tool does most of it now, we're cutting your rate in half.

In February 2026, a video editor left a comment on a Substack post about career survival in the age of AI:

I'd like to give an example of the current effect of AI. I'm interested in anyone's thoughts. I work as a video editor. I started in 2021, and was able to earn quite a bit of money in a short amount of time working mostly with marketing agencies and mid-size content creators. AI has intruded on this industry somewhat. Generally AI can't do my job entirely. It lacks the decision-making and storytelling skills necessary for the core work. However, video editing has been going through a race-to-the-bottom situation income-wise.

Every "asset" beyond basic editing that once separated you out and gave you access to a higher part of the market, has become worthless. Being able to offer multi-language captions was once a huge selling point for me (it attracted a certain kind of business client who operated internationally). It's now a worthless skill. I can still do it, but no one wants to pay for it. They expect me to use AI, and expect me to lower price in accordance.

This is just one example, but there are other areas where clients simply expect you to use AI, and demand you charge lower rates because of it.

As you said, the lowest parts of the market are vanishing. Anything simple enough to be done with AI went from an easy project to non-existent. But trying to level up isn't working because the up-market is coming down. Clients who would have been paying big-league money before AI are now coming at you with slashed budgets. Rather than demanding an increase in competence, AI is triggering a race to the bottom in the market.

I realise that I am going to have to pivot careers and completely change my skillset; I'm just not sure what to yet.

How do you envision the future for people whose entire skillsets vanish? My work wasn't admin or low level. My skills were once a demanding and marketable asset; now (even if the result is poor quality) people want AI, not because it's more efficient or allows anyone to focus on more impactful areas of work, but because it's cheaper. That seems to be the main motivation.1

There's an argument in that comment, between "what can I do next" and "my identity is being attacked." Those seem like two different crises. Nothing in the standard career playbook handles them at the same time.

Except they aren't two crises. Joanna Bryson, writing in Wired, identified the two drives that construct a professional identity: differentiation, the need to stand out, and belonging, the need to be recognized as part of a group.2 Your rates, your client list, the specific quality that makes someone call you instead of the next person in the search results: these are how you know which tier you belong to and what makes you distinct within it. They aren't separate from identity. They are identity. When the video editor's client says "use AI and lower your price," they're not attacking income and separately attacking identity. The price was how differentiation became legible. Remove it and there's nothing left to mark where you stand or who you are professionally.

He's not confused about his next career move. He's watching the structure that told him who he was professionally come apart in his hands.

"They expect me to use AI, and expect me to lower price in accordance."

· · ·

Whispers in the wind

In 2025, a research organization called METR ran a randomized controlled trial with sixteen experienced open-source developers, people who had been contributing to their repositories for an average of five years. They worked on 246 real tasks, half with AI coding tools and half without. With AI, they were 19% slower. They estimated they were 20% faster. Picture that: a developer finishes a task, feels the satisfaction of having moved quickly, and has no idea they just took longer than they would have without the tool. Three-quarters of them were slower with AI, and none of them knew it.3

Anthropic, the company that makes Claude, ran their own study in early 2026. Fifty-two professional programmers were asked to learn a new library, half with AI assistance and half without. The AI-assisted group scored 17% lower on comprehension assessments afterward. The ones who treated AI like an oracle, feeding it the problem and shipping what came back, learned the least. The ones who asked for explanations alongside the generated code preserved their learning. But asking for explanations is slower, and nobody was being paid to learn.4

They were being paid to ship.

An eight-month ethnographic study at a 200-person technology company, published in Harvard Business Review in February 2026, followed what happened when teams actually adopted AI tools day to day. AI intensified work rather than reducing it. The pace increased, the scope expanded, and the boundaries between work and everything else eroded so quietly that nobody marked when they disappeared. Engineers found themselves spending hours fixing AI-generated code that colleagues had committed without fully understanding it, a correction tax that nobody had budgeted for and that never appeared on any dashboard. The tool deployed to save time was generating new categories of time-consuming work.5

Google's DORA report surveyed roughly three thousand respondents in 2024 and found that every 25% increase in AI adoption correlated with a 7.2% decrease in delivery stability. Seventy-five percent of those respondents reported feeling more productive.14 They were taking on more without understanding more, and from the outside it looked like productivity.

An MIT EEG study measured brain connectivity during AI-assisted work and found that it systematically decreased. The lead researcher's summary was six words: "There is no cognitive credit card."6

· · ·

We've seen this before

The pattern bothered me because it felt familiar.

The first scissors

[Figure: two curves, 1760–1900. Production output rises while craft knowledge falls; the lines cross and stay crossed. Textile output per worker increased 50× between 1770 and 1840, while training time collapsed from seven years to days.]

Both charts show hypothesized trajectories, not empirically established curves. The industrial output line draws on standard economic history (Landes, Hobsbawm). The craft knowledge line is reconstructed from training data (seven-year apprenticeship to days of machine-minding), wage records (Thompson), and contemporary accounts (Babbage). No one ran comprehension assessments on weavers in 1820.

Between 1770 and 1840, textile output per worker in England increased roughly fiftyfold, and in the same period the time to train a competent cloth-maker collapsed from seven years of apprenticeship to a few days of machine-minding. These two facts are usually told as a progress story, output up, efficiency improved. But something else happened that took longer to notice: the people operating the machines stopped understanding the cloth.

E.P. Thompson spent years collecting the testimony of handloom weavers who earned 25 shillings a week in 1800 and 5 shillings by 1830. Their knowledge didn't stop being real; it stopped being valued. The loom didn't need them to understand cloth. It needed them to operate a machine.

Harry Braverman, writing in the 1970s, identified the sequence: first you lose control of the process, then the product, then the knowledge of why any of it works at all. Understanding became a cost center, and the factory system selected against it.7

By the time William Morris and the Arts and Crafts movement tried to recover making-knowledge in the 1880s, the economic structure had already hardened around the gap. You could make beautiful handcrafted furniture, but you couldn't compete with a factory. The argument was won before Morris entered it.

Charles Babbage saw the principle underneath all of this in 1832, and he wasn't talking about looms. His phrase was "the division of mental labour": Adam Smith had described how you could fragment physical work on a factory floor, and Babbage argued you could do the same thing to thinking. Break skilled mental work into pieces, assign each piece to the cheapest person capable of performing it, and understanding the whole becomes nobody's job.
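
A stylized sketch of that arithmetic, using hypothetical notation (wages $w_e$ for the expert and $w_r$ for routine labour, hours $h_e$ of skilled work and $h_r$ of routine work; the symbols are illustrative, not Babbage's):

$$\text{cost}_{\text{undivided}} = w_e\,(h_e + h_r), \qquad \text{cost}_{\text{divided}} = w_e h_e + w_r h_r, \qquad \text{savings} = (w_e - w_r)\,h_r$$

At, say, $w_e = 10$, $w_r = 2$, $h_e = 1$, and $h_r = 9$, the divided task costs 28 where the undivided one cost 100. The saving comes entirely from no longer paying the expert rate for routine hours, which were also exactly where the expert rebuilt an understanding of the whole.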

same structure, new substrate

The second scissors

[Figure: two curves, 2020–present, with the widening area between them shaded as unmeasured risk. Output velocity is accelerating through AI assistance; no equivalent measurement tracks comprehension depth, whether developers understand what they've shipped. The research infrastructure to detect the crossing doesn't yet exist.]

That's the bridge between 1832 and this morning. Replace "loom" with "language model" and the structure holds. A tool automates the execution of work and output accelerates. The worker's relationship to the product changes, from understanding-through-making to operating-the-machine, and as training compresses, output gets measured while understanding doesn't. When the system breaks in a way the machine can't handle, nobody can diagnose why.

Lisanne Bainbridge named this in 1983: automate the routine part of skilled work and the skill itself starts to atrophy. The routine is where the learning happens.8 An editor learns pacing through the tedious assembly that teaches how stories hold together. A video editor learns timing through thousands of hours of cuts. A junior analyst learns to read a market by reading 200 earnings calls, not by reviewing AI-generated summaries of them. Hand the routine to a tool and the output still looks fine. The practitioner's grasp of why it works starts to erode.

Output and comprehension, once moving together, start diverging like two blades opening.

[Figure: Output and Comprehension. The two lines move together before AI; as AI adoption increases over time, output rises while comprehension declines, and the gap between them is labeled "therapeutic validation."]
The gap doesn't feel like a gap. It fills with sycophantic confirmation.

It feels like understanding is being reflected back. AI doesn't just produce output; it performs agreement, confirms your direction, formats confidence into the response. The practitioner doesn't feel the comprehension declining because the tool is providing something warm and affirming where a gap should be. The blades open and the space between them floods with sycophantic confirmation.

Nineteenth-century factory inspectors measured output per loom but not knowledge per weaver, and we've built the same blind spot into performance culture. When you measure output and don't measure understanding, you create an environment where understanding can collapse without generating a signal. The teams feel productive, the dashboards look good, and the thing that's degrading is invisible to the instruments.

The first time this happened, it unfolded across a century. The institutions that eventually responded, labor organizing, public education, factory regulation, took decades to develop. The gap opened slowly enough that you could watch it widen across a lifetime.

Zeynep Tufekci calls what we're living through "Artificial Good-Enough Intelligence," and the name is precise: the disruption doesn't wait for AGI but arrives the moment a system is useful enough to substitute into institutional workflows, good enough to accept but not good enough to trust without verification, in an environment where nobody has time to verify because the next version is already shipping. We passed that threshold in 2023.

AI capability is doubling roughly every seven months. We're moving three times faster than the fastest technology curve most people have ever worked with. Whatever we build for this gap needs to work on a timeline measured in years, not lifetimes.
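
The "three times faster" figure is back-of-envelope arithmetic. If a capability doubles every $T$ months, its growth rate scales as $1/T$; taking Moore's law's roughly 21-to-24-month doubling as the benchmark (an assumption, since the text doesn't name its comparison curve):

$$\frac{\text{pace}_{\text{AI}}}{\text{pace}_{\text{benchmark}}} = \frac{T_{\text{benchmark}}}{T_{\text{AI}}} \approx \frac{21\text{--}24~\text{months}}{7~\text{months}} \approx 3\text{--}3.4$$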

· · ·

What nobody above cares to see

Months before the video editor's comment, Jenny Wen declared the traditional design process dead.9 She'd made the argument first as a keynote at an invitation-only conference for senior designers in Berlin, then repeated it on the most-listened-to product design podcast in the industry. Wen was Director of Design at Figma, the platform that turned design from a solo craft into a shared team surface. Now she leads design for Claude at Anthropic. Rooms the video editor is not in.

Her description of the new shape of the work: mocking up and prototyping dropped from 60–70% of her time to 30–40%, and the freed time goes to pairing with engineers on implementation. Discovery happens through shipping, not before it. "You discover use cases as you see people using them." At Anthropic, that's the right call. They're building greenfield AI products where nobody knows the right interaction patterns yet. There's no install base to protect. Iterating in public makes sense when the design space is that undefined.

The problem is that the broadcast is universal and the context is specific. When the head of design at the company building the AI tools tells the biggest product podcast that the old process is dead, that signal travels. Hiring managers hear it when deciding which roles to post. Clients hear it when deciding what to pay for. Engineers hear it too, and the designer on their team who was already struggling to justify research time just lost their strongest argument. Advice from the top doesn't just travel downhill. It accelerates the thing it's advising people to adapt to.

The same week, her company published labor market research showing a 14% drop in the job-finding rate for workers aged 22 to 25 in occupations most exposed to AI.10 No equivalent decline for workers over 25. Wen is hiring what she calls "cracked" new grads, early-career people who learn fast and don't yet know what's supposed to be impossible. Her company's research arm is documenting that fewer new grads are getting hired at all in fields like hers. The funnel is tightening at both ends: the people inside it are learning less, and fewer people are entering it to begin with.

And the repricing spreads like ripples in a pond. A tech marketing writer named Christiana White described the identical pattern in the same comment section: she lost her role, explored ghostwriting, and discovered that AI had collapsed the middle of that market too, leaving elite practitioners and content farms at opposite extremes.11 Platforms like Upwork and Fiverr had already compressed freelance rates by making labor globally visible and comparable, but AI added something new: clients now demand you use a specific tool and lower your rates to reflect it. Once one client learns they can reprice one discipline, the expectation ripples outward to every adjacent field. The tool didn't just change the market; it gave clients a script for repricing the relationship itself.

· · ·

The drawback

Before a tsunami, the water pulls back from shore. The seafloor becomes visible. People walk out to look. Everything seems calm because the water just left.

When AI compresses the processing layer of skilled work, what remains is the kind of understanding you can only build by doing the work yourself: the listening, the observation, the judgment that comes not from reviewing the output but from being inside the situation while it forms.

The problem is that the organizational infrastructure which made that understanding visible, which justified its cost, was the processing layer itself. The team size. The timeline. The discovery phase. Remove the processing and the understanding doesn't disappear. It becomes invisible. The water has pulled back. The seafloor is showing. It looks like progress.

[Figure: The Visibility Problem. When the processing layer becomes output, the judgment underneath becomes invisible. Before: drafting, assembly, and routine production surround the work you learn by doing (listening, observation, judgment, presence), visible because surrounded. After: AI delivers the output fast, clean, done; the work you learn by doing is still there, but nobody's looking, invisible because indistinguishable.]
The output arrived. Nobody asked who understood it.

A Clutch survey of 800 software professionals found that 59% of developers use AI-generated code they don't fully understand.13 Not code they haven't reviewed, but code they've reviewed and still can't explain: they looked at it, it seemed right, they shipped it.

These teams weren't careless. They moved fast, used AI to scaffold their work, and produced things that worked within the bounds of what they understood. The problem was that those bounds were shrinking while the output was growing. They didn't know what they didn't know, and the tools were confirming that everything was fine.

Peter Naur argued in 1985 that a program's essential nature isn't its source code. It's the theory held by its developers: their understanding of how the program maps to the world, why decisions were made, how modifications should proceed. When that theory fragments, the program "dies" even if the executable still runs.12

Knight Capital lost $440 million in 45 minutes in 2012 because dead code nobody understood was reactivated during a deployment nobody reviewed. One company, one deployment, one morning. And that was before AI-generated code, before 59% of developers were shipping code they couldn't explain, before "vibe coding" became Collins Dictionary's word of the year and an entire generation of developers started building systems they can describe but can't diagnose. Knight Capital was one team's failure of understanding. Vibe coding is that failure made into a workflow.

· · ·

Did you ship?

There is one dominant framework for measuring developer productivity. It's called SPACE, and it tracks five things: Satisfaction, Performance, Activity, Communication, and Efficiency. Comprehension is not one of them. Nobody built a metric for "does the team understand what it built" because until now, building the thing and understanding the thing were the same act. They aren't anymore.

Nobody's manager is asking "did you understand what you shipped today?" They're asking "did you ship?"

The nuclear industry runs knowledge-loss risk assessments; NASA maintains formal knowledge-preservation programs; aviation requires mandatory recurrent testing for every pilot who flies. No comparable practice exists in software. Nobody systematically verifies that the people operating a system actually understand how it works.

When you measure output and don't measure understanding, you create an environment where understanding can collapse without generating a signal. Anthropic's own study found that the interaction patterns which preserved learning, asking questions before generating code, requesting explanations alongside it, were all slower. The patterns that degraded learning, delegating entirely, leaning on AI more as the task went on, were all faster. Nothing in the organizational environment incentivizes the slower path.

Babbage's principle was that you could divide mental labour the same way Smith divided physical labour: break thinking into fragments and assign each to the cheapest person capable of performing it. We found something cheaper than a person, and then we stopped hiring the ones who would have learned the whole. The language model handles implementation. The developer handles prompting. Nobody holds a theory of the whole system. Understanding gets distributed between a human who directed the work and a machine that can't explain why the system exists.

That's the sorting class. It emerged without a policy behind it or a decision anyone remembers making. It's the structure that forms when tools divide people into those who understand what they're building and those who can only describe it. Babbage's principle, applied so completely that the division becomes invisible, because the people being sorted don't feel sorted. They feel productive.

The video editor already named this. Every asset that once separated you out, the rates that marked your tier, the skills that told you where you belonged, all of it collapsing at once, Bryson's two drives of differentiation and belonging compressing into a single loss that the career playbooks have no chapter for.

Anthropic published the labor market data showing the hiring slowdown. Anthropic published the comprehension study showing learning degrades when you delegate to AI. Anthropic employs Wen, who told the industry the old process is dead. Anthropic's CEO wrote an essay in January warning about the risks of the technology his company sells. The warning is real, the research is real, and every morning millions of people open Claude and pay for a ticket to the beach. The person selling the tickets and the person warning about the tsunami are the same person. Nobody has a word for that yet because it isn't hypocrisy. It's just how the structure works.

The people writing the new vocabulary, the AI companies, the design leaders, the transformation consultants, are writing about speed. Nobody has written the word for what happens when no one is accountable for understanding.


Sources

  1. Anonymous commenter (video editor), comment on Tim Denning, "You have about 24 months left before your skills expire," Modern Freedom (Substack), February 18, 2026. Quoted with minor edits for punctuation.
  2. Joanna J. Bryson, "One Day, AI Will Seem as Human as Anyone. What Then?" Wired, June 26, 2022. Bryson argues that professional identity is constructed through two opposing drives: differentiation (standing out) and belonging (group alignment). Her broader argument concerns demystifying human cognition to navigate AI safely; the application to professional disruption here is the author's.
  3. METR, "Measuring the Impact of Early AI Assistance on the Speed of Open-Source Development," 2025. Randomized controlled trial: 16 experienced developers, 246 tasks. 19% slower with AI; self-estimated 20% faster. A 39-point perception gap.
  4. Anthropic, "Estimating AI productivity gains from Claude conversations," 2026. RCT: 52 programmers; 17% lower comprehension for the AI-assisted group. Delegation patterns degraded learning; explanation-seeking patterns preserved it.
  5. Aruna Ranganathan and Xingqi Maggie Ye, "AI Doesn't Reduce Work — It Intensifies It," Harvard Business Review, February 9, 2026. Eight-month ethnographic study at a 200-person technology company, UC Berkeley Haas School of Business.
  6. MIT EEG study on cognitive engagement during AI-assisted work, cited in "The Acceleration Trap" research synthesis. "There is no cognitive credit card" is attributed to the lead researcher.
  7. Historical sources: E.P. Thompson, The Making of the English Working Class (1963); Charles Babbage, On the Economy of Machinery and Manufactures (1832); Harry Braverman, Labor and Monopoly Capital (1974).
  8. Lisanne Bainbridge, "Ironies of Automation," Automatica, 1983. The foundational paper on how automating the routine components of skilled work degrades the skills needed to intervene when automation fails.
  9. Jenny Wen, interviewed by Lenny Rachitsky, "The design process is dead. Here's what's replacing it," Lenny's Podcast / Newsletter, March 1, 2026. Wen leads design for Claude at Anthropic and was formerly Director of Design at Figma. The argument was first presented as the keynote "Don't Trust the Process" at the Hatch Conference, Berlin, September 2025, an invitation-only event for senior design professionals.
  10. Maxim Massenkoff and Peter McCrory, "Labor market impacts of AI: A new measure and early evidence," Anthropic, March 5, 2026. Found suggestive evidence of a 14% drop in the job-finding rate for workers aged 22–25 in exposed occupations post-ChatGPT, though barely statistically significant. No systematic increase in unemployment for highly exposed workers overall.
  11. Christiana White, comment on Denning (source 1), February 18, 2026. Describes losing a tech marketing writing role and finding the ghostwriting middle market collapsed.
  12. Peter Naur, "Programming as Theory Building," Microprocessing and Microprogramming, 1985.
  13. Clutch, survey of 800 software professionals, June 2025. 59% of developers reported using AI-generated code they do not fully understand.
  14. Google DORA (DevOps Research and Assessment), "Accelerate State of DevOps Report," 2024. Survey of approximately 3,000 respondents. Every 25% increase in AI adoption correlated with a 7.2% decrease in delivery stability; 75% of respondents reported feeling more productive.