The Model That May Starve on Its Own Success
A $10 billion startup is paying professionals to train AI to replace professionals. Everyone is watching the disruption. Nobody is asking what happens next.
There is a company in San Francisco currently paying over $1.5 million a day to doctors, lawyers, investment bankers, and journalists to teach AI models how to think like doctors, lawyers, investment bankers, and journalists.
The company is called Mercor. It is valued at $10 billion. Its founders are in their twenties and have never held a conventional job. Its clients include OpenAI and Anthropic. Its premise is straightforward: AI labs need human expertise to train their models, and Mercor connects that expertise to the labs at scale.
Bloomberg called it the startup training AI to replace the white-collar workforce. The coverage has been substantial. The conversation has been almost entirely about disruption: which jobs will go, how fast, what comes after.
That is the right question asked of the wrong problem.
What the Coverage Is Missing
Here is the assumption buried inside Mercor’s model and inside the broader AI training economy that almost nobody is stress-testing in public.
AI models do not train once and then know everything. They need to be continuously updated with current, living, practised human knowledge. The training data that makes a model genuinely useful in investment banking is not a textbook from 2019. It is the accumulated judgement of people who are actively working in investment banking right now: making decisions, reading markets, navigating clients, developing the kind of tacit expertise that only comes from doing the work in real time.
Mercor’s contractors are valuable precisely because they are drawing from active, living professional experience. The doctor being paid $250 an hour to help train a healthcare model is not valuable because she read medical journals. She is valuable because she has spent years in consultation rooms making decisions under uncertainty, and that embodied knowledge is what the model is trying to absorb.
If AI displaces these professionals at scale, which is explicitly the goal, where does the next generation of training knowledge come from?
In that future, there are no new bankers developing banking judgement through live transactions. No new lawyers building legal reasoning through real cases. No new doctors accumulating the kind of clinical instinct that comes only from years of practice.
Someone will raise an objection here. And it deserves a direct answer.
Once AI models are deployed at scale (treating patients, advising clients, executing trades), they generate their own outputs. Those outputs become new data. The model trains on what it does, not just on what humans taught it. The loop, the argument goes, is self-sustaining.
In narrow, high-volume, well-defined domains, that is partially true. A model processing ten million radiology scans gets better at identifying the patterns it was trained to find. The loop works within its boundaries.
But a loop that only references itself cannot discover what lies outside it.
The model that gets better at reading scans does not spontaneously develop a new hypothesis about a disease mechanism nobody has previously connected to imaging. That requires a different kind of knowing: the kind that comes from being wrong in a real consultation, from a patient describing a symptom in an unexpected way, from the friction of practice in a world that keeps changing the questions.
Knowledge fields do not just deepen. They change direction.
The questions that will matter in medicine in 2035 are not the same questions that matter now. Some of those questions will emerge from the friction of human practice: the GP who notices three unrelated patients share an unusual symptom cluster, the lawyer who spots a pattern across cases that no database was designed to flag, the banker who reads a geopolitical shift before it appears in any dataset.
A model that trains on its own outputs gets better at the world it was trained on.
It has no mechanism to notice when the world has changed.
That is not a technical limitation waiting to be solved. It is a structural one. The model’s closed loop becomes more efficient and more confident precisely as the gap between what it knows and what is now true quietly widens. It does not degrade noisily. It drifts, optimising for a reality that no longer exists, with no signal from the outside world to correct it.
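To make the drift concrete, here is a deliberately toy sketch in Python; it is my construction, not anything from Mercor or the labs. The “model” is just a Gaussian distribution re-fitted each generation to samples drawn from its own previous fit, while the world it is supposed to describe keeps moving; every number in it is illustrative.

```python
# Toy sketch of a closed training loop (illustrative only, not any
# lab's actual pipeline). The "model" is a Gaussian; each generation
# it is re-fitted to samples drawn from its own previous outputs,
# while the real-world distribution quietly drifts away.
import random
import statistics

random.seed(0)

def fit(samples):
    # "Training": estimate a mean and spread from the data.
    return statistics.mean(samples), statistics.stdev(samples)

# Generation 0: trained on fresh data from living human practice.
world_mean = 0.0
model = fit([random.gauss(world_mean, 1.0) for _ in range(500)])

for gen in range(1, 11):
    world_mean += 0.3  # the world keeps changing the questions
    # Closed loop: train only on the previous model's own outputs.
    synthetic = [random.gauss(model[0], model[1]) for _ in range(500)]
    model = fit(synthetic)
    gap = abs(world_mean - model[0])
    print(f"gen {gen:2d}: model mean {model[0]:+.2f}, "
          f"spread {model[1]:.2f}, gap to reality {gap:.2f}")

# The model's spread stays tight (it still looks confident) while its
# gap to the drifting world grows every generation: no noisy failure,
# just drift.
```

Run it and the printout shows the failure mode: the model’s internal statistics barely move, and look as confident as ever, while the distance to the world compounds each generation.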
The internet analogy makes this concrete. Large language models were trained on the accumulated output of the web: decades of human writing, analysis, and debate. That resource was rich because millions of people had strong economic incentives to create and update it continuously. When those incentives weaken (when ad revenue falls, when publications close, when the economic logic of content creation deteriorates), the web stops being a living source. The models start training on an increasingly static archive. The most recent and most relevant knowledge becomes the scarcest.
Mercor’s model, extended to its logical conclusion, creates the same problem in every professional domain it touches, but faster, and without the signal that something has gone wrong.
The Second Contradiction
There is a second assumption inside the $10 billion valuation that deserves the same scrutiny.
Mercor’s long-term vision, as its CEO has stated publicly, is that AI will eventually be better than the best consulting firm, better than the best investment bank, better than the best law firm. The technology will, in his framing, transform the economy radically and create abundance for everyone.
Set aside for a moment whether that is technically achievable. Ask the economic question instead.
If AI displaces the professional workforce at the scale the vision requires (the bankers, the lawyers, the consultants, the doctors), who has the income to pay for the services these AI systems are designed to deliver?
The model assumes a market. The displacement strategy erodes the market that justifies the model. These are not separate considerations to be resolved sequentially. They are simultaneous. The faster the displacement, the faster the market contraction. The more successful Mercor becomes at its stated mission, the more it undermines the economic conditions that make its valuation rational.
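The coupling can be put in back-of-envelope form. The sketch below uses entirely invented numbers (one million affected professionals, a flat displacement rate, demand for professional-grade services assumed proportional to the remaining wage base); it illustrates the shape of the feedback, not any forecast.

```python
# Back-of-envelope toy model of the coupling (all numbers invented
# for illustration; this is not a forecast or anyone's actual data).
professionals = 1_000_000     # assumed headcount of affected professionals
income_each = 150_000         # assumed average annual income, USD
displacement_rate = 0.10      # assumed share displaced each year
spend_share = 0.05            # assumed share of wage income that flows
                              # back into professional-grade services

for year in range(1, 9):
    professionals *= (1 - displacement_rate)
    wage_base = professionals * income_each
    service_demand = wage_base * spend_share
    print(f"year {year}: professionals {professionals:,.0f}, "
          f"addressable demand ${service_demand / 1e9:.2f}B")

# Under these assumptions, the displacement that drives the growth
# story is the same variable that shrinks the wage base funding the
# demand: two curves that are one curve, read from opposite ends.
```

Change the parameters and the slope changes; the direction of the coupling does not.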
This is not an argument against technological progress. Tools have always remade work, and every major technology transition in history has displaced some forms of work and created others. That is not the question.
The question is whether this particular transition is being designed with any serious attention to the economic ecosystem it depends on, or whether the design horizon stops at the valuation and the disruption story, with the harder questions deferred to a future that someone else will have to navigate.
The Room This Is Happening In
The Mercor story is being covered as a future-of-work story. It is a more uncomfortable story than that.
It is a story about a room full of very intelligent people (founders, investors, AI labs, enterprise clients) who can all see the immediate value being created, and who are collectively not asking the questions that sit one layer beneath the surface.
Not because they are incapable of asking them, but because the incentive structures of the room do not reward the asking. The funding round is closed. The valuation is set. The contractors are working. The revenue is growing. The questions about knowledge-source degradation and market erosion are, in that room, somebody else’s problem at a later date.
That is not unusual. It is how most consequential decisions get made in fast-moving industries. The discomfort gets scheduled for later.
What is unusual about this moment is the scale. The room is not a single company or a single sector. It is the entirety of the AI economy, moving at a speed that makes the scheduling of discomfort feel increasingly theoretical.
Both problems arrive at the same time. The model loses its connection to living knowledge precisely when the market it was meant to serve has lost the income to use it. These are not two risks to be managed sequentially. They are the same failure, approaching from opposite directions simultaneously.
The question worth sitting with is not whether the technology will work. It is whether the world the technology is building will still have the conditions required for the technology to keep working, and whether anyone in the room has made that their problem yet.
I write about the consequential truths that are visible, evidenced, and systematically underaddressed in the rooms where decisions get made. If you’ve sat in one of those rooms, I’d like to hear what you saw. Join the conversation at Tuskers.

