If you live in a first-world country with a sizable knowledge work sector, you might find it hard to escape the subject of AI. That’s probably an understatement. We are saturated with talk of artificial intelligence and, in particular, large language models. The economist Edgar R. Fiedler is quoted as saying, “He who lives by the crystal ball soon learns to eat ground glass,” but that hasn’t stopped the proliferation of prognostication on the subject of AI.
Matt Shumar, an insider working in AI, posts his take on the Xitter site. He issues a clarion call to those who may not be as close as he is to what is happening in the industry.
I should be clear about something up front: even though I work in AI, I have almost no influence over what’s about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn’t lay. We’re watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.
Shumar is adamant that what he is saying applies to the newest, freshest pro-grade AI models from OpenAI and Anthropic, and he consistently warns that if you tried earlier models and found them lacking, you would be surprised by the capabilities of what has come into existence only very recently. Though he comes from a tech background and most of what he says applies to software development, he extrapolates his experiences to other industries, such as law and medicine.1

The refrain from insiders like Shumar is that entry-level jobs in certain industries will evaporate as they are replaced by capabilities provided by artificial intelligence. If true, this poses huge structural problems for these industries. Frankly, no one seems to have figured out how we retain the most necessary elements of the apprenticeship model if this comes to pass. How do we develop senior software engineers if no one can break into the industry at a junior level? If AI is doing the grunt work at a law firm, how does one gain the experience to become senior counsel? Nilay Patel discussed this very subject on The Vergecast with Sean Fitzpatrick, the CEO of LexisNexis. The takeaway from that conversation: there are no easy answers.
It is an apprentice system. So, if you start to take some of the layers out of the bottom, how does everyone skip the bottom layer and still make it to the second with the same capabilities and skills? That’s a real challenge.
For some, belief in the imminence of the spread of AI has taken on almost a messianic sense of urgency. One is reminded of the words of Jesus in chapter 25 of the book of Matthew: “Therefore keep watch, because you do not know the day or the hour.”
Despite its common usage, the word “apocalypse” doesn’t exactly mean the end of the world as much as it refers to a revealing (usually of massive proportions). Shumar and others believe the AI apocalypse will be massively disruptive, and it’s quickly bearing down upon us.
As someone with a son who is considering law school in the not-too-distant future, I am watching these developments carefully. Insofar as the purported changes may affect gainful employment in industries like law, it could mean a totally different way into practice.
The Other Side
For a different perspective, the always interesting Robin Sloan homes in on the constraints of AI—symbols in, symbols out. Inside of its magic circle, he says, “anything can become anything else.” But so much rests outside that circle. He writes of a process by which he tracks letters he sends through USPS. Most of the cobbling together of that process happened in the physical world and required manipulations of matter—impossible for a large language model. He doubts even cool robots will be stuffing envelopes anytime soon. Everything the LLM does is disembodied (and heaven help us if that changes in the near future).
Compromise must take place in the other direction, as well. AI doesn’t just adapt to the landscape; the landscape must be adapted to it. He uses the analogy of an olive harvester.
That’s an over-the-row olive harvester. Most olive oil production at medium-or-greater scale depends on machines of this kind; they trundle over trees planted in long rows, almost like continuous hedges, and collect the fruit with vibrating fingers. Machine-harvested olives are cheaper to collect, and they arrive at the mill in better shape than olives harvested by hand.
The catch: most olives can’t be cultivated in this configuration; the trees don’t thrive so close together. Only a handful of varieties will tolerate it, so those handful have been planted in huge numbers, and the flavor of global olive oil has changed as a result.
These real-world negotiations with automation don’t happen overnight. There are places that AI just can’t reach. Maybe some of those places will be altered to fit the needs of the machine, but some can’t be or won’t be.
The Personal Effects
When the current administration imposed radical tariffs, I read an article about the challenge of making iPhones in the US, one of the stated goals of the new international economic policies. It wasn’t just the advantages provided by the Chinese industrial complex that caused Apple to favor manufacturing the devices in China. It was also the smaller hands of Chinese workers (particularly women) on the assembly line, which could manage the delicate assembly better than their Western counterparts’. Even putting together one of the most impactful technologies of our time requires a human touch.
I’ve yet to calculate what this means for myself or my offspring. Humanity will find its unique place again as it negotiates with the robots. History provides ample evidence that labor-saving devices only cause us to do more labor of a different kind, so I don’t believe we’ll all be spending more time on tropical islands sipping mimosas. I’m convinced the tech utopians are dead wrong to think that, when our contributions are replaced by computer models, we will lead lives of greater leisure. We will just have to figure out how we provide value when general knowledge work is no longer a differentiator.
Full disclosure: I am currently working on an AI-based feature for large law firms. Thankfully, it’s an augmentation of what people already do in their work and doesn’t pose a danger of job loss. My opinions are my own and not those of my employer (or vice versa), etc., etc., whatever and ever.
-
One of Shumar’s most alarming discoveries is that agentic AI is now capable of updating and improving itself. Each successive cycle of agentic self-sufficiency builds on the previous one, creating ever greater capability to reproduce and extend itself. Natural selection at hyperspeed. ↩︎
