AI and job security: the conversation we're not having

AI is going to displace jobs, faster than I expected, slower than the doomers want you to believe. The honest conversation is about pace, incentives, where the lines are, and what human+AI collaboration actually looks like. I'd rather be wrong than caught off guard.

[Image: a vintage mechanical time clock on a dark wooden surface, brass numerals visible, hands paused mid-stroke]

I've been writing about AI for almost three years, and I've been holding back on this specific piece because it's the one I most wanted to be careful about. Job security in the AI era is a conversation that gets had badly almost everywhere it gets had at all: by the doomers in one direction, by the dismissers in the other, by the executives looking for cover in a third. The version of the conversation I want to have is none of those.

Worth being plain up front about my actual position, because the half-takes are everywhere and most of them aren't mine.

What I actually believe

AI is going to displace jobs. A lot of jobs. Faster than I thought it would happen, slower than the most aggressive timelines suggest. I always knew it was coming; I just had it on a longer arc than the one we're actually on.

I'm against the pace. I am not against the reality.

That distinction is the whole essay, so let me say it differently. The displacement is real. The technology is doing real work. The headcount reductions in 2025 were not all narrative cover for cost-cutting; some of them were and some of them weren't, and the share that was real productivity-driven displacement is going up, not down. I expect that to keep accelerating through 2026 and 2027. I don't think the cloud-era "we cut too soon and had to hire back" pattern is going to repeat at anywhere near the scale a lot of the analysis (including some of mine, earlier) suggested.

What I'm against is the rush. The pace right now is being driven by short-term incentives, by markets that reward layoff announcements with stock-price bumps, by executives whose bonuses depend on the cost-cutting story, by competitive dynamics where the first company to announce the AI-replacement headcount cut sets the floor for what every competitor has to do in their next earnings call. AI is in a rush phase, and the rush is set by what the markets reward. That's the pattern, and it's the pattern I want to push back on.

I keep a level head and a realistic view. The financial incentives are doing a lot of work here. The honest conversation about AI and jobs has to start there.

What I'm fine with

I want to be clear about the parts I have no problem with, because the calibration matters.

I have spent my entire career automating IT systems. Infrastructure automation, ops automation, configuration management, the whole arc from manual everything-by-hand to mostly-automated cloud-native ops. Automation is good. Automating away the parts of work that should have been automated decades ago is overdue, not threatening. When AI handles the rote ticketing work that nobody wanted to do anyway, that's the displacement I'm fine with. When AI handles the boilerplate code generation that engineers were copy-pasting from Stack Overflow, that's fine too. The work that gets displaced in those cases is the rote foundation underneath the higher-value work, not the higher-value work itself.

The line I draw is not at "no displacement." The line is at "what gets displaced and how fast."

Where the lines are

There are two lines I think the AI conversation needs to hold and isn't holding well.

The first line is creatives. Artists, singers, illustrators, writers, designers, the people whose work product is creative human expression. AI replicating their output without consent or compensation is wrong. It's wrong because the training data was their work, taken without permission. It's wrong because the generated output competes with their living. It's wrong because the social value of human creative work is not just the artifact; it's the human-being-doing-it that gives the artifact its meaning. We are speed-running the destruction of the economic basis for a lot of creative careers, and the answer "well, the AI did it cheaper" is not an answer.

I'm not a creative. I'm a technologist. But my read on this isn't from inside the creative economy; it's from looking at what we're doing and recognizing the shape of the harm. The carveout for creatives isn't sentimental; it's about preserving the conditions for the kind of human work that doesn't reduce to throughput.

The other line is the deeper one and the one I think isn't getting talked about nearly enough.

A person's processes (the way they think, the patterns of how they approach problems, the accumulated way of being that lets them do what they uniquely do) belong to them. That's their IP. Not just their writing, not just their voice, not just their face. The cognitive foundation that produces all of those: that's the thing AI training is increasingly trying to capture, and the legal and ethical framework for how that capture happens is essentially absent.

We have copyright law for outputs. We have trademark law for marks. We have rights of publicity for likenesses. We don't have a framework for the underlying cognitive architecture that produces all of those. When an AI is fine-tuned to reproduce a person's way of thinking (their decision patterns, their rhetorical habits, their problem-solving moves) what's been taken is not any specific output. It's the engine. And the engine, in my view, belongs to the individual who developed it over their lifetime.

I've written before about encoding a person and about knowledge as an asset. This is the natural extension of those threads. The next IP fight isn't about song lyrics or visual style. It's about the cognitive process. And we're not ready.

What human+AI collaboration actually looks like

The sustainable model (the one I think the firms that survive the next decade will end up running) is human and AI working together. Not AI replacing human, not human refusing to use AI. The collaboration.

What that looks like in practice: humans handle the parts that require being human (judgment under ambiguity, relationship work, novel problem framing, ethical decisions, creative direction); AI handles the parts that benefit from speed and pattern recognition at scale (routine analysis, drafting, retrieval, structured manipulation); the workflow is designed so the handoff between them is smooth and the human stays in the loop on the consequential decisions.
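
For concreteness, here is that handoff as a minimal sketch, not a prescription. Everything in it is invented for illustration (the Draft type, the consequential flag, the keyword check): the AI side drafts, routine output flows straight through, and anything consequential is gated on a human before it ships.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    task: str
    content: str
    consequential: bool  # does this output need human sign-off?

def ai_draft(task: str) -> Draft:
    # Stand-in for the AI side: routine analysis, drafting, retrieval.
    content = f"[AI-generated draft for: {task}]"
    # Illustration only: a real system would classify consequence
    # from policy, not from a keyword.
    return Draft(task=task, content=content, consequential="deploy" in task)

def human_review(draft: Draft) -> Draft:
    # Stand-in for the human side: judgment, the final call.
    print(f"REVIEW REQUIRED: {draft.task}\n{draft.content}")
    return draft  # approved unchanged in this sketch

def run(task: str, reviewer: Callable[[Draft], Draft] = human_review) -> Draft:
    draft = ai_draft(task)
    # The handoff: routine work passes through; consequential
    # decisions stop at a human before anything ships.
    return reviewer(draft) if draft.consequential else draft

if __name__ == "__main__":
    run("summarize last week's incident tickets")     # straight through
    run("deploy the new rate limiter to production")  # human-gated
```

The only interesting line is the gate. Deciding what counts as consequential is the real design work, and it's a policy decision the humans have to own.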

This is the working model in most of the AI-mature engineering teams I know. It's also the working model that's compatible with both productivity gains and continued employment of the humans involved. Not at the same headcount as before (that's the honest part) but at a headcount that's closer to constant than to drastically reduced, with the humans doing higher-leverage work than they did before.

The companies that figure out the human+AI collaboration model outperform the companies that just cut. That's true. But to be fully honest: even the collaboration-model companies are running with smaller teams than they would have five years ago. The collaboration model shrinks the headcount more slowly and shrinks it better: the people who stay are doing better work and getting paid more. It doesn't make the displacement go away. It changes the shape of it.

Where this leaves us

A few specific things I think follow from the actual position:

For individuals: Take the displacement seriously. Don't bet on it being slower or smaller than the realistic view suggests. Invest in the kind of work that compounds with AI rather than getting substituted by it: judgment, relationships, the cognitive moves that aren't easily captured in training data. Watch the IP-of-cognition question carefully; the legal framework is going to develop, and the people who think about their own cognitive process as a real asset will be better positioned than the people who don't.

For organizations: The incentive-driven pace is going to bite. Companies that cut hard now to chase the productivity narrative will get short-term margin and long-term workforce damage that takes years to repair. The companies that move openly toward human+AI collaboration as the operating model, with investment in reskilling and honest internal communication about what's changing and what isn't, build the muscle that lets them keep adapting. The transition is multi-year. The companies that treat it as a one-time cost-cut event will be playing catch-up.

For policy: The IP framework needs to extend to cognitive process and individual style. The training-data conversation has been mostly about what was published; the harder question is what was learned by observation, by interaction, by extracting patterns from work product the person didn't think they were licensing for AI training. There's no clean answer here yet. The conversation has to start.

For the public discourse: The catastrophism and the dismissal are both wrong. The catastrophism overstates the speed and understates the human capacity to adapt. The dismissal understates the magnitude and the asymmetry of who absorbs the cost. The honest middle is "displacement is real and accelerating, the pace is incentive-driven, the lines worth holding are creatives and individual cognition, the working model is collaboration, the policy gap is real."

I'd rather be wrong

The closing position, because it's the actual stance I hold and it doesn't come through cleanly otherwise.

I would rather be wrong about all of this than be caught off guard. I'd rather the displacement turn out to be slower than I think, or smaller than I think, or more easily absorbed than I think. If those things are true, great. We'll all be fine and I'll have written some essays that look pessimistic in hindsight.

But planning for the optimistic case when the realistic case is darker is the thing I won't do. The cost of being wrong toward optimism (being unprepared when the displacement hits, having no plan for the people in your organization, having no read on where your own role fits) is much higher than the cost of being wrong toward realism. Realism produces preparation. Optimism, in this conversation, produces complacency.

So that's where I am. The displacement is real and it's coming faster than I thought. Short-term incentives are driving the pace. The lines I want to hold are creatives and individual cognition as IP. The model I think wins is human+AI collaboration. And I'd rather be wrong about all of it than be caught off guard.

Worth being plain because the implicit framings (both extremes) are bad analysis with bad consequences. The honest position is the one I'd rather hold and be wrong about than not hold and be right about. That's the conversation we're not having; that's the conversation I'm trying to have here.