The death of the prompt-engineer job title

In mid-2023 you could not open LinkedIn without seeing a post promising $300K salaries for prompt engineers. The title was on every job board. There were bootcamps. There were certifications. There were thinkpieces about whether it was the next big career path or a passing fad. By mid-2025, the title's mostly gone. A search across the major job boards for "prompt engineer" returns mostly contractor gigs, junior roles at companies that haven't updated their JDs, and a few specialty positions at AI-research labs. The number of "Senior Prompt Engineer" openings at typical engineering orgs is approximately zero.

The work didn't go away. The framing did. It's worth being clear about why, because the pattern recurs, and the next few "this is the new role" hype cycles will benefit from the same analysis.

What actually happened to the work

The work that prompt-engineer was supposed to encompass split in three directions:

Most of it was absorbed into existing engineering roles. The day-to-day work of writing prompts, evaluating model outputs, iterating on system prompts, and building eval suites became part of the regular software-engineering job. Backend engineers writing AI features write prompts as part of writing the feature. Data scientists evaluating model performance use the same statistical literacy they always used. The skill is real; it just didn't justify a separate job family.

A subset evolved into the prompt-architecting / system-prompt-design discipline I wrote about a couple of months ago. This is the systematic work: designing the prompt architecture for a production AI feature, building the eval pipeline, and designing the failure-mode handling. It's a real specialty demanding real expertise, but it sits inside ML platform teams or AI-feature teams, not under its own job title.

A specialty subset persisted at the AI-research labs. OpenAI, Anthropic, DeepMind, the major labs all have people whose job is to deeply understand how their models behave under various prompting regimes. These roles still exist; they just don't have "prompt engineer" in the title. They're called things like "model behavior researcher" or "alignment engineer" or just "research engineer."

The pattern is consistent: the work is real, the standalone job-title framing wasn't.

Why it didn't survive as a title

Three structural reasons:

The skill became table stakes for any engineer working with AI. When AI features are everywhere, knowing how to prompt the model is part of the job, not a separate job. It's the same way "knows how to use a database" stopped being a differentiating specialty decades ago.

The skill commoditized faster than the role could establish itself. The actual practice of writing good prompts isn't that hard to learn. The expert version is meaningfully better than the novice version, but for most engineers the gap closes within a few months of focused practice. The "rare expertise" framing that justified the salary premium didn't hold once the population of engineers with the skill grew.

The model-vendor tooling kept improving. A lot of the early prompt-engineering work was about coaxing reasonable behavior out of fragile models. As the models got better, less coaxing was needed. The skill of "knowing the right magic incantation to get GPT-3.5 to follow instructions reliably" is largely obsolete because GPT-4.1 and Claude 4 follow instructions much more reliably without the magic incantation.

The combination (table-stakes skill, commodity practice, declining marginal value) is fatal for any standalone job title.

The pattern this is part of

The same pattern played out for several other AI-adjacent roles that briefly looked like new job families:

  • "Conversational designer" in the chatbot era of 2017-2019. Real skill; absorbed into UX design and content design.
  • "ML Ops engineer" in 2020-2022. Persisted longer than prompt engineer because the work has more durable substance, but increasingly absorbed into platform-engineering and SRE roles.
  • "Data scientist" itself, in the early 2010s, went through the same arc, briefly the hot new role, then gradually split into specializations (analytics engineer, ML engineer, research scientist) without the unified title surviving as a primary job family.

The pattern: a hot new technology generates a hot new job title; the title attracts oversupply because the salary premium is visible; the underlying work splits into more specialized variants; the unified title fades as the work matures into the existing role taxonomy. Prompt engineer is a textbook case in fast-forward.

What this says about the next round

Two implications for what's coming next:

The first is that "AI agent engineer" or "agentic systems engineer" is on a similar trajectory. The titles are showing up; the salary premiums are real; the bootcamps are starting. The work and the skill are real, but the work will mostly be absorbed into platform-engineering and ML-platform roles within two years. The "Agent Engineer" job title in 2027 will be roughly where "Prompt Engineer" is in 2025: mostly a junior or contractor title, with the senior work absorbed into existing roles under different names.

The second is that the actual durable specialties forming around AI are less catchy than the hype titles. The titles that survive look more like:

  • AI platform engineer (the team that runs the AI infrastructure for the org)
  • Applied AI engineer (the engineer who builds AI features)
  • Evaluation engineer (the specialist who builds the evaluation surface for AI systems)
  • Model behavior researcher (the lab specialist)
  • AI safety / alignment engineer (the specialist who works on the harder versions of the treat-the-AI-like-an-employee discipline)

None of these have the marketing energy of "prompt engineer" or "AI agent engineer." All of them are more durable as standalone roles because the work has clear specialty boundaries that don't dissolve as the technology matures.

What practitioners should take from this

If you're thinking about your own positioning in mid-2025:

Don't optimize for hot job titles. The titles cycle faster than careers do; betting on a title is betting on catching the cycle at the right phase. The skills underneath the titles are more durable.

Optimize for compounding skills. The skills that compound across AI generations (system design, evaluation discipline, the architectural patterns from the agent-design piece) are worth more than the skills that are tied to a specific generation of tooling.

Be skeptical of "this is the next role" framing. The roles that emerge as durable are usually the ones that nobody predicted, not the ones the marketing layer is loud about. If everyone's writing thinkpieces about a new role, the role is probably already past its peak hype.

Cross-train into the adjacent specialties. A backend engineer who understands the prompt-architecting layer is more valuable than a prompt engineer who doesn't understand backend engineering. The prompt engineer in 2023 had a narrow window where the specialty alone was enough; that window closed faster than people expected.

The death of the prompt-engineer title isn't a tragedy. It's the predictable arc for any role that emerges from an emerging technology category. The people who got into the role early and used it as a stepping stone to broader engineering roles are mostly fine. The people who optimized for the title itself ran out of runway when the title faded. The next round of hot AI titles will follow the same pattern. Worth being prepared for it before it happens to the title you're holding.