At some point in the last two years, someone at a company you have heard of sat in a meeting and said “we need an AI Whisperer” out loud, and everyone nodded.
That job now exists. So do Prompt Engineer, Head of AI Behaviour, AI Ethicist and, more recently, Chief AI Officer. LinkedIn is full of them, job boards are full of them and company org charts, particularly at startups trying to signal that they are serious about AI, are full of them.
Whether this is a sign of true organisational maturity or the most elaborate collective delusion in tech history rather depends on the company. But the titles themselves merit closer inspection, because buried underneath the branding is the question actually worth asking: what does the real work of AI look like in 2026, who does it, and where does it sit in an organisation?
Behind The Job Title, What’s The Actual Job?
Start with the Prompt Engineer or AI Whisperer, the roles that sparked the most mockery when they first appeared.
The job is more substantive than the name suggests. At companies using large language models in production, someone has to design, test and refine the text inputs that get the model to produce accurate, consistent, useful outputs. That means A/B testing prompts, building and maintaining prompt libraries, integrating prompts into internal tools, and working with product and engineering teams to make sure the AI behaves predictably across edge cases. It is less glamorous than it sounds, which is probably why it was rebranded as “whispering.”
AI Ethicist and Head of AI Behaviour occupy different territory. The focus here moves from building to governing: setting fairness guidelines, designing red-teaming scenarios, mapping risks under regulations like the EU AI Act, and running ethics reviews for high-stakes deployments. They tend to exist at the junction of legal, risk and product, and their job is to make sure that when something goes wrong with an AI system, the company can explain what it did and why. Given where regulation is heading, these roles are getting more serious faster than almost anyone predicted.
The Chief AI Officer is the most senior and the most contested. HSBC created one recently and plenty of others are following. The CAIO operates alongside the CIO, CTO and CDO but with a cross-functional mandate: deciding which AI use cases to scale, which to kill, how to align AI investment with P&L and regulation and how to build governance frameworks that hold up in practice. The challenge is that it requires someone who understands the technology, the business, the regulatory environment and the organisational politics simultaneously – that is a very short list of people.
Are These Titles Here To Stay?
Some of them, almost certainly, are not.
Prompt engineering in particular is showing early signs of title fatigue. Platforms are getting better at handling prompts without specialist input, and the responsibilities are gradually being absorbed back into product, data and engineering roles. The people who invented prompt libraries at frontier companies in 2023 are already doing something more complex, and the title has started to feel like a description of a moment rather than a career.
The governance and strategy roles are a different story. AI Ethicist, Head of AI Behaviour and Chief AI Officer look set to stay, partly because the regulatory environment is forcing companies to have named, accountable humans in charge of AI risk. The EU AI Act requires exactly that kind of ownership for high-risk systems, and the ICO's recent guidance on AI agents and data protection points the same way.
Companies that are serious about running AI in production are going to need people who own the governance question, and that role will need a title whether or not anyone likes the ones currently on offer.
Read The Job Title, Read The Company
Taken together, these titles are pointing at something: the industry is slowly agreeing on where the real work of AI actually happens.
For the first few years of the current wave, most of the attention went to the model layer: who has the best foundation model, who is training on the most data, who is closest to AGI. The job title explosion suggests the conversation is shifting. Companies are starting to agree, at least implicitly, that the challenge isn’t building the model. It is designing how humans and organisations interact with it, governing what it does and being accountable when it goes wrong.
For anyone building with AI, that makes these new titles less a meme and more a proxy for how seriously a company is willing to invest in turning AI experiments into something that holds up in production. A company with a Chief AI Officer and a Head of AI Behaviour is making a statement about where it thinks the work is. A company with a single Prompt Engineer and no governance structure is making a different one.
These titles are trying to tell you something about where the industry thinks the real work of AI is. The companies that have figured that out are already ahead.