Harvard Says AI Adds to Workloads More Than It Reduces Them. Here’s Why

When AI became mainstream, many companies hoped, and were promised, that it would ease staff workloads. AI was meant to take over the small, tedious day-to-day tasks: drafting documents, summarising information, even debugging code. It was an appealing promise, for sure.

Harvard researchers decided to test that promise in practice. In an eight-month study at a US technology company with about 200 employees, they tracked how generative AI changed day-to-day work. The team observed staff in person two days a week, monitored internal communication channels and carried out more than 40 in-depth interviews across engineering, product, design, research and operations.

The company did not force anyone to use AI, although it paid for enterprise subscriptions to commercial tools. Even so, workers began to change how they worked. According to the researchers, AI tools did not, in fact, reduce workloads. If anything, they intensified them. Employees worked at a faster pace, took on more kinds of tasks and extended work into more hours of the day. Many did this without being asked.

Workers said AI made “doing more” feel possible and accessible. Many described “just trying things” with the tools. Over time, those small experiments added up. Employees absorbed work that might previously have justified extra hiring or outside support.

How Did AI Intensify Day-to-Day Work?

Harvard identified three main patterns. The first was task expansion. Because AI can fill gaps in knowledge, employees stepped into responsibilities that once belonged to colleagues. Product managers and designers began writing code. Researchers took on engineering tasks. People attempted work they might previously have deferred.

This expansion created extra demands elsewhere. Engineers spent more time reviewing and correcting AI-assisted work. Oversight did not happen only in formal code reviews. It also appeared in Slack threads and quick desk-side consultations. That guidance added to engineers’ workloads.

The second pattern involved blurred boundaries between work and non-work. AI made it easier to begin tasks. Workers prompted tools during lunch, in meetings or while waiting for files to load. Many sent a “quick last prompt” before leaving their desk. Each action felt minor, but together they eroded natural breaks in the day and increased continuous engagement with work.

The third pattern was heavier multitasking. Employees managed multiple active threads at once, writing code while AI generated alternatives or running agents in the background. This created frequent attention switching and a growing list of open tasks. Staff felt productive, but also under greater pressure.

As one engineer told the researchers, “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.”

What Is The AI Verification Tax?

Concerns about rising workload are echoed in the corporate world. Greg Hanson, Group Vice President and Head of EMEA North at Informatica from Salesforce, described what he calls the AI Verification Tax.

He said: “AI intensifies workloads when companies fall foul to the AI Verification Tax. If AI can’t be trusted to work unsupervised, the productivity promise collapses and instead adds time to the task. This leaves employees spending more time checking and correcting AI outputs as they would if they were doing the task themselves.

“This verification burden is compounded by a skills gap. 75% of data leaders tell us their workforce lacks data literacy, and 74% say more AI literacy training is needed to use AI responsibly. But it isn’t inevitable. Where data is well governed and employees have the skills to challenge AI outputs, verification drops, decisions scale more safely, and productivity gains become real rather than theoretical.”

Harvard’s findings support that concern. Early productivity gains can hide growing cognitive strain as employees juggle more AI-enabled workflows. Over time, this can impair judgement, increase errors and make it harder to step away from work.

How Should Organisations Respond?

The researchers propose building what they call an “AI practice”. This means setting norms around how AI is used, when work should pause and how far job scope should stretch.

They recommend intentional pauses before major decisions, sequencing work to avoid constant interruption and protecting time for human connection. Short check-ins and structured dialogue can counter the isolating effect of fast, AI-mediated work.