AI doesn’t just add work; it changes work in ways that are now empirically undeniable. The HBR article “AI Doesn’t Reduce Work—It Intensifies It” validates what I called the “AI Tax” nearly a year ago: AI increases the volume, velocity, and ambiguity of work unless organizations intentionally design against that outcome.
When the Research Catches Up with the Floor
In the AI Tax post, I argued that AI doesn’t arrive simply as a productivity dividend; it arrives as six categories of new work: juggling and tool sprawl, vetting, data readiness, relevance and safety, the burden of failed projects, and perpetual learning and relearning. Those categories emerged from conversations with teams already using AI in practice: users toggling among tools, reconciling outputs, and cleaning data rather than doing the “higher-value” work they were promised.
The HBR piece by Aruna Ranganathan and Xingqi Maggie Ye offers a rare longitudinal look at that reality, following roughly 200 employees at a U.S. tech company over eight months to see how generative AI actually changed their work. Their conclusion is blunt: AI tools did not reduce work; they “consistently intensified it.” Employees worked at a faster pace, took on a broader scope of tasks, and extended their work into more hours of the day, often without any manager asking them to do so.
Put simply, the study provides the ethnography for the AI Tax’s categories of work.

Three Ways AI Intensifies Work
The HBR research identifies three main patterns of intensification that emerge once AI tools move from demonstration to daily use.
- Task expansion
Once AI is available, people don’t just do the same work faster; they begin to do more kinds of work. Product managers and researchers begin writing and reviewing code; employees take on tasks that would previously have required new headcount; and individuals reclaim work that had been outsourced, deferred, or simply avoided. At one level, this could be perceived as empowerment. A deeper dive exposes engineers who find themselves mentoring colleagues on AI-assisted code, reviewing a flood of partial pull requests, and fixing low-quality “work-slop” that arrives in their queue dressed up as finished work.
- Blurred boundaries between work and non-work
AI makes it easy to “just try something” in the margins of the day: a quick prompt during lunch, one more refinement before heading to a meeting, a late-night idea tested in bed on a phone. Those micro-sessions don’t feel like extra work, but over time, they erode breaks and recovery, creating a continuous sense of cognitive engagement. Workers in the study reported that, as prompting became their default during downtime, their breaks no longer felt restorative.
- Increased multitasking and cognitive load
Employees run multiple AI agents and threads in parallel, let AI generate alternative versions while they write, and keep half an eye on outputs while trying to focus on something else. The presence of a “partner” that never gets tired encourages constant context switching: checking, nudging, re-prompting, and reconciling. The result is an ambient sense of being always behind, even as visible throughput increases.
If you read my AI Tax post, these themes will feel very familiar—because they are the lived experience behind the categories.

How the AI Tax Explains Intensification
In “The AI Tax,” I described six ways AI creates more work than it saves when deployed without design. The new HBR research slots cleanly into that framework.
- Juggling with AI: multi-tasking, switching, sprawl
The study’s third pattern, increased multitasking, is the human experience of juggling across AI tools, agents, and metaphors of interaction. In my post, I wrote about toolchain sprawl: one AI for scheduling, another in email, a third hidden in a CRM, each with a different interface, set of capabilities, and quirks. The result is a workday that feels like a perpetual reconciliation exercise, with attention sliced into dozens of thin tasks.
- Vetting: oversight and the hallucination problem
Task expansion sounds efficient until you remember that every AI-generated draft, be it a document, a snippet of code, or a marketing campaign, requires vetting. The HBR study documents engineers who start spending significant time reviewing AI-assisted work produced by colleagues outside their discipline, often through informal Slack exchanges and favors. That is the AI Tax’s “shadow labor”: real work with no line item in a project plan, absorbed by people already at capacity.
- Data science and readiness: hidden work exposed
AI makes data problems visible. When employees eagerly expand their scope to write analyses, reports, or prototypes they would not previously have attempted, they quickly collide with scattered, mislabeled, or outdated data. That collision forces them into ad hoc data wrangling: reconciling formats, hunting for authoritative sources, and learning just enough about the organization’s data architecture to be dangerous.
- Relevance and safety: governance lagging adoption
As AI disseminates content more quickly, questions of tone, bias, confidentiality, and regulatory risk become daily concerns rather than edge cases. The HBR article hints at this indirectly, but the connection to my AI Tax category is direct: when governance lags behind adoption, each step forward requires a detour to verify compliance and appropriateness. That friction doesn’t show up in vendor demos, but employees feel it immediately.
- Failed projects and abandonment cycles
The study depicts enthusiastic early experimentation: people “just trying things” with AI. In my post, I warned that this pattern often evolves into a cycle of pilots that don’t connect to real workflows, bots that die on the edge of a promise, and technical debt that someone has to clean up. When every failed experiment leaves behind abandoned prompts, partial automations, and skeptical users, the AI Tax compounds over time.
- Learning and relearning: AI as a moving target
Finally, both the HBR article and my AI Tax post converge on the churn of learning. Every model update, interface change, and new feature, let alone the arrival of entirely new tools, forces people back into training mode. Add in social FOMO (“Have you tried the latest model?”) and you get a culture in which workers are expected to keep up with a constantly shifting AI landscape while also maintaining their existing responsibilities.
The point isn’t that AI cannot create value. It’s that value and complexity scale together, and complexity arrives first.

The Free Time Mirage
When AI works, when it actually speeds up a task or simplifies a workflow, a different question emerges: what happens to the time that is freed? In the AI Tax article, I argued that this is not a technical question but a leadership and policy challenge. Without intentional design, freed time gets reabsorbed into:
- More tasks, often vaguely defined as “strategic work” or “innovation.”
- Informal expectations that individuals will take on extra responsibilities because “the tools make it faster now.”
- Subtle pressure to maintain or increase output rather than use time for recovery, learning, or collaboration.
The HBR study makes this dynamic visible. Employees used AI to shave time off tasks, then filled the margin with new work: helping colleagues, experimenting with additional prompts, or extending their responsibilities into areas previously out of scope. They felt more productive, but not less busy. Over time, the initial thrill gave way to exhaustion and cognitive fatigue.
This is the core of the AI Tax argument: if organizations do not explicitly decide how to treat time saved by AI, the default will always be intensification, not liberation, and in many cases, substitution rather than augmentation.

Designing Against Intensification
The HBR authors suggest that organizations need explicit “AI practices” to prevent intensification from becoming the default: norms about when to use AI, when not to use it, and how to manage AI-enabled work sustainably. The AI Tax framework aligns with that call and offers concrete starting points.
Here are several design moves leaders can make, informed by both the research and the AI Tax:
- Standardize the AI stack
Reduce toolchain sprawl by choosing a small number of platforms and building around them. Consolidation lowers cognitive switching costs, simplifies governance, and makes it easier to design training that sticks rather than chasing every new feature.
- Make vetting visible and accountable
Stop treating oversight as invisible heroism. Assign vetting responsibilities, track the time it takes, and factor that time into project plans and ROI claims. This isn’t just fair; it generates the data needed to decide where AI genuinely helps and where it merely redistributes labor.
- Invest in data before scale
Many of the frustrations uncovered in the study, such as partial results, confusing outputs, and reliance on “vibe” coding, stem from poor data, unclear standards, or missing context. Cleaning, tagging, and aligning data are unglamorous, but they are essential if AI is to produce outputs that reduce work rather than create additional cleanup work.
- Run time-bound pilots with real endings
Organizations should treat AI pilots as experiments with explicit timelines and decision gates, rather than as permanent, half-adopted features. At the end of a pilot, either commit and invest, or close it down and document what was learned so you don’t repeat the same mistakes later. I also regularly argue that AI requires strong knowledge management, yet accelerated AI adoption too often outpaces its implementation.
- Protect human time as an asset
Perhaps most importantly: decide, in advance, how to reclaim freed time with purpose. Some portion should be explicitly allocated to rest, reflection, mentoring, and exploration, rather than being harvested as a shadow productivity gain. If AI is to be a colleague, it should create conditions for better human judgment, not simply greater throughput.

From AI Tax to AI Practice
The convergence between the HBR research and the AI Tax is encouraging because it suggests we’re moving out of the speculative phase of AI and into a more empirical, design-oriented one. We now have a growing body of evidence that, left to its own devices, AI doesn’t reduce work; it lowers friction and invites more work.
The task for leaders is to treat these realities as design constraints rather than as inconveniences. The AI Tax identifies where costs accumulate; the HBR article shows how those costs manifest in a real organization over time. Between them lies the opportunity to build “AI practices” that honor human limits, protect time, and ensure that intensity is a choice rather than an accident.
