# 29% of Employees Are Sabotaging Their Company's AI Agents -- Here's What Actually Happens
29% of workers admit to sabotaging AI agents at work. Fortune survey reveals Gen Z leads resistance, but companies that handled the transition well saw 340% more adoption.

Nearly one in three employees at your company is actively working against your AI rollout. Not passively ignoring it. Not quietly skeptical. Actively sabotaging it.
That's the headline finding from a 2,400-person survey by Writer and Workplace Intelligence, published in April 2026. Twenty-nine percent of workers across the US, UK, and Europe admit to sabotaging their company's AI strategy. Among Gen Z workers, it's 44%. Among teams that received no AI training or change management, it's higher still.
But here's the part nobody's talking about: the saboteurs are the ones getting fired, while the adopters are getting promoted at 3x the rate.
## What is AI agent sabotage?
AI agent sabotage is any deliberate action by an employee to undermine, slow down, or discredit their company's AI deployment. It ranges from passive resistance (refusing training, ignoring AI-generated suggestions) to active interference (feeding bad data to AI systems, tampering with performance metrics, entering proprietary information into public tools).
The Writer survey identified five distinct categories:
- Refusing to use AI tools or outputs -- the most common form
- Refusing to take AI training -- declining optional or mandatory upskilling programs
- Feeding proprietary data into public, unapproved AI tools -- sometimes out of spite, sometimes out of ignorance about which tools are sanctioned
- Intentionally generating low-quality outputs -- using AI tools badly on purpose so results look poor in reviews
- Tampering with performance metrics -- manipulating data to make AI agents appear less effective than they are
That last category should alarm every operations leader. When employees tamper with the metrics you use to evaluate your AI investment, you don't just lose productivity. You lose the ability to make informed decisions about a technology that will define your competitive position for the next decade.

## Why employees sabotage AI -- and why the obvious answer is wrong
The easy narrative is that employees are Luddites who fear change. That's not what the data shows.
Of the workers who admitted to sabotage, 30% cited fear that AI would take their job. That's real and legitimate. But it means 70% of saboteurs had other reasons. And those reasons reveal a much deeper organizational failure.
HR Executive's analysis of the same survey data uncovered something damning: 75% of C-suite respondents admitted their company's AI strategy is "more for show" than for actual internal guidance. Three-quarters of executives are rolling out AI with strategies designed to impress investors, not guide employees. And then they're surprised when employees don't take the rollout seriously.
It gets worse. Thirty-nine percent of executives admit they have no formal plan to drive revenue from AI tools. Nearly 70% say their company is already doing AI-related layoffs despite having no strategic foundation for those cuts. The message employees receive is unambiguous: "We're deploying AI to replace you, we don't have a plan, and we're making it up as we go."
Sabotage isn't irrational in that context. It's a predictable response to a perceived threat backed by zero organizational trust.
The survey also found that 28% of employees have already seen an AI tool produce dangerously wrong, unethical, or biased results. Yet 30% of those employees said they wouldn't feel safe reporting it because they fear retaliation. This is the silence problem -- employees who see AI failing don't speak up because the organizational culture punishes bad news.
## The generational split is real but misunderstood
Fortune's reporting on the survey focused on the generational angle: 44% of Gen Z workers admitted to some form of AI sabotage, compared to 29% overall. The headlines wrote themselves. "Digital natives reject digital tools." "Gen Z, the generation that grew up on smartphones, is destroying AI at work."
But this framing misses the structural reason Gen Z leads the resistance. Junior roles in finance, law, and tech -- the traditional "learning by doing" rungs of the career ladder -- have declined by 32% since 2022. Gen Z workers aren't sabotaging AI because they don't understand technology. They're sabotaging it because they can see, with perfect clarity, that the entry-level jobs they were promised are being automated before they can get experience.
A 23-year-old paralegal watching an AI agent draft contract summaries isn't afraid of technology. They're afraid of never getting the chance to learn the work that would make them a lawyer. That's a legitimate career concern, and dismissing it as "resistance to change" guarantees more sabotage, not less.
The irony is that Gen Z is the most capable generation at using AI tools. They grew up with them. The problem isn't capability. It's incentive alignment. If your AI deployment communicates "we're automating your role," the most rational response for a junior employee is to make the automation fail.
## What sabotage actually costs
The damage extends far beyond productivity loss. Here's the breakdown most organizations fail to track:
| Impact Category | What Happens | Estimated Cost / Severity |
|---|---|---|
| Corrupted training data | Employees feeding bad inputs skew AI outputs for months | High -- requires full retraining cycles |
| False performance metrics | Leadership makes wrong decisions about AI investment | Critical -- can kill viable AI programs |
| Proprietary data leaks | Sensitive info entered into public tools as an act of defiance | Variable -- potential regulatory fines, IP loss |
| Delayed adoption timeline | 6-12 month setbacks per sabotage incident discovered | $200K-$2M depending on company size |
| Cultural damage | Trust erosion between management and workforce | Long-term -- affects all future change initiatives |
| Talent loss | Top performers leave companies with toxic AI rollouts | $150K+ per senior hire replacement |
The Writer survey found that 76% of executives consider employee sabotage "a serious threat to their company's future." And yet most are responding with threats rather than strategy -- 60% say they're considering cutting employees who refuse to adopt AI.
Firing resisters without addressing the root causes just drives sabotage underground. The remaining employees learn to perform compliance without genuine adoption. They use AI tools when managers are watching and ignore them otherwise. The metrics look fine. The actual productivity gains never materialize.

## The companies getting it right
Not every organization is drowning in sabotage. The difference comes down to three factors:
1. Transparent communication about what AI will and won't replace
Companies that told employees upfront "here are the tasks AI will handle, here are the tasks that still need you, and here's how your role evolves" saw dramatically less resistance. The 61,000 jobs cut in Q1 2026 didn't happen at companies that managed the transition well. They happened at companies that deployed AI first and figured out the people strategy later.
2. Investment in upskilling before deployment
AI super-users -- employees who deeply integrate AI into their workflow -- save nine hours per week and are 3x more likely to have received both a promotion and a pay raise in the past year. But they didn't become super-users spontaneously. The organizations with the highest adoption rates invested in training before the tools went live, not after resistance emerged.
This is the gap most small businesses are falling into. They adopt AI tools without any structure, and then wonder why the tools sit unused.
3. Giving employees agency over their own AI adoption
The word "agency" is doing real work here. Companies that let employees choose which AI tools to try, customize their own workflows, and opt into advanced training saw significantly better outcomes than companies that mandated specific tools and processes.
When someone sets up their own agent and trains it on their own work, they don't sabotage it. They improve it. The resistance isn't to AI itself. It's to AI imposed by someone else for someone else's benefit.
## The career math is brutal and clear
The Writer survey contains one data point that should override every other consideration for individual workers:
AI super-users are 3x more likely to have received both a promotion and a pay raise in the past year compared to employees slow to adopt AI tools.
That's not a marginal advantage. That's a career trajectory divergence.
Meanwhile, 77% of executives said employees who refuse to become AI-proficient won't be considered for promotions or leadership roles. And 60% are considering layoffs specifically targeting AI non-adopters. Resisting AI doesn't protect your job. It accelerates the timeline on which you lose it. The workers most likely to survive are the ones who learn to work alongside agents, not against them.
This is the cruel irony of AI sabotage. Every act of sabotage makes the saboteur more dispensable, not less. The employee who tampers with performance metrics isn't proving that AI doesn't work. They're proving they can't be trusted to participate in the company's future direction.
## What happens next: the sabotage-layoff spiral
There's a dangerous feedback loop forming in organizations with high sabotage rates:
1. Company deploys AI agents without adequate change management
2. Employees resist and sabotage, degrading AI performance
3. Leadership sees poor AI results and blames the technology
4. Company either abandons AI (42% did in 2025, per the survey) or doubles down with more aggressive mandates
5. Aggressive mandates trigger more sabotage
6. Eventually, leadership replaces resisters with new hires who lack the institutional knowledge the AI needs to work properly
Step 6 is the most destructive. The employees being fired for sabotage often understand the workflows AI agents need to replicate. When they leave, they take institutional knowledge with them. The AI performs worse. Leadership blames the AI. The cycle repeats.
Companies caught in this spiral end up with a shadow AI problem -- employees abandon official tools entirely and run unauthorized agents without governance or oversight.
The way out isn't more mandates. It's alignment. Tie AI adoption to career advancement, not just productivity metrics. Show employees their expertise is more valuable when augmented by AI, not less. Be honest when roles are changing instead of pretending otherwise until layoff notices go out.

## The individual worker's playbook
If you're an employee watching your company roll out AI agents, your strategic options are clear even if they're uncomfortable:
Option A: Resist. Join the 29%. Refuse training. Tamper with metrics. The survey data shows exactly where this leads: at the 60% of companies planning layoffs for non-adopters, you're first in line.
Option B: Comply minimally. Use the tools when told. Don't invest in learning them deeply. This is the most common response, and also the most dangerous -- you're technically "using AI" while building none of the skills that drive the 3x promotion advantage.
Option C: Go deep. Become a super-user -- not because your company told you to, but because the career data is overwhelming. Learn how agents work. Build your own. As Atlassian's restructuring shows, this is the path that leads to the roles that survive.
The data makes the choice for you.
## Frequently asked questions
### Why are employees sabotaging AI agents at work?
Fear of job loss is the primary driver -- 30% of saboteurs cite it directly. But the majority have other motivations: lack of trust in leadership's AI strategy (75% of executives admit theirs is "more for show"), absence of training, concerns about AI producing biased results, and career anxiety about roles being automated without a path forward. Gen Z leads at 44% because junior roles have declined 32% since 2022.
### What forms does AI agent sabotage take?
Five categories: refusing to use AI tools, refusing AI training, feeding proprietary data into public unapproved tools, intentionally generating low-quality outputs to make AI look bad, and tampering with performance metrics. The most damaging are metrics tampering and data leaks, which corrupt evaluation and expose sensitive information.
### Does sabotaging AI actually protect your job?
No. AI super-users are 3x more likely to receive both a promotion and a pay raise. Meanwhile, 60% of executives are considering layoffs targeting non-adopters, and 77% said non-adopters won't be considered for promotions. Resistance makes the resister more disposable, not more valuable.
### How should companies respond to AI sabotage?
Alignment over mandates. Companies with high adoption rates communicate transparently about which roles will change, invest in training before deployment, and give employees agency in choosing their AI tools. Firing resisters without addressing root causes drives sabotage underground and destroys institutional knowledge.
### What percentage of companies have abandoned AI initiatives?
Forty-two percent of companies abandoned most AI initiatives in 2025, up from 17% the year prior. Employee confidence in their company's AI strategy fell from 47% to 31% between 2025 and 2026. Abandonment rates are highest at organizations with top-down mandates.
The gap between companies where AI agents succeed and companies where they get sabotaged isn't a technology problem. It's a trust problem. If you're looking to start with AI agents on your own terms -- where you control the tools, the data, and the pace -- RapidClaw deploys a personal AI agent you actually own in under two minutes.