What’s really stopping workers from using AI isn’t what you think

From Hollywood to Big Tech, major industries across the U.S. are increasingly going all-in on AI workflow tools, and they’re expecting employees to follow suit. Late last month, Business Insider reported that Microsoft has started evaluating some employees on their AI fluency, factoring their competency with AI tools into metrics like performance reviews. But in spite of the growing workplace incentive to adopt AI tools, some employees are actively resisting AI uptake—and their reasons make more sense than you might think.
According to a new study conducted by a team of researchers at Peking University and The Hong Kong Polytechnic University, an emerging phenomenon is actively deterring employees from picking up AI tools, even at companies where doing so is strongly encouraged.
Dubbed the “competence penalty,” this bias leads to AI users being seen as less competent by their peers—regardless of actual performance. It’s a perception gap that’s especially damaging for women in technical roles.
The background
The researchers’ study was conducted at an unnamed leading tech company. In an article written for the Harvard Business Review (HBR), the study’s authors explain that this company had previously rolled out a state-of-the-art AI coding assistant to its developers, one that promised to “boost productivity significantly.” Still, 12 months later, only 41% of the nearly 30,000 surveyed engineers had even tried the coding assistant.
Adoption also varied based on employees’ identities. Just 39% of engineers 40 and older were using the tool, alongside a meager 31% of female engineers. That’s not for lack of trying on the company’s part, either: Rather than throwing their employees into the AI deep end without guidance (a prevalent issue as AI workflow tools become more common), this company offered dedicated AI teams, adoption incentives, and free training.
So, researchers set out to understand what was going wrong.
The competence penalty
To get to the bottom of this lackluster adoption pattern, the study’s authors ran an experiment with 1,026 engineers from the same company. Each engineer was given a snippet of Python code to evaluate. While the code was identical for every participant, each was told that it had been created under different conditions: with or without AI, and by a male or female engineer.
The results showed that, when participants believed a fellow engineer had used AI to write their code, they rated that engineer’s competence 9% lower on average. The competence penalty’s severity was also dependent on the reported gender of the engineer. If they were described as male, there was only a 6% competence reduction, compared to 13% for those described as female.
Further, the reviewer’s own identity and stance on AI affected how they rated others. Engineers who hadn’t adopted AI themselves were the most critical of AI users, and male non-adopters penalized female AI users 26% more harshly than they penalized male AI users.
Through a follow-up study of 919 engineers, the researchers found that many employees were keenly aware of this competence penalty, and were avoiding AI usage as a result.
“Those who most feared competence penalties in the tech industry—disproportionately women and older engineers—were precisely those who adopted AI least,” the study’s authors write. “The very groups who might benefit most from productivity-enhancing tools felt they couldn’t afford to use them.”
“Women often face extra scrutiny”
The study’s findings offer a strong counterpoint to the oft-repeated sentiment that AI tools might level the proverbial playing field at work by making everyone more productive.
“Our results suggest that this is not guaranteed and in fact the opposite could be true,” the authors write. “In our context, which is dominated by young males, making AI equally available increased bias against female engineers.”
These results could help explain patterns that have already been observed in AI uptake. According to recent research conducted by Harvard Business School associate professor Rembrand Koning, women are adopting AI tools at a 25% lower rate than men, on average.
In an article for Fast Company earlier this month, Kamales Lardi, author of the book Artificial Intelligence For Business, noted that, “In my experience, women often face extra scrutiny over their skills, capabilities, and technical prowess. There may be a deep-rooted concern that leveraging AI tools may be perceived as cutting corners or reflect poorly on the users’ skill level.”
How leaders should prepare for the competence penalty
Companies like the one in the study shouldn’t give up on implementing new AI tools, especially given that agentic AI is predicted to play a huge role in the future of work. Instead, leaders should use this data to put stronger AI adoption guardrails in place. In their analysis for HBR, the study’s authors offer several steps for managers to consider:
- Map your organization’s penalty hotspots. Leaders should focus on identifying teams where the AI competence penalty might be highest, including those with more women and older engineers reporting to male non-adopters. Monitoring these teams might help to understand where and how the competence penalty is playing out.
- Convert the influential skeptics. Because non-adopters are the harshest critics of AI users, influential skeptics can have a major impact on the whole team. The study’s authors suggest that breaking this cycle requires the skeptics to see respected colleagues successfully using AI without professional consequence.
- Redesign evaluations to remove the signal. Based on the study’s results, flagging a product as “made with AI” can negatively impact performance reviews. “The solution is straightforward: Stop signalling AI use in performance evaluations until your culture is ready,” the authors write.