Women are slower to adopt AI at work. Here’s why

Jul 14, 2025 - 13:24

As a leader in technology for nearly 30 years, I have observed waves of innovation disrupt the global business landscape and trigger major shifts in the way we work. Now, as AI takes its place as the next big thing, the global workforce is facing an overwhelming demand for new skills and capabilities.

In my new book, Artificial Intelligence For Business, I highlight the impact of AI on the future of work, specifically the skills gaps and job displacements, as well as the essential skills global organizations will require in the future. Interestingly, there is a cautious instinct at play, specifically for women at work, as they weigh the promise of innovation against the risks of AI application. This hesitation may be deterring women from using AI at work, as they worry that embracing AI could undermine their credibility or even invite harsher judgement, instead of highlighting their true potential.

According to recent research conducted by Harvard Business School Associate Professor Rembrand Koning, women are adopting AI tools at a 25% lower rate than men, on average. Synthesizing data from 18 studies that cover over 140,000 individuals worldwide, combined with estimates of the gender share of the hundreds of millions of users of popular generative AI platforms, the research demonstrates the gender gap holds across all regions, sectors, and occupations.

Although the study highlights that closing this gap is crucial for business and economic growth, and for developing AI-based technologies that avoid bias, the reasons the gap exists in the first place need to be explored further. Let’s unpack several ethical, reputational, and systemic hurdles that may make women more reluctant to use AI at work, and explore how companies can help bridge this gap.

Ethical concerns

First, ethical concerns about AI adoption tend to weigh heavily on women’s minds. Studies indicate that women consistently rate hesitation about AI technology adoption higher than men do, placing greater weight on ethics, transparency, accountability, explainability, and fairness when evaluating AI tools. In one study examining public perceptions of AI fairness across three U.S.-based societal contexts (personal life, work life, and public life), women consistently perceived AI as less beneficial and more harmful across all contexts. This caution may reflect the strong ethical standards women hold themselves, and their teams, to. These concerns are amplified by the rapid adoption of “black box” AI tools across key business decision points, where the inner workings are opaque and hidden behind proprietary algorithms.

As more female ethicists and policy experts enter the global field, they raise high-impact questions about bias, data privacy, and harmful consequences, feeling a special responsibility to get answers before signing off on innovative technology solutions. Women all over the world watched in dismay as leading AI ethicists were penalized for raising valid concerns over ethical development and use of AI.

Famously, Timnit Gebru, co-lead of Google’s Ethical AI team, was forced out after pushing back on orders to withdraw her paper on the social risks of large language models. Subsequently, Margaret Mitchell was also fired while standing in solidarity with Gebru and raising similar concerns. This move, among others, has sent a stark message that calling out potential harm in AI could make you a target.

Extra scrutiny

Alongside ethics, there may be a fear of being judged at work for leaning on AI tools. In my experience, women often face extra scrutiny over their skills, capabilities, and technical prowess. There may be a deep-rooted concern that leveraging AI tools will be perceived as cutting corners or reflect poorly on the user’s skill level. That reputational risk may be magnified when flaws or issues in the AI outputs are attributed to the user’s lack of competence or expertise. Layer onto this a host of ongoing systemic challenges inherent in the business environment and in the AI tools that are implemented. For example, training data can under-represent the experiences of women in the workplace and reinforce the perception that AI products were not built for them. Nondiverse AI teams also act as a deterrent, creating additional barriers to participation and engagement.

The consequence of the gender gap in AI is more than a discomfort. It can result in AI systems that reinforce gender stereotypes and ignore inequities, issues that are amplified when AI tools are applied to decision-making across essential areas such as hiring, performance reviews, and career development. For example, a recruitment tool trained on historical data may screen out female candidates for leadership roles, not due to lack of capability, but because historically there have been more male leaders. Blind spots like these further deepen the very gap that organizations are trying to close.

To counter this and encourage more women to use AI at work, organizations should start by creating an environment that balances guardrails with exploration. Additionally, they should build psychological safety by encouraging dialogue that gives space for concerns, challenges, and feedback, without fear of being penalized. Open and transparent communication helps address the fears and uncertainty that accompany AI use in the workplace. Organizations can also build fail-safe sandbox environments for exploration, where the goal is to learn through trial and error and develop skills through experiential learning.

Policy changes

Changing policies and guidelines in the organization can prove effective in encouraging more women to use AI at work. Beyond clear guidelines around responsible AI use, policies explicitly permitting AI can help close the gap. In a study conducted by the Norwegian School of Economics (NHH), male students were less likely to view using AI as “cheating.” Additionally, when policies forbade the use of AI, male students tended to use it anyway, while women adhered to the policy. When a policy explicitly allowing the use of AI was put in place, over 80% of both men and women used it, suggesting that policies encouraging AI use can prompt more women to adopt it.

Crucially, organizations should make a proactive effort to bring more women into the AI conversation at every level. Diverse perspectives can prove effective in catching blind spots, and this approach sends a powerful message that representation matters. When women see their peers proactively shaping AI application in a safe, fair, and impactful way, they will feel more confident participating as well.
