By Lucas Mearian, Senior Reporter

Top 5 AI employee fears and how to combat them

news analysis
Jul 02, 2024 | 7 mins
Data Privacy | Generative AI | IT Jobs

A new study revealed that employees have real fears about AI's intrusion into their workplace. Companies can alleviate many of those anxieties by being more transparent about how they plan to use the technology.

As artificial intelligence adoption surges in business, employees are left to wonder how systems placed on “automatic” can be controlled and how long it will be before their jobs are on the chopping block.

Those were two of the top fears revealed in a recent Gartner study of the five main concerns workers have about generative AI and AI in general. And those fears are warranted, according to survey data. For example, IDC predicts that by 2027, 40% of current job roles will be redefined or eliminated across Global 2000 organizations adopting genAI.

A remarkable 75% of employees said they are concerned AI will make certain jobs obsolete, and about two-thirds (65%) said they are anxious about AI replacing their job, according to a 2023 survey of 1,000 US workers by professional services firm Ernst & Young (EY). About half (48%) of respondents said they are more concerned about AI today than they were a year ago, and of those, 41% believe it is evolving too quickly, EY’s AI Anxiety in Business Survey report stated.

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report.

GenAI in particular has shifted the future of work, enabling work to be done equally well and securely across remote, field, and office environments, according to EY.

Managing highly distributed teams doing complex, interdependent tasks is not easy; nor is finding employees trained well enough to provide effective IT support across a broad security threat landscape of applications, platforms, and endpoints. That's where AI promises to facilitate and automate repetitive tasks like coding, data entry, research, and content creation, and to amplify the effectiveness of learning in the flow of work, according to EY.

Gartner’s recent study identified five unique fears employees have about how their company will apply AI:

  • Job displacement due to AI that makes their job harder, more complicated, or less interesting
  • Inaccurate AI that creates incorrect or unfair insights that negatively impact them
  • Lack of transparency around where, when, and how the organization is using AI, or how it will impact them
  • Reputational damage that occurs because the organization uses AI irresponsibly
  • Data insecurity because the implementation of AI solutions puts personal data at risk 

“Employees are concerned about losing their job to AI; even more think their job could be significantly redesigned due to AI,” said Duncan Harris, research director for Gartner’s HR practice. “When employees have these fears, they all have a substantial impact on either the engagement of the employee, their performance, or sometimes both.”

One problem Gartner cited in its report is that organizations aren’t being fully transparent about how AI will impact their workforce. Organizations can’t just provide information about AI; they also need to provide context and details on what risks and opportunities are influencing their AI policy and how AI relates to key priorities and company strategy. 

“We can say that the most common worry is that AI will impact an employee’s role – either making it obsolete entirely or changing it in a way which concerns the employee. For example, taking some of the challenge or excitement out of it,” Harris said. “And the point is, these perspectives are already having an impact – irrespective of what the future really holds.”

Harris said in another Gartner survey, employees indicated they were less likely to stay with an organization due to concerns about AI-driven job loss. That phenomenon has cost the average enterprise with 10,000 employees about $53 million a year in lost productivity, Harris said.

Gartner recommends organizations consider what tasks within roles are most likely to be disrupted by genAI. For example, genAI will likely have the greatest immediate impact on tasks such as content creation, question answering and discovery, translation, document summarization, and software coding. But this doesn’t mean wholesale replacement of employees in the near term, he said.

Organizations can also overcome employee AI fears and build trust by offering training or development on a range of topics, such as how AI works, how to create prompts and effectively use AI, and even how to evaluate AI output for biases or inaccuracies. And employees want to learn. According to the report, 87% of workers are interested in developing at least one AI-related skill.

AI has the potential to create high business value for organizations, but employee distrust of the technology is getting in the way, Gartner’s study found. Leaders involved in AI cite concerns about ethics, fairness, and trust in AI models as top barriers they face when implementing the technology.

Employees’ concerns are not about the technology itself, but about how their company will use it.

“If organizations can win employees’ confidence, the benefits will extend beyond just AI projects. For example, high-trust employees have higher levels of inclusion, engagement, effort, and enterprise contribution,” Harris said.

One particular concern is that AI, and especially genAI, can lead organizations to make inadvertent mistakes, according to Harris. “So, from an executive perspective, the biggest concern for the future in using GenAI is around data privacy – this is also one of the most common concerns for employees,” he said.

“We suggest that by 2026, enterprises that apply AI trust, risk and security management to AI applications will consume at least 50% less inaccurate or illegitimate information that leads to faulty decision making,” Harris said.

Companies should also partner with employees to create AI solutions, which will reduce fears about inaccuracy. Showing employees how AI works, inviting their input on where it could be helpful or harmful, and involving them in testing solutions for accuracy can all allay fears. For example, many organizations are setting up sandbox environments for experimenting with AI solutions and are keen for employees to be involved in them.

Organizations also need to formalize accountability through new governance structures that demonstrate they are taking AI threats seriously.

“For example, to boost employee trust in organizational accountability, some companies have deputized AI ethics representatives at the business unit level to oversee implementation of AI policies and practices within their departments,” Harris said.

Organizations should also establish an employee data bill of rights to serve as a foundation for their AI policies.

“The bill of rights should cover the purpose for data collection, limit the data collected to the defined purpose, commit to use data in ways that reinforce equal opportunity, and recognize employees’ right to awareness about the data collected on them,” Harris said.

Investment in AI is going to continue, and employees who lean into this trend will benefit, according to Harris. Gartner found that, rather than distancing themselves, employees want to learn more and be involved in working with AI.

“In fact, when we asked employees in different industries whether they would swap jobs if they were nearly identical apart from the new role offering the ability to work with GenAI, the likelihood to swap was over 40% for employees in the finance, construction, telecom, and technology sectors,” Harris said.