
Lucas Mearian
Senior Reporter

IT pros find generative AI doesn’t always play well with others

news analysis
Jun 12, 2024 | 5 mins
Generative AI | Privacy | Regulation

AI is often difficult to integrate with existing enterprise systems and leaves the door open to security and privacy concerns, according to a SolarWinds IT Trends Report.

[Image: AI compliance. Credit: Shutterstock/Sansoen Saengsakaorat]

While nine out of 10 IT professionals say they want to implement generative artificial intelligence (genAI) in their organization, more than half have integration, security, and privacy concerns, according to a recent survey released Wednesday by SolarWinds, an infrastructure management software firm.

The SolarWinds 2024 IT Trends Report, "AI: Friend or Foe?," found that very few IT pros are confident in their organization's readiness to integrate genAI. The company surveyed about 7,000 IT professionals online regarding their views of the fast-evolving technology; despite a near-unanimous desire to adopt genAI and other AI-based tools, less than half of respondents feel their infrastructure can work with the new technology.

Only 43% said they are confident that their company’s databases can meet the increased needs of AI, and even fewer (38%) trust the quality of data or training used in developing the technology. “Because of this, today’s IT teams see AI as an advisor (33%) and a sidekick (20%) rather than a solo decision-maker,” SolarWinds said in its report.

Privacy and security worries were cited as the top barriers to genAI integration, and IT pros specifically called for increased government regulations to address security (72%) and privacy (64%) issues. When asked about challenges with AI, 41% said they’ve had negative experiences; of those, privacy concerns (48%) and security risks (43%) were most often cited.

“Many large tech companies have integrated AI into their platforms, and the reality is that some of those big players have developed really useful LLMs,” said Krishna Sai, senior vice president of Technology & Engineering at SolarWinds. “We expect that most companies will adopt a hybrid approach, combining the integration of large existing LLMs with the development of their own smaller models or other approaches to tuning.”

More than half of respondents also believe government regulation should play a role in combating misinformation. “To ensure successful and secure AI adoption, IT pros recognize that organizations must develop thorough policies on ethics, data privacy, and compliance, pointing to ethical considerations and concerns about job displacement as other significant barriers to AI adoption,” the report said.

SolarWinds found that more than a third of organizations still lack the ethics, privacy, and compliance policies needed to guide proper genAI implementation. “While talk of AI has dominated the industry, IT leaders and teams recognize the outsize risks of the still-developing technology, heightened by the rush to build AI quickly rather than smartly,” Sai said.

Indeed, leading security experts are predicting hackers will increasingly target genAI systems and attempt to poison them by corrupting data or the models themselves. Earlier this year, the US National Institute of Standards and Technology (NIST) published a paper warning that “poisoning attacks are very powerful and can cause either an availability violation or an integrity violation.

“In particular, availability poisoning attacks cause indiscriminate degradation of the machine learning model on all samples, while targeted and backdoor poisoning attacks are stealthier and induce integrity violations on a small set of target samples,” NIST said.
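NIST's distinction between availability and integrity attacks can be made concrete with a toy sketch. Everything below is invented for illustration and is not drawn from the NIST paper: an attacker who flips training labels can drag a simple nearest-centroid classifier's centroids onto the wrong sides of the data, degrading accuracy on every test point, the "indiscriminate degradation" an availability attack aims for. (A real attack would typically need to corrupt far less data than this exaggerated toy does.)

```python
# Toy availability-poisoning sketch (illustrative only, stdlib-only).
# A nearest-centroid classifier is trained on clean data, then on a
# copy where an attacker has flipped most of the training labels.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (x, label) pairs with labels "a" or "b"
    a = [x for x, lbl in data if lbl == "a"]
    b = [x for x, lbl in data if lbl == "b"]
    return centroid(a), centroid(b)

def predict(model, x):
    ca, cb = model
    return "a" if abs(x - ca) <= abs(x - cb) else "b"

def accuracy(model, points):
    return sum(predict(model, x) == lbl for x, lbl in points) / len(points)

# Clean training set: class "a" clusters near 0, class "b" near 10.
clean = [(0.0, "a"), (1.0, "a"), (2.0, "a"),
         (9.0, "b"), (10.0, "b"), (11.0, "b")]

# Poisoned copy: four of six labels flipped, so the "a" centroid (7.0)
# and "b" centroid (4.0) end up on the wrong sides of the data.
poisoned = [(0.0, "b"), (1.0, "b"), (2.0, "a"),
            (9.0, "a"), (10.0, "a"), (11.0, "b")]

test_points = [(1.0, "a"), (1.5, "a"), (9.5, "b"), (10.5, "b")]

clean_acc = accuracy(train(clean), test_points)        # all points correct
poisoned_acc = accuracy(train(poisoned), test_points)  # all points wrong
```

A targeted or backdoor integrity attack, by contrast, would flip or craft only a few points so the model stays accurate overall but fails on specific inputs the attacker cares about, which is what makes those attacks stealthier.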

Overall, the IT industry’s sentiment reflects “cautious optimism about AI despite the obstacles,” SolarWinds reported. Almost half of IT professionals (46%) want their company to move faster in implementing the technology, despite costs, challenges, and concerns, but only 43% are confident that their company’s databases can meet the increased needs of AI. Moreover, even fewer (38%) trust the quality of data or training used in developing AI technologies.

“There are two ways to look at AI expertise: there is expertise around building AI models, and then expertise in utilizing those systems,” Sai said. “Some of the ways companies plan to address that first aspect is through classic training and education programs, and through collaboration with external experts. In terms of that second aspect – the ability to actually use those AI products or systems – the responsibility falls to the developers to ensure the technology is created in such a way that it is functional to the end user.”

IT pros cited AIOps (Artificial Intelligence for IT Operations) as the technology that will have the most significant positive impact on their role (31%), ranking above large language models and machine learning. More than a third of respondents (38%) said their companies already use AI to make IT operations more efficient and effective.