Lucas Mearian
Senior Reporter

Biggest problems and best practices for generative AI rollouts

feature
Apr 02, 2024 | 12 mins
Chatbots | Emerging Technology | Generative AI

Enterprises are rapidly adopting genAI to increase productivity and efficiency, but many are not taking a strategic approach to implementing the technology. As a result, many projects fail or end up costing far more than they should, without delivering an ROI.

[Image: A businessman using AI to manipulate data. Credit: SuPatMaN / Shutterstock]

IT leaders face a multitude of major hurdles to the effective adoption and scaling of generative AI (genAI), including a talent shortage, poor data quality, lack of comprehensive AI governance, and risk mitigation and control.

Through 2025, at least 30% of genAI projects will be abandoned after organizations perform a proof of concept (POC) due to these and other challenges, according to a report by Gartner Research.

Escalating costs and unclear business value are other leading causes of genAI project failure, according to Gartner.

In a research paper on the 10 best practices for scaling generative AI across the enterprise, Gartner advised that to succeed, organizations must prioritize business value and focus on AI literacy and responsible AI. They should also nurture cross-functional collaboration and stress continuous learning.

[Graphic: Gartner's 10 best practices for scaling generative AI. Credit: Gartner]

Arun Chandrasekaran, a Gartner distinguished vice president analyst, said the greatest challenges facing organizations in genAI projects are the poor quality of existing data, the integration of relevant data into genAI workflows, and the governance of AI systems.

Many companies are already taking steps to ensure successful genAI projects. By 2027, more than 50% of enterprises will have implemented a responsible AI governance program to address the risks of genAI, up from less than 2% today, according to Gartner.

Chandrasekaran and other experts have long called out the fact that data hygiene, categorization, and security are lacking in most organizations. When poor data quality is combined with a genAI large language model (LLM), the result is garbage in, garbage out. GenAI platforms are little more than prediction engines for the next word, image, or line of code, so they generate responses based on the data they've been fed.

A lack of talent and inherent risks

Other causes of genAI problems include ineffective prompt engineering (the crafting of instructions and context that steer an LLM's output), inadequate chunking or retrieval in retrieval-augmented generation (RAG), and the complexity involved in fine-tuning an AI model.
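To make the chunking concern concrete, here is a minimal Python sketch of the fixed-size, overlapping chunking many RAG pipelines start with. The chunk size and overlap values are illustrative, not recommendations, and production systems typically split on semantic boundaries such as headings or paragraphs.

    # Minimal sketch: fixed-size document chunking with overlap for a RAG pipeline.
    # The chunk_size and overlap values are illustrative only; poor choices here
    # are one way retrieval quality (and therefore answer quality) degrades.

    def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
        """Split text into overlapping chunks of roughly chunk_size characters."""
        if overlap >= chunk_size:
            raise ValueError("overlap must be smaller than chunk_size")
        chunks, start = [], 0
        while start < len(text):
            end = min(start + chunk_size, len(text))
            chunks.append(text[start:end])
            if end == len(text):
                break
            start = end - overlap  # step back so adjacent chunks share context
        return chunks

    if __name__ == "__main__":
        doc = "GenAI platforms generate responses based on the data they are fed. " * 40
        pieces = chunk_text(doc)
        print(f"{len(pieces)} chunks; first chunk is {len(pieces[0])} characters long")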

“It is clear that a deficit in AI skills and expertise is adversely affecting enterprises,” Chandrasekaran said.

James Briggs, the founder and CEO of AI Collaborator, an AI managed service provider that assists businesses with AI implementation, agreed that the top issue for most organizations is a lack of talent to implement, monitor, and manage a genAI project. There is also a growing list of risks associated with deploying AI.

“Those [risks] include transparency, governance, and fairness issues that might arise when AI applications aren’t built on a solid responsibility framework,” Briggs said.

But as with any new technology, genAI comes not only with inherent risks but also with the potential to amplify existing ones. For example, poor or improper integration of genAI tools with other enterprise systems can lead to vulnerabilities, such as unsecured data and back doors.

Bad actors can leverage AI tools to rapidly spread misinformation and deepfakes, which can alter public opinion, according to business consultancy Deloitte.

There are also both new and existing regulatory risks that organizations rolling out genAI must consider, such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework and new EU regulations for general-purpose AI systems.

Struggles also include mitigating genAI bias and outright hallucinations, where a genAI tool confidently fabricates information in its response to a user prompt.

“Furthermore, IT leaders remain concerned regarding the protection of their data, mindful of the ambiguously defined boundaries of model training and the potential legal liabilities,” Chandrasekaran said.

Because it's so difficult to get the right genAI talent into the enterprise, startups that offer tooling to make it easier to bring genAI development in-house will likely see faster adoption, according to Andreessen Horowitz, a venture capital firm that recently released a study on AI adoption.

Costs are high, but companies believe genAI benefits outweigh risks

The initial costs of genAI projects are negligible, according to Chandrasekaran, but they can quickly escalate as use cases expand. Poor architectural decisions, a lack of expertise in inference optimization, and insufficient change management exacerbate the problem, increasing the total cost of ownership of genAI.

Andreessen Horowitz recently spoke with dozens of Fortune 500 firms and their top enterprise leaders, and surveyed 70 more organizations, to understand how they're using, buying, and budgeting for generative AI.

“We were shocked by how significantly the resourcing and attitudes toward genAI had changed over the last six months,” the firm said in a new report. “Though these leaders still have some reservations about deploying generative AI, they’re also nearly tripling their budgets, expanding the number of use cases that are deployed on smaller open-source models, and transitioning more workloads from early experimentation into production.”

[Chart: Average enterprise spend on LLMs, actual and anticipated. Credit: Andreessen Horowitz]

Simply having an API to a model provider isn't enough to build and deploy generative AI solutions at scale, according to Andreessen Horowitz. It takes highly specialized talent to implement, maintain, and scale the requisite computing infrastructure.

“Implementation alone accounted for one of the biggest areas of AI spend in 2023 and was, in some cases, the largest,” Sarah Wang, an Andreessen Horowitz general partner, stated in a blog post. “One executive mentioned that LLMs are probably a quarter of the cost of building use cases, with development costs accounting for the majority of the budget.”

Two separate Gartner surveys conducted last year found that 78% of the nearly 4,000 IT leaders polled believed the benefits of genAI outweigh the risks of implementing the technology. Because of the high cost of implementation, however, getting genAI deployments right the first time is critical to their success.

Another significant challenge for genAI projects is demonstrating a strong return on investment (ROI). “The reality is that many organizations do not observe a financial return, compounded by difficulties in defining the ROI for AI initiatives in the first instance,” Chandrasekaran said.

Measuring the value of genAI implementations is “very specific to a use case, domain or industry,” Chandrasekaran said. “The vast majority of improvements will accrue to leading indicators of future financial value, such as productivity, cycle time, customer experience, faster upskilling of junior people, etc.”

Determine potential benefits up front

The first step in the genAI journey is to determine the AI ambition for the organization and conduct an exploratory dialogue on what is possible, according to Gartner. The next step is to solicit potential use cases that can be piloted with genAI technologies.

Unless genAI benefits translate into immediate headcount or other cost reductions, organizations can expect financial benefits to accrue gradually, depending on how the generated value is used.

For example, Chandrasekaran said, being able to do more with less as demand increases, relying on fewer senior workers, reducing the use of service providers, and improving customer and employee value (which leads to higher retention) are all financial benefits that grow over time.

Most enterprises are also customizing pre-built LLMs rather than building their own models. Through prompt engineering, retrieval-augmented generation (RAG), and fine-tuning, firms can adapt an open-source model to their specific needs.

RAG grounds a model's responses in retrieved organizational data, producing more customized and accurate output and greatly reducing anomalies such as hallucinations.
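As a simplified illustration of the retrieve-then-augment step RAG refers to, the Python sketch below scores stored snippets against a question and folds the best matches into a prompt. The keyword-overlap scoring and the sample policy snippets are stand-ins for the vector similarity search and enterprise data a production system would use.

    # Minimal sketch of the retrieve-then-augment step in a RAG pipeline.
    # A real system would embed chunks with a model and query a vector store;
    # the keyword-overlap scoring below is a stand-in that keeps the example
    # self-contained, and the corpus is invented for illustration.

    def relevance(query: str, chunk: str) -> int:
        """Toy relevance measure: count shared words between query and chunk."""
        return len(set(query.lower().split()) & set(chunk.lower().split()))

    def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
        return sorted(chunks, key=lambda c: relevance(query, c), reverse=True)[:top_k]

    def build_prompt(query: str, context: list[str]) -> str:
        """Ground the model in retrieved context to reduce hallucinated answers."""
        context_block = "\n".join(f"- {c}" for c in context)
        return (
            "Answer using only the context below. If the answer is not in the context, "
            f"say you don't know.\n\nContext:\n{context_block}\n\nQuestion: {query}"
        )

    if __name__ == "__main__":
        corpus = [
            "Expense reports must be filed within 30 days of travel.",
            "The cafeteria is open from 8 a.m. to 3 p.m. on weekdays.",
            "Travel booked through the approved portal is reimbursed automatically.",
        ]
        question = "How many days do I have to file an expense report after travel?"
        print(build_prompt(question, retrieve(question, corpus)))  # then sent to the chosen LLM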

Adoption of genAI by organizations will depend on six factors, according to Andreessen Horowitz:

  • Cost and efficiency: The ability to assess whether the benefits of using genAI-based systems outweigh the associated expenses. Handling and storing large data sets can result in increased expenses related to infrastructure and computational resources.
  • Knowledge and process-based work: A high degree of knowledge and process-based work vs. only field and physical work.
  • High cloud adoption: Medium-to-high level of cloud adoption, given infrastructure requirements.
  • Low regulatory and privacy burden: Functions or industries with heavy regulatory scrutiny, data privacy concerns, or ethical bias issues are not good candidates for genAI adoption.
  • Specialized talent: Strong talent with technical knowledge and new capabilities, and the ability to help transform the workforce to adapt quickly.
  • Intellectual property and licensing and usage agreements: Ability to assess licensing/usage agreements and restrictions, establish and monitor related compliance requirements, and negotiate customized agreements with relevant vendors.

Accessing genAI tools through cloud service providers is also the dominant procurement method, “as leaders were more concerned about closed-source models mishandling their data than their [cloud service providers], and to avoid lengthy procurement processes,” Andreessen Horowitz stated.

In order to help enterprises get up and running on their models, foundation model providers offer professional services, typically related to custom model development.

Best practices for deploying genAI

Along with partnering with a service provider, it’s also critical that organizations take steps to prepare for genAI implementations, the most critical of which is prioritizing the upskilling and reskilling of the workforce. That includes training around security and compliance — and ensuring that cloud provider licensing agreements address those concerns as well.

Deloitte’s genAI guide for CISOs recommends that organizations poised to gain the most from genAI adoption implement procedures to evaluate, negotiate, and oversee licensing agreements. Organizations should also design methods to monitor genAI tools and set up guardrails or controls to address AI-specific risks, such as innate biases.
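As one hedged illustration of what such a guardrail might look like, the Python sketch below screens a model response for obvious PII patterns before it reaches a user and logs every decision so the tool can be monitored over time. The patterns and refusal message are illustrative only; production guardrails would also cover bias, toxicity, and prompt-injection checks.

    # Minimal sketch of an output guardrail: scan a model response for obvious
    # PII patterns before it reaches the user, and log every decision so the
    # tool can be monitored over time. Patterns and policy are illustrative only.
    import logging
    import re

    logging.basicConfig(level=logging.INFO)

    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def apply_guardrail(response: str) -> str:
        """Return the response, or a refusal if it contains flagged patterns."""
        hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(response)]
        if hits:
            logging.warning("Response blocked; matched patterns: %s", hits)
            return "The response was withheld because it appeared to contain sensitive data."
        logging.info("Response passed guardrail checks")
        return response

    if __name__ == "__main__":
        print(apply_guardrail("Contact the employee at jane.doe@example.com for details."))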

As software code augmentation is a key use for genAI, companies should have assessment tools and model validation capabilities, as well as threat monitoring and detection that are aimed specifically at genAI models, Deloitte recommends.
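One simple form that model validation for code augmentation can take, sketched below with an illustrative denylist, is confirming that generated Python at least parses and flagging risky calls before the code moves further down the pipeline. Real assessment tooling would add sandboxed test runs, dependency scanning, and human review.

    # Minimal sketch of one model-validation check for code augmentation:
    # confirm generated Python parses and flag risky calls. The denylist is
    # illustrative, not a complete security policy.
    import ast

    RISKY_CALLS = {"eval", "exec", "os.system"}

    def assess_generated_code(source: str) -> list[str]:
        """Return a list of findings; an empty list means the basic checks passed."""
        findings = []
        try:
            tree = ast.parse(source)
        except SyntaxError as err:
            return [f"Does not parse: {err}"]
        for node in ast.walk(tree):
            if isinstance(node, ast.Call):
                name = ast.unparse(node.func)
                if name in RISKY_CALLS:
                    findings.append(f"Risky call: {name}")
        return findings

    if __name__ == "__main__":
        snippet = "import os\nos.system('rm -rf /tmp/cache')\n"
        print(assess_generated_code(snippet) or "Basic checks passed")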

“Above all, remember, a road map for Gen AI adoption should include close, constant collaboration for risk stakeholders, including cyber leaders, chief resource officers, an organization’s legal team, and more, to help understand and anticipate the risks,” Deloitte stated.

Research firm IDC’s advice for organizations to prepare for AI rollouts starts with clearly defining business objectives, use cases, and how value will be measured; making “build vs. buy” decisions at a use-case level; and partnering with trusted solution providers. Other steps include getting buy-in from company leadership; assessing and upgrading data infrastructure for AI-readiness; and establishing processes and controls around privacy, security, and responsible AI use.

[Graphic: IDC's guidance on preparing for the long-term impact of AI. Credit: IDC]

GenAI initiatives will need to scale from a few users to thousands, and eventually they should be deployed across the enterprise. Scaling genAI requires a systematic approach to build vs. buy decisions for the many potential use cases in the organization, according to Gartner.

“This upfront decision will have a lasting impact and must be thought through carefully for each use case,” Gartner stated in its report. “Ideally, you want to build when the AI product can give you a competitive differentiation in your industry and when you have adequate skills and know-how for the build process.”

Organizations should run pilots to try new ideas, build muscle memory within the organization for what is or isn’t possible through genAI, and learn by experimentation.

Additionally, Gartner recommends that organizations:

  • Design a composable genAI platform architecture. The genAI landscape consists of four critical layers — infrastructure, models, AI engineering tools, and applications. Ensure that your platform architecture is composable, scalable, and embedded with governance up front.
  • Put a responsible AI framework at the forefront of your genAI efforts by defining and publicizing a vision for responsible AI, with clear principles and policies across focus areas like fairness, bias mitigation, ethics, risk management, privacy, sustainability, and regulatory compliance.
  • Invest in data and AI literacy, because genAI will eventually be used by all or a large segment of employees. The ability to utilize AI in context with competency to identify relevant use cases, as well as implement and operate corresponding AI applications, is key. Also, partner with HR to set up career mapping clinics and open mic sessions to address the fear, uncertainty, and doubt (FUD) that exists around AI’s impact on skills and jobs.
  • Create robust data engineering practices, because genAI models deliver the most value when combined with organizational data. That includes training AI teams on best practices for integrating models with enterprise data via vector embeddings, as well as emerging approaches for efficient fine-tuning. Invest in capabilities like capturing metadata, building knowledge graphs, and creating data models (see the sketch after this list).
  • Adopt a product approach for genAI where timelines are ongoing and designed to continuously enhance customer value until the service or product is phased out.
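The sketch referenced in the data engineering recommendation above shows, in simplified form, what that ingestion work can look like: turning a snippet of enterprise content into an embedded, metadata-rich record ready for a vector store. The hashing-based "embedding" stands in for a real embedding model, and the record fields are an assumed schema for illustration, not a standard.

    # Minimal sketch of the data engineering step: turn enterprise content into
    # embedded, metadata-rich records. The toy_embedding function is a stand-in
    # for a real embedding model, and the record fields are illustrative.
    import hashlib
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    def toy_embedding(text: str, dims: int = 8) -> list[float]:
        """Deterministic stand-in for a model-generated embedding vector."""
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:dims]]

    @dataclass
    class DocumentRecord:
        text: str
        source: str   # where the chunk came from (system of record)
        owner: str    # accountable data owner, for governance
        embedding: list[float] = field(default_factory=list)
        ingested_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def ingest(text: str, source: str, owner: str) -> DocumentRecord:
        return DocumentRecord(text=text, source=source, owner=owner, embedding=toy_embedding(text))

    if __name__ == "__main__":
        record = ingest("Refunds are processed within five business days.", "policy-wiki", "finance-ops")
        print(record.source, record.ingested_at, record.embedding[:3])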