Ventana Research Analyst Perspectives

AI Hallucinations Are More Than a Headache for Business and IT

Written by Jeff Orr | Jan 24, 2024 11:00:00 AM

Imagine a world where artificial intelligence (AI) seamlessly integrates into every facet of your business, only to subtly distort your data and skew your insights. This is the emerging challenge of AI hallucinations, a phenomenon where AI models perceive patterns or objects that do not exist or are beyond human detection. 

AI hallucinations entered the popular vocabulary in early 2023 with the rollout of large language models (LLMs) such as ChatGPT. Users reported these applications incorporating credible-sounding inaccuracies into the content they generated. These are not mere errors but subtle distortions in AI outputs that are often hard to detect. 

Consider a retail company using AI to predict future sales. If the AI hallucinates a trend that doesn’t exist, the company might overstock or understock products, a problem known as inventory distortion, leading to financial losses. This is just one example of how AI hallucinations can impact business outcomes. 
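One pragmatic guardrail for the scenario above is a plausibility check that catches forecasts far outside what history supports before they drive ordering decisions. The function name, margin and figures below are hypothetical, a minimal sketch rather than a production control:

```python
# Hypothetical sanity check: flag forecasts that fall far outside the
# range of historical observations before they drive inventory orders.

def flag_implausible(forecasts, history, margin=0.5):
    """Return indexes of forecasts outside the historical range,
    widened by `margin` (0.5 = 50% headroom on either side)."""
    lo, hi = min(history), max(history)
    span = hi - lo
    lower, upper = lo - margin * span, hi + margin * span
    return [i for i, f in enumerate(forecasts) if not lower <= f <= upper]

history = [100, 120, 95, 110, 130]   # past weekly unit sales
forecasts = [115, 128, 400, 105]     # 400 looks like a hallucinated spike
print(flag_implausible(forecasts, history))   # -> [2]
```

A check like this does not explain why the model produced the spike, but it keeps an obviously distorted number from reaching the supply chain until a human reviews it.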

Biases in training data can lead to AI hallucinations. For instance, if a model is trained on unbalanced data, it might generate biased outputs. Algorithmic biases also play a significant role: one widely reported AI recruiting tool was scrapped after it proved to be biased against women, and commercial software used in the U.S. to predict criminal recidivism was found to be biased against African Americans. 

Unchecked AI hallucinations can lead to severe financial repercussions, including legal and reputational damage. They can influence various aspects of enterprise systems, accumulating over time and leading to systemic issues. For example, an AI system used for credit scoring might hallucinate a negative correlation between a certain demographic and creditworthiness, leading to unfair loan denials. 

In the enterprise context, IT teams should be particularly concerned about the trust and reliability of generative AI (GenAI) outputs due to the potential for hallucinations. These hallucinations can lead to misleading insights, which can impact decision-making and operational efficiency. Therefore, devising appropriate strategies to combat the issue is necessary. We assert that through 2026, one-third of enterprises will realize that a lack of AI and machine learning (ML) governance has resulted in biased and ethically questionable decisions. 

Enterprise organizations can reduce AI hallucinations when training on, and maintaining, their own company data in several ways: 

  1. Data Quality: Ensure the quality of the data used for training. The data should be accurate, complete and representative of the problem space. Any inaccuracies, gaps or biases in the data can lead to AI hallucinations. 
  2. Data Preprocessing: Use robust preprocessing techniques to clean the data by handling missing values, outliers and errors. This can help reduce the chances of AI hallucinations. 
  3. Feature Selection: Carefully select the features used for training the AI model. Irrelevant or redundant features can confuse the model and lead to hallucinations. 
  4. Model Selection and Training: Choose the right AI model for the task and train it properly. Overfitting, where the model learns the training data too well and performs poorly on new data, can lead to hallucinations. Vector search and retrieval-augmented generation (RAG), which grounds a model’s responses in retrieved source documents, can be helpful here. 
  5. Regular Audits: Regularly audit the AI model’s outputs to catch and correct hallucinations early. This can involve manual checks or automated forms of testing. 
  6. Feedback Loops: Implement feedback loops where the predictions made by the AI model are compared against real results. This can help identify and correct hallucinations. 
  7. Transparency and Explainability: Strive for transparency and explainability in the AI model. If the workings of the model are understandable, it’s easier to identify and correct hallucinations. 
  8. Collaboration: Collaborate with AI developers, data scientists and domain experts to ensure the AI model is well understood and its outputs are valid and reliable. 
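The data quality and preprocessing steps above (items 1 and 2) can be sketched as a simple pre-training audit. The column name, threshold and figures are illustrative assumptions, not a standard API:

```python
import statistics

# Hypothetical pre-training audit: report missing values and z-score
# outliers in a numeric column before it feeds a model. The z-score
# threshold is an illustrative policy choice.

def audit_column(values, z_threshold=3.0):
    present = [v for v in values if v is not None]
    missing = len(values) - len(present)
    mean = statistics.mean(present)
    stdev = statistics.stdev(present)
    outliers = [v for v in present
                if stdev and abs(v - mean) / stdev > z_threshold]
    return {"missing": missing, "outliers": outliers}

daily_sales = [210, 205, None, 198, 5000, 215, 202]  # 5000 is suspect
print(audit_column(daily_sales, z_threshold=2.0))
# -> {'missing': 1, 'outliers': [5000]}
```

Running a report like this on every refresh of company data surfaces the gaps and anomalies that would otherwise be learned by the model as if they were real signal.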
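The retrieval step behind RAG (item 4) can also be sketched in miniature. Real systems use learned embeddings and a vector database; the toy version below ranks documents by cosine similarity of term-frequency vectors and prepends the best match to the prompt, so the model answers from company data rather than inventing facts. All names and documents here are hypothetical:

```python
import math
from collections import Counter

# Toy RAG retrieval: cosine similarity over term-frequency vectors.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(qv, Counter(d.lower().split())))

docs = [
    "Q3 sales of widgets rose 4 percent in the northeast region.",
    "The employee handbook covers travel reimbursement policy.",
]
context = retrieve("how did widget sales change in q3", docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(context)
```

Because the generated answer is constrained to the retrieved context, a hallucinated claim becomes easier to spot: it has no supporting passage to point to.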

CIOs can proactively address AI hallucinations by implementing robust data validation processes and regularly auditing AI algorithms. Collaboration with AI developers and researchers is key to staying ahead of emerging challenges. For instance, CIOs could establish a dedicated team of data scientists, AI developers and domain experts to monitor AI outputs and validate them against real-world data. They could also collaborate with external AI ethics consultants to ensure their AI systems adhere to ethical guidelines. 
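Validating outputs against real-world data, as described above, can be reduced to a small recurring check: compare the model’s predictions with realized outcomes and flag the model for human review when its error drifts past a threshold. The function name and the 10% threshold below are illustrative assumptions:

```python
# Hypothetical feedback-loop check: flag a model for review when its
# mean absolute percentage error (MAPE) against actuals exceeds a limit.

def needs_review(predicted, actual, max_mape=0.10):
    errors = [abs(p - a) / a for p, a in zip(predicted, actual) if a]
    mape = sum(errors) / len(errors)
    return mape > max_mape

predicted = [100, 110, 120, 300]   # last forecast is badly off
actual    = [ 98, 112, 118, 150]
print(needs_review(predicted, actual))   # -> True
```

Scheduled against each reporting cycle, a check like this turns the feedback loop from a principle into an operational control: hallucinated trends show up as sustained error, not just anecdotes.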

Human oversight in AI systems is essential to prevent and correct hallucinations. To that end, building a culture of responsibility among AI developers and data scientists is crucial. Enterprises have shown they can effectively balance AI autonomy with humans in the loop providing oversight, and many have implemented rigorous testing and validation processes for AI systems. 

Reducing AI hallucinations is an ongoing process that requires continuous monitoring, evaluation and adjustment. It’s not a one-time task but a crucial part of maintaining and improving the performance of AI systems. Enterprises also have a responsibility to address ethical implications. Industry-wide collaboration, such as AI-centric coalitions, to establish ethical guidelines and best practices is necessary. 

AI hallucinations are more than just an enterprise IT headache. They are a call to action for CIOs to prioritize addressing this emerging challenge. The transformative potential of AI can only be fully realized when approached with caution and responsibility. 

The journey toward responsible AI is ongoing and collaborative. CIOs and IT leaders are encouraged to stay informed, adapt strategies and contribute to the ethical evolution of AI in the enterprise landscape. Remember, the future of AI in the enterprise is not just about leveraging its power, but also about tackling its obstacles with insight and anticipation. 

Regards,

Jeff Orr