        AI Hallucinations Are More Than a Headache for Business and IT

        Imagine a world where artificial intelligence (AI) seamlessly integrates into every facet of your business, only to subtly distort your data and skew your insights. This is the emerging challenge of AI hallucinations, a phenomenon where AI models perceive patterns or objects that do not exist or are beyond human detection. 

AI hallucinations entered the popular vocabulary in early 2023 with the rollout of large language models (LLMs) such as ChatGPT. Users reported that these applications incorporated credible-sounding inaccuracies into the content they generated. These are not mere errors but subtle distortions in AI outputs that are often hard to detect.

        Consider a retail company using AI to predict future sales. If the AI starts to hallucinate a trend that doesn’t exist, the company might overstock or understock products — known as inventory distortion — leading to financial losses. This is an example of how AI hallucinations can impact business. 

Biases in training data can lead to AI hallucinations. For instance, a model trained on unbalanced data may generate biased outputs. Algorithmic biases also play a significant role: AI recruiting tools have been scrapped after exhibiting bias against women, and commercial software used in the U.S. to predict criminal recidivism was found to be biased against African Americans.

        Unchecked AI hallucinations can lead to severe financial repercussions, including legal and reputational damage. They can influence various aspects of enterprise systems, accumulating over time and leading to systemic issues. For example, an AI system used for credit scoring might hallucinate a negative correlation between a certain demographic and creditworthiness, leading to unfair loan denials. 

In the enterprise context, IT teams should be particularly concerned about the trust and reliability of generative AI (GenAI) outputs due to the potential for hallucinations. These hallucinations can lead to misleading insights, which can impact decision-making and operational efficiency. Therefore, devising appropriate strategies to combat the issue is necessary. We assert that through 2026, one-third of enterprises will realize that a lack of AI and machine learning (ML) governance has resulted in biased and ethically questionable decisions.

Enterprises can reduce AI hallucinations when training on and maintaining their own company data in several ways:

        1. Data Quality: Ensure the quality of the data used for training. The data should be accurate, complete and representative of the problem space. Any inaccuracies, gaps or biases in the data can lead to AI hallucinations. 
2. Data Preprocessing: Use robust preprocessing techniques to clean the data and catch missing values, outliers and errors. This can help reduce the chances of AI hallucinations (a minimal cleaning sketch follows this list).
        3. Feature Selection: Carefully select the features used for training the AI model. Irrelevant or redundant features can confuse the model and lead to hallucinations. 
4. Model Selection and Training: Choose the right AI model for the task and train it properly. Overfitting, where the model learns the training data too well and performs poorly on new data, can lead to hallucinations. Vector search and retrieval-augmented generation, which grounds the model's responses in retrieved source data, can be helpful here (see the retrieval sketch after this list).
5. Regular Audits: Regularly audit the AI model's outputs to catch and correct hallucinations early. This can involve manual checks or automated testing.
        6. Feedback Loops: Implement feedback loops where the predictions made by the AI model are compared against real results. This can help identify and correct hallucinations. 
        7. Transparency and Explainability: Strive for transparency and explainability in the AI model. If the workings of the model are understandable, it’s easier to identify and correct hallucinations. 
        8. Collaboration: Collaborate with AI developers, data scientists and domain experts to ensure the AI model is well understood and its outputs are valid and reliable. 
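
To ground a couple of these steps, here is a minimal data-cleaning sketch for step 2. It uses Python with pandas; the column names and defect values are hypothetical, and a real pipeline would add domain-specific validation rules.

```python
import pandas as pd

# Hypothetical weekly sales history showing the defects step 2 targets:
# a missing value, an impossible negative and an extreme outlier.
sales = pd.DataFrame({
    "week": [1, 2, 3, 4, 5, 6],
    "units_sold": [120.0, None, 118.0, 9500.0, -3.0, 125.0],
})

# Fill missing values with the median, then drop impossible negatives.
sales["units_sold"] = sales["units_sold"].fillna(sales["units_sold"].median())
sales = sales[sales["units_sold"] >= 0]

# Flag outliers with the interquartile-range rule, which stays robust
# on small samples where z-scores are dominated by the outlier itself.
q1, q3 = sales["units_sold"].quantile([0.25, 0.75])
iqr = q3 - q1
in_range = sales["units_sold"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = sales[in_range]
print(clean)  # the 9,500-unit week is excluded as a likely data error
```

And for step 4, a minimal sketch of vector search feeding retrieval-augmented generation (RAG). The embed function is a stand-in assumption (a production system would call a real embedding model), but the retrieve-then-ground prompt construction is the RAG pattern itself.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding function (an assumption for this sketch).
    A production system would call a real embedding model; a
    hash-seeded random vector keeps the example self-contained."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(64)
    return vec / np.linalg.norm(vec)

# A tiny in-memory "vector store" of trusted company documents.
documents = [
    "Q3 sales of product A grew 4% quarter over quarter.",
    "Return policy: customers may return items within 30 days.",
    "Inventory for product B is replenished every two weeks.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Vector search: rank documents by cosine similarity to the query."""
    q = embed(query)
    scores = doc_vectors @ q  # unit vectors, so dot product = cosine
    best = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in best]

def build_grounded_prompt(question: str) -> str:
    """RAG: splice retrieved passages into the prompt so the model
    answers from company data instead of inventing an answer."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return ("Answer using ONLY the context below. If the context is "
            f"insufficient, say so.\n\nContext:\n{context}\n\n"
            f"Question: {question}")

print(build_grounded_prompt("How did product A sell last quarter?"))
```

Because the prompt instructs the model to answer only from retrieved company passages, a hallucinated answer becomes easier to spot: it has no supporting passage.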

CIOs can proactively address AI hallucinations by implementing robust data validation processes and regularly auditing AI algorithms. Collaboration with AI developers and researchers is key to staying ahead of emerging challenges. For instance, CIOs could establish a dedicated team of data scientists, AI developers and domain experts to monitor AI outputs and validate them against real-world data. They could also collaborate with external AI ethics consultants to ensure their AI systems adhere to ethical guidelines.
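
As a sketch of what that auditing could look like in practice (the tolerance and the forecast figures below are illustrative assumptions, not a prescribed standard), a recurring job can compare model outputs against realized results and flag large deviations for the review team:

```python
import statistics

def audit_forecasts(predicted: list[float], actual: list[float],
                    tolerance: float = 0.15) -> dict:
    """Compare predictions against realized results and flag items whose
    relative error exceeds the tolerance, as candidates for human review."""
    errors, flagged = [], []
    for i, (p, a) in enumerate(zip(predicted, actual)):
        rel_err = abs(p - a) / max(abs(a), 1e-9)
        errors.append(rel_err)
        if rel_err > tolerance:
            flagged.append(i)
    return {"mean_relative_error": statistics.mean(errors),
            "flagged_indices": flagged}

# Weekly demand forecasts vs. actual units sold (illustrative numbers).
report = audit_forecasts(predicted=[120, 95, 310, 40],
                         actual=[118, 97, 180, 41])
print(report)  # index 2 is flagged: a possible hallucinated trend
```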

Human oversight in AI systems is essential to prevent and correct hallucinations. To that end, building a culture of responsibility among AI developers and data scientists is crucial. Enterprises have shown they can effectively balance AI autonomy with human-in-the-loop oversight, and many have implemented rigorous testing and validation processes for AI systems.
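
One common pattern for that balance is a confidence gate that routes low-confidence model outputs to a human reviewer instead of acting on them automatically. The threshold and predictions below are illustrative assumptions, not recommended values:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds model outputs that a human must approve before action."""
    pending: list[tuple[str, float]] = field(default_factory=list)

    def route(self, prediction: str, confidence: float,
              threshold: float = 0.8) -> str:
        # High-confidence outputs proceed automatically; everything
        # else waits for a human decision (the human-in-the-loop gate).
        if confidence >= threshold:
            return f"auto: {prediction}"
        self.pending.append((prediction, confidence))
        return f"queued for human review: {prediction}"

queue = ReviewQueue()
print(queue.route("restock product A", confidence=0.93))
print(queue.route("deny credit line", confidence=0.55))
print(len(queue.pending), "item(s) awaiting human review")
```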

Reducing AI hallucinations is an ongoing process that requires continuous monitoring, evaluation and adjustment. It's not a one-time task but a crucial part of maintaining and improving the performance of AI systems. Enterprises also have a responsibility to address ethical implications. Industry-wide collaboration, such as AI-centric coalitions to establish ethical guidelines and best practices, is necessary.

        AI hallucinations are more than just an enterprise IT headache. They are a call to action for CIOs to prioritize addressing this emerging challenge. The transformative potential of AI can only be fully realized when approached with caution and responsibility. 

        The journey towards responsible AI is ongoing and collaborative. CIOs and IT leaders are encouraged to stay informed, adapt strategies and contribute to the ethical evolution of AI in the enterprise landscape. Remember, the future of AI in enterprise is not just about leveraging its power, but also about tackling its obstacles with insight and anticipation. 

        Regards,

        Jeff Orr

Author:

        Jeff Orr
        Director of Research, Digital Technology

Jeff Orr leads the research and advisory for the CIO and digital technology expertise at Ventana Research, now part of ISG, with a focus on modernization and transformation for IT. Jeff's coverage spans cloud computing, DevOps and platforms, digital security, intelligent automation, ITOps and service management, and observability technologies across the enterprise.
