        Ventana Research Analyst Perspectives


        Market Observations on AI Governance



        Having just completed our AI Platforms Buyers Guide assessment of 25 different software providers, I was surprised to see how few provided robust AI governance capabilities. As I’ve written previously, data governance has changed dramatically over the last decade, with nearly twice as many enterprises (71% v. 38%) implementing data governance policies during that time. With all this attention on data governance, I had expected AI platform software providers would recognize the needs of enterprises and would have incorporated more AI governance capabilities. Good governance efforts can lead to improved business processes, but as we saw with analytics, AI is emerging as a weak link in data governance. As a result, we expect that through 2026, one-third of enterprises will realize that a lack of AI and machine learning (ML) governance has resulted in biased and ethically questionable decisions.

Let’s look at some of the things enterprises should consider as they implement AI governance within their organizations. First of all, AI is heavily dependent on data, so the same data governance and privacy issues that exist in data and analytics platforms also exist in AI platforms. Data access should be restricted, and privacy should be protected with appropriate access controls, encryption and masking. In addition, the output of the models, particularly generative AI (GenAI) models, may contain data about individuals that could be protected information. Only 6 of the 25 software providers we evaluated provided adequate data governance.
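To make the masking requirement concrete, here is a minimal sketch of column-level masking. The protected field names and the hash-based masking rule are illustrative assumptions, not taken from any provider we evaluated:

```python
import hashlib

# Hypothetical masking policy: which fields are protected information.
PROTECTED_FIELDS = {"email", "ssn"}

def mask_value(value: str) -> str:
    """Replace a protected value with a truncated one-way hash so records
    can still be joined on it, but the raw value cannot be read."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with protected fields masked."""
    return {
        k: mask_value(str(v)) if k in PROTECTED_FIELDS else v
        for k, v in record.items()
    }

row = {"name": "Ada", "email": "ada@example.com", "score": 0.92}
masked = mask_record(row)
# Unprotected fields pass through; protected fields come back hashed.
print(masked)
```

A production platform would enforce this at the access layer rather than per record, but the principle is the same: protected values never leave the platform in readable form.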

The process of developing and maintaining AI models is iterative. Understanding how a model was created and being able to recreate the model is important and, in some cases, may be necessary to comply with regulations. Reproducibility requires versioning and archiving the various artifacts used in the model training process. Most platforms provided only limited capabilities here: while many software providers supported some level of reproducibility, particularly in the data preparation process, only three fully met the requirements for end-to-end reproducibility.
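The versioning-and-archiving requirement can be sketched as fingerprinting each training artifact so that a run can later be recreated or verified. The manifest fields below are hypothetical, intended only to show the idea:

```python
import hashlib
import json

def fingerprint(artifact: bytes) -> str:
    """Content hash that uniquely identifies a training artifact."""
    return hashlib.sha256(artifact).hexdigest()

def record_run(data: bytes, config: dict, code_version: str) -> dict:
    """Archive the identifiers needed to recreate this training run:
    the data, the hyperparameter configuration and the code version."""
    return {
        "data_hash": fingerprint(data),
        "config_hash": fingerprint(json.dumps(config, sort_keys=True).encode()),
        "code_version": code_version,
    }

run_a = record_run(b"training rows...", {"lr": 0.01, "epochs": 10}, "v1.4.2")
run_b = record_run(b"training rows...", {"lr": 0.01, "epochs": 10}, "v1.4.2")
# Identical inputs yield identical manifests, so any later change to the
# data, config or code is detectable.
print(run_a == run_b)  # True
```

End-to-end reproducibility extends this same idea across data preparation, feature engineering and training, which is why so few providers fully met the requirement.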

        Another issue of concern during the model-building process is bias detection. Bias is a measure of the fairness, impartiality and neutrality of a model. Only five software providers had adequate mechanisms for detecting bias in the models they produced. And once a model is produced, enterprises need to monitor drift, or how much the model has deviated in its accuracy over time. The model may have been deemed adequate at the time it was created, but changing market conditions or business operations may cause the accuracy to decline over time. In this area, providers did slightly better, with nine providers fully meeting the requirements.
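Drift monitoring, as described above, amounts to tracking a model’s accuracy over a recent window and flagging it when accuracy falls below an acceptable level. The window size and threshold below are illustrative assumptions, not values from any provider:

```python
from collections import deque

class DriftMonitor:
    """Flag a model whose rolling accuracy falls below a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.85):
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        """Log whether one prediction matched the observed outcome."""
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drifted(self) -> bool:
        return self.accuracy < self.threshold

# A model that was accurate at deployment starts missing as conditions change.
monitor = DriftMonitor(window=10, threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:
    monitor.record(pred, actual)
print(monitor.drifted())  # True: rolling accuracy is 0.7, below 0.8
```

Real platforms also monitor drift in the input data distribution, not just accuracy, since labeled outcomes often arrive too late to act on.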

        Another area with a parallel to data governance is the concept of a catalog. Data catalogs are indispensable for good data governance, as my colleague Matt Aslett has written. Similarly, model catalogs play a key role in AI governance. Catalogs provide an inventory of what models exist within an enterprise as well as metadata about those models, such as their development or production status and other characteristics. Ideally, the catalog would include an indication of whether a model was certified for production use within the enterprise. However, only five of the platforms evaluated had robust, built-in approval workflows.
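A model catalog with an approval workflow, as described above, can be sketched in a few lines. The metadata fields and the certification step here are hypothetical, chosen only to illustrate the concept:

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    """Catalog metadata for one model version (fields are illustrative)."""
    name: str
    version: str
    status: str = "development"   # development or production
    certified: bool = False       # approved for production use?

class ModelCatalog:
    """Inventory of the models in an enterprise, with a simple
    certification workflow."""

    def __init__(self):
        self._models = {}

    def register(self, entry: ModelEntry) -> None:
        self._models[(entry.name, entry.version)] = entry

    def certify(self, name: str, version: str) -> None:
        """Mark a model version as approved for production use."""
        entry = self._models[(name, version)]
        entry.certified = True
        entry.status = "production"

    def production_models(self) -> list:
        return [m for m in self._models.values() if m.certified]

catalog = ModelCatalog()
catalog.register(ModelEntry("churn-predictor", "1.0"))
catalog.certify("churn-predictor", "1.0")
print(len(catalog.production_models()))  # 1
```

The built-in approval workflows we looked for in the evaluation go further, routing certification through designated reviewers, but the inventory-plus-status core is the same.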

Other governance considerations include documentation of models and cost controls for the creation and usage of models. In addition, there are some governance issues specific to GenAI, such as toxicity, hijacking, hallucinations and intellectual property (IP) infringement.

Many of these issues can be addressed with processes implemented outside of the AI platforms themselves. But creating and maintaining those processes is fraught with challenges. First, it requires additional resources. Second, any manual process is prone to errors. It’s not all doom and gloom, though. Our Buyers Guide evaluations only considered generally available capabilities, and several software providers are adding governance capabilities that are currently in preview. In the meantime, enterprises will need to remain vigilant. ISG Research shows more than one-quarter of enterprises report their governance of AI falls short of their expectations. It’s important to understand what features exist and what features are planned, but for the near term, enterprises should expect to invest resources in AI governance if they expect to utilize and trust AI in their business processes.

        Regards,

        David Menninger

        Authors:

        David Menninger
        Executive Director, Technology Research

        David Menninger leads technology software research and advisory for Ventana Research, now part of ISG. Building on over three decades of enterprise software leadership experience, he guides the team responsible for a wide range of technology-focused data and analytics topics, including AI for IT and AI-infused software.
