ISG Software Research Analyst Perspectives

Detecting and Preventing Bias: A Crucial Element of AI Governance

Written by David Menninger | Oct 9, 2024

In today's rapidly evolving technological landscape, artificial intelligence (AI) governance has emerged as a critical ingredient for successful AI deployments. It helps build trust in the results of AI models, supports compliance with regulations and is necessary to meet internal governance requirements. Effective AI governance must encompass various dimensions, including data privacy, model drift, hallucinations, toxicity and, perhaps most importantly, bias. Unfortunately, we expect that through 2026, model governance will remain a significant concern for more than one-half of enterprises, limiting the deployment, and therefore the realized value, of AI and machine learning (ML) models. Continuing my previous discussions about AI governance, I’ll take a look at bias in this analyst perspective.

Bias can be described as skew in the outputs of AI models (including large language models, or LLMs) for some specific segment of the domain being modeled. Bias is most often associated with gender, ethnicity, sexual orientation, age, disability or membership in another group that could experience discriminatory practices. Bias in models used for hiring, credit decisions, lease applications and school admissions could result in legal issues or damage to an enterprise’s reputation. But bias can affect any group. Perhaps a model is biased with respect to residents of the western part of the country compared with other regions, or with respect to adults with a high school education versus those with higher education. While these latter examples may not result in discrimination that has legal consequences, the inaccuracies in the model can result in suboptimal operations and impact the bottom line.
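
To make the idea of skew concrete, here is a minimal sketch that compares the rate of favorable model outcomes across two segments and computes a disparate impact ratio. The records and group names are purely hypothetical; the four-fifths rule of thumb, which flags ratios below 0.8 for review, is one common benchmark.

```python
from collections import defaultdict

# (group, model_decision) pairs; decision 1 = favorable outcome (e.g., credit approved).
# All records here are hypothetical illustrations.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
favorable = defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    favorable[group] += decision

# Favorable-outcome rate per group reveals skew in the model's outputs.
rates = {g: favorable[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Disparate impact ratio: lowest group rate divided by highest.
# The four-fifths rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33
```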

Typically, bias arises from two sources: data bias and model bias. Data bias arises from unrepresentative samples and from historical biases carried forward in the training data. For example, in granting credit, many lending institutions have been less inclined to offer credit to minorities; those real-world biases will be captured in the models’ predictions unless steps are taken to compensate for them. A related issue arises when a training dataset primarily represents one demographic: the model may perform poorly for others. Model bias is systematic, repeatable error in a model’s predictions. The way data is cataloged and labeled can produce model bias: data is often cataloged and labeled by humans, and the decisions they make may be inaccurate or incomplete, inadvertently altering the resulting model outputs. Model bias can also result from an algorithm’s inherent design. Common examples occur in healthcare, recruitment and predictive policing, where predictive algorithms can produce disparate outcomes based on gender, race or ethnic background.
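
A minimal sketch of how unrepresentative training data shows up downstream: comparing per-group accuracy on scored records. The labels and predictions below are hypothetical; a wide gap between a well-represented group and an underrepresented one is a signal of data bias.

```python
from collections import defaultdict

# (group, true_label, predicted_label) triples; all values are hypothetical.
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 0),
    ("minority", 1, 0), ("minority", 0, 1), ("minority", 1, 1),
]

correct = defaultdict(int)
seen = defaultdict(int)
for group, truth, pred in records:
    seen[group] += 1
    correct[group] += int(truth == pred)

# Per-group accuracy; a large gap suggests the training data
# may under-represent the poorly served group.
for group in seen:
    print(f"{group}: accuracy {correct[group] / seen[group]:.2f} on {seen[group]} examples")
```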

Enterprises need to take steps to detect and prevent bias. One of the most important is to establish and track metrics that measure bias. These metrics should track and compare model performance across demographic groups over time. Predefined metrics also help maintain accountability. As mentioned in my earlier perspective, at the time of our evaluation a few months ago, only 5 of the 25 software providers evaluated in our AI Platform Buyers Guide provided tools that measure and track bias. Several others provided a generic metric capability, leaving it up to the enterprise to define and track the metrics it considers relevant. Still other providers ignored the issue entirely. Additional metrics should be measured and tracked as well, including drift, fairness and model accuracy. Ideally, providers would offer a metrics framework along with predefined metrics that address core governance issues such as bias, drift and fairness. In addition to metrics, enterprises must ensure their training datasets are diverse and representative, employing measures to mitigate historical bias.
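
As a sketch of what such tracking might look like (provider tooling will vary, and the groups, dates and values here are hypothetical), metric snapshots can be recorded at each scoring run and compared over time to see whether gaps between groups are widening:

```python
from dataclasses import dataclass

@dataclass
class MetricSnapshot:
    date: str              # scoring-run date
    group: str             # demographic segment
    favorable_rate: float  # share of favorable model outcomes

# Hypothetical history of a bias metric captured at each scoring run.
history = [
    MetricSnapshot("2024-09-01", "group_a", 0.62),
    MetricSnapshot("2024-09-01", "group_b", 0.58),
    MetricSnapshot("2024-10-01", "group_a", 0.64),
    MetricSnapshot("2024-10-01", "group_b", 0.49),
]

# Compare groups within each run to see whether the gap is widening over time.
for date in sorted({s.date for s in history}):
    rates = {s.group: s.favorable_rate for s in history if s.date == date}
    gap = max(rates.values()) - min(rates.values())
    print(f"{date}: {rates} gap={gap:.2f}")
```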

Detecting and preventing bias is just one aspect, albeit an important one, of a robust AI governance program. Understanding the sources of bias can help an enterprise design its governance processes to address these issues. In designing those processes, start with an understanding of what capabilities your software provider offers today and what is planned for the near future. Define the metrics necessary to track bias and the other key indicators discussed above. Report and review those metrics frequently, even if it must be done outside the AI platform you are using. Where possible, establish notifications when metrics deviate from accepted tolerances.
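
A notification can be as simple as a tolerance check over the latest metric values. The thresholds and the notify() hook below are hypothetical placeholders; in practice the alert might go to email, a chat channel or an incident-management system.

```python
# Accepted tolerance bands per metric; these thresholds are illustrative only.
TOLERANCES = {
    "disparate_impact_ratio": (0.8, 1.25),
    "group_accuracy_gap": (0.0, 0.05),
}

def notify(message: str) -> None:
    # Placeholder channel; a real deployment would route this to email or chat.
    print(f"ALERT: {message}")

def check_metrics(latest: dict) -> None:
    # Flag any metric that falls outside its accepted tolerance band.
    for name, value in latest.items():
        low, high = TOLERANCES.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            notify(f"{name}={value:.2f} is outside tolerance [{low}, {high}]")

check_metrics({"disparate_impact_ratio": 0.33, "group_accuracy_gap": 0.08})
```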

In summary, AI governance is not just a regulatory checkbox; it is an essential framework for building trust and improving outcomes in AI deployments. The path to unbiased AI begins with a commitment to effective governance. By prioritizing the creation of fair, transparent and trustworthy AI systems, enterprises can harness AI's transformative power while minimizing risks.

Regards,

David Menninger