
        Ventana Research Analyst Perspectives

        Who Poisoned the AI Watering Hole: Adversarial Threats in AI

        Embracing artificial intelligence technologies opens doors for innovation and efficiency. Alongside these opportunities, however, come risks. Threat actors are keenly aware of the potential impact of AI systems and are actively exploring ways to manipulate them. In this Analyst Perspective, I explore the world of adversarial machine-learning threats and provide practical guidance for securing AI systems.

        Generative AI applications introduce great potential to enterprises, bringing the intersection of innovation and security closer to business needs. Ventana Research asserts that by 2026, one-third of enterprises will establish governance to mitigate risks associated with GenAI software, ensuring ethical considerations, avoiding data and model bias, and safeguarding privacy and data security.

        GenAI systems can inadvertently generate biased or harmful content. Enterprises must ensure that ethical guidelines are in place to direct AI-generated outputs and prevent unintended consequences. GenAI models also learn from historical data, which may contain biases. Organizations need robust mechanisms to detect and rectify bias during model training and deployment. Plus, GenAI applications often process sensitive data. Implementing strong privacy controls and safeguarding data against breaches is essential.
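        To make the bias point concrete, here is a minimal sketch of one such detection mechanism: comparing a model's positive-prediction rate across groups, a demographic parity check. The group names, data and tolerance are illustrative, not drawn from any specific toolkit.

```python
# Minimal, illustrative bias check: compare a model's positive-prediction
# rate across two hypothetical groups (a demographic parity gap).

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions, keyed by group
preds = {"group_a": [1, 1, 0, 1, 0], "group_b": [0, 0, 1, 0, 0]}
gap = parity_gap(preds)
print(round(gap, 2))  # 0.4 -> a gap this large warrants review
```

        A pipeline might run a check like this after each training cycle and block deployment whenever the gap exceeds an agreed tolerance.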

        Striking a balance between innovation and digital security becomes paramount for enterprise adoption of GenAI software. Let’s explore the adversarial threats that enterprises must navigate to protect AI systems.

        A recent report by scientists from the National Institute of Standards and Technology and its collaborators identifies vulnerabilities in AI systems that adversaries can use to create malfunctions. The NIST report provides an overview of four major attack types that AI systems might suffer and corresponding approaches to mitigate damage. The report further classifies attacks according to multiple criteria such as the attacker's goals and objectives, capabilities and knowledge. The leading attack types include abuse, evasion, poisoning and privacy.

        Abuse attacks involve exploiting AI systems for unintended purposes. Using a language model to generate harmful or inappropriate content by compromising legitimate data sources is an example. Strict access controls, content filtering and regular audits can prevent abuse attacks.
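        As a concrete sketch of the content-filtering control mentioned above, the following screens a model's output against a denylist before returning it. The terms and refusal message are purely illustrative; production systems would use far richer moderation than a keyword match.

```python
# Illustrative output filter: withhold a response that contains a
# denylisted term. Terms and messages are hypothetical examples.

DENYLIST = {"password", "ssn", "credit card"}

def filter_output(text):
    """Return the text only if it contains no denylisted term;
    otherwise return a refusal placeholder."""
    lowered = text.lower()
    if any(term in lowered for term in DENYLIST):
        return "[response withheld by content filter]"
    return text

print(filter_output("The forecast calls for rain."))
print(filter_output("Here is the admin password: hunter2"))
```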

        Evasion attacks aim to deceive AI models during inference by subtly modifying input data. For example, adversaries alter an image, such as a speed limit sign or road lane marker, to mislead an image-recognition system into misclassifying objects. Robust model architectures, input preprocessing and adversarial training can help mitigate evasion attacks.
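        A toy example makes the mechanics of evasion concrete. The sketch below perturbs the input of a hypothetical linear classifier in the direction of the gradient sign, the idea behind the well-known fast gradient sign method; the weights and inputs are invented for illustration.

```python
# Minimal sketch of an evasion (FGSM-style) perturbation against a toy
# linear classifier. The model and numbers are illustrative only.

def predict(weights, x):
    """Toy linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def fgsm_perturb(weights, x, epsilon):
    """Shift each feature by epsilon in the direction that raises the
    score (for a linear model, the gradient sign is the weight sign)."""
    return [xi + epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.8, -0.5, 0.3]
x = [-0.2, 0.4, -0.1]          # classified as 0: score = -0.39
adv = fgsm_perturb(weights, x, epsilon=0.5)

print(predict(weights, x))     # 0
print(predict(weights, adv))   # 1 after a small, targeted perturbation
```

        The defenses named above aim to break exactly this lever: adversarial training exposes the model to such perturbed inputs, and preprocessing reduces their effect.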

        Poisoning attacks involve injecting malicious data into the training set to compromise model performance. An attacker, for example, modifies training data to bias a sentiment analysis model like a chatbot into utilizing inappropriate language. Rigorous data validation, outlier detection and monitoring during training are essential to prevent poisoning attacks.
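        One of the data-validation defenses mentioned above can be sketched as a simple robust outlier screen. The modified z-score below uses the median and the median absolute deviation, which resist contamination from the very points being screened; the data and threshold are illustrative.

```python
# Illustrative outlier screen for training data. A median/MAD-based
# ("modified z-score") check resists the injected points themselves.
import statistics

def filter_outliers(values, threshold=3.5):
    """Drop points whose modified z-score exceeds the threshold.
    0.6745 scales the MAD to be comparable to a standard deviation."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return list(values)
    return [v for v in values if 0.6745 * abs(v - med) / mad <= threshold]

clean = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05]
screened = filter_outliers(clean + [50.0])  # 50.0 is an injected point
print(screened)  # the injected point is dropped; the seven clean ones remain
```

        Real pipelines would combine such statistical screens with provenance checks on where each record came from.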

        Privacy attacks exploit model outputs to infer sensitive data about individuals. A membership inference attack reveals whether a specific data point was part of the training set. By asking a chatbot legitimate questions, an attacker may use those responses to identify and exploit a weakness in the model. Differential privacy techniques, model aggregation and data anonymization enhance privacy protection.
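        As a minimal sketch of one such technique, the following adds Laplace noise to a counting query, the textbook Laplace mechanism of differential privacy. The dataset, query and epsilon value are illustrative.

```python
# Minimal Laplace-mechanism sketch: answer a counting query with noise
# calibrated to epsilon. Data and parameters are illustrative.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def dp_count(records, predicate, epsilon=1.0):
    """Noisy count: the true count plus Laplace(1/epsilon) noise.
    A counting query has sensitivity 1 (one record changes it by at
    most 1), so the noise scale is 1/epsilon. The difference of two
    Exp(epsilon) draws is Laplace-distributed with that scale."""
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 47, 38, 61]
noisy = dp_count(ages, lambda age: age > 40)  # true answer is 3
print(round(noisy, 2))
```

        The noise makes any single record's presence or absence statistically deniable, which is precisely what membership inference tries to exploit.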

        While enterprises invest significant effort in building robust AI models, attackers face far lower barriers: crafting an attack requires none of the complexity of model development, training and optimization. Enterprises must recognize this asymmetry and prioritize AI security measures.

        A wealth of open-source tools and research papers is available for crafting attacks. Adversarial attack libraries, pre-trained models and tutorials are accessible online. Organizations must defend against these well-documented techniques.

        Threat actors can test their attacks against various models without rigorous validation. Enterprises often lack comprehensive testing for adversarial scenarios. Rigorous testing, including adversarial examples, is essential during model development.

        The potential impact of compromising AI systems drives attackers’ motivation. Enterprises may underestimate the attractiveness of AI assets to attackers. Organizations should assess the value of AI models from an adversary’s perspective.

        As organizations embrace AI, understanding its unique security challenges, including the lower barriers attackers face, becomes paramount. Enterprises must proactively address these challenges to safeguard AI systems and data. Essential practices to safeguard AI assets include:

        • Leveraging existing frameworks. Extend existing governance and risk-management programs to cover AI. Rather than creating entirely new policies, integrate AI security considerations into your enterprise’s existing governance structure. This ensures consistency and alignment with overall risk-management practices.
        • Risk assessment. Regularly assess AI-related risks. Understand the unique threats posed by AI systems, including adversarial attacks, data privacy concerns and model vulnerabilities. Update policies and risk assessments accordingly to address emerging risks.
        • Stakeholder engagement. Involve non-technical executives and stakeholders in security discussions. Effective AI security requires collaboration across departments. Engage business leaders, legal teams and compliance officers to ensure a holistic approach to risk management.
        • Quality control. Rigorously validate and curate training data. Poisoning attacks often exploit vulnerabilities in training data. Implement strict data validation processes to prevent malicious injections. Regularly review and clean training datasets.
        • Data privacy. Implement privacy-preserving techniques. Sensitive data used for training AI models must be protected. Techniques like differential privacy can help safeguard individual privacy while maintaining model performance. Consider anonymization methods as well.
        • Data retention. Define data retention policies. Minimize the exposure of sensitive data by retaining it only as long as necessary. Regularly review and update retention policies to align with changing business needs and compliance requirements.
        • Adversarial training. Train models with adversarial examples. Adversarial training exposes models to intentionally crafted adversarial inputs during training. This helps improve model robustness and resilience against attacks.
        • Monitoring and alerts. Continuously monitor model behavior. Set up alerts for suspicious activities, such as unexpected prediction shifts or sudden drops in performance that may indicate tampering. Regularly review model outputs to detect signs of an adversarial attack.
        • Regular updates. Keep models up-to-date with security patches. Just like any enterprise software, AI models may have vulnerabilities. Stay informed about security updates and apply them promptly to mitigate known risks.
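        The monitoring practice above can be sketched as a simple distribution-shift check: compare the class frequencies of recent predictions against a baseline window and alert on a large move. The labels, windows and threshold here are hypothetical.

```python
# Illustrative prediction-drift monitor: alert when the class mix of
# recent predictions moves sharply away from a baseline window.

def class_distribution(predictions):
    """Map each label to its frequency in the prediction window."""
    total = len(predictions)
    counts = {}
    for p in predictions:
        counts[p] = counts.get(p, 0) + 1
    return {label: n / total for label, n in counts.items()}

def drift_alert(baseline, recent, threshold=0.2):
    """Alert when any class frequency shifts by more than `threshold`
    between the baseline and recent windows."""
    labels = set(baseline) | set(recent)
    return any(abs(baseline.get(l, 0) - recent.get(l, 0)) > threshold
               for l in labels)

baseline = class_distribution(["spam"] * 10 + ["ham"] * 90)
recent = class_distribution(["spam"] * 60 + ["ham"] * 40)  # sudden shift
print(drift_alert(baseline, recent))  # True -> investigate for tampering
```

        A sudden swing like this does not prove an attack, but it is exactly the kind of signal worth routing to an alert for human review.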

        The convergence of AI’s transformative potential and the increasing sophistication of threat actors requires a proactive approach to safeguarding AI systems. Securing AI systems is an ongoing journey that demands collaboration and adaptability. By following these best practices, enterprises can secure the AI watering hole and contribute to a safer digital future.


        Jeff Orr
        Director of Research, Digital Technology

        Jeff Orr leads the research and advisory for the CIO and digital technology expertise at Ventana Research, now part of ISG, with a focus on modernization and transformation for IT. Jeff’s coverage spans cloud computing, DevOps and platforms, digital security, intelligent automation, ITOps and service management, and observability technologies across the enterprise.


        Our Analyst Perspective Policy

        • Ventana Research’s Analyst Perspectives are fact-based analysis and guidance on business, industry and technology vendor trends. Each Analyst Perspective presents the view of the analyst who is an established subject matter expert on new developments, business and technology trends, findings from our research, or best practice insights.

          Each is prepared in accordance with Ventana Research’s strict standards for accuracy and objectivity, and is reviewed and edited by research management and approved by the Chief Research Officer to ensure it delivers reliable and actionable insights; no individual or organization outside of Ventana Research reviews any Analyst Perspective before it is published. If you have any issue with an Analyst Perspective, please email the Chief Research Officer at ChiefResearchOfficer@ventanaresearch.com.
