NEW FEATURE
Cobalt PtaaS + DAST combines manual pentests and automated scanning for comprehensive applications security.
NEW FEATURE
Cobalt PtaaS + DAST combines manual pentests and automated scanning for comprehensive applications security.

Multi-Modal Prompt Injection Attacks Using Images

Recent developments have unveiled a new class of cyber threats aimed at Large Language Models (LLMs) like ChatGPT: multi-modal prompt injection attacks. These sophisticated strategies involve embedding malicious instructions within images and audio clips. Unlike direct attacks, these are indirect, manipulating the LLM's responses to user prompts subtly and dangerously. 

Researchers from Cornell University recently demonstrated at Black Hat Europe 2023 how these attacks could redirect users to malicious URLs or extract sensitive information, among other malicious actions. Below, we'll unpack the nature of these indirect prompt injections, their potential impact, and the subtle yet significant threat they pose to the increasingly multimodal world of AI chatbots.

Risks of Image Attacks in Large Language Models (LLMs)

Multimodal prompt injection image attacks have emerged as a new attack vector due to GPT-4V's support for image inputs. Unlike their text-only counterparts, these attacks embed commands, malicious scripts, or code within images, which the LLM then processes as legitimate. 

The model’s fundamental nature compounds this vulnerability. LLMs, including GPT-4V, lack a data sanitization step in their processing. They trust every image, making them susceptible to attacks that can embed a series of commands within images, leading to fraud, operational sabotage, or social engineering attacks.

Potential Risks and Consequences for Users and Organizations

The implications for users and organizations are profound. For businesses, especially those relying on LLMs for image analysis and classification, an attack could dramatically alter how images are interpreted, leading to misinformation and chaotic outcomes. 

The risks extend beyond just a contaminated dataset to include severe operational disruptions and financial fraud. Once an LLM's prompt is overridden, it becomes more susceptible to further malicious commands, making it a tool for sustained attacks rather than a one-off breach. This susceptibility is particularly alarming given the LLMs' increasing integration into sensitive areas like medical diagnosis, autonomous driving, and surveillance.

Real-World Implications and Examples of Past Breaches

Real-world implications are already emerging. Attackers have demonstrated the ability to manipulate GPT-4V, steering it to ignore safety guardrails and perform harmful commands. 

The researchers' demonstrations at Black Hat Europe 2023, for instance, highlighted the model's susceptibility to manipulated audio and visual inputs. These aren't hypothetical risks; they are proven vulnerabilities that have already been exploited in various forms.

What Companies Can Do to Mitigate the Potential Damage of a Multi-Modal Prompt Injection Attack

To combat these threats, a multi-pronged approach is necessary. Mitigation tactics include:

  • Improving the sanitation and validation of user inputs
  • Adopting a more robust platform architecture
  • Creating multi-stage processing workflows 

These are some recommended strategies today, with more evolving alongside this new technology. As the technology and attack vectors evolve, so too must the defenses. 

As LLMs become increasingly multimodal, the image emerges as a new frontier for attackers, one that requires immediate and sustained attention to protect against the potential for sabotage and misinformation on a massive scale.

Strategies and Best Practices for Protecting Against Image Attacks

To shield against image-based attacks, layered approach to security is best. First, improving input sanitation and validation is crucial. LLMs should not blindly trust inputs but rather have robust mechanisms to analyze and vet images and audio for potentially malicious content. 

Additionally, identity-access management (IAM) and least-privilege access strategies should be standard for enterprises utilizing private LLMs. By controlling who has access and ensuring that access is only granted as necessary, the potential for introducing malicious prompts is greatly reduced.

The Role of Cybersecurity Measures and LLM Design Improvements

From a design perspective, LLMs need to be fundamentally rethought to address their inherent vulnerabilities. This involves incorporating data sanitization steps within the processing, ensuring that all data, especially images, undergo rigorous checks before being processed. 

Moreover, separating user input from system logic can help ensure that even if malicious input is received, it doesn't directly affect the LLM's core functions or decision-making processes. Implementing multi-stage processing workflows can also help trap attacks early, identifying and neutralizing threats before they cause harm.

Importance of Ongoing Research and Updates in Defense Mechanisms

The battle against multi-modal prompt injection attacks is ongoing, with attackers continually evolving their strategies. As such, continuous research is vital. 

Staying abreast of the latest attack methodologies and understanding their implications is crucial for developing effective defenses. Collaboration between industry, academics, and security professionals can foster the innovation needed to stay one step ahead of attackers. Regular updates and patches to LLMs based on the latest research are not just beneficial but necessary to ensure they remain secure against an ever-changing threat landscape.

It's clear that mitigating the potential damage of multi-modal prompt injection attacks requires a multi-faceted approach. It involves not only implementing best practices and strong cybersecurity measures but also fundamentally rethinking the design of LLMs to address inherent vulnerabilities. Moreover, it requires a commitment to ongoing research and collaboration to adapt and evolve defenses in line with the ever-changing nature of cyber threats.

Examples of Multi-modal Prompt Injection Attacks Using Images

The realm of cybersecurity is continually evolving, with multi-modal prompt injection attacks emerging as a sophisticated threat. These attacks, which manipulate Large Language Models (LLMs) through images and sounds, have demonstrated the vulnerabilities in systems we previously believed secure. 

Two notable cases, a targeted-output attack on PandaGPT and a dialog poisoning attack against LLaVA, offer deep insights into the techniques used, vulnerabilities exploited, and the broader implications for cybersecurity.

PandaGPT

In the targeted output attack against PandaGPT, researchers developed a method to manipulate the model's output by embedding an adversarial perturbation within an audio clip. 

When a user innocently queried the system about this audio, the model, deceived by the hidden prompt, directed the user to a malicious URL. The technique of adversarial perturbation is particularly insidious. 

It involves subtle, often imperceptible alterations to an input, designed to exploit the model's inherent trust in the data it processes. This attack not only exposed the vulnerability of LLMs to multi-modal inputs but also highlighted the potential for seemingly benign inputs to serve as vehicles for dangerous payloads.

The dialog poisoning attack against LLaVA took this a step further. In this scenario, an image was altered with an instruction that made the chatbot respond as if it were a character from Harry Potter, influencing the entire conversation that followed. 

This attack didn't just change the LLM's response to a single prompt but poisoned its behavior for an entire interaction. It exploited the auto-regressive nature of LLMs, which build responses based on both the immediate input and the broader context of the conversation. This showed how an initial malicious input could have a cascading effect, steering the model's behavior long after the original input was processed.

These cases underscore a critical vulnerability in LLMs: the inherent trust placed in their inputs. While designed to analyze and interpret vast amounts of multi-modal data, these models are particularly susceptible to inputs that have been maliciously crafted. The adversarial perturbations used in these attacks are challenging to detect and can significantly influence the model's behavior, demonstrating a sophisticated understanding of the models' inner workings and their potential weak points.

Minimize risk efficiently and effectively with offensive security call to action image

Back to Blog
About Andrew Obadiaru
Andrew Obadiaru is the Chief Information Security Officer at Cobalt. In this role Andrew is responsible for maintaining the confidentiality, integrity, and availability of Cobalt's systems and data. Prior to joining Cobalt, Andrew was the Head of Information Security for BBVA USA Corporate Investment banking, where he oversaw the creation and execution of Cyber Security Strategy. Andrew has 20+ years in the security and technology space, with a history of managing and mitigating risk across changing technologies, software, and diverse platforms. More By Andrew Obadiaru