Beyond Tomorrow: The Dark Side of Gen AI

The latest news in AI, from Google's AI Overviews meltdown to Immersive Labs' study on GenAI prompt injection attacks, and DataRobot's collaboration with IMDA.
Beyond Tomorrow: The Dark Side of Gen AI

Beyond Tomorrow: The Dark Side of Gen AI

The way conversation around artificial intelligence has picked up, there’s no dearth of news, and that’s a good thing for people who report on it. This week was particularly eventful, with notable developments and some weekend action.

Google’s AI Overviews experienced a major meltdown, generating bizarre and potentially harmful responses.

Oh, Google, when will you learn? Over the past weekend, Google’s recently introduced AI Overviews, designed to provide AI-generated responses to search queries, experienced a major meltdown. In many instances, the answers it generated were not just incorrect but also bizarre and potentially harmful. Asking Google how to ensure cheese sticks to pizza shouldn’t lead to suggestions like adding non-toxic glue or petrol to your noodles.

Newgen Software launched a Gen AI platform for banks, called LumYn, to enhance profitability and improve customer experiences.

We can’t go a week without mentioning OpenAI, and it’s only because it’s so interesting. In the past few weeks, the company had reality television levels of drama, but that seems to be cooling off a little bit now.

Immersive Labs conducted a study involving GenAI prompt injection attacks on chatbots, revealing that 88% of participants were able to trick a bot into exposing passwords.

Generative artificial intelligence presents dilemmas for security teams as they determine how to use it in ways that benefit their business without creating vulnerabilities. Immersive Labs, a Bristol, England-based cybersecurity firm that focuses on user training, recently performed a study involving GenAI prompt injection attacks on chatbots. It released a report of the results and found that 88% of participants were able to trick a bot into exposing passwords.

DataRobot, the enterprise AI platform leader, announced the integration of LLM evaluation measures aligned with a new initiative from the Singapore Government Agency, Infocomm Media Development Authority (IMDA).

DataRobot Collaborates with IMDA

On May 31, 2024, DataRobot, the enterprise AI platform leader, today announced the integration of LLM evaluation measures aligned with a new initiative from the Singapore Government Agency, Infocomm Media Development Authority (IMDA). The “Project Moonshot” initiative unveiled at the Singapore Asia Tech x Summit offers new capabilities that help AI practitioners and system owners manage LLM deployment risks by providing a common framework for benchmarking and red teaming evaluation.

“At DataRobot, our focus is addressing the confidence gap and helping organizations scale responsible use of generative AI,” said Jay Schuren, Chief Customer Officer, DataRobot. “We’re excited to announce that our latest product release incorporates Project Moonshot’s testing toolkit and its benchmarking and evaluation tests. The result is that LLM evaluations are more accessible and help scale the responsible use of generative AI, enabling practitioners to turn on and configure guard models to change the behavior and responses of LLMs.”

Project Moonshot delivers three core capabilities for AI practitioners and system owners:

  • Automated evaluation tools for generative AI solutions that easily integrate into CI/CD pipelines.
  • A benchmark repository allowing teams to run evaluations relevant to their applications by curating the right benchmarks.
  • A one-stop tool for AI red teaming, from jailbreaks to customized attacks.

It’s Time to Adjust Our Approach to AI

Large language models aren’t currently a safe training ground for sensitive information, like exposed personal data or payment info like credit card data. Breen provided some recommendations for protecting data when using LLMs like GPT in business environments.

“The simple answer is not to give any sensitive data to these models where unauthorized or untrusted users can query the LLMs,” he said. “Developers and security teams should also know how data is transferred between components in GenAI applications. Knowing where data could be exposed means more protections can be put in place.”

Breen also recommended taking data loss prevention requirements into consideration for both user queries and the bot’s response, despite inherent vulnerabilities in DLP checks.

“Finally, ensure you have adequate logging in place, not just for access but also for the messages and responses sent to and from GenAI models,” Breen suggested. “This can help developers and security teams monitor for signs of attack and tune the application to limit potential impact.”

Analyzing logs over time could reveal patterns that indicate prompt injections. It can also help teams identify which users or web page visitors are a potential threat if the bot is embedded within an external-facing application.