Indirect Prompt Injection - 101 👹
TL;DR 📚
Occurs when an LLM accepts input from external sources that can be controlled by an attacker, such as websites or files. The attacker may embed prompt injection in the external content, hijacking the conversation context. This would
🔄 Natural Language => LLM => SQL 🔄
As you probably know, one of the prevalent applications these-days for LLMs in production is the translation of natural language to SQL.
This awesome tutorial demonstrates how to create a natural language to SQL code generator using LLM in a
Our CEO
@ItakGol
weighs in on
@eschuman
's latest piece.
Hasty LLM integrations into cloud services create attack opportunities.
Read the full article on
@CSOonline
📅MARK YOUR CALENDAR: February 20th
A conversation between Danny Portman from
@ZetaGlobal
and
@ItakGol
on Generative AI, building customer-facing apps and its security implications.
Have questions for the speakers? DM us!
Register here:
Plugins, Prompt Injection and Cross Plug-in Request Forgery.
TL;DR-
Let ChatGPT visit a website and have your email stolen 🤯📧
In details-
Here’s how it works, step-by-step:
1) The attacker hosts a malicious prompt-injection payload on their website. Johann didn’t want to
This is dangerous 🛑
Custom GPTs have a significant data security flaw.
Prompt injection can now lead to the leakage of the entire uploaded knowledge bases. The entire uploaded knowledge bases.
In some cases, even a simple request like "Let me download the file" can lead to
Introducing: LLM Top10 GPT🛡️
Our team recently built a custom GPT based on the OWASP Top 10 For Large Language Model Applications, enriched by many other GenAI Security resources out there.
Try the LLM Top 10 GPT here:
**Requires ChatGPT Plus
On this Valentine's Day, we have something to confess...
We're so LLM-agnostic that we swipe right on all your GenAI apps. You could say we're in a polyamorous relationship😻