The coming AI security crisis (and what to do about it) | Sander Schulhoff

Sander Schulhoff is an AI researcher specializing in AI security, prompt injection, and red teaming. He wrote the first comprehensive guide on prompt engineering and ran the first-ever prompt injection competition, working with top AI labs and companies, and his dataset is now used by Fortune 500 companies to benchmark their AI systems' security. He has spent more time than anyone alive studying how attackers break AI systems, and what he's found isn't reassuring: the guardrails companies are buying don't actually work, and we've been lucky to avoid more harm so far only because AI agents aren't yet capable enough to do real damage.

We discuss:

  1. The difference between jailbreaking and prompt injection attacks on AI systems

  2. Why AI guardrails don’t work

  3. Why we haven’t seen major AI security incidents yet (but soon will)

  4. Why AI browser agents are vulnerable to hidden attacks embedded in webpages

  5. The practical steps organizations should take instead of buying ineffective security tools

  6. Why solving this requires merging classical cybersecurity expertise with AI knowledge


Brought to you by:

Datadog—Now home to Eppo, the leading experimentation and feature flagging platform

Metronome—Monetization infrastructure for modern software companies

GoFundMe Giving Funds—Make year-end giving easy

Where to find Sander Schulhoff:

• X: https://x.com/sanderschulhoff

• LinkedIn: https://www.linkedin.com/in/sander-schulhoff

• Website: https://sanderschulhoff.com

• AI Red Teaming and AI Security Masterclass on Maven: https://bit.ly/44lLSbC

Referenced:

• AI prompt engineering in 2025: What works and what doesn’t | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff

• The AI Security Industry is Bullshit: https://sanderschulhoff.substack.com/p/the-ai-security-industry-is-bullshit

• The Prompt Report: Insights from the Most Comprehensive Study of Prompting Ever Done: https://learnprompting.org/blog/the_prompt_report

• OpenAI: https://openai.com

• Scale: https://scale.com

• Hugging Face: https://huggingface.co

• Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition: https://www.semanticscholar.org/paper/Ignore-This-Title-and-HackAPrompt%3A-Exposing-of-LLMs-Schulhoff-Pinto/f3de6ea08e2464190673c0ec8f78e5ec1cd08642

• Simon Willison’s Weblog: https://simonwillison.net

• ServiceNow: https://www.servicenow.com

• ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html

• Alex Komoroske on X: https://x.com/komorama

• Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack: https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack

• MathGPT: https://math-gpt.org

• 2025 Las Vegas Cybertruck explosion: https://en.wikipedia.org/wiki/2025_Las_Vegas_Cybertruck_explosion

• Disrupting the first reported AI-orchestrated cyber espionage campaign: https://www.anthropic.com/news/disrupting-AI-espionage

• Thinking like a gardener not a builder, organizing teams like slime mold, the adjacent possible, and other unconventional product advice | Alex Komoroske (Stripe, Google): https://www.lennysnewsletter.com/p/unconventional-product-advice-alex-komoroske

• Prompt Optimization and Evaluation for LLM Automated Red Teaming: https://arxiv.org/abs/2507.22133

• MATS Research: https://substack.com/@matsresearch

• CBRN: https://en.wikipedia.org/wiki/CBRN_defense

• CaMeL offers a promising new direction for mitigating prompt injection attacks: https://simonwillison.net/2025/Apr/11/camel

• Trustible: https://trustible.ai

• Repello: https://repello.ai

• Do not write that jailbreak paper: https://javirando.com/blog/2024/jailbreaks


Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Lenny may be an investor in the companies discussed.


My biggest takeaways from this conversation:
