If you’re a premium subscriber, add the private feed to your podcast app at add.lennysreads.com.
Ever run an AI analysis on customer data, only to discover the numbers were fabricated and the insights completely generic? In this episode, Caitlin Sullivan, a user-research veteran who’s trained hundreds of product and research professionals, shares her four prompting techniques for getting trustworthy, actionable insights out of any LLM. After 2,000+ hours of testing customer discovery workflows with AI, she’s identified the failure modes that break AI analysis and the reliable fixes for each one.
In this episode, you’ll learn:
How to catch the two types of AI quote hallucinations
Why AI defaults to useless generic themes and insights
Which LLM is best for analysis work (and which one fabricates the most)
How to turn vague signal into actual decision clarity
The final verification pass that stress-tests everything before it hits a deck