How I built LennyRPG
Turning 300+ episode transcripts into a fun, playable game with AI
👋 Each week, I answer reader questions about building product, driving growth, and accelerating your career. For more: Lenny’s Podcast | Lennybot | How I AI | My favorite AI/PM courses, public speaking course, and interview prep copilot
P.S. Get a full free year of Lovable, Manus, Replit, Gamma, n8n, Canva, ElevenLabs, Amp, Factory, Devin, Bolt, Wispr Flow, Linear, PostHog, Framer, Railway, Granola, Warp, Perplexity, Magic Patterns, Mobbin, ChatPRD, and Stripe Atlas by becoming an Insider subscriber. Yes, this is for real.
A few months ago, I shared all of my podcast transcripts on socials on a whim, and holy sh*t, y’all found such incredibly creative ways to use this data: parenting wisdom rooted in PM advice, user research scripts, antimemes, an infographic for every episode, a “Learn from Lenny” Twitter bot, and at least 50 other amazing projects.
But my favorite project of all was by Ben Shih, a non-technical product designer at Miro, who created LennyRPG. I asked Ben to share the step-by-step journey behind this wildly fun, video-game-inspired project—how he built it and what he learned.
To let a thousand more flowers bloom, today I’m releasing my entire newsletter archive (and my podcast transcripts) in AI-friendly Markdown files. Also, an MCP server and a handy GitHub repo. Paid subscribers get all of the data (some 350 posts and 300 transcripts); free subscribers can access a subset. Grab the data here: LennysData.com.
I don’t think anyone’s ever done anything like this before, and I’m excited to give you this excuse to start playing with the latest and greatest AI tools.
Here’s my challenge to you: build something, and let me know about it. I’ll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments below. If you’ve already built something, slurp in this new data and submit it, too. I’ll pick a winner on April 15th. Here’s the data. Let’s go.
Ben is a designer and product builder who enjoys creating small, fun, and thoughtful products that make the world a little better. He’s currently a growth designer at Miro. You can explore more of his work on his website or LinkedIn.
Also, a big thank-you to Tal Raviv, Claire Vo, and Este Lopez for helping me beta-test and improve LennysData.com (which I proudly “agentically engineered” with Codex and Claude Code 👌).
A couple of months back, Lenny dropped something special: he made the transcripts from more than 300 of his podcast episodes structured and publicly available. As someone who’s listened to the podcast for years, I couldn’t stop thinking about what I could actually build with this.
Brian Balfour talks a lot about building at the right moment: if you get the timing right, you’ll find a real window of opportunity. This felt like one of those windows.
The first idea that popped into my mind was to make a Lenny interview app where you can practice job interviews with Lenny’s Podcast guests. However, the more I thought about that idea, the less excited I felt. Interview practice tools by nature feel stressful, and that’s the last type of product I wanted to create. I wanted to make something fun.
What if I turned Lenny’s Podcast into a small role-playing game (RPG)? A game where you explore a pixel world, meet guests from Lenny’s Podcast, compete with them to test your product knowledge, and even capture them like Pokémon when you win. That’s how LennyRPG was born.

Here’s how I built it
When I build apps with AI, I usually follow a very simple flow:
Define the core idea: I start by clarifying what the app is. For visually heavy products, I sketch it out so the AI can better understand the requirements.
Create a product requirement document (PRD): I turn the core idea into a proper PRD with the AI. This becomes the single source of truth for the build.
Create a proof of concept: I use the PRD to plan implementation and build the core functionalities first.
Add remaining features: I finish the end-to-end flow, such as connecting the database and building out settings, profile pages, and other non-core features.
Polish: I go through the app end-to-end, fix UX/UI details, and do final code reviews to make sure everything is stable.
Ship it: I deploy, get feedback, and get it out into the world.
The process isn’t that different from before the AI era. But now I really make sure that I spend enough time on the first two steps to ensure that the AI gets all the context of what I want. In my experience, nailing the core idea and PRD determines 80% of how smooth the rest of the build will be.
Here are the main tools and technologies I used throughout the build:
Ideation and planning: Miro, ChatGPT
Coding: Claude Code, Codex, Cursor
Image generation: GPT Image Gen (gpt-image-1.5)
Quiz generation: GPT-4o
Game engine: Phaser 3
Database: Supabase
Deployment: Vercel
Now let’s walk through how I used this process to build my RPG game. I’ll share the exact prompts, tools, and decisions at each step so you can apply the same workflow to your own projects.
1. Define the core idea
The core idea was simple: turn Lenny’s Podcast into a Pokémon-style RPG where players encounter podcast guests in the wild and battle them through product questions.
For many apps, text and a clear idea are enough to get started with. But for highly visual products like this game, spending some time on visualization can help you get a solid sense of how you want the game to look and feel. That makes a big difference later on when you ask the AI to build the UI and interactions.
For this game, I dropped a few Pokémon screenshots into a Miro board and put together a rough concept directly on the board. Nothing fancy, mostly text and boxes on top of screenshots. But it was enough to show how I imagined the map, the battle screen, and how the characters might look.
The goal was not to design the game exactly but to give the AI something concrete to read and reason about. Once the core idea was roughly visualized, the AI could read the visuals alongside the text, which led to a much stronger PRD in the next step.
Creating a sample set of avatars
As part of the visualization, I also created a few test avatars with ChatGPT to validate the content generation workflow. This helped me understand the prompt engineering needed for consistent pixel art style avatars.
The process was very simple. I dragged images of the RPG characters I wanted the style to match into ChatGPT, then added Lenny’s photo and asked it to create a matching avatar.
Once I was happy with it, I asked ChatGPT to describe the tone, style, and design in detail so that I could reuse that as a prompt later.
Prompt I used:
Study and think through the styling, design, colors, proportions, and overall look in detail, then return only a polished image-generation prompt that will create a similar front-facing character based on the provided person’s photo, with a transparent background and no additional elements.
2. Create a PRD (in a lazy way)
In my experience, the PRD is the most important document if you want the AI to execute your vision correctly. Just as it does for human teammates, it gives the AI a base understanding of your app’s goal, problem statement, and core idea. Whenever the AI hits a wall or the context window runs out, you can refer it back to the PRD to realign. No matter what stage you’re at, the PRD keeps everything the AI generates true to what you’re actually trying to build. That’s why I always invest time here.
That said, writing PRDs can sometimes drive you crazy. So instead of writing the PRD myself from scratch, I let the AI interview me. I pasted my core idea along with the visuals into ChatGPT and then asked it to ask me questions so that I could answer them one by one.
Prompt I used:
Ask me questions to help you put together a brief PRD for the following web game: I want to create a mini game that takes all the podcast episodes from Lenny’s Podcast, generates questions from each episode, and make it like a Pokémon RPG game, with similar visuals. What I am expecting is, for example, you found Elena in the wild, and you can compete with Elena on product questions, you get 5 questions, and you lose HP [hit points] when you lose the answer etc. We can randomly pick 50 guests from the podcast and get challenged. The entire theme/design of the game needs to be very Pokémon RPG style in the old day.

With the prompt, ChatGPT came back with 17 questions. I moved them into Miro to visualize them better and used Wispr Flow to quickly dictate my answers verbally.
Answering these questions also forced me to think through gaps and assumptions in my idea, while giving the AI much better context than a one-page description ever could.
Once I answered all of the questions the AI had for me, I chained the answers together with all the available artifacts in Miro and asked the AI to generate a comprehensive PRD.
3. Create a proof of concept
With the PRD in place, I moved it over to Cursor as a Markdown file so I could start working on the POC.
For the actual development, my setup uses three tools: Claude Code, Codex, and Cursor’s Composer.
I treat Claude Code as my lead engineer. It helps draft the implementation plan, think through architecture, and reason about product and design constraints. I’ve also found it to be great at searching for solutions and open source libraries. Codex is mainly for executing tasks from the implementation plan. It’s very good at following instructions accurately, and comes with a more generous token limit. Composer is mostly for smaller tasks like formatting documents, JSON files, or writing simple scripts.
Using the PRD as input, I first asked Claude Code to search for any open source projects that could help me move faster. This is something I always do early on. Very often, people have already built something similar and made it open source on GitHub, which can help you set things up much faster.
One of the first libraries I landed on was RPG-JS. Thanks to the library, it took me around five minutes to get something running. I was able to quickly build out the essential game flow. The overworld map handled basic player movement, encounter zones, and simple UI elements.
But very quickly, I started hitting challenges.
Challenge #1: Hitting the limits of RPG-JS and pivoting
After a few iterations, it became clear that RPG-JS was not the right foundation. The framework is heavily designed around inventory systems and weapon-based combat. That worked against me, since my battles were quiz-based and logic-driven. The more I tried to bend it, the harder it became for the AI to reason about the system cleanly.
After talking it through with Claude Code, I decided to stop forcing it and pivot. I switched to Phaser, a 2D game framework for making HTML5 games for desktop and mobile.
Challenge #2: Getting the map running in Phaser
After switching to Phaser, things became much more flexible in terms of scenes, maps, and game logic. However, because everything is more customizable, even setting up a basic map took a lot more work.
Fortunately, using Claude Code, I found a Medium article from a while ago that included an open source, reusable map template. That helped me speed things up significantly and get back to focusing on the game itself.
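For orientation, here’s roughly what a minimal Phaser 3 tilemap setup looks like. This is a sketch, not the project’s actual code; the asset keys, file names, and layer names are all illustrative, and it assumes a map exported from the Tiled editor:

```typescript
// Minimal Phaser 3 overworld: load a Tiled map and render its layers.
// All asset keys and file names below are illustrative assumptions.
import Phaser from "phaser";

class OverworldScene extends Phaser.Scene {
  preload() {
    this.load.image("tiles", "assets/overworld-tileset.png"); // tileset image
    this.load.tilemapTiledJSON("overworld", "assets/overworld.json"); // Tiled export
  }

  create() {
    const map = this.make.tilemap({ key: "overworld" });
    // First argument must match the tileset name inside the Tiled JSON.
    const tileset = map.addTilesetImage("overworld-tileset", "tiles");
    map.createLayer("ground", tileset!, 0, 0);
    map.createLayer("decor", tileset!, 0, 0);
  }
}

new Phaser.Game({
  type: Phaser.AUTO,
  width: 480,
  height: 320,
  pixelArt: true, // crisp scaling for pixel-art assets
  scene: OverworldScene,
});
```

Even this much wiring (matching tileset names between Tiled and Phaser, getting layer order right) is where the back-and-forth tends to happen.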
Challenge #3: Polishing the details in Phaser
Phaser is a powerful but complex library with a lot of different features. Claude Code took some time to actually understand how it works, and I had to go through many iterations to get the details right. Things like importing the right fonts, making sure UI elements were positioned correctly, and editing everything within Phaser’s open canvas all required a lot of back and forth.
One tip for complicated tasks like this is to ask Claude Code to create a simple Markdown file to log everything it tries, so it can keep referring back and updating what works and what doesn’t.
This helps Claude Code (and AI tools in general) understand the codebase and framework better. It’s especially useful as your codebase grows, when even small things like font changes can become difficult for an AI to handle.
After working through all these challenges, the game finally reached a playable state. The starting screen, map, and battle screen were all working end-to-end.
Once the POC was ready, I shared it internally in the office to get a few people to try it out. At this stage, I wasn’t looking for polished feedback or detailed bug reports. I mostly wanted to see how people reacted when they opened the game for the first time. Did they immediately understand what to do? Did the core loop make sense? Most importantly—did it feel fun, or did it feel like work?
This kind of lightweight, informal testing gave me confidence that the core idea worked, and that it was worth investing more time to turn the POC into something more complete.
4. Add remaining features
Once the basics were running correctly and the feedback from folks in the office was positive, I started following the plan to scale my POC into a proper game.
But the process was less straightforward than expected, mainly because there were a lot of podcast episodes to process. Scaling from a working POC to a full game turned out to be mostly about figuring out how to handle things systematically instead of manually.
Here are the main tools and decisions that helped me get there:
Processing 300+ transcripts systematically
The transcript file provided by Lenny contained only raw text. To make it usable in the game, I first had to enrich the data with things like episode title, episode URL, and podcast cover.
To do this, I pulled in the podcast RSS feed with Cursor’s Composer and used it to attach the missing metadata to each transcript. This gave me a much more complete dataset that the game could actually use.
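A sketch of what that enrichment step can look like, under my own assumptions (the actual tool was generated by Composer, and a production version should use a real XML parser rather than regexes, since podcast feeds vary):

```typescript
// Hedged sketch: extract per-episode metadata (title, link, cover image)
// from a podcast RSS feed. Regex-based for brevity; real feeds warrant
// a proper XML parser.
type EpisodeMeta = { title: string; link: string; cover: string };

function parseFeed(xml: string): EpisodeMeta[] {
  const items = xml.match(/<item>[\s\S]*?<\/item>/g) ?? [];
  return items.map((item) => ({
    title: item.match(/<title>([\s\S]*?)<\/title>/)?.[1]?.trim() ?? "",
    link: item.match(/<link>([\s\S]*?)<\/link>/)?.[1]?.trim() ?? "",
    // Podcast feeds usually carry episode art as <itunes:image href="...">.
    cover: item.match(/<itunes:image\s+href="([^"]+)"/)?.[1] ?? "",
  }));
}
```

Once each transcript is matched to its feed entry (by title or episode number), the missing metadata can be attached in one pass.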

Then, using Claude Code, I asked it to create a simple CLI tool that could systematically generate quiz questions for each episode using the OpenAI API. Instead of doing this episode by episode, the tool processed everything in one go.
This step was as simple as typing in a prompt: “Create a CLI command tool that creates a simple way to read through all the transcripts in /transcript folder one by one, and for each, generate 5 questions following the requirements and JSON format: {Your requirements and JSON format}”
It took around 20 minutes to finish, and the output was a structured JSON file that I could plug directly into the game.
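In the same spirit, here’s a sketch of such a quiz-generation CLI. The file layout, prompt wording, and question schema are my assumptions, not the author’s exact tool; it needs Node 18+ (global `fetch`) and an `OPENAI_API_KEY` to actually hit the API:

```typescript
// Hedged sketch of a transcript-to-quiz CLI. Schema and prompt are assumptions.
import { readFileSync, readdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

type Question = { question: string; options: string[]; answerIndex: number };

function buildPrompt(transcript: string): string {
  return [
    "Generate 5 multiple-choice questions from this podcast transcript.",
    'Return ONLY a JSON array: [{"question": "...", "options": ["..."], "answerIndex": 0}]',
    "Transcript:",
    transcript.slice(0, 12000), // stay well under the context limit
  ].join("\n");
}

function validateQuestions(raw: string): Question[] {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed)) throw new Error("expected a JSON array");
  for (const q of parsed) {
    if (typeof q.question !== "string" || !Array.isArray(q.options) ||
        q.answerIndex < 0 || q.answerIndex >= q.options.length) {
      throw new Error("malformed question");
    }
  }
  return parsed;
}

async function generateForFolder(dir: string): Promise<void> {
  const out: Record<string, Question[]> = {};
  for (const file of readdirSync(dir).filter((f) => f.endsWith(".txt"))) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [{ role: "user", content: buildPrompt(readFileSync(join(dir, file), "utf8")) }],
      }),
    });
    const data = await res.json();
    out[file] = validateQuestions(data.choices[0].message.content);
  }
  writeFileSync("questions.json", JSON.stringify(out, null, 2));
}
```

The validation step matters: with 300+ transcripts, even a small rate of malformed model output would otherwise leak broken questions into the game.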

The potential nightmare of creating 250+ RPG avatars
One of the hardest parts of building the game was creating over 250 RPG avatars in a consistent way. Each avatar needed a photo of the guest as input. Doing this manually by searching and downloading guest photos one by one would have taken forever.
Fortunately, every Lenny’s Podcast episode already includes a cover image that contains the guest’s avatar. So I used Cursor’s Composer to pull the RSS feed again, this time for the image URLs, downloaded the images locally, and used those as inputs for avatar generation.
That solved the sourcing problem but introduced another one: How do I make sure every avatar looks consistent in quality and style?
This is where I used the OpenAI Playground to repeatedly test and refine my prompt, as well as to test which models worked best for the task. I kept adjusting until every generated avatar followed the same style and looked like it belonged in the same game.
Once the prompt was stable, I used Claude Code again to write another CLI tool that could systematically generate all the RPG avatars from the episode covers. That turned a very painful manual task into a one-click process.
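A hedged sketch of what that batch tool can look like: feed each downloaded cover plus the distilled style prompt into OpenAI’s image-edit endpoint. The model id, folder layout, and slug helper here are illustrative assumptions, not the author’s exact script:

```typescript
// Hedged sketch of batch avatar generation from episode covers.
// Model id, paths, and helpers are assumptions. Needs Node 18+ and an API key.
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const STYLE_PROMPT =
  "Front-facing pixel-art RPG character based on the person in the photo, transparent background, no extra elements.";

function toSlug(fileName: string): string {
  // "Jake Knapp.png" -> "jake-knapp"
  return fileName
    .replace(/\.[^.]+$/, "")
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

async function generateAvatars(coverDir: string, outDir: string): Promise<void> {
  for (const file of readdirSync(coverDir).filter((f) => f.endsWith(".png"))) {
    const form = new FormData();
    form.append("model", "gpt-image-1"); // assumed model id
    form.append("prompt", STYLE_PROMPT);
    form.append("image", new Blob([readFileSync(join(coverDir, file))]), file);
    const res = await fetch("https://api.openai.com/v1/images/edits", {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
      body: form,
    });
    const data = await res.json();
    // The Images API returns base64 data under data[0].b64_json.
    writeFileSync(join(outDir, `${toSlug(file)}.png`), Buffer.from(data.data[0].b64_json, "base64"));
  }
}
```
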
And of course, I had to check each output one by one to make sure the sizes and styling were consistent and matched how the guests looked in the podcast covers. This was one of the most interesting steps because of a few fun edge cases. For example, I didn’t know Adam Grenier actually has rabbit ears on top of his profile image in the original podcast cover; I almost deleted them. And some episodes have two people in the cover image, like Jake Knapp and John Zeratsky’s, so I had to tell the AI to generate a separate image for each person.

Claude Code’s magic with background music
Audio is a huge part of any game. Many successful gamified apps, like Duolingo, invest a lot of effort in sound design because it makes everything feel more alive.
At the same time, searching for the right background music and wiring it into the game usually takes a lot of time. So I went to Claude Code and simply said: “Search for me background music for each phase, with mute control.”
To my surprise, it was able to find OpenGameArt.org, an open source audio library for games, and wire it into the game correctly. When I wrote the prompt, I actually just wanted to add background music for when players are on the map, but it automatically added music for battle screens, victory screens, and defeat screens as well. I still had to adjust the timing and volume, but most of the heavy lifting was done automatically. That part genuinely felt like magic.
Defining the game mechanics
Defining the game mechanics was the most interesting part of the process. I wanted the game to be fun and low-stress but still competitive enough that people felt progression and stakes. I’ve studied game theory in the past, but for this project, most of it came down to common sense, playtesting, and iteration.
I started with a very simple rule set: Each opponent has three questions. Every correct answer gives XP (experience points). If you answer all three correctly, that counts as a perfect kill.
To keep things interesting, I added small variations. Occasionally, one of the three questions becomes a bonus question, which gives extra XP and a small HP boost. This introduces a bit of randomness without breaking balance.
Stage progression is based on XP thresholds. Once you reach the required XP, a new map unlocks with a new batch of guests. Defeated opponents disappear and get added to your collection, so you can’t farm the same ones repeatedly.
I worked through most of this logic on my own first and then verified with the AI to make sure there were no obvious bugs or edge cases. The AI sanity-checked numbers and flows, but the final calls on balance, pacing, and stress level were all manual.
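The rules above boil down to a small amount of pure logic. This sketch uses illustrative numbers (the real game’s XP values and thresholds came from playtesting, not from me):

```typescript
// Sketch of the battle-scoring and stage-unlock rules described above.
// XP values and thresholds are illustrative assumptions.
type BattleResult = { xp: number; hpBoost: number; perfect: boolean };

const XP_PER_CORRECT = 10;
const BONUS_XP = 5;
const BONUS_HP = 3;

function scoreBattle(correct: boolean[], bonusIndex: number | null): BattleResult {
  let xp = 0;
  let hpBoost = 0;
  correct.forEach((isCorrect, i) => {
    if (!isCorrect) return;
    xp += XP_PER_CORRECT;
    if (i === bonusIndex) { // bonus question: extra XP plus a small HP boost
      xp += BONUS_XP;
      hpBoost += BONUS_HP;
    }
  });
  return { xp, hpBoost, perfect: correct.length === 3 && correct.every(Boolean) };
}

// Stage progression: a new map unlocks once total XP crosses its threshold.
const STAGE_THRESHOLDS = [0, 100, 250, 450];

function unlockedStage(totalXp: number): number {
  let stage = 0;
  STAGE_THRESHOLDS.forEach((threshold, i) => {
    if (totalXp >= threshold) stage = i;
  });
  return stage;
}
```

Keeping the mechanics this isolated also makes them easy to rebalance after playtesting without touching any Phaser code.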
Connecting Supabase using MCP (leaderboard)
The last piece of the game is the leaderboard: the competitive layer where players can see their rankings and compete with each other.
I knew I had to set up a database for this, so I started by setting up Supabase MCP in Claude Code. That means instead of manually setting up tables, APIs, and connections, all I had to do was describe to Claude Code that I wanted a leaderboard synced with Supabase.
Once I did that, it triggered Supabase MCP, which called tools like create_project and apply_migration to set up the project and tables automatically, including the database structure and the connection between the game and Supabase. This made the whole process much faster and removed a lot of setup work that would normally take much longer.
The result was a working leaderboard that synced player progress in real time, without my having to touch much backend code at all.
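For a sense of what the generated wiring amounts to, here’s a sketch of leaderboard reads over Supabase’s PostgREST API. The table name and columns are assumptions on my part (the MCP flow created the real schema), and it uses global `fetch` from Node 18+ rather than the supabase-js client:

```typescript
// Hedged sketch: read top leaderboard rows via Supabase's PostgREST endpoint.
// Table name and columns are assumptions.
type Entry = { player: string; score: number };

function leaderboardUrl(baseUrl: string, limit: number): string {
  // PostgREST query syntax: order by score descending, cap the row count.
  return `${baseUrl}/rest/v1/leaderboard?select=player,score&order=score.desc&limit=${limit}`;
}

function withRanks(entries: Entry[]): (Entry & { rank: number })[] {
  return [...entries]
    .sort((a, b) => b.score - a.score)
    .map((e, i) => ({ ...e, rank: i + 1 }));
}

async function fetchTop(baseUrl: string, anonKey: string, limit = 10) {
  const res = await fetch(leaderboardUrl(baseUrl, limit), {
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  });
  return withRanks(await res.json());
}
```
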
5. Polish
Before shipping, I focused on final polish to make sure the app was stable, usable, and presentable enough for public launch. At this stage, the core gameplay was already working, so the goal was not to add new features but to reduce friction and obvious issues.
QA check with Claude Skills
For this step, I downloaded the review skill from the Claude Code Awesome Skills marketplace and used it to review the entire codebase comprehensively.
This was especially helpful for catching things I would normally miss, such as state issues between scenes, missing error handling, and small logic bugs that only show up after multiple rounds of gameplay. I did not blindly accept everything it suggested, but it gave me a solid checklist to go through before shipping.
UI polish
I went through the game end-to-end and logged all UI and UX inconsistencies in a Markdown file—things like spacing issues, text overflow, unclear labels, alignment problems, and visual hierarchy issues.
Once everything was written down, I let the AI work through the list item by item and fix the issues. This worked surprisingly well, especially when the issues were clearly described. It also made the process much more systematic compared with fixing things ad hoc while clicking around the app.
SEO
For SEO, I used Claude Code to help figure out the basics: page title, meta description, social preview, and basic indexing setup.
Since this was a game and not a content-heavy site, I did not go deep into SEO optimization. The main goal was to make sure the site was indexable, shareable, and looked good when people posted it on social media.
6. Ship it
Once the game was deployed smoothly on Vercel, I reached out to Lenny in the community Slack to get a quick sanity check. I honestly wasn’t even expecting a direct reply given how busy he is—but to my surprise, I got a very kind and encouraging response from him.
That was the nudge I needed to just ship it.