How I Use AI Vibe Coding to Build Escape Rooms: Best Practices for Building Puzzle Games

A Brief Story of Our First AI Puzzle

I’ve been working with “AI” for a while now. It all started with early ChatGPT: I was bouncing ideas back and forth, nothing fancy. My first project was an AI-integrated puzzle where players have to talk to an AI in order to guess the answer. (If you would like to see the final puzzle in action, come join us for a game of Glitch.) It used fuzzy logic to give players more flexibility in their answers and make the experience more fun.

However, things advanced quickly, and thanks to improvements across the tech stack, AI became good enough to handle real development, not just simple conversations. This AI-assisted development workflow has completely changed how I approach software development for our escape room projects.

I want to share exactly how I use AI tools to build our future tech escape rooms, because it’s probably different from what you’d expect.

What is “Vibe Coding”? 

Essentially, vibe coding is still coding, but instead of jumping straight into Python or whatever language you use, you describe what you want to an AI model in natural language, and the model writes the code for you. You can think of it as pair programming with a bot. Does this mean you no longer need to be a programmer to code? Kind of, but not really. As with learning any human language, knowing the rules of the language and how it works is still very important, and it will be integral to troubleshooting. What vibe coding will do is let you upskill a lot faster than a traditional learning path.

Reality Check: AI Isn’t Magic

Let me be upfront about something: AI isn’t truly creative (yet) or great at freeform development. Never forget the word “model” in “large language model” (LLM): AI has a very hard time working on new ideas that have not been modeled for it. It gets confused. A lot. Sometimes I feel like I’m managing an overeager intern who’s really smart but needs constant direction and re-direction.

But here’s the thing – when you know how to work with it, it becomes incredibly powerful. It has become one of the best tools in our toolkit for overcoming the specific challenges we face building high-tech escape rooms.

My Current Vibe Coding Workflow (The Honest Version)

Quick note: I only break out this full process for the big stuff, like starting a new dev project or building an entirely new feature. Small fixes or single-feature additions to an existing codebase don’t need all the overhead.

Starting Point: Using AI to Prompt Engineer for Escape Room Projects

I still use OpenAI’s ChatGPT web interface for most initial brainstorming because the UI is easily accessible. I’ve created different projects (essentially, templates) with different context in ChatGPT to help ease prototyping. I can quickly iterate on different approaches to a problem and talk through the type of sensors needed, the technical feasibility, or how complex an idea will be to implement. The key is using AI as a coding tool rather than expecting the AI to write code from scratch.

The Big Projects: Documentation First

When I’m tackling something bigger like integrating a mind-reading EEG headset with AI to verify what players are doing AND feeling (hint for our new escape room), I’ve learned to talk through a project with the AI until we have a solid plan. I want to make sure the AI coding agent I use knows what it’s trying to do, what it’s working with, and what it’s optimizing for. Essentially, I want it to understand the code it will create.

Then, I do something that might sound boring but is actually crucial: I have the AI create detailed markdown files documenting everything. Function requirements, development plans, expected user experiences, and a system prompt for the AI. When Player A does X, Y happens, following the rules defined in the spec. After the person feels this for this amount of time, stage 2 is triggered. This prompt engineering approach ensures we have a solid framework before any code generation begins.
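A rule like “after the person feels this for this amount of time, stage 2 is triggered” translates naturally from the spec document into code. Here is a minimal Python sketch of that kind of timed stage trigger; the class name, hold time, and stage numbering are illustrative, not taken from our actual spec:

```python
import time

class StageTracker:
    """Tracks how long a player has sustained a condition and
    advances to the next stage once a hold time is met."""

    def __init__(self, hold_seconds, clock=time.monotonic):
        self.hold_seconds = hold_seconds  # how long the condition must hold
        self.clock = clock                # injectable clock, handy for simulation
        self.stage = 1
        self._held_since = None

    def update(self, condition_met):
        """Call this on every sensor poll; returns the current stage."""
        if not condition_met:
            self._held_since = None       # condition broke: reset the timer
        elif self._held_since is None:
            self._held_since = self.clock()
        elif self.clock() - self._held_since >= self.hold_seconds:
            self.stage = 2                # sustained long enough: trigger stage 2
        return self.stage
```

Because the clock is injectable, the same class can be driven by a simulated clock during planning and by real time in the room.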

This documentation step is where the magic happens: The AI helps me think through every interaction, every edge case, every potential failure point. It can help me outline unit tests, add or remove open-source dependencies, or identify APIs. Most importantly, it helps me create an AI-ready document. 

Git Init: Creating the Initial Codebase

I feed all the planning documents into Claude Opus as a specialized knowledge base, alongside my standard development instruction file for escape room development. That file covers what hardware we typically use (to keep systems simple), how I want things structured, and what I want to focus on: player experience.

I’ll ask it to create separate files for each component. Arduino code for sensors, Python for interfaces, network protocols for device communication. Every device, API, or protocol it will work with. This AI-assisted coding approach lets me review code systematically rather than wrestling with boilerplate from scratch.
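As a rough illustration of one such component, here is a minimal Python sketch of a prop-to-controller message format using newline-delimited JSON. The field names are hypothetical, not our production protocol:

```python
import json

def encode_event(prop_id, event, payload=None):
    """Serialize a prop event as one newline-delimited JSON message."""
    msg = {"prop": prop_id, "event": event, "payload": payload or {}}
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_event(raw):
    """Parse one newline-delimited JSON message back into a dict."""
    return json.loads(raw.decode("utf-8").strip())
```

Newline-delimited JSON is easy to frame over serial or TCP, which is why it shows up so often in hobby hardware projects.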

Self-Auditing and Debug Process (Because AI Makes Mistakes)

Before I commit any AI-generated code to a GitHub repository, I always have the AI check itself: confirm that the code is aligned with the system requirements and look for potential problems. Did it meet all requirements? Did it overcomplicate things? This self-audit step catches a lot of issues early and leads to better code quality.

Real Development: Iterative, Terminal-Based Vibe Coding

Once I have the initial files in a GitHub repository, the real work begins. I’m usually in an IDE like Visual Studio using tools like Gemini CLI, GitHub Copilot, or Claude Code. (Right now, out of all the LLMs, I use Claude Code the most.) This is where I iterate and start to refactor the codebase. It usually takes multiple revisions as I have the AI create simulations, test different scenarios, and make sure everything actually works as expected. During each sweep, I have the AI tackle one very specific aspect of the program so that it stays focused and doesn’t run off trying to refactor the entire repository in one go. I also commit and push frequently so I can roll back to any previous idea.

Deployment: Getting Code Out of GitHub and into Our Games

Once I have working code, I load it onto my test prop (a Raspberry Pi, Arduino, or other novel sensor platform) and test it like any normal puzzle. First we test it ourselves, then in small groups, and finally in our games. Even with this much prep, I expect to hit scenarios that neither I nor the AI could have seen coming. When that happens, I repeat the process until the puzzle works the way I want it to.

What an AI-Powered Workflow Actually Means for Our Escape Rooms

Creativity

This AI workflow lets me get out of my head in order to be more creative. I can quickly prototype puzzles or game mechanics, and then figure out how I would actually do things using a modular design. If I want to build an escape room where players use a drone to scan QR codes that trigger holographic clues, I can simulate the entire system before buying a single piece of hardware.
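A first pass at that drone/QR idea can be simulated in a few lines of Python before any hardware exists. The QR payloads and clue text below are made up purely for illustration:

```python
# Hypothetical mapping of QR payloads to holographic clues, used to
# exercise the game flow before buying drones or projectors.
CLUES = {
    "QR-ALPHA": "The vault code starts with 7.",
    "QR-BETA": "Look behind the star chart.",
}

def simulate_scan(qr_payload, unlocked):
    """Simulate a drone scanning a code: return the matching clue (or
    None for an unknown code) and record which clues are unlocked."""
    clue = CLUES.get(qr_payload)
    if clue is not None:
        unlocked.add(qr_payload)
    return clue
```

Driving this stub with scripted scan sequences answers flow questions (What if players scan codes out of order? Twice?) long before hardware arrives.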

Unexpected Benefits

The documentation that comes out of this process is incredibly valuable. We have detailed records of how every system works, what components we need, and how everything connects together. This makes maintenance, pricing, and updates much easier.

Unexpected Negatives

One unexpected negative: AI services go down surprisingly often. If you are only using them for coding, that is one thing, but if you intend to call an AI API from inside your code, especially for a specific use case, an outage can be devastating. For example, OpenAI released Sora the same day we released our new AI puzzle, and the resulting load immediately took our puzzle down. Production environments that rely on a single third-party AI API have proven problematic, so make your design AI-agnostic: have your code check which AI APIs are operational and automatically choose which one to reach out to.
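One way to sketch that AI-agnostic fallback in Python is to run a health check against each provider in priority order and use the first one that responds. The provider names and callables here are placeholders, not real API calls:

```python
def pick_provider(providers):
    """Return (name, call) for the first provider whose health check passes.

    `providers` is an ordered list of (name, health_check, call) tuples,
    where health_check() returns True when that API is reachable.
    """
    for name, health_check, call in providers:
        try:
            if health_check():
                return name, call
        except Exception:
            continue  # treat a crashing health check as an outage
    raise RuntimeError("no AI provider is currently available")
```

In a real deployment, each health check would be a lightweight request to the provider’s API, and the ordering encodes which provider you prefer when everything is up.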

What’s Next: How I Plan to Iterate on AI Success

Current Explorations

I’m currently testing model context protocols (MCPs) with other tools, such as Unity, to create custom VR games. I have also been testing the creation of port maps, connectivity diagrams, and bills of materials so I can share the build workload.

I have been constantly refining my vibe coding workflow as well, and because I am not a coder by trade, I’m sure others out there do this even better than I do. The AI tools are growing incredibly quickly, and the level of output in just the last few months has been astounding. There is an explosion of new MCPs that seem to integrate with anything. Feel free to reach out with suggestions on more things to look into.

My Advice for Future Vibe Coders

If you want to build something with AI, just go for it. Use it for concept development or to bounce ideas back and forth. Be the creative you want to be while the AI tries to bring your ideas into reality. Experiment as much as you can with whichever AI agent you like best.

But remember: you still need to be the one who understands what the AI is supposed to create and what it is actually creating under the hood. It is much like moving from solo development to joint development: documentation and communication matter.

Let’s Build Something Together

This is the start of a new series. There’s way more to cover: tool comparisons, hardware integration, and the failures that taught me the most, just to name a few.

What do you actually want to see next? More importantly, what are you building? I learn more from your questions than you probably do from my posts. Reach out and tell me what you’re working on or where you’re stuck. Your project might be exactly what I write about next.

In SF? Come by our business! I’d rather show you what this stuff creates than just talk about it.

Written in 2025