The question that has always driven the work: how does music live inside an experience that's different every time? The Lab is where that question gets stress-tested, broken, and rebuilt.
These are the active systems, deployed tools, and documented experiments built at the intersection of game audio, artificial intelligence, and adaptive music theory. Some are infrastructure. Some are instruments. Some are just questions that needed code to answer them.
The most technically ambitious project in the lab. A four-layer Model Context Protocol architecture that bridges natural language commands with Wwise audio middleware via WAAPI — describe what you want in plain English, and the system interprets, translates, and executes it in the game engine.
The workflow that should have always existed. Once you've spent time in both the creative chair and the implementation chair, it's the most obvious thing imaginable.
Built on a deep understanding of how composers actually think about interactive music — not how engineers document it. The system speaks the language of creative intent.
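A minimal sketch of the translate-and-execute idea, assuming a WAMP connection to Wwise's default WAAPI endpoint through the autobahn client; the intent shape and the single mapped action are illustrative, not the production architecture:

```typescript
// Sketch of the "translate" and "execute" layers only, under stated assumptions:
// the `autobahn` WAMP client, Wwise running with WAAPI enabled on its default
// endpoint (ws://127.0.0.1:8080/waapi, realm "realm1"), and a parsed intent
// already produced upstream by the interpretation layer.
import * as autobahn from "autobahn";

// A parsed creative intent (hypothetical shape; a real schema would be richer).
interface Intent {
  action: "set_volume";
  target: string; // Wwise object path
  value: number;  // dB
}

// Translate: map the intent onto a concrete WAAPI procedure and arguments.
function translate(intent: Intent): { uri: string; kwargs: object } {
  if (intent.action === "set_volume") {
    return {
      uri: "ak.wwise.core.object.setProperty",
      kwargs: { object: intent.target, property: "Volume", value: intent.value },
    };
  }
  throw new Error(`Unhandled intent: ${intent.action}`);
}

// Execute: issue the translated call over the WAAPI (WAMP) session.
const connection = new autobahn.Connection({
  url: "ws://127.0.0.1:8080/waapi",
  realm: "realm1",
});

connection.onopen = (session) => {
  const call = translate({
    action: "set_volume",
    target: "\\Actor-Mixer Hierarchy\\Default Work Unit\\Combat_Layer",
    value: -6,
  });
  session.call(call.uri, [], call.kwargs).then(
    () => console.log("Applied:", call.uri),
    (err) => console.error("WAAPI error:", err),
  );
};

connection.open();
```

The point of keeping translation as a pure function is that the interpretation layer can stay entirely in the language of creative intent while everything below it speaks WAAPI.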
A compositional philosophy and prompt methodology developed over 13+ months of documented experimentation with Suno AI. Approaches AI music generation as a music director, not a consumer. Literary vocabulary over genre tags. Raw vulnerability over coolness. Leave space for the unexpected.
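For illustration only, a tiny sketch of the "literary vocabulary over genre tags" principle expressed as a prompt builder; the field names and wording are assumptions, not the documented methodology itself:

```typescript
// Hypothetical shape for a creative brief: image and feeling first, no genre tags.
interface PromptBrief {
  image: string;          // a literary image instead of a genre label
  emotionalTruth: string; // the raw, vulnerable core of the piece
  openSpace: string;      // what is deliberately left undescribed
}

function composePrompt(brief: PromptBrief): string {
  // Note what is absent: no genre tags, no BPM, no reference artists.
  return `${brief.image}. ${brief.emotionalTruth}. Leave room for ${brief.openSpace}.`;
}
```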
Generative ambient soundscape engine for real-time interactive environments. Synthesizes rather than plays back, so it never repeats, never anticipates, and never competes with the action. Built on the Web Audio API, with four distinct modes based on musical intervals.
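A minimal sketch of the interval idea in the Web Audio API; the four ratios shown are illustrative assumptions, not the engine's actual modes:

```typescript
// Interval-based drone synthesis: each mode is a set of frequency ratios
// against a root, voiced with sine oscillators. Runs in a browser context.
const INTERVAL_MODES: Record<string, number[]> = {
  octave: [1, 2],       // illustrative ratios, not the engine's real voicings
  fifth:  [1, 3 / 2],
  fourth: [1, 4 / 3],
  third:  [1, 5 / 4],
};

function startDrone(mode: keyof typeof INTERVAL_MODES, rootHz = 110): () => void {
  const ctx = new AudioContext();
  const master = ctx.createGain();
  master.gain.value = 0.15; // keep the bed quiet so it never competes
  master.connect(ctx.destination);

  const oscillators = INTERVAL_MODES[mode].map((ratio) => {
    const osc = ctx.createOscillator();
    osc.type = "sine";
    osc.frequency.value = rootHz * ratio;

    // Slow, randomized gain swell so the texture never lands the same way twice.
    const gain = ctx.createGain();
    gain.gain.value = 0;
    gain.gain.linearRampToValueAtTime(
      0.5 + Math.random() * 0.5,
      ctx.currentTime + 4 + Math.random() * 8,
    );

    osc.connect(gain).connect(master);
    osc.start();
    return osc;
  });

  // Return a stop handle so the host environment controls the lifecycle.
  return () => {
    oscillators.forEach((osc) => osc.stop());
    ctx.close();
  };
}

// Usage: const stop = startDrone("fifth"); ... later: stop();
```

Synthesizing the bed from ratios rather than streaming a file is what lets the engine drift indefinitely without a loop point to give it away.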
Built to learn. Shipped anyway. Some became methodologies, some became tools, some just needed to exist long enough to reveal the question underneath them.