MICROSOFT R&D
Art Direction & 3D: Dropframe
Sound: H1 Sound/Harvey Fisher
Special thanks to Arthur de Liz Sperb for helping us get set up with ComfyUI.
Late last year, Microsoft reached out for a visual R&D project related to an AI module. The task was to create a conceptual and visual pitch over the course of a month: various styleframe explorations and, ultimately, a hero video with a logo resolve. It was a chance to push creative boundaries and experiment with AI in our workflow.
Early development styleframes.
Coming into the project, we had reservations about using AI in our creative process. We had always viewed AI-generated visuals with a degree of skepticism, concerned that they might dilute originality and intent. But the creatives at Microsoft encouraged us to bring AI into the process, so we decided to give Stable Diffusion a shot. Instead of letting AI do the heavy lifting, we wanted to treat it as a tool, something to enhance ideas rather than replace them. The focus was on building a strong foundation first, then using that foundation to generate AI elements and weave them in where they made sense.
We had nearly complete creative freedom on this project, which was both exciting and daunting. At first, we leaned into high-fidelity 3D—big, photorealistic visuals with a strong sense of scale. But as things progressed, the look evolved into something more stylized and graphic.
Collage of AI-generated elements based on our 3D render.
The core idea was to visualize a vivid, imaginative journey into an AI prompt bar. We experimented with AI-generated textures and colors, blending them into Octane renders in a way that felt curated and intentional. The goal was never to let AI take over but to use it as an artistic collaborator, guiding its output to fit within the vision. Bringing the elements together in After Effects, where the final look was polished and refined, posed a fun challenge with what felt like endless possibilities.
By nesting masks within masks, we could carefully integrate the AI's output into the renders. These masks acted as a bridge between the AI's unpredictable textures and our controlled designs. They gave us the freedom to place AI-generated elements exactly where we wanted, keeping the concept cohesive while still embracing the unexpected nature of AI.
Various vid2vid AI outputs.
Going into this project, we had mixed feelings about AI in creative work. But by the end, we saw its potential, not as a replacement for artistic vision, but as a tool to push boundaries and add complexity that would be impossible to achieve alone in such a short time. Collaborating with Microsoft expanded our technical skills and shifted our perspective on AI's role in visual storytelling. That said, it's hard to ignore the flipside: as AI streamlines workflows, it will inevitably reduce the need for manpower. We're curious to see how that shift will shape the industry moving forward.