How is this work relevant, and how does it resonate out in the world?

2023 has seen an explosion in Machine Learning, yet there’s no clear consensus on what this will mean more broadly for our industry and society.

The LAB is using AI to find new ways of designing worlds, conceiving content, imagining interactions, and exploring form. We do this by combining existing, publicly available tools with our own custom-built software and pipelines. The experiments that follow are largely focused on new design processes to be paired with our strengths in narrative and hospitality.

We are simultaneously building up our own capabilities to train custom models with data that we own and control.

Opening The Black Box

Generative Adversarial Networks → Latent Diffusion

Latent Diffusion is simply the technical term for the emerging tools labelled “Generative AI”: models that take text input and generate imagery.
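To make the term concrete, here is a minimal sketch of driving a latent diffusion model with the publicly available tooling we build on (assuming the open-source diffusers library; the checkpoint and prompt are illustrative, not our production assets):

```python
# Minimal text-to-image sketch using the open-source diffusers library.
# The checkpoint and prompt are illustrative, not our production assets.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Text in, image out: the prompt is encoded, denoised iteratively in
# latent space, then decoded into pixels.
image = pipe("a sunlit hotel atrium with hanging gardens").images[0]
image.save("concept.png")
```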

New tools and breakthrough white papers are published every week, illuminating new approaches to Machine Learning and its applications in the creative fields.

By focusing on our ML capabilities in-house, we can carve out novel processes and tooling, stay ahead of the curve on application, and develop custom models based on our own IP.

Massing Model Proof of Concept

Many off-the-shelf models can take a description or an image prompt and generate new variations. This proof of concept uses a custom checkpoint model to generate rapid stylistic variations that adhere more closely to an input, in this case a massing model developed by one of our designers.

Built With:

3D Massing Study +
Stable Diffusion +
Custom LoRA +
Custom Image Processing +
ControlNet Model
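A hedged sketch of how a setup like this can be wired together with public tooling; the model paths, LoRA weights, and massing render below are illustrative placeholders, not our production assets:

```python
# Sketch: condition generation on a massing-model render via ControlNet,
# with a custom checkpoint + LoRA supplying the stylistic direction.
# All paths and model names below are illustrative placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "path/to/custom-checkpoint",  # placeholder: a studio-trained model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("path/to/custom-lora")  # placeholder LoRA weights

# A depth-style render of the designer's massing model acts as the
# structural constraint the stylistic variations must respect.
massing = load_image("massing_render.png")
image = pipe(
    "terraced resort, rammed earth, evening light",
    image=massing,
    controlnet_conditioning_scale=0.8,  # how tightly output follows the massing
).images[0]
image.save("variation.png")
```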

Solves: Quickly explore stylistic variations
Form Finding
Pattern Breaking

Next Steps: Further develop input approach
Test integration into concept process

Image Synthesis Proof of Concept

Image synthesis can produce a wide variety of outcomes: serendipitous, if not outright random, results. This experiment combines existing tools with a custom process to generate outputs that we can more tightly control.
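As an illustration of the kinds of levers involved, this sketch (assuming the diffusers library; checkpoint and input image are illustrative) shows three common ways to tighten control: a fixed seed, an image starting point, and explicit adherence strengths:

```python
# Sketch: three common levers for tightening control over diffusion output.
# Checkpoint and input image are illustrative.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed: repeatable output

image = pipe(
    prompt="weathered brass signage, macro photograph",
    image=load_image("reference.png"),  # start from our reference, not pure noise
    strength=0.4,        # lower = stays closer to the input image
    guidance_scale=9.0,  # higher = adheres more literally to the prompt
    generator=generator,
).images[0]
image.save("controlled.png")
```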

Solves: Establishing tighter control over Diffusion Models

Next Steps: Expand to Architectural and Content Process
Roll into Project Demo MCAAD

Interaction Ideas Proof of Concept

When developing and presenting interaction ideas, we often describe what an idea looks like, how it moves, and how a person would interact with it. This process prototype attempts to close that gap between look, motion, and function.
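One publicly available route to references that actually move is a motion adapter bolted onto an image model; the sketch below assumes the AnimateDiff adapter via diffusers, with illustrative model names and prompt, as a stand-in for our own process:

```python
# Sketch: text-to-motion references via a motion adapter on an image model.
# Model names and prompt are illustrative stand-ins.
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 base checkpoint
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# One prompt yields a short looping clip: look, motion, and implied
# interaction in a single reference.
output = pipe(
    prompt="a kinetic wall of brass petals rippling as a visitor's hand passes",
    num_frames=16,
    guidance_scale=7.5,
)
export_to_gif(output.frames[0], "interaction_reference.gif")
```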

Solves: Generating references that move and suggest interaction

Next Steps: Expand to Architectural and Content Process
Roll into Project Demo MCAAD

Diffusion Models Prototype

To bring our experimentation with diffusion models together, we wanted to take existing project sketches and try to bring them to life as animated vignettes by passing them through an in-house pipeline that ties together existing tools with custom models (see the sketch after the list below).

Built With:

Original Sketches and Storyboards +
Midjourney Image Synthesis +
Textual Descriptions +
Custom Video Diffusion Model
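Our custom video diffusion model is in-house, but a public image-to-video model shows the shape of the final stage of this pipeline. This sketch assumes Stable Video Diffusion via diffusers, with an illustrative input still standing in for the output of the image-synthesis stage:

```python
# Sketch of the final stage: animate a synthesized still into a short
# vignette. A public image-to-video model stands in for our custom video
# diffusion model; names and inputs are illustrative.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# The still coming out of the image-synthesis stage of the pipeline.
still = load_image("synthesized_sketch.png").resize((1024, 576))

frames = pipe(still, decode_chunk_size=8).frames[0]
export_to_video(frames, "vignette.mp4", fps=7)
```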

Solves: Sketching & Form Finding
Brainstorming motion references and inspiration

Next Steps: Develop Storyboarding Process with Content Team
Roll into Project Demo – Hard Rock Las Vegas

See 0. Machine Learning + AI for progress made in 2024.

Questions