This course assesses the impact of artificial intelligence on design and architecture as an aesthetic rather than a purely economic question. The assumption that machine learning will impact architectural practice before affecting architectural design has been undermined by the development and widespread use of text-to-image models such as Midjourney, DALL-E, and Stable Diffusion over the last two years. These models are trained on the discipline’s largest source of data: billions of images ‘scraped’ from the web. New software is being developed to generate video and 3D digital models in a similar way.

More subtly, machine vision has added a series of invisible layers to how we see and represent our environment. Understanding this new machine-mediated visual culture is critical to addressing its growth, finding potentials and opportunities, and identifying avenues for critique and resistance. Readings, lectures, and discussion will trace the development of machine learning, chart the spectrum of machine vision models available now, and explore how they work, what data they are trained on, what is left out, and what abstractions they make. Lectures and assignments will also speculate on their (near) future impact on design and architectural workflows.
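One way to see "how they work" concretely: many text-to-image and machine vision systems (CLIP-style models, for example) rest on a shared embedding space, where a text encoder and an image encoder map their inputs to vectors, and matching pairs land close together by cosine similarity. The sketch below uses small hand-made mock vectors and hypothetical captions purely for illustration; real encoders produce vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Mock embeddings standing in for real encoder output (illustrative only).
image_embedding = [0.9, 0.1, 0.2]
captions = {
    "a brutalist concrete facade": [0.88, 0.12, 0.25],
    "a glass curtain wall":        [0.10, 0.95, 0.05],
    "a timber pavilion":           [0.20, 0.15, 0.90],
}

# Retrieve the caption whose embedding lies closest to the image's
# embedding in the shared space -- the core move behind zero-shot
# classification and text-driven image search.
best_caption = max(
    captions, key=lambda c: cosine_similarity(image_embedding, captions[c])
)
print(best_caption)
```

The same geometry underlies text-to-image generation: a prompt's embedding steers the synthesis process toward images whose embeddings would sit near it, which is also why what the training data leaves out constrains what the model can describe or produce.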

At the core of the class are three assignments exploring the capacity of different machine vision models to act as tools for visual analysis, image synthesis, and the description of three-dimensional space. Workflows will involve image-to-image translation as well as moving back and forth between text and imagery and between 2D and 3D. We will use a variety of code-free interfaces, as well as specific applications run through Google Colab. Machine vision models will be used as part of larger workflows alongside other digital and analog methods. Assignment-specific tutorials will be provided; no technical skills are prerequisites for enrolling.