We are witnessing a major shift in the process of generating images. The rapid growth of machine learning and artificial intelligence raises questions about how creative processes evolve and develop through technology. Systems like DALL-E, DALL-E 2, and Midjourney are AI programs trained on datasets of text-image pairs to generate images from text descriptions. Their capabilities include creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, and applying transformations to existing images.
Generative Design: The Latest Architecture and News
This article is the fifth in a series focusing on the Architecture of the Metaverse. ArchDaily has collaborated with John Marx, AIA, the founding design principal and Chief Artistic Officer of Form4 Architecture, to bring you monthly articles that seek to define the Metaverse, convey the potential of this new realm, and understand its constraints.
Science fiction writers inspire us with bold and provocative visions of the future. Huxley, Orwell, Asimov, and Bradbury easily come to mind. They have imagined great advances in technology and oftentimes predicted shifts in social structure resulting from the human need to open Pandora's box. A large part of the charm and allure of science fiction is the bold audacity of some of these predictions. They seem to defy the laws of nature and science, and then, faster than you might have thought, the spectrum of human inventiveness makes it so.
In architecture, drawing is a technical and artistic expression that involves creating visual representations using various analog instruments. While drawing remains relevant and current in practice today, efforts have been made to carry out architectural tasks and studies more efficiently. The drafting machine, a significant development in this regard, enabled precise strokes using fewer instruments. However, the emergence of computational tools, such as computer-aided drafting (CAD), has revolutionized the workflow by leveraging the advantages offered by computers. Architects can now play a more direct and creative role in the design process, reducing their reliance on time-consuming drawing and repetitive tasks. Moreover, workflow enhancements have fostered more effective collaboration among different stakeholders in the architectural process.
On June 5, Apple launched Apple Vision Pro, a new type of spatial computer that uses augmented reality goggles to allow users to experience a blend between the digital and physical worlds. The device promises to offer its users an infinite canvas for apps, larger and more immersive than traditional displays, while allowing them to stay present and connected to others. It features visionOS, the first spatial operating system to create this new way of interacting with digital content. Previous concepts like the metaverse have promised to transform the way we experience digital worlds, with architects taking the opportunity to delve into the design of restriction-free virtual spaces. Could this new device bring new ways of experiencing three-dimensional spaces, to better integrate architecture with digital environments?
It’s here! The 21st-century digital renaissance has just churned out its latest debutante, and its swanky, sensational entrance has sent the world into an awed hysteria. Now sashaying effortlessly into the discipline of architecture, glittering with the promise of being immaculate, revolutionary, and invincible: ChatGPT. OpenAI’s latest chatbot has been received with a frenzied reception that feels all too familiar, almost a déjà vu of sorts. The reason is this: Every time any technological innovation so much as peeks over the horizon of architecture, it is immediately shoved under a blinding spotlight and touted as the “next big thing.” Even before it has been understood, absorbed, or ratified, the idea has already garnered a horde of those who vouch for it, and an even bigger horde of those who don’t. Today, as everyone buckles up to be swept into the deluge of a new breakthrough, we turn an introspective gaze, unpacking where technology has led us, and what more lies in store.
2022 has been the year of AI image generators. Over the past few years, these machine learning systems have been tweaked and refined, undergoing multiple iterations to find their present popularity with the everyday internet user. These image generators, with DALL-E and Midjourney arguably the most prominent, produce imagery from a variety of text prompts, for instance allowing people to create conceptual renditions of architectures of the future, present, and past. But because these systems learn from a digital landscape filled with human biases, navigating them requires careful reflection.
“I Believe that Architecture is Never Finished”: In Conversation with FAR, Creator of the First Generative Project for the Metaverse
The promise of the metaverse, this new type of three-dimensional and immersive digital space, is proving more and more appealing to architects eager to explore the new realm of virtual creations. As it currently stands, the metaverse does not have a singular definition but is composed of many narratives and explorations. This unknown land is however fruitful ground for architects, who have the opportunity to shape not only the new environment but also the experiences of future users. The SOLIDS project represents one response to these conditions. Developed by FAR, an architect and engineer working with digital environments, SOLIDS uses a generative process to design unique, metaverse-compatible buildings.
Interior AI is a new platform that helps users generate new styles and even new functions for their interior spaces. The program takes as input a 2D image of an interior space, be it a picture found on the internet or a photograph taken by the user. It can then modify this picture to fit one of 16 preselected styles, ranging from Minimalist, Art Nouveau, and Biophilic to Baroque and Cyberpunk. The program also allows users to select a different function for the room, such as a kitchen, home office, outdoor patio, or even fitness gym, thus creating a completely new interior design.
“This House Does Not Exist” Uses AI to Generate Images Inspired by ArchDaily's Modern Architecture Projects
Developed by indie entrepreneur @levelsio, This House Does Not Exist is a platform that allows users to generate images of modern architecture homes in the style of ArchDaily. The program uses latent text-to-image diffusion to automatically generate realistic images of modern houses. The website is intuitive and easy to use, with one button at the top right reading “tap image to generate new house”. The website also allows users to vote for the best images generated or see similar houses by clicking on the keywords displayed at the bottom of the image.
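As a cartoon of the latent diffusion idea behind such generators: the model starts from pure noise and repeatedly refines it toward an image consistent with the text prompt. The sketch below is purely illustrative, with a fixed target vector standing in for a learned, prompt-conditioned denoiser; it reflects nothing of the platform's actual code.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Illustrative stand-in for diffusion sampling: start from pure
    noise and nudge it toward `target` a little at each step, the way
    a trained denoiser iteratively refines a latent image."""
    rng = random.Random(seed)
    latent = [rng.gauss(0, 1) for _ in target]  # random starting noise
    for _ in range(steps):
        # A real model predicts and removes noise per timestep,
        # conditioned on the text prompt; here we simply blend
        # 10 percent of the way toward the target each step.
        latent = [x + 0.1 * (t - x) for x, t in zip(latent, target)]
    return latent

# Hypothetical "latent" standing in for a prompt like "a modern house"
target = [1.0, -2.0, 0.5]
result = toy_denoise(target)
```

Each pass shrinks the remaining gap by 10 percent, so after 50 steps the noise has all but vanished. Real latent diffusion models run a comparable loop in a learned latent space and then decode the result to pixels.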
Should designers care about artificial intelligence (AI) or machine learning (ML)? There is no question that technology is adding texture to the current zeitgeist. Never could I have imagined seeing a blockbuster hit where Ryan Reynolds emerges as a conscious non-player character in a video game and a flop where Melissa McCarthy negotiates humanity’s future with a James Corden-powered superintelligence within a year of each other. But does learning AI and ML’s ins and outs really matter for the creative professions and our nebulous, invaluable way of operating?
Artificial intelligence is transforming how we design and build. By 2050, the effects of AI adoption will be widely felt across all aspects of our daily lives. As the world faces a number of urgent and complex challenges, from the climate crisis to housing, AI has the potential to make the difference between a dystopian future and a livable one. By looking ahead, we're taking stock of what's happening, and in turn, imagining how AI can shape our lives for the better.
In May, aec+tech hosted an event on Clubhouse discussing how architects are using generative design in architecture firms today and how they might use it in the future. Five guest speakers from reputable architecture firms and tech start-ups (Zaha Hadid Architects, BIG, Outer Labs, 7fold, and RK Architects) joined the session to share their experiences and insights.
Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.
Sidewalk Labs has unveiled Delve, a new generative design tool that aims to help developers, architects and urban designers identify better neighborhood design options. It uses machine learning to reveal optimal design options from a series of core components, including buildings, open spaces, amenities, streets, and energy infrastructure. By applying machine learning, it explores millions of design possibilities for a given project while measuring the impact of design choices.
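To make the "millions of design possibilities" idea concrete, here is a minimal sketch of the generate-and-score loop at the heart of such tools. The site area, design parameters, and scoring function below are invented for illustration and bear no relation to Delve's actual objectives or algorithms.

```python
import random

def score(design):
    """Toy objective: reward built density while preserving open space.
    Real tools measure many impacts (daylight, cost, walkability, energy)."""
    density = design["floors"] * design["footprint"]       # proxy for built area
    daylight = design["open_space"] / design["footprint"]  # proxy for light and air
    return 0.001 * density + daylight

def explore(site_area, n=10_000, seed=42):
    """Generate n random candidate designs and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n):
        footprint = rng.uniform(100, 0.6 * site_area)
        candidate = {
            "floors": rng.randint(1, 40),
            "footprint": footprint,
            "open_space": site_area - footprint,
        }
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

best, best_score = explore(site_area=5000)
```

Production systems replace the blind random sampling with smarter search (evolutionary algorithms or machine-learned surrogates) and score candidates against measured data, but the generate, evaluate, select loop is the same.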