We are witnessing a major shift in the process of generating images. The recent growth of machine learning and artificial intelligence raises questions about how creative processes evolve and develop through technology. Systems like DALL-E, DALL-E 2, and Midjourney are AI programs trained on datasets of text-image pairs to generate images from text descriptions. Their capabilities include creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, and applying transformations to existing images.
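DALL-E and Midjourney are closed systems, but the underlying technique they share, a diffusion model conditioned on a text prompt, can be sketched with the open-source Hugging Face diffusers library. The checkpoint name and prompt below are illustrative assumptions, not the systems mentioned above.

```python
# A minimal text-to-image sketch using the open-source `diffusers` library.
# This stands in for the general technique DALL-E and Midjourney share:
# a diffusion model conditioned on a text prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # use .to("cpu") on machines without a GPU

prompt = "an anthropomorphized teapot walking a dog, watercolor illustration"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("teapot.png")
```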
This article is the fifth in a series focusing on the Architecture of the Metaverse. ArchDaily has collaborated with John Marx, AIA, the founding design principal and Chief Artistic Officer of Form4 Architecture, to bring you monthly articles that seek to define the Metaverse, convey the potential of this new realm as well as understand its constraints.
Science fiction writers inspire us with bold and provocative visions of the future. Huxley, Orwell, Asimov, and Bradbury easily come to mind. They have imagined great advances in technology and oftentimes predicted shifts in social structure resulting from the human need to open Pandora's Box. A large part of the charm and allure of science fiction is the bold audacity of some of these predictions. They seem to defy the laws of nature and science, and then, faster than you might have thought, the spectrum of human inventiveness makes it so.
Lo-tech Augmented Reality. Image Courtesy of James Corbett
Augmented reality (AR) software has been a common feature in professional design toolkits for a while. But the recent unveiling of Apple’s Vision Pro headset shows that the mixed-reality wearables sector is making serious inroads in consumer markets too, as one of the world’s biggest names in consumer design and technology enters the market.
A major reason for the immense hype surrounding Apple’s foray into AR/VR hardware, however, is the decision to position it as ‘spatial computing.’ By taking the complexity of augmented reality and applying it to a familiar consumer sector – personal computing – the Cupertino-based brand has simplified the whole experience, making it easier to understand and more widely appealing.
In architecture, drawing is a technical and artistic expression that involves creating visual representations using various analog instruments. While drawing remains relevant and current in practice today, efforts have been made to carry out architectural tasks and studies more efficiently. The drafting machine, a significant development in this regard, enabled precise strokes using fewer instruments. However, the emergence of computational tools, such as computer-aided drafting (CAD), has revolutionized the workflow by leveraging the advantages offered by computers. Architects can now play a more direct and creative role in the design process, reducing their reliance on time-consuming drawing and repetitive tasks. Moreover, workflow enhancements have fostered more effective collaboration among different stakeholders in the architectural process.
On June 5, Apple unveiled Apple Vision Pro, a new type of spatial computer that uses augmented reality goggles to let users experience a blend of the digital and physical worlds. The device promises to offer its users an infinite canvas for apps, larger and more immersive than traditional displays, while allowing them to stay present and connected to others. It features visionOS, the first spatial operating system to create this new way of interacting with digital content. Previous concepts like the metaverse have promised to transform the way we experience digital worlds, with architects taking the opportunity to delve into the design of restriction-free virtual spaces. Could this new device bring new ways of experiencing three-dimensional spaces and better integrate architecture with digital environments?
It’s here! The 21st-century digital renaissance has just churned out its latest debutante, and its swanky, sensational entrance has sent the world into an awed hysteria. Now sashaying effortlessly into the discipline of architecture, glittering with the promise of being immaculate, revolutionary, and invincible: ChatGPT. OpenAI’s latest chatbot has been received with a frenzied reception that feels all too familiar, almost a déjà vu of sorts. The reason is this: Every time any technological innovation so much as peeks over the horizon of architecture, it is immediately shoved under a blinding spotlight and touted as the “next big thing.” Even before it has been understood, absorbed, or ratified, the idea has already garnered a horde of those who vouch for it, and an even bigger horde of those who don’t. Today, as everyone buckles up to be swept into the deluge of a new breakthrough, we turn an introspective gaze, unpacking where technology has led us, and what more lies in store.
Architecture is a referential discipline. From ziggurats and machines for living to contemporary biophilic high-rise designs, it is impossible to know whether ideas are genuinely novel or whether they have been conceptualized before. Artificial intelligence has ignited the conversation on intellectual property (IP) even further. As millions generate unique graphic work by typing keywords, controversies have arisen, specifically concerning the protection of creative work and the copyright architects hold over their creations. Understanding the scope of what is protected therefore helps determine whether licenses are sufficient, whether the long road of trademark registration is worth it, or whether a graphic piece cannot be protected at all and belongs to the public domain.
2022 has been the year of AI image generators. Over the past few years, these machine learning systems have been tweaked and refined, undergoing multiple iterations to reach their present popularity with the everyday internet user. These image generators, with DALL-E and Midjourney arguably the most prominent, produce imagery from a variety of text prompts, for instance allowing people to create conceptual renditions of the architecture of the future, present, and past. But because we exist in a digital landscape filled with human biases, navigating these image generators requires careful reflection.
The promise of the metaverse, this new type of three-dimensional and immersive digital space, is proving more and more appealing to architects eager to explore the new realm of virtual creation. As it currently stands, the metaverse does not have a singular definition but is composed of many narratives and explorations. This unknown land is, however, fruitful ground for architects, who have the opportunity to shape not only the new environment but also the experiences of its future users. The SOLIDS project represents one response to these conditions. Developed by FAR, an architect and engineer working with digital environments, SOLIDS uses a generative process to design unique, metaverse-compatible buildings.
Interior AI is a new platform that helps users generate new styles and even new functions for their interior spaces. The program takes as input a 2D image of an interior space, whether a picture found on the internet or a photograph taken by the user. It can then modify this picture to fit one of 16 preselected styles, ranging from Minimalist, Art Nouveau, and Biophilic to Baroque and Cyberpunk. The program also allows users to assign the room a different function, such as kitchen, home office, outdoor patio, or even fitness gym, thus creating a completely new interior design.
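Interior AI's internals are not public, but a common way to build this kind of "restyle an existing photo" feature is image-to-image diffusion, sketched here with the open-source diffusers library. The file names, style prompt, and strength value are illustrative assumptions rather than the platform's actual settings.

```python
# A sketch of photo restyling via image-to-image diffusion, assuming the
# open-source `diffusers` library and a hypothetical input photograph.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The user's 2D input image of an existing room (hypothetical file name).
room = Image.open("living_room.jpg").convert("RGB").resize((768, 512))

# `strength` controls how far the output may drift from the input photo:
# low values preserve the room's geometry, high values favor the style prompt.
styled = pipe(
    prompt="art nouveau living room interior, ornate woodwork, soft daylight",
    image=room,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
styled.save("living_room_art_nouveau.png")
```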
Developed by indie entrepreneur @levelsio, This House Does Not Exist is a platform that generates images of modern homes in the style of ArchDaily. The program uses latent text-to-image diffusion to automatically produce realistic images of modern houses. The website is intuitive and easy to use, with a single button at the top right reading “tap image to generate new house”. It also allows users to vote for the best generated images or see similar houses by clicking on the keywords displayed at the bottom of each image.
Anatomy of an AI System. Image Courtesy of Kate Crawford
Should designers care about artificial intelligence (AI) or machine learning (ML)? There is no question that technology is adding texture to the current zeitgeist. Never could I have imagined seeing a blockbuster hit where Ryan Reynolds emerges as a conscious non-player character in a video game and a flop where Melissa McCarthy negotiates humanity’s future with a James Corden-powered superintelligence within a year of each other. But does learning AI and ML’s ins and outs really matter for the creative professions and our nebulous, invaluable way of operating?
Machine learning and generative design are profoundly shaping modern life. A central critique of the value and advancement of artificial intelligence, especially in the context of architecture, concerns whether a machine can truly design, along with the resulting fear that professional services may be diminished. As cities continue to develop, new tools emerge to help envision and create the built environment. How can architects embrace generative design to reimagine models of sustainability, inclusive practice, and new aesthetics?
Artificial intelligence is transforming how we design and build. By 2050, the effects of AI adoption will be widely felt across all aspects of our daily lives. As the world faces a number of urgent and complex challenges, from the climate crisis to housing, AI has the potential to make the difference between a dystopian future and a livable one. By looking ahead, we're taking stock of what's happening, and in turn, imagining how AI can shape our lives for the better.
Artificial intelligence, machine learning, and generative design have begun to shape architecture as we know it. As systems and tools for reimagining the built environment, they present diverse opportunities to rethink traditional workflows. Designers also fear they may adversely affect practice, limiting the services of the architect. Looking to building technologies, new companies are creating software and projects to explore the future of design.
In May, aec+tech hosted an event on Clubhouse discussing how architects are using generative design in architecture firms today and how they will use it in the future. Five guest speakers from reputable architecture firms and tech start-ups – Zaha Hadid Architects, BIG, Outer Labs, 7fold, and RK Architects – joined the session to share their experiences and insights.
Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.
Quintain, award-winning developer, used Delve to redesign London’s Wembley Park. Image Courtesy of Quintain
Sidewalk Labs has unveiled Delve, a new generative design tool that aims to help developers, architects, and urban designers identify better neighborhood designs. It uses machine learning to surface optimal options from a series of core components, including buildings, open spaces, amenities, streets, and energy infrastructure, exploring millions of design possibilities for a given project while measuring the impact of each design choice.
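Delve's implementation is proprietary, so the toy sketch below only illustrates the general generative-design loop described here: sample many candidate neighborhood configurations, score each against a handful of metrics, and keep the best. All parameters, ranges, and weights are invented for illustration.

```python
# A toy generative-design loop: explore many random candidates and keep the
# highest-scoring one. Metrics and weights are illustrative assumptions.
import random

def sample_option():
    """Randomly sample one candidate neighborhood configuration."""
    return {
        "floors": random.randint(4, 30),         # building height
        "footprint": random.uniform(0.2, 0.6),   # share of site covered by buildings
        "open_space": random.uniform(0.1, 0.5),  # share of site kept as open space
    }

def score(option):
    """Score a candidate: reward density and open space, penalize overbuilding."""
    density = option["floors"] * option["footprint"]   # proxy for homes delivered
    livability = option["open_space"] * 10             # proxy for parks and daylight
    overbuild = max(0.0, option["footprint"] + option["open_space"] - 0.8) * 50
    return density + livability - overbuild

# Explore a large number of options and report the best one found.
options = (sample_option() for _ in range(100_000))
best = max(options, key=score)
print(best, round(score(best), 2))
```

A production tool would replace the random sampler with a learned or optimization-driven search and swap these proxy metrics for real analyses of daylight, cost, and density, but the explore-and-score structure is the same idea the article describes.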