Benjamin Bratton on Artificial Intelligence, Language and "The New Normal"

Benjamin Bratton, Professor of Visual Arts and Director of the Center for Design and Geopolitics at the University of California, San Diego, is the new Programme Director at Moscow's Strelka Institute. His programme, The New Normal, is based on the premise that "something has shifted. [...] We are making new worlds faster than we can keep track of them, and the pace is unlikely to slow."

Have our technologies advanced beyond our ability to conceptualize their implications? "One impulse," the course brief reads, "is to pull the emergency brake and to try to put all the genies back in the bottle." According to Bratton, this is hopeless. "Better instead to invest in emergence, in contingency: to map The New Normal for what it is, and to shape it toward what it should be."


Could you give a collapsed history of Artificial Intelligence (AI) and explain why you think it has impacted—or is beginning to impact—architecture, design and urbanism today?

It depends on how one chooses to define it. Intelligence is an emergent property of material complexity – it has less to do with whether or not the material—the substrate of this intelligence—is organic or inorganic than it has to do with generic structures and properties of the intelligent phenomena that emerge from it. It also matters less whether or not any one bit of this material substrate is itself intelligent; an ant is not particularly intelligent, but millions of ants together in multiple interactions are. Similarly, one neuron by itself is not particularly intelligent, but in the trillions of interactions of the brain it becomes so.

Another way of thinking about intelligence is that it has a capacity for abstraction. Consider something as primal as a protozoan with its cilia on the outside – it begins by figuring out what’s out there, what’s around it, in order to create some kind of predictive abstraction of the space that it’s in. What is friend, food, or foe? In this cartographic function lies a fundamental abstraction that we might consider when we think about intelligence as goal-directed behaviour. There are all sorts of ways in which the question of intelligence may be understood on a basic material level.

The history of AI proper has always had a close relationship with philosophy and, in certain ways, ethics. One could argue that it was invented philosophically before it was invented mechanically. Take Alan Turing’s 1950 essay, Computing Machinery and Intelligence, for example – Turing is tremendously interesting on a number of different levels (if anyone should be on the £1 note it’s him!). The way in which this essay has found its way into the structure and culture of AI, and into the anthropocentric logic of AI, has been a real problem. Turing offered the notion that if you can’t tell whether Player A is or is not a real person, then that is a sufficient criterion to argue that it’s intelligent. But it has been taken not as a sufficient criterion of intelligence but as a necessary one – as a threshold. Only if you can’t tell whether it is a real person – only then, and on the basis of that threshold – can we declare that it is in fact an intelligent creature.

The big shift that we’ve witnessed over the last fifty years—or sixty-seven years since Turing’s paper was first published—is a shift from top-down to bottom-up models. The earlier understanding of intelligence was essentially a bunch of nested rules of symbolic logic: implant those rules into something and it would be able to sort out what to do. The shift has been toward more bottom-up approaches. The things that interest me in the field of AI philosophically have less to do with how to teach the machine to think as we think, and more to do with how machines might demonstrate a wider range of embodied intelligences than we currently understand. That way we could see our own position in a much wider context, and it would teach us a little about what ‘thinking’ actually is in that particular version rather than simply extrapolating it as a universal model. Where does sensing stop, for example, and information processing begin?

And how does this apply to cities?

I believe that whenever a new technology emerges a skeuomorphic imperative appears, and we ask how we are going to apply this to automate, accelerate or amplify something that we have done before: if we have an AI version of this, or an AI version of that, it will make it work better.

Cities are and always have been information-rich, built with information-processing structures. In the long run we would want to see forms of emergent algorithmic intelligence built into the city – multiple nested niches, signalling processes and micro-ecologies. That type of urban environment would have a far greater richness than one in which we simply automate existing forms of pedestrian, skeuomorphic interaction. It is, of course, a much more difficult design and policy brief to grapple with, but it’s one in which AI could reveal the ways in which the city was already artificially intelligent. That realization would shift part of our urban ontology and urban epistemology in such a way that it could give us a better understanding of where a more deliberate synthetic algorithmic intelligence might enter the picture.

For architects, and within architecture, words—semantics—are very important. The word ‘wall’, for example, has a range of meanings which make complete sense to the initiated. Would you argue that a new language is being developed to talk about these existing phenomena (related to AI, urbanism and architecture) which are only now becoming significant? If so, do you think there is a risk of alienating practitioners?

It’s a very good question. There is a strong vernacular connotation to particular words, and there’s also a propensity to pull new glossaries from all over the place quite promiscuously, and sometimes in a very creative way. I’m much happier when people break philosophy apart, taking its terms and putting them to use in ways they weren’t intended, than when ‘true believers’ attempt to keep them pure. Those people scare the hell out of me!

The programme at the Strelka Institute, The New Normal, will focus on urbanism and the conditions that we work in, but the students will come from a broader range of disciplines by design: we will have students from computer science, from filmmaking, from journalism, from economics. Architects will be in the majority, but they won’t be the only people sitting around the table. So it’s not just about inventing neologisms; although we are no doubt going to come up with some long and strange new words, in many cases these are words that we are already using in a more tactical way. This will give the course some incremental conceptual depth. We certainly want to avoid a language of hybrids and ‘horseless carriage’ vocabulary, like ‘mobile phone’ and ‘smart city’.

In the context of the Strelka Institute this is not a tabula rasa, a clean slate, starting anew. It’s part of a continuum, and we want to map the convergences of multiple continuities that are taking place at this point in time.

The New Normal 2016/17 © Egor Slizyak

While these notions do not constitute a new discourse, they do represent a fresh approach to the field of architecture. There is a conservatism that pervades architectural education almost everywhere, and the Strelka Institute has always been three steps ahead in how it approaches discussions about the city and about space. Is The New Normal taking the school’s pedagogical discussion in a new direction, or do you see it more as a cumulative process?

I think that it’s part of the process. One of the interesting things about Strelka’s model is that it can only work in an institute of this kind; it has its own scale and economic structure. The process is tidal – every three years something new comes along in any case, so it should definitely be described as more of a continuum. There is probably a stronger break between this course theme and the previous one, but I do believe that it represents a logical increment.

You have suggested that while a student may enter as an architect, or a journalist, or a programmer, you anticipate that upon graduating they will feel less compartmentalised, less limited. If I challenged you to actually define what students might emerge from the course as, how would you describe their skill-set?

Designers. Part of the interest in bringing this range of disciplinary backgrounds to the fore is to introduce ways in which those disciplines may themselves be shifting toward becoming design disciplines – in which a practitioner understands that a profession or discipline can be less descriptive and more projective; a future-oriented practice.

The sorts of practices that emerge from this, as our students triangulate amongst one another—fall in love, build practices together—will allow them to bring all of those interests and capacities together. To explain this I often list the key areas of 20th Century design that we’ve been taught, and then the things that go into the new kit of 21st Century design: biotech, robotics, and all the rest of it. I then argue that it’s not about leaving one list for the other – you should pick three things from both lists that you’re good at, triangulate them, and build a practice from that. That, of course, is what we're hoping to see at Strelka next year.

Find out more about The New Normal here.

