Not so far in the future, smartphones and laptops will go the way of the beeper and fax machine, fading into obsolescence. Soon, according to MIT Media Lab's Joseph Paradiso, we will interface with the physical world via wearable technologies that continually exchange information with sensors embedded all around us.
Paradiso has been at the forefront of these developments for decades, exploring new applications for sensor networks in everything from music (he will lead a presentation of the lab’s musical innovations later this month at Moogfest) to baseball. In recent years, his group’s research has focused increasingly on smart buildings. I spoke with him about the implications of his work for the future of architecture and the built environment.
You run the Responsive Environments group at the Media Lab. Can you describe some of your work in the building realm?
We’re all about how people essentially hook up to this electronic nervous system that’s covering the planet in many ways. We tend to think broadly of all the stuff interfacing to humans. If it’s a sensor whose data you’re funneling to people, how do you really do that? Or smart environments: buildings that are almost like an extension of your body. Instead of regulating to a temperature setting on the wall, the building should respond to your sense of comfort. Things like that.
We did a project some years ago where we controlled the thermostat of the HVAC system with a wearable. And it’s not just that you use the wearable to control the temperature of the room. Instead, you take readings from a bunch of sensors—location, position, amount of activity, temperature, and humidity—and then you tell the wearable whether you’re comfortable or not, basically. It builds a model of your personal comfort metrics, and then the HVAC system is automatically controlled based on that.
Which I think is still a future that’s waiting for us. At some point this should all be personal: a kiosk of things that we actually have on our body that share the first-person perspective.
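The comfort-model loop described above—sensor readings labeled with self-reported comfort, a learned personal model, and a resulting HVAC adjustment—can be sketched in a few lines. This is purely illustrative: the sensor values, labels, and the nearest-neighbor classifier are assumptions for the sake of the example, not the group’s actual implementation.

```python
import math

# Hypothetical sensor readings: (temperature in C, relative humidity %,
# activity level 0-1), each paired with the wearer's self-reported
# comfort label. All values here are invented for illustration.
training = [
    ((20.0, 40.0, 0.1), "comfortable"),
    ((27.0, 60.0, 0.2), "too_warm"),
    ((17.0, 35.0, 0.1), "too_cold"),
    ((21.5, 45.0, 0.3), "comfortable"),
    ((26.0, 55.0, 0.6), "too_warm"),
    ((18.0, 30.0, 0.0), "too_cold"),
]

def predict_comfort(reading, k=3):
    """Classify a new reading by majority vote among the k nearest
    labeled samples (plain Euclidean distance)."""
    ranked = sorted(training, key=lambda s: math.dist(s[0], reading))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

def setpoint_adjustment(label):
    """Map the predicted comfort state to a thermostat nudge in degrees."""
    return {"too_warm": -1.0, "too_cold": 1.0, "comfortable": 0.0}[label]

label = predict_comfort((26.5, 58.0, 0.4))
print(label, setpoint_adjustment(label))  # prints: too_warm -1.0
```

In a real deployment the labels would accumulate over time from the wearer’s feedback, and the control signal would feed an actual building-management interface rather than a print statement.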
Where do you see your work situated in terms of the smart building industry?
There are a lot of products now. For instance, you can get Bluetooth modules to control a light bulb. And you can use an app on your phone to control lights. Industry is going in those directions.
But we like to start from a more radical premise. Think about building systems that could really graft smoothly to the human, like a prosthetic. And you definitely want to use information from all these sensor systems to throttle the amount of energy that you use. We like to take that kind of radical view and then see what we can do based on that.
So we try to think ahead. Is it going to be 10 years out? No. These days this kind of thing we’re talking about is two, three years out, probably. It comes fast. Just like the web hit, changing the way we look at PCs—this change is about when sensors break free of devices and applications and just become available for more or less any application to use their data.
And we’re close to it. The sensors are already there. Not all of them, but right now there are lots of different sensors embedded in lots of different systems. Without this common framework in place, though, they don’t talk to each other. We’re working on standards or frameworks that will enable that transition. Then it’s going to be very fluid. We’re living in a very, very different ecosystem then. It’s going to happen with everything very soon.
How do you think that will affect the career of your average architect? I’m assuming that’s going to be a massive transition in what design means?
I’d say so. Certainly lighting, right away—this is going to change completely. Building systems in general are going to evolve, of course being much more energy-centric than they are now, but also much more aware of what the user’s doing. Buildings will know more about what’s happening inside of them.
Information will get to and from people in many ways. The revolution we’re heading to now is defined by wearable and infrastructure-embedded computing, moving away from handhelds and tablets. So when we get there, the building will know more about us, and will ideally know who and where we are and broadly what we want to do. There are some security issues, naturally, so we have to hold some things back while letting other data flow. But there are hopefully ways to fluidly manage that.
But information should be able to manifest in the best possible way as we’re navigating these environments. It could be on a wearable; it could be in whatever your phone becomes; the wall could be a display. Architects will have to think about how the information is coming to and from the user, what people are doing in the space, and this huge integration of information, because it’s all going to be available and not locked down or siloed.
What do you think the implications of this kind of change will be for design education and practice? It sounds like a bottleneck could quickly develop, just because so few people will understand how this stuff works.
I think it will be the way the web played out, because it’s the same kind of revolution that happened there. You had penetration of technology, then you had a framework evolve where you could do things like visit webpages and easily share information. It took people a while to see how to leverage that, and we’re still figuring it out in many ways.
When you start getting this closed loop—sensor to cloud to application and back down to environment—it presents a whole other set of possibilities and tools. Yeah, the first steps will be pretty locked down and simple. It will be much deeper than that, though. Exactly what will happen . . . we can all speculate.
But it’s going to happen quickly. There will be some initial apps that explore the conceptual space, and then there will be a whole economy built around it.
It sounds like the role of architect and engineer—in both the physical infrastructure and the computer senses—will merge, to some degree.
Some of that’s already true. Architects and engineers: certainly there’s got to be a strong communication between the two to make a modern building; I’d think that people who are facile in engineering aren’t uncommon around the architectural firm. Especially if you push the edge, you have to know what’s feasible, or have people on tap to be able to figure out how to do it.
This is going to happen as well in terms of the information infrastructure. It’s not just going to be like you have a server room here and there you have WiFi. There are going to be what we call information portals all over the place. Maybe there will be spots you design for people to specially exploit wearables. And so on . . .
How will the buildings look different? There’s an intriguing idea. Everything can become a display. Or maybe photons will be painted right onto your retina, so it doesn’t matter so much what physical surface you’re looking at. Environments will be some combination of what you physically see and what’s virtual. That’s totally intriguing for the architect. You’ll have architects working in both the real and the virtual, taking the blend very seriously.
How do the industry partners you work with view the applications for buildings and architecture at this stage?
Everyone knows the change is happening. Certainly the IT providers, from Intel to Cisco to Qualcomm—they basically get what I’m telling you at a pretty high level now. It all has impact on infrastructure. It’s all going to live amorphously and not just be on your desktop, because the manifestation of computation will be part of the environment that you’re living in. We collaborate with companies that work in building systems, and they definitely want to get sensors everywhere to gather as much potentially relevant information as they can. They want to know what people are doing in the space to make their systems as efficient as possible, because we all have to be that way going forward. So yeah, they get it.
We also work with cities more and more. Cities now get that it’s not just balkanized systems and utilities; it’s the total infrastructure in their city that’s all coming together at different levels.
I just came from a meeting in Washington DC last week on climate change, open data, and disaster. We had a lot of city planners, especially people involved with preparedness for emergencies and disasters, which are increasingly exacerbated by climate change. They know that all this data is going to be available, not only from their systems, but also from citizen scientists, so to speak, and they also know they have access to individuals at a far greater level of granularity than before. So they’re definitely all interested in trying to exploit these trends to make the city run better, at many levels. They want to minimize disaster casualties by making prudent data-guided choices beforehand, while getting the right level of information out there when a disaster actually happens so they can respond better. Aside from emergency situations, of course, planners want to exploit this sensor revolution to make the city a richer place to live.
I think one of the big differences that’s going to happen is that the information’s going to be always “right up there,” so to speak. I don’t yet regularly wear Google Glass, which actually came out of work from the Media Lab in the mid ’90s, as I think it’s still pretty limited. But I give the Glass development team credit for really putting a stake in the ground, and it’s quickly going to get better. I believe that we’ll soon be living in this kind of always-augmented world.
That’s got its pros and cons; it depends on who you talk to. But it’s coming, and I like to be an optimist.
What can people working in the built environment do to be prepared for these changes?
In the end, we’re also living in the age of networks, right? Nobody can be good at everything anymore. It was always elusive, and now it’s impossible. So now we have to work in discipline-crossing teams. And architects are probably some of the earliest practitioners of that, because you can’t build a building by yourself.
Interview edited and condensed.