Machine Learning from Las Vegas – Volume #49: Hello World!

The following essay by Pierre Cutellic was first published by Volume Magazine in their 49th issue, Hello World! You can read the Editorial of this issue, Going Live, here.

The relevant revolution today is the current electronic one. Architecturally, the symbol systems that electronics purveys so well are more important than its engineering content. The most urgent technological problem facing us is the humane meshing of advanced scientific and technical systems with our imperfect and exploited human systems, a problem worthy of the best attention of architecture's scientific ideologues and visionaries.

—Robert Venturi, Denise Scott Brown, Steven Izenour, Learning from Las Vegas

It is almost always relevant to place the emergence of significant architectural discourses in the perspective of other contemporary societal events, particularly since the latter tend to become a pre-text for the former. But what happens if such events fail to meet their own expectations? We might then find ourselves facing a sign of the times yet to come, or a Zeitgeist in the making.

In parallel with Robert Venturi, Denise Scott Brown and Steven Izenour's revelation of the Information Age's consequences for both the built environment and modern ideologies, the then-promising field of artificial intelligence (AI) fell into an ice age for almost a decade. Researchers were facing the very problem of not being able to realize their own ambitions, both conceptually and technically. The great initial excitement of researchers and financial backers over developments in neural nets, for example, was abruptly interrupted by Marvin Minsky and Seymour Papert’s 1969 book Perceptrons, which revealed important limitations on what and how a machine, as one could be conceived at the time, could actually learn. Most of the barriers encountered were directly related to the challenge of processing vast quantities of data. Problems such as memory, processing speed and the necessity of having datasets to learn from were all linked to the same issue of acquiring and processing a constantly increasing amount of data. Most of those issues were overcome in the 1980s by new generations of computers, models and methods for neural nets such as the Neocognitron and backpropagation, but these developments remained intimately linked to, and limited by, the ability to process and evaluate increasingly large datasets. A symptomatic form of this phenomenon can be seen in the overwhelming communication system described by Venturi, Scott Brown and Izenour as Las Vegas’ architecture and urbanism: “It is an architecture of communication over space; communication dominates space as an element in the architecture and in the landscape.”
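
To make those “important limitations” concrete, the canonical example analyzed by Minsky and Papert is the exclusive-or (XOR) function, which no single-layer perceptron can represent because its classes are not linearly separable; the multi-layer networks later trained with backpropagation overcame exactly this barrier. The sketch below is illustrative only, not part of the original essay: a minimal perceptron learning rule in plain NumPy that converges on AND but can never reproduce XOR.

```python
# Illustrative sketch (not from the essay): a single-layer perceptron
# learns the linearly separable AND function but never the XOR function.
import numpy as np

def train_perceptron(inputs, targets, epochs=100, lr=0.1):
    """Classic perceptron learning rule on binary inputs and targets."""
    weights = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            prediction = 1 if np.dot(weights, x) + bias > 0 else 0
            error = t - prediction
            weights += lr * error * x
            bias += lr * error
    return [1 if np.dot(weights, x) + bias > 0 else 0 for x in inputs]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print("AND:", train_perceptron(X, np.array([0, 0, 0, 1])))  # converges to [0, 0, 0, 1]
print("XOR:", train_perceptron(X, np.array([0, 1, 1, 0])))  # never matches [0, 1, 1, 0]
```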

It is very rare that architectural thinking finds itself ahead of its own time. But with regard to the “humane meshing of technological and human systems”, Venturi, Scott Brown and Izenour were clearly addressing the tip of a crisis we have come to face only now. The symbolic crisis they addressed at the time was one of the earliest symptoms of a general crisis of knowledge impairment. Las Vegas served as tangible proof that, with the evolution of human activities, communication would come to dominate space in an ambient manner, conveying increasingly vast quantities of data from which the production of knowledge would become increasingly difficult. Machine learning – the mechanization of knowledge acquisition from experience – has come to act as a response to this issue, but rather than constituting a “humane meshing”, it tends to work as a substitution for human cognition. Total knowledge – how to do, to live and to think – has nowadays fully achieved its mechanization. And such an integral mechanization has also generated a progressive dependency on technology as well as a general incapacity to understand technological environments. The Las Vegas citizen’s car became essential to navigate the symbolic and sign systems made for high-speed recognition, and both rendered the unmediated person unable to experience and understand his immediate surroundings. Spatial alienation due to lack of mediation became the pervasive architectural paradox of Las Vegas and, eventually, of our daily lives. Such foresight was already a manifestation of a new kind of knowledge impairment, one conveyed through the mechanization of the acquisition and production of information. But could this substitution mechanism be approached from a new angle, this condition of impairment seen in a new light?

A quadriplegic woman controlling a robotic arm in three dimensions through a BCI, UPMC, 2012. Image © Volume

Delays in AI-related technological developments offered a window of time for the development of an extended architectural thinking about these unfolding events. Nicholas Negroponte, founder of the Architecture Machine Group at MIT, published two books at around the same time as Venturi, Scott Brown and Izenour’s study of Las Vegas: The Architecture Machine in 1970 and Soft Architecture Machines in 1975. Negroponte laid out an anticipatory plan for technological advancement and a synthetic approach to architecture. One of the many critical aspects of this work has been to highlight the ethical and ontological issues faced by AI research: the discrimination between artificial and natural cognition. “To the first machine that can appreciate the gesture” reads Negroponte’s dedication in the first book. Together, the two books set out a progressive roadmap for a collaborative environment between humans and machines, one in which neither cognitive system necessarily discriminates against or substitutes for the other.

At around the same time, in 1976, one of the first significant pieces of writing on AI ethics was published: Joseph Weizenbaum’s Computer Power and Human Reason: From Judgment to Calculation. This influential work was the first to propose a comprehensive ethical differentiation between decision-making and choice – the former defined as the product of computational activity, the latter of human judgment – and it would eventually pave the way for the evolution of artificial knowledge. But this very discrimination, which is still operative today, remains an obstacle to truly envisioning a positive outcome of integral mechanization, as human cognition and knowledge production remain separated from each other and mediated by artificial knowledge.

In 1965, scientists discovered peculiar types of neurosignals involved in the process of decision-making. Then, at the beginning of the 1970s, novel methods for monitoring the electrical activity of the brain, such as electroencephalography (EEG), became more portable, allowing them to expand beyond the fields of psychiatric and medical research and eventually deliver the first brain-computer interfaces, paving the way for neuroprosthetics. With the invention of brain-computer interfaces, electric signals could convey information between biological and mechanical material, constituting a symbiotic technium in favor of human re-capacitation. Neuroscience and neurotechnology could then combine human and mechanized cognition to reproduce an interface necessary for a body to interact with its environment, acquire information and produce knowledge. Here one can realize that, within the field of neuroscience, the developments of cognitive science and artificial intelligence are deeply linked. But aside from medical applications, where disabilities tend to be immediately observable, not a single positive integrative approach has emerged from these new technological developments to propose a solution to general knowledge impairment. Rather, the most effective applications have been for military or marketing purposes. But just as the mechanization of labor in the automotive and aerospace industries served as a productive model for the development of modern architecture, perhaps the medical industry can now serve as a model for the cognition of architecture: a heuristic graft for new values of knowledge (as is the function of the neuroprosthesis).
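
As an aside on what such interfaces rest on technically: the decision-related neurosignals discovered in the 1960s, such as the P300 event-related potential, are tiny deflections buried in noisy EEG, and brain-computer interfaces recover them by averaging many stimulus-locked epochs. The sketch below is illustrative only and not from the essay; it uses synthetic data rather than any particular BCI toolkit, with the sampling rate, amplitudes and 300 ms peak assumed for the sake of the example.

```python
# Hedged, illustrative sketch (synthetic data, not from the essay): a weak,
# decision-related deflection hidden in noisy EEG becomes visible once many
# stimulus-locked epochs are averaged, the basic principle behind ERP-based BCIs.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # assumed sampling rate in Hz
t = np.arange(0, 0.8, 1 / fs)     # 800 ms epoch after each stimulus

# Hypothetical decision-related component peaking around 300 ms (P300-like bump).
component = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def record_epoch(contains_component: bool) -> np.ndarray:
    """Simulate one stimulus-locked EEG epoch: background noise plus optional ERP."""
    noise = rng.normal(0, 5.0, size=t.size)   # noise dwarfs the 2 µV component
    return noise + (component if contains_component else 0.0)

# Averaging many epochs cancels the noise; the time-locked component remains.
target_avg = np.mean([record_epoch(True) for _ in range(200)], axis=0)
control_avg = np.mean([record_epoch(False) for _ in range(200)], axis=0)

peak_ms = t[np.argmax(target_avg)] * 1000
print(f"Averaged target epochs peak near {peak_ms:.0f} ms")        # close to 300 ms
print(f"Control average peak amplitude: {np.max(control_avg):.2f}")  # well below the 2 µV target peak
```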

In the medical realm one can observe people who have undergone such successful surgical operations regain lost control of their motor skills and psyche. Scientific research has developed applications capable of returning haptic control to those who had lost it, not only with bionic devices but also by assisting in the progressive reconstruction of motor nerves and the deceleration of neurodegenerative diseases. Human re-capacitation is a process of re-valuing knowledge of the human body and the way it interacts with and controls its environment. Augmented by the computational powers of AI and other artifacts of integral mechanization, the re-capacitation of human knowledge seems more approachable now than ever. And such a graft, previously applied only to bodies clinically diagnosed as disabled, should be considered more seriously in general.

One peculiar justification for cognitive computing today is that it proposes an extension to human cognition. The current state of scientific research, however, tends rather to develop applications of the opposite scenario, in which human cognition helps and extends the artificial one (as in the case of ‘attribute learning’, where attributing values to symbolic systems remains a highly demanding and complicated task for cognitive machines). The curiosity of such a common claim should not be underestimated at a moment when human cognition is about to reveal an extraordinary potential to positively relink with automation. Whether the evolution of architecture lies in tangible artifacts or in highly simulated environments, the next steps of extended, augmented cognition should be integrated with human cognition, as in neurotechnologies, in such a way that maintaining a humane meshing alongside technological shifts would allow for the construction of novel types of knowledge and the development of new creative processes. From the Miocene to the Anthropocene, the human brain, along with the human body and its sensory organs, has had about eight million years to evolve. Elements of that evolutionary process, such as visual cognition, which serves as a mentoring model for machine vision and data compression, are being factored into the development of artificial intelligence. When thinking about the future of knowledge and the evolution of architecture, we should learn to see ourselves as a more integral part of the machine learning agenda and consider factoring it into human evolution with greater care and creativity.

References
[1] Robert Venturi, Denise Scott Brown and Steven Izenour, Learning From Las Vegas (Cambridge (MA)/London: MIT Press, 1972).
[2] Marvin Minsky and Seymour Papert, Perceptrons: An Introduction to Computational Geometry (Cambridge (MA)/London: MIT Press, 1969).
[3] Kunihiko Fukushima, ‘Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position’, Biological Cybernetics, 36(4), 1980, pp. 193-202. David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams, ‘Learning representations by back-propagating errors’, Nature 323, Oct. 1986, pp. 533-536.
[4] Bernard Stiegler, La Société Automatique 1, Le Futur du Travail (Paris: Fayard, 2015).
[5] Nicholas Negroponte, The Architecture Machine: Toward a More Human Environment (Cambridge (MA)/London: MIT Press, 1970). Nicholas Negroponte, Soft Architecture Machines (Cambridge (MA)/London: MIT Press, 1975).
[6] Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman & Company, 1976).
[7] Steven J. Luck, An Introduction to the Event-Related Potential Technique (Cambridge (MA)/London: MIT Press, 2005).
[8] Jacques J. Vidal, ‘Toward direct brain-computer communication’, Annual Review of Biophysics and Bioengineering, 1973.
[9] For an example of DARPA military applications of brain-computer interfaces see: Robbin A. Miranda et al., ‘DARPA-funded efforts in the development of novel brain–computer interface technologies’, Journal of Neuroscience Methods 244, 2015, pp. 52-67. As for the field of neuromarketing, one can trace the mechanization of human consumer behaviors to the theoretical model from which it derives: Gerald Zaltman’s Zaltman Metaphor Elicitation Technique (ZMET©).
[10] Pierre Cutellic, ‘Le Cube d’Après: Integrated Cognition for Iterative and Generative Designs’, ACADIA 14: Design Agency, Proceedings of the 34th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), 23-25 October 2014, pp. 473-478.
[11] José del R. Millán et al., ‘Combining Brain–Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges’, Frontiers in Neuroscience, 4, 2010, p. 161.
[12] John E. Kelly III, ‘Computing, Cognition, and the Future of Knowing, How Humans and Machines are Forging a New Age of Understanding’, IBM Research, 2015.
[13] David Marr, Vision (Cambridge (MA)/London: MIT Press, 1982). James V. Stone, Vision and Brain: How We Perceive the World (Cambridge (MA)/London: MIT Press, 2012).

Cite: Pierre Cutellic. "Machine Learning from Las Vegas – Volume #49: Hello World!" 20 Oct 2016. ArchDaily. <https://www.archdaily.com/797717/machine-learning-from-las-vegas-volume-number-49-hello-world> ISSN 0719-8884
