Designers have fixated on the visual culture that wrought Casio wrist watches and Superstudio. Mario Carpo explores the reasons why.
It began with a watch—actually, two. Last year I was co-tutoring two brilliant master’s students in a school of architecture in a European country I shall not name. They had started their thesis project with some very idealistic, “accelerationist” views of technology—assuming, in the footsteps of some improbable political theories currently in fashion, that technological change would “accelerate” the final demise of capitalism. Then one day they showed up for their tutorial sporting two identical black Casio digital watches, and I immediately realized that something had gone awry. As if struck by some illumination on their road to Damascus, they explained to me that they had concluded that technology should thenceforth be their foe. From that moment, their project turned into a “critical” reinterpretation of some Superstudio projects from the early ’70s. For their final presentation, some months later, they set up an installation where everything, right down to some fresh baguettes bought from a baker’s next door, was wrapped in carefully executed Superstudio wallpaper—black grid on white background. Most of their friends in attendance were also wearing the same Casio watch, I noticed.
They were not alone. That watch, the “classic” Casio F-91W—cheap, small, but eye-catching—appears to be popular among radical leftists in the United States, in the U.K., and elsewhere. Other, less savory characters adopted it in the recent past for unrelated criminal purposes, but I would not be surprised if today we spotted Jeremy Corbyn, Bernie Sanders, or Jean-Luc Mélenchon wearing one as a mere fashion statement (disclaimer: I have no evidence of that to date). The F-91W has been in production without interruption since 1989, but it derives from an earlier, clunkier model of 1978, the F-100C, which was, back then, more expensive and less commercially successful. And for a reason: LCD wristwatches were from the start a paragon of misplaced technology. Their digital displays, showing numbers through a mosaic of liquid crystals, were a technical breakthrough in the early ’70s. Despite this, they could not (and still cannot) do what analog dials always did—tell time. At least, not as well as any analog dial, as it is easier to tell the time at a glance from the position of two hands on a dial than by trying to make some sense of a blinking string of barely visible numbers held at arm’s length. Besides, most people today do not need to wear an actual watch, as they have one in their smartphones. Why then do so many of us today cherish this vintage gadget from the late ’70s, a technical absurdity from the start, and perfectly useless today—and why do so many of us today proudly show it off?
That’s only the tip of the iceberg. Plenty of failed high-tech objects of the 1970s (and some from the ’60s) are today revered and admired: They are not only avidly collected (that would be expected, evidently); they are revived, imitated, replicated, revisited, and reinterpreted, and many young designers choose them as a direct source of inspiration—often saying so in so many words. But what inspiration can be found in stuff that famously did not work when it was invented, never worked, nor could possibly be made to work today? More generally, why do so many of us today feel such a strong affinity with so many late-mechanical technologies from the ’70s—or even with the ’70s in general? If a contemporary visual artist is obsessed with Dan Flavin’s neon tubes, that is not going to harm anyone—at least, not directly; although, as a historian, I would still want to know why that is happening. But if an architect today installs one thousand neon tubes in a building, that’s a problem, because while neon tubes may have been a great technology in Dan Flavin’s times, today we have many better and more environmentally friendly sources of lighting.
The profusion of Rudolphian orange (“paprika”) carpets and Juddian steel, aluminum, and Perspex in contemporary interior decoration is certainly innocuous, and may even be funny; but today anything that moves in a building—even a garage door—is seen as a reference to Cedric Price; any grid—even a waffle iron—as a reference to Superstudio; any three-dimensional modular assembly—from Legos to shipping containers—as a reference to Nicholas Negroponte’s or John Frazer’s precocious, and famously failed, digital experiments. Grids and modularity were technical staples of the mechanical world: they served to make more products out of fewer components, as fewer standard parts could then be more easily mass-produced in the pursuit of economies of scale. But that was the industrial logic of mechanical machine-making. Post-industrial, digitally driven manufacturing does not work that way, and we have known that for quite some time.
Cedric Price, who is often hailed today as the inventor of almost everything, was seduced by Norbert Wiener’s theories of feedback and interactivity (then known as cybernetics). When he tried to apply Wiener’s communication theory to architecture, Price concluded, somewhat quixotically, that “cybernetic” buildings should be capable of reacting to external stimuli by permanently reconfiguring themselves, through a game of mechanically moving parts. But buildings do not move as easily as the Mexican cats on which Wiener performed his notorious neurological experiments; even today most new buildings, after completion, stay as they were built, and they are seldom expected to be rebuilt on a daily basis: floors and ceilings going up and down, or walls and roofs moving back and forth, albeit common in stage design, are still a rare and costly exception in building. When an early avatar of Price’s cybernetic visions, the Centre Pompidou, was built in Paris in the early 1970s, its only visibly moving part was a monumental escalator; Wiener’s cybernetic theories, as well as most early theories of artificial intelligence, were quietly dropped by the scientific community from the mid-1970s on, for the simple reason that they did not serve any practical purpose. The list of failed late-mechanical and early cybernetic technologies from the 1970s is a long one; the reasons for their failure back then would be an interesting historiographical topic, but the reasons for their resurrection today are a dark and troubling mystery.
Of course, it was not only technology—so promising in the 1960s—that failed catastrophically in the 1970s; it was in a sense the whole universe of Modernist promise and expectations, which the 1960s had nurtured and to some extent fulfilled, that faltered and collapsed beyond repair in the 1970s. Politics in most Western countries was then widely seen as failing—in the sense that the existing political order in most Western democracies could manifestly not cope with the social, ideological, and economic issues that 1968 had brought to the forefront (and the energy crises of the 1970s then compounded). The Soviet Union was then widely seen as winning the Cold War, but not many in Western Europe would have welcomed Leonid Brezhnev as their leader—no more than they would have fancied a sputtering Trabant in their garage. In this generally despondent and at times desperate climate, many avant-garde designers decided that they should stop designing altogether, and do other things instead: cultivate and celebrate their irrelevance, for example, throw bombs, or simply commit suicide, as one of the Superstudio leaders suggested in 1971 (although to this day he does not appear to have done so himself). While the activist left was busily committing suicide (in various, more or less picturesque ways), the neo-conservative right took power: Margaret Thatcher became prime minister of the U.K. in 1979, and Ronald Reagan was elected president of the United States in 1980.
One truly novel alternative to the hopelessness of the 1970s only emerged in the last years of the decade, when various strains of anti-Modernist ideas coalesced into a coherent postmodern worldview: in architecture, first and foremost, thanks to Charles Jencks, but then also in philosophy and science. The postmodern sciences of complexity and non-linearity provided a powerful conceptual framework where a new “project,” including a new architectural project, could at long last take root and thrive. But the revolutionary, constructive phase of Postmodernism was short-lived: in 1984 many postmodern ideas were famously endorsed by Prince Charles, and soon the PoMos—old aristocrats and new populists alike—became known for their traditionally conservative ideology, and often reactionary political stance. Not surprisingly, it is this peculiar trend of historicist or neoclassical Postmodernism that today’s PoDigs (or self-styled Post-Digitals: most of them white, male, and British) appear to have chosen as their reference and inspirational source—but that would be a topic for another discussion.
Reactionaries will be reactionaries, so I do not blame them for playing their game—at least we know where they stand. But it grieves and worries me to see so much talent and effort being wasted today in the deliberate pursuit of failed models. What is so exciting in the revival and showy display of so many vintage technologies, and in the celebration of failure as inevitable? Is “suicide” really our best line of action today? Some said so in the 1970s, and we can now see with what results. Nostalgia, claims Don Draper in season 1, episode 13 of Mad Men, originally meant in Greek “the pain from an old wound.” But to feel that pain, one must have been wounded in the first place. I was there in the 1970s—at least, in the late 1970s—and, from what I remember, it was not fun. Losing is not fun. Let’s move on.