di Mario Carpo
There was a time when the Internet, then new and untested, was widely welcomed as a revolutionary technology that promised to alleviate—even fix—many of the evils then afflicting late modern societies. That brief, juvenile spell was followed by almost twenty years of remorse and misgivings: from the early 2000s until last month the Internet, now ubiquitous and inescapable, was viewed by many with mistrust and suspicion. This is now changing again, for obvious, contingent reasons. In just a single generation, our perception of the new electronic technologies of information and communication has already gone through two sudden and violent reversals of judgment. The second U-turn started a few weeks ago and, though evidently prompted by the coronavirus pandemic, it does not yet have a name. The first U-turn, which started in March 2000 (20 years earlier, almost to the day), is in the history books. It is known as the dot-com crash. For the benefit of younger readers, who will not remember it, here follows a brief recap of that momentous story.
Around the mid-1990s many started to claim that digital technologies were about to change the world—and to change it for the better. Architects and designers were enthralled by the creative potential of the new digital tools for design and fabrication; digital mass-customization (the mass production of variations at no extra cost, as advocated by Greg Lynn and Bernard Cache, among others) promised a complete reversal of the technical logic of industrial modernity. At the same time, sociologists and town planners were trying to make sense of a new information technology with the potential to upend all known patterns of use of urban space, and of cities in general. The Internet was still a relatively new concept (many still called it “the information superhighway” or the “infobahn”), yet some began to point out that, with its rise, many human activities were inevitably poised to migrate from physical space to what was then called “cyberspace” (i.e., again, the Internet): Amazon sold its first book in the spring of 1995.
In the years that followed, every company with a dot and a “com” in its name, as in its URL, seemed destined for the brightest of futures. So, many then thought, was the Internet in general, and with it the world economy. As the late William Mitchell pointed out in his seminal City of Bits (1995), many things we used to do in physical space could now be done more easily and more efficiently electronically: think of e-commerce, e-learning, e-working (or remote working, or telecommuting), etc. As one proverb frequently cited at the time went: for every megabyte of memory you install on your hard disk, one square foot of retail space downtown will disappear.
Strange as it may seem today, everyone at the time thought that was a splendid idea. The valuation of all dot-com companies (companies doing business on the Internet, or just saying they would do so at some point) soared. Between January 1995 and March 2000 the NASDAQ composite index, on which many of these young companies were quoted, rose by almost 600 percent. As the then-chairman of the U.S. Federal Reserve, Alan Greenspan, famously said, that extraordinary surge was not all due to “irrational exuberance”: valuations were rising because the Internet made our work in general more productive, and many things easier to find, buy, or sell, hence cheaper. Thanks to the Internet, we were told, we were all doing more with less: more work, more reading, more teaching, more learning, more researching, more interacting, more dating—you name it. The electronic transmission of data costs so much less than the mechanical transportation of persons and goods: think of the advantage of reading a scholarly article from your office—or from your couch!—without having to travel to a faraway library. What’s more, the elimination of the mechanical transportation of persons and goods could be environmentally friendly (or, as we would say today, could reduce our “carbon footprint”).
If all that seemed too good to be true, it’s because it was. The NASDAQ peaked on March 10, 2000. It lost 80 percent of its value in the 18 months that followed. That was the dot-com crash, aka the burst of the Internet bubble. Many tech companies disappeared; Amazon barely survived, after losing 88 percent of its market capitalization. The NASDAQ itself took 15 years to crawl back to its peak valuation of March 2000. In the contrite climate of those post-crash years (which were also the post-9/11 years) few still saw the Internet as a benevolent, or even a promising, technology. The anti-Internet backlash was swift and predictable. As many had warned from the start, technology should not replace human contact; there can be no community without physical proximity. Ideologues and philosophers from various quarters soon chimed in, fueling the anti-technological spirit of the time. Christian phenomenologists, for example, had long held that the elision of human dialogue started with the invention of alphabetic writing: if we write, we use a technology to transmit our voice in the absence of our body. For those sharing this worldview, disembodiment is the original sin of all media technologies: after that first and ancestral lapse into the abyss of mediated communication, things could only go from bad to worse; the Internet is just more of the same. A few years into the new millennium the so-called social media reinvented the Internet; in recent times we have learned to fear their intrusions into our privacy. Furthermore, by abolishing all traditional tools of thoughtful moderation, and giving unmediated voice to so many dissenters, outliers, and misfits, the Internet has been seen by many as the primary technical cause of the rise of populism.
(That may well be true, regrettably, although I suspect that, had I been a Roman Catholic cleric around 1540, I would have said the same of the use of the new, barbaric technology of print by the likes of John Calvin or Martin Luther.)
I write this while self-isolating in my London apartment, like hundreds of millions of Europeans, contemplating the unfolding of an unspeakable man-made catastrophe, created by human error and compounded by political cynicism, criminal calculation, and incompetence. The Internet is, literally, my lifeline. It is all I have. I wish I could use it to replace my errands to the grocery store and the pharmacy—but, as everyone is doing that, Amazon deliveries are now few and far between. Two weeks ago I started to use the Internet to improvise classes, tutorials, and meetings for my students in London and elsewhere. I wish I had started practicing a bit earlier—say, in 1994, following the example of a handful of pioneers like Mark Taylor, then at Williams College. I must also use the Internet to read the papers, to keep paying my bills, and to carry out my duties in the schools where I teach. I use it to see family and friends. I may even start using Facebook again, which I jettisoned some 12 years ago (my reasons for doing so back then are likely still posted on my Facebook page).
From my living-room windows I used to see, in the distance, the uninterrupted flow of airplanes gliding into Heathrow, evenly spaced, three or four minutes apart. Oddly, I could still see a handful today—I wonder where from, and for whom. Only a few months ago Greta Thunberg was still urging us, by words and deeds, toward flight shaming; she can rest now—she has won her battle in a big way, albeit not in any way she would have chosen. It appears that, as the carbon-heavy economy of the industrial age (or Anthropocene) has almost entirely stopped, we may have already staved off the global-warming catastrophe—or at least postponed it. Only a few months ago some climate activists were more or less openly advocating the elimination of part of the human population as the only fix to save the planet: well, there you go.
Meanwhile, one thing we have already learned is that Internet viruses are less lethal than real ones. The coronavirus traveled by plane, boat, and rail; it was born and bred as a pure product of the industrial age. If, a few months back, when this all started, we had already been using the Internet more and flying less (as we are doing now, by necessity rather than by choice), many lives would have been saved, because the virus would have had fewer conduits for spreading. So perhaps, in retrospect, this is exactly what we should have been doing all along.
Sooner or later schools, offices, cafés, restaurants, stores, and cities will reopen, somehow. When that happens, we shall be so starved for the human contact we lost, and missed, during our quarantines that my guess is the use of the Internet will plunge—at least for a while. But at that point we shall also have learned that the traditional way of working—the mechanical, “anthropocenic” way of working—is no longer the only one. We shall have had evidence that in many cases viable electronic alternatives to the mechanical transportation of persons and goods do exist, and that—when used with due precautions, and within reasonable limits—they can work pretty well. Remote working can already effectively replace plenty of facetime, thus making plenty of human travel unnecessary: the alternative to air travel is not travel by sailing boat; it is the Internet. Service work and blue-collar work cannot yet be despatialized as effectively as white-collar work, but that is not too far off in the future either: automated logistics, automated fulfillment, and fully automated robotic fabrication are already in use in some industries. Robotic factories are mostly immune to economies of scale, and they can be located closer to their markets, thus reducing the global transportation of mass-produced goods and components. Anecdotally, but meaningfully, I know that some of my friends and colleagues, like Manuel Jimenez Garcia at the Bartlett, or Jenny Sabin at Cornell, have already converted their 3D printers and robotic arms to produce protective equipment for medics and hospital workers—on site, to spec, and on demand. This is indeed the point; this is what robotic fabrication was always meant to do: where needed, when needed, as needed. The same robotic arm that made a Zaha Hadid flatware set last week can make face shields for medical staff today—10 miles from a hospital in need. No airport needed for delivery.
During the Second World War the brutality of the war effort had the side effect of revealing the effectiveness of modern technologies. Many who had resisted modernism in architecture and design before the war got used to it during the war, out of necessity; then adopted and embraced modernism out of choice, and without cultural reservations, as soon as the war was over. Likewise, the coronavirus crisis may now show that many cultural and ideological reservations against the rise of post-industrial digital technologies were based not on fact, nor on the common good, but on prejudice or self-interest. From the start of the coronavirus crisis to March 23 the Dow Jones Industrial Index lost one third of its value; the tech-heavy NASDAQ, one quarter; Amazon lost nothing; and Zoom Video, the company that makes and sells the tool many of us use for online teaching, was up almost 140 percent. That was before the U.S. government and the U.S. Federal Reserve stepped in with a number of stimulus measures, which reflated all valuations indiscriminately; at the time of this writing, April 1, Zoom Video was still up about 100 percent. This looks a bit like the dot-com crash of twenty years ago in reverse. Perhaps, as many thought and said in the 1990s, and up to March 2000, the Internet is not such a bad thing after all.
Earlier versions of this paper were read during a public lecture at the University of Applied Arts in Vienna on March 30, 2020, and published online by The Architect’s Newspaper on April 13, 2020 (https://archpaper.com/2020/04/coronavirus-might-give-internet-weve-always-wanted/).
See in particular Jean-Louis Cohen, Architecture in Uniform: Designing and Building for the Second World War (Montréal: Canadian Centre for Architecture; Paris: Hazan; New Haven, CT: Yale University Press, 2011).