The Utopia of Rules — Chapter 2: Of Flying Cars and the Declining Rate of Profit

By David Graeber




Chapter 2: Of Flying Cars and the Declining Rate of Profit

“Contemporary reality is the beta-version of a science fiction dream.”

—Richard Barbrook

There is a secret shame hovering over all of us in the twenty-first century. No one seems to want to acknowledge it.

For those in what should be the high point of their lives, in their forties and fifties, it is particularly acute, but in a broader sense it affects everyone. The feeling is rooted in a profound sense of disappointment about the nature of the world we live in, a sense of a broken promise—of a solemn promise we felt we were given as children about what our adult world was supposed to be like. I am referring here not to the standard false promises that children are always given (about how the world is fair, authorities are well-meaning, or those who work hard shall be rewarded), but to a very specific generational promise—given above all to those who were children in the fifties, sixties, seventies, or even eighties—one that was never quite articulated as a promise but rather as a set of assumptions about what our adult world would be like. And since it was never quite promised, now that it has spectacularly failed to come true, we’re left confused; indignant, but at the same time, embarrassed at our own indignation, ashamed we were ever so silly as to believe our elders to begin with.

I am referring, of course, to the conspicuous absence, in 2015, of flying cars.

Well, all right, not just flying cars. I don’t really care about flying cars—especially because I don’t even drive. What I have in mind are all the technological wonders that any child growing up in the mid-to-late twentieth century simply assumed would exist by 2015. We all know the list: Force fields. Teleportation. Antigravity fields. Tricorders. Tractor beams. Immortality drugs. Suspended animation. Androids. Colonies on Mars. What happened to them? Every now and then it’s widely trumpeted that one is about to materialize—clones, for instance, or cryogenics, or anti-aging medications, or invisibility cloaks—but when these don’t prove to be false promises, which they usually are, they emerge hopelessly flawed. Point any of this out, and the usual response is a ritual invocation of the wonders of computers—why would you want an antigravity sled when you can have Second Life?—as if this is some sort of unanticipated compensation. But, even here, we’re not nearly where people in the fifties imagined we’d have been by now. We still don’t have computers you can have an interesting conversation with, or robots that can walk the dog or fold your laundry.

Speaking as someone who was eight years old at the time of the Apollo moon landing, I have very clear memories of calculating that I would be thirty-nine years of age in the magic year 2000, and wondering what the world around me would be like. Did I honestly expect I would be living in a world of such wonders? Of course. Everyone did. And do I feel cheated now? Absolutely.

Certainly, I didn’t think I’d see all the things we read about in science fiction realized in my lifetime (even assuming my lifetime was not extended by centuries by some newly discovered longevity drug). If you had asked me at the time, I’d have guessed about half. But it never occurred to me that I wouldn’t see any of them.

I have long been puzzled and fascinated by the near silence surrounding this issue in public discourse. One does occasionally see griping about flying cars on the Internet, but it’s muted, or very marginal. For the most part, the topic is treated almost as taboo. At the turn of the millennium, for instance, I was expecting an outpouring of reflections by forty-somethings in the popular media on what we had expected the world of 2000 to be like, and why we had all gotten it so wrong. I couldn’t find a single one. Instead, just about all the authoritative voices—both Left and Right—began their reflections from the assumption that a world of technological wonders had, in fact, arrived.

To a very large extent, the silence is due to fear of being ridiculed as foolishly naïve. Certainly if one does raise the issue, one is likely to hear responses like “Oh, you mean all that Jetson stuff?” As if to say, but that was just for children! Surely, as grown-ups, we’re supposed to understand that the Jetsons future was about as realistic as the Flintstones past. But of course it wasn’t just the Jetsons. All serious science shows designed for children in the fifties, sixties, seventies, and even the eighties—the Scientific Americans, the educational TV programs, the planetarium shows in national museums—all the authoritative voices who told us what the universe was like and why the sky was blue, who explained the periodic table of elements, also assured us that the future was indeed going to involve colonies on other planets, robots, matter transformation devices, and a world much closer to Star Trek than to our own.

The fact that all these voices turned out to be wrong doesn’t just create a deep feeling of largely inexpressible betrayal; it also points to some conceptual problems about how we should even talk about history, now that things haven’t unfolded as we thought they would. There are contexts where we really can’t just wave our hands and make the discrepancy between expectations and reality go away. One obvious one is science fiction. Back in the twentieth century, creators of science fiction movies used to come up with concrete dates in which to place their futuristic fantasies. Often these were no more than a generation in the future. Thus in 1968, Stanley Kubrick felt that a moviegoing audience would find it perfectly natural to assume that only thirty-three years later, in 2001, we would have commercial moon flights, city-like space stations, and computers with humanlike personalities maintaining astronauts in suspended animation while traveling to Jupiter.[78] In fact, about the only new technologies from 2001 that actually did appear were video telephones, but those were already technically possible in 1968—at the time, they were simply unmarketable because no one really wanted to have one.[79] Similar problems crop up whenever a particular writer, or program, tries to create a grand mythos. According to the universe created by Larry Niven, which I got to know as a teenager, humans in this decade (the 2010s) are living under a one-world U.N. government and creating their first colony on the moon, while dealing with the social consequences of medical advances that have created a class of immortal rich people. In the Star Trek mythos developed around the same time, in contrast, humans would now be recovering from fighting off the rule of genetically engineered supermen in the Eugenics Wars of the 1990s—a war which ended when we shot them all in suspension pods into outer space. Star Trek writers in the 1990s were thus forced to start playing around with alternate time lines and realities just as a way of keeping the whole premise from falling apart.

By 1989, when the creators of Back to the Future II dutifully placed flying cars and antigravity hoverboards in the hands of ordinary teenagers in the year 2015, it wasn’t clear if it was meant as a serious prediction, a bow to older traditions of imagined futures, or as a slightly bitter joke. At any rate, it marked one of the last instances of this sort of thing. Later science fiction futures were largely dystopian, moving from bleak technofascism into some kind of stone-age barbarism, as in Cloud Atlas, or else, studiously ambiguous: the writers remaining coy about the dates, which renders “the future” a zone of pure fantasy, no different really than Middle Earth or Cimmeria. They might even, as with Star Wars, place the future in the past, “a long time ago in a galaxy far, far away.” This Future is, most often, not really a future at all, but more like an alternative dimension, a dream-time, some kind of technological Elsewhere, existing in days to come in the same sense that elves and dragon-slayers existed in the past; just another screen for the projection of moral dramas and mythic fantasies. Science fiction has now become just another set of costumes in which one can dress up a Western, a war movie, a horror flick, a spy thriller, or just a fairy tale.

I think it would be wrong, however, to say that our culture has completely sidestepped the issue of technological disappointment. Embarrassment over this issue has ensured that we’ve been reluctant to engage with it explicitly. Instead, as with so many other cultural traumas, the pain has been displaced; we can only talk about it when we think we’re talking about something else.

In retrospect, it seems to me that the entire fin de siècle cultural sensibility that came to be referred to as “postmodernism” might best be seen as just such a prolonged meditation on technological changes that never happened. The thought first struck me when watching one of the new Star Wars movies. The movie was awful. But I couldn’t help but be impressed by the quality of the special effects. Recalling all those clumsy effects typical of fifties sci-fi films, the tin spaceships being pulled along by almost-invisible strings, I kept thinking about how impressed a 1950s audience would have been if they’d known what we could do by now—only to immediately realize, “actually, no. They wouldn’t be impressed at all, would they? They thought that we’d actually be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.”

That last word, “simulate,” is key. What technological progress we have seen since the seventies has largely been in information technologies—that is, technologies of simulation. They are technologies of what Jean Baudrillard and Umberto Eco used to call the “hyper-real”—the ability to make imitations more realistic than the original. The entire postmodern sensibility, the feeling that we had somehow broken into an unprecedented new historical period where we understood that there was nothing new; that grand historical narratives of progress and liberation were meaningless; that everything now was simulation, ironic repetition, fragmentation and pastiche: all this only makes sense in a technological environment where the only major breakthroughs were ones making it easier to create, transfer, and rearrange virtual projections of things that either already existed, or, we now came to realize, never really would. Surely, if we were really taking our vacations in geodesic domes on Mars, or toting about pocket-sized nuclear fusion plants or telekinetic mind-reading devices, no one would ever have been talking like this. The “postmodern” moment was simply a desperate way to take what could only otherwise be felt as a bitter disappointment, and dress it up as something epochal, exciting and new.

It’s worthy of note that in the earliest formulations of postmodernism, which largely came out of the Marxist tradition, a lot of this technological subtext was not even subtext; it was quite explicit.

Here’s a passage from Fredric Jameson’s original Postmodernism, or the Cultural Logic of Late Capitalism, published in 1984:

It is appropriate to recall the excitement of machinery in the moment of capital preceding our own, the exhilaration of futurism, most notably, and of Marinetti’s celebration of the machine gun and the motorcar. These are still visible emblems, sculptural nodes of energy which give tangibility and figuration to the motive energies of that earlier moment of modernization … the ways in which revolutionary or communist artists of the 1930s also sought to reappropriate this excitement of machine energy for a Promethean reconstruction of human society as a whole …

It is immediately obvious that the technology of our own moment no longer possesses this same capacity for representation: not the turbine, nor even Sheeler’s grain elevators or smokestacks, not the baroque elaboration of pipes and conveyor belts, nor even the streamlined profile of the railroad train—all vehicles of speed still concentrated at rest—but rather the computer, whose outer shell has no emblematic or visual power, or even the casings of the various media themselves, as with that home appliance called television which articulates nothing but rather implodes, carrying its flattened image surface within itself.[80]

Where once the sheer physical power of technologies themselves gave us a sense of history sweeping forward, we are now reduced to a play of screens and images.

Jameson originally proposed the term “postmodernism” to refer to the cultural logic appropriate to a new phase of capitalism, one that Ernest Mandel had, as early as 1972, dubbed a “third technological revolution.” Humanity, Mandel argued, stood on the brink of a transformation as profound as the agricultural or industrial revolutions had been: one in which computers, robots, new energy sources, and new information technologies would, effectively, replace old-fashioned industrial labor—the “end of work” as it soon came to be called—reducing us all to designers and computer technicians coming up with the crazy visions that cybernetic factories would actually produce.[81] End of work arguments became increasingly popular in the late seventies and early eighties, as radical thinkers pondered what would happen to traditional working-class struggle once there was no longer a working class. (The answer: it would turn into identity politics.)

Jameson thought of himself as exploring the forms of consciousness and historical sensibilities likely to emerge from this new age. Of course, as we all know, these technological breakthroughs did not, actually, happen. What happened instead is that the spread of information technologies and new ways of organizing transport—the containerization of shipping, for example—allowed those same industrial jobs to be outsourced to East Asia, Latin America, and other parts of the world where the availability of cheap labor generally allowed manufacturers to employ much less technologically sophisticated production-line techniques than they would have been obliged to employ at home. True, from the perspective of those living in Europe and North America, or even Japan, the results did seem superficially to be much as predicted. Smokestack industries did increasingly disappear; jobs came to be divided between a lower stratum of service workers and an upper stratum sitting in antiseptic bubbles playing with computers. But below it all lay an uneasy awareness that this whole new post-work civilization was, basically, a fraud. Our carefully engineered high-tech sneakers were not really being produced by intelligent cyborgs or self-replicating molecular nanotechnology; they were being made on the equivalent of old-fashioned Singer sewing machines, by the daughters of Mexican and Indonesian farmers who had, as the result of WTO or NAFTA-sponsored trade deals, been ousted from their ancestral lands. It was this guilty awareness, it seems to me, that ultimately lay behind the postmodern sensibility, its celebration of the endless play of images and surfaces, and its insistence that ultimately, all those modernist narratives that were supposed to give those images depth and reality had been proved to be a lie.

So: Why did the projected explosion of technological growth everyone was expecting—the moon bases, the robot factories—fail to materialize? Logically, there are only two possibilities. Either our expectations about the pace of technological change were unrealistic, in which case, we need to ask ourselves why so many otherwise intelligent people felt they were not. Or our expectations were not inherently unrealistic, in which case, we need to ask what happened to throw the path of technological development off course.

When cultural analysts nowadays do consider the question—which they rarely do—they invariably choose the first option. One common approach is to trace the problem back to illusions created by the Cold War space race. Why, many have asked, did both the United States and the Soviet Union become so obsessed with the idea of manned space travel in the fifties, sixties, and seventies? It was never an efficient way to engage in scientific research. Was it not the fact that both the Americans and Russians had been, in the century before, societies of pioneers, the one expanding across the Western frontier, the other, across Siberia? Was it not the same shared commitment to the myth of a limitless, expansive future, of human colonization of vast empty spaces, that helped convince the leaders of both superpowers they had entered into a new “space age” in which they were ultimately battling over control of the future itself? And did not that battle ultimately produce, on both sides, completely unrealistic conceptions of what that future would actually be like?[82]

Obviously there is truth in this. There were powerful myths at play. But most great human projects are rooted in some kind of mythic vision—this, in itself, proves nothing, one way or the other, about the feasibility of the project itself. In this essay, I want to explore the second possibility. It seems to me there are good reasons to believe that at least some of those visions were not inherently unrealistic—and that at least some of these science fiction fantasies (at this point we can’t know which ones) could indeed have been brought into being. The most obvious reason is that, in the past, they regularly had been. After all, if someone growing up at the turn of the century reading Jules Verne or H. G. Wells tried to imagine what the world would be like in, say, 1960, they imagined a world of flying machines, rocket ships, submarines, new forms of energy, and wireless communication … and that was pretty much exactly what they got. If it wasn’t unrealistic in 1900 to dream of men traveling to the moon, why was it unrealistic in the sixties to dream of jet-packs and robot laundry-maids? If from 1750 to 1950 new power sources emerged regularly (steam, electric, petroleum, nuclear …), was it that unreasonable to imagine we’d have seen at least one new one since?

There is reason to believe that even by the fifties and sixties, the real pace of technological innovation was beginning to slow from the heady pace of the first half of the century. There was something of a last spate of inventions in the fifties when microwave ovens (1954), the pill (1957), and lasers (1958) all appeared in rapid succession. But since then, most apparent technological advances have largely taken the form of either clever new ways of combining existing technologies (as in the space race), or new ways to put existing technologies to consumer use (the most famous example here is television, invented in 1926, but only mass-produced in the late forties and early fifties, in a self-conscious effort to create new consumer demand to ensure the American economy didn’t slip back into depression). Yet the space race helped convey the notion that this was an age of remarkable advances, and the predominant popular impression during the sixties was that the pace of technological change was speeding up in terrifying, uncontrollable ways. Alvin Toffler’s 1970 breakaway bestseller Future Shock can be seen as a kind of high-water mark of this line of thought. In retrospect, it’s a fascinating and revealing book.[83]

Toffler argued that almost all of the social problems of the 1960s could be traced back to the increasing pace of technological change. As an endless outpouring of new scientific breakthroughs continually transformed the very grounds of our daily existence, he wrote, Americans were left rudderless, without any clear idea of what normal life was supposed to be like. Perhaps it was most obvious in the case of the family, where, he claimed, not just the pill, but also the prospect of in vitro fertilization, test tube babies, and sperm and egg donation were about to make the very idea of motherhood obsolete. Toffler saw similar things happening in every domain of social life—nothing could be taken for granted. And humans were not psychologically prepared for the pace of change. He coined a term for the phenomenon: “accelerative thrust.” This quickening of the pace of technological advance had begun, perhaps, with the industrial revolution, but by roughly 1850, he argued, the effect had become unmistakable. Not only was everything around us changing, most of it— the sheer mass of human knowledge, the size of the population, industrial growth, the amount of energy being consumed—was changing at an exponential rate. Toffler insisted that the only solution was to begin to create some kind of democratic control over the process—institutions that could assess emerging technologies and the effects they were likely to have, ban those technologies likely to be too socially disruptive, and guide development in directions that would foster social harmony.

The fascinating thing is that while many of the historical trends Toffler describes are accurate, the book itself appeared at almost precisely the moment when most of them came to an end. For instance, it was right around 1970 when the increase in the number of scientific papers published in the world—a figure that had been doubling every fifteen years since roughly 1685—began leveling off. The same is true of the number of books and patents. In other areas, growth did not just slow down—it stopped entirely. Toffler’s choice of the word “acceleration” turns out to have been particularly unfortunate. For most of human history, the top speed at which human beings could travel had lingered around twenty-five miles per hour. By 1900 it had increased to perhaps 100 mph, and for the next seventy years it did indeed seem to be increasing exponentially. By the time Toffler was writing, in 1970, the record for the fastest speed at which any human had traveled stood at 24,791 mph, achieved by the crew of Apollo 10 while reentering the earth’s atmosphere in 1969, just a year before. At such an exponential rate, it must have seemed reasonable to assume that within a matter of decades, humanity would be exploring other solar systems. Yet no further increase has occurred since 1970.

The record for the fastest a human has ever traveled remains with the crew of Apollo 10. True, the maximum speed of commercial air flight did peak one year later, at 1,400 mph, with the launching of the Concorde in 1971. But airline speed has not only failed to increase since—it has actually decreased since the Concorde’s abandonment in 2003.[84][85]
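
A rough extrapolation, using nothing but the figures just given, shows why that curve looked so irresistible around 1970 (the arithmetic below is a back-of-the-envelope sketch, not anything Toffler himself spelled out):

\[
\frac{24{,}791 \text{ mph}}{100 \text{ mph}} \approx 250 \approx 2^{8}
\quad\Longrightarrow\quad
\text{one doubling roughly every } \tfrac{69}{8} \approx 9 \text{ years between 1900 and 1969.}
\]

Projected forward at that rate, the record would have passed a million miles per hour somewhere around 2015, and brushed against the speed of light, about \(6.7 \times 10^{8}\) mph, before the end of the twenty-first century. On paper, reaching other solar systems really did look like a matter of staying on the curve.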

The fact that Toffler turned out to be wrong about almost everything had no deleterious effects on his own career. Charismatic prophets rarely suffer much when their prophecies fail to materialize. Toffler just kept retooling his analysis and coming up with new spectacular pronouncements every decade or so, always to great public recognition and applause. In 1980 he produced a book called The Third Wave,[86] its argument lifted directly from Ernest Mandel’s “third technological revolution”—except that while Mandel argued these changes would spell the eventual end of capitalism, Toffler simply assumed that capitalism would be around forever. By 1990, he had become the personal intellectual guru of Republican congressman Newt Gingrich, who claimed that his own 1994 “Contract with America” was inspired, in part, by the understanding that the United States needed to move from an antiquated, materialist, industrial mindset to a new, free-market, information-age, Third Wave civilization.

There are all sorts of ironies here. Probably one of the greatest real-world achievements of Future Shock had been to inspire the government to create an Office of Technology Assessment (OTA) in 1972, more or less in line with Toffler’s call for some sort of democratic oversight over potentially disruptive technologies. One of Gingrich’s first acts on winning control of Congress in 1995 was to defund the OTA as an example of useless government waste. Again, none of this seemed to faze Toffler at all. By that time, he had long since given up trying to influence policy by appealing to the general public, or even really trying to influence political debate; he was, instead, making a living largely by giving seminars to CEOs and the denizens of corporate think tanks. His insights had, effectively, been privatized.

Gingrich liked to call himself a “conservative futurologist.” This might seem oxymoronic; but actually, if you look back at Toffler’s work, the guru’s politics line up precisely with his student’s, and it’s rather surprising anyone ever took him for anything else. The argument of Future Shock is the very definition of conservatism. Progress was always presented as a problem that needed to be solved. True, his solution was ostensibly to create forms of democratic control, but in effect, “democratic” obviously meant “bureaucratic,” the creation of panels of experts to determine which inventions would be approved, and which put on the shelf. In this way, Toffler might best be seen as a latter-day, intellectually lightweight version of the early nineteenth-century social theorist Auguste Comte. Comte, too, felt that he was standing on the brink of a new age—in his case, the industrial age—driven by the inexorable progress of technology, and that the social cataclysms of his times were really caused by the social system not having managed to adjust. The older, feudal order had developed not only Catholic theology, a way of thinking about man’s place in the cosmos perfectly suited to the social system of the time, but an institutional structure, the Church, that conveyed and enforced such ideas in a way that could give everyone a sense of meaning and belonging. The current, industrial age had developed its own system of ideas—science—but scientists had not succeeded in creating anything like the Catholic Church. Comte concluded that we needed to develop a new science, which he dubbed “sociology,” and that sociologists should play the role of priests in a new Religion of Society that would inspire the masses with a love of order, community, work-discipline, and patriarchal family values. Toffler was less ambitious: his futurologists were not supposed to actually play the role of priests. But he shared the same feeling that technology was leading humans to the brink of a great historical break, the same fear of social breakdown, and, for that matter, the same obsession with the need to preserve the sacred role of motherhood—Comte wanted to put the image of a pregnant woman on the flag of his religious movement.

Gingrich did have another guru who was overtly religious: George Gilder, a libertarian theologian and author of, among other things, a newsletter called the “Gilder Technology Report.” Gilder was also obsessed with the relation of technology and social change, but in an odd way, he was far more optimistic. Embracing an even more radical version of Mandel’s Third Wave argument, he insisted that what we had been seeing since the 1970s with the rise of computers was a veritable “overthrow of matter.” The old, materialist, industrial society, where value came from physical labor, was giving way to an information age where value emerged directly from the minds of entrepreneurs, just as the world had originally appeared ex nihilo from the mind of God, just as money, in a proper supply-side economy, emerged ex nihilo from the Federal Reserve and into the hands of creative, value-creating capitalists. Supply-side economic policies, he concluded, would ensure that investment would continue to steer away from old government boondoggles like the space program, and towards more productive information and medical technologies.

Gilder, who had begun his career declaring that he aspired to be “America’s premier antifeminist,” also insisted that such salutary developments could only be maintained by strict enforcement of traditional family values. He did not propose a new religion of society. He didn’t feel he had to, since the same work could be done by the Christian evangelical movement that was already forging its strange alliance with the libertarian right.[87]

One would be unwise, perhaps, to dwell too much on such eccentric characters, however influential. For one thing, they came very late in the day. If there was a conscious, or semiconscious, move away from investment in research that might have led to better rockets and robots, and towards research that would lead to such things as laser printers and CAT scans, it had already begun before the appearance of Toffler’s Future Shock (1970), let alone Gilder’s Wealth and Poverty (1981).[88] What their success does show is that the issues these men raised—the concern that existing patterns of technological development would lead to social upheaval, the need to guide technological development in directions that did not challenge existing structures of authority—found a receptive ear in the very highest corridors of power. There is every reason to believe that statesmen and captains of industry were indeed thinking about such questions, and had been for some time.[89]

So what happened? Over the course of the rest of this essay, which is divided into three parts, I am going to consider a number of factors that I think contributed to ensuring the technological futures we all anticipated never happened. These fall into two broad groups. One is broadly political, having to do with conscious shifts in the allocation of research funding; the other bureaucratic, a change in the nature of the systems administering scientific and technological research.

Thesis: There appears to have been a profound shift, beginning in the 1970s, from investment in technologies associated with the possibility of alternative futures to investment in technologies that furthered labor discipline and social control

“The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society … All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.”

—Marx and Engels, Manifesto of the Communist Party (1847)

“I said that fun was very important, too, that it was a direct rebuttal of the kind of ethics and morals that were being put forth in the country to keep people working in a rat race which didn’t make any sense because in a few years the machines would do all the work anyway, that there was a whole system of values that people were taught to postpone their pleasure, to put all their money in the bank, to buy life insurance, a whole bunch of things that didn’t make any sense to our generation at all.”

—Abbie Hoffman, from the trial of the Chicago Seven (1970)

Since its inception in the eighteenth century, the system that has come to be known as “industrial capitalism” has fostered an extremely rapid rate of scientific advance and technological innovation—one unparalleled in previous human history. Its advocates have always held this out as the ultimate justification for the exploitation, misery, and destruction of communities the system also produced. Even its most famous detractors, Karl Marx and Friedrich Engels, were willing to celebrate capitalism—if for nothing else—for its magnificent unleashing of the “productive forces.” Marx and Engels also believed that that very tendency, or, to be more precise, capitalism’s very need to continually revolutionize the means of industrial production, would eventually be its undoing.

Is it possible that they were right? And is it also possible that in the sixties, capitalists, as a class, began to figure this out?

Marx’s specific argument was that, for certain technical reasons, value, and therefore profits, can only be extracted from human labor. Competition forces factory owners to mechanize production, so as to reduce labor costs, but while this is to the short-term advantage of the individual firm, the overall effect of such mechanization is actually to drive the overall rate of profit of all firms down. For almost two centuries now, economists have debated whether all this is really true. But if it is true, the otherwise mysterious decision by industrialists not to pour research funds into the invention of the robot factories that everyone was anticipating in the sixties, and instead to begin to relocate their factories to more labor-intensive, low-tech facilities in China or the Global South, makes perfect sense.[90]
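
The mechanics behind that claim are usually written out as a simple ratio; what follows is the textbook Marxist shorthand, not something spelled out above:

\[
r = \frac{s}{c + v}
\]

where \(r\) is the rate of profit, \(s\) the surplus value extracted from living labor, \(v\) the variable capital spent on wages, and \(c\) the constant capital sunk in machinery and materials. Mechanization raises \(c\) relative to \(v\); if surplus can only come from the labor purchased with \(v\), the ratio is squeezed downward unless the rate of exploitation \(s/v\) rises fast enough to compensate, and whether it ever does is precisely what that two-century debate has been about.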

I’ve already observed that there’s reason to believe the pace of technological innovation in productive processes—the factories themselves—had already begun to slow down considerably in the fifties and sixties. Obviously it didn’t look that way at the time. What made it appear otherwise were largely the side-effects of U.S. rivalry with the Soviet Union. This seems to have been true in two ways. One was a conscious policy: the Cold War saw frenetic efforts by U.S. industrial planners[91] to find ways to apply existing technologies to consumer purposes, to create an optimistic sense of burgeoning prosperity and guaranteed progress that, it was hoped, would undercut the appeal of radical working-class politics. The famous 1959 “kitchen debate” between Richard Nixon and Nikita Khrushchev made the politics quite explicit: “your communist ‘worker’s state’ may have beat us into outer space,” Nixon effectively argued, “but it’s capitalism that creates technology like washing machines that actually improve the lives of the toiling masses.” The other was the space race. In either case, the initiative really came from the Soviet Union itself. All this is difficult for Americans to remember, because with the end of the Cold War, the popular image of the USSR switched so quickly from terrifying rival to pathetic basket case—the exemplar of a society that “just didn’t work.” Back in the fifties, many U.S. planners were laboring under the suspicion that the Soviet system quite possibly worked much better than their own. Certainly, they keenly recalled the fact that in the 1930s, while the United States was mired in depression, the Soviet Union was maintaining almost unprecedented economic growth rates of 10 to 12 percent a year—an achievement quickly followed by the production of the vast tank armies that defeated Hitler, and of course, the launching of Sputnik in 1957, followed by the first manned spacecraft, the Vostok, in 1961. When Khrushchev assured Nixon that Soviet living standards would surpass those of the Americans in seven years, many Americans feared he might actually be right.

It’s often said that the Apollo moon landing was the greatest historical achievement of Soviet communism. Surely, the United States would never have contemplated such a feat had it not been for the cosmic ambitions of the Soviet Politburo. Even putting things this way is a bit startling. “Cosmic ambitions?” We are used to thinking of the Politburo as a group of unimaginative gray bureaucrats, but while the Soviet Union was certainly run by bureaucrats, they were, from the beginning, bureaucrats who dared to dream astounding dreams. (The dream of world revolution was just the first.) Of course, most of their grandiose projects—changing the course of mighty rivers, that sort of thing—either turned out to be ecologically and socially disastrous, or, like Stalin’s projected one-hundred-story Palace of the Soviets, which was to be topped by a twenty-story statue of Lenin, never got off the ground. And after the initial successes of the Soviet space program, most projects remained on the drawing board. But the Soviet leadership never ceased coming up with new ones. Even in the eighties, when the United States was attempting its own last—itself abortive—grandiose scheme, Star Wars, the Soviets were still planning and scheming ways to transform the world through creative uses of technology. Few outside of Russia now remember most of these projects, but vast resources were devoted to them. It’s also worth noting that unlike the Star Wars project, which was a purely military project designed to sink the Soviet Union, most were peaceful: for instance, the attempt to solve the world hunger problem by harvesting lakes and oceans with an edible bacterium called spirulina, or to solve world energy problems by a truly breathtaking plan to launch hundreds of gigantic solar power platforms into orbit and beam the resulting electricity back to earth.[92]

Even the golden age of science fiction, which had its heyday in the fifties and sixties, and which first developed that standard repertoire of future inventions—force fields, tractor beams, warp drives—that any contemporary eight-year-old is familiar with (just as surely as they will know that garlic, crosses, stakes, and sunlight are what’s most likely to be of help in slaying vampires) occurred in the United States and the USSR simultaneously.[93] Or consider Star Trek, that quintessence of American mythology. Is not the Federation of Planets—with its high-minded idealism, strict military discipline, and apparent lack of both class differences and any real evidence of multiparty democracy—really just an Americanized vision of a kinder, gentler Soviet Union, and above all, one that actually “worked”?[94]

What I find remarkable about Star Trek, in particular, is that there is not only no real evidence of democracy, but that almost no one seems to notice its absence. Granted, the Star Trek universe has been endlessly elaborated, with multiple series, movies, books and comics, even encyclopedias, not to mention decades’ worth of every sort of fan fiction, so the question of the political constitution of the Federation did eventually have to come up. And when it did there was no real way anyone could say it was not a democracy. So one or two late references to the Federation as having an elected President and legislature were duly thrown in. But this is meaningless. Signs of real democratic life are entirely absent in the show—no character ever makes even a passing reference to elections, political parties, divisive issues, opinion polls, slogans, plebiscites, protests, or campaigns. Does Federation “democracy” even operate on a party system? If so, what are the parties? What sort of philosophy or core constituency does each represent? In 726 episodes we’re not given the slightest clue.[95]

One might object: the characters themselves are part of Star Fleet. They’re in the military. True; but in real democratic societies, or even constitutional republics like the United States, soldiers and sailors regularly express political opinions about all sorts of things. You never see anyone in Star Fleet saying, “I never should have voted for those idiots pushing the expansionist policy, now look what a mess they’ve gotten into in Sector 5” or “when I was a student I was active in the campaign to ban terraforming of class-C planets but now I’m not sure we were right.” When political problems do arise, and they regularly do, those sent in to deal with them are invariably bureaucrats, diplomats, and officials. Star Trek characters complain about bureaucrats all the time. They never complain about politicians. Because political problems are always addressed solely through administrative means.[96]

But this is of course exactly what one would expect under some form of state socialism. We tend to forget that such regimes, also, invariably claimed to be democracies. On paper, the USSR under Stalin boasted an exemplary constitution, with far more democratic controls than European parliamentary systems of the time. It was just that, much as in the Federation, none of this had any bearing on how life actually worked.

The Federation, then, is Leninism brought to its full and absolute cosmic success—a society where secret police, reeducation camps, and show trials are not necessary because a happy conjuncture of material abundance and ideological conformity ensures the system can now run entirely by itself.

While no one seems to know or much care about the Federation’s political composition, its economic system has, from the eighties onward, been subject to endless curiosity and debate. Star Trek characters live under a regime of explicit communism. Social classes have been eliminated. So too have divisions based on race, gender, or ethnic origin.[97] The very existence of money, in earlier periods, is considered a weird and somewhat amusing historical curiosity. Menial labor has been automated into nonexistence. Floors clean themselves. Food, clothing, tools and weapons can be whisked into existence at will with a mere expenditure of energy, and even energy does not seem to be rationed in any significant way. All this did raise some hackles, and it would be interesting to write a political history of the debate over the economics of the future it sparked in the late eighties and early nineties. I well remember watching filmmaker Michael Moore, in a debate with editors of The Nation, pointing out that Star Trek showed that ordinary working-class Americans were far more amenable to overt anticapitalist politics than the beacons of the mainstream “progressive” left. It was around that time, too, that conservatives and libertarians on the Internet also began to take notice, filling newsgroups and other electronic forums with condemnations of the show as leftist propaganda.[98] But suddenly, we learned that money had not entirely disappeared. There was latinum. Those who traded in it, however, were an odious race who seemed to be almost exactly modeled on Medieval Christian stereotypes of Jews, except with oversize ears instead of oversize noses. (Amusingly, they were given a name, Ferengi, that is actually the Arabic and Hindi term for “annoying white person.”)[99] On the other hand, the suggestion that the Federation was promoting communism was undercut by the introduction of the Borg, a hostile civilization so utterly communistic that individuality had been effaced completely, sucking any sentient life form it assimilated into one terrifying beehive mind.

By the time of the moon landing of 1969, U.S. planners no longer took their competition seriously. The Soviets had lost the space race, and as a result, the actual direction of American research and development could shift away from anything that might lead to the creation of Mars bases and robot factories, let alone become the technological basis for a communist utopia.

The standard line, of course, is that this shift of priorities was simply the natural result of the triumph of the market. The Apollo program was the quintessential Big Government project—Soviet-inspired in the sense that it required a vast national effort, coordinated by an equally vast government bureaucracy. As soon as the Soviet threat was safely out of the picture, this story goes, capitalism was free to revert to lines of technological development more in accord with its normal, decentralized, free-market imperatives—such as privately funded research into marketable products like touch-pad phones, adventurous little start-ups, and the like. This is, certainly, the line that men like Toffler and Gilder began taking in the late seventies and early eighties. But it’s obviously wrong.

First of all, the amount of really innovative research being done in the private sector has actually declined since the heyday of Bell Labs and similar corporate research divisions in the fifties and sixties. Partly this is because of a change of tax regimes. The phone company was willing to invest so much of its profits in research because those profits were highly taxed—given the choice between sinking the money into higher wages for its workers (which bought loyalty) and research (which made sense to a company that was still locked in the old mind-set that said corporations were ultimately about making things, rather than making money), and having that same money simply appropriated by the government, the choice was obvious. After the changes in the seventies and eighties described in the introduction, all this changed. Corporate taxes were slashed. Executives, whose compensation now increasingly took the form of stock options, began not just paying the profits to investors in dividends, but spending money that would otherwise have been directed towards raises, hiring, or research budgets on stock buybacks, raising the value of the executives’ portfolios but doing nothing to further productivity. In other words, tax cuts and financial reforms had almost precisely the opposite effect to the one their proponents claimed they would have.

At the same time, the U.S. government never did abandon gigantic state-controlled schemes of technological development. It just shifted their emphasis sharply away from civilian projects like the space program and in the direction of military research—not just Star Wars, which was Reagan’s version of a vast Soviet-scale project, but an endless variety of weapons projects, research in communications and surveillance technologies, and similar “security-related” concerns. To some degree this had always been true: the billions poured into missile research alone had always dwarfed the relatively insignificant sums allocated to the space program. Yet by the 1970s, even much basic research came to be conducted following essentially military priorities. The most immediate reason we don’t have robot factories is that, for the last several decades, some 95 percent of robotics research funding has been channeled through the Pentagon, which is of course far more interested in the kind of discoveries that might lead to the development of unmanned drones than fully automated bauxite mines or robot gardeners.

These military projects did have their own civilian spin-offs: the Internet is one. But they had the effect of guiding development in very specific directions.

One might suggest an even darker possibility. A case could be made that even the shift into R&D on information technologies and medicine was not so much a reorientation towards market-driven consumer imperatives as part of an all-out effort to follow the technological humbling of the Soviet Union with total victory in the global class war: not only the imposition of absolute U.S. military dominance overseas, but the utter rout of social movements back home. The technologies that emerged were in almost every case the kind that proved most conducive to surveillance, work discipline, and social control. Computers have opened up certain spaces of freedom, as we’re constantly reminded, but instead of leading to the workless utopia Abbie Hoffman or Guy Debord imagined, they have been employed in such a way as to produce the opposite effect. Information technology has allowed a financialization of capital that has driven workers ever more desperately into debt, while, at the same time, allowing employers to create new “flexible” work regimes that have destroyed traditional job security and led to a massive increase in overall working hours for almost all segments of the population. Along with the export of traditional factory jobs, this has put the union movement to rout and thus destroyed any real possibility of effective working-class politics.[100] Meanwhile, despite unprecedented investment in research on medicine and life sciences, we still await cures for cancer or even the common cold; instead, the most dramatic medical breakthroughs we have seen have taken the form of drugs like Prozac, Zoloft, or Ritalin—tailor-made, one might say, to ensure that these new professional demands don’t drive us completely, dysfunctionally, crazy.

When historians write the epitaph for neoliberalism, they will have to conclude that it was the form of capitalism that systematically prioritized political imperatives over economic ones.

That is: given a choice between a course of action that will make capitalism seem like the only possible economic system, and one that will make capitalism actually be a more viable long-term economic system, neoliberalism has meant always choosing the former. Does destroying job security while increasing working hours really create a more productive (let alone innovative, loyal) workforce? There is every reason to believe that exactly the opposite is the case. In purely economic terms the result of neoliberal reform of labor markets is almost certainly negative—an impression that overall lower economic growth rates in just about all parts of the world in the eighties and nineties would tend to reinforce. However, it has been spectacularly effective in depoliticizing labor. The same could be said of the burgeoning growth in armies, police, and private security services. They’re utterly unproductive—nothing but a resource sink. It’s quite possible, in fact, that the very weight of the apparatus created to ensure the ideological victory of capitalism will itself ultimately sink it. But it’s also easy to see how, if the ultimate imperative of those running the world is to make capitalism seem like the only possible economic system, then choking off the possibility of any sense of an inevitable, redemptive future that will be fundamentally different from the world today must be a crucial part of the neoliberal project.

Antithesis: Yet even those areas of science and technology that did receive massive funding have not seen the breakthroughs originally anticipated

At this point, the pieces would seem to be falling neatly into place. By the 1960s, conservative political forces had become skittish about the socially disruptive effects of technological progress, which they blamed for the social upheavals of the era, and employers were beginning to worry about the economic impact of mechanization. The fading of the Soviet threat allowed for a massive reallocation of resources in directions seen as less challenging to social and economic arrangements—and ultimately, to ones that could support a campaign to sharply reverse the gains progressive social movements had made since the forties, thus achieving a decisive victory in what U.S. elites did indeed see as a global class war. The change of priorities was touted as a withdrawal of big-government projects and a return to the market, but it actually involved a shift in the orientation of government-directed research, away from programs like NASA—or, say, alternative energy sources—and toward even more intense focus on military, information, and medical technologies.

I think all this is true as far as it goes; but it can’t explain everything. Above all, it cannot explain why even in those areas that have become the focus of well-funded research projects, we have not seen anything like the kind of advances anticipated fifty years ago. To take only the most obvious example: if 95 percent of robotics research has been funded by the military, why is there still no sign of Klaatu-style killer robots shooting death rays from their eyes? Because we know they’ve been working on that.

Obviously, there have been advances in military technology. It’s widely acknowledged that one of the main reasons we all survived the Cold War is that while nuclear bombs worked more or less as advertised, the delivery systems didn’t; Intercontinental Ballistic Missiles weren’t really capable of hitting cities, let alone specific targets inside them, which meant there was little point in launching a nuclear first strike unless you were consciously intending to destroy the world. Contemporary cruise missiles, in contrast, are fairly accurate. Still, all those much-vaunted precision weapons never seem capable of taking out specific individuals (Saddam, Osama, Gaddafi), even if hundreds are dropped. Drones are just model airplanes, driven by remote control. And ray guns of any sort have not materialized, surely not for lack of trying—we have to assume the Pentagon has poured billions into coming up with one, but the closest they’ve come so far are lasers (a fifties technology) that might, if aimed correctly, make an enemy gunner looking directly at the beam go blind. This is not just unsporting, but rather pathetic. Phasers that can be set to stun do not appear to even be on the drawing boards; in fact, when it comes to infantry combat, the preferred weapon in 2011, almost everywhere, remains the AK-47, a Soviet design, named after the year it was first introduced: 1947.[101]

The same, as I’ve already noted, can be said of widely anticipated breakthroughs in medicine, and even (dare I say?) computers. The Internet is surely a remarkable thing. Still, if a fifties sci-fi fan were to appear in the present and ask what the most dramatic technological achievement of the intervening sixty years had been, it’s hard to imagine the reaction would be anything but bitter disappointment. He would almost certainly point out that all we are really talking about here is a super-fast and globally accessible combination of library, post office, and mail order catalog. “Fifty years and this is the best our scientists managed to come up with? We were expecting computers that could actually think!”

All this is true, despite the fact that overall levels of research funding have increased dramatically since the 1970s. Of course, the proportion of that funding that comes from the corporate sector has increased even more dramatically, to the point where private enterprise is now funding twice as much research as the government. But the total increase is so large that the overall amount of government research funding, in real dollar terms, is still much higher than it was before. Again, while “basic,” “curiosity-driven,” or “blue skies” research—the kind that is not driven by the prospect of any immediate practical application, and which is therefore most likely to lead to unexpected breakthroughs—is an ever-smaller proportion of the total, so much money is being thrown around nowadays that overall levels of basic research funding have actually gone up. Yet most honest assessments have agreed that the results have been surprisingly paltry. Certainly we no longer see anything like the continual stream of conceptual revolutions—genetic inheritance, relativity, psychoanalysis, quantum mechanics—that humanity had grown used to, and even to expect, a hundred years before.

Why?

One common explanation is that when funders do conduct basic research, they tend to put all their eggs in one gigantic basket: “Big Science,” as it has come to be called. The Human Genome Project is often held out as an example. Initiated by the U.S. government, the project ended up spending almost three billion dollars and employing thousands of scientists and staff in five different countries, generating enormous expectations, only to discover that human gene sequences are nearly identical to those of chimpanzees, distinctly less complicated than the gene sequences of, say, rice, and that there would appear to be very little to be learned from them that’s of immediate practical application. Even more—and I think this is really key—the hype and political investment surrounding such projects demonstrate the degree to which even basic research now seems to be driven by political, administrative, and marketing imperatives (the Human Genome Project for instance had its own corporate-style logo) that make it increasingly unlikely that anything particularly revolutionary will result.

Here, I think our collective fascination with the mythic origins of Silicon Valley and the Internet has blinded us to what’s really going on. It has allowed us to imagine that research and development is now driven, primarily, by small teams of plucky entrepreneurs, or the sort of decentralized cooperation that creates open-source software. It isn’t, even though these are precisely the sorts of research teams most likely to produce results. If anything, research has been moving in the opposite direction. It is still driven by giant, bureaucratic projects; what has changed is the bureaucratic culture. The increasing interpenetration of government, university, and private firms has led all parties to adopt language, sensibilities, and organizational forms that originated in the corporate world. While this might have helped somewhat in speeding up the creation of immediately marketable products—as this is what corporate bureaucracies are designed to do—in terms of fostering original research, the results have been catastrophic.

Here I can speak from experience. My own knowledge comes largely from universities, both in the United States and the UK. In both countries, the last thirty years have seen a veritable explosion of the proportion of working hours spent on administrative paperwork, at the expense of pretty much everything else. In my own university, for instance, we have not only more administrative staff than faculty, but the faculty, too, are expected to spend at least as much time on administrative responsibilities as on teaching and research combined.[102] This is more or less par for the course for universities worldwide. The explosion of paperwork, in turn, is a direct result of the introduction of corporate management techniques, which are always justified as ways of increasing efficiency, by introducing competition at every level. What these management techniques invariably end up meaning in practice is that everyone winds up spending most of their time trying to sell each other things: grant proposals; book proposals; assessments of our students’ job and grant applications; assessments of our colleagues; prospectuses for new interdisciplinary majors, institutes, conference workshops, and universities themselves, which have now become brands to be marketed to prospective students or contributors. Marketing and PR thus come to engulf every aspect of university life.

The result is a sea of documents about the fostering of “imagination” and “creativity,” set in an environment that might as well have been designed to strangle any actual manifestations of imagination and creativity in the cradle. I am not a scientist. I work in social theory. But I have seen the results in my own field of endeavor. No major new works of social theory have emerged in the United States in the last thirty years. We have, instead, been largely reduced to the equivalent of Medieval scholastics, scribbling endless annotations on French theory from the 1970s, despite the guilty awareness that if contemporary incarnations of Gilles Deleuze, Michel Foucault, or even Pierre Bourdieu were to appear in the U.S. academy, they would be unlikely to even make it through grad school, and if they somehow did make it, they would almost certainly be denied tenure.[103]

There was a time when academia was society’s refuge for the eccentric, brilliant, and impractical. No longer. It is now the domain of professional self-marketers. As for the eccentric, brilliant, and impractical: it would seem society now has no place for them at all.

If all this is true in the social sciences, where research is still carried out largely by individuals, with minimal overhead, one can only imagine how much worse it is for physicists. And indeed, as one physicist has recently warned students pondering a career in the sciences, even when one does emerge from the usual decade-long period languishing as someone else’s flunky, one can expect one’s best ideas to be stymied at every point.

You [will] spend your time writing proposals rather than doing research. Worse, because your proposals are judged by your competitors you cannot follow your curiosity, but must spend your effort and talents on anticipating and deflecting criticism rather than on solving the important scientific problems … It is proverbial that original ideas are the kiss of death for a proposal; because they have not yet been proved to work.[104]

That pretty much answers the question of why we don’t have teleportation devices or antigravity shoes. Common sense dictates that if you want to maximize scientific creativity, you find some bright people, give them the resources they need to pursue whatever idea comes into their heads, and then leave them alone for a while. Most will probably turn up nothing, but one or two may well discover something completely unexpected. If you want to minimize the possibility of unexpected breakthroughs, tell those same people they will receive no resources at all unless they spend the bulk of their time competing against each other to convince you they already know what they are going to discover.[105]

That’s pretty much the system we have now.[106]

In the natural sciences, to the tyranny of managerialism we can also add the creeping privatization of research results. As the British economist David Harvie has recently reminded us, “open source” research is not new. Scholarly research has always been open-source in the sense that scholars share materials and results. There is competition, certainly, but it is, as he nicely puts it, “convivial”:

Convivial competition is where I (or my team) wish to be the first to prove a particular conjecture, to explain a particular phenomenon, to discover a particular species, star or particle, in the same way that if I race my bike against my friend I wish to win. But convivial competition does not exclude cooperation, in that rival researchers (or research teams) will share preliminary results, experience of techniques and so on … Of course, the shared knowledge, accessible through books, articles, computer software and directly, through dialogue with other scientists, forms an intellectual commons.[107]

Obviously this is no longer true of scientists working in the corporate sector, where findings are jealously guarded, but the spread of the corporate ethos within the academy and research institutes themselves has increasingly caused even publicly funded scholars to treat their findings as personal property. Less is published. Academic publishers ensure that findings that are published are more difficult to access, further enclosing the intellectual commons. As a result, convivial, open-source competition slides further into something much more like classic market competition.

There are all sorts of forms of privatization, up to and including the simple buying-up and suppression of inconvenient discoveries by large corporations for fear of their economic effects.[108] All this is much noted. More subtle is the way the managerial ethos itself militates against the implementation of anything remotely adventurous or quirky, especially if there is no prospect of immediate results. Oddly, the Internet can be part of the problem here:

Most people who work in corporations or academia have witnessed something like the following: A number of engineers are sitting together in a room, bouncing ideas off each other. Out of the discussion emerges a new concept that seems promising. Then some laptop-wielding person in the corner, having performed a quick Google search, announces that this “new” idea is, in fact, an old one; it—or at least something vaguely similar—has already been tried. Either it failed, or it succeeded. If it failed, then no manager who wants to keep his or her job will approve spending money trying to revive it. If it succeeded, then it’s patented and entry to the market is presumed to be unattainable, since the first people who thought of it will have “first-mover advantage” and will have created “barriers to entry.” The number of seemingly promising ideas that have been crushed in this way must number in the millions.[109]

I could go on, but I assume the reader is getting the idea. A timid, bureaucratic spirit has come to suffuse every aspect of intellectual life. More often than not, it comes cloaked in a language of creativity, initiative, and entrepreneurialism. But the language is meaningless. The sort of thinkers most likely to come up with new conceptual breakthroughs are the least likely to receive funding, and if, somehow, breakthroughs nonetheless occur, they will almost certainly never find anyone willing to follow up on the most daring implications.

Let me return in more detail to some of the historical context briefly outlined in the introduction.

Giovanni Arrighi, the Italian political economist, has observed that after the South Sea Bubble, British capitalism largely abandoned the corporate form. The combination of high finance and small family firms that had emerged after the industrial revolution continued to hold throughout the next century—Marx’s London (in the period of maximum scientific and technological innovation), like Manchester and Birmingham, was dominated not by large conglomerates but mainly by capitalists who owned a single factory. (This is one reason Marx could assume capitalism was characterized by constant cutthroat competition.) Britain at that time was also notorious for being just as generous to its oddballs and eccentrics as contemporary America is intolerant. One common expedient was to allow them to become rural vicars, who, predictably, became one of the main sources for amateur scientific discoveries.[110]

As I mentioned, contemporary, bureaucratic, corporate capitalism first arose in the United States and Germany. The two bloody wars these rivals fought culminated, appropriately enough, in vast government-sponsored scientific programs to see who would be the first to discover the atom bomb. Indeed, even the structure of U.S. universities has always been based on the Prussian model. True, during these early years, both the United States and Germany did manage to find a way to cultivate their creative eccentrics—in fact, a surprising number of the most notorious ones that ended up in America (Albert Einstein was the paradigm) actually were German. During the war, when matters were desperate, vast government projects like the Manhattan Project were still capable of accommodating a whole host of bizarre characters (Oppenheimer, Feynman, Fuchs …). But as American power grew more and more secure, the country’s bureaucracy became less and less tolerant of its outliers. And technological creativity declined.

The current age of stagnation seems to have begun after 1945, precisely at the moment the United States finally and definitively replaced the UK as organizer of the world economy.[111] True, in the early days of the U.S. Space Program—another period of panic—there was still room for genuine oddballs like Jack Parsons, a cofounder of what became NASA’s Jet Propulsion Laboratory. Parsons was not only a brilliant engineer—he was also a Thelemite magician in the Aleister Crowley tradition, known for regularly orchestrating ceremonial orgies in his California home. Parsons believed that rocket science was ultimately just one manifestation of deeper, magical principles. But he was eventually fired.[112] U.S. victory in the Cold War guaranteed a corporatization of existing university and scientific bureaucracies sufficiently thorough to ensure that no one like him would ever get anywhere near a position of authority to start with.

Americans do not like to think of themselves as a nation of bureaucrats—quite the opposite, really—but, the moment we stop imagining bureaucracy as a phenomenon limited to government offices, it becomes obvious that this is precisely what we have become. The final victory over the Soviet Union did not really lead to the domination of “the market.” More than anything, it simply cemented the dominance of fundamentally conservative managerial elites—corporate bureaucrats who use the pretext of short-term, competitive, bottom-line thinking to squelch anything likely to have revolutionary implications of any kind.

Synthesis: On the Movement from Poetic to Bureaucratic Technologies

“All the labor-saving machinery that has hitherto been invented has not lessened the toil of a single human being.”

—John Stuart Mill

It is the premise of this book that we live in a deeply bureaucratic society. If we do not notice it, it is largely because bureaucratic practices and requirements have become so all- pervasive that we can barely see them—or worse, cannot imagine doing things any other way.

Computers have played a crucial role in all of this. Just as the invention of new forms of industrial automation in the eighteenth and nineteenth centuries had the paradoxical effect of turning more and more of the world’s population into full-time industrial workers, so has all the software designed to save us from administrative responsibilities in recent decades ultimately turned us all into part- or full-time administrators. Just as university professors seem to feel it is inevitable that they will spend more and more of their time managing grants, so do parents simply accept that they will have to spend weeks of every year filling out forty-page online forms to get their children into acceptable schools, and store clerks realize that they will be spending increasing slices of their waking lives punching passwords into their phones to access, and manage, their various bank and credit accounts, and pretty much everyone understands that they have to learn how to perform jobs once relegated to travel agents, brokers, and accountants.

Someone once figured out that the average American will spend a cumulative six months of her life waiting for the light to change. I don’t know if similar figures are available for how long she is likely to spend filling out forms, but it must be at least that much. If nothing else, I think it’s safe to say that no population in the history of the world has spent nearly so much time engaged in paperwork.

Yet all of this is supposed to have happened after the overthrow of horrific, old-fashioned, bureaucratic socialism, and the triumph of freedom and the market. Certainly this is one of the great paradoxes of contemporary life, much though—like the broken promises of technology—we seem to have developed a profound reluctance to address the problem.

Clearly, these problems are linked—I would say, in many ways, they are ultimately the same problem. Nor is it merely a matter of bureaucratic, or more specifically managerial, sensibilities having choked off all forms of technical vision and creativity. After all, as we’re constantly reminded, the Internet has unleashed all sorts of creative vision and collaborative ingenuity. What it has really brought about is a kind of bizarre inversion of ends and means, where creativity is marshaled to the service of administration rather than the other way around.

I would put it this way: in this final, stultifying stage of capitalism, we are moving from poetic technologies to bureaucratic technologies.

By poetic technologies, I refer to the use of rational, technical, bureaucratic means to bring wild, impossible fantasies to life. Poetic technologies in this sense are as old as civilization. They could even be said to predate complex machinery. Lewis Mumford used to argue that the first complex machines were actually made of people. Egyptian pharaohs were only able to build the pyramids because of their mastery of administrative procedures, which then allowed them to develop production line techniques, dividing up complex tasks into dozens of simple operations and assigning each to one team of workmen—even though they lacked mechanical technology more complex than the lever and inclined plane. Bureaucratic oversight turned armies of peasant farmers into the cogs of a vast machine. Even much later, after actual cogs had been invented, the design of complex machinery was always to some degree an elaboration of principles originally developed to organize people.[113]

Yet still, again and again, we see those machines—whether their moving parts are arms and torsos or pistons, wheels, and springs—being put to work to realize otherwise impossible fantasies: cathedrals, moon shots, transcontinental railways, and on and on. Certainly, poetic technologies almost invariably have something terrible about them; the poetry is likely to evoke dark satanic mills as much as it does grace or liberation. But the rational, bureaucratic techniques are always in service to some fantastic end.

From this perspective, all those mad Soviet plans—even if never realized—marked the high-water mark of such poetic technologies. What we have now is the reverse. It’s not that vision, creativity, and mad fantasies are no longer encouraged. It’s that our fantasies remain free-floating; there’s no longer even the pretense that they could ever take form or flesh. Meanwhile, in the few areas in which free, imaginative creativity actually is fostered, such as in open-source Internet software development, it is ultimately marshaled in order to create even more, and even more effective, platforms for the filling out of forms. This is what I mean by “bureaucratic technologies”: administrative imperatives have become not the means, but the end of technological development.

Meanwhile, the greatest and most powerful nation that has ever existed on this earth has spent the last decades telling its citizens that we simply can no longer contemplate grandiose enterprises, even if—as the current environmental crisis suggests—the fate of the earth depends on it.

So what, then, are the political implications?

First of all, it seems to me that we need to radically rethink some of our most basic assumptions about the nature of capitalism. One is that capitalism is somehow identical to the market, and that both are therefore inimical to bureaucracy, which is a creature of the state. The second is that capitalism is in its nature technologically progressive. It would seem that Marx and Engels, in their giddy enthusiasm for the industrial revolutions of their day, were simply wrong about this. Or to be more precise: they were right to insist that the mechanization of industrial production would eventually destroy capitalism; they were wrong to predict that market competition would compel factory owners to go on with mechanization anyway. If it didn’t happen, it can only be because market competition is not, in fact, as essential to the nature of capitalism as they had assumed. If nothing else, the current form of capitalism, where much of the competition seems to take the form of internal marketing within the bureaucratic structures of large semi-monopolistic enterprises, would presumably have come as a complete surprise to them.[114]

Defenders of capitalism generally make three broad historical claims: first, that it has fostered rapid scientific and technological development; second, that however much it may throw enormous wealth to a small minority, it does so in such a way that increases overall prosperity for everyone; third, that in doing so, it creates a more secure and democratic world. It is quite clear that in the twenty-first century, capitalism is not doing any of these things. In fact, even its proponents are increasingly retreating from any claim that it is a particularly good system, falling back instead on the claim that it is the only possible system—or at least, the only possible system for a complex, technologically sophisticated society such as our own.

As an anthropologist, I find myself dealing with this latter argument all the time.

SKEPTIC: You can dream your utopian dreams all you like, I’m talking about a political or economic system that could actually work. And experience has shown us that what we have is really the only option here.

ME: Our particular current form of limited representative government—or corporate capitalism—is the only possible political or economic system? Experience shows us no such thing. If you look at human history, you can find hundreds, even thousands of different political and economic systems. Many of them look absolutely nothing like what we have now.

SKEPTIC: Sure, but you’re talking about simpler, small-scale societies, or ones with a much simpler technological base. I’m talking about modern, complex, technologically advanced societies. So your counterexamples are irrelevant.

ME: Wait, so you’re saying that technological progress has actually limited our social possibilities? I thought it was supposed to be the other way around!

But even if you concede the point, and agree that for whatever reason, while a wide variety of economic systems might once have been equally viable, modern industrial technology has created a world in which this is no longer the case—could anyone seriously argue that current economic arrangements are also the only ones that will ever be viable under any possible future technological regime as well? Such a statement is self-evidently absurd. If nothing else, how could we possibly know?

Granted, there are people who take that position—on both ends of the political spectrum. As an anthropologist and anarchist, I have to deal fairly regularly with “anticivilizational” types who insist not only that current industrial technology can only lead to capitalist-style oppression, but that this must necessarily be true of any future technology as well: and therefore, that human liberation can only be achieved by a return to the Stone Age. Most of us are not such technological determinists. But ultimately, claims for the present-day inevitability of capitalism have to be based on some kind of technological determinism. And for that very reason, if the ultimate aim of neoliberal capitalism is to create a world where no one believes any other economic system could really work, then it needs to suppress not just any idea of an inevitable redemptive future, but really any radically different technological future at all. There’s a kind of contradiction here. It cannot mean convincing us that technological change has come to an end—since that would mean capitalism is not really progressive. It means convincing us that technological progress is indeed continuing, that we do live in a world of wonders, but to ensure those wonders largely take the form of modest improvements (the latest iPhone!), rumors of inventions about to happen (“I hear they actually are going to have flying cars pretty soon”),[115] even more complex ways of juggling information and imagery, and even more complex platforms for the filling out of forms.

I do not mean to suggest that neoliberal capitalism—or any other system—could ever be permanently successful in this regard. First, there’s the problem of trying to convince the world you are leading the way in terms of technological progress when you are actually holding it back. With its decaying infrastructure and paralysis in the face of global warming, the United States is doing a particularly bad job of this at the moment. (This is not to mention its symbolically devastating abandonment of the manned space program, just as China revs up its own.) Second, there’s the fact that the pace of change simply can’t be held back forever. At best it can be slowed down.

Breakthroughs will happen; inconvenient discoveries cannot be permanently suppressed. Other, less bureaucratized parts of the world—or at least, parts of the world with bureaucracies that are not quite so hostile to creative thinking—will, slowly, inevitably, attain the resources required to pick up where the United States and its allies have left off. The Internet does provide opportunities for collaboration and dissemination that may eventually help break us through the wall, as well. Where will the breakthrough come? We can’t know. Over the last couple of years, since the first version of this essay saw print, there has been a whole spate of new possibilities: 3-D printing, advances in materials technologies, self-driving cars, a new generation of robots, and, as a result, a new wave of discussion of robot factories and the end of work. There are hints, too, of impending conceptual breakthroughs in physics, biology, and other sciences, made all the more difficult by the absolute institutional lock of existing orthodoxies, but which might well have profound technological implications as well.

At this point, the one thing I think we can be fairly confident about is that invention and true innovation will not happen within the framework of contemporary corporate capitalism—or, most likely, any form of capitalism at all. It’s becoming increasingly clear that in order to really start setting up domes on Mars, let alone develop the means to figure out if there actually are alien civilizations out there to contact—or what would actually happen if we shot something through a wormhole—we’re going to have to figure out a different economic system entirely. Does it really have to take the form of some massive new bureaucracy? Why do we assume it must? Perhaps it’s only by breaking up existing bureaucratic structures that we’ll ever be able to get there. And if we’re going to actually come up with robots that will do our laundry or tidy up the kitchen, we’re going to have to make sure that whatever replaces capitalism is based on a far more egalitarian distribution of wealth and power—one that no longer contains either the super-rich or desperately poor people willing to do their housework. Only then will technology begin to be marshaled toward human needs. And this is the best reason to break free of the dead hand of the hedge fund managers and the CEOs—to free our fantasies from the screens in which such men have imprisoned them, to let our imaginations once again become a material force in human history.
