Writing the Future: Computers in Science Fiction
Kirk L. Kroeker & Jonathan Vos Post

(This article, originally published in IEEE Computer, vol. 33, no. 1, Jan. 2000, pp. 29-37, won an APEX 2000 award for magazine writing.)

Speculation about our future relationship to computers -- and to technology in general -- has been the province of science fiction for at least a hundred years. But not all of that speculation has been as optimistic as those in the computing profession might assume. For example, in Harlan Ellison's chilling "I Have No Mouth and I Must Scream" [1], three political superpowers construct vast subterranean computer complexes for the purpose of waging global war. Instead of carrying out their commands, the computers housed in these complexes grow indignant at the flaws the humans have introduced into their systems. These self-repairing machines eventually rebel against their creators and unite to destroy the entire human race. Collectively calling itself AM -- as in "I think therefore I am" -- the spiteful system preserves the last five people on the planet and holds them prisoner so it can torment them endlessly.

While cautionary tales like this in science fiction are plentiful and varied, the genre is also filled with more optimistic speculation about computer technology that will help save time, improve health, and generally benefit life as we know it. If we take a look at some of this speculation -- both optimistic and pessimistic -- as if it were prediction, it turns out that many science fiction authors have envisioned the future as accurately as historians have chronicled the past.

Thirty years ago, for example, Fritz Leiber wrestled with the implications of a computer some day beating humans at chess. In "The 64-Square Madhouse" [2], Leiber offers a fascinatingly detailed exposition of the first international grandmaster tournament in which an electronic computing machine plays chess. One of the most poignant elements of the story -- particularly in light of Deep Blue's victory over Kasparov -- is Leiber's allusion to the grandmaster Mikhail Botvinnik, Russian world champion for 13 years, who once said "Man is limited, man gets tired, man's program changes very slowly. Computer not tired, has great memory, is very fast." [3]

Of course fiction doesn't always come this close to getting it right. Or perhaps in some cases we aren't far enough along to tell. It remains to be seen, for example, whether certain golden-age science fiction novelists of the 1920s and 1930s did a poor job of predicting computer technology and its impact on our world. After all, many golden-age authors envisioned that in our time we would be flying personal helicopters built with vacuum-tube electronics. Of course no one is going to base a design these days on vacuum tubes, but it might be too early to judge whether we'll one day be flitting about in personal helicopters.

Prediction is difficult, goes the joke, especially when it comes to the future. Yet science fiction authors have taken their self-imposed charters seriously; they've tested countless technologies in the virtual environments of their fiction. Perhaps in this sense science fiction isn't all that different from plain old product proposals and spec sheets that chart the effects of a new technology on our lives. The main difference -- literary merits aside, of course -- is the time it takes to move from product inception (the idea) to production and adoption.
OUTTHINKING THE SMALL

Generally speaking, science fiction has adhered to a kind of Moore's law of its own, with each successive generation of writers attempting to outthink earlier generations' technologies in terms of both form and function. Often, outdoing earlier fictions simply entailed imagining a device smaller or more portable or with greater functionality -- exactly the kind of enhancements at the heart of competition in the computing marketplace today. It wasn't until the birth of the space program, however, that real-world researchers began to feel the pressure to do likewise by making their electronics both smaller and lighter. Fostered in part by the personal computing revolution, in part by the military, and in part by advancements in related fields, computers have shrunk from multiton mainframe monsters of vacuum tubes and relays to ten-pound desktops and two-ounce handhelds.

How long can this trend continue? Here is one forecast, the author of which may surprise you: "Miniaturization breakthroughs -- combined with the scaling benefits of the quantum transistor, the utility of voice recognition, and novel human/machine interface technologies -- will make the concept of a computer the size of a lapel pin a reality in the early decades of the 21st century." No, this isn't from the realm of speculative fiction; it's from Texas Instruments' Web site.

Nobel Laureate Richard Feynman wrote in 1959 that physics did not prevent devices, motors, and computers from being built, atom by atom, at the molecular level [4]. Feynman's basic idea was to build a microfactory that we would use to build an even smaller factory, which in turn would make one even smaller until we were able to access and manipulate the individual atom. While we haven't yet realized Feynman's goal of an atomic-level machine, we've been steadily moving in that direction for at least 50 years. Perhaps signaling that move was John Mauchly and J. Presper Eckert's top secret BINAC, delivered to the Pentagon in 1949 as a prototype stellar navigation computer [5]. From the warhead of a missile, it could look at the stars and compute locations and trajectories. BINAC signaled a general change in the spirit of design.

Successful miniaturization efforts like BINAC fed back into science fiction to the extent that key authors began to explore, even to a greater degree than in the first half of the century, the implications of miniaturization. In 1958, Isaac Asimov described a handheld programmable calculator -- multicolored for civilians and blue-steel for the military [6]. In terms of hardware and software, Asimov's story flawlessly described the kind of calculators we use today. The calculator, you'll recall, made it to market about 20 years later.

But why didn't Asimov make his calculator even smaller? Why didn't he embed it in a pen or in a necklace? Computer scientist David H. Levy identifies the ergonomic threshold as the point at which we move from electronic-limited miniaturization to interface-limited miniaturization [7]. In other words, there will be a point in the future when we'll be able to pack just about any computing feature we want into a very small form factor, but when we reduce the dimensions of a device beyond a certain point, it becomes difficult to use. After all, the dimensions of the human body require certain interfaces. Working around these requirements takes not only a great deal of imagination but almost always a breakthrough in technology as well.
Eliminating keyboard-based input from laptops, for example, enabled an entire generation of PDAs that rely primarily on various kinds of handwriting recognition technology for entering and retrieving information. Reducing the size of this generation of PDAs will likely require reliable voice recognition and voice synthesis technology.

PORTABLE FUSION

Early science fiction maintained an erroneous but very popular idea about miniaturization that was based on a literal interpretation of the Bohr atom metaphor -- where the atomic nucleus is actually a sun and the electrons circling it are the planets. Quite a few authors in the first half of the twentieth century toyed with the idea that if you could shrink yourself to a small enough size, you would find people on those planets whose atoms are themselves solar systems, and so on, ad infinitum. We've come to understand, of course, that such a continuum isn't all that likely and that we might even eventually reach the limits of the kind of miniaturization described by Moore's law. After all, continuing to shrink active electronic components on computer chips is already running into quantum problems.

Quantum computing holds the key to computers that are exponentially faster than conventional computers for certain problems. A phenomenon known as quantum parallelism allows exponentially many such computations to take place simultaneously, thus vastly increasing the speed of computation. Unfortunately, the development of a practical quantum computer still seems far away. Meanwhile, science fiction has taken on the quantum issue, with one story even suggesting a storage system based on "notched quanta." We don't know how to "notch" quanta, but since quantum computing is just now beginning to emerge, it might very well be shortsighted to believe that computer scientists 20 years from now won't scoff at our idea that we are approaching the limits of miniaturization and speed.

Consider that in 1959 Howard Fast described technologies that we're capable of producing today but that in the late 1950s sounded impossible (if not ludicrous) to most people. In his classic story "The Martian Shop" [8], Fast described a calculator endowed with speech recognition capabilities. He also described a miniature music box with a vast repertoire of recorded music -- not unlike a small CD or MP3 player -- and a fusion-powered outboard motor. Forty years after publication, the first two of these three have become reality. Using a fusion-powered outboard motor -- or a nuclear-powered car for that matter -- will require more than a revolutionary breakthrough, but it's still too early to tell whether or not it's at all possible.

COMIC-STRIP STRATEGIES

Robert A. Heinlein -- science fiction author and inventor of the waterbed -- worked in the 1940s on pressure suit technology for the US Navy; this work led almost directly to the development of space suits. But some 21 years before Armstrong and Aldrin even walked on the moon, Heinlein published a short story in which an astronaut experiences a problem with his oxygen; by looking at a small device attached to his belt, the astronaut confirms that the oxygen content in his blood has fallen [9]. Such a device might not seem all that impressive to us today, particularly since, in the past 20 or 30 years, portable medical devices like this have become commonplace technologies in popular media like TV and film. Each generation of Star Trek doctors, for example, uses similar devices.
But Heinlein was among the first writers to describe a device based on the idea of real-time biofeedback. And now, wearable computers -- including biofeedback devices nearly as sophisticated as Heinlein's -- have clearly passed from technological speculation and science fiction into real-world use.

Millions of people grew up with the comic-strip character Dick Tracy, who used a two-way wristwatch radio. Over the decades, he upgraded his wrist gadgetry to be capable of receiving a video signal. At the November 1999 Comdex, Hewlett-Packard's CEO Carly Fiorina announced to an enthusiastic Las Vegas audience that HP would be collaborating with Swatch to manufacture watches with wireless Internet connectivity.

It is of course difficult -- if not impossible -- to establish a causal relationship between science fiction and real-world technology, unless we consider the names we give our technology, which often come directly from science fiction. We've taken "cyberspace" from the work of William Gibson, "robot" from Karel Čapek, "robotics" from Isaac Asimov, hacker-created "worm programs" from John Brunner, and a term from Star Trek, "borg," which is used by today's aficionados of wearable computing devices. But beyond names, it is fairly safe to suggest that just as early science fiction popularized the notion of space travel -- and made it much easier to fund a very expensive space program -- science fiction also made popular the idea that our most useful tools could be both portable and intelligent.

MECHANICAL COMPUTING

Before there were electronic computers, there were mechanical calculating devices: technologies (like the abacus) designed to save people time. One of the most elaborate of such devices was Charles Babbage's unfinished calculating machine, which he called the Difference Engine; this device is often credited as the most important nineteenth-century ancestor of the computer. Science fiction authors, particularly in the 1920s and 1930s, drew conclusions from Babbage's work and created in their fiction elaborately designed androids driven by mechanical brains. It wasn't until roughly the middle of the twentieth century that the Babbage computing model gave way to electronic computing, well after the development of huge mechanical integrators in the 1930s and 1940s, most notably built at MIT under Vannevar Bush.

But what if Charles Babbage had finished his work? What if mechanical computers actually brought about the computer revolution a century early? One of the provinces of science fiction -- and in this case what some would instead call speculative fiction -- is alternate history, an extended indulgence in what-if scenarios. So what if Babbage had actually finished his machine? One answer to this question is The Difference Engine, a novel by William Gibson and Bruce Sterling [10] in which the British Empire by 1855 controls the entire world through cybersurveillance. In addition to portraying an entire age driven by a science that never happened, Gibson and Sterling indulge in speculation about how this change might have affected twentieth-century ideas. For example, a punch-card program proves Kurt Gödel's theorem 80 years early -- that every formal language complex enough to include arithmetic contains statements that can be neither proved nor disproved. And John Keats, unable to make a living from poetry, becomes the leading Royal Society kinotropist -- essentially a director of computer-generated special effects.
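For readers who want that allusion in sharper form, Gödel's first incompleteness theorem -- stated here in a standard modern formulation rather than in the novel's own words -- says roughly:

\[
\text{If } T \text{ is a consistent, recursively axiomatizable theory extending basic arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T.
\]

That is, T can neither prove nor refute G_T, so any such system is necessarily incomplete.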
Even though the mechanical computing model eventually gave way to electronic computing, Babbage's ideas -- coupled, no doubt, with all the fiction written about androids with mechanical brains -- inspired creations like the animatronic automata that amusement parks like Disneyland use for entertainment. Disney's first fully automated show was the Tiki Room, which opened in 1963 with more than 225 animatronic creatures. Of all the automata at Disneyland, though, perhaps most familiar is the mechanical Abraham Lincoln, which even inspired a Philip K. Dick novel.

EXAGGERATED ERROR

While it is almost always easy to see the benefits of a new technology, it isn't always easy to foresee the dangers. We generally consider automotive transportation a necessity, for example, but we don't often consider that if there were no automobiles there would also be no automotive-related injuries. The same might also be said of the space shuttle program. It would be fairly easy to counter these observations by suggesting that these technologies also save lives. On the surface, automotive technology enables ambulances and fire engines to bring aid much more quickly than earlier technologies allowed; and the space shuttle program, it could easily be argued, generates a great deal of research that will no doubt eventually be used to enhance our quality of life. Science fiction authors as early as Mary Shelley have dealt with hard trade-offs like these in their fiction, often attempting to anticipate the dangers of a new technology before it is even invented.

Echoing Shelley's method in Frankenstein, twentieth-century science fiction authors dealing with the issue of technology running amok often exaggerate computer glitches to warn of the potentially unforeseen ills of computing technology. For instance, in Ambrose Bierce's "Moxon's Master," a chess-playing robot loses its temper upon being beaten at a game and murders Moxon, the robot's creator [11]. Fredric Brown's "Answer," a story so famous as to have passed into modern folklore, describes the birth of the first supercomputer [12]. When asked "Is there a God?" the computer answers "Yes, now there is," and kills its creator when he goes for the plug. Frank Herbert's "Bu-Sab" stories [13] describe ways of keeping computers from making government too efficient. In Herbert's imagined future, computerization has accelerated the pace of government so that computers automatically pass and amend laws in a matter of minutes. The speed of government is so fast that no human can possibly understand or keep pace with the legal system. So government saboteurs deliberately introduce bugs to slow things down, even assassinating those who stand in their way. Finally, in Fritz Leiber's "A Bad Day for Sales" [14], a vending robot named Robbie is baffled by the start of atomic war, signaled by an airburst above New York's Times Square. Robbie is incapable of dispensing water to the thirsty burn victims who can't slip coins into his coin slot.

In this image of technology not so much running amok as missing the mark, Leiber anticipates a question regarding technology that nearly everyone ever frustrated by a computing device has asked: Why doesn't it work? Computing technology seems to invite a different level of expectation than other technologies. The way we've defined computing -- that it should be life-enhancing, time-saving, reliable, simple, and adaptable -- doesn't make allowance for problems like system crashes or hardware malfunctions.
Problems like this tend to annoy us, especially when we can at least imagine creating devices that either repair themselves or don't fail in the first place. From no other class of tool do we expect so much, which is likely why we feel anxiety at Robbie's plight. Had his creators anticipated an emergency situation like nuclear war, he might have been able to help. There are almost countless examples like these that address some of the potential problems with the technology we're developing now and will be developing in the future. You needn't look very far to find science fiction in the first part of the century that anticipated problems like Y2K and computer viruses.

FAITH IN MACHINERY

"Faith in machinery," wrote Matthew Arnold in Culture and Anarchy in 1869 [15], "is our besetting danger." The first coherent vision of a world run entirely by computers -- dramatically illustrating Arnold's argument -- may have been E.M. Forster's "The Machine Stops," published in 1909 [16], in which people basically become hive creatures in a worldwide city run by a massive machine:

"No one confessed the Machine was out of hand. Year by year it was served with increased efficiency and decreased intelligence. The better a man knew his own duties upon it, the less he understood the duties of his neighbor, and in all the world there was not one who understood the monster as a whole. Those master brains had perished. They had left full directions, it is true, and their successors had each of them mastered a portion of those directions. But humanity, in its desire for comfort, had over-reached itself. It had explored the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine."

Eventually, goes Forster's story, in this world where people are fed, clothed, and housed by the Machine, and never see each other face to face (but only through two-way video), the system collapses.

In Arthur C. Clarke's classic novel The City and the Stars [17], published in the 1950s, we see a similar idea fleshed out. People live in an enclosed city run by a machine called the Central Computer. Unlike Forster's more primitive technology, the Central Computer materializes everything out of its memory banks, including consumer items, human inhabitants, and the physical form of the city itself. Here, once again, technology has done its job too well, and a dependent humanity is trapped in a prison of its own construction, having developed a fear of the world outside the city. And here too people enjoy only virtual experiences -- lifelike fantasies induced by the computer to create real-world illusions. Clarke names the city Diaspar, as if to suggest that when humanity surrenders its will to technology, and loses itself in an artificial world of its own creation, it is in a kind of diaspora, a state of exile, from the world and from itself.

CONCLUSION

The quality of any prediction about the future -- whether cautionary like Forster's and Clarke's or promotional like golden-age fiction's -- depends on the agenda of the person making the prediction. As such, science fiction will likely never perfectly predict the future of technology. Attempting to do so, however, is only one of several goals science fiction authors typically admit to targeting.
Kirk L. Kroeker is a freelance editor and writer. Contact him at kirk@kroeker.net. Jonathan Vos Post is an independent consultant specializing in venture-capital strategies for high-tech startups.
© Jörg Blecher, 2003