This continues a four-part series on Kevin Kelly’s What Technology Wants.
- Part 0: Introduction and Videos
- Part 1: Origins
- Part 2: Imperatives
- Part 3: Choices
- Part 4: Directions
In the first half of his book, Kelly told a story linking biology, technology, and information into a cosmic whole he calls the technium. But in the section called “Directions,” Kelly returns to technology in our everyday world.
He introduces two well-known cases of technological rejection – the Unabomber and the Amish – and carefully acknowledges the problems they see in technology. However, he argues their conclusion to reject technology is only possible because other people keep developing even more technology. This leads Kelly to the seemingly paradoxical recommendation that if we carefully and continually evaluate technology, we can limit it in our own lives while maximizing it for others.
Chapter 10 – The Unabomber Was Right
Kelly begins chapter 10 by returning to the problems caused by technology. Many inventions of the past 150 years (submarines, dynamite, airplanes, etc.) were supposed to bring peace but instead wrought horrific human destruction. More people die using the dominant technological transportation system (cars, at 1.2 million deaths per year) than from cancer. Thousands of species go extinct due to energy abuse, and on and on we could go.
It almost seems as if technology is a system bent on human destruction. And in some ways, Kelly agrees with this assessment. In fact, he says that the best articulation of this view comes not from the books of famous philosophers of technology, but from the Unabomber, Ted Kaczynski. “The Unabomber is right that technology is a holistic, self-perpetuating machine. He is also right that the selfish nature of this system causes specific harms. Certain aspects of the technium are detrimental to the human self, because they defuse our identity” (212).
But Kaczynski’s solution was not to work toward making technology better – instead he wanted to destroy it. So he moved into a cabin in the woods and began sending home-made bombs to people he felt were perpetuating the evil technological system. Yet, even as he tried to reject technology, he was forced to ride his bike to Walmart to get the supplies he was dependent upon to survive. “Despite the reality of technology’s faults, the Unabomber was wrong to want to exterminate it, not the least of which is that the machine of civilization offers us more actual freedoms than the alternative” (212-13).
For all of technology’s problems, Kelly argues that it always offers slightly more freedom than it takes. Even those who want to reject it (like the Unabomber) cannot fully escape their need for it. The problems of technology are real, but what we need is better evaluation of our tools. “We need – I almost hate to say it – more technology” (215) to help us make better decisions about our technology.
Chapter 11 – Lessons of Amish Hackers
Kelly then turns to another famously anti-technology group – the Amish. Kelly himself has spent time living among the Amish and considers many of them friends. He admires their careful, community-based evaluation of new technology and their slow, discerning adoption of it.
Yet he points out that, like the Unabomber, their ability to reject certain technologies depends on other people developing more and more powerful technologies. For example, they reject cars and the electrical grid, but their shovels are made of steel that some high-tech company extracted from the earth, and their backup generators run on gasoline refined by a billion-dollar petroleum industry.
Kelly argues that one of technology’s most important benefits is that it gives people more choices as to who they will become. Of himself he writes, “I may not tweet, watch TV, or use a laptop, but I certainly benefit from the effect of others who do” (236). Of people like me, he says, “If you are a web designer, it is only because many tens of thousands of other people around you and before you have been expanding the realm of possibilities” (237). The chief good of technology, and the reason we should pursue it, is that it extends our choices and the choices of others. “Our mission as humans is not only to discover our fullest selves in the technium, and to find full contentment, but to expand the possibilities of others” (237).
For Kelly this isn’t just about an individual’s choice, it’s about the trajectory of humanity. “Our human nature itself is a malleable crop that we planted 50,000 years ago and continue to garden even today. The field of our nature has never been static. We know that genetically our bodies are changing faster now than at any time in the past million years. Our minds are being rewired by culture” (235).
To become who we want to become, we must strike a delicate balance. To be content ourselves we need to minimize the technology we personally use, but to help others be content we need to maximize the total technology available to others globally.
Chapter 12 – Seeking Conviviality
In chapter 12, Kelly attempts to show us how we might direct technology in such a way that it is “convivial” or “compatible with life.”
But first, Kelly addresses two problems that arise when we attempt to control technology.
The first is that it’s almost impossible to predict the impact of a technology. He cites example after example of how an inventor thought his technology would be used one way, but when it took hold in society things were much different. Edison thought his phonograph (sound recorder) would be used for deathbed recordings alone. “With few exceptions technologies don’t know what they want to be when they grow up” (244). We tend to think of new technologies as doing old jobs better, which is why cars were initially called “horseless carriages.”
Because technology is so unpredictable, companies pitch products as if they were perfect, while critics see only the negatives and try to limit technology.
This brings us to Kelly’s second problem: when governments or communities attempt to put a complete stop to a technology, it never works. Technology continues to progress, and eventually society (particularly the next generation) embraces it. “[H]istory shows that it is very hard for a society as a whole to say no to technology for very long” (241). “Prohibitions are in effect postponements” (243).
Kelly returns to the argument in the first half of the book: “these technologies are inevitable” (261). The solution, then, is this: “We can only shape technology’s expression by engaging with it, by riding it with both arms around its neck” (262). Rather than try to stop technology, we must have a clear set of principles that shape its development. The rest of the chapter offers examples and models of how this might happen and concludes with these six values: cooperation, transparency, decentralization, flexibility, redundancy, and efficiency.
It’s hard to escape what Kelly is saying.
Technology will advance, no matter how hard anyone tries to stop it. Our attempts to destroy it (Unabomber) or ignore it (the Amish) won’t work.
The only true solution is to jump in full force and try to shape it toward being more “compatible with life.” The difficulty is, of course, figuring out how to do this in the real world.
This is probably the most practical and least mystical section in the entire book, and for my money it’s one of the best and most realistic portrayals of the problems, as well as the enormous good, technology can bring.