Futuristic Scenarios and Human Nature



ARTICLE | BY Ullica Segerstrale

Abstract

This article discusses current developments in information technology and artificial intelligence and their projected implications for humankind. It examines the arguments and projections of some contemporary technological pessimists and optimists in the light of historical insights about technological development and asks what kind of situation we are in at present. Should we believe, with extreme technological optimists, that we will soon reach a point, the Singularity, where it will be possible to “upload” a person into a computer, thereby making him or her immortal? Is it true that technology has a life-like nature and “wants” to evolve? The extreme arguments, however, are based on an outdated view of technology as information. They also involve an unrealistic view of human nature: humans cannot be reduced to information. We are in need of new models. Moreover, as we learn from a Silicon Valley insider, the Internet was never intended to be used to make some people rich through gathering and aggregating data about others. A new digital humanism would help diminish the fast-growing global inequality and restore respect for the creative individual.

Is technology an autonomous force that drives social change, or is the use of technology dependent on human choices? Do we have a scenario of technological determinism or technological voluntarism (or what some call constructivism)? And what are the consequences of choosing one or the other to explain the direction of society? The framing of what is going on and the use of language are not neutral but important tools in a cultural struggle with vast consequences. (Think for instance about the word ‘sharing’). Moreover, these perspectives are not obviously connected to either an optimistic or pessimistic outlook on technology, but can be combined with both.

In the following sections, I will briefly discuss these issues in relation to the current situation in digital technology, taking a look at some recent opinions by prominent individuals in the field. We will see different predictions and suggested solutions. The views will vary from cyber-hype to cautious optimism, and from realistic warnings to outright scares. The solutions are typically connected to an assessment of where we are right now in regard to technological development, and here the diagnosis depends on one’s historical perspective as well as one’s belief in the future of digital power.

1. Changing Views of the History of Technology

“A closer look at the detailed background history of many inventions shows that they in fact came about through the accumulation of many small increments over time.”

Over the last few decades, the traditional “heroic view” of individual inventors has increasingly given way to a view that is more systems and process oriented. This is largely due to a more complex historical analysis of the way in which technological progress actually took place. A closer look at the detailed background history of many inventions shows that they in fact came about through the accumulation of many small increments over time. Also, much more attention is being paid to such things as the availability and willingness of financial entrepreneurs to support an invention, the availability of suitable supportive technology, and the social need or desire for a particular invention – which may not at all have been obvious at the time.1

There was often a considerable difference between the original intent of the invention and the way in which it was finally used (a good example is the phonograph of Thomas Edison, which was first developed to record the dying wishes of important men; instead, it was used for music recording and mass entertainment). In fact, customer interest was often a driving force for the development of a new technology, and constant feedback from customers led to continuous corrections of mistakes and improvement of performance of a new technology. These kinds of observations make technology seem more like a product of a social process than something invented by single geniuses.2

2. The Current Situation – Uncontrolled Buildup of Control

Where are we now? Are we at the beginning of an era of unprecedented technological innovation and development? Or are we rather at the tail end of an era that started some 70 years ago? Let’s see what some techno-gurus and innovators think. But first, a snapshot of recent developments in artificial intelligence and information technology.

Research in AI is developing rapidly, as indicated by such recent products as self-driving cars and personal assistants like Siri and Google Now. IBM’s Watson computer recently won the quiz show Jeopardy! (Remember when the computer Deep Blue beat the world chess champion Garry Kasparov in 1997?). According to Stephen Hawking, we are now developing the kind of artificial intelligence that is familiar from science fiction movies. Enormous investments, bigger than ever before, are being made in information technology; the competition can be likened to an arms race. New AI startups are created all the time and receive the financing needed for innovation. Google and other major companies are acquiring artificial intelligence and robotics companies. We could soon have smart robots roaming our streets.3

Another source reports: “Over the past year, Google has bought seven robotics companies…It has bought firms that specialize in natural language processing, gesture recognition, and more recently in machine learning…. If Silicon Valley’s best minds succeed, their software will not only be listening, it will be understanding and anticipating.”4

Indeed, AI is everywhere in some form. Every time you plug into the Internet, someone is there to spy on you and track your behavior. It is almost impossible to avoid being tracked. New face recognition software can now identify you to the authorities whenever you are close to one of the many information-gathering devices – including a police constable, who doesn’t even need your name if he has an image of your face. And devices are everywhere. New wearable computers of various kinds are being developed. The most intrusive-seeming futuristic spyware would be “smart dust” floating around you, taking pictures of you or measuring your bodily proportions. A picture of your key chain lying on a table in a coffee shop may provide sufficient information to copy your keys, suggests Lanier.5

But is all this spying and control actually legal? IIT lawyer and author Lori Andrews has been looking into this. She finds that in the US, at least, there is no law actually forbidding this spying (which may or may not indicate that the law lags behind technological development and needs to catch up quickly). She has been addressing the issue of smartphones – in fact portable mini-computers – which provide information about our conversations and movements in real time. In a cleverly titled piece, she asks, “Is your cell phone listening in on you?” Yes, it is – and if it has the hidden program Carrier IQ, it can also read your text messages and emails as you write them. That is one of the many programs installed without your permission; other spy programs you may just unwittingly download together with some legitimate smartphone application. The problem lies with the existing Wiretap Act: your consent is not required if your wireless carrier decides that marketing companies are allowed to collect and transmit your personal information.6

3. Optimists and Pessimists among the Tech Insiders

Technological optimists see fantastic possibilities of realizing long-held dreams. They believe that it is possible to increase human intelligence and sensory powers so as to create super-humans of some sort. They believe in an extended human life span. There are those who welcome increasingly “cyber-like” humans. The so-called Transhumanists are the most extreme. Technological pessimists point to unforeseen technological problems and dangerous social consequences. Their views may in fact not be particularly pessimistic, just realistic checks on the situation…

But an important question has to do with how we assess the current situation in the history of humankind. Where are we now? Are we in a historically unique period of unprecedented growth, innovation, and open-ended promise (this is clearly the basic assumption of the tech leaders and investors)? Or are we rather at the end of an earlier historical period, picking the last of the “low hanging fruit” of earlier important innovations? This may sound counterintuitive, but it is the recent view of at least one technological pessimist, the economist Robert Gordon, to whom I will now turn.

At the 2013 annual Innovation Forum organized by the Economist at UC Berkeley, Gordon provocatively suggested that “long-term economic growth may grind to a halt”, especially in economies with advanced technology. Looking backward in history he concluded: “Two and a half centuries of rising per-capita incomes could well turn out to be a unique episode in human history”.7

Another technological pessimist is the author of The Great Stagnation, Tyler Cowen.8 He uses the idea of “low hanging fruit” quite effectively, arguing that after the Second World War and the “Sputnik effect” (which triggered a campaign for massive education and innovation in science and technology in the US), there have actually been very few significant innovations. The potential from existing innovations after Sputnik (e.g., the computer, telecommunications) has already been extracted, which is why economic growth is slow. Although Cowen recognizes the Internet, he argues that much of the activity on the net is free and that, if anything, the Internet displaces jobs rather than creating new ones; nor does he count innovations in fields like health care and finance as having created significant benefits for people in general.

Moreover, he points to a number of very special circumstances that favored the growth of America – earlier types of “low hanging fruit”, such as available land, an inflow of immigrant workers, available education, and scientific and technological progress. So what is driving the Great Stagnation? He says he can formulate it in one sentence: “Recent and current innovation is more geared to private goods than to public goods. That simple observation ties together the three major macroeconomic trends of our time: growing income inequality, stagnant median income, and…the financial crisis.”9

Technological optimists have a different view of the situation. For example, the authors of The Second Machine Age, Erik Brynjolfsson, the director of MIT’s Center for Digital Business, and Andrew McAfee, a research scientist at that center, argue that digital technologies are dramatically changing our world and economy: as more and more goods and services are produced, they will become cheaper and cheaper. At the same time the authors admit that computers will increasingly take over human labor, which will cause rising inequality. But the solution is to be found in a new kind of collective intelligence, consisting of networked brains as well as strongly connected intelligent machines.10

Chris Anderson, the editor of Wired magazine, in his bestselling book Makers: The New Industrial Revolution introduces his readers to the new way in which digital technology is now impacting the production of goods as well, transforming mass production into small-scale or even home manufacturing.11 Digital manufacturing will involve, among other things, 3D printing, which is improving all the time. It will also involve different types of financing (e.g. Kickstarter, an online platform for raising seed capital to launch a new business). With the new digital technology for production it will be possible for people to follow the “do it yourself” strategy. The “Makers” have already become a movement. Anderson keeps the door open for impact on other fields too, such as health and education.

Two other insiders have an alternative approach. They recognize today’s huge global challenges involving such things as population, food, water, energy, education, and health care – and want to tackle these problems head on as huge market opportunities! These are the authors of the book Abundance: The Future is Better than You Think, Peter Diamandis and Steven Kotler.12 This book, published in 2012, can be seen as a response to Cowen’s pessimism. Peter Diamandis has degrees in molecular genetics and aerospace engineering from MIT and a medical doctorate from Harvard and is the founder of more than a dozen tech companies. He is also in charge of the XPrize Foundation, which supports and rewards young social entrepreneurs’ innovative ideas. Kotler is a journalist and book author. Together they suggest that we take the initiative away from slow-moving governments and instead encourage small innovative teams to solve the big challenges facing humankind.13

An even more impressive voice is that of the billionaire Naveen Jain, founder of the World Innovation Institute, who similarly concentrates on finding solutions to difficult global problems with great impact on the quality of life. Health, energy, environment, and education are some of his core areas. For Jain the true measure of progress is not economic productivity but rather improvement of the quality of life. In other words, he is advocating a type of social entrepreneurship, which he is supporting through his institute. Just like the authors of Abundance, he believes the solution lies in creative new applications of information technology, and that major innovations are just around the corner. He is an innovator himself, a developer of Windows and other Microsoft products.14

4. The Promise and Scare of Artificial Intelligence and the Singularity

The possibility of highly intelligent machines has existed a long time in science fiction and in movies. The tension is typically between machine power and human power and the question is the extent to which machine power will come to dominate humans.

“Using technology to enhance or modify our human nature is already a reality.”

For technological optimists, the benefits of AI are obviously enormous. In fact, it seems that they take a future involving highly intelligent machines for granted. This is clear from the attitudes and jargon among some leaders in Silicon Valley.

A couple of articles from May 2014 describing the culture of Silicon Valley bring this point home; the titles already tell the story: “Silicon Valley: an army of geeks and ‘coders’ shaping our future”, and “In the future, the robots may control you, and Silicon Valley will control them.” We learn about lots of young people working 80-hour weeks without taking weekends off, and about a startup company “incubator” called Hacker Dojo where anyone can come and work for free on their own project while in close proximity to others with whom they may later form a team. The language of the Valley, interestingly, is full of expressions like “changing the world” and “disruption”, deriving from a certain counter-cultural rhetoric of the sixties and seventies. The place is also said to sustain a spirit that regards failing as acceptable and part of the process, as long as one learns from it.15

The people in the Valley naturally conceive of an unfolding future of AI with an open horizon towards superhuman intelligence. What is more, to the extent that the machines become self-replicating or self-improving – which is also expected to happen – they could bring about a sudden transition, the situation that techno-wizard Ray Kurzweil famously calls the “Singularity”.16

For Kurzweil, this is an event that is bound to happen, and soon, because following Moore’s Law, the power of information technology increases rapidly and inevitably, doubling roughly every 18 months. When this happens, the expectation is for human intelligence to merge with machine intelligence, making it possible to “upload” a person’s digitalized personality for preservation and access in the future – a sort of immortality. There is a tremendous attraction to this kind of thing, it seems, for some of the leaders in information technology, and also for other techno-enthusiasts. (Experiments on a milder scale are already underway, for instance the possibility of exchanging emails with a deceased person, based on this person’s typical answering pattern.)
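To get a feel for the arithmetic behind this projection, here is a minimal sketch in Python, assuming the popular fixed 18-month doubling period cited above (the time spans chosen are purely illustrative):

```python
# A minimal sketch of the exponential arithmetic behind the Singularity
# projection, assuming a fixed 18-month doubling period for computing
# power (the popular formulation of Moore's Law cited in the text).
DOUBLING_PERIOD_YEARS = 1.5

def relative_power(years_elapsed: float) -> float:
    """Computing power relative to today after a given number of years."""
    return 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 30):
    print(f"After {years} years: ~{relative_power(years):,.0f}x today's power")
# After 10 years: ~102x; after 20 years: ~10,321x; after 30 years: ~1,048,576x
```

On this assumption, a million-fold increase arrives in about three decades, which is what lends the extrapolation its air of inevitability – provided, of course, that the doubling itself never stops.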

Is it true that the Singularity is near, as the title of Kurzweil’s famous book suggests? Well, it is coming nearer at least in the form of the 2014 blockbuster movie Transcendence, which depicts such a state and will now spread one of the weirdest ideas of Silicon Valley to the general public. Here is a short description of what is involved, by a fellow tech guru who has followed Kurzweil closely:

“The Singularity, recall, is the idea that not only is technology improving, but the speed of improvement is increasing as well…We ordinary humans are supposedly staying the same … while our technology is an autonomous, self-transforming supercreature, and its self-improvement is accelerating. That means it will one day pass us in a great whoosh. In the blink of an eye we will become obsolete. We might then be instantly dead, because the new artificial superintelligence will need our molecules for a much higher purpose. Or maybe we will be kept as pets.”17

We are also informed that Kurzweil “awaits a Virtual Reality heaven that all our brains will be sucked up into as the Singularity occurs, which will be ‘soon’. There we will experience ‘any’ scenario, any joy.” Here we encounter a clearly religion-like atmosphere, which presumably also permeates the Singularity University, which Kurzweil helped found, located next to Google.

Some time ago another technophile, Bill Joy, after first being enthusiastic, reflected on (an early version of) Kurzweil’s optimistic interpretation of the future development of technology. He came to a negative conclusion. “The future doesn’t need us,” was his alarming realization, and the title of a famous long article of his. Joy could not see how humanity could avoid the possibilities for destruction on a mass scale.18

The real scare of AI was expressed most recently by a group of scientists including Stephen Hawking. The fear is that AI technology will end up not only surpassing humans in inventions, but producing things that humans cannot understand, while outsmarting them in various ways. “Success in creating AI would be the biggest event in human history,” Stephen Hawking recently wrote in an op-ed in The Independent. “Unfortunately, it may also be the last”. He continued: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”19

Equally extreme is the idea of a life-like direction to technological progress, argued by the founding executive editor of Wired magazine, Kevin Kelly, in the book What Technology Wants.20 The main thesis of the book is that technology “wants” to evolve, in a process similar to evolution that at the same time follows Moore’s Law. This “want” of technology is supposedly so great that humans become just bothersome obstacles to what technology wants. Therefore, it is natural for technology to “want” to transcend humans; we are just its temporary vehicles.

This relative contempt for human beings in favor of technology – or is it concern for humans? It is hard to tell! – can be taken even further. We humans are not good enough, not only intellectually but also morally, according to a book called Unfit for the Future: The Need for Moral Enhancement.21 The authors suggest that we do something to radically enhance human nature – we are not up to the responsibilities that come with the future of technology and the new challenges we will face. We are too morally weak, and our traditional methods of transmitting morality are too inefficient. Therefore, in order to guarantee our survival as a civilization we should provide ourselves with more adequate moral capabilities. This is being argued by the Director of, and a Research Fellow at, the Program on Ethics and the New Biosciences at Oxford University.

5. What happened to Human Nature?

“The biggest problem with these futuristic scenarios may be the unrealistic way in which they conceptualize human nature. Humans cannot be reduced to information.”

But what happened to human nature in these last projections? It seems that great liberties are being taken with assumptions of who we are. The first two extreme arguments appear to see humans as bundles of information.

The Singularity scenario appears to involve a would-be religious view of information as the essence of what it means to be human. Information was, incidentally, a metaphor also used by molecular biologists – all those scientists (such as Jim Watson, first Director of the Human Genome Project) who early on wanted to persuade us of the importance of the Human Genome Project and how it would reveal to us our “blueprint” or “the very essence of being human”.22

The second case uses the same conception of technology as information, this time actively evolving by itself. But the information model here is not that of a living organism adapting to its (changing) environment; it is only that of its DNA. The claim is entirely dependent on the validity of the information model of the gene. This is particularly ironic today, since it has recently been realized that all those earlier assumptions about DNA as an information code were too simple. They ignored DNA’s ongoing requirements for appropriate stereochemical and environmental conditions for it to function at all. DNA is alive; it is not just a code, and it is far more complex than previously assumed. It also turns out to be hard to find simply identifiable “genes for” most human traits.

The biggest problem with these futuristic scenarios may be the unrealistic way in which they conceptualize human nature. Humans cannot be reduced to information; we have bodies and emotions, and are from birth absolutely dependent on nonverbal interaction. Moreover, even the most extreme information capabilities will not take care of the many inbuilt biases that affect the decision-making of our evolved human minds. We will continue to jump to conclusions, confuse correlation with causation, select cases that support our views, believe in self-fulfilling prophecies, sustain a good image of ourselves through various self-serving biases, and so on. (Of course, since we now know this better, we should also be better at counteracting it.)

In fact, evolutionists have already for some time been concerned about the discrepancy between the speed of technological development and the biological adaptability of humans – exactly because we are not machines!

What about the third extreme suggestion, that of enhancing human morality? The authors’ perception of the necessity for this measure is predicated on their assumption that humans do not have an innate moral sense and are therefore dependent on education and culture. But this is an assumption that is being increasingly challenged by scientists such as ethologist Frans de Waal, in books such as Primates and Philosophers and The Age of Empathy.23,24 De Waal is at the forefront of those who point to an evolutionary programming in humans for empathy, altruism and cooperation, in direct opposition to those who present human morality as basically hypocritical and grounded in our self-interest (for instance Robert Wright in his book The Moral Animal).25

This kind of argument about innate morality (and empathy) taps into a fundamental philosophical difference between two camps. There are those who see human nature as “saved” from the brutality of the natural world by the existence of culture, and others who regard humans as part of the natural world, but with the special addition of a cultural dimension. The famous proponent of the first view was Thomas Henry Huxley, whose contrast between nature and culture (education) was later reiterated by Richard Dawkins. Unfortunately, Dawkins’ popular biology book The Selfish Gene (1976) was often seen to further ingrain the idea of natural human selfishness.

A counter-scenario to deterministic arguments emphasizes human choice and the need for and capability of humans to take charge. As responsible humans we should be able to rely on traditional human morality, culture and social norms, instead of referring to technology as a social force somehow external to us. And this is where I wish to bring in Jaron Lanier.

6. Toward a Humanistic Technology

The time has come to bring in one more technology wizard: computer scientist Jaron Lanier, a long-time insider of Silicon Valley, best known as a pioneer of virtual reality. Lanier believes in technology (obviously). But he is thinking deeply about the actual potential of Internet-based technology and culture, asking himself whether what we have in place now is the best way to go, and if not, what can be done.

Positive results: the Internet has shown that people are not passive consumers (as some worried during the time of television) but instead want to express themselves. Especially in the developing world, the Internet and mobile phones have had a dramatic effect, empowering people to connect and coordinate with each other.

But, according to Lanier, deterioration began with the rise of so-called “Web 2.0” designs around the turn of the century. These designs valued the information content of the web over individuals; the expressions of real people were instead aggregated into dehumanized data. And there is more wrong with this. Only the “aggregator” (like Google, for instance) gets rich, while the actual producers of content get poor. Newspapers are dying. “The Internet has become anti-intellectual because Web 2.0 collectivism has killed the individual voice,” he complains.26

His book You Are Not a Gadget takes issue with a number of books that glorify “the crowd” or the collective. The popular idea that the collective is smarter than the individual is wrong, he argues. Crowd processes are good for some things, such as setting a market price or deciding political elections, but they typically fail in cases that involve creativity and imagination. (An earlier author who examined such aspects of the Internet was Cass Sunstein in his classic book Infotopia. He went through the various potential uses of information technology and worried, among other things, that the Internet might promote such undesirable phenomena as “groupthink” on a mass scale.)27

Yet another criticism has been that “open culture” sites such as Wikipedia undervalue achievements by human individuals and overvalue the collectivist spirit and anonymity of a crowd community. Lanier’s argument here is that important inventions are not mass phenomena but connected to individuals who struggle and persist, and test and modify their products. The current emphasis is on quantity when it should be on quality!

But none of this is a necessary consequence of the technology, Lanier protests. The Internet does not have to be used this way. New radical technologies do not have to deny the uniqueness of the individual. Collectivism is not inherent in the Internet or the Web. The actual challenge will be, and should be, to develop a new digital humanism that can accommodate creative and innovative individuals.

Lanier was recently interviewed on television about his most recent book, Who Owns the Future? The information networks have taken an unexpected turn towards reducing human participation in the economy, he explained. This was not the intent! Lanier himself was part of this when it started: “We wanted to make the system more open and self-regulating,” he said. Instead, big companies with strong computers started aggregating information about humans, trying to learn about them.*

However, computers can only generate a statistical picture of the world; they do not understand it and cannot see physical limits. Lanier gave the example of automated machine translation. Back in the 1950s there was a belief that a formula could be created for computers to translate one language into another – total automation would be achieved. This turned out to be impossible. In fact, the computers that do language translation today actually rely on human translators. They scan the Internet for examples of language usage and from these create a statistical picture of translation from one language to another. This automated translation can stay close to reality only as long as there are professional human translators whose work the machine can keep aggregating. But automation lowers the price of translation, and human translators cannot make a living; today, translators do translations as a side job. Should they quit in larger numbers, there will be no reference base and machine translation will collapse completely!
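To make Lanier’s point concrete, here is a toy sketch (the corpus, phrases, and function names are invented for illustration; real translation systems are vastly more elaborate) of “translation” as nothing more than the aggregation of human translators’ prior work:

```python
# A toy illustration of Lanier's argument: the "statistical picture" of
# translation is just an aggregation of past human translations, here
# reduced to a phrase-frequency table. All data below is invented.
from collections import Counter

# Parallel examples harvested from professional human translators.
human_translations = [
    ("good morning", "bonjour"),
    ("good morning", "bonjour"),
    ("good morning", "bon matin"),
    ("thank you", "merci"),
]

# Aggregate the human work into a statistical phrase table.
phrase_table: dict[str, Counter] = {}
for source, target in human_translations:
    phrase_table.setdefault(source, Counter())[target] += 1

def translate(phrase: str) -> str:
    """Return the most frequent human rendering of a phrase, if any exists."""
    counts = phrase_table.get(phrase)
    if not counts:
        return "<no human example to aggregate>"  # the machine is helpless
    return counts.most_common(1)[0][0]

print(translate("good morning"))  # -> "bonjour"
print(translate("see you soon"))  # -> "<no human example to aggregate>"
```

The sketch makes the dependency visible: once the stream of fresh human examples dries up, there is nothing left to aggregate – precisely the collapse Lanier warns about.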

Lanier used this case as an example of what is going on in other fields too, such as finance, insurance, and other areas where Big Data is involved. According to him, the process of automation has a limit. If people are laid off, the economy will have no workers. His solution is to subdivide the information tasks so that humans will play a role in this. He believes that a new middle class can be created this way. He also believes that there should be a system of micro-payments: every time someone uses data about you, you get paid by them.
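A minimal sketch of how such micro-payment accounting might look (the royalty amount, names, and functions here are invented for illustration, not taken from Lanier’s book):

```python
# A toy ledger for Lanier-style micro-payments: every use of a person's
# data credits that person with a tiny royalty. Amounts are invented.
from collections import defaultdict

ledger: defaultdict[str, float] = defaultdict(float)

def use_data(contributor: str, royalty: float = 0.001) -> None:
    """Record one use of `contributor`'s data and credit a micro-payment."""
    ledger[contributor] += royalty

# A "Siren Server" consults Alice's translations 5,000 times and Bob's
# photos 200 times while building its statistical models.
for _ in range(5000):
    use_data("alice")
for _ in range(200):
    use_data("bob")

print({name: round(total, 3) for name, total in ledger.items()})
# -> {'alice': 5.0, 'bob': 0.2}
```

Aggregated over billions of daily data uses, such royalties are what Lanier hopes could sustain a new middle class.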

Lanier pioneered virtual reality, but he is also a musician, with a strong feeling for the creativity of the individual. He strongly emphasizes the need for people to be paid for their creations. The aggregation of data about people is stealing from them, just as “mash-ups” of pieces of music pay no royalties to the individual musician. The big mistake made with the idea of open source and sharing was the assumption that everybody has the same computing power. Lanier says:

“The old ideas about information being free in the information age ended up screwing over everybody except the owners of the very biggest computers. The biggest computers turned into spying and behavior modification operations, which concentrated wealth and power.

Sharing information freely, without traditional rewards like royalties or paychecks, was supposed to create opportunities for brave, creative individuals. Instead, I have watched each successive generation of young journalists, artists, musicians, photographers, and writers face harsher and harsher odds. The perverse effect of opening up information has been that the status of a young person’s parents matters more and more, since it’s so hard to make one’s way.”

“As the French economist Thomas Piketty has shown in his massively documented and bestselling Capital in the 21st Century, more and more wealth is being concentrated in the hands of the few. According to him, this tendency is inbuilt in capitalism.”

So, who owns the future, or rather, who should own it?

“If we keep on doing things as we are, the answer is clear: The future will be narrowly owned by the people who run the biggest, best connected computers, which will usually be found in giant, remote cloud computing farms.

The answer I am promoting instead is that the future should be owned broadly by everyone who contributes data to the cloud, as robots and other machines animated by cloud software start to drive our vehicles, care for us when we’re sick, mine our natural resources, create the physical objects we use, and so on, as the 21st century progresses.

Right now, most people are only gaining informal benefits from advances in technology, like free internet services, while those who own the biggest computers are concentrating formal benefits to an unsustainable degree.”

In other words, Lanier is here addressing a central problem that others have also commented on and sought explanations for: the increase in inequality that is taking place. He approaches it from the point of view of who has the technological power to make money. He uses the term “Siren Servers” (for, e.g., Google) to indicate the temptations such servers present to individuals to submit to ever-increasing connectivity and data collection about themselves. He might add that digital media, especially cell phones, have been shown to become easily addictive – just as in other cases of addiction, a reward center in the brain is being stimulated.

The rising inequality is a serious and fundamental social problem, even without the technological development that hugely magnifies its impact. As the French economist Thomas Piketty has shown in his massively documented and bestselling Capital in the 21st Century, more and more wealth is being concentrated in the hands of the few. According to him, this tendency is inbuilt in capitalism.28 He suggests that we are in fact on the way to a ‘patrimonial society’, where inherited wealth (rather than talent and merit) will increasingly come to dominate the economy – which can result in political upheaval, that is, if governments do not do something. In other words, beyond all the tech talk and AI hype, in the 21st century we are back to the very basic problems of political economy.

Notes

  1. Rudi Volti, Society and Technological Change 7th edition (New York: Worth Publishers, 2014).
  2. Volti, Society and Technological Change.
  3. Carina Kolodny, “Stephen Hawking is terrified of artificial intelligence,” The Huffington Post, 5th May 2014 http://www.huffingtonpost.com/2014/05/05/stephen-hawking-artificial-intelligence_n_5267481.html.
  4. Juliette Garside, “Google, Facebook and Amazon race to blur lines between man and machine,” The Guardian, 28th April 2014 http://www.theguardian.com/technology/2014/apr/28/google-facebook-amazon-transcendence-artificial-intelligence.
  5. Jaron Lanier, Who Owns the Future? (New York: Simon and Schuster, 2013).
  6. Lori Andrews, “Is your cell phone listening in on you?” Time, 15th December 2011 http://ideas.time.com/2011/12/15/is-your-cell-phone-listening-in-on-you/.
  7. Robert J. Gordon, “Is US economic growth over? Faltering innovation confronts the six headwinds,” VoxEU, 11 September 2012 http://www.voxeu.org/article/us-economic-growth-over.
  8. Tyler Cowen, The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better (New York: Penguin Group, 2011).
  9. Cowen, The Great Stagnation.
  10. Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (New York: W. W. Norton, 2014).
  11. Chris Anderson, Makers: The New Industrial Revolution (New York: Crown Business, 2014).
  12. Peter Diamandis and Steven Kotler, Abundance: The Future is Better than You Think (New York: Free Press, 2012).
  13. Doug Henton, “The Optimism/Pessimism Debate: Whither Technology” March 29, 2013 http://doughenton.tumblr.com/post/46608054318/the-optimism-pessimism-debate-whither-technology.
  14. Henton, “The Optimism/Pessimism Debate”.
  15. Andrew Smith, “Silicon Valley: an army of geeks and ‘coders’ shaping our future,” The Observer, 10th May 2014 http://www.theguardian.com/technology/2014/may/12/silicon-valley-geeks-coders-programmers.
  16. Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (New York: Penguin Group, 2005).
  17. Lanier, Who Owns the Future?, 325.
  18. Bill Joy, “Why the Future Doesn’t Need Us,” Wired magazine, April 2000 www.wired.com/wired/archive/8.04/joy.html.
  19. Kolodny, “Stephen Hawking is terrified of artificial intelligence”.
  20. Kevin Kelly, What Technology Wants (New York: Penguin Books, 2010).
  21. Ingmar Persson and Julian Savulescu, Unfit for the Future: The Need for Moral Enhancement (Oxford: Oxford University Press, 2012).
  22. Daniel Kevles and Leroy Hood, The Code of Codes: Scientific and Social Issues in the Human Genome Project (Cambridge: Harvard University Press, 1992).
  23. Frans de Waal, Primates and Philosophers (Princeton: Princeton University Press, 2009).
  24. Frans de Waal, The Age of Empathy (New York: Harmony Press, 2009).
  25. Robert Wright, The Moral Animal (New York: Random House, 1994).
  26. Jaron Lanier, You Are Not A Gadget (New York: Simon and Schuster, 2010).
  27. Cass Sunstein, Infotopia: How Many Minds Produce Knowledge (Oxford: Oxford University Press, 2006).
  28. Thomas Piketty, Capital in the Twenty-First Century (Cambridge: Belknap Press, 2014).

* See Interview with Charlie Rose, PBS, March 19, 2014 http://www.bing.com/videos/watch/video/ukraine-jaron-lanier-yancey-strickler/17w9xmljt

About the Author(s)

Ullica Segerstrale
Professor of Sociology, Illinois Institute of Technology; Fellow, WAAS