Yes, it sounds severe. But an article published yesterday in The Atlantic suggests that synthetic biology, along with nuclear weapons and artificial intelligence, may be among the greatest risk factors for human extinction.
Nick Bostrom, the subject of the article, is no Luddite (though we’d like him just fine if he were); he is a professor of philosophy and director of Oxford’s Future of Humanity Institute. Bostrom argues that humans tend to underestimate possible risks from our technological capacity:
“I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.”
When asked, “What technology, or potential technology, worries you the most?”, Bostrom replies:
In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain—you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we’re also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.
It is for precisely this reason that we are co-hosting an upcoming evening of discussion and debate on March 29 in Berkeley about synthetic biology and its potential impacts.
– Synbiowatch
We’re Underestimating the Risk of Human Extinction
By Ross Andersen, cross-posted from The Atlantic
Mar 6 2012, 1:39 PM ET – Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.
Bostrom, who directs Oxford’s Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.
Despite his concerns about the risks posed to humans by technological progress, Bostrom is no Luddite. In fact, he is a longtime advocate of transhumanism—the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try and determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.
Some have argued that we ought to be directing our resources toward humanity’s existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?
To read the complete interview with Nick Bostrom on the future of humanity in the shadow of our advanced technological capacity, visit The Atlantic.