

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons, and that if its development is not managed carefully humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude. Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories of people and gorillas: both primates, but with one species dominating the planet and the other at the edge of annihilation. “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

At the age of forty-two, Bostrom has become a philosopher of remarkable influence. “Superintelligence” is only his most visible response to ideas that he encountered two decades ago, when he became a transhumanist, joining a fractious quasi-utopian movement united by the expectation that accelerating advances in technology will result in drastic changes (social, economic, and, most strikingly, biological) which could converge at a moment of epochal transformation known as the Singularity. Bostrom is arguably the leading transhumanist philosopher today, a position achieved by bringing order to ideas that might otherwise never have survived outside the half-crazy Internet ecosystem where they formed. Some of Bostrom’s cleverest arguments resemble Swiss Army knives: they are simple, toylike, a pleasure to consider, with colorful exteriors and precisely calibrated mechanics. He rarely makes concrete predictions, but, by relying on probability theory, he seeks to tease out insights where insights seem impossible.
