Nick Bostrom
Bostrom in Oxford, 2014

Born: Niklas Boström, 10 March 1973 (age 48), Helsingborg, Sweden
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic philosophy
Institutions: St Cross College, Oxford; Future of Humanity Institute
Thesis: Observational Selection Effects and Probability
Main interests: Philosophy of artificial intelligence; bioethics
Notable ideas: Anthropic bias; reversal test; simulation hypothesis; existential risk; singleton; ancestor simulation; information hazard; infinitarian paralysis; self-indication assumption; self-sampling assumption
Website: nickbostrom.com

Nick Bostrom (/ˈbɒstrəm/ BOST-rəm; Swedish: Niklas Boström [ˈnɪ̌kːlas ˈbûːstrœm]; born 10 March 1973) is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at the University of Oxford. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom has been highly influential in the emergence of concern about AI in the rationalist community.

Bostrom is the author of more than 200 publications and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects. In 2017, he co-signed a list of 23 principles that all AI development should follow.

Biography