Nick Bostrom
[Photograph: Nick Bostrom, 2014]
Born: Niklas Boström, 10 March 1973 (age 45), Helsingborg, Sweden
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic philosophy
Institutions: St Cross College, Oxford; Future of Humanity Institute
Thesis: Observational Selection Effects and Probability
Main interests: Philosophy of artificial intelligence; bioethics
Notable ideas: Anthropic bias; reversal test; simulation hypothesis; existential risk; singleton; ancestor simulation; information hazard; infinitarian paralysis; self-indication assumption; self-sampling assumption
Website: NickBostrom.com

Nick Bostrom (/ˈbɒstrəm/; Swedish: Niklas Boström [²buːstrœm]; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011 he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford.

Bostrom is the author of over 200 publications, including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller, and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom believes that artificial general intelligence could bring great benefits, but warns that it might rapidly transform itself into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making it an absolute priority to solve the control problem in advance. Although his book on superintelligence was recommended by both Elon Musk and Bill Gates, Bostrom has expressed frustration that reactions to its thesis typically fall into two camps: one calls his recommendations absurdly alarmist because the creation of superintelligence is infeasible, while the other deems them futile because superintelligence would be uncontrollable. Bostrom notes that both lines of reasoning converge on inaction rather than on trying to solve the control problem while there may still be time.

Biography