Monday, April 24, 2017

Ray Kurzweil

From Wikipedia, the free encyclopedia
Ray Kurzweil
Born: Raymond Kurzweil, February 12, 1948 (age 69), Queens, New York City, U.S.
Nationality: American
Alma mater: Massachusetts Institute of Technology (B.S.)
Occupation: Author, entrepreneur, futurist and inventor
Employer: Google Inc.
Spouse: Sonya Rosenwald Fenster (m. 1975)[1]
Awards: Grace Murray Hopper Award (1978); National Medal of Technology (1999)
Website: KurzweilAI.net

Raymond "Ray" Kurzweil (/ˈkɜːrzwl/ KURZ-wyl; born February 12, 1948) is an American author, computer scientist, inventor and futurist. Aside from futurism, he is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements, and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

Kurzweil was the principal inventor of the first charge-coupled device flatbed scanner,[2] the first omni-font optical character recognition,[2] the first print-to-speech reading machine for the blind,[3] the first commercial text-to-speech synthesizer,[4] the Kurzweil K250 music synthesizer capable of simulating the sound of the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.[5]

Kurzweil received the 1999 National Medal of Technology and Innovation, the United States' highest honor in technology, from President Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001,[6] the world's largest prize for innovation, and in 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received twenty-one honorary doctorates, and honors from three U.S. presidents. Kurzweil has been described as a "restless genius"[7] by The Wall Street Journal and "the ultimate thinking machine"[8] by Forbes. PBS included Kurzweil as one of 16 "revolutionaries who made America"[9] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[10]

Kurzweil has written seven books, five of which have been national bestsellers. The Age of Spiritual Machines has been translated into 9 languages and was the #1 best-selling book on Amazon in science. Kurzweil's book The Singularity Is Near was a New York Times bestseller, and has been the #1 book on Amazon in both science and philosophy. Kurzweil speaks widely to audiences both public and private and regularly delivers keynote speeches at industry conferences like DEMO, SXSW and TED. He maintains the news website KurzweilAI.net, which has over three million readers annually.[5]

Life, inventions, and business career

Early life

Ray Kurzweil grew up in the New York City borough of Queens. He was born to secular Jewish parents who had emigrated from Austria just before the onset of World War II. Through Unitarian Universalism, he was exposed to a diversity of religious faiths during his upbringing. His Unitarian church held a philosophy of many paths to the truth: its religious education consisted of spending six months on a single religion before moving on to the next. His father was a musician, a noted conductor, and a music educator. His mother was a visual artist.

Kurzweil decided he wanted to be an inventor at the age of five.[11] As a young boy, he kept an inventory of parts from various construction toys he had been given and old electronic gadgets he had collected from neighbors. In his youth, Kurzweil was an avid reader of science fiction; between the ages of eight and ten, he read the entire Tom Swift Jr. series. At the age of seven or eight, he built a robotic puppet theater and a robotic game. He was involved with computers by the age of twelve (in 1960), when only a dozen computers existed in all of New York City, and built computing devices and statistical programs for the predecessor of Head Start.[12] At fourteen, Kurzweil wrote a paper detailing his theory of the neocortex.[13] His parents were involved with the arts, and he is quoted in the documentary Transcendent Man as saying that the household always produced discussions about the future and technology.

Kurzweil attended Martin Van Buren High School. During class, he often held his textbook open to appear to be participating while working on his own projects hidden behind it. His uncle, an engineer at Bell Labs, taught young Kurzweil the basics of computer science.[14] In 1963, at age fifteen, he wrote his first computer program.[15] He created pattern-recognition software that analyzed the works of classical composers and then synthesized its own songs in similar styles. In 1965, he was invited to appear on the CBS television program I've Got a Secret, where he performed a piano piece composed by a computer he had built.[16] Later that year, he won first prize in the International Science Fair for the invention.[17] His submission of this first computer program, alongside several other projects, to the Westinghouse Science Talent Search made him one of its national winners, and he was personally congratulated by President Lyndon B. Johnson during a White House ceremony. These activities collectively impressed upon Kurzweil the belief that nearly any problem could be overcome.[18]

Mid-life

While in high school, Kurzweil had corresponded with Marvin Minsky and was invited to visit him at MIT, which he did. Kurzweil also visited Frank Rosenblatt at Cornell.[19]

He went to MIT to study with Marvin Minsky and obtained a B.S. in computer science and literature there in 1970. Within his first year and a half, he had taken all of the computer programming courses (eight or nine) that MIT offered.

In 1968, during his sophomore year at MIT, Kurzweil started a company that used a computer program to match high school students with colleges. The program, called the Select College Consulting Program, was designed by him and compared thousands of different criteria about each college with questionnaire answers submitted by each student applicant. Around this time, he sold the company to Harcourt, Brace & World for $100,000 (roughly $670,000 in 2013 dollars) plus royalties.[20]

In 1974, Kurzweil founded Kurzweil Computer Products, Inc. and led development of the first omni-font optical character recognition system, a computer program capable of recognizing text written in any normal font. Before that time, scanners had only been able to read text written in a few fonts. He decided that the best application of this technology would be to create a reading machine, which would allow blind people to understand written text by having a computer read it to them aloud. However, this device required the invention of two enabling technologies—the CCD flatbed scanner and the text-to-speech synthesizer. Development of these technologies was completed at other institutions such as Bell Labs, and on January 13, 1976, the finished product was unveiled during a news conference headed by him and the leaders of the National Federation of the Blind. Called the Kurzweil Reading Machine, the device covered an entire tabletop.

Kurzweil's next major business venture began in 1978, when Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload paper legal and news documents onto its nascent online databases.

Kurzweil sold Kurzweil Computer Products to Xerox in 1980 and remained a consultant for Xerox until 1995. The business became a Xerox subsidiary later known as ScanSoft and is now Nuance Communications.

Kurzweil's next business venture was in the realm of electronic music technology. After a 1982 meeting with Stevie Wonder, in which the latter lamented the divide in capabilities and qualities between electronic synthesizers and traditional musical instruments, Kurzweil was inspired to create a new generation of music synthesizers capable of accurately duplicating the sounds of real instruments. Kurzweil Music Systems was founded in the same year, and in 1984, the Kurzweil K250 was unveiled. The machine was capable of imitating a number of instruments, and in tests musicians were unable to discern the difference between the Kurzweil K250 in piano mode and a normal grand piano.[21] The recording and mixing abilities of the machine, coupled with its abilities to imitate different instruments, made it possible for a single user to compose and play an entire orchestral piece.

Kurzweil Music Systems was sold to South Korean musical instrument manufacturer Young Chang in 1990. As with Xerox, Kurzweil remained as a consultant for several years. Hyundai acquired Young Chang in 2006 and in January 2007 appointed Raymond Kurzweil as Chief Strategy Officer of Kurzweil Music Systems.[22]

Later life

Concurrent with Kurzweil Music Systems, Kurzweil created the company Kurzweil Applied Intelligence (KAI) to develop computer speech recognition systems for commercial use. The first product, which debuted in 1987, was an early speech recognition program.

Kurzweil started Kurzweil Educational Systems in 1996 to develop new pattern-recognition-based computer technologies to help people with disabilities such as blindness, dyslexia and attention-deficit hyperactivity disorder (ADHD) in school. Products include the Kurzweil 1000 text-to-speech converter software program, which enables a computer to read electronic and scanned text aloud to blind or visually impaired users, and the Kurzweil 3000 program, which is a multifaceted electronic learning system that helps with reading, writing, and study skills.
(Photo: Raymond Kurzweil at the Singularity Summit at Stanford University in 2006)

During the 1990s, Kurzweil founded the Medical Learning Company.[23] The company's products included an interactive computer education program for doctors and a computer-simulated patient. Around the same time, Kurzweil started KurzweilCyberArt.com, a website featuring computer programs to assist the creative art process. The site used to offer free downloads of a program called AARON—a visual art synthesizer developed by Harold Cohen—and of "Kurzweil's Cybernetic Poet", which automatically creates poetry. During this period he also started KurzweilAI.net, a website devoted to showcasing news of scientific developments, publicizing the ideas of high-tech thinkers and critics alike, and promoting futurist-related discussion among the general population through the Mind-X forum.

In 1999, Kurzweil created a hedge fund called "FatKat" (Financial Accelerating Transactions from Kurzweil Adaptive Technologies), which began trading in 2006. He has stated that the ultimate aim is to improve the performance of FatKat's A.I. investment software program, enhancing its ability to recognize patterns in "currency fluctuations and stock-ownership trends."[24] He predicted in his 1999 book, The Age of Spiritual Machines, that computers will one day prove superior to the best human financial minds at making profitable investment decisions. In June 2005, Kurzweil introduced the "Kurzweil-National Federation of the Blind Reader" (K-NFB Reader)—a pocket-sized device consisting of a digital camera and computer unit. Like the Kurzweil Reading Machine of almost 30 years before, the K-NFB Reader is designed to aid blind people by reading written text aloud. The newer machine is portable and scans text through digital camera images, while the older machine is large and scans text through flatbed scanning.

In December 2012, Kurzweil was hired by Google in a full-time position to "work on new projects involving machine learning and language processing".[25] He was personally hired by Google co-founder Larry Page.[26] Larry Page and Kurzweil agreed on a one-sentence job description: "to bring natural language understanding to Google".[27]

He received a Technical Grammy on February 8, 2015, recognizing his diverse technical and creative accomplishments; for the purposes of the Grammy, the most notable was the aforementioned Kurzweil K250.[28]

Postmortem life

Kurzweil has joined the Alcor Life Extension Foundation, a cryonics company. In the event of his declared death, Kurzweil plans to be perfused with cryoprotectants, vitrified in liquid nitrogen, and stored at an Alcor facility in the hope that future medical technology will be able to repair his tissues and revive him.[29]

Personal life

Kurzweil is agnostic about the existence of a soul.[30] On the possibility of divine intelligence, Kurzweil is quoted as saying, "Does God exist? I would say, 'Not yet.'"[31]

Kurzweil married Sonya Rosenwald Fenster in 1975 and has two children.[32] Sonya Kurzweil is a psychologist in private practice and clinical instructor in Psychology at Harvard Medical School; she is interested in the way that digital media can be integrated into the lives of children and teens.[33]

He has a son, Ethan Kurzweil, who is a venture capitalist,[34] and a daughter, Amy Kurzweil,[35] who is a writer and cartoonist.

Ray Kurzweil is a cousin of writer Allen Kurzweil.

Creative approach

Kurzweil said "I realize that most inventions fail not because the R&D department can’t get them to work, but because the timing is wrong‍—‌not all of the enabling factors are at play where they are needed. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment."[36][37]

For the past several decades, Kurzweil's most effective and common approach to creative work has been to use the lucid dreamlike state that immediately precedes waking. He claims to have constructed inventions, solved difficult problems (algorithmic, business-strategy, organizational, and interpersonal), and written speeches in this state.[19]

Books

Kurzweil's first book, The Age of Intelligent Machines, was published in 1990. The nonfiction work discusses the history of computer artificial intelligence (AI) and forecasts future developments. Other experts in the field of AI contribute heavily to the work in the form of essays. The Association of American Publishers named it the Most Outstanding Computer Science Book of 1990.[38]

In 1993, Kurzweil published a book on nutrition called The 10% Solution for a Healthy Life. The book's main idea is that high levels of fat intake are the cause of many health disorders common in the U.S., and thus that cutting fat consumption down to 10% of the total calories consumed would be optimal for most people.
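
As a rough worked example of that guideline (the 2,000 kcal reference diet and the 9 kcal-per-gram energy density of fat are standard reference values, not figures from the book):

```python
# Worked example of the 10%-of-calories-from-fat guideline.
# Assumptions (mine, not the book's): a 2,000 kcal/day reference diet
# and the standard 9 kcal per gram of dietary fat.
daily_kcal = 2000
fat_kcal = 0.10 * daily_kcal   # 200 kcal/day allowed from fat
fat_grams = fat_kcal / 9       # ~22 g of fat per day
print(f"{fat_grams:.0f} g of fat per day")
```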

In 1999, Kurzweil published The Age of Spiritual Machines, which further elucidates his theories regarding the future of technology, which themselves stem from his analysis of long-term trends in biological and technological evolution. Much emphasis is on the likely course of AI development, along with the future of computer architecture.

Kurzweil's next book, published in 2004, returned to human health and nutrition. Fantastic Voyage: Live Long Enough to Live Forever was co-authored by Terry Grossman, a medical doctor and specialist in alternative medicine.

The Singularity Is Near, published in 2005, was later made into a movie starring Pauley Perrette of NCIS. In February 2007, Ptolemaic Productions acquired the rights to The Singularity Is Near, The Age of Spiritual Machines and Fantastic Voyage, including the rights to film Kurzweil's life and ideas for the documentary film Transcendent Man, which was directed by Barry Ptolemy.

Transcend: Nine Steps to Living Well Forever,[39] a follow-up to Fantastic Voyage, was released on April 28, 2009.

Kurzweil's book How to Create a Mind: The Secret of Human Thought Revealed was released on November 13, 2012.[40] In it, Kurzweil describes his Pattern Recognition Theory of Mind, the theory that the neocortex is a hierarchical system of pattern recognizers, and argues that emulating this architecture in machines could lead to an artificial superintelligence.[41]

Movies

In 2010, Kurzweil wrote and co-produced a movie directed by Anthony Waller, The Singularity Is Near: A True Story About the Future, based in part on his 2005 book The Singularity Is Near. Part fiction, part non-fiction, the film combines interviews with 20 big thinkers, such as Marvin Minsky, with a B-line narrative in which a computer avatar (Ramona) saves the world from self-replicating microscopic robots. In addition, an independent feature-length documentary about Kurzweil, his life, and his ideas, Transcendent Man, was made by filmmakers Barry and Felicia Ptolemy, who followed Kurzweil on his global speaking tour. Premiered in 2009 at the Tribeca Film Festival, Transcendent Man documents Kurzweil's quest to reveal mankind's ultimate destiny and explores many of the ideas found in his New York Times bestselling book The Singularity Is Near, including his concept of exponential growth, radical life extension, and how we will transcend our biology. The Ptolemys documented Kurzweil's stated goal of bringing back his late father using AI. The film also features critics who argue against Kurzweil's predictions.

In 2010, an independent documentary film called Plug & Pray premiered at the Seattle International Film Festival, in which Kurzweil and one of his major critics, the late Joseph Weizenbaum, argue about the benefits of eternal life.

The feature-length documentary film The Singularity by independent filmmaker Doug Wolens (released at the end of 2012), showcasing Kurzweil, has been acclaimed as "a large-scale achievement in its documentation of futurist and counter-futurist ideas" and "the best documentary on the Singularity to date."[42]

Kurzweil frequently comments on the application of cell-size nanotechnology to the workings of the human brain and how this could be applied to building AI. While being interviewed for a February 2009 issue of Rolling Stone magazine, Kurzweil expressed a desire to construct a genetic copy of his late father, Fredric Kurzweil, from DNA at his grave site. This feat would be achieved by exhuming and extracting DNA, constructing a clone of Fredric, and retrieving memories and recollections of his father from Ray's own mind. Kurzweil has kept all of his father's records, notes, and pictures in order to preserve as much of his father as he can. Ray is known for taking over 200 pills a day, meant to reprogram his biochemistry; this, according to him, is only a precursor to nanoscale devices that will eventually replace blood cells and update themselves against specific pathogens to improve the immune system.

Views

The Law of Accelerating Returns

In his 1999 book The Age of Spiritual Machines, Kurzweil proposed "The Law of Accelerating Returns", according to which the rate of change in a wide variety of evolutionary systems (including the growth of technologies) tends to increase exponentially.[43] He gave further focus to this issue in a 2001 essay entitled "The Law of Accelerating Returns", which proposed an extension of Moore's law to a wide variety of technologies, and used this to argue in favor of Vernor Vinge's concept of a technological singularity.[44] Kurzweil suggests that this exponential technological growth is counter-intuitive to the way our brains perceive the world—since our brains were biologically inherited from humans living in a world that was linear and local—and, as a consequence, he claims it has encouraged great skepticism in his future projections.
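
The intuition gap Kurzweil describes is easy to demonstrate numerically. A minimal sketch (the doubling process below is illustrative, not Kurzweil's data): extrapolating linearly from the first steps of an exponentially improving quantity underestimates it by many orders of magnitude.

```python
# Illustrative sketch: linear intuition vs. exponential growth.
# The quantity doubles each period (an assumption for illustration,
# e.g., price-performance of computation in Kurzweil's argument).
periods = 30
actual = [2 ** t for t in range(periods)]                 # exponential path
slope = actual[1] - actual[0]                             # slope at the start
linear = [actual[0] + slope * t for t in range(periods)]  # linear projection

print(f"actual after {periods} periods: {actual[-1]:,}")  # 536,870,912
print(f"linear projection:             {linear[-1]:,}")   # 30
```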

Stance on the future of genetics, nanotechnology, and robotics

Kurzweil is working with the Army Science Board to develop a rapid response system to deal with the possible abuse of biotechnology. He suggests that the same technologies that are empowering us to reprogram biology away from cancer and heart disease could be used by a bioterrorist to reprogram a biological virus to be more deadly, communicable, and stealthy. However, he suggests that we have the scientific tools to successfully defend against these attacks, similar to the way we defend against computer software viruses. He has testified before Congress on the subject of nanotechnology, advocating that nanotechnology has the potential to solve serious global problems such as poverty, disease, and climate change, a case he also made in the article "Nanotech Could Give Global Warming a Big Chill".[45]

In media appearances, Kurzweil has stressed the extreme potential dangers of nanotechnology[16] but argues that in practice, progress cannot be stopped because that would require a totalitarian system, and any attempt to do so would drive dangerous technologies underground and deprive responsible scientists of the tools needed for defense. He suggests that the proper place of regulation is to ensure that technological progress proceeds safely and quickly, but does not deprive the world of profound benefits. He stated, "To avoid dangers such as unrestrained nanobot replication, we need relinquishment at the right level and to place our highest priority on the continuing advance of defensive technologies, staying ahead of destructive technologies. An overall strategy should include a streamlined regulatory process, a global program of monitoring for unknown or evolving biological pathogens, temporary moratoriums, raising public awareness, international cooperation, software reconnaissance, and fostering values of liberty, tolerance, and respect for knowledge and diversity."[46]

Health and aging

Kurzweil admits that he cared little for his health until age 35, when he was found to have glucose intolerance, an early form of type II diabetes (a major risk factor for heart disease). Kurzweil then found a doctor, Terry Grossman, M.D., who shares his somewhat unconventional beliefs, and with him developed an extreme regimen involving hundreds of pills, chemical intravenous treatments, red wine, and various other methods in an attempt to live longer. Kurzweil was ingesting "250 supplements, eight to 10 glasses of alkaline water and 10 cups of green tea" every day and drinking several glasses of red wine a week in an effort to "reprogram" his biochemistry.[47] Lately, he has cut the number of supplement pills down to 150.[30]

Kurzweil has made a number of bold claims for his health regimen. In his book The Singularity Is Near, he claimed that he brought his cholesterol level down from the high 200s to 130, raised his HDL (high-density lipoprotein) from below 30 to 55, and lowered his homocysteine from an unhealthy 11 to a much safer 6.2. He also claimed that his C-reactive protein "and all of my other indexes (for heart disease, diabetes, and other conditions) are at ideal levels." He further claimed that his health regimen, including dramatically reducing his fat intake, successfully "reversed" his type 2 diabetes. (The Singularity Is Near, p. 211)

He has written three books on the subjects of nutrition, health, and immortality: The 10% Solution for a Healthy Life, Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever. In all of them, he recommends that other people emulate his health practices to the best of their abilities. Kurzweil and his current "anti-aging" doctor, Terry Grossman, maintain two websites promoting their first and second books.

Kurzweil asserts that in the future, everyone will live forever.[48] In a 2013 interview, he said that in 15 years, medical technology could add more than a year to one's remaining life expectancy for each year that passes, and we could then "outrun our own deaths". Among other things, he has supported the SENS Research Foundation's approach to finding a way to repair aging damage, and has encouraged the general public to hasten their research by donating.[27][49]

Nassim Nicholas Taleb, Lebanese American essayist, scholar and statistician, criticized Kurzweil's approach of taking multiple pills to achieve longevity in his book Antifragile.[50]

Kurzweil's view of the human neocortex

According to Kurzweil, technologists will create synthetic neocortexes based on the operating principles of the human neocortex, with the primary purpose of extending our own neocortexes. He believes that the neocortex of an adult human consists of approximately 300 million pattern recognizers, and draws on the commonly accepted view that the primary anatomical difference allowing humans' superior intellectual abilities over other primates was the evolution of a larger neocortex. He claims that the six-layered neocortex deals with increasing abstraction from one layer to the next: at the low levels, the neocortex may seem cold and mechanical because it can only make simple decisions, but at the higher levels of the hierarchy it deals with concepts like being funny, being sexy, expressing a loving sentiment, creating or understanding a poem, and so on. He believes these higher levels of the human neocortex were the enabling factor for the human development of language, technology, art, and science. He stated, "If the quantitative improvement from primates to humans with the big forehead was the enabling factor to allow for language, technology, art, and science, what kind of qualitative leap can we make with another quantitative increase? Why not go from 300 million pattern recognizers to a billion?"
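
To make the hierarchy idea concrete, here is a toy sketch (my construction for illustration, not Kurzweil's actual model or parameters): recognizers at each layer fire when they see their expected sequence of lower-level labels, so each layer operates at a higher level of abstraction than the one below.

```python
# Toy hierarchical pattern recognition in the spirit of the Pattern
# Recognition Theory of Mind (illustrative only).
from typing import List

class PatternRecognizer:
    def __init__(self, name: str, pattern: List[str]):
        self.name = name        # label emitted when the pattern is seen
        self.pattern = pattern  # expected sequence of lower-level labels

    def matches(self, inputs: List[str]) -> bool:
        return inputs == self.pattern

def recognize(layer: List[PatternRecognizer], inputs: List[str]) -> List[str]:
    """Return the labels of every recognizer in this layer that fires."""
    return [r.name for r in layer if r.matches(inputs)]

# Layer 1 turns strokes into letters; layer 2 turns letters into a word.
layer1 = [PatternRecognizer("A", ["/", "\\", "-"]),
          PatternRecognizer("T", ["-", "|"])]
layer2 = [PatternRecognizer("AT", ["A", "T"])]

letters = recognize(layer1, ["/", "\\", "-"]) + recognize(layer1, ["-", "|"])
print(recognize(layer2, letters))  # ['AT'] -- abstraction rises per layer
```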

Encouraging futurism and transhumanism

Kurzweil's standing as a futurist and transhumanist has led to his involvement in several singularity-themed organizations. In December 2004, Kurzweil joined the advisory board of the Machine Intelligence Research Institute.[51] In October 2005, Kurzweil joined the scientific advisory board of the Lifeboat Foundation.[52] On May 13, 2006, Kurzweil was the first speaker at the Singularity Summit at Stanford University in Palo Alto, California.[53] In May 2013, Kurzweil was the keynote speaker at the 2013 Research, Innovation, Start-up and Employment (RISE) international conference in Seoul, South Korea.

In February 2009, Kurzweil, in collaboration with Google and the NASA Ames Research Center in Mountain View, California, announced the creation of the Singularity University training center for corporate executives and government officials. The University's self-described mission is to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges". Using Vernor Vinge's Singularity concept as a foundation, the university offered its first nine-week graduate program to 40 students in June 2009.

Predictions

Past predictions

Kurzweil's first book, The Age of Intelligent Machines, presented his ideas about the future. It was written from 1986 to 1989 and published in 1990. Building on Ithiel de Sola Pool's "Technologies of Freedom" (1983), Kurzweil claims to have forecast the dissolution of the Soviet Union due to new technologies such as cellular phones and fax machines disempowering authoritarian governments by removing state control over the flow of information.[54] In the book, Kurzweil also extrapolated preexisting trends in the improvement of computer chess software performance to predict that computers would beat the best human players "by the year 2000".[55] In May 1997, chess World Champion Garry Kasparov was defeated by IBM's Deep Blue computer in a well-publicized chess match.[56]

Perhaps most significantly, Kurzweil foresaw the explosive growth in worldwide Internet use that began in the 1990s. At the time of the publication of The Age of Intelligent Machines, there were only 2.6 million Internet users in the world,[57] and the medium was unreliable, difficult to use, and deficient in content. He also stated that the Internet would explode not only in the number of users but in content as well, eventually granting users access "to international networks of libraries, data bases, and information services". Additionally, Kurzweil claims to have correctly foreseen that the preferred mode of Internet access would inevitably be through wireless systems, and he was also correct to estimate that the latter would become practical for widespread use in the early 21st century.

In October 2010, Kurzweil released his report, "How My Predictions Are Faring", in PDF format,[58] which analyzes the predictions he made in his books The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005). Of the 147 total predictions, Kurzweil claims that 115 were "entirely correct", 12 "essentially correct", 17 "partially correct", and only 3 "wrong". Counting the "entirely" and "essentially" correct predictions together, Kurzweil's claimed accuracy rate comes to 86%.
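
The quoted accuracy rate follows directly from the counts above; a one-line check (the grouping of "entirely" plus "essentially" correct is Kurzweil's own, as described in the report):

```python
# Verifying the 86% figure from the counts given above.
entirely, essentially, partially, wrong = 115, 12, 17, 3
total = entirely + essentially + partially + wrong   # 147 predictions
rate = (entirely + essentially) / total
print(f"{rate:.1%}")   # 86.4%, which rounds to the quoted 86%
```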

Daniel Lyons, writing in Newsweek magazine, criticized Kurzweil for some of his predictions that turned out to be wrong, such as the economy continuing to boom from the 1998 dot-com era through 2009, a US company having a market capitalization of more than $1 trillion, a supercomputer achieving 20 petaflops, speech recognition being in widespread use, and cars that drive themselves using sensors installed in highways, all by 2009.[59] To the charge that a 20-petaflop supercomputer was not produced in the time he predicted, Kurzweil responded that he considers Google a giant supercomputer, and that it is indeed capable of 20 petaflops.[59]

Kurzweil's predictions for 2009 were mostly inaccurate, claims Forbes magazine. For example, Kurzweil predicted, "The majority of text is created using continuous speech recognition." This is not the case.[60]

Future predictions

In 1999, Kurzweil published a second book titled The Age of Spiritual Machines, which goes into more depth explaining his futurist ideas. The third and final part of the book is devoted to predictions over the coming century, from 2009 through 2099. In The Singularity Is Near he makes fewer concrete short-term predictions, but includes many longer-term visions.

He states that with radical life extension will come radical life enhancement. He says he is confident that within 10 years we will have the option to spend some of our time in 3D virtual environments that appear just as real as reality, though these will not yet be made possible via direct interaction with our nervous system. "If you look at video games and how we went from Pong to the virtual reality we have available today, it is highly likely that immortality in essence will be possible." He believes that 20 to 25 years from now, we will have millions of blood-cell-sized devices, known as nanobots, inside our bodies fighting diseases and improving our memory and cognitive abilities.

Kurzweil says that a machine will pass the Turing test by 2029, and that around 2045, "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating". He states that humans will be a hybrid of biological and non-biological intelligence that becomes increasingly dominated by its non-biological component. He stresses: "AI is not an intelligent invasion from Mars. These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are. So over the next few decades our human-machine civilization will become increasingly dominated by its non-biological component."

In Transcendent Man, Kurzweil states, "We humans are going to start linking with each other and become a metaconnection; we will all be connected and all be omnipresent, plugged into this global network that is connected to billions of people, and filled with data."[61] Kurzweil has stated in a press conference that we are the only species that goes beyond its limitations: "we didn't stay in the caves, we didn't stay on the planet, and we're not going to stay with the limitations of our biology". In the documentary about the singularity, he is quoted as saying, "I think people are fooling themselves when they say they have accepted death".

In 2008, Kurzweil said in an expert panel in the National Academy of Engineering that solar power will scale up to produce all the energy needs of Earth's people in 20 years. According to Kurzweil, we only need to capture 1 part in 10,000 of the energy from the Sun that hits Earth's surface to meet all of humanity's energy needs.[62]
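
A rough order-of-magnitude check with standard reference values (the round numbers below are my assumptions, not Kurzweil's) lands on the same order of magnitude as his 1-in-10,000 figure:

```python
# Order-of-magnitude check of the "1 part in 10,000" claim.
# Assumed reference values (approximate): ~90,000 TW of sunlight reaches
# Earth's surface; humanity consumes roughly 18 TW in total.
surface_solar_w = 9.0e16
human_demand_w = 1.8e13
fraction = human_demand_w / surface_solar_w
print(f"about 1 part in {1 / fraction:,.0f}")  # about 1 part in 5,000
```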

Reception

Praise

Kurzweil was referred to as "the ultimate thinking machine" by Forbes[8] and as a "restless genius"[7] by The Wall Street Journal. PBS included Kurzweil as one of 16 "revolutionaries who made America"[9] along with other inventors of the past two centuries. Inc. magazine ranked him #8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".[10]

Criticism

Although the idea of a technological singularity is a popular concept in science fiction, some authors such as Neal Stephenson[63] and Bruce Sterling have voiced skepticism about its real-world plausibility. Sterling expressed his views on the singularity scenario in a talk at the Long Now Foundation entitled The Singularity: Your Future as a Black Hole.[64][65] Other prominent AI thinkers and computer scientists such as Daniel Dennett,[66] Rodney Brooks,[67] David Gelernter[68] and Paul Allen[69] also criticized Kurzweil's projections.

In the cover article of the December 2010 issue of IEEE Spectrum, John Rennie criticizes Kurzweil for several predictions that failed to become manifest by the originally predicted date. "Therein lie the frustrations of Kurzweil's brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable."[70]

Bill Joy, cofounder of Sun Microsystems, agrees with Kurzweil's timeline of future progress, but thinks that technologies such as AI, nanotechnology and advanced biotechnology will create a dystopian world.[71] Mitch Kapor, the founder of Lotus Development Corporation, has called the notion of a technological singularity "intelligent design for the IQ 140 people...This proposition that we're heading to this point at which everything is going to be just unimaginably different—it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me."[24]

Some critics have argued more strongly against Kurzweil and his ideas. Cognitive scientist Douglas Hofstadter has said of Kurzweil's and Hans Moravec's books: "It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid."[72] Biologist P. Z. Myers has criticized Kurzweil's predictions as being based on "New Age spiritualism" rather than science and says that Kurzweil does not understand basic biology.[73][74] VR pioneer Jaron Lanier has even described Kurzweil's ideas as "cybernetic totalism" and has outlined his views on the culture surrounding Kurzweil's predictions in an essay for Edge.org entitled One Half of a Manifesto.[42][75]

British philosopher John Gray argues that contemporary science is what magic was for ancient civilizations: it gives a sense of hope to those who are willing to do almost anything to achieve eternal life. He cites Kurzweil's Singularity as another example of a trend that has almost always been present in the history of mankind.[76]

The Brain Makers, a history of artificial intelligence written in 1994 by HP Newquist, noted that "Born with the same gift for self-promotion that was a character trait of people like P.T. Barnum and Ed Feigenbaum, Kurzweil had no problems talking up his technical prowess... Ray Kurzweil was not noted for his understatement."[77]

In a 2015 paper, William D. Nordhaus of Yale University takes an economic look at the impacts of an impending technological singularity. He comments, "There is remarkably little writing on Singularity in the modern macroeconomic literature."[78] Nordhaus supposes that the Singularity could arise from either the demand or the supply side of a market economy, but that for information technology to advance at the pace Kurzweil suggests, there would have to be significant productivity trade-offs: to devote more resources to producing supercomputers, we must decrease our production of non-information-technology goods. Using a variety of econometric methods, Nordhaus runs six supply-side tests and one demand-side test to track the macroeconomic viability of such steep rises in information technology output. Of the seven tests, only two indicated that a Singularity was economically possible, and both of those predicted it would take at least 100 years to occur.

Awards and honors

  • First place in the 1965 International Science Fair[17] for inventing the classical music synthesizing computer.
  • The 1978 Grace Murray Hopper Award from the Association for Computing Machinery. The award is given annually to one "outstanding young computer professional" and is accompanied by a $35,000 prize.[79] Kurzweil won it for his invention of the Kurzweil Reading Machine.[80]
  • In 1986, Kurzweil was named Honorary Chairman for Innovation of the White House Conference on Small Business by President Reagan.
  • In 1988, Kurzweil was named Inventor of the Year by MIT and the Boston Museum of Science.[81]
  • In 1990, Kurzweil was voted Engineer of the Year by the over one million readers of Design News Magazine and received their third annual Technology Achievement Award.[81][82]
  • The 1994 Dickson Prize in Science. One is awarded every year by Carnegie Mellon University to individuals who have "notably advanced the field of science." Both a medal and a $50,000 prize are presented to winners.[83]
  • The 1998 "Inventor of the Year" award from the Massachusetts Institute of Technology.[84]
  • The 1999 National Medal of Technology.[85] This is the highest award the President of the United States can bestow upon individuals and groups for pioneering new technologies, and the President dispenses the award at his discretion.[86] Bill Clinton presented Kurzweil with the National Medal of Technology during a White House ceremony in recognition of Kurzweil's development of computer-based technologies to help the disabled.
  • The 2000 Telluride Tech Festival Award of Technology.[87] Two other individuals also received the same honor that year. The award is presented yearly to people who "exemplify the life, times and standard of contribution of Tesla, Westinghouse and Nunn."
  • The 2001 Lemelson-MIT Prize for a lifetime of developing technologies to help the disabled and to enrich the arts.[88] Only one is awarded each year – it is given to highly successful, mid-career inventors. A $500,000 award accompanies the prize.[89]
  • Kurzweil was inducted into the National Inventors Hall of Fame in 2002 for inventing the Kurzweil Reading Machine.[90] The organization "honors the women and men responsible for the great technological advances that make human, social and economic progress possible."[91] Fifteen other people were inducted into the Hall of Fame the same year.[92]
  • The Arthur C. Clarke Lifetime Achievement Award on April 20, 2009 for lifetime achievement as an inventor and futurist in computer-based technologies.[93]
  • In 2011, Kurzweil was named a Senior Fellow of the Design Futures Council.[94]
  • In 2013, Kurzweil was honored as a Silicon Valley Visionary Award winner on June 26 by SVForum.[95]
  • In 2014, Kurzweil was honored with the American Visionary Art Museum’s Grand Visionary Award on January 30.[96][97][98]
  • Kurzweil has received 20 honorary doctorates in science, engineering, music and humane letters from Rensselaer Polytechnic Institute, Hofstra University and other leading colleges and universities, as well as honors from three U.S. presidents – Clinton, Reagan and Johnson.[5][99]
  • Kurzweil has received seven national and international film awards including the CINE Golden Eagle Award and the Gold Medal for Science Education from the International Film and TV Festival of New York.[81]

Global catastrophic risk

From Wikipedia, the free encyclopedia

(Image caption: Artist's impression of a major asteroid impact. An asteroid with an impact strength of a billion atomic bombs may have caused the extinction of the dinosaurs.[1])

A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale.[2] Some events could cripple or destroy modern civilization. Any event that could cause human extinction is known as an existential risk.[3]

Potential global catastrophic risks include anthropogenic risks (technology risks, governance risks) and natural or external risks. Examples of technology risks are hostile artificial intelligence, biotechnology risks, and nanotechnology weapons. Insufficient global governance creates risks in the social and political domain, such as global war (with or without a nuclear holocaust), bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructure like the electrical grid, and the failure to manage a natural pandemic. It also creates problems and risks in the domain of earth-system governance, such as global warming, environmental degradation, mineral resource exhaustion, fossil energy exhaustion, and famine resulting from non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture. Examples of non-anthropogenic risks are an asteroid impact event, a supervolcanic eruption, a lethal gamma-ray burst, a geomagnetic storm destroying all electronic equipment, natural long-term climate change, and extraterrestrial life impacting life on Earth.

Classifications

(Image: Scope/intensity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"[4])

Global catastrophic vs. existential

Philosopher Nick Bostrom classifies risks according to their scope and intensity.[4] A "global catastrophic risk" is any risk that is at least "global" in scope and is not subjectively "imperceptible" in intensity. Those that are at least "trans-generational" (affecting all future generations) in scope and "terminal" in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity entirely (and, presumably, all but the most rudimentary species of non-human lifeforms and/or plant life) or at least prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.[5]

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional", scale. Posner regards such events as worthy of special attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole.[6] Posner's events include meteor impacts, runaway global warming, grey goo, bioterrorism, and particle accelerator accidents.

Researchers experience difficulty in studying human extinction directly, since humanity has never been destroyed before.[7] While this does not mean that it will not be destroyed in the future, it does make modelling existential risks difficult, due in part to survivorship bias.

Other classifications

Bostrom identifies four types of existential risk. "Bangs" are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, and the possibility that the universe is a simulation that will end. "Crunches" are scenarios in which humanity survives but civilization is irreversibly destroyed. The most likely causes of this, he believes, are exhaustion of natural resources, a stable global government that prevents technological progress, or dysgenic pressures that lower average intelligence. "Shrieks" are undesirable futures. For example, if a single mind enhances its powers by merging with a computer, it could dominate human civilization. Bostrom believes that this scenario is most likely, followed by flawed superintelligence and a repressive totalitarian regime. "Whimpers" are the gradual decline of human civilization or current values. He thinks the most likely cause would be evolution changing moral preference, followed by extraterrestrial invasion.[3]

Likelihood

Some risks, such as that from asteroid impact, with a one-in-a-million chance of causing humanity's extinction in the next century,[8] have had their probabilities predicted with considerable precision (although some scholars claim the actual rate of large impacts could be much higher than originally calculated).[9] Similarly, the frequency of volcanic eruptions of sufficient magnitude to cause catastrophic climate change, similar to the Toba Eruption, which may have almost caused the extinction of the human race,[10] has been estimated at about 1 in every 50,000 years.[11] The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during a human-extinction event than in a car crash.[12][13]
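
For readers comparing such figures, a recurrence frequency converts to a per-century probability as follows (treating each year as an independent trial is my simplification):

```python
# Converting "1 in every 50,000 years" into a per-century probability,
# assuming each year is an independent trial (a simplification).
annual_rate = 1 / 50_000
p_century = 1 - (1 - annual_rate) ** 100
print(f"{p_century:.3%} per century")  # ~0.200% per century
```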

The relative danger posed by other threats is much more difficult to calculate. In 2008, a small but illustrious group of experts on different global catastrophic risks at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century. The conference report cautions that the results should be taken "with a grain of salt".[14]

Estimated probability for human extinction before 2100 (2008 expert survey):

  • Overall probability – 19%
  • Molecular nanotechnology weapons – 5%
  • Superintelligent AI – 5%
  • Non-nuclear wars – 4%
  • Engineered pandemic – 2%
  • Nuclear wars – 1%
  • Nanotechnology accident – 0.5%
  • Natural pandemic – 0.05%
  • Nuclear terrorism – 0.03%

Table source: Future of Humanity Institute, 2008.[14]

There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next 100 years, but forecasting for this length of time is difficult. The types of threats posed by nature may prove relatively constant, though new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have only been an issue since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties in the social sciences. In general, it is hard to estimate the magnitude of the risk from this or other dangers, especially as both international relations and technology can change rapidly.

Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against its likelihood in the future, because every world that has experienced such an extinction event has no observers; so regardless of their frequency, no civilization observes existential risks in its history.[7] These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology.[4]

Moral importance of existential risk

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for four billion years before the expansion of the Sun makes the Earth uninhabitable.[15][16] Nick Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years.[5] Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people who will exist in the future.
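
Stated schematically (the magnitudes below are purely illustrative, not taken from Parfit or Bostrom):

```latex
% If N future lives are possible and an intervention reduces extinction
% probability by \delta, the expected gain in lives is
\[
  \Delta \mathbb{E}[\text{lives}] = \delta \cdot N .
\]
% Illustrative magnitudes (assumed here): even a one-in-a-million
% reduction matters when N is astronomically large.
\[
  \delta = 10^{-6}, \quad N = 10^{16}
  \;\Longrightarrow\;
  \Delta \mathbb{E}[\text{lives}] = 10^{10}.
\]
```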

Exponential discounting might make these future benefits much less significant. However, Gaverick Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.[8]
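
The standard exponential-discounting formula shows why far-future benefits nearly vanish under discounting (the rate and horizon below are illustrative assumptions, not Matheny's figures):

```latex
% Present value PV of a benefit V received t years from now, at rate r:
\[
  PV = \frac{V}{(1+r)^{t}},
  \qquad
  r = 0.03,\; t = 500
  \;\Longrightarrow\;
  \frac{PV}{V} = (1.03)^{-500} \approx 4 \times 10^{-7}.
\]
```

At a 3% rate, a benefit 500 years out is discounted to roughly four ten-millionths of its face value; results of this kind are what Matheny argues make such discounting inappropriate here.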

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[17] Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[18]

Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.[19]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000.[20] Similarly, people are often more concerned about threats to individuals than to larger groups.[19]

There are economic reasons that can explain why so little effort is going into existential risk reduction. It is a global good, so even if a large nation decreases it, that nation will only enjoy a small fraction of the benefit of doing so. Furthermore, the vast majority of the benefits may be enjoyed by far future generations, and though these quadrillions of future people would in theory perhaps be willing to pay massive sums for existential risk reduction, no mechanism for such a transaction exists.[4]

Potential sources of risk

Some sources of catastrophic risk are natural, such as meteor impacts or supervolcanoes. Some of these have caused mass extinctions in the past.

On the other hand, some risks are man-made, such as global warming,[21] environmental degradation, engineered pandemics and nuclear war. According to the Future of Humanity Institute, human extinction is more likely to result from anthropogenic causes than natural causes.[4][22]

Anthropogenic

In 2012, Cambridge University created the Cambridge Project for Existential Risk, which examines threats to humankind caused by developing technologies.[23] The stated aim is to establish within the University a multidisciplinary research centre, the Centre for the Study of Existential Risk, dedicated to the scientific study and mitigation of existential risks of this kind.[23]

The Cambridge Project states that the "greatest threats" to the human species are man-made; they are artificial intelligence, global warming, nuclear war, and rogue biotechnology.[24]

Artificial intelligence

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario).[25] Because of its exceptional scheduling and organizational capability and the range of novel technologies it could develop, it is possible that the first superintelligence to emerge on Earth could rapidly become matchless and unrivaled: conceivably it would be able to bring about almost any possible outcome and to foil virtually any attempt to prevent it from achieving its objectives.[26] It could eliminate any challenging rival intellects if it chose; alternatively, it might manipulate or persuade them to change their behavior towards its own interests, or merely obstruct their attempts at interference.[26] In his book Superintelligence: Paths, Dangers, Strategies, Bostrom defines this as the control problem.[27]

Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity"[28] and suggests that it may be somewhat or possibly very dangerous for humans.[29] This is discussed by a philosophy called Singularitarianism.

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[30]

In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[28] Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.[31][32]

Eliezer Yudkowsky believes that risks from artificial intelligence are harder to predict than any other known risks. He also argues that research into artificial intelligence is biased by anthropomorphism: since people base their judgments of artificial intelligence on their own experience, he claims that they underestimate its potential power. He distinguishes between risks due to technical failure of AI, in which flawed algorithms prevent the AI from carrying out its intended goals, and philosophical failure, in which the AI is programmed to realize a flawed ideology.[33]

Biotechnology

Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants or animals). In many cases the organism will be a pathogen of humans, livestock or crops. However, any organism able to catastrophically disrupt ecosystem functions poses a biotechnology risk; an example would be a highly competitive weed that outcompetes essential crops.

A catastrophe may be brought about by the use of biological agents in biological warfare or bioterrorism attacks, or by accident.[34] Terrorist applications of biotechnology have historically been infrequent.[34] To what extent this is due to a lack of capabilities or of motivation is not resolved.[34]

A biotechnology catastrophe may be caused accidentally, either by a genetically engineered organism escaping or by the planned release of such an organism that turns out to have unforeseen and catastrophic interactions with essential natural ecosystems or agro-ecosystems.

Exponential growth has been observed in the biotechnology sector and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades.[34] They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users).[34]

Given current developments, more risk from novel, engineered pathogens is to be expected in the future.[34] Pathogens may be intentionally or unintentionally genetically modified to change their virulence and other characteristics.[34] For example, a group of Australian researchers unintentionally changed the characteristics of the mousepox virus while trying to develop a virus to sterilize rodents.[34] The modified virus became highly lethal even in vaccinated and naturally resistant mice.[35][36] The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated.[34]

Nouri and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and the development of facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines).[34]

Global warming

Global warming refers to the warming caused by human technology since the 19th century or earlier, and to the resulting abnormal variations from the expected climate within the Earth's atmosphere and subsequent effects on other parts of the Earth. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms.
It has been suggested that runaway global warming (runaway climate change) might cause Earth to become searingly hot like Venus. In less extreme scenarios, it could cause the end of civilization as we know it.[37]

Environmental disaster

An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development,[38] and non-sustainable agriculture. Most of these scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately one half of the Earth's population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. A very recent threat in this direction is colony collapse disorder,[39] a phenomenon that might foreshadow the imminent extinction[40] of the Western honeybee. As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain.

Mineral resource exhaustion

Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth — that is, Earth's capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself.[41]:303f Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "... all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]."[42]:370

Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of distributing Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is little way of knowing in advance if or when mankind will eventually face extinction. In effect, any conceivable intertemporal distribution of the stock will inevitably end up with universal economic decline at some future point.[43]:253–256 [44]:165 [45]:168–171 [46]:150–153 [47]:106–109 [48]:546–549 [49]:142–145
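A toy calculation (an illustrative construction, not taken from the cited works) makes the decline mechanical: however a finite stock is split across generations, the shares must sum to at most the stock, so per-generation use must eventually shrink toward zero. Even a rule as simple as "each generation uses a fixed fraction of what remains" yields geometric decline:

    S = 1000.0    # finite mineral stock, arbitrary units
    share = 0.05  # each generation consumes 5% of what remains

    remaining = S
    consumption = []
    for generation in range(60):
        consumed = share * remaining
        consumption.append(consumed)
        remaining -= consumed

    print(consumption[0], consumption[10], consumption[59])
    # 50.0, ~29.9, ~2.4: consumption shrinks every generation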

Experimental technology accident

Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System.[50] Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. For example, scientists worried that the first nuclear test might ignite the atmosphere.[51][52] More recently, others worried that the RHIC[53] or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been refuted,[54][55][56][57] but the general concern remains.
Biotechnology could lead to the creation of a pandemic, chemical warfare could be taken to an extreme, and nanotechnology could lead to grey goo, in which out-of-control self-replicating robots consume all living matter on Earth while building more of themselves; in each case, the catastrophe could arise either deliberately or by accident.[58]

Nanotechnology

Many nanoscale technologies are in development or currently in use.[59] The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision.[60] Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions.[59][60] When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software.[59]
Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons.[59] Equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.[59]

Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories:
  1. From augmenting the development of other technologies such as AI and biotechnology.
  2. By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used.
  3. From uncontrolled self-perpetuating processes with destructive effects.
At the same time, nanotechnology may be used to alleviate several other global catastrophic risks.[59]

Several researchers argue that the bulk of the risk from nanotechnology comes from its potential to lead to war, arms races and destructive global government.[35][59][61] Several reasons have been suggested why the availability of nanotech weaponry is significantly likely to lead to unstable arms races (compared with, e.g., nuclear arms races):
  1. A large number of players may be tempted to enter the race since the threshold for doing so is low;[59]
  2. The ability to make weapons with molecular manufacturing will be cheap and easy to hide;[59]
  3. Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;[59][62]
  4. Molecular manufacturing may reduce dependency on international trade,[59] a potential peace-promoting factor;
  5. Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield.[59]
Since self-regulation by all state and non-state actors seems hard to achieve,[63] measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.[59][64] International infrastructure could be expanded, giving more sovereignty to the international level; this could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or to general arms control may also be designed.[64] Parties may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour.[59] The Center for Responsible Nanotechnology also suggests some technical restrictions.[65] Improved transparency regarding technological capabilities may be another important facilitator for arms control.

Grey goo is another catastrophic scenario, proposed by Eric Drexler in his 1986 book Engines of Creation[66] and a recurring theme in mainstream media and fiction.[67][68] The scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts, including Drexler himself, now discredit the scenario. According to Chris Phoenix, a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".[69]

Warfare and mass destruction

The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Although the probability of a nuclear war in any given year is slim, Professor Martin Hellman has described it as inevitable in the long run: unless the probability approaches zero, there will eventually come a day when civilization's luck runs out.[70] During the Cuban missile crisis, U.S. president John F. Kennedy estimated the odds of nuclear war as being "somewhere between one out of three and even".[71] The United States and Russia have a combined arsenal of 14,700 nuclear weapons,[72] and there is an estimated total of 15,700 nuclear weapons in existence worldwide.[72]
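Hellman's point is essentially arithmetic. A minimal sketch in Python, under the assumption of a fixed, independent per-year probability (the 1% figure below is purely illustrative, not Hellman's estimate): the chance of at least one nuclear war within N years is 1 − (1 − p)^N, which approaches 1 as N grows.

    def prob_at_least_one_war(p_per_year, years):
        # Chance of at least one war across `years` independent years,
        # assuming the same probability p_per_year applies each year.
        return 1.0 - (1.0 - p_per_year) ** years

    for n in (10, 50, 100, 500):
        print(n, round(prob_at_least_one_war(0.01, n), 3))
    # With an illustrative 1% annual risk: 0.096, 0.395, 0.634, 0.993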

While popular perception sometimes takes nuclear war as "the end of the world", experts assign low probability to human extinction from nuclear war.[73][74] In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia and maybe several hundred million more through follow-up consequences in those same areas.[73]

Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating such a large amount of nuclear weaponry would have a long-term effect on the climate, causing cold weather and reduced sunlight[75] that may generate significant upheaval in advanced civilizations.[76]
Beyond nuclear weapons, other threats to humanity include biological warfare (BW) and bioterrorism. By contrast, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one.

World population and agricultural crisis

The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity[77] such as the Green Revolution.[78] Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution helped food production keep pace with worldwide population growth, or arguably enabled that growth. The energy for the Green Revolution was provided by fossil fuels, in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation.[79] David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their 1994 study Food, Land, Population and the U.S. Economy, place the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the study argues, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds.[80]

The authors of this study believe that the agricultural crisis in question will begin to affect us after 2020 and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that the coming decades could see spiraling food prices without relief, and massive starvation on a global level such as has never been experienced before.[81][82]

Wheat is humanity's third most-produced cereal. Extant fungal infections such as Ug99[83] (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible, and the infection spreads on the wind. Should the world's large grain-producing areas become infected, the ensuing crisis in wheat availability would lead to price spikes and shortages in other food products.[84]

Non-anthropogenic

Asteroid impact

Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, is theorized to have caused the extinction of the non-avian dinosaurs 66 million years ago at the end of the Cretaceous. If a similar object struck Earth today, it could have a serious impact on civilization; it is even possible that humanity would be completely destroyed. For complete destruction to occur, the asteroid would need to be at least 1 km (0.62 mi) in diameter, and probably between 3 and 10 km (2–6 miles).[85] Asteroids 1 km in diameter have impacted the Earth on average once every 500,000 years.[85] Larger asteroids are less common. Small near-Earth asteroids are regularly observed.
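To make the quoted rate concrete, one can treat 1 km impacts as a Poisson process, an assumption not made explicit above: with a mean interval of 500,000 years, the chance of at least one such impact within a time window t is 1 − e^(−t/500,000).

    import math

    def impact_probability(window_years, mean_interval=500_000.0):
        # Chance of at least one impact in the window, assuming impacts
        # arrive as a Poisson process at the quoted mean rate.
        return 1.0 - math.exp(-window_years / mean_interval)

    print(impact_probability(100))     # next century: ~0.0002 (about 1 in 5,000)
    print(impact_probability(10_000))  # next ten millennia: ~0.02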
In 1.4 million years, the star Gliese 710 is expected to start causing an increase in the number of meteoroids in the vicinity of Earth when it passes within 1.1 light years of the Sun, perturbing the Oort cloud. Dynamic models by García-Sánchez predict a 5% increase in the rate of impact.[86] Objects perturbed from the Oort cloud take millions of years to reach the inner Solar System.

Extraterrestrial invasion

Extraterrestrial life could invade Earth[87] either to exterminate and supplant human life, enslave it under a colonial system, steal the planet's resources, or destroy the planet altogether.
Although evidence of alien life has never been documented, scientists such as Carl Sagan have postulated that the existence of extraterrestrial life is very likely. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991.[88] Scientists consider such a scenario technically possible, but unlikely.[89]

Natural climate change

Climate change refers to a lasting change in the Earth's climate. The climate has ranged from ice ages to warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, prior to the rise of human civilization about 10,000 years ago, near the end of the last major ice age, when the climate became more stable. However, abrupt climate change on a decadal time scale has occurred regionally. Since civilization originated during a period of stable climate, a natural variation into a new climate regime (colder or hotter) could pose a threat to civilization.

In the history of the Earth, many ice ages are known to have occurred. Further ice ages are possible at intervals of 40,000–100,000 years. An ice age would have a serious impact on civilization, because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. It would still be possible to live in the tropical regions, but with possible loss of humidity and water. Currently, the world exists in an interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved later than this. Scientists do not predict that a natural ice age will occur anytime soon.

Cosmic threats

A number of astronomical threats have been identified. Massive objects, e.g. a star, large planet or black hole, could be catastrophic if a close encounter occurred in the Solar System. In April 2008, it was announced that two simulations of long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether). If Mercury were to collide with Earth, all life on Earth could be obliterated: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 4,879 km in diameter.[90]

Another threat might come from gamma ray bursts.[91] Both threats are very unlikely in the foreseeable future.[92]

A similar threat is a hypernova, produced when a hypergiant star explodes and then collapses, sending vast amounts of radiation sweeping across hundreds of light-years. Hypernovas have never been observed; however, a hypernova may have been the cause of the Ordovician–Silurian extinction events. The nearest hypergiant is Eta Carinae, approximately 8,000 light-years distant.[93] The hazards from various astrophysical radiation sources were reviewed in 2011.[94]

If the Solar System were to pass through a dark nebula, a cloud of cosmic dust, severe global climate change would occur.[95]

A powerful solar flare or solar superstorm, which is a drastic and unusual decrease or increase in the Sun's power output, could have severe consequences for life on Earth.

If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that we know without forewarning.[96][further explanation needed] Such an occurrence is called a vacuum metastability event.

Geomagnetic reversal

The magnetic poles of the Earth have shifted many times in geologic history. The duration of such a shift is still debated. Theories exist that during such times the Earth's magnetic field would be substantially weakened, threatening civilization by allowing radiation from the Sun, especially the solar wind, solar flares or cosmic radiation, to reach the surface. These theories have been somewhat discredited, as statistical analysis shows no evidence for a correlation between past reversals and past extinctions.[97][98]

Global pandemic

Numerous historical examples of pandemics[99] have had a devastating effect on large numbers of people. The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines, and a global pandemic has become a realistic threat to human civilization.
Naturally evolving pathogens ultimately face an upper limit on their virulence.[100] Pathogens with the highest virulence, which quickly kill their hosts, reduce their chances of spreading the infection to new hosts or carriers.[101] This simple model predicts that, if virulence and transmission are not genetically linked, pathogens will evolve towards low virulence and rapid transmission. However, this is not necessarily a safeguard against a global catastrophe, for the following reasons (a toy model following the list illustrates the second point):

1. The fitness advantage of limited virulence applies primarily when the number of hosts is limited. A pathogen with high virulence, a high transmission rate and a long incubation time may already have caused a catastrophic pandemic before natural selection ultimately limits its virulence.

2. In models where virulence level and rate of transmission are related, high levels of virulence can evolve.[102] Virulence is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated.[100] The size of the host population and competition between different strains of pathogens can also alter virulence.[103]

3. A pathogen that infects humans as a secondary host and primarily infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution.[104]
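As a toy illustration of the second point, consider a standard virulence–transmission trade-off model from evolutionary epidemiology (a generic textbook construction, not taken from the cited sources): if transmission rises with virulence a as beta(a) = c·a/(a + k), then the basic reproduction number R0(a) = beta(a)/(a + recovery + mortality) is maximized at an intermediate virulence a* = sqrt(k·(recovery + mortality)) rather than at zero.

    import math

    # Illustrative parameter values only; not estimates from the literature.
    c, k = 10.0, 1.0                 # transmission trade-off parameters
    recovery, mortality = 0.5, 0.1   # host recovery and background death rates

    def r0(a):
        # Basic reproduction number with saturating transmission beta(a).
        beta = c * a / (a + k)
        return beta / (a + recovery + mortality)

    a_star = math.sqrt(k * (recovery + mortality))  # interior optimum (~0.775)
    print(r0(0.01), r0(a_star), r0(10.0))  # R0 is lower at both extremes

Because transmission and virulence are linked here, selection favors a strain that still kills its hosts at a substantial rate.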

Naturally arising pathogens and neobiota

In a similar scenario to biotechnology risks, naturally evolving organisms can disrupt essential ecosystem functions.

An example of a pathogen able to threaten global food security is the wheat rust Ug99.

Other examples are neobiota, i.e. organisms that become disruptive to ecosystems once transported, often as a result of human activity, to a new geographical region. Normally the risk is a local disruption. If that disruption becomes coupled with serious crop failures and a global famine, however, it may pose a global catastrophic risk.

Megatsunami

A remote possibility is a megatsunami. It has been suggested that a megatsunami caused by the collapse of a volcanic island could, for example, destroy the entire East Coast of the United States, but such predictions are based on incorrect assumptions and the likelihood of this happening has been greatly exaggerated in the media.[105] While none of these scenarios are likely to destroy humanity completely, they could regionally threaten civilization. There have been two recent high-fatality tsunamis—after the 2011 Tōhoku earthquake and the 2004 Indian Ocean earthquake. A megatsunami could have astronomical origins as well, such as an asteroid impact in an ocean.[106]

Volcanism

A geological event such as a massive flood basalt, volcanism, or the eruption of a supervolcano[107] could lead to a so-called volcanic winter, similar to a nuclear winter. One such event, the Toba eruption,[108] occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory,[109] the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years.[110] A massive volcanic eruption would eject extraordinary volumes of volcanic dust, toxic gases and greenhouse gases into the atmosphere, with serious effects on the global climate: extreme global cooling (a volcanic winter if short-term, an ice age if long-term) or global warming (if greenhouse gases were to prevail).
When the supervolcano at Yellowstone last erupted 640,000 years ago, the magma and ash ejected from the caldera covered most of the United States west of the Mississippi river and part of northeastern Mexico.[111] Another such eruption could threaten civilization.

Research published in 2011 found evidence that massive volcanic eruptions caused massive coal combustion, supporting models for significant generation of greenhouse gases. Researchers have suggested that massive volcanic eruptions through coal beds in Siberia would generate significant greenhouse gases and cause a runaway greenhouse effect.[112] Massive eruptions can also throw enough pyroclastic debris and other material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred miles from the eruption, and perhaps billions of deaths worldwide,[113] due to the failure of the monsoon[citation needed] and the resulting major crop failures and starvation on a massive scale.[113]

A much more speculative concept is the Verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.

Precautions and prevention

Planetary management and respecting planetary boundaries have been proposed as approaches to preventing ecological catastrophes. Within the scope of these approaches, the field of geoengineering encompasses the deliberate large-scale engineering and manipulation of the planetary environment to combat or counteract anthropogenic changes in atmospheric chemistry. Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario.[114] Solutions of this scope may require megascale engineering. Global food storage has been proposed, but the monetary cost would be high; furthermore, likely by raising food prices, it would add to the millions of deaths per year already caused by malnutrition. David Denkenberger and Joshua Pearce have proposed, in Feeding Everyone No Matter What, a variety of alternate foods for global catastrophic risks such as nuclear winter, volcanic winter, asteroid/comet impact, and abrupt climate change.[115] The alternate foods convert fossil fuels or biomass (e.g. trees and wood) into food.[116] However, significantly more research is needed in this field to make it viable for the entire global population to survive using these methods.[117] Asteroid deflection has been proposed to reduce impact risk, and nuclear disarmament has been proposed to reduce the nuclear winter risk. Precautions being taken include:
  • Some survivalists stocking survival retreats with multiple-year food supplies.
  • The Svalbard Global Seed Vault, a vault buried 400 feet (120 m) inside a mountain in the Arctic, holding over ten tons of seeds from all over the world. 100 million seeds from more than 100 countries were placed inside as a precaution to preserve the world’s crops. A prepared box of rice originating from 104 countries was the first to be deposited in the vault, where it will be kept at −18 °C (0 °F). Thousands more plant species will be added as organizers attempt to get specimens of every agricultural plant in the world. Cary Fowler, executive director of the Global Crop Diversity Trust, said that by preserving as many varieties as possible, the options open to farmers, scientists and governments were maximized. “The opening of the seed vault marks a historic turning point in safeguarding the world’s crop diversity,” he said. Even if the permafrost starts to melt, the seeds will be safe inside the vault for up to 200 years. Some of the seeds will remain viable for a millennium or more, including barley (which can last 2,000 years), wheat (1,700 years), and sorghum (almost 20,000 years).[118]

Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy, and famously maintains the Doomsday Clock, established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology going haywire at a global scale. It was founded by K. Eric Drexler, who postulated "grey goo".[119][120]

Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.[121]

Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence and the Singularity.[122] Its top donors include Peter Thiel and Jed McCaleb.[123] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe;[124] most of the research money funds projects at universities.[125] The Global Catastrophic Risk Institute (est. 2011) is a think tank covering all things catastrophic risk; it is funded by the NGO Social and Environmental Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks.[12][13] The Future of Life Institute (est. 2014) aims to support research and initiatives for safeguarding life in light of new technologies and the challenges facing humanity;[126] Elon Musk is one of its biggest donors.[127] The Nuclear Threat Initiative seeks to reduce global threats from nuclear, biological and chemical weapons, and to contain damage after an event.[128] It maintains a nuclear material security index.[129]

University-based organizations include the Future of Humanity Institute (est. 2005), which researches questions about humanity's long-term future, particularly existential risk. It was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk (est. 2012) is a Cambridge-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. All are man-made risks, as Huw Price explained to the AFP news agency: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology." He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."[130] Stephen Hawking is an acting adviser. The Millennium Alliance for Humanity & The Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities.[131][132] It was founded by Paul Ehrlich, among others.[133] Stanford University also has the Center for International Security and Cooperation, focusing on political cooperation to reduce global catastrophic risk.[134]

Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called Global Alert and Response (GAR), which monitors and responds to global epidemic crises.[135] GAR helps member states with training and the coordination of responses to epidemics.[136] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program, which aims to prevent and contain naturally generated pandemics at their source.[137] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate, which researches issues such as bio-security and counter-terrorism on behalf of the government.[138]
