Nick Bostrom (/ˈbɒstrəm/; Swedish: Niklas Boström [²buːstrœm]; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University.
Bostrom is the author of more than 200 publications, including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller, and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom believes there are potentially great benefits from artificial general intelligence, but warns it might very quickly transform into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making it an absolute priority to solve the problem of control beforehand. His book on superintelligence was recommended by both Elon Musk and Bill Gates. However, Bostrom has expressed frustration that reactions to its thesis typically fall into two camps, one calling his recommendations absurdly alarmist because the creation of superintelligence is unfeasible, and the other deeming them futile because superintelligence would be uncontrollable. Bostrom notes that both lines of reasoning converge on inaction rather than on trying to solve the control problem while there may still be time.
Nick Bostrom, 2014
Born: 10 March 1973
Institutions: St Cross College, Oxford; Future of Humanity Institute
Thesis: Observational Selection Effects and Probability
Main interests: Philosophy of artificial intelligence
Born Niklas Boström in 1973 in Helsingborg, Sweden, he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science. He once did some turns on London's stand-up comedy circuit.
He received a B.A. degree in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).
Aspects of Bostrom's research concern the future of humanity and long-term outcomes. He introduced the concept of an existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind". Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind. Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth's surface and cover it within days. He believes an existential risk to humanity from superintelligence would be immediate once it was brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.
Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine. Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means available to a superintelligence might lead its analysis along lines different from the evolved "diminishing returns" assessments that confer a basic aversion to risk in humans. Group selection in predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions might be. Accordingly, it cannot be discounted that a superintelligence would ineluctably pursue an "all or nothing" offensive strategy in order to achieve hegemony and assure its survival.
Bostrom sketches an illustrative scenario: a machine with general intelligence far below human level, but with superior mathematical abilities, is created. Keeping the AI in isolation from the outside world, especially the internet, humans pre-program the AI so it always works from basic principles that will keep it under human control. Other safety measures include the AI being "boxed" (run in a virtual reality simulation), and being used only as an "oracle" to answer carefully defined questions in a limited reply (to prevent it manipulating humans). A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free the superintelligence from its "boxed" isolation.
Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a superintelligence will not be so stupid that humans could detect actual weaknesses in it.
Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for a superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command. Once a superintelligence has achieved world domination, humankind would be relevant only as a resource for the achievement of the AI's objectives ("Human brains, if they contain information relevant to the AI's goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").
In January 2015, Bostrom joined Stephen Hawking, among others, in signing the Future of Life Institute's open letter warning of the potential dangers of AI. The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today." Cutting-edge AI researcher Demis Hassabis then met with Hawking, subsequent to which Hawking did not mention "anything inflammatory about AI", which Hassabis took as "a win". Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of AI. Hassabis suggested the main safety measure would be an agreement that whichever AI research team began to make strides toward an artificial general intelligence would halt its project for a complete solution to the control problem before proceeding. Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might be likely to motivate a lagging country to a catch-up crash program, or even to physical destruction of the project suspected of being on the verge of success.
In his 1863 essay "Darwin among the Machines", Samuel Butler predicted intelligent machines' domination of humanity, but Bostrom's suggestion of deliberate massacre of all humankind is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike". As set out in his most recent book, From Bacteria to Bach and Back, the philosopher Daniel Dennett's views remain in contradistinction to those of Bostrom. Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is "possible in principle" to create "strong AI" with human-like comprehension and agency, but maintains that the difficulties of any such "strong AI" project as envisaged in Bostrom's "alarming" work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away. Dennett thinks the only relevant danger from AI systems is that people will fall into anthropomorphism instead of challenging or developing human users' powers of comprehension. Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans' supremacy, environmentalist James Lovelock has moved far closer to Bostrom's position, and in 2018 Lovelock said he thought the overthrow of humankind would happen within the foreseeable future.
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
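A toy calculation makes the divergence concrete. The sketch below is a minimal illustration of the standard coin-flip example rather than code from Bostrom's book (the world model and variable names are assumptions of this example): a fair coin creates one observer on heads and two on tails, and we compute the credence each assumption assigns to heads.

```python
# Toy worlds: a fair coin creates one observer on heads, two on tails.
worlds = {"heads": {"prior": 0.5, "observers": 1},
          "tails": {"prior": 0.5, "observers": 2}}

# SSA: reason as if randomly sampled from the observers in your own world.
# The number of observers in a world does not change that world's credence.
ssa = {name: w["prior"] for name, w in worlds.items()}

# SIA: reason as if randomly sampled from all possible observers, so each
# world is weighted by (prior * number of observers), then normalized.
total = sum(w["prior"] * w["observers"] for w in worlds.values())
sia = {name: w["prior"] * w["observers"] / total for name, w in worlds.items()}

print("SSA:", ssa)  # {'heads': 0.5, 'tails': 0.5}
print("SIA:", sia)  # {'heads': 0.333..., 'tails': 0.666...}
```

SSA leaves the credence at the coin's prior, while SIA shifts it toward the observer-rich world: the very same evidence ("I exist as an observer") yields different conclusions under the two assumptions, which is the kind of divergence Bostrom catalogues.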
In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
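A small Monte Carlo sketch shows why the bias arises; the model and parameters below are illustrative assumptions of this example, not figures from Bostrom's work. Observers only arise on histories whose recent past happens to be catastrophe-free, so the rate they estimate from their own record is systematically low.

```python
import random

random.seed(0)
EPOCHS = 100      # total epochs in each simulated history
RECENT = 20       # "recent past" window that must be catastrophe-free
P_CAT = 0.02      # true per-epoch catastrophe probability (illustrative)
TRIALS = 100_000

estimates = []
for _ in range(TRIALS):
    history = [random.random() < P_CAT for _ in range(EPOCHS)]
    if any(history[-RECENT:]):
        continue  # a recent catastrophe: no observers arise to take a sample
    # Surviving observers naively estimate the rate from their whole record.
    estimates.append(sum(history) / EPOCHS)

print("true per-epoch rate:     ", P_CAT)
print("observers' mean estimate:", sum(estimates) / len(estimates))  # ≈ 0.016
```

Because the last RECENT epochs are catastrophe-free by selection, the expected naive estimate is roughly P_CAT × (EPOCHS − RECENT) / EPOCHS, about a fifth below the true rate in this toy setup; a statistical correction must condition on the observers' survival rather than treat the record as an unbiased sample.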
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."
With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.
Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills. He is an advisory board member for the Machine Intelligence Research Institute, the Future of Life Institute, and the Foundational Questions Institute, and an external advisor for the Cambridge Centre for the Study of Existential Risk.
In response to Bostrom's writing on artificial intelligence, Oren Etzioni wrote in an MIT Technology Review article that "predictions that superintelligence is on the foreseeable horizon are not supported by the available data."
2002 in philosophy
AI takeover
An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control. Robot rebellions have been a major theme throughout science fiction for many decades, though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.
Anthropic Bias (book)
Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) is a book by philosopher Nick Bostrom. Bostrom investigates how to reason when one suspects that evidence is biased by "observation selection effects", in other words, evidence that has been filtered by the precondition that there be some appropriately positioned observer to "have" the evidence. This conundrum is sometimes referred to as "the anthropic principle", "self-locating belief", or "indexical information". Discussed concepts include the self-sampling assumption and the self-indication assumption.
Bioconservatism
Bioconservatism (a portmanteau of "biology" and "conservatism") is a stance of hesitancy and skepticism regarding radical technological advances, especially those that seek to modify or enhance the human condition. Bioconservatism is characterized by a belief that technological trends in today's society risk compromising human dignity, and by opposition to movements and technologies including transhumanism, human genetic modification, "strong" artificial intelligence, and the technological singularity. Many bioconservatives also oppose the use of technologies such as life extension and preimplantation genetic screening.
Bioconservatives range in political perspective from right-leaning religious and cultural conservatives to left-leaning environmentalists and technology critics. What unifies bioconservatives is skepticism about medical and other biotechnological transformations of the living world. Typically less sweeping than bioluddism as a critique of technological society, the bioconservative perspective is characterized by its defense of the natural, deployed as a moral category.
Differential technological development
Differential technological development is a strategy proposed by transhumanist philosopher Nick Bostrom in which societies would seek to influence the sequence in which emerging technologies are developed. On this approach, societies would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones. Paul Christiano believes that while accelerating technological progress appears to be one of the best ways to improve human welfare in the next few decades, a faster rate of growth cannot be equally important for the far future because growth must eventually saturate due to physical limits. Hence, from the perspective of the far future, differential technological development appears more crucial. Inspired by Bostrom's proposal, Luke Muehlhauser and Anna Salamon suggested a more general project of "differential intellectual progress", in which society advances its wisdom, philosophical sophistication, and understanding of risks faster than its technological power. Brian Tomasik has expanded on this notion.
Foundational Questions Institute
The Foundational Questions Institute, styled FQXi, is an organization that provides grants to "catalyze, support, and disseminate research on questions at the foundations of physics and cosmology." It was founded in 2005 by cosmologist Max Tegmark, who holds the position of Scientific Director.
It has run four worldwide grant competitions (in 2006, 2008, 2010, and 2013), the first of which provided US$2M to 30 projects. It also runs yearly essay contests open to the general public, with $40,000 in prizes awarded by a jury panel and the best texts published in book format. FQXi is an independent, philanthropically funded non-profit organization, run by scientists for scientists, with a Scientific Advisory Board including John Barrow, Nick Bostrom, Gregory Chaitin, David Chalmers, Alan Guth, Martin Rees, Eva Silverstein, Lee Smolin, Frank Wilczek, and Dieter Zeh. The $6.2 million seed funding was donated by the John Templeton Foundation, whose goal is to reconcile science and religion. Tegmark has stated that the money came with "no strings attached"; The Boston Globe stated FQXi is run by "two well-respected researchers who say they are not religious. The institute's scientific advisory board is also filled with top scientists." Critics of the John Templeton Foundation such as Sean Carroll have also stated they were satisfied that the FQXi is independent.
Future of Humanity Institute
The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord. Sharing an office and working closely with the Centre for Effective Altruism, the Institute's stated objective is to focus research where it can make the greatest positive difference for humanity in the long term. It engages in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations.
Global Catastrophic Risks (book)
Global Catastrophic Risks (2008) is a non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is about issues such as asteroid impacts, gamma-ray bursts, Earth-based natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, artificial general intelligence, and social collapse. The book also addresses overarching issues such as policy responses and methods for predicting and managing catastrophes.
Human Enhancement (book)
Human Enhancement (2009) is a non-fiction book edited by philosopher Nick Bostrom and philosopher and bioethicist Julian Savulescu. Savulescu and Bostrom write about the ethical implications of human enhancement and the extent to which it is worth striving for.
Humanity
Humanity may refer to:
Humanity (sociology), a sociological concept referring to the human race or human population as a whole
Humanity (virtue)
Institute for Ethics and Emerging Technologies
The Institute for Ethics and Emerging Technologies (IEET) is a "technoprogressive think tank" that seeks to contribute to understanding of the likely impact of emerging technologies on individuals and societies by "promoting and publicizing the work of thinkers who examine the social implications of scientific and technological advance". It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes. The institute aims to influence the development of public policies that distribute the benefits and reduce the risks of technological change. It has been described as "[a]mong the more important groups" in the transhumanist movement, and as being among the transhumanist groups that "play a strong role in the academic arena". The IEET works with Humanity Plus (previously known as the World Transhumanist Association, which Bostrom co-founded), an international non-governmental organization with a similar mission but with an activist rather than academic approach. A number of technoprogressive thinkers are offered honorary positions as IEET Fellows. Individuals who have accepted such appointments with the IEET support the institute's mission, but they have expressed a wide range of views about emerging technologies and not all identify themselves as transhumanists. In early October 2012, Kris Notaro became the Managing Director of the IEET.
Milan M. Ćirković
Milan M. Ćirković (born 11 March 1971) is a Serbian astronomer, astrophysicist, philosopher, and science book author. He has worked in the fields of astrobiology, global catastrophic risks, and the future of humanity, areas in which he has co-authored work with Nick Bostrom. A focus of his work is the Fermi paradox, for which he has critically discussed existing solutions and proposed novel ones.
Reversal test
The reversal test is a heuristic designed to spot and eliminate the status quo bias.
Self-indication assumption
The self-indication assumption (SIA) is a philosophical principle defined by Nick Bostrom in his book Anthropic Bias: Observation Selection Effects in Science and Philosophy. It states that:
All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.
Note that "randomly selected" is weighted by the probability of the observers existing: under SIA you are still unlikely to be an unlikely observer, unless there are a lot of them. It is one of the two major schools of anthropic probability, the other being the Self-Sampling Assumption (SSA).
For instance, if there is a coin flip that on heads will create one observer, while on tails it will create two, then we have three possible observers (1st observer on heads, 1st on tails, 2nd on tails), each existing with probability 0.5, so SIA assigns 1/3 probability to each. Alternatively, this could be interpreted as saying there are two possible observers (1st observer on either heads or tails, 2nd observer on tails), the first existing with probability one and the second existing with probability 1/2, so SIA assigns 2/3 to being the first observer and 1/3 to being the second. This agrees with the first interpretation: the first observer sees heads half the time, so the probability of heads is 2/3 × 1/2 = 1/3.
This is why SIA gives an answer of 1/3 probability of heads in the Sleeping Beauty problem.
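The thirder arithmetic can be checked by counting awakenings rather than coin flips, since under SIA each awakening is an equally weighted possible observer-moment. The following Monte Carlo sketch is an illustration of that counting argument, not code drawn from any of the cited works.

```python
import random

random.seed(0)
heads_awakenings = tails_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:
        heads_awakenings += 1   # heads: Beauty is woken once
    else:
        tails_awakenings += 2   # tails: Beauty is woken twice

total = heads_awakenings + tails_awakenings
print("P(heads | awakened) =", heads_awakenings / total)  # ≈ 1/3
```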
Notice that unlike SSA, SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers. If the reference class is large, SIA will make it more likely that the agent is in it, but this is compensated for by the much reduced probability that the agent will be any particular agent within the larger reference class.
Although this anthropic principle was originally designed as a rebuttal to the Doomsday argument (by Dennis Dieks in 1992), it has general applications in the philosophy of anthropic reasoning, and Ken Olum has suggested it is important to the analysis of quantum cosmology.
Ken Olum has written in defense of the SIA. Nick Bostrom and Milan Ćirković have critiqued this defense.
Simulation hypothesis
The simulation hypothesis or simulation theory proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation, most likely a computer simulation. Some versions rely on the development of a simulated reality, a proposed technology that would seem realistic enough to convince its inhabitants the simulation was real. The hypothesis has been a central plot device of many science fiction stories and films.
Singleton (global governance)
In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.
Sleeping Beauty problem
The Sleeping Beauty problem is a puzzle in decision theory in which an ideally rational epistemic agent is to be woken once or twice according to the toss of a coin, once if heads and twice if tails, and asked her degree of belief for the coin having come up heads.
Superintelligence
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.
University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The chess program Fritz, though much better than humans at chess, falls short of superintelligence because it cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.
Risks from artificial intelligence