by Nick Bostrom · 3 Jun 2014 · 574pp · 164,509 words
Superintelligence: Paths, Dangers, Strategies. Nick Bostrom, Director, Future of Humanity Institute; Professor, Faculty of Philosophy & Oxford Martin School, University of Oxford. Great Clarendon Street, Oxford, OX2 6DP, United Kingdom Oxford University
…
research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries © Nick Bostrom 2014 The moral rights of the author have been asserted First Edition published in 2014 Impression: 1 All rights reserved. No part of this publication
…
, Nick and Sandberg, Anders. 2009b. “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement.” In Human Enhancement, 1st ed., edited by Julian Savulescu and Nick Bostrom, 375–416. New York: Oxford University Press. Bostrom, Nick, Sandberg, Anders, and Douglas, Tom. 2013. “The Unilateralist’s Curse: The Case for a Principle of
…
–501. Cognitive Technologies. Berlin: Springer. Yudkowsky, Eliezer. 2008a. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press. Yudkowsky, Eliezer. 2008b. “Sustained Strong Recursion.” Less Wrong (blog), December 5. Yudkowsky, Eliezer. 2010
by Nick Bostrom · 26 Mar 2024 · 547pp · 173,909 words
Deep Utopia: Life and Meaning in a Solved World. Nick Bostrom. Copyright © 2024 by Nick Bostrom. All rights reserved. No part of this book may be reproduced, stored, or transmitted by any means—whether auditory, graphic, mechanical, or electronic—without written
…
is punishable by law. Printed in the United States Ideapress Publishing | www.ideapresspublishing.com All trademarks are the property of their respective companies. Cover Design: Nick Bostrom Interior Design: Jessica Angerstein Cataloging-in-Publication Data is on file with the Library of Congress. Hardcover ISBN: 978-1-64687-164-3 Special Sales
…
and run outside to see it, touch it, experience it, and to play, play, play… MONDAY Hot springs postponed Tessius: Hey, look at this poster. Nick Bostrom is giving a lecture series, here in the Enron Auditorium, on “The Problem of Utopia”. Firafix: Bostrom—is he still alive? He must be as
by Nick Bostrom and Milan M. Cirkovic · 2 Jul 2008
Global Catastrophic Risks. Edited by Nick Bostrom and Milan M. Cirkovic. Oxford University Press, Great Clarendon Street, Oxford OX2 6DP. Oxford University Press is a department of the University of
…
wide readership - and deserve special attention from scientists, policy-makers and ethicists. Martin J. Rees Contents: Acknowledgements; Foreword, Martin J. Rees; 1 Introduction, Nick Bostrom and Milan M. Cirkovic: 1.1 Why?; 1.2 Taxonomy and organization
…
References; Authors' biographies; Index. · 1 · Introduction. Nick Bostrom and Milan M. Cirkovic. 1.1 Why? The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk
…
towards secession make it improbable in the immediate future (Alesina and Spolaore, 2003). At the same time, however, nominally independent countries have 3 In correspondence, Nick Bostrom raises the possibility that the creation of a democratic world government might provide better protection against the emergence of a totalitarian world government than the
…
BBC Climate Change Experiment, using public resource distributed computing for climate change modelling. He is married to Professor Irene Tracey and has three children. Nick Bostrom is director of the Future of Humanity Institute at Oxford University. He previously taught in the Faculty of Philosophy and in the Institute for
…
our time; for students focusing on science, society, technology, and public policy; and for academics, policy-makers, and professionals working in these acutely important fields. Nick Bostrom, PhD, is Director of the Future of Humanity Institute and Director of the Programme on Impacts of Future Technology, both in the Oxford Martin School
by William Poundstone · 3 Jun 2019 · 283pp · 81,376 words
the globe, have been enriched by Bayes’s theorem. Much of the ambivalence about AI has its roots in the work of Swedish-born philosopher Nick Bostrom, now of Oxford. Bostrom did his doctoral thesis on the doomsday argument and the puzzles of self-sampling. He has been influential in proposing that
…
. The evidence supplied by drawing a number 7 ball is circumstantial but not to be neglected by any fully reasonable party. “Rational belief is constrained,” Nick Bostrom wrote, “not only by chains of deduction but also by the rubber bands of probabilistic inference.” Doubting Thomas Bayes’s Essay found a most influential
…
and convincing view about the nature of the error (if there is one).” “I have encountered over a hundred objections against the doomsday argument,” wrote Nick Bostrom, “… many of them mutually inconsistent. It is as if the doomsday argument is so counterintuitive (or threatening?) that people reckon that every criticism must be
…
uses prior probabilities of human extinction). Since the 1990s the doomsday literature has largely focused on the Carter-Leslie argument, sometimes dismissing Gott’s version. Nick Bostrom offered this curt assessment: “We can distinguish two forms of [the doomsday argument] that have been presented in the literature.… Gott’s version is incorrect
…
(SIA), did something most unusual in this debate. He was persuaded. In a paper titled “Reasoning About the Future: Doom and Beauty” Dieks sided with Nick Bostrom on the Presumptuous Philosopher. He agreed that it is wrong to claim, as a general principle, that a theory predicting more observers is more likely
…
question was “why so many people think it’s an interesting question.” One who takes simulations seriously is entrepreneur Elon Musk, who has helped fund Nick Bostrom’s work. “The strongest argument for us being in a simulation,” Musk said at the 2016 Recode conference, “is the following: 40 years ago, we
…
not a few reviewers of Wolfram’s book pegged him as a genius who had gone off the deep end. Bostrom’s Trilemma In 2003 Nick Bostrom took up the theme. It is Bostrom’s elaboration, framed as an application of self-sampling, that has gained so many serious and semiserious converts
…
, and left. But no one has ever found a convincing alien artifact of any kind. Fermi’s question remains as great a mystery as ever. Nick Bostrom painted this word picture: life on Earth is a single data point, and the Fermi paradox is the question mark over it. The Princess in
…
the emergence of space-traveling and communicating species. This hypothetical something is often called the great filter. “It is not far-fetched to suppose,” wrote Nick Bostrom, “that there might be some possible technology which is such that (a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads
…
hints that we shouldn’t be too sure such things can’t happen. Is there any rational way to evaluate these possibilities? Max Tegmark and Nick Bostrom address that question in a 2005 article in Nature. They begin by acknowledging the selection effect “that precludes any observer from observing anything other than
…
legitimate reason to favor an infinite multiverse. The virtues of not being too picky can be demonstrated by breaking down the evidence into two parts. Nick Bostrom gives this parable. Imagine we’re disembodied souls, existing outside of time and space. One eon, God goes off to create a universe or universes
…
England but not quite of it. Located in Oxford, the home of William of Ockham and Lewis Carroll, the institute was founded by Swedish-born Nick Bostrom and has been financed largely by the American technology industry. Lead donor James Martin was a former IBM employee in New York who struck it
…
realizing that we don’t know that. We may have been bewitched by a selection effect. Such advocates as Gott, Francis Crick, Brandon Carter, and Nick Bostrom have argued that we cannot exclude the possibility that life, observers, and technological civilizations are vanishingly rare. The exciting thing is that this is a
…
, a monolith marking the end of the world as we know it. We are about to discover the truth of how special we are. Acknowledgments Nick Bostrom, J. Richard Gott III, and John Leslie were exceptionally generous with their time, expertise, and patience. James Dreier, Adam N. Elga, and Arnold Zuboff were
…
or Many Words?” 1997. bit.ly/2rjh1ZS. ———. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. New York: Knopf, 2014. Tegmark, Max, and Nick Bostrom. “Is a Doomsday Catastrophe Likely?” Nature 438 (2005): 754. An extended version (2005a) is at bit.ly/2smf3Z1. Thompson, Clive. “If You Liked This, You
…
Burr. The Theory of Investment Value. Cambridge: Harvard University Press, 1938. Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Nick Bostrom and Milan M. Cirkovic, eds., Global Catastrophic Risks. New York: Oxford University Press, 2008, 308–345. Zuboff, Arnold. “One Self: The Logic of Experience.” Inquiry
…
thought, well, you know”: Ferris 1999. 26. “the location of your birth”: Gott 1993, 316. 27. self-sampling assumption; human randomness assumption: The first is Nick Bostrom’s term, the second William Eckhardt’s. See Bostrom 2002 and Eckhardt 1997. 28. “Disturbingly, even extraordinarily low values”: Gott 1993, 317. 29. “The methods
…
traffic: Bostrom 2002, 82–84. 6. Adam and Eve: Almost everyone who thinks about the doomsday argument ends up asking a version of this question. Nick Bostrom used the Cro-Magnon example (Bostrom 2002, 116). John Leslie considered an ancient Roman (Leslie interview, January 17, 2018; Leslie 1996, 205). 7. Emerald experiment
…
. “We cannot necessarily rely”: Bostrom 2002. 4. “Let an ultraintelligent machine be defined”: Good 1965. 5. Good biography; consulted on 2001: van der Vat 2009. Nick Bostrom, the former artist, designed the Future of Humanity Institute’s logo as an homage to the Kubrick film. The logo is a black, slightly convex
by Tom Chivers · 12 Jun 2019 · 289pp · 92,714 words
first time I’d come into contact with them. I’d been aware of the community since about 2014, when I wrote a review of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. If you’re vaguely aware of a conversation going on about whether or not AI will destroy the world, it
…
fast for us to understand the changes. We would be through the looking glass. This is, roughly, the idea of the ‘fast take-off’ that Nick Bostrom would describe nearly two decades later, although Yudkowsky’s version makes a few weird assumptions and leaps of logic (as, again, is fair enough given
…
list was Eliezer Yudkowsky. ‘This was in the 1990s,’ says Robin Hanson, an economist at George Mason University and an important early Rationalist figure. ‘Myself, Nick Bostrom, Eliezer and many others were on it, discussing big future topics back then.’ But neither Bostrom nor Yudkowsky was satisfied with the Extropians. ‘It was
…
a relatively libertarian take on futurism,’ says Hanson. ‘Some people, including Nick Bostrom, didn’t like that libertarian take, so they created the World Transhumanist Association, explicitly to no longer be so libertarian.’ The World Transhumanist Association later
…
on there. Bostrom and Hanson are both there, and Anna Salamon. Other people who play roles in the story – Michael Vassar, Michael Anissimov – are contributors. Nick Bostrom did a minor double-take when I asked him about SL4 and the Extropians, as though he hadn’t thought about it in a long
…
a few old Extropians/SL4 veterans to come and join him, people who’d impressed him with the quality of their thinking. Among them were Nick Bostrom and Eliezer Yudkowsky. ‘Nick just blogged a few things,’ he said. ‘But Eliezer blogged a lot, which was great.’ It’s at this point that
…
, and that the human population settles at a nice, sustainable 1 billion, less than one-seventh of its current levels. These are the assumptions that Nick Bostrom – author of the aforementioned Superintelligence, and founder of Oxford’s Future of Humanity Institute (FHI) – goes with.3 That would mean that we would have
…
are. And the answer is we don’t know. That said, perhaps we can make an educated guess. *This little history is largely taken from Nick Bostrom’s Superintelligence and from Russell and Norvig’s Artificial Intelligence: A Modern Approach. I am enormously grateful to both; any errors are mine. Chapter 5
…
the next few centuries. But they didn’t feel it was likely to wipe out humanity altogether. I met with Dr Toby Ord, one of Nick Bostrom’s colleagues at Oxford’s FHI. Toby is a likeable Australian who, in contrast to a lot of the people I spoke to for this
…
, exactly. The classic example of an AI that has gone terribly wrong – a ‘misaligned’ or ‘unfriendly’ AI, in Rationalist terms – is a thought experiment that Nick Bostrom wrote about in 2003 (probably following an original idea by Eliezer Yudkowsky): the paperclip maximiser.1 Imagine a human-level AI has been given an
…
something – and the top 10,’ he said. ‘And I was thinking the same thing.’ But then he read an article by Miles Brundage, one of Nick Bostrom’s colleagues at the FHI.3 Brundage pointed out that a previous DeepMind project, Atari AI, was only ‘human-level’ at the Atari games it
…
mentioned before. He’s on record – many times – arguing that AI risk should be taken seriously.) ‘This is actually why I like the survey that Nick Bostrom and Vincent Müller did,’ said Ord, which found that AI researchers, on average, think there’s a 10 per cent chance that AGI will arrive
…
he said – with a lot of caveats – that AI risk really is a thing. ‘My view is probably not that far from the highly nuanced Nick Bostrom view, which is that there’s an important argument here that deserves to be taken seriously.’ His caveats were that it was probably quite a
…
with a low probability, the possibility of a very bad outcome means we still need to think hard about it,’ he said. I also asked Nick Bostrom, who is very much a philosopher too. And he said that at one point it was mainly philosophers who were worrying about AI risk, and
…
say, “This isn’t going to happen, this is absurd, it’s not going to happen for a hundred years,” but if you talk to Nick Bostrom, he’d say, “It’s really important that we think about this, because it might happen in only a hundred years.”’ This is a point
…
very obviously better than others. The links between the Rationalists and the Effective Altruists go back pretty much to the beginning. Ord and MacAskill met Nick Bostrom at Oxford in 2003, and Ord says: ‘I was heavily influenced by Nick in my work on existential risk. I’m pretty sure [the Effective
…
little bit down will be things like nuclear weapons.’ And, as we discussed in Chapter 2, talking about existential risks, going extinct matters. Remember that Nick Bostrom thinks that the number of human-like lives that could be lived is something obscene like 10⁵⁸. He could be wrong by three dozen orders
…
are 1,000 people working on it, it’ll be almost impossible to keep it secret, so the treaties might be quite effective. I asked Nick Bostrom what he thought the most promising avenue was. ‘Broadly speaking,’ he replied, ‘some way that involves leveraging the AI’s intelligence to infer and learn
…
’ (existential catastrophe), i.e. human extinction. Some people who work on AI risk reckon it’s higher: Rob Bensinger described it as ‘high-probability’; for Nick Bostrom the ‘default outcome’ of an intelligence explosion is ‘doom’.2 But some AI researchers consider that ridiculous: to Toby Walsh, for instance, the basic premise
…
, with great sensitivity and sense, to direct them into careers that might help save the planet. Rob is still Yudkowsky’s messenger on Earth. And Nick Bostrom, of course, is now a globally famous figure in certain niche circles, and is getting referenced in major NBC comedy programmes and White House policy
…
is a small but non-negligible probability that, when we look back on this era in the future, we’ll think that Eliezer Yudkowsky and Nick Bostrom – and the SL4 email list, and LessWrong.com – have saved the world. If Paul Crowley is right and my children don’t die of old
…
Cotra, Andrew Sabisky, Anna Salamon, Buck Shlegeris, Catherine Hollander, David Gerard, Diana Fleischman, Helen Toner, Holden Karnofsky, Katja Grace, Michael Story, Mike Levine, Murray Shanahan, Nick Bostrom, Peter Singer, Rob Bensinger, Robin Hanson, Scott Alexander, Toby Ord, Toby Walsh and everyone else who spoke to me. Plus grudging thanks to Eliezer Yudkowsky
…
http://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of 4. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (OUP, 2014), p. 222 5. https://en.wikipedia.org/wiki/2017_California_wildfires 1: Introducing the Rationalists 1. http://yudkowsky.net
…
September 2008 https://www.readthesequences.com/RaisedInTechnophilia 4. ‘The magnitude of his own folly’, LessWrong sequences, 30 September 2008 https://www.readthesequences.com/TheMagnitudeOfHisOwnFolly 5. Nick Bostrom, ‘A History of Transhumanist Thought’, Journal of Evolution and Technology, vol. 14, issue 1, 2005 https://nickbostrom.com/papers/history.pdf 6. Marie Jean Antoine
…
’, 2000 http://yudkowsky.net/obsolete/plan.html 15. ‘Re: the AI box experiment’, SL4 archives, 2002 http://www.sl4.org/archive/0203/3141.html 16. Nick Bostrom, ‘The simulation argument’, SL4 archives, 2001 http://www.sl4.org/archive/0112/2380.html 17. History of LessWrong, https://wiki.lesswrong.com/wiki/History_of
…
scales’, in Carolus J. Schrijver and George L. Siscoe, Heliophysics: Evolving Solar Activity and the Climates of Space and Earth (Cambridge University Press, 2010) 3. Nick Bostrom, ‘Existential risk prevention as global priority’, 2012 http://www.existential-risk.org/concept.pdf 4. Carl Haub, ‘How many people have ever lived on Earth
…
8. Eliezer Yudkowsky, ‘Pascal’s mugging: Tiny probabilities of vast utilities’, LessWrong, 2007 http://lesswrong.com/lw/kd/pascals_mugging_tiny_probabilities_of_vast/ 9. Nick Bostrom, ‘Pascal’s mugging’, 2009 https://nickbostrom.com/papers/pascal.pdf 10. Scott Alexander, ‘Getting Eulered’, 2014 http://slatestarcodex.com/2014/08/10/getting-eulered/ 11
…
-play with a general reinforcement learning algorithm’, Arxiv, 2017 https://arxiv.org/pdf/1712.01815.pdf 9. Russell and Norvig, Artificial Intelligence, p. 4 10. Nick Bostrom and Vincent C. Müller, ‘Future progress in artificial intelligence: A survey of expert opinion’, Fundamental Issues of Artificial Intelligence, 2016 https://nickbostrom.com/papers/survey
…
.pdf 11. Nick Bostrom, ‘How long before superintelligence?’, International Journal of Future Studies, vol. 2, 1998 https://nickbostrom.com/superintelligence.html 4: A history of AI 1. Alan Turing
…
greatest images of the solar system has died’, BuzzFeed, September 2017 https://www.buzzfeed.com/tomchivers/cassini-death-spiral 8: Paperclips and Mickey Mouse 1. Nick Bostrom, ‘Ethical issues in advanced artificial intelligence’, 2003 https://nickbostrom.com/ethics/ai.html 2. http://www.decisionproblem.com/paperclips/index2.html 3. Soares, ‘Ensuring smarter
…
’ https://intelligence.org/2017/04/12/ensuring/ 9: You can be intelligent, and still want to do stupid things 1. Bostrom, Superintelligence, p. 9 2. Nick Bostrom, ‘The superintelligent will: motivation and instrumental rationality in advanced artificial agents’, 2012 https://nickbostrom.com/superintelligentwill.pdf 3. Eliezer Yudkowsky, ‘Ghosts in the machine’, 17
by Adam Becker · 14 Jun 2025 · 381pp · 119,533 words
email list for Extropians. “In the mid-nineties, many got [their] first exposure to transhumanist views from the Extropy Institute’s listserv,” wrote the philosopher Nick Bostrom in 2005.44 Bostrom himself was one of those people and was quite active on that list in the 1990s, when he was a graduate
…
upper bounds on the brain’s computational power and memory capacity are too low by many orders of magnitude. Indeed, a 2008 study coauthored by Nick Bostrom suggested that the computational power of the brain could easily be ten million times greater than Kurzweil’s upper bound.95 “There’s a lot
…
was no better end to put that material than filling the cauldron. And we now have more and more brooms over-filling this cauldron.”11 Nick Bostrom, inspired by Yudkowsky’s concerns, developed a thought experiment in 2003 to demonstrate how this could play out in a world where there is a
…
work here, mostly graduate students and postdoctoral researchers, along with a smattering of permanent staff and faculty including William MacAskill, Toby Ord, Anders Sandberg, and Nick Bostrom.i It’s May 2023, and I’ve traveled here from California to talk with MacAskill, but he’s canceled at the last minute. Instead
…
end of time? Longtermists often make arguments depending on such forecasts, and they often invoke speculative future technologies while doing so. In a 2012 paper, Nick Bostrom estimated that, using brain uploading and simulations, the equivalent of 10⁵² human lives “of ordinary length” could be lived out in computers across space over
…
conservative lower bound) that would not otherwise have existed. Few other philanthropic causes could hope to match that level of utilitarian payoff.93 That’s Nick Bostrom, in his 2003 paper “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Here’s Toby Ord, from The Precipice: At the ultimate physical scale
…
the memory of a computer in an entirely different reality, run by some other entity, human or otherwise. Musk’s argument echoes one made by Nick Bostrom in a 2003 paper titled “Are You Living in a Computer Simulation?” There, Bostrom concludes that, given the computing power available to a post-Singularity
…
Wong, January 3, 2024, San Francisco, CA Eliezer Yudkowsky, May 24, 2024, video call INTERVIEW REQUESTS Sam Altman (declined) Marc Andreessen (declined) Jeff Bezos (ignored) Nick Bostrom (declined) Eric Drexler (declined) Ray Kurzweil (declined) William MacAskill (canceled, ignored requests to reschedule) Elon Musk (ignored) Stuart Russell (declined) Scott Siskind (declined) Guillaume Verdon
…
Human Era (New York: Thomas Dunne, 2013), 121. 41 Ed Regis, “Meet the Extropians,” Wired, October 1, 1994, www.wired.com/1994/10/extropians/. 42 Nick Bostrom, “A History of Transhumanist Thought,” Journal of Evolution and Technology 14, no. 1 (April 2005), https://jetpress.org/volume14/bostrom.pdf; Extropy: Vaccine for Future
…
. 64 Dylan Matthews, “This Oxford Professor Thinks Artificial Intelligence Will Destroy Us All,” Vox, August 19, 2014, www.vox.com/2014/8/19/6031367/oxford-nick-bostrom-artificial-intelligence-superintelligence. 65 David J. Chalmers, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17 (2010): 3. 66 Chalmers, interview with author. 67
…
13, 2012, www.scientificamerican.com/blog/brainwaves/know-your-neurons-what-is-the-ratio-of-glia-to-neurons-in-the-brain/. 95 Anders Sandberg and Nick Bostrom, Whole Brain Emulation: A Roadmap, Technical Report no. 2008-3 (Oxford: Future of Humanity Institute, 2008), 80–81, www.fhi.ox.ac.uk/brain-emulation
…
AI,” talk given at NYU, 2017, video, YouTube, September 1, 2017, www.youtube.com/watch?v=YicCAgjsky8. 12 First shows up in his 2003 paper: Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, eds. I
…
et al. (Tecumseh, Ontario: International Institute for Advanced Studies in Systems Research and Cybernetics, 2003), 12–17, https://nickbostrom.com/ethics/ai. 13 Rudyard Griffiths, “Nick Bostrom: ‘I Don’t Think the Artificial-Intelligence Train Will Slow Down,’” Globe and Mail, May 1, 2015, www.theglobeandmail.com/opinion/munk-debates
…
/nick-bostrom-i-dont-think-the-artificial-intelligence-train-will-slow-down/article24222185/. 14 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2017), 77 (for the quote), 94 (for the claim that a
…
fast or moderate takeoff is more likely). 15 Griffiths, “Nick Bostrom.” 16 Ibid. 17 Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines 22, no. 2 (May 2012), https://nickbostrom.com/superintelligentwill.pdf
…
, “Extropian Creed.” 136 John K. Clark, “The Extropian Principles,” Extropians listserv archive, July 29, 1996, www.lucifer.com/exi-lists/extropians.96/0064.html. 137 Nick Bostrom, “Re: Offending People’s Minds,” Extropians listserv archive, August 24, 1996, www.lucifer.com/exi-lists/extropians.96/0441.html (racial slur unredacted in original
…
). 138 Nick Bostrom, “Apology for an Old Email,” January 9, 2023, https://nickbostrom.com/oldemail.pdf. 139 Deb Raji (@rajiinio), Twitter (now X), January 11, 2023, https://twitter
…
Humans (blog), April 23, 2023, https://aiguide.substack.com/p/do-half-of-ai-researchers-believe. 62 Mitchell interview. 63 Ibid. 64 Ord interview. 65 Nick Bostrom, “Existential Risk Prevention as Global Priority,” Global Policy 4, no. 1 (2013): 15–31, https://existential-risk.com/concept. 66 William MacAskill and Hilary Greaves
…
of Astrophysical Objects,” Reviews of Modern Physics 69 (April 1, 1997): 337, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.69.337. 93 Nick Bostrom, “Astronomical Waste: The Opportunity Cost of Delayed Technological Development,” Utilitas 15, no. 3 (2003): 308–314, https://nickbostrom.com/astronomical/waste (emphasis in the original
by Max More and Natasha Vita-More · 4 Mar 2013 · 798pp · 240,182 words
Singularity Models; Accelerating Change; Discussion; 37 A Critical Discussion of Vinge’s Singularity Concept: Comment by David Brin: Singularities; Comment by Damien Broderick; Comment by Nick Bostrom: Singularity and Predictability; Comment by Alexander Chislenko: Singularity as a Process, and the Future Beyond; Comment by Robin Hanson: Some Skepticism; Comment by Max More
…
An Evil Hour (I Books, 2003); and co-authored with Van Ikin and Sean McMullen Strange Constellations: A History of Australian Science Fiction (Praeger, 1999). Nick Bostrom, PhD, is Director, Future of Humanity Institute, Oxford University. He has written numerous papers and authored Anthropic Bias (Routledge, 2010); and co-edited with Julian
…
Interests in Xenotransplantation (Ashgate Publishing, 2004). Anders Sandberg, PhD, is a James Martin Research Fellow, Future of Humanity Institute, Oxford University. He co-authored with Nick Bostrom “Converging Cognitive Enhancements” (Annals of the New York Academy of Science, 2006); and “Whole Brain Emulation: A Roadmap” (Technological Report, 2008). Wrye Sententia, PhD, is
…
design-based practices that approach human enhancement and life extension?” In his essay “Why I Want to be a Posthuman When I Grow Up,” philosopher Nick Bostrom notes that extreme human enhancement could result in “posthuman” modes of being. Being posthuman would mean possessing a general central capacity (healthspan, cognition, or emotion
…
achieving something or actually achieving it, and how clear is the distinction (Nozick 1974)? Taking this line of thinking further, transhumanists from Hans Moravec to Nick Bostrom have asked how likely it is that we are already living in a simulation (Moravec 1989; Bostrom 2003). An obvious metaphysical question to raise here
…
., H+/-: Transhumanism and Its Critics (Bloomington, IN: Xlibris, 2011). Copyright © 2011, Metanexus Institute. 3 Why I Want to be a Posthuman When I Grow Up Nick Bostrom I am apt to think, if we knew what it was to be an angel for one hour, we should return to this world, though
…
Pearce, Den Otter, Doug Bailey, Eugene Leitl, Gustavo Alves, Holger Wagner, Kathryn Aegis, Keith Elis, Lee Daniel Crocker, Max More, Mikhail Sverdlov, Natasha Vita-More, Nick Bostrom, Ralf Fletcher, Shane Spaulding, T.O. Morrow, Thom Quinn. 5 Morphological Freedom – Why We Not Just Want It, but Need It Anders Sandberg Over the
…
(2002) The Illusion of Conscious Will. Cambridge, MA: MIT Press. Yudkowsky, E. (2008) “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Nick Bostrom and Milan Cirkovic, eds., Global Catastrophic Risks. Oxford: Oxford University Press. Further Reading Drexler, Eric (1992) Nanosystems. Cambridge, MA: MIT Press. Goertzel, Ben and Pennachin
…
intra-Enlightenment debates since it is precisely the prospect of radical neuroscience that has made the erasure of the illusion of personal identity so tangible. Nick Bostrom acknowledged the problem of personal identity for transhumanism: Many philosophers who have studied the problem think that at least under some conditions, an upload of
…
. New York: Ballantine Books. Bostrom, Nick and Sandberg, Anders (2009) “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement.” In Julian Savulescu and Nick Bostrom, eds., Human Enhancement. Oxford: Oxford University Press, pp. 375–416. Cakic, Vince (2009) “Smart Drugs for Cognitive Enhancement: Ethical and Pragmatic Considerations in the Era
…
of Cosmetic Neurology.” Journal of Medical Ethics 35 (October 29), pp. 611–615. Caplan, Arthur L. (2009) “Good, Better, or Best.” In Julian Savulescu and Nick Bostrom, eds., Human Enhancement. Oxford: Oxford University Press, pp. 199–209. Glover, Jonathan (1984) What Sort of People Should There Be? New York: Penguin. Hart, Herbert
…
) Anarchy, State, and Utopia. New York: Basic Books. Overall, Christine (2009) “Life-Enhancement Technologies: The Significance of Social Category Membership.” In Julian Savulescu and Nick Bostrom, eds., Human Enhancement. Oxford: Oxford University Press. PatsFans.com (2010) “Falcons Say Signal Stealing Part of Football …” (June 8). http://www.patsfans.com/new-england
…
. 102–106. http://www.nature.com/nature/journal/v469/n7328/pdf/nature09603.pdf. Kamm, Frances (2008) “What Is and Is Not Wrong with Enhancements.” In Julian Savulescu and Nick Bostrom, eds., Human Enhancement. Oxford: Oxford University Press. Kant, Immanuel (1991) Moral Law: Groundwork of the Metaphysics of Morals, trans. Herbert James Paton
…
wave that “began in earnest” with the working draft of the human genome (circa 2000). Greg Klerkx 2006: 63. 7 Max More, Natasha Vita-More, Nick Bostrom, J. Hughes, Aubrey de Grey, Martine Rothblatt, Ben Goertzel, Ray Kurzweil, to name a few, all of whom are contributors to this seminal volume. 8
…
is less surprising: available evidence shows that human experts are usually weak at long-term forecasting even without singularities. Notes I would like to thank Nick Bostrom, Toby Ord, Stuart Armstrong, Carl Shulman and Roko Mijic for useful comments and additions. 1 “The Singularity Is the Technological Creation of Smarter-than-Human
…
and the future of AGI” following the AGI10 conference in Lugano, Switzerland. 37 A Critical Discussion of Vinge’s Singularity Concept David Brin, Damien Broderick, Nick Bostrom, Alexander “Sasha” Chislenko, Robin Hanson, Max More, Michael Nielsen, and Anders Sandberg Comment by David Brin: Singularities Vernor Vinge’s “singularity” is a worthy contribution
…
of discovery. And there might be no limits to what we can discover, and do, on the far side of the Spike.1 Comment by Nick Bostrom: Singularity and Predictability I find myself to be in close agreement with much of what Vinge has said about the singularity. Like Vinge, I do
by Toby Ord · 24 Mar 2020 · 513pp · 152,381 words
world bereft of human flourishing. Extinction would bring about this failed world and lock it in forever—there would be no coming back. The philosopher Nick Bostrom showed that extinction is not the only way this could happen: there are other catastrophic outcomes in which we lose not just the present, but
…
Leslie, whose 1996 book The End of the World broadened the focus from nuclear war to human extinction in general. After reading Leslie’s work, Nick Bostrom took this a step further: identifying and analyzing the broader class of existential risks that are the focus of this book. Our moral and political
…
are thus important to understanding the relative value of broad versus narrowly targeted efforts. And they are important for estimating the total risk we face. Nick Bostrom has recently pointed to an important class of unforeseen risk.138 Every year as we invent new technologies, we may have a chance of stumbling
…
get it started. Thank you, Andrew. And thanks to all those who contributed to the early conversations on what form it should take: Nick Beckstead, Nick Bostrom, Brian Christian, Owen Cotton-Barratt, Andrew Critch, Allan Dafoe, Daniel Dewey, Luke Ding, Eric Drexler, Hilary Greaves, Michelle Hutchinson, Will MacAskill, Jason Matheny, Luke Muehlhauser
…
many people generously gave their time to read and comment on the manuscript. Thank you to Josie Axford-Foster, Beth Barnes, Nick Beckstead, Haydn Belfield, Nick Bostrom, Danny Bressler, Tim Campbell, Natalie Cargill, Shamil Chandaria, Paul Christiano, Teddy Collins, Owen Cotton-Barratt, Andrew Critch, Allan Dafoe, Max Daniel, Richard Danzig, Ben Delo
…
to Oxford, where I was lucky enough to have them both as mentors. Yet I think the greatest influence on me at Oxford has been Nick Bostrom, through his courage to depart from the well-worn tracks and instead tackle vast questions about the future that seem almost off-limits in academic
…
Extinction. A landmark book that broadened the discussion from nuclear risk to all risks of human extinction, cataloging the threats and exploring new philosophical angles. Nick Bostrom (2002). “Existential Risks: Analyzing Human Extinction Scenarios.” Established the concept of existential risk and introduced many of the most important ideas. Yet mainly of historic
…
interest, for it is superseded by his 2013 paper below. Nick Bostrom (2003). “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Explored the limits of what humans might be able to achieve in the future, suggesting
…
our civilization by even a tiny amount, yet that even this is overshadowed by the importance of increasing the chance we get there at all. Nick Bostrom (2013). “Existential Risk Prevention as Global Priority.” An updated version of his essay from 2002, this is the go-to paper on existential risk. Nick
…
of the most important aspects of our world have changed over the last two centuries. From the raw data to compelling charts and insightful analysis. Nick Bostrom (2014). Superintelligence: Paths, Dangers, Strategies. The foundational work on artificial intelligence and existential risk. Stuart Russell (2019). Human Compatible: AI and the Problem of Control
…
of Delayed Technological Development.” Utilitas, 15(3), 308–14. —(2005). A Philosophical Quest for our Biggest Problems (talk at TEDGlobal). https://www.ted.com/talks/nick_bostrom_on_our_biggest_problems. —(2006). “What Is a Singleton.” Linguistic and Philosophical Investigations, 5(2), 48–54. —(2008). “Letter from Utopia.” Studies in Ethics, Law
…
in coming decades. 27 The name was coined by William MacAskill and myself. The ideas build on those of our colleagues Nick Beckstead (2013) and Nick Bostrom (2002b, 2003). MacAskill is currently working on a major book exploring these ideas. 28 We will see in Appendix E that as well as safeguarding
…
to the future of their descendants. Extinction is the undoing of the human enterprise.” 40 See, for example Cohen (2011), Scheffler (2009), Frick (2017). 41 Nick Bostrom (2013) expanded upon this idea: “We might also have custodial duties to preserve the inheritance of humanity passed on to us by our ancestors and
…
this situation has been one of the major themes of my work so far (Greaves & Ord, 2017; MacAskill & Ord, 2018; MacAskill, Bykvist & Ord, forthcoming). 50 Nick Bostrom (2013, p. 24) put this especially well: “Our present understanding of axiology might well be confused. We may not now know—at least not in
…
a human. 93 Omohundro (2008); Bostrom (2012). For a detailed explanation of how these instrumental goals could lead to very bad outcomes for humanity, see Nick Bostrom’s Superintelligence (2014). 94 Learning algorithms rarely deal with the possibility of changes to the reward function at future times. So it is ambiguous whether
…
humanity at a time when we know so little about the nature of conscious experience. 103 Metz (2018). 104 Stuart Russell (2015): “As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if
…
–7) estimated a 30% risk over the next five centuries (after which he thought we’d very likely be on track to achieve our potential). Nick Bostrom (2002b) said about the total existential risk over the long term: “My subjective opinion is that setting this probability lower than 25% would be misguided
…
global security. 7 The name was suggested by William MacAskill, who has also explored the need for such a process and how it might work. Nick Bostrom (2013, p. 24) expressed a closely related idea: “Our present understanding of axiology might well be confused. We may not now know—at least not
…
the globe. 3. A unification of the world under a single government, possessing a monopoly of all the major weapons of war.” And more recently Nick Bostrom has advocated for humanity forming what he calls a “Singleton” (2006). This could be a form of world government, but it doesn’t have to
…
spent on the one that is easiest to improve in relative terms. 38 This argument, about how safety beats haste, was first put forward by Nick Bostrom (2003), though with different empirical assumptions. He measured the annual cost of delay by the energy in the starlight of the settleable region of our
by Martin Ford · 16 Nov 2018 · 586pp · 186,548 words
. MARTIN FORD A Brief Introduction to the Vocabulary of AI How AI Systems Learn 2. YOSHUA BENGIO 3. STUART J. RUSSELL 4. GEOFFREY HINTON 5. NICK BOSTROM 6. YANN LECUN 7. FEI-FEI LI 8. DEMIS HASSABIS 9. ANDREW NG 10. RANA EL KALIOUBY 11. RAY KURZWEIL 12. DANIELA RUS 13.
…
Musk. Nearly everyone I spoke to weighed in on this issue. To ensure that I gave this concern adequate and balanced coverage, I spoke with Nick Bostrom of the Future of Humanity Institute at the University of Oxford. Bostrom is the author of the bestselling book Superintelligence: Paths, Dangers, Strategies, which
…
an arms race with other countries, especially China. Is that something we should take seriously, something we should be very concerned about? STUART J. RUSSELL: Nick Bostrom and others have raised a concern that, if a party feels that strategic dominance in AI is a critical part of their national security and
…
NEC C&C award, the BBVA award, and the NSERC Herzberg Gold Medal, which is Canada’s top award in science and engineering.” Chapter 5. NICK BOSTROM The concern is not that [an AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and
…
really want. Then you get a future shaped in accordance with alien criteria. PROFESSOR, UNIVERSITY OF OXFORD AND DIRECTOR OF THE FUTURE OF HUMANITY INSTITUTE Nick Bostrom is widely recognized as one of the world’s top experts on superintelligence and the existential risks that AI and machine learning could potentially pose
…
an AGI system turns its energies toward improving itself, creating a recursive improvement loop that results in an intelligence that is vastly superior to humans. NICK BOSTROM: Yes, that’s one scenario and one problem, but there are other scenarios and other ways this transition to a machine intelligence era could unfold
…
outcomes that are harmful to humanity. Can you go into more detail on what that alignment problem, or control problem, is in layman’s terms? NICK BOSTROM: Well, one distinctive problem with very advanced AI systems that’s different from other technologies is that it presents not only the possibility of humans
…
is a system that turns the whole universe into paperclips because it’s a paperclip optimizer. Is that a good articulation of the alignment problem? NICK BOSTROM: The paperclip example is a stand-in for a wider category of possible failures where you ask a system to do one thing and, perhaps
…
concern. Why couldn’t a superintelligent system at some point just decide to have different goals or objectives? Humans do it all of the time! NICK BOSTROM: The reason why this seems less of a concern is that although a superintelligence would have the ability to change its goals, you have to
…
and there are drugs that can change the way the brain works. How do we know there’s not something comparable in the machine space? NICK BOSTROM: I think there well could be, particularly in the earlier stages of development, before the machine achieves sufficient understanding of how AI works to be
…
’re working on at the Future of Humanity Institute, and what other think tanks like OpenAI and the Machine Intelligence Research Institute are focusing on? NICK BOSTROM: Yes, that’s right. We do have a group working on that, but we’re also working on other things. We also have a
…
yours are an appropriate level of resource allocation for AI governance, or do you think that governments should jump into this at a larger scale? NICK BOSTROM: I think there could be more resources on AI safety. It’s not actually just us: DeepMind also has an AI safety group that we
…
think that superintelligence concerns should be more in the public sphere? Do you want to see presidential candidates in the United States talking about superintelligence? NICK BOSTROM: Not really. It’s still a bit too early to seek involvement from states and governments because right now it’s not exactly clear what
…
during their tenure. MARTIN FORD: So, when Elon Musk says superintelligence is a bigger threat than North Korea, could that rhetoric potentially make things worse? NICK BOSTROM: If you are getting into this prematurely, with a view to there being a big arms race, which could lead to a more competitive situation
…
the AI community? How did you first become interested in AI, and how did your career develop to the point it’s at right now? NICK BOSTROM: I’ve been interested in artificial intelligence for as long as I can remember. I studied artificial intelligence, and later computational neuroscience, at university,
…
FORD: In your work at the Future of Humanity Institute you’ve focused on a variety of existential risks, not just AI-related dangers, right? NICK BOSTROM: That’s right, but we’re also looking at the existential opportunities, we are not blind to the upside of technology. MARTIN FORD: Tell me
…
about some of the other risks you’ve looked at, and why you’ve chosen to focus so much on machine intelligence above all. NICK BOSTROM: At the FHI, we’re interested in really big-picture questions, the things that could fundamentally change the human condition in some way. We’
…
indeed all areas where human intelligence currently is useful. MARTIN FORD: What about climate change, for example? Is that on your list of existential threats? NICK BOSTROM: Not so much, partly because we prefer to focus where we think our efforts might make a big difference, which tends to be areas where
…
significant than from climate change, and that we’re allocating our resources and investment in these questions incorrectly? That sounds like a very controversial view. NICK BOSTROM: I do think that there is some misallocation, and it’s not just between those two fields in particular. In general, I don’t think
…
dependent on achieving AGI and beyond that, superintelligence? The risks associated with narrow AI are probably significant, but not what you would characterize as existential. NICK BOSTROM: That’s correct. We do also have some interest in these more near-term applications of machine intelligence, which are interesting in their own right
…
of things like autonomous weapons that can make their own decisions about who to kill. Do you support a ban on weapons of those types? NICK BOSTROM: It would be positive if the world could avoid immediately jumping into another arms race, where huge amounts of money are spent perfecting killer robots
…
these more direct applications of drones to kill or injure people. MARTIN FORD: Do you feel there is a role for regulation of these technologies? NICK BOSTROM: Some regulation, for sure. If you’re going to have killer drones, you don’t want any old criminal to be able to easily assassinate
…
FORD: It’s been about four years since your book Superintelligence: Paths, Dangers, Strategies was published. Are things progressing at the rate that you expected? NICK BOSTROM: Progress has been faster than expected over the last few years, with big advances in deep learning in particular. MARTIN FORD: You had a table
…
a decade out, so that would have been roughly 2024. As things turned out, it actually occurred just two years after you published the book. NICK BOSTROM: I think the statement I made was that if progress continued at the same rate as it had been going over the last several years
…
of these deep learning systems. MARTIN FORD: What are the major milestones or hurdles that you would point to that stand between us and AGI? NICK BOSTROM: There are several big challenges remaining in machine learning, such as needing better techniques for unsupervised learning. If you think about how adult humans come
…
Are there other players that you would point to that are doing important work, that you think may be competitive with what DeepMind is doing? NICK BOSTROM: DeepMind is certainly among the leaders, but there are many places where there is exciting work being done on machine learning or work that might
…
a Western thing, countries like China are investing greatly in building up their domestic capacity. MARTIN FORD: Those are not focused specifically on AGI, though. NICK BOSTROM: Yes, but it’s a fuzzy boundary. Among those groups currently overtly working towards AGI, aside from DeepMind, I guess OpenAI would be another group
…
general AI. MARTIN FORD: What about consciousness? Is that something that might automatically emerge from an intelligent system, or is that an entirely independent phenomenon? NICK BOSTROM: It depends on what you mean by consciousness. One sense of the word is the ability to have a functional form of self-awareness, that
…
I believe I’m conscious, but you don’t have that kind of connection with a machine. It’s a very difficult question to answer. NICK BOSTROM: Yes, I think it is difficult. I wouldn’t say species membership is the main criterion here that we use to posit consciousness, there are
…
So, would you support, for example, a basic income as a mechanism to make sure that everyone can enjoy the fruits of all this progress? NICK BOSTROM: Some functional analog of that could start to look increasingly desirable over time. If AI truly succeeds, and we resolve the technical control problem and
…
reach AGI first, or at the same time as us? It seems to me that the values of whatever culture develops this technology do matter. NICK BOSTROM: I think it might matter less which particular culture happens to develop it first. It matters more how competent the particular people or group that
…
could essentially be uncatchable, so there’s a huge incentive for exactly the kind of competition that you’re saying isn’t a good thing. NICK BOSTROM: In certain scenarios, yes, you could have dynamics like that, but I think the earlier point I made about pursuing this with a credible
…
AI would be an even greater challenge in terms of verifying that people aren’t cheating, even if you did have some sort of agreement. NICK BOSTROM: In some respects it would be more challenging, and in other respects maybe less challenging. The human game has often been played around scarcity—there
…
easier to form cooperative arrangements. MARTIN FORD: Do you think that we will solve these problems and that AI will be a positive force overall? NICK BOSTROM: I’m full of both hopes and fears. I would like to emphasize the upsides here, both in the short term and longer term. Because
…
see all the beneficial uses that this technology could be put to and I hope that this could be a great blessing for the world. NICK BOSTROM is a Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Governance of Artificial
…
in American politics, clearly illustrating that the desire for power is not correlated with intelligence. MARTIN FORD: There is a pretty reasoned argument, though, that Nick Bostrom, in particular, has raised. The problem is not an innate need to take over the world, but rather that an AI could be given a
…
I wanted to touch on some of those. There’s this idea that there is a true existential threat, something that’s been raised by Nick Bostrom, Elon Musk, and Stephen Hawking, where superintelligence could happen very rapidly, a recursive self-improvement loop. I’ve heard people say that your
…
What about the risks and the downsides associated with AGI? Elon Musk has talked about “raising the demon” and an existential threat. There’s also Nick Bostrom, who I know is on DeepMind’s advisory board and has written a lot on this idea. What do you think about these fears? Should
…
are a lot of complications there, but those are more like geopolitical issues that we need to solve as a society. A lot of what Nick Bostrom worries about are the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that
…
that could lead into best practices and protocols. I’m pretty confident that path will address a lot of the technical issues that people like Nick Bostrom are worried about, like the collateral consequences of goals not being set correctly. To make advances in that, my view has always been that
…
inform the research that has to be done to come up with the solutions to some of those questions that are posed by people like Nick Bostrom. We are actively thinking about these problems and we’re taking them seriously, but I’m a big believer in human ingenuity to overcome
…
do you make sure this continues to be widely available to everybody? MARTIN FORD: What do you think about the existential concerns? Elon Musk and Nick Bostrom talk about the control problem or the alignment problem. One scenario is where we could have a fast takeoff with recursive improvement, and then we
…
we have no idea how to solve the cybersecurity threats in the near term. MARTIN FORD: What about long-term threats, though? Elon Musk and Nick Bostrom are very concerned about the control problem with AI; the idea that there could be a recursive self-improvement cycle that could lead to an
…
a bit different. That kind of flexibility is, I think, important. In terms of other kinds of risks, I’m not as worried about the Nick Bostrom superintelligence aspect. I do think that as computer scientists and machine learning researchers we have the opportunity and the ability to shape how we want
…
be a trend, but that will certainly be a transition. MARTIN FORD: How do you feel about the risks of superintelligence that Elon Musk and Nick Bostrom have both been talking about? DAVID FERRUCCI: I think there’s a lot of cause to be concerned anytime you give a machine leverage. That
…
specifically to do with AI, only that you must design those systems with concern about error cases and cybersecurity. The other thing that people like Nick Bostrom talk about is how the machine might develop its own goals and decide it’s going to lay waste to the human race to achieve
…
concerned about because there are fewer incentives for machines to react like that. You’d have to program the computer to do something like that. Nick Bostrom talks about the idea that you could give the machine a benign goal but because it’s smart enough it will find a complex plan
…
as a society. MARTIN FORD: Let me ask you to comment more specifically on the prospect for superintelligence and the alignment or control problem that Nick Bostrom has written about. I think his concern is that, while it might be a long time before superintelligence is achieved, it might take us even
…
it’s terribly damaging that as a society that narrative is ongoing. MARTIN FORD: There is a concern expressed by people like Elon Musk and Nick Bostrom, where they talk about the fast take-off scenario, and the control problem related to superintelligence. Their focus is on the fear that AI could
…
enhancing cognitive capability we will be in a better position to control the AI. Is that a realistic view? BRYAN JOHNSON: I’m appreciative of Nick Bostrom for being as thoughtful as he has been about the risks that AI presents. He started this whole discussion, and he’s been fantastic in
by Mark Stevenson · 4 Dec 2010 · 379pp · 108,129 words
this encouraging: I’m still single with no kids, but it appears there’s plenty of time to raise a family, maybe even understand cricket. Nick Bostrom, founder of Oxford University’s Future of Humanity Institute, agrees. In fact, he thinks I could live not for a hundred years but for thousands
…
to compare ageing to fox hunting, claiming they are both ‘traditional,’ ‘keep the numbers down’ and, his punch line, ‘fundamentally barbaric.’ At the same conference, Nick Bostrom later stated, ‘Death is a big problem. If you look at the statistics the odds are not very favourable. So far most people who have
…
grown from my own stem cells to keep me young? How far can this go? How long can I live? How enhanced could I be? Nick Bostrom suggests that the answers to these last two questions could respectively be ‘a very long time’ and ‘as much as you like.’ Take, for instance
…
point of trying to get there – that it’s something we think it’s supremely worth getting to.’ I’m struck by how very human Nick Bostrom’s motivations are – on an emotional level they are the things most people care about: relationships, love, great music. I try to imagine myself ‘posthuman
…
-overs alive. ‘If only he’d known …’ he says, as the sun sets on a dead world. I’m also reminded of my chat with Nick Bostrom in Oxford. ‘Maybe you’re a happy little ant, building a good ant hill. But maybe you’re contributing to Hitler’s war machine?’ I
…
frenzy. Since 1982, genome data held in the NCBI’s ‘GenBank’ has undergone exponential growth, doubling every eighteen months. At a 2008 conference arranged by Nick Bostrom, with the child-friendly title ‘Global Catastrophic Risks,’ Dr Ali Nouri from Princeton University’s Program on Science and Global Security summarised it nicely: ‘What
…
too blinded by the pursuit of knowledge to see a bigger ethical picture. And, now I think about it, he’s not the only one. Nick Bostrom not only arranges conferences to look at this kind of thing, but his first introduction was to invite me to a seminar on biotechnology ethics
…
that goes beyond trickery. Somewhere on this road, we’ll have to deal with the sticky issue of whether robots have, er, human rights. As Nick Bostrom, who I met in Oxford points out, ‘Whether somebody is implemented on silicon or biological tissue – if it does not affect functionality or consciousness – is
…
us, and our fate will be in their hands. Option three is that man and machine merge. As I found out during my chat with Nick Bostrom, this is already happening. In the opening pages of Robot, Rodney Brooks writes, ‘Recently, I was confronted with a researcher in our lab, a double
…
common with science fiction than serious analysis. I first came across Ray’s ideas back in Oxford at the start of my trip. Ray, like Nick Bostrom of Oxford University’s Future of Humanity Institute, is a transhumanist who intends to live for hundreds if not thousands of years. I bumped up
…
a sandwich. I tell him about my trip and the people I’ve seen. He seems satisfied with the roll call. He’s collaborated with Nick Bostrom; nanotechnologist Eric Drexler is a friend; and Cynthia Breazeal appears in Ray’s docu-drama film version of The Singularity Is Near. Ray’s journey
…
next ten or twenty years at best. Everything I’ve talked about on my journey – from Eric Drexler’s nanofactory, to the biological immortality of Nick Bostrom, to George Church’s ‘personal genomics,’ to Cynthia Breazeal’s sociable robots – is coming quicker than we expect, along with synthetic fuels and a solar
…
remaining errors pertaining to their work are mine alone. In rough order of appearance I am delighted to thank the following list of extraordinary people: Nick Bostrom of the Future of Humanity Institute at Oxford University really did blow my mind and helped me dive in at the deep end. George Church
by Susan Schneider · 1 Oct 2019 · 331pp · 47,993 words
by Rizwan Virk · 31 Mar 2019 · 315pp · 89,861 words
by Peter Singer · 1 Jan 2015 · 197pp · 59,656 words
by Keach Hagey · 19 May 2025 · 439pp · 125,379 words
by Brian Christian · 5 Oct 2020 · 625pp · 167,349 words
by Calum Chace · 28 Jul 2015 · 144pp · 43,356 words
by Eliezer Yudkowsky · 11 Mar 2015 · 1,737pp · 491,616 words
by Ray Kurzweil · 25 Jun 2024
by Nate Silver · 12 Aug 2024 · 848pp · 227,015 words
by Robin Hanson · 31 Mar 2016 · 589pp · 147,053 words
by Jacob Turner · 29 Oct 2018 · 688pp · 147,571 words
by James D. Miller · 14 Jun 2012 · 377pp · 97,144 words
by Clive Thompson · 26 Mar 2019 · 499pp · 144,278 words
by James Barrat · 30 Sep 2013 · 294pp · 81,292 words
by Stuart Russell · 7 Oct 2019 · 416pp · 112,268 words
by William MacAskill · 31 Aug 2022 · 451pp · 125,201 words
by Karen Hao · 19 May 2025 · 660pp · 179,531 words
by Alex Kantrowitz · 6 Apr 2020 · 260pp · 67,823 words
by Jamie Susskind · 3 Sep 2018 · 533pp
by Anthony M. Townsend · 15 Jun 2020 · 362pp · 97,288 words
by Erik J. Larson · 5 Apr 2021
by Dan Heath · 3 Mar 2020
by David Runciman · 9 May 2018 · 245pp · 72,893 words
by John Brockman · 19 Feb 2019 · 339pp · 94,769 words
by Yuval Noah Harari · 9 Sep 2024 · 566pp · 169,013 words
by Chris Impey · 12 Apr 2015 · 370pp · 97,138 words
by John Brockman · 5 Oct 2015 · 481pp · 125,946 words
by Paul Scharre · 23 Apr 2018 · 590pp · 152,595 words
by Parmy Olson · 284pp · 96,087 words
by Alan Weisman · 5 Aug 2008 · 482pp · 106,041 words
by Richard A. Clarke · 10 Apr 2017 · 428pp · 121,717 words
by Cade Metz · 15 Mar 2021 · 414pp · 109,622 words
by Robert Skidelsky and Nan Craig · 15 Mar 2020
by Richard Susskind and Daniel Susskind · 24 Aug 2015 · 742pp · 137,937 words
by Ray Kurzweil · 14 Jul 2005 · 761pp · 231,902 words
by Mark O'Connell · 28 Feb 2017 · 252pp · 79,452 words
by Jeanette Winterson · 15 Mar 2021 · 256pp · 73,068 words
by Sonia Arrison · 22 Aug 2011 · 381pp · 78,467 words
by David Wallace-Wells · 19 Feb 2019 · 343pp · 101,563 words
by Anil Seth · 29 Aug 2021 · 418pp · 102,597 words
by Thomas W. Malone · 14 May 2018 · 344pp · 104,077 words
by Justin E. H. Smith · 22 Mar 2022 · 198pp · 59,351 words
by Scott Patterson · 5 Jun 2023 · 289pp · 95,046 words
by Michael P. Lynch · 21 Mar 2016 · 230pp · 61,702 words
by Ray Kurzweil · 13 Nov 2012 · 372pp · 101,174 words
by Martin Ford · 13 Sep 2021 · 288pp · 86,995 words
by Joshua Cooper Ramo · 16 May 2016 · 326pp · 103,170 words
by Amy Webb · 5 Mar 2019 · 340pp · 97,723 words
by Simon McCarthy-Jones · 12 Apr 2021
by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann and Michelle Estes · 28 Feb 2015
by Stuart Armstrong · 1 Feb 2014 · 48pp · 12,437 words
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
by Garry Kasparov · 1 May 2017 · 331pp · 104,366 words
by Daniel Susskind · 14 Jan 2020 · 419pp · 109,241 words
by Luke Dormehl · 10 Aug 2016 · 252pp · 74,167 words
by Paul Scharre · 18 Jan 2023
by Chuck Klosterman · 6 Jun 2016 · 281pp · 78,317 words
by Kai-Fu Lee · 14 Sep 2018 · 307pp · 88,180 words
by Stephen Witt · 8 Apr 2025 · 260pp · 82,629 words
by Melanie Mitchell · 14 Oct 2019 · 350pp · 98,077 words
by Peter H. Diamandis and Steven Kotler · 28 Jan 2020 · 501pp · 114,888 words
by Peter Boghossian · 1 Nov 2013 · 257pp · 77,030 words
by Brian Clegg · 8 Dec 2015 · 315pp · 92,151 words
by Thomas H. Davenport and Julia Kirby · 23 May 2016 · 347pp · 97,721 words
by Samuel Arbesman · 18 Jul 2016 · 222pp · 53,317 words
by Jeremy Lent · 22 May 2017 · 789pp · 207,744 words
by Bill McKibben · 15 Apr 2019
by Steven Sloman · 10 Feb 2017 · 313pp · 91,098 words
by Daniel Susskind · 16 Apr 2024 · 358pp · 109,930 words
by Christopher Summerfield · 11 Mar 2025 · 412pp · 122,298 words
by Kenneth Payne · 16 Jun 2021 · 339pp · 92,785 words
by Ed Finn · 10 Mar 2017 · 285pp · 86,853 words
by Timothy Ferriss · 6 Dec 2016 · 669pp · 210,153 words
by Ajay Agrawal, Joshua Gans and Avi Goldfarb · 16 Apr 2018 · 345pp · 75,660 words
by Deirdre N. McCloskey · 15 Nov 2011 · 1,205pp · 308,891 words
by P. W. Singer · 1 Jan 2010 · 797pp · 227,399 words
by Yarden Katz
by Nicole Kobie · 3 Jul 2024 · 348pp · 119,358 words
by George Zarkadakis · 7 Mar 2016 · 405pp · 117,219 words
by Jamie Bartlett · 12 Jun 2017 · 390pp · 109,870 words
by Yuval Noah Harari · 1 Mar 2015 · 479pp · 144,453 words
by Daniel Crosby · 15 Feb 2018 · 249pp · 77,342 words
by George Gilder · 16 Jul 2018 · 332pp · 93,672 words
by Tim O'Reilly · 9 Oct 2017 · 561pp · 157,589 words
by Maximilian Kasy · 15 Jan 2025 · 209pp · 63,332 words
by Ben Goertzel and Pei Wang · 1 Jan 2007 · 303pp · 67,891 words
by John Brockman · 14 Feb 2012 · 416pp · 106,582 words
by Sonja Thiel and Johannes C. Bernhardt · 31 Dec 2023 · 321pp · 113,564 words
by Rob Reich · 20 Nov 2018 · 257pp · 75,685 words
by Antonio Damasio · 6 Feb 2018 · 289pp · 87,292 words
by Michael Wooldridge · 2 Nov 2018 · 346pp · 97,890 words
by Braden R. Allenby and Daniel R. Sarewitz · 15 Feb 2011
by Byrne Hobart and Tobias Huber · 29 Oct 2024 · 292pp · 106,826 words
by Benjamin H. Bratton · 19 Feb 2016 · 903pp · 235,753 words
by Jamie Bartlett · 20 Aug 2014 · 267pp · 82,580 words
by David Deutsch · 30 Jun 2011 · 551pp · 174,280 words
by Mustafa Suleyman · 4 Sep 2023 · 444pp · 117,770 words
by Brad Smith and Carol Ann Browne · 9 Sep 2019 · 482pp · 121,173 words
by Fareed Zakaria · 5 Oct 2020 · 289pp · 86,165 words
by Calum Chace · 4 Feb 2014 · 345pp · 104,404 words
by Eric Topol · 1 Jan 2019 · 424pp · 114,905 words
by Nicholas Carr · 5 Sep 2016 · 391pp · 105,382 words
by Lewis Dartnell · 15 Apr 2014 · 398pp · 100,679 words
by David Sumpter · 18 Jun 2018 · 276pp · 81,153 words
by Tim Berners-Lee · 8 Sep 2025 · 347pp · 100,038 words
by Ethan Mollick · 2 Apr 2024 · 189pp · 58,076 words
by Mo Gawdat · 29 Sep 2021 · 259pp · 84,261 words
by Jaron Lanier · 21 Nov 2017 · 480pp · 123,979 words
by Calum Chace · 17 Jul 2016 · 477pp · 75,408 words
by Peter Thiel and Blake Masters · 15 Sep 2014 · 185pp · 43,609 words
by Byron Reese · 23 Apr 2018 · 294pp · 96,661 words
by Richard Yonck · 7 Mar 2017 · 360pp · 100,991 words
by Mark Thomas · 7 Aug 2019 · 286pp · 79,305 words
by Carl Benedikt Frey · 17 Jun 2019 · 626pp · 167,836 words
by Timothy Ferriss · 14 Jun 2017 · 579pp · 183,063 words
by Adam Greenfield · 29 May 2017 · 410pp · 119,823 words
by Marc Goodman · 24 Feb 2015 · 677pp · 206,548 words
by Brett King · 5 May 2016 · 385pp · 111,113 words
by Benjamin Wallace · 18 Mar 2025 · 431pp · 116,274 words
by Rob Reich, Mehran Sahami and Jeremy M. Weinstein · 6 Sep 2021
by Adrienne Mayor · 27 Nov 2018
by Ozan Varol · 13 Apr 2020 · 389pp · 112,319 words
by Niall Ferguson · 13 Nov 2007 · 471pp · 124,585 words
by Jamie Bartlett · 4 Apr 2018 · 170pp · 49,193 words
by Bruce Schneier · 3 Sep 2018 · 448pp · 117,325 words
by James Bridle · 6 Apr 2022 · 502pp · 132,062 words
by Frank Pasquale · 14 May 2020 · 1,172pp · 114,305 words
by John Cheney-Lippold · 1 May 2017 · 420pp · 100,811 words
by Eliezer Yudkowsky and Nate Soares · 15 Sep 2025 · 215pp · 64,699 words
by Nouriel Roubini · 17 Oct 2022 · 328pp · 96,678 words
by Yuval Noah Harari · 5 Apr 2018 · 97pp · 31,550 words
by Jeff Booth · 14 Jan 2020 · 180pp · 55,805 words
by Michio Kaku · 15 Mar 2011 · 523pp · 148,929 words
by Jerry Kaplan · 3 Aug 2015 · 237pp · 64,411 words
by Walter Isaacson · 9 Mar 2021 · 700pp · 160,604 words
by Pedro Domingos · 21 Sep 2015 · 396pp · 117,149 words
by Viktor Mayer-Schönberger and Thomas Ramge · 27 Feb 2018 · 267pp · 72,552 words
by K. Eric Drexler · 6 May 2013 · 445pp · 105,255 words
by Martin J. Rees · 14 Oct 2018 · 193pp · 51,445 words
by Seth Stephens-Davidowitz · 8 May 2017 · 337pp · 86,320 words
by Diane Ackerman · 9 Sep 2014 · 380pp · 104,841 words
by Sabine Hossenfelder · 11 Jun 2018 · 340pp · 91,416 words
by Jared Diamond · 6 May 2019 · 459pp · 144,009 words
by Ben Mezrich · 6 Nov 2023 · 279pp · 85,453 words
by Guillaume Pitron · 14 Jun 2023 · 271pp · 79,355 words