by Martin Ford · 16 Nov 2018 · 586pp · 186,548 words
Concerns about how AI-powered technologies such as facial recognition will impact privacy seem well-founded. Warnings that robots will soon be weaponized, or that truly intelligent (or superintelligent) machines might someday represent an existential threat to humanity, are regularly reported in the media. A number of very prominent public figures—none of whom are
…
address those concerns? Is there a role for government regulation? Will AI unleash massive economic and job market disruption, or are these concerns overhyped? Could superintelligent machines someday break free of our control and pose a genuine threat? Should we worry about an AI “arms race,” or that other countries with authoritarian
…
and the economy? JOSH TENENBAUM: Some of the risks that people have advertised a lot are that we’ll see some kind of singularity, or superintelligent machines that take over the world or have their own goals that are incompatible with human existence. It’s possible that could happen in the far
by Susan Schneider · 1 Oct 2019 · 331pp · 47,993 words
value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from eating an orange. If superintelligent machines are not conscious, either because it’s impossible or because they aren’t designed to be, we could be in trouble. It is important to
by Stuart Russell · 7 Oct 2019 · 416pp · 112,268 words
neutron-induced nuclear chain reaction, superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I am, however, fairly confident that we have some breathing space
…
of the “take over the world” variety. The existential risk from AI does not come primarily from simple-minded killer robots. On the other hand, superintelligent machines in conflict with humanity could certainly arm themselves this way, by turning relatively stupid killer robots into physical extensions of a global control system. Eliminating
…
species has essentially no future beyond that which we deign to allow. We do not want to be in a similar situation vis-à-vis superintelligent machines. I’ll call this the gorilla problem—specifically, the problem of whether humans can maintain their supremacy and autonomy in a world that includes machines
…
mind.” This commandment precludes computing devices of any kind. All these drastic responses reflect the inchoate fears that machine intelligence evokes. Yes, the prospect of superintelligent machines does make one uneasy. Yes, it is logically possible that such machines could take over the world and subjugate or eliminate the human race. If
…
upon us noiselessly and by imperceptible approaches.” The prologue to Max Tegmark’s Life 3.0 describes in some detail a scenario in which a superintelligent machine gradually assumes economic and political control over the entire world while remaining essentially undetected. The Internet and the global-scale machines that it supports—the
…
the true nature of the problem. I don’t mean to suggest that there cannot be any reasonable objections to the view that poorly designed superintelligent machines would present a serious risk to humanity. It’s just that I have yet to see such an objection. Since the issue seems to be
…
questioners so that they ask only simple questions. And, finally, we have yet to invent a firewall that is secure against ordinary humans, let alone superintelligent machines. I think there might be solutions to some of these problems, particularly if we limit Oracle AI systems to be provably sound logical or Bayesian
…
pleasure existed—no knowledge, no love, no enjoyment of beauty, no moral qualities.”9 This finds its modern echo in Stuart Armstrong’s point that superintelligent machines tasked with maximizing pleasure might “entomb everyone in concrete coffins on heroin drips.”10 Another example: in 1945, Karl Popper proposed the laudable goal of
…
extreme caution. 10 PROBLEM SOLVED? If we succeed in creating provably beneficial AI systems, we would eliminate the risk that we might lose control over superintelligent machines. Humanity could proceed with their development and reap the almost unimaginable benefits that would flow from the ability to wield far greater intelligence in advancing
…
a global scale along with radical changes in how our society works. To avoid making a bad situation worse, we might need the help of superintelligent machines, both in shaping the solution and in the actual process of achieving a balance for each individual. Any parent of a small child is familiar
…
–42, 165–69; governance of, 249–53; health advances and, 101; history of, 4–6, 40–42; human preferences and (See human preferences); imagining what superintelligent machines could do, 93–96; intelligence, defining, 39–61; intelligent personal assistants and, 67–71; limits of superintelligence, 96–98; living standard increases and, 98–100
by Paul Scharre · 23 Apr 2018 · 590pp · 152,595 words
to be a mirage. If our benchmark for “intelligent” is what humans do, advanced artificial intelligence may be so alien that we never recognize these superintelligent machines as “true AI.” This dynamic already exists to some extent. Micah Clark pointed out that “as soon as something works and is practical it’s
by Max More and Natasha Vita-More · 4 Mar 2013 · 798pp · 240,182 words
. Both human beings and bacteria have good claims to being the “dominant species” on Earth – depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from
…
AI occur? I can imagine several scenarios, and I’m sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us. In such a
by Martin Ford · 13 Sep 2021 · 288pp · 86,995 words
issue into an already dysfunctional political process. Do we really want politicians with little or no understanding of the technology tweeting about the dangers of superintelligent machines? Given the very limited ability of the U.S. government in particular to accomplish almost anything at all, I also worry that hyping or politicizing
by Amy Webb · 5 Mar 2019 · 340pp · 97,723 words
for innovation and progress. We’re in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but
by Nick Bostrom · 26 Mar 2024 · 547pp · 173,909 words
ones to perfect. Nevertheless, I believe it could be done at technological maturity. I think it will not be humans that invent this technology, but superintelligent machines. Let’s imagine what the procedure might be like. Your brain is infiltrated by an armada of millions of coordinated nanobots. (Maybe they get there
by Mark O'Connell · 28 Feb 2017 · 252pp · 79,452 words
and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its own particular goals? Certainly, if you were to take at face value
…
radically. And whether they would change for the better or for the worse is an open question. The fundamental risk, Nick argued, was not that superintelligent machines might be actively hostile toward their human creators, or antecedents, but that they would be indifferent. Humans, after all, weren’t actively hostile toward most
…
made extinct over the millennia of our ascendance; they simply weren’t part of our design. The same could turn out to be true of superintelligent machines, which would stand in a similar kind of relationship to us as we ourselves did to the animals we bred for food, or the ones
by Mo Gawdat · 29 Sep 2021 · 259pp · 84,261 words
it to ensure its own survival. All in all, whichever way this may go, sooner or later capital markets will be traded by a few superintelligent machines, which will be owned by a few massively wealthy individuals – people who will decide the fate of every company, shareholder and value in our human
…
. These ideas aim to make sure that we will be able to make the right decisions at the right time; that we will only allow superintelligent machines into the real world when we have tested and trusted them; that we will retain the ability to only allow them a confined playground after
by Nick Bostrom · 3 Jun 2014 · 574pp · 164,509 words
by Melanie Mitchell · 14 Oct 2019 · 350pp · 98,077 words
by Erik J. Larson · 5 Apr 2021
by James Barrat · 30 Sep 2013 · 294pp · 81,292 words
by John Brockman · 19 Feb 2019 · 339pp · 94,769 words
by Eliezer Yudkowsky and Nate Soares · 15 Sep 2025 · 215pp · 64,699 words
by Tom Chivers · 12 Jun 2019 · 289pp · 92,714 words
by John Brockman · 5 Oct 2015 · 481pp · 125,946 words
by Adam Becker · 14 Jun 2025 · 381pp · 119,533 words
by Keach Hagey · 19 May 2025 · 439pp · 125,379 words
by Nick Bostrom and Milan M. Cirkovic · 2 Jul 2008
by Michael Kearns and Aaron Roth · 3 Oct 2019
by Parmy Olson · 284pp · 96,087 words
by Nate Silver · 12 Aug 2024 · 848pp · 227,015 words
by Richard Yonck · 7 Mar 2017 · 360pp · 100,991 words
by Luke Dormehl · 10 Aug 2016 · 252pp · 74,167 words
by Jeanette Winterson · 15 Mar 2021 · 256pp · 73,068 words
by Christopher Summerfield · 11 Mar 2025 · 412pp · 122,298 words
by Ray Kurzweil · 25 Jun 2024
by Joshua Cooper Ramo · 16 May 2016 · 326pp · 103,170 words
by Samuel Arbesman · 18 Jul 2016 · 222pp · 53,317 words
by Ray Kurzweil · 14 Jul 2005 · 761pp · 231,902 words
by Yuval Noah Harari · 9 Sep 2024 · 566pp · 169,013 words
by Jeff Hawkins · 15 Nov 2021 · 253pp · 84,238 words
by Timothy Ferriss · 6 Dec 2016 · 669pp · 210,153 words
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
by Thomas H. Davenport and Julia Kirby · 23 May 2016 · 347pp · 97,721 words
by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann and Michelle Estes · 28 Feb 2015
by John Markoff · 24 Aug 2015 · 413pp · 119,587 words
by David Goodhart · 7 Sep 2020 · 463pp · 115,103 words
by Rob Reich, Mehran Sahami and Jeremy M. Weinstein · 6 Sep 2021
by Pedro Domingos · 21 Sep 2015 · 396pp · 117,149 words
by Jevin D. West and Carl T. Bergstrom · 3 Aug 2020
by Jeff Hawkins and Sandra Blakeslee · 1 Jan 2004 · 246pp · 81,625 words