by Eliezer Yudkowsky and Nate Soares · 15 Sep 2025 · 215pp · 64,699 words
puzzles and invent new technologies, and to plan and strategize and plot, and to reflect on and improve itself. We might call AI like that “artificial superintelligence” (ASI), once it exceeds every human at almost every mental task. AI isn’t there yet. But AIs are smarter today than they were in
…
would have told you that ChatGPT-level artificial conversation wouldn’t be in reach for another thirty or fifty years. We didn’t know when artificial superintelligence would arrive, but we agreed it should be a global priority. In fact, we think the open letter drastically undersells the issue. We were invited
…
. More recently, as AI has begun to take off, we watched with concern as some of the newer people starting AI companies began talking about artificial superintelligence as a source of vast, wonderful powers. Powers that they assumed they’d control. The main danger, according to many of these founders, was that
…
focus to conveying one single point, the warning at the core of this book: If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die. We do
…
they had believed life would go back to normal before matters went too far. Once upon a time, humanity was on the brink of creating artificial superintelligence… Normality always ends. This is not to say that it’s inevitably replaced by something worse; sometimes it is and sometimes it isn’t, and
…
of death. This book is not full of great news, we admit. But we’re not here to tell you that you’re doomed, either. Artificial superintelligence doesn’t exist yet. Humanity could still decide not to build it. In the 1950s, many people expected that there would be a nuclear war
…
about brains, because you’ll understand every event that goes on inside the neurons. WOMAN: Let’s change the subject. BEFORE WE CAN EXPLAIN WHY ARTIFICIAL SUPERINTELLIGENCE achieved using anything like modern methods would inevitably go wrong, we need to quickly survey those modern methods: how they work, what they produce, and
…
giving you 0.2 percent of their wealth, not the same way you rationalize reasons they should want to. In much the same way, an artificial superintelligence will not want to find reasons to keep humanity around—not in the same way that humans desperately want to find reasons to be kept
…
the sort of technology we’d have unlocked by the year 3000, if our civilization survived that long? And how long would it take an artificial superintelligence? A thousand years of thinking takes about a month for something running at 10,000 times the speed of humans. If it was running at
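The speed-up arithmetic in this excerpt can be checked directly. A minimal sketch, assuming the quoted figures of a thousand subjective years of thought and a 10,000× speed advantage:

```python
# Convert subjective thinking time into wall-clock time at a given speedup.
# Figures below are the ones quoted in the excerpt, not independent estimates.
subjective_years = 1_000
speedup = 10_000

wall_clock_years = subjective_years / speedup   # 0.1 years
wall_clock_days = wall_clock_years * 365.25     # ~36.5 days

print(f"{wall_clock_days:.1f} days")  # roughly a month, as the excerpt says
```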
…
naked in the savannah, and figured out how to exploit reality and compound advantages until they were building guns and nuclear weapons and supercomputers. An artificial superintelligence would be even more resourceful, at even greater speeds. It would have no limits but the laws of physics. In a sense, the scenario we
…
for a long time. IfAnyoneBuildsIt.com/ii PART III FACING THE CHALLENGE CHAPTER 10 A CURSED PROBLEM THE GREATEST AND MOST CENTRAL DIFFICULTY IN ALIGNING artificial superintelligence is navigating the gap between before and after. Before, the AI is not powerful enough to kill us all, nor capable enough to resist our
…
attempts to change its goals. After, the artificial superintelligence must never try to kill us, because it would succeed. Engineers must align the AI before, while it is small and weak, and can’t
…
that civilization could do a little better next time. They risked and harmed only themselves, and all humanity benefited. When it comes to aligning an artificial superintelligence (ASI), humanity will not have the luxury of learning from sufficiently bad mistakes. This also means we can’t rely on the luxury of experience
…
off its coolant water and then overheating more, leave little room for error. And nuclear engineers don’t even have it that bad, compared to artificial superintelligence developers. Nuclear reactors that get too hot don’t start intelligently redesigning themselves to increase their own reactivity rate. Overheating nuclear reactors don’t start
…
reactors. Computer security. What do all these lessons add up to, and what can we learn from them about the difficulty of aligning an artificial superintelligence? An artificial superintelligence is like a space probe, in that we cannot test it in quite the same environment where it needs to work, and by default it
…
contrivance fails. And ASI alignment has it even worse than space probes: Failure will destroy not just billions of dollars of investment, but everything. An artificial superintelligence is like a nuclear reactor, in that its underlying reality involves immense, potentially self-amplifying forces, whose inner processes run faster than humans can react
…
. An artificial superintelligence is like a computer security problem, in that every constraint an engineer tries to place upon the system might be bypassed by the intelligent forces
…
nuclear engineering. It was by instructing their workers to lick radium-coated paintbrushes. What level of game is humanity bringing to the task of shaping artificial superintelligence? Elon Musk, the head of a major AI lab named xAI, shared his plan for ASI alignment in a 2023 interview: I’m going to
…
, but we think that in a substantial fraction of cases it is real. Unfortunately, even a very sincere idealism isn’t enough to prevent an artificial superintelligence from killing us all. That would take a mature science. It’s normal for a scientific community to be overly optimistic in the early days
…
a disadvantage if you agree to stop climbing the AI escalation ladder. We have already mentioned that Rishi Sunak acknowledged the existence of risks from artificial superintelligence in October 2023, while he was the prime minister of the United Kingdom. Also in October 2023, Chinese General Secretary Xi Jinping gave (what seems
…
should be regulated as a dangerous and powerful technology. A 2025 poll found that 60 percent of surveyed U.K. voters support laws against creating artificial superintelligence, and 63 percent support the prohibition of AIs that can make smarter AIs. This issue is not yet at the top of voters’ minds, but
…
fully persuaded: We do not like retreating to maybes. We think our argument stands on its merits. We think it is an easy call that artificial superintelligence will not dutifully serve the people who created it, and that ASI will repurpose the Earth in a fashion that leaves no survivors. But we
…
even just to your friends and family. If many people in many countries say with one voice that they’d rather not die to an artificial superintelligence and would prefer an international treaty—well, that would not itself prevent the disaster. Preventing nuclear war was more complicated than lots of people being
by John Brockman · 5 Oct 2015 · 481pp · 125,946 words
, sensor, and algorithmic technologies and it’s clear that today’s narrow AIs are on a trajectory toward a world of robust AI. Long before artificial superintelligences arrive, evolving AIs will be pressed into performing once unthinkable tasks, from firing weapons to formulating policy. Meanwhile, today’s primitive AIs tell us much
…
own. Only minds that comprehend cause and effect conjure up motives. So if goals, wants, values are features of human minds, then why predict that artificial superintelligences will become more than tools in the hands of those who program in those preferences? If the welter of prognostications about AI and machine learning
by Nick Bostrom and Milan M. Cirkovic · 2 Jul 2008
impact of Artificial Intelligence. (This applies to underestimating potential good impacts, as well as potential bad impacts.) Even the phrase 'transhuman AI' or 'artificial superintelligence' may still evoke images of book-smarts-in-a-box: an AI that is really good at cognitive tasks stereotypically associated with 'intelligence', like chess
by Calum Chace · 28 Jul 2015 · 144pp · 43,356 words
PART ONE: ANI (ARTIFICIAL NARROW INTELLIGENCE) CHAPTER 1 CHAPTER 2 CHAPTER 3 PART TWO: AGI (ARTIFICIAL GENERAL INTELLIGENCE) CHAPTER 4 CHAPTER 5 PART THREE: ASI (ARTIFICIAL SUPERINTELLIGENCE) CHAPTER 6 CHAPTER 7 PART FOUR: FAI (FRIENDLY ARTIFICIAL INTELLIGENCE) CHAPTER 8 CHAPTER 9 ACKNOWLEDGEMENTS ENDNOTES COMMENTS ON SURVIVING AI A sober and easy-to
…
to the swirls of controversy that surround discussion of what is likely to be the single most important event in human history – the emergence of artificial superintelligence. Throughout, Surviving AI remains clear and jargon-free, enabling newcomers to the subject to understand why many of today’s most prominent thinkers have felt
…
century, and if Moore’s Law continues for another decade or so then very dramatic developments are possible. PART THREE: ASI Artificial Superintelligence CHAPTER 6 WILL ARTIFICIAL GENERAL INTELLIGENCE LEAD TO SUPERINTELLIGENCE? Artificial superintelligence (ASI) is generally known simply as superintelligence. It does not need the prefix artificial since there is no natural predecessor
by Nick Bostrom · 3 Jun 2014 · 574pp · 164,509 words
an extrapolation base encompassing all humans. A further clarification: The formulation is not intended to necessarily exclude the possibility of post-transition property rights in artificial superintelligences or their constituent algorithms and data structures. The formulation is meant to be agnostic about what legal or political systems would best serve to organize
by Eliezer Yudkowsky · 11 Mar 2015 · 1,737pp · 491,616 words
you do. Only in my case, it’s people who know the amazingly simple utility function that is all you need to program into an artificial superintelligence and then everything will turn out fine. Some people, when they encounter the how-to-program-a-superintelligence problem, try to solve the problem immediately
by Paul Scharre · 23 Apr 2018 · 590pp · 152,595 words
-advanced AIs in a runaway intelligence explosion, a process sometimes simply called “AI FOOM.” Experts disagree widely about how quickly the transition from AGI to artificial superintelligence (sometimes called ASI) might occur, if at all. A “hard takeoff” scenario is one where AGI evolves to superintelligence within minutes or hours, rapidly leaving
…
artificial general intelligence AGM air-to-ground missile AI artificial intelligence AMRAAM Advanced Medium-Range Air-to-Air Missile ARPA Advanced Research Projects Agency ASI artificial superintelligence ASW anti-submarine warfare ATR automatic target recognition BDA battle damage assessment BWC Biological Weapons Convention CCW Convention on Certain Conventional Weapons C&D Command
…
; see also advanced artificial intelligence; artificial general intelligence “Artificial Intelligence, War, and Crisis Stability” (Horowitz), 302, 312 Artificial Intelligence for Humans, Volume 3 (Heaton), 132 artificial superintelligence (ASI), 233 Art of War, The (Sun Tzu), 229 Asaro, Peter, 265, 285, 287–90 Asimov, Isaac, 26–27, 134 Assad, Bashar al-, 7, 331
by Amy Webb · 5 Mar 2019 · 340pp · 97,723 words
2 The Insular World of AI’s Tribes 3 A Thousand Paper Cuts: AI’s Unintended Consequences Part II: Our Futures 4 From Here to Artificial Superintelligence: The Warning Signs 5 Thriving in the Third Age of Computing: The Optimistic Scenario 6 Learning to Live with Millions of Paper Cuts: The Pragmatic
…
optimistic to pragmatic and catastrophic, and they will reveal both opportunity and risk as we advance from artificial narrow intelligence to artificial general intelligence to artificial superintelligence. These scenarios are intense—they are the result of data-driven models, and they will give you a visceral glimpse at how AI might evolve
…
, you surrender your will. You give it to him in utter submission, in full renunciation.” —FEODOR DOSTOYEVSKY, THE BROTHERS KARAMAZOV CHAPTER FOUR FROM HERE TO ARTIFICIAL SUPERINTELLIGENCE: THE WARNING SIGNS The evolution of artificial intelligence, from robust systems capable of completing narrow tasks to general thinking machines, is now underway. At this
…
need for direct human involvement. Artificial intelligence is typically defined using three broad categories: artificial narrow or weak intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). The Big Nine are currently moving swiftly toward building and deploying AGI systems, which they hope will someday be able to reason, solve problems
…
to things like better medical diagnoses and new ways to solve tough engineering problems. Improvements to AGI should, eventually, bring us to the third category: artificial superintelligence. ASI systems range from being slightly more capable at performing human cognitive tasks than we are to AIs that are literally trillions of times generally
…
. Good, begins in the late 2060s. It’s becoming clear now that our AGIs are gaining profound levels of intelligence, speed, and power and that artificial superintelligence is a near-term possibility. For the past decade, the Big Nine and GAIA have been preparing for this event—and it has calculated that
…
://www.cnbc.com/2016/05/10/baidu-ceo-tells-staff-to-put-values-before-profit-after-cancer-death-scandal.html. CHAPTER 4: FROM HERE TO ARTIFICIAL SUPERINTELLIGENCE: THE WARNING SIGNS 1. I modeled the scenarios in Part II using research from a variety of sources, and their references are in the bibliography
…
), 53, 131, 143, 150, 158, 169; commercial applications, 53; sensory computation in optimistic scenario of future, 160 Artificial neural networks (ANNs), first, 32–33, 34 Artificial superintelligence (ASI), 143, 144, 147–148, 150, 177; China and catastrophic scenario of future, 229; China and pragmatic scenario of future, 206 Asimov, Isaac, 26, 236
by Martin Ford · 13 Sep 2021 · 288pp · 86,995 words
Russell and physicists Max Tegmark and Frank Wilczek co-authored an open letter published in the U.K.’s Independent declaring that the advent of artificial superintelligence “would be the biggest event in human history,” and that a computer with superhuman intellectual capability might be capable of “outsmarting financial markets, out-inventing
by Mark O'Connell · 28 Feb 2017 · 252pp · 79,452 words
, too, in Elon Musk’s and Bill Gates’s and Stephen Hawking’s increasingly vehement warnings about the prospect of our species’ annihilation by an artificial superintelligence, not to mention in Google’s instatement of Ray Kurzweil, the high priest of the Technological Singularity, as its director of engineering. I saw the
…
-clip-manufacturing facilities. The scenario was deliberately cartoonish, but as an example of the kind of ruthless logic we might be up against with an artificial superintelligence, its intent was entirely serious. “I wouldn’t describe myself these days as a transhumanist,” Nick told me one evening over dinner at an Indian
…
us. “I still think,” he said, “that within a few generations it will be possible to transform the substrate of our humanity. And I think artificial superintelligence will be the engine that drives that.” Like many transhumanists, Nick was fond of pointing out the vast disparity in processing power between human tissue
…
of the human cranium, where it is technically possible to build computer processors the size of skyscrapers. Such factors, he maintained, created the conditions for artificial superintelligence. And because of our tendency to conceive of intelligence within human parameters, we were likely to become complacent about the speed with which machine intelligence
…
human being is on this quantity of meat.” We were talking, Nate Soares and I, about the benefits that might come with the advent of artificial superintelligence. For Nate, the most immediate benefit would be the ability to run a human being—to run, specifically, himself—on something other than this quantity
…
begun to think of as magical rationalism. He spoke, now, of the great benefits that would come, all things being equal, with the advent of artificial superintelligence. By developing such a transformative technology, he said, we would essentially be delegating all future innovations—all scientific and technological progress—to the machine. These
…
claims were more or less standard among those in the tech world who believed that artificial superintelligence was a possibility. The problem-solving power of such a technology, properly harnessed, would lead to an enormous acceleration in the turnover of solutions and
…
were working toward now, for all its vast complexity and remoteness, could be achieved in the course of a long weekend by the kind of artificial superintelligence Nate was talking about.) It was easy to forget, Nate continued, that as we sat here talking we were in fact using nanotech protein computers
…
there was nothing special about carbon. And so the best-case scenario of the Singularitarians, the version of the future in which we merge with artificial superintelligence and become immortal machines, was, for me, no more appealing, in fact maybe even less appealing, than the worst-case scenario, in which
…
artificial superintelligence destroyed us all. And it was this latter scenario, the failure mode as opposed to the God mode, that I had come to learn about,
…
his colleagues—at MIRI, at the Future of Humanity Institute, at the Future of Life Institute—were working to prevent was the creation of an artificial superintelligence that viewed us, its creators, as raw material that could be reconfigured into some more useful form (not necessarily paper clips). And the way Nate
…
artificer, is the symbol and spirit of this understanding of ourselves, of our ambitions—the shadow of the waxwing slain, cast darkly across history, plummeting. Artificial superintelligence, I was repeatedly told, was a dangerous prospect precisely because of how unlike us it would be, how inhuman, how immune to anger and hatred
…
, whose production line was almost entirely roboticized, and whose CEO, Elon Musk—the same Elon Musk who was so publicly terrified by the prospect of artificial superintelligence—had recently announced the company’s plans to develop its own self-driving system within three to five years. Although I had not beheld him
…
mostly online community of biohackers, or “practical transhumanists.” These are people who don’t want to wait around for the Singularity to happen, or for artificial superintelligence to finally materialize and subsume the informational content of their human minds, their wetware. With the means currently at hand, they are doing what they
by Parmy Olson · 284pp · 96,087 words
of the universe. For all the entertainment value of games, Hassabis would eventually become gripped by a powerful desire to use them to create an artificial superintelligence that would help him unlock the secrets of human consciousness. That calling to understand the mysteries of the universe went beyond the aims of most
…
’re going to be companies building products, because AI is not really about research anymore.” Nick Bostrom’s story about the paper clip, where an artificial superintelligence destroys civilization as it converts all the world’s resources into the tiny metal widgets, might sound like science fiction, but in many ways, it
…
want to get there with a maniacal sense of urgency. Maniacal.” Musk believes that with brain implants, humans will be able to prevent a future artificial superintelligence from wiping us out, and so he wants Neuralink to perform surgeries on more than 22,000 people by 2030. But a more pressing issue
by Calum Chace · 17 Jul 2016 · 477pp · 75,408 words
types of economies among other things. Highly recommended.” Dr. Roman V. Yampolskiy, Professor of Computer Engineering and Computer Science, Director of Cybersecurity lab, Author of Artificial Superintelligence: a Futuristic Approach Unprecedented productivity gains and unlimited leisure—what could possibly go wrong? Everything, says Calum Chace, if we don’t evolve a social
by Kai-Fu Lee · 14 Sep 2018 · 307pp · 88,180 words
. Some predict that with the dawn of AGI, machines that can improve themselves will trigger runaway growth in computer intelligence. Often called “the singularity,” or artificial superintelligence, this future involves computers whose ability to understand and manipulate the world dwarfs our own, comparable to the intelligence gap between human beings and, say
…
to, ix–xi See also China; deep learning; economy and AI; four waves of AI; global AI story; human coexistence with AI; new world order artificial superintelligence. See superintelligence Association for the Advancement of Artificial Intelligence, 88–89 Austria, 159 automation in factories and farms, 20, 165–66, 167–68 Fink’s
by Kenneth Payne · 16 Jun 2021 · 339pp · 92,785 words
most sophisticated AI will reduce war to something predictable, even computable. There’ll be no lifting the ‘fog of war’ with even the most powerful artificial superintelligence, something we’ll explore further in later chapters. Still, the advent of warbots means that decision-making in war will no longer be entirely, or
by Michael Bhaskar · 2 Nov 2021
. It encompasses all of that and implies something more: a phase transition in the human mind itself. Perhaps we don't need to posit an artificial superintelligence or augmented minds because something like them is already here. The Great Convergence is beyond any country or organisation; it is truly a planetary moment
by James Barrat · 30 Sep 2013 · 294pp · 81,292 words
still improving. The scientists have passed a historic milestone! For the first time humankind is in the presence of an intelligence greater than its own. Artificial superintelligence, or ASI. Now what happens? AI theorists propose it is possible to determine what an AI’s fundamental drives will be. That’s because once
…
called AGI. Shortly after that, someone (or some thing) will create an AI that is smarter than humans, often called artificial superintelligence. Suddenly we may find a thousand or ten thousand artificial superintelligences—all hundreds or thousands of times smarter than humans—hard at work on the problem of how to make themselves better
…
at making artificial superintelligences. We may also find that machine generations or iterations take seconds to reach maturity, not eighteen years as we humans do. I. J. Good, an
…
it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute Artificial superintelligence does not yet exist, nor does artificial general intelligence, the kind that can learn like we do and will in many senses match and exceed
…
mortality, will be solved. Artificial intelligence is the star of the Singularity media spectacle, but nanotechnology plays an important supporting role. Many experts predict that artificial superintelligence will put nanotechnology on the fast track by finding solutions for seemingly intractable problems with nanotech’s development. Some think it would be better if
…
beings run into less advanced ones: Christopher Columbus versus the Taíno, Pizarro versus the Inca, Europeans versus Native Americans. Get ready for the next one. Artificial superintelligence versus you and me. * * * Perhaps technology thinkers have considered AI’s downside, but believe it’s too unlikely to worry about. Or they get it
…
is wrong for a lot of reasons. The assumption becomes even more dangerous after the AGI’s intelligence rockets past ours, and it becomes ASI—artificial superintelligence. So how do you create friendly AI? Or could you impose friendliness on advanced AIs after they’re already built? Yudkowsky has written a book
…
a problem with many versions of itself, super high-speed calculations, running 24/7, mimicking friendliness, playing dead, and more. We’ve proposed that an artificial superintelligence won’t be satisfied with remaining isolated; its drives and intelligence would thrust it into our world and put our existence at risk. But why
…
’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us. Consider an artificial superintelligence a thousand times more intelligent than the smartest human. As we noted in chapter 1, nuclear weapons are our own species’ most destructive invention. What
…
the hard kernel of the Busy Child scenario, the rapid recursive self-improvement that enables an AI to bootstrap itself from artificial general intelligence to artificial superintelligence. It’s commonly called the “intelligence explosion.” A self-aware, self-improving system will seek to better fulfill its goals, and minimize vulnerabilities, by improving
…
if this could be done then, at double the cost, the machine could exhibit ultraintelligence. So, for a few dollars more you can get ASI, artificial superintelligence, Good proposes. But then watch out for the civilization-wide ramifications of sharing the planet with smarter than human intelligence. In 1962, before he’d
…
on risks of, see risks of artificial intelligence Singularity and, see Singularity tight coupling in utility function of virtual environments for artificial neural networks (ANNs) artificial superintelligence (ASI) anthropomorphizing gradualist view of dealing with jump from AGI to; see also intelligence explosion morality of nanotechnology and runaway Artilect War, The (de Garis
…
) ASI, see artificial superintelligence Asilomar Guidelines ASIMO Asimov, Isaac: Three Laws of Robotics of Zeroth Law of Association for the Advancement of Artificial Intelligence (AAAI) asteroids Atkins, Brian and
by John Brockman · 19 Feb 2019 · 339pp · 94,769 words
you emerged from sleep this morning, a premonition of the much greater consciousness that would arrive once you opened your eyes and fully awoke. Perhaps artificial superintelligence will enable life to spread throughout the cosmos and flourish for billions or trillions of years, and perhaps this will be because of decisions we
…
in identifying the corporations and bureaus that he called “machines of flesh and blood” as the first intelligent machines. He anticipated the dangers of creating artificial superintelligences with goals not necessarily aligned with our own. What is now clear, whether or not it was apparent to Wiener, is that these organizational superintelligences
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
, 2007), held its first conference and organized the Journal of Artificial General Intelligence in 2008. At around the same time, concerns were raised that creating artificial superintelligence or ASI—intelligence that far surpasses human ability—might be a bad idea (Yudkowsky, 2008; Omohundro, 2008). Turing (1996) himself made the same point in
…
risks, 49–52, 1038–1047 safety, 1052–1056 societies, 53 strong, 1032, 1056, 1057 weak, 1032, 1056, 1057 artificial intelligence (AI), 1 artificial life, 161 artificial superintelligence (ASI), 51 Arulampalam, M. S., 517, 1086 Arulkumaran, K., 871, 1086 Arunachalam, R., 639, 1086 arXiv.org, 45, 839, 1069 Asada, M., 983, 1101 asbestos
by Ethan Mollick · 2 Apr 2024 · 189pp · 58,076 words
the stock market, makes some money, and begins the process of augmenting its intelligence further. Soon, it becomes more intelligent than a human, an ASI—artificial superintelligence. The moment an ASI is invented, humans become obsolete. We cannot hope to understand what it is thinking, how it operates, or what its goals
by Nate Silver · 12 Aug 2024 · 848pp · 227,015 words
strategic choices. AGI: Artificial general intelligence. The term lacks a clear definition but refers to at least broad human-level intelligence, sometimes as distinguished from artificial superintelligence (ASI), which surpasses that of humans. AI: See: artificial intelligence. AK: Ace-king, the best starting hand in Hold’em apart from a pocket pair
…
, 236, 263, 485 Archipelago, The, 22, 310, 478 arms race, 478 See also mutually assured destruction; nuclear existential risk art world, 329–30, 331n ASI (artificial superintelligence), 478 Asian Americans, 135–36, 513n See also race asymmetric odds, 248–49, 255, 259, 260–62, 276, 277 attack surfaces, 177, 187, 478 attention
by Tim O'Reilly · 9 Oct 2017 · 561pp · 157,589 words
, and Elon Musk postulate that once it exists, it will rapidly outstrip humanity, with unpredictable consequences. Bostrom calls this hypothetical next step in strong AI “artificial superintelligence.” Deep learning pioneers Demis Hassabis and Yann LeCun are skeptical. They believe we’re still a long way from artificial general intelligence. Andrew Ng, formerly
…
giant Baidu, compared worrying about hostile AI of this kind to worrying about overpopulation on Mars. Even if we never achieve artificial general intelligence or artificial superintelligence, though, I believe that there is a third form of AI, which I call hybrid artificial intelligence, in which much of the near-term risk
…
the future are in its thrall as much as any one of us. It is this hybrid artificial intelligence of today, not some fabled future artificial superintelligence, that we must bring under control. PART IV IT’S UP TO US The best way to predict the future is to invent it. —Alan
by Malcolm Harris · 14 Feb 2023 · 864pp · 272,918 words
faith is weak. I think this post, though a joke on most levels, captures an important truth about Roko’s Basilisk. There’s no emerging artificial superintelligence that will automatically arbitrate the thoughts and claims of all people. There is just capitalism, an impersonal system that acts through people toward the increasing
by Meredith Broussard · 19 Apr 2018 · 245pp · 83,272 words
) is built on the same foundation of code, data, binary, and electrical impulses. Understanding what is real and what is imaginary in AI is crucial. Artificial superintelligences, like those on the TV show Person of Interest or Star Trek, are imaginary. Yes, they’re fun to imagine, and it can inspire wonderful creativity
…
C. Clarke worked with Stanley Kubrick to develop 2001: A Space Odyssey, he turned to his friend Minsky for advice on how to imagine an artificial superintelligence on a spaceship that tries to save the world and ends up destroying the crew. Minsky delivered. Together, they created HAL 9000, a computer that
by Chip Walter · 7 Jan 2020 · 232pp · 72,483 words
who invented the XPRIZE, had conceived the idea for Singularity University. “Singularity” was Kurzweil’s term for explaining that by 2045, the rapid rise of artificial superintelligence would trigger runaway technological growth, resulting in changes in human civilization so radical they would be impossible to describe. Diamandis, therefore, suggested creating a university
by Jaron Lanier · 6 May 2013 · 510pp · 120,048 words
pass us in a great whoosh. In the blink of an eye we will become obsolete. We might then be instantly dead, because the new artificial superintelligence will need our molecules for a much higher purpose. Or maybe we’ll be kept as pets. Ray Kurzweil, who helped found the university, awaits
by Ronald J. Deibert · 14 Aug 2020
problems. Among the areas that pose the greatest risks for abuse of power, and even existential risks (when projected forward in time), are artificial intelligence, artificial superintelligence (a computer vastly more intelligent than any human), machine learning, quantum mechanics, and facial recognition — either separately or, more likely, in combination. Abuses and built
by Sergey Young · 23 Aug 2021 · 326pp · 88,968 words
to learn about these subjects, I recommend watching Wired UK’s video on quantum computing by Amit Katwala and Science Time’s YouTube documentary on artificial superintelligence.4 For now, my own, truly “dumbed down” version goes like this: In conventional computing, every piece of information is made up of ones and
…
Katwala, “Quantum computers will change the world (if they work),” Wired, last modified March 5, 2020, https://www.wired.co.uk/article/quantum-computing-explained; “Artificial Superintelligence Documentary - AGI,” Science Time, last modified October 12, 2019, https://www.youtube.com/watch?v=2h4tIiPNu-0. 5Frank Arute et al., “Quantum supremacy using a
by David J. Leinweber · 31 Dec 2008 · 402pp · 110,972 words
also won five Hugo awards, deals with the topic in much of his work, including his latest novel, Rainbow’s End. If we construct an artificial superintelligent entity, what will it think of us? † Since renamed Wall Street & Technology, and a useful resource for nerds on Wall Street (NOWS) at www.wallstreetandtech
by Ananyo Bhattacharya · 6 Oct 2021 · 476pp · 121,460 words
, could not continue’.2 Whether that would be in a negative or positive sense remains a matter of debate: thinkers have variously speculated that an artificial superintelligence might end up fulfilling all human desires, or cosseting us like pets, or eradicating us altogether. The cynical side of von Neumann’s personality, shaped
by Nicole Aschoff
Singularity. The Singularity is a hypothesis, shared by prominent scientists and futurists, contending that we’re walking down a technological path that ends with an artificial superintelligence so powerful it will someday have the power to reproduce itself and remake human civilization in unfathomable ways. Kurzweil, an optimistic “transhumanist,” thinks this future
by Vivek Wadhwa and Alex Salkever · 2 Apr 2017 · 181pp · 52,147 words
rules or guidance. Such broader reasoning ability is known as artificial general intelligence (A.G.I.), or hard A.I. One step beyond this is artificial superintelligence, the stuff out of science fiction that is still so far away—and crazy—that I don’t even want to think about it. This is
by Raghuram Rajan · 26 Feb 2019 · 596pp · 163,682 words
will be more extraordinary than anything we have seen. Maybe most of us will be unemployed in a decade, rendered redundant by robots and generalized artificial superintelligence. I doubt it—ever since the 1950s, experts have been predicting that generalized artificial intelligence, that is algorithms that can replace humans fully, is less
by Vaclav Smil · 23 Sep 2019
(1972), Moravec (1988), Coren (1998), and Kurzweil (2005). Many of these writings either imply or explicitly posit the arrival of singularity when the contributions of artificial superintelligence will rise to such a level that they will be transformed into an unprecedented runaway process. This implies not only artificial intelligence surpassing any human
by Bruce Schneier · 7 Feb 2023 · 306pp · 82,909 words
movies, the AI that can sense, think, and act in a very general and human way. If it’s smarter than humans, it’s called “artificial superintelligence.” Combine it with robotics and you have an android, one that may look more or less like a human. The movie robots that try to