artificial general intelligence

back to index

description: hypothetical class of AI able to perform any intellectual task a human can

99 results

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

Introduction: Aspects of Artificial General Intelligence, by Pei Wang and Ben Goertzel. This book contains materials that come out of the Artificial General Intelligence Research Institute (AGIRI) Workshop, held on May 20–21, 2006, in Washington, DC. The theme of the workshop was “Transitioning from Narrow AI to Artificial General Intelligence.” In this introductory chapter, we will clarify the notion of “Artificial General Intelligence”, briefly survey the past and present situation of the field, analyze and refute some common objections and doubts regarding this area of research, and discuss what we believe needs to be addressed by the field as a whole in the near future.

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially the proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI. Artificial General Intelligence What is artificial general intelligence? The AGIRI website lists several features, describing machines with human-level, and even superhuman, intelligence; that generalize their knowledge across different domains; that reflect on themselves; and that create fundamental innovations and insights. Even strong AI didn’t claim this much, or this general, an intelligence. Can there be such an artificial general intelligence? I think there can be, but that it can’t be done with a brain in a vat, with humans providing input and utilizing computational output.

This is the situation that led to the organization of the 2006 AGIRI (Artificial General Intelligence Research Institute) workshop, and to the decision to pull together a book from contributions by the speakers at the conference. The themes of the book and the contents of the chapters are discussed in the Introduction by myself and Pei Wang, so in this Preface I will restrict myself to a few brief and general comments. As it happens, this is the second edited volume concerned with Artificial General Intelligence (AGI) that I have co-edited. The first was entitled simply Artificial General Intelligence; it appeared in 2006 under the Springer imprint, but in fact most of the material in it was written in 2002 and 2003.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

The notion of the prediction of radical conceptual innovation is itself conceptually incoherent.6 In other words, to suggest that we are on a “path” to artificial general intelligence whose arrival can be predicted presupposes that there is no conceptual innovation standing in the way—a view that even AI scientists convinced of the coming of artificial general intelligence and who are willing to offer predictions, like Ray Kurzweil, would not assent to. We all know, at least, that for any putative artificial general intelligence system to arrive at an as yet unknown facility for understanding natural language, there must be an invention or discovery of a commonsense, generalizing component.

The idea that we can predict the arrival of AI typically sneaks in a premise, to varying degrees acknowledged, that successes on narrow AI systems like playing games will scale up to general intelligence, and so the predictive line from artificial intelligence to artificial general intelligence can be drawn with some confidence. This is a bad assumption, both for encouraging progress in the field toward artificial general intelligence, and for the logic of the argument for prediction. Predictions about scientific discoveries are perhaps best understood as indulgences of mythology; indeed, only in the realm of the mythical can certainty about the arrival of artificial general intelligence abide, untrammeled by Popper’s or MacIntyre’s or anyone else’s doubts. Mythology about AI is not all bad.

An AI system can implement a syllogism, for instance, and also a planning algorithm (rules of the form: {A, B, C, …}→G, where A, B, and C are actions to be taken and G is the desired goal). There have been no major breakthroughs toward artificial general intelligence using such methods, but even modern AI scientists like Stuart Russell continue to insist that symbolic logic will be an important component of any eventual artificial general intelligence system—for intelligence is, among other things, about reasoning and planning. Aristotle thus kicked off formal studies of inference thousands of years ago. A few decades ago, he also helped kick off work on AI.
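The rule form {A, B, C, …}→G described above can be made concrete with a small sketch. This is not code from any of the books indexed here; the rule base and action names are hypothetical, and the planner is a minimal backward-chainer: to achieve a goal, find a rule producing it, then recursively achieve each action on the rule's left-hand side.

```python
# Minimal sketch of planning rules of the form {A, B, C, ...} -> G:
# a set of actions on the left achieves the goal G on the right.
from typing import Dict, FrozenSet, List

# Hypothetical rule base (illustrative names, not from the source text).
RULES: Dict[FrozenSet[str], str] = {
    frozenset({"boil_water", "add_tea"}): "make_tea",
    frozenset({"make_tea", "pour_cup"}): "serve_tea",
}

def plan(goal: str, known_actions: FrozenSet[str]) -> List[str]:
    """Backward-chain from the goal, expanding sub-goals into primitive actions."""
    for actions, g in RULES.items():
        if g == goal:
            steps: List[str] = []
            for a in sorted(actions):  # sorted for a deterministic plan order
                if a in RULES.values():
                    # An action that is itself the goal of some rule is a sub-goal.
                    steps.extend(plan(a, known_actions))
                elif a in known_actions:
                    steps.append(a)
                else:
                    raise ValueError(f"no way to achieve {a}")
            return steps
    raise ValueError(f"no rule for {goal}")

print(plan("serve_tea", frozenset({"boil_water", "add_tea", "pour_cup"})))
# → ['add_tea', 'boil_water', 'pour_cup']
```

Real symbolic planners (STRIPS and its descendants) add preconditions and effects to each action, but the skeleton is the same: chain rules backward from the goal to primitive actions.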

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

There is no reason to suppose that humans have attained anywhere near the maximum possible level of intelligence, and it seems highly probable that we will eventually create machines that are more intelligent than us in all respects – assuming we don't blow ourselves up first. We don't yet know whether those machines will be conscious, let alone whether they will be more conscious than us – if that is even a meaningful question. Artificial General Intelligence (AGI) and Superintelligence As we noted in chapter 1, the term for a machine which equals or exceeds human intelligence in all respects is artificial general intelligence, or AGI. The day when the first such machine is built will be a momentous one, as the arrival of superintelligence will not be far behind it. The ensuing intelligence explosion is commonly referred to as the technological singularity.

The fact that Watson is an amalgam – some would say a kludge – of numerous different techniques does not in itself mark it out as different from, and perpetually inferior to, human intelligence. It is nowhere near an artificial general intelligence which is human-level or beyond in all respects. It is not conscious. It does not even know that it won the Jeopardy match. But it may prove to be an early step in the direction of artificial general intelligence. In January 2016, an AI system called AlphaGo developed by Google's DeepMind beat Fan Hui, the European champion of Go, a board game. This was hailed as a major step forward: the game of chess has more possible moves (35^80) than there are atoms in the visible universe, but Go has even more – 250^150.
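The magnitudes quoted above (35^80 and 250^150 are rough game-tree estimates, not exact counts) can be sanity-checked directly, since Python integers have arbitrary precision; note that converting numbers this large to floats would overflow, so we count decimal digits instead.

```python
# Compare the quoted game-tree estimates with the ~10^80 atoms
# in the visible universe, using exact integer arithmetic.
chess = 35 ** 80     # branching factor ~35, typical game ~80 plies
go = 250 ** 150      # branching factor ~250, typical game ~150 plies
atoms = 10 ** 80

# Digit counts give the order of magnitude without float overflow.
print(len(str(chess)) - 1)   # 123  (chess ~ 10^123)
print(len(str(go)) - 1)      # 359  (Go ~ 10^359)
print(go > chess > atoms)    # True
```

So both estimates dwarf the atom count, and Go's search space exceeds chess's by well over two hundred orders of magnitude, which is why Go resisted brute-force approaches far longer.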

When you reach a singularity, the normal rules break down, and the future becomes even harder to predict than usual. In recent years, the term has been applied to the impact of technology on human affairs.[iv] Superintelligence and the technological singularity The technological singularity is most commonly defined as what happens when the first artificial general intelligence (AGI) is created – a machine which can perform any intellectual task that an adult human can. It continues to improve its capabilities and becomes a superintelligence, much smarter than any human. It then introduces change to this planet on a scale and at a speed which un-augmented humans cannot comprehend.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here. ANI and AGI We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI), which are also known as weak AI and strong AI, and as ordinary AI and full AI. The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster.

As we saw in the introduction to this book, nobody suggested thirty years ago that we would have powerful AIs in our pockets in the form of telephones, even though now that it has happened it seems a natural and logical development. PART TWO: AGI Artificial General Intelligence CHAPTER 4 CAN WE BUILD AN AGI? 4.1 – Is it possible in principle? The three biggest questions about artificial general intelligence (AGI) are: Can we build one? If so, when? Will it be safe? The first of these questions is the closest to having an answer, and that answer is “probably, as long as we don’t go extinct first”. The reason for this is that we already have proof that it is possible for a general intelligence to be developed using very common materials.

TABLE OF CONTENTS
TITLE PAGE
INTRODUCTION: SURVIVING AI
PART ONE: ANI (ARTIFICIAL NARROW INTELLIGENCE) – CHAPTERS 1–3
PART TWO: AGI (ARTIFICIAL GENERAL INTELLIGENCE) – CHAPTERS 4–5
PART THREE: ASI (ARTIFICIAL SUPERINTELLIGENCE) – CHAPTERS 6–7
PART FOUR: FAI (FRIENDLY ARTIFICIAL INTELLIGENCE) – CHAPTERS 8–9
ACKNOWLEDGEMENTS
ENDNOTES
COMMENTS ON SURVIVING AI
A sober and easy-to-read review of the risks and opportunities that humanity will face from AI. – Jaan Tallinn, co-founder of Skype, the Centre for the Study of Existential Risk (CSER), and the Future of Life Institute (FLI)
Understanding AI – its promise and its dangers – is emerging as one of the great challenges of coming decades, and this is an invaluable guide to anyone who’s interested, confused, excited or scared.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

With the G-MAFIA, federal government, and GAIA taking active roles in the transition from artificial narrow intelligence to artificial general intelligence, we feel comfortably nudged. 2049: The Rolling Stones Are Dead (But They’re Making New Music) By the 2030s, researchers working within the G-MAFIA published an exciting paper, both because of what it revealed about AI and because of how the work was completed. Working from the same set of standards and supported with ample funds (and patience) by the federal government, researchers collaborated on advancing AI. As a result, the first system to reach artificial general intelligence was developed. The system had passed the Contributing Team Member Test.

We will also take a deep dive into the unique situations faced by America’s Big Nine members and by Baidu, Alibaba, and Tencent in China. In Part II, you’ll see detailed, plausible futures over the next 50 years as AI advances. The three scenarios you’ll read range from optimistic to pragmatic and catastrophic, and they will reveal both opportunity and risk as we advance from artificial narrow intelligence to artificial general intelligence to artificial superintelligence. These scenarios are intense—they are the result of data-driven models, and they will give you a visceral glimpse at how AI might evolve and how our lives will change as a result. In Part III, I will offer tactical and strategic solutions to all the problems identified in the scenarios along with a concrete plan to reboot the present.

They started making wild, bold predictions about AI, saying that within ten years—meaning by 1967—computers would
• beat all the top-ranked grandmasters to become the world’s chess champion,
• discover and prove an important new mathematical theorem, and
• write the kind of music that even the harshest critics would still value.26
Meantime, Minsky made predictions about a generally intelligent machine that could do much more than take dictation, play chess, or write music. He argued that within his lifetime, machines would achieve artificial general intelligence—that is, computers would be capable of complex thought, language expression, and making choices.27 The Dartmouth workshop researchers wrote papers and books. They sat for television, radio, newspaper, and magazine interviews. But the science was difficult to explain, and so oftentimes explanations were garbled and quotes were taken out of context.

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

Patti Domm, “False Rumor of Explosion at White House Causes Stocks to Briefly Plunge; AP Confirms Its Twitter Feed Was Hacked,” April 23, 2013, http://www.cnbc.com/id/100646197. 185 deep neural networks to understand text: Xiang Zhang and Yann LeCun, “Text Understanding from Scratch,” April 4, 2016, https://arxiv.org/pdf/1502.01710v5.pdf. 185 Associated Press Twitter account was hacked: Domm, “False Rumor of Explosion at White House Causes Stocks to Briefly Plunge; AP Confirms Its Twitter Feed Was Hacked.” 186 design deep neural networks that aren’t vulnerable: “Deep neural networks are easily fooled.” 186 “counterintuitive, weird” vulnerability: Jeff Clune, interview, September 28, 2016. 186 “[T]he sheer magnitude, millions or billions”: JASON, “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,” 28–29. 186 “the very nature of [deep neural networks]”: Ibid, 28. 186 “As deep learning gets even more powerful”: Jeff Clune, interview, September 28, 2016. 186 “super complicated and big and weird”: Ibid. 187 “sobering message . . . tragic extremely quickly”: Ibid. 187 “[I]t is not clear that the existing AI paradigm”: JASON, “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,” Ibid, 27. 188 “nonintuitive characteristics”: Szegedy et al., “Intriguing Properties of Neural Networks.” 188 we don’t really understand how it happens: For a readable explanation of this broader problem, see David Berreby, “Artificial Intelligence Is Already Weirdly Inhuman,” Nautilus, August 6, 2015, http://nautil.us/issue/27/dark-matter/artificial-intelligence-is-already-weirdly-inhuman. 12 Failing Deadly: The Risk of Autonomous Weapons 189 “I think that we’re being overly optimistic”: John Borrie, interview, April 12, 2016. 189 “If you’re going to turn these things loose”: John Hawley, interview, December 5, 2016. 189 “[E]ven with our improved knowledge”: Perrow, Normal Accidents, 354. 
191 “robo-cannon rampage”: Noah Shachtman, “Inside the Robo-Cannon Rampage (Updated),” WIRED, October 19, 2007, https://www.wired.com/2007/10/inside-the-robo/. 191 bad luck, not deliberate targeting: “ ‘Robotic Rampage’ Unlikely Reason for Deaths,” New Scientist, accessed June 12, 2017, https://www.newscientist.com/article/dn12812-robotic-rampage-unlikely-reason-for-deaths/. 191 35 mm rounds into a neighboring gun position: “Robot Cannon Kills 9, Wounds 14,” WIRED, accessed June 12, 2017, https://www.wired.com/2007/10/robot-cannon-ki/. 191 “The machine doesn’t know it’s making a mistake”: John Hawley, interview, December 5, 2016. 193 “incidents of mass lethality”: John Borrie, interview, April 12, 2016. 193 “If you put someone else”: John Hawley, interview, December 5, 2016. 194 “I don’t have a lot of good answers for that”: Peter Galluch, interview, July 15, 2016. 13 Bot vs.

A800 Mobile Autonomous Robotic System (MARS), 114 AAAI (Association for the Advancement of Artificial Intelligence), 243 AACUS (Autonomous Aerial Cargo/Utility System) helicopter, 17 ABM (Anti-Ballistic Missile) Treaty (1972), 301 accidents, see failures accountability gap, 258–63 acoustic homing seeker, 39 acoustic shot detection system, 113–14 active seekers, 41 active sensors, 85 adaptive malware, 226 advanced artificial intelligence; see also artificial general intelligence aligning human goals with, 238–41 arguments against regarding as threat, 241–44 building safety into, 238–41 dangers of, 232–33 drives for resource acquisition, 237–38 future of, 247–48 in literature and film, 233–36 militarized, 244–45 psychological dimensions, 233–36 vulnerability to hacking, 246–47 “advanced chess,” 321–22 Advanced Medium-Range Air-to-Air Missile (AMRAAM), 41, 43 Advanced Research Projects Agency (ARPA), 76–77 adversarial actors, 177 adversarial (fooling) images, 180–87, 181f, 183f, 185f, 253, 384n Aegis combat system, 162–67 achieving high reliability, 170–72 automation philosophy, 165–67 communications issues, 304 and fully autonomous systems, 194 human supervision, 193, 325–26 Patriot system vs., 165–66, 171–72 simulated threat test, 167–69 testing and training, 176, 177 and USS Vincennes incident, 169–70 Aegis Training and Readiness Center, 163 aerial bombing raids, 275–76, 278, 341–42 Afghanistan War (2001– ), 2–4 distinguishing soldiers from civilians, 253 drones in, 14, 25, 209 electromagnetic environment, 15 goatherder incident, 290–92 moral decisions in, 271 runaway gun incident, 191 AGI, see advanced artificial intelligence; artificial general intelligence AGM-88 high-speed antiradiation missile, 141 AI (artificial intelligence), 5–6, 86–87; see also advanced artificial intelligence; artificial general intelligence AI FOOM, 233 AIM-120 Advanced Medium-Range Air-to-Air Missile, 41 Air Force, U.S. 
cultural resistance to robotic weapons, 61 future of robotic aircraft, 23–25 Global Hawk drone, 17 nuclear weapons security lapse, 174 remotely piloted aircraft, 16 X-47 drone, 60–61 Air France Flight 447 crash, 158–59 Alexander, Keith, 216, 217 algorithms life-and-death decisions by, 287–90 for stock trading, see automated stock trading Ali Al Salem Air Base (Kuwait), 138–39 Alphabet, 125 AlphaGo, 81–82, 125–27, 150, 242 AlphaGo Zero, 127 AlphaZero, 410 al-Qaeda, 22, 253 “always/never” dilemma, 175 Amazon, 205 AMRAAM (Advanced Medium-Range Air-to-Air Missile), 41, 43 Anderson, Kenneth, 255, 269–70, 286, 295 anthropocentric bias, 236, 237, 241, 278 anthropomorphizing of machines, 278 Anti-Ballistic Missile (ABM) Treaty (1972), 301 antipersonnel autonomous weapons, 71, 355–56, 403n antipersonnel mines, 268, 342; see also land mines anti-radiation missiles, 139, 141, 144 anti-ship missiles, 62, 302 Anti-submarine warfare Continuous Trail Unmanned Vessel (ACTUV), 78–79 anti-vehicle mines, 342 Apollo 13 disaster, 153–54 appropriate human involvement, 347–48, 358 appropriate human judgment, 91, 347, 358 approval of autonomous weapons, see authorization of autonomous weapons Argo amphibious ground combat robot, 114 Arkhipov, Vasili, 311, 318 Arkin, Ron, 280–85, 295, 346 armed drones, see drones Arms and Influence (Schelling), 305, 341 arms control, 331–45 antipersonnel weapons, 355–56 ban of fully autonomous weapons, 352–55 debates over restriction/banning of autonomous weapons, 266–69 general principles on human judgment’s role in war, 357–59 inherent problems with, 284, 346–53 legal status of treaties, 340 limited vs. complete bans, 342–43 motivations for, 345 preemptive bans, 343–44 “rules of the road” for autonomous weapons, 356–57 successful/unsuccessful treaties, 332–44, 333t–339t types of weapons bans, 332f unnecessary suffering standards, 257–58 verification regimes, 344–45 arms race, 7–8, 117–19 Armstrong, Stuart, 238, 240–42 Army, U.S.

cultural resistance to robotic weapons, 61 Gray Eagle drone, 17 overcoming resistance to killing, 279 Patriot Vigilance Project, 171–72 Shadow drone, 209 ARPA (Advanced Research Projects Agency), 76–77 Article 36, 118 artificial general intelligence (AGI); See also advanced artificial intelligence and context, 238–39 defined, 231 destructive potential, 232–33, 244–45 ethical issues, 98–99 in literature and film, 233–36 narrow AI vs., 98–99, 231 timetable for creation of, 232, 247 as unattainable, 242 artificial intelligence (AI), 5–6, 86–87; see also advanced artificial intelligence; artificial general intelligence “Artificial Intelligence, War, and Crisis Stability” (Horowitz), 302, 312 Artificial Intelligence for Humans, Volume 3 (Heaton), 132 artificial superintelligence (ASI), 233 Art of War, The (Sun Tzu), 229 Asaro, Peter, 265, 285, 287–90 Asimov, Isaac, 26–27, 134 Assad, Bashar al-, 7, 331 Association for the Advancement of Artificial Intelligence (AAAI), 243 Atari, 124, 127, 247–48 Atlas ICBM, 307 atomic bombs, see nuclear weapons ATR (automatic target recognition), 76, 84–88 attack decision to, 269–70 defined, 269–70 human judgment and, 358 atypical events, 146, 176–78 Australia, 342–43 authorization of autonomous weapons, 89–101 DoD policy, 89–90 ethical questions, 90–93 and future of lethal autonomy, 96–99 information technology and revolution in warfare, 93–96 past as guide to future, 99–101 Auto-GCAS (automatic ground collision avoidance system), 28 automated machines, 31f, 32–33 automated (algorithmic) stock trading, 200–201, 203–4, 206, 210, 244, 387n automated systems, 31 automated weapons first “smart” weapons, 38–40 precision-guided munitions, 39–41 automatic machines, 31f automatic systems, 30–31, 110 automatic target recognition (ATR), 76, 84–88 automatic weapons, 37–38 Gatling gun as predecessor to, 35–36 machine guns, 37–38 runaway gun, 190–91 automation (generally) Aegis vs.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

The first two were polls taken at academic conferences: PT-AI, participants of the conference Philosophy and Theory of AI in Thessaloniki 2011 (respondents were asked in November 2012), with a response rate of 43 out of 88; and AGI, participants of the conferences Artificial General Intelligence and Impacts and Risks of Artificial General Intelligence, both in Oxford, December 2012 (response rate: 72/111). The EETN poll sampled the members of the Greek Association for Artificial Intelligence, a professional organization of published researchers in the field, in April 2013 (response rate: 26/250). The TOP100 poll elicited the opinions among the 100 top authors in artificial intelligence as measured by a citation index, in May 2013 (response rate: 29/100). 82.

(The different lines in the plot correspond to different data sets, which yield slightly different estimates.6) Great expectations Machines matching humans in general intelligence—that is, possessing common sense and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a wide range of natural and abstract domains—have been expected since the invention of computers in the 1940s. At that time, the advent of such machines was often placed some twenty years into the future.7 Since then, the expected arrival date has been receding at a rate of one year per year; so that today, futurists who concern themselves with the possibility of artificial general intelligence still often believe that intelligent machines are a couple of decades away.8 Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible to suppose that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred.

A more relevant distinction for our purposes is that between systems that have a narrow range of cognitive capability (whether they be called “AI” or not) and systems that have more generally applicable problem-solving capacities. Essentially all the systems currently in use are of the former type: narrow. However, many of them contain components that might also play a role in future artificial general intelligence or be of service in its development—components such as classifiers, search algorithms, planners, solvers, and representational frameworks. One high-stakes and extremely competitive environment in which AI systems operate today is the global financial market. Automated stock-trading systems are widely used by major investing houses.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

MARTIN FORD: You’ve noted the limitations in current narrow or specialized AI technology. Let’s talk about the prospects for AGI, which promises to someday solve these problems. Can you explain exactly what Artificial General Intelligence is? What does AGI really mean, and what are the main hurdles we need to overcome before we can achieve AGI? STUART J. RUSSELL: Artificial General Intelligence is a recently coined term, and it really is just a reminder of our real goals in AI—a general-purpose intelligence much like our own. In that sense, AGI is actually what we’ve always called artificial intelligence.

The McKinsey Global Institute is a leader in conducting research into this area, and this conversation includes many important insights into the nature of the unfolding workplace disruption. The second question I directed at everyone concerns the path toward human-level AI, or what is typically called Artificial General Intelligence (AGI). From the very beginning, AGI has been the holy grail of the field of artificial intelligence. I wanted to know what each person thought about the prospect for a true thinking machine, the hurdles that would need to be surmounted and the timeframe for when it might be achieved. Everyone had important insights, but I found three conversations to be especially interesting: Demis Hassabis discussed efforts underway at DeepMind, which is the largest and best funded initiative geared specifically toward AGI.

However, it is also one of the most difficult challenges facing the field. A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to human-level AI. ARTIFICIAL GENERAL INTELLIGENCE (AGI) refers to a true thinking machine. AGI is typically considered to be more or less synonymous with the terms HUMAN-LEVEL AI or STRONG AI. You’ve likely seen several examples of AGI—but they have all been in the realm of science fiction. HAL from 2001: A Space Odyssey, the Enterprise’s main computer (or Mr.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

My informal survey of about two hundred computer scientists at a recent AGI conference confirmed what I’d expected. The annual AGI Conferences, organized by Goertzel, are three-day meet-ups for people actively working on artificial general intelligence, or like me who are just deeply interested. They present papers, demo software, and compete for bragging rights. I attended one generously hosted by Google at their headquarters in Mountain View, California, often called the Googleplex. I asked the attendees when artificial general intelligence would be achieved, and gave them just four choices—by 2030, by 2050, by 2100, or not at all? The breakdown was this: 42 percent anticipated AGI would be achieved by 2030; 25 percent by 2050; 20 percent by 2100; 10 percent after 2100; and 2 percent never.

Andrew Rubin, Google’s Senior Vice President of Mobile: Fried, Ina, “Android Chief Says Your Phone Should Not Be Your Assistant,” All Things D, October 19, 2011, http://allthingsd.com/20111019/android-chief-says-your-phone-should-not-be-your-assistant/ (accessed November 13, 2011). It may be that we need a scientific breakthrough: Goertzel, Ben, “Editor’s Blog Report on the Fourth Conference on Artificial General Intelligence,” H+ Magazine, September 1, 2011, http://hplusmagazine.com/2011/09/01/report-on-the-fourth-conference-on-artificial-general-intelligence/ (accessed November 22, 2011). LIDA scores like a human: Biever, Celeste, “Bot shows signs of consciousness,” New Scientist, April 1, 2011, http://www.newscientist.com/article/mg21028063.400-bot-shows-signs-of-consciousness.html (accessed June 1, 2011).

Aboujaoude, Elias accidents AI and, see risks of artificial intelligence nuclear power plant Adaptive AI affinity analysis agent-based financial modeling “Age of Robots, The” (Moravec) Age of Spiritual Machines, The: When Computers Exceed Human Intelligence (Kurzweil) AGI, see artificial general intelligence AI, see artificial intelligence AI-Box Experiment airplane disasters Alexander, Hugh Alexander, Keith Allen, Paul Allen, Robbie Allen, Woody AM (Automatic Mathematician) Amazon Anissimov, Michael anthropomorphism apoptotic systems Apple iPad iPhone Siri Arecibo message Aristotle artificial general intelligence (AGI; human-level AI): body needed for definition of emerging from financial markets first-mover advantage in jump to ASI from; see also intelligence explosion by mind-uploading by reverse engineering human brain time and funds required to develop Turing test for artificial intelligence (AI): black box tools in definition of drives in, see drives as dual use technology emotional qualities in as entertainment examples of explosive, see intelligence explosion friendly, see Friendly AI funding for jump to AGI from Joy on risks of, see risks of artificial intelligence Singularity and, see Singularity tight coupling in utility function of virtual environments for artificial neural networks (ANNs) artificial superintelligence (ASI) anthropomorphizing gradualist view of dealing with jump from AGI to; see also intelligence explosion morality of nanotechnology and runaway Artilect War, The (de Garis) ASI, see artificial superintelligence Asilomar Guidelines ASIMO Asimov, Isaac: Three Laws of Robotics of Zeroth Law of Association for the Advancement of Artificial Intelligence (AAAI) asteroids Atkins, Brian and Sabine Automated Insights availability bias Banks, David L.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

The Difference between Narrow and Wide A lifetime of sci-fi movies and books have ingrained in us the expectation that there will be some Singularity-style ‘tipping point’ at which Artificial General Intelligence will take place. Devices will get gradually smarter and smarter until, somewhere in a secret research lab deep in Silicon Valley, a message pops up on Mark Zuckerberg or Sergey Brin’s computer monitor, saying that AGI has been achieved. As Ernest Hemingway once wrote about bankruptcy, Artificial General Intelligence will arrive ‘gradually, then suddenly’. This is the narrative played out in films like James Cameron’s seminal Terminator 2: Judgment Day.

‘Unfortunately, the chatbots of today can only resort to trickery to hopefully fool a human into thinking they are sentient,’ one recent entrant in the Loebner Prize told me. ‘And it is highly unlikely without a yet-undiscovered novel approach to simulating an AI that any chatbot technology employed today could ever fool an experienced chatbot creator into believing they possess [artificial] general intelligence.’ Turing wasn’t particularly concerned with the metaphysical question of whether a machine can actually think. In his famous 1950 essay, ‘Computing Machinery and Intelligence’, he described it as ‘too meaningless to deserve discussion’. Instead he was interested in getting machines to perform activities that would be considered intelligent if they were carried out by a human.

Manipulating human leaders could meanwhile refer to the handing-over of important tasks to the AI assistants that will come to run our lives, while the development of AI weapons has been a goal since virtually the field’s earliest days. What he and Musk were specifically pointing towards was something called Artificial General Intelligence, or AGI. So far, all of the applications of Artificial Intelligence described in this book have come under the broad umbrella heading of ‘Narrow AI’ or ‘Weak AI’. This has nothing to do with how robust the technology is. As we saw in the early chapters, today’s deep learning neural networks are orders of magnitude less brittle than the symbol-crunching Artificial Intelligence that made up Good Old-Fashioned AI.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

Acknowledgements
Overview

Part I: Could Computers Ever Think?

1. People Thinking About Computers
   1. The Question
   2. Vitalism
   3. Science vs. vitalism
   4. The vital mind
   5. Computers cannot think now
   6. Diminishing returns
   7. AI in the background
   8. Robots leave factories
   9. Intelligent tasks
   10. Artificial General Intelligence (AGI)
   11. Existence proof
   12. Simulating neurons, feathers
   13. Moore's law
   14. Definition of intelligence
   15. Turing Test
   16. Robotic vs cognitive intelligence
   17. Development of intelligence
   18. Four year old child
   19. Recursive self-improvement
   20. Busy Child
   21. AI foom
2. Computers Thinking About People

AI programs often surprise their developers with what they can (and cannot) do. Kasparov stated that Deep Blue had produced some very creative chess moves even though it used a relatively simple brute-force strategy. Certainly Deep Blue was a much better chess player than its creators.

Artificial General Intelligence (AGI)

It is certainly the case that computers are becoming ever more intelligent and capable of addressing a widening variety of difficult problems. This book argues that it is only a matter of time before they achieve general, human-level intelligence. This would mean that they could reason not only about the tasks at hand but also about the world in general, including their own thoughts.

At some point computers will have basic human-level intelligence for everyday tasks but will not yet be intelligent enough to program themselves by themselves. These machines will be very intelligent in some ways, yet quite limited in others. It is unclear how long this intermediate period will last; it could be months or many decades. Such machines are often referred to as being an Artificial General Intelligence, or AGI: ‘general’ meaning general-purpose, not restricted in the way that programs normally are. Artificial intelligence techniques such as genetic algorithms are already being used to help create artificial intelligence software, as is discussed in Part II. This process is likely to continue, with better tools producing better machines that produce better tools.
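The excerpt above mentions genetic algorithms as one technique already used to help build AI software. A minimal, self-contained sketch of the core idea, selection plus mutation improving candidate solutions over generations (an illustrative editorial example, not drawn from the book; the all-ones target string and all parameter values are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so runs are repeatable
TARGET_LEN = 20


def fitness(genome):
    """Score a candidate: here, simply the number of 1-bits."""
    return sum(genome)


def mutate(genome, rate=0.05):
    """Flip each bit independently with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]


def evolve(pop_size=30, generations=200):
    """Evolve random bit strings toward the all-ones target."""
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break  # perfect solution found
        parents = pop[: pop_size // 2]  # truncation selection: keep best half
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
print(fitness(best))  # best fitness found (20 = perfect)
```

Nothing in the program states how to reach the target; the combination of selection pressure and random variation discovers it, which is the sense in which such techniques "help create" software rather than being hand-written solutions.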

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

See Project Maven Allen Institute for Artificial Intelligence, 272–74 Alphabet, 186, 216, 301 AlphaGo in China, 214–17, 223–24 in Korea, 169–78, 198, 216 Altman, Sam, 161–65, 282–83, 287–88, 292–95, 298–99 ALVINN project, 43–44 Amazon contest for developing robots for warehouse picking, 278–79 facial recognition technology (Amazon Rekognition), 236–38 Android smartphones and speech recognition, 77–79 Angelova, Anelia, 136–37 ANNA microchip, 52–53 antitrust concerns, 255 Aravind Eye Hospital, 179–80, 184 artificial general intelligence (AGI), 100, 109–10, 143, 289–90, 295, 299–300, 309–10 artificial intelligence (AI). See also intelligence ability to remove flagged content, 253 AI winter, 34–35, 288 AlphaGo competition as a milestone event, 176–78, 198 artificial general intelligence (AGI), 100, 109–10, 143, 289–90, 295, 299–300, 309–10 the black-box problem, 184–85 British government funding of, 34–35 China’s plan to become the world leader in AI by 2030, 224–25 content moderation system, 231 Dartmouth Summer Research Conference (1956), 21–22 early predictions about, 288 Elon Musk’s concerns about, 152–55, 156, 158–60, 244, 245, 289 ethical AI team at Google, 237–38 “Fake News Challenge,” 256–57 Future of Life Institute, 157–60, 244, 291–92 games as the starting point for, 111–12 GANs (generative adversarial networks), 205–06, 259–60 government investment in, 224–25 importance of human judgment, 257–58 as an industry buzzword, 140–41 major contributors, 141–42, 307–08, 321–26 possibilities regarding, 9–11 practical applications of, 113–14 pushback against the media hype regarding, 270–71 robots using dreaming to generate pictures and spoken words, 200 Rosewood Hotel meeting, 160–63 the Singularity Summit, 107–09, 325–26 superintelligence, 105–06, 153, 156–60, 311 symbolic AI, 25–26 timeline of events, 317–20 tribes, distinct philosophical, 192 unpredictability of, 10 use of AI technology by bad actors, 243 AT&T, 52–53 Australian Centre for Robotic Vision, 278 
autonomous weapons, 240, 242, 244, 308 backpropagation ability to handle “exclusive-or” questions, 38–39 criticism of, 38 family tree identification, 42 Geoff Hinton’s work with, 41 Baidu auction for acquiring DNNresearch, 5–9, 11, 218–19 as competition for Facebook and Google, 132, 140 interest in neural networks and deep learning, 4–5, 9, 218–20 key players, 324 PaddlePaddle, 225 translation research, 149–50 Ballmer, Steve, 192–93 Baumgartner, Felix, 133–34 Bay Area Vision Meeting, 124–25 Bell Labs, 47, 52–53 Bengio, Yoshua, 57, 162, 198–200, 205–06, 238, 284, 305–08 BERT universal language model, 273–74 bias Black in AI, 233 of deep learning technology, 10–11 facial recognition systems, 231–32, 234–36, 308 of training data, 231–32 Billionaires’ Dinner, 154 Black in AI, 233 Bloomberg Businessweek, 132 Bloomberg News, 130, 237 the Boltzmann Machine, 28–30, 39–40, 41 Bostrom, Nick, 153, 155 Bourdev, Lubomir, 121, 124–26 Boyton, Ed, 287–88 the brain innate machinery, 269–70 interface between computers and, 291–92 mapping and simulating, 288 the neocortex’s biological algorithm, 82 understanding how the brain works, 31–32 using artificial intelligence to understand, 22–23 Breakout (game), 111–12, 113–14 Breakthrough Prize, 288 Brin, Sergey building a machine to win at Go, 170–71 and DeepMind, 301 at Google, 216 Project Maven meetings, 241 Brockett, Chris, 54–56 Brockman, Greg, 160–64 Bronx Science, 17, 21 Buolamwini, Joy, 234–38 Buxton, Bill, 190–91 Cambridge Analytica, 251–52 Canadian Institute for Advanced Research, 307 capture the flag, training a machine to play, 295–96 Carnegie Mellon University, 40–41, 43, 137, 195, 208 the Cat Paper, 88, 91 Chauffeur project, 137–38, 142 Chen, Peter, 283 China ability to develop self-driving vehicles, 226–27 development of deep learning research within, 220, 222 Google’s presence in, 215–17, 220–26 government censorship, 215–17 plan to become the world leader in AI by 2030, 224–25, 226–27 promotion of TensorFlow within, 
220–22, 225 use of facial recognition technology, 308 Wuzhen AlphaGo match, 214–17, 223–24 Clarifai, 230–32, 235, 239–40, 249–50, 325 Clarke, Edmund, 195 cloud computing, 221–22, 245, 298 computer chips.

He had already landed Hinton, Sutskever, and Krizhevsky from the University of Toronto. Now, in the last days of December 2013, he was flying to London in pursuit of DeepMind. Founded around the same time as Google Brain, DeepMind was a start-up dedicated to an outrageously lofty goal. It aimed to build what it called “artificial general intelligence”—AGI—technology that could do anything the human brain could do, only better. That endgame was still years, decades, or perhaps even centuries away, but the founders of this tiny company were confident it would one day be achieved, and like Andrew Ng and other optimistic researchers, they believed that many of the ideas brewing at labs like the one at the University of Toronto were a strong starting point.

Hassabis, Legg, and Suleyman would each stamp a unique point of view on a company that looked toward the horizons of artificial intelligence but also aimed to solve problems in the nearer term, while openly raising concerns over the dangers of this technology in both the present and the future. Their stated aim—contained in the first line of their business plan—was artificial general intelligence. But at the same time, they told anyone who would listen, including potential investors, that this research could be dangerous. They said they would never share their technology with the military, and in an echo of Legg’s thesis, they warned that superintelligence could become an existential threat.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

nuclear winter, 129 postcatastrophe recovery of, 132–134 slavery in agricultural civilisations, 47 suffering of farmed animals, 208–211 technological development feedback loop, 153 AI governance, 225 AI safety, 244 air pollution, 25, 25(fig.), 141, 227, 261 alcohol use and abuse, values and, 67, 78 alignment problem of AI, 87 AlphaFold 2, 81 AlphaGo, 80 AlphaZero, 80–81 al-Qaeda: bioweapons programme, 112–113 al-Zawahiri, Ayman, 112 Ambrosia start-up, 85 Animal Rights Militia, 240–241 animal welfare becoming vegetarian, 231–232 political activism, 72–73 the significance of values, 53 suffering of farmed animals, 208–211, 213 wellbeing of animals in the wild, 211–213 animals, evolution of, 56–57 anthrax, 109–110 anti-eutopia, 215–220 Apple iPhone, 198 Arab slavery, 47 “Are Ideas Getting Harder to Find?” 151 Armageddon (film), 106 Arrhenius, Svante, 42 artificial general intelligence (AGI) averting civilisational stagnation, 156 longterm importance of, 80–83 predicting the arrival of, 89–91 prioritising threats to improve on, 228 the pursuit of immortality, 83–86 reducing future uncertainty, 228–229 surpassing human abilities, 86–88 values lock-in, 92–95 artificial intelligence (AI) addressing neglected problems, 231 AI safety, 244 alignment problem, 87 artificial general intelligence, 80–83 defining, 80 future threats and benefits, 6 missing moments of plasticity, 43 prioritising future solutions, 228–229 uncertainty over the future, 224–226 value lock-in, 79 arts and literature preserving and projecting, 22–23 the value of non-wellbeing goods, 214–215 asteroids, collision with, 105–107, 113 Atari, 82–83 Atlantis, 12 Australia: effects of all-out nuclear war, 131 average view of wellbeing, 177–179, 179(fig.)

See value lock-in lock-in paradox, 101–102 Long Peace, 114 long reflection, 98–99 longtermism arguments for and against, 4–7, 257–261 concerns for future generations, 10–12 contingency of moral norms, 71–72 empowering future generations, 9 expedition into uncharted terrain, 6–7 longterm consequences of small actions, 173–175 perspective on civilisational stagnation, 159–163 population ethics, 168–171 the size of the future, 19 understanding the implications of, 229–230 values changes, 53–55 lottery winners, 203 Lustig, Richard, 203 lying, negative effects of, 241 Lyons, Oren, 11 Macaulay, Zachary, 69 MacFarquhar, Larissa, 168 machine learning artificial general intelligence development, 80–81 predicting AGI completion, 90–91 See also artificial general intelligence; artificial intelligence mammals evolution of, 4, 13(fig.) lifespan, 13, 13(fig.) megafauna, 29–30 Mao Zedong, 218–219 Marlowe, Frank, 206–207 Mars rovers, 189 mathematics Islamic Golden Age, 143 noncontingency, 32–33 Mauritania: abolition of slavery, 69–70 McKibben, Bill, 43 medicine: expected value theory in decisionmaking, 38 megafauna, 29–30 megatherium, 29 Mercy for Animals, 72–73 Metaculus forecasting platform, 113, 116 metaphors of humanity, history, and longtermism, 6–7 Middle Ages: history of civilisational stagnation, 157 migration.

When thinking about lock-in, the key technology is artificial intelligence.35 Writing gave ideas the power to influence society for thousands of years; artificial intelligence could give them influence that lasts millions. I’ll discuss when this might occur later; for now let’s focus on why advanced artificial intelligence would be of such great longterm importance.

Artificial General Intelligence

Artificial intelligence (AI) is a branch of computer science that aims to design machines that can mimic or replicate human intelligence. Because of the success of machine learning as a paradigm, we’ve made enormous progress in AI over the last ten years. Machine learning is a method of creating useful algorithms that does not require explicitly programming them; instead, it relies on learning from data, such as images, the results of computer games, or patterns of mouse clicks.
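The learning-from-data paradigm described in this excerpt can be illustrated with a toy sketch (an editorial example, not from the book): rather than hand-coding the rule y = 2x + 1, we recover it from example (x, y) pairs by ordinary least squares.

```python
def fit_line(points):
    """Ordinary least squares for a line y = a*x + b, from (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b


# Training data generated by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(data)
print(a, b)  # prints: 2.0 1.0  (the learned parameters match the hidden rule)
```

The program contains no statement of the rule itself; the parameters come entirely from the data, which, scaled up enormously in model size and data volume, is the paradigm behind the recent progress the excerpt describes.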

pages: 48 words: 12,437

Smarter Than Us: The Rise of Machine Intelligence
by Stuart Armstrong
Published 1 Feb 2014

See, for instance, Bill Hibbard, “Super-Intelligent Machines,” ACM SIGGRAPH Computer Graphics 35, no. 1 (2001): 13–15, http://www.siggraph.org/publications/newsletter/issues/v35/v35n1.pdf; Ben Goertzel and Joel Pitt, “Nine Ways to Bias Open-Source AGI Toward Friendliness,” Journal of Evolution and Technology 22, no. 1 (2012): 116–131, http://jetpress.org/v22/goertzel-pitt.htm. 4. Ben Goertzel, “CogPrime: An Integrative Architecture for Embodied Artificial General Intelligence,” OpenCog Foundation, October 2, 2012, accessed December 31, 2012, http://wiki.opencog.org/w/CogPrime_Overview. Chapter 10 A Summary There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can. Once computers achieve something at a human level, they typically achieve it at a much higher level soon thereafter.

Amnon Eden et al., The Frontiers Collection (Berlin: Springer, 2012); Stuart Armstrong, Anders Sandberg, and Nick Bostrom, “Thinking Inside the Box: Controlling and Using an Oracle AI,” Minds and Machines 22, no. 4 (2012): 299–324, doi:10.1007/s11023-012-9282-2. 2. Stephen M. Omohundro, “The Basic AI Drives,” in Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Frontiers in Artificial Intelligence and Applications 171 (Amsterdam: IOS, 2008), 483–492. 3. Roman V. Yampolskiy, “Leakproofing the Singularity: Artificial Intelligence Confinement Problem,” Journal of Consciousness Studies 2012, nos. 1–2 (2012): 194–214, http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00014. 4.

Journal of Consciousness Studies 17, nos. 9–10 (2010): 7–65. http://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020009/art00001. Eden, Amnon, Johnny Søraker, James H. Moor, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Berlin: Springer, 2012. Goertzel, Ben. “CogPrime: An Integrative Architecture for Embodied Artificial General Intelligence.” OpenCog Foundation. October 2, 2012. Accessed December 31, 2012. http://wiki.opencog.org/w/CogPrime_Overview. Goertzel, Ben, and Joel Pitt. “Nine Ways to Bias Open-Source AGI Toward Friendliness.” Journal of Evolution and Technology 22, no. 1 (2012): 116–131. http://jetpress.org/v22/goertzel-pitt.htm.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

OpenAI will be able to leverage massive computational resources hosted by Microsoft’s Azure service—something that is essential given its focus on building ever larger neural networks. Only cloud computing can deliver compute power on the scale that OpenAI requires for its research. Microsoft, in turn, will gain access to practical innovations that are spawned by OpenAI’s ongoing quest for artificial general intelligence. This will likely result in applications and capabilities that can be integrated into Azure’s cloud services. Perhaps just as importantly, the Azure brand will benefit from an association with one of the world’s leading AI research organizations and better position Microsoft to compete with Google, which enjoys a strong reputation for AI leadership, in part because of its ownership of DeepMind.14 This synergy extends far beyond this single example.

Many of the startup companies and university researchers working in this area believe, like Covariant, that a strategy founded on deep neural networks and reinforcement learning is the best way to fuel progress toward more dexterous robots. One notable exception is Vicarious, a small AI company based in the San Francisco Bay Area. Founded in 2010—two years before the 2012 ImageNet competition brought deep learning to the forefront—Vicarious’s long-term objective is to achieve human-level or artificial general intelligence. In other words, the company is, in a sense, competing directly with higher-profile and far better funded initiatives like those at DeepMind and OpenAI. We’ll delve into the paths being forged by those two companies and the general quest for human-level AI in Chapter 5. One of Vicarious’s major objectives has been to build applications that are more flexible—or as AI researchers would say, less “brittle”—than typical deep learning systems.

The manipulative work performed by doctors and nurses presents an extraordinary challenge for artificial intelligence because it requires extreme dexterity combined with problem solving and interpersonal skills, as well as the ability to handle an unpredictable environment where every situation, and every patient, is unique. As far as physical healthcare robots are concerned, the productivity scaling effect that we have seen in factories or warehouses likely lies in the distant future and will require not just vastly improved robotic dexterity, but quite possibly artificial general intelligence or something very close to it. Given the limitations of physical robots, it seems likely that any truly significant near-term AI impact on healthcare will emerge in activities that require no moving parts. In other words, artificial intelligence will make its mark in the processing of information and in purely intellectual endeavors, such as diagnosis or the development of treatment plans.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

He is the co-author (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Computer scientist Stuart Russell, along with Elon Musk, Stephen Hawking, Max Tegmark, and numerous others, has insisted that attention be paid to the potential dangers in creating an intelligence on the superhuman (or even the human) level—an AGI, or artificial general intelligence, whose programmed purposes may not necessarily align with our own. His early work was on understanding the notion of “bounded optimality” as a formal definition of intelligence that you can work on. He developed the technique of rational metareasoning, “which is, roughly speaking, that you do the computations that you expect to improve the quality of your ultimate decision as quickly as possible.”

Chapter 8 LET’S ASPIRE TO MORE THAN MAKING OURSELVES OBSOLETE MAX TEGMARK Max Tegmark is an MIT physicist and AI researcher, president of the Future of Life Institute, scientific director of the Foundational Questions Institute, and the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence. I was introduced to Max Tegmark some years ago by his MIT colleague Alan Guth, the father of inflation theory. A distinguished theoretical physicist and cosmologist himself, Max’s principal concern nowadays is the looming existential risk posed by the creation of an AGI (artificial general intelligence—that is, one that matches human intelligence). Four years ago, Max co-founded, with Jaan Tallinn and others, the Future of Life Institute (FLI), which bills itself as “an outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity.” While on a book tour in London, he was in the midst of planning for FLI, and he admits to being driven to tears in a tube station after a trip to the London Science Museum, with its exhibitions spanning the gamut of humanity’s technological achievements.

This suggests that we’ve seen just the tip of the intelligence iceberg; there’s an amazing potential to unlock the full intelligence latent in nature and use it to help humanity flourish—or flounder. Others, including some of the authors in this volume, dismiss the building of an AGI (artificial general intelligence—an entity able to accomplish any cognitive task at least as well as humans) not because they consider it physically impossible but because they deem it too difficult for humans to pull off in less than a century. Among professional AI researchers, both types of dismissal have become minority views because of recent breakthroughs.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

You can use your device’s search function to locate particular terms in the text. 0–1 loss function, 354n47 3CosAdd algorithm, 316, 397n13 Abbeel, Pieter, 257, 258–59, 267–68, 297 Ackley, Dave, 171–72 actor-critic architecture, 138 actualism vs. possibilism, 234–40 effective altruism and, 237–38 imitation and, 235, 239–40, 379n71 Professor Procrastinate problem, 236–37, 379n61 actuarial models, 92–93 addiction, 135, 153, 205–08, 374n65 See also drug use AdSense, 343n72 adversarial examples, 279–80, 387n8 affine transformations, 383n16 African Americans. See racial bias Against Prediction (Harcourt), 78–79 age bias, 32, 396n7 AGI. See artificial general intelligence Agüera y Arcas, Blaise, 247 AI Now Institute, 396n9 AI safety artificial general intelligence delay risks and, 310 corrigibility, 295–302, 392–93n51 field growth, 12, 249–50, 263 gridworlds for, 292–93, 294, 295, 390n29 human-machine cooperation and, 268–69 irreversibility and, 291, 293 progress in, 313–14 reward optimization and, 368n56 uncertainty and, 291–92 See also value alignment AIXI, 206–07, 263 Alciné, Jacky, 25, 29, 50 ALE.

See child development inference, 251–53, 269, 323–24, 385n39, 398nn29–30 See also inverse reinforcement learning information theory, 34–35, 188, 197–98, 260–61 Innocent IX (Pope), 303 Institute for Ophthalmic Research, 287 intelligence artificial general intelligence and, 209 reinforcement learning and, 144, 149, 150–51 See also artificial general intelligence interest. See curiosity interface design, 269 interference, 292 interpretability, 113–17 See also transparency interventions medical predictive models and, 84, 86, 352n12 risk-assessment models and, 80–81, 317–18, 351nn87, 90 intrinsic motivation addiction and, 205–8, 374n65 boredom and, 188, 201, 202, 203–04 knowledge-seeking agents, 206–07, 209–10, 374n73 novelty and, 189–94, 207, 370–71nn29–30, 32, 35 reinforcement learning and, 186–89, 370n12 sole dependence on, 200–03, 373n58 surprise and, 195–200, 207–08, 372nn49–50, 373nn53–54, 58 inverse reinforcement learning (IRL), 253–68 aspiration and, 386–87n55 assumptions on, 324 cooperative (CIRL), 267–68, 385nn40, 43–44 demonstration learning for, 256–61, 323–24, 383nn22–23, 398n30 feedback learning and, 262, 263–66, 384–85n37 gait and, 253–55 as ill-posed problem, 255–56 inference and, 251–53, 385n39 maximum-entropy, 260–61 inverse reward design (IRD), 301–02 irreversibility, 290–91, 292, 293, 320, 391n39, 397n22 Irving, Geoffrey, 344n86 Irwin, Robert, 326 Isaac, William, 75–77, 349n76 Jackson, Shirley Ann, 9 Jaeger, Robert, 184 Jain, Anil, 31 James, William, 121, 122, 124 Jefferson, Geoffrey, 329 Jefferson, Thomas, 278 Jim Crow laws, 344n83 Jobs, Steve, 98 Johns Hopkins University, 196–97 Jorgeson, Kevin, 220–21 Jurafsky, Dan, 46 Kabat-Zinn, Jon, 321 Kaelbling, Leslie Pack, 266, 371n30 Kage, Earl, 28 Kahn, Gregory, 288 Kalai, Adam, 6–7, 38, 41–42, 48, 316 Kálmán, Rudolf Emil, 383n15 Kant, Immanuel, 37 Kasparov, Garry, 205, 235, 242 Kaufman, Dan, 87–88 Kellogg, Donald, 214–15, 375n7 Kellogg, Luella, 214–15, 375n7 Kellogg, Winthrop, 214–15, 375n7 Kelsey, 
Frances Oldham, 315 Kerr, Steven, 163, 164, 168 Kim, Been, 112–17 kinesthetic teaching, 261 Kleinberg, Jon, 67, 69, 70, 73–74 Klopf, Harry, 127, 129, 130, 133, 138, 150 knowledge-seeking agents, 206–07, 209–10, 374n73 Knuth, Donald, 311, 395n2 Ko, Justin, 104, 356n55 Kodak, 28–29 Krakovna, Victoria, 292–93, 295 Krizhevsky, Alex, 21, 23–25, 285, 339n20 l0-norm, 354n47 Labeled Faces in the Wild (LFW), 31–32, 340n44 Lab for Analytical Sciences, North Carolina State University, 88 labor recruiting, 22 Landecker, Will, 103–04, 355n54 language models.

Contemporary state-of-the-art reinforcement-learning systems really are general—at least in the domain of board and video games—in a way that Deep Blue was not. DQN could play dozens of Atari games with equal felicity. AlphaZero is just as adept at chess as it is at shogi and Go. What’s more, artificial general intelligence (AGI) of the kind that can learn to operate fluidly in the real world may indeed require the sorts of intrinsic-motivation architectures that can make it “bored” of a game it’s played too much. At the other side of the spectrum from boredom is addiction—not a disengagement but its dark reverse, a pathological degree of repetition or perseverance.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

But remember: it isn’t real! 11. http://tinyurl.com/y7nbo58p. 12. Unfortunately, the terminology in the literature is imprecise and inconsistent. Most people seem to use ‘artificial general intelligence’ to refer to the goal of producing general-purpose human-level intelligence in machines, without being concerned with philosophical questions such as whether they are self-aware. In this sense, artificial general intelligence is roughly the equivalent of Searle’s weak AI. However, just to confuse things, sometimes the term is used to mean something much more like Searle’s strong AI. In this book, I use it to mean something like weak AI, and I will just call it ‘General AI’. 13. http://tinyurl.com/y76xdfd9

For this reason, although strong AI is an important and fascinating part of the AI story, it is largely irrelevant to contemporary AI research. Go to a contemporary AI conference, and you will hear almost nothing about it – except possibly late at night, in the bar. A lesser goal is to build machines that have general-purpose human-level intelligence. Nowadays, this is usually referred to as Artificial General Intelligence (AGI) or just General AI. AGI roughly equates to having a computer that has the full range of intellectual capabilities that a person has – this would include the ability to converse in natural language (cf. the Turing test), solve problems, reason, perceive its environment and so on, at or above the same level as a typical person.

agent-based interface The idea of having a computer interface mediated by an AI-powered software agent. The software agent works with us on a task, as an active assistant, rather than passively waiting to be told what to do, as is the case with regular computer applications. AGI see Artificial General Intelligence AI winter The period immediately following the publication of the Lighthill Report in the early 1970s, which was extremely critical of AI. Characterized by funding cuts to AI research, and considerable scepticism about the field as a whole. Followed by the era of knowledge-based AI. AlexNet A breakthrough image recognition system, which in 2012 demonstrated dramatic improvements in image recognition.

Succeeding With AI: How to Make AI Work for Your Business
by Veljko Krunic
Published 29 Mar 2020

Available from: https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html June Life, Inc. The do-it-all oven. [Cited 2019 Jul 15.] Available from: https://juneoven.com/ Hubbard DW. How to measure anything: Finding the value of intangibles in business. 2nd ed. Hoboken, NJ: Wiley; 2010. Wikimedia Foundation. Artificial general intelligence. Wikipedia. [Cited 2018 Jun 13.] Available from: https://en.wikipedia.org/w/index.php?title=Artificial_general_intelligence Shani G, Gunawardana A. Evaluating recommendation systems. In: Ricci F, Rokach L, Shapira B, Kantor PB, editors. Recommender systems handbook. New York: Springer; 2011. p. 257–297. Konstan JA, McNee SM, Ziegler, Torres R, Kapoor N, Riedl JT.

Imagine that you’re a CEO

Suppose you’re running a company that’s making $30 billion a year, and you’re in a business that’s associated with AI. Let’s go a step further and assume that there’s a 1% chance that someone in the next 10 years might invent something approaching a strong, human-level AI—so-called Artificial General Intelligence (AGI) [76]. If the search for AGI fails, there may still be an autonomous vehicle [38] as the consolation prize. Finally, you know that your competitors are investing heavily into AI. Will you invest substantial money into AI and hire accomplished researchers to help you advance the frontiers of AI knowledge?

AI will make actuarial mistakes that an average human, uninformed about AI, will see as malicious. Juries, whether in court or in the court of public opinion, are made up of humans. WARNING There’s no way to know if AI will ever develop common sense. It may not for quite a while; maybe not even until we get strong AI/Artificial General Intelligence [76]. Accounting for AI’s actuarial view is a part of your problem domain and part of why understanding your domain is crucial. It’s difficult to account for the differences between the actuarial view AI takes and human social expectations. Accounting for those differences is not an engineering problem and something that you should pass on to the engineering team to solve.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

Some primates have brains similar in size to or even larger than human brains, but they do not exhibit anything approaching human acumen. Likely, development will yield AI “savants”—programs capable of dramatically exceeding human performance in specific areas, such as advanced scientific fields.

THE DREAM OF ARTIFICIAL GENERAL INTELLIGENCE

Some developers are pushing the frontiers of machine-learning techniques to create what has been dubbed artificial general intelligence (AGI). Like AI, AGI has no precise definition. However, it is generally understood to mean AI capable of completing any intellectual task humans are capable of—in contrast to today’s “narrow” AI, which is developed to complete a specific task.

This notion also communicated the sense of possibility engendered by disrupting the established monopoly on information, which was largely in the hands of the church. Now the partial end of the postulated superiority of human reason, together with the proliferation of machines that can match or surpass human intelligence, promises transformations potentially more profound than even those of the Enlightenment. Even if advances in AI do not produce artificial general intelligence (AGI)—that is, software capable of human-level performance of any intellectual task and capable of relating tasks and concepts to others across disciplines—the advent of AI will alter humanity’s concept of reality and therefore of itself. We are progressing toward great achievements, but those achievements should prompt philosophical reflection.

In a world where an intelligence beyond one’s comprehension or control draws conclusions that are useful but alien, is it foolish to defer to its judgments? Spurred by this logic, a re-enchantment of the world may ensue, in which AIs are relied upon for oracular pronouncements to which some humans defer without question. Especially in the case of AGI (artificial general intelligence), individuals may perceive godlike intelligence—a superhuman way of knowing the world and intuiting its structures and possibilities. But deference would erode the scope and scale of human reason and thus would likely elicit backlash. Just as some opt out of social media, limit screen time for children, and reject genetically modified foods, so, too, will some attempt to opt out of the “AI world” or limit their exposure to AI systems in order to preserve space for their reason.

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Shapiro, Lee Silver, Gregory Stock, Natasha Vita-More, Roy Walford, and Michael West. See http://www.extropy.org/summitkeynotes.htm. Statement for Extropy Institute Vital Progress Summit February 18, 2004. Index accelerating change adaptability aesthetics ageless AGI, see artificial general intelligence aging alchemy alterity anti-aging Aristotle Armstrong, Rachel artifact artificial general intelligence artificial intelligence artificial life Ascott, Roy atheism atom atomic augmentation authoritarian autonomous self (agent) autonomy avatar Bacon, Francis Bailey, Ronald Bainbridge, William Beloff, Laura Berger, Ted Benford, Gregory Beyond Therapy: Biotechnology and the Pursuit of Happiness bias bioart bioconservative biocultural capital bioethics biofeedback biopolitics biotechnology Blackford, Russell Blue Brain body alternative body biological body biopolitic computer interaction and body cyborg body modification morphological freedom posthuman body prosthetic body regenerated simulated transformative transhuman body wearable, see Hybronaut Bostrom, Nick brain–computer interface brain–machine interface (BMI) brain preservation Brin, David Broderick, Damien Caplan, Arthur Chalmers, David Chislenko, Alexander “Sasha,” Church, George Clark, Andy Clarke, Arthur C.

Nanorobotics Nanorobotics Revolution by the 2020s Conclusions 7 Life Expansion Media Living Matter Degeneration/Regeneration Transmutation Dialectics of Desirability and Viability Cybernetics Human-machine Interfaces and the Prosthetic Body Life Expansion 8 The Hybronaut Affair Techno-Organic Environment The Umwelt Bubble Network and the Hybronaut The Appendix-tail Conclusion 9 Transavatars Avatars and Simulation Avatar Censuses Secondary and Posthumous Avatars Conclusion 10 Alternative Biologies Biology as Technology The Rise of Machines Complexity The Science of Complexity Synthetic Biology – Complex Embodied Technology Top-Down Synthetic Biology Bottom-Up Synthetic Biology Protocells Artificial Biology From Proposition to Reality Future Venice Artificial Biology and Human Enhancement Part III Human Enhancement: The Cognitive Sphere 11 Re-Inventing Ourselves I. Introduction: Where the Rubber Meets the Road II. What’s in an Interface? III. New Systemic Wholes IV. Incorporation Versus Use V. Extended Cognition VI. Profound Embodiment VII. Enhancement or Subjugation? VIII. Conclusions 12 Artificial General Intelligence and the Future of Humanity The Top Priority for Mankind AGI and the Transformation of Individual and Collective Experience AGI and the Global Brain What is a Mind that We Might Build One? Why So Little Work on AGI? Why the “AGI Sputnik” Will Change Things Dramatically and Launch a New Phase of the Intelligence Explosion The Risks and Rewards of Advanced AGI 13 Intelligent Information Filters and Enhanced Reality Preface Text Translation and Its Consequences Enhanced Multimedia Structure of Enhanced Reality Historical Observations Truth vs.

Ben Goertzel, PhD, is an AGI Researcher, Novamente LLC, Chief Scientist, Aidyia Holdings, and Vice Chair, Humanity+. He authored The Hidden Pattern: A Patternist Philosophy of Mind (Brown Walker Press, 2006); A Cosmist Manifesto: Practical Philosophy for the Posthuman Age (Humanity + Press, 2010); and co-edited with Cassio Pennachin Artificial General Intelligence (Springer, 2007). Robin Hanson, PhD, is Associate Professor of Economics, George Mason University. He authored “Meet the New Conflict, Same as the Old Conflict” (Journal of Consciousness Studies 19, 2012); “Enhancing our Truth Orientation” (Human Enhancement, Oxford University Press, 2009); and “Insider Trading and Prediction Markets” (Journal of Law, Economics, and Policy 4, 2008).

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

The Fourth Age: Robots and AI 5. Three Big Questions PART TWO: NARROW AI AND ROBOTS THE STORY OF JOHN HENRY 6. Narrow AI 7. Robots 8. Technical Challenges 9. Will Robots Take All Our Jobs? 10. Are There Robot-Proof Jobs? 11. The Big Questions 12. The Use of Robots in War PART THREE: ARTIFICIAL GENERAL INTELLIGENCE THE STORY OF THE SORCERER’S APPRENTICE 13. The Human Brain 14. AGI 15. Should We Build an AGI? PART FOUR: COMPUTER CONSCIOUSNESS THE STORY OF JOHN FRUM 16. Sentience 17. Free Will 18. Consciousness 19. Can Computers Become Conscious? 20. Can Computers Be Implanted in Human Brains?

Just six years later, in 2010, you could buy that same 125,000 transistors that cost a million dollars in 1960 for the same price as you would pay for a single grain of rice. Technology is relentless: It gets better and cheaper, never stopping. And it is on this fact that many computer scientists base their claims about the future capabilities of computers, such as artificial general intelligence and machine consciousness, the topics we will discuss for the rest of the book. Just how deeply have we embedded computers into the fabric of our lives? No one knows how many billions of computers are in operation around the world. It is believed that computers use roughly 10 percent of all of the electricity produced.
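The price collapse described above is the compound effect of repeated halvings. A minimal sketch (assuming the conventional Moore's-law doubling period of about two years, a simplification and not a figure from the book) shows how a million-dollar part shrinks toward pocket change over fifty years:

```python
# Toy model of Moore's-law cost decline: the price of a fixed transistor
# count halves once per assumed doubling period (~2 years, a rough figure).
def cost_after(initial_cost: float, years: float, doubling_years: float = 2.0) -> float:
    """Cost of the same transistor count after `years` of exponential decline."""
    halvings = years / doubling_years
    return initial_cost / (2 ** halvings)

# 125,000 transistors: $1,000,000 in 1960; implied price fifty years later.
price_2010 = cost_after(1_000_000, years=50)
print(f"${price_2010:.2f}")  # a few cents under this toy model
```

Even this conservative two-year assumption cuts the price by a factor of 2^25, roughly 33 million; the grain-of-rice comparison in the passage implies the real decline was steeper still.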

The kind of AI we have today is narrow AI, also known as weak AI. It is the only kind of AI we know how to build, and it is incredibly useful. Narrow AI is the ability for a computer to solve a specific kind of problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). Although the terms are interchangeable, I will use AGI from this point forward to refer to an artificial intelligence as smart and versatile as you or me. A Roomba vacuum cleaner, Siri, and a self-driving car are powered by narrow AI. A hypothetical robot that can unload the dishwasher would be powered by narrow AI.

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

‘We are engaged in a race, gentlemen. A race for the survival of our species. Humanity is sleepwalking towards an apocalypse.’ Matt and Leo were listening attentively, but Ivan held up his hand anyway, as if to forestall interruptions. ‘The great majority of our fellow human beings have no clue that the first artificial general intelligence – human-level AI, a conscious machine – will almost certainly be created in the first half of this century. Of the few who do realise where the technology is heading, most are Californian dreamers who think nothing can go wrong: they love technology and they love computers, and they cannot conceive that an intelligent computer will not be their friend.’

Their resources are formidable. We’re supervised by the Strategic Technology Office, and I have a high level of clearance. The organisation I run was no mean outfit before I teamed up with Norman and his pals, so I like to think that if the US Army does turn out to be the first institution to build an artificial general intelligence, there will be a well-informed and well-connected civilian organisation standing shoulder-to-shoulder with them and making sure they don’t go off in all sorts of unhealthy directions. ‘Norman and his colleagues have been incredibly helpful. Not only with money, but with contacts, technologies, advice, and of course intelligence.

‘If the idea is effectively sprung on people just before it becomes a reality, the panic you are worried about could be enormously damaging.’ Leo was nodding as Matt spoke. ‘If we withhold some of the story and then it gets out, people will be suspicious about what else is being hidden. If it leaks out that the US Army is close to creating the first artificial general intelligence, and has been less than truthful about it, a lot of people will get very concerned. But to be honest I’m more concerned about the more immediate problems. For instance, is it realistic to insist that Matt never speaks to anyone outside this room about his experience – not now and not for the rest of his life?

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

See artificial intelligence (AI) “AI winters,” 231 Alaska, annual dividend, 268 algorithms acceleration in development of, 71 automated trading, 56, 113–115 increasing efficiency of, 64 machine learning, 89, 93, 100–101, 107–115, 130–131 threat to jobs, xv, 85–86 alien invasion parable, 194–196, 240 “All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines” (Carr), 254 all-payer ceiling, 168–169 all-payer rates, 167–169 Amazon.com, 16–17, 76, 89 artificial intelligence and, 231 cloud computing and, 104–105, 107 delivery model, 190, 190n “Mechanical Turk” service, 125n AMD (Advanced Micro Devices), 70n American Airlines, 179 American Hospital Association, 168 American Motors, 76 Andreesen, Marc, 107 Android, 6, 21, 79, 121 Apple, Inc., 17, 20, 51, 92, 106–107, 279 Apple Watch, 160 apps, difficulty in monetizing, 79 Arai, Noriko, 127–128 Aramco, 68 Ariely, Dan, 47n Arrow, Kenneth, 162, 169 art, machines creating, 111–113 Artificial General Intelligence (AGI), 231–233 dark side of, 238–241 the Singularity and, 233–238 artificial intelligence (AI), xiv arms race and, 232, 239–240 in medicine, 147–153 narrow, 229–230 offshoring and, 118–119 warnings concerning dangers of, 229 See also Artificial General Intelligence (AGI); automation; information technology Artificial Intelligence Laboratory (Stanford University), 6 artificial neural networks, 90–92. 
See also deep learning The Atlantic (magazine), 71, 237, 254, 273 AT&T, 135, 159, 166 Audi, 184 Australian agriculture, x–xi, 24–25 Australian Centre for Field Robotics (ACFR), 24–25 AutoDesk, 234 automated invention machines, 110 automated trading algorithms, 56, 113–115 automation alien invasion parable, 194–196, 240 anti-automation view, 253–257 cars and (see autonomous cars) effect on Chinese manufacturing, 3, 10–11, 225–226 effect on prices, 215–216 health care jobs and, 172–173 information technology and, 52 job-market polarization and, 50–51 low-wage jobs and, 26–27 offshoring as precursor to, 115, 118–119 predictions of effect of, 30–34 reshoring and, 10 retail sector and, 16–20 risk of, 256 service sector and, 12–20 solutions to rise of, 273–278 (see also basic income guarantee) as threat to workers with varying education and skill levels, xiv–xv, 59 of total US employment, 223 Triple Revolution report, 30–31 white-collar, 85–86, 105–106, 126–128 See also robotics; robots automotive industry, 3, 76, 193–194 autonomous cars, xiii, 94, 176, 181–191 as shared resource, 186–190 Autor, David, 50 Average Is Over (Cowen), 123, 126n aviation, 66–67, 179, 256 AVT, Inc., 18 Ayres, Ian, 125 Babbage, Charles, 79 Baker, Stephen, 96n, 102n Barra, Hugo, 121 Barrat, James, 231, 238–239 basic income guarantee, 31n, 257–261 approaches to, 261–262 downsides and risks of, 268–271 economic argument for, 264–267 economic risk taking and, 267–268 incentives and, 261–264 paying for, 271–273 Baxter (robot), 5–6, 7, 10 BD Focal Point GS Imaging System, 153 Beaudry, Paul, 127 Beijing Genomics Institute, 236n Bell Labs, 159 Berg, Andrew G., 214–215 Bernanke, Ben, 37 big data, xv, 25n, 86–96 collection of, 86–87 correlation vs. 
cause and, 88–89, 102 deep learning and, 92–93 health care and, 159–160 knowledge-based jobs and, 93–96 machine learning and, 89–92 The Big Switch (Carr), 72 Bilger, Burkhard, 186 “BinCam,” 125n “Bitter Pill” (Brill), 160 Blinder, Alan, 117–118, 119 Blockbuster, 16, 19 Bloomberg, 113–114 Bluestone, Barry, 220 Borders, 16 Boston Consulting Group, 9 Boston Globe (newspaper), 149 Boston Red Sox, 83 Boston University, 141 Bowley, Arthur, 38 Bowley’s Law, 38–39, 41 box-moving robot, 1–2, 5–6 brain, reverse engineering of human, 237 breast cancer screening, 152 Brill, Steven, 160, 163 Brin, Sergey, 186, 188, 189, 236 Brint, Steven, 251 Brooks, Rodney, 5 Brown, Jerry, 134 Brynjolfsson, Erik, 60, 122, 254 Bureau of Labor Statistics, 13, 16, 38n, 158, 222–223, 281 Bush, George W., 116 business interest lobbying, economic policy and, 57–58 “Busy child scenario,” (Barrat) 238–239 Calico, 236 California Institute of Technology, 133 Canada, 41, 58, 167n, 251 “Can Nanotechnology Create Utopia?”

The extraordinary power of today’s computers combined with advances in specific areas of AI research, as well as in our understanding of the human brain, are generating a great deal of optimism. James Barrat, the author of a recent book on the implications of advanced AI, conducted an informal survey of about two hundred researchers in human-level, rather than merely narrow, artificial intelligence. Within the field, this is referred to as Artificial General Intelligence (AGI). Barrat asked the computer scientists to select from four different predictions for when AGI would be achieved. The results: 42 percent believed a thinking machine would arrive by 2030, 25 percent said by 2050, and 20 percent thought it would happen by 2100. Only 2 percent believed it would never happen.
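Barrat's four answer options can be read as a cumulative timeline. A quick tally of the percentages reported above (the remaining respondents presumably chose no option or were undecided, which is an inference on my part) makes the headline clearer: two-thirds of those surveyed expected AGI by 2050.

```python
# Shares reported in the passage above, tallied cumulatively by horizon.
shares = {"by 2030": 42, "by 2050": 25, "by 2100": 20, "never": 2}

cumulative = 0
for horizon in ("by 2030", "by 2050", "by 2100"):
    cumulative += shares[horizon]
    print(f"AGI {horizon}: {cumulative}% of respondents")
# cumulative: 42%, 67%, 87%
print(f"never: {shares['never']}%  |  unaccounted: {100 - cumulative - shares['never']}%")
```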

AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus in authoritarian states.* Indeed, an all-out AI arms race might well be looming in the near future. The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well. If AI researchers do eventually manage to make the leap to AGI, there is little reason to believe that the result will be a machine that simply matches human-level intelligence. Once AGI is achieved, Moore’s Law alone would likely soon produce a computer that exceeded human intellectual capability.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

Bringing them to market requires no major new breakthroughs in AI research, just the nuts-and-bolts work of everyday implementation: gathering data, tweaking formulas, iterating algorithms in experiments and different combinations, prototyping products, and experimenting with business models. But the age of implementation has done more than make these practical products possible. It has also set ablaze the popular imagination when it comes to AI. It has fed a belief that we’re on the verge of achieving what some consider the Holy Grail of AI research, artificial general intelligence (AGI)—thinking machines with the ability to perform any intellectual task that a human can—and much more. Some predict that with the dawn of AGI, machines that can improve themselves will trigger runaway growth in computer intelligence. Often called “the singularity,” or artificial superintelligence, this future involves computers whose ability to understand and manipulate the world dwarfs our own, comparable to the intelligence gap between human beings and, say, insects.

These breakthroughs would need to remove key constraints on the “narrow AI” programs that we run today and empower them with a wide array of new abilities: multidomain learning; domain-independent learning; natural-language understanding; commonsense reasoning, planning, and learning from a small number of examples. Taking the next step to emotionally intelligent robots may require self-awareness, humor, love, empathy, and appreciation for beauty. These are the key hurdles that separate what AI does today—spotting correlations in data and making predictions—and artificial general intelligence. Any one of these new abilities may require multiple huge breakthroughs; AGI implies solving all of them. The mistake of many AGI forecasts is to simply take the rapid rate of advance from the past decade and extrapolate it outward or launch it exponentially upward in an unstoppable snowballing of computer intelligence.

I cannot guarantee that scientists definitely will not make the breakthroughs that would bring about AGI and then superintelligence. In fact, I believe we should expect continual improvements to the existing state of the art. But I believe we are still many decades, if not centuries, away from the real thing. There is also a real possibility that AGI is something humans will never achieve. Artificial general intelligence would be a major turning point in the relationship between humans and machines—what many predict would be the most significant single event in the history of the human race. It’s a milestone that I believe we should not cross unless we have first definitively solved all problems of control and safety.

pages: 315 words: 89,861

The Simulation Hypothesis
by Rizwan Virk
Published 31 Mar 2019

Finally, I’d like to thank Ellen McDonough for her endless patience with me as I droned on and on about the simulation hypothesis, and for her never-ending support through thick and thin! Index A The Adjustment Bureau, 8, 79 The Adjustment Team (Dick), 8–9 AFK - away from keyboard, 209–10 AGI (Artificial Generalized Intelligence), 90–91, 96–99 AGI (Artificial Generalized Intelligence) and social media, 104–5 AI (artificial intelligence) as element of Great Simulation, 280–81 ethics and uses, 97–100 gods, angels and the simulation hypothesis, 226–28 and NPCs, 82–84 super-intelligence, 100–101 and virtual reality and simulated consciousness, 16–18 AI (artificial intelligence), history of AI and games, 85–86 DeepMind, AlphaGo and video games, 86–88 digital psychiatrist, 88–89 NLP, AI and quest to pass the Turing Test, 89–92 Turing Test, 84–85 Al-Akhirah, 221–23 Al-Dunya, 221–23 Alexa, 88, 90 aliens, 275–76 allegory of the cave, 270–71 Almheiri, Ahmed, 260 AlphaGo, 86–88 Altered Carbon (Morgan, 2002), 103–4 analog, 161 ancestor simulation, 108–9, 114–15 Anderson, Kevin J., 97 Andreessen, Marc, 287 angels, 225–26 AR (augmented reality), 62–64 AR glasses, 62 arcade-type mechanics, 34 “Are You Living in a Simulation?”

“Are You Living in a Simulation?” (Bostrom, 2003), 109 artificial consciousness, portrayals of, 95–97 Artificial Generalized Intelligence (AGI), 90–91, 96–99 artificial intelligence (AI).
See AGI (Artificial Generalized Intelligence); AI (artificial intelligence); AI (artificial intelligence), history of Aserinsky, Eugene, 189 Ashely-Farrand, 206 Asimov, Isaac, 99 assembly language, 33 Asteroids, 36–37 Atari, 2, 4, 32, 38 atom, 167–68 atomic clocks, 170 augmented images, photorealistic, 63–64 augmented reality (AR), 62–64 Avatar, 58, 64 avatars, 44–45, 46f, 49, 273–74 B bag of karma, 117, 208 basic game loop, 31 BASIC programming language, 33 Beane, Silas, 255 Bhagavad Gita, 204–5 big game world, 30 “big TOE” (Theory of Everything), 156–57 biological materials, 3D printers, 71–72 bitmap, 163–64 black holes, 178–79 Blackthorn, 55 Blade Runner, 9, 77–78, 94 Blade Runner 2049, 65 Bohr, Niels, 13, 122, 124–25, 131, 167 Book of the Dead/Bardo Thol, 192 Boolean logic gates, 258 Born, Max, 131, 167 Bostrom, Nick, 5, 24–26, 105, 114–15, 220–21, 247, 281 Bostrom’s Simulation Argument, 110–11 Bostrom’s Simulation Argument, statistical basis for, 111–14 Brahman, 191 branching, 159 Breakout, 87 A Brief History of Time (Hawking), 10 Brinkley, Dannion, 229–231, 241 Buddha, 1, 183, 249 Buddhism, 14–15 Buddhist Dream Yoga, 191–94 Bushnell, Nolan, 34 butterfly effect, 18–19 Byte, 33 C c (speed of light), 174 C# programming language, 33, 171–73 CAD (computer-aided design), 287 Cameron, James, 64, 96–97 Campbell, Thomas, 156–57, 173–76, 250, 254–55 Capra, Fritjof, 203–4 Carmack, John, 59–60 central processing units (CPUs), 137 CGI (computer-generated imagery) techniques, 63–66 Chalmers, David, 246–47 chaos theory, 18–19 chat-bot, 31, 88, 98, 118 checksums, 256 Chess, 104 chess-playing computer, 86f Choose Your Own Adventure, 83 Christianity, 15–16 Christianity and Judaism, 223–25 Clarke, Arthur C., 96 classical physics, 29, 125, 161, 166, 283–84, 288 classical vs. 
relativistic physics, 122–24 Cline, Ernest, 56 clock-speed and quantized time, computer simulations, 171–73 Close Encounters of the Third Kind, 232, 276 cloud of probabilities, 127 collective dream, 187–88 Colossal Cave Adventure, 27–29, 32, 34 Colossal Cave Adventure, map of, 29f computation, 18–19 computation, and other sciences, 287 computation, evidence of, 256–57, 267–68 overview, 246–47 computation in nature, evidence of, 263–66 computational irreducibility, 18, 79, 266 computer simulations clock-speed and quantized time, 171–73 . see also ancestor simulation; Great Simulation; Simulation Argument; simulation hypothesis; Simulation Point computer-generated imagery (CGI) techniques, 64–66 “Computing Machinery and Intelligence” (Turing, 1950), 85 conditional rendering, evidence of, 253–55 conflict resolution, 173 conscious players, people as, 114–15 consciousness, 148 as digital informaion, 17–18 as information and computation, 82 consciousness, defined, 115–16 consciousness, digital vs. spiritual, 116–18 consciousness and metaphysical experiments, 249–250 consciousness as information, 104–5 consciousness transference, 198–99 Constraints on the Universe as a Numerical Simulation (Beane, Davoudi and Savage), 255 Copenhagen interpretation, 131 Cosmos, 251 CPUs (central processing units), 137 . 
see also GPUs/CPUs Creative Labs, 62 Crichton, Michael, 71–72 Crick, Francis, 116 Crowther, Will, 27 Curry, Adam, 76 D Dalai Lama, 207 Data, Star Trek: The Next Generation, 95–96, 115 Davoudi, Zohreh, 255 deathmatch mode, 43–44 Deep Blue, 86 DeepMind, 86–88, 94, 98 déjà vu, 240–41 delayed-choice double slit experiment, 145f delayed-choice experiment, 143–46 delayed-measurement experiment, 146 DELTA t (T), 174 Department of Defense (DOD), 232 Descartes, René, 11 DeWitt, Bryce, 149 dharma, 191 Dick, Leslie “Tessa” B., 8–9 Dick, Philip K., 274, 289 and alternate realities, 8–9 computer simulations and variables, 19 and implanted memories, 77–78 life as computer-generated simulation, 78–79 Metz Sci-Fi Convention, 1977, 2 question of reality vs. fiction, 71–72 simulated worlds, 80 speculative technologies, 53 digital consciousness, 116–18 digital film resolution, 65 digital immortality, 82, 105 digital psychiatrist, 88–89, 161 directed graph, 153–55 Discrete World, 165–66 Do Androids Dream of Electric Sheep, 9 Donkey Kong, 1 Doom, 43–44, 43f, 59–60, 137–38 DOTA 2, 87, 94 dot-matrix printers (2D), 69–71 double slit experiment, 128–29, 129f downloadable consciousness, 54, 101–4, 198, 207, 281 downloadable consciousness and seventh yoga, 197–99 Dr.

After the initial news about Google Duplex, though, there was widespread concern that robo-calls could now sound authentic and that this might lead to a whole new wave of spam phone calls! Google quickly backtracked and decided that it would always have autonomous agents making phone calls “self-identify” as an agent. An AI that can pass the Turing Test and do other things that humans can do has been dubbed “Artificial Generalized Intelligence,” or AGI. Thus far, most AI applications have focused on specific tasks—reading handwriting, predicting certain patterns from numbers, helping a human with solving limited tasks, etc. While developments in NLP technology have made incredible strides in the past few decades, many experts still believe that we are probably within a decade of being able to create artificially intelligent characters (or NPCs) that can pass the Turing Test, within games or in the real world.

Falter: Has the Human Game Begun to Play Itself Out?
by Bill McKibben
Published 15 Apr 2019

You’ll be able to drink IPAs for hours at your local tavern, and the self-driving car will take you home—and it may well be able to recommend precisely which IPAs you’d like best. But it won’t be able to carry on an interesting discussion about whether this is the best course for your life. That next step up is artificial general intelligence, sometimes referred to as “strong AI.” That’s a computer “as smart as a human across the board, a machine that can perform any intellectual task a human being can,” in Urban’s description. This kind of intelligence would require “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”9 Five years ago a pair of researchers asked hundreds of AI experts at a series of conferences when we’d reach this milestone—more precisely, they asked them to name a “median optimistic year,” when there was a 10 percent chance we’d get there; a median realistic year, a 50 percent chance; and a “pessimistic” year, in which there was a 90 percent chance.

Hawking wrote that success in AI would be “the biggest event in human history,” but it might “also be the last, unless we learn to avoid the risks.”20 And here’s Michael Vassar, president of the Machine Intelligence Research Institute: “I definitely think people should try to develop Artificial General Intelligence with all due care. In this case all due care means much more scrupulous caution than would be necessary for dealing with Ebola or plutonium.”21 Why are people so scared? Let the Swedish philosopher Nick Bostrom explain. He’s hardly a Luddite. Indeed, he gave a speech in 1999 to a California convention of “transhumanists” that may mark the rhetorical high water of the entire techno-utopian movement.

As the science writer Tim Urban points out, an AI “wouldn’t see human-level intelligence as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to stop at our level. And given the advantages over us that even human-intelligence-equivalent artificial general intelligence (AGI) would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.”5 After all, AGI’s got better components. Already today’s microprocessors run about ten million times the speed of our brains, whose internal communications “are horribly outmatched by a computer’s ability to communicate optically at the speed of light,” Urban observes.

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

The development of AI is driven by market forces and the defense industry—billions of dollars are now pouring into constructing smart household assistants, robot supersoldiers, and supercomputers that mimic the workings of the human brain. Indeed, the Japanese government has launched an initiative to have androids take care of the nation’s elderly, in anticipation of a labor shortage. Given the current rapid-fire pace of its development, AI may advance to artificial general intelligence (AGI) within the next several decades. AGI is intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. Indeed, AI is already projected to outmode many human professions within the next decades. According to a recent survey, for instance, the most-cited AI researchers expect AI to “carry out most human professions at least as well as a typical human” within a 50 percent probability by 2050, and within a 90 percent probability by 2070.1 I’ve mentioned that many observers have warned of the rise of superintelligent AI: synthetic intelligences that outthink the smartest humans in every domain, including common sense reasoning and social skills.

“Response to Susan Schneider’s ‘The Philosophy of “Her,’ ” H+ Magazine, March 26, http://hplusmagazine.com/2014/03/26/response-to-susan-schneiders-the-philosophy-of-her/. Zimmer, Carl. 2010. “Sizing Up Consciousness By Its Bits,” New York Times, September 20. INDEX Page numbers in italics indicate illustrations. Aaronson, Scott, 64 ACT test, 50–57, 60, 65, 67 Active SETI, 105–9 afterlife of the brain, 8, 145 AGI (artificial general intelligence), 9, 43 AI (artificial intelligence), 1–15, 148–50 alien/extraterrestrial, 5, 98–119 (See also alien/extraterrestrial AI) consciousness issues, 2–6 (See also consciousness) development of, 9–10 implications, importance of thinking through, 2–3, 10 Jetsons fallacy and, 12–13 merging humans with AI, 6–8, 72–81 (See also merging humans with AI) mind design, concept of, 1 postbiological, 99 singularity, approach of, 11–12 software, mind viewed as, 7–8, 120–47 (See also software, mind viewed as) transhumanism, 13–15 (See also transhumanism) uncertainties and unknowns regarding, 15 AI consciousness, problem of, 3–6, 16–32, 148–49 alien intelligences, postbiological, 5 alien/extraterrestrial AI, 110–11 biological naturalism, arguments against, 18–22, 34, 158n4 capability of machines for consciousness, 17–18 Chinese Room thought experiment and, 19–22, 20, 34, 148 control problem, 4–5 ethical treatment of conscious/potentially conscious AIs, 39, 67–69, 149 isomorph thought experiment and, 26–31, 57, 158nn13–14, 159nn10–11 “problem of other minds” and, 158n3 slavery and, 4, 39 techno-optimism, arguments for, 18, 23–26, 31, 34 value placed on humans by, 5 “Wait and See Approach” to, 33–34, 45 AI slavery, 4, 39 Alcor, 121, 145 alien/extraterrestrial AI, 5, 98–119 BISAs (biologically inspired superintelligent aliens), 113–19 consciousness, 110–11 control problem and, 104–5 postbiological cosmos approach, 99–104 SETI (Search for Extraterrestrial Intelligence), 101, 105–9, 106 software theory, 119 superintelligent AI minds, encountering, 
109–19 Alzheimer’s disease, 44, 58 Amazon, 131 Arrival (film), 107 artificial general intelligence (AGI), 9, 43 artificial intelligence.

artificial general intelligence (AGI), 9, 43 artificial intelligence.
See AI asbestos, 66 Asimov, Isaac, “Robot Dreams,” 57 astronauts and conscious AI, 41–43, 42, 103 Battlestar Galactica (TV show), 99 Bello, Paul, 159n1 Berger, Theodore, 44 Bess, Michael, 12 Big Think, 126 biological naturalism, 18–22, 34, 158n4 biologically inspired superintelligent aliens (BISAs), 113–19 Black Box Problem, 46 black holes, 10 Blade Runner (film), 17, 57 Block, Ned, 159n1, 162n11 “The Mind as the Software of the Brain,” 134 body.

pages: 175 words: 45,815

Automation and the Future of Work
by Aaron Benanav
Published 3 Nov 2020

Across much of the literature, research and development in the digital age is presented as a matter of engineers in white lab coats following the technology “wherever it leads them” without having to worry about “end results” or “social outcomes.”33 Graphs of exponentially rising computing capacities—with Moore’s law of rising processor speeds standing in for technical change in general—suggest that technology develops automatically down pre-set paths.34 That suggestion in turn feeds into the fantasy of a coming “singularity,” when machine intelligence will finally give birth to science fiction–style artificial general intelligence, developing at speeds far beyond human comprehension.35 In reality, technological development is highly resource intensive, forcing researchers to pursue certain paths of inquiry at the expense of others. In our society, firms must focus on developing technologies that lead to profitable outcomes. Turning profits off of digital services, which are mostly offered to end users for free online, has proven elusive. 
Rather than focus on generating advances in artificial general intelligence, engineers at Facebook spend their time studying slot machines to figure out how to get people addicted to their website, so that they keep coming back to check for notifications, post content, and view advertisements.36 The result is that, like all modern technologies, these digital offerings are far from “socially neutral.”37 The internet, as developed by the US government and shaped by capitalist enterprises, is not the only internet that could exist.38 The same can be said of robotics: in choosing among possible pathways of technological progress, capital’s command over the work process remains paramount.39 Technologies that would empower line workers are not pursued, whereas technologies allowing for detailed surveillance of those same workers are fast becoming hot commodities.40 These features of technological change in capitalist societies have important implications for anyone seeking to turn existing technical means toward new, emancipatory aims.

The “effective operation of the automatic price mechanism,” he explained, “depends critically” on a peculiar feature of modern technology, namely that in spite of bringing about “an unprecedented rise in total output,” it nevertheless “strengthened the dominant role of human labour in most kinds of productive processes.”19 In other words, technology has made workers more productive without making work itself unnecessary. Since workers continue to earn wages, their demand for goods is effective. At any time, a technological breakthrough could destroy this fragile pin holding capitalist societies together. Artificial general intelligence, for example, might eliminate many occupations in a single stroke, rendering large quantities of labor unsalable at any price. At that point, information about the preferences of large sections of the population would vanish from the market, rendering it inoperable. Drawing on this insight—and adding that such a breakthrough now exists—automation theorists frequently argue that capitalism must be a transitory mode of production, which will give way to a new form of life that does not organize itself around wage work and monetary exchange.20 Automation may be a constant feature of capitalist societies; the same is not true of the theory of a coming age of automation, which extrapolates from instances of technological change to a broader account of social transformation.

pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

This leads to the fear that these programs will become increasingly independent of their creators. Artificial general intelligence (also sometimes referred to as “strong AI”) is still the stuff of science fiction. It is the product of a hypothetical future in which an artificial intelligence isn’t just trained to be smart about a specific task, but to learn entirely on its own, and can effectively apply its intelligence to any problem that comes its way. The fear is that an artificial general intelligence will develop its own goals and, because of its ability to learn on its own at superhuman speeds, will improve itself at a rate that soon leaves humans far behind.

Bostrom calls this hypothetical next step in strong AI “artificial superintelligence.” Deep learning pioneers Demis Hassabis and Yann LeCun are skeptical. They believe we’re still a long way from artificial general intelligence. Andrew Ng, formerly the head of AI research for Chinese search giant Baidu, compared worrying about hostile AI of this kind to worrying about overpopulation on Mars. Even if we never achieve artificial general intelligence or artificial superintelligence, though, I believe that there is a third form of AI, which I call hybrid artificial intelligence, in which much of the near-term risk resides.

The highly publicized victory of AlphaGo over Lee Sedol, one of the top-ranked human Go players, represented a milestone for AI, because of the difficulty of the game and the impossibility of using brute-force analysis of every possible move. But DeepMind cofounder Demis Hassabis wrote, “We’re still a long way from a machine that can learn to flexibly perform the full range of intellectual tasks a human can—the hallmark of true artificial general intelligence.” Yann LeCun also blasted those who oversold the significance of AlphaGo’s victory, writing, “most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake.

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

This does not prohibit the intelligence from having machine learning or cognition capabilities so that it can learn new tasks or process new information outside of its initial programming. In fact, many machine intelligences already have this capability. Examples include: Google self-driving car, IBM Watson, high-frequency trading (HFT) algorithms, facial recognition software
• Artificial General Intelligence—a human-equivalent machine intelligence that not only passes the Turing Test and responds as a human would but can also make human-equivalent decisions. It will likely also process non-logical or informational cues such as emotion, tone of voice, facial expression and nuances that currently only a living intelligence can (can your dog tell if you are angry or sad?).

Amazingly, the right robots might enable her to have the best of both worlds… What if Maria could stay in Manila and still work with patients in the United States? Imagine Maria in a call centre or even working from home. She is at her computer monitoring ten robot companions in an assisted care facility in Los Angeles. Each patient has a personal dedicated companion robot sitting by his or her bedside, running standard artificial general intelligence (AGI) software in a semi-autonomous mode. In this mode, the personal robot will be able to carry on conversations, answer basic questions and help the patient get assistance or entertainment. Cameras and sensors in the robot will be able to read the patient’s blood pressure, wakefulness, heart rate, emotional state, etc.

He has cameras on his eyes and on his chest, which allow him to recognize people’s faces, not only that, but recognize their gender, their age, whether they are happy or sad, and that makes him very exciting for places like hotels for example, where you need to appreciate the customers in front of you and react accordingly.” Jong Lee, CEO of Hanson Robotics Hanson Robotics is combining EQ in the form of an advanced artificial general intelligence and the most human robots on the planet. If you like gambling, you might soon be at a table, money burning a hole in your pocket, and meet Eva, who is being tested to be a beautiful baccarat dealer for casinos in Macau, China. Eva will be able to stand at the dealer’s position and deal the cards from a real deck and interact with the players.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

Consider the final words of On the Origin of Species: “There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”19 This is not the writing of a metaphysical grinch. Darwin’s view of life without a creator has a “grandeur” to it, and is articulated with an almost religious sense of awe. One day we may feel that way about our unhuman machines as well. ARTIFICIAL GENERAL INTELLIGENCE The ancient Greek poet Archilochus once wrote: “The fox knows many things, but the hedgehog knows one big thing.” Isaiah Berlin, who found this mysterious line in the surviving scraps of Archilochus’s poetry, famously used it as a metaphor to distinguish between two types of human being: people who know a little about a lot (the foxes) and people who know a lot about a little (the hedgehogs).20 In our setting, we can repurpose that metaphor to think about human beings and machines.

Human beings, on the other hand, are proud foxes, who might now find themselves thrashed by machines at certain undertakings, but can still outperform them at a wide spread of others. For many AI researchers, the intellectual holy grail is to build machines that are foxes rather than hedgehogs. In their terminology, they want to build an “artificial general intelligence” (AGI), with wide-ranging capabilities, rather than an “artificial narrow intelligence” (ANI), which can only handle very particular assignments.21 That is what interests futurists like Ray Kurzweil and Nick Bostrom. But there has been little success in that effort, and critics often put forward the elusiveness of AGI as a further reason for being skeptical about the capabilities of machines.

abandonment ability bias Acemoglu, Daron adaptive learning systems admissions policies, conditional basic income and affective capabilities affective computing Age of Labor ALM hypothesis and optimism and overview of before and during twentieth century in twenty-first century Agesilaus AGI. See artificial general intelligence agoge agriculture Airbnb airborne fulfillment centers Alaska Permanent Fund Alexa algorithms alienation al-Khwarizmi, Abdallah Muhammad ibn Musa ALM (Autor-Levy-Murnane) hypothesis AlphaGo AlphaGo Zero AlphaZero Altman, Sam Amara, Roy Amazon artificial intelligence and changing-pie effect and competition and concerns about driverless vehicles and market share of network effects and profit and Andreessen, Marc ANI.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

Whatever the objective function, the autonomous system has to be capable of navigating an unpredictable and changing environment: different kinds of roads, weather, and lighting conditions; the behavior of other cars and pedestrians; and surprises such as obstacles in the roadway, including children chasing balls into the street. Until machines are capable of defining their own goals, the choices of the problems we want to solve with these technologies—what goals are worthy to pursue—are still ours. There is an outer frontier of AI that occupies the fantasies of some technologists: the idea of artificial general intelligence (AGI). Whereas today’s AI progress is marked by a computer’s ability to complete specific narrow tasks (“weak AI”), the aspiration to create AGI (“strong AI”) involves developing machines that can set their own goals in addition to accomplishing the goals set by humans. Although few believe that AGI is on the near horizon, some enthusiasts claim that the exponential growth in computing power and the astonishing advances in AI in just the past decade make AGI a possibility in our lifetimes.

As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.” OpenAI was created in 2015 as a nonprofit organization funded by wealthy technologists, including Elon Musk, Peter Thiel, Sam Altman, and Reid Hoffman, who were concerned with charting a path toward safe artificial general intelligence. With a social rather than profit-making mission, the team worried that the powerful tool it created could easily be put to illicit or even nefarious use producing fake text analogous to deep-fake images and videos. Middle school students could ask it to write short essays, leading to widespread and undetectable cheating.

To give a sense of scale, the training data for GPT-3 is nearly 45 terabytes in size, or more than four times the estimated size of all the printed material in the Library of Congress in 2000. GPT-3 represents an important frontier in AI research. The power of the model is undeniable, with some calling it the closest thing yet to artificial general intelligence. Without having been trained on any specific topic, it can generate convincing text based on an enormous variety of prompts. To give one example of its range and seeming ability to understand nuance and humor, consider this: Kanye West Exclusive—Why He’s Running for the Presidency, and What His Priorities Would Be as President.

Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

Had a new gear been found? DeepMind was already known for its ambitious use of ML. Founded in London in 2010, its stated goal was to ‘solve intelligence’ by pioneering the fusion and furtherance of modern ML techniques and neuroscience: to build not just artificial intelligence (AI), but artificial general intelligence (AGI), a multi-purpose learning engine analogous to the human mind. DeepMind made headlines when it created the first software to beat a human champion at Go. In 2016 its AlphaGo program played 9th dan Go professional Lee Sedol over five matches in Seoul and, in a shock result beyond even that of CASP13, won four of them.

Whether tackling Schrödinger's equation, producing new agricultural techniques, modelling galaxies, developing investment strategies, predicting demand for wind energy or writing a symphony, ML is already a fertile producer of the next generation of big ideas, a technology capable of learning, not processing; creating, not copying.34 But this is not the end. The notion of artificial general intelligence (AGI), a truly intelligent machine, already expands the potential for big thinking. Of necessity, we cannot be sure what such a machine would do, how or what it would think. We can however be sure that its impact would be colossal. It might create many kinds of intelligence – each qualitatively and quantitatively different, producing new vistas on the world, capable of understanding or theorising in new modes.35 Not just AGI then, but a cornucopia of AGIs.

Yet more significant are the first fully functioning Von Neumann probes: self-replicating robots thrown out into the galaxy to endlessly reproduce themselves, ultimately to form a numberless swarm strung out across space. A full Type II civilisation is eventually realised with the construction of a Dyson sphere around the Sun, collecting all its energy. Closer to home, whole-brain emulation is a reality, ushering in a new em civilisation in parallel. Everything changes again when artificial general intelligence finally becomes a reality and the intelligence explosion kicks in. Type III Ideas (long-term) At this distance, ideas grow hazy – like gazing at a distant star, their light fades in and out, blurry and indistinct. Superintelligences master faster-than-light interstellar travel.

pages: 181 words: 52,147

The Driver in the Driverless Car: How Our Technology Choices Will Create the Future
by Vivek Wadhwa and Alex Salkever
Published 2 Apr 2017

Ultimately, powerful computational systems—Siris on steroids—will reason creatively to solve problems in mathematics and physics that have bedeviled humans. These systems will synthesize inputs to arrive at something resembling original works or to solve unstructured problems without benefit of specific rules or guidance. Such broader reasoning ability is known as artificial general intelligence (A.G.I.), or hard A.I. One step beyond this is artificial superintelligence, the stuff out of science fiction that is still so far away—and crazy—that I don’t even want to think about it. This is when the computers become smarter than us. I would rather stay focused on today’s A.I., the narrow and practical stuff that is going to change our lives.

Robots will soon become sure-footed; and a robot will, rather than merely open a door, succeed in opening it while holding a bag of groceries and ensuring that the dog doesn’t escape. When I buy Rosie, I may have to show her around the house, but she’ll quickly learn what I need, where my washer and dryer are located, and how to navigate around and clean the bathroom. And I expect that she will be as witty and lovable as she was on TV. No, she won’t have the artificial general intelligence that will make her seem human, but she will be able to have fun conversations with us. In fact, a very limited version of Rosie can be found at hospitals around the country. Her name is Tug, and she is produced by Aethon Inc. of Pittsburgh. Tug performs the most essential duties of today’s hospital orderly, such as delivering medications and equipment to different floors.

INDEX Abbeel, Pieter, 85–86 Accelerating returns, law of, 12–13 Addison Lee, 8–9 Africa health care in, 75, 116 technology in, 14, 23, 72n, 181, 186–187, 189 AIC Chile, 181 Airliner, supersonic, 7 Anderson, Chris, 115 Anger in society, 3–4 Argus retinal prosthesis, 167–168 Artificial general intelligence (A.G.I.), 40 Artificial intelligence (A.I.), 7, 12, 13. See also specific topics benefits of, 43 fostering autonomy vs. dependence, 44–46 hard, 40 how it will affect our lives and take our jobs, 40–43 and the labor market, 96–97 in medical field, 73–76 narrow/soft, 38 Preparing for the Future of Artificial Intelligence, 45–46 risks of, 45 Artificial intelligence (A.I.) assistants, 43, 46, 85.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

We have only started to see its effects, and it will get better quickly and accelerate across industries—to the point where instead of training it, we are not needed. Beyond this, though, researchers and businesses continue to work on artificial general intelligence (AGI): intelligence that can generalize and take knowledge from one domain to another. How far out is artificial general intelligence, where AI might be smarter than a human at all things? I asked Ben Goertzel, one of the preeminent researchers in AGI. Ben has spent much of his life thinking about AGI and working to create it. And in his estimation, we will have it within five to thirty years depending on how efforts are directed.

Ben believes that there is high risk if AI is controlled by a corporation or government. Goals for an organization or government may be very different than that of a population. If AI is owned by corporations or governments, then the benefits will accrue to very few. He has long advocated that artificial general intelligences have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness. His company, SingularityNET, aims to decentralize AI and open the benefits to everyone.

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

The world’s best Go players had long thought that their play was close to perfection, so were shocked to find themselves beaten so decisively.80 As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go.”81 It is this generality that is the most impressive feature of cutting edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. This goal is sometimes known as artificial general intelligence (AGI), to distinguish it from the narrow approaches that had come to dominate. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through the Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of extremely different Atari games at levels far exceeding human ability.82 Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learned and mastered these games directly from the score and the raw pixels on the screen.

Or any other of Earth’s species. As we saw in Chapter 1, our unique position in the world is a direct result of our unique mental abilities. Unmatched intelligence led to unmatched power and thus control of our destiny. What would happen if sometime this century researchers created an artificial general intelligence surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. So without a very good plan to keep control, we should also expect to cede our status as the most powerful species, and the one that controls its own destiny.88 On its own, this might not be too much cause for concern.

We need to better understand the existential risks—how likely they are, their mechanisms, and the best ways to reduce them. While there has been substantial research into nuclear war, climate change and biosecurity, very little of this has looked at the most extreme events in each area, those that pose a threat to humanity itself.64 Similarly, we need much more technical research into how to align artificial general intelligence with the values of humanity. We also need more research on how to address major risk factors, such as war between the great powers, and on major security factors too. For example, on the best kinds of institutions for international coordination or for representing future generations.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Omohundro, “The Basic AI Drives”, in Proceedings of the First Conference on Artificial General Intelligence, 2008. 139 Stuart Russell, “Should We Fear Supersmart Robots?”, Scientific American, Vol. 314 (June 2016), 58–59. 140 Nate Soares and Benja Fallenstein, “Aligning Superintelligence with Human Interests: A Technical Research Agenda”, in The Technological Singularity (Berlin and Heidelberg: Springer, 2017), 103–125. See also Stephen M. Omohundro, “The Basic AI Drives”, in Proceedings of the First Conference on Artificial General Intelligence, 2008. 141 Ibid. 142 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), Chapter 9. 143 See John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior (Princeton, NJ: Princeton University Press, 1944). 144 Nate Soares and Benja Fallenstein, “Toward Idealized Decision Theory”, Technical Report 2014–7 (Berkeley, CA: Machine Intelligence Research Institute, 2014), https://arxiv.org/abs/1507.01986, accessed 1 June 2018. 145 See, for example, Thomas Harris, The Silence of the Lambs (London: St.

For instance, Margaret Boden was one of the most well-known proponents of the sceptical view, although in her latest work, Margaret Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016), 119 et seq, she acknowledges the potential for “real” artificial intelligence, but maintains that “…no one knows for sure, whether [technology described as Artificial General Intelligence] could really be intelligent”. 20 See further Chapter 3 at s. 2.1.2. 21 As to AI systems developing the capacity to self-improve, see further FN 114 below and more generally Chapter 2 at s. 3.2. 22 Our prediction for the process of narrow AI gradually coming closer to general AI is similar to evolution.

See also Daniel Kahneman, Thinking, Fast and Slow (London: Penguin, 2011). 59 See more general discussion in Chapter 8 at s. 5.4.2. 60 See Laurent Orseau and Stuart Armstrong, “Safely Interruptible Agents”, 28 October 2016, http://intelligence.org/files/Interruptibility.pdf, accessed 1 June 2018; El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, and Alexandre Maure, “Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning”, EPFL Working Paper (2017) No. EPFL-WORKING-229332. 61 Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell, “The Off-Switch Game”, arXiv preprint arXiv:1611.08219 (2016), 1. 62 See, for example, Stephen Omohundro, “The Basic AI Drives”, in Proceedings of the First Conference on Artificial General Intelligence (2008). 63 Ibid. 64 Arguably, an excess of confidence in the “rightness” of an ultimate goal—particularly where that goal is not of a nature that is observable in the natural world—can lead to undesirable consequences in human actions, as well as those of AI. For instance, it might be said that belief-based fundamentalists, whether on the basis of religion, animal rights, nationalism, etc., suffer from an excess of confidence.

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

A machine that may be observed from a third-person point of view to fulfill all the cognitive functions of which a human being is capable is said, again, to possess artificial general intelligence. This is something widely held to be attainable, but not yet actual. David Chalmers, who does not believe we currently have a science of consciousness or that we have any reason to believe that consciousness is something that may be instantiated by machines, nonetheless believes that machines will have artificial general intelligence within the next forty to one hundred years.15 John Basl and Eric Schwitzgebel for their part predict that “we will soon have AI approximately as cognitively sophisticated as mice or dogs.”16 Although this is not the central point of the latter authors’ work, and does not represent the considered view of at least one of them,17 we may still note that it is a surprising claim.

And we are always free to trade in our metaphors when others come along that better satisfy our desire to make sense of things, or that simply fit better with the spirit of the times.

Dark Conjurations

It should not be surprising by now to learn that the quest to build an artificial general-intelligence machine goes back much further in history than is ordinarily supposed, nor that the line between general intelligence and strong intelligence, between computational ability and consciousness, has often been blurred. One legendary example of a general-intelligence device is the thirteenth-century English natural philosopher Roger Bacon’s “Brazen Head,” a genie made of brass, purportedly capable of answering any yes-or-no question put to it: a medieval Siri, if you will.

pages: 326 words: 88,968

The Science and Technology of Growing Young: An Insider's Guide to the Breakthroughs That Will Dramatically Extend Our Lifespan . . . And What You Can Do Right Now
by Sergey Young
Published 23 Aug 2021

That future remains a delicious mystery. However, there are two anticipated developments in particular that will arrive soon enough. These are crucial pillars of Kurzweil’s theory, and in my opinion, represent a kind of “Kitty Hawk” moment of the relatively near future. These pillars are quantum computing and artificial general intelligence (AGI). This is not a computing book; I am not a computer scientist. Both quantum computing and AGI are subjects far too complex to do service to in the little amount of space I have available. If you’d like to learn about these subjects, I recommend watching Wired UK’s video on quantum computing by Amit Katwala and Science Time’s YouTube documentary on artificial superintelligence.4 For now, my own, truly “dumbed down” version goes like this: In conventional computing, every piece of information is made up of ones and zeros, which can be understood as “on” and “off.”

The result is that quantum computing will be trillions of times faster than the fastest of conventional computers and able to perform far more complex maneuvers. Meanwhile, whereas artificial intelligence today is usually narrowly focused on a single task or field, and requires extensive training by humans to execute that task, artificial general intelligence will be able to devise solutions to almost any task through observation, research, application of past experiences, and so on—just like a human being. Google, IBM, and others have already created first-generation quantum computers. Google even claimed in October 2019 that its entry achieved “quantum supremacy”—the ability to solve a problem that no classical computer could solve—when it performed a function in two hundred seconds that “a state-of-the-art classical supercomputer would take approximately ten thousand years” to execute.5 As for AGI, most experts predict its arrival within a few decades.

Imagine that it could then design and execute accurate simulations of these various courses of action to determine the very best answers, within a statistically insignificant margin of error. Imagine that it could accurately determine the resources necessary to execute those courses of action, and even manage that execution. And imagine it could do all of that in less time than it took you to read this paragraph. That is the promise of quantum computing plus artificial general intelligence. With such technologies, even those things that we consider wildly impossible today could soon be considered banal and routine, just as happened with human flight. Immortality is not just theoretically possible. It is probable. EXTREME LONGEVITY MEETS TECHNICAL INEVITABILITY Biologically speaking, there is no rule of nature that prevents immortality (or at least extreme longevity) from happening.

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know
by Richard Watson
Published 5 Nov 2013

timeline
1662 Dodo becomes extinct
1966 Last Arabian ostrich
1989 Golden toads extinct
1998 Poll of 400 scientists reveals that 70 percent believe mass extinction is happening
2000 Last Pyrenean Ibex dies (it’s cloned back in 2009 but later dies)
2005 Extinct Laotian Rock Rat rediscovered
2006 Freshwater dolphin declared dead
2035 50 percent of European amphibians extinct

46 The Singularity

Moore’s Law (named after Gordon Moore) says that computers double their processing ability every 18 months or so. But imagine if this rate of exponential growth were itself exponential. That’s one potential consequence of what future tech-heads call the “Singularity,” where computers will be able to create AGIs (artificial general intelligences) more intelligent than human beings. Proponents of the Singularity, most notably the inventor and futurist Ray Kurzweil, say that if computers continue to advance at their current rate, the singularity is a mere 20–30 years away—perhaps sooner if useful quantum computers are developed.

In many ways it could be worse to deal with if it were not, because it’s entirely possible that such an intellect could not be reasoned with using human logic or emotion.

the condensed idea: Machines much smarter than people

timeline
2011 Voice declines significantly as a human-to-human communication medium
2040 AGI (artificial general intelligence) exists
2045 The distinction between virtual and real life becomes almost meaningless
2050 Full virtual-reality immersion
2060 The first human brain enters a machine body
2070 Computer viruses become the main threat to human existence
2080 Scientists acknowledge that immortality exists for those that want it
2095 Human-robot hybrids (brains in boxes) take off to explore distant galaxies

47 Me or we?

Sumner Redstone, chairman, Viacom and CBS

2002 “There is no doubt that Saddam Hussein has weapons of mass destruction.” Dick Cheney

Glossary
3D printer: A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer.
Affective computing: Machines and systems that recognize or simulate human affects or emotions.
AGI: Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence, and it’s usually regarded as the long-term goal of AI research and development.
Ambient intelligence: Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs.

pages: 377 words: 97,144

Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World
by James D. Miller
Published 14 Jun 2012

This “crowdsourcing,” which occurs when a problem is thrown open to anyone, helps a company by allowing it to draw on the talents of strangers, while paying those strangers only if they help the firm. This kind of crowdsourcing works only if, as with a video recommendation system, there is an easy and objective way of measuring progress toward the crowdsourced goal. 13. Potential Improvement All the Way Up to Superhuman Artificial General Intelligence—A recommendation AI could slowly morph into a content creator. At first, the AI might make small changes to content, such as improving sound quality, zooming in on the interesting bits of the video, or running in slow motion the part of a certain cat video in which a kitten falls into a bowl of milk.

Even with the money, Vassar admitted, the Institute would succeed only if it attracted extremely competent programmers, because the programming team would be working under the disadvantage of trying to make an AI that's mathematically certain to yield a friendly ultra-intelligence, whereas other organizations trying to build artificial general intelligence might not let concerns about friendliness slow them down. The Institute's annual budget is currently around $500,000. Even if Eliezer and the Singularity Institute have no realistic chance of creating a friendly AI, they still easily justify their institute's existence. As Michael Anissimov, media director for the Institute, once told me in a personal conversation, at the very least the Institute has reduced the chance of humanity's destruction by repeatedly telling artificial-intelligence programmers about the threat of unfriendly AI.

I doubt much time would elapse between the creation of Rosie, the robot maid on the 1960s TV show The Jetsons, and a Singularity. Similarly, I believe we would have a Singularity thrust upon us very quickly after someone creates an AI like HAL from the movie 2001: A Space Odyssey. Any artificial general intelligence such as HAL could almost certainly become much smarter and more capable just by running on faster or more numerous computers. Consequently (and perhaps tragically), I strongly suspect that: HAL + Continued Exponential Growth in Computing Power = Not-Too-Distant Singularity Brain implants that can raise the general intelligence of a healthy person would be a strong sign that mankind is near a Singularity.

pages: 391 words: 71,600

Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone
by Satya Nadella , Greg Shaw and Jill Tracie Nichols
Published 25 Sep 2017

AI needs data to learn. The cloud has made tremendous computing power available to everyone, and complex algorithms can now be written to discern insights and intelligence from the mountains of data. But far from Baymax or Brenner, AI today is some ways away from becoming what’s known as artificial general intelligence (AGI), the point at which a computer matches or even surpasses human intellectual capabilities. Like human intelligence, artificial intelligence can be categorized by layer. The bottom layer is simple pattern recognition. The middle layer is perception, sensing more and more complex scenes.

Like humans, computers will go beyond mimicking what people do and will invent new, better solutions to problems. Deep neural networks and transfer learning are leading to breakthroughs today, but AI is like a ladder and we are just on the first step of that ladder. At the top of the ladder is artificial general intelligence and complete machine understanding of human language. It’s when a computer exhibits intelligence that is equal to or indistinguishable from a human. One of our top AI researchers decided to try an experiment to demonstrate how a computer can learn to learn. A highly esteemed computer scientist and medical doctor, Eric Horvitz, runs our Redmond research lab and has long been fascinated with machines that perceive, learn, and reason.

(landlord), 36–37 Ali, Syed B., 20 Alien and Sedition Acts, 188 Allen, Colin, 209 Allen, Paul, 4, 21, 28, 64, 69, 87, 127 Alphago, 199 ALS, 10–11 Altair, 87 Althoff, Judson, 82 Amar, Akhil Reed, 186 Amazon, 47, 51, 54, 59, 85, 122, 125, 200, 228 Amazon Fire, 125 Amazon Web Service (AWS), 45–46, 52, 54, 58 ambient intelligence, 228–39 ambition, 76–78, 80, 90 American Dream, 238 American Revolution, 185–86 Amiss, Dennis, 37 Anderson, Brad, 58, 82 Android. 59, 66, 70–72, 123, 125, 132–33, 222 antitrust case, 130 AOL, 174 Apple Computer, 15, 45, 51, 66, 69–70, 72, 128, 132, 174, 177–78, 189 partnership with, 121–25 apprenticeship, 227 artificial general intelligence (AGI), 150, 153–54 artificial intelligence (AI), 11, 13, 50, 52, 59, 76, 88, 110, 139–42, 149–59, 161, 164, 166–67, 186, 212, 223, 239 ethics and, 195–210 Artificial Intelligence and Life in 2030 (Stanford report), 208 Asia, 86, 219 Asimov, Isaac, 202–3 astronauts, 146, 148 asynchronous transfer model (ATM), 30 AT&T, 174 Atari 2600, 146 at-scale services, 53, 61 auction-based pricing, 47, 50 Australia, 38–39, 149, 228, 230 autism, 149 Autodesk, 127–28 automation, 208, 214, 226, 231–32, 236 automobile, 127, 153, 230 driverless, 209, 226, 228 aviation, 210 Azure, 58–61, 85, 125, 137 backdoors, 177–78 Bahl, Kunal, 33 Baig, Abbas Ali, 36 Bain Capital, 220 Baldwin, Richard, 236 Ballmer, Steve, 3–4, 12, 14, 29, 46–48, 51–55, 64, 67, 72, 91, 94, 122 Banga, Ajay Singh, 20 Baraboo project, 145 Baraka, Chris, 97 BASIC, 87, 143 Batelle, John, 234 Bates, Tony, 64 Bayesian estimators, 54 Baymax (robot), 150 Beauchamp, Tom, 179 Belgium, 215 Best Buy, 87, 127 Bezos, Jeff, 54 bias, 113–15 Bicycle Corporation of America, 232 Big Data, 13, 58, 70, 150–51, 183–84 Big Hero 6 (film), 150 Bill & Melinda Gates Foundation, 46, 74 Bill of Rights, 190 Bing, 47–54, 57, 59, 61, 125, 134 Birla Institute of Technology, 21 Bishop, Christopher, 199 black-hat groups, 170 Blacks @ Microsoft (BAM), 116–17.

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

The ANGELINA Videogame Design System, Parts I and II. IEEE Transactions on Computational Intelligence and AI in Games, 9(2/3), 1–13. Gallie, W. B. (1955). Essentially Contested Concepts. Proceedings of the Aristotelian Society, 56, 167–198. Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence, 5(1), 1–26. Hurst, D. (2018, February 6). Japan Lays Groundwork for Boom in Robot Carers. The Guardian. Price, R. (2019, April 12). Uber Says Its Future is Riding on the Success of Self-­ driving Cars, but Warns Investors That There’s a Lot That Can Go Wrong.

113 for example overnight. While this is a brilliant and much-used science fiction meme, it is, unfortunately, bad fictional science. No AI researcher I know has the first clue about how we could achieve overnight superintelligence, and as far as I know, no-one has a reasonable road-map for so-called Artificial General Intelligence, with metrics for partial progress remaining controversial and problematic (Goertzel 2014). It is worth debunking a couple of Bostrom’s ideas on how such rapid superintelligence could be achieved. One is incremental automated AI engineering, that is an AI system writing a slightly more intelligent AI system, which itself writes an even more intelligent system, and so on.

pages: 1,737 words: 491,616

Rationality: From AI to Zombies
by Eliezer Yudkowsky
Published 11 Mar 2015

Rebuilding Intelligence Yudkowsky is a decision theorist and mathematician who works on foundational issues in Artificial General Intelligence (AGI), the theoretical study of domain-general problem-solving systems. Yudkowsky’s work in AI has been a major driving force behind his exploration of the psychology of human rationality, as he noted in his very first blog post on Overcoming Bias, The Martial Art of Rationality: Such understanding as I have of rationality, I acquired in the course of wrestling with the challenge of Artificial General Intelligence (an endeavor which, to actually succeed, would require sufficient mastery of rationality to build a complete working rationalist out of toothpicks and rubber bands).

But concepts are not useful or useless of themselves. Only usages are correct or incorrect. In the step Marcello was trying to take in the dance, he was trying to explain something for free, get something for nothing. It is an extremely common misstep, at least in my field. You can join a discussion on Artificial General Intelligence and watch people doing the same thing, left and right, over and over again—constantly skipping over things they don’t understand, without realizing that’s what they’re doing. In an eyeblink it happens: putting a non-controlling causal node behind something mysterious, a causal node that feels like an explanation but isn’t.

While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems. This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take Artificial Intelligence, for example. A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.

pages: 589 words: 147,053

The Age of Em: Work, Love and Life When Robots Rule the Earth
by Robin Hanson
Published 31 Mar 2016

However, researchers who don’t go out of their way to publish predictions, but are instead asked for forecasts in a survey, tend to give durations roughly 10 years longer than researchers who do make public predictions (Armstrong and Sotala 2012; Grace 2014). Shorter durations are given by researchers in the small AI subfield of “artificial general intelligence,” which is more ambitious in trying to write software that is good at a great many tasks at once. A recent survey of the 100 most cited living AI researchers got 29 responses, who gave a median forecast of 37 years until there is a 50% chance of human level AI (Müller and Bostrom 2014).

For example, if it is hard to protect cities against nuclear attacks, cities will be smaller and spread further apart. However, to the extent that there are em enclaves well protected against attack, those probably look more like the scenario described in this book. In a second variation, we might create artificial general intelligence that is similar to ems, except that it is made via a shallower analysis of higher-level human brain processes, instead of via directly emulating lower-level brain processes as in a classic em. Such variations on ems probably are not greatly redesigned at the highest levels of organization, and thus are relatively human in behavior and style.

Alston, Julian, Matthew Andersen, Jennifer James, and Philip Pardey. 2011. “The Economic Returns to U.S. Public Agricultural Research.” American Journal of Agricultural Economics 93(5): 1257–1277. Alstott, Jeff. 2013. “Will We Hit a Wall? Forecasting Bottlenecks to Whole Brain Emulation Development.” Journal of Artificial General Intelligence 4(3): 153–163. Alvanchi, Amin, SangHyun Lee, and Simaan AbouRizk. 2012. “Dynamics of Working Hours in Construction.” Journal of Construction Engineering and Management 138(1): 66–77. Alwin, Duane, and Jon Krosnick. 1991. “Aging, Cohorts, and the Stability of Sociopolitical Orientations Over the Life Span.”

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

Think about it: land-grab, colonisation, urban creep, loss of habitat, the current fad for seasteading (sea cities with vast oceans at their disposal). And space itself – the go-to fascination of rich men: Richard Branson, Elon Musk, Jeff Bezos. When I think about artificial intelligence, and what is surely to follow – artificial general intelligence, or superintelligence – it seems to me that what this affects most, now and later, isn’t space but time. The brain uses chemicals to transmit information. A computer uses electricity. Signals travel at high speeds through the nervous system (neurons fire 200 times a second, or 200 hertz) but computer processors are measured in gigahertz – billions of cycles per second.
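The speed gap the passage describes is stark when computed directly. A back-of-envelope sketch: the 200 Hz neuron figure is the text's own, while the 1 GHz processor clock is an illustrative assumption (modern processors run at several gigahertz, so this is conservative):

```python
# Back-of-envelope comparison of the passage's figures. The 200 Hz neuron
# firing rate is cited in the text; the 1 GHz clock is an illustrative,
# conservative assumption.
neuron_hz = 200
processor_hz = 1_000_000_000  # 1 GHz

ratio = processor_hz / neuron_hz
print(f"Cycle-rate gap: {ratio:,.0f}x")  # Cycle-rate gap: 5,000,000x
```

A raw cycle-rate ratio is not a measure of intelligence, of course, but it illustrates why time, rather than space, is the dimension such a system would transform.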

Embodiment is one option, but for AI it isn’t the only option – or even the best option. I want to clarify here that narrow AI, the kind of everyday AI that manages one task or goal (playing chess, sorting the mail) is a small part of what AI is becoming as we seek to develop ‘solid’ AI, or AGI – artificial general intelligence —a multitasking, thinking entity that will eventually become autonomous – able to set its own goals and make its own decisions. Intelligence, certainly, and consciousness, probably, is proving not to be dependent on biology. That shouldn’t be such a surprise. Every religion in the world starts from that premise.

* * * Elon Musk and Sam Altman (CEO of the start-up funder Y Combinator) launched OpenAI in 2015 as a non-profit organisation promoting more inclusive AI – more benefits for more people – and to explore safe AGI. (We don’t want a Skynet situation.) Musk, who has since left the organisation due to what he calls conflicts of interest, is notably worried about artificial general intelligence – the point where AI becomes an autonomous self-monitoring system. He may be worried because AGI could take one glance at self-described Techno-kings like Musk, and shut him down. But that’s another story. AGI as the potential enemy ‘out there’, where humans like their enemies to be – ‘the other’ – is more exciting, in many ways more manageable, psychologically, than the fact of us humans as the real threat, unable to turn the AI we are developing to uses that benefit all of humanity.

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

The rest of this chapter will help you to understand the state of artificial intelligence today. AI vs. AGI Artificial intelligence, also known as AI, has been misused in pop culture to describe almost any kind of computerized analysis or automation. To avoid confusion, technical experts in the field of AI prefer to use the term Artificial General Intelligence (AGI) to refer to machines with human-level or higher intelligence, capable of abstracting concepts from limited experience and transferring knowledge between domains. AGI is also called “Strong AI” to differentiate it from “Weak AI” or “Narrow AI,” which refers to systems designed for one specific task and whose capabilities are not easily transferable to other systems.

A System That Masters is an intelligent agent capable of constructing abstract concepts and strategic plans from sparse data. By creating modular, conceptual representations of the world around us, we are able to transfer knowledge from one domain to another, a key feature of general intelligence. As we discussed earlier, no modern AI system is an AGI, or artificial general intelligence. While humans are Systems That Master, current AI programs are not. Systems That Evolve This final category refers to systems that exhibit superhuman intelligence and capabilities, such as the ability to dynamically change their own design and architecture to adapt to changing conditions in their environment.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Bachman, adapted for ebook Cover design: Christopher Brand and Oliver Munday ep_prh_6.1_144835715_c0_r0 Contents Cover Title Page Copyright Glossary of Key Terms Prologue Chapter 1: Containment Is Not Possible Part I: Homo Technologicus Chapter 2: Endless Proliferation Chapter 3: The Containment Problem Part II: The Next Wave Chapter 4: The Technology of Intelligence Chapter 5: The Technology of Life Chapter 6: The Wider Wave Chapter 7: Four Features of the Coming Wave Chapter 8: Unstoppable Incentives Part III: States of Failure Chapter 9: The Grand Bargain Chapter 10: Fragility Amplifiers Chapter 11: The Future of Nations Chapter 12: The Dilemma Part IV: Through the Wave Chapter 13: Containment Must Be Possible Chapter 14: Ten Steps Toward Containment Life After the Anthropocene Acknowledgments Notes Index About the Authors _144835715_ GLOSSARY OF KEY TERMS AI, AGI, AND ACI: Artificial intelligence (AI) is the science of teaching machines to learn humanlike capabilities. Artificial general intelligence (AGI) is the point at which an AI can perform all human cognitive skills better than the smartest humans. ACI, or artificial capable intelligence, is a fast-approaching point between AI and AGI: ACI can achieve a wide range of complex tasks but is still a long way from being fully general.

Not a talking point or an engineering ambition, but a reality. It happened in DeepMind’s first office in London’s Bloomsbury one day in 2012. After founding the company and securing initial funding, we spent a few years in stealth mode, focusing on the research and engineering of building AGI, or artificial general intelligence. The “general” in AGI refers to the technology’s intended broad scope; we wanted to build truly general learning agents that could exceed human performance at most cognitive tasks. Our quiet approach shifted with the creation of an algorithm called DQN, short for Deep Q-Network. Members of the team trained DQN to play a raft of classic Atari games, or, more specifically, we trained it to learn how to play the games by itself.

Internal research on GPT-4: GPT-4 Technical Report, OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4.pdf. See mobile.twitter.com/michalkosinski/status/1636683810631974912 for one of the early experiments. Early research even claimed: Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” arXiv, March 27, 2023, arxiv.org/abs/2303.12712. AIs are already finding ways: Alhussein Fawzi et al., “Discovering Novel Algorithms with AlphaTensor,” DeepMind, Oct. 5, 2022, www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor.

pages: 562 words: 201,502

Elon Musk
by Walter Isaacson
Published 11 Sep 2023

In his modern London office is an original edition of Alan Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” which proposed an “imitation game” that would pit a human against a ChatGPT–like machine. If the responses of the two were indistinguishable, he wrote, then it would be reasonable to say that machines could “think.” Influenced by Turing’s argument, Hassabis cofounded a company called DeepMind that sought to design computer-based neural networks that could achieve artificial general intelligence. In other words, it sought to make machines that could learn how to think like humans. “Elon and I hit it off right away, and I went to visit him at his rocket factory,” Hassabis says. While sitting in the canteen overlooking the assembly lines, Musk explained that his reason for building rockets that could go to Mars was that it might be a way to preserve human consciousness in the event of a world war, asteroid strike, or civilization collapse.

These include Neuralink, which aims to plant microchips in human brains; Optimus, a humanlike robot; and Dojo, a supercomputer that can use millions of videos to train an artificial neural network to simulate a human brain. It also spurred him to become obsessed with pushing to make Tesla cars self-driving. At first these endeavors were rather independent, but eventually Musk would tie them all together, along with a new chatbot company he founded called X.AI, to pursue the goal of artificial general intelligence. Musk’s determination to develop artificial intelligence capabilities at his own companies caused a break with OpenAI in 2018. He tried to convince Altman that OpenAI, which he thought was falling behind Google, should be folded into Tesla. The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting a for-profit arm that was able to raise equity funding.

“Whenever Elon pulls out his phone to take a video, you know that you’ve impressed him,” Lars Moravy says. Afterward, Musk announced that they would hold a public demonstration that would feature Optimus, Full Self-Driving, and Dojo. “In all of these,” he said, “we’re tackling the huge task of creating artificial general intelligence.” The event would be at Tesla’s Palo Alto headquarters on September 30, 2022, and be called AI Day 2. His design team created a logo that showed Optimus touching together its beautifully tapered fingers to form the shape of a heart. 78 Uncertainty Twitter, July–September 2022 Ari Emanuel hosing Musk down in Mykonos Alex Spiro The terminator Unsure what he wanted to do about Twitter, Musk asked for three options in June 2022.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

“TURING+” QUESTIONS TOMASO POGGIO Eugene McDermott Professor, Department of Brain and Cognitive Sciences, and director, Center for Brains, Minds, and Machines, MIT Recent months have seen an increasingly public debate forming around the risks of artificial intelligence—in particular, AGI (artificial general intelligence). AI has been called by some (including the physicist Stephen Hawking) the top existential risk to humankind, and such recent films as Her and Transcendence have reinforced the message. Thoughtful comments by experts in the field—Rod Brooks and Oren Etzioni among them—have done little to settle the debate.

My suspicion is that replicating the effectiveness of this evolved intelligence in an artificial agent will require amounts of computation not that much lower than evolution has required, which would far outstrip our abilities for many decades, even given exponential growth in computational efficiency per Moore’s Law—and that’s even if we understood how to correctly employ that computation. I assign a probability of about 1 percent for artificial general intelligence (AGI) arising in the next ten years, and about 10 percent over the next thirty years. (This essentially reflects a probability that my analysis is wrong, times a probability more representative of AI experts, who—albeit with lots of variation—tend to assign somewhat higher numbers.) On the other hand, I assign a rather high probability that, if AGI is created (and especially if it arises relatively quickly), it will be—in a word—insane.

The majority of predictions, like three-day weeks, personal jet packs, and the paperless office, tell us more about the times in which they were proposed than about contemporary experience. When people point to the future, we’d do well to run an eye back up the arm to see who’s doing the pointing. The possibility of artificial general intelligence has long invited such crystal-ball gazing, whether utopian or dystopian in tone. Yet speculations on this theme have reached such a pitch and intensity in the last few months alone (enough to trigger an Edge Question, no less) that this may reveal something about ourselves and our culture today.

pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2020

In the latter part of this chapter we’ll look at creating a biological life form from scratch, which will mean tackling the question of what life is. First, though, we’ll join the race to build a digital, computer-based entity capable of human-level flexible thinking. $ $ $ OUR GOAL IS WHAT IS CALLED artificial general intelligence (AGI). The general is the thing. There are accomplished AI systems already in operation, but their skills are non-transferrable. One of the world’s leading AI firms is DeepMind, which is owned by Google. It created a computer program called AlphaZero, which became the greatest chess player of all time when it was given the rules of the game and played itself over and over again, hundreds of millions of times.

As with space travel, our aim should be to create a well-funded umbrella organisation that fosters collaboration between these competing partners. Let’s call it the Tangled Bank, a nod to a key weird aspect of quantum physics, and to the poetic final paragraph in Darwin’s Origin of Species, where he considers a tangled bank on a riverside and its mass of evolutionary potential. $ $ $ WE SHOULD INVEST IN WORK towards both artificial general intelligence and quantum supremacy. Even if we don’t get to AGI, and we shouldn’t underestimate how tricky a problem it is, we’ll get lots of benefits on the way. A deep question here is whether a computer with human-level intelligence needs to be conscious. I think working on AGI is helpful for this question because it forces us to get to the heart of what we mean by consciousness.

Basic synthetic bacterial organisms made at scale for multiple purposes; progress on creating synthetic ‘higher’ life forms based on yeast cells. Stretch goals of two genesis events: a machine with human-level intelligence; a synthetic eukaryote life form with functions and genome that have not evolved by natural selection.

Money spent
Research towards the creation of artificial general intelligence: $100 billion
The Tangled Bank organisation for quantum computing: $100 billion
Creation of the Synthetic Alliance, an organisation aiming to develop artificial and synthetic life forms: $100 billion
Development of craft biorefineries: $10 billion
Total: $310 billion

EPILOGUE How to spend it

HOW TO SPEND A TRILLION DOLLARS started out as a bit of fun.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

The caliber of AI systems today is so limited and rudimentary that there is no straight line from even the most capable AI systems today to the kind of super-advanced AI that could change the nature of war. But neither do such scenarios depend on the development of advanced AI as it is popularly conceived of, either artificial general intelligence (AGI), a hypothesized form of AI with intelligence comparable to humans, or superintelligence, with capabilities far beyond humans. Even advanced forms of AI that lack human-level abilities in a variety of aspects could still be quite powerful. It is a fallacy to think that the pinnacle of intelligence is a human-like form of cognition or even that the natural trajectory of AI is toward intelligence that is more human-like.

My brother, Steve, has been a constant source of support, ideas, and inspiration. And I owe everything to my parents, Janice and David, for always believing in me. ABBREVIATIONS ABC American Broadcasting Company ACE Air Combat Evolution ACLU American Civil Liberties Union AFWERX Air Force Works AGI artificial general intelligence AI artificial intelligence AIDS acquired immunodeficiency syndrome ALS amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease) ASIC application-specific integrated circuit AU African Union AWACS airborne warning and control system AWCFT Algorithmic Warfare Cross-Functional Team BAAI Beijing Academy of Artificial Intelligence BBC British Broadcasting Corporation BERT Bidirectional Encoder Representations from Transformers BCE before common era C4ISR Command, Control, Communication, Cloud, Intelligence, Surveillance, and Reconnaissance CBC Canadian Broadcasting Corporation CBP Customs and Border Patrol CCP Chinese Communist Party CEIEC China National Electronics Import and Export Corporation CEO chief executive officer CFIUS Committee on Foreign Investment in the United States CIA Central Intelligence Agency CLIP Contrastive Language–Image Pretraining CMU Carnegie Mellon University COBOL common business-oriented language COVID coronavirus disease CPU central processing unit CSAIL Computer Science and Artificial Intelligence Laboratory DARPA Defense Advanced Research Projects Agency DC District of Columbia DDS Defense Digital Service DEA Drug Enforcement Administration DIU Defense Innovation Unit DIUx Defense Innovation Unit—Experimental DNA deoxyribonucleic acid DoD Department of Defense EOD explosive ordnance disposal EPA Environmental Protection Agency ERDCWERX Engineer Research and Development Center Works EU European Union EUV extreme ultraviolet FBI Federal Bureau of Investigation FedRAMP Federal Risk and Authorization Management Program FEMA Federal Emergency Management Agency FOUO For Official Use Only FPGA 
field-programmable gate arrays GAN generative adversarial network GAO Government Accountability Office GB gigabytes GDP gross domestic product GDPR General Data Protection Regulation GIF graphics interchange format GNP gross national product GPS global positioning system GPU graphics processing unit HA/DR humanitarian assistance / disaster relief HUD head-up display IARPA Intelligence Advanced Research Projects Activity ICE Immigration and Customs Enforcement IEC International Electrotechnical Commission IED improvised explosive device IEEE Institute for Electrical and Electronics Engineers IJOP Integrated Joint Operations Platform IoT Internet of Things IP intellectual property IP internet protocol ISIS Islamic State of Iraq and Syria ISO International Organization for Standardization ISR intelligence, surveillance, and reconnaissance ITU International Telecommunication Union JAIC Joint Artificial Intelligence Center JEDI Joint Enterprise Defense Infrastructure KGB Komitet Gosudarstvennoy Bezopasnosti (Комитет государственной безопасности) MAGA Make America Great Again MAVLab Micro Air Vehicle Lab MIRI Machine Intelligence Research Institute MIT Massachusetts Institute of Technology MPS Ministry of Public Service MRAP mine-resistant ambush protected NASA National Aeronautics and Space Administration NATO North Atlantic Treaty Organization NBC National Broadcasting Company NGA National Geospatial-Intelligence Agency NLG Natural Language Generation nm nanometer NOAA National Oceanic and Atmosphere Administration NREC National Robotics Engineering Center NSIC National Security Innovation Capital NSIN National Security Innovation Network NUDT National University of Defense Technology OTA other transaction authority PhD doctor of philosophy PLA People’s Liberation Army QR code quick response code R&D research and development RFP request for proposals RYaN Raketno Yadernoye Napadenie (Ракетно ядерное нападение) [nuclear missile attack] SEAL sea, air, land SMIC 
Semiconductor Manufacturing International Corporation SOFWERX Special Operations Forces Works SpaceWERX Space Force Works STEM science, technology, engineering, and mathematics TEVV test and evaluation, verification and validation TPU Tensor Processing Unit TRACE Target Recognition and Adaptation in Contested Environments TSA Transportation Security Administration TSMC Taiwan Semiconductor Manufacturing Company TTC Trade and Technology Council UAV unmanned aerial vehicle UK United Kingdom UN United Nations U.S.

Ortega, Vishal Maini, and the DeepMind safety team, “Building Safe Artificial Intelligence: Specification, Robustness, and Assurance,” Medium, September 27, 2018, https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1.
251 “test and evaluation”: Jim Alley et al., Autonomy Community of Interest (COI) Test and Evaluation, Verification and Validation (TEVV) Working Group Technology Investment Strategy 2015-2018 (Office of the Assistant Secretary of Defense for Research & Engineering, May 2015), https://defenseinnovationmarketplace.dtic.mil/wp-content/uploads/2018/02/OSD_ATEVV_STRAT_DIST_A_SIGNED.pdf.
251 adequate TEVV for autonomous systems: United States Air Force Office of the Chief Scientist, “Autonomous Horizons: System Autonomy in the Air Force—A Path to the Future,” AF/ST TR 15-01 (2015); Alley et al., Autonomy Community of Interest (COI) Test and Evaluation, Verification and Validation (TEVV) Working Group Technology Investment Strategy.
251 “immature as regards the ‘illities’”: The MITRE Corporation, “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,” JSR-16-Task-003 (The MITRE Corporation, January 2017), 27, 55.
251 “We know the fundamentals”: Howell, interview.
252 DoD needs to improve its processes for AI: Flournoy, Haines, and Chefitz, “Building Trust through Testing.”
252 “nowhere close to ensuring the performance”: Tarraf et al., “The Department of Defense Posture for Artificial Intelligence: Assessment and Recommendations,” xiii, xv.
252 “TEVV of traditional legacy systems is not sufficient”: National Security Commission on Artificial Intelligence, Final Report (n.d.), 137, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.
252 raft of recommendations: National Security Commission on Artificial Intelligence, Final Report, 131–140.
252 DoD bureaucratic structures to ensure “responsible AI”: Kathleen Hicks, “Implementing Responsible Artificial Intelligence in the Department of Defense,” memorandum for senior Pentagon leadership et al., May 26, 2021, https://media.defense.gov/2021/May/27/2002730593/-1/-1/0/IMPLEMENTING-RESPONSIBLE-ARTIFICIAL-INTELLIGENCE-IN-THE-DEPARTMENT-OF-DEFENSE.PDF.
252 AI “safety”: Patrick Tucker, “US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2,” Defense One, June 22, 2021, https://www.defenseone.com/technology/2021/06/us-needs-defend-its-artificial-intelligence-better-says-pentagon-no-2/174876/.
252 “responsible AI guidelines”: Jared Dunnmon, Bryce Goodman, Peter Kirechu, Carol Smith, and Alexandrea Van Deusen, “Responsible AI Guidelines in Practice,” Defense Innovation Unit, November 15, 2021, https://assets.ctfassets.net/3nanhbfkr0pc/acoo1Fj5uungnGNPJ3QWy/6ec382b3b5a20ec7de6defdb33b04dcd/2021_RAI_Report.pdf.
252 Responsible Artificial Intelligence Strategy: U.S.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

From the standpoint of modern AI, the laws fail to acknowledge any element of probability and risk: the legality of robot actions that expose a human to some probability of harm—however infinitesimal—is therefore unclear. 9. The notion of instrumental goals is due to Stephen Omohundro, “The nature of self-improving artificial intelligence” (unpublished manuscript, 2008). See also Stephen Omohundro, “The basic AI drives,” in Artificial General Intelligence 2008: Proceedings of the First AGI Conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin (IOS Press, 2008). 10. The objective of Johnny Depp’s character, Will Caster, seems to be to solve the problem of physical reincarnation so that he can be reunited with his wife, Evelyn. This just goes to show that the nature of the overarching objective doesn’t matter—the instrumental goals are all the same. 11.

Letting humans push the button: Robert Heath, “Electrical self-stimulation of the brain in man,” American Journal of Psychiatry 120 (1963): 571–77. 25. A first mathematical treatment of wireheading, showing how it occurs in reinforcement learning agents: Mark Ring and Laurent Orseau, “Delusion, survival, and intelligent agents,” in Artificial General Intelligence: 4th International Conference, ed. Jürgen Schmidhuber, Kristinn Thórisson, and Moshe Looks (Springer, 2011). One possible solution to the wireheading problem: Tom Everitt and Marcus Hutter, “Avoiding wireheading with value reinforcement learning,” arXiv:1605.03143 (2016). 26. How it might be possible for an intelligence explosion to occur safely: Benja Fallenstein and Nate Soares, “Vingean reflection: Reliable reasoning for self-improving agents,” technical report 2015-2, Machine Intelligence Research Institute, 2015. 27.

The difficulty agents face in reasoning about themselves and their successors: Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence,” in Artificial General Intelligence: 7th International Conference, ed. Ben Goertzel, Laurent Orseau, and Javier Snaider (Springer, 2014). 28. Showing why an agent might pursue an objective different from its true objective if its computational abilities are limited: Jonathan Sorg, Satinder Singh, and Richard Lewis, “Internal rewards mitigate agent boundedness,” in Proceedings of the 27th International Conference on Machine Learning, ed.

Demystifying Smart Cities
by Anders Lisdorf

Furthermore, we do not expect AI to be merely indistinguishable from humans; we typically want it to also be superior to humans, whether in precision, scope, time, or some other parameter. We typically want AI to be better than us. Another thing to keep in mind is a distinction between Artificial General Intelligence (AGI), as measured by the Turing test, and Artificial Narrow Intelligence (ANI), which is an application of humanlike intelligence in a particular area for a particular purpose. In our context, we will not go further into AGI and the philosophical implications of this but focus on ANI since this has many contemporary applications.

In many ways, this is already the case. A more peaceful case in point is online dating: a programmer has essentially decided who should find love and who shouldn’t through the matching algorithm and the input used. Inside the AI is the programmer making decisions no one ever agreed they should. Artificial General Intelligence is as elusive as ever – no matter how many resources we throw at AI and no matter how impressive it can be at simple games. Life will throw us the same problems as it always has, and at the end of the day, the intelligence will be human anyway. Artificial Intelligence meets the real world Another important constraint for AI is ecological – not in the sense of the tech ecosystem consisting of different vendors, projects, and organizations.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

There is a spectrum of approach, for instance, between those who seek to recreate the neural engineering of the human brain, just as ‘early designs for flying machines included flapping wings’, and those who employ entirely new techniques tailored for artificial computation.24 Some researchers seek the holy grail of an artificial general intelligence like the human mind, endowed with consciousness, creativity, common sense, and the ability to ‘think’ abstractly across different environments. One way to achieve this goal might be whole-brain emulation, currently being pursued in the Blue Brain project in Switzerland. This involves trying to map, simulate, and replicate the activity of the (more than) 80 billion neurons and tens of trillions of synapses in the human brain, together with the workings of the central nervous system.25 Whole-brain emulation remains a remote prospect but is not thought to be technically impossible.26 As Murray Shanahan argues, our own brains are proof that it’s physically possible to assemble ‘billions of ultra-low-power, nano-scale components into a device capable of human-level intelligence’.27 Most contemporary AI research, however, is not concerned with artificial general intelligence or whole-brain emulation.

Rather, it is geared toward creating machines capable of performing specific, often quite narrow, tasks with an extraordinary degree of efficacy. AlphaGo, Deep Blue, and Watson did not possess ‘minds’ like those of a human being. Deep Blue, whose only function was to play chess, used ‘brute number-crunching force’ to process hundreds of millions of positions each second, generating every possible move for up to twenty or so moves.28 It’s tempting to get hung up on the distinction between machines that have a narrow field of cognitive capacity and those able to ‘think’ or solve problems more generally.
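The ‘brute number-crunching force’ described here is, at bottom, exhaustive game-tree search. A minimal sketch of that idea (plain minimax on a hypothetical hand-built toy tree, not Deep Blue's engine, which added alpha-beta pruning, handcrafted evaluation functions, and custom hardware):

```python
# Toy illustration of exhaustive game-tree search (minimax), the brute-force
# idea behind chess engines like Deep Blue. A "node" is either a list of
# child nodes (a position with moves available) or a number (a terminal
# position's score from the maximizing player's point of view).

def minimax(node, maximizing):
    """Return the best score achievable by searching every branch."""
    if isinstance(node, (int, float)):  # leaf: terminal position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: the maximizing player picks a branch, then the
# minimizing opponent replies with one of two counter-moves.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3: the best the maximizer can guarantee
```

The opponent minimizes within each branch (3, 2, and 0 respectively), so the maximizer's best guaranteed outcome is 3. Real engines cannot search to the end of chess, which is why depth limits and evaluation heuristics come in.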

First, they would have to be self-directing in the sense of being sufficiently coded to discharge their functions without any further intervention by human beings. This would either mean being engineered to deal with every possible situation or capable of ‘learning’ how to deal with new situations on the job. (This kind of self-direction does not, however, require artificial general intelligence or even a sense of morality. Aeroplane autopilot systems have a high degree of self-direction but no moral or cognitive capacity, and yet we trust their ability to keep us safe in the sky. Like an aeroplane, which is neither a moral agent nor even conscious of its own existence, a system could exert power without being aware that it is doing so.)48 Second, such systems would have to be self-sustaining, in the sense of being able to survive for a decent period without assistance from humans.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Whether and when we will ever build autonomous agents with superintelligence that operate like sentient life, designing and building new iterations of themselves, that can accomplish any goal at least as well as humans is unclear. We have clearly been inoculated with the idea, however, after exposure to cumulative doses of Skynet in The Terminator, HAL 9000 in 2001: A Space Odyssey, and Agent Smith in The Matrix. These extremely popular films portrayed sentient machines with artificial general intelligence, and many sci-fi movies have proven to be prescient, so fears about AI shouldn’t come as much of a surprise.73 We’ve heard doom projections from high-profile figures like Stephen Hawking (“the development of full AI could spell the end of the human race”), Elon Musk (“with AI we are summoning the demon”), Henry Kissinger (“could cause a rupture in history and unravel the way civilization works”), Bill Gates (“potentially more dangerous than a nuclear catastrophe”), and others.

In addition, he gave $10 million to the Future of Life Institute, in part to construct worst-case scenarios so that they can be anticipated and avoided.81 Max Tegmark, the MIT physicist who directs that institute, convened an international group of AI experts to forecast when we might see artificial general intelligence. The consensus, albeit with a fair amount of variability, was by the year 2055. Similarly, a report by researchers at the Future of Humanity Institute at Oxford and Yale Universities from a large survey of machine learning experts concluded, “There is a 50 percent chance of AI outperforming humans in all tasks in 45 years and automating all human jobs in 120 years.”82 Of interest, the Future of Humanity Institute’s director Nick Bostrom is the author of Superintelligence and the subject of an in-depth profile in the New Yorker as the proponent of AI as the “Doomsday Invention.”83 Tegmark points to the low probability of that occurring: “Superintelligence arguably falls into the same category as a massive asteroid strike such as the one that wiped out the dinosaurs.”84 Regardless of what the future holds, today’s AI is narrow.

Although one can imagine an artificial general intelligence that will treat humans as pets or kill us all, it’s a reach to claim that the moment is upon us: we’re Life 2.0 now, as Tegmark classifies us, such that we, as humans, can redesign our software, learning complex new skills but quite limited with respect to modulating our biological hardware.

pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech
by Franklin Foer
Published 31 Aug 2017

There’s a school of incrementalists, who cherish everything that has been accomplished to date—victories like the PageRank algorithm or the software that allows ATMs to read the scrawled writing on checks. This school holds out little to no hope that computers will ever acquire anything approximating human consciousness. Then there are the revolutionaries who gravitate toward Kurzweil and the singularitarian view. They aim to build computers with either “artificial general intelligence” or “strong AI.” For most of Google’s history, it trained its efforts on incremental improvements. During that earlier era, the company was run by Eric Schmidt—an older, experienced manager, whom Google’s investors forced Page and Brin to accept as their “adult” supervisor. That’s not to say that Schmidt was timid.

“This is the culmination of literally 50 years of my focus on artificial intelligence,” Kurzweil said upon signing up with Google. When you listen to Page talk to his employees, he returns time and again to the metaphor of the moonshot. The company has an Apollo-like program for reaching artificial general intelligence: a project called Google Brain, a moniker with creepy implications. (“The Google policy on a lot of things is to get right up to the creepy line and not cross it,” Eric Schmidt has quipped.) Google has spearheaded the revival of a concept first explored in the sixties, one that has failed until recently: neural networks, which involve computing modeled on the workings of the human brain.

pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Published 26 Jun 2017

We have not yet designed symbolic digital systems that understand how the world actually works as well as our own biological System 1 does. Our systems are increasingly effective at “narrow” artificial intelligence, for particular domains like Go or image recognition, but we are far from achieving what Shane Legg, a cofounder of DeepMind, has dubbed artificial general intelligence (AGI), which can apply intelligence to a variety of unanticipated types of problems.

Polanyi’s Pervasive Paradox

Davis and Marcus describe what is perhaps the most instrumental barrier to building such systems: “In doing commonsense reasoning, people . . . are drawing on . . . reasoning processes largely unavailable to introspection.”

Acton, Brian, 140 additive manufacturing, 107; See also 3D printing Adore Me, 62 adults, language learning by, 68–69 advertising content platforms and, 139 data-driven decision making for, 48, 50–51 Facebook and, 8–9 radio airplay as, 148 advertising agencies, 48 advertising revenue Android as means of increasing, 166 Craigslist’s effect on, 139 free apps and, 162 print media and, 130, 132, 139 African Americans identifying gifted students, 40 and search engine bias, 51–52 aggregators, 139–40 AGI (artificial general intelligence), 71 agriculture automated milking systems, 101 drones and, 99–100 “food computers,” 272 machine learning and, 79–80 robotics and, 101–2 Airbnb future of, 319–20 hotel experience vs., 222–23 lack of assets owned by, 6–7 limits to effects on hotel industry, 221–23 network effects, 193 as O2O platform, 186 peer reviews, 209–10 rapid growth of, 9 as two-sided network, 214 value proposition compared to Uber, 222 Airline Deregulation Act, 181n airlines, revenue management by, 181–82 air travel, virtualization in, 89 Akerlof, George, 207, 210 albums, recorded music, 145 algorithms; See also data-driven decision making bias in systems, 51–53 and Cambrian Explosion of robotics, 95–96 comparing human decisions to, 56 O2O platforms and, 193 Quantopian and, 267–70 superiority to System 1 reasoning, 38–41 “algo traders,” 268; See also automated investing Alibaba, 6–8 Alipay, 174 AlphaGo, 4–6, 14, 74, 80 Alter, Lloyd, 90 Amazon automatic price changes, 47 bar code reader app, 162 data-driven product recommendations, 47 development of Web Services, 142–43 Mechanical Turk, 260 as stack, 295 warehouse robotics, 103 Amazon EC2, 143 Amazon Go, 90–91 Amazon S3, 143 Amazon Web Services (AWS), 75, 142–43 American Airlines (AA), 182 amino acid creation, 271–72 analog copies, digital copies vs., 136 “Anatomy of a Large-Scale Hypertextual Web Search Engine, The” (Page and Brin), 233 Anderson, Chris, 98–100 Anderson, Tim, 94 Andreessen, Marc on crowdfunding, 262–63 
and Netscape, 34 as self-described “solutionist,” 297 on Teespring, 263–64 Android Blackberry vs., 168 contribution to Google revenue/profits, 204 iOS vs., 166–67 Angry Birds, 159–61 anonymity, digital currency and, 279–80 Antikythera mechanism, 66 APIs (application programming interfaces), 79 apophenia, 44n apparel, 186–88 Apple; See also iPhone acquiring innovation by acquiring companies, 265 and industrywide smartphone profits, 204 leveraging of platforms by, 331 Postmates and, 173, 185 profitability (2015), 204 revenue from paid apps, 164 “Rip, Mix, Burn” slogan, 144n as stack, 295 application programming interfaces (APIs), 79 AppNexus, 139 apps; See also platforms for banking, 89–90 demand curve and, 157–61 iPhone, 151–53 App Store, 158 Apter, Zach, 183 Aral, Sinan, 33 Archilochus, 60–61 architecture, computer-designed, 118 Aristophanes, 200 Arnaout, Ramy, 253 Arthur, Brian, 47–48 artificial general intelligence (AGI), 71 artificial hands, 272–75 artificial intelligence; See also machine learning current state of, 74–76 defined, 67 early attempts, 67–74 implications for future, 329–30 rule-based, 69–72 statistical pattern recognition and, 72–74 Art of Thinking Clearly, The (Dobelli), 43 arts, digital creativity in, 117–18 Ashenfelter, Orley, 38–39 ASICs (application-specific integrated circuits), 287 assets and incentives, 316 leveraging with O2O platforms, 196–97 replacement by platforms, 6–10 asymmetries of information, 206–10 asymptoting, 96 Atkeson, Andrew, 21 ATMs, 89 AT&T, 96, 130 August (smart door lock), 163 Austin, Texas, 223 Australia, 100 Authorize.Net, 171 Autodesk, 114–16, 119, 120 automated investing, 266–70 automation, effect on employment/wages, 332–33 automobiles, See cars Autor, David, 72, 101 background checks, 208, 209 back-office work, 82–83 BackRub, 233 Baidu, 192 Bakos, Yannis, 147n Bakunin, Mikhail, 278 Ballmer, Steve, 151–52 bandwagon effect, 217 banking, virtualization and, 89–90, 92 Bank of England, 280n bank tellers, 92 Barksdale, 
Jim, 145–46 barriers to entry, 96, 220 Bass, Carl, 106–7, 119–20 B2B (business-to-business) services, 188–90 Beastmode 2.0 Royale Chukkah, 290 Behance, 261 behavioral economics, 35, 43 Bell, Kristen, 261, 262 Benioff, Mark, 84–85 Benjamin, Robert, 311 Benson, Buster, 43–44 Berlin, Isiah, 60n Berners-Lee, Tim, 33, 34n, 138, 233 Bernstein, Michael, 260 Bertsimas, Dimitris, 39 Bezos, Jeff, 132, 142 bias of Airbnb hosts, 209–10 in algorithmic systems, 51–53 digital design’s freedom from, 116 management’s need to acknowledge, 323–24 and second-machine-age companies, 325 big data and Cambrian Explosion of robotics, 95 and credit scores, 46 and machine learning, 75–76 biology, computational, 116–17 Bird, Andrew, 121 Bitcoin, 279–88 China’s dominance of mining, 306–7 failure mode of, 317 fluctuation of value, 288 ledger for, 280–87 as model for larger economy, 296–97 recent troubles with, 305–7 and solutionism, 297 “Bitcoin: A Peer-to-Peer Electronic Cash System” (Nakamoto), 279 BlaBlaCar, 190–91, 197, 208 BlackBerry, 168, 203 Blitstein, Ryan, 117 blockchain as challenge to stacks, 298 and contracts, 291–95 development and deployment, 283–87 failure of, 317 and solutionism, 297 value as ledger beyond Bitcoin, 288–91 Blockchain Revolution (Tapscott and Tapscott), 298 Bloomberg Markets, 267 BMO Capital Markets, 204n Bobadilla-Suarez, Sebastian, 58n–59n Bock, Laszlo, 56–58 bonds, 131, 134 bonuses, credit card, 216 Bordeaux wines, 38–39 Boudreau, Kevin, 252–54 Bowie, David, 131, 134, 148 Bowie bonds, 131, 134 brand building, 210–11 Brat, Ilan, 12 Bredeche, Jean, 267 Brin, Sergey, 233 Broward County, Florida, 40 Brown, Joshua, 81–82 Brusson, Nicolas, 190 Burr, Donald, 177 Bush, Vannevar, 33 business conference venues, 189 Business Insider, 179 business processes, robotics and, 88–89 business process reengineering, 32–35 business travelers, lodging needs of, 222–23 Busque, Leah, 265 Buterin, Vitalik, 304–5 Byrne, Patrick, 290 Cairncross, Francis, 137 California, 208; See also 
specific cities Calo, Ryan, 52 Cambrian Explosion, 94–98 Cameron, Oliver, 324 Camp, Garrett, 200 capacity, perishing inventory and, 181 Card, David, 40 Care.com, 261 cars automated race car design, 114–16 autonomous, 17, 81–82 decline in ownership of, 197 cash, Bitcoin as equivalent to, 279 Casio QV-10 digital camera, 131 Caves, Richard, 23 Caviar, 186 CDs (compact discs), 145 cell phones, 129–30, 134–35; See also iPhone; smartphones Census Bureau, US, 42 central bankers, 305 centrally planned economies, 235–37 Chabris, Chris, 3 Chambers, Ephraim, 246 Champy, James, 32, 34–35, 37, 59 Chandler, Alfred, 309n Chase, 162 Chase Paymentech, 171 check-deposit app, 162 children, language learning by, 67–69 China Alibaba in, 7–8 concentration of Bitcoin wealth in, 306–7 and failure mode of Bitcoin, 317 mobile O2O platforms, 191–92 online payment service problems, 172 robotics in restaurants, 93 Shanghai Tower design, 118 Xiaomi, 203 Chipotle, 185 Choudary, Sangeet, 148 Christensen, Clay, 22, 264 Churchill, Winston, 301 Civil Aeronautics Board, US, 181n Civis Analytics, 50–51 Clash of Clans, 218 classified advertising revenue, 130, 132, 139 ClassPass, 205, 210 and economics of perishing inventory, 180–81 future of, 319–20 and problems with Unlimited offerings, 178–80, 184 and revenue management, 181–84 user experience, 211 ClassPass Unlimited, 178–79 Clear Channel, 135 clinical prediction, 41 Clinton, Hillary, 51 clothing, 186–88 cloud computing AI research, 75 APIs and, 79 Cambrian Explosion of robotics, 96–97 platform business, 195–96 coaches, 122–23, 334 Coase, Ronald, 309–13 cognitive biases, 43–46; See also bias Cohen, Steven, 270 Coles, John, 273–74 Collison, John, 171 Collison, Patrick, 171–74 Colton, Simon, 117 Columbia Record Club, 131 commoditization, 220–21 common sense, 54–55, 71, 81 companies continued dominance of, 311–12 continued relevance of, 301–27 DAO as alternative to, 301–5 decreasing life spans of, 330 economics of, 309–12 future of, 319–26 leading past 
the standard partnership, 323–26 management’s importance in, 320–23 markets vs., 310–11 as response to inherent incompleteness of contracts, 314–17 solutionism’s alternatives to, 297–99 TCE and, 312–15 and technologies of disruption, 307–9 Compass Fund, 267 complements (complementary goods) defined, 156 effect on supply/demand curves, 157–60 free, perfect, instant, 160–63 as key to successful platforms, 169 and open platforms, 164 platforms and, 151–68 and revenue management, 183–84 Stripe and, 173 complexity theory, 237 Composite Fund (D.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

“Elon Musk Just Now Realizing That Self-Driving Cars Are a ‘Hard Problem.’” Verge, July 5. www.theverge.com/2021/7/5/22563751/tesla-elon-musk-full-self-driving-admission-autopilot-crash. Heaven, Will Douglas. 2020. “Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?” MIT Technology Review, October 15. www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai. Heldring, Leander, James Robinson, and Sebastian Vollmer. 2021a. “The Economic Effects of the English Parliamentary Enclosures.” NBER Working Paper no. 29772. DOI:10.3386/w29772.

I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable. These hopes of human-level intelligence, sometimes also called “artificial general intelligence” (AGI), were soon dashed. Tellingly, nothing of great value came from the Dartmouth conference. As the spectacular promises made by AI researchers were all unmet, funding for the field dried up, and what came to be called the first “AI winter” set in. There was renewed enthusiasm in the early 1980s based on advances in computing technology and some limited success of expert systems, which promised to provide expert-like advice and recommendations.

An excellent account of Engelbart’s life and work, with an explicit discussion of two visions of how computers can be used, is the highly readable book by Markoff (2015). We should note that these ideas are still far from the mainstream in the area, which tends to be much more optimistic about the benefits of AI and even the possibility of artificial general intelligence. See, for example, Bostrom (2017), Christian (2020), Stuart Russell (2019), and Ford (2021) on advances in artificial intelligence, and Kurzweil (2005) and Diamandis and Kotler (2014) on the economic abundance that this would create. Our discussion of routine and nonroutine tasks builds on Autor, Levy, and Murnane’s (2003) seminal paper and Autor’s (2014) discussion of limits to automation.

pages: 238 words: 77,730

Final Jeopardy: Man vs. Machine and the Quest to Know Everything
by Stephen Baker
Published 17 Feb 2011

For most of these people, programming machines to catalogue knowledge and answer questions, whether manually or by machine, was a bit pedestrian. They weren’t looking for advances in technology that already existed. Instead, they were focused on a bolder challenge, the development of deep and broad machine intelligence known as Artificial General Intelligence. This, they believed, would lead to the next step of human evolution. The heart of the Singularity argument, as explained by the technologists Vernor Vinge and Ray Kurzweil, the leading evangelists of the concept, lies in the power of exponential growth. As Samuel Butler noted, machines evolve far faster than humans.

Research papers on the brain were also doubling every year. Some fifty thousand academic papers on neuroscience had been published in 2008 alone. “If you looked at neuroscience in 2005, or before that, you’re way out of date now,” he said. But which areas of brain research would lead to the development of Artificial General Intelligence? Hassabis had followed an unusual path toward AI research. At thirteen, he was the highest ranked chess player of his age on earth. But computers were already making inroads in chess. So why dedicate his brain, which he had every reason to believe was exceptional, to a field that machines would soon conquer?

pages: 492 words: 141,544

Red Moon
by Kim Stanley Robinson
Published 22 Oct 2018

There also had to be encouragements in the form of actually programmed prompts to help machine learning occur mechanically, to make algorithms create more algorithms. All this was hard; and even if he managed to do some of it, at best he would still be left with nothing more than an advanced search engine. Artificial general intelligence was just a phrase, not a reality. Nothing even close to consciousness would be achieved; a mouse had more consciousness than an AI by a factor that was essentially all to nothing, so a kind of infinity. But despite its limitations, this particular combination of programs might still find more than he or it knew it was looking for.

He had had to weave those particular taps into the system as potentialities only, and Little Eyeball would have to turn them on and make its way through them back into the Great Firewall and elsewhere. But the AI would still be operating, and he had left precise instructions for this contingency. Precise at first, anyway, then completely general: do the best you can! Help all good causes! It would be a test to see just how general its intelligence was. Artificial general intelligence: these names were so presumptuous, such hopeful bits of hype. As if calling something new by an old name would give it those old qualities. People did that a lot. It was a fund-raiser’s ontology. But on the other hand, attempts had to be made. So his little system would stay powered, hopefully, and even if restricted to a single device in Chengdu, it would at least not be destroyed.

Initiate direct insertion of improvements into current codes and laws. Announce these improvements after insertions completed. Press them by way of persuasive design methodology as outlined in captology and exploitationware studies. Flood the seams between system and lifeworld (Habermas). Always remember: an artificial general intelligence is not like human intelligence. AI operates by way of a set of algorithms, without consciousness. Its volition is as algorithmic as the rest of its operations, and is based on programmed axioms. Its sphere of action is sharply circumscribed. What it can do is extend its reach where it can.

pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It
by Marc Goodman
Published 24 Feb 2015

“Don Watson” might even engage in murder for hire by geo-locating human targets and hacking into objects connected to the Internet of Things surrounding victims, such as cars, elevators, and robots, in order to cause accidents resulting in the death of its prey. While such activities would be at the extreme level of what a narrow AI might accomplish, they would be easy for the next generation of computing: artificial general intelligence.

Man’s Last Invention: Artificial General Intelligence

By the time Skynet became self-aware, it had spread into millions of computer servers all across the planet. Ordinary computers in office buildings, dorm rooms, everywhere. It was software, in cyberspace. There was no system core. It could not be shut down.

It’s Written All Over Your Face On Your Best Behavior Augmenting Reality The Rise of Homo virtualis CHAPTER 15: RISE OF THE MACHINES: WHEN CYBER CRIME GOES 3-D We, Robot The Military-Industrial (Robotic) Complex A Robot in Every Home and Office Humans Need Not Apply Robot Rights, Law, Ethics, and Privacy Danger, Will Robinson Hacking Robots Game of Drones Robots Behaving Badly Attack of the Drones The Future of Robotics and Autonomous Machines Printing Crime: When Gutenberg Meets Gotti CHAPTER 16: NEXT-GENERATION SECURITY THREATS: WHY CYBER WAS ONLY THE BEGINNING Nearly Intelligent Talk to My Agent Black-Box Algorithms and the Fallacy of Math Neutrality Al-gorithm Capone and His AI Crime Bots When Watson Turns to a Life of Crime Man’s Last Invention: Artificial General Intelligence The AI-pocalypse How to Build a Brain Tapping Into Genius: Brain-Computer Interface Mind Reading, Brain Warrants, and Neuro-hackers Biology Is Information Technology Bio-computers and DNA Hard Drives Jurassic Park for Reals Invasion of the Bio-snatchers: Genetic Privacy, Bioethics, and DNA Stalkers Bio-cartels and New Opiates for the Masses Hacking the Software of Life: Bio-crime and Bioterrorism The Final Frontier: Space, Nano, and Quantum PART THREE SURVIVING PROGRESS CHAPTER 17: SURVIVING PROGRESS Killer Apps: Bad Software and Its Consequences Software Damages Reducing Data Pollution and Reclaiming Privacy Kill the Password Encryption by Default Taking a Byte out of Cyber Crime: Education Is Essential The Human Factor: The Forgotten Weak Link Bringing Human-Centered Design to Security Mother (Nature) Knows Best: Building an Immune System for the Internet Policing the Twenty-First Century Practicing Safe Techs: The Need for Good Cyber Hygiene The Cyber CDC: The World Health Organization for a Connected Planet CHAPTER 18: THE WAY FORWARD Ghosts in the Machine Building Resilience: Automating Defenses and Scaling for Good Reinventing Government: Jump-Starting Innovation Meaningful 
Public-Private Partnership We the People Gaming the System Eye on the Prize: Incentive Competitions for Global Security Getting Serious: A Manhattan Project for Cyber Final Thoughts Appendix: Everything’s Connected, Everyone’s Vulnerable: Here’s What You Can Do About It Acknowledgments Notes PROLOGUE The Irrational Optimist: How I Got This Way My entrée into the world of high-tech crime began innocuously in 1995 while working as a twenty-eight-year-old investigator and sergeant at the LAPD’s famed Parker Center police headquarters.

• This “telephone” has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us (internal memo at Western Union, 1878). Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing.

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

A self-driving car may be a safer driver than any human, but it can’t play Go or fix a flat tire. The long-term goal of AI research is to create machines that exhibit human-like intelligence—machines that can rapidly learn new tasks, see analogies between different tasks, and flexibly solve new problems. This goal is called “artificial general intelligence,” or AGI, to distinguish it from today’s limited AI. The essential question today’s AI industry faces is: Are we currently on a path to creating truly intelligent AGI machines, or will we once again get stuck and enter another AI winter? The current wave of AI has attracted thousands of researchers and billions of dollars of investment.

When you are in the middle of a bubble, it is easy to get swept up in the enthusiasm and believe it will go on forever. History suggests we should be cautious. I don’t know how long the current wave of AI will continue to grow. But I do know that deep learning does not put us on the path to creating truly intelligent machines. We can’t get to artificial general intelligence by doing more of what we are currently doing. We have to take a different approach. Two Paths to AGI There are two paths that AI researchers have followed to make intelligent machines. One path, the one we are following today, is focused on getting computers to outperform humans on specific tasks, such as playing Go or detecting cancerous cells in medical images.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

But at least those decisions have traditionally been centralized into the hands of a few. With AI, two smart fourteen-year-old kids will have the power to release an untested AI on the internet and disrupt our way of life. It is not unlikely. I think you agree.

If You Can’t Beat Them . . .

Some of those who recognize that we will not be able to control an artificial general intelligence that is smarter than us, suggest that we plug them directly into our bodies instead. A sort of ‘if you can’t beat them, join them’ mentality. There are already several examples of technology that plugs artificially intelligent computers directly into our cerebral cortex. While the current prototypes are still in their infancy, they certainly work, and judging by the current trajectory of their development, there seem to be no big obstacles to prevent us from a future where we could become cyborgs – half-human, half-machine – with our own intelligence extended in a limitless way by borrowing from the intelligence of the machines.

Warnings about the threats of superintelligence have been loud and clear since the day we conceived of the possibility of machine intelligence. Concern came as early as 1951 when, in his lecture Intelligent Machinery, A Heretical Theory, Alan Turing predicted that ‘once the machine thinking method had started, it would not take long to outstrip our feeble powers.’ As AI transitions to AGI, artificial general intelligence, and beyond the confines of the programmable tasks the machine was invented to carry out, the concerns heighten. Irving Good, who was a consultant on the supercomputers in 2001: A Space Odyssey, warned of an intelligence explosion that prominent thinkers and tech marvels the likes of Stephen Hawking and Elon Musk have repeatedly warned against.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

*11 Despite being the first American woman to spacewalk, Sullivan was no more of a fan of the movie Gravity—the Best Picture nominee centered around Sandra Bullock on a spacewalk gone awry—than Vescovo was of Top Gun. “Forget Gravity,” said Sullivan. “I mean, the visuals were cool. Everything about the way they operated and the physics—it’s all bullshit. Forget it.” *12 Although, this could change with the development of AGI or so-called artificial general intelligence—so bookmark this page and see if it still holds up in twenty years. *13 Though Dwan conveyed to me that he was relatively confident—considerably higher than 29 percent—by the time he decided to call. It was a huge pot and he wanted to give Wesley time to reveal another tell that might compel him to reevaluate

“It is the biggest existential risk in some category. And also the upsides are so great, we can’t not do it.” Altman told me that AI could be the best thing that’s ever happened to humanity. “If you have something like an AGI, I think poverty really does just end,” he said.[*3] (AGI refers to “artificial general intelligence.” The meaning of this term is so ambiguous that I’m not going to attempt a precise definition—just think of it as “really advanced AI.”) “We’re going to look back at this era in fifty or one hundred years and be like, we really let people live in poverty? Like, how did we do that?” So is @SamA in the same bucket as that other, highly problematic Sam, @SBF?

Or informally, getting more action from customers you don’t want, such as an all-you-can-eat sushi restaurant that sets up shop near a sumo wrestling tournament. Agency: As defined more completely in chapter ∞, being empowered to make robust, well-informed decisions; knowing which factors are inside one’s control. Agent: In game theory or AI, an entity possessed of enough intelligence to make reasonable strategic choices. AGI: Artificial general intelligence. The term lacks a clear definition but refers to at least broad human-level intelligence, sometimes as distinguished from artificial superintelligence (ASI), which surpasses that of humans. AI: See: artificial intelligence. AK: Ace-king, the best starting hand in Hold’em apart from a pocket pair.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

This condescending robot is quite ready to dismiss all of our intellectual accomplishments, but the human emotional experience remains a treasure beyond its grasp. Over the following decades, many authors would explore the idea of machines trying to comprehend and manipulate human feelings, as if they foresaw emotion all too soon becoming the one remaining bastion of humanity. In Arthur C. Clarke’s 2001, the artificial general intelligence known as HAL 9000 is capable of reading and to some degree expressing emotions.3 In fact, within the film version of 2001 by Stanley Kubrick, HAL is probably (and intentionally) the most emotive of all the characters. Much film analysis has been written about HAL going insane, but in many respects the computer was merely applying pure logic to the matter of its personal survival and fulfilling its mission.

Some recent efforts by organizations such as DARPA seek to ensure that AIs can “explain” their reasoning, but it remains questionable whether such an approach will be successful.10 Then there’s the argument about the difficulty or impossibility of developing a human-equivalent AI. This is a common assumption and a common error. Many people conflate the terms human intelligence, human-equivalent AI, and human-level machine intelligence with one another, when each must by definition be distinctly different. A truly human, artificially generated intelligence may never exist outside of a biologically-based substrate. Or if it ever does, it won’t be for a very long time. Likewise for human-equivalent AI. AI that thinks exactly as humans do will be extremely difficult to attain and therefore will probably take a significant amount of time to realize.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

Google Translate can render an English movie review into Chinese, but it can’t tell you if the reviewer liked the movie or not, and it certainly can’t watch and review the movie itself. The terms narrow and weak are used to contrast with strong, human-level, general, or full-blown AI (sometimes called AGI, or artificial general intelligence)—that is, the AI that we see in movies, that can do most everything we humans can do, and possibly much more. General AI might have been the original goal of the field, but achieving it has turned out to be much harder than expected. Over time, efforts in AI have become focused on particular well-defined tasks—speech recognition, chess playing, autonomous driving, and so on.

A Aaronson, Scott abstraction; in Bongard problems; in convolutional neural networks; in human cognition; in letter-string analogy problems activation maps activations: in encoder-decoder systems; formula for computing; in neural networks; in neurons; in recurrent neural networks; in word2vec active-symbol architecture active symbols adversarial examples: for computer vision; for deep Q-learning systems; for natural-language processing systems; for self-driving cars; for speech-recognition systems adversarial learning AGI, see general or human-level AI Agüera y Arcas, Blaise AI, see artificial intelligence AI Singularity, see Singularity AI spring AI winter AlexNet algorithm Allen, Paul Allen Institute for Artificial Intelligence; science questions data set AlphaGo; intelligence of; learning in AlphaGo Fan AlphaGo Lee AlphaGo Zero AlphaZero Amazon Mechanical Turk; origin of name American Civil Liberties Union (ACLU) analogy: in humans; letter-string microworld; relationship to categories and concepts; using word vectors; in visual situations; see also Copycat artificial general intelligence, see general or human-level AI artificial intelligence: beneficial; bias in; creativity in; definition of; explainability; general or human-level; moral; origin of term; regulation of; relationship to deep learning and machine learning; “right to explanation”; spring; strong; subsymbolic; symbolic; unemployment due to; weak; winter Asimov, Isaac; fundamental Rules of Robotics Atari video games; see also Breakout automated image captioning autonomous vehicles, see self-driving cars B back-propagation; in convolutional neural networks; in deep reinforcement learning barrier of meaning Barsalou, Lawrence beneficial AI Bengio, Yoshua bias; in face recognition; in word vectors big data bilingual evaluation understudy (BLEU) board positions; in checkers; in chess; in Go Bongard, Mikhail Bongard problems Bored Yann LeCun Bostrom, Nick Brackeen, Brian Breakout; deep Q-learning for; 
transfer learning on Brin, Sergey brittleness of AI systems Brooks, Rodney C CaptionBot Centre for the Study of Existential Risk checkers; see also Samuel’s checkers-playing program chess; see also Deep Blue Clark, Andy Clarke, Arthur C.

pages: 337 words: 96,666

Practical Doomsday: A User's Guide to the End of the World
by Michal Zalewski
Published 11 Jan 2022

The AI expands and improves the operation, developing new assembly methods and new resource extraction and recycling procedures, until it’s done converting the entire planet and its many inhabitants into paperclips. The point of the tale is simple: the AI doesn’t need to hate you or love you; it suffices that you’re made of atoms it has a different use for. One could argue that the development of artificial general intelligence (AGI)—a complete “brain in a jar,” if you will—would be the point of singularity, a moment in the development of our species where the change is so monumental and so sudden that we can’t meaningfully reason about what lies beyond. The extinction of our species is one distinct possibility, but we can imagine many other, more optimistic outcomes—and have no way to truly measure the risk.

Numbers 2FA (two-factor authentication), 108 3M gasmaks, 177 3M respirators, 175 9-11 attacks, 26–27 9mm Luger, 214 28 Days Later, 32 .38 Special, 214 A Abine, 110 accidental injuries car accidents, 96–98 firearms, 196, 217–218 industrial accidents, 19–20, 177 statistics on, 12–13 action plan, 121–125 Acxiom, 110 addiction, 99–100 adverse judgments, 69–70 advertising industry, 110 AGI (artificial general intelligence), 42 AI winter, 42 AIDS, 32 alarm systems, 112 alcohol use, 100–101 Alinco, 188 alkaline batteries, 154–155 allergies, 150 aluminum zirconium tetrachlorohydrex gly, 149 amateur radio, 188–189 ammunition, 217 amoxicillin, 151 anesthetics miconazole nitrate, 150 animal-borne diseases, 176 antibiotics, 151 antifungal cream, 150 anti-itch cream, 150 antiperspirant, 149 antivirus programs, 106 Anytime Mailbox, 111 apocalypse, predictions of, 30–31 appliances, 152–159 APRS (Automatic Packet Reporting System), 188–189 Aquatabs, 134 Aqua-Tainer products, 133 AR-15, 215–216 Arm & Hammer, 149 Armero volcano eruption, 35 artificial intelligence, 31, 42 asteroids, 34 asthma, 150 atom bombs, 30–31, 39–40, 178–180 Augason Farms, 141 auto accidents, 96–98 auto insurance, 51–52 auto repairs, 163–164 automated billing, 51 automatic center punches, 164 Automatic Packet Reporting System (APRS), 188–189 avian flu, 26 B backyard gardens, 144 bail-ins, 22 bailouts, 68–69 baking soda, 148 ballistic vests, 129–130 bank accounts, 68–69, 76–77 banking crises, 22, 68–69 Bankrate.com, 10 Banqiao dam failure, 20 BaoFeng, 188 Barbot, Oxiris, 25 barter, 58–59, 73–74 batteries, 154–155, 163 bear spray, 171 BeenVerified, 110 Beirut nitrate explosion, 20 benzalkonium chloride (BZK) wipes, 149 benzocaine ointments, 151 The Bet (Sabin), 8 Bhopal disaster, 20 bicycles, 166 billionaires, 73 bird shot, 217 birth certificates, 166–167 Bitcoin, 66–68 Black Death, 32, 176 black holes, 35 bleeding, 151, 177 blizzards, 18 blockchain, 67 blunt instruments, 208 Bogle, John C., 80–81 bonds, 77–78 
books, 191–192 bows, 208–209 brain in a jar, 42 break-ins, 111–112, 201 bromadiolone, 177 bromethalin, 177 Brooks, Max, 30 buckshot, 217 bug-out situations, 165–171 bulletproof vests, 129–130 bullets, 217 Bureau of Justice Statistics, 13 burglaries, 13, 111–112, 201 Butte fire complex, 18 BZK (benzalkonium chloride) wipes, 149 C caffeine pills, 150 California Consumer Privacy Act (CCPA), 111 California Gun Laws (Michel and Cubeiro), 196 calorie needs, 138–139 calorie restriction, 117–118 cameras, 201 camping, 167–168, 171 Canberra MRAD113, 179 candles, 154 Capital in the Twenty-First Century (Piketty), 72 car accidents, 96–98 car break-ins, 111 car insurance, 51–52 car repairs, 163–164 career planning, 91–93 cash, 49–55, 75–76 Cato Institute, 23 CB (citizens band) radios, 186 CCPA (California Consumer Privacy Act), 111 CDC (Centers for Disease Control and Prevention), 13 cell phones, 155–156, 181 cetirizine, 150 chains, 163 chainsaws, 162 Champion generators, 156 Charlie’s Soap, 149 Chernobyl Nuclear Power Plant, 20, 31 Chicago Sunday Tribune, 37–38 chlorination, 134 choking, 104 cholecalciferol, 177 Christmas Island, 33 citizens band (CB) radios, 186 class tensions, 72–73 clathrate gun hypothesis, 33 cleaning, 148 climate change, 18, 33–34 clip-on pulse oximeters, 150 CME (coronal mass ejection), 35–36 cocaine, 100 coincidence of wants, 58 coins, 59–63 collectibles, 86–87, 201 Colorado floods, 11 come-alongs, 162–163 commodity futures options, 71, 81–83 commodity money, 60 communications, 181–190 community property, 125 compensation, 50 confiscatory taxes, 72–73 constitutional carry, 206 consumer debt, 53–54 consumer lending, 63 consumer prices, 70–71 contraceptives, 150 convictions, 6–8 cooking, 158 CoreLogic, 110 coronal mass ejection (CME), 35–36 cosmic threats, 35–36 cost of living, 11 cough, 150 court fights, 69–70 coveralls, 175 COVID-19 pandemic, 25–26, 174–176 CPR procedure, 151–152 credit cards, 53–54 Cretaceous–Paleogene extinction, 34 criminal 
victimization, 13 crisis indicators, 124 critical decision points, 124 crony beliefs, 6–8 crossbows, 208–209 cryptocurrencies, 66–68, 84, 87 Cubeiro, Matthew D., 196 currencies, history of, 58–68 customer data, 110 cuts, 150 Cypriot debt crisis, 69, 73 D data brokers, 110 Datrex, 143 d-CON, 177 DDT (dichlorodiphenyltrichloroethane), 176 De Waal, Frans, 20 death causes of, 13 planning, 14–15, 124–125 debt, 10–11, 50, 53–54, 59–60 debt crisis, 22 Debt: The First 5000 Years (Graeber), 59 de-escalation skills, 13 defensive driving, 96–98 dehydration, 134 DeleteMe Help Center, 110 deltamethrin, 176 dental care, 151 dental picks, 151 developing countries, 33–34 dextromethorphan, 150 diarrhea, 150 dichlorodiphenyltrichloroethane (DDT), 176 Didion, Joan, 122 dietary supplements, 180 diets, 115–118 digital communications, 188–189 Digital Mobile Radio (DMR), 188 diindolylmethane (DIM), 179 dinosaurs, 34 diseases, 32–33, 173–177 dishwashing, 148 Diversey Oxivir Five 16, 176 Diversey PERdiem, 148 diversified portfolios, 88–-90 divorce, 69–70 documents, 166–167 Dogecoin, 68 dogs as burglary deterrent, 112 domestic terrorism, 26–27 driving habits, 96–98 drowning, 104 drugs, 99–101, 150 D-STAR, 188 “duck and cover,” 39 DuPont Tychem coveralls, 175 dust storms, 18 duty to retreat, 205 Dynarex, 149 E earthquake probabilities, 19 Ebola, 26, 32 economic crises, 22–24 economic hardships, 10–11 economic persecution, 72 ecosystem collapse, 34 Ehrlich, Paul R., 7–8, 30 elastic bandages, 150 electricity, 36, 152–159 electrolyte imbalance, 150 emergency ration bars, 143 emergency repairs, 162–163 EMP (electromagnetic pulse), 40–41 employment, 91–93 encephalitis lethargica, 25 Energizer Ultimate batteries, 155 entertainment, 191–192 epinephrine inhalers, 150 Epsilon Data Management, 110 Equifax, 110 equities, 79–81 escheatment, 77 eugenics, 37 eugenol, 151 evacuation, 165–171 exercise, 117–118 expenses, 51–52 Experian, 110 Expose, 176 extinction, 34 extraterrestrial life, 43 extreme 
weather, 18, 156–158, 168 F Facebook, 109, 110, 155 fall injuries, 98–99 false vacuum decay, 35 Family Radio Service (FRS), 186–187 farming, 137 Federal Emergency Management Agency (FEMA), 19, 132 fever, 150 fiat money, 64–65 fiction, 29–30 fighting, 113, 206 financial problems, 10–11 firearms, 196–197, 211–219 fires Butte fire complex, 18 house fires, 11, 18, 103–104 wildfires, 18, 44, 124 firewood, 158, 170 first aid, 149–152 fitness, 115–118 fixed-blade knives, 170 flashlights, 154–155 flat tires, 163 floods, 19, 147–148 floss, 151 flu, 25 fluticasone propionate, 150 FMJ (full metal jacket) bullets, 217 food-borne illness, 141–142 food preparation, 158 food security, 137–144 foraging, 168–169 foreclosures, 10–11 foreign currencies, 78–79 Forgey, William W., 152 Forster, E.

Global Catastrophic Risks
by Nick Bostrom and Milan M. Cirkovic
Published 2 Jul 2008

Given a different empirical belief about the actual real-world consequences of a communist system, the decision may undergo a corresponding change. We would expect a true AI, an Artificial General Intelligence, to be capable of changing its empirical beliefs (or its probabilistic world model, etc.). If somehow Charles Babbage had lived before Nicolaus Copernicus, somehow computers had been invented before telescopes, and somehow the programmers of that day and age successfully created an Artificial General Intelligence, it would not follow that the AI would believe forever after that the Sun orbited the Earth. The AI might transcend the factual error of its programmers, provided that the programmers understood inference rather better than they understood astronomy.

We cannot rely on having distant advance warning before AI is created; past technological revolutions usually did not telegraph themselves to people alive at the time, whatever was said afterwards in hindsight. The mathematics and techniques of Friendly AI will not materialize from nowhere when needed; it takes years to lay firm foundations. Furthermore, we need to solve the Friendly AI challenge before Artificial General Intelligence is created, not afterwards; I should not even have to point this out. There will be difficulties for Friendly AI because the field of AI itself is in a state of low consensus and high entropy. But that does not mean we do not need to worry about Friendly AI. It means there will be difficulties.

The physics of information processing superobjects: daily life among the Jupiter brains. J. Evol. Technol., 5. http://ftp.nada.kth.se/pub/home/asa/work/Brains/Brains2 Schmidhuber, J. (2003). Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. In Goertzel, B. and Pennachin, C. (eds.), Artificial General Intelligence (New York: Springer-Verlag). Sober, E. (1984). The Nature of Selection (Cambridge, MA: MIT Press). Tooby, J. and Cosmides, L. (1992). The psychological foundations of culture. In Barkow, J. H., Cosmides, L. and Tooby, J. (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press).

pages: 1,034 words: 241,773

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress
by Steven Pinker
Published 13 Feb 2018

The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal.23 The fallacy leads to nonsensical questions like when an AI will “exceed human-level intelligence,” and to the image of an ultimate “Artificial General Intelligence” (AGI) with God-like omniscience and omnipotence. Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains.24 People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes.

Understanding does not obey Moore’s Law: knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.25 Devouring the information on the Internet will not confer omniscience either: big data is still finite data, and the universe of knowledge is infinite. For these reasons, many AI researchers are annoyed by the latest round of hype (the perennial bane of AI) which has misled observers into thinking that Artificial General Intelligence is just around the corner.26 As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious but because the concept is barely coherent. The 2010s have, to be sure, brought us systems that can drive cars, caption photographs, recognize speech, and beat humans at Jeopardy!

Ph.D. dissertation, Pennsylvania State University. Olfson, M., Druss, B. G., & Marcus, S. C. 2015. Trends in mental health care among children and adolescents. New England Journal of Medicine, 372, 2029–38. Omohundro, S. M. 2008. The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin, eds., Artificial general intelligence 2008: Proceedings of the first AGI conference. Amsterdam: IOS Press. Oreskes, N., & Conway, E. 2010. Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. New York: Bloomsbury Press. Ortiz-Ospina, E., Lee, L., & Roser, M. 2016.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

To address the decolonization of AI concretely, two crucial factors come to the fore: the language surrounding AI and the imaginaries we associate with it. The language used to represent AI often masks the accountability of the human beings who develop the technology. AI is portrayed as ‘intelligent’ and ‘self-learning’. This raises important questions about human intelligence and creativity. Moreover, the ‘artificial general intelligence’ concept prompts us to ponder the potential implications for humanity. Additionally, while discussions about robot rights are ongoing, it is essential to acknowledge that some human beings lack fundamental human rights (De Graaf/Hindriks/Hindriks 2021). The language we use to discuss AI erodes our understanding of what it means to be human.

What culture can do here is reflect on the underlying images of humanity and make visible the basic assumption that is widespread in literature and public perception: reflecting on AI as an independent and powerful agent and showing how these ideas are already anchored communicatively in various cultural or even religious practices. Not to be distracted by the constant need to generate anthropomorphizing images of AI or the still unrealized idea of an artificial general intelligence (AGI) or singularity speculations, which are being widely discussed in academia (e.g. Chalmers 2010) and development (e.g. Ray Kurzweil), the focus on human agency and the expansion of interaction possibilities seems to be a central category, as well as the question of the extent to which AI systems expand or restrict the scope of action and freedom.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Some want to replace humans with machines; some are resigned to the inevitability—“I for one, welcome our insect overlords” (later “robot overlords”) was a meme that was popularized by The Simpsons—and some of them just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences. Discussions about the state of AI technology today veer into the realm of science fiction or perhaps religion.

Abbeel, Pieter, 268 Abelson, Robert, 180–181 Abovitz, Rony, 271–275 Active Ontologies, 304 Ad Hoc Committee on the Triple Revolution, 73–74 agent-based interfaces, 195–226. see also Siri (Apple) avatars, 304, 305 Baxter (robot), 195–196, 204–205, 205, 207 Brooks and, 201–204 CALO, 31, 297, 302–304, 310, 311 chatbots, 221–225, 304 early personal computing and, 196–201 ethics of, 339–342 “golemics” and, 208–215 Google and, 12–13, 341 Microsoft and, 187–191, 215–220 Rethink Robotics and, 204–208 singularity and, 220–221 Agents, Inc., 191–192 aging, of humans, 93–94, 236–237, 245, 327–332 “Alchemy and Artificial Intelligence” (Dreyfus), 177 Allen, Paul, 267, 268, 337 Alone Together (Turkle), 173, 221–222 Amazon, 97–98, 206, 247 Ambler (robot), 33, 202 Anderson, Chris, 88 Andreessen, Marc, 69 Apocalypse AI (Geraci), 85, 116–117 Apple. see also Siri (Apple) early history of, 7, 8, 214, 279–281, 307 iPhone, 23, 93, 239, 275, 281 iPod, 194, 275, 281 Jobs and, 13, 35, 112, 131, 194, 214, 241, 281–282, 320–323 Knowledge Navigator, 188, 300, 304, 305–310, 317, 318 labor force of, 83–84 Rubin and, 240 Sculley and, 35, 280, 300, 305, 306, 307, 317 Architecture Machine, The (Negroponte), 191 Architecture Machine Group, 306–307, 308–309 Arkin, Ronald, 333–335 Armer, Paul, 74 Aronson, Louise, 328 Artificial General Intelligence, 26 artificial intelligence (AI). see artificial intelligence (AI) history; autonomous vehicles; intelligence augmentation (IA) versus AI; labor force; robotics advancement; Siri (Apple) artificial intelligence (AI) history, 95–158. 
see also intelligence augmentation (IA) versus AI AI commercialization, 156–158 AI terminology, xii, 105–109 AI Winter, 16, 130–131, 140 Breiner and, 125–135 deep learning neural networks, 150–156, 151 early neural networks, 141–150 expert systems, 134–141, 285 McCarthy and, 109–115 Moravec and, 115–125 Silicon Valley inception, 95–99, 100, 256 SRI inception, 99–105 Strong artificial intelligence, 12, 26, 272 “Artificial Intelligence” (Lighthill), 130 “Artificial Intelligence of Hubert L.

pages: 190 words: 46,977

Elon Musk: A Mission to Save the World
by Anna Crowley Redding
Published 1 Jul 2019

“I met with Obama for one reason”—to talk about the dangers of artificial intelligence.169 In 2015, Elon cofounded a nonprofit called OpenAI to research the development of AI and how AI can be used to benefit humanity instead of … um … to annihilate us. OpenAI is a nonprofit “AI research company, discovering and enacting the path to safe artificial general intelligence.”170 “It’s going to be very tempting to use AI as a weapon. In fact, it will be used as a weapon. The on-ramp to serious AI, the danger is going to be more humans using it against each other, I think, most likely. That will be the danger,”171 he explained in a podcast with Joe Rogan.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

This platform might also be able to write basic accounts of events like sports games or what happened in the stock market, summarize long texts, and become a great companion tool for reporters, financial analysts, writers, and anyone who works with language. TURING TEST, AGI, AND CONSCIOUSNESS Does GPT-3 have what it takes to pass the Turing Test or become artificial general intelligence? Or at least take a solid step in that direction? Skeptics will say that GPT-3 is merely memorizing examples in a clever way but has no understanding and is not truly intelligent. Central to human intelligence are the abilities to reason, plan, and create. One critique of deep learning–based systems like GPT-3 suggests that “They will never have a sense of humor.

Perhaps in twenty years, GPT-23 will read every word ever written and watch every video ever produced and build its own model of the world. This all-knowing sequence transducer would contain all the accumulated knowledge of human history. All you’ll have to do is ask it the right questions. So, will deep learning eventually become “artificial general intelligence” (AGI), matching human intelligence in every way? Will we encounter “singularity” (see chapter 10)? I don’t believe it will happen by 2041. There are many challenges that we have not made much progress on or even understood, such as how to model creativity, strategic thinking, reasoning, counter-factual thinking, emotions, and consciousness.

pages: 196 words: 61,981

Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside
by Xiaowei Wang
Published 12 Oct 2020

The concept that human life can be optimized, of human actions being calibrated toward better performance, is a central belief of the ET Agricultural Brain project: it may eventually replace human farmers with AI farmers. The optimization of life is a distinctly modern endeavor. Some proponents of a world run by artificial intelligence (AI, when a computer program can perform defined tasks as well as humans can) and artificial general intelligence (AGI, computers more powerful than AI, with the ability to understand the world as well as humans can) present an optimized version of human life that is very seductive: rational, error-proof, and objective. Others have similar convictions: if we can quantify human consciousness and emotions through mechanisms like AI, we might be able to reduce suffering by optimizing our world to decrease those emotions.

pages: 205 words: 61,903

Survival of the Richest: Escape Fantasies of the Tech Billionaires
by Douglas Rushkoff
Published 7 Sep 2022

The only question left is how much autonomy AI will choose to grant us once it’s inevitably in charge of everything. I’m not so sure about all that. For the time being, AI and machine learning don’t really work so well. They can beat humans at Jeopardy (most of the time) and chess (some of the time), but they have not gotten anywhere near what is called human-level artificial general intelligence, or AGI—the ability to do any task a human can do. Whether AI will develop human and superhuman abilities in the next decade, century, millennium, if ever, may matter less right now than AI’s grip over the tech elite, and what this obsession tells us about The Mindset. Holders of The Mindset appear less immediately afraid of AI technology itself than the people this technology is bound to replace.

pages: 551 words: 174,280

The Beginning of Infinity: Explanations That Transform the World
by David Deutsch
Published 30 Jun 2011

But my guess is that when we do understand them, artificially implementing evolution and intelligence and its constellation of associated attributes will then be no great effort. TERMINOLOGY Quale (plural qualia) The subjective aspect of a sensation. Behaviourism Instrumentalism applied to psychology. The doctrine that science can (or should) only measure and predict people’s behaviour in response to stimuli. SUMMARY The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming it will not be difficult. Even artificial evolution may not have been achieved yet, despite appearances. There the problem is that we do not understand the nature of the universality of the DNA replication system. 8 A Window on Infinity Mathematicians realized centuries ago that it is possible to work consistently and usefully with infinity.

*These are not the ‘parallel universes’ of the quantum multiverse, which I shall describe in Chapter 11. Those universes all obey the same laws of physics and are in constant slight interaction with each other. They are also much less speculative. * Hence what I am calling ‘AI’ is sometimes called ‘AGI’: Artificial General Intelligence. *First, they announce to the existing guests, ‘For each natural number N, will the guest in room number N please move immediately to room number N(N + 1)/2.’ Then they announce, ‘For all natural numbers N and M, will the Nth passenger from the Mth train please go to room number [(N + M)² + N − M]/2.’
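The room-assignment scheme in Deutsch's footnote can be spot-checked mechanically. The sketch below is an editor's illustration, not from the book (the function names `guest_room` and `passenger_room` are invented for the example); it verifies over a finite range that moving existing guest N to room N(N + 1)/2 and sending passenger N of train M to room [(N + M)² + N − M]/2 never assigns two people the same room.

```python
# Verify Deutsch's Infinity Hotel assignments are collision-free
# over a finite range of guests, trains, and passengers.

def guest_room(n: int) -> int:
    # Existing guest N moves to the Nth triangular number.
    return n * (n + 1) // 2

def passenger_room(n: int, m: int) -> int:
    # Passenger N from train M fills a gap between triangular numbers.
    return ((n + m) ** 2 + n - m) // 2

LIMIT = 50
rooms = {guest_room(n) for n in range(1, LIMIT + 1)}

for n in range(1, LIMIT + 1):
    for m in range(1, LIMIT + 1):
        r = passenger_room(n, m)
        assert r not in rooms, f"collision at room {r}"
        rooms.add(r)

print("no collisions among", len(rooms), "assignments")
```

The check succeeds because the passengers of each "diagonal" N + M = s land strictly between the triangular rooms (s − 1)s/2 and s(s + 1)/2, so they can never hit a relocated guest or each other.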

pages: 547 words: 173,909

Deep Utopia: Life and Meaning in a Solved World
by Nick Bostrom
Published 26 Mar 2024

Like thus: first we paint the idea; then we put nails in to hold the paint in place; then we put little screws in each nail to ensure they stay put; then we add little drops of superglue on the screws to prevent them from unwinding. And yet, as soon as we launch the construction, it all comes off. Maybe we need to add some tiny nails to keep the glue from flaking?

106 “AI complete”—meaning that if we knew how to do that, we’d also know how to create at least human-level artificial general intelligence.

107 Hoffer (1954), p. 151

108 “Ask yourself whether you are happy, and you cease to be so. The only chance is to treat, not happiness, but some end external to it, as the purpose of life.” (Mill, 1873, p. 147).

109 L. (1848), p. 2. (This line, which seems to have first appeared in 1848 in an article in a New Orleans newspaper signed only “L.”, is widely misattributed to Nathaniel Hawthorne, probably because of misleading formatting in an 1891 book; see O’Toole, 2014.)

110 Bostrom (2008a)

111 Ibid.

112 Bostrom & Ord (2006)

113 Bostrom (2008a)

114 Cf.

“Boredom Proneness–The Development and Correlates of a New Scale”. Journal of Personality Assessment, 50(1), 4–17. Feinberg, J. 1980. “Absurd Self-Fulfillment”. In P. van Inwagen (Ed.), Time and Cause: Essays Presented to Richard Taylor (pp. 255–281). London: D. Reidel. Finnveden, L., Riedel, C. J., & Shulman, C. 2022. “Artificial General Intelligence and Lock-In”. Unpublished manuscript. https://lukasfinnveden.substack.com/p/agi-and-lock-in Fischer, J. M. 1994. “Why Immortality Is Not So Bad”. International Journal of Philosophical Studies, 2(2), 257–270. Fisher, C. D. 1993. “Boredom at Work: A Neglected Concept”. Human Relations, 46(3), 395–417.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Later, some influential founders of AI, including John McCarthy (2007), Marvin Minsky (2007), and Patrick Winston (Beal and Winston, 2009), concurred with Nilsson’s warnings, suggesting that instead of focusing on measurable performance in specific applications, AI should return to its roots of striving for, in Herb Simon’s words, “machines that think, that learn and that create.” They called the effort human-level AI or HLAI—a machine should be able to learn to do anything a human can do. Their first symposium was in 2004 (Minsky et al., 2004). Another effort with similar goals, the artificial general intelligence (AGI) movement (Goertzel and Pennachin, 2007), held its first conference and organized the Journal of Artificial General Intelligence in 2008. At around the same time, concerns were raised that creating artificial superintelligence or ASI—intelligence that far surpasses human ability—might be a bad idea (Yudkowsky, 2008; Omohundro, 2008). Turing (1996) himself made the same point in a lecture given in Manchester in 1951, drawing on earlier ideas from Samuel Butler (1863):15 It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. ...

Über formal unentscheidbare Sätze der Principia mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–198. Goebel, J., Volk, K., Walker, H., and Gerbault, F. (1989). Automatic classification of spectra from the infrared astronomical satellite (IRAS). Astronomy and Astrophysics, 222, L5–L8. Goertzel, B. and Pennachin, C. (2007). Artificial General Intelligence. Springer. Gogate, V. and Domingos, P. (2011). Approximation by quantization. In UAI-11. Gold, E. M. (1967). Language identification in the limit. Information and Control, 10, 447–474. Goldberg, A. V., Kaplan, H., and Werneck, R. F. (2006). Reach for A*: Efficient point-to-point shortest path algorithms.

Laird, J., Newell, A., and Rosenbloom, P. S. (1987). SOAR: An architecture for general intelligence. AIJ, 33, 1–64. Laird, J., Rosenbloom, P S., and Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46. Laird, J. (2008). Extending the Soar cognitive architecture. In Artificial General Intelligence Conference. Lake, B., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human‑level concept learning through probabilistic program induction. Science, 350, 1332–1338. Lakoff, G. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

pages: 254 words: 76,064

Whiplash: How to Survive Our Faster Future
by Joi Ito and Jeff Howe
Published 6 Dec 2016

The point of this book isn’t to scare you with dread visions of the future. It’s as useful to entertain visions of life on Kepler-62e. Because “Artificial Intelligence” is used as a label for everything from Siri to Tesla automobiles, we now describe this kind of problem-solving AI as “narrow” or “specialized” AI, to differentiate it from AGI—artificial general intelligence. Artificial intelligence expert Ben Goertzel suggests that an AGI would be a machine that could apply to college, be admitted, and then get a degree. There are many differences between a specialized AI and an AGI but neither is programmed. They are “trained” or they “learn.” Specialized AIs are carefully trained by engineers who tweak the data and algorithms, and keep testing them until they do the specific things that are required of them.

pages: 381 words: 78,467

100 Plus: How the Coming Age of Longevity Will Change Everything, From Careers and Relationships to Family And
by Sonia Arrison
Published 22 Aug 2011

“The human brain simply was not evolved for the integrative analysis of a massive number of complexly-interrelated, highdimensional biological datasets,” he writes.15 “In the short term, the most feasible path to working around this problem is to supplement human biological scientists with increasingly advanced AI software, gradually moving toward the goal of an AGI (Artificial General Intelligence) bioscientist.”16 Just as Google is a form of artificial intelligence that allows for fast searching of the Internet, a software program that could “read” biological studies and help to sort the data for human scientists would make the task of finding repair mechanisms for the human body that much easier.

The Smartphone Society
by Nicole Aschoff

Run by Demis Hassabis, a neuroscientist, video game developer, and former child chess prodigy, and a team of about two hundred computer scientists and neuroscientists, the Alphabet subsidiary’s researchers have operationalized the idea that intelligence, thought, and perhaps even consciousness are nothing more than a collection of discrete, local processes that can be “solved” with enough computing power and data. Hassabis is one of a growing number of scientists who say the artificial general intelligence we were promised so long ago is finally within reach. This core belief in the power of technology and data is part of a broader worldview encapsulated in popular Silicon Valley sayings such as “Move fast and break things” and the abbreviated “Ask for forgiveness, not permission.” Google never asked permission to photograph the front of everyone’s home, link it to a physical address, and put it on the web.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

Finally, the school would adjust other elements of the work flow to take advantage of being able to provide instantaneous school admission decisions. 13 Decomposing Decisions Today’s AI tools are far from the machines with human-like intelligence of science fiction (often referred to as “artificial general intelligence” or AGI, or “strong AI”). The current generation of AI provides tools for prediction and little else. This view of AI does not diminish it. As Steve Jobs once remarked, “One of the things that really separates us from the high primates is that we’re tool builders.” He used the example of the bicycle as a tool that had given people superpowers in locomotion above every other animal.

pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death
by Mark O'Connell
Published 28 Feb 2017

(He was the coauthor, with Google’s research director Peter Norvig, of Artificial Intelligence: A Modern Approach, the book most widely used as a core AI text in university computer science courses.) In 2014, Russell and three other scientists—Stephen Hawking, Max Tegmark, and Nobel laureate physicist Frank Wilczek—had published a stern warning, in, of all venues, The Huffington Post, about the dangers of AI. The idea, common among those working on AI, that because an artificial general intelligence is widely agreed to be several decades from realization we can just keep working on it and solve safety problems if and when they arise is one that Russell and his esteemed coauthors attack as fundamentally wrongheaded. “If a superior alien civilization sent us a text message saying ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’?

pages: 282 words: 81,873

Live Work Work Work Die: A Journey Into the Savage Heart of Silicon Valley
by Corey Pein
Published 23 Apr 2018

Kurzweil is now known primarily as a purveyor of far-out ideas, of which the Singularity is only one, but his early pronouncements are remarkably restrained in comparison. In a 1984 conference speech, he lamented the overly optimistic predictions of AI researchers, who were forever claiming that the holy grail of the field, “artificial general intelligence”—a computerized mind equivalent to that of a human, in capabilities if not in design—was just a decade or two away, only to be proven wrong time and again. Such romanticism, he said, had created a “credibility problem” that plagued the field. In 1990, MIT Press published Kurzweil’s first book, The Age of Intelligent Machines, which collected predictions from more than twenty authors.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

The British government, for example, has been clear about never developing full AI weapons. The small print though is revealing—for the UK, an ‘autonomous system is capable of understanding higher-level intent and direction’.19 As we’ve seen, that’s well beyond the state of the art for today’s AI. It’s more like the human-like Artificial General Intelligence of Hollywood movies. Anything less can be considered merely as an ‘automated weapon’. It’s an artful get-out that allows the development and deployment of highly capable AI weapon systems that will soon force humans well away from the loop. And reassuring noises aside, advanced states haven’t banned them.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

Vendors like IBM, Cognitive Scale, SAS, and Tibco are adding new cognitive functions and integrating them into solutions. Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.” Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful. Broadening Application of the Same Tools —In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

Much of today’s AI is best described as sophisticated machine-based pattern recognition, perhaps spiced up with a bit of planning. Whether intelligent or not, these systems do what they do without being conscious of anything. Projecting into the future, the stated moonshot goal of many AI researchers is to develop systems with the general intelligence capabilities of a human being – so-called ‘artificial general intelligence’, or ‘general AI’. And beyond this point lies the terra incognita of post-Singularity intelligence. But at no point in this journey is it warranted to assume that consciousness just comes along for the ride. What’s more, there may be many forms of intelligence that deviate from the humanlike, complementing rather than substituting or amplifying our species-specific cognitive toolkit – again without consciousness being involved.

pages: 328 words: 96,678

MegaThreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them
by Nouriel Roubini
Published 17 Oct 2022

Indeed, AI initially replaced routine jobs. Then it started to replace cognitive jobs that repeat sequences of steps that a machine can master. Now AI is gradually able to perform even creative jobs. So for workers, including those in the creative industries, there is nowhere to hide. All this is vaulting us even closer to artificial general intelligence, or AGI, where super intelligent machines leave humans in the dust. Author Ray Kurzweil and other visionaries predict a pivotal moment that will disrupt everything we know. An intelligence explosion will occur when computers develop motivation to learn on their own at warp speed without human direction.

pages: 338 words: 104,815

Nobody's Fool: Why We Get Taken in and What We Can Do About It
by Daniel Simons and Christopher Chabris
Published 10 Jul 2023

And although they weren’t attempting to con anyone, Trafton Drew and his colleagues showed that practicing radiologists were so good at finding tumors on CT scans (the thing that they expected to see) that they often missed tumor-sized gorillas that had been mischievously inserted into the images.5 When experts stray too far from their specialty without realizing it, they can be exploited by cons that meet their expectations but don’t fool true experts. Some leaders in the technology industry repeatedly proclaim the imminence of artificial general intelligence—the development of entities that are at least as capable as human beings across a wide swath of intelligent behavior. Their expertise in developing sophisticated computational models is genuine, but it is not the expertise necessary to evaluate whether a model’s output constitutes generally intelligent behavior.

pages: 502 words: 124,794

Nexus
by Ramez Naam
Published 16 Dec 2012

Thailand did look amazingly beautiful, with jungles and waterfalls and beaches, and temple after temple after temple. If only I was coming here for a vacation, he thought. The conference guide yielded up a plethora of fascinating talks: Neural Substrates of Symbolic Reasoning, Intelligence and Prospects for Increasing It, Emotive-Loop Programming: A New Path to Artificial General Intelligence. How could they even hold these talks? In the US the topics of half of them would be classified as Emerging Technological Threats. No wonder the international meeting trumps the US neuroscience meetings these days, Kade thought. The cutting edge stuff isn't legal at home any more. He looked over at Sam.

pages: 428 words: 121,717

Warnings
by Richard A. Clarke
Published 10 Apr 2017

Just as artificial intelligence is an overbroad term, so are the terms used to refer to its two constituent halves, weak and strong. Weak AI is often also called narrow AI. 3. Arthur Samuel offered this definition of machine learning in 1959. 4. Superintelligence is a term made popular by philosopher Nick Bostrom and is often also called artificial general intelligence. 5. Referenced from Luke Muehlhauser’s fantastic work which was a great guide for the authors. 6. James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (New York: Thomas Dunne Books, 2013). Barrat’s book was an important source for the authors. 7. Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” http://www.nickbostrom.com/ethics/ai.html (accessed Nov. 9, 2016).

pages: 412 words: 128,042

Extreme Economies: Survival, Failure, Future – Lessons From the World’s Limits
by Richard Davies
Published 4 Sep 2019

As Siim Sikkut, the government’s top tech guru, puts it, ‘You can’t bribe a computer.’ The aim for many of those seeking to crack the holy grail of artificial intelligence is to use computers to strengthen democratic systems too, says Ahti Heinla, inventor of the delivery robot. Teams across the world are racing to create something known as an artificial general intelligence, or AGI. This would be a computerized mind so powerful that it could reason, planning what to learn and building its own digital brain in a strategic way, rather than being taught what to do by its human masters. People pursuing this kind of research think an AGI might help humans solve intractable problems such as the politics of nuclear disarmament or the economics of trade deals.

pages: 381 words: 120,361

Sunfall
by Jim Al-Khalili
Published 17 Apr 2019

She still vividly remembered with nostalgic fondness a hiking trip with her father in the Alborz Mountains when she was thirteen. She recalled being cold and tired, but the scenery had been stunning. They had spent hours discussing AI and the way the world was changing. Her father explained to her that the notion of artificial general intelligence, when machines could do everything humans could, required AIs to be sentient, to develop self-awareness. Otherwise, they would stay just very clever zombies, with no true understanding of what they were doing. All the time this remained the case, humans could keep one step ahead of them.

pages: 486 words: 150,849

Evil Geniuses: The Unmaking of America: A Recent History
by Kurt Andersen
Published 14 Sep 2020

To me one of the most interesting recent accomplishments is an AI that designed new AI software as well as or better than engineers could design it. All of that is why the funding of AI start-ups quadrupled just between 2015 and 2018, to $40 billion, and why the total investment put into AI businesses in 2019 reached $70 billion. The debate among technologists tends to focus on when they’ll manage to create artificial general intelligence, machines able to figure out any problem and carry out any cognitive task that a person can. People at Facebook and Google and Stanford and elsewhere say they’ll do it by the mid-2020s, that they’ll then have machines “better than human level at all of the primary human senses” and “general cognition” (Zuckerberg), true “human-level A.I.”

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

There are encouraging signs as this book goes to press that major governments are taking the challenge seriously—like the Bletchley Declaration following the 2023 AI Safety Summit in the UK—but much will depend on how such initiatives are actually implemented.[78] One optimistic argument, which is based on the principle of the free market, is that each step toward superintelligence is subject to market acceptance. In other words, artificial general intelligence will be created by humans to solve real human problems, and there are strong incentives to optimize it for beneficial purposes. Since AI is emerging from a deeply integrated economic infrastructure, it will reflect our values because in an important sense it will be us. We are already a human-machine civilization.

pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers
by Timothy Ferriss
Published 6 Dec 2016

Suntory helps you forget.’” MacAskill, Will: “It would be outside the Gates Foundation, or maybe outside Bill Gates’s house . . . where ultimately, he’s going to donate $100 billion. And it would say, ‘Bill, you have spoken about the risks and potential upside in the long run from development of artificial general intelligence, yet you’re not doing anything about it yet. You haven’t gotten involved.’” MacKenzie, Brian: “‘Ego is how we want the world to see us. Confidence is how we see ourselves.’” McCarthy, Nicholas: “‘Anything is possible.’ I wholeheartedly believe that. Why wouldn’t I think that?

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

Yudkowsky formed the Singularity Institute for Artificial Intelligence (SIAI) to develop "Friendly AI," intended to "create cognitive content, design features, and cognitive architectures that result in benevolence" before near-human or better-than-human AIs become possible. SIAI has developed The SIAI Guidelines on Friendly AI: "Friendly AI," http://www.singinst.org/friendly/. Ben Goertzel and his Artificial General Intelligence Research Institute have also examined issues related to developing friendly AI; his current focus is on developing the Novamente AI Engine, a set of learning algorithms and architectures. Peter Voss, founder of Adaptive A.I., Inc., has also collaborated on friendly-AI issues: http://adaptiveai.com/. 46.