AI risk

back to index

24 results

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

F., 222, 225 Sleepwalkers, The (Koestler), 153 Sloan Foundation, 202 social sampling, 198–99 software failure to advance in conjunction with increased processing power, 10 lack of standards of correctness and failure in engineering of, 60–61 Solomon, Arthur K., xvi–xvii “Some Moral and Technical Consequences of Automation” (Wiener), 23 Stapledon, Olaf, 75 state/AI scenario, in relation of machine superintelligences to hybrid superintelligences, 175–76 statistical, model-blind mode of learning, 16–17, 19 Steveni, Barbara, 218 Stewart, Potter, 247 Steyerl, Hito on AI visualization programs, 211–12 on artificial stupidity, 210–11 subjective method of prediction, 233, 234–35 subjugation fear in AI scenarios, 108–10 Superintelligence: Paths, Dangers, Strategies (Bostrom), 27 supervised learning, 148 surveillance state dystopias, 105–7 switch-it-off argument against AI risk, 25 Szilard, Leo, 26, 83 Tallinn, Jaan, 88–99 AI-risk message, 92–93 background and overview of work of, 88–89 calibrating AI-risk message, 96–98 deniers of AI-risk, motives of, 95–96 environmental risk, AI risk as, 97–98 Estonian dissidents, messages of, 91–92 evolution’s creation of planner and optimizer greater than itself, 93–94 growing awareness of AI risk, 98–99 technological singularity. See singularity Tegmark, Max, 76–87 AI safety research, 81 Asilomar AI Principles, 2017, 81, 84 background and overview of work of, 76–77 competence of superintelligent AGI, 85 consciousness as cosmic awakening, 78–79 general expectation AGI achievable within next century, 79 goal alignment for AGI, 85–86 goals for a future society that includes AGI, 84–86 outlook, 86–87 rush to make humans obsolescent, reasons behind, 82–84 safety engineering, 86 societal impact of AI, debate over, 79–82 Terminator, The (film), 242 three laws of artificial intelligence, 39–40 Three Laws of Robotics, Asimov’s, 250 threshold theorem, 164 too-soon-to-worry argument against AI risk, 26–27, 81 Toulmin, Stephen, 18–19 transhumans, rights of, 252–53 Treister, Suzanne, 214–15 Trolley Problem, 244 trust networks, building, 200–201 Tsai, Wen Ying, 258, 260–61 Turing, Alan, 5, 25, 35, 43, 60, 103, 168, 180 AI-risk message, 93 Turing Machine, 57, 271 Turing Test, 5, 46–47, 276–77 Tversky, Amos, 130–31, 250 2001: A Space Odyssey (film), 183 Tyka, Mike, 212 Understanding Media (McLuhan), 208 understanding of computer results, loss of, 189 universal basic income, 188 Universal Turing Machine, 57 unsupervised learning, 225 value alignment (putting right purpose into machines) Dragan on, 137–38, 141–42 Griffiths on, 128–33 Pinker on, 110–11 Tegmark on, 85–86 Wiener on, 23–24 Versu, 217 Veruggio, Gianmarco, 243 visualization programs, 211–13 von Foerster, Heinz, xxi, 209–10, 215 Vonnegut, Kurt, 250 von Neumann, John, xx, 8, 35, 60, 103, 168, 271 digital computer architecture of, 58 second law of AI and, 39 self-replicating cellular automaton, development of, 57–58 use of symbols for computing, 164–65 Watson, 49, 246 Watson, James, 58 Watson, John, 225 Watt, James, 3, 257 Watts, Alan, xxi Weaver, Warren, xviii, 102–3, 155 Weizenbaum, Joe, 45, 48–50, 105, 248 Wexler, Rebecca, 238 Whitehead, Alfred North, 275 Whole Earth Catalog, xvii “Why the Future Doesn’t Need Us” (Joy), 92 Wiener, Norbert, xvi, xviii–xx, xxv, xxvi, 35, 90, 96, 103, 112, 127, 163, 168, 256 on automation, in manufacturing, 4, 154 on broader applications of cybernetics, 4 Brooks on, 56–57, 59–60 control via feedback, 3 deep-learning and, 9 Dennett on, 43–45 failure to predict computer revolution, 4–5 on 
feedback loops, 5–6, 103, 153–54 Hillis on, 178–80 on information, 5–6, 153–59, 179 Kaiser on Wiener’s definition of information, 153–59 Lloyd on, 3–7, 9, 11–12 Pinker on, 103–5, 112 on power of ideas, 112 predictions/warnings of, xviii–xix, xxvi, 4–5, 11–12, 22–23, 35, 44–45, 93, 104, 172 Russell on, 22–23 on social risk, 97 society, cybernetics impact on, 103–4 what Wiener got wrong, 6–7 Wilczek, Frank, 64–75 astonishing corollary (natural intelligence as special case of AI), 67–70 astonishing hypothesis of Crick, 66–67 background and overview of work of, 64–65 consciousness, creativity and evil as possible features of AI, 66–68 emergence, 68–69 human brain’s advantage over AI, 72–74 information-processing technology capacities that exceed human capabilities, 70–72 intelligence, future of, 70–75 Wilkins, John, 275 wireheading problem, 29–30 With a Rhythmic Instinction to Be Able to Travel Beyond Existing Forces of Life (Parreno), 263–64 Wolfram, Stephen, 266–84 on AI takeover scenario, 277–78 background and overview of work of, 266–67 computational knowledge system, creating, 271–77 computational thinking, teaching, 278–79 early approaches to AI, 270–71 on future where coding ability is ubiquitous, 279–81 goals and purposes, of humans, 268–70 image identification system, 273–74 on knowledge-based programming, 278–81 purposefulness, identifying, 281–84 Young, J.

THE PRESENT SITUATION
So here we are, more than half a century after the original warnings by Turing, Wiener, and Good, and a decade after people like me started paying attention to the AI-risk message. I’m glad to see that we’ve made a lot of progress in confronting this issue, but we’re definitely not there yet. AI risk, although no longer a taboo topic, is not yet fully appreciated among AI researchers. AI risk is not yet common knowledge either. In relation to the timeline of the first dissident message, I’d say we’re around the year 1988, when raising the Soviet-occupation topic was no longer a career-ending move but you still had to somewhat hedge your position.

Ultimately, we don’t have the luxury of waiting until all the corporate heads and AI researchers are willing to concede that AI risk is real. Imagine yourself sitting in a plane about to take off. Suddenly there’s an announcement that 40 percent of the experts believe there’s a bomb on board. At that point, the course of action is already clear, and sitting there waiting for the remaining 60 percent to come around isn’t part of it.
CALIBRATING THE AI-RISK MESSAGE
While uncannily prescient, the AI-risk message from the original dissidents has a giant flaw—as does the version dominating current public discourse: Both considerably understate the magnitude of the problem as well as AI’s potential upside.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

In Yudkowsky’s stylized version of the dialog, Musk expressed his concern about AI risk by suggesting it was “important to become a multiplanetary species—you know, like set up a Mars colony. And Demis said, ‘They’ll follow you.’ ” Before I unpack how Yudkowsky came to this grim conclusion, I should say that he’d slightly mellowed on his certainty of p(doom) by the time I caught up with him again at the Manifest conference in September 2023. He’d been heartened by the scientific community’s increasing concern about AI risk, a topic he was years ahead of the curve on, having founded the Machine Intelligence Research Institute in 2000.

Anyone’s totalizing vision of the future, be it Le Guin’s, Marc Andreessen’s, or mine, is someone else’s nightmare. “Utopia always leaves somebody out,” Émile Torres told me. The Best Arguments for and Against AI Risk This tour of the River will end soon. It’s last call at the snack bar, and those of you who need your parking validated should see one of my assistants in their fetching fox-colored vests. I’m going to close with a quick summary of what I think are the best arguments for and against AI risk. Then in the final chapter, 1776, I’ll zoom out to consider the shape of things to come—the moment our civilization finds itself in—and propose some principles to guide us through the next decades and hopefully far beyond.

addressing the world: “Statement by the President Announcing the Use of the A-Bomb at Hiroshima,” Harry S. Truman Museum, August 6, 1945, trumanlibrary.gov/library/public-papers/93/statement-president-announcing-use-bomb-hiroshima.
simple, one-sentence statement: “Statement on AI Risk,” Center for AI Safety, 2023, safe.ai/statement-on-ai-risk.
highly regarded AI companies: This is not my personal opinion but my interpretation of the Silicon Valley consensus after extensive reporting. Other companies such as Meta are considered a step or two behind. Interestingly, far fewer Meta employees signed the one-sentence statement than those at OpenAI, Google, and Anthropic; the company may take more of an accelerationist stance since it feels as though it’s behind.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

So, in the interests of having that debate, and in the hope that the reader will contribute to it, let me provide a quick tour of the highlights so far, such as they are.
Denial
Denying that the problem exists at all is the easiest way out. Scott Alexander, author of the Slate Star Codex blog, began a well-known article on AI risk as follows:2 “I first became interested in AI risk back around 2007. At the time, most people’s response to the topic was ‘Haha, come back when anyone believes this besides random Internet crackpots.’”
Instantly regrettable remarks
A perceived threat to one’s lifelong vocation can lead a perfectly intelligent and usually thoughtful person to say things they might wish to retract on further analysis.

Elon Musk, Stephen Hawking, and Bill Gates are certainly very familiar with scientific and technological reasoning, and Musk and Gates in particular have supervised and invested in many AI research projects. And it would be even less plausible to argue that Alan Turing, I. J. Good, Norbert Wiener, and Marvin Minsky are unqualified to discuss AI. Finally, Scott Alexander’s blog piece mentioned earlier, which is titled “AI Researchers on AI Risk,” notes that “AI researchers, including some of the leaders in the field, have been instrumental in raising issues about AI risk and superintelligence from the very beginning.” He lists several such researchers, and the list is now much longer. Another standard rhetorical move for the “defenders of AI” is to describe their opponents as Luddites. Oren Etzioni’s reference to “weavers throwing their shoes in the mechanical looms” is just this: the Luddites were artisan weavers in the early nineteenth century protesting the introduction of machinery to replace their skilled labor.

An argument that a sufficiently intelligent machine cannot help but pursue human objectives: Rodney Brooks, “The seven deadly sins of AI predictions,” MIT Technology Review, October 6, 2017.
32. Pinker, “Thinking does not imply subjugating.”
33. For an optimistic view arguing that AI safety problems will necessarily be resolved in our favor: Steven Pinker, “Tech prophecy.”
34. On the unsuspected alignment between “skeptics” and “believers” in AI risk: Alexander, “AI researchers on AI risk.”
CHAPTER 7
1. For a guide to detailed brain modeling, now slightly outdated, see Anders Sandberg and Nick Bostrom, “Whole brain emulation: A roadmap,” technical report 2008-3, Future of Humanity Institute, Oxford University, 2008.
2. For an introduction to genetic programming from a leading exponent, see John Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection (MIT Press, 1992).
3.

pages: 48 words: 12,437

Smarter Than Us: The Rise of Machine Intelligence
by Stuart Armstrong
Published 1 Feb 2014

The challenge is that, at the moment, we are far from having powerful AI and so it feels slightly ridiculous to warn people about AI risks when your current program may, on a good day, choose the right verb tense in a translated sentence. Still, by raising the issue, by pointing out how fewer and fewer skills remain “human-only,” you can at least prepare the community to be receptive when their software starts reaching beyond the human level of intelligence. This is a short book about AI risk, but it is important to remember the opportunities of powerful AI, too. Allow me to close with a hopeful paragraph from a paper by Luke Muehlhauser and Anna Salamon: We have argued that AI poses an existential threat to humanity.

(MIRI also commissioned and published this book.) Meanwhile, Nick Bostrom founded the Future of Humanity Institute (FHI), a research group within the University of Oxford. FHI is dedicated to analyzing and reducing all existential risks—risks that could drive humanity to extinction or dramatically curtail its potential, of which AI risk is just one example. Bostrom is currently finishing a scholarly monograph about machine superintelligence, to be published by Oxford University Press. (This book’s author currently works at FHI.) Together MIRI and FHI have been conducting research in technological forecasting, mathematics, computer science, and philosophy, in order to have the pieces in place for a safe transition to AI dominance.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

Classification: LCC Q334.7 .F67 2021 | DDC 006.301—dc23. LC record available at https://lccn.loc.gov/2021016340. ISBNs: 978-1-5416-7473-8 (hardcover); 978-1-5416-7472-1 (ebook).
CONTENTS: 1. The Emerging Disruption; 2. AI as the New Electricity; 3. Beyond Hype: A Realist’s View of Artificial Intelligence as a Utility; 4. The Quest to Build Intelligent Machines; 5. Deep Learning and the Future of Artificial Intelligence; 6. Disappearing Jobs and the Economic Consequences of AI; 7. China and the Rise of the AI Surveillance State; 8. AI Risks; Conclusion: Two AI Futures.
CHAPTER 1 THE EMERGING DISRUPTION
On November 30, 2020, DeepMind, a London-based artificial intelligence company owned by Google parent Alphabet, announced a stunning, and likely historic, breakthrough in computational biology, an innovation with the potential to genuinely transform science and medicine.

To seize the major strategic opportunity for the development of AI, to build China’s first-mover advantage in the development of AI, to accelerate the construction of an innovative nation and global power in science and technology, in accordance with the requirements of the CCP Central Committee and the State Council, this plan has been formulated. The answer is that B is the human-translated version. (See endnote 8.)
CHAPTER 8 AI RISKS
It’s early November, just two days before the presidential election in the United States. The Democratic candidate has spent much of her career fighting to advance civil rights and expand protections for marginalized communities. Her record on this issue is seemingly impeccable. It therefore comes as an immense shock when an audio recording that purports to be of the candidate engaging in a private conversation appears, and then immediately goes viral, on social media.

While much work remains to be done, programs like AI4ALL, together with an industry commitment to attracting inclusive AI talent, will likely produce a significantly more diverse set of researchers in the coming years and decades. Bringing a broader range of perspectives into the field will likely translate directly into more effective and fair artificial intelligence systems.
AN EXISTENTIAL THREAT FROM SUPERINTELLIGENCE AND THE “CONTROL PROBLEM”
The AI risk that transcends all others is the possibility that machines with superhuman intelligence might someday wrest themselves from our direct control and pursue a course of action that ultimately presents an existential threat to humanity. Security issues, weaponization, and algorithmic bias all pose immediate or near-term dangers.

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

“How Much Does Agriculture Depend on Pollinators? Lessons from Long-Term Trends in Crop Production.” Annals of Botany, 103(9), 1,579–88.
Alexander, S. (2014). Meditations on Moloch. https://slatestarcodex.com/2014/07/30/meditations-on-moloch/.
—(2015). AI Researchers on AI Risk. https://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/.
Alibek, K. (2008). Biohazard. Random House.
Allen, M., et al. (2018). “Framing and Context,” in V. Masson-Delmotte, et al. (eds.), Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change (pp. 49–91), in press.

But it shows that many AI researchers take seriously the possibilities that AGI will be developed within 50 years and that it could be an existential catastrophe. There is a lot of uncertainty and disagreement, but it is not at all a fringe position. There is one interesting argument for skepticism about AI risk that gets stronger—not weaker—when more researchers acknowledge the risks. If researchers can see that building AI would be extremely dangerous, then why on earth would they go ahead with it? They are not simply going to build something that they know will destroy them.112 If we were all truly wise, altruistic and coordinated, then this argument would indeed work.

This analysis gives us a starting point: a generic assessment of the value of allocating new resources to a risk. But there are often resources that are much more valuable when applied to one risk rather than another. This is especially true when it comes to people. A biologist would be much more suited to working on risks of engineered pandemics than retraining to work on AI risk. The ideal portfolio would thus take people’s comparative advantage into account. And there are sometimes highly leveraged opportunities to help with a particular risk. Each of these dimensions (fit and leverage) could easily change the value of an opportunity by a factor of ten (or more). Let’s consider three more heuristics for setting our priorities: focusing on risks that are soon, sudden and sharp.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

With a series of “rationality boot camps” MIRI and its sister organization, the Center for Applied Rationality (CFAR), hope to train tomorrow’s potential AI builders and technology policy makers in the discipline of rational thinking. When these elites grow up, they’ll use that education in their work to avoid AI’s most harrowing pitfalls. Quixotic as this scheme may sound, MIRI and CFAR have their fingers on an important factor in AI risk. The Singularity is trending high, and Singularity issues will come to the attention of more and smarter people. A window for education about AI risk is starting to open. But any plan to create an advisory board or governing body over AI is already too late to avoid some kinds of disasters. As I mentioned in chapter 1, at least fifty-six countries are developing robots for the battlefield.

When a dystopian viewpoint rears its head, many bloggers, editorialists, and technologists reflexively fend it off with some version of “Oh no, not the Terminator again! Haven’t we heard enough gloom and doom from Luddites and pessimists?” This reaction is plain lazy, and it shows in flimsy critiques. The inconvenient facts of AI risk are not as sexy or accessible as techno-journalism’s usual fare of dual core 3-D processors, capacitive touch screens, and the current hit app. I also think its popularity as entertainment has inoculated AI from serious consideration in the not-so-entertaining category of catastrophic risks. For decades, getting wiped out by artificial intelligence, usually in the form of humanoid robots, or in the most artful case a glowing red lens, has been a staple of popular movies, science-fiction novels, and video games.

Policy makers spending public dollars will not feel they require our informed consent any more than they did before recklessly deploying Stuxnet. As I worked on this book I made the request of scientists that they communicate in layman’s terms. The most accomplished already did, and I believe it should be a requirement for general conversations about AI risks. At a high or overview level, this dialogue isn’t the exclusive domain of technocrats and rhetoricians, though to read about it on the Web you’d think it was. It doesn’t require a special, “insider” vocabulary. It does require the belief that the dangers and pitfalls of AI are everyone’s business.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

In fact, when Google negotiated the purchase of DeepMind in 2014 for $400 million, one of the conditions of the sale was that Google would set up an AI ethics board. All of this makes for good press, but in this section, we want to consider some of the arguments that are causing an increasingly respectable minority of scientists to be seriously worried about AI risk. Most of these fears are premised on the idea that AI research will inevitably lead to superintelligent machines in a chain reaction that will happen much faster than humanity will have time to react to. This chain reaction, once it reaches some critical point, will lead to an “intelligence explosion” that could lead to an AI “singularity.”

If the return on research investment is linear, then the growth in intelligence is exponential. It is hard to even visualize this kind of growth; when plotted, it seems to stay constant until the very end, when it just shoots up. This is the scenario in which we might want to invest heavily in thinking about AI risk now—even though it appears that we are a long way off from superintelligence, it will keep looking like that until it is too late. But suppose researching AI is more like writing an essay. In other words, it exhibits diminishing marginal returns. The more research we plow into it, the more intelligence we get, but at a slowing rate.
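To make the contrast concrete, here is a minimal numerical sketch. It is not from the book: the growth constant, time horizon, and the specific functional forms (a linear-returns exponent of 1 versus a square-root stand-in for diminishing returns) are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions only): self-improvement dynamics
# where the growth rate depends on current intelligence I.
#   linear returns:      dI/dt = k * I        -> exponential growth
#   diminishing returns: dI/dt = k * sqrt(I)  -> merely polynomial growth
def grow(years, exponent, k=0.5, i0=1.0, dt=0.01):
    """Euler-integrate dI/dt = k * I**exponent and return yearly snapshots."""
    i, snapshots = i0, []
    for step in range(int(years / dt) + 1):
        if step % int(1 / dt) == 0:        # record once per simulated year
            snapshots.append(i)
        i += k * (i ** exponent) * dt
    return snapshots

if __name__ == "__main__":
    linear = grow(20, exponent=1.0)        # returns proportional to intelligence
    diminishing = grow(20, exponent=0.5)   # concave, essay-writing-style returns
    for year in (0, 5, 10, 15, 20):
        print(f"year {year:2d}: linear-returns I = {linear[year]:9.1f}   "
              f"diminishing-returns I = {diminishing[year]:5.1f}")
```

The output illustrates the authors’ point: the linear-returns trajectory looks almost flat for most of the run and then shoots up, while the diminishing-returns trajectory never does.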

And to paraphrase Ed Felten, a professor of computer science at Princeton, we should not simply deposit a few dollars in an interest-bearing savings account and then plan our retirement assuming that we will soon experience a “wealth explosion” that will suddenly make us unimaginably rich. Exponential growth can still seem slow on human time scales. But even if an intelligence explosion is not certain, the fact that it remains a possibility, together with the potentially dire consequences it would entail, makes methods for managing AI risk worth taking seriously as a topic of algorithmic research. After all, the core concern—that it is hard to anticipate the unintended side effects of optimizing seemingly sensible objectives—has been one of the driving forces behind every example of algorithmic misbehavior we have discussed in this book.

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

If the crude state of AI today powers learning thermostats, automated stock trading, and self-driving cars, what tasks will the machines of tomorrow manage? To help get some perspective on AI risk, I spoke with Tom Dietterich, the president of the Association for the Advancement of Artificial Intelligence (AAAI). Dietterich is one of the founders of the field of machine learning and, as president of the AI professional society, is now smack in the middle of this debate about AI risk. The mission of AAAI is not only to promote scientific research in AI but also to promote its “responsible use,” which presumably would include not killing everyone.

The fear that AI could one day develop to the point where it threatens humanity isn’t shared by everyone who works on AI. It’s hard to dismiss people like Stephen Hawking, Bill Gates, and Elon Musk out of hand, but that doesn’t mean they’re right. Other tech moguls have pushed back against AI fears. Steve Ballmer, former CEO of Microsoft, has said AI risk “doesn’t concern me.” Jeff Hawkins, inventor of the Palm Pilot, has argued, “There won’t be an intelligence explosion. There is no existential threat.” Facebook CEO Mark Zuckerberg has said that those who “drum up these doomsday scenarios” are being “irresponsible.” David Brumley of Carnegie Mellon, who is on the cutting edge of autonomy in cybersecurity, similarly told me he was “not concerned about self-awareness.”

You can chat with the 2016 winner, “Rose,” here: http://ec2-54-215-197-164.us-west-1.compute.amazonaws.com/speech.php.
236 AI virtual assistant called “Amy”: “Amy the Virtual Assistant Is So Human-Like, People Keep Asking It Out on Dates,” accessed June 15, 2017, https://mic.com/articles/139512/xai-amy-virtual-assistant-is-so-human-like-people-keep-asking-it-out-on-dates.
236 “If we presume an intelligent alien life”: Micah Clark, interview, May 4, 2016.
237 “any level of intelligence could in principle”: Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” http://www.nickbostrom.com/superintelligentwill.pdf.
237 “The AI does not hate you”: Eliezer S. Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” http://www.yudkowsky.net/singularity/ai-risk.
238 “[Y]ou build a chess playing robot”: Stephen M. Omohundro, “The Basic AI Drives,” https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf.
238 “Without special precautions”: Ibid.
238 lead-lined coffins connected to heroin drips: Patrick Sawer, “Threat from Artificial Intelligence Not Just Hollywood Fantasy,” June 27, 2015, http://www.telegraph.co.uk/news/science/science-news/11703662/Threat-from-Artificial-Intelligence-not-just-Hollywood-fantasy.html.
239 “its final goal is to make us happy”: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), Chapter 8.
239 “a system that is optimizing a function”: Stuart Russell, “Of Myths and Moonshine,” Edge, November 14, 2014, https://www.edge.org/conversation/the-myth-of-ai#26015.
239 “perverse instantiation”: Bostrom, Superintelligence, Chapter 8.
239 learned to pause Tetris: Tom Murphy VII, “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky,” https://www.cs.cmu.edu/~tom7/mario/mario.pdf.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

Predicting Judgment
10. Taming Complexity
11. Fully Automated Decision Making
Part Three: Tools
12. Deconstructing Work Flows
13. Decomposing Decisions
14. Job Redesign
Part Four: Strategy
15. AI in the C-Suite
16. When AI Transforms Your Business
17. Your Learning Strategy
18. Managing AI Risk
Part Five: Society
19. Beyond Business
Notes; Index; About the Authors; Acknowledgments
We express our thanks to the people who contributed to this book with their time, ideas, and patience. In particular, we thank Abe Heifets of Atomwise, Liran Belanzon of BenchSci, Alex Shevchenko of Grammarly, Marc Ossip, and Ben Edelman for the time they spent with us in interviews, as well as Kevin Bryan for his comments on the overall manuscript.

In some cases, the trade-off is clear, such as with Google Inbox, where the benefits of faster learning outweigh the cost of poor performance. In other cases, such as autonomous driving, the trade-off is more ambiguous given the size of the prize for being early with a commercial product weighed against the high cost of an error if the product is released before it is ready.
18 Managing AI Risk
Latanya Sweeney, who was the chief technology officer for the US Federal Trade Commission and is now a professor at Harvard University, was surprised when a colleague Googled her name to find one of her papers and discovered ads suggesting she had been arrested.1 Sweeney clicked on the ad, paid a fee, and learned what she already knew: she had never been arrested.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

Systems may work brilliantly in one setting, then fail dramatically if the environment slightly changes. The “black box” nature of many AI methods means that it may be difficult to accurately predict when they will fail or even understand why they failed in retrospect. Potentially even more dangerous, the global competition in AI risks a “race to the bottom” on safety. In a desire to beat others to the punch, countries may cut corners to deploy AI systems before they have been fully tested. We are careening toward a world of AI systems that are powerful but insecure, unreliable, and dangerous. But technology is not destiny, and there are people around the world working to ensure that technological progress arcs toward a brighter future.

The problem is that AI can supercharge repression itself, allowing the state to deploy vast intelligent surveillance networks to monitor and control the population at a scale and degree of precision that would be impossible with humans. AI-enabled control is not only repressive, but further entrenches the system of repression itself. AI risks locking in authoritarianism, making it harder for the people to rise up and defend their freedoms. The spread of AI surveillance technologies has the potential to tilt the global balance between freedom and authoritarianism. AI is already being used in deeply troubling ways, and those uses are spreading.

Zhu Qichao, a leading Chinese thinker on the militarization of autonomy and AI, wrote in a 2019 article coauthored with Long Kun, “Both sides should clarify the strategic boundaries of AI weaponization (such as whether AI should be used for nuclear weapons systems), prevent conflict escalation caused by military applications of AI systems, and explore significant issues such as how to ensure strategic stability in the era of AI.” The Chinese government has formally spoken out on AI risks and the need for states to consider regulation. In a 2021 position paper issued at the United Nations, the Chinese government stated, “We need to enhance the efforts to regulate military applications of AI.” The paper argued for the need to consider “potential risks” and highlighted “global strategic stability” and the need to “prevent [an] arms race.”

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

Then you can ask what you can do to move the community closer to an ideal allocation of resources, given everyone’s personal fit and comparative advantage. Taking a community perspective, the primary question becomes not “How can I personally have the biggest impact?” but “Who in the community is relatively best placed to do what?” For example, my colleague Greg Lewis believes that AI risk is the most important issue of our time. But he thinks the risk from engineered pandemics is also important, and because he has a medical degree, it makes more sense for him to focus on that threat and let others focus on AI. The portfolio approach can also give greater value to experimentation and learning.

The classic reference is Omohundro (2008), and Bostrom (2012) discusses similar issues, such as the “instrumental convergence thesis.”
80. Other books on the risks posed by AGI include Christian (2021); Russell (2019); and Tegmark (2017).
81. Some of these scenarios are discussed in Superintelligence, too (Bostrom 2014b). Some of the most illuminating recent discussions about AI risk have not been formally published but are available online—see, for instance, Ngo (2020); Carlsmith (2021); Drexler (2019), and the work of AI Impacts (https://aiimpacts.org/). For an overview of different ways in which an AGI takeover might happen, see Clarke and Martin (2021).
82. The AI Alignment Forum (https://www.alignmentforum.org/) is a good place to follow cutting-edge discussions on AI alignment.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

A 2019 study by the Brookings Institution, which drew on work by Stanford Ph.D. candidate Michael Webb, examined the overlap between the text of AI-related patents and the text of job descriptions from a Department of Labor database, looking for phrases that appeared in both, like “predict quality” and “generate recommendation.” Of the 769 job categories included in the study, Webb and the Brookings researchers found that 740—essentially all of them—had at least some near-term risk of automation. Workers with bachelor’s or graduate degrees were nearly four times as exposed to AI risk as workers with only a high school degree. Some of the most automation-prone jobs, the study found, were in highly paid occupations in major metropolitan areas like San Jose, Seattle, and Salt Lake City. This is radically different from the way we normally think about AI and automation risk.
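At its core, the Webb/Brookings measure is a text-overlap score. The sketch below is a drastically simplified, hypothetical version of that idea: the real study matches verb-object pairs between patent text and standardized job-task descriptions and weights them carefully, whereas the phrase extraction, toy texts, and scoring rule here are invented purely for illustration.

```python
# Minimal sketch of an overlap-based "AI exposure" score, loosely in the spirit
# of the patent/job-description matching described above. The bigram phrase
# extraction, the sample texts, and the scoring rule are simplified stand-ins.
import re
from collections import Counter

def phrases(text):
    """Lowercase word bigrams: a crude stand-in for verb-object phrases."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(words, words[1:]))

def exposure(job_description, patent_corpus):
    """Share of a job's phrases that also appear in AI-patent text."""
    job, patents = phrases(job_description), phrases(patent_corpus)
    if not job:
        return 0.0
    overlap = sum(n for bigram, n in job.items() if bigram in patents)
    return overlap / sum(job.values())

if __name__ == "__main__":
    ai_patents = "predict quality of manufactured parts; generate recommendation for treatment"
    radiologist = "analyze images, predict quality of findings, generate recommendation for care"
    landscaper = "plant shrubs, mow lawns, trim hedges"
    print(f"radiologist exposure: {exposure(radiologist, ai_patents):.2f}")
    print(f"landscaper exposure:  {exposure(landscaper, ai_patents):.2f}")
```

In this toy, the analysis-heavy job description overlaps heavily with patent language (“predict quality,” “generate recommendation”) while the manual-work description barely overlaps, which mirrors the pattern the study reports for degree-holding versus high-school-educated workers.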

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

As per its instructions, the Roomba had detected the mess, reversed over it several times in an attempt to clean it up, then trailed it all over the house as it went about its cleaning rounds. ‘I couldn’t be happier right now,’ the miserable DiGiorgio says in a YouTube video that went viral after attracting the attention of Reddit users. DiGiorgio’s story hardly represents the kind of potentially catastrophic AI risk we’ve been describing so far in this chapter. It is a far cry from AIs seizing control of the world’s nuclear weapons supply (à la Terminator) or locking our brains in a giant simulation (The Matrix). However, it demonstrates another side to the AI coin: that artificial stupidity may turn out to be as big a risk as true Artificial Intelligence.

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

Wikipedia offers a more general but less euphonious definition: “Work is the product of the force applied and the displacement of the point where the force is applied in the direction of the force.”
[cclxv] http://www.wsj.com/articles/can-the-sharing-economy-provide-good-jobs-1431288393
[cclxvi] https://www.edge.org/conversation/kevin_kelly-the-technium
[cclxvii] https://www.singularityweblog.com/techemergence-surveys-experts-on-ai-risks/
[cclxviii] http://uk.businessinsider.com/social-skills-becoming-more-important-as-robots-enter-workforce-2015-12
[cclxix] http://www.history.com/topics/inventions/automated-teller-machines
[cclxx] http://www.theatlantic.com/technology/archive/2015/03/a-brief-history-of-the-atm/388547/
[cclxxi] http://www.wsj.com/articles/SB10001424052748704463504575301051844937276
[cclxxii] http://kalw.org/post/robotic-seals-comfort-dementia-patients-raise-ethical-concerns#stream/0
[cclxxiii] http://viterbi.usc.edu/news/news/2013/a-virtual-therapist.htm
[cclxxiv] http://observer.com/2014/08/study-people-are-more-likely-to-open-up-to-a-talking-computer-than-a-human-therapist/
[cclxxv] http://mindthehorizon.com/2015/09/21/avatar-virtual-reality-mental-health-tech/
[cclxxvi] http://www.handmadecake.co.uk/
[cclxxvii] http://www.bbc.co.uk/news/magazine-15551818
[cclxxviii] http://www.oxforddnb.com/view/article/19322
[cclxxix] http://www.ft.com/cms/s/2/c5cf07c4-bf8e-11e5-846f-79b0e3d20eaf.html#axzz3yLGlrr1J
[cclxxx] http://www.bls.gov/cps/cpsaat11.htm
[cclxxxi] https://en.wikipedia.org/wiki/No_Man%27s_Sky
[cclxxxii] http://www.ft.com/cms/s/2/c5cf07c4-bf8e-11e5-846f-79b0e3d20eaf.html#axzz3yLGlrr1J
[cclxxxiii] http://www.inc.com/john-brandon/22-inspiring-quotes-from-famous-entrepreneurs.html
[cclxxxiv] http://www.uh.edu/engines/epi265.htm
[cclxxxv] http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html
[cclxxxvi] http://www.bbc.co.uk/news/technology-35977315
[cclxxxvii] http://fee.org/freeman/the-economic-fantasy-of-star-trek/
[cclxxxviii] https://www.wired.co.uk/news/archive/2012-11/16/iain-m-banks-the-hydrogen-sonata-review
[cclxxxix] http://www.ft.com/cms/s/0/dfe218d6-9038-11e3-a776-00144feab7de.html#axzz3yUOe9Hkp
[ccxc] http://www.brautigan.net/machines.html
[ccxci] As noted in chapter 3.4, Anders Sandberg is James Martin Fellow at the Future of Humanity Institute at Oxford University.

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

Even if the original discoverer takes it slow, word would get out. Perhaps a hacker on the other side of the world could write a virus to usurp the processing power of infected computers all over the globe and launch a home-brew intelligence explosion. This may not require institutional support or a Google budget. The AI-risk think tanks have bright people confronting ethical, political, and philosophical questions that may not be foremost in the minds of AI engineers. The longer the think tanks are able to do that, the more likely they are to come up with conceptual frameworks, options, and solutions that would be useful when an intelligence explosion becomes imminent.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

These forces are combining to create a unique historical phenomenon, one that will shake the foundations of our labor markets, economies, and societies. Even if the most dire predictions of job losses don’t fully materialize, the social impact of wrenching inequality could be just as traumatic. We may never build the folding cities of Hao Jingfang’s science fiction, but AI risks creating a twenty-first-century caste system, one that divides the population into the AI elite and what historian Yuval N. Harari has crudely called the “useless class,” people who can never generate enough economic value to support themselves. Even worse, recent history has shown us just how fragile our political institutions and social fabric can be in the face of intractable inequality.

Global Catastrophic Risks
by Nick Bostrom and Milan M. Cirkovic
Published 2 Jul 2008

Hibbard, B. (2004). Reinforcement Learning as a Context for Integrating AI Research. Presented at the 2004 AAAI Fall Symposium on Achieving Human-level Intelligence through Integrated Systems and Research, edited by N. Cassimatis & D. Winston, The AAAI Press, Menlo Park, California.
Hibbard, B. (2006). Reply to AI Risk. http://www.ssec.wisc.edu/~billh/g/AIRisk_Reply.html
Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal Golden Braid (New York: Random House).
Jaynes, E.T. and Bretthorst, G.L. (2003). Probability Theory: The Logic of Science (Cambridge: Cambridge University Press).
Jensen, A.R. (1999).

The Matrix (Warner Bros, 135 min, USA).
Weisberg, R. (1986). Creativity, Genius and Other Myths (New York: W.H. Freeman).
Williams, G.C. (1966). Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought (Princeton, NJ: Princeton University Press).
Yudkowsky, E. (2006). Reply to AI Risk. http://www.ssec.wisc.edu/~billh/g/AIRisk_Reply.html
16. Big troubles, imagined and real
Frank Wilczek
Modern physics suggests several exotic ways in which things could go terribly wrong on a very large scale. Most, but not all, are highly speculative, unlikely, or remote.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Versus M.D.”17 The adversarial relationship between humans and their technology, which had a long history dating back to the steam engine and the first Industrial Revolution, had been rekindled.
1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Predicted ten years for AI to beat human at chess (Herbert Simon)
1958—Perceptron (single-layer neural network) (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—We know more than we can tell (Michael Polanyi’s paradox)
1969—Question AI viability (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN (Sepp Hochreiter, Jürgen Schmidhuber)
1997—Deep Blue wins in chess (Garry Kasparov)
2004—Self-driving vehicle, Mojave Desert (DARPA Challenge)
2007—ImageNet launches
2011—IBM vs. Jeopardy! champions
2011—Speech recognition NN (Microsoft)
2012—University of Toronto ImageNet classification and cat video recognition (Google Brain, Andrew Ng, Jeff Dean)
2014—DeepFace facial recognition (Facebook)
2015—DeepMind vs. Atari (David Silver, Demis Hassabis)
2015—First AI risk conference (Max Tegmark)
2016—AlphaGo vs. Go (Silver, Demis Hassabis)
2017—AlphaGo Zero vs. Go (Silver, Demis Hassabis)
2017—Libratus vs. poker (Noam Brown, Tuomas Sandholm)
2017—AI Now Institute launched
TABLE 4.2: The AI timeline.
Kasparov’s book, Deep Thinking, which came out two decades later, provides remarkable personal insights about that pivotal AI turning point.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Referencing the Asilomar principles, they cited reasons familiar to those reading this book: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” Shortly after, Italy banned ChatGPT. A complaint against LLMs was filed with the Federal Trade Commission aiming for much tighter regulatory control. Questions about AI risk were asked at the White House press briefing. Millions of people discussed the impacts of technology—at work, at the dinner table. Something is building. Containment it is not, but for the first time the questions of the coming wave are being treated with the urgency they deserve. Each of the ideas outlined so far represents the beginning of a seawall, a tentative tidal barrier starting with the specifics of the technology itself and expanding outward to the imperative of forming a massive global movement for positive change.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

The x-axis represents the relative importance of capability versus safety investment in determining the speed of a team’s progress toward AI. (At 0.5, the safety investment level is twice as important as capability; at 1, the two are equal; at 2, capability is twice as important as safety level; and so forth.) The y-axis represents the level of AI risk (the expected fraction of their maximum utility that the winner of the race gets).
Figure 14. Risk levels in AI technology races. Levels of risk of dangerous AI in a simple model of a technology race involving either (a) two teams or (b) five teams, plotted against the relative importance of capability (as opposed to investment in safety) in determining which project wins the race.
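The mechanism behind a race figure like this can be illustrated with a toy Monte Carlo simulation. The sketch below is not Bostrom’s actual model: the capability distribution, the exponent alpha standing in for the x-axis weighting, and the definition of risk as the winner’s forgone safety are all simplified assumptions chosen only to show how the weighting between capability and safety investment changes who wins the race and how much risk the winner carries.

```python
# Toy sketch (not Bostrom's actual race model): teams differ in capability and
# in how much effort they divert to safety; the weighting exponent alpha
# controls how strongly raw capability (vs. skimping on safety) decides the race.
import random

def simulate_race(n_teams, alpha, n_trials=20_000, seed=0):
    """Average risk carried by the winning team, under illustrative assumptions.

    Each team draws capability c ~ U(0, 1) and a safety level s ~ U(0, 1);
    its speed is c**alpha * (1 - s); the fastest team wins, and the realized
    risk is the winner's forgone safety, 1 - s.
    """
    rng = random.Random(seed)
    total_risk = 0.0
    for _ in range(n_trials):
        teams = [(rng.random(), rng.random()) for _ in range(n_teams)]
        winner = max(teams, key=lambda t: (t[0] ** alpha) * (1.0 - t[1]))
        total_risk += 1.0 - winner[1]
    return total_risk / n_trials

if __name__ == "__main__":
    for n_teams in (2, 5):
        risks = [f"{simulate_race(n_teams, a):.2f}" for a in (0.5, 1.0, 2.0, 4.0)]
        print(f"{n_teams} teams, alpha = 0.5 / 1 / 2 / 4 -> mean winner risk: {' / '.join(risks)}")
```

In this toy, risk is highest when safety-skimping rather than raw capability decides the winner, and in that regime it also grows with the number of competing teams; how closely that tracks the curves in Figure 14 depends on the details of the real model, so treat it only as an intuition pump.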

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

But to do that is going to require engineers who don’t just think about replacing people, but who work with social scientists and ethicists to figure out, “OK. I can put this kind of capability in, but what does it mean if I do that? How does it fit with people?” We need to support building the kind of systems we should build, not just the systems that in the short-term look like they’ll sell and save money. MARTIN FORD: What about AI risks beyond the economic impact? What do you think we should be genuinely concerned about in terms of artificial intelligence, both in the near term and further out? BARBARA GROSZ: From my perspective, there is a set of questions around the capabilities AI provides, the methods it has and what they can be used for, and the design of AI systems that go out in the world.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

Chapter 9 (“Artificial Struggle”) explains that the post-1980 vision that led us astray has also come to define how we conceive of the next phase of digital technologies, artificial intelligence, and how AI is exacerbating the trends toward economic inequality. In contrast to claims made by many tech leaders, we will also see that in most human tasks existing AI technologies bring only limited benefits. Additionally, the use of AI for workplace monitoring is not just boosting inequality but also disempowering workers. Worse, the current path of AI risks reversing decades of economic gains in the developing world by exporting automation globally. None of this is inevitable. In fact, this chapter argues that AI, and even the emphasis on machine intelligence, reflects a very specific path for the development of digital technologies, one with profound distributional effects—benefiting a few people and leaving the rest behind.

pages: 1,034 words: 241,773

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress
by Steven Pinker
Published 13 Feb 2018

Robots turning us into paper clips and other Value Alignment Problems: Bostrom 2016; Hanson & Yudkowsky 2008; Omohundro 2008; Yudkowsky 2008; P. Torres, “Fear Our New Robot Overlords: This Is Why You Need to Take Artificial Intelligence Seriously,” Salon, May 14, 2016.
30. Why we won’t be turned into paper clips: B. Hibbard, “Reply to AI Risk,” http://www.ssec.wisc.edu/~billh/g/AIRisk_Reply.html; R. Loosemore, “The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation,” Institute for Ethics and Emerging Technologies, July 24, 2014, http://ieet.org/index.php/IEET/more/loosemore20140724; A. Elkus, “Don’t Fear Artificial Intelligence,” Slate, Oct. 31, 2014; R.