AlphaGo

description: an artificial intelligence developed by Google's DeepMind to play the board game Go

103 results

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

Rather than resting, though, Sedol stayed up till 6 a.m. the next morning analysing the games he’d lost so far with a group of fellow professional Go players. Did AlphaGo have a weakness they could exploit? The machine wasn’t the only one who could learn and evolve. Sedol felt he might learn something from his losses. Sedol played a very strong opening to game 3, forcing AlphaGo to manage a weak group of stones within his sphere of influence on the board. Commentators began to get excited. Some said Sedol had found AlphaGo’s weakness. But then, as one commentator posted: ‘Things began to get scary. As I watched the game unfold and the realisation of what was happening dawned on me, I felt physically unwell.’ Sedol pushed AlphaGo to its limits but in so doing he revealed the hidden powers that the program seemed to possess.

Sedol and his team had stayed up all of Saturday night trying to reverse-engineer from AlphaGo’s games how it played. It seemed to work on a principle of playing moves that incrementally increase its probability of winning rather than betting on the potential outcome of a complicated single move. Sedol had witnessed this when AlphaGo preferred lazy moves to win game 3. The strategy they’d come up with was to disrupt this sensible play by playing the risky single moves. An all-or-nothing strategy might make it harder for AlphaGo to score so easily. AlphaGo seemed unfazed by this line of attack. Seventy moves into the game, commentators were already beginning to see that AlphaGo had once again gained the upper hand.

This was confirmed by a set of conservative moves that were AlphaGo’s signal that it had the lead. Sedol had to come up with something special if he was going to regain the momentum. If move 37 of game 2 was AlphaGo’s moment of creative genius, move 78 of game 4 was Sedol’s retort. He’d sat there for thirty minutes staring at the board, staring at defeat, when he suddenly placed a white stone in an unusual position, between two of AlphaGo’s black stones. Michael Redmond, who was commentating on the YouTube channel, spoke for everyone: ‘It took me by surprise.’

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

said that he, too, felt a sadness: Cade Metz, “The Sadness and Beauty of Watching Google’s AI Play Go,” Wired, March 11, 2016, https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/. “There was an inflection point”: Ibid. Lee Sedol lost the third game: Metz, “What the AI Behind AlphaGo Can Teach Us About Being Human.” “I don’t know what to say today”: Cade Metz, “In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” Wired, March 16, 2016, https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/. Hassabis found himself hoping the Korean: Metz, “What the AI Behind AlphaGo Can Teach Us About Being Human.” “All the thinking that AlphaGo had done up to that point was sort of rendered useless”: Ibid. “I have improved already”: Ibid. CHAPTER 11: EXPANSION nearly 70 million people are diabetic: “Diabetes Epidemic: 98 Million People in India May Have Type 2 Diabetes by 2030,” India Today, November 22, 2018, https://www.indiatoday.in/education-today/latest-studies/story/98-million-indians-diabetes-2030-prevention-1394158-2018-11-22.

There were more people on the Internet in China: “Number of Internet Users in China from 2017 to 2023,” Statista, https://www.statista.com/statistics/278417/number-of-internet-users-in-china/ An estimated 60 million Chinese had watched the match against Lee Sedol: “AlphaGo Computer Beats Human Champ in Hard-Fought Series,” Associated Press, March 15, 2016, https://www.cbsnews.com/news/googles-alphago-computer-beats-human-champ-in-hard-fought-series/. With a private order sent to all Chinese media in Wuzhen: Cade Metz, “Google Unleashes AlphaGo in China—But Good Luck Watching It There,” Wired, May 23, 2017, https://www.wired.com/2017/05/google-unleashes-alphago-china-good-luck-watching/. Baidu opened its first outpost in Silicon Valley: Daniela Hernandez, “‘Chinese Google’ Opens Artificial-Intelligence Lab in Silicon Valley,” Wired, April 12, 2013, https://www.wired.com/2013/04/baidu-research-lab/.

Still, over this lunch of dumplings and kimchi and grilled meats—which he didn’t eat—Hassabis said he was “cautiously confident.” What the pundits didn’t grasp, he explained, was that AlphaGo had continued to hone its skills since the match in October. He and his team originally taught the machine to play Go by feeding 30 million moves into a deep neural network. From there, AlphaGo played game after game against itself, all the while carefully tracking which moves proved successful and which didn’t—much like the systems the lab had built to play old Atari games. In the months since beating Fan Hui, the machine had played itself several million more times. AlphaGo was continuing to teach itself the game, learning at a faster rate than any human ever could.
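
Hassabis’s description of the recipe (first imitate millions of recorded human moves, then improve by self-play while tracking which moves led to wins) can be made concrete with a deliberately tiny, runnable toy. The sketch below is not DeepMind’s code: it swaps Go for tic-tac-toe and a deep network for a simple win-rate table, and only illustrates the self-play bookkeeping of crediting moves that appeared in winning games.

```python
# Runnable toy illustration (not DeepMind's code) of "playing against itself and
# tracking which moves proved successful": a tabular policy for tic-tac-toe keeps
# win statistics for every (position, move) pair it has tried, and after each
# self-play game the moves made by the winning side are credited.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

# (board, move) -> [wins, games]; starts at 1/2 so unseen moves are not ignored.
stats = defaultdict(lambda: [1, 2])

def choose(board, rng):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if rng.random() < 0.1:                        # keep exploring a little
        return rng.choice(moves)
    return max(moves, key=lambda m: stats[(board, m)][0] / stats[(board, m)][1])

def self_play(rng):
    board, player, history = "." * 9, "X", []
    while winner(board) is None:
        move = choose(board, rng)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        player = "O" if player == "X" else "X"
    result = winner(board)
    for pos, move, who in history:                # credit the moves that led to a win
        stats[(pos, move)][1] += 1
        if who == result:
            stats[(pos, move)][0] += 1

rng = random.Random(0)
for _ in range(20000):
    self_play(rng)

best_opening = max(range(9), key=lambda m: stats[("." * 9, m)][0] / stats[("." * 9, m)][1])
print("highest win-rate opening so far:", best_opening)  # the centre (index 4) is the strongest opening in tic-tac-toe
```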

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

Demis Hassabis noted that “the thing that separates out top Go players [is] their intuition” and that “what we’ve done with AlphaGo is to introduce with neural networks this aspect of intuition, if you want to call it that.”26 How AlphaGo Works There have been several different versions of AlphaGo, so to keep them straight, DeepMind started naming them after the human Go champions the programs had defeated—AlphaGo Fan and AlphaGo Lee—which to me evoked the image of the skulls of vanquished enemies in the collection of a digital Viking. Not what DeepMind intended, I’m sure. In any case, AlphaGo Fan and AlphaGo Lee both used an intricate mix of deep Q-learning, “Monte Carlo tree search,” supervised learning, and specialized Go knowledge.

Johnson quoted one Go enthusiast’s prediction: “It may be a hundred years before a computer beats humans at Go—maybe even longer.” A mere twenty years later, AlphaGo, which learned to play Go via deep Q-learning, beat Lee Sedol in a five-game match. AlphaGo Versus Lee Sedol Before I explain how AlphaGo works, let’s first commemorate its spectacular wins against Lee Sedol, one of the world’s best Go players. Even after watching AlphaGo defeat the then European Go champion Fan Hui half a year earlier, Lee remained confident that he would prevail: “I think [AlphaGo’s] level doesn’t match mine.… Of course, there would have been many updates in the last four or five months, but that isn’t enough time to challenge me.”21 Perhaps you were one of the more than two hundred million people who watched some part of the AlphaGo-Lee match online in March 2016.

This newer version is called AlphaGo Zero because, unlike its predecessor, it started off with “zero” knowledge of Go besides the rules.27 In a hundred games of AlphaGo Lee versus AlphaGo Zero, the latter won every single game. Moreover, DeepMind applied the same methods (though with different networks and different built-in game rules) to learn to play both chess and shogi (also known as Japanese chess).28 The authors called the collection of these methods AlphaZero. In this section, I’ll describe how AlphaGo Zero worked, but for conciseness I’ll simply refer to this version as AlphaGo.

FIGURE 31: An illustration of Monte Carlo tree search

The word intuition has an aura of mystery, but AlphaGo’s intuition (if you want to call it that) arises from its combination of deep Q-learning with a clever method called “Monte Carlo tree search.”
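
Mitchell points to Monte Carlo tree search as half of the story. As a rough, illustrative sketch of the search component alone, here is a minimal UCT-style tree search in Python, run on a toy Nim game rather than Go. The NimState class, the constants, and the plain random rollouts are my own simplifications; AlphaGo and AlphaGo Zero steer this kind of search with neural-network move priors and value estimates rather than random playouts.

```python
# Minimal, generic Monte Carlo tree search (illustrative only; not DeepMind's
# implementation). Plain UCT with random rollouts, applied to a toy Nim game.
import math
import random

class NimState:
    """Toy game: players alternately remove 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=15, player=1):
        self.stones, self.player = stones, player

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, move):
        return NimState(self.stones - move, -self.player)

    def winner(self):
        # The player who just moved took the last stone and wins.
        return -self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> Node
        self.visits, self.wins = 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCT score (exploitation plus exploration bonus).
    return max(node.children.items(),
               key=lambda kv: kv[1].wins / kv[1].visits
               + c * math.sqrt(math.log(node.visits) / kv[1].visits))

def rollout(state):
    # Play random moves to the end of the game; return the winner.
    while state.winner() is None:
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded and non-terminal.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            _, node = uct_select(node)
        # 2. Expansion: add one unexplored child if the game is not over.
        if node.state.winner() is None:
            untried = [m for m in node.state.legal_moves() if m not in node.children]
            move = random.choice(untried)
            node.children[move] = Node(node.state.play(move), parent=node)
            node = node.children[move]
        # 3. Simulation: random rollout from the new node.
        result = rollout(node.state)
        # 4. Backpropagation: credit wins from the perspective of the player who moved in.
        while node is not None:
            node.visits += 1
            if node.parent is not None and result == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(NimState(stones=15)))  # with perfect play, taking 15 % 4 == 3 stones is correct
```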

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

Let’s unpack this last concern a bit. Consider AlphaGo: What purpose does it have? That’s easy, one might think: AlphaGo has the purpose of winning at Go. Or does it? It’s certainly not the case that AlphaGo always makes moves that are guaranteed to win. (In fact, it nearly always loses to AlphaZero.) It’s true that when it’s only a few moves from the end of the game, AlphaGo will pick the winning move if there is one. On the other hand, when no move is guaranteed to win—in other words, when AlphaGo sees that the opponent has a winning strategy no matter what AlphaGo does—then AlphaGo will pick moves more or less at random.

Could something similar happen to machines that are running reinforcement learning algorithms, such as AlphaGo? Initially, one might think this is impossible, because the only way that AlphaGo can gain its +1 reward for winning is actually to win the simulated Go games that it is playing. Unfortunately, this is true only because of an enforced and artificial separation between AlphaGo and its external environment and the fact that AlphaGo is not very intelligent. Let me explain these two points in more detail, because they are important for understanding some of the ways that superintelligence can go wrong. AlphaGo’s world consists only of the simulated Go board, composed of 361 locations that can be empty or contain a black or white stone.

This setup corresponds to the abstract mathematical model of reinforcement learning, in which the reward signal arrives from outside the universe. Nothing AlphaGo can do, as far as it knows, has any effect on the code that generates the reward signal, so AlphaGo cannot indulge in wireheading. Life for AlphaGo during the training period must be quite frustrating: the better it gets, the better its opponent gets—because its opponent is a near-exact copy of itself. Its win percentage hovers around 50 percent, no matter how good it becomes. If it were more intelligent—if it had a design closer to what one might expect of a human-level AI system—it would be able to fix this problem. This AlphaGo++ would not assume that the world is just the Go board, because that hypothesis leaves a lot of things unexplained.
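
Russell’s point about the enforced separation can be pictured as an environment interface: the agent sees only a 19 x 19 array of empty, black, or white points, and the only reward it ever receives is a +1 or -1 handed to it from outside when a game ends. The class below is a toy rendering of that interface, with placeholder termination and scoring rules rather than real Go logic.

```python
# Toy sketch of the agent/environment boundary described above (not the real
# AlphaGo environment): the whole observable "world" is 361 board points, and the
# reward code lives outside anything the agent can influence.
import numpy as np

EMPTY, BLACK, WHITE = 0, 1, 2

class GoEnvironment:
    def __init__(self):
        self.board = np.zeros((19, 19), dtype=np.int8)  # 361 locations

    def step(self, move, colour):
        """Apply a move; the reward stays 0 until the game ends."""
        row, col = move
        self.board[row, col] = colour
        if self.game_over():
            return self.board.copy(), (+1 if self.winner() == colour else -1), True
        return self.board.copy(), 0, False

    def game_over(self):
        # Placeholder termination test (a real engine checks passes, captures, etc.).
        return not (self.board == EMPTY).any()

    def winner(self):
        # Placeholder scoring: whoever has more stones (real Go scoring counts territory).
        return BLACK if (self.board == BLACK).sum() > (self.board == WHITE).sum() else WHITE

env = GoEnvironment()
state, reward, done = env.step((3, 3), BLACK)  # reward is 0 here; only the final move pays out
```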

The Deep Learning Revolution (The MIT Press)
by Terrence J. Sejnowski
Published 27 Sep 2018

Some thought this was cheating—an autonomous AI program should be able to learn how to play Go without human knowledge. In October 2017, a new version, called AlphaGo Zero, was revealed that learned to play Go starting with only the rules of the game, and trounced AlphaGo Master, the version that beat Ke Jie, winning 100 games to none.35 Moreover, AlphaGo Zero learned 100 times faster and with 10 times less compute power than AlphaGo Master. By completely ignoring human knowledge, AlphaGo Zero became super-superhuman. There is no known limit to how much better AlphaGo might become as machine learning algorithms continue to improve. AlphaGo Zero had dispensed with human play, but there was still a lot of Go knowledge handcrafted into the features that the program used to represent the board.

Even DeepMind, the company that had developed AlphaGo, did not know how strong their deep learning program was. Since its last match, AlphaGo had played millions of games with several versions of itself and there was no way to benchmark how good it was. It came as a shock to many when AlphaGo won the first three of five games, exhibiting an unexpectedly high level of play. This was riveting viewing in South Korea, where all the major television stations had a running commentary on the games. Some of the moves made by AlphaGo were revolutionary. On the thirty-seventh move in the match’s second game, AlphaGo made a brilliantly creative play that surprised Lee Sedol, who took nearly ten minutes to respond.

AlphaGo used the same learning algorithm that the basal ganglia evolved to evaluate sequences of actions to maximize future rewards (a process that will be explained in chapter 10). AlphaGo learned by playing itself—many, many times.

Figure 1.8 Go board during play in the five-game match that pitted Korean Go champion Lee Sedol against AlphaGo, a deep learning neural network that had learned how to play Go by playing itself.

The Go match that pitted AlphaGo against Lee Sedol had a large following in Asia, where Go champions are national figures and treated like rock stars. AlphaGo had earlier defeated a European Go champion, but the level of play was considerably below the highest levels of play in Asia, and Lee Sedol was not expecting a strong match.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

One early tell: the AI would not play aggressively unless it was behind. It was a tight first match. AlphaGo earned a very narrow victory, by just 1.5 points. Hui used that information going into the second game. If AlphaGo wasn’t going to play aggressively, then Hui decided that he’d fight early. But then AlphaGo started playing more quickly. Hui mentioned that perhaps he needed a bit more time to think between turns. On move 147, Hui tried to prevent AlphaGo from claiming a big territory in the center of the board, but the move misfired, and he was forced to resign. By game three, Hui’s moves were more aggressive, and AlphaGo followed suit. Halfway through, Hui made a catastrophic overplay, which AlphaGo punished, and then another big mistake, which rendered the game effectively over.

Ari Goldfarb and Daniel Trefler, “AI and International Trade,” The National Bureau of Economic Research, January 2018, http://www.nber.org/papers/w24254.pdf. 36. Toby Manning, “AlphaGo,” British Go Journal 174 (Winter 2015–2016): 15, https://www.britgo.org/files/2016/deepmind/BGJ174-AlphaGo.pdf. 37. Sam Byford, “AlphaGo Retires from Competitive Go after Defeating World Number One 3-0,” Verge, May 27, 2017, https://www.theverge.com/2017/5/27/15704088/alphago-ke-jie-game-3-result-retires-future. 38. David Silver et al., “Mastering the Game of Go Without Human Knowledge,” Nature 550 (October 19, 2017): 354–359, https://deepmind.com/documents/119/agz_unformatted_nature.pdf. 39.

pages: 254 words: 76,064

Whiplash: How to Survive Our Faster Future
by Joi Ito and Jeff Howe
Published 6 Dec 2016

Another AI researcher, Jonathan Schaeffer, noted that Deep Blue was regularly beating chess grandmasters by 1989, but it had taken another eight years for it to become good enough to beat Garry Kasparov. AlphaGo was about to receive its Kasparov moment. In March, Nature revealed, the software would play Lee Sedol, commonly regarded as the greatest living master, or sensei, of the game. “No offence to the AlphaGo team, but I would put my money on the human,” Schaeffer told Nature News. “Think of AlphaGo as a child prodigy. All of a sudden it has learned to play really good Go, very quickly. But it doesn’t have a lot of experience. What we saw in chess and checkers is that experience counts for a lot.”7 Not everyone has cheered on the machine’s inexorable invasion of all aspects of our lives.

It wasn’t designed to wow the 280 million people who would eventually watch the series, but from someone of Sedol’s rank, it constituted nearly unbeatable play, and Sedol exuded a quiet but unmistakeable confidence. Then, as the game began to enter its middle phase, AlphaGo did something unusual: it instructed its human attendant to place a black stone in a largely unoccupied area to the right of the board. This might have made sense in another context, but on that board at that moment AlphaGo seemed to be abandoning the developing play in the lower half of the board. This historic move was something that no human would have feasibly played—AlphaGo calculated the probability that a human would play that move at 1 in 10,000.9 It produced instant shock and confusion among the spectators.

Having so handily defeated Sedol at his ingenious best, AlphaGo seemed fated to execute a clean sweep in the last two games. And nothing during the first half of game four seemed to indicate the contrary. But then Sedol did something radical and unexpected—he played a “wedge” move in the middle of the board. AlphaGo, it suddenly became clear to millions of people around the world, had no idea how to respond. It made several clumsy plays and then conceded. Sedol, commentators noted, had created a masterpiece—a potential myoshu all his own. AlphaGo ended up winning four out of the five matches. One could imagine that a computer beating a historically legendary Go champion might diminish interest in Go for humans or make it less interesting to play.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

Reg. 3967 (February 14, 2019), https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence. 73updated R&D plan: The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (Select Committee on Artificial Intelligence, National Science & Technology Council, June 2019), https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf. 73Chinese leaders issued a series of implementation plans: “AI in China,” OECD.AI Policy Observatory, updated September 21, 2021, https://oecd.ai/dashboards/countries/China. 73“Three-Year Action Plan”: “工业和信息化部发布《促进新一代人工智能产业发展三年行动计划(2018-2020年)》[The Ministry of Industry and Information Technology issued the ‘Three-Year Action Plan (2018-2020) for Promoting the Development of the New Generation Artificial Intelligence Industry’],” Ministry of Industry and Information Technology of the People’s Republic of China, December 14, 2017, http://www.miit.gov.cn/n1146290/n4388791/c5960863/content.html (page discontinued), https://web.archive.org/web/20180821120845/http://www.miit.gov.cn/n1146290/n4388791/c5960863/content.html; Paul Triolo, Elsa Kania, and Graham Webster, “Translation: Chinese Government Outlines AI Ambitions through 2020,” New America Blog, January 26, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinese-government-outlines-ai-ambitions-through-2020/. 73“Thirteenth Five-Year Science and Technology Military-Civil Fusion Special Projects Plan”: PRC Ministry of Science and Technology, “The ‘13th Five-Year’ Special Plan for S&T Military-Civil Fusion Development,” translated by Etcetera Language Group, Center for Security and Emerging Technology, June 10, 2020, https://cset.georgetown.edu/wp-content/uploads/t0163_13th_5YP_mil_civ_fusion_EN.pdf. 73“Accelerating the development of a new generation of AI”: Elsa Kania and Rogier Creemers, “Xi Jinping Calls for ‘Healthy Development’ of AI (Translation),” New America Blog, November 5, 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/xi-jinping-calls-for-healthy-development-of-ai-translation/. 73“There’s no question that there was a Sputnik moment”: Eric Schmidt, interview by author, June 9, 2020. 73DeepMind’s AlphaGo: “AlphaGo,” DeepMind, n.d., https://deepmind.com/research/case-studies/alphago-the-story-so-far; Alex Hern, “China Censored Google’s AlphaGo Match against World’s Best Go Player,” The Guardian, May 24, 2017, https://www.theguardian.com/technology/2017/may/24/china-censored-googles-alphago-match-against-worlds-best-go-player; “AlphaGo China,” DeepMind, 2017, https://deepmind.com/alphago-china. 73“not only was it notable, but they also censored”: Schmidt, interview. 73Go is an ancient strategy game: “A Brief History of Go,” American Go Association, n.d., https://www.usgo.org/brief-history-go; Peter Shotwell, The Game of Go: Speculations on Its Origins and Symbolism in Ancient China (American Go Association, updated February 2008), https://www.usgo.org/sites/default/files/bh_library/originsofgo.pdf. 73“I am not a person who believes that we are adversaries with China”: Schmidt, interview. 74a “strategic competitor”: Summary of the 2018 National Defense Strategy of the United States of America: Sharpening the American Military’s Competitive Edge (U.S.

Brown, Measuring the Algorithmic Efficiency of Neural Networks (arXiv.org, n.d.), https://arxiv.org/pdf/2005.04305.pdf. 298compute efficiency for both training and inference: Hernandez and Brown, Measuring the Algorithmic Efficiency of Neural Networks, 9–10; Radosvet Desislavov et al., Compute and Energy Consumption Trends in Deep Learning Inference (arXiv.org, September 12, 2021), https://arxiv.org/pdf/2109.05472.pdf. 298progress in algorithmic efficiency: Katja Grace, Algorithmic Progress in Six Domains (technical report no. 2013-3, Machine Intelligence Research Institute, 2013), https://intelligence.org/files/AlgorithmicProgress.pdf. 298compute-heavy models much more accessible: Desislavov et al., Compute and Energy Consumption Trends in Deep Learning Inference. 298ASIC optimized for deep learning: “Cloud TPU,” Google Cloud, n.d., https://cloud.google.com/tpu; “Cloud Tensor Processing Units (TPUs),” Google Cloud, n.d., https://cloud.google.com/tpu/docs/tpus. 298reduced energy consumption: The metric DeepMind used to compare AlphaGo versions, thermal design power (TDP), is not a direct measure of energy consumption. It is a rough first-order proxy, however, for power consumption. David Silver and Demis Hassabis, “AlphaGo Zero: Starting From Scratch,” DeepMind Blog, October 18, 2017, https://deepmind.com/blog/article/alphago-zero-starting-scratch. 298reduced compute usage to only 4 TPUs: Silver and Hassabis, “AlphaGo Zero: Starting From Scratch”; “AlphaGo,” DeepMind, n.d., https://deepmind.com/research/case-studies/alphago-the-story-so-far; David Silver et al., “Mastering the Game of Go without Human Knowledge,” Nature 550 (October 19 2017), 354–355, https://www.nature.com/articles/nature24270.epdf. 298reduced the compute needed for training by a factor of eight: Hernandez and Brown, Measuring the Algorithmic Efficiency of Neural Networks, 18. 298may make AI models available: Desislavov et al., Compute and Energy Consumption Trends in Deep Learning Inference; Sharir et al., The Cost of Training NLP Models, 3. 298AI training costs could be as much as thirty times higher: Khan and Mann, AI Chips, 26. 299costly and locks out university researchers: Rodney Brooks, “A Better Lesson,” Rodney Brooks (personal website), March 19, 2019, https://rodneybrooks.com/a-better-lesson/; Kevin Vu, “Compute Goes Brrr: Revisiting Sutton’s Bitter Lesson for Artificial Intelligence,” DZone.com, March 11, 2021, https://dzone.com/articles/compute-goes-brrr-revisiting-suttons-bitter-lesson; Bommasani et al., On the Opportunities and Risks of Foundation Models. 299contributes to carbon emissions: “On the Dangers of Stochastic Parrots”; Brooks, “A Better Lesson”; Vu, “Compute Goes Brrr”; Lasse F.

Heron Systems’ AI in the AlphaDogfight competition employed high-precision, split-second gunshots, demonstrating a “superhuman capability” making shots that were “almost impossible” for humans, as one fighter pilot explained. During AlphaGo’s celebrated victory over Lee Sedol, it made a move that so stunned Lee that he got up from the table and left the room. AlphaGo calculated the odds that a human would have made that move (based on its database of 30 million expert human moves) as 1 in 10,000. AlphaGo’s move wasn’t just better. It was inhuman. AlphaGo’s unusual move wasn’t a fluke. AlphaGo plays differently than humans in a number of ways. It will carry out multiple simultaneous attacks on different parts of the board, whereas human players tend to focus on one region.

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

Not only did the move feel like a move no human player would make, it was a move no human player probably would ever make. AlphaGo rated the odds that a human would have made that move as 1 in 10,000. Yet AlphaGo made the move anyway. AlphaGo went on to win game 2 and afterward Lee Sedol said, “I really feel that AlphaGo played the near perfect game.” After losing game 3, thus giving AlphaGo the win for the match, Lee Sedol told the audience at a press conference, “I kind of felt powerless.” AlphaGo’s triumph over Lee Sedol has implications far beyond the game of go. More than just another realm of competition in which AIs now top humans, the way DeepMind trained AlphaGo is what really matters.

Connecticut teenager: Rick Stella, “Update: FAA Launches Investigation into Teenager’s Gun-Wielding Drone Video,” Digital Trends, July 22, 2015, https://www.digitaltrends.com/cool-tech/man-illegally-straps-handgun-to-a-drone/. 119 For under $500: “Spark,” DJI.com. 122 Shield AI: “Shield AI,” http://shieldai.com/ 122 grant from the U.S. military: Mark Prigg, “Special Forces developing ‘AI in the sky’ drones that can create 3D maps of enemy lairs: Pentagon reveals $1m secretive ‘autonomous tactical airborne drone’ project,” DailyMail.com, http://www.dailymail.co.uk/sciencetech/article-3776601/Special-Forces-developing-AI-sky-drones-create-3D-maps-enemy-lairs-Pentagon-reveals-1m-secretive-autonomous-tactical-airborne-drone-project.html. 123 “Robotics and artificial intelligence are”: Brandon Tseng, email to author, June 17, 2016. 124 “fully automated combat module”: “Kalashnikov Gunmaker Develops Combat Module based on Artificial Intelligence.” 125 more possible positions in go: “AlphaGo,” DeepMind, accessed June 7, 2017, https://deepmind.com/research/alphago/. 125 “Our goal is to beat the best human players”: “AlphaGo: Using Machine Learning to Master the Ancient Game of Go,” Google, January 27, 2016, http://blog.google:443/topics/machine-learning/alphago-machine-learning-game-go/. 126 game 2, on move 37: Daniel Estrada, “Move 37!! Lee Sedol vs AlphaGo Match 2” video, https://www.youtube.com/watch?v=JNrXgpSEEIE. 126 “I thought it was a mistake”: Ibid. 126 “It’s not a human move”: Cade Metz, “The Sadness and Beauty of Watching Google’s AI Play Go,” WIRED, March 11, 2016, https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go/. 126 1 in 10,000: Cade Metz, “In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” WIRED, accessed June 7, 2017, https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/. 126 “I kind of felt powerless”: Moyer, “How Google’s AlphaGo Beat a Go World Champion.” 126 “AlphaGo isn’t just an ‘expert’ system”: “AlphaGo,” January 27, 2016. 127 AlphaGo Zero: “AlphaGo Zero: Learning from Scratch,” DeepMind, accessed October 22, 2017, https://deepmind.com/blog/alphago=zero=learning=scratch/. 127 neural network to play Atari games: Volodymyr Mnih et al., “Human-Level Control through Deep Reinforcement Learning,” Nature 518, no. 7540 (February 26, 2015): 529–33. 127 deep neural network: JASON, “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD.” 129 Inception-v3: Inception-v3 is trained for the Large Scale Visual Recognition Challenge (LSVRC) using the 2012 data.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

See “Explore the AlphaGo Master series”, DeepMind Website, https://deepmind.com/research/alphago/match-archive/master/, accessed 16 August 2018. DeepMind promptly announced AlphaGo’s retirement from the game to pursue other interests. See Jon Russell, “After Beating the World’s Elite Go Players, Google’s AlphaGo AI Is Retiring”, Tech Crunch, 27 May 2017, https://techcrunch.com/2017/05/27/googles-alphago-ai-is-retiring/ accessed 1 June 2018. Rather like a champion boxer tempted out of retirement for one more fight, AlphaGo returned a year later to face a new challenger, a program bearing a similar name: AlphaGo Zero.

, Quora, 28 July 2016, https://​www.​quora.​com/​What-are-some-recent-and-potentially-upcoming-breakthroughs-in-deep-learning, accessed 16 August 2018. 128Andrea Bertolini, “Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules”, Law Innovation and Technology, Vol. 5, No. 2 (2013), 214–247, 234–235. 129See Chapter 1 at s. 5 and FN 111. A subsequent iteration of AlphaGo, “AlphaGo Master” beat Ke Jie, at the time the world’s top-ranked human player, by three games to nil in May 2017. See “AlphaGo at The Future of Go Summit, 23–27 May 2017”, DeepMind Website, https://​deepmind.​com/​research/​alphago/​alphago-china/​, accessed 16 August 2018. 130Silver et al., “AlphaGo Zero: Learning from Scratch”, DeepMind Website, 18 October 2017, https://​deepmind.​com/​blog/​alphago-zero-learning-scratch/​, accessed 1 June 2018. See also the paper published by the DeepMind team: David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis, “Mastering the Game of Go Without Human Knowledge”, Nature, Vol. 550 (19 October 2017), 354–359, https://​doi.​org/​10.​1038/​nature24270, accessed 1 June 2018. 131Silver et al., “AlphaGo Zero: Learning from Scratch”, DeepMind Website, 18 October 2017, https://​deepmind.​com/​blog/​alphago-zero-learning-scratch/​, accessed 1 June 2018. 132Matej Balog, Alexander L.

For an account, see Chris Baraniuk, “The Cyborg Chess Player Who Can’t Be Beaten”, BBC Website, 4 December 2015, http://​www.​bbc.​com/​future/​story/​20151201-the-cyborg-chess-players-that-cant-be-beaten, accessed 1 June 2018. 109The situation is somewhat complicated in that Kasparov had held the Fédération Internationale des Échecs (FIDE) world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association. 110Nick Bostrom, Superintelligence : Paths, Dangers and Strategies (Oxford: Oxford University Press, 2014), 16. 111In May 2017, a subsequent version of the program, “AlphaGo Master”, defeated the world champion Go player, Ke Jie by three games to nil. See “AlphaGo at The Future of Go Summit, 23–27 May 2017”, DeepMind Website, https://​deepmind.​com/​research/​alphago/​alphago-china/​, accessed 16 August 2018. Perhaps as a control against accusations that top players were being beaten psychologically by the prospect of playing an AI system rather than on the basis of skill, DeepMind had initially deployed AlphaGo Master in secret, during which period it beat 50 of the world’s top players online, playing under the pseudonym “Master”.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

The engineers simply thought the board offered too many possibilities for a computer to evaluate. But on this day AlphaGo wasn’t just beating Ke Jie—it was systematically dismantling him. Over the course of three marathon matches of more than three hours each, Ke had thrown everything he had at the computer program. He tested it with different approaches: conservative, aggressive, defensive, and unpredictable. Nothing seemed to work. AlphaGo gave Ke no openings. Instead, it slowly tightened its vise around him. THE VIEW FROM BEIJING What you saw in this match depended on where you watched it from. To some observers in the United States, AlphaGo’s victories signaled not just the triumph of machine over man but also of Western technology companies over the rest of the world.

Remove Deep Blue from the geometric simplicity of an eight-by-eight-square chessboard and it wouldn’t seem very intelligent at all. In the end, the only job it was threatening to take was that of the world chess champion. This time, things are different. The Ke Jie versus AlphaGo match was played within the constraints of a Go board, but it is intimately tied up with dramatic changes in the real world. Those changes include the Chinese AI frenzy that AlphaGo’s matches sparked and the underlying technology that powered it to victory. AlphaGo runs on deep learning, a groundbreaking approach to artificial intelligence that has turbocharged the cognitive capabilities of machines. Deep-learning-based programs can now do a better job than humans at identifying faces, recognizing speech, and issuing loans.

These internet juggernauts had given the United States a dominance of the digital world that matched its military and economic power in the real world. With AlphaGo—a product of the British AI startup DeepMind, which had been acquired by Google in 2014—the West appeared poised to continue that dominance into the age of artificial intelligence. But looking out my office window during the Ke Jie match, I saw something far different. The headquarters of my venture-capital fund is located in Beijing’s Zhongguancun (pronounced “jong-gwan-soon”) neighborhood, an area often referred to as “the Silicon Valley of China.” Today, Zhongguancun is the beating heart of China’s AI movement. To people here, AlphaGo’s victories were both a challenge and an inspiration.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Google’s co-founder Sergey Brin encouraged us to tackle it, arguing that any progress would be impressive enough. AlphaGo initially learned by watching 150,000 games played by human experts. Once we were satisfied with its initial performance, the key next step was creating lots of copies of AlphaGo and getting it to play against itself over and over. This meant the algorithm was able to simulate millions of new games, trying out combinations of moves that had never been played before, and therefore efficiently explore a huge range of possibilities, learning new strategies in the process. Then, in March 2016, we organized a tournament in South Korea. AlphaGo was pitted against Lee Sedol, a virtuoso world champion.

Within the AI community, it represented a first high-profile public test of deep reinforcement learning and one of the first research uses of a very large cluster of GPU computation. In the press the matchup between AlphaGo and Lee Sedol was presented as an epic battle: human versus machine; humanity’s best and brightest against the cold, lifeless force of a computer. Cue all the tired tropes of Terminators and robot overlords. But under the surface, another, more important dimension was becoming clear, a tension I’d dimly worried about ahead of the contest, but the contours of which emerged more starkly as the event unfolded. AlphaGo wasn’t just human versus machine. As Lee Sedol squared up against AlphaGo, DeepMind was represented by the Union Jack, while the Sedol camp flew the taegeukgi, South Korea’s unmistakable flag.

It was far from clear who would win. Most commentators backed Sedol going into round one. But AlphaGo won the first game, much to our shock and delight. In the second game came move number 37, a move now famous in the annals of both AI and Go. It made no sense. AlphaGo had apparently blown it, blindly following a losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a “very strange move” and thought it was “a mistake.” It was so unusual that Sedol took fifteen minutes to respond and even got up from the board to take a walk outside.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

Not only is this subtle, human word warped by its wishfully mnemonic application to this program, it even seems wishful to say that AlphaGo plays Go at all, if one considers the human definition of the word ‘play’: ‘to engage in activity for enjoyment and recreation rather than a serious or practical purpose’. In that sense, AlphaGo does not play Go so much as it reduces the game to the maximization of a value function. There is no psychological evidence to suggest that any human being plays Go in this way, or that human intuition has anything in common with any given aspect of AlphaGo’s processing. In fact, given that AlphaGo has reduced the game to a mathematical optimization problem, examining more moves in its training and search than any human master is ever likely to play, and utilizing up to 1378 computer CPUs in that process, the dramatic triumph is that a human is able to win even one round against AlphaGo.

Maddison, et al. (2016), Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529, 484–48,. doi: 10.1038/nature16961 15Steven Borowiec, 2016, AlphaGo Seals 4-1 Victory Over Go Grandmaster Lee Sedol. Guardian, www.theguardian.com/technology/2016/mar/15/googles-alphago-seals-4-1-victory-over-grandmaster-lee-sedol 16Adrian Lee, 2016, The Meaning of AlphaGo, The AI Program that Beat a Go Champ. MacLean’s, www.macleans.ca/society/science/the-meaning-of-alphago-the-ai-program-that-beat-a-go-champ/ 17The emphases on the words ‘think’ and ‘intuitively’ are mine. 18Galang Lufityanto, Chris Donkin and Joel Pearson, 2014, Measuring Intuition: Unconscious Emotional Information Boost Decision-Making Accuracy and Confidence. 18th Association for the Scientific Study of Consciousness , Psychological Science 27(5), www.researchgate.net/publication/265165687_Measuring_Intuition_Unconscious_Emotional_Information_Boost_Decision-Making_Accuracy_and_Confidence 19Association for Psychological Science, 2016, Intuition – It’s More than a Feeling, www.psychologicalscience.org/news/minds-business/intuition-its-more-than-a-feeling.html 20Ariadna Matamoros-Fernández, 2018, Inciting Anger through Facebook Reactions in Belgium: The Use of Emoji and Related Vernacular Expressions in Racist Discourse.

The neural networks provide you with good intuitions, and that’s what the other programs were lacking, and that’s what people didn’t really understand computers could do. If you think that sounds quite incredible, it is, in the true meaning of that word. The implication is that AlphaGo is a computer program that can think about all the possible alternatives and then intuitively decide on the most strategic move in one of the most challenging human games. But given what we know of wishful mnemonics, it’s important to consider how AlphaGo actually works in order to discover whether it is thinking and intuiting. AlphaGo’s system consists of two deep-learning neural networks (of the type discussed in the previous chapter); that is to say, two deeply layered sets of massive, nested mathematical functions.
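
As a concrete, if miniature, picture of the “two deep-learning neural networks” Smith describes, the sketch below defines a policy network that outputs a probability for each of the 361 board points (plus a pass move) and a value network that outputs a single estimate of how good a position looks. The layer sizes and the three-plane board encoding are illustrative placeholders of mine, far smaller than the networks reported in DeepMind’s papers.

```python
# Bare-bones sketch of the policy-network / value-network split (illustrative
# placeholder architecture, not DeepMind's).
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 19 * 19, 19 * 19 + 1)  # 361 points plus "pass"

    def forward(self, board):                     # board: (batch, 3, 19, 19)
        x = self.body(board).flatten(1)
        return torch.softmax(self.head(x), dim=1)  # probability over candidate moves

class ValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 19 * 19, 1)

    def forward(self, board):
        x = self.body(board).flatten(1)
        return torch.tanh(self.head(x))            # estimated outcome in [-1, 1]

board = torch.randn(1, 3, 19, 19)                  # placeholder position encoding
move_probs = PolicyNet()(board)                    # which moves look promising
position_value = ValueNet()(board)                 # how good the position looks
```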

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

But it had also made it possible. No sooner had AlphaGo reached the pinnacle of the game of Go, however, than it was, in 2017, summarily dethroned, by an even stronger program called AlphaGo Zero.86 The biggest difference between the original AlphaGo and AlphaGo Zero was in how much human data the latter had been fed to imitate: zero. From a completely random initialization, tabula rasa, it simply learned by playing against itself, again and again and again and again. Incredibly, after just thirty-six hours of self-play, it was as good as the original AlphaGo, which had beaten Lee Sedol. After seventy-two hours, the DeepMind team set up a match between the two, using the exact same two-hour time controls and the exact version of the original AlphaGo system that had beaten Lee.

Recent research has looked into ways to automatically identify tasks of appropriate difficulty, and examples that can maximally promote learning in the network. The early results in this vein are promising, and work is ongoing.30 Perhaps the single most impressive achievement in automated curriculum design, however, is DeepMind’s board game–dominating work with AlphaGo and its successors AlphaGo Zero and AlphaZero. “AlphaGo always has an opponent at just the right level,” explains lead researcher David Silver.31 “It starts off extremely naïve; it starts off with completely random play. And yet at every step of the learning process, it has an opponent—a sparring partner, if you like—that’s exactly calibrated to its current level of performance.”

AlphaGo Zero, which consumed a tenth of the power of the original system, and which seventy-two hours earlier had never played a single game, won the hundred-game series—100 games to 0. As the DeepMind research team wrote in their accompanying Nature paper, “Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books.”87 AlphaGo Zero discovered it all and more in seventy-two hours. But there was something very interesting, and very instructive, going on under the hood.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

The DeepMind team began by using a supervised learning technique to train AlphaGo’s neural networks on thirty million moves extracted from detailed records of games played by the best human players. It then turned to reinforcement learning, essentially turning the system loose to play against itself. Over the course of thousands of simulated practice games, and under the relentless pressure of a reward-based drive to improve, AlphaGo’s deep neural networks gradually progressed toward superhuman proficiency.7 The triumph of AlphaGo over Lee Sedol in 2016, and then over the world’s top-ranked player, Ke Jie, a year later, once again sent shock waves through the AI research community.
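
Ford names the two training phases explicitly: supervised imitation of expert moves, then reinforcement learning from self-play outcomes. The PyTorch fragment below sketches the two corresponding loss functions; the miniature network, the tensor shapes, and the random placeholder batches are assumptions made for illustration, not the actual AlphaGo architecture or data pipeline.

```python
# Schematic two-stage sketch (not DeepMind's code): (1) imitate expert moves with
# a cross-entropy loss, (2) improve via self-play with a REINFORCE-style update
# that nudges the policy toward moves that occurred in won games.
import torch
import torch.nn as nn

BOARD_POINTS = 19 * 19  # 361 possible moves (ignoring "pass" for simplicity)

policy_net = nn.Sequential(               # deliberately tiny stand-in network
    nn.Flatten(),
    nn.Linear(BOARD_POINTS * 3, 256),      # 3 placeholder planes: own, opponent, empty
    nn.ReLU(),
    nn.Linear(256, BOARD_POINTS),          # one logit per board point
)
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

# --- Stage 1: supervised learning on recorded human games -------------------
positions = torch.randn(64, 3, 19, 19)                 # placeholder batch of positions
expert_moves = torch.randint(0, BOARD_POINTS, (64,))   # placeholder expert labels
supervised_loss = nn.functional.cross_entropy(policy_net(positions), expert_moves)
optimizer.zero_grad()
supervised_loss.backward()
optimizer.step()

# --- Stage 2: reinforcement learning from self-play -------------------------
# Pretend one self-play game was replayed and, for each position, we recorded the
# move played and whether the side to move eventually won (+1) or lost (-1).
game_positions = torch.randn(30, 3, 19, 19)
game_moves = torch.randint(0, BOARD_POINTS, (30,))
outcomes = (torch.rand(30) > 0.5).float() * 2 - 1      # placeholder +1 / -1 results

log_probs = nn.functional.log_softmax(policy_net(game_positions), dim=1)
chosen = log_probs[torch.arange(30), game_moves]
rl_loss = -(outcomes * chosen).mean()   # reinforce winning moves, discourage losing ones
optimizer.zero_grad()
rl_loss.backward()
optimizer.step()
```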

However, China’s recent rapid progress in artificial intelligence has been significantly accelerated and orchestrated by an explicit industrial policy articulated by the central government. Many observers believe that the catalyst for the sudden surge of interest in AI on the part of the Chinese Communist Party was the highly touted contest between DeepMind’s AlphaGo system and Go champion Lee Sedol that took place in March 2016. The game of Go originated in China at least 2,500 years ago and is wildly popular and revered among the Chinese public. AlphaGo’s 4–1 triumph, which took place over seven days in Seoul, South Korea, was viewed live by more than 280 million people in China—nearly three times the audience that tunes in for a few hours to watch a typical Super Bowl.

The specter of a computer defeating a top human player at an intellectual pursuit so deeply rooted in Chinese history and culture made an indelible impression on the public as well as on Chinese academics, technologists and government bureaucrats. Kai-Fu Lee, a Beijing-based venture capitalist and author, calls the AlphaGo–Lee Sedol match “China’s Sputnik moment,” in reference to the Soviet satellite that galvanized public support for the U.S. space program in the 1950s.6 Just over a year later, a second contest was held in Wuzhen, China. In a three-game match carrying a $1.5 million prize for the winner, AlphaGo defeated the Chinese player Ke Jie, who was then ranked number one in the world, by prevailing in three straight games. This time around, however, there was no live audience.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

Trying to understand AlphaGo in this way is pointless: AlphaGo is a program that was optimized to do one single thing – to play the game of Go. We want to attribute motives and reasoning and strategy to the program, but there are none of these: AlphaGo’s extraordinary capability is captured in the weightings in its neural nets. These neural nets are nothing more than very long lists of numbers, and we have no way of extracting or rationalizing the expertise that they embody. AlphaGo can’t tell us why it made its moves, and this, as we will see, is one of the key challenges with deep learning. AlphaGo was widely touted as a triumph for the new AI of deep learning and big data – and indeed it was.

But if you dig beneath the surface, you will find that an awful lot of the clever engineering in AlphaGo is classic AI search. Arthur Samuel, who in the 1950s developed the checkers-playing program we discussed in Chapter 2, would have had no difficulty in understanding the search techniques used in AlphaGo: there is an unbroken thread from his work in the 1950s through to the most celebrated AI system of the modern era. One might think that two landmark achievements were enough, but, just 18 months later, DeepMind were in the news again, this time with a generalization of AlphaGo called AlphaGo Zero. The extraordinary thing about AlphaGo Zero is that it learned how to play to a super-human level without any human supervision at all: it just played against itself.16 To be fair, it had to play itself a lot, but nevertheless it was a striking result, and it was further generalized in another follow-up system called AlphaZero, which learned to play a range of other games, including chess: after just nine hours of self-play, AlphaZero was able to consistently beat or draw against Stockfish, one of the world’s leading dedicated chess-playing programs.

Before the system was announced, DeepMind hired Fan Hui, a European Go champion, to play against AlphaGo: the system beat him five games to zero. This was the first time a Go program had beaten a human champion player in a full game. Shortly after, DeepMind announced that AlphaGo was going to be pitted against Lee Sedol, a world champion Go player, in a five-match competition to be held in Seoul, Korea, in March 2016. The science of AlphaGo is fascinating, and AI researchers – myself included – were intrigued to see what would happen. (For the record, my guess was that AlphaGo might win one or two matches at most, but that Sedol would decisively win the competition overall.)

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

In March 2016, as much as ten years before even the most optimistic AI analysts predicted it would happen, AlphaGo beat champion Lee Sedol, then ranked second worldwide in Go, in a five-game match. Then, in 2017, at the ‘Future of Go’ summit, its successor, AlphaGo Master, beat Ke Jie, the world’s number-one-ranked player at the time, in a three-game match. So AlphaGo Master officially became the world champion. With no humans left to beat, DeepMind developed a new AI from scratch – AlphaGo Zero – to play against AlphaGo Master. After just a short period of training, AlphaGo Zero achieved a 100–0 victory against the champion, AlphaGo Master. Its successor, the self-taught AlphaZero, is currently perceived as the world champion of Go.

But within nineteen hours, this had changed. AlphaGo Zero had learned by then the fundamentals of Go strategies, such as life-and-death, influence and territory. Within seventy hours it was playing at superhuman level and had surpassed the abilities of AlphaGo, the version that beat world champion Lee Sedol. After twenty-one days it had reached the level of AlphaGo Master, the version that defeated sixty top professionals online and the world champion, Ke Jie, in three out of three games. By day forty, AlphaGo Zero surpassed all other versions of AlphaGo and, arguably, this newly born intelligent being had already become the smartest being in existence on the task it had set out to learn.

How far-reaching would a delay of forty-five days be in response to the first AI threat warning? If the rate at which AlphaGo Zero learned to master humanity’s most challenging strategy game is any indication, then by the end of forty-five days, humanity would be toast. Let me explain. After beating the world champion in Go, DeepMind, who created the AI, started from scratch with AlphaGo Zero. On Day Zero, AlphaGo Zero had no prior knowledge of the game Go and was only given the basic rules as input. Three hours later, AlphaGo Zero was already playing like a beginner, forgoing long-term strategy to focus on greedily capturing as many stones as possible.

pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Published 26 Jun 2017

He predicted he would win at least four games out of five, saying, “Looking at the match in October, I think (AlphaGo’s) level doesn’t match mine.” The games between Sedol and AlphaGo attracted intense interest throughout Korea and other East Asian countries. AlphaGo won the first three games, ensuring itself of victory overall in the best-of-five match. Sedol came back to win the fourth game. His victory gave some observers hope that human cleverness had discerned flaws in a digital opponent, ones that Sedol could continue to exploit. If so, they were not big enough to make a difference in the next game. AlphaGo won again, completing a convincing 4–1 victory in the match.

A team at Google DeepMind, a London-based company specializing in machine learning (a branch of artificial intelligence we’ll discuss more in Chapter 3), published “Mastering the Game of Go with Deep Neural Networks and Tree Search,” and the prestigious journal Nature made it the cover story. The article described AlphaGo, a Go-playing application that had found a way around Polanyi’s Paradox. The humans who built AlphaGo didn’t try to program it with superior Go strategies and heuristics. Instead, they created a system that could learn them on its own. It did this by studying lots of board positions in lots of games. AlphaGo was built to discern the subtle patterns present in large amounts of data, and to link actions (like playing a stone in a particular spot on the board) to outcomes (like winning a game of Go).§ The software was given access to 30 million board positions from an online repository of games and essentially told, “Use these to figure out how to win.”

AlphaGo also played many games against itself, generating another 30 million positions, which it then analyzed. The system did conduct simulations during games, but only highly focused ones; it used the learning accumulated from studying millions of positions to simulate only those moves it thought most likely to lead to victory. Work on AlphaGo began in 2014. By October of 2015, it was ready for a test. In secret, AlphaGo played a five-game match against Fan Hui, who was then the European Go champion. The machine won 5–0.
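
The phrase “highly focused” simulations can be unpacked a little: rather than sampling every legal move equally, the search spends its simulation budget according to a learned prior, so moves the policy rates as promising are explored far more often. The fragment below illustrates that allocation with a PUCT-style score; the priors, constants, and stand-in outcomes are placeholder numbers of mine, not values from DeepMind’s papers.

```python
# Sketch of prior-guided simulation budgeting (illustrative constants and values).
import math

def puct_score(prior, value_estimate, move_visits, parent_visits, c_puct=1.5):
    """Higher is better: current value estimate plus a prior-weighted exploration bonus."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + move_visits)
    return value_estimate + exploration

# Suppose the policy assigns these priors to four candidate moves (placeholder
# numbers) and we have 200 simulations to spend.
priors = {"D4": 0.55, "Q16": 0.30, "K10": 0.10, "A1": 0.05}
visits = {m: 0 for m in priors}
values = {m: 0.0 for m in priors}          # running mean of simulation outcomes

for _ in range(200):
    parent_visits = 1 + sum(visits.values())
    # Pick the move with the best PUCT score, simulate it, and update its statistics.
    move = max(priors, key=lambda m: puct_score(priors[m], values[m], visits[m], parent_visits))
    outcome = 0.1 if move != "A1" else -0.5   # stand-in for a rollout / value-network result
    visits[move] += 1
    values[move] += (outcome - values[move]) / visits[move]

print(visits)   # most simulations concentrate on the high-prior, high-value moves
```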

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

When IBM beat world chess champion Garry Kasparov with Deep Blue in 1997, the supercomputer was filled with all the know-how its programmers could gather from human chess experts.[80] It was not useful for anything else; it was a chess-playing machine. By contrast, AlphaGo Zero was not given any human information about Go except for the rules of the game, and after about three days of playing against itself, it evolved from making random moves to easily defeating its previous human-trained incarnation, AlphaGo, by 100 games to 0.[81] (In 2016, AlphaGo had beaten Lee Sedol, who at the time ranked second in international Go titles, in four out of five games.) AlphaGo Zero used a new form of reinforcement learning in which the program became its own instructor. It took AlphaGo Zero just twenty-one days to reach the level of AlphaGo Master, the version that defeated sixty top professionals online and the world champion Ke Jie in three out of three games in 2017.[82] After forty days, AlphaGo Zero surpassed all other versions of AlphaGo and became the best Go player in human or computer form.[83] It achieved this with no encoded knowledge of human play and no human intervention.

And as deep learning is applied more broadly, those fields benefit from exponentially increasing intelligence. See “AlphaGo,” Google DeepMind, accessed January 30, 2023, https://deepmind.com/research/case-studies/alphago-the-story-so-far; “AlphaGo Zero: Starting from Scratch,” Google DeepMind, October 18, 2017, https://deepmind.com/blog/article/alphago-zero-starting-scratch; Tom Simonite, “This More Powerful Version of AlphaGo Learns On Its Own,” Wired, October 18, 2017, https://www.wired.com/story/this-more-powerful-version-of-alphago-learns-on-its-own; David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (January 27, 2016): 484–89, https://doi.org/10.1038/nature16961; Christof Koch, “How the Computer Beat the Go Master,” Scientific American, March 19, 2016, https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master; Josh Patterson and Adam Gibson, Deep Learning: A Practitioner’s Approach (Sebastopol, CA: O’Reilly, 2017), 6–8, https://books.google.com/books?

Kasparov: How a Chess Match Started the Big Data Revolution,” The Conversation, May 11, 2017, https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-76882. BACK TO NOTE REFERENCE 80 DeepMind, “AlphaGo Zero: Starting from Scratch,” DeepMind, October 18, 2017, https://deepmind.com/blog/article/alphago-zero-starting-scratch; DeepMind, “AlphaGo”; Tom Simonite, “This More Powerful Version of AlphaGo Learns on Its Own,” Wired, October 18, 2017, https://www.wired.com/story/this-more-powerful-version-of-alphago-learns-on-its-own; David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (January 27, 2016): 484–89, https://doi.org/10.1038/nature16961.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

But it didn’t take long before even that achievement was superseded. In the fall of 2017, AlphaGo Zero, the next iteration of algorithm beyond AlphaGo, took the game world by storm.32 AlphaGo Zero played millions of games against itself, just starting from random moves. In the Nature paper, “Mastering the Game of Go Without Human Knowledge,” the researchers concluded that “it is possible [for an algorithm] to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.” It was also a stunning example of doing more with less: AlphaGo Zero, in contrast to AlphaGo, had fewer than 5 million training games compared with 30 million, three days of training instead of several months, a single neural network compared with two separate ones, and it performed via a single tensor processing unit (TPU) chip compared with forty-eight TPUs and multiple machines.33 If that wasn’t enough, just a few months later a preprint was published that this same AlphaGo Zero algorithm, with only basic rules as input and no prior knowledge of chess, played at a champion level after teaching itself for only four hours.34 This was presumably yet another “holy shit” moment for Tegmark, who tweeted, “In contrast to AlphaGo, the shocking AI news here isn’t the ease with which AlphaGo Zero crushed human players, but the ease with which it crushed human AI researchers, who’d spent decades hand-crafting ever better chess software.”35 AI has also progressed to superhuman performance on a similarly hyper-accelerated course in the game of Texas hold’em, the most popular form of poker.

2011—Speech recognition NN (Microsoft)
2012—University of Toronto ImageNet classification and cat video recognition (Google Brain, Andrew Ng, Jeff Dean)
2014—DeepFace facial recognition (Facebook)
2015—DeepMind vs. Atari (David Silver, Demis Hassabis)
2015—First AI risk conference (Max Tegmark)
2016—AlphaGo vs. Go (Silver, Demis Hassabis)
2017—AlphaGo Zero vs. Go (Silver, Demis Hassabis)
2017—Libratus vs. poker (Noam Brown, Tuomas Sandholm)
2017—AI Now Institute launched
TABLE 4.2: The AI timeline.

Kasparov’s book, Deep Thinking, which came out two decades later, provides remarkable personal insights about that pivotal AI turning point.

pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future
by Geoffrey Cain
Published 28 Jun 2021

And DeepMind, founded by three brilliant technologists including a child chess prodigy, made an AI software program called AlphaGo.33 Its programmers wanted to see if AlphaGo could learn to play this incredibly complex game on its own, without a human hand. So they developed a new AlphaGo program that didn’t need any data inputs whatsoever. It would learn the game all by itself, and then go head-to-head with world champions. What happened was startling. After only seventy hours of playing matches with itself, the new AlphaGo program reached a level capable of beating top human players. Then, after a reboot, it took AlphaGo only forty days to learn the sum of humans’ knowledge of Go.

After that it beat the previous, most advanced, version of AlphaGo 90 percent of the time. The developers were puzzled. Even they had little idea how the new AlphaGo got so smart in such a short time. These discoveries potentially had enormous implications. If similar levels of AI were applied to other areas, they could upend the way we worked, lived, drove, went shopping, ate, and even conducted diplomacy and fought wars. AlphaGo’s AI system, after all, closely mimicked the thinking behind warfare and could have battlefield applications. In China, few were paying attention until AlphaGo was put to the test against Go grandmasters, most of whom were based in East Asia.

Everyone was expanding so fast and so they had debts to pay off. We were changing our direction several times a year, and replacing a lot of employees. It was a hard time. But we managed to survive for one reason: AlphaGo.”3 Just over a year after beating South Korean Lee Sedol, AlphaGo was ready to take on the world champion, the Chinese player Ke Jie, who agreed to compete against AlphaGo over three games. After three grueling days and on his third and final game, Ke Jie capitulated. He had lost them all.4 “No human on earth could do this better than Ke Jie,” wrote AI expert and investor Kai-fu Lee, “but today he was pitted against a Go player on a level no one had seen before.”5 The match was a turning point.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

That’s 230 million times as many possibilities as in chess at that same early point in the game.29 In chess, Deep Blue’s victory came in part from its ability to use brute-force processing power to calculate further ahead in a game than Kasparov could. But because of go’s complexity, that strategy would not work for AlphaGo. Instead, it took a different approach. First it reviewed 30 million moves from games played by the best human experts. Then it learned from playing repeatedly against itself, crunching through thousands of games and drawing insights from those, too. In this way, AlphaGo was able to win while evaluating far fewer positions than Deep Blue had done in its matches. In 2017, a yet more sophisticated version of the program was unveiled, called AlphaGo Zero. What made this system so remarkable is that it had wrung itself dry of any residual role for human intelligence altogether.
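
A toy sketch can make the first stage Susskind describes (learning a move predictor from expert games) concrete. This is not DeepMind's code: AlphaGo trained a deep convolutional policy network on board features, whereas the snippet below, in Python with invented positions and moves, just counts how often experts played each move in each position.

    from collections import defaultdict, Counter

    # Toy supervised stage: estimate a policy from (position, expert move) pairs
    # by counting. The positions and moves here are invented placeholders.
    expert_moves = [
        ("empty board", "D4"),
        ("empty board", "Q16"),
        ("empty board", "D4"),
        ("black stone on D4", "Q16"),
        ("black stone on D4", "C16"),
    ]

    counts = defaultdict(Counter)
    for position, move in expert_moves:
        counts[position][move] += 1

    def policy(position):
        # probability of each move, estimated from how often experts played it
        total = sum(counts[position].values())
        return {move: n / total for move, n in counts[position].items()}

    print(policy("empty board"))   # roughly {'D4': 0.67, 'Q16': 0.33}

AlphaGo's second stage then improved on this supervised starting point through self-play, as the excerpt notes.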

Buried within Deep Blue’s code were still a few clever strategies that chess champions had worked out for it to follow in advance.30 And in studying that vast collection of past games by great human players, AlphaGo was in a sense relying on them for much of its difficult computational work. But AlphaGo Zero required none of this. It did not need to know anything about the play of human experts; it did not need to try to mimic human intelligence at all. All it needed was the rules of the game. Given nothing more than those, it played itself for three days to generate its own data—and it returned to thrash its older cousin, AlphaGo.31 Other systems are using similar techniques to engage in pursuits that more closely resemble the messiness of real life.

Another good example of this is AlphaGo, the go-playing machine that beat the world champion Lee Sedol. Almost as remarkable as its overall victory was a particular move that AlphaGo made—the thirty-seventh move in the second game—and the reaction of those watching. The commentators were shocked. They had never seen a move like it. Lee Sedol himself appeared deeply unsettled. Thousands of years of human play had forged a rule of thumb known even to beginners: early in the game, avoid placing stones on the fifth line from the edge. And yet, this is exactly what AlphaGo did in that move.31 The system had not discovered an existing but hitherto unarticulated human rule.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

Top players in the world commentating first dismissed the move as a mistake by the AI, but then realized it was no mistake. The move was brilliant, and AlphaGo went on to beat Sedol in the game and win the five-game match 4–1. Later, pundits would say how creative the move was. It was the first time that an AI was ever said to be creative, a domain always thought to be owned solely by humans. Just one year later, in 2017, Google launched a newer version called AlphaGo Zero that beat AlphaGo 100 games to zero. Not only was that version much more powerful than its predecessor, it also didn’t require any “training” from human games. Understanding only the rules of the game, AlphaGo Zero became its own teacher, playing itself millions of times and through deep reinforcement learning getting stronger with each game.
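
The self-teaching loop Booth describes can be sketched on a much simpler game. This is a hedged illustration only: it uses a lookup table and an invented counting game (players alternately add 1 or 2 to a total, and whoever reaches exactly 10 wins) rather than deep networks and Go, but the loop of playing yourself and reinforcing the winner's moves is the same basic idea.

    import random
    from collections import defaultdict

    TARGET = 10
    values = defaultdict(float)   # learned preference for (running total, move)

    def choose(total, explore=0.1):
        moves = [m for m in (1, 2) if total + m <= TARGET]
        if random.random() < explore:
            return random.choice(moves)        # occasional exploration
        return max(moves, key=lambda m: values[(total, m)])

    def self_play_game():
        total, player, history = 0, 0, []
        while True:
            move = choose(total)
            history.append((player, total, move))
            total += move
            if total == TARGET:
                return player, history         # the player who just moved wins
            player = 1 - player

    for _ in range(20000):                     # no human games, only self-play
        winner, history = self_play_game()
        for player, total, move in history:
            values[(total, move)] += 0.01 if player == winner else -0.01

    print("Preferred opening move:", max((1, 2), key=lambda m: values[(0, m)]))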

Until 2014, even top AI researchers believed top human competitors would beat computers for years to come because of the complexity of the game and the fact that algorithms had to compare every move, which required enormous compute power. But in 2016, Google’s DeepMind program AlphaGo beat one of the top players in the world, Lee Sedol, in a match that made history. AlphaGo’s program was based on deep learning, which was “trained” using thousands of human amateur and professional games. It made history not only because it was the first time a computer beat a top Go master, but also because of the way it did so. In game 2 and the thirty-seventh move, the computer made a move that defied logic, placing a black stone in the middle of an open area—away from the other stones.

While I agree with the prognosis that in the short term humans are needed to help train and error correct artificial intelligence, it does not appear to me that this is any more than a transition step. We will error correct the machines until they are more “intelligent” than us. So for a short term, there might be more jobs, but then those “training the AI” jobs fall away as AI takes knowledge to the next level. Remember that just a year after AlphaGo’s release, AlphaGo Zero came out, not needing people, and winning 100 games to zero. It’s a potent example of what is possible. The AI race But it’s not just about increases in compute power. We are at an inflection point where it is about gathering the right data in data sets that can be analyzed by machines and then helping train those data sets.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

In Go, as is often said, there are more possible moves than atoms in the universe. Surely, observers thought, the raw computer power required to search deeply into that domain would be beyond modern AI? Not so. In 2016, DeepMind’s AlphaGo demonstrated that computation trumps human skill, even in a truly vast search space. In deposing Lee Sedol, the world champion, AlphaGo’s limited version of creativity overwhelmed the richer human variety.5 Go was a formidable computer science problem, but still just a board game. Nonetheless, when all was done and Sedol defeated, the match clarified some big things that have relevance for our study of AI strategists.

At move 37 in game 2, the computer stunned onlookers and Sedol by making a radical move, one vanishingly unlikely to have been played by an expert human. There were gasps from the commentators and a startled Sedol left the table to ponder his reply. It was a game-winning move, which aficionados eagerly attributed to AlphaGo’s ingenuity, or creativity. But was the machine really ‘inspired’, or were the onlookers just anthropomorphising? In Boden’s terms, the move was new, surprising and valuable—and so genuinely creative. But it was simply exploratory creativity on steroids—searching in the available universe of possible moves for a new angle that would bring a marginally better probability of success, many moves further on.

But it was simply exploratory creativity on steroids—searching in the available universe of possible moves for a new angle that would bring a marginally better probability of success, many moves further on. Because humans, even expert ones, don’t search so extensively in pursuit of marginal gains, the move looked highly novel. Sedol was stunned too, returning to the table to think for many minutes, before going down rapidly to defeat. Yet something was lacking. If it was creative, AlphaGo certainly wasn’t creative in the sense that Sedol was. Like other genius players, Sedol didn’t see the board as an exercise in number crunching, but as a combination of visionary strategic play and short-range tactics. And there was an intensely psychological dimension to his approach. Sedol described looking across the board, as you would in playing a human.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

Eventually, these batches were pooled, resulting in the thirty million games collected by the AlphaGo team. The programmers used the thirty million games to “train” the model that they named AlphaGo. What you must remember is that people who play Go professionally spend ages playing Computer Go. It’s how they train. Therefore, the thirty million games recorded included data from the world’s greatest Go players. Millions of hours of human labor went into creating the training data—yet most versions of the AlphaGo story focus on the magic of the algorithms, not the humans who invisibly and over the course of years worked (without compensation) to create the training data.

AlphaGo is a remarkable mathematical achievement that was made possible by equally remarkable advances in computing hardware and software. AlphaGo’s team of designers deserves praise for this outstanding technical achievement. AlphaGo is not an intelligent machine, however. It has no consciousness. It does only one thing: plays a computer game. It contains data from thirty million games played by amateurs and by the world’s most talented players. On some level, AlphaGo is supremely dumb. It uses brute force and the combined effort of many, many humans to defeat a single Go master. The program and its underlying computational methods will likely be deployed for other useful tasks involving massive number-crunching, and that’s good for the world—but not everything in the world is a calculation.

Half a century has been spent trying to make a machine that could beat a human chess master. Finally, IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997. AlphaGo, the AI program that won three of three games against Go world champion Ke Jie in 2017, is often cited as an example of a program that proves general AI is just a few years in the future. Looking closely at the program and its cultural context reveals a different story, however. AlphaGo is a human-constructed program running on top of hardware, just like the “Hello, world” program you wrote in chapter two. Its developers explain how it works in a 2016 paper published in Nature, the international journal of science.1 The opening lines of the paper read: “All games of perfect information have an optimal value function, v*(s), which determines the outcome of the game, from every board position or state s, under perfect play by all players.
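
The v*(s) quoted from the Nature paper has a standard recursive reading for two-player, zero-sum games of perfect information. Written in generic textbook notation (not necessarily the paper's exact formulation):

    v^*(s) =
    \begin{cases}
      z(s) & \text{if } s \text{ is terminal, where } z(s) \text{ is the game's outcome,}\\
      \max_{a \in A(s)} v^*\bigl(f(s,a)\bigr) & \text{if the player to move is maximizing,}\\
      \min_{a \in A(s)} v^*\bigl(f(s,a)\bigr) & \text{if the player to move is minimizing.}
    \end{cases}

Here A(s) is the set of legal moves in s and f(s, a) is the position reached by playing a. Evaluating this exactly requires expanding the entire game tree, which is feasible for tiny games but hopeless at the scale of Go, and that is why systems like AlphaGo approximate the value function rather than compute it.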

pages: 346 words: 97,330

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass
by Mary L. Gray and Siddharth Suri
Published 6 May 2019

Take, for example, the celebrated accomplishments of the AI powering AlphaGo, most recently chronicled in technologist Scott Hartley’s book The Fuzzy and the Techie.14 In May 2017, AlphaGo became the first computer program to beat Ke Jie, the reigning world champion of the ancient Chinese board game go. Five months later, AlphaGo fell to its progeny, AlphaGo Zero. But, lest we be too impressed, it’s important to keep in mind that the rules of go are fixed and fully formalized and it is played in a closed environment where only the two players’ actions determine the outcome. AlphaGo and AlphaGo Zero’s human programmers at the Google-backed company DeepMind gave the programs clear definitions of winning versus losing.

14. Scott Hartley, The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World (Boston: Houghton Mifflin Harcourt, 2017). Hartley focuses on the case of AlphaGo. Both AlphaGo and AlphaGo Zero were the brainchildren of DeepMind, a London-based research lab acquired by Google in 2014. 15. Tom Dietterich, personal conversation, April 13, 2018. Noted AI researcher Dietterich put it this way: the version of AlphaGo that defeated Ke Jie was “told” the rules of go (in the sense that it could invoke code to compute all legal moves for any board state and it was given the definitions of winning and losing).

AlphaGo and AlphaGo Zero’s human programmers at the Google-backed company DeepMind gave the programs clear definitions of winning versus losing. Winning go is about foreseeing the long-term consequences of one’s actions as one plays them out against those of an opponent.15 So AlphaGo was trained on billions of board positions using a large database of games between human experts, as well as games against itself, allowing it to learn what constitutes a better move or a stronger board position.16 AlphaGo Zero was then steeped in all of those prior experiences by playing against AlphaGo, a mirror image of self. But, as Tom Dietterich, a noted expert in artificial intelligence research, suggests, “we must rely on humans to backfill with their broad knowledge of the world” to accomplish most day-to-day tasks.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

That could also mean learning how to reason better by experience—such as discovering which reasoning steps turn out to be useful for solving a problem, and which reasoning steps turn out to be less useful. AlphaGo, for example, is a modern AI Go program that recently beat the best human world-champion players, and it really does learn. It learns how to reason better from experience. As well as learning to evaluate positions, AlphaGo learns how to control its own deliberations so that it more effectively reaches high decision-quality moves more quickly, with less computation. MARTIN FORD: Can you also define neural networks and deep learning?

Perception and image recognition are both important aspects of operating successfully in the real world, but deep learning is only one part of the picture. AlphaGo, and its successor AlphaZero, created a lot of media attention around deep learning with stunning advances in Go and Chess, but they’re really a hybrid of classical search-based AI and a deep learning algorithm that evaluates each game position that the classical AI system searches through. While the ability to distinguish between good and bad positions is central to AlphaGo, it cannot play world-champion-level Go just by deep learning. Self-driving car systems also use a hybrid of classical search-based AI and deep learning.

He was elected as a Fellow of the Royal Society, has been a recipient of the Society’s Mullard Award, and was also awarded an Honorary Doctorate by Imperial College London. Demis co-founded DeepMind along with Shane Legg and Mustafa Suleyman in 2010. DeepMind was acquired by Google in 2014 and is now part of Alphabet. In 2016 DeepMind’s AlphaGo system defeated Lee Sedol, arguably the world’s best player of the ancient game of Go. That match is chronicled in the documentary film AlphaGo (https://www.alphagomovie.com/). Chapter 9. ANDREW NG The rise of supervised learning has created a lot of opportunities in probably every major industry. Supervised learning is incredibly valuable and will transform multiple industries, but I think there is a lot of room for something even better to be invented.

pages: 94 words: 33,179

Novacene: The Coming Age of Hyperintelligence
by James Lovelock
Published 27 Aug 2019

It did this much faster than any human player, but to play Go you need more than this one-dimensional approach. AlphaGo used two systems – machine-learning and tree-searching – which combined human input with the machine's ability to teach itself. This was an enormous step forward, but an even bigger one followed. In 2017 DeepMind announced two successors: AlphaGo Zero and AlphaZero, neither of which used human input. The computer simply played against itself. AlphaZero turned itself into a superhuman chess, Go and Shogi (otherwise known as Japanese chess) player within twenty-four hours. Remarkably, AlphaZero searched a mere 80,000 positions per second when playing chess; the best conventional program, Stockfish, searched 70 million.

Thanks to the wonders of the age of fire, we have taken the first step. We now stand at a critical moment in this process, the moment when the Anthropocene gives way to the Novacene. The fate of the knowing cosmos hangs upon our response. PART THREE Into the Novacene 15 AlphaGo In October 2015 AlphaGo, a computer program developed by Google DeepMind, beat a professional Go player. At first glance you may have shrugged and thought, ‘So what?’ Ever since 1997, when IBM's computer Deep Blue beat Garry Kasparov, the greatest chess player of all time, we have known that computers play these sorts of brain games better than humans.

Also, perhaps, we can hope that our contribution will not be entirely forgotten as wisdom and understanding spread outwards from the Earth to embrace the cosmos.

pages: 193 words: 51,445

On the Future: Prospects for Humanity
by Martin J. Rees
Published 14 Oct 2018

This may not seem a ‘big deal’ because it’s been more than twenty years since IBM’s supercomputer Deep Blue beat Garry Kasparov, the world chess champion. But it was a ‘game change’ in the colloquial as well as literal sense. Deep Blue had been programmed by expert players. In contrast, the AlphaGo machine gained expertise by absorbing huge numbers of games and playing itself. Its designers don’t know how the machine makes its decisions. And in 2017 AlphaGo Zero went a step further; it was just given the rules—no actual games—and learned completely from scratch, becoming world-class within a day. This is astonishing. The scientific paper describing the feat concluded with the thought that humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books.

It excels in optimising elaborate networks, like the electricity grid or city traffic. When the energy management of its large data farms was handed over to a machine, Google claimed energy savings of 40 percent. But there are still limitations. The hardware underlying AlphaGo used hundreds of kilowatts of power. In contrast, the brain of Lee Sedol, AlphaGo’s Korean challenger, consumes about thirty watts (like a lightbulb) and can do many other things apart from play board games. Sensor technology, speech recognition, information searches, and so forth are advancing apace. So (albeit with a more substantial lag) is physical dexterity.

But it’s becoming possible to calculate the properties of materials, and to do this so fast that millions of alternatives can be computed, far more quickly than actual experiments could be performed. Suppose that a machine came up with a unique and successful recipe. It might have succeeded in the same way as AlphaGo. But it would have achieved something that would earn a scientist a Nobel prize. It would have behaved as though it had insight and imagination within its rather specialised universe—just as AlphaGo flummoxed and impressed human champions with some of its moves. Likewise, searches for the optimal chemical composition for new drugs will increasingly be done by computers rather than by real experiments, just as for many years aeronautical engineers have simulated air flow over wings by computer calculations rather than depending on wind-tunnel experiments.

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

In the fall of 2015, we had another one of those Skynet-like moments when a form of artificial intelligence utterly destroys a human. In this case, it involved “AlphaGo”—software designed by DeepMind, a subsidiary of the Google empire—playing a wickedly great game of Go. To test their AI, DeepMind had arranged for it to play against Fan Hui, the European Go champion. It was no contest: The computer won 5 games out of 5. A few months later, AlphaGo fought Lee Sedol, an even more elite player—and again, AlphaGo dominated, 4 to 1. AlphaGo was so good at the game partly because it incorporated “deep learning,” a hot new neural-net technique that let the computer analyze millions of Go games and, on its own, build up a model of how the game worked; feed any board with Go positions into the model, and it could, in conjunction with a more traditional “Monte Carlo” algorithm, then predict a future move.
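
The 'Monte Carlo' part of that description means estimating how promising a move is by sampling many random continuations. As a hedged, self-contained illustration, here is flat Monte Carlo move selection on tic-tac-toe in Python; AlphaGo's Monte Carlo tree search is far more elaborate (it grows a search tree and is guided by its neural networks), but the sampling idea is the same.

    import random

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def legal_moves(board):
        return [i for i, cell in enumerate(board) if cell == "."]

    def random_playout(board, player):
        board = board[:]                        # copy so we can scribble on it
        while True:
            w = winner(board)
            if w or not legal_moves(board):
                return w                        # "X", "O", or None for a draw
            board[random.choice(legal_moves(board))] = player
            player = "O" if player == "X" else "X"

    def choose_move(board, player, playouts=200):
        best_move, best_rate = None, -1.0
        for move in legal_moves(board):
            trial = board[:]
            trial[move] = player
            opponent = "O" if player == "X" else "X"
            wins = sum(random_playout(trial, opponent) == player
                       for _ in range(playouts))
            if wins / playouts > best_rate:
                best_move, best_rate = move, wins / playouts
        return best_move

    print("Monte Carlo pick for X on an empty board:", choose_move(["."] * 9, "X"))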

There are considerably more possible Go games than there are atoms in the universe. So AlphaGo’s creators didn’t go that route. They didn’t sit around writing logic rules, like traditional programmers. Instead deep learning allowed AlphaGo to analyze 30 million positions from preexisting games and build up an extraordinarily sophisticated model of the game—so dense and convoluted the creators themselves could not tell you precisely how it works. But work it did. AlphaGo was a master at the game, albeit in a somewhat alien fashion. It sometimes pulled off moves no human had ever before executed. In the second game against Sedol, during move 37, AlphaGo made a play that at first flummoxed the Go experts who observed the game, as Wired reported.

atoms in the universe: Alan Levinovitz, “The Mystery of Go, the Ancient Game That Computers Still Can’t Win,” Wired, May 12, 2014, accessed August 19, 2018, https://www.wired.com/2014/05/the-world-of-computer-go; David Silver and Demis Hassabis, “AlphaGo: Mastering the Ancient Game of Go with Machine Learning,” Google AI Blog, January 27, 2016, accessed August 19, 2018, https://ai.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html. model of the game: Silver and Hassabis, “AlphaGo.” tales about AlphaGo: Cade Metz, “The Sadness and Beauty of Watching Google’s AI Play Go,” Wired, March 11, 2016, accessed August 19, 2018, https://www.wired.com/2016/03/sadness-beauty-watching-googles-ai-play-go.

pages: 410 words: 119,823

Radical Technologies: The Design of Everyday Life
by Adam Greenfield
Published 29 May 2017

A book analyzing his games against Chinese “master of masters” Gu Li is simply titled Relentless.4 In Seoul Lee fell swiftly, losing to AlphaGo by four matches to one. Here is DeepMind lead developer David Silver, recounting the advantages AlphaGo has over Lee, or any other human player: “Humans have weaknesses. They get tired when they play a very long match; they can play mistakes. They are not able to make the precise, tree-based computation that a computer can actually perform. And perhaps even more importantly, humans have a limitation in terms of the actual number of go games that they’re able to process in a lifetime. A human can perhaps play a thousand games a year; AlphaGo can play through millions of games every single day.”5 Understand that here Silver is giving AlphaGo considerably short shrift.

A human can perhaps play a thousand games a year; AlphaGo can play through millions of games every single day.”5 Understand that here Silver is giving AlphaGo considerably short shrift. A great deal of what he describes—that it doesn’t tire, that it can delve a deep tree, that it can review and learn from a very large number of prior games—is simply brute force. That may well have been how Deep Blue beat Kasparov. It is not how AlphaGo defeated Lee Sedol. For many, I suspect, Next Rembrandt will feel like a more ominous development than AlphaGo. The profound sense of recognition we experience in the presence of a Rembrandt is somehow more accessible than anything that might appear in the austere and highly abstract territorial maneuvering of go.

But this was the quality that made it irresistible to artificial intelligence researchers, some of the brightest of whom took it up on a professional level simply so they could get a better sense for its dynamics. A few of the most dedicated wound up working together at a London-based subsidiary of Google called DeepMind, where they succeeded in developing a program named AlphaGo.3 AlphaGo isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithm laminated together. Its two primary tools are a “policy network,” trained to predict and select the moves that the most expert human players would make from any given position on the board, and a “value network,” which plays each of the moves identified by the policy network forward to a depth of around thirty turns, and evaluates where Black and White stand in relation to one another at that juncture.
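
Greenfield's two networks map onto the rule such programs use to decide which branch of the search tree to explore next: a value term (how good the positions reached through a move have looked so far) plus an exploration bonus weighted by the policy network's prior. The snippet below is a toy with invented numbers, using a textbook form of the PUCT selection rule rather than DeepMind's implementation.

    import math

    def puct_score(value, prior, parent_visits, child_visits, c_puct=1.5):
        # value: average evaluation of positions reached via this move (value network)
        # prior: probability the policy network assigns to this move
        exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
        return value + exploration

    # Invented statistics for three candidate moves at one node of the search tree.
    candidates = {
        "move A": {"value": 0.52, "prior": 0.40, "visits": 120},
        "move B": {"value": 0.55, "prior": 0.05, "visits": 30},
        "move C": {"value": 0.48, "prior": 0.30, "visits": 10},
    }
    parent_visits = sum(c["visits"] for c in candidates.values())

    for name, c in candidates.items():
        score = puct_score(c["value"], c["prior"], parent_visits, c["visits"])
        print(name, round(score, 3))
    # The rarely visited move with a high prior ("move C") gets the largest boost,
    # which is how the policy network steers the search toward promising branches.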

Artificial Whiteness
by Yarden Katz

John Brockman (New York: Penguin, 2019), 80.   26.   DeepMind has developed two major systems that play Go: AlphaGo and its successor AlphaGo Zero. I will simply refer to “AlphaGo” since my comments apply to both systems.   27.   DeepMind, “AlphaGo Zero: Discovering New Knowledge,” October 18, 2017, https://www.youtube.com/watch?v=WXHFqTvfFSw.   28.   Brenden M. Lake et al., “Building Machines That Learn and Think Like People,” Behavioral and Brain Sciences 40, no. E253 (2016): 1–101.   29.   Lake et al., 8.   30.   DeepMind, “AlphaGo Zero.”   31.   Microsoft COCO is described in Tsung-Yi Lin et al., “Microsoft COCO: Common Objects in Context,” in Proceedings of the European Conference on Computer Vision, 2014, 740–55.

People, by contrast, can flexibly adopt different goals and styles of play: if asked to play with a different goal, such as losing as quickly as possible, or reaching the next level in the game but just barely, many people have little difficulty doing so. The AlphaGo system suffers from similar limitations. It is highly tuned to the configuration of the Go game on which it was trained. If the board size were to change, for example, there would be little reason to expect AlphaGo to work without retraining. AlphaGo also reveals that these deep learning systems are not as radically empiricist as advertised. The rules of Go are built into AlphaGo, a fact that is typically glossed over. This is hard-coded, symbolic knowledge, not the blank slate that was trumpeted.

“Kissinger himself has become the demonstrative effect,” Grandin writes, “whatever substance there was eroded by the constant confusion of ends and means, the churn of power to create purpose and purpose defined as the ability to project power.”118 It is easier to see in this light why Kissinger is captivated by a system like Google’s AlphaGo, which learns to “dominate” humans at the game of Go by action—merely adapting to game wins and defeats—and making “strategically unprecedented moves,” as Kissinger put it, apparently without any preset notion of meaning. In AI, Kissinger found a reflection of his own imperialist project: an endeavor molded by power, circular and empty

Industry 4.0: The Industrial Internet of Things
by Alasdair Gilchrist
Published 27 Jun 2016

In August 2015, IBM announced it had offered $1 billion to acquire the medical imaging company Merge Healthcare, which in conjunction with Watson will provide the means for machine learning. Astonishingly, Google’s AlphaGo beat the world champion Lee Sedol at the hugely complex board game Go, winning the best-of-five match. What was strange is that both Lee Sedol and the European champion (who had been beaten previously by AlphaGo) could not understand AlphaGo’s logic. Seemingly, AlphaGo played a move no human could understand; indeed all the top players in the world believed that AlphaGo had made a huge mistake. Even its challenger, the world champion Lee Sedol, thought it was a mistake; indeed he was so shocked by AlphaGo’s move that he took a break to consider it, until it dawned on him the absolute brilliance of the move.

“It was not a human move … in fact I have never seen a human make this move.” Needless to say, Google’s AlphaGo went on to win the game. Why did AlphaGo beat the brilliant Lee Sedol? Is it simply because, as a machine, AlphaGo can play games against itself and replay all known human games, building up such a memory of possible moves through 24/7 learning that it can continuously keep improving its strategic game? Google’s team analyzed the victory and realized that AlphaGo did something very strange—it calculated, based on its millions of training moves from human play, that a human player would have had only a one in ten thousand chance of recognizing and countering that seemingly crazy move.

Google’s team analyzed the victory and realized that AlphaGo did something very strange—it calculated, based on its millions of training moves from human play, that a human player would have had only a one in ten thousand chance of recognizing and countering that seemingly crazy move. In fairness to the great Lee Sedol, he did manage to outwit AlphaGo and win one game of the best-of-five match, and that appears to be an amazing achievement.

pages: 301 words: 89,076

The Globotics Upheaval: Globalisation, Robotics and the Future of Work
by Richard Baldwin
Published 10 Jan 2019

But that’s not the end of the amazing part. In a classic example of AI’s inhuman speed, the owner of AlphaGo Master developed a new version of AlphaGo that skipped the “learning from human games” part and just let it learn from playing itself from scratch. All it started with were the rules. Since computing power had increased so much since AlphaGo Master was “trained,” the results were astounding. In just 40 days of playing itself, the new version, AlphaGo Zero, beat the world’s best Go player, which, at the time was AlphaGo Master. The victory came just six months after AlphaGo Master’s astounding victory over the best human player. But machine learning is not just fun and games.

That’s when a computer program, called AlphaGo Master, used machine learning techniques to beat the world’s best Go player.10 The how is as amazing as the what. AlphaGo Master, owned by the leading AI company DeepMind, learned the ropes by studying 30 million board positions from 160,000 actual games. This is a bit intimidating. There are only about 26 million minutes in a human working life, so AlphaGo Master started with more than a lifetime of experience. But then things got even more daunting for human players hoping to compete with this technology. To learn from experience, AlphaGo Master played more games against itself in six months than a human could play in six decades.

Likewise, software robots aren’t very good when the nature of the problem and the nature of the solution are just intrinsically vague. That’s the case when identifying new patterns: the whole idea is that the pattern is new, so there cannot be a big dataset by definition. For example, a human Go master could presumably do fairly well on a slightly different-sized board, but AI couldn’t. At a 2017 conference, the AlphaGo Master team admitted that the AI software would be useless if the game was played on an even slightly altered board—say one that was twenty-nine-by-twenty-nine squares instead of the standard nineteen by nineteen.6 Table 6.3: Capabilities of AI in social skills.

pages: 97 words: 31,550

Money: Vintage Minis
by Yuval Noah Harari
Published 5 Apr 2018

Shortly afterwards AI scored an even more sensational success, when Google’s AlphaGo software taught itself how to play Go, an ancient Chinese strategy board game significantly more complex than chess. Go’s intricacies were long considered far beyond the reach of AI programs. In March 2016 a match was held in Seoul between AlphaGo and the South Korean Go champion, Lee Sedol. AlphaGo trounced Lee 4–1 by employing unorthodox moves and original strategies that stunned the experts. Whereas prior to the match most professional Go players were certain that Lee would win, after analysing AlphaGo’s moves most concluded that the game was up and that humans no longer had any hope of beating AlphaGo and its progeny.

Whereas prior to the match most professional Go players were certain that Lee would win, after analysing AlphaGo’s moves most concluded that the game was up and that humans no longer had any hope of beating AlphaGo and its progeny. Computer algorithms have recently proven their worth in ball games, too. For many decades, baseball teams used the wisdom, experience and gut instincts of professional scouts and managers to pick players. The best players fetched millions of dollars, and naturally enough the rich teams grabbed the cream of the crop, whereas poorer teams had to settle for the scraps. In 2002 Billy Beane, the manager of the low-budget Oakland Athletics, decided to beat the system. He relied on an arcane computer algorithm developed by economists and computer geeks to create a winning team from players whom human scouts had overlooked or undervalued.

pages: 301 words: 85,263

New Dark Age: Technology and the End of the Future
by James Bridle
Published 18 Jun 2018

And he added, ‘So beautiful.’26 In the history of the 2,500-year-old game, nobody had ever played like this. AlphaGo went on to win the game, and the series. AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them. The sophistication of the moves that must have been played in those games between the shards of AlphaGo is beyond imagination, too, but we are unlikely to ever see and appreciate them; there’s no way to quantify sophistication, only winning instinct.

At the time of the match, it was the 259th most powerful computer on the planet, and it was dedicated purely to chess. It could simply hold more outcomes in mind when choosing where to play next. Kasparov was not outthought, merely outgunned. By contrast, when the Google Brain–powered AlphaGo software defeated the Korean Go professional Lee Sedol, one of the highest-rated players in the world, something had changed. In the second of five games, AlphaGo played a move that stunned Sedol and spectators alike, placing one of its stones on the far side of the board, and seeming to abandon the battle in progress. ‘That’s a very strange move,’ said one commentator. ‘I thought it was a mistake,’ said the other.

But this is as close as we shall ever get, for once again, we are peering through the window of Infinite Fun Land – an arcade we will never get to visit. Compounding this error, in 2016 a pair of researchers at Google Brain decided to see if neural networks could keep secrets.34 The idea stemmed from that of the adversary: an increasingly common component of neural network designs, and one that would no doubt have pleased Friedrich Hayek. Both AlphaGo and Facebook’s bedroom generator were trained adversarially; that is, they consisted not of a single component that generated new moves or places, but of two competing components that continually attempted to outperform and outguess the other, driving further improvement. Taking the idea of an adversary to its logical conclusion, the researchers set up three networks called, in the tradition of cryptographic experiments, Alice, Bob, and Eve.

pages: 501 words: 114,888

The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives
by Peter H. Diamandis and Steven Kotler
Published 28 Jan 2020

Typically, the game tree complexity of chess is about 10^40—which means, essentially, if every one of the 7 plus billion people on Earth paired up and started playing chess, it would take them trillions and trillions of years to play every single variation of the game. Yet, in 2017, Google’s AlphaGo defeated the world Go champion, Lee Sedol. Go has a game tree complexity of 10^360—it’s chess for superheroes. Put differently, we humans are the only species known to have the cognitive capacity to play Go. It only took a couple hundred thousand years of evolution to develop that capability. AI, meanwhile, got there in less than two decades. Still, AI wasn’t done. A few months after that victory, Google upgraded AlphaGo to AlphaGo Zero by updating their training style. AlphaGo was educated via machine learning, essentially fed thousands of games previously played by humans, and taught the proper move and countermove for every possible position.
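
A quick back-of-the-envelope calculation shows why exponents like these rule out brute force. The snippet assumes the roughly 200 million positions per second often quoted for Deep Blue; the exact figures matter far less than the orders of magnitude.

    # Rough arithmetic on the game-tree sizes quoted above.
    positions_per_second = 200_000_000          # roughly Deep Blue's search speed
    seconds_per_year = 60 * 60 * 24 * 365

    chess_positions = 10 ** 40                  # approximate game-tree complexity of chess
    go_positions = 10 ** 360                    # approximate game-tree complexity of Go

    years_for_chess = chess_positions / (positions_per_second * seconds_per_year)
    print(f"Enumerating chess at that speed: about {years_for_chess:.1e} years")

    # Go's tree is larger by another factor of 10^320, so even that number
    # wildly understates the gap.
    print(f"Go / chess ratio: 10^{len(str(go_positions // chess_positions)) - 1}")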

AlphaGo was educated via machine learning, essentially fed thousands of games previously played by humans, and taught the proper move and countermove for every possible position. AlphaGo Zero, meanwhile, required zero data. Instead, it relies on “reinforcement learning”—it learns by playing itself. Starting with little more than a few simple rules, AlphaGo Zero took three days to beat its parent, AlphaGo, the same system that beat Lee Sedol. Three weeks later, it trounced the sixty best players in the world. In total, it took forty days for AlphaGo Zero to become the undisputed best Go player on Earth. And if that wasn’t strange enough, in May of 2017, Google used the same kind of reinforcement learning to have an AI build another AI.

pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

Retrieved March 28, 2016, https://web-beta.archive.org/web/20160328210752/https://deepmind.com/. 167 “the hallmark of true artificial general intelligence”: Demis Hassabis, “What We Learned in Seoul with AlphaGo,” Google Blog, March 16, 2016, https://blog.google/topics/machine-learning/what-we-learned-in-seoul-with-alphago/. 167 “getting to true AI”: Ben Rossi, “Google DeepMind’s AlphaGo Victory Not ‘True AI,’ Says Facebook’s AI Chief,” Information Age, March 14, 2016, http://www.information-age.com/google-deepminds-alphago-victory-not-true-ai-says-facebooks-ai-chief-123461099/. 169 “thinking about how to make people click ads”: Ashlee Vance, “This Tech Bubble Is Different,” Bloomberg Businessweek, April 14, 2011, https://www.bloomberg.com/news/articles/2011-04-14/this-tech-bubble-is-different.

Everything is amazing, everything is horrible, and it’s all moving too fast. We are heading pell-mell toward a world shaped by technology in ways that we don’t understand and have many reasons to fear. WTF? Google AlphaGo, an artificial intelligence program, beat the world’s best human Go player, an event that was widely predicted to be at least twenty years in the future—until it happened in 2016. If AlphaGo can happen twenty years early, what else might hit us even sooner than we expect? For starters: An AI running on a $35 Raspberry Pi computer beat a top US Air Force fighter pilot trainer in combat simulation.

Google purchased DeepMind in 2014 for $500 million, after it demonstrated an AI that had learned to play various older Atari computer games simply by watching them being played. The highly publicized victory of AlphaGo over Lee Sedol, one of the top-ranked human Go players, represented a milestone for AI, because of the difficulty of the game and the impossibility of using brute-force analysis of every possible move. But DeepMind cofounder Demis Hassabis wrote, “We’re still a long way from a machine that can learn to flexibly perform the full range of intellectual tasks a human can—the hallmark of true artificial general intelligence.” Yann LeCun also blasted those who oversold the significance of AlphaGo’s victory, writing, “most of human and animal learning is unsupervised learning.

pages: 304 words: 80,143

The Autonomous Revolution: Reclaiming the Future We’ve Sold to Machines
by William Davidow and Michael Malone
Published 18 Feb 2020

In 1997, Deep Blue, a chess-playing computer developed by IBM, beat the Russian grandmaster Garry Kasparov in a six-game match.10 Kasparov said that he had sensed a thinking presence inside his computer opponent. Then, in 2016, Google DeepMind’s artificial-intelligence program, AlphaGo, defeated Lee Sedol, a Go champion, 4–1. Go is a more difficult game for a computer to play than chess, and AlphaGo’s victory is perhaps the best harbinger of what is to come. While Deep Blue relied on hard-coded functions written by human experts for its decision-making processes, AlphaGo used neural networks and reinforcement learning. In other words, its system studied numerous games and played games against itself so it could write its own rules.11 The lesson here is that it is now possible to use inexpensive computer power to develop intelligent processes.

Tanya Lewis, “A Brief History of Artificial Intelligence,” Live Science, December 4, 2014, http://www.livescience.com/49007-history-of-artificial-intelligence.html (accessed June 26, 2019). 10. “Deep Blue,” Wikipedia, https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer) (accessed June 26, 2019). 11. “AlphaGo vs Deep Blue,” Reddit, https://www.reddit.com/r/MachineLearning/comments/4a7lc4/alphago_vs_deep_blue/ (accessed June 26, 2019). 12. “Why AI Researchers Like Video Games,” The Economist, May 13, 2017, https://www.economist.com/news/science-and-technology/21721890-games-help-them-understand-reality-why-ai-researchers-video-games (accessed June 26, 2019). 13.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

In short, they now beat the finest human players in almost every single one, including backgammon (1979), checkers (1994), and chess, in which IBM’s Deep Blue famously defeated world champion Garry Kasparov (1997). In 2016, to general astonishment, Google DeepMind’s AI system AlphaGo defeated Korean Grandmaster Lee Sedol 4–1 at the ancient game of Go, deploying dazzling and innovative tactics in a game exponentially more complex than chess. ‘I . . . was able to get one single win,’ said Lee Sedol rather poignantly; ‘I wouldn’t exchange it for anything in the world.’16

‘I . . . was able to get one single win,’ said Lee Sedol rather poignantly; ‘I wouldn’t exchange it for anything in the world.’16 A year later, a version of AlphaGo called AlphaGo Master thrashed Ke Jie, the world’s finest human player, in a 3–0 clean sweep.17 A radically more powerful version now exists, called AlphaGo Zero. AlphaGo Zero beat AlphaGo Master 100 times in a row.18 As long ago as 2011, IBM’s Watson vanquished the two all-time greatest human champions at Jeopardy!—a TV game show in which the moderator presents general knowledge ‘answers’ relating to sports, science, pop culture, history, art, literature, and other fields and the contestants are required to provide the ‘questions’. Jeopardy! demands deep and wide-ranging knowledge, the ability to process natural language (including wordplay), retrieve relevant information, and answer using an acceptable form of speech—all before the other contestants do the same.19 The human champions were no match for Watson, whose victory marked a milestone in the development of artificial intelligence.

Cade Metz, ‘Google’s AI Wins Fifth and Final Game Against Go’, Wired, 15 March 2016 <https://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/> (accessed 28 November 2017); Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), 12–13. Sam Byford, ‘AlphaGo beats Ke Jie Again to Wrap Up Three-part Match’, The Verge, 25 May 2017 <https://www.theverge.com/2017/5/25/15689462/alphago-ke-jie-game-2-result-google-deepmind-china> (accessed 28 November 2017). David Silver et al., ‘Mastering the Game of Go Without Human Knowledge’, Nature 550 (19 October 2017): 354–9. Susskind and Susskind, Future of the Professions, 165.

pages: 626 words: 167,836

The Technology Trap: Capital, Labor, and Power in the Age of Automation
by Carl Benedikt Frey
Published 17 Jun 2019

Instead professionals play by recognizing patterns that emerge “when clutches of stones surround empty spaces.”3 As discussed above, humans still held the comparative advantage in pattern recognition when Frank Levy and Richard Murnane published their brilliant book The New Division of Labor in 2004.4 At the time, computers were nowhere near capable of challenging the human brain in identifying patterns. But now they are. Much more important than the fact that AlphaGo won is how it did so. While Deep Blue was a product of the rule-based age of computing, whose success rested upon the ability of a programmer to write explicit if-then-do rules for various board positions, AlphaGo’s evaluation engine was not explicitly programmed. Instead of following prespecified rules of the programmer, the machine was able to mimic tacit human knowledge, circumventing Polanyi’s paradox. Deep Blue was built on top-down programming. AlphaGo, in contrast, was the product of bottom-up machine learning. The computer inferred its own rules from a series of trials using a large data set.

The computer inferred its own rules from a series of trials using a large data set. To learn, AlphaGo first watched previously played professional Go games, and then it played millions of games against itself, steadily improving its performance. Its training data set, consisting of thirty million board positions reached by 160,000 professional players, was far greater than the experience any professional player could accumulate in a lifetime. The event marks what Erik Brynjolfsson and Andrew McAfee have called the “second half of the chessboard.”5 As Scientific American marveled, “An era is over and a new one is beginning. The methods underlying AlphaGo, and its recent victory, have huge implications for the future of machine intelligence.”6 Deep Blue may have beaten Kasparov at chess.

The only thing Deep Blue could do was evaluate two hundred million board positions per second. It was designed for one specific purpose. AlphaGo, on the other hand, relies on neural networks, which can be used to perform a seemingly endless number of tasks. Using neural networks, DeepMind has already achieved superhuman performance at some fifty Atari video games, including Video Pinball, Space Invaders, and Ms. Pac-Man.7 Of course, a programmer provided the instruction to maximize the game score, but an algorithm learned the best game strategies by itself over thousands of trials. Unsurprisingly, AlphaGo (or AlphaZero, as the generalized version is called), also outperforms preprogrammed computers at chess.

pages: 197 words: 49,296

The Future We Choose: Surviving the Climate Crisis
by Christiana Figueres and Tom Rivett-Carnac
Published 25 Feb 2020

Many people alive today will at some point likely encounter a machine that is smarter than they are in almost every way. The world famously got a taste of what that might be like in 2017. The AI program AlphaGo Zero figured out how to win at the ancient and notoriously difficult Chinese strategy game of Go, learning entirely by itself, essentially accumulating thousands of years of human knowledge, and improving on it, in just forty days.75 DeepMind, the company that developed AlphaGo Zero, says the technology is not limited to machines that can outcompete human beings in strategy games but is intended to be used to inform new technology that will positively impact society.76 But we can’t rely on the promises of corporations to ensure that a technology is aligned with our goals for regenerating nature and pursuing the conditions that will help humanity thrive.

World Bank, “Accounting Reveals That Costa Rica’s Forest Wealth Is Greater Than Expected,” May 31, 2016, https://www.worldbank.org/en/news/feature/2016/05/31/accounting-reveals-that-costa-ricas-forest-wealth-is-greater-than-expected. 73. See http://happyplanetindex.org/countries/costa-rica. 74. For a helpful introduction to AI, see Snips, “A 6-Minute Intro to AI,” https://snips.ai/content/intro-to-ai/#ai-metrics. 75. David Silver and Demis Hassabis, “AlphaGo Zero: Starting from Scratch,” DeepMind, October 18, 2017, https://deepmind.com/blog/alphago-zero-learning-scratch/. 76. DeepMind, https://deepmind.com/. 77. Rupert Neate, “Richest 1% Own Half the World’s Wealth, Study Finds,” Guardian (U.S. edition), November 14, 2017, https://www.theguardian.com/inequality/2017/nov/14/worlds-richest-wealth-credit-suisse. 78.

It will be impossible for so many people to live here if we have the same impact per capita on our atmosphere as we do today. Technology, specifically machine learning and AI, has the potential to transform our presence here. Problems that have long eluded us, such as how to use natural resources effectively in a circular rather than a linear way, may finally be solved. When AlphaGo Zero was learning to play and win at Go, the developers noticed that as it taught itself techniques perfected by professional players over generations, it occasionally made decisions to discard those techniques in favor of new, better ones that human beings had not yet had time to learn. In a race against time, the speed of learning that AI offers has extraordinary—exponential—potential to accelerate climate solutions, if it is deployed and governed well.

pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)
by Jamie Bartlett
Published 4 Apr 2018

A few years ago, DeepMind, a Google-owned AI firm, built software to play the game, called AlphaGo. It was trained the ‘classic’ ML way, using thousands of human games; for example, being taught that in position x humans played move y, and in position a, humans played move b, and so on. From that starting point AlphaGo played itself billions of times to improve its knowledge of the game. In 2016, to the surprise of many experts, AlphaGo decisively beat the world’s best Go player, Lee Sedol. This stunning result was quickly surpassed when, in late 2017, DeepMind released AlphaGo Zero, a program that was given no human examples at all: it was taught only the rules of the game and learned how to win by itself, using deep learning.
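Both recipes described in this passage, the original AlphaGo's imitation of human (position, move) pairs followed by self-play, and AlphaGo Zero's self-play alone, can be caricatured on a much smaller game. The sketch below uses one-pile Nim (take one to three stones; whoever takes the last stone wins); the "expert" records, the update rule, and every constant are stand-ins chosen for brevity, not anything DeepMind published:

    import random
    from collections import defaultdict

    ACTIONS = (1, 2, 3)                      # stones a player may take per turn

    def legal(stones):
        return [a for a in ACTIONS if a <= stones]

    # policy[stones][move] = preference weight for that move in that position
    policy = defaultdict(lambda: defaultdict(lambda: 1.0))

    def choose(stones):
        moves = legal(stones)
        weights = [policy[stones][m] for m in moves]
        return random.choices(moves, weights=weights)[0]

    # Stage 1: imitate recorded games, i.e. count how often the "experts" chose
    # each move in each position. (Skip this loop for the AlphaGo Zero flavour.)
    expert_records = [(7, 3), (6, 2), (5, 1), (4, 3), (3, 3), (2, 2), (1, 1)]
    for stones, move in expert_records:
        policy[stones][move] += 5.0

    # Stage 2: self-play, nudging the policy toward moves that ended in a win.
    def play_one_game(start=10):
        stones, player, history = start, 0, ([], [])
        while stones > 0:
            move = choose(stones)
            history[player].append((stones, move))
            stones -= move
            player = 1 - player
        return 1 - player, history           # whoever took the last stone wins

    for _ in range(20000):
        winner, history = play_one_game()
        for p in (0, 1):
            for stones, move in history[p]:
                delta = 1.0 if p == winner else -0.2
                policy[stones][move] = max(0.1, policy[stones][move] + delta)

    # The move the trained policy now prefers in each position:
    print({s: max(legal(s), key=lambda m: policy[s][m]) for s in range(1, 11)})

Skipping Stage 1 gives the tabula-rasa flavour of AlphaGo Zero; keeping it mirrors the original AlphaGo's head start from human games.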

AlphaGo Zero started off dreadfully, but it improved slightly with each game, and within 40 days of constant self-play it had become so strong that it thrashed the original AlphaGo 100–0. Go is now firmly in the category of ‘games that humans will never win against machines again’. Most people in Silicon Valley agree that machine learning is the next big thing, although some are more optimistic than others. Tesla and SpaceX boss Elon Musk recently said that AI is like ‘summoning the demon’, while others have compared its significance to the ‘scientific method, on steroids’, the invention of penicillin and even electricity.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

In Alice’s context, we’ll call these System X, for competence on well-defined tasks like game play (as in chess or Go) and System Y, for general intelligence. The latter system includes Bob’s competence at reading and conversation, but also the murkier area of novel ideas and insights. Bob is terrible at chess, and in fact his X system is pathetic compared not only to a system like AlphaGo but also to many other humans. His short-term memory is worse than most people’s; he scores poorly on IQ tests; and he struggles with crossword puzzles. As for his Y system, his general intelligence shows a conspicuous lack of interest in or ability at novel or insightful thinking. Bob is not the kind of neighbor that gets many invitations to dinner parties.

“Once we get to human-levels of intelligence, the system can design a smarter-than-human version of itself,” so the hope goes. But, we already have “human-level” intelligence—we’re human. Can we do this? What are the intelligence explosion promoters really talking about? This is another way of saying that the powers of the human mind outstrip our ability to mechanize it in the sense necessary for “scaling up,” from AlphaGo to a Bob-Machine to a Turing-Machine, and beyond. The intelligence explosion idea itself is not a particularly good System Y candidate for progress on AI toward general intelligence. THE EVOLUTIONARY TECHNOLOGISTS Many AI enthusiasts who hold to an inevitability thesis (superintelligent machines are coming, no matter what we do) hold to this because it plays on evolutionary themes, and thus conveniently absolves individual scientists from the responsibility of needing to make scientific breakthroughs or develop revolutionary ideas.

INDUCTION WORKS ON GAMES, NOT LIFE The real world is a dynamic environment, which means it’s constantly changing in both predictable and unpredictable ways, and we can’t enclose it in a system of rules. Board games, though, are enclosed in a system of rules, which helps explain why inductive approaches that learn from experience of gameplay work so well. AlphaGo (or its successor AlphaZero) uses a kind of machine learning known as deep learning to play the difficult game of Go. It plays against itself, using something called deep reinforcement learning, and induces hypotheses about the best moves to make on the board given its position and the opponent’s.

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

The system has to figure out how to behave according to those signals.[lxx]) A match against the world champion Lee Se-Dol followed in March 2016. Se-Dol was confident, believing it would take a few more years before a computer could beat him. He was genuinely shocked to lose the series four games to one, and observers were impressed by AlphaGo’s sometimes unorthodox style of play. AlphaGo’s achievement was another landmark in computer science, and perhaps equally a landmark in human understanding that something important is happening, especially in the Far East, where the game of Go is far more popular than it is in the West. DeepMind did not rest on its laurels.

It is nowhere near an artificial general intelligence which is human-level or beyond in all respects. It is not conscious. It does not even know that it won the Jeopardy match. But it may prove to be an early step in the direction of artificial general intelligence. In January 2016, an AI system called AlphaGo developed by Google's DeepMind beat Fan Hui, the European champion of Go, a board game. This was hailed as a major step forward: the game of chess has more possible moves (35^80) than there are atoms in the visible universe, but Go has even more – 250^150.[lxix] The system uses a hybrid of AI techniques: it was partly programmed by its creators, but it also taught itself using a machine learning approach called deep reinforcement learning.
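Those figures are easier to compare in powers of ten (a back-of-the-envelope calculation from the numbers quoted above, with the usual estimate of roughly 10^80 atoms in the visible universe):

    \[
    35^{80} = 10^{80\,\log_{10} 35} \approx 10^{123},
    \qquad
    250^{150} = 10^{150\,\log_{10} 250} \approx 10^{360},
    \qquad
    \text{atoms in the visible universe} \approx 10^{80}.
    \]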

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal, Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

What we have called AI in this book is not general artificial intelligence but decidedly narrower prediction machines. Developments such as AlphaGo Zero by Google’s DeepMind have raised the specter that superintelligence might not be so far away. It outperformed the world champion–beating AlphaGo at the board game Go without human training (learning by playing games against itself), but it isn’t ready to be called superintelligence. If the game board changed from nineteen by nineteen to twenty-nine by twenty-nine or even eighteen by eighteen, the AI would struggle, whereas a human would adjust. And don’t even think of asking AlphaGo Zero to make you a grilled cheese sandwich; it’s not that smart.

Some regular citizens experienced their AI moment later that year when renowned physicist Stephen Hawking emphatically explained, “[E]verything that civilisation has to offer is a product of human intelligence … [S]uccess in creating AI would be the biggest event in human history.”1 Others experienced their AI moment the first time they took their hands off the wheel of a speeding Tesla, navigating traffic using Autopilot AI. The Chinese government experienced its AI moment when it witnessed DeepMind’s AI, AlphaGo, beating Lee Se-dol, a South Korean master of the board game Go, and then later that year beating the world’s top-ranked player, Ke Jie of China. The New York Times described this game as China’s “Sputnik moment.”2 Just as massive American investment in science followed the Soviet Union’s launch of Sputnik, China responded to this event with a national strategy to dominate the AI world by 2030 and a financial commitment to make that claim plausible.

Learning by Simulation One intermediate step to soften this trade-off is to use simulated environments. When human pilots are training, before they get their hands on a real plane in flight, they spend hundreds of hours in what are very sophisticated and realistic simulators. A similar approach is available for AI. Google trained DeepMind’s AlphaGo AI to defeat the best Go players in the world not just by looking at thousands of games played between humans but also by playing against another version of itself. One form of this approach is called adversarial machine learning, which pits the main AI and its objective against another AI that tries to foil that objective.

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

Perhaps the prototypical example is AlphaGo, a convolutional neural-network-based program that plays the ancient Asian game of Go, developed by DeepMind, a subsidiary of Google. Among human games of perfect information, Go had always been considered the toughest nut for AI. Though computers conquered humans in chess in 1997, they were not considered a match even for the lowest-level professional Go players as recently as 2015. The Go community thought that computers were still a decade or more away from giving humans a real battle. That changed almost overnight with the advent of AlphaGo. Most Go players first heard about the program in late 2015, when it trounced a human professional 5–0.

In March 2016, AlphaGo defeated Lee Sedol, for years considered the strongest human player, 4–1. A few months later it played sixty online games against top human players without losing a single one, and in 2017 it was officially retired after beating the current world champion, Ke Jie. The one game it lost to Sedol is the only one it will ever lose to a human. All of this is exciting, and the results leave no doubt: deep learning works for certain tasks. But it is the antithesis of transparency. Even AlphaGo’s programmers cannot tell you why the program plays so well.

They knew from experience that deep networks have been successful at tasks in computer vision and speech recognition. Nevertheless, our understanding of deep learning is completely empirical and comes with no guarantees. The AlphaGo team could not have predicted at the outset that the program would beat the best human in a year, or two, or five. They simply experimented, and it did. Some people will argue that transparency is not really needed. We do not understand in detail how the human brain works, and yet it runs well, and we forgive our meager understanding. So, they argue, why not unleash deep-learning systems and create a new kind of intelligence without understanding how it works?

pages: 289 words: 86,165

Ten Lessons for a Post-Pandemic World
by Fareed Zakaria
Published 5 Oct 2020

That leap in cognitive capacity was a watershed moment for AI. The board game Go is considered to be the most complex in the world, with vastly more potential moves than there are atoms in the observable universe. Google’s AlphaGo learned the game, and in March 2016, consistently beat the eighteen-time world champion, Lee Sedol. (In 2017, its successor program, AlphaGo Zero, taught itself Go in just three days and defeated AlphaGo, one hundred games to zero.) AlphaGo was seen by computer scientists as a mark that machines could teach themselves and also think in nonlinear, creative ways. In March 2020, its makers revealed that another one of their programs merely watched the screen as a series of Atari video games were played—and then mastered all fifty-seven games, outperforming humans in every single one.

Norton, 1963), 358–73. 113 Jetson of the 1960s cartoon: “works three hours a day, three days a week,” per Sarah Ellison, “Reckitt Turns to Jetsons to Launch Detergent Gels,” Wall Street Journal, January 13, 2003; pushing a button, per Hanna-Barbera Wiki, “The Jetsons,” https://hanna-barbera.fandom.com/wiki/The_Jetsons. 113 four-day workweek: Zoe Didali, “As PM Finland’s Marin Could Renew Call for Shorter Work Week,” New Europe, January 2, 2020, https://www.neweurope.eu/article/finnish-pm-marin-calls-for-4-day-week-and-6-hours-working-day-in-the-country/. 114 “bullshit jobs”: David Graeber, Bullshit Jobs: A Theory (New York: Simon & Schuster, 2018). 115 “slaves of time without purpose”: McEwan, Machines Like Me. 116 atoms in the observable universe: David Silver and Demis Hassabis, “AlphaGo: Mastering the Ancient Game of Go with Machine Learning,” Google DeepMind, January 27, 2016, https://ai.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html. 116 all fifty-seven games: Kyle Wiggers, “DeepMind’s Agent57 Beats Humans at 57 Classic Atari Games,” Venture Beat, March 31, 2020; Rebecca Jacobson, “Artificial Intelligence Program Teaches Itself to Play Atari Games—And It Can Beat Your High Score,” PBS NewsHour, February 20, 2015. 117 Stuart Russell: Stuart Russell, “3 Principles for Creating Safer AI,” TED2017, https://www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai/transcript?

But if AI produces better answers than we can without revealing its logic, then we will be going back to our species’ childhood and relying on faith. We will worship artificial intelligence that, as was said of God, works in a mysterious way, his wonders to perform. Perhaps the period from Gutenberg to AlphaGo will prove to be the exception, a relatively short era in history when humans believed they were in control. Before that, for millennia, they saw themselves as small cogs in a vast system they did not fully comprehend, subject to laws of God and nature. The AI age could return us to a similarly humble role.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

It can be impossible to understand how the system reached its conclusion, even if you are the system’s designer and can examine the code. Researchers don’t know precisely how an AI image-classification system differentiates turtles from rifles, let alone why one of them mistook one for the other. In 2016, the AI program AlphaGo won a five-game match against one of the world’s best Go players, Lee Sedol—something that shocked both the AI and the Go-playing worlds. AlphaGo’s most famous move was in game two: move thirty-seven. It’s hard to explain without diving deep into Go strategy, but it was a move that no human would ever have chosen to make. It was an instance of an AI thinking differently.

Martin Ford (2018), Architects of Intelligence: The Truth About AI from the People Building It, Packt Publishing. 208 Def: Robot /bät/ (noun): Kate Darling (2021), The New Breed: What Our History with Animals Reveals about Our Future with Robots, Henry Holt. 52. THE EXPLAINABILITY PROBLEM 212 Deep Thought informs them: Douglas Adams (1978), The Hitchhiker’s Guide to the Galaxy, BBC Radio 4. 212 AlphaGo won a five-game match: Cade Metz (16 Mar 2016), “In two moves, AlphaGo and Lee Sedol redefined the future,” Wired, https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future. 213 “the magical number seven”: George A. Miller (1956), “The magical number seven, plus or minus two: Some limits on our capacity for processing information,” Psychological Review 63, no. 2, http://psychclassics.yorku.ca/Miller. 214 explainability is especially important: J.

A/B testing, 225 abortion, 133–34 Abrams, Stacey, 167 addiction, 185–87 Adelson, Sheldon, 169 administrative burdens, 132–34, 163, 164, 165 administrative state, 154 adversarial machine-learning, 209–10 advertising attention and, 183, 184–85 fear and, 197 persuasion and, 188–89 trust and, 194 AI hacking ability to find vulnerabilities and, 229–30 cognitive hacks and, 181–82, 201–2, 216, 218–19 competitions for, 228–29 computer acceleration of, 224–26, 242–43 defenses against, 236–39 experimentation and, 225 fear and, 197 financial systems and, 241–43, 275n future of, 4–5, 205–6, 240–44, 272n, 275n goals and, 231–35, 240 governance systems for, 245–48 humanization and, 216–19 persuasion and, 188, 218–19, 220–23 politics and, 220–22, 225–26 scale and, 225–26, 242–43, 274n scope and, 226, 243 sophistication and, 226, 243 speed and, 224–25, 242 trust and, 193, 194, 218 AI systems ability to find vulnerabilities, 229–30, 238–39 ambiguity and, 240–41 defined, 206 explainability problem, 212–15, 234 hacking vulnerabilities of, 4, 209–11, 226–27 qualities of, 207–8 specialized vs. general, 206–7, 272n value alignment and, 237 AIBO, 222–23 Air Bud, 259n Airbnb, 124 airline frequent-flier hacks, 38–40, 46 Alexa, 217 Alita: Battle Angel, 218 AlphaGo, 212 Alternative Minimum Tax (AMT), 61 Amazon, 124–25 ambiguity, 240–41 American Jobs Creation Act (2004), 157 Anonymous, 103 ant farms, 1–2 antitrust laws, 185 architecture, 109 artificial intelligence. See AI hacking; AI systems ATM hacks, 31–34, 46, 47, 63 attention, 183–87 authoritarian governments, 174–75 AutoRun, 58, 68 Bank Holding Company Act (1956), 75 banking hacks, 74–78, 119, 260n Barrett, Amy Coney, 121 beneficial ownership, 86, 88 Berkoff, David, 42 Biden, Joseph, 129, 130 Big Lie technique, 189 biological systems, 19–20 Bipartisan Campaign Reform Act (2002), 169 Black Codes, 162–63 Boeing 737 MAX, 116–17 Bongo, Ali, 193 border closures, 126 Borodin, Andrey, 87 bots, 188, 210, 220, 221–22, 225–26, 274n Boxie, 218 brands, 194 Breaking Bad, 32 Breakout, 236–37 Briffault, Richard, 151 bug bounties, 56–57 bugs, 14–15 bureaucracy hacks, 115–18 Burr, Aaron, 155 business email compromise, 53–54, 192 buyers’ agency, 99 capitalism.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

DeepMind claims that AlphaGo “was a decade ahead of its time” (DeepMind 2020). This might refer to a 2014 prediction by Rémi Coulom, the developer of one of the best Go programmes prior to AlphaGo (Levinovitz 2014). However, this may be exaggerated. Go programmes had been reliably improving for years, and a simple trend extrapolation would have predicted that programmes would beat the best human players within a few years of 2016—see, e.g., Katja Grace (2013, Section 5.2). After correcting for the unprecedented amount of hardware DeepMind was willing to employ, it is not clear whether AlphaGo deviates from the trend of algorithmic improvements at all (Brundage 2016). 37.

Machine learning is a method of creating useful algorithms that does not require explicitly programming them; instead, it relies on learning from data, such as images, the results of computer games, or patterns of mouse clicks. One well-publicised breakthrough was DeepMind’s AlphaGo in 2016, which beat eighteen-time international champion Go player Lee Sedol.36 But AlphaGo is just a tiny sliver of all the impressive achievements that have come out of recent developments in machine learning. There have also been breakthroughs in generating and recognising speech, images, art, and music; in real-time strategy games like StarCraft; and in a wide variety of tasks associated with understanding and generating humanlike text.37 You probably use artificial intelligence every day, for example in a Google search.38 AI has also driven significant improvements in voice recognition, email text completion, and machine translation.39 The ultimate achievement of AI research would be to create artificial general intelligence, or AGI: a single system, or collection of systems working together, that is capable of learning as wide an array of tasks as human beings can and performing them to at least the same level as human beings.40 Once we develop AGI, we will have created artificial agents—beings (not necessarily conscious) that are capable of forming plans and executing on them in just the way that human beings can.

An AGI could learn not only to play board games but also to drive, to have conversations, to do mathematics, and countless other tasks. So far, artificial intelligence has been narrow. AlphaGo is extraordinarily good at playing Go but is incapable of doing anything else.41 But some of the leading AI labs, such as DeepMind and OpenAI, have the explicit goal of building AGI.42 And there have been indications of progress, such as the performance of GPT-3, an AI language model which can perform a variety of tasks it was never explicitly trained to perform, such as translation or arithmetic.43 AlphaZero, a successor to AlphaGo, taught itself how to play not only Go but also chess and shogi, ultimately achieving world-class performance.44 About two years later, MuZero achieved the same feat despite initially not even knowing the rules of the game.45 The development of AGI would be of monumental longterm importance for two reasons.

Work in the Future: The Automation Revolution
by Robert Skidelsky and Nan Craig
Published 15 Mar 2020

Human chess champions, for instance, highlight the humanity in the heroic battles of two people over a chessboard, and are not much perturbed by the fact that there is software that can beat them. Moreover, AI spectacles such as AlphaGo beating the world Go champion serve to make the game more popular, rather than less. For instance, the world apparently ran out of Go boards to sell shortly after the AlphaGo event (Shead 2016). A more sensible view of AI abilities, which is held by the majority of practitioners in the field, should also extend to projections of software counterparts taking over from human lawyers, doctors, scientists, journalists, and so on, speculation about which is rife across the media and in some academic circles.

In essence, the main difference from the rule-based age of computing is that top-down programming is no longer required for automation to happen. Instead of having a programmer specify what a computer technology must do in any given contingency, computers can now infer the rules themselves through “examples” or “experience” provided in what is known as “big data”. As is well known, to beat the world champion at Go, AlphaGo drew upon a training dataset of 30 million board positions from 160,000 professional players. Thus, its experience was far greater than that of any professional Go player. This way, computers are already learning how to perform a variety of non-rule-based tasks, like diagnosing disease, writing shorter news stories, and driving trucks, which were non-automatable only a decade ago.

We (AI researchers) love chess and Go, precisely because playing them is a relatively easy activity for software: given the closed world, simple rules and transparent nature of the competition, they are ideally suited to AI-style search techniques, and indeed games continue to be a huge driving force for our field. Hence we should take a more realistic look at recent breakthroughs in AI, for instance the super-human Go playing abilities exhibited by the AlphaGo Zero system from Google DeepMind (Silver et al. 2018). While it’s a huge achievement, especially as the software learns to be a grandmaster from scratch by repeatedly playing against itself, we should not extrapolate too far from this milestone being reached. Importantly, of course, this level of super-human intelligence is not likely to negatively impact the world of work.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

Such a deep-learning program was used to teach a computer to play Go, a game that only a few years ago was thought to be beyond the reach of AI because it was so hard to calculate how well you were doing. It seemed that top Go players relied a great deal on intuition and a feel for position, so proficiency was thought to require a particularly human kind of intelligence. But the AlphaGo program produced by DeepMind, after being trained on thousands of high-level Go games played by humans and then millions of games with itself, was able to beat the top human players in short order. Even more amazingly, the related AlphaGo Zero program, which learned from scratch by playing itself, was stronger than the version trained initially on human games! It was as though the humans had been preventing the computer from reaching its true potential.

Adams, Douglas, 97 Adams, Scott, 249 aesthetics, of computer-generated images, 211–13 Afterwords (Brockman), xxi AI visualization programs, 211–13 al-Khwarizmi, 233 AlphaGo, 16, 184–85 AlphaGo Zero, 184–85, 225–26 altruistic objectives of intelligent machines argument against AI risk, 28, 81 amplification, 179 analog and digital computation, distinguished, 35–39 Anderson, Chris, 143–50 AI, and gradient descent, 148–50 background and overview of work of, 143–44 gradient descent, 145–50 human brain, and gradient descent, 147–48 local minima/local maxima problem, 147, 149–50 mosquito example of gradient descent, 145–46 universe, and gradient descent, 146–47 Anderson, Philip, 68 Aristotle, 222 Arnold, Matthew, 157 Artificial Intelligence: A Modern Approach (Russell and Norvig), 141 artificial stupidity, 210–11 Ascent of Man, The (Bronowski), 118 Ashby, W.

But this argument has its limitations. The reason we can forgive our meager understanding of how human brains work is because our brains work the same way, and that enables us to communicate with other humans, learn from them, instruct them, and motivate them in our own native language. If our robots will all be as opaque as AlphaGo, we won’t be able to hold a meaningful conversation with them, and that would be unfortunate. We will need to retrain them whenever we make a slight change in the task or in the operating environment. So rather than experimenting with opaque learning machines, I am trying to understand their theoretical limitations and examine how these limitations can be overcome.

pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy
by George Gilder
Published 16 Jul 2018

Also at Google in late October 2017, the DeepMind team launched yet another iteration of the AlphaGo program, which, you may recall, repeatedly defeated Lee Sedol, the five-time world champion Go player. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks trained by immersion in records of human expert moves and by reinforcement from self-play. The blog Kurzweil.ai now reports a new iteration of AlphaGo based solely on reinforcement learning, without direct human input beyond the rules of the game and the reward structure of the program. In a form of “generic adversarial program,” AlphaGo plays against itself and becomes its own teacher.

“Starting tabula rasa,” the Google paper concludes, “our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.”10 The claim of “superhuman performance” seemed rather overwrought to me. Outperforming unaided human beings is what machines—from a 3D printer to a plow—are supposed to do. Otherwise we wouldn’t build them. A deterministic problem with few constraints—a galactic field to plow—Go is perfectly suited to a super-fast computer. Functioning at millions of iterations per second, the machine soon reduces all human games of Go ever played to an infinitesimal subset of its own experience.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

That’s the goal Kevin Hassett and his cubic model failed at. We’re a lot like AlphaGo. The program learns an approximate law that assigns a score to each position of the board. The score does not tell us, on the nose, whether a position is a W, an L, or a D; that’s beyond the capacity of any machine to compute, whether it’s implemented on a cluster or inside our skull. But the job of the program isn’t to get that answer exactly right; it’s to give us good advice about which of the many paths before us is most likely to have victory at the end. Modeling a pandemic is harder than AlphaGo in at least one way; in Go, the rules stay the same the whole game.

The chess tree is a redwood to checkers’s shrub, and we don’t know whether the root should be marked W, L, or D. But what if we did? Would people still give their lives to chess if they knew a perfect game always ended in a tie, that there was no winning by magnificence, only losing by screwing up? Or would it feel empty? Lee Se-dol, one of the best Go players alive, quit the game after losing a match to AlphaGo, a machine player developed by the AI firm DeepMind. “Even if I become the number one,” he said, “there is an entity that cannot be defeated.” And Go isn’t even solved! Compared to the redwood that’s chess, Go is—well, if there were a tree somewhat bigger than a googol redwoods it would be that tree.

The first computer program that played Go didn’t come until the late 1960s, when Albert Zobrist wrote one as part of his University of Wisconsin PhD thesis in computer science. In 1994, while Chinook was matching Marion Tinsley blow for blow, Go machines were helpless against professional human players. Things have changed fast, as Lee Se-dol found out. What does a top-tier Go machine like AlphaGo, without a small human crouched inside it to move the pieces, actually do? It doesn’t label each node of the Go tree with a W or an L (we don’t need D, since there aren’t draws in standard Go). The tree of Go is deep and bushy; no one can solve the damn thing. But as with Fermat’s test, we can be content with an approximation, a function that assigns each position of the board a score in some readily computable way.
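What a readily computable score buys you is the ability to look ahead only a few moves and then stop, trusting the score at the frontier instead of solving the whole tree. Here is a minimal sketch of that idea, depth-limited search with a made-up evaluation function, on a trivial game (one-pile Nim, take one to three stones, last stone wins); nothing here resembles the scale or machinery of a real Go engine:

    # Depth-limited game-tree search: exact look-ahead for a few plies, then a
    # heuristic score at the frontier. Game: one-pile Nim, take 1-3, last stone wins.

    def moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def heuristic(stones):
        # Invented stand-in for a learned evaluator: multiples of four happen to be
        # bad for the player to move in this game; a real engine would get this
        # number from a trained network, not a one-line rule.
        return -1.0 if stones % 4 == 0 else 1.0

    def score(stones, depth):
        """Approximate score of the position for the player about to move."""
        if stones == 0:
            return -1.0                  # the opponent just took the last stone
        if depth == 0:
            return heuristic(stones)     # stop searching, trust the approximation
        return max(-score(stones - m, depth - 1) for m in moves(stones))

    def best_move(stones, depth=4):
        return max(moves(stones), key=lambda m: -score(stones - m, depth - 1))

    print([best_move(s) for s in range(1, 13)])

In a real engine the frontier score would come from a trained network rather than a one-line rule, but the division of labour is the same: exact search near the root, an approximation everywhere else.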

Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

The pivotal moment in the contest was the now-legendary thirty-seventh move of the second game. AlphaGo played a move completely outside the game's conventional thinking. It just didn't make sense. But later in the match it proved decisive. With that move Go was changed forever. So was our worldview; machines could forge new paths, paths hidden from us. They could be radically creative and deploy revolutionary insight. In the history of Go, move thirty-seven was a big idea that, in thousands of years, humans hadn't thought of. A machine did. Thanks to the program, previously unthinkable moves are now part of the tactical lexicon. AlphaGo, like AlphaFold, jolted the game out of a local maximum.

Founded in London in 2010, its stated goal was to ‘solve intelligence’ by pioneering the fusion and furtherance of modern ML techniques and neuroscience: to build not just artificial intelligence (AI), but artificial general intelligence (AGI), a multi-purpose learning engine analogous to the human mind. DeepMind made headlines when it created the first software to beat a human champion at Go. In 2016 its AlphaGo program played 9th dan Go professional Lee Sedol over five matches in Seoul and, in a shock result beyond even that of CASP13, won four of them. This was years, even decades ahead of what anyone had expected. There are 10^82 atoms in the observable universe but 10^172 possible positions in Go; this makes it exceptionally difficult for classic machine-driven approaches, exponentially tougher than chess.7 Only a new approach to AI could have triumphed.

Yet what AI will do to human knowledge, to our ability to comprehend and see and discover and create, has received coverage incommensurate with its potential impact. That should change. AI is a cognitive technology, a meta-idea, and so goes to the heart of questions about how ideas are produced. New forms of knowledge and perception, quite unlike those of humans, beyond our unaided capabilities, are starting to accelerate the production of ideas. AlphaGo and AlphaFold are signposts to an era where those closest to a particular toolset are best positioned to push back the frontiers of knowledge. Proximity to these tools helps accelerate discovery, producing watershed moments like those at Seoul and Cancun, not to mention other moves from DeepMind alone into areas like medical diagnosis and the modelling of physical processes.

Succeeding With AI: How to Make AI Work for Your Business
by Veljko Krunic
Published 29 Mar 2020

WIRED. 2008 Jun 23 [cited 2018 Jul 2]. Available from: https://www.wired.com/2008/06/pb-theory/ Wikimedia Foundation. AlphaGo versus Lee Sedol. Wikipedia. [Cited 2018 Jun 21.] Available from: https://en.wikipedia.org/w/index.php?title=AlphaGo_versus_Lee_Sedol&oldid=846917953 DeepMind. AlphaGo. DeepMind. [Cited 2018 Jul 2.] Available from: https://deepmind.com/research/alphago/ Wikimedia Foundation. AlphaGo. Wikipedia. [Cited 2019 Jul 10.] Available from: https://en.wikipedia.org/w/index.php?title=AlphaGo The AlphaStar Team. AlphaStar: Mastering the real-time strategy game StarCraft II. DeepMind. [Cited 2019 Sep 9.]

pages: 371 words: 98,534

Red Flags: Why Xi's China Is in Jeopardy
by George Magnus
Published 10 Sep 2018

We can see the relevance of this and other governance issues by taking a detailed look at one of the exciting areas that could help China to generate new productivity gains in future and circumvent the middle-income trap: technology. Going all out for technology leadership In March 2016, Lee Sedol, a South Korean master of the ancient and complex board game Go, was defeated by AlphaGo, a Google computer program. A little over a year later, AlphaGo was deployed in China to take on the world’s leading Go player, Ke Jie, and won. The event is alleged to have had profound consequences on the thinking of leading Chinese scientists and politicians, who were taken aback by the cutting edge in artificial intelligence (AI) seemingly shown by the US.

Their focus is on key sectors, including advanced rail, ship, aviation and aerospace equipment, agricultural machinery and technology, low and new-energy vehicles, new materials, robotics, biopharmaceuticals and high-end medical equipment, integrated circuits, and 5G mobile telecommunications. Taken aback by AlphaGo’s victory, as noted earlier, China stepped up a few gears to formalise and launch nationally an ambitious AI strategy, already underway at the local government level. A year after the match, the State Council set out the Next Generation AI Development Plan with the goal of boosting China’s AI status, from being in line with competitors by 2020, to world-leading by 2025, and the world’s primary source by 2030.

Xu Huang and Michael Harris Bond, Edward Elgar Publishing, 2012 Yasheng Huang, Capitalism with Chinese Characteristics: Entrepreneurship and the State, MIT Press, 2008 INDEX Unattributed entries, for example geography, refer to the book’s metatopic, China. 1st Five-Year Plan (i) 1st Party Congress (Chinese Communist Party) (i) 5G networks (i) 9/11 (i) 11th Central Committee, third plenum (i) 11th Party Congress (i) 13th Five-Year Plan advanced information and digital systems (i) aims of (i) BRI incorporated into (i) manufacturing and technology (i) pension schemes (i) transport (i) 14th Party Congress (i) 15th Party Congress (i) 18th Party Congress (i), (ii) third plenum (i), (ii), (iii), (iv) 19th Party Congress ‘central contradiction’ restated (i) supply-side reforms (i) Xi addresses (i), (ii), (iii) 21st-Century Maritime Silk Road see Belt and Road Initiative 2000 Olympic Games (i) 2008 Olympic Games (i), (ii) Abe, Shinzō (i) Acemoglu, Daron (i) Action Plan (AI) (i) Addis Ababa (i), (ii) Africa Admiral Zheng (i) BRI concept and (i) Chinese interest in (i) colonialist criticism (i) Japan and (i) loans to (i) metal ore from (i) Silk Road (i) Sub-Saharan Africa (i) ageing trap (i) see also population statistics birth rate (i) consequences of ageing (i) demographic dividends (i), (ii) family structures (i) healthcare (i) ‘iron rice bowl’ (i) mortality rates (i) non-communicable disease (i) old-age dependency ratios (i), (ii), (iii) pensions (i) retirement age (i) Agricultural Bank of China (i) Agricultural Development Bank of China (i) agriculture (i), (ii), (iii) Agriculture and Rural Affairs, Ministry of (i) AI (i), (ii), (iii) AI Innovation and Development Megaproject (i) AI Potential Index (i) Air China (i) Airbus (i), (ii) Aixtron SE (i) Alibaba (i), (ii), (iii), (iv) Alphabet (i) AlphaGo (i), (ii) Alsace-Lorraine (i) Amoy (i) Anbang Insurance (i), (ii), (iii) Angola (i) Angus Maddison project (i) Ant Financial (i), (ii) anti-corruption campaigns 2014 (i) in financial sector (i) Ming dynasty (i) Xi launches (i), (ii), (iii) Apple (i), (ii), (iii) Arab Spring (i) Arabian Sea (i) Arctic (i) Argentina (i), (ii), (iii) Armenia (i) Article IV report (IMF) (i) see also IMF ASEAN (Association of South East Asian Nations) (i), (ii) Asia China the dominant power (i), (ii) Global Innovation Index (i) Obama tours (i) Paul Krugman’s book (i) ‘Pivot to Asia’ (i) state enterprises and intervention (i) Asia-Pacific Economic Cooperation (i) Asian Development Bank (i), (ii) Asian Financial Crisis (1997–98) (i), (ii), (iii), (iv) Asian Infrastructure Investment Bank (i), (ii), (iii), (iv) Asian Tiger economies (i), (ii), (iii), (iv) Atatürk, Mustafa Kemal (i) Australia Chinese investment in (i) Chinese seapower and (i) free trade agreement with (i) immigration rates and WAP (i) innovation statistics (i) pushing back against China (i), (ii) Renminbi reserves (i) Austria (i), (ii) Austria-Hungary (i) automobiles (i), (ii) Babylonia (i) bad debt see debt bad loans (i), (ii) Baidu (i), (ii) Balkans (i) Baltic (i) Baluchistan (i) Bandung (i), (ii) Bangladesh heavy involvement with (i) Indian sphere of influence (i) low value manufacturing moves to (i), (ii) Padma Bridge project (i) Bank of China (i), (ii) Bank for International Settlements (i) banks (i) see also debt and finance; WMPs (wealth management products) assets growth, effects of (i) bad loans problem (i) bank failures (i) central bank created (i) major banks see individual entries non-performing loans (i), (ii), (iii), (iv), (v), (vi) regulators 
step in (i) repo market (i), (ii) shadow banks (i), (ii), (iii), (iv), (v), (vi), (vii), (viii), (ix) n18 smaller banks at risk (i) Baoneng Group (i) Baosteel (i) BBC (i) Bear Stearns (i) Beijing see also Peking 1993 (i) central and local government (i), (ii), (iii) Mao arrives (i) Olympics (i) pollution (i) price rises (i) US delegation (i) water supply (i) Beijing-Hangzhou Grand Canal (i) Belarus (i) Belgrade (i) Bell (i) Belt and Road Initiative (BRI) (i) debt problems in recipient nations (i) description, size and nature (i) economic drivers (i) financing and funding (i), (ii) first Forum (i) geopolitical drivers and disputes (i) Marshall Plan and (i), (ii) project investment (i), (ii) reordering of Indo-Pacific (i) Silk Road and (i), (ii), (iii) ways of looking at (i), (ii) benevolent dictators (i) Bering Strait (i) big data (i) birth rate (i) see also population statistics Bloomberg (i) Bo Xilai (i) Boeing (i), (ii) bond markets (i) Bosphorus Strait (i) Boxers (i), (ii) Brazil BRICS (i), (ii), (iii) middle income, example of (i), (ii), (iii) US steel imports (i) Bretton Woods (i) Brexit (i), (ii) BRICS (i) ‘Building Better Global BRICs’ (Goldman Sachs) (i) BRICS Bank (i), (ii) Britain (i) Boxer Rebellion (i) Brexit (i), (ii) Hong Kong (i) new claims (i) Renminbi reserves (i) Broadcom (i) Brunei Darussalam (i), (ii), (iii) Brzezinski, Zbigniew (i) Budapest (i) budget constraints (i), (ii) Bulgaria (i) Bund, the (Shanghai) (i) Bundesbank (i) bureaucracy (i), (ii), (iii), (iv) Bush, George W.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

Just ten hours later, Google announced that DeepMind had built an AI able to not only beat every Go program ever built, but also (for the first time) a professional-level human player. Things moved quickly from there. By March 2016, the world’s greatest Go player, Lee Sedol, was taking on Google’s AlphaGo AI in a South Korean hotel room, watched by more than 60 million people around the globe. At the end of the series, AlphaGo had beaten Sedol four games to one. Not everything about the myriad changes prompted by AI is rosy, of course. Artificial Intelligence will also be responsible for the disruption of many professions and livelihoods over the years to come, although this will also create new, previously unimagined opportunities for human workers.

_r=1 2 Rogers, Adam, ‘We Asked a Robot to Write an Obit for AI Pioneer Marvin Minsky’, Wired, 26 January 2016: wired.com/2016/01/we-asked-a-robot-to-write-an-obit-for-ai-pioneer-marvin-minsky/ 3 Minsky, Marvin, Society of Mind (New York: Simon and Schuster, 1986). 4 HAL 90210, ‘No Go: Facebook Fails to Spoil Google’s Big AI Day’, Guardian, 28 January 2016: theguardian.com/technology/2016/jan/28/go-playing-facebook-spoil-googles-ai-deepmind 5 Moyer, Christopher, ‘How Google’s AlphaGo Beat a Go World Champion,’ Atlantic, 28 March 2016: http://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611 6 ‘US Military Shelves Google Robot Plan Over “Noise Concerns”’, BBC News, 30 December 2015: bbc.co.uk/news/technology-35201183 7 Collins, Ben, ‘Meet the Robot Writing “Friends” Sequels’, Daily Beast, 20 January 2016: thedailybeast.com/articles/2016/01/20/meet-the-robot-writing-friends-sequels.html Index

2001: A Space Odyssey (1968) 2, 228, 242–4 2045 Initiative 217 accountability issues 240–4, 246–8 Active Citizen 120–2 Adams, Douglas 249 Advanced Research Projects Agency (ARPA) 19–20, 33 Affectiva 131 Age of Industry 6 Age of Information 6 agriculture 150–1, 183 AI Winters 27, 33 airlines, driverless 144 algebra 20 algorithms 16–17, 59, 67, 85, 87, 88, 145, 158–9, 168, 173, 175–6, 183–4, 186, 215, 226, 232, 236 evolutionary 182–3, 186–8 facial recognition 10–11, 61–3 genetic 184, 232, 237, 257 see also back-propagation AliveCor 87 AlphaGo (AI Go player) 255 Amazon 153, 154, 198, 236 Amy (AI assistant) 116 ANALOGY program 20 Analytical Engine 185 Android 59, 114, 125 animation 168–9 Antabi, Bandar 77–9 antennae 182, 183–5 Apple 6, 35, 56, 65, 90–1, 108, 110–11, 113–14, 118–19, 126–8, 131–2, 148–9, 158, 181, 236, 238–9, 242 Apple iPhone 108, 113, 181 Apple Music 158–9 Apple Watch 66, 199 architecture 186 Artificial Artificial Intelligence (AAI) 153, 157 Artificial General Intelligence (AGI) 226, 230–4, 239–40, 254 Artificial Intelligence (AI) 2 authentic 31 development problems 23–9, 32–3 Good Old-Fashioned (Symbolic) 22, 27, 29, 34, 36, 37, 39, 45, 49–52, 54, 60, 225 history of 5–34 Logical Artificial Intelligence 246–7 naming of 19 Narrow/Weak 225–6, 231 new 35–63 strong 232 artificial stupidity 234–7 ‘artisan economy’ 159–61 Asimov, Isaac 227, 245, 248 Athlone Industries 242 Atteberry, Kevan J. 112 Automated Land Vehicle in a Neural Network (ALVINN) 54–5 automation 141, 144–5, 150, 159 avatars 117, 193–4, 196–7, 201–2 Babbage, Charles 185 back-propagation 50–3, 57, 63 Bainbridge, William Sims 200–1, 202, 207 banking 88 BeClose smart sensor system 86 Bell Communications 201 big business 31, 94–6 biometrics 77–82, 199 black boxes 237–40 Bletchley Park 14–15, 227 BMW 128 body, machine analogy 15 Bostrom, Nick 235, 237–8 BP 94–95 brain 22, 38, 207–16, 219 Brain Preservation Foundation 219 Brain Research Through Advanced Innovative Neurotechnologies 215–16 brain-like algorithms 226 brain-machine interfaces 211–12 Breakout (video game) 35, 36 Brin, Sergey 6–7, 34, 220, 231 Bringsjord, Selmer 246–7 Caenorhabditis elegans 209–10, 233 calculus 20 call centres 127 Campbell, Joseph 25–6 ‘capitalisation effect’ 151 cars, self-driving 53–56, 90, 143, 149–50, 247–8 catering 62, 189–92 chatterbots 102–8, 129 Chef Watson 189–92 chemistry 30 chess 1, 26, 28, 35, 137, 138–9, 152–3, 177, 225 Cheyer, Adam 109–10 ‘Chinese Room, the’ 24–6 cities 89–91, 96 ‘clever programming’ 31 Clippy (AI assistant) 111–12 clocks, self-regulating 71–2 cognicity 68–9 Cognitive Assistant that Learns and Organises (CALO) 112 cognitive psychology 12–13 Componium 174, 176 computer logic 8, 10–11 Computer Science and Artificial Intelligence Laboratory (CSAIL) 96–7 Computer-Generated Imagery (CGI) 168, 175, 177 computers, history of 12–17 connectionists 53–6 connectomes 209–10 consciousness 220–1, 232–3, 249–51 contact lenses, smart 92 Cook, Diane 84–6 Cook, Tim 91, 179–80 Cortana (AI assistant) 114, 118–19 creativity 163–92, 228 crime 96–7 curiosity 186 Cyber-Human Systems 200 cybernetics 71–4 Dartmouth conference 1956 17–18, 19, 253 data 56–7, 199 ownership 156–7 unlabelled 57 death 193–8, 200–1, 206 Deep Blue 137, 138–9, 177 Deep Knowledge Ventures 145 Deep Learning 11–12, 56–63, 96–7, 164, 225 DeepMind 35–7, 223, 224, 245–6, 255 Defense Advanced Research Projects Agency (DARPA) 33, 112 Defense Department 19, 27–8
DENDRAL (expert system) 29–31 Descartes, René 249–50 Dextro 61 DiGiorgio, Rocco 234–5 Digital Equipment Corporation (DEC) 31 Digital Reasoning 208–9 ‘Digital Sweatshops’ 154 Dipmeter Advisor (expert system) 31 ‘do engines’ 110, 116 Dungeons and Dragons Online (video game) 197 e-discovery firms 145 eDemocracy 120–1 education 160–2 elderly people 84–6, 88, 130–1, 160 electricity 68–9 Electronic Numeric Integrator and Calculator (ENIAC) 12, 13, 92 ELIZA programme 129–30 Elmer and Elsie (robots) 74–5 email filters 88 employment 139–50, 150–62, 163, 225, 238–9, 255 eNeighbor 86 engineering 182, 183–5 Enigma machine 14–15 Eterni.me 193–7 ethical issues 244–8 Etsy 161 Eurequa 186 Eve (robot scientist) 187–8 event-driven programming 79–81 executives 145 expert systems 29–33, 47–8, 197–8, 238 Facebook 7, 61–2, 63, 107, 153, 156, 238, 254–5 facial recognition 10–11, 61–3, 131 Federov, Nikolai Fedorovich 204–5 feedback systems 71–4 financial markets 53, 224, 236–7 Fitbit 94–95 Flickr 57 Floridi, Luciano 104–5 food industry 141 Ford 6, 230 Foxbots 149 Foxconn 148–9 fraud detection 88 functional magnetic resonance imaging (fMRI) 211 Furbies 123–5 games theory 100 Gates, Bill 32, 231 generalisation 226 genetic algorithms 184, 232, 237, 257 geometry 20 glial cells 213 Go (game) 255 Good, Irving John 227–8 Google 6–7, 34, 58–60, 67, 90–2, 118, 126, 131, 155–7, 182, 213, 238–9 ‘Big Dog’ 255–6 and DeepMind 35, 245–6, 255 PageRank algorithm 220 Platonic objects 164, 165 Project Wing initiative 144 and self-driving cars 56, 90, 143 Google Books 180–1 Google Brain 61, 63 Google Deep Dream 163–6, 167–8, 184, 186, 257 Google Now 114–16, 125, 132 Google Photos 164 Google Translate 11 Google X (lab) 61 Government Code and Cypher School 14 Grain Marketing Adviser (expert system) 31 Grímsson, Gunnar 120–2 Grothaus, Michael 69, 93 guilds 146 Halo (video game) 114 handwriting recognition 7–8 Hank (AI assistant) 111 Hawking, Stephen 224 Hayworth, Ken 217–21 health-tracking technology 87–8, 92–5 Healthsense 86 Her (film, 2013) 122 Herd, Andy 256–7 Herron, Ron 89–90 High, Rob 190–1 Hinton, Geoff 48–9, 53, 56, 57–61, 63, 233–4 hive minds 207 holograms 217 HomeChat app 132 homes, smart 81–8, 132 Hopfield, John 46–7, 201 Hopfield Nets 46–8 Human Brain Project 215–16 Human Intelligence Tasks (HITs) 153, 154 hypotheses 187–8 IBM 7–11, 136–8, 162, 177, 189–92 ‘IF THEN’ rules 29–31 ‘If-This-Then-That’ rules 79–81 image generation 163–6, 167–8 image recognition 164 imagination 178 immortality 204–7, 217, 220–1 virtual 193–8, 201–4 inferences 97 Infinium Robotics 141 information processing 208 ‘information theory’ 16 Instagram 238 insurance 94–5 Intellicorp 33 intelligence 208 ambient 74 ‘intelligence explosion’ 228 top-down view 22, 25, 246 see also Artificial Intelligence internal combustion engine 140–1, 150–1 Internet 10, 56 disappearance 91 ‘Internet of Things’ 69, 70, 83, 249, 254 invention 174, 178, 179, 182–5, 187–9 Jawbone 78–9, 92–3, 254 Jennings, Ken 133–6, 138–9, 162, 189 Jeopardy!

The Ages of Globalization
by Jeffrey D. Sachs
Published 2 Jun 2020

More recently, we have seen stunning breakthroughs in deep neural networks, that is, neural networks with hundreds of layers of artificial neurons. In 2016, an AI system, AlphaGo, from the company DeepMind, took on the eighteen-time world Go champion, Lee Sedol. Go is a board game of such sophistication and subtlety that it was widely believed that machines would be unable to compete with human experts for years or decades to come. Sedol, like Kasparov before him, believed that he would triumph easily over AlphaGo. In the event, he was decisively defeated by the system. Then, to make matters even more dramatic, AlphaGo was decisively defeated by a next-generation AI system that learned Go from scratch in self-play over a few hours.

Abbasid Caliphate, 87 Achaemenid Empire, 74–75, 77 Achaemenid Persia, 66 Africa, 23, 33; diseases of, 152; European empires dividing, 153; Europe’s onslaught of, 151–52; farm animals of, 55; indigenous people and slaves from, 116–20; migration from, 34–35, 41; slave trade from, 118, 118–19; tsetse-infested, 56; wild ass of, 58 Agenda 21, 197 agriculture, 3; in ecological zones, 45–46; emergence of, 41–42, 42; horses used in, 65–66; in Neolithic Age, 5, 8; population and, 135; sedentism leading to, 41; in Song Dynasty, 90; sustainable, 13 air pollution, 187–88, 190, 190 Akkadian Empire, 66 Alexander the Great, 28, 65–66, 75–76, 76 Alexander VI (Spanish pope), 108–9 Alexandria, 70 algal blooms, 190, 191 algorithms, 174–75 Allison, Graham, 193 alluvial civilizations, 46–48 alpacas, 56, 61 AlphaGo (AI system), 176 Anatolia, migration from, 64 ancient urban centers, 67 Anglo-American hegemony, 130, 153–56 animal domestication, 54–56 animal husbandry, 50 anopheles gambiae (mosquito), 152 Anthony, Marc, 77 anti-fascist alliance, 207 anti-trade policy, of China, 97 Aquinas, Thomas, 78 Arab caliphates, 88 Aristotle, 69, 70, 75, 77, 212 artificial intelligence, 174, 175–76, 185 artificial neural networks, 174 Asia: climate and population in, 113; East, 165; Europe’s divergence with, 144–45; fossil fuels polluting air of, 190; migration from, 227n11; steppes of, 53; trade control sought by, 107–8 Asian tigers, 180 Assyrian kingdom, 66 Athens, 74 Aurelius, Marcus, 84 automobiles, 141 Avars, 86 Axial Age, 70–72 Babylonian kingdom, 66 Bacon, Francis, 106 Bacon, Roger, 136 Battle of Plassey (1757), 148 Battle of Tours (732 CE), 87 Bayt-al-Hikmah (House of Wisdom), 78 Beckert, Sven, 120–21 Bell Labs, 173 Belt and Road Initiative (BRI), 205, 206 Beringian land bridge, 19 biodegradable waste products, 199–200 biodiversity, 17–18, 184, 188–89, 199 biology, 75 biomass burning, 190 Black Death, 92 blank-slate learning (tabula rasa), 176 Bolshevik Revolution, 113 book writings, 71 botany, 106 Boulton, Richard, 137 Boxer Rebellion, 147 BRI.

pages: 402 words: 126,835

The Job: The Future of Work in the Modern Era
by Ellen Ruppel Shell
Published 22 Oct 2018

One striking hallmark of that change came in March 2016, when the Google artificial intelligence program AlphaGo beat South Korean world master Lee Sedol in Go, an ancient board game known for its baffling complexity. Each game of Go has 10^360 possible moves, an unimaginably large number that makes exhaustive evaluation of individual moves utterly unrealistic. This complexity makes the game far more unpredictable than chess; rather than see possibilities, players perceive possibilities either consciously or unconsciously by gazing at the pieces on the board. To do something similar, AlphaGo relies on what scientists call neural networks, essentially mathematical versions of the networks of nerve cells operating in biological systems.
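A neural network in this sense is just layers of simple arithmetic units whose connection weights are nudged, example by example, to reduce error. The sketch below is a two-layer network in NumPy learning the XOR function; the sizes, learning rate, and task are arbitrary choices for illustration and are many orders of magnitude smaller than anything inside AlphaGo:

    import numpy as np

    # A tiny two-layer network trained by gradient descent to fit XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer of 8 "neurons"
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):
        # forward pass: signals flow through the layers
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: nudge every weight to shrink the prediction error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())   # typically ends close to [0, 1, 1, 0]

In AlphaGo the same adjust-the-weights idea is applied to networks that read a whole board position instead of two bits, but the mechanics of learning are the same in kind.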

Much like the human brain, AlphaGo has an unquenchable ability to learn, and not only from “observing” games played by human experts. The program is designed to play millions of games against itself, continuously improving its performance without any human intervention. In other words, AlphaGo seems to have what in the previous chapter was deemed critical for good work in the digital age—analytic skill. That is, these machines have the ability to visualize, articulate, conceptualize, or solve problems by making decisions that are sensible given the available information.

That is, these machines have the ability to visualize, articulate, conceptualize, or solve problems by making decisions that are sensible given the available information. And they are getting better and better at it. AlphaGo is just a sample of what scientists hope to accomplish with this line of research, and any number of corporations—large and small—are hot on the trail. For example, tech giant IBM boasts that “over the next five years, machine learning applications will lead to new breakthroughs that will amplify human abilities, assist us in making good choices, look out for us and help us navigate our world in powerful new ways.” The company targets five arenas ripe for disruption: medicine, education, retail, online security, and what it calls “sentient cities”—apparently, cities that through technology know what residents want and need before the residents themselves do.

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

Banks use them for fraud detection; dating websites use them to suggest potential matches; marketers use them to try to predict who will respond favorably to an ad; and photo-sharing sites use them for automatic face recognition. We’ve come a long way since checkers. In 2016, Google’s AlphaGo demonstrated a significant machine-learning advance. For the first time, a computer beat a human champion of Go, a game far more complex than checkers or chess. In a sign of the times, AlphaGo exhibited moves that were so unexpected that some observers deemed them to actually be creative and even “beautiful.”c The growth of AI and machine learning has been intermittent over the decades, but the way that they’ve crept into products and business operations in recent years shows that they’re more than ready for prime time.

See General Electric (GE) Geekbot, 196 General Data Protection Regulation (GDPR), 108, 124 General Electric (GE), 10, 75, 183–184, 194–195, 209 Predix system, 27, 29–30, 75 General Motors (GM), 67, 128 Gershgorn, Dave, 23 gesture recognition, 65 Gigster, 52–54, 59 GlaxoSmithKline, 99 GNS Healthcare, 10, 72–74, 80 Goldman Sachs, 49 Google, 209 AlphaGo, 42–43 autocomplete feature, 197, 200 Home, 146 marketing, 99 PAIR initiative, 179 trainers at, 179 Gorbis, Marina, 187 government regulations, 213 GPS navigation, 6–7 Gridspace Sift, 196 guardrails, 168–169 Harrison, Brent, 130 Haverford College, 71 health care, 82 cost reduction in, 167–168 embodied AI in, 150–151 hospital bed allocation in, 173 personalized, 79–80 precision medicine in, 72–74 radiology augmentation in, 139, 141–143 referrals for, 96–97 rehumanizing time in, 187–188 risk management in, 81 Sophie in, 119 Heller, Laura, 162 Heppenstall, Tal, 188 Hido, Shohei, 21–22 Hill, Colin, 73, 80 Hill, Kashmir, 79 HireVue, 51–52 hiring and recruitment, 51–52, 133, 198–199 Hitachi, 23 H&M, 91 holistic melding, 12, 197, 200–201 hospital bed allocation, 173 Hoyle, Rhiannon, 28 Huffington Post, 49 humanness attribute training, 116 human resources (HR), 53–54, 133 humans AI vs., 7, 19, 106, 209 augmentation of, 5 collaboration of with AI, 1–3, 25 judgment integration, 191–193 replacement of, 4–5, 19 roles of in developing and deploying AI, 113–133 skills of machines vs., 20–21, 105–106 Hwange, Tim, 170 IBM’s Watson.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

Reinforcement Learning For example, many robots implement Reinforcement Learning algorithms to learn how to walk. DeepMind’s AlphaGo program is also a good example of Reinforcement Learning: it made the headlines in March 2016 when it beat the world champion Lee Sedol at the game of Go. It learned its winning policy by analyzing millions of games, and then playing many games against itself. Note that learning was turned off during the games against the champion; AlphaGo was just applying the policy it had learned. Batch and Online Learning Another criterion used to classify Machine Learning systems is whether or not the system can learn incrementally from a stream of incoming data.
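
The distinction the passage draws, between a learning phase and a frozen "just apply the policy" phase, can be sketched in a few lines. The example below uses a made-up three-action bandit environment rather than Go, and all class and function names are hypothetical; it only illustrates that parameters are updated during training and merely read at play time.

```python
import random

class TinyPolicy:
    """A toy policy: one preference weight per action, updated only in training."""
    def __init__(self, actions):
        self.weights = {a: 0.0 for a in actions}

    def act(self, explore=False):
        if explore and random.random() < 0.1:            # exploration during training
            return random.choice(list(self.weights))
        return max(self.weights, key=self.weights.get)   # greedy when frozen

    def update(self, action, reward, lr=0.1):
        self.weights[action] += lr * (reward - self.weights[action])

def reward_of(action):
    """Stand-in environment: action 'b' is best, with some noise."""
    return {"a": 0.2, "b": 0.8, "c": 0.5}[action] + random.gauss(0, 0.05)

policy = TinyPolicy(["a", "b", "c"])

# Training phase: act, observe reward, update the weights.
for _ in range(2000):
    a = policy.act(explore=True)
    policy.update(a, reward_of(a))

# Deployment phase ("learning turned off"): the policy is applied, never updated.
for _ in range(5):
    print("chosen action:", policy.act(explore=False))
```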

They are versatile, powerful, and scalable, making them ideal to tackle large and highly complex Machine Learning tasks, such as classifying billions of images (e.g., Google Images), powering speech recognition services (e.g., Apple’s Siri), recommending the best videos to watch to hundreds of millions of users every day (e.g., YouTube), or learning to beat the world champion at the game of Go by examining millions of past games and then playing against itself (DeepMind’s AlphaGo). In this chapter, we will introduce artificial neural networks, starting with a quick tour of the very first ANN architectures. Then we will present Multi-Layer Perceptrons (MLPs) and implement one using TensorFlow to tackle the MNIST digit classification problem (introduced in Chapter 3). From Biological to Artificial Neurons Surprisingly, ANNs have been around for quite a while: they were first introduced back in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts.

But a revolution took place in 2013 when researchers from an English startup called DeepMind demonstrated a system that could learn to play just about any Atari game from scratch,2 eventually outperforming humans3 in most of them, using only raw pixels as inputs and without any prior knowledge of the rules of the games.4 This was the first of a series of amazing feats, culminating in March 2016 with the victory of their system AlphaGo against Lee Sedol, the world champion of the game of Go. No program had ever come close to beating a master of this game, let alone the world champion. Today the whole field of RL is boiling with new ideas, with a wide range of applications. DeepMind was bought by Google for over 500 million dollars in 2014.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

In the past five years, however, AI has become the world’s hottest technology. A stunning turning point came in 2016 when AlphaGo, a machine built by DeepMind engineers, defeated Lee Sedol in a five-round Go contest known as the Google DeepMind Challenge Match. Go is a board game more complex than chess by one million trillion trillion trillion trillion times. Also, in contrast to chess, the game of Go is believed by its millions of enthusiastic fans to require true intelligence, wisdom, and Zen-like intellectual refinement. People were shocked that the AI competitor vanquished the human champion. AlphaGo, like most of the commercial breakthroughs in AI, was built on deep learning, a technology that draws on large data sets to teach itself things.

The deceivingly simple name of the exhibition was nowhere near a sufficient representation of the diversity and complexity it contained. Each room of the exhibit revealed new wonders, all with a connection to the curators’ expansive definition of what AI encompasses. There was Golem, a mythical creature in Jewish folklore; Doraemon, the well-loved Japanese anime hero; Charles Babbage’s preliminary computer science experiments; AlphaGo, the program designed to challenge humans’ fundamental intellect; Joy Buolamwini’s analysis on the gender bias of facial recognition software; and teamLab’s large-scale interactive digital art infused with Shinto philosophy and aesthetics. It was a magnificent and mind-expanding reminder of the power of interdisciplinary thinking.

Among the many subfields of AI, machine learning is the field that has produced the most successful applications, and within machine learning, the biggest advance is “deep learning”—so much so that the terms “AI,” “machine learning,” and “deep learning” are sometimes used interchangeably (if imprecisely). Deep learning supercharged excitement in AI in 2016 when it powered AlphaGo’s stunning victory over a human competitor in Go, Asia’s most popular intellectual board game. After that headline-grabbing turn, deep learning became a prominent part of most commercial AI applications, and it is featured in most of the stories in AI 2041. “The Golden Elephant” explores deep learning’s stunning potential—as well as its potential pitfalls, like perpetuating bias.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

In 2012, a University of Toronto artificial intelligence team made up of Hinton and two of his students won the annual ImageNet Large Scale Visual Recognition Competition with a deep learning neural network that blew the competition away.5 More recently, Google DeepMind used deep learning to develop the Go-playing AI, AlphaGo, training it by using a database of thirty million recorded moves from expert-level games. In March 2016, AlphaGo beat the world Go grandmaster, Lee Sedol, in four out of five games. Playing Go is considered a much bigger AI challenge than playing chess. Performance at this level wasn’t expected within AI circles for another decade. As important as the underlying algorithms are, the method of training is at least as important.
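
To give a flavor of the "learn from recorded expert moves" stage mentioned here, the sketch below does supervised move prediction in the crudest possible way, by counting which move experts chose in each kind of position. The position labels and moves are invented purely for illustration; the real system learned from full board positions with a deep convolutional network, not a lookup table.

```python
from collections import Counter, defaultdict

# Made-up (position, expert_move) records standing in for a game database.
expert_records = [
    ("corner_open", "3-3 point"),
    ("corner_open", "4-4 point"),
    ("corner_open", "4-4 point"),
    ("ladder_threat", "extend"),
    ("ladder_threat", "extend"),
    ("ko_fight", "ko threat"),
]

# "Training": count which move experts chose in each kind of position.
move_counts = defaultdict(Counter)
for position, move in expert_records:
    move_counts[position][move] += 1

def predict_move(position):
    """Return the move experts played most often in this position, if any."""
    counts = move_counts.get(position)
    return counts.most_common(1)[0][0] if counts else None

print(predict_move("corner_open"))    # -> '4-4 point'
print(predict_move("ladder_threat"))  # -> 'extend'
```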

Today, in the twenty-first century, we find ourselves facing a future in which our machines are consistently and repeatedly besting us in all manner of intellectual pursuits. IBM’s Deep Blue beat world chess champion Garry Kasparov in a six-game match in 1997. In 2011, IBM’s Watson (DeepQA) defeated the two all-time Jeopardy champions Brad Rutter and Ken Jennings in a two-day contest of general knowledge. Google’s AlphaGo soundly trounced the longtime world Go grandmaster, Lee Sedol, in four games out of five in March 2016. Given all this, it seems one of the few remaining aspects of machine intelligence left to explore in fiction is how they interact with the world emotionally. In A.I. Artificial Intelligence, Steven Spielberg tells the story of David, a mecha or highly advanced robot in the form of an eleven-year-old child who wants to become a “real boy” so that his mother will love him.

See Access-consciousness AARP (2010 study), 153 Abigail, 3–4, 161–162 Access-consciousness, 242–249, 270 ACLU, 145 adaptive learning technology, 117–118 addictive behaviors and digitized emotion, 220 adrenaline, 186, 221 Affdex, 66, 69 affect, 47 Affect in Speech, 57 Affectiva, 66, 68–72, 118, 275 Affective Computing Company (tACC), 72 Affective Computing (Picard), 47–48, 51 Affective Computing Research Group, Media Lab, 52–54, 57, 60 AI and social experiments, 195–198 AI Watson, 197 “AI winter,” 37–38 AIBO, 200 “AI-human symbiote,” 264 Air Force Research Lab, Wright-Patterson AFB, OH, 128–129 Aldebaran, 82, 112–113, 152 alexithymia, 34 Alone Together (Turkle), 199 AlphaGo, 68, 233 Alzheimer’s disease, 205 AM (deranged supercomputer), 232 American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5), 187 Amin, Wael, 59 amygdala, 19, 34, 221 anterior cingulate cortex (ACC), 19–20, 34, 247 anthropomorphism, 80–81 Apollo Program, 272 Apple, 75 application programming interfaces (APIs), 65, 72 Ardipethicus ramidus, 14 artificial intelligence, 52–53 development of, 35–36 foundations of, 36 term coined, 37 artificial neural networks (ANNs), 66, 251 artificially generated emotions, 102.

pages: 328 words: 96,678

MegaThreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them
by Nouriel Roubini
Published 17 Oct 2022

To beat world chess champion Garry Kasparov multiple times in 1997, IBM Deep Blue devised inventive strategies. Yet that was just an opening gambit compared to Deep Mind, a self-teaching algorithm. In 2016, a Deep Mind computer christened AlphaGo mastered a game with more possible moves than there are atoms in the universe. “It studies games that humans have played, it knows the rules and then it comes up with creative moves,” Wired editor in chief Nicholas Thompson told PBS Frontline.4 In a much-touted contest, AlphaGo outplayed the reigning world Go champion Lee Sedol in four out of five tries. Game two marked a watershed moment for AI. The thirty-seventh placement of a piece on the Go board “was a move that humans could not fathom, but yet it ended up being brilliant and woke people up to say, ‘Wow, after thousands of years of playing, we never thought about making a move like that,’” AI scientist Kai-Fu Lee told Frontline.

Another expert observer suggested, in a sobering coda, that the victory for AI wasn’t so much about a computer beating a human as one form of intelligence beating another. In this battle of brains, neither side enjoys special status. “You can get into semantics about what does reasoning mean, but clearly the AI system was reasoning at that point,” says New York Times journalist Craig Smith, who now hosts the podcast Eye on AI.5 A year later, AlphaGo Zero bested AlphaGo by learning the rules of the game and then generating billions of data points in just three days. Deep learning has progressed with mind-bending speed. In 2020, Deep Mind’s AlphaFold2 revolutionized the field of biology by solving “the protein-folding problem” that had stumped medical researchers for five decades.

pages: 331 words: 104,366

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins
by Garry Kasparov
Published 1 May 2017

In 2016, nineteen years after my loss to Deep Blue, the Google-backed AI project DeepMind and its Go-playing offshoot AlphaGo defeated the world’s top Go player, Lee Sedol. More importantly, and also as predicted, the methods used to create AlphaGo were more interesting as an AI project than anything that had produced the top chess machines. It uses machine learning and neural networks to teach itself how to play better, as well as other sophisticated techniques beyond the usual alpha-beta search. Deep Blue was the end; AlphaGo is a beginning. THE LIMITATIONS of chess weren’t the only fundamental misconceptions in the equation.

Getting a machine system to a 90 percent effectiveness rate may be enough to make it useful, but it’s often even harder to get it from 90 percent to 95 percent, let alone to the 99.99 percent you would want before trusting it to translate a love letter or drive your kids to school. The machine-learning approach might have eventually worked with chess, and some attempts have been made. Google’s AlphaGo uses these techniques extensively with a database of around thirty million moves. As predicted, rules and brute force alone weren’t enough to beat the top Go players. But by 1989, Deep Thought had made it quite clear that such experimental techniques weren’t necessary to be good enough at chess to challenge the world’s best players.

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

AGI is also called “Strong AI” to differentiate from “Weak AI” or “Narrow AI," which refers to systems designed for one specific task and whose capabilities are not easily transferable to other systems. We go into more detail about the distinction between AI and AGI in our Machine Intelligence Continuum in Chapter 2. Though Deep Blue, which beat the world champion in chess in 1997, and AlphaGo, which did the same for the game of Go in 2016, have achieved impressive results, all of the AI systems we have today are “Weak AI." Narrowly intelligent programs can defeat humans in specific tasks, but they can’t apply that expertise to other tasks, such as driving cars or creating art. Solving tasks outside of the program’s original parameters requires building additional programs that are similarly narrow.

Neural networks were invented in the 1950s, but recent advances in computational power and algorithm design—as well as the growth of big data—have enabled deep learning algorithms to approach human-level performance in tasks such as speech recognition and image classification. Deep learning, in combination with reinforcement learning, enabled Google DeepMind’s AlphaGo to defeat human world champions of Go in 2016, a feat that many experts had considered to be computationally impossible. Much media attention has been focused on deep learning, and an increasing number of sophisticated technology companies have successfully implemented deep learning for enterprise-scale products.

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

This challenge of finding out how much a computer can learn from scratch is central to understanding how far we are from creating a general AI. After I talked to Harm, in October 2017 I contacted David Silver, who leads the DeepMind team that is training neural networks to play the board game Go. The AlphaGo algorithm that David’s team created had beaten the world number-one Go player, Ke Jie, in May 2017. The algorithm had begun life by learning a playbook of 30 million moves made by the world’s best Go players. It then fine-tuned its skills by repeatedly playing against different variations of itself.

David had also been involved with the Atari games project, so I felt that he would have insight into the balance between learning from scratch and building a specialised algorithm, as he had done with Go. I emailed him a series of questions about this, but he replied asking me to be patient because a ‘new paper in a few weeks’ would answer my questions. It was worth the wait. On 19 October 2017, David and his team published an article in the journal Nature describing AlphaGo Zero, a new Go-playing algorithm that beat all previous algorithms. Not only that, this algorithm worked without human assistance. They set up a neural network, let it play lots of games of Go against itself and a few days later it was the best Go player in the world. I was impressed, much more so than when a computer won at chess or poker, or even with David’s first Go champion.

Clearly, you both understand everything to do with our life online a lot better than I do, so thanks for patiently explaining it to me. And for being the best kids ever. Index 70 News here Acharya, Anurag here, here, here Adamic, Lada here, here, here, here advertising here, here, here, here, here retargeted advertising here Albright, Jonathan here algorithms here, here, here, here, here, here AlphaGo Zero here ‘also liked’ here, here Amazon here black box algorithms here, here, here, here calibration here, here, here COMPAS algorithm here, here, here, here eliminating bias here, here filter algorithms here, here GloVe here Google here, here, here, here, here, here language here Libratus here neural networks here, here PCRA algorithm here personality analysis here, here predicting football results here predictive polls here regression models here, here Word2vec here, here, here, here Allcott, Hunt here, here Allen Institute for Artificial Intelligence here Amazon here, here, here, here, here, here, here Angwin, Julia here, here, here ants here Apple here, here, here Apple Music here Aral, Sinan here Arrow, Kenneth here artificial intelligence (AI) here, here, here, here, here, here, here, here limitations here neural networks here superintelligence here, here Turing test here ASI Data Science here Atari here, here, here, here, here bacteria (E. coli) here, here Banksy here, here, here, here ‘Islamic Banksy’ here Bannon, Steve here Barabási, Albert-László here BBC here, here BBC Bitesize here bees here, here bell-shaped curves here Bezos, Jeff here bias here, here, here fairness and unfairness here gender bias here racial bias here, here, here Biederman, Felix here Biro, Dora here BlackHatWorld here Blizzard here, here Bolukbasi, Tolga here Bostrom, Nick here bots here, here Boxing here Breakout here, here Breitbart here, here, here, here Brennan, Tim here, here, here, here, here Brexit here, here, here, here, here, here voter analysis here, here Brier score here Broome, Fiona here browsing histories here, here Bryson, Joanna here, here, here, here Buolamwini, Joy here Burrell, Jenna here Bush, George W. here, here Business Insider here BuzzFeed here, here Cadwalladr, Carole here CAFE here calibration bias here, here, here Cambridge Analytica (CA) here, here, here, here, here, here, here, here regression models here, here, here Cameron, David here Campbell’s Soup here Captain Pugwash here careerchange.com here Chalabi, Mona here, here chatbots here, here, here chemtrails here Chittka, Lars here citations here Clinton, Hillary here, here, here, here, here, here, here CNN here, here Connelly, Brian here, here Conservative Party here, here conspiracy theories here, here, here, here Corbett-Davies, Sam here criminal reoffending here, here, here COMPAS algorithm here, here, here, here Cruz, Ted here Daily Mail here, here Daily Star here data see online data collection here databases here, here myPersonality project here Datta, Amit here, here, here Davis, Steve J. 
here Deep Blue here, here Defense Advanced Research Projects Agency (DARPA) US here Del Vicario, Michela here, here, here, here, here Democrat Party here, here, here, here, here dogs here double logarithmic plots here, here Dragan, Anca here Dressel, Julia here, here Drudge Report here DudePerfect here Dugan, Regina here Dussutour, Audrey here Dwork, Cynthia here echo chambers here, here, here, here, here, here, here, here, here Economist here, here Economist 1843 here Eom, Young-Ho here Etzioni, Oren here European Union (EU) here, here, here, here, here Facebook here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here artificial intelligence (AI) here, here, here, here, here Facebook friends here, here, here Facebook profiles here, here Messenger here, here myPersonality project here news feed algorithm here, here patents here Will B.

pages: 462 words: 129,022

People, Power, and Profits: Progressive Capitalism for an Age of Discontent
by Joseph E. Stiglitz
Published 22 Apr 2019

See, e.g., Stiglitz, Freefall; Commission of Experts on Reforms of the International Monetary and Financial System appointed by the President of the United Nations General Assembly, The Stiglitz Report: Reforming the International Monetary and Financial Systems in the Wake of the Global Crisis (New York: The New Press, 2010); Simon Johnson and James Kwak, 13 Bankers: The Wall Street Takeover and the Next Financial Meltdown (New York: Random House, 2010); and Rana Foroohar, Makers and Takers: How Wall Street Destroyed Main Street (New York: Crown, 2016). CHAPTER 6: THE CHALLENGE OF NEW TECHNOLOGIES 1.Google’s Go-playing computer program AlphaGo, developed by the tech giant’s AI company, DeepMind, beat Go world champion Lee Se-dol in March 2016. See Choe Sang-Hun, “Google’s Computer Program Beats Lee Se-dol in Go Tournament,” New York Times, Mar. 15, 2016. A year and a half later, Google announced the release of a program with even larger AI capabilities. See Sarah Knapton, “AlphaGo Zero: Google DeepMind Supercomputer Learns 3,000 Years of Human Knowledge in 40 Days,” Telegraph, Oct. 18, 2017. 2.Robert J.

abuses of power checks and balances to prevent, 163–67 money and, 167–70 academic publishing, 76 active labor market policies, 187 Acton, Lord, 164 Adelson, Sheldon, 331n26 Adobe, 65 advantage, intergenerational transmission of, 199–201 advertising, 124, 132 affirmative action, 203 Affordable Care Act (Obamacare), 40, 211–13 African Americans; See also racial discrimination disenfranchisement of, 161 and GI Bill, 210 and inequality, 40–41 intergenerational transmission of disadvantage, 279–80n43 and Jim Crow laws, 241, 271n3 mass incarceration, 202 agricultural subsidies, 96 agriculture, Great Depression and, 120 AI, See artificial intelligence AIG, 107 airline subsidies, 96–97 Akerlof, George, 63–64 alcoholism, 42 AlphaGo, 315n1 “alternative facts,” 136 alternative minimum tax, 85 Amazon, 62, 74, 123, 127, 128; See also Bezos, Jeff American Airlines, 69 American dream failings masked by myths, 224–26 and inequality of opportunity, 44–45 American exceptionalism, 35, 211–12 American Express, 60 American individualism, See individualism American-style capitalism dangers of, 28–29 and mortgage market, 218 and national identity, xxvi other countries’ view of, 97 and patent infringement suits, 59 and values, 30 anticompetitive behavior, 68–76 antipoaching agreements, 65–66 antitrust, 51, 62, 68–76 Apple market power, 56 patent infringement suits, 59 share buybacks, 109 tax avoidance, 85, 108 applied research, 24–25; See also research arbitration clauses, 73 arbitration panels, 56 Arizona campaign finance case, 170 artificial intelligence (AI) advances in, 117 in China, 94, 96 globalization in era of, 135 and IA innovations, 119 and job loss, 118 market power and, 123–35 Association for Molecular Pathology, 127 atomistic labor markets, 64–66 AT&T, 75, 147, 325n17 Australia, 17 authoritarian governments, Big Data and, 127, 128 automation, See technology balanced budget principle, 194–95 bank bailout (2008), 102–3, 113–14, 143–44, 151 bankers, 4, 7, 104 banks danger posed to democracy by, 101–2 and 2008 financial crisis, 101–4 and fiscal paradises, 86 mergers and acquisitions, 107–8 need for regulation of, 143–44 traditional vs. modern, 109–10 Bannon, Steve, 18 Baqaee, David, 62 barriers to entry/competition, 48, 57–60, 62–64, 183, 289n47 behavioral economics, 30 Berlin Wall, fall of, 3 Bezos, Jeff, 5, 33 bias, See discrimination Big Data; See also artificial intelligence (AI) in China, 94 and customer targeting, 125–26 and market power, 123–24 and privacy, 127–28 regulation of, 128–31 and research, 126–27 as threat to democracy, 131–35 Big Pharma, 60, 88–89, 99, 168 bilateral trade deficit, 90–91 Bill of Rights, 164 Blackberry, 286n34 Blankfein, Lloyd, 104 bonds, government, 215 Brexit, 3 browser wars, 58 Buckley v.

Federal Election Commission, 166, 169–70, 172 class warfare, 6 climate change and attacks on truth, 20 and intergenerational justice, 204–5 markets’ failure to address, xxiii money’s effects on debate, 20 Clinton, Bill, and administration, xiii, 4, 5, 168, 238, 242 Clinton, Hillary, 4, 6 coal companies, 20 Cold War, end of, 28; See also Communism, collapse of collective action, 138–56 balancing with individualism, 139 circumstances requiring, 140–42 government failures, 148–52 increasing need for government action, 152–55 in preamble to Constitution, 138–39 regulation as, 143–48 collective judgments, 262–63n20 college, income inequality and, 200 Comcast, 147 Communications Decency Act, 320n32 Communism, collapse of, 3, 28 comparative advantage, 82–84 competition market concentration, 55–56 market failures, 23 in marketplace of ideas, 75–76 market power, 57–60 power vs., 22 competitive equilibrium model, 47, 280n1 Comprehensive and Progressive Agreement for Trans-Pacific Partnership, 306n25 compulsion, power of, 155 confirmatory bias, 225 conflicts of interest, 70, 72, 124 Congress antitrust laws, 51, 68 and Great Recession, 39, 215 and lobbyists, See lobbyists and money in politics, 171–72 and Obamacare, 213 and regulatory process, 145–46 Supreme Court nominations, 166 and USTR, 100 conservatism, embracing change vs., 226–28 Constitution of the United States collective action reference in preamble, 138–39 economic changes since writing of, 227 “General Welfare” in Preamble, 242 individual liberties vs. collective interest in, 229 and minority rights, 6 as product of reasoning and argumentation, 229 three-fifths clause, 161 consumer demand, See demand consumer surplus, 64 cooperatives, 245 Copenhagen Agreement, 207 copyright extensions, 74 Copyright Term Extension Act (1998), 74 corporate taxes, 108, 206, 269n44 corporate tax rates, globalization and, 84–85 corporate welfare, 107 corporations and labor force participation, 182 and money in politics, 172–73 as people, 169–70 rights as endowed by the State, 172 corruption, 50 cost-benefit analysis, 146, 204–5 Council of Economic Advisers (CEA), xii credit, 102, 145, 186, 220 credit cards, 59–60, 70, 105 credit default swaps, 106 credit unions, 245 culture, economic behavior and, 30 customer targeting, 125–26 cybersecurity, 127–28 cybertheft, 308n35 Daraprim, 296n72 data exclusivity, 288n40 data ownership, 129–30 Deaton, Angus, 41–42 debt, 220; See also credit DeepMind, 315n1 defense contractors, 173 deficits, See budget deficits deglobalization, 92 deindustrialization early days of, xix effect on average citizens, 4, 21 facilitating transition to postindustrial world, 186–88 failure to manage, xxvi in Gary, Indiana, xi globalization and, 4, 79, 87 place-based policies and, 188 deliberation, 228–29 demand automation and, 120 and job creation, 268n41 Keynesian economics and, xv market power’s effect on, 63 demand for labor, technological suppression of, 122 democracy, 159–78 agenda for reducing power of money in politics, 171–74 curbing the influence of wealth on, 176–78 fragility of norms and institutions, 230–36 inequality as threat to, 27–28 maintaining system of checks and balances, 163–67 need for a new movement, 174–76 new technologies’ threat to, 131–35 and power of money, 167–70 as shared value, 228 suppression by minority, xx Trump’s disdain for, xvii voting reforms, 161–63 democratic institutions, fragility of, 230–36 Democratic Party gerrymandering’s effect on, 159 and Great Recession, 152 need for reinvention of, 175 popular support 
for, 6 renewal of, 242 and voter disenfranchisement, 162 demographics, xx, 181 “deplorables,” 4 deregulation, 25, 105, 143–44, 152, 239; See also supply-side economics derivatives, 80, 88, 106–7, 144 Detroit, Michigan, 188 Dickens, Charles, 12 Digital Millennium Copyright Act, 320–21n32 disadvantage, intergenerational transmission of, 199–201 disclosure laws, 171 discourse, governance and, 11 discrimination, 201–4; See also gender discrimination; racial discrimination by banks, 115 and economics texts, 23 forms of, 202 under GI Bill, 210 and inequality, 40–41, 198–99 and labor force participation, 183 means of addressing, 203–4 and myths about affirmative action, 225 reducing to improve economy, 201–4 diseases of despair, 42–43 disenfranchisement, 27, 161–62 disintermediation, 109 Disney, 65, 74 dispute resolution, 56–57, 309n40 Dodd–Frank Wall Street Reform and Consumer Protection Act, 70, 102, 107 driverless cars, 118 drug overdoses, 42 Durbin Amendment, 70 East Asia, 149 economic justice historical perspectives, 241–42 intergenerational justice, 204–5 racial justice and, 176, 203–4 tax system and, 205–8 economics, assumptions about individuals in, 29–30, 223 economic segregation, 200 economies of scale, 72 economies of scope, 347–48n15 economy and collective action, 153–54 decent jobs with good working conditions, 192–97 deterioration since early 1980s, 32–46 failure’s effect on individuals and society, 29–31 failure since late 1980s, 3–5 government involvement in, 141–42, 150–55 intergenerational transmission of advantage/disadvantage, 199–201 reducing discrimination in, 201–4 restoring fairness to tax system, 205–8 restoring growth and productivity, 181–86 restoring justice across generations, 204–5 restoring opportunity and social justice, 197–201 social protection, 188–91 “sugar high” from Trump’s tax cut, 236–38 transition to postindustrial world, 186–88 education equalizing opportunity of, xxv–xxvi, 219–20 improving access to, 203 returns on government investment in, 232 taxation and, 25 undermining of institutions, 233–34 Eggers, Dave, 128 Eisenhower, Dwight, and administration, 210 elderly, labor force growth and, 181–82 election of 1992, 4 election of 2000, 165–66 election of 2012, 159, 178 election of 2016, xix, 132, 178 elections, campaign spending in, 171–73 elite control of economy by, 5–6 and distrust in government, 151 and 2008 financial crisis, 5 promises of growth from market liberalization, 21–22 rules written by, 230 employers, market power over workers, 64–67 employment, See full employment; jobs; labor force participation End of History, The (Fukuyama), 3 Enlightenment, the, 10–12 attack on ideals of, 14–22 and standard of living, 264n24 environment carbon tax, 194, 206–7 and collective action, 153 economic growth and, 176 economists’ failure to address, 34 markets’ failure to protect, 24 and true economic health, 34 environmental justice, economic justice and, 176 Environmental Protection Agency (EPA), 267n38 epistemology, 10, 234 equality as basis for well-running economy, xxiv–xxv economic agenda for, xxvii as shared value, 228 Equifax, 130 equity value, rents as portion of, 54 ethnic discrimination, 201–4 Europe data regulation, 128–29 globalization, 81 infrastructure investment, 195–96 privacy protections, 135 trade agreements favoring, 80 unity against Trump, 235 European Investment Bank, 195–96 evergreening, 60 excess profits, as rent, 54 exchange rate, 89, 307n28, 307n32 exploitation in current economy, 26 in economics texts, 23 financial sector and, 113 
market power and, 47–78 reducing, 197 as source of wealth, 144–45 wealth creation vs., 34 and wealth redistribution, 50 exports, See globalization; trade wars Facebook anticompetitive practices, 70 and Big Data, 123, 124, 127–28 competition for ad revenue, 56 and conflicts of interest, 124 market power in relaxed antitrust environment, 62 as natural monopoly, 134 and preemptive mergers, 60, 73 reducing market power of, 124 regulation of advertising on, 132 fact-checking, 132, 177 “Fading American Dream, The” (Opportunity Insights report), 44–45 “fake news,” 167 family leave, 197 Farhi, Emmanuel, 62 farmers, Great Depression and, 120 fascism, 15–16, 18, 235 Federal Communication Commission (FCC), 147 Federal Reserve Board, 70, 112 Federal Reserve System, 121, 214–15 Federal Trade Commission, 69 fees bank profits from, 105, 110 credit card, 60, 70, 105 for mergers and acquisitions, 108 mortgages and, 107, 218 “originate-to-distribute” banking model, 110 private retirement accounts and, 215 fiduciary standard, 314n21, 347n10 finance (financial sector); See also banks and American crisis, 101–16 contagion of maladies to rest of economy, 112 disintermediation, 109 dysfunctional economy created by, 105–9 gambling by, 106–8 and government guarantees, 110–11 history of dysfunctionality, 109–12 as microcosm of larger economy, 113 mortgage reform opposed by, 216–18 private vs. social interests, 111–12 and public option, 215–16 shortsightedness of, 104–5 stopping societal harm created by, 103–5 and trade agreements, 80 financial crisis (2008), 101; See also Great Recession bank bailout, See bank bailout [2008] China and, 95 deregulation and, 25, 143–44 as failure of capitalism, 3 government response to, 5 housing and, 216 as man-made failure, 153–54 market liberalization and, 4 and moral turpitude of bankers, 7 regulation in response to, 101–2 as symptomatic of larger economic failures, 32–33 and unsustainable growth, 35 financial liberalization, See market liberalization First National Bank, 101 “fiscal paradises,” 85–86 fiscal policy, 121, 194–96 fiscal responsibility, 237 food industry, 182 forced retirement, 181–82 Ford Motor Company, 120 Fox News, 18, 133, 167, 177 fractional reserve banking, 110–11 fraud, 103, 105, 216, 217 freedom, regulation and, 144 free-rider problem, 67, 155–56, 225–26 Friedman, Milton, 68, 314–15n22 FUD (fear, uncertainty, and doubt), 58 Fukuyama, Francis, 3, 259n1 full employment, 83, 193–94, 196–97 Galbraith, John K., 67 gambling, by banks, 106–7, 207 Garland, Merrick, 166–67 Gates, Bill, 5, 117 GDP elites and, 22 as false measure of prosperity, 33, 227 financial sector’s increasing portion of, 109 Geithner, Tim, 102 gender discrimination, 41, 200–204 gene patents, 74–75 general welfare, 242–47 generic medicines, 60, 89 genetically modified food (GMO), 88 genetics, 126–27 George, Henry, 206 Germany, 132, 152 gerrymandering, 6, 159, 162 GI Bill, 210 Gilded Age, 12, 246 Glass-Steagall Act, 315n25, 341n39 globalization, 79–100 budget deficits and trade imbalances, 90 collective action to address, 154–55 effect on average citizens, 4, 21 in era of AI, 135 failure to manage, xxvi false premises about, 97–98 and global cooperation in 21st century, 92–97 and intellectual property, 88–89 and internet legal frameworks, 135 and low-skilled workers, 21, 82, 86, 267n39 and market power, 61 pain of, 82–87 and protectionism, 89–92 and 21st-century trade agreements, 87–89 and tax revenue, 84–86 technology vs., 86–87 and trade wars, 93–94 value systems and, 94–97 GMO (genetically 
modified food), 88 Goebbels, Joseph, 266n35 Goldman Sachs, 104 Google AlphaGo, 315n1 antipoaching conspiracy, 65 and Big Data, 123, 127, 128 conflicts of interest, 124 European restrictions on data use, 129 gaming of tax laws by, 85 market power, 56, 58, 62, 128 and preemptive mergers, 60 Gordon, Robert, 118–19 Gore, Al, 6 government, 138–56 assumption of mortgage risk, 107 Chicago School’s view of, 68–69 debate over role of, 150–52 and educational system, 220 failure of, 148–52 in finance, 115–16 and fractional reserve banking, 111 and Great Depression, 120 hiring of workers by, 196–97 increasing need for, 152–55 interventions during economic downturns, 23, 120 lack of trust in, 151 lending guarantees, 110–11 managing technological change, 122–23 and need for collective action, 140–42 and political reform, xxvi pre-distribution/redistribution by, xxv in progressive agenda, 243–44 public–private partnerships, 142 regulation and rules, 143–48 restoring growth and social justice, 179–208 social protection by, 231 government bonds, 215 Great Britain, wealth from colonialism, 9 Great Depression, xiii, xxii, 13, 23, 120 “great moderation,” 32 Great Recession, xxvi; See also financial crisis (2008) deregulation and, 25 diseases of despair, 42 elites and, 151 employment recovery after, 193 inadequate fiscal stimulus after, 121 as market failure, 23 pace of recovery from, 39–40 productivity growth after, 37 and retirement incomes, 214–15 weak social safety net and, 190 Greenspan, Alan, 112 Gross Fixed Capital Formation, 271n4 gross investment, 271n4 growth after 2008 financial crisis, 103 in China, 95 decline since 1980, 35–37 economic agenda for, xxvii failure of financial sector to support, 115 and inequality, 19 international living standard comparisons, 35–37 knowledge and, 183–86 labor force, 181–82 market power as inimical to, 62–64 in post-1970s US economy, 32 restoring, 181–86 taxation and, 25 guaranteed jobs, 196–97 Harvard University, 16 Hastert Rule, 333n31 health inequality in, 41–43 and labor force participation, 182 health care and American exceptionalism, 211–12 improving access to services, 203 public option, 210–11 in UK and Europe, 13 universal access to, 212–13 hedonic pricing, 347n13 higher education, 219–20; See also universities Hispanic Americans, 41 hi-tech companies, 54, 56, 60, 73 Hitler, Adolf, 152, 266n35 Hobbes, Thomas, 12 home ownership, 216–18 hours worked per week, US ranking among developed economies, 36–37 House of Representatives, 6, 159 housing, as barrier to finding new jobs, 186 housing bubble, 21 housing finance, 216–18 human capital index (World Bank), 36 Human Development Index, 36 Human Genome Project, 126 hurricanes, 207 IA (intelligence-assisting) innovations, 119 identity, capitalism’s effect on, xxvi ideology, science replaced by, 20 immigrants/immigration, 16, 181, 185 imports, See globalization; trade wars incarceration, 161, 163, 193, 201, 202 incentive payments for teachers, 201 voting reform and, 162–63 income; See also wages average US pretax income (1974-2014), 33t universal basic income, 190–91 income inequality, 37, 177, 200, 206 income of capital, 53 India, guaranteed jobs in, 196–97 individualism, 139, 225–26 individual mandate, 212, 213 industrial policies, 187 industrial revolution, 9, 12, 264–65n24 inequality; See also income inequality; wealth inequality benefits of reducing, xxiv–xxv and current politics, 246 in early years after WWII, xix economists’ failure to address, 33 education system as perpetuator of, 219 and election of 2016, 
xix–xxi and excess profits, 49 and financial system design, 198 growth of, xii–xiii, 37–45 in health, 41–43 in opportunity, 44–45 in race, ethnicity, and gender, 40–41 and 2017 tax bill, 236–37 technology’s effect on, 122–23 in 19th and early 20th century, 12–13 20th-century attempts to address, 13–14 tolerance of, 19 infrastructure European Investment Bank and, 195–96 fiscal policy and, 195 government employment and, 196–97 public–private partnerships, 142 returns on investment in, 195, 232 taxation and, 25 and 2017 tax bill, 183 inheritance tax, 20 inherited wealth, 43, 278n38 innovation intellectual property rights and, 74–75 market power and, 57–60, 63–64 net neutrality and, 148 regulation and, 134 slowing pace of, 118–19 and unemployment, 120, 121 innovation economy, 153–54 insecurity, social protection to address, 188–91 Instagram, 70, 73, 124 institutions fragility of, 230–36 in progressive agenda, 245 undermining of, 231–33 insurance companies, 125 Intel, 65 intellectual property rights (IPR) China and, 95–96 globalization and, 88–89, 99 and stifling of innovation, 74–75 and technological change, 122 in trade agreements, 80, 89 intelligence-assisting (IA) innovations, 119 interest rates, 83, 110, 215 intergenerational justice, 204–5 intergenerational transmission of advantage/disadvantage, xxv–xxvi, 199–201, 219 intermediation, 105, 106 Internal Revenue Service (IRS), 217 International Monetary Fund, xix internet, 58, 147 Internet Explorer, 58 inversions, 302n10 investment buybacks vs., 109 corporate tax cuts and, 269n44 and intergenerational justice, 204 long-term, 106 weakening by monopoly power, 63 “invisible hand,” 76 iPhone, 139 IPR, See intellectual property rights Ireland, 108 IRS (Internal Revenue Service), 217 Italy, 133 IT sector, 54; See also hi-tech companies Jackson, Andrew, 101, 241 Janus v.

pages: 285 words: 86,853

What Algorithms Want: Imagination in the Age of Computing
by Ed Finn
Published 10 Mar 2017

A few weeks before Google purchased it, the company made international news with a machine learning algorithm that had learned to play twenty-nine Atari games better than the average human with no direct supervision.1 Now the same algorithm has replaced “sixty handcrafted rule-based systems” at Google, from image recognition to speech transcription.2 Most spectacularly, in March 2016 DeepMind’s AlphaGo defeated go grandmaster Lee Sedol 4–1, demonstrating its conquest of one of humanity’s subtlest and most artistic games.3 After a long doldrums, Google and a range of other research outfits seem to be making progress on systems that can gracefully adapt themselves to a wide range of conceptual challenges.

Indeed, we spend so much time worrying about the rise of a renegade independent artificial intelligence that we rarely pause to consider the many ways in which we are already collaborating with autonomous systems of varied intelligence. This moves far beyond our reliance on digital address books, mail programs, or file archives: Google’s machine learning algorithms can now suggest appropriate responses to emails, and AlphaGo gives grandmasters of that venerable art form some of their most interesting games. Widening the scope further, we can begin to see how we are changing the fundamental terms of cognition and imagination. The age of the algorithm marks the moment when technical memory has evolved to store not just our data but far more sophisticated patterns of practice, from musical taste to our social graphs.

Index Abortion, 64 Abstraction, 10 aesthetics and, 83, 87–112 arbitrage and, 161 Bogost and, 49, 92–95 capitalism and, 165 context and, 24 cryptocurrency and, 160–180 culture machines and, 54 (see also Culture machines) cybernetics and, 28, 30, 34 desire for answer and, 25 discarded information and, 50 effective computability and, 28, 33 ethos of information and, 159 high frequency trading (HFT) and imagination and, 185, 189, 192, 194 interfaces and, 52, 54, 92, 96, 103, 108, 110–111 ladder of, 82–83 language and, 2, 24 Marxism and, 165 meaning and, 36 money and, 153, 159, 161, 165–167, 171–175 Netflix and, 87–112, 205n36 politics of, 45 pragmatist approach and, 19–21 process and, 2, 52, 54 reality and, 205n36 Siri and, 64–65, 82–84 Turing Machine and, 23 (see also Turing Machine) Uber and, 124–126, 129 Wiener and, 28–29, 30 work of algorithms and, 113, 120, 123–136, 139–149 Adams, Douglas, 123 Adams, Henry, 80–81 Adaptive systems, 50, 63, 72, 92, 174, 176, 186, 191 Addiction, 114–115, 118–119, 121–122, 176 AdSense, 158–159 Advent of the Algorithm, The (Berlinski), 9, 24 Advertisements AdSense and, 158–159 algorithmic arbitrage and, 111, 161 Apple and, 65 cultural calculus of waiting and, 34 as cultural latency, 159 emotional appeals of, 148 Facebook and, 113–114 feedback systems and, 145–148 Google and, 66, 74, 156, 158–160 Habermas on, 175 Netflix and, 98, 100, 102, 104, 107–110 Uber and, 125 Aesthetics abstraction and, 83, 87–112 arbitrage and, 109–112, 175 culture machines and, 55 House of Cards and, 92, 98–112 Netflix Quantum Theory and, 91–97 personalization and, 11, 97–103 of production, 12 work of algorithms and, 123, 129, 131, 138–147 Agre, Philip, 178–179 Airbnb, 124, 127 Algebra, 17 Algorithmic reading, 52–56 Algorithmic trading, 12, 20, 99, 155 Algorithms abstraction and, 2 (see also Abstraction) arbitrage and, 12, 51, 97, 110–112, 119, 121, 124, 127, 130–134, 140, 151, 160, 162, 169, 171, 176 Berlinski on, 9, 24, 30, 36, 181 Bitcoin and, 160–180 black boxes and, 7, 15–16, 47–48, 51, 55, 64, 72, 92–93, 96, 136, 138, 146–147, 153, 162, 169–171, 179 blockchains and, 163–168, 171, 177, 179 Bogost and, 16, 33, 49 Church-Turing thesis and, 23–26, 39–41, 73 consciousness and, 2, 4, 8, 22–23, 36–37, 40, 76–79, 154, 176, 178, 182, 184 DARPA and, 11, 57–58, 87 desire and, 21–26, 37, 41, 47, 49, 52, 79–82, 93–96, 121, 159, 189–192 effective computability and, 10, 13, 21–29, 33–37, 40–49, 52–54, 58, 62, 64, 72–76, 81, 93, 192–193 Elliptic Curve Digital Signature Algorithm and, 163 embodiment and, 26–32 encryption, 153, 162–163 enframing and, 118–119 Enlightenment and, 27, 30, 38, 45, 68–71, 73 experimental humanities and, 192–196 Facebook and, 20 (see also Facebook) faith and, 7–9, 12, 16, 78, 80, 152, 162, 166, 168 gamification and, 12, 114–116, 120, 123–127, 133 ghost in the machine and, 55, 95 halting states and, 41–46 high frequency trading (HFT) and, 151–158, 168–169, 177 how to think about, 36–41 ideology and, 7, 9, 18, 20–23, 26, 33, 38, 42, 46–47, 54, 64, 69, 130, 144, 155, 160–162, 167, 169, 194 imagination and, 11, 55–56, 181–196 implementation and, 47–52 intelligent assistants and, 11, 57, 62, 64–65, 77 intimacy and, 4, 11, 35, 54, 65, 74–78, 82–85, 97, 102, 107, 128–130, 172, 176, 185–189 Knuth and, 17–18 language and, 24–28, 33–41, 44, 51, 54–55 machine learning and, 2, 15, 28, 42, 62, 66, 71, 85, 90, 112, 181–184, 191 mathematical logic and, 2 meaning and, 35–36, 38, 44–45, 50, 54–55 metaphor and, 32–36 Netflix Prize and, 87–91 neural networks and, 28, 31, 39, 182–183, 185 
one-way functions and, 162–163 pragmatist approach and, 18–25, 42, 58, 62 process and, 41–46 programmable culture and, 169–175 quest for perfect knowledge and, 13, 65, 71, 73, 190 rise of culture machines and, 15–21 (see also Culture machines) Siri and, 59 (see also Siri) traveling salesman problem and Turing Machine and, 9 (see also Turing Machine) as vehicle of computation, 5 wants of, 81–85 Weizenbaum and, 33–40 work of, 113–149 worship of, 192 Al-Khwārizmī, Abū ‘Abdullāh Muhammad ibn Mūsā, 17 Alphabet Corporation, 66, 155 AlphaGo, 182, 191 Amazon algorithmic arbitrage and, 124 artificial intelligence (AI) and, 135–145 Bezos and, 174 Bitcoin and, 169 business model of, 20–21, 93–94 cloud warehouses and, 131–132, 135–145 disruptive technologies and, 124 effective computability and, 42 efficiency algorithms and, 134 interface economy and, 124 Kindle and, 195 Kiva Systems and, 134 Mechanical Turk and, 135–145 personalization and, 97 physical logistics of, 13, 131 pickers and, 132–134 pragmatic approach and, 18 product improvement and, 42 robotics and, 134 simplification ethos and, 97 worker conditions and, 132–134, 139–140 Android, 59 Anonymous, 112, 186 AOL, 75 Apple, 81 augmenting imagination and, 186 black box of, 169 cloud warehouse of, 131 company value of, 158 effective computability and, 42 efficiency algorithms and, 134 Foxconn and, 133–134 global computation infrastructure of, 131 iOS App Store and, 59{tab} iTunes and, 161 massive infrastructure of, 131 ontology and, 62–63, 65 physical logistics of, 131 pragmatist approach and, 18 product improvement and, 42 programmable culture and, 169 search and, 87 Siri and, 57 (see also Siri) software and, 59, 62 SRI International and, 57, 59 Application Program Interfaces (APIs), 7, 113 Apps culture machines and, 15 Facebook and, 9, 113–115, 149 Her and, 83 identity and, 6 interfaces and, 8, 124, 145 iOS App Store and, 59 Lyft and, 128, 145 Netflix and, 91, 94, 102 third-party, 114–115 Uber and, 124, 145 Arab Spring, 111, 186 Arbesman, Samuel, 188–189 Arbitrage algorithmic, 12, 51, 97, 110–112, 119, 121, 124, 127, 130–134, 140, 151, 160, 162, 169, 171, 176 Bitcoin and, 51, 169–171, 175–179 cultural, 12, 94, 121, 134, 152, 159 differing values and, 121–122 Facebook and, 111 Google and, 111 high frequency trading (HFT) and, 151–158, 168–169, 177 interface economy and, 123–131, 139–140, 145, 147 labor and, 97, 112, 123–145 market issues and, 152, 161 mining value and, 176–177 money and, 151–152, 155–163, 169–171, 175–179 Netflix and, 94, 97, 109–112 PageRank and, 159 pricing, 12 real-time, 12 trumping content and, 13 valuing culture and, 155–160 Archimedes, 18 Artificial intelligence (AI) adaptive systems and, 50, 63, 72, 92, 174, 176, 186, 191 Amazon and, 135–145 anthropomorphism and, 83, 181 anticipation and, 73–74 artificial, 135–141 automata and, 135–138 DARPA and, 11, 57–58, 87 Deep Blue and, 135–138 DeepMind and, 28, 66, 181–182 desire and, 79–82 ELIZA and, 34 ghost in the machine and, 55, 95 HAL and, 181 homeostat and, 199n42 human brain and, 29 intellectual history of, 61 intelligent assistants and, 11, 57, 62, 64–65, 77 intimacy and, 75–76 job elimination and, 133 McCulloch-Pitts Neuron and, 28, 39 machine learning and, 2, 15, 28, 42, 62, 66, 71, 85, 90, 112, 181–186 Mechanical Turk and, 12, 135–145 natural language processing (NLP) and, 62–63 neural networks and, 28, 31, 39, 182–183, 185 OS One (Her) and, 77 renegade independent, 191 Samantha (Her) and, 77–85, 154, 181 Siri and, 57, 61 (see also Siri) Turing test and, 43, 79–82, 87, 
138, 142, 182 Art of Computer Programming, The (Knuth), 17 Ashby, Ross, 199n42 Asimov, Isaac, 45 Atlantic, The (magazine), 7, 92, 170 Automation, 122, 134, 144, 188 Autopoiesis, 28–30 Babbage, Charles, 8 Banks, Iain, 191 Barnet, Belinda, 43–44 Bayesian analysis, 182 BBC, 170 BellKor’s Pragmatic Chaos (Netflix), 89–90 Berlinski, David, 9, 24, 30, 36, 181, 184 Bezos, Jeff, 174 Big data, 11, 15–16, 62–63, 90, 110 Biology, 2, 4, 26–33, 36–37, 80, 133, 139, 185 Bitcoin, 12–13 arbitrage and, 51, 169–171, 175–179 blockchains and, 163–168, 171–172, 177, 179 computationalist approach and cultural processing and, 178 eliminating vulnerability and, 161–162 Elliptic Curve Digital Signature Algorithm and, 163 encryption and, 162–163 as glass box, 162 intrinsic value and, 165 labor and, 164, 178 legitimacy and, 178 market issues and, 163–180 miners and, 164–168, 171–172, 175–179 Nakamoto and, 161–162, 165–167 one-way functions and, 162–163 programmable culture and, 169–175 transaction fees and, 164–165 transparency and, 160–164, 168, 171, 177–178 trust and, 166–168 Blockbuster, 99 Blockchains, 163–168, 171–172, 177, 179 Blogs early web curation and, 156 Facebook algorithms and, 178 Gawker Media and, 170–175 journalistic principles and, 173, 175 mining value and, 175, 178 Netflix and, 91–92 turker job conditions and, 139 Uber and, 130 Bloom, Harold, 175 Bogost, Ian abstraction and, 92–95 algorithms and, 16, 33, 49 cathedral of computation and, 6–8, 27, 33, 49, 51 computation and, 6–10, 16 Cow Clicker and, 12, 116–123 Enlightenment and, 8 gamification and, 12, 114–116, 120, 123–127, 133 Netflix and, 92–95 Boolean conjunctions, 51 Bosker, Bianca, 58 Bostrom, Nick, 45 Bowker, Geoffrey, 28, 110 Boxley Abbey, 137 Brain Pickings (Popova), 175 Brain plasticity, 38, 191 Brand, Stewart, 3, 29 Brazil (film), 142 Breaking Bad (TV series), 101 Brin, Sergei, 57, 155–156 Buffett, Warren, 174 Burr, Raymond, 95 Bush, Vannevar, 18, 186–189, 195 Business models Amazon and, 20–21, 93–94, 96 cryptocurrency and, 160–180 Facebook and, 20 FarmVille and, 115 Google and, 20–21, 71–72, 93–94, 96, 155, 159 Netflix and, 87–88 Uber and, 54, 93–94, 96 Business of Enlightenment, The (Darnton) 68, 68 Calculus, 24, 26, 30, 34, 44–45, 98, 148, 186 CALO, 57–58, 63, 65, 67, 79, 81 Campbell, Joseph, 94 Campbell, Murray, 138 Capitalism, 12, 105 cryptocurrency and, 160, 165–168, 170–175 faking it and, 146–147 Gawker Media and, 170–175 identity and, 146–147 interface economy and, 127, 133 labor and, 165 public sphere and, 172–173 venture, 9, 124, 174 Captology, 113 Carr, Nicholas, 38 Carruth, Allison, 131 Castronova, Edward, 121 Cathedral and the Bazaar, The (Raymond), 6 Cathedral of computation, 6–10, 27, 33, 49, 51 Chess, 135–138, 144–145 Chun, Wendy Hui Kyong, 3, 16, 33, 35–36, 42, 104 Church, Alonzo, 23– 24, 42 Church-Turing thesis, 23–26, 39–41 Cinematch (Netflix), 88–90, 95 Citizens United case, 174 Clark, Andy, 37, 39–40 Cloud warehouses Amazon and, 135–145 interface economy and, 131–145 Mechanical Turk and, 135–145 worker conditions and, 132–134, 139–140 CNN, 170 Code.

pages: 315 words: 89,861

The Simulation Hypothesis
by Rizwan Virk
Published 31 Mar 2019

Index A The Adjustment Bureau, 8, 79 The Adjustment Team (Dick), 8–9 AFK - away from keyboard, 209–10 AGI (Artificial Generalized Intelligence), 90–91, 96–99 AGI (Artificial Generalized Intelligence) and social media, 104–5 AI (artificial intelligence) as element of Great Simulation, 280–81 ethics and uses, 97–100 gods, angels and the simulation hypothesis, 226–28 and NPCs, 82–84 super-intelligence, 100–101 and virtual reality and simulated consciousness, 16–18 AI (artificial intelligence), history of AI and games, 85–86 DeepMind, AlphaGo and video games, 86–88 digital psychiatrist, 88–89 NLP, AI and quest to pass the Turing Test, 89–92 Turing Test, 84–85 Al-Akhirah, 221–23 Al-Dunya, 221–23 Alexa, 88, 90 aliens, 275–76 allegory of the cave, 270–71 Almheiri, Ahmed, 260 AlphaGo, 86–88 Altered Carbon (Morgan, 2002), 103–4 analog, 161 ancestor simulation, 108–9, 114–15 Anderson, Kevin J., 97 Andreessen, Marc, 287 angels, 225–26 AR (augmented reality), 62–64 AR glasses, 62 arcade-type mechanics, 34 “Are You Living in a Simulation?”

The milestones included, writing poetry, orchestrating music, translating from one language to another, and generally accomplishing other tasks that only humans would be capable of at the time. Deep Mind, Alpha Go and Video Games Not only is the history of AI and games intertwined, it continues to be in the near future. Google’s DeepMind group created AlphaGo, the first computer program to beat a professional Go player in 2015. It also beat the South Korean Go champion Lee Sedol in 2016. An interesting twist on the “AI learns to play games” mechanic was when the DeepMind team trained the AI to play video games. This was done not through rules-based AI for a specific game, like the Tic Tac Toe algorithm I had written as a kid, but by watching the screen and controls.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Game playing: When Deep Blue defeated world chess champion Garry Kasparov in 1997, defenders of human supremacy placed their hopes on Go. Piet Hut, an astrophysicist and Go enthusiast, predicted that it would take “a hundred years before a computer beats humans at Go—maybe even longer.” But just 20 years later, ALPHAGO surpassed all human players (Silver et al., 2017). Ke Jie, the world champion, said, “Last year, it was still quite human-like when it played. But this year, it became like a god of Go.” ALPHAGO benefited from studying hundreds of thousands of past games by human Go players, and from the distilled knowledge of expert Go players that worked on the team. A followup program, ALPHAZERO, used no input from humans (except for the rules of the game), and was able to learn through self-play alone to defeat all opponents, human and machine, at Go, chess, and shogi (Silver et al., 2018).

Visual pattern recognition was proposed as a promising technique for Go by Zobrist (1970), while Schraudolph et al. (1994) analyzed the use of reinforcement learning, Lubberts and Miikkulainen (2001) recommended neural networks, and Brügmann (1993) introduced Monte Carlo tree search to Go. ALPHAGO (Silver et al., 2016) put those four ideas together to defeat top-ranked professionals Lee Sedol (by a score of 4–1 in 2016) and Ke Jie (by 3–0 in 2017). Ke Jie remarked “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong. I would go as far as to say not a single human has touched the edge of the truth of Go.” Lee Sedol retired from Go, lamenting, “Even if I became the number one, there is an entity that cannot be defeated.” In 2018, ALPHAZERO surpassed ALPHAGO at Go, and also defeated top programs in chess and shogi, learning through self-play without any expert human knowledge and without access to any past games.

Subsequent work produced deep RL systems that generated more extensive exploratory behaviors and were able to conquer Montezuma’s Revenge and other difficult games. DeepMind’s ALPHAGO system also used deep reinforcement learning to beat the best human players at the game of Go (see Chapter 6). Whereas a Q-function with no look-ahead suffices for Atari games, which are primarily reactive in nature, Go requires substantial lookahead. For this reason, ALPHAGO learned both a value function and a Q-function that guided its search by predicting which moves are worth exploring. The Q-function, implemented as a convolutional neural network, is accurate enough by itself to beat most amateur human players without any search at all. 23.7.2 Application to robot control The setup for the famous cart–pole balancing problem, also known as the inverted pendulum, is shown in Figure 23.9(a).
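The division of labor the excerpt describes, a learned prior deciding which moves are worth exploring and a learned value function standing in for deep search at the leaves, can be sketched in a few lines. This is not AlphaGo's actual architecture (which couples these networks to Monte Carlo tree search); every function name and the toy game interface below are invented placeholders.

import random

# Hypothetical stand-ins for the two learned functions discussed above.
# In a real system these would be neural networks; here they are placeholders.
def value(state):                   # how good the position looks for the side to move
    return random.uniform(-1, 1)

def move_priors(state, moves):      # how promising each legal move looks before any search
    ps = [random.random() for _ in moves]
    total = sum(ps)
    return [p / total for p in ps]

def legal_moves(state):             # placeholder game interface
    return list(range(5))

def play(state, move):              # placeholder: apply a move, return the next state
    return (state, move)

def guided_search(state, depth, top_k=3):
    """Depth-limited lookahead that (a) scores leaves with the learned value
    function instead of searching to the end of the game and (b) only expands
    the top_k moves the learned prior considers worth exploring."""
    if depth == 0:
        return value(state)
    moves = legal_moves(state)
    priors = move_priors(state, moves)
    promising = sorted(zip(priors, moves), reverse=True)[:top_k]
    # negamax convention: the opponent's best outcome is the negation of ours
    return max(-guided_search(play(state, m), depth - 1, top_k) for _, m in promising)

print(guided_search("start", depth=3))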

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

So the algorithm seems to be picking up something, but we suspect that facial contours and “facial femininity” are readily influenced by aspects of self-presentation including makeup, lighting, hairstyle, angle, photo choice, and so forth. *9 The AlphaGo program, which beat one of the best human Go players in the world, provides a good example. AlphaGo didn’t start with any axioms, scoring systems, lists of opening moves, or anything of the sort. It taught itself how to play the game, and made probabilistic decisions based on the given board configuration. This is pretty amazing given the 10^350 possible moves that Go affords. By comparison, chess has “only” about 10^123. Go masters have learned some new tricks from the play of AlphaGo, but good luck trying to understand what the machine is doing at any broad and general level
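For readers wondering where exponents that size come from, they are order-of-magnitude estimates of the game tree: the average number of legal moves raised to a typical game length. A quick back-of-the-envelope check (the branching factors and game lengths below are rough conventional figures, and published estimates vary by a few orders of magnitude):

import math

# log10(branching ** length) = length * log10(branching), so we never have to
# form the astronomically large number itself.
for game, branching, length in [("chess", 35, 80), ("Go", 250, 150)]:
    exponent = length * math.log10(branching)
    print(f"{game}: roughly 10^{exponent:.0f} possible games")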

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

It appeals to a more general principle of learning multiple levels of composition, which can be applied in machine learning frameworks that are not necessarily neurally inspired.” This approach has received extensive press coverage. This was particularly the case in March 2016, when Google DeepMind’s system AlphaGo—based on deep learning—beat the world champion in the game of Go. This approach is especially efficient in complex environments such as Go, where it is impossible to systematically explore all the possible combinations due to combinatorial explosion (i.e., there are very quickly too many possibilities to be able to explore all of them systematically).

“Compendium of translation software.” http://www.hutchinsweb.me.uk/Compendium.htm. Index Adamic language, 40 Adams, Douglas, 1, 256 Adequacy. See Evaluation measure and test Advertisement, 226, 229, 232 Aeronautic industry, 243, 250 Agglutinative language, 214–216, 261 Agreement (linguistic), 175 Aligned texts. See Parallel corpus ALPAC Report, 35, 75–83, 199 AlphaGo, 182 AltaVista, 227 Ambiguity, 15–18, 21, 23, 40, 56–59, 64–65, 72, 178, 239, 252, 261 American defense agencies, 77, 88. See also Defense industry American intelligence agencies. See American defense agencies Analogy. See Example-based machine translation Analytical language, 215–216 Android, 240 Apertium.

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

“The computer kind of bottom-up comprehension will eventually submerge the human kind, overpowering it with the sheer size and speed of its learning.” The latest breakthrough in AI, AlphaGo, the deep-learning program that has recently beaten Lee Sedol, regarded by many as the best human player of Go in the world, supports this expectation in one regard if not in others. I noted that Frances Arnold and David Cope each play a key quality-control role in the generation processes they preside over, as critics whose scientific or aesthetic judgments decide which avenues to pursue further. They are, you might say, piloting the exploration machines they have designed through Design Space. But AlphaGo itself does something similar, according to published reports: its way of improving its play is to play thousands of Go games against itself, making minor exploratory mutations in them all, evaluating which are (probably) progress, and using those evaluations to adjust the further rounds of practice games.

But AlphaGo itself does something similar, according to published reports: its way of improving its play is to play thousands of Go games against itself, making minor exploratory mutations in them all, evaluating which are (probably) progress, and using those evaluations to adjust the further rounds of practice games. It is just another level of generate and test, in a game that could hardly be more abstract and insulated from real-world noise and its attendant concerns, but AlphaGo is learning to make “intuitive” judgments about situations that have few of the hard-edged landmarks that computer programs excel at sorting through. With the self-driving car almost ready for mass adoption—a wildly optimistic prospect that not many took seriously only a few years ago—will the self-driving scientific exploration vehicle be far behind?

active symbols, 344 adaptationism, 22, 80, 117, 249, 265 functions in, 29 Gould-Lewontin attack on, 29–30, 32 just-so stories and, 121 Adelson, Glenn, 48, 140 adjacent possible, 399 affordances, 101, 119, 128, 152, 233, 336, 356, 388 artifacts as, 135 brains as collectors of, 150, 165–71, 272, 274, 412 infants and, 299 inherited behavior and, 123 learning and, 165–66 memes as, 287 natural selection and, 165–66 proto-linguistic phenomena as, 265–66 repetition in creation of, 208 use of term, 79 words as, 198, 204 see also semantic information; Umwelt age of intelligent design, 309, 331, 371, 379–80 age of post-intelligence design, 5, 372, 413 see also artificial intelligence agriculture, dawn of, 8–9 Alain (Émile Chartier), 214 alarm calls, 265, 289, 343 algorithms, of natural selection, 43, 384 alleles, 234, 235, 237 AlphaGo, 391–92 altriciality, 286 Amish, 240 Anabaptists, 240 analog-to-digital converters (ADCs), 109–10, 113 “analysis by synthesis” model, 169 anastomosis, 180, 323 Ancestors’ Tale, The (Dawkins), 35 animals: behavior of, see behavior, animal communication in, 287, 289 comprehension attributed to, 86–94 consciousness and, 298–99 domestication of, 9, 87, 172, 197, 315 feral, 172 memes of, 282 as Popperian creatures, 100 Anscombe, G.

pages: 247 words: 60,543

The Currency Cold War: Cash and Cryptography, Hash Rates and Hegemony
by David G. W. Birch
Published 14 Apr 2020

6 However, there is also a good reason why smart observers do not dismiss it: ‘censorship-resistant’ implies an open, neutral platform that could be a driver of permissionless innovation. 7 John Cryan, while CEO of Deutsche Bank, was famously quoted in the Financial Times as saying that his bank would shift from employing people to act like robots to employing robots to act like people. 8 As I asked at Digital Jersey’s Annual Review in 2018, in an echo of Fred Schwed’s 1940s financial services classic … where are the customers’ bots? 9 AlphaGo Zero, which taught itself to play, has already beaten AlphaGo, which was taught to play by humans, by a hundred games to zero. You heard that right: zero. 10 As the economist Diane Coyle pointed out in a Financial Times article (published 26 January 2017), it may be that transparency is the key to making this work, which highlights at least one area where the technology of shared ledgers and machine learning – blockchains and bots – may come together. 11 At the time of writing, the trading of tokens has just overtaken the trading of cryptocurrency on the Ethereum blockchain. 12 They go on to say, and I strongly agree, that this means it is important to achieve a social consensus on how such smart money should be integrated into the existing financial system.

pages: 391 words: 71,600

Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone
by Satya Nadella , Greg Shaw and Jill Tracie Nichols
Published 25 Sep 2017

The following year Deep Blue went a giant step further when it defeated Russian chess legend Garry Kasparov in an entire six-game match. It was stunning to see a computer win a contest in a domain long regarded as representing the pinnacle of human intelligence. By 2011, IBM Watson had defeated two masters of the game show Jeopardy!, and in 2016 Google DeepMind’s AlphaGo outplayed Lee Se-dol, a South Korean master of Go, the ancient, complex strategy game played with stones on a grid of lines, usually nineteen by nineteen. Make no mistake, these are tremendous science and engineering feats. But the future holds far greater promise than computers beating humans at games.

See also artificial intelligence (AI) AI super-computer, 152. See also artificial intelligence (AI) Alexa, 201 algorithms, 150–51, 159 accountability, 205 quantum, 161, 166 Ali, Abi (cricket player), 36–37 Ali, Mr. (landlord), 36–37 Ali, Syed B., 20 Alien and Sedition Acts, 188 Allen, Colin, 209 Allen, Paul, 4, 21, 28, 64, 69, 87, 127 Alphago, 199 ALS, 10–11 Altair, 87 Althoff, Judson, 82 Amar, Akhil Reed, 186 Amazon, 47, 51, 54, 59, 85, 122, 125, 200, 228 Amazon Fire, 125 Amazon Web Service (AWS), 45–46, 52, 54, 58 ambient intelligence, 228–39 ambition, 76–78, 80, 90 American Dream, 238 American Revolution, 185–86 Amiss, Dennis, 37 Anderson, Brad, 58, 82 Android. 59, 66, 70–72, 123, 125, 132–33, 222 antitrust case, 130 AOL, 174 Apple Computer, 15, 45, 51, 66, 69–70, 72, 128, 132, 174, 177–78, 189 partnership with, 121–25 apprenticeship, 227 artificial general intelligence (AGI), 150, 153–54 artificial intelligence (AI), 11, 13, 50, 52, 59, 76, 88, 110, 139–42, 149–59, 161, 164, 166–67, 186, 212, 223, 239 ethics and, 195–210 Artificial Intelligence and Life in 2030 (Stanford report), 208 Asia, 86, 219 Asimov, Isaac, 202–3 astronauts, 146, 148 asynchronous transfer model (ATM), 30 AT&T, 174 Atari 2600, 146 at-scale services, 53, 61 auction-based pricing, 47, 50 Australia, 38–39, 149, 228, 230 autism, 149 Autodesk, 127–28 automation, 208, 214, 226, 231–32, 236 automobile, 127, 153, 230 driverless, 209, 226, 228 aviation, 210 Azure, 58–61, 85, 125, 137 backdoors, 177–78 Bahl, Kunal, 33 Baig, Abbas Ali, 36 Bain Capital, 220 Baldwin, Richard, 236 Ballmer, Steve, 3–4, 12, 14, 29, 46–48, 51–55, 64, 67, 72, 91, 94, 122 Banga, Ajay Singh, 20 Baraboo project, 145 Baraka, Chris, 97 BASIC, 87, 143 Batelle, John, 234 Bates, Tony, 64 Bayesian estimators, 54 Baymax (robot), 150 Beauchamp, Tom, 179 Belgium, 215 Best Buy, 87, 127 Bezos, Jeff, 54 bias, 113–15 Bicycle Corporation of America, 232 Big Data, 13, 58, 70, 150–51, 183–84 Big Hero 6 (film), 150 Bill & Melinda Gates Foundation, 46, 74 Bill of Rights, 190 Bing, 47–54, 57, 59, 61, 125, 134 Birla Institute of Technology, 21 Bishop, Christopher, 199 black-hat groups, 170 Blacks @ Microsoft (BAM), 116–17.

pages: 222 words: 70,132

Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy
by Jonathan Taplin
Published 17 Apr 2017

I have suggested that policy makers begin exploring a universal basic income, or UBI, a concept that has support on both the left and right. It does seem to me that to ignore the dystopian possibility that software will “eat the world” would be foolhardy. Just because some techno-optimists continue to insist that old jobs will be replaced by new jobs we can’t imagine yet does not mean it is true. Google’s AlphaGo artificial intelligence system may have bested the world’s greatest Go player, but I’m not worried that it’s going to replace our greatest musicians, filmmakers, and authors, even though an NYU artificial intelligence laboratory has programmed a robot named Benjamin to be a screenwriter. And even if you believe that robots will be able to fill most jobs, MIT’s Andrew McAfee and Erik Brynjolfsson have pointed out that “understanding and addressing the societal challenges brought on by rapid technological progress remain tasks that no machine can do for us.”

His belief that Aldous Huxley’s vision of the future was correct is more true than ever. Mark Grief, The Age of the Crisis of Man: Thought and Fiction in America, 1933–1973 (Princeton: Princeton University Press, 2015). Chapter Twelve: The Digital Renaissance Christopher Moyer, “How Google’s AlphaGo Beat Lee Sedol, a Go World Champion,” Atlantic, March 28, 2016, www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/. Lawrence Summers and J. Bradford DeLong, “The ‘New Economy’: Background, Historical Perspective, Questions, and Speculations,” Federal Reserve Bank of Kansas City, August 2001, www.kansascityfed.org/publicat/sympos/2001/papers/S02delo.pdf.

pages: 241 words: 70,307

Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
by David de Cremer
Published 25 May 2020

Some signs indicate that it may well happen. Take the example of the South Korean Lee Sedol, who was the world champion at the ancient Chinese board game Go. This board game is highly complex and was considered for a long time beyond the reach of machines. All that changed in 2016 when the computer program AlphaGo beat Lee Sedol four matches to one. The loss against AI made him doubt his own (human) qualities so much that he decided to retire in 2019. So, if even the world champion admits defeat, why would we not expect that one day machines will develop to the point where they run our organizations? To tackle this question, I will start from the premise that the leadership we need in a humane society is likely not to emerge through more sophisticated technology.

AI witnessed a comeback in the last decade, primarily because the world woke up to the realization that machines are capable of deep learning to the level where they can actually perform many tasks better than humans. Where did this wake-up call come from? From a simple game called Go. In 2016, AlphaGo, a program developed by Google DeepMind, beat the human world champion in the Chinese board game, Go. This was a surprise to many, as Go – because of its complexity – was considered the territory of human, not AI, victors. In a decade when the human desire to connect globally, execute tasks faster, and accumulate massive amounts of data was omnipresent, such deep learning capabilities were, of course, quickly embraced.

pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death
by Mark O'Connell
Published 28 Feb 2017

I would open up Twitter or Facebook, and my timelines—flows of information that were themselves controlled by the tidal force of hidden algorithms—would contain a strange and unsettling story about the ceding of some or other human territory to machine intelligence. I read that a musical was about to open in London’s West End, with a story and music and words all written entirely by an AI software called Android Lloyd Webber. I read that an AI called AlphaGo—also the work of Google’s DeepMind—had beaten a human grandmaster of Go, an ancient Chinese strategy board game that was exponentially more complex, in terms of possible moves, than chess. I read that a book written by a computer program had made it through the first stage of a Japanese literary award open to works written by both humans and AIs, and I thought of the professional futurist I had talked to in the pub in Bloomsbury after Anders’s talk, and his suggestion that works of literature would come increasingly to be written by machines.

In one sense, I was less disturbed by the question of what the existence of computer-generated novels or musicals might mean for the future of humanity than by the thought of having to read such a book, or endure such a performance. And neither had I taken any special pride in the primacy of my species at strategy board games, and so I found it hard to get excited about the ascendancy of AlphaGo, which seemed to me like a case of computers merely getting better at what they’d always been good at anyway, which was the rapid and thorough calculation of logical outcomes—a highly sophisticated search algorithm. But in another sense, it seemed reasonable to assume that these AIs would only get better at doing what they already did: that the West End musicals and sci-fi books would become incrementally less shit over time, and that more and more complicated tasks would be performed more and more efficiently by machines.

Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurelien Geron
Published 14 Aug 2019

Reinforcement Learning For example, many robots implement Reinforcement Learning algorithms to learn how to walk. DeepMind’s AlphaGo program is also a good example of Reinforcement Learning: it made the headlines in May 2017 when it beat the world champion Ke Jie at the game of Go. It learned its winning policy by analyzing millions of games, and then playing many games against itself. Note that learning was turned off during the games against the champion; AlphaGo was just applying the policy it had learned. Batch and Online Learning Another criterion used to classify Machine Learning systems is whether or not the system can learn incrementally from a stream of incoming data.
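The batch-versus-online distinction this passage introduces is easy to see in code. A minimal sketch in the book's usual Scikit-Learn style, assuming an SGDRegressor and a synthetic data stream invented for illustration: each incoming mini-batch nudges the model a little via partial_fit, and the model is never retrained from scratch on the full dataset.

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(42)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

for step in range(100):                       # pretend each iteration is new data arriving
    X_batch = rng.uniform(-1, 1, size=(32, 3))
    y_batch = X_batch @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=32)
    model.partial_fit(X_batch, y_batch)       # incremental update, no full retraining

print("learned coefficients:", model.coef_.round(2))   # should end up near [2, -1, 0.5]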

pages: 909 words: 130,170

Work: A History of How We Spend Our Time
by James Suzman
Published 2 Sep 2020

As a result, what appeared to be impossibly distant milestones in automation just a couple of years ago are now looming large. In 2017, for instance, Xiaoyi, a robot developed by Tsinghua University in Beijing, in collaboration with a state-owned company, sailed through China’s National Medical Licensing Examination, and Google’s AlphaGO thrashed the world’s best human Go players. This was considered a particularly important milestone because, unlike chess, Go cannot be won using information-processing power alone. In 2019, an austere black column, the IBM Debater, which had been practising sharpening its tongue arguing in private with IBM employees for several years, put in a losing but persuasive and ‘surprisingly charming’ performance arguing in favour of pre-school subsidies against a one-time grand finalist from the World Debating Championships.

Index aardvarks here, here abiogenesis here Abrahamic religions here, here Académie des Sciences here acetogens here Acheulean hand-axes here, here, here Adam and Eve here, here adenosine triphosphate (ATP) here advertising here Africa, human expansion out of here agriculture and the calendar here, here catastrophes here and climate change here, here human transition to here inequality as consequence of here and investment here Natufians and here, here, here, here productivity gains here, here, here, here, here proportion employed in here spread of here and urbanisation here Akkadian Empire here Alexander the Great here American Federation of Labor here American Society of Mechanical Engineers here animal domestication here, here, here, here, here animal tracks here animal welfare here animals’ souls here anomie here anthrax here Anthropocene era here anti-trust laws here ants here, here, here Aquinas, Thomas here, here, here archery here Archimedes here, here, here Aristotle here, here, here, here Arkwright, Richard here armies, standing here Aronson, Ben here artificial intelligence here, here, here, here, here, here, here, here, here ass’s jawbone here asset ownership here AT&T here Athens, ancient here, here aurochs here Australian Aboriginals here, here, here, here, here Australopithecus here, here, here, here, here, here, here, here automation here, here, here, here, here, here, here, here, here Aztecs here baboons here Baka here BaMbuti here, here, here, here, here bank holidays here, here Bantu civilisations here barter here, here, here, here, here Batek here Bates, Dorothea here beer here, here, here, here, here bees here, here, here, here Belgian Congo here, here Bergen Work Addiction Scale here Biaka here billiards here biodiversity loss here, here, here birds of paradise here bison, European here Black-Connery 30-Hours Bill here Blombos Cave here, here Blurton-Jones, Nicholas here boa constrictors here boats, burning of here Bolling Allerød Interstadial here, here, here Boltzmann, Ludwig here boredom here, here Boucher de Crèvecœur de Perthes, Jacques here, here bovine pleuropneumonia here bowerbirds here, here brains here, here, here increase in size here and social networks here Breuil, Abbé here Broca’s area here Bryant and May matchgirls’ strike here bubonic plague here ‘bullshit jobs’ here butchery, ancient here Byron, Lord here Calico Acts here Cambrian explosion here cannibalism here caps, flat here carbon dioxide, atmospheric here, here cartels here Çatalhöyük here, here Cato institute here cattle domestication of here as investment here cave paintings, see rock and cave paintings census data here CEOs here, here, here, here, here, here cephalopods here cereals, high-yielding here Chauvet Cave painting here cheetahs here, here, here child labour here childbirth, deaths in here Childe, Vere Gordon here, here, here, here chimpanzees here, here, here, here, here, here, here China here, here, here, here, here, here Han dynasty here medical licensing examination here Qin dynasty here services sector here, here Shang dynasty here, here Song dynasty here value of public wealth here Chomsky, Noam here circumcision, universal here Ciudad Neza here clam shells here Clark, Colin here, here climate change here, here, here see also greenhouse gas emissions Clinton, Bill here clothing, and status here Club of Rome here, here coal here Coast Salish here, here cognitive threshold, humans cross here, here ‘collective consciousness’ here ‘collective unconscious’ here colonialism here 
commensalism here Communism, collapse of here Conrad, Joseph here consultancy firms here Cook, Captain James here cooking here, here, here coral reefs here Coriolis, Gaspard-Gustave here coronaviruses here corporate social responsibility here Cotrugli, Benedetto here cotton here, here, here credit and debt arrangements here Crick, Francis here crop rotation here cyanobacteria here, here Cyrus the Great here Darius the Great here Darwin, Charles here, here, here, here, here, here, here, here debt, personal and household here deer here, here demand-sharing here Denisovans here Descartes, René here, here, here, here DeVore, Irven here Dharavi here diamonds here, here diamphidia larvae here Dinka here division of labour here, here, here DNA here, here, here mitochondrial here dogs here, here, here, here, here, here domestication of here Lubbock’s pet poodle here Pavlov’s here wild here, here double-entry bookkeeping here dreaming here Dunbar, Robin here, here Durkheim, Emile here, here, here, here Dutch plough here dwellings drystone-walled here mammoth-bone here earth’s atmosphere, composition of here earth’s axis, shifts in alignment of here East India Company here, here ‘economic problem’ here, here, here, here, here, here, here, here, here, here economics ‘boom and bust’ here definitions of here, here formalists v. substantivists here fundamental conflict within here ‘trickle-down’ here ecosystem services here Edward III, King here efficiency movement here egalitarianism here, here, here, here, here, here egrets here Egypt, Roman here, here Egyptian Empire here einkorn here Einstein, Albert here elands here, here elderly, care of here, here, here, here elephants here, here, here, here, here, here energy-capture here, here, here Enlightenment here, here, here, here Enron here Enterprise Hydraulic Works here, here entropy here, here, here, here, here, here EU Working Time Directive here Euclid here eukaryotes here eusociality here, here evolution here, here and selfish traits here see also natural selection Facebook here, here Factory Acts here, here factory system here, here famines and food shortages here, here fertilisers here, here fighting, and social hierarchies here financial crisis (2007–8) here, here financial deregulation here fire, human mastery of here, here, here, here, here see also cooking fisheries here flightless birds here foot-and-mouth disease here Ford, Henry here, here, here, here, here fossil fuels here, here, here, here, here, here, here Fox, William here foxes here, here bat-eared here Franklin, Benjamin here, here, here, here, here, here, here free markets here, here, here free time (leisure time) here, here, here, here, here, here, here, here, here freeloaders here, here, here Freud, Sigmund here Frey, Carl, and Michael Osborne here funerary inscriptions here Galbraith, John Kenneth here, here Galileo here Gallup State of the Global Workplace report here Garrod, Dorothy here, here gazelle bones here, here genomic studies here, here, here, here, here, here, here, here, here and domesticated dogs here geometry here gift giving here Gilgamesh here glacial periods here, here gladiators here Gladwell, Malcolm here globalisation here Göbekli Tepe here, here Gompers, Samuel here Google here Google AlphaGO here Gordon, Wendy here Gorilla Sign Language here gorillas here, here, here, here see also Koko Govett’s Leap here Graeber, David here, here granaries here graves here graveyards, Natufian here Great Decoupling here, here Great Depression here, here, here ‘great 
oxidation event’ here, here Great Zimbabwe here greenhouse gas emissions here, here see also climate change Greenlandic ice cores here grewia here Grimes, William here Gurirab, Thadeus here, here gut bacteria here Hadzabe here, here, here, here, here, here, here Harlan, Jack here harpoon-heads here Hasegawa, Toshikazu here Health and Safety Executive here health insurance here Heidegger, Martin here Hephaistos here Hero of Alexandria here Hesiod here hippopotamuses here Hitler, Adolf here hominins, evidence for use of fire here Homo antecessor here Homo erectus here, here, here, here, here, here, here, here, here, here Homo habilis here, here, here, here, here, here, here, here Homo heidelbergensis here, here, here, here, here Homo naledi here horses here, here wild here household wealth, median US here housing, improved here human resources here, here, here Humphrey, Caroline here Hunduism here hunter-gatherers, ‘complex’ here hyenas here, here, here, here, here immigration here Industrial Revolution here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here inequality here, here, here, here, here in ancient Rome here influenza here ‘informavores’ here injuries, work-related here Institute of Bankers here, here Institute of Management here intelligence here, here evolution of here interest here internal combustion engine here, here Inuit here, here, here, here Iroquois Confederacy here Ituri Forest here jackals here Japan here, here, here jealousy, see self-interest jewellery here, here, here, here, here Ju/’hoansi here, here, here, here, here, here, here, here, here, here, here, here, here, here, here and animals’ souls here contrasted with ‘complex hunter-gatherers’ here contrasted with farming communities here, here, here creation mythologies here and ‘creatures of the city’ here and demand-sharing here, here egalitarianism here, here, here, here, here, here energy-capture rates here life expectancy here and mockery here village sizes here Jung, Carl Gustav here kacho-byo (‘manager’s disease’) here kangaroos here Karacadag here karo jisatsu here, here karoshi here, here Kathu Pan hand-axes here, here, here Kavango here Kellogg, John Harvey here Kellogg, Will here Kennedy, John F. 
here Keynes, John Maynard here, here, here, here, here, here, here, here, here, here, here, here, here Khoisan here, here, here Kibera here Kish here Koko here, here, here Kubaba, Queen here Kwakwaka’wakw here, here labour/debt relationships here labour theory of value here Lake Eyasi here Lake Hula here Lake Turkana here, here language arbitrary nature of here evolution of here Gossip and Grooming hypothesis here, here Grammaticalisation theory here processing here Single Step theory here langues de chat (cat’s tongues) here latifundia here le Blanc, Abbé Jean here leatherwork here Lee, Richard Borshay here, here, here, here, here leisure activities here leisure time, see free time Leopold II, King of the Belgians here, here Lévi-Strauss, Claude here, here, here, here Liebenberg, Louis here life expectancy here, here life on earth, evolution of here lignin here Limits to Growth, The here lions here, here, here, here, here, here, here, here, here, here literacy, see writing living standards, rising here Loki here, here London neighbourhoods here longhouses here Louis XIV, King of France here Louis XVI, King of France here Löwenmensch (Lion Man) sculpture here Lubbock, Sir John here, here Luddites here, here, here Luoyang (Chengzhou) here, here luxury goods here McKinsey and Co. here, here, here ‘malady of infinite aspirations’ here Malthus, Rev.

pages: 308 words: 85,880

How to Fix the Future: Staying Human in the Digital Age
by Andrew Keen
Published 1 Mar 2018

He is encouraged, for example, by what he describes as the ethical maturity of the three cofounders of DeepMind, particularly Demis Hassabis, its young Cambridge-educated CEO. This is the London-based tech company whose investors include Jaan Tallinn and Elon Musk, a start-up founded in 2011 and then acquired by Google for $500 million in 2014. DeepMind made the headlines in March 2016 when AlphaGo, its specially designed algorithm, defeated a South Korean world champion Go player in this 5,500-year-old Chinese board game, the oldest and one of the most complex games ever invented by humans. But in addition to the commercial development of artificial intelligence, Price explains, the DeepMind founders—with other Big Tech companies like Microsoft, Facebook, IBM, and Amazon—are helping engineer an industrywide moral code about smart technology.

See also Utopia (More) disruption to, 16–23 education and, 286 free will and, 17, 21, 262 humanity and AI, 23–28, 268–272 (See also artificial intelligence (AI)) industrial revolution and, 35–36 Moore’s Law and, 12–16 Snowden on, 8–12 social responsibility and, 200 tools to fix the future, overview, 31–42 (See also competitive innovation; education; regulation; social responsibility; worker and consumer choice) “age of acceleration,” 13–14 Ahuja, Anjana, 207 “AI Control Problem, The” (Tallinn), 54 AirBnB, 145, 254 AlphaGo (DeepMind), 199 Alter, Adam, 67, 213, 281–282 Altman, Sam, 199, 260 Amazon Bezos and, 205, 211–213, 223 centralized power of, 64–70 regulation issues, 140, 144, 151, 158 social responsibility and, 202, 204, 211, 212, 224 Ambassadors, The (Holbein), 20–21 America Online, 30, 33, 136 Amnesty International, 105–106 Anderson, Chris, 281–282 Andreessen, Marc, 69, 175 Ansip, Andrus, 152–153, 158 antitrust regulation.

Know Thyself
by Stephen M Fleming
Published 27 Apr 2021

In March 2016, its flagship algorithm, AlphaGo, beat Lee Sedol, the world champion at the board game Go and one of the greatest players of all time. In Go, players take turns placing their stones on intersections of a nineteen-by-nineteen grid, with the objective of encircling or capturing the other player’s stones. Compared to chess, the number of board positions is vast, outstripping the estimated number of atoms in the universe. But by playing against itself millions of times, and updating its predictions about valuable moves based on whether it won or lost, AlphaGo could achieve superhuman skill at a game that is considered so artful that it was once one of four essential skills that Chinese aristocrats were expected to master.8 These kinds of neural networks rely on supervised learning.
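The learning rule sketched in this passage, nudging the predicted value of every visited position toward the final result, is easy to write down. Below is a toy, invented illustration (the fake "game", its hidden rule, and all constants are placeholders, not DeepMind's method): positions that tend to appear in won games drift toward +1, positions that tend to appear in lost games drift toward -1.

import random
from collections import defaultdict

value = defaultdict(float)   # predicted value of each position, starts at 0
alpha = 0.02                 # how far each prediction moves toward the game's outcome

def fake_self_play_game():
    positions = [random.randrange(20) for _ in range(3)]   # positions "visited" this game
    p_win = sum(positions) / (3 * 19)                      # hidden rule: high positions tend to win
    outcome = 1.0 if random.random() < p_win else -1.0
    return positions, outcome

for _ in range(20000):
    positions, outcome = fake_self_play_game()
    for pos in positions:
        value[pos] += alpha * (outcome - value[pos])       # nudge prediction toward the result

# the learned values recover the hidden rule: low positions look losing, high ones winning
print(round(value[0], 2), round(value[19], 2))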

pages: 661 words: 156,009

Your Computer Is on Fire
by Thomas S. Mullaney , Benjamin Peters , Mar Hicks and Kavita Philip
Published 9 Mar 2021

Many have already begun to realize that such frontiers may not be located where we might assume. If the vanguard of artificial intelligence research once resided in the defeat of Russian Garry Kasparov by chess-playing Deep Blue, more recently it lies in the 2016 defeat of Korean Lee Sedol at the “hands” of Google’s Go/Weiqi/Baduk-playing system AlphaGo.5 In the same year, China took top prize as the global leader in supercomputing for the seventh time in a row, with its Sunway TaihuLight clocking a theoretical peak performance of 125.4 petaflops, or 125,400 trillion floating point calculations per second.6 Meanwhile, banking systems long reliant upon state-governed infrastructures are rapidly being displaced on the African continent by the cellphone-based money transfer system M-Pesa—the largest mobile-money business in the world.7 And bringing us full circle, many of the cellphones upon which this multibillion-dollar economy runs still use T9 text-input technology—a technology that, although invented in North America, found its first active user base in Korea via the Korean Hangul alphabet.

hiring, 257–259, 263 human created, 53–54, 383 image recognition, 4, 118–124, 127–129 machine learning, 64–65 policing, 5, 66, 126, 206 social media, 54 software, 199, 201, 333 system builders, 6 typing, 356–358 Alibaba, 316 Allende, Salvador, 75, 79–80 Alpha60, 108 Alphabet, 31, 54. See also Google Alphabets Arabic, 215–217 Cyrillic, 215, 344 Hangul, 7, 341–344, 351 Latin, 4, 188, 218, 338–339, 341–345, 351, 353, 356–358, 382 new, 217 Roman, 213 (see also Alphabets, Latin) AlphaGo, 7 Altavista, 241 Amazon, 87, 160, 190, 201, 254, 257, 261, 308 Alexa, 179–180, 184, 189, 190t, 202, 381 business model, 29–33, 35–36, 43, 46 and the environment, 34 infrastructures, 37, 42 Prime, 31, 265 Rekognition, 126, 208 workers, 23 American dream, 136 American University of Beirut, 216, 222 Android, 273, 318 Anku, Kwame, 266 AOL (America Online), 323, 331 Apartheid, 18, 314, 325 anti-, 322 API (Application programming interface), 328 Apple, 190–191, 201, 223–224, 254, 318, 321, 365.

See also Gender inequality anti-, 144 Ferraiolo, Angela, 235 Fibonacci sequence, 275–277 Fido, Bulletin Board System, 322–324 FidoNet, 322–326, 327, 333 demise, 326 nodes, 322–324 zonemail, 322 Fire crisis, 6, 22–24, 159 crowded theatre, 363, 373 flames, 267, 368 gaming, 233, 242, 245 infrastructures, 313–333 passim, 322 optimism, 309 physical, 5, 44, 321 pyrocene, 364 spread, 6–7 and technologies, relationship to, 13, 111, 313 Triangle Shirtwaist Factory, 22 typography, 213, 227 your computer is on, 4, 232 First Round, 265 Fiber optic link around the globe (FLAG), 101 FLAG, 101 Flanagan, Mary, 235 Flickering signifier, 284 Flores, Fernando, 79 Flowers, Tommy, 143 Forecasting, 6, 57, 110 FOSS (Free and open-source software), 191 414 gang, 287–288 Fowler, Susan, 254 France (French), 39, 117, 145, 216, 219, 221, 320 French (language), 341, 344, 380 Free and open-source software, 191 Free speech, 59, 61–62, 373–374 Friedman, Thomas, 308 Friendster, 17, 313 Future Ace, 299 Games big game companies, 245–246 computer, 232–233 limits of, 232–233, 243, 244–245 rhetoric vs. reality, 232 skinning, 233–236 video, 241, 246 Garbage In, Garbage Out (GIGO), 58 Gates, Bill, 18, 29 Gem Future Academy, 299 Gender inequality, 4–6, 8, 21, 136, 184, 187, 381 artificial intelligence, 121, 127–128 British civil service, 140, 144–145, 148, 150–152 hiring gaps, 253–257, 259–260, 267, 367 and IBM, 159–175 passim internet’s structural, 97, 99, 102, 109–110 and robotics, 199–204 stereotyping, 106 technical design, 370 of work, 302–303, 307–309, 367, 375 Gender Resource Center, 298 Gendered Innovations initiative, 200 Germany, 221, 290, 341 IBM and West, 160–161, 166–175 Nazi, 15, 63, 143 Gerritsen, Tim, 238 Ghana, 45, 149f, 330 Gilded Age, 13, 32 Glass ceiling, 136, 143 Global North, 191, 324–325 Global South, 91–92, 94, 309, 325, 333, 367 Global System for Mobile (GSM) Communications network, 327–328 Global Voices, 331 Glushkov, Viktor M., 77–78, 83 Google, 5, 7, 84, 87, 160, 201, 254, 263, 318, 321, 329, 333 advertising, 136–137 Alphabet, 31, 54 AlphaGo, 7 business concerns, 17 Docs, 224 Drive, 224 employees, 23, 207, 262 ethics board, 22 hiring, 257 Home, 179, 180, 184, 189, 190t image recognition, 4, 120 original motto, (“don’t be evil”), 17, 22 Photos, 265 search, 66, 203, 328 voice recognition, 188 Graham, Paul, 256 Grand Theft Auto: San Andreas, 245 GRC (Gender Resource Center), 298 Great book tourism, 366–367, 374 GreenNet (UK), 324–325 Grubhub, 210 Global System for Mobile Communications (GSM) network, 327–328 GSM (Global System for Mobile Communications) network, 327–328 G-Tech Foundation, 298, 301 Guest, Arthur, 217 Hacking, 15, 81, 87, 256, 266, 287, 291 hacker, 263–264, 266, 287, 291 tourist, 100–102 Haddad, Selim, 216–218, 220 Hangul, 7, 341–344, 351 Hanscom Air Force Base Electronic Systems Division, 274 Harvard University, 14, 257, 349 Hashing, 57, 66, 124–126, 129 #DREAMerHack, 266 #YesWeCode, 253, 264–266 Hayes, Patrick, 52 Haymarket riots, 168 Health insurance, 53 Hebrew, 217, 222, 224–225, 341, 343–344, 354 Henderson, Amy, 265 Heretic, 237 Heterarchy, 86t, 87 Heteronormative, 139, 154 Hewlett-Packard, 318 High tech, 12–13, 21, 35, 37, 46, 147–148 sexism in, 136–138, 152–153 High-level languages (HLL), 275, 277–278, 284, 290 Hindi, 190, 215, 342, 355f Hiring.

pages: 526 words: 160,601

A Generation of Sociopaths: How the Baby Boomers Betrayed America
by Bruce Cannon Gibney
Published 7 Mar 2017

In the 1990s, the threat did not seem credible, and inaction then might have been excusable. But AI, which had been a joke for years, constantly failing to live up to its promises, has begun to exceed even more optimistic forecasts. In 2016, DeepMind’s AlphaGo program beat a human master at Go 4–1, an achievement many thought unlikely to occur before 2025. Because of the flexible way AlphaGo learns, and the enormous difficulty of the game it was playing (Go is to chess what chess is to checkers), an AI that can win at Go is something we need to take seriously. The government has essentially shrugged its shoulders, and by default, AI has been consigned to private hands, to private ends, and private gains.* It is no coincidence that AI, which is comparatively cheap to develop and has received sustained attention from private institutions, is a bright spot in the R&D landscape.

* Full disclosure: I invested in DeepMind personally in its earlier years; the company was then acquired by Google, in which I now hold stock. Wall Street has long dismissed Google’s side projects like self-driving cars and AI as money sinks, but Google has a thoughtful plan and one you may not be fully comfortable with. Google (in the verb sense; may as well start there) “self-driving car,” “AlphaGo,” and “Android Marketshare” and you’ll get a sense for the future Google might have in mind. You can add in Boston Dynamics +Atlas +Google, and you might get a sense of Google’s terminal ambitions, even if it ultimately ditches Boston Dynamics in favor of other robotics companies. * My subject is generational; I stake little territory in the largely unhelpful and mostly pseudoscientific debate (on both sides) regarding the inherent capacities of a given group for a given subject.

pages: 632 words: 163,143

The Musical Human: A History of Life on Earth
by Michael Spitzer
Published 31 Mar 2021

This goes for creativity in general, the last refuge of human exceptionalism. A year before being defeated by the Deep Blue chess computer, Garry Kasparov darkly warned: ‘[Computers] must not cross into the area of human creativity. It would threaten the existence of human control in such areas as art, literature, and music.’36 The shockwaves of AlphaGo’s destruction of European Go champion Fan Hui continue to reverberate. Ada Lovelace, the nineteenth-century mathematician and the mother of computer programming, had predicted that the Analytical Engine (as Charles Babbage’s early computer was called) might act upon other things besides number … supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.37 Nevertheless, Lovelace cautioned that the music’s creativity or originality would come from the programmer, not the machine.

INDEX Abba, here Abbasid Caliphate, here, here, here Abreu, José, here Abu’l Fazl, here Acheulean hand-axes, here, here, here, here, here, here acousmatic music, here, here, here acoustic ratios, here action tendencies (emotional), here, here, here Adams, Douglas, here Adele, here, here, here, here Adorno, Theodor, here, here, here, here Aegidius Zamorensis, here Aeschines, here Aeschylus, here, here affect attunement, here African music, here, here, here, here, here Agawu, Kofi, here ageing, here Ainu, here AIVA, here Akbar the Great, Emperor, here, here Akhenaten, pharaoh, here, here, here, here Akkadians, here, here, here, here al-Andalus, here, here Alap, here, here Albigensian Crusade, here Albinoni, Tomaso, here Alexander the Great, here Alexander Mosaic, here Alexander VI, Pope, here Alfonso X, King of Castile and León, here Alhambra Palace, here Ali, Muhammad, here AlphaGo, here altered state of consciousness (ASC), here Alvarado, Pedro de, here, here Amaterasu (goddess), here American National Theatre and Academy (ANTA), here Amis, Kingsley, here amygdala, here, here, here, here, here Anderson, Benedict, here Andrews, Julie, here Anicius Gallus, here animal horns, here, here see also shofar animals, dislike human music, here Anne of Lusignan, here antebellum America, here antiphony, here apes and monkeys, here, here, here, here, here, here, here see also chimpanzees Apollo, here, here Apple Music, here appraisal theory (emotional), here Aqqu, here Aquinas, Thomas, here Arcade Fire, here Ardi (australopithecine), here, here Arion, here Aristophanes, here Aristotle, here, here, here, here, here, here Aristoxenus, here, here Armstrong, Louis, here Arnold, Magda, here Artaxerxes II, King of Persia, here Assyrians, here, here, here, here asteroseismology, here astronomical clocks, here Atkinson, Rowan, here Attenborough, David, here auditory scene analysis, here, here Augustus, Emperor, here aulos, here, here, here, here, here, here Australian Aboriginal peoples, here, here, here, here, here, here, here, here, here, here authenticity, here, here, here, here autistic spectrum, here, here Auto-Tune, here Avirett, James Battle, here ‘Away in a Manger’, here Axial Age, here, here Aymara, here Aztecs, here, here, here, here, here Babbage, Charles, here Babur, Emperor, here Babylonians, here, here, here, here, here Bach, J.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

This means that if you get the functional relations right – if you ensure that a system has the right kind of ‘input–output mappings’ – then this will be enough to give rise to consciousness. In other words, for functionalists, simulation means instantiation – it means coming into being, in reality. How reasonable is this? For some things, simulation certainly counts as instantiation. A computer that plays Go, such as the world-beating AlphaGo Zero from the British artificial intelligence company DeepMind, is actually playing Go. But there are many situations where this is not the case. Think about weather forecasting. Computer simulations of weather systems, however detailed they may be, do not get wet or windy. Is consciousness more like Go or more like the weather?

Brains are very different: Matthew Cobb’s engrossing The Idea of the Brain (2020) relates the history of how brain function has been interpreted using the dominant technology of the day (and sometimes the other way around). A computer that plays Go: Silver et al. (2017). The story of the original program, AlphaGo, is beautifully told in a film of the same name: https://www.alphagomovie.com/. Some might quibble that these programs are more accurately described as playing ‘the history of Go’ rather than Go itself. there’s a valid question: A more sophisticated version of this argument has been developed by John Searle in his famous ‘Chinese room’ thought experiment.

pages: 374 words: 111,284

The AI Economy: Work, Wealth and Welfare in the Robot Age
by Roger Bootle
Published 4 Sep 2019

Tesler’s Theorem defines artificial intelligence as that which a machine cannot yet do.10 And the category of things that a machine cannot do appears to be shrinking all the time. In 2016 an AI system developed by Google’s DeepMind called AlphaGo beat Fan Hui, the European Champion at the board game Go. The system taught itself using a machine learning approach called “deep reinforcement learning.” Two months later AlphaGo defeated the world champion four games to one. This result was regarded as especially impressive in Asia, where Go is much more popular than it is in Europe or America. It is the internet that has impelled AI to much greater capability and intelligence.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

In 2017, scientists from Google’s DeepMind group used machine learning to build a program that would go on to beat Ke Jie, the number one Go player in the world. Until that time, many game players believed that Go, a game that has far more than a billion billion billion more board configurations than chess, was beyond the scope of world-championship, or even expert-level, play by a computer. The success of AlphaGo left many of those gamers startled, with some players reporting the computer’s play as “from another dimension.” These are astonishing technical accomplishments. And they’re just the beginning. Rapid advances in artificial intelligence and its deployment in autonomous systems herald an age in which machines can outperform humans at more than just games.

IBM’s Deep Blue computer had “unseated humanity”: Bruce Weber, “Swift and Slashing, Computer Topples Kasparov,” New York Times, May 12, 1997, https://www.nytimes.com/1997/05/12/nyregion/swift-and-slashing-computer-topples-kasparov.html. “from another dimension”: Dawn Chan, “The AI That Has Nothing to Learn from Humans,” Atlantic, October 20, 2017, https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/. 50 percent of jobs in the American economy: Carolyn Dimitri, Anne Effland, and Neilson Conklin, “The 20th Century Transformation of U.S. Agriculture and Farm Policy,” Economic Information Bulletin Number 3, June 2005, https://www.ers.usda.gov/webdocs/publications/44197/13566_eib3_1_.pdf.

Seeking SRE: Conversations About Running Production Systems at Scale
by David N. Blank-Edelman
Published 16 Sep 2018

This outcome got everyone thinking that machines like Deep Blue would solve very important problems. In 2015, nearly 20 years later, the ancient Chinese game Go, which has many more possible moves than chess, was won by DeepMind,4 using a program called AlphaGo, via the application of deep reinforcement learning, which is a much different approach than using search algorithms. Table 18-1 compares the two machines and their methodologies.

Table 18-1. The two machines and their methodologies
Deep Blue (chess, May 1997): brute-force search algorithm; developer: IBM; adversary: Garry Kasparov
DeepMind AlphaGo (Go, October 2015): deep learning / machine learning; developer: Google; adversary: Fan Hui

But the first game that gripped the world’s attention in AI was checkers, with the pioneer Arthur Samuel, who coined the term “machine learning” back in 1959.

Index A abandonment expense (AbEx), Project Operating Expense and Abandonment Expense access control, Early Intervention and Education Through Evangelism accommodations, for on-call personnel, Accommodations active learning, Active Teaching and Learning-A Call to Action: Ditch the Boring Slidesbasics, Active Learning costs of failing to learn, The Costs of Failing to Learn Incident Manager card game, Active Learning Example: Incident Manager (a Card Game)-Active Learning Example: Incident Manager (a Card Game) learning habits of effective SRE teams, Learning Habits of Effective SRE Teams-Postmortems postmortems and, Postmortems production meetings and, Production Meetings SRE Classroom, Active Learning Example: SRE Classroom Wheel of Misfortune game, Active Learning Example: Wheel of Misfortune activism (see social activism) address resolution protocol (ARP) tables, Technical learnings adopt-to-buy abandonment scenario, Project Operating Expense and Abandonment Expense advocate phase of SRE execution, Phase 3: Advocates/Partners Affordable Care Act, Elegy for Complex Systems Agilent Technologies, Introducing SRE in Large Enterprises-Closing Thoughts alarming, Observability and Alarming alerts, On-Call and Alerting Allspaw, John, SRE Cognitive Work, Introduction-Conclusion Almeida, Daniel Prata, SRE Without SRE, SRE Without SRE: The Spotify Case Study-The Future: Speed at Scale, Safely AlphaGo, From Chess to Go: How Deep Can We Dive? Amaro, Ricardo, Introduction to Machine Learning for SRE, Why Use Machine Learning for SRE?-Success Stories Amazon Glacier, Offline storage Amazon Web Services (AWS), Self-Service Is More Than a Button Andersen, Kurt, SRE as a Success Culture, SRE as a Success Culture-Focus on the Details of Success Anderson, Brian, Origin Story antipatterns, SRE Antipatterns-So, That’s It, Then?

pages: 181 words: 52,147

The Driver in the Driverless Car: How Our Technology Choices Will Create the Future
by Vivek Wadhwa and Alex Salkever
Published 2 Apr 2017

storyId=405270046 (accessed 21 October 2016). 2. The Verge, “The 2015 DARPA Robotics Challenge Finals,” https://www.youtube.com/watch?v=8P9geWwi9e0 (accessed 21 October 2016). 3. Richard Lawler, “Google DeepMind AI wins final Go match for 4–1 series win,” Engadget 14 March 2016, https://www.engadget.com/2016/03/14/the-final-lee-sedol-vs-alphago-match-is-about-to-start (accessed 21 October 2016). 4. Wan He, Daniel Goodkind, and Paul Kowal, U.S. Census Bureau, An Aging World: 2015, International Population Reports P95/16-1, Washington, D.C.: U.S. Government Publishing Office, 2016, http://www.census.gov/content/dam/Census/library/publications/2016/demo/p95-16-1.pdf (accessed 21 October 2016). 5.

pages: 194 words: 56,074

Angrynomics
by Eric Lonergan and Mark Blyth
Published 15 Jun 2020

Despite the efforts of many Japanese firms, there is no robot for taking care of Grandma – nor do most people want one. As we’ve discussed, the mere existence of a technology does not make it a success. Demand drives supply. Tech boosters and neo-Luddites assume the opposite. Third, while AI and ML are real and are different, as seen in the success of products such as Amazon’s Alexa and Google’s AlphaGo engine, it’s not clear that their deployment at scale, which is still a long way off, is zero-sum against either workers or wages. Take AI and power grid optimization. An intelligent programme monitoring and optimizing flow across a carbon-smart grid could save huge amounts of energy, reduce costs for all firms and households, and help with climate change.

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
by Adrienne Mayor
Published 27 Nov 2018

“Narrow AI” allows a machine to carry out specific tasks, while “general AI” is a machine with “all-purpose algorithms” to carry out intellectual tasks that humans are capable of, with abilities to reason, plan, “think” abstractly, solve problems, and learn from experience. AI can also be classified by types: Type I machines are reactive, acting on what they have been programmed to perceive at the present, with no memory or ability to learn from past experience (examples include IBM’s Deep Blue chess computer, Google’s AlphaGo, and the ancient bronze robot Talos and the self-moving tripods in the Iliad). Type II AI machines have limited capacity to make memories and can add observations to their preprogrammed representations of the world (examples: self-driving cars, chatbots, and Hephaestus’s automated bellows). Type III, as yet undeveloped, would possess theory of mind and the ability to anticipate others’ expectations or desires (fictional examples: Star Wars’ C-3PO, Hephaestus’s Golden Servants, the Phaeacian ships).

Driverless: Intelligent Cars and the Road Ahead
by Hod Lipson and Melba Kurman
Published 22 Sep 2016

We may be finally seeing the resolution of Moravec’s paradox, as roboticists and computer scientists find creative new ways to apply deep learning to automate artificial perception and response. Since 2012, deep learning has given driverless cars the ability to “see,” and has improved the language comprehension of speech-recognition software. In a high-profile demonstration of its power and versatility, in 2016, deep-learning software enabled Google’s AlphaGo program to trounce the world’s best players of go, a board game considered by many to be more challenging than chess. To encourage third-party developers to build intelligent applications using their software tools, Google, Microsoft, and Facebook have each launched their own version of an open source deep-learning development platform.

pages: 312 words: 92,131

Beginners: The Joy and Transformative Power of Lifelong Learning
by Tom Vanderbilt
Published 5 Jan 2021

*2 One can, of course, learn to make gelato in New York City, or climb in indoor gyms, but that somehow didn’t sound as exciting. *3 As Martin Amis has argued about chess, “Nowhere in sport, perhaps nowhere in human activity, is the gap between the tryer and the expert so astronomical.” *4 Although it also has been suggested as an acronym for “beginning of one’s tour.” *5 AlphaGo Zero, the artificial intelligence engine developed by DeepMind to teach itself the strategy game Go, was seen, early in its learning process, to focus “greedily on capturing stones, much like a human beginner.” David Silver et al., “Mastering the Game of Go Without Human Knowledge,” Nature, Oct. 19, 2017, 354–59

pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2020

Rather than program all the possible outcomes into the software – which is what software engineers used to try to do, with inevitable shortcomings – in machine learning with a neural network, the computer learns on its own. There has been spectacular success with a turbo form of machine learning called deep learning; it’s behind the ability of DeepMind’s AlphaGo and AlphaZero, and it’s the basis of a system developed by OpenAI called Generative Pre-trained Transformer, or GPT. A publicly available version called GPT-2 can generate original text, perhaps a sports report, a movie review, or maybe even poetry, when given a prompt. It is a kind of neural network that relies on what’s called unsupervised learning.
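For readers curious what "given a prompt" looks like in practice, the publicly released GPT-2 weights can be run in a few lines through the Hugging Face transformers library. A minimal sketch (the prompt and sampling settings are arbitrary choices, not anything from the book):

from transformers import pipeline

# Download the small public GPT-2 model and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Continue a prompt with up to 40 newly sampled tokens.
out = generator("The final score at Wembley was", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])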

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

That’s why sportsbook operators are reluctant to turn things over to the machines. This is a problem for AIs in adversarial situations—they can be probed for vulnerabilities by humans or by other AIs, and then attacked at the weakest link. Take the case of the game Go. It’s well known as a proving ground for AI advancement, like with Google’s development of AlphaGo. But in 2023 a group of programmers detected a flaw in a different, supposedly superhuman AI engine, KataGo, and repeatedly defeated it. So until the machines become less error-prone, anytime anything happens in one of the dozen sports the Westgate lets you bet on, one of those nerds in the SuperBook boiler room has to make a decision.

pages: 521 words: 110,286

Them and Us: How Immigrants and Locals Can Thrive Together
by Philippe Legrain
Published 14 Oct 2020

‘I’d say we have to engage with the real world today.’3 Demis went on to become a neuroscientist and met Shane Legg, a Kiwi machine-learning researcher, at University College London. Combining their different talents and perspectives, they co-founded DeepMind, which was bought by Google for $500 million (£385 million) in 2014. In 2017 DeepMind’s AlphaGo bested the world number one at the Japanese game of Go – not by copying successful human strategies, but by devising its own better ones. Less than a third of recent patents and only a fifth of recent scientific papers were written by a single author – and even lone authors are stimulated by others.4 ‘Creativity comes from spontaneous meetings, from random discussions,’ observed the late Apple founder, Steve Jobs.

pages: 475 words: 134,707

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt
by Sinan Aral
Published 14 Sep 2020

The total and per capita adoption of global cellular subscriptions is shown from 2000 to 2010. The adoption of machine intelligence depicts annual funding for artificial intelligence worldwide in hundreds of millions of dollars from 2006 to 2016. The dates of the launches of Facebook, the iPhone, and the AI software AlphaGo are shown under their respective trends. If we want to understand this information-processing machine, we have to understand its three component parts: its substrate (the digital social network), which structures our interactions; its process (the Hype Loop), which, through the interplay of machine and human intelligence, controls the flow of information over the substrate; and its medium (the smartphone, at least for now), which is the primary input/output device through which we provide information to and receive information from the Hype Machine (Figure 3.3).

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

While it is difficult to convert between handicap stones and Elo at these extremely high levels of play, this is in the same ballpark as the predictions for perfect play (Labelle, 2017). It would be fascinating to see a version of AlphaZero play against the best humans with increasing handicaps to see how many stones ahead it really is. 81 Technically Ke Jie was referring to the “Master” version of AlphaGo Zero, which preceded AlphaZero (Wall Street Journal, 2017). 82 The breakthrough result was the DQN algorithm (Mnih et al., 2015), which successfully married deep learning and reinforcement learning. DQN gave human-level performance on 29 out of 49 Atari games. But it wasn't fully general: like AlphaZero, it needed a different copy of the network to be trained for each game.
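For reference, the endnote's comparison rests on the standard Elo expected-score formula (general rating-system background, not a formula given in the book):

\[
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
\]

so a player rated 400 points above an opponent expects to score roughly 1/(1 + 10^{-1}) ≈ 0.91 of the points; the difficulty the note points to is that there is no settled conversion between Go handicap stones and Elo points at such extreme levels of strength.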

Blueprint: The Evolutionary Origins of a Good Society
by Nicholas A. Christakis
Published 26 Mar 2019

The robotic car might create a cascade of benefits, modifying the behavior not only of drivers with whom it has direct contact but also of others with whom it has not interacted.69 Still other advances in artificial intelligence will affect our social lives. One of the most remarkable features of AlphaGo, the software that beat the reigning human champion, Lee Sedol, at the ancient game of go in March of 2016, is not its astonishing ability to learn the game on its own but the fact that, after playing against the machine, Lee Sedol reported that he himself had learned new things from the weird, beautiful, and previously unimagined moves the machine made.70 That is, interacting with this artificial intelligence changed how Sedol interacted with other humans.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

They can help lawyers and paralegals sift through thousands of documents to find the relevant precedents for a court case. They can turn natural-language instructions into computer code. They can even compose new music that sounds eerily like Johann Sebastian Bach and write (dull) newspaper articles. In 2016 the AI company DeepMind released AlphaGo, which went on to beat one of the two best Go players in the world. The chess program AlphaZero, capable of defeating any chess master, followed one year later. Remarkably, this was a self-taught program and reached a superhuman level after only nine hours of playing against itself. Buoyed by these victories, it has become commonplace to assume that AI will affect every aspect of our lives—and for the better.