OpenAI

back to index

description: an artificial intelligence research lab consisting of the for-profit OpenAI LP and its parent company, the non-profit OpenAI Inc.

67 results

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

“Takeaways from OpenAI Five (2019) [AI/ML, Dota Summary],” senrigan.io, April 22, 2019, updated June 25, 2020, https://senrigan.io/blog/takeaways-from-openai-5/. 268 OpenAI’s Dota 2 agents: Mike, “OpenAI & DOTA 2: Game Is Hard,” Games by Angelina, updated August 10, 2018, http://www.gamesbyangelina.org/2018/08/openai-dota-2-game-is-hard/; “OpenAI Five Benchmark,” streamed on Twitch, 2018, https://www.twitch.tv/videos/293517383?t=2h11m08s; (deleted user), “OpenAI Hex was Within the 200ms Response Time,” r/DotA2, Reddit, 2018, https://www.reddit.com/r/DotA2/comments/94vdpm/openai_hex_was_within_the_200ms_response_time/e3ofipk/; OpenAI et al., Dota 2, 52. 269 precisely coordinate their attacks: Mike, “OpenAI & DOTA 2: Game Is Hard.” 269 excel in team fights: ProGuides Dota 2 Tips Tricks and Guides, “Dota2: What Can We Learn from OpenAI Five?

Other researchers have come up with somewhat different rates of progress in the deep learning era; see Dario Amodei and Danny Hernandez, “AI and Compute,” openai.com, May 16, 2018, https://openai.com/blog/ai-and-compute/. 26 Moore’s law: Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38, no. 8 (April 19, 1965), https://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-law-electronics.pdf. 26 “thousands of GPUs over multiple months”: OpenAI et al., Dota 2 with Large Scale Deep Reinforcement Learning (arXiv.org, December 13, 2019), 2, https://arxiv.org/pdf/1912.06680.pdf. 26 equivalent to a human playing for 45,000 years: OpenAI, “OpenAI Five Defeats Dota 2 World Champions,” OpenAI blog, April 15, 2019, https://openai.com/blog/openai-five-defeats-dota-2-world-champions/. 26 13,000 years of simulated computer time: Ilge Akkaya et al., Solving Rubik’s Cube With a Robot Hand (arXiv.org, October 17, 2019), https://arxiv.org/pdf/1910.07113.pdf. 26 spending millions on compute: Ryan Carey, “Interpreting AI Compute Trends,” AI Impacts, n.d., https://aiimpacts.org/interpreting-ai-compute-trends/; Dan H., “How Much Did AlphaGo Zero Cost?”

“Takeaways from OpenAI Five”; Statt, “OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team.” 271 “Human players are often cautious”: OpenAI et al., Dota 2, 10. 271 “Unexpectedly, increasing [actions per minute]”: Vinyals et al., “Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning,” 353. 271 settling for suboptimal strategies: OpenAI et al., Dota 2, 62. 272 overly reliant on their team-fighting skills: Wiggers, “OpenAI’s Dota 2 Bot Defeated 99.4% of Players in Public Matches”; Mike, “OpenAI & DOTA 2: Game Is Hard”; Statt, “OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team.” 272 poor lineup of characters: “OpenAI Five Benchmark: Results,” OpenAI Blog, August 6, 2018, https://openai.com/blog/openai-five-benchmark-results/. 272 AI agents performed poorly and inflexibly: Mike, “OpenAI & DOTA 2: Game Is Hard.” 272 certain characters and types of actions off-limits: “OpenAI Five Benchmark,” OpenAI Blog, July 18, 2018, https://openai.com/blog/openai-five-benchmark/; OpenAI et al., Dota 2. 272 99.4 percent win average: OpenAI et al., Dota 2. 272 perform surgery: OpenAI et al., Dota 2, 7–8, 10–13, 25–29. 272 Superhuman precision and speed: Vinyals et al., “Grandmaster Level in StarCraft II”; Pietikäinen, “An Analysis on How Deepmind’s Starcraft 2 AI’s Superhuman Speed Is Probably a Band-Aid.” 272 forward-quarter gunshots: Colin “Farva” Price, “Navy F/A-18 Squadron Commander’s Take on AI Repeatedly Beating Real Pilot In Dogfight,” The Drive, August 24, 2020, https://www.thedrive.com/the-war-zone/35947/navy-f-a-18-squadron-commanders-take-on-ai-repeatedly-beating-real-pilot-in-dogfight. 272 capture-the-flag computer game: Jaderberg et al., “Human-Level Performance in 3D Multiplayer Games,” 3. 272 AlphaStar’s superhuman click rate: In refining their StarCraft II agent, AlphaStar, DeepMind went to great lengths to handicap the AI agent so that it was limited to playing at the rough equivalent of a human level.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

played poker on the evening before Trinity: Richard Rhodes, The Making of the Atomic Bomb, Kindle ed. (New York: Simon & Schuster Paperbacks, 2012), 971. Post describes him: Nitasha Tiku, “OpenAI Leaders Warned of Abusive Behavior before Sam Altman’s Ouster,” The Washington Post, December 8, 2023, washingtonpost.com/technology/2023/12/08/open-ai-sam-altman-complaints. companies like OpenAI and Anthropic: “Google Brain Drain: Where are the Authors of ‘Attention Is All You Need’ Now?” AIChat, aichat.blog/google-exodus-where-are-the-authors-of-attention-is-all-you-need-now.

he was fired: Elizabeth Dwoskin and Nitasha Tiku, “Altman’s Polarizing Past Hints at OpenAI Board’s Reason for Firing Him,” The Washington Post, November 22, 2023, washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham. “Technology happens because”: Cade Metz, “The ChatGPT King Isn’t Worried, but He Knows You Might Be,” The New York Times, March 31, 2023, sec. Technology, nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html. that Altman knew: Per email to Nate Silver, January 19, 2024.

However, fertility rates in the industrialized world have dramatically declined, often to below replacement levels—so roon is referring to how the world has begun to limit its population on its own. *8 Altman and another OpenAI researcher, Nick Ryder, told me that they expected GPT-4 and not GPT-3.5 to be the big public breakthrough. But their perspective is like that of the parent of a teenage son; you see him growing taller every day. The grandmother who comes over once a year for Thanksgiving is more likely to notice that Billy has suddenly become quite tall. *9 A group of OpenAI engineers left OpenAI in 2021 after the release of GPT-3 to form the rival firm Anthropic over what Jack Clark, an Anthropic cofounder, told me were primarily safety concerns about the power of OpenAI’s models.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

His salary for just the last six months of 2016 was $330,000: OpenAI, form 990, 2016. And in February 2018, Musk left, too: Eduard Gismatullin, “Elon Musk Left OpenAI to Focus on Tesla, SpaceX,” Bloomberg News, February 16, 2019, https://www.bloomberg.com/news/articles/2019-02-17/elon-musk-left-openai-on-disagreements-about-company-pathway. “Excessive automation at Tesla was a mistake”: Elon Musk tweet, April 13, 2018, https://twitter.com/elonmusk/status/984882630947753984?s=19. Altman re-formed the lab as a for-profit company: “OpenAI LP,” OpenAI blog, March 11, 2019, https://openai.com/blog/openai-lp/. an international robotics maker called ABB organized its own contest: Adam Satariano and Cade Metz, “A Warehouse Robot Learns to Sort Out the Tricky Stuff,” New York Times, January 29, 2020, https://www.nytimes.com/2020/01/29/technology/warehouse-robot.html.

SATYA NADELLA, CEO.

AT OPENAI

SAM ALTMAN, the president of Silicon Valley start-up incubator Y Combinator who became OpenAI’s CEO.
GREG BROCKMAN, the former chief technology officer of fintech start-up Stripe who helped build OpenAI.
ELON MUSK, the CEO of electric car maker Tesla and rocket company SpaceX who helped create OpenAI.
ILYA SUTSKEVER, the Geoff Hinton protégé who left Google Brain to join OpenAI, the San Francisco AI lab created in response to DeepMind.
WOJCIECH ZAREMBA, the former Google and Facebook researcher who was one of OpenAI’s first hires.

AT BAIDU

ROBIN LI, CEO.

Musk pushed back, asking how Page could be sure this superintelligence: Ibid. Brockman vowed to build the new lab they all seemed to want: Cade Metz, “Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free,” Wired, April 27, 2016, https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/. nearly $2 million for the first year: OpenAI, form 990, 2016. Musk and Altman painted OpenAI as a counterweight: Steven Levy, “How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over,” “Backchannel,” Wired, December 11, 2015, https://www.wired.com/2015/12/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over/.

pages: 190 words: 46,977

Elon Musk: A Mission to Save the World
by Anna Crowley Redding
Published 1 Jul 2019

“I met with Obama for one reason”—to talk about the dangers of artificial intelligence.169 In 2015, Elon cofounded a nonprofit called OpenAI to research the development of AI and how AI can be used to benefit humanity instead of … um … annihilating us. OpenAI is a nonprofit “AI research company, discovering and enacting the path to safe artificial general intelligence.”170 “It’s going to be very tempting to use AI as a weapon. In fact, it will be used as a weapon. The on-ramp to serious AI, the danger is going to be more humans using it against each other, I think, most likely. That will be the danger,”171 he explained in a podcast with Joe Rogan. As of this writing, OpenAI has a team of sixty researchers and engineers working on the project, and they conduct their research without the pressure of having to make money.

IFLScience, 28 April 2016. www.iflscience.com/technology/elon-musk-unveils-the-ridiculously-big-tesla-gigfactory/. Olsen, Patrick. “Tesla Model 3 Gets CR Recommendation After Braking Update.” Consumer Reports, 30 May 2018. www.consumerreports.org/car-safety/tesla-model-3-gets-cr-recommendation-after-braking-update/. OpenAI. openai.com. Oremus, Will. “Elon Musk Is Not a Comic Book Superhero. He’s Way More Interesting Than That.” Slate, 21 May 2015. slate.com/articles/business/moneybox/2015/05/elon_musk_biography_review_how_did_a_sci_fi_nut_with_a_hero_complex_becoming.html. _____. “Romney Decides That Thriving Electric-Car Start-up Tesla Is a ‘Loser.’”

Elon Musk, interview by Chris Anderson. 163. Elon Musk, “The Boring Company Information Session.” 164. Vance, p. 43. 165. Strauss, “Elon Musk.” 166. Paine, Do You Trust This Computer? 167. Paine, Do You Trust This Computer? 168. Cellan-Jones, “Stephen Hawking Warns.” 169. Elon Musk, interview by Joe Rogan. 170. OpenAI, openai.com. 171. Elon Musk, interview by Joe Rogan. 172. Paine, Do You Trust This Computer? 173. Elon Musk, interview by Joe Rogan. 174. Neuralink, www.neuralink.com. 175. Elon Musk, interview by Mohammad Abdullah Al Gergawi. 176. Elon Musk, interview by Mohammad Abdullah Al Gergawi. 177.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

New Haven/London, Yale University Press. https://doi.org/10.12987/9780300252392. Hamber, Anthony/Miles, Jean/Vaughan, William (Eds.) (1989). Computers and the History of Art. London, Mansell Pub. Hao, Karen (2020). The Messy, Secretive Reality behind OpenAI’s Bid to Save the World. MIT Technology Review, 18 February 2020. Available online at https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/. Mitchell, Margaret/Wu, Simone/Zaldivar, Andrew et al. (2019). Model Cards for Model Reporting. FAT* ’19: Conference on Fairness, Accountability, and Transparency, 29–31 January 2019, 220–29. https://doi.org/10.1145/3287560.3287596.

The two paragraphs above were generated by ChatGPT1 from OpenAI and demonstrate what the neural network that lies behind it has already learned about artificial intelligence in the context of cultural heritage. What is interesting here, however, is that the answer to the initial prompt focuses solely on the potential of applying AI tools ‘to assist in the analysis and interpretation of large amounts of data’, whereas only the second, follow-up prompt, which explicitly shifts the focus towards the possible contribution of cultural heritage to AI, reveals that there are also significant opportunities for AI to benefit from cultural heritage. 1 https://chat.openai.com/chat (all URLs here accessed in June 2023).

BBC News, 23 July 2022. https://www.bbc.com/news/technology-62275326.

AI and Art: Arguments for Practice
Arno Schubbach

Over the past decade, the advances in artificial intelligence (AI) research have been attracting a lot of attention and provoking a broad variety of debates. Especially in the last two or three years, the progress in image generation by ‘generative adversarial networks’ (GAN) or ‘diffusion models’ (like DALL-E 2 or Stable Diffusion) has been breathtaking—and has perhaps only been overshadowed in the public’s attention by OpenAI’s ChatGPT, which will soon be part of the everyday life of almost all computer users, if it is not already. Compared with these swift technological advances, the debates they entail seem rather stable and often dominated by the same recurring, quite speculative questions: Can machines have consciousness?

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

These systems are called transformers. Since Google researchers published the first paper on them in 2017, the pace of progress has been staggering. Soon after, OpenAI released GPT-2. (GPT stands for generative pre-trained transformer.) It was, at the time, an enormous model. With 1.5 billion parameters (the number of parameters is a core measure of an AI system’s scale and complexity), GPT-2 was trained on 8 million pages of web text. But it wasn’t until the summer of 2020, when OpenAI released GPT-3, that people started to truly grasp the magnitude of what was happening. With a whopping 175 billion parameters it was, at the time, the largest neural network ever constructed, more than a hundred times larger than its predecessor of just a year earlier.

At DeepMind we developed systems: “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%,” DeepMind, July 20, 2016, www.deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40. With 1.5 billion parameters: “Better Language Models and Their Implications,” OpenAI, Feb. 14, 2019, openai.com/blog/better-language-models. Over the next few years: See Martin Ford, Rule of the Robots: How Artificial Intelligence Will Transform Everything (London: Basic Books, 2021), for a developed comparison. More realistically, the average American: Amy Watson, “Average Reading Time in the U.S. from 2018 to 2021, by Age Group,” Statista, Aug. 3, 2022, www.statista.com/statistics/412454/average-daily-time-reading-us-by-age.

eminent professor of complexity Melanie Mitchell: See Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (London: Pelican Books, 2020), and Steven Strogatz, “Melanie Mitchell Takes AI Research Back to Its Roots,” Quanta Magazine, April 19, 2021, www.quantamagazine.org/melanie-mitchell-takes-ai-research-back-to-its-roots-20210419. I think it will be done: The Alignment Research Center has already tested GPT-4 for precisely this kind of capability. GPT-4 was, at this stage, “ineffective” at acting autonomously, the research found. “GPT-4 System Card,” OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4-system-card.pdf. Within days of launch people were getting surprisingly close; see, for example, mobile.twitter.com/jacksonfall/status/1636107218859745286. The version of the test here, though, requires far more autonomy than displayed there.

Chapter 5: The Technology of Life

Just as everything from the steam engine: Susan Hockfield, The Age of Living Machines: How Biology Will Build the Next Technology Revolution (New York: W.

pages: 562 words: 201,502

Elon Musk
by Walter Isaacson
Published 11 Sep 2023

Musk challenged him to justify how he could legally transform a nonprofit funded by donations into a for-profit that could make millions. Altman tried to show that it was all legitimate, and he insisted that he personally was not a shareholder or cashing in. He also offered Musk shares in the new company, which Musk declined. Instead, Musk unleashed a barrage of attacks on OpenAI and Altman. “OpenAI was created as an open-source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” he said. “I’m still confused as to how a non-profit to which I donated $100M somehow became a $30B market cap for-profit.

Musk’s determination to develop artificial intelligence capabilities at his own companies caused a break with OpenAI in 2018. He tried to convince Altman that OpenAI, which he thought was falling behind Google, should be folded into Tesla. The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting a for-profit arm that was able to raise equity funding. So Musk decided to forge ahead with building a rival AI team to work on Tesla Autopilot. Even as he was struggling with the production hell surges in Nevada and Fremont, he recruited Andrej Karpathy, a specialist in deep learning and computer vision, away from OpenAI. “We realized that Tesla was going to become an AI company and would be competing for the same talent as OpenAI,” Altman says.

Musk tried to prevent Page and Google from purchasing DeepMind, the company formed by AI pioneer Demis Hassabis. When that failed, he formed a competing lab, a nonprofit called OpenAI, with Sam Altman in 2015. Humans can be pricklier than machines, and Musk eventually split with Altman, left the board of OpenAI, and lured away its high-profile engineer Andrej Karpathy to lead the Autopilot team at Tesla. Altman then formed a for-profit arm of OpenAI, got a $13 billion investment from Microsoft, and recruited Karpathy back. Among the products that OpenAI developed was a bot called ChatGPT that was trained on large internet data sets to answer questions posed by users.

pages: 308 words: 85,880

How to Fix the Future: Staying Human in the Digital Age
by Andrew Keen
Published 1 Mar 2018

The self-policing strategy of the DeepMind coalition sounds similar to the goals of another idealistic Elon Musk start-up—OpenAI, a Silicon Valley–based nonprofit research company focused on the promotion of an open-source platform for artificial intelligence technology. Musk cofounded OpenAI with Sam Altman, the thirty-one-year-old CEO of Y Combinator, Silicon Valley’s most successful seed investment fund. Launched in 2015 with a billion dollars raised by Silicon Valley royalty including the multibillionaires Reid Hoffman and Peter Thiel, the Silicon Valley–based OpenAI is run by a former Google expert on machine learning and staffed with an all-star team of computer scientists cherry-picked from top Big Tech firms.

But, I’m afraid, there aren’t too many people in Silicon Valley quite as responsible as LinkedIn co-founder Reid Hoffman, a relative paragon of civic virtue, who, during the 2016 American presidential election, promised to donate five million dollars of his own money to a veterans’ charity if Donald Trump publicly disclosed his taxes. In spite of being an investor in OpenAI, Reid Hoffman is skeptical of hubristic Silicon Valley companies that believe they can stand outside history and fix the entire world. Rather than being courageous, he suggests, that’s just myopic. Even juvenile. “It’s great they are being ambitious,” Hoffman told the New Yorker about Sam Altman and some of his Y Combinator projects. “But classically in the Valley, when people try to reinvent an area, it ends badly.”7 Hoffman’s ambivalence about the grandiose promises of OpenAI may be one reason why he is also one of the major investors in the “Fund for Artificial Intelligence and Society” launched by the nonprofit Knight Foundation in early 2017.

This fund, which also includes the MIT Media Lab and the Berkman Center at Harvard as partners, and, you’ll remember, Betaworks’ John Borthwick as an advisor, is designed—like the Centre for the Study of Existential Risk at Cambridge—to foster a combinatorial network of researchers, ethicists, and technologists focused on studying the impact of AI on society. In contrast with the DeepMind or OpenAI coalition, this Knight Foundation initiative doesn’t just rely on technologists to make ethical decisions. “My point of view is that it is a massive transformation and does really impact the future of humanity,” Hoffman says about the AI revolution. “But that we can steer it more toward utopia rather than dystopia with intelligence and diligence.”8 John Bracken, a longtime nonprofit executive who runs the Knight Foundation program, told me that Hoffman is a “particularly important” influence on this new fund.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

Alec Radford, Jeffrey Wu, Dario Amodei et al., “Better language models and their implications,” OpenAI Blog, February 14, 2019, openai.com/blog/better-language-models/. 38. James Vincent, “OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws,” The Verge, July 30, 2020, www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential. 39. Gary Marcus and Ernest Davis, “GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about,” MIT Technology Review, August 22, 2020, www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/. 40.

Microsoft’s 2019 billion-dollar investment in the AI research company OpenAI—which along with Google’s DeepMind is a leader in pushing the frontiers of deep learning—offers a case study in the natural synergy between cloud computing and artificial intelligence. OpenAI will be able to leverage massive computational resources hosted by Microsoft’s Azure service—something that is essential given its focus on building ever larger neural networks. Only cloud computing can deliver compute power on the scale that OpenAI requires for its research. Microsoft, in turn, will gain access to practical innovations that are spawned by OpenAI’s ongoing quest for artificial general intelligence.

While this is admittedly a far cry from human-level AI, Kurzweil remains confident in his strategy, telling me that “humans use this hierarchical approach” and that ultimately it will be “sufficient for AGI.”36 Yet another path toward artificial general intelligence is being forged by OpenAI, a San Francisco–based research organization that was founded in 2015 with financial backing from, among others, Elon Musk, Peter Thiel and LinkedIn co-founder Reid Hoffman. OpenAI was initially set up as a nonprofit entity with a mission to undertake a safe and ethical quest for AGI. The organization was conceived partly in response to Elon Musk’s deep concern about the potential for superhuman machine intelligence to someday pose a genuine threat to humanity. From the outset, OpenAI has attracted some of the field’s top researchers, including Ilya Sutskever, who was part of the team from Geoff Hinton’s University of Toronto lab that built the neural network that triumphed at the 2012 ImageNet competition.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

I’ve been lucky enough to spend some time talking over beers with Ben and I share this view about the downside of having any corporation or government with as much control over something that will become so powerful. So do many others, including Elon Musk and Reid Hoffman, who helped kick off the OpenAI initiative. OpenAI’s mission is to “build safe AGI and ensure AGI’s benefits are as widely and evenly distributed as possible.” But although these open initiatives are laudable, what hurts many of them is the lack of data and data velocity, which inhibits the learning rate. Core to every one of the major platforms is a product or service that compels you to give them your data for free—from your Google searches, to your Alexa enquiries, to your Instagram pictures.

The platform then monetizes your data in numerous ways, selling products or services more effectively to you or selling your data to advertisers. All the while the platform is using its tremendous data advantage to make its service better and better. Providing your data seems like a small price to pay for the extraordinary benefit of the service. That in itself becomes the problem with open AI initiatives outside of companies where there is a financial incentive to give away a product or service to get the data to make the product better. It is hard to see any of these open initiatives gaining enough momentum without an extraordinary product or service that is core to their data capture.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

It Flagged an Innocent Student,” Washington Post, April 3, 2023, https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin. OpenAI, “GPT-4,” OpenAI, March 14, 2023, https://openai.com/research/gpt-4; OpenAI, “GPT-4 Technical Report,” arXiv:2303.08774v3 [cs.CL], March 27, 2023, https://arxiv.org/pdf/2303.08774.pdf; OpenAI, “GPT-4 System Card,” OpenAI, March 23, 2023, https://cdn.openai.com/papers/gpt-4-system-card.pdf. OpenAI, “Introducing GPT-4,” YouTube video, March 15, 2023, https://www.youtube.com/watch?v=--khbXchTeE. Daniel Feldman (@d_feldman), “On the left is GPT-3.5.

As I write this, prices for the GPT-3.5 API are down to $1.00 per 500,000 tokens, or roughly 370,000 words. Prices will likely be even lower by the time you read this. See Ben Dickson, “OpenAI Is Reducing the Price of the GPT-3 API—Here’s Why It Matters,” VentureBeat, August 25, 2022, https://venturebeat.com/ai/openai-is-reducing-the-price-of-the-gpt-3-api-heres-why-it-matters; OpenAI, “Introducing ChatGPT and Whisper APIs,” OpenAI, March 1, 2023, https://openai.com/blog/introducing-chatgpt-and-whisper-apis; OpenAI, “What Are Tokens and How to Count Them?,” OpenAI, accessed April 30, 2023, https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them. Stephen Nellis, “Nvidia Shows New Research on Using AI to Improve Chip Designs,” Reuters, March 27, 2023, https://www.reuters.com/technology/nvidia-shows-new-research-using-ai-improve-chip-designs-2023-03-28.

Pandu Nayak, “Understanding Searches Better Than Ever Before,” Google, October 25, 2019, https://blog.google/products/search/search-language-understanding-bert; William Fedus et al., “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity,” arXiv:2101.03961 [cs.LG], January 11, 2021, https://arxiv.org/abs/2101.03961. For more in-depth information on GPT-3, see Greg Brockman et al., “OpenAI API,” OpenAI, June 11, 2020, https://openai.com/blog/openai-api; Brown et al., “Language Models Are Few-Shot Learners”; Kelsey Piper, “GPT-3, Explained: This New Language AI Is Uncanny, Funny—and a Big Deal,” Vox, August 13, 2020, https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language; “GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology,” Disruption Theory, YouTube video, August 28, 2020, https://www.youtube.com/watch?

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

What really surprised the AI community, however, was not the model used in GPT-2, the architecture of which was based on simply predicting the next most likely word based on all the previous words in the text. OpenAI’s achievement was that it had scaled the system up to a new level by analyzing text from more than 8 million web pages. The striking thing was the announcement that OpenAI would not release the model, contrary to a trend toward transparency in the research community. “Due to our concerns about malicious applications of the technology,” the OpenAI team wrote, “we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.” OpenAI was created in 2015 as a nonprofit organization funded by wealthy technologists, including Elon Musk, Peter Thiel, Sam Altman, and Reid Hoffman, who were concerned with charting a path toward safe artificial general intelligence.

It can also craft Harry Potter stories in the style of Ernest Hemingway, invent plausible conversations between famous people in history who never met, summarize movies with emojis, write poetry, and much more. The reason we know about these capabilities is that OpenAI released the GPT-3 model to interested parties, albeit through an application process in which OpenAI controls access. Those granted access began playing with it and posting their findings. OpenAI announced its intention to offer GPT-3 as a revenue-generating commercial product in limited contexts. In the months between announcing GPT-2 and GPT-3, OpenAI found itself needing investment capital and converted from a nonprofit to a for-profit corporation. It promised to adhere to its social mission by pursuing an unusual “capped profit” model, by which investors in the company could get returns up to a specified cap and any additional profits beyond that would be reinvested into OpenAI’s pursuit of safe artificial general intelligence.

But what seemed a sober precaution was considered by some in the AI world either as running afoul of research norms and rank hypocrisy given the “open” part of OpenAI or as a cheap publicity stunt designed to call attention to the organization. Some AI scientists facetiously said that they, too, had made breakthrough discoveries in the lab but could not share details due to their concerns about bad actors. By late 2019, OpenAI decided to release the full-scale GPT-2 model—with 1.5 billion parameters—as part of a staged release plan. OpenAI’s scientists also reported results from their research partners, shedding more light on the earlier concerns.

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

Today, the transformer underpins most cutting-edge applications of AI in development, from Google Search and Translate to mobile autocomplete and speech recognition by Alexa. It also paved the way for Californian company OpenAI to build ChatGPT. The Transformer Chatbot Nothing could have prepared Mira Murati and her colleagues for how ChatGPT would be used by the world. On 29 November 2022, Mira, who was OpenAI’s chief technology officer, was putting the finishing touches to a new release launching the next day.3 There hadn’t been much fanfare about it, as it was mostly an experimental prototype. Mira went home at her usual time. She had joined OpenAI a few years previously when it was a non-profit research lab with a single goal: to create an artificial form of ‘general intelligence’, AI software able to perform any task at the same level of competence as human beings.

It had been set up by radical tech entrepreneurs including Elon Musk and Peter Thiel out of a concern that AI would end up destroying the human race. Their solution? To fund the creation of a benevolent AI system that they could control to do good, not evil. But then, the organization transformed. OpenAI took a hefty investment of more than $10bn from Microsoft and converted itself into what was, for all intents and purposes, a for-profit enterprise that sold AI technologies to large corporations and governments around the world.4 OpenAI’s crown jewel was an algorithm called GPT – the Generative Pre-trained Transformer – software that could produce text-based answers in response to human queries. One of the authors of the ‘Attention Is All You Need’ paper, Lukasz Kaiser, had ended up working there and helping to build it.

But while in Rome last year learning about Benanti’s framework of ‘algor-ethics’, I met Brad Smith, the president of Microsoft. Smith was the corporate face of responsible AI – the man who had helped choreograph the Rome Call alongside Paolo, while his company invested $10bn in OpenAI, one of the world’s most powerful AI companies. I couldn’t resist asking him how he reconciled the two things. I asked him if he truly believed endeavours like these would make any dent at all in the global hegemony of companies like Microsoft or OpenAI. Smith said all the right things: that technologists needed to connect more with the social sciences, with philosophy, with religion and the humanities, to help them consider the societal impact of the products they made.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

The platform made use of Nvidia’s Compute Unified Device Architecture, or CUDA, which allows programmers to write code to perform highly parallel computations on Nvidia GPUs. For a 2020 retrospective of the stunning increases in the efficiency of training neural networks since AlexNet, see the work of OpenAI’s Danny Hernandez and Tom Brown at https://openai.com/blog/ai-and-efficiency/ and https://cdn.openai.com/papers/ai_and_efficiency.pdf. 21. “Rival.” 22. Jacky Alciné, personal interview, April 19, 2018. 23. See https://twitter.com/jackyalcine/status/615329515909156865 and https://twitter.com/yonatanzunger/status/615355996114804737 for this exchange. 24.

As a result, these curious agents found their way to the goal in much more vast and complex mazes than the agents without this intrinsic drive. Pathak’s Berkeley group teamed up with a group of researchers from OpenAI, and together they continued to explore this idea of using prediction error as a reward signal. Surprisingly, they found that a dramatic simplification of this architecture—replacing the network designed specifically to predict controllable aspects of the future with one designed to predict random features of the image on screen—worked just as well and in some cases even better.51 The researchers at OpenAI, led by Yuri Burda and Harrison Edwards, worked to refine this idea, which they dubbed random network distillation, or RND.52 It wasn’t long before they began to set their sights on Montezuma’s Revenge.
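The prediction-error idea behind random network distillation can be sketched in a few lines. The following is a toy illustration, not OpenAI's implementation: a frozen, randomly initialized "target" network embeds each observation, a trainable "predictor" is regressed toward those embeddings, and the squared prediction error serves as the intrinsic reward, shrinking as a state becomes familiar. All names, dimensions, and the learning rate here are invented for the example, and both "networks" are reduced to single linear layers.

```python
import math
import random

random.seed(0)

obs_dim, feat_dim, lr = 8, 4, 0.05

# Frozen, randomly initialized "target" network: its outputs are the
# arbitrary features the predictor must learn to reproduce.
W_target = [[random.gauss(0, 1) for _ in range(obs_dim)] for _ in range(feat_dim)]
# Trainable "predictor" network, regressed toward the target's outputs.
W_pred = [[0.0] * obs_dim for _ in range(feat_dim)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def intrinsic_reward(obs):
    """Return the RND novelty bonus for one observation and take one SGD step."""
    target = matvec(W_target, obs)
    pred = matvec(W_pred, obs)
    err = [p - t for p, t in zip(pred, target)]
    bonus = sum(e * e for e in err)          # squared error = novelty bonus
    for i in range(feat_dim):                # gradient step on 0.5 * ||err||^2
        for j in range(obs_dim):
            W_pred[i][j] -= lr * err[i] * obs[j]
    return bonus

# One observation, visited repeatedly (normalized so the plain SGD step is stable).
raw = [random.gauss(0, 1) for _ in range(obs_dim)]
norm = math.sqrt(sum(x * x for x in raw))
obs = [x / norm for x in raw]

bonuses = [intrinsic_reward(obs) for _ in range(50)]
print(bonuses[0] > bonuses[-1])  # familiar states earn a shrinking bonus
```

The point of the fixed random target, as the passage notes, is that the features need not be meaningful: any stable function of the observation works, because the bonus measures only how well the predictor has caught up on states it has seen before.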

Goh’s work, not suitable for children or the faint of heart, is available at https://open_nsfw.gitlab.io, and was based on the methods in Nguyen et al., “Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks.” Goh subsequently joined Olah’s Clarity team at OpenAI. 70. See Mordvintsev, Olah, and Tyka, “Inceptionism,” and Mordvintsev, Olah, and Tyka, “DeepDream.” 71. See Olah, Mordvintsev, and Schubert, “Feature Visualization”; Olah et al., “The Building Blocks of Interpretability”; and Carter et al., “Activation Atlas.” More recent work includes detailed “microscopy” of cornerstone deep-learning models like AlexNet; see, e.g., https://microscope.openai.com/models/alexnet. 72. Chris Olah, personal interview, May 4, 2020. For more, see his “Circuits” collaboration: https://distill.pub/2020/circuits/. 73.

pages: 291 words: 80,068

Framers: Human Advantage in an Age of Technology and Turmoil
by Kenneth Cukier , Viktor Mayer-Schönberger and Francis de Véricourt
Published 10 May 2021

For more on the work, see also: Ng Wai Foong, “Beginner’s Guide to OpenAI Five at Dota2,” Medium, May 7, 2019, https://medium.com/@ngwaifoong92/beginners-guide-to-openai-five-at-dota2-3b49ee5169b8; Evan Pu, “Understanding OpenAI Five,” Medium, August 12, 2018, https://medium.com/@evanthebouncy/understanding-openai-five-16f8d177a957. OpenAI’s “team spirit” hyper-parameter: Christy Dennison et al., “OpenAI Five,” OpenAI, June 25, 2018, https://openai.com/blog/openai-five. On connecting nothing: T. S. Eliot, The Waste Land (New York: Boni and Liveright, 1922). 4. counterfactuals Foote’s paper: Eunice Foote, “Circumstances Affecting the Heat of the Sun’s Rays,” American Journal of Science and Arts 22, no. 66 (November 1856): 382–83, https://archive.org/stream/mobot31753002152491#page/382/mode/2up.

AI wins Dota 2: Nick Statt, “OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team with Back-to-Back Victories,” The Verge, April 13, 2019, https://www.theverge.com/2019/4/13/18309459/openai-five-dota-2-finals-ai-bot-competition-og-e-sports-the-international-champion. OpenAI Dota 2 research paper: Christopher Berner et al., “Dota 2 with Large Scale Deep Reinforcement Learning,” OpenAI, 2019, https://arxiv.org/abs/1912.06680.

Defense of the Ancients, or Dota, is a multiplayer online video game where teams of five players vie to destroy a large structure in the other team’s base (and slaughter one’s enemies in violent battle). It requires complex strategic decisions, long-term planning, and cooperation among players. And it’s a global phenomenon, with international tournaments and talk of adding it as an official Olympic sport. The annual prize money for the top teams reaches a pixel-popping $40 million. In 2019 OpenAI, an AI research organization in San Francisco, built a system that stunned the Dota universe by crushing the best human players at Dota 2. On the surface, it seems that the system could divine causation, generalize from experience, and, with those abstractions, apply causal templates to new circumstances.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

Microsoft’s Megatron-Turing model,2 released in late 2021, and Google’s PaLM,3 released in early 2022, each has more than 525 billion parameters compared to 175 billion for OpenAI’s GPT-3, which we wrote about in previous chapters and which was released in June of 2020. These models also perform more impressively than GPT-3 on a wide range of language tasks. OpenAI is also working on its next version of GPT, continuing the race. Models such as DeepMind’s RETRO and OpenAI’s GLIDE have improved both efficiency and capacity, able to do more with the same number of model parameters but often using more training data than older models.4 As these models add nodes, layers, and connections, they can recognize and use additional relationships and patterns.

Will Douglas Heaven, “DeepMind Says Its New Language Model Can Beat Others 25 Times Its Size,” MIT Technology Review, December 8, 2021, https://www.technologyreview.com/2021/12/08/1041557/deepmind-language-model-beat-others-25-times-size-gpt-3-megatron/. 5. Ilya Sutskever, “Fusion of Language and Vision,” The Batch, December 20, 2020, https://read.deeplearning.ai/the-batch/issue-72/. 6. “Dall·E 2,” OpenAI.com, https://openai.com/dall-e-2/. 7. Cade Metz, “Meet Dall-E, the A.I. That Draws Anything at Your Command,” New York Times, April 6, 2022, https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html 8. Robert Service, “Protein Structures for All,” Science, December 16, 2021, https://www.science.org/content/article/breakthrough-2021. 9. David F. Carr, “Hungarian Gov Teams Up with Eastern European Bank to Develop AI Supercomputer,” VentureBeat, December 9, 2021, https://venturebeat.com/2021/12/09/hungarian-gov-teams-up-with-eastern-european-bank-to-develop-ai-supercomputer/. 10.

Language models encode what is reflected in human text rather than offering a deep understanding of it, although they may sometimes project the appearance of such deep understanding. In 2022, machine learning continued to broaden its vision, so to speak. As OpenAI’s chief scientist predicted at the end of 2020, language models have “start[ed] to become aware of the visual world.”5 Multimodality, the ability of text-trained language models to process and generate audio and visual media, is a burgeoning field of exploration. The most prominent current example is DALL·E 2 from OpenAI, announced in early 2022.6 DALL·E 2 can create photographic images or professional-quality artwork based on arbitrary text descriptions.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

And it’s generally too expensive to train 1,000 robots in parallel. In short, training is hard and slow in the real world, so you generally need a simulated environment at least to bootstrap training. OpenAI gym8 is a toolkit that provides a wide variety of simulated environments (Atari games, board games, 2D and 3D physical simulations, and so on), so you can train agents, compare them, or develop new RL algorithms. Let’s install OpenAI gym. For a minimal OpenAI gym installation, simply use pip:

$ pip3 install --upgrade gym

Next open up a Python shell or a Jupyter notebook and create your first environment:

>>> import gym
>>> env = gym.make("CartPole-v0")
[2016-10-14 16:03:23,199] Making new env: CartPole-v0
>>> obs = env.reset()
>>> obs
array([-0.03799846, -0.03288115,  0.02337094,  0.00720711])
>>> env.render()

The make() function creates an environment, in this case a CartPole environment.
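The interaction loop this chapter goes on to build (make an environment, reset it, choose an action, step) can be sketched without installing gym at all. Below is a minimal stand-in: CartPoleStub is an invented class that mimics only the shape of the classic Gym interface, not the real cart-pole physics, and its 200-step episode cap mirrors CartPole-v0's time limit.

```python
import random

class CartPoleStub:
    """Invented stand-in mirroring the classic Gym interface (reset/step).

    Observations have the same shape the text shows: four floats for
    [cart position, cart velocity, pole angle, pole angular velocity].
    No real physics; values are just small random numbers.
    """
    def __init__(self):
        self.actions = (0, 1)  # push cart left / push cart right
        self._steps = 0

    def reset(self):
        self._steps = 0
        return [random.uniform(-0.05, 0.05) for _ in range(4)]

    def step(self, action):
        assert action in self.actions
        self._steps += 1
        obs = [random.uniform(-0.05, 0.05) for _ in range(4)]
        reward = 1.0                 # CartPole pays +1 per step survived
        done = self._steps >= 200    # CartPole-v0 caps episodes at 200 steps
        return obs, reward, done, {}

# The standard agent loop: reset, then act until the episode ends.
env = CartPoleStub()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice(env.actions)   # a random policy, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # the stub always survives the full 200 steps
```

Against the real toolkit you would replace CartPoleStub() with gym.make("CartPole-v0") and keep the loop unchanged; that interchangeability is exactly what makes gym useful for comparing agents and algorithms.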

[The remainder of this excerpt is the book’s back-matter index, flattened by extraction; its OpenAI-related entries read: OpenAI Gym, Introduction to OpenAI Gym; render(), Introduction to OpenAI Gym; Q-Learning algorithm, Temporal Difference Learning and Q-Learning.]

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

‘Human-level performance in 3D multiplayer games with population-based reinforcement learning’, Science 364, no. 6443 (2019): 859–865. 2. Berner, Christopher, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi et al. ‘Dota 2 with large scale deep reinforcement learning’, arXiv preprint arXiv:1912.06680 (2019). 3. See OpenAI, ‘OpenAI Five defeats Dota 2 World Champions’, 15 April 2019, https://openai.com/blog/openai-five-defeats-dota-2-world-champions/. 4. Cohn, Gabe. ‘AI art at Christies sells for $432,500’, The New York Times, 25 October 2018, https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html. 5. Elgammal, Ahmed, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone.

Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder Singh, and David Silver. ‘Discovering Reinforcement Learning Algorithms’, arXiv preprint arXiv:2007.08794 (2020). O’Mara, Margaret, The Code: Silicon Valley and the Remaking of America. New York: Penguin Press, 2019. OpenAI, ‘OpenAI Five defeats Dota 2 World Champions’, 15 April 2019, https://openai.com/blog/openai-five-defeats-dota-2-world-champions/. Ortiz-Catalan, Max, Enzo Mastinu, Paolo Sassu, Oskar Aszmann, Rickard Brånemark. ‘Self-Contained Neuromusculoskeletal Arm Prostheses’, New England Journal of Medicine 382, no. 18 (2020): 1732. Osborn, Kris, ‘Future of war will be “hyperactive battlefields”: US Army General,’ The National Interest, 30 January 2021, https://nationalinterest.org/blog/buzz/future-war-will-be-‘hyperactive-battlefields’-us-army-general-177371.

But the standout performance came from a team made of one human and one algorithm. Why? Success apparently rested on the combination of unerring tactical skill of the machine—its speed of decision and accuracy of shooting—in harness with the strategic outlook of the human, able to take a longer-range view of the game. Meanwhile, another leading AI research firm, OpenAI, has been using another multiplayer strategy game as its testbed—Dota 2.2 This too is a multiplayer game, pitting two teams against each other, each with five members. The game demands the familiar mix of tactical skill and strategic thinking. Here again, AI has become competitive against world-class human teams, in part on the basis of what one player called its ‘hydraulics’.

pages: 179 words: 43,441

The Fourth Industrial Revolution
by Klaus Schwab
Published 11 Jan 2016

http://www.nature.com/nature/journal/v489/n7415/full/nature11421.html 60 Stephen Hawking, Stuart Russell, Max Tegmark, Frank Wilczek, “Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?”, The Independent, 2 May 2014. http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html 61 Greg Brockman, Ilya Sutskever & the OpenAI team, “Introducing OpenAI”, 11 December 2015 https://openai.com/blog/introducing-openai/ 62 Steven Levy, “How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over”, 11 December 2015 https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.qjj55npcj 63 Sara Konrath, Edward O’Brien, and Courtney Hsing.

As theoretical physicist and author Stephen Hawking and fellow scientists Stuart Russell, Max Tegmark and Frank Wilczek wrote in the newspaper The Independent when considering the implications of artificial intelligence: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all…All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks”.60 One interesting development in this area is OpenAI, a non-profit AI research company announced in December 2015 with the goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.61 The initiative, co-chaired by Sam Altman, President of Y Combinator, and Elon Musk, CEO of Tesla Motors, has secured $1 billion in committed funding.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

This is going to be an engineered system, and once we figure out what the key components are, that would be a good time to start thinking about how we modulate and structure them so as to get the best outcomes. Right now, it’s just very ephemeral. MARTIN FORD: There are already a number of think-tank organizations springing up, such as OpenAI. Do you think those are premature in terms of the resources being invested, or do you think it’s a productive thing to start working on? DAPHNE KOLLER: OpenAI does multiple things. A lot of what it does is to create open source AI tools to democratize access to a truly valuable technology. In that respect, I think it’s a great thing. There’s a lot of work being done at those organizations thinking about the other important risks of AI.

This is where the computer can be more autonomous in the way that it acquires knowledge about the world. Another area of research is in causality, where the computer can not only observe data, like images or videos, but also act on it and see the effect of those actions in order to infer causal relationships in the world. The kinds of things that DeepMind, OpenAI, or Berkeley are doing with virtual agents, for example, are going in the right direction to answer those types of questions, and we’re also doing these kinds of things in Montreal. MARTIN FORD: Are there any particular projects that you would point to as being really at the forefront of deep learning right now?

YOSHUA BENGIO: There are a number of interesting projects, but the ones that I think are likely in the long run to have a big impact are those that involve virtual worlds in which an agent is trying to solve problems and is trying to learn about their environment. We are working on this at MILA, and there are projects in the same area in progress at DeepMind, OpenAI, Berkeley, Facebook and Google Brain. It’s the new frontier. It’s important to remember, though, that this is not short-term research. We’re not working on a particular application of deep learning, instead we’re looking into the future of how a learning agent makes sense of its environment and how a learning agent can learn to speak or to understand language, in particular what we call grounded language.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

At the time of writing, the state-of-the-art AI models for text-based applications are so-called transformers, which include Google’s BERT and OpenAI’s GPT-3 (T. Brown et al. 2020; Devlin et al. 2019; Vaswani et al. 2017). Transformers have also been successfully used for tasks involving audio (Child et al. 2019), images (M. Chen et al. 2020; Dosovitskiy et al. 2021), and video (Wang et al. 2021). The highest-profile AI achievements in real-time strategy games were DeepMind’s AlphaStar defeat of human grandmasters in the game StarCraft II and the OpenAI Five’s defeat of human world champions in Dota 2 (OpenAI et al. 2019; Vinyals et al. 2019). Early successes in image classification (see, e.g., Krizhevsky et al. 2012) are widely seen as having been key for demonstrating the potential of deep learning.

An AGI could learn not only to play board games but also to drive, to have conversations, to do mathematics, and countless other tasks. So far, artificial intelligence has been narrow. AlphaGo is extraordinarily good at playing Go but is incapable of doing anything else.41 But some of the leading AI labs, such as DeepMind and OpenAI, have the explicit goal of building AGI.42 And there have been indications of progress, such as the performance of GPT-3, an AI language model which can perform a variety of tasks it was never explicitly trained to perform, such as translation or arithmetic.43 AlphaZero, a successor to AlphaGo, taught itself how to play not only Go but also chess and shogi, ultimately achieving world-class performance.44 About two years later, MuZero achieved the same feat despite initially not even knowing the rules of the games.45 The development of AGI would be of monumental long-term importance for two reasons.

See also the following: speech recognition, Abdel-Hamid et al. (2014); Ravanelli et al. (2019); music, Briot et al. (2020); Choi et al. (2018); Magenta (n.d.); visual art, Gatys et al. (2016); Lecoutre et al. (2017). Building on astonishing progress demonstrated by Ramesh et al. (2021), the ability to create images from text descriptions by combining two AI systems known as VQGAN (Esser et al. 2021) and CLIP (OpenAI 2021b; Radford et al. 2021) caused a Twitter sensation (Miranda 2021). 38. “BERT is now used in every English search, Google says, and it’s deployed across a range of languages, including Spanish, Portuguese, Hindi, Arabic, and German” (Wiggers 2020). BERT is an example of a transformer (see the previous endnote). 39.

pages: 277 words: 70,506

We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News
by Eliot Higgins
Published 2 Mar 2021

Audio deepfakes have already been put to malicious use, with scammers using speech samples of a CEO to replicate his voice digitally, with which they ordered a junior employee to urgently transfer €220,000 into the con artists’ account.30 A research company backed by Elon Musk, OpenAI, created an algorithm that writes coherent text independently, creating the prospect of automated trolls able to do more than just spam; they could engage people in argument, push conspiracy theories and dilute meaningful public discussion. Fearful of misuse, OpenAI decided not to release the research.31, 32 While deepfakes are a threat, we can inform ourselves, prepare and respond. To become paranoid about deepfakes would itself have disastrous consequences, leading people to judge all documentation cynically.

v=cQ54GDm1eL0
28 www.youtube.com/watch?v=sDOo5nDJwgA
29 www.whichfaceisreal.com/
30 www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
31 amp.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?__twitter_impression=true; https://openai.com/blog/better-language-models/
32 www.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes
33 lab.witness.org/projects/synthetic-media-and-deep-fakes/
34 www.youtube.com/watch?time_continue=1&v=Qh_6cHw50l0
35 amp.axios.com/deepfake-authentication-privacy-5fa05902-41eb-40a7-8850-5450bcad0475.html?


pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

CHAPTER 11: OUR SKYNET MOMENT

230 The messages were powerful and personal: “We Are the 99 Percent,” tumblr.com, September 14, 2011, http://wearethe99percent.tumblr.com/page/231.
231 “AI systems must do what we want them to do”: “An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence,” Future of Life Institute, retrieved April 1, 2017, https://futureoflife.org/ai-open-letter/.
231 “unconstrained by a need to generate financial return”: Greg Brockman, Ilya Sutskever, and OpenAI, “Introducing OpenAI,” OpenAI Blog, December 11, 2015, https://blog.openai.com/introducing-openai/.
232 best friend of one autistic boy: Judith Newman, “To Siri, with Love,” New York Times, October 17, 2014, https://www.nytimes.com/2014/10/19/fashion/how-apples-siri-became-one-autistic-boys-bff.html.
234 overpopulation on Mars: “Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines,” Wired, May 2015, retrieved April 1, 2017, https://www.wired.com/brandlab/2015/05/andrew-ng-deep-learning-mandate-humans-not-just-machines/.
235 change how we think and how we feel: Emeran A.

Recently, a collection of scientific and Silicon Valley luminaries, including Stephen Hawking and Elon Musk, wrote an open letter recommending “expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” Groups such as the Future of Life Institute and OpenAI have been formed to study the existential risks of AI, and, as the OpenAI site puts it, “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” These are noble goals. But they may have come too late. We are already in the thrall of a vast, world-spanning machine that, due to errors in its foundational programming, has developed a disdain for human beings, is working to make them irrelevant, and resists all attempts to bring it back under control.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

, arXiv preprint arXiv:1705.09990 (2017); and Iyad Rahwan, “Society-in-the-Loop: Programming the Algorithmic Social Contract”, Ethics and Information Technology, Vol. 20, No. 1 (2018), 5–14. See also the work of OpenAI, an NGO which focuses on achieving safe artificial general intelligence: “Homepage”, Website of OpenAI, https://openai.com/, accessed 1 June 2018. The blog of OpenAI and Future of Humanity Institute researcher Paul Christiano also contains many valuable resources and discussions on the topic: https://ai-alignment.com/, accessed 1 June 2018.
4. See, for example, the UK Locomotive Act 1865, s.3.
5. Toby Walsh, Android Dreams (London: Hurst & Company, 2017), 111.

We cannot be sure that AI technology will not meet a similar plateau, even after it achieves a form of general intelligence.113 Notwithstanding these limitations, in recent years there have been several significant developments in the capabilities of AI. In January 2017, Google Brain announced that technicians had created AI software which could itself develop further AI software.114 Similar announcements were made around this time by the research group OpenAI,115 MIT,116 the University of California, Berkeley and DeepMind.117 And these are only the ones we know about—companies, governments and even some independent individual AI engineers are likely to be working on processes which go far beyond what those have yet made public. 6 Optimists, Pessimists and Pragmatists Commentators on the future of AI can be grouped into three camps: the optimists, the pessimists and the pragmatists.118 The optimists emphasise the benefits of AI and downplay any dangers.


pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

To that end, Elon Musk and Sam Altman, the president of the start-up incubator Y Combinator, cochair a nonprofit called OpenAI whose purpose is to help usher in the era of safe and beneficial AI. The initial blog post announcing its formation states, “Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.” Collectively, OpenAI’s backers have pledged close to a billion dollars of funding. The approach is to develop, among other things, open-source AI.

Working to develop the very thing you are worried about might not seem like the best plan, and while the founders acknowledge the risk, they point out that it is better if an AGI is built in an open, collaborative way with much discussion and debate, rather than by a small group with its own agenda. Critics counter that OpenAI may end up giving 99 percent of the formula to anyone who wants it, leaving us all at the mercy of whatever random extremist group or belligerent state happens to figure out the last percent, even if it never would have had the capability to sort the rest of it out on its own. So here we are. We find ourselves racing forward, trying to build something that has the potential to launch us into a perfect world or destroy us all.

pages: 315 words: 89,861

The Simulation Hypothesis
by Rizwan Virk
Published 31 Mar 2019

Given the response times available to AI algorithms, can we expect that AI will learn to play other video games, such as first-person shooters and fighting games? Recently, OpenAI, funded by Elon Musk, announced that its AI had learned to play DOTA 2, an extremely popular fantasy-themed multiplayer battle arena game. Competitive video gaming, or eSports, is played by professionals and has become a popular spectator sport in the same way sports such as basketball, baseball and football developed in the last century. OpenAI announced that a team of five bots was competitive enough to qualify to play against professional teams! This is an interesting twist, though not entirely unexpected.

The human mind, as we understand it, is an incredible learning machine, and every single character in the game, assuming characters are going through the normal cycle of birth and growth, would need to exhibit the ability to learn over time. If babies could suddenly speak complete sentences or speak languages they had never been taught, this might be an interesting clue that we are in some kind of simulation. Spatial Awareness. As Google’s DeepMind and Musk’s OpenAI showed, AI can learn to play video games. This means that they can become aware of a 2D space and examine pixels to see what’s going on. With competitive eSports games like DOTA2, this is even more significant because these games are like MMORPGs – they are a 3D world. For a bot to be able to fight and defeat an opponent within a world, the bot would need to be aware of the 3D space.


pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2020

It is our job, while keeping an eye on the post-2030 world, to ensure that the benefits are shared. Some AI research teams have pledged to do just that. OpenAI is a San Francisco-based firm set up to develop human-level artificial intelligence and to try to make sure the benefits are spread out fairly. In 2019, they attracted a $1 billion investment from Microsoft, most of which will be used to buy time on Microsoft data farms. Data processing – the learning time used to train AIs – consumes immense amounts of computing power. Greg Brockman, a cofounder of OpenAI, said the $1 billion would be burned away in under five years. We could invest a large sum, say $50 billion, to allow AI developers around the world more data processing time, in return for a commitment to sharing results and being transparent about what we’re developing

But the field was buoyed in 2019 when Google announced it had achieved ‘quantum supremacy’, meaning its quantum computer had solved, in 200 seconds, a mathematical problem that would have taken even our best regular computers thousands of years.8 The feat was compared with the Wright brothers’ first flight, as similar world-changing repercussions are expected from quantum computing as have been seen with air travel. What, then, if the almost supernatural processing power of quantum computing was paired with the form of AI known as machine learning, which is at the basis of most of the examples of AI we’ve discussed so far. Machine learning is computationally expensive: it’s why OpenAI will spend $1 billion mostly on data processing. If we could train algorithms using quantum computers, we could do it faster, more cheaply and more efficiently than we do at the moment. As yet, making quantum neural networks for deep learning is only an emerging field.9 There are formidable technical barriers.

Rather than program all the possible outcomes into the software – which is what software engineers used to try to do, with inevitable shortcomings – in machine learning with a neural network, the computer learns on its own. There has been spectacular success with a turbo form of machine learning called deep learning; it’s behind the ability of DeepMind’s AlphaGo and AlphaZero, and it’s the basis of a system developed by OpenAI called Generative Pre-trained Transformer, or GPT. A publicly available version called GPT-2 can generate original text, perhaps a sports report, a movie review, or maybe even poetry, when given a prompt. It is a kind of neural network that relies on what’s called unsupervised learning. That is, it has been exposed to lots of data (in this case, some 8 million text documents scraped off the internet), but like AlphaZero had to learn chess by itself, GPT-2 had to figure out what it all means by itself.
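
The recipe sketched above, learn from raw unlabeled text and then continue a prompt, can be shown at miniature scale with a word-bigram model. This stand-in is deliberately crude (GPT-2 is a large transformer, and the toy corpus here is invented), but the unsupervised setup is the same: no labels, just text.

```python
import random
from collections import defaultdict

# Toy "training" corpus of raw, unlabeled text.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Unsupervised "training": count which word follows which.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(prompt, n_words, seed=0):
    """Continue a prompt one word at a time, sampling observed successors."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(n_words):
        nxt = follows.get(words[-1])
        if not nxt:                      # dead end: no observed successor
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("the cat", 6))
```

Every generated word is one the model observed following its predecessor in the training text; GPT-2 does the analogous thing with vastly more context, parameters, and data.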

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

GPT-3 has an astonishing 175 billion parameters and was trained on some 45 terabytes of text data. See https://openai.com/blog/openai-api/ and for technical details: https://arxiv.org/abs/2005.14165. it does not understand: Of course this depends on what is meant by ‘understanding’. Some might say that human ‘understanding’ is no different in kind from the sort of ‘understanding’ displayed by GPT-3. The cognitive scientist Gary Marcus argues against this position, and I agree with him. See www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/. a five-hundred-word essay: ‘A robot wrote this entire article.

When the chatbot won, its response was ‘I feel about beating the turing [sic] test in quite convenient way.’ By lowering the bar this far, the test becomes much easier to pass. This was a test of human gullibility, and the humans failed. As AI continues to improve, the Turing test may soon be passed without such artificially low standards. In May 2020, the research lab OpenAI released GPT-3 – a vast artificial neural network trained on examples of natural language drawn from a large swathe of the internet. As well as engaging in chatbot-variety dialogue, GPT-3 can generate substantial passages of text in many different styles when prompted with a few initial words or lines.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Wall Street Journal (2018): www.wsj.com/articles/should-artificial-intelligence-copy-the-human-brain-1533355265?mod=searchresults&page=1&pos=1. FIGURE 1.2: The exponential growth in computing—300,000-fold—in the largest AI training runs. Source: Adapted from D. Hernandez and D. Amodei, “AI and Compute,” OpenAI (2018): https://blog.openai.com/ai-and-compute/. The number of new deep learning AI algorithms and publications has exploded (Figure 1.1), with exponential growth of machine recognition of patterns from enormous datasets. The 300,000-fold increase in petaflops (computing speed equal to one thousand million million [10^15] floating-point operations per second) per day of computing used in AI training further reflects the change since 2012 (Figure 1.2).
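
A quick back-of-the-envelope check of that figure: assuming the growth accrued over roughly six years (2012 to the 2018 analysis, an assumption made here for illustration), a 300,000-fold increase implies a compute doubling time of only a few months, far faster than Moore's law's roughly two years.

```python
import math

# A 300,000x growth in training compute over an assumed six-year window
# (2012-2018) implies a doubling time of a few months.
growth = 300_000
years = 6.0
doublings = math.log2(growth)                  # about 18 doublings
doubling_time_months = 12 * years / doublings

print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months")
```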

Source: Adapted from J. F. Bonnefon et al., “The Social Dilemma of Autonomous Vehicles,” Science (2016): 352(6293), 1573–1576. The concern for ethical breaches and harm led not only to the formation of the AI Now Institute but to many other efforts across the globe to promote the ethics and safety of AI, including OpenAI, Pervade, Partnership on AI, the Future of Life Institute, the AI for Good Summit, and academic efforts at UC Berkeley, Harvard, the University of Oxford, and Cambridge. Yet, as the AI Now Institute has pointed out, there is no tech company tracking its own adherence to ethical guidelines. That hit home for me when I read a recent Infosys AI healthcare report, “AI for Healthcare: Balancing Efficacy and Ethics.”64 Although the report claimed that the industry as a whole and the organizations in it need “to establish ethical standards and obligations,” it provided no indication of what those standards or obligations were.

Ng said, “Fearing a rise of killer robots is like worrying about overpopulation on Mars before we populate it,”79 whereas Musk has said that the potential rise of killer robots was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if AI goes rogue and turns on humanity.80 Musk’s deep concerns prompted him and Sam Altman to found a billion-dollar nonprofit institute called OpenAI with the aim of working for safer AI. In addition, he gave $10 million to the Future of Life Institute, in part to construct worst-case scenarios so that they can be anticipated and avoided.81 Max Tegmark, the MIT physicist who directs that institute, convened an international group of AI experts to forecast when we might see artificial general intelligence.

pages: 428 words: 121,717

Warnings
by Richard A. Clarke
Published 10 Apr 2017

“Machine learning and deep learning algorithms . . . we don’t fully understand today how they work.” The new explainable-AI initiative “will give the human operator more details about how the machine used deep learning to come up with the answer.”20 In 2015, business tycoons Elon Musk and Sam Altman created the OpenAI Institute, a nonprofit company that focuses on researching AI. Musk and Altman believe that by making all of OpenAI’s findings open-source and funding it by private donations, eliminating the need for financial return, they can ensure that AI will be developed for the benefit of all people, not for self-interested or destructive aims. They and others are so convinced of its importance that they have committed a total of $1 billion toward the initiative.


pages: 521 words: 118,183

The Wires of War: Technology and the Global Struggle for Power
by Jacob Helberg
Published 11 Oct 2021

AI powers self-driving cars and suggests movies we might like on Netflix. The Associated Press has used AI to draft basic articles. IBM’s Watson beat two of Jeopardy!’s greatest contestants and, for good measure, identified genes linked to degenerative illness. In June 2020, GPT-3, from the San Francisco company OpenAI, sent shock waves across the tech industry, showing that it was possible to algorithmically generate cogent, natural-sounding long-form text on almost any topic. The consulting firm PwC estimates that artificial intelligence will contribute an additional $15.7 trillion to global economic growth by 2030.

Within twenty-four hours, Microsoft pulled the plug on Tay. But these hiccups won’t hold back AI-powered language generation forever. Indeed, natural language processing is only getting more sophisticated, in ways that could be quite frightening. Better language abilities could make it easier for trolls to spread propaganda—and harder for us to identify them. In 2019, OpenAI fed an algorithm the words “Russia has declared war on the United States after Donald Trump accidentally…” The algorithm proceeded to generate the following realistic—and perilous—sentences: Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air. Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.”

R., 35 Macron, Emmanuel, 235 MacroPolo, 124, 223, 247 Malaysia, 82, 107, 109, 123 Mansted, Katherine, 135 Manuel, Don Juan, xiii–xiv Mao Zedong, 136, 202 Marine Corps, U.S., 52, 180, 231 Marriott, xvii–xviii, 152 Mattis, James, 52 Maven, Project, 178–81 May, Theresa, 73 Meleshevich, Kirill, 69 Merkel, Angela, 166, 267 Microsoft, 84, 104, 110, 134, 197, 244 natural language processing and, 140–41 Silicon Valley–Washington relations and, 166, 172, 180, 191 Modi, Narendra, 82–83, 212 Morgenthau, Hans, xiv Moriuchi, Priscilla, 94 Mossadegh, Mohammad, 53 Mubarak, Hosni, 37–39 Mueller, Robert, 66–67 Musk, Elon, 108, 154, 247 Nader, Ralph, xx–xxi Nakasone, Paul, 85, 228 National Aeronautics and Space Administration (NASA), 165, 232, 244 National Defense Authorization Acts, 123, 215, 251 National Defense Education Act, 232–33, 245 National Institute of Manufacturing, 241, 244 National Science Foundation, 23, 164–65, 244, 246 National Security Agency (NSA), xi, xvii, 30, 48, 208, 228 Snowden affair and, 166–68, 172 natural language processing, 80, 140–43, 253 Naughton, John, 116 Navalny, Alexei, 27, 39–40 Netherlands, 93, 110 New Knowledge, 57, 59 New York Times, xii, 28, 35, 47, 53, 57, 71–72, 82, 147, 161, 262 and elections of 2016, 12, 54, 67 Skripal case and, 74–76 New Zealand, 121–22 9/11, 141, 146, 168, 236 Ningsuan Technologies, 111 Nixon, Richard, 48, 208 Nokia, 119, 122, 130, 217–19 North Atlantic Treaty Organization (NATO), 27, 38, 81, 93, 122, 203, 211–12, 216 cyberattacks and, 30, 197–98, 211 Norway, 21, 104n, 214 nuclear weapons, xiv, 39, 141, 180, 208, 229 cyberattacks and, 45–46 deepfakes and, 138–39 Obama, Barack, 4, 7, 11, 35, 39, 49, 63, 100, 166–67, 205, 228, 234 on climate change, 206–7 competitiveness investing and, 236, 238, 242–43 cyberattacks and, 44, 47 deepfakes and, 136, 138–39, 144, 158 Ocasio–Cortez, Alexandria, 239 Office of Personnel Management (OPM), 44–45, 172, 184 Office of Technology Assessment, 174, 210 oil, 10, 26, 45, 99, 157, 238 
OpenAI, 132, 141 open radio access networks (RANs), 219 Orwell, George, xiii–xiv, 127, 148 Osnos, Evan, 32, 40 Pacific Deterrence Initiative, 266 Packard, David, 164, 208 Page, Larry, 15, 165 Pakistan, 138–39 China and, 94, 107, 109, 111, 152 Palantir, 6, 180 Pardo, Tamir, 145, 151, 153 Parkland school shooting, 55, 77, 138 Peele, Jordan, 136 Pelosi, Nancy, 93, 138, 173 Pence, Mike, 179 Perry, William, 208 Peskov, Dmitry, 57, 75 Peters, Gary, 241 Pichai, Sundar, 131, 247 Silicon Valley–Washington relations and, 178–79 tech industry congressional hearings and, 159–60 Pincus, Mark, 167 Podesta, John, 12, 48–49, 83–84, 143 Poland, 2, 27, 204 Politkovskaya, Anna, 26–27 Pompeo, Mike, 207, 269 Postel, Jon, 112–13 Poynter Institute, 261–62 Prigozhin, Yevgeniy, 56–57, 80 PRISM, 166–67 Putin, Vladimir, xiii, xxi, 19, 61–62, 69, 74, 76, 80, 86, 133, 135, 145, 172, 222, 229, 233 active measures and, 27–28, 201 cyberattacks and, 47, 201 domestic opposition to, 39–40 and elections of 2016, 49, 58 IRA disinformation and, 56, 58, 60, 63 rise to power of, 26–27 Russian military influence and, 27, 30, 41–42, 203 TV and, 51–52 U.S.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

” — ERIK BRYNJOLFSSON, MIT professor; author, The Second Machine Age and Machine, Platform, Crowd “ Prediction Machines is a must-read for business leaders, policy makers, economists, strategists, and anyone who wants to understand the implications of AI for designing business strategies, decisions, and how AI will have an impact on our society.” — RUSLAN SALAKHUTDINOV, Carnegie Mellon professor; Director of AI Research, Apple “I encounter so many people who feel excited but overwhelmed by AI. This book will ground those feeling lost by giving them a practical framework.” — SHIVON ZILIS, OpenAI Director and Partner, Bloomberg Beta “ The current AI revolution will likely result in abundance, but the process of getting there requires deliberation on tough topics that include increasing unemployment and income disparity. This book presents frameworks that allow decision makers to deeply understand the forces at play

The CDL’s dominance in this domain resulted partly from our location in Toronto, where many of the core inventions—in a field called “machine learning”—that drove the recent interest in AI were seeded and nurtured. Experts who were previously based in the computer science department at the University of Toronto today head several of the world’s leading industrial AI teams, including those at Facebook, Apple, and Elon Musk’s OpenAI. Being so close to so many applications of AI forced us to focus on how this technology affects business strategy. As we’ll explain, AI is a prediction technology, predictions are inputs to decision making, and economics provides a perfect framework for understanding the trade-offs underlying any decision.

Bill Gates advocated for a tax on robots that replace human labor. Sidestepping what would normally be government’s purview, the high-profile startup accelerator Y Combinator is running experiments on providing a basic income for everyone in society. Elon Musk organized a group of entrepreneurs and industry leaders to finance OpenAI with $1 billion to ensure that no single private-sector company could monopolize the field. Such proposals and actions highlight the complexity of these social issues. As we climb to the pyramid’s top, the choices become strikingly more complex. When thinking about society as a whole, the economics of AI are not so simple anymore.

pages: 292 words: 94,660

The Loop: How Technology Is Creating a World Without Choices and How to Fight Back
by Jacob Ward
Published 25 Jan 2022

By virtue of its mathematical structure, music is, in fact, one of the simplest human domains for AI to simulate. Services like Amper, Google’s Magenta, and Flow Machines only need a few suggestions from a human as to key, genre, mood, and beats per minute, and then almost instantly produce a backing track you could plausibly hear in a movie or behind a rising artist. OpenAI’s Jukebox even includes human singing that to my ear is indistinguishable from the real thing. This uncanny simulation of real music isn’t being done just to make us happier, of course. The larger purpose here is profit. A studio musician who belongs to the American Federation of Musicians charges at least $240 per recording session.
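The parameters-in, track-out workflow those services embody can be sketched in a few lines. The toy generator below is purely illustrative (nothing like the proprietary models Amper or Jukebox actually use): it maps a key and a tempo to a timed I-V-vi-IV chord progression.

```python
# Toy "backing track" generator: key + tempo in, timed chord events out.
# An illustrative sketch only; real services use far richer generative models.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def triad(key, degree):
    """Root-position triad on the given scale degree (1-based) of a major key."""
    root = NOTE_NAMES.index(key)
    return [NOTE_NAMES[(root + MAJOR_SCALE[(degree - 1 + 2 * k) % 7]) % 12]
            for k in range(3)]

def backing_track(key="C", bpm=120, bars=4):
    """One chord per 4/4 bar; returns (start_time_in_seconds, chord) events."""
    progression = [1, 5, 6, 4]  # the ubiquitous I-V-vi-IV pop progression
    bar_seconds = 4 * 60.0 / bpm
    return [(round(i * bar_seconds, 3), triad(key, progression[i % 4]))
            for i in range(bars)]

print(backing_track("C", bpm=120, bars=2))
# [(0.0, ['C', 'E', 'G']), (2.0, ['G', 'B', 'D'])]
```

Changing `key` or `bpm` reshapes the whole output, which is the essence of the parameter-driven generation the excerpt describes.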

Amper offers a complete song to use in an online advertising campaign for only $499. Put aside the aesthetics for a moment. Consider what capitalism will do with this. Imagine just how quickly companies will want to seize on AI to individually tailor never-ending, never-repeating, low-cost entertainment for each of us. As Miles Brundage, a researcher at OpenAI, wrote in 2020, “It seems safe to say that the era of almost-exclusively-human-generated and almost-never-individually-customized media will not last much longer.” It may not be that any of this is art, in the philosophical sense. It is, of course, all just an imitation, reverse engineered from the echo of an audience’s reaction to past art.

pages: 328 words: 90,677

Ludicrous: The Unvarnished Story of Tesla Motors
by Edward Niedermeyer
Published 14 Sep 2019

By 2017, however, there had been several unmistakable red flags: claiming that he tweets under the influence of a dangerous cocktail of Ambien and alcohol and admitting he might be bipolar certainly raised eyebrows in some corners. Combined with his ever-growing collection of ambitious ventures—including the Boring Company’s plan to revolutionize tunneling, the Hyperloop tunnel-based transport concept, Neuralink’s “implantable brain-computer interface,” and OpenAI’s effort to promote “friendly artificial intelligence”—it seemed that Musk was beginning to lose himself in an endless quest for more hype. His increasingly erratic behavior burst into the spotlight in 2018, when an escalating series of Twitter conflicts with journalists, analysts, and critics led to a full-scale assault on stock analysts and the media.

Edwards, 56 Department of Energy (DOE) loans from, 68–89, 118, 120, 121 as shareholder of Tesla, 82–86, 90 detractors, 102–108 Detroit, Michigan, 2, 4 Detroit Auto Show, 68 disruptive innovator, Tesla as, 195–197 DOE. see Department of Energy doors falcon-wing, 137–141 gull-wing, 136–137 Downey, California, 76 Drori, Ze’ev, 49–50, 65 Dunlay, Jim, 58 E Eberhard, Martin as advocate of Tesla, 67 founding of Tesla by, 21–24, 27–31, 35, 37–40 ouster of, 44–48, 50, 79 EBITDA, 215 Eisner, Michael, 45 Electrek, 97–101 electric vehicles (EVs), 3, 12–14, 24, 77, 202, 207 Energy Independence and Security Act, 67 Enron, 105 environmental issues, 112–113, 119 Esquire, 61 e-tron quattro, 203 EV1, 13, 24, 34 EVs. see electric vehicles F Facebook, 41 Falcon One, 28 falcon-wing doors, 137–141 FCW (Forward Collision Warning), 125 Ferrari, 60, 200–201 Fiat, 11, 34 financial crisis (2008), 75–76, 105 fixed costs, 54 Flextronics, 47 FOIA (Freedom of Information Act), 72, 131 Ford, Henry, 56, 194 Ford Focus, 159 Ford Fusion, 75 Ford Motor Company, 3, 4, 56, 75, 181, 194, 204, 216 Forward Collision Warning (FCW), 125 Founders Edition Roadster, 215 Freedom of Information Act (FOIA), 72, 131 Fremont, California, 53, 206, 218 funding (fundraising), 29, 40, 44–47, 50, 69–71, 85 G Gage, Tom, 27–29 Galileo Galilei, 105 Gao Yaning, 128 Gartner, 175 gas prices, 11, 14 General Motors (GM). see also specific models bankruptcy and bailout of, 2–3, 88 and electric cars, 11–13, 34 Impact concept car, 24 and Lotus, 36, 37, 53 OnStar system, 194 Germany, 203, 204 Ghosn, Carlos, 197–200 Gigafactory, 77, 183–184, 189, 218 GM. 
see General Motors G170J1-LE1 screens, 228 Goodwill Agreements, 149 Google, 44, 120–124, 171 Graham, Paul, 41 “A Grain of Salt” (blog post), 152–153 Grant, Charley, 100 “green car” companies, 11 GT Advanced Technologies (GTAT), 95–97 gull-wing doors, 136–137 H Harrigan, Mike, 30 Harris Ranch, 115–116, 119 Harvard Business School, 195 herd mentality, 96 Hethel, England, 49 Hoerbiger, 138–140 Holzhausen, Franz von, 137 Honda, 201 “How to Be Silicon Valley” (speech by Paul Graham), 41 Hyperloop, 16, 88, 217 I IDEO, 38 IGBT (insulated-gate bipolar transistor), 49 Impact concept car, 13, 24 imperfection, 55 incumbent companies, 196–197 innovation, 193–210 by Citroën, 193–195 disruptive, 195–197 by Carlos Ghosn, 197–200 by Tesla, 201–210 “Innovation Killers: How Financial Tools Destroy Your Capacity to Do New Things” (Christensen), 196–197 The Innovator’s Dilemma, 197 insulated-gate bipolar transistor (IGBT), 49 internal conflict, 29–32 InvestorsHub, 99 Israel, 4, 12 J Jaguar I-PACE, 202–203 Jivan, Jon, 98 Jonas, Adam, 172 K kaizen, 58, 60 Krafcik, John, 176 L Lambert, Fred, 98–101 Lamborghini, 204 Land Rover, 60 lead-acid batteries, 23–24, 197 Leech, Keith, 146–147, 156 Level 4 autonomous cars, 175–176 Level 5 autonomous cars, 170, 172, 175–176, 178 Lexus, 204 lithium-ion batteries, 22–24, 26, 34 “long tailpipe,” 110 losses, 11 Lotus, 36–37, 38, 43, 44, 49, 59 Lotus Elise, 28, 36, 37, 38, 40, 43 Lotus Evora, 59 “Ludicrous Mode,” 16 Lyons, Dave, 64 M Mac, Ryan, 218 Magna Powertrain, 48–49 Magna Steyr, 202 manufacturing, 180–192 of batteries, 183–184, 188–189 and continuous reiteration of Model 3s, 182–192 Elon Musk on, 180–182 preproduction as, 187–188 Marchionne, Sergio, 11 market saturation, 10 Marks, Michael, 47, 48, 50 Mars, 25 “Master Plan, Part Deux” (blog post), 164 McLaren F1, 25–26, 39 media hype, 88, 90–91, 93–95, 97–102, 130, 211–224 and base version of Model 3, 220–224 Elon Musk as cause of, 217–224 at Semi/Roadster unveiling, 211–215 as stock price 
stimulant, 215–216 Menlo Park, California, 28, 58 Michelin, 194 Miles, 11 Mobileye, 167–170 mobility technology, 11 Model 3, 8–10, 180–182 base version of, 220–224 production of, 182–192 Model S, 15, 74–75, 81–84, 90, 99, 135–137. see also Whitestar Model T, 56 Model X, 101, 134–145 Model Year 2008, 69 Moggridge, Bill, 38–39 Montana Skeptic, 105–108 Morgan Stanley, 172 Morris, Charles, 43 Motley Fool, 98 Musk, Elon on belief, 21 and branding of Tesla, 16–17 as cause of media hype, 217–224 childhood and personality of, 25–26 clientele knowledge of, 60 “cluelessness” of, 33–35 and culture of Tesla, 60 and Daimler, 68 detractors of, 102–108 and electric cars, 25–28 and Elise-Roadster conversion, 38–39 on financial viability of Tesla, 72–73 and fundraising, 44, 69–71 and loans, 70, 78 on manufacturing, 180–181, 190 on Model 3, 8–9 on Model S, 74 on Model X, 144–145 on obstacles faced by Tesla, 46 offers of, to sell Tesla, 120–121 on price increases, 71 and production process, 142, 165 as public figure, 15 on Series D, 47 and JB Straubel, 26 and stress, 64–67, 77–78 and Superchargers, 109–119 and Tesla cofounders, 29–32, 45, 47–48 on Tesla’s master plan, 21–22, 30–31, 58, 163 at town hall meeting, 70–71 and Whitestar, 51 Musk, Errol, 25 Musk, Justine, 25–26 Musk, Kimball, 65 N National Aeronautics and Space Administration (NASA), 66 National Highway Traffic Safety Administration (NHTSA), 127, 131–132, 149–162 National Transportation Safety Board (NTSB), 132, 167 NDAs. see non-disclosure agreements Neil, Dan, 59 Neuralink, 16, 217 New Mexico, 48, 67 New United Motor Manufacturing, 53 New York Times, 2, 30, 66 NHTSA. 
see National Highway Traffic Safety Administration Nissan Leaf, 198 Nissan-Renault Alliance, 197–200, 207 Noble M12, 27 nondisclosure agreements (NDAs), 5, 149–151, 152, 155–156 Norway, 12 NTSB (National Transportation Safety Board), 132, 167 NUMMI plant, 76, 81 Nürburgring, 203 NuvoMedia, 23 O Occupy Wall Street, 80–81 Ohno, Taiichi, 57 OnStar, 194 Opel, 36 Opel Speedster, 36 OpenAI, 217 operating profits and losses, 89 P Packet Design, 23 Page, Larry, 44 Paine, Chris, 13, 64, 71, 73–74 Panasonic, 77 Pandora, 41 PayPal, 16, 28 Peak Oil, 11 Pinnacle Research, 25 platforms, 135–136 Porsche, 24, 26, 39, 203–204 Porsche 911, 39 power electronics module (PEM), 49 Powertrain Technology, 58 Prenzler, Christian, 100 preproduction, 187–188 price increases, 71 Prius, 24 profitability, 81–82, 89 Project Better Place, 4–5, 11–12 public, going, 80–81 Q quality, 55, 59–60 Quality Control Systems, 131 R Ranger, 60 Reddit, 97, 99–100 reliability, 143 Renault Kwid, 207 Renault Zoe, 198 Reuters, 66 Revenge of the Electric Car (film), 64 Roadster as Elise conversion, 37–39 launch of, 14–15, 29, 42, 47–51, 59–61 new model of, 211–215 profitability of, 71–72, 81 securing investments for, 44, 45 and Tesla startup, 2–3 robotaxis, 166–167 Rogan, Joe, 219 Rosen, Harold, 26 Rosen Motors, 26 S Saleen, 99–100 San Carlos, California, 28 San Francisco, California, 59 San Jose, California, 75–76 Santa Monica, California, 45 Saudi Arabia, 218–219 Schwarzenegger, Arnold, 45 Scion xB, 27 Seagate, 23 “The Secret Tesla Motors Master Plan” (blog post), 21 Securities and Exchange Commission (SEC), 67, 160, 219–220, 224, 234 Seeking Alpha, 103, 105–107 self-driving cars, 120–133 Semi, 211–215 Senate Finance Committee, 67 Series A funding, 29 Series C funding, 40, 44–45 Series D funding, 46, 47 Series E funding, 50 S 40 model, 84 Shashua, Amnon, 167–170 Silicon Valley, 4, 14, 15, 17, 45, 53, 54, 58 Siry, Darryl, 65, 73 60 Minutes, 66 S 60 model, 84 “skateboard” chassis, 134, 202 Skype, 41 Smart (Tesla 
car), 68 software startups, 54–55 SolarCity, 110–111, 164 solar power, 109–114 Sorbonne University, 66 South Africa, 25 SpaceX, 15, 16, 25, 28, 39, 66, 78, 100 Spiegel, Mark, 102–103 Stanford University, 4, 26, 27, 28, 121 startups, 41–43, 59, 62, 76 “stealth recalls,” 160–161 stock price, 89, 90, 93, 97, 100, 102–103 StockTwits, 98 Straubel, JB, 26, 28, 48 SunCube, 146–147 Superchargers, 109–119 SYNC, 194 T TACC (Traffic Aware Cruise Control), 125 Tama, 197 Tarpenning, Marc, 21–24, 27, 31, 37, 43, 113 Tea Party movement, 80–81 “Tesla Death Watch” (blog posts), 3 Tesla Energy Group, 68 Tesla Founders Blog, 50 Tesla Motors. see also specific headings and barriers to entry, 35, 56 branding of, 16–17, 18, 59–63, 225–234 and collisions, 127–133 concept of, 34–36 continuous improvement at, 58 culture of, 51–52, 60 detractors of, 102–108 as disruptive innovator, 195–197 EBITDA of, 215 and environmental issues, 112–113, 119 “factory-less” model of, 35–36 innovation by, 201–210 internal conflict at, 29–32 legacy of, 19 Model 3 introduced by, 8–10 personal approach to public relations, xii raising capital for, 44, 69–71, 85 “shaky ground” of, 4, 5 as startup, 2–3 stock price of, 89, 90, 93, 97, 100, 102–103 strategy of, 22 and Supercharger network, 109–119 and whistleblowers, xii Tesla Motors Club (TMC), 95–97 Teslarati, 100 “Tesla stare,” 60 “Tesla Suspension Breakage: It’s Not the Crime, It’s the Coverup” (blog post), 151 Thailand, 48, 218 Think Global, 11, 67 Thrun, Sebastian, 121 TMC (Tesla Motors Club), 95–97 Too Big to Fail, 91 Toyoda, Akio, 76 Toyoda, Sakichi, 57 Toyota, 184, 201. see also specific models auto sales, 11 contract with, 81, 83 electric vehicles of, 159–160 and 2008 financial crisis, 76–77 pragmatism of, 209 safety scandal, 149–151 Toyota Previa, 214 Toyota Production System (TPS), 56–60, 76–77, 142, 183 Toyota Way, 58, 77 TPS. 
see Toyota Production System Traction Avant, 193–194 trading volume, 89 Traffic Aware Cruise Control (TACC), 125 The Truth About Cars (TTAC) (blog), 1–3 Tse, Bernard, 67 turnarounds, financial, 83–87 Twitter, 41, 98, 104–108, 113, 152, 156, 217–220, 224, 236 tzero, 23–24, 26, 27, 31, 37 V Valor Equity Partners, 47 Vance, Ashlee, 38, 47, 66, 73, 84, 120–121, 137, 227–228 VantagePoint Capital Partners, 66 variable costs, 54 V8 engine, 62 Volkswagen, 11, 171, 203–205 W Wall Street Journal, 2, 18, 100, 129, 132, 168, 187 Waymo, 173–174 Web 2.0, 41 Weintraub, Seth, 97–98, 101 Wharton School of Business, 25 whistleblowers, xii Whitestar, 46–48, 51, 65, 67, 68, 73 Who Killed the Electric Car?

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

[xcviii] In December 2015, Elon Musk and Sam Altman, president of the technology incubator Y Combinator, announced the formation of a new company called OpenAI. They had recruited a clutch of top machine learning professionals despite the efforts of Google and Facebook to hang onto them with eye-watering financial offers. There is some uncertainty about whether other companies controlled by Musk and Altman (like Tesla and SolarCity) will have privileged access to technologies developed at OpenAI, but the thrust of the company is to make advanced AI techniques more widely available in the hope that this will de-risk them.[xcix] Because it works, the use of machine learning will continue to grow – fast.

[xcii] http://www.bloomberg.com/news/2014-12-23/speech-recognition-better-than-a-human-s-exists-you-just-can-t-use-it-yet.html [xciii] http://www.forbes.com/sites/parmyolson/2014/05/28/microsoft-unveils-near-real-time-language-translation-for-skype/ [xciv] http://www.technologyreview.com/news/544651/baidus-deep-learning-system-rivals-people-at-speech-recognition/#comments [xcv] https://youtu.be/V1eYniJ0Rnk?t=1 [xcvi] http://edge.org/response-detail/26780 [xcvii] http://techcrunch.com/2016/03/19/how-real-businesses-are-using-machine-learning/ [xcviii] http://www.latimes.com/business/technology/la-fi-cutting-edge-ibm-20160422-story.html [xcix] http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ [c] http://www.strategyand.pwc.com/global/home/what-we-think/innovation1000/top-innovators-spenders#/tab-2015 [ci] 2013 data: http://www.ons.gov.uk/ons/rel/rdit1/gross-domestic-expenditure-on-research-and-development/2013/stb-gerd-2013.html [cii] http://insights.venturescanner.com/category/artificial-intelligence-2/ [ciii] http://techcrunch.com/2015/12/25/investing-in-artificial-intelligence/ [civ] http://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/ [cv] https://www.theguardian.com/technology/2016/apr/13/google-updates-tensorflow-open-source-artificial-intelligence [cvi] http://www.wired.com/2015/12/facebook-open-source-ai-big-sur/ [cvii] The name Parsey McParseFace is a play on a jokey name for a research ship which received a lot of votes in a poll run by the British government in April 2016. http://www.wsj.com/articles/googles-open-source-parsey-mcparseface-helps-machines-understand-english-1463088180 [cviii] Assuming you don't count the Vatican as a proper country. 
http://www.ibtimes.co.uk/google-project-loon-provide-free-wifi-across-sri-lanka-1513136 [cix] https://setandbma.wordpress.com/2013/02/04/who-coined-the-term-big-data/ [cx] http://www.pcmag.com/encyclopedia/term/37701/amara-s-law [cxi] http://www.lrb.co.uk/v37/n05/john-lanchester/the-robots-are-coming [cxii] Haitz's Law states that the cost per unit of useful light emitted decreases exponentially [cxiii] http://computationalimagination.com/article_cpo_decreasing.php [cxiv] http://www.nytimes.com/2006/06/07/technology/circuits/07essay.html [cxv] http://arstechnica.com/gadgets/2015/02/intel-forges-ahead-to-10nm-will-move-away-from-silicon-at-7nm/ [cxvi] .

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

It’s more speculative, aimed at convincing members to pick a truly new, weird area to examine. Lately the talk has been heavily about artificial intelligence (AI), and the dark magic of writing algorithms that can learn on their own; at least six of Sanghvi’s members have wound up working at Google Brain or the nonprofit OpenAI initiative. Six start-ups that have come out of the Commons were founded by women. That’s an achievement too: Getting more women into critical founder roles means they can deeply influence the trajectory of their firm, and benefit from its success. Sanghvi remembers having to argue over getting a fair share of Facebook’s value.

So we could wake up one day, fifteen years from now, to discover that, whoops, someone in Shenzhen has almost accidentally produced a superintelligence. Given that, a phalanx of AI experts has begun to prepare now. “AI is a fundamental risk to the existence of human civilization,” Tesla founder Elon Musk said, and he followed up on his warning by investing in OpenAI, a think tank devoted to planning for “responsible” AI—smart machines that won’t, or can’t, rise up to kill us. If you wanted some comfort, though, consider that of the AI experts I’ve spoken to—the people who, unlike Bostrom and even Musk, build AI all day long—most were considerably less worried about ultraintelligent machines emerging suddenly.

“I think there’s real risks of AI that should be thought about,” Martiros agrees. Tons of firms worldwide are all fantasizing about a “general” AI that could think in human terms. “It’d be a trillion-dollar industry, and it’s not implausible. We can’t predict these things.” He’s in favor of groups like OpenAI pondering the hard questions. So for my friends who want to know about superhuman AI? I’d love to have a definite answer, but I can’t offer one. It could be in our lifetimes; it could not. The Association for the Advancement of Artificial Intelligence surveyed 193 of its members, asking them when a Bostrom-like “superintelligence” would emerge.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

The results of Google’s initial tests were impressive: after being instructed to build a neural network capable of carrying out a common image-labeling task, Google’s AI was able to build and train a model that was more accurate than the one Google’s own engineers had programmed. And journalists? Forget it. Many of us are eminently automatable, especially those of us whose output tends to be more routine and predictable. In 2020, several publications began experimenting with GPT-3, an advanced AI program developed by the nonprofit research lab OpenAI. The program, which takes a prompt and uses machine learning to complete it, was able to produce long, cogent pieces of writing that amazed human editors with their clarity and style. One publication, The Guardian, used GPT-3 to write an entire op-ed about the future of AI and machine learning, and concluded that “overall, it took less time to edit than many human op-eds.”

In 2019, Senators Cory Booker and Ron Wyden, along with Representative Yvette Clarke, introduced something similar with the “Algorithmic Accountability Act,” which would authorize the Federal Trade Commission to audit “highly sensitive automated decision systems,” such as algorithms used for screening job candidates, for evidence of bias or flawed design. Responsible tech companies can also help, by slowing down and considering how their new AI tools could be misused before making them publicly available. In 2019, OpenAI, the nonprofit AI lab, set a good example of responsible deployment when it withheld the full version of its new text generation algorithm, GPT-2. Experts had voiced concerns that GPT-2—which used AI to predict the next words in a sequence and could finish submitted samples of partial texts in an eerily humanlike way—could be used to spread fake news or computer-generated propaganda.
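The core task behind GPT-2 and GPT-3, predicting a likely next word given the words so far, can be illustrated with a toy bigram counter. This is a deliberately tiny stand-in for the transformer models OpenAI actually trains, and the corpus below is invented for the example:

```python
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = ("russia has declared war on the united states after "
          "the united states imposed new sanctions")
model = train_bigrams(corpus)
print(predict_next(model, "united"))  # states
```

GPT-2 replaces these literal counts with a learned neural estimate over a huge vocabulary and a much longer context window, which is what lets its completions stay coherent over whole paragraphs rather than single word pairs.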

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

Remember, however, that all these tasks are much simpler than the real world: they are fully observable, they involve short time horizons, and they have relatively small state spaces and simple, predictable rules. Relaxing any of these conditions means that the standard methods will fail. Current research, on the other hand, is aimed precisely at going beyond standard methods so that AI systems can operate in larger classes of environments. On the day I wrote the preceding paragraph, for example, OpenAI announced that its team of five AI programs had learned to beat experienced human teams at the game Dota 2. (For the uninitiated, who include me: Dota 2 is an updated version of Defense of the Ancients, a real-time strategy game in the Warcraft family; it is currently the most lucrative and competitive e-sport, with prizes in the millions of dollars.)
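The kind of standard method that thrives in such small, fully observable environments can be sketched with tabular Q-learning on a five-state toy problem (an illustrative example of mine; OpenAI Five itself relied on large-scale deep reinforcement learning, not a table):

```python
import random

# Toy chain MDP: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 pays reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learn(episodes=2000, alpha=0.5, gamma=0.9, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.randrange(2)  # explore uniformly; Q-learning is off-policy
            s2, r, done = step(s, a)
            # Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
policy = [q[s].index(max(q[s])) for s in range(GOAL)]
print(policy)  # the learned greedy policy is "always move right"
```

Relax any of the conditions the paragraph lists (full observability, short horizons, a small state space) and this table blows up, which is why Dota 2 required deep networks and massive amounts of self-play instead.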

The DQN system that learns to play a wide variety of video games using deep RL: Volodymyr Mnih et al., “Human-level control through deep reinforcement learning,” Nature 518 (2015): 529–33. 63. Bill Gates’s remarks on Dota 2 AI: Catherine Clifford, “Bill Gates says gamer bots from Elon Musk-backed nonprofit are ‘huge milestone’ in A.I.,” CNBC, June 28, 2018. 64. An account of OpenAI Five’s victory over the human world champions at Dota 2: Kelsey Piper, “AI triumphs against the world’s top pro team in strategy game Dota 2,” Vox, April 13, 2019. 65. A compendium of cases in the literature where misspecification of reward functions led to unexpected behavior: Victoria Krakovna, “Specification gaming examples in AI,” Deep Safety (blog), April 2, 2018. 66.

E., 219, 221, 222 Moore’s law, 34–35 Moravec, Hans, 144 Morgan, Conway Lloyd, 18 Morgenstern, Oskar, 23 Mozi (Mozi), 219 multi-agent cooperation design, 94 Musk, Elon, 153, 164 “Myth of Superhuman AI, The” (Kelly), 148 narrow (tool) artificial intelligence, 46, 47, 136 Nash, John, 30, 195 Nash equilibrium, 30–31, 195–96 National Institutes of Health (NIH), 155 negative altruism, 229–30 NELL (Never-Ending Language Learning) project, 81 nerve nets, 16 NET-VISA, 279–80 Network Enforcement Act (Germany), 108, 109 neural dust, 164–65 Neuralink Corporation, 164 neural lace, 164 neural networks, 288–89 neurons, 15, 16, 19 Never-Ending Language Learning (NELL) project, 81 Newell, Allen, 295 Newton, Isaac, 85–86 New Yorker, The, 88 Ng, Andrew, 151, 152 Norvig, Peter, 2, 62–63 no suicide rule, 287 Nozick, Robert, 223 nuclear industry, 157, 249 nuclear physics, 7–8 Nudge (Thaler & Sunstein), 244 objectives, 11–12, 43, 48–61, 136–42, 165–69. See also goals off-switch game, 196–200 onebillion (software system), 70 One Hundred Year Study on Artificial Intelligence (AI100), 149, 150 OpenAI, 56 operations research, 10, 54, 176 Oracle AI systems, 161–63 orthogonality thesis, 167–68 Ovadya, Aviv, 108 overhypothesis, 85 overly intelligent AI, 132–44 fear and greed, 140–42 gorilla problem, 132–36 intelligence explosions and, 142–44, 208–9 King Midas problem, 136–40 paperclip game, 194–96 Parfit, Derek, 225 Partnership on AI, 180, 250 Pascal, Blaise, 21–22, 40 Passage to India, A (Forster), 254 Pearl, Judea, 54, 275 Perdix (drone), 112 Pinker, Steven, 158, 165–66, 168 Planet (satellite corporation), 75 Politics (Aristotle), 114 Popper, Karl, 221–22 Popular Science, 152 positional goods, 230–31 practical reasoning, 20 pragmatics, 204 preference autonomy principle, 220, 241 preferences.

The Deep Learning Revolution (The MIT Press)
by Terrence J. Sejnowski
Published 27 Sep 2018

Computer scientists signed pledges not to use AI for military purposes. Stephen Hawking and Bill Gates made public statements warning of the existential threat posed by AI. Elon Musk and other Silicon Valley entrepreneurs set up a new company, OpenAI, with a one-billion-dollar nest egg and hired Ilya Sutskever, one of Geoffrey Hinton’s former students, to be its research director. Although OpenAI’s stated goal was to ensure that future AI discoveries would be publicly available for all to use, it had another, implicit and more important goal—to prevent private companies from doing evil. For, with AlphaGo’s victory over world Go champion Lee Sedol, a tipping point had been reached.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

From the Field of AI Dreams

People are right to be excited about advances in digital technologies. New machine capabilities can massively expand the things we do and can transform many aspects of our lives for the better. And there have also been tremendous advances. For example, the Generative Pre-trained Transformer 3 (GPT-3), released in 2020 by OpenAI, and ChatGPT, released in 2022 by the same company, are natural-language processing systems with remarkable capabilities. Already trained and optimized on massive amounts of text data from the internet, these programs can generate almost human-like articles, including poetry; communicate in typical human language; and, most impressively, turn natural-language instructions into computer code.

Verge, July 5. www.theverge.com/2021/7/5/22563751/tesla-elon-musk-full-self-driving-admission-autopilot-crash. Heaven, Will Douglas. 2020. “Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?” MIT Technology Review, October 15. www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai. Heldring, Leander, James Robinson, and Sebastian Vollmer. 2021a. “The Economic Effects of the English Parliamentary Enclosures.” NBER Working Paper no. 29772. DOI:10.3386/w29772. Heldring, Leander, James Robinson, and Sebastian Vollmer. 2021b. “The Long-Run Impact of the Dissolution of the English Monasteries.”

“Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation,” McKinsey Global Institute, December. https://www.mckinsey.com/~/media/BAB489A30B724BECB5DEDC41E9BB9FAC.ashx. Marantz, Andrew. 2020. Antisocial: Online Extremists, Techno-Utopians and the Hijacking of the American Conversation. New York: Penguin. Marcus, Gary, and Ernest Davis. 2020. “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About.” MIT Technology Review, August 22. Marcus, Steven. 1974 [2015]. Engels, Manchester, and the Working Class. Routledge: London. Marens, Richard. 2011. “We Don’t Need You Anymore: Corporate Social Responsibilities, Executive Class Interests, and Solving Mizruchi and Hirschman’s Paradox.” https://heinonline.org/HOL/Page?

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

For example, the data used to train the AI may be insufficient and inadequately represent race or gender demographics. One company’s recruiting department may find that its AI algorithms are biased against women because the training data didn’t include enough women. Or the data may be biased because it was collected from a biased society. Microsoft’s Tay and OpenAI’s GPT-3 were both known to make inappropriate remarks about minority groups. Recently, research has shown that AI is able to infer sexual orientation with high accuracy based on facial micro-expressions. Such abilities could lead to discrimination. This is similar to what happened to Sahej in “The Golden Elephant,” when his Dalit status was found not directly but by inference.

With enough natural data and sufficient processing power, the system can learn on its own to detect arrival and departure times, and a great deal more. After Google’s transformer work, a better-known extension called GPT-3 (GPT stands for “generative pre-trained transformer”) was released in 2020 by OpenAI, a research laboratory founded by Elon Musk and others. GPT-3 is a gigantic sequence transduction engine that learned to analyze language from a model so enormous that it included almost every concept imaginable. Leveraging one of the most powerful supercomputers in the world, GPT-3 was trained on more than 45 terabytes of text, which would take 500,000 lifetimes for a human to read.

Ubuntu 15.04 Server with systemd: Administration and Reference
by Richard Petersen
Published 15 May 2015

You can use the service command to start and stop corosync. It separates the core infrastructure from the clustering services. Derived from the OpenAIS project, Corosync provides the underlying cluster infrastructure rather than the clustering service APIs. You can find out more about Corosync at http://www.corosync.org/. Corosync is a plug-in cluster engine with a modular design. Modules, known as service engines, are plugged in to the Corosync engine to make use of Corosync cluster services. Corosync components include the Totem communications protocol, which is based on the OpenAIS virtual synchrony communications model; a live component replacement (LCR) plugin system; an object database for the service engines and their configuration; a logging system; and an inter-process communications (IPC) manager.

Service engine modules include configuration support for the LDAP and corosync/openais file formats, the cluster manager (Pacemaker), which operates as part of Corosync, and fencing (both fence and fence agents). Corosync is configured by the /etc/corosync.conf configuration file. Currently there are four directives, forming blocks, within which options can be specified. They are the same as those used for OpenAIS. The four directives are totem for the Totem protocol, logging, amf for the AMF service, and event for the event service. See the corosync.conf man page for a complete description of directives and options. Corosync uses its own protocol, called Totem, to perform multicast communications. Totem configuration is specified in the totem directive of the corosync.conf file as shown here.
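A minimal /etc/corosync.conf illustrating the four-directive layout described above (the network address and port values are placeholders for illustration, not values from this chapter):

```
totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        to_syslog: yes
        to_logfile: no
}

amf {
        mode: disabled
}

event {
        # options for the event service
}
```

Each directive forms a block of option: value pairs; consult the corosync.conf man page for the options valid within each block.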

GFS2 now works through the Corosync Cluster Engine. You would use Corosync cluster commands for your cluster. GFS2 tools have been placed in the gfs2-utils package, and the Distributed Lock Manager (DLM) commands in the dlm package. Many former cluster packages and applications have been deprecated with Ubuntu, including cman, rgmanager, openais, heartbeat, luci, and system-config-cluster. Though lower-level GFS commands are available in the gfs2-utils package, you are expected to use Corosync and Pacemaker commands to manage your clusters. To run a cluster, you need both a cluster manager and a locking mechanism. Pacemaker with the Distributed Lock Manager (dlm) implements cluster management and locking.

pages: 309 words: 79,414

Going Dark: The Secret Social Lives of Extremists
by Julia Ebner
Published 20 Feb 2020

In coming years, newly released AI tools – so-called ‘deep fakes’ – could further enhance the professionalism of extremist online campaigns. AI can write newspaper articles and books,2 generate pictures of people that don’t exist3 and manipulate faces in real time.4 Such technologies could be used to produce hoax articles, create social bots, change footage and edit speeches. In early 2019, the NGO OpenAI decided against the release of its ‘deep fakes for text’ tool because its researchers feared misuse.5 Even without such sophisticated AI tools, we are already seeing the effects of tech-savvy extremist campaigns. They have exacerbated political and societal fragmentation and accelerated the populist surge across Europe and the US.

M. here, here b4bo here bin Laden, Osama here, here, here birthrates here, here Bissonnette, Alexandre here, here BitChute here bitcoin here, here, here Blissett, Luther here Bloc Identitaire here blockchain technology here bloggers here Blood & Honour here Bloom, Mia here Bloomberg, Michael here Böhmermann, Jan here Bowers, Robert here Breed Them Out here Breitbart here, here, here Breivik, Anders Behring here, here ‘Brentonettes’ here Brewer, Emmett here Brexit here, here Britain First here British National Party (BNP) here, here, here Broken Heart operation here Brown, Dan here Bubba Media here Bumble here, here Bundestag hack here, here BuzzFeed here C Star here, here ‘Call of Duty’ here, here Cambridge Analytica here, here Camus, Renaud here Carroll, Lewis here CBS here Channel programme here Charleston church shooting here Charlie Hebdo here Charlottesville rally here, here, here, here, here, here, here, here, here Chemnitz protests here, here Choudary, Anjem here Christchurch terror attacks here, here, here, here Christian identity here Chua, Amy here CIA here, here, here Clinton, Bill and Hillary here, here, here, here, here, here, here Cohn, Norman here Collett, Mark here Cologne rape crisis here Combat here, here Comey, James here Comvo here concentration camps here Conrad, Klaus here Conservative Political Action Conference here Constitution for the Ethno-State here Corem, Yochai here counter-extremism legislation here counter-trolling here Covington, Harold here Crash Override Network here Crusius, Patrick here cryptocurrencies here, here, here, here Cuevas, Joshua here Cyberbit here Cyborgology blog here ‘Daily Shoah’ podcast here Daily Stormer here, here, here, here, here, here, here, here, here Weev and here Damore, James here Dark Net here Data and Society Research Institute here Davey, Jacob here Dawkins, Richard here, here De La Rosa, Veronique here de Turris, Gianfranco here Dearden, Lizzie here deep fakes here, here DefCon here, here Der Spiegel 
here Deutsche Bahn here Diana, Princess of Wales here, here Die Linke here Die Rechte here ‘digital dualism’ here digital education here disinformation here, here, here Disney here Domestic Discipline here, here Donovan, Joan here Doomsday preppers here doubling here Dox Squad here, here doxxing here, here, here, here, here Doyle, Laura here, here Draugiem here DTube here Dugin, Alexander here Dunning–Kruger Effect here Dutch Leaks here Dylan, Bob here Earnest, John here 8chan here, here, here, here, here, here, here, here EKRE (Estonian fascist party) here El Paso shooting here Element AI here Emanuel, Rahm here encryption and steganography here Encyclopedia Dramatica here English Defence League here, here, here, here Enoch, Mike here environmentalism here, here ethno-pluralism here, here ‘Eurabia’ here, here ‘European Israel’ here European National here European Parliament elections here European Spring here Evola, Julius here executions here Facebook friends here fashions and lifestyles here, here Fawcett, Farah here Faye, Guillaume here FBI here, here, here, here, here Fearless Democracy here, here FedEx here Feldman, Matthew here Ferdinand II, King of Aragon here Fiamengo, Janice here Fields, James Alex here Fight Club here Finkelstein, Robert here Finsbury Mosque attack here, here, here Fisher, Robert here Foley, James here Follin, Marcus here football hooligans here, here Football Lads Alliance (FLA) here For Britain party here Fortnite here 4chan here, here, here, here, here, here, here, here, here FPÖ (Austrian Freedom Party) here, here, here, here, here Frankfurt School here Fransen, Jayda here Fraternal Order of Alt-Knights here Freedom Fighters, The here freedom of speech here, here, here, here F-Secure here FSN TV here Gab here, here, here, here, here, here Gamergate controversy here GamerGate Veterans here gamification here, here, here, here, here, here, here, here Ganser, Daniele here Gates of Vienna here Gateway Pundit here Gawker here GCHQ here GE 
here GellerReport here Generation Identity (GI) here, here, here, here, here, here, here, here Generation Islam here genetic testing here, here German elections here, here German Institute on Radicalization and De-Radicalization Studies here German National Cyber Defence Centre here Gervais, Ricky here Ghost Security here Giesea, Jeff here Gigih Rahmat Dewa here Gionet, Tim here gladiators here Global Cabal of the New World Order here global financial crisis here, here global warming here GNAA here Goatse Security here GOBBLES here Goebbels, Joseph here GoFundMe here Goldy, Faith here Goodhart, David here ‘Google’s Ideological Echo Chamber’ here Gorbachev, Mikhail here Graham, Senator Lindsey here Gratipay here Great Awakening here, here Great Replacement theory here, here, here, here, here ‘Grievance Studies’ here grooming gangs here, here Guardian here, here H., Daniel here Habeck, Robert here HackerOne here hackers and hacking here ‘capture the flag’ operations here, here denial of service operations here ethical hacking here memory-corruption operations here political hacking here ‘qwning’ here SQL injections here techniques here Halle shooting here Hamas here, here Hanks, Tom here Happn here Harris, DeAndre here ‘hashtag stuffing’ here Hate Library here HateAid here, here Hatreon here, here, here Heidegger, Martin here Heise, Thorsten here, here Hensel, Gerald here, here Herzliya International Institute for Counter-Terrorism here Heyer, Heather here, here, here Himmler, Heinrich here Hintsteiner, Edwin here Histiaeus here Hitler, Adolf here, here, here, here, here Mein Kampf here, here Hitler salutes here, here, here, here Hitler Youth here HIV here Hizb ut-Tahrir here, here, here Höcker, Karl-Friedrich here Hofstadter, Richard here Hollywood here Holocaust here Holocaust denial here, here, here, here, here Holy War Hackers Team here Home Office here homophobia here, here, here Hooton Plan here Hoover Dam here Hope Not Hate here, here, here Horgan, John here 
Horowitz Foundation here Hot or Not here House of Saud here Huda, Noor here human trafficking here, here Hussein, Saddam here, here Hutchins, Marcus here Hyppönen, Mikko here Identity Evropa here, here iFrames here Illuminati here Incels (Involuntary Celibacy) here, here Independent here Inkster, Nigel here Institute for Strategic Dialogue (ISD) here, here, here, here, here, here, here, here Intelius here International Business Times here International Centre for the Study of Radicalisation (ICSR) here International Federation of Journalists here International Holocaust Memorial Day here International Institute for Strategic Studies here Internet Research Agency (IRA) here iPads here iPhones here iProphet here Iranian revolution here Isabella I, Queen of Castile here ISIS here, here, here, here, here, here, here, here, here, here, here, here hackers and here, here, here, here, here Islamophobia here, here, here, here, here, here, here Tommy Robinson and here, here see also Finsbury Mosque attack Israel here, here, here, here, here Israel Defense Forces here, here Jackson, Michael here jahiliyya here Jakarta attacks here Jamaah Ansharud Daulah (JAD) here Japanese anime here Jemaah Islamiyah here Jesus Christ here Jewish numerology here Jews here, here, here, here, here, here, here, here, here see also anti-Semitism; ZOG JFG World here jihadi brides here, here JihadWatch here Jobs, Steve here Johnson, Boris here Jones, Alex here Jones, Ron here Junge Freiheit here Jurgenson, Nathan here JustPasteIt here Kafka, Franz here Kampf der Niebelungen here, here Kapustin, Denis ‘Nikitin’ here Kassam, Raheem here Kellogg’s here Kennedy, John F. 
here, here Kennedy family here Kessler, Jason here, here Khomeini, Ayataollah here Kim Jong-un here Kohl, Helmut here Köhler, Daniel here Kronen Zeitung here Kronos banking Trojan here Ku Klux Klan here, here Küssel, Gottfried here Lane, David here Le Loop here Le Pen, Marine here LeBretton, Matthew here Lebron, Michael here Lee, Robert E. here Li, Sean here Li family here Libyan Fighting Group here LifeOfWat here Lifton, Robert here Littman, Gisele here live action role play (LARP) here, here, here, here, here, here lobbying here Lokteff, Lana here loneliness here, here, here, here, here, here, here Lorraine, DeAnna here Lügenpresse here McDonald’s here McInnes, Gavin here McMahon, Ed here Macron, Emmanuel here, here, here, here MAGA (Make America Great Again) here ‘mainstream media’ here, here, here ‘Millennium Dawn’ here Manosphere here, here, here March for Life here Maria Theresa statue here, here Marighella, Carlos here Marina Bay Sands Hotel (Singapore) here Marx, Karl here Das Kapital here Masculine Development here Mason, James here MAtR (Men Among the Ruins) here, here Matrix, The here, here, here, here May, Theresa here, here, here Meechan, Mark here Meme Warfare here memes here, here, here, here and terrorist attacks here Men’s Rights Activists (MRA) here Menlo Park here Mercer Family Foundation here Merkel, Angela here, here, here, here MGTOW (Men Going Their Own Way) here, here, here MI6, 158, 164 migration here, here, here, here, here, here, here, here, here see also refugees millenarianism here Millennial Woes here millennials here Minassian, Alek here Mindanao here Minds here, here misogyny here, here, here, here, here see also Incels mixed martial arts (MMA) here, here, here, here Morgan, Nicky here Mounk, Yascha here Movement, The here Mueller, Robert here, here Muhammad, Prophet here, here, here mujahidat here Mulhall, Joe here MuslimCrypt here MuslimTec here, here Mussolini, Benito here Naim, Bahrun here, here Nance, Malcolm here Nasher App 
here National Action here National Bolshevism here National Democratic Party (NPD) here, here, here, here National Health Service (NHS) here National Policy Institute here, here National Socialism group here National Socialist Movement here National Socialist Underground here NATO DFR Lab here Naturalnews here Nawaz, Maajid here Nazi symbols here, here, here, here, here, here, here see also Hitler salutes; swastikas Nazi women here N-count here Neiwert, David here Nero, Emperor here Netflix here Network Contagion Research Institute here NetzDG legislation here, here Neumann, Peter here New Balance shoes here New York Times here News Corp here Newsnight here Nietzsche, Friedrich here, here Nikolai Alexander, Supreme Commander here, here, here, here, here, here 9/11 attacks here, here ‘nipsters’ here, here No Agenda here Northwest Front (NWF) here, here Nouvelle Droite here, here NPC meme here NSDAP here, here, here Obama, Barack and Michelle here, here, here, here, here Omas gegen Rechts here online harassment, gender and here OpenAI here open-source intelligence (OSINT) here, here Operation Name and Shame here Orbán, Viktor here, here organised crime here Orwell, George here, here Osborne, Darren here, here Oxford Internet Institute here Page, Larry here Panofsky, Aaron here Panorama here Parkland high-school shooting here Patreon here, here, here, here Patriot Peer here, here PayPal here PeopleLookup here Periscope here Peterson, Jordan here Pettibone, Brittany here, here, here Pew Research Center here, here PewDiePie here PewTube here Phillips, Whitney here Photofeeler here Phrack High Council here Pink Floyd here Pipl here Pittsburgh synagogue shooting here Pizzagate here Podesta, John here, here political propaganda here Popper, Karl here populist politicians here pornography here, here Poway synagogue shooting here, here Pozner, Lenny here Presley, Elvis here Prideaux, Sue here Prince Albert Police here Pro Chemnitz here ‘pseudo-conservatives’ here Putin, 
Vladimir here Q Britannia here QAnon here, here, here, here Quebec mosque shooting here Quilliam Foundation here, here, here Quinn, Zoë here Quran here racist slurs (n-word) here Radio 3Fourteen here Radix Journal here Rafiq, Haras here Ramakrishna, Kumar here RAND Corporation here Rasmussen, Tore here, here, here, here Raymond, Jolynn here Rebel Media here, here, here Reconquista Germanica here, here, here, here, here, here, here Reconquista Internet here Red Pill Women here, here, here, here, here Reddit here, here, here, here, here, here, here, here, here, here redpilling here, here, here, here refugees here, here, here, here, here Relotius, Claas here ‘Remove Kebab’ here Renault here Revolution Chemnitz here Rigby, Lee here Right Wing Terror Center here Right Wing United (RWU) here RMV (Relationship Market Value) here Robertson, Caolan here Robinson, Tommy here, here, here, here, here, here, here, here Rockefeller family here Rodger, Elliot here Roof, Dylann here, here Rosenberg, Alfred here Rothschilds here, here Rowley, Mark here Roy, Donald F. here Royal Family here Russia Today here, here S., Johannes here St Kilda Beach meeting here Salafi Media here Saltman, Erin here Salvini, Matteo here Sampson, Chris here, here Sandy Hook school shooting here Sargon of Akkad, see Benjamin, Carl Schild & Schwert rock festival (Ostritz) here, here, here Schilling, Curt here Schlessinger, Laura C. 
here Scholz & Friends here SchoolDesk here Schröder, Patrick here Sellner, Martin here, here, here, here, here, here, here, here, here, here Serrano, Francisco here ‘sexual economics’ here SGT Report here Shodan here, here Siege-posting here Sleeping Giants here SMV (Sexual Market Value) here, here, here Social Justice Warriors (SJW) here, here Solahütte here Soros, George here, here Sotloff, Steven here Southern, Lauren here Southfront here Spencer, Richard here, here, here, here, here, here Spiegel TV here spoofing technology here Sputnik here, here SS here, here Stadtwerke Borken here Star Wars here Steinmeier, Frank-Walter here Stewart, Ayla here STFU (Shut the Fuck Up) here Stormfront here, here, here Strache, H.

pages: 524 words: 154,652

Blood in the Machine: The Origins of the Rebellion Against Big Tech
by Brian Merchant
Published 25 Sep 2023

One notorious study carried out by researchers at the University of Oxford concluded that nearly half of all American jobs are ripe for technological replacement. Critics have taken issue with that forecast, but our largest tech companies would love to see it come true. Robots, the old corporate adage goes, never call in sick. Tech companies like Amazon, Uber, Facebook, OpenAI, and Microsoft have accumulated vast power and influence. In ways large and small, they are already remaking our working lives. We now face a future where work—even for the so-called middle class, even for white-collar workers—is increasingly informal, precarious, and organized by inscrutable and unaccountable technologies.

The aim, she says, is to create an anxious, uncertain workforce that has no choice but to be malleable before the algorithm’s demands. “These experiments are now fusing into other parts of the economy,” Dubal says. “These practices are ascendant.” Also ascendant is the use of AI services, which boomed in 2023, promoted by companies like OpenAI. When AI is injected into already precarious work structures, it promises to accelerate insecurity and displacement further still. Factories and automated machinery took workers out of their homes and away from their families. Gig apps run by proprietary algorithms take them away from other people altogether, and impose factory logic onto each individual, who sits at home or in a car, taking orders that must be completed in a rigid and exacting way.

Venture capital may be the radical apotheosis of this mode of technological development, capable as it is of funneling enormous sums of money into tech companies that can decide how they would like to build and unleash the products and services that shape society. Take the rise of generative AI. Ambitious start-ups like Midjourney, and well-positioned Silicon Valley companies like OpenAI, are already offering on-demand AI image and prose generation. DALL-E spurred a backlash when it was unveiled in 2022, especially among artists and illustrators, who worry that such generators will take away work and degrade wages. If history is any guide, they’re almost certainly right. DALL-E’s output certainly isn’t as high in quality as a skilled human artist’s, and likely won’t be for some time, if ever—but as with the skilled cloth workers of the 1800s, that ultimately doesn’t matter.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

(Ed.), Machine Intelligence 3, Vol. 3. Elsevier. Amir, E. and Russell, S. J. (2003). Logical filtering. In IJCAI-03. Amit, Y. and Geman, D. (1997). Shape quantization and recognition with randomized trees. Neural Computation, 9, 1545–1588. Amodei, D. and Hernandez, D. (2018). AI and compute. OpenAI blog, blog.openai.com/ai-and-compute/. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety. arXiv:1606.06565. Andersen, S. K., Olesen, K. G., Jensen, F. V., and Jensen, F. (1989). HUGIN—A shell for building Bayesian belief universes for expert systems.

A., 734, 1089 Olson, N., 48, 1104 Olteanu, A., 1046, 1061, 1098 Olum, P., 1028, 1096 omniscience, 58 Omohundro, S., 51, 1061, 1108 one-hot encoding, 725, 808, 908 One Hundred Year Study on AI, 45 Ong, D., 667, 1106 Ong, J., 47, 1086 ONLINE-DFS-AGENT, 155 online gradient descent, 697 online learning, 721, 855 online planning, 383 online replanning, 963 online search, 152, 152–159, 162–163 ontological commitment, 272, 295, 404 ontological engineering, 332, 332–334 ontology, 290, 293 general, 335–346 upper, 355 open-loop, 82, 958 open-world assumption, 385 OpenAI, 1059 OpenAI Gym (simulated environment), 873 open class, 886 OPENCYC (knowledge base), 357 open list, see frontier OPENMIND (knowledge base), 334 open universe probability model (OUPM), 649 operations research, 28, 79, 125, 126 Oppacher, F., 161, 1108 OPS-5 (logical reasoning system), 310, 329 optical flow, 1000, 1028 optimal brain damage, 838 optimal control theory, 160 optimality (of a search algorithm), 93 optimality theory (in linguistics), 902 optimally efficient algorithm, 108 optimal solution, 83 optimism under uncertainty, 157 optimistic description (of an action), 380 optimistic prior, 849 optimization, 684 convex, 140, 159 optimizer’s curse, 527, 549 OPTIMUM-AIV (planning and scheduling system), 402 optogenetics, 19 order-of-magnitude distribution, 650 orderability, 520 order statistic, 526 ordinal utility, 522 Organon (Aristotle), 265, 357 origin function, 649 OR node, 141 Orseau, L., 873, 1054, 1103 Ortega, P.

Facebook’s AI Habitat simulation (Savva et al., 2019) provides a photo-realistic virtual environment for indoor robotic tasks, and their HORIZON platform (Gauci et al., 2018) enables reinforcement learning in large-scale production systems. The SYNTHIA system (Ros et al., 2016) is a simulation environment designed for improving the computer vision capabilities of self-driving cars. The OpenAI Gym (Brockman et al., 2016) provides several environments for reinforcement learning agents, and is compatible with other simulations such as the Google Football simulator. Littman (2015) surveys reinforcement learning for a general scientific audience. The canonical text by Sutton and Barto (2018), two of the field’s pioneers, shows how reinforcement learning weaves together the ideas of learning, planning, and acting.
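What makes Gym environments interchangeable for reinforcement learning agents is a small interface: `reset()` returns an initial observation, and `step(action)` returns an (observation, reward, done, info) tuple. A minimal sketch of a Gym-style environment driven by a random policy; the `CoinFlipEnv` task here is invented for illustration and requires no `gym` installation:

```python
import random

class CoinFlipEnv:
    """Toy environment following the classic Gym reset/step protocol."""

    def reset(self):
        # Begin a new episode and return the initial observation.
        self.t = 0
        return 0  # trivial constant observation

    def step(self, action):
        # Reward 1.0 when the agent's guess matches a fair coin flip.
        self.t += 1
        reward = 1.0 if action == random.randint(0, 1) else 0.0
        done = self.t >= 10  # fixed-length episode of 10 steps
        return 0, reward, done, {}

# A random policy interacting with the environment, Gym-style.
env = CoinFlipEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, info = env.step(random.randint(0, 1))
    total_reward += reward
print(total_reward)
```

Because any agent only sees this loop, the same training code can be pointed at CartPole, an Atari game, or the Google Football simulator without modification.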

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

Tech company executives praise it for blasting through decades-old problems in conversational AI; they shower experts in the field with salaries that climb into the six figures and higher. Consider the likes of Ilya Sutskever, a computer scientist credited with breakthroughs in image recognition and machine translation. He earned $1.9 million back in 2016—and that was at a nonprofit, the Elon Musk–supported OpenAI. Silver dollars, though, have only belatedly begun to pour from the Valley’s slot machines. For decades, the approach to getting machines to learn from data languished; brief periods of hype were followed by long stretches of frustration. The AI techniques that dominated were ones in which computer scientists wrote rules that told machines what to do and when to do it.

See also deep learning neurons, 86–88, 90, 91, 93 New Dimensions in Testimony, 272–74 news providers and voice AI, 214 Nickelodeon, 235 Norse mythology, 64 Nuance Communications, 111–12 O Obama, Barack, 115, 218 Olson, Christi, 207 Olson, Jenny, 196 Olsson, Isabelle, 226 “On Computable Numbers” (Turing), 71 1–800-Flowers, 7, 52 one-shot answers, 200, 204, 206–7, 208, 210–11, 220–21 Onion (satirical publication), 126 ontologies, 31–32 Open Agent Architecture, 21 OpenAI, 85 oracles, AI. See also question answering conventional web search and, 200, 207, 209–10 natural-language understanding and, 204 one-shot answers and, 206–7, 208–9, 211, 220–21 potential costs of, 221 responsibility for content and, 216–20 tech industry disruption and, 211–14 Oren, Dror, 132 Ostendorf, Mari, 158–59 Owens, Ron, 241–42 Owyang, Jeremiah, 280 Oxford University, 98 P Page, Larry, 76 Papert, Seymour, 88–89 parametric synthesis, 112, 113–14 Paro (robotic seal), 192 Parry (chatbot), 75–77, 79 pattern matching, 73, 81, 159–60 Pelczar, Nick, 173–74, 180 Perceptron, 87–89, 90, 100 Perceptrons (Minsky and Papert), 88 personalities of voice AI, 117–39 Alexa, 118, 119, 124 Cortana, 117–18 custom, 135–37 designing, 128–29, 133–34 gender and, 130–32 Google Assistant, 10–11, 118, 119, 124–28 language expertise and, 126–28 naturalness of, 126 race and ethnic identities, 132–33 robot personality development, 137–38 Siri, 118, 119, 123–24 UW’s socialbot and, 152 Phelps, Rick, 237–38 phonemes, 95–96, 114 phrase-based statistical machine translation, 104–5 Picard, Rosalind, 183, 184 Pichai, Sundar, 5, 53, 54, 116 Pistor, Julia, 175 Pitts, Walter, 86–87 Pixar, 126, 171–72 Planet Money (podcast), 214–15 Plimpton, George, 215 Poncho, 128 position zero, 208 post-traumatic stress disorder (PTSD), 244–46 Powesland, Peter, 127 Prasad, Rohit, 42, 43, 44, 158–59 privacy.

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

As machine intelligence becomes more powerful, pervasive, and connected, embedding AI in all of our personal and industrial computing devices increases the risk of attacks that can compromise the security infrastructures that protect our resources and communities. Luminaries from the Future of Humanity Institute, OpenAI, Centre for the Study of Existential Risk, and leading universities in the US and UK issued a 100-page policy recommendation paper, “The Malicious Use of Artificial Intelligence,”(35) in which they described the fast-evolving threat landscape, identified key areas of security risk, and made high-level recommendations for preventative action that should be taken immediately.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

This argument is not relevant to my argument, so I don’t consider it here. 19 Azeem Azhar, ‘Beneficial Artificial Intelligence: My Conversation with Stuart Russell’, Exponential View, 22 August 2019 <https://www.exponentialview.co/p/-beneficial-artificial-intelligence> [accessed 16 April 2021]. 20 Dario Amodei and Danny Hernandez, ‘AI and Compute’, OpenAI, 16 May 2018 <https://openai.com/blog/ai-and-compute/> [accessed 12 January 2021]. 21 Charles E. Leiserson et al., ‘There’s Plenty of Room at the Top: What Will Drive Computer Performance after Moore’s Law?’, Science 368(6495), June 2020 <https://doi.org/10.1126/science.aam9744>. 22 Jean-François Bobier et al., ‘A Quantum Advantage in Fighting Climate Change’, BCG Global, 22 January 2020 <https://www.bcg.com/publications/2020/quantum-advantage-fighting-climate-change> [accessed 23 March 2021].

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

Yet such factors also fit within Solingen’s larger framework, as they underscore Taiwan’s interconnectedness with great powers of the time. In applying this political economy frame to LAWS, the key question is how to ensure not just norms and laws proscribing particularly destructive technology, but also the economic and reputational expense of pursuing them. Not just governments, but also firms, can play a constructive role here. OpenAI’s reluctance in 2019 to release a speech-generating model offers one case in point. AI-driven text generation may not seem like much of a weapon. But once it is combined with automated creation of social media profiles (complete with deepfaked AVIs), bot speech is a perfect tool for authoritarian regimes to use to disrupt organic opinion formation online.

pages: 169 words: 41,887

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
Published 6 Feb 2024

Branston, “The Use of Context for Correcting Garbled English Text,” in Proceedings of the 1964 ACM 19th National Conference (New York: Association for Computing Machinery, 1964), 42.401–­42.4013. Chapter 8: 9 Big Ideas for an Effective Conclusion 119 In the conclusion to his book: I. A. Richards. Poetries and Sciences: A Reissue of Science and Poetry (1926, 1935) with Commentary (New York: Norton, 1970), 76–­78. 130 In putting the algorithm in charge: OpenAI, GPT-­4 Technical Report (New York: arXiv, 2023). Index Abelson, Robert, 92 Aeronautical Laboratory (Cornell University), 110 Aesop, 94 agency, 127, 131–32, 141 Agnesi, Maria, 44 AI, See artificial intelligence AI and Data Ethics Institute, 131 Air Research and Development Command (US Air Force), 87 algebraic patterns, 55 algorithms, 9, 131 alignment problem, 38 alphabets, 43 Alphaville (film), 93 American Catalogue of Books in Print, 66 American Psychiatric Association, 23 American Stationer, 74 Analytical Engine (analytical engines), 48–52, 54–56, 60–61, 64 Andrews, Charlton The Technique of the Play Writing, 71 Appelbaum, Matthew, 92 applications, 32, 33 application tables, 48 Arabic language, 43 Arbogast, Louis, 44 Aristotelianism, 34, 36–38, 72 Aristotle, 36, 44 Poetics, 50–51, 67 Ars Brevis (Llull), 24, 31 artifice, 4, 61, 123 artificial intelligence (AI) in academia, 137–38 and agency, 141 as collective labor, 122–23 conversational AI, 135 and creative process, 133–34 dangers of, 127, 129, 137 definitions of, 4, 11 demystification of, 124 economic consequences of, 133–35 ethical, 22 gaps in thinking about, 5–7 history of, 12 “intelligence” aspect of, 14–16, 21, 125 language-based, 21, 46 and machine translation, 119 participants in, 132 personification of, 127, 130 purpose of, 59 and responsibility, 132 artificial intelligence (AI) (continued) scope of, 5, 16, 93, 128, 129 in social sphere, 127, 136, 139 and template culture, 83 assistive technology, 15, 28, 38–39, 123–24, 138 Athenaeum, 74 Austen, Jane, 
65, 67 Author, 70 Author’s Digest, 71 Author’s Journal, 70 author wages, 67 automated assistants, 28, 138 Automated Reading of Cursive Script, 110 automated tasks, devaluation of, 38 automatic transmissions, 14–16 automation in industrial age, 2 of reason, 40 of work, 133–34 Babbage, Charles, 43, 48–54, 56, 59–60, 62–64, 71, 105, 118 On the Economy of Machinery and Manufactures, 60, 63–64, 71 Passages from a Life of a Philosopher, 49–50 backstories, 73 Bacon, Francis, 7, 10 Baidu, 113 Baker, George Pierce, 72–73 Baldur’s Gate, 100 Barrymore, John, 73 BASEBALL (story generator), 92 basic drives, 128 Baudot code, 7 Believe Me, Xantippe (film), 73 Bell Telephone Labs, 110 Benjamin, Walter, 61 Bibles, 39 Bibliothèque universelle de Genève, 54 bigrams, 106–7, 109 bits, 6–9 Blackburn, Simon, 84 Bledsoe, W.

pages: 626 words: 167,836

The Technology Trap: Capital, Labor, and Power in the Age of Automation
by Carl Benedikt Frey
Published 17 Jun 2019

And those jobs, it expects, will require a very different set of skills.39 The main reason why warehouses still employ large swaths of the population is that order picking remains a largely manual process. Humans still hold the comparative advantage in complex perception and manipulation tasks. But here, too, AI has made many recent breakthroughs possible. At the OpenAI lab in San Francisco, California, set up by Elon Musk, a robotic five-fingered hand called Dactyl bears witness to impressive progress in recent years: “If you give Dactyl an alphabet block and ask it to show you particular letters—let’s say the red O, the orange P and the blue I—it will show them to you and spin, twist and flip the toy in nimble ways.”40 Though this is an easy task for any human, the achievement lies in the fact that AI allows Dactyl to learn new tasks, largely on its own through trial and error.

P., 208 Morrill Act, 364 mortality gap, 255 mortality rate, 65 mother of invention, 73 motion-picture machine operator, 178 multipurpose robots, 242, 261, 327 Mumford, Lewis, 46 Municipal Corporations Act of 1835, 86 Murnane, Richard, 237, 302 Murray, Charles, 252–53, 281 Musk, Elon, 313 Mutiny Act, 82 Napoléon Bonaparte, 9 Napoleonic War, 130 National Electric Light Association (NELA), 159 National Industrial Recovery Acts of 1933 and 1935, 200 National Labor Relations (“Wagner”) Act of 1935, 200 national minimum wage, introduction of, 211 National Recovery Administration, 178 nation states, rise of, 57 Nazi Labor front, 12 necessity, technological advances emerging from, 76 Neolithic communities, 34 Neolithic revolution, 33, 61 Neural Machine Translation (NMT), 304 neural networks, 303, 305, 314 Newcomen, Thomas, 53, 106, 317 New Deal, 200, 212, 272, 325 Newton, Isaac, 54 New World, discovery of, 19, 80 Nicholas I of Russia, Emperor, 85 Nobel Prize in Economics, 2, 4, 14, 20 Nordhaus, William, 2, 230, 297 Norman Conquest, 44 North, Douglass C., 79 North Africa, 77 nursery cities, 261 Nye, David, 155 Obama, Barack, 238, 277, 290, 322 occupational licensing, 358 occupational statistics, 219 OECD, 243, 321 Offenbach, Jacques, 53 Ogilvie, Sheilagh, 56–57 Old Poor Law, 344 OpenAI, 313 opportunity gap, societal costs of, 351 Osborne, Michael, 315 Otto, Nikolaus, 166 Ottoman Empire, 17, 66 overproduction, crisis of, 266 Owenism, 137 ownership, concept of, 34 Papin, Denis, 52, 86 Pareto improvement, 13 Paris Universal Exposition of 1867, 147 Park Avenue, 1 Paul, Lewis, 101 Pax Romana, 41 Pearl Harbor, attack on, 180 Pennsylvania Railroad, 208 Percy, Hiram, 165 personal computer (PC), 231 Peter the Great, Tsar, 58 Piketty, Thomas, 210, 217, 277, 361 “pink-collar” workforce, 241 plant downsizings, 255 Pliny the Elder, 36, 40 Polanyi’s paradox, 234, 304 polarization, politics of, 272–77; American dream, 280; Blue Wall, 284; civil rights legislation, 280; clientelism, 271; 
democracy and the middle class, 265–69; “disciplined self” identity, 279; economic inequality, 274, 277; Engels’ pause, 266, 287; feudal order, political participation in, 265; globalization, automation, and populism, 277–85; housing bubble, 282; identity politics, 278; inflation, 294; Labor Party, rise of, 268; labor unions, bargaining power of, 277; laissez-faire regime, 267; legitimacy of democracy, undermining of, 274; liberal democracy, components of, 267; lobbying, corporate spending on, 275; Luddite uprisings, 265; machinery riots, 265, 289; majority-rule voting system, 270; median voter theories, 270; middle class, rise of, 292; New Deal, 272; new Luddites, 286–92; political elites, 288; populist backlash, 293; Progressive Era, reform agenda of, 271; redistributive taxing and spending, 271; Rust Belt, 279, 283, 291; social class, Marx’s theory of, 265; socialism in America, 272; social media, 285; strikes, protection of car companies from, 276; technology types, distinguishing between, 287; unemployment, American social expenditure on, 274; United Auto Workers union, 276; universal white male suffrage, 270; vulnerability to populist revolutions, 264; welfare state, rise of, 272; welfare system, tax-financed, 267; working class, 278, 279 Polhem, Christopher, 149 political elites, 288 poor laws, 344 Pope, Albert A., 165 population curse, 64–67 populism, rise of, 277–85, 365 populist backlash, 293 populist renaissance, 21 populist revolutions, vulnerability to, 264 Port Clinton, Ohio, 250–51 Portuguese caravel ship, 51 power loom, arrival of, 15 prefabrication, 311 Price, Derek, 39 printing press, Gutenberg’s, 17 Procter and Gamble, 199 productivity, populations and, 64 Progressive Era, reform agenda of, 271 property rights: in American culture, 200; concept of, 62, 91; importance of, 20; in preindustrial societies, 33 Protestant Huguenots, 80 Protestant movement, 46 “proto-industrialization,” 68 prototypes: adoption of, 323; Amazon Go store, 312; developed, 
261; imperfect, 298, 314; inventions turned into, 73 public clocks, 45 public infrastructure projects, 363 public schooling, 214 purchasing power, 191 Putnam, Robert, 250–51, 272, 276 railroads: arrival of, 108; declining importance of, 170; as enabling technology for revolutions, 85; network, expansion in Britain, 110; revenues (America), 208 Ramey, Valerie, 159, 332 redistributive taxing and spending, 271 Reform Acts of 1832 and 1867, 83 Reich, Robert, 235 relocation, 359–60 Renaissance, 51; as “age of instruments,” 59; beginnings of modern capitalism during, 70; great inventors of, 38; origin of, 51; productivity-enhancing technological improvements of, 54; technological advances of, 51 rent-seeking monarchs, 79 Restrepo, Pascual, 15, 144, 227, 242, 346 retraining, 353–54 Reuther, Walter, 191, 276, 356 Ricardo, David, 4, 116, 206, 345 right-to-work states, 257 robber barons, 208 Robinson, James, 19, 80 robots, 14; automobile assembly, 18; autonomous, 307; creation of new jobs for engineers, 15; flying, 312; human perception and, 318; jobs of machine operators taken over by, 14; middle-income jobs cut out by, 26; multipurpose, 242, 261, 327; of preindustrial times, 74; routine tasks performed by, 229 Rockefeller, John D., 208 Rodrik, Dani, 286–87 Roman alphabet, 47 Roman Empire: fall of, 41; most famous invention of, 38; slavery in, 74 Roosevelt, Franklin D., 157, 179, 211 Rousseau, Jean-Jacques, 62 royal trading monopolies, 80 Rural Electrification Administration, 157 Russell, Bertrand, 33, 78 Rust Belt, 279, 283, 291 Sanders, Bernie, 286 Savery, Thomas, 106, 317 Scheidel, Walter, 211 Schumpeter, Joseph, 73, 294 Schumpeterian growth, absence of, 72 Schumpeterian transformation, 49 scribes, 49, 50 Second Industrial Revolution, 22, 25, 148–73; agriculture, mechanization of, 189; American inequality during, 217; automotive industry, 202; child labor, as opportunity cost to education, 21; elimination of jobs created for machine operators during, 228; greatest virtue 
of, 155; mechanization following arrival of, 142; new tasks for labor spawned by, 202; skill-biased technological change, 213; skill demand raised by, 209; technological leadership of, 25; tractor use, expansion of, 196; urban-rural wage gap self-employment, 71 serfdom, 41 Shannon, Claude, 302 Sigismund I of Poland, King, 29 Silicon Valley, 257, 359 silk industry, beginnings of, 99 silk-throwing machine, 52 Simon, Herbert, 316, 336 Singer, Isaac, 149 Skill-biased technological change, 213 slavery, 39, 74 smartphone, spread of, 328 Smiles, Samuel, 110 Smith, Adam, 67, 69–70, 83, 228 Smithian growth, Schumpeterian vs., 58, 72 smokestack cities, 263 social class, Marx’s theory of, 265 socialism in America, 272 social media, 285 socioeconomic segregation, 26 Solow, Robert, 4, 180, 206, 325 speech recognition technology, 306 Spence, Michael, 292 spinning jenny, 102 spousal employment, 240 Sprague, Frank J., 152 steam engine: development of, 73; economic virtuosity of, 107; impact of on aggregate growth, 136; universal application of, 249 steel production, changed nature of, 13 Stephenson, George, 109 Stevenson, Betsey, 336 stocking-frame knitting machine, 10, 54, 76 strikes, protection of car companies from, 276 “stylized facts of growth,” 205 subjective well-being, 255 Summers, Lawrence, 261, 349 supercomputers, 290 supply of technology, obstacles to, 77 “symbolic analysts,” 235 task simplification, example of, 311 tax credits, 355–58 taxing and spending, redistributive, 271 tax revenue, 133 technological gap (1500–1700), 51 technology companies, location decisions of, 260 telephone operator, vanishing of, 201 telescope, 59 Tennessee Valley Authority (TVA) Act of 1933, 363 Tesla, Nikola, 152 textile industry, 38, 55, 95 Thirty Years’ War, 58 Thompson, E.

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

If digital social passports become normal, and if such passports can be used to decide who goes where, who does what, gets what, pays what (China is mooting charging systems that offer discounts to exemplary citizens), then how we live changes collectively, as well as individually – and perhaps it will make us less compassionate too. We won’t know what’s in the data of the person turned away or turned down or charged double, and likely we will feel it must be justified – mustn’t it? And we all like to feel superior to others. * * * Elon Musk and Sam Altman (CEO of the start-up funder Y Combinator) launched OpenAI in 2015 as a non-profit organisation promoting more inclusive AI – more benefits for more people – and to explore safe AGI. (We don’t want a Skynet situation.) Musk, who has since left the organisation due to what he calls conflicts of interest, is notably worried about artificial general intelligence – the point where AI becomes an autonomous self-monitoring system.

pages: 262 words: 69,328

The Great Wave: The Era of Radical Disruption and the Rise of the Outsider
by Michiko Kakutani
Published 20 Feb 2024

Decades after splitting the atom, technology has split society into different ideological universes.” The result is an increasingly fragmented and fractious world in which opinions are replacing facts, and a tribal craving to belong trumps knowledge and reason. * * * — In late 2022, a San Francisco–based company named OpenAI released an experimental chatbot called ChatGPT. Some early users hailed it as an innovation as consequential as the smartphone. Others nervously described it as “AI’s Jurassic Park moment” or compared it to HAL 9000, the computer that goes rogue in the movie 2001: A Space Odyssey. ChatGPT doesn’t just imitate human conversation.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

v=RZ3ahBm3dCk A Mild Dystopia 1. Griffin, Andrew (2017). ‘Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language’, The Independent [online]. Available at: www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html 2. ‘The 5 Most Infamous Software Bugs in History’, BBVA Open Mind (2015) [online]. Available at: www.bbvaopenmind.com/en/technology/innovation/the-5-most-infamous-software-bugs-in-history 3. Long, Tony (2007). ‘Sept. 26, 1983: The Man Who Saved the World by Doing . . . Nothing’, Wired [online].

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

The United States has the Future of Life Institute at MIT, founded by Max Tegmark and Skype cofounder Jaan Tallinn, with a board of advisors including the ubiquitous Elon Musk (who donated $10 million). Silicon Valley has two such think tanks: the Machine Intelligence Research Institute, founded by computer scientist Eliezer Yudkowsky and tech entrepreneurs Brian and Sabine Atkins; and the OpenAI Foundation, founded by Musk, Sam Altman, Peter Thiel, and others. If this zeitgeist has a single axiom, it is that existential risks are different. Bostrom wrote: We cannot necessarily rely on the institutions, moral norms, social attitudes or national security policies that developed from our experience with managing other sorts of risks.

Learn Algorithmic Trading
by Sebastien Donadio
Published 7 Nov 2019

Other Books You May Enjoy If you enjoyed this book, you may be interested in these other books by Packt: Mastering Python for Finance - Second Edition James Ma Weiming ISBN: 9781789346466 Solve linear and nonlinear models representing various financial problems Perform principal component analysis on the DOW index and its components Analyze, predict, and forecast stationary and non-stationary time series processes Create an event-driven backtesting tool and measure your strategies Build a high-frequency algorithmic trading platform with Python Replicate the CBOT VIX index with SPX options for studying VIX-based strategies Perform regression-based and classification-based machine learning tasks for prediction Use TensorFlow and Keras in deep learning neural network architecture Hands-On Machine Learning for Algorithmic Trading Stefan Jansen ISBN: 9781789346411 Implement machine learning techniques to solve investment and trading problems Leverage market, fundamental, and alternative data to research alpha factors Design and fine-tune supervised, unsupervised, and reinforcement learning models Optimize portfolio risk and performance using pandas, NumPy, and scikit-learn Integrate machine learning models into a live trading strategy on Quantopian Evaluate strategies using reliable backtesting methodologies for time series Design and evaluate deep neural networks using Keras, PyTorch, and TensorFlow Work with reinforcement learning for trading strategies in the OpenAI Gym Leave a review - let other readers know what you think Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. 

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

HOPE As of this writing, I’m cautiously optimistic that the AI-risk message can save humanity from extinction, just as the Soviet-occupation message ended up liberating hundreds of millions of people. As of 2015, it had reached and converted 40 percent of AI researchers. It wouldn’t surprise me if a new survey now would show that the majority of AI researchers believe AI safety to be an important issue. I’m delighted to see the first technical AI-safety papers coming out of DeepMind, OpenAI, and Google Brain and the collaborative problem-solving spirit flourishing among the AI-safety research teams in these otherwise very competitive organizations. The world’s political and business elite are also slowly waking up: AI safety has been covered in reports and presentations by the Institute of Electrical and Electronics Engineers (IEEE), the World Economic Forum, and the Organization for Economic Cooperation and Development (OECD).

Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

The authors of the original Eroom's Law paper now believe the era of stagnation may be coming to an end thanks to the prevalence and new-found effectiveness of machine learning in the discovery of drugs.23 AI is moving to the front lines of the battle against cancer and a paper in Cell illustrates that ML can use molecular structure to predict the effectiveness of antibacterials (the researchers behind the AI even called the resulting antibacterial ‘halicin’ after HAL, the AI in 2001: A Space Odyssey).24 We need things like this to beat future pandemics. Fusion scientists are optimistic that the application of AI could bring decisive advances in the coming years, and in general the field is now focused on ML approaches to core problems.25 Breakthroughs in natural language processing are coming at pace: the parameters of OpenAI's eye-catching GPT language prediction system grew from hundreds of millions to hundreds of billions in just a few years with some spectacular results, enabling it to write convincing text at length on any subject.26 GPT-3 can take a portion of writing and then continue it with at times shocking plausibility.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

The Equality Machine will take you on a tour of what people can build when aspiring to use the power of AI to make the world a more equal place. Read it and get inspired to join them.” —Gillian Hadfield, director, Schwartz Reisman Institute for Technology and Society, University of Toronto, and senior policy adviser, OpenAI “Lobel offers a contrarian and original view: that technology can be a foundation for equality and inclusion rather than a source of bias and inequality. Read this book to find out why and how.” —Oren Etzioni, CEO, Allen Institute for Artificial Intelligence “With this incisive and engaging book, Lobel invites academics, nonprofit leaders, investors, business leaders, and policymakers to use data to solve the world’s most pressing problems, being neither cavalier nor afraid.”

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

The chief issues facing early environmentalists were pollution, biodiversity loss, extinction and resource scarcity. But they didn’t call themselves “extinctionists” or “pollutionists.” They found their identity not in the problems they were fighting, but in the positive value they were fighting to protect.

71 At the time of writing, DeepMind and OpenAI are the most prominent examples. They are in need of great researchers in AI safety, and also great software engineers—especially those who take existential risk seriously.

72 Organizations focused on reducing existential risk include:

• The Future of Humanity Institute (FHI)
• The Centre for the Study of Existential Risk (CSER)
• The Future of Life Institute (FLI)
• The Global Catastrophic Risk Institute (GCRI)
• The Berkeley Existential Risk Initiative (BERI)
• The Open Philanthropy Project (OpenPhil)
• The Nuclear Threat Initiative (NTI)
• The Bulletin of the Atomic Scientists
• The Global Challenges Foundation
• The Law and Governance of Existential Risk group (LGER)
• Alliance to Feed the Earth in Disasters (ALLFED)

The high-impact careers site 80,000 Hours maintains an up-to-date job board, including such positions: 80000hours.org/job-board, and explanations of the kinds of careers that can really help: 80000hours.org/career-reviews

73 In keeping with this, I have signed over the entire advance and royalties from this book to charities helping protect the longterm future of humanity.

74 Eig (2014).

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

They’re using billions of dollars in cash and dizzying stockpiles of data to gobble up available AI talent. They’re also working to construct the “power grids” for the AI age: privately controlled computing networks that distribute machine learning across the economy, with the corporate giants acting as “utilities.” It’s a worrisome phenomenon for those who value an open AI ecosystem and also poses a potential stumbling block to China’s rise as an AI superpower. But bringing AI’s power to bear on the broader economy can’t be done by private companies alone—it requires an accommodating policy environment and can be accelerated by direct government support. As you recall, soon after Ke Jie’s loss to AlphaGo, the Chinese central government released a sweeping blueprint for Chinese leadership in AI.

But broadly speaking, if one of these companies makes a unique breakthrough—a trade secret that could generate massive profits for that company alone—it will do its best to keep a lid on it and will try to extract maximum value before the word gets out. A groundbreaking discovery occurring within one of these closed systems poses the greatest threat to the world’s open AI ecosystem. It also threatens to stymie China in its goal of becoming a global leader in AI. The way things stand today, China already has the edge in entrepreneurship, data, and government support, and it’s rapidly catching up to the United States in expertise. If the technological status quo holds for the coming years, an array of Chinese AI startups will begin fanning out across different industries.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

When we talk, are we just producing new words based on the last few words we said, based on some probability distribution we’ve come to learn based on all the other utterances we’ve ever heard? It’s not just that. We do, after all, choose our words to make some reference to the world around us. We’re not just riffing on things we’ve already said. And yet, modern-day Markov chains can produce something remarkably like human language. An algorithm like Open AI’s GPT-3 is the spiritual descendant of Shannon’s text machine, only much bigger. The input, instead of being three letters, is a chunk of text hundreds of words long, but the principle is the same: given the passage of text most recently produced, what is the probability that the next word is “the,” or “geometry,” or “graupel”?
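The Shannon-style scheme Ellenberg describes, where the next word is drawn from the distribution of words observed to follow the current context, can be sketched in a few lines of Python. This is an illustrative toy, not anything from the book; `build_chain` and `generate` are hypothetical names:

```python
import random

def build_chain(text, n=1):
    """Map each n-word context to the list of words seen to follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - n):
        context = tuple(words[i:i + n])
        chain.setdefault(context, []).append(words[i + n])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain: repeatedly sample a follower of the current context."""
    rng = random.Random(seed)
    context = list(rng.choice(sorted(chain)))  # deterministic starting context
    out = context[:]
    n = len(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-n:]))
        if not followers:  # dead end: this context was never followed by anything
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

A system like GPT-3 replaces the lookup table with a neural network and widens the context to hundreds of words, but the sampling loop is the same in spirit.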

See also trees numerology, 276, 278 Oblique Strategies, 172–73 O’Connor, Sandra Day, 365, 371, 405 octane, 316–17 odd numbers, 224–25 Odlyzko, Andrew, 312 Oireachtas, 349 Oldbury, Derek, 97 Old Sarum, 350–51 Olympia Academy (dinner club), 83 “Once in a Lifetime” (Talking Heads), 175, 224n On-Line Encyclopedia of Integer Sequences, 235, 283n, 318n “On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat” (Einstein), 82 Open AI, 95 opinion polling, 70–74 Orange County, Virginia, 363 organizational charts, 106, 107 Orlin, Ben, 19 Oscar, King of Sweden, 39 Ostwald, Wilhelm, 59, 83 Ottman, Tad, 347, 393 outliers, 205, 389, 401, 402 Oven of Akhnai, 420–21 overfitting, 174 Pacioli, Luca Bartolomeo de, 115, 277 PageRank, 290–92, 330 Painlevé, Paul, 80–81 Pakistan, 226 palindromes, 30, 31–32 Panama, 304 Pancatuccio, Paulo, 132–33 pandemics.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

For years, AI programs have composed news stories about sports and finance for real news organizations like the Associated Press. The constrained nature of much reporting on those topics has made them easier to adapt to AI. AI is now being used to write more general stories. Modern text-creation systems like Open AI’s GPT-3 can be fed facts and write true stories, but they can just as easily be fed untruths and write fake news. It doesn’t take much imagination to see how AI will degrade political discourse. Already, AI-driven personas can write personalized letters to newspapers and elected officials, leave intelligible comments on news sites and message boards, and intelligently debate politics on social media.

pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World
by Peter H. Diamandis and Steven Kotler
Published 3 Feb 2015

“They provided so much support and guidance,” he explains, “that we were able to build our entire Watson-powered prototype in two weeks.” One of this book’s core goals is to point out those pivotal moments when a technology becomes ready for entrepreneurial prime time. Watson in the cloud, tied to an openly available API, is the beginning of one such moment, the potential for a Mosaic-like interface explosion, opening AI to all sorts of new businesses and heralding its transition from deceptive to disruptive growth. Attention, exponential entrepreneurs: What are you waiting for? And everything we’ve just covered is here today. “Soon,” says Ray Kurzweil,40 “we will give an AI permission to listen to every phone conversation you have.

pages: 363 words: 109,077

The Raging 2020s: Companies, Countries, People - and the Fight for Our Future
by Alec Ross
Published 13 Sep 2021

It played by the rules and avoided the mistakes that hobbled other US tech companies in their pursuit of Chinese customers. But it occupied a strategic industry, and its success would have encroached on the technological ambitions of the Chinese government. China’s public-private capital apparatus kicked into gear, and the foreign rival was run out of town. After beating out Uber, Didi Chuxing opened AI research labs in Beijing and Silicon Valley. In July 2020, the company announced it would partner with the Chinese central bank to test a new digital currency. It was a familiar story that has played out with countless Western technology companies: Western company enters China; company cuts promising deals with local partners; company slowly loses market share to a homegrown rival; company retreats from China; homegrown rival entrenches its dominant position.

pages: 592 words: 125,186

The Science of Hate: How Prejudice Becomes Hate and What We Can Do to Stop It
by Matthew Williams
Published 23 Mar 2021

But this task is made more challenging when there are opposing forces dedicated to using the world’s most powerful communications network to accomplish their extreme goals. Notes 1. J. Weizenbaum, ‘ELIZA – a Computer Program for the Study of Natural Language Communication between Man and Machine’, Communications of the ACM 9 (1966), 36–45. 2. ‘Microsoft Opens AI Framework to Other Firms’, China Daily, 22 August 2019. 3. G. King, J. Pan and M. E. Roberts, ‘How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument’, American Political Science Review 111 (2017), 484–501. 4. N. Newman et al., ‘Reuters Institute Digital News Report 2020’, Oxford: Reuters Institute, 2020. 5.

UNIX® Network Programming, Volume 1: The Sockets Networking API, 3rd Edition
by W. Richard Stevens, Bill Fenner, Andrew M. Rudoff
Published 8 Jun 2013

The only information returned will be for datagram sockets.

The members of the hints structure that can be set by the caller are:

• ai_flags (zero or more AI_XXX values OR’ed together)
• ai_family (an AF_xxx value)
• ai_socktype (a SOCK_xxx value)
• ai_protocol

The possible values for the ai_flags member and their meanings are:

AI_PASSIVE      The caller will use the socket for a passive open.
AI_CANONNAME    Tells the function to return the canonical name of the host.
AI_NUMERICHOST  Prevents any kind of name-to-address mapping; the hostname argument must be an address string.
AI_NUMERICSERV  Prevents any kind of name-to-service mapping; the service argument must be a decimal port number string.
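As a rough illustration, Python's socket.getaddrinfo wraps the same C interface and exposes the same hints (address family, socket type, protocol, and the AI_* flags), so the flag behavior described above can be sketched like this; the host "127.0.0.1" and port "8080" are arbitrary example values:

```python
import socket

# AI_NUMERICHOST and AI_NUMERICSERV suppress name-to-address and
# name-to-service mapping, so both arguments must already be numeric
# strings and no DNS lookup is performed.
results = socket.getaddrinfo(
    "127.0.0.1", "8080",
    family=socket.AF_INET,
    type=socket.SOCK_STREAM,
    flags=socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
)

# Each result is a 5-tuple mirroring the C addrinfo structure.
family, socktype, proto, canonname, sockaddr = results[0]
print(sockaddr)  # ('127.0.0.1', 8080)
```

Passing a hostname such as "example.com" with AI_NUMERICHOST set would instead raise socket.gaierror, which is the Python surface of the "must be an address string" rule above.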