description: an artificial intelligence research lab consisting of the for-profit OpenAI LP and its parent company, the non-profit OpenAI Inc.
generative artificial intelligence
107 results
by Karen Hao · 19 May 2025 · 660pp · 179,531 words
reach AGI wasn’t really their concern. But OpenAI was clearly on the cutting edge, and investing early could finally turn Microsoft into an AI leader—both in software and in hardware—on par with Google. “The thing that’s interesting about what OpenAI and DeepMind and Google Brain are doing
…
Podcasts, December 7, 2023, open.spotify.com/show/122imavATqSE7eCyXIcqZL. The public announcement went up: OpenAI, “OpenAI Announces Leadership Transition,” OpenAI (blog), November 17, 2023, openai.com/index/openai-announces-leadership-transition. Shocked employees learned: Unless otherwise noted, the insider accounts of the employees
…
wiry, he: Sutskever’s education and early background are based partly on his various media interviews, including: “Interview with Dr. Ilya Sutskever, Co-founder of OPEN AI—at the Open University Studios—English,” posted September 13, 2023, by The Open University of Israel, YouTube, 50 min., 28 sec., youtu.be/H1YoNlz2LxA; Nina
…
Cal. November 14, 2024) ECF No. 32; and OpenAI’s responses on the company’s blog: OpenAI, “OpenAI and Elon Musk,” OpenAI (blog), March 5, 2024, openai.com/index/openai-elon-musk; OpenAI, “Elon Musk Wanted an OpenAI For-Profit,” OpenAI (blog), December 13, 2024, openai.com/index/elon-musk-wanted-an-openai-for-profit/#summer-2017-we-and-elon-agreed
…
Around the same time, Amodei: Interview with Brockman and Sutskever. In the last six years: OpenAI, “AI and Compute,” OpenAI (blog), May 16, 2018, openai.com/index/ai-and-compute. They briefly considered merging: Musk, CourtListener, ECF No. 32; Id
…
., ECF No. 32, Exhibit 11. So did Musk: Id., ECF No. 32, Exhibit 13; OpenAI, “OpenAI and Elon Musk”; OpenAI, “Elon
…
Musk Wanted an OpenAI For-Profit.” Brockman and Sutskever continued: Interviews with Brockman, August 2019.
…
He called Reid Hoffman: OpenAI, “OpenAI and Elon Musk.” He considered launching: Musk, CourtListener, ECF No. 32, Exhibit 15.
…
24, 2021, artificialgamerfilm.com. The documentary team retained editorial independence over the film. In April 2018, OpenAI: OpenAI, “OpenAI Charter,” OpenAI (blog), accessed August 25, 2024, openai.com/charter. That summer, as the Dota: Jin and Hagey, “The Contradictions of Sam Altman.”
…
“Microsoft Research and OpenAI are”: Author interview with Xuedong Huang, July 2023.
…
Chapter 3: Nerve Center In 2021, OpenAI would: Eddie Sun, “ChatGPT’s San Francisco Offices Getting Nap Rooms, a Museum for Staffers,” San Francisco Standard, July 11, 2023, sfstandard.com/2023/07/11/chatgpt-secretive-san-francisco-offices-nap-rooms-museum-open-ai. Altman would
…
oversee Mayo’s: I estimated the price of the furniture by reverse image searching photos of the office with Google Images. When I tried to get the exact price from the architecture firm that worked on OpenAI’s Mayo office
…
deep learning research, at least in the past decade, maybe a bit less now, has been about faith.” “Interview with Dr. Ilya Sutskever, Cofounder of OPEN AI—at the Open University Studios—English,” posted September 13, 2023, by The Open University of Israel, YouTube, 50 min., 28 sec., youtu.be/H1YoNlz2LxA.
…
it and for other groups to reuse it for their own separate purposes. The company explained: OpenAI, “Generative Models,” OpenAI (blog), June 16, 2016, openai.com/index/generative-models. In 2017, one of Amodei’s: Paul Christiano, Jan Leike, Tom B
…
Systems (December 2017): 4302–10, dl.acm.org/doi/10.5555/3294996.3295184. OpenAI touted the technique: OpenAI, “Learning from Human Preferences,” OpenAI (blog), June 13, 2017, openai.com/index/learning-from-human-preferences. Amodei wanted to move: Author interview with Dario
…
Amodei, August 2019. After GPT-2 generated a tirade: The full tirade is in OpenAI, “Better Language Models and Their Implications,” OpenAI (blog), February 14, 2019, openai.com/index/better-language-models. Amodei, who had by then: Author interview with Jack
…
openai. Microsoft would get its moment: Nat Friedman, “Introducing GitHub Copilot: Your AI Pair Programmer,” GitHub, June 29, 2021, github.blog/news-insights/product-news/introducing-github-copilot-ai-pair-programmer. OpenAI would then release: OpenAI, “OpenAI Codex,” OpenAI (blog), August 10, 2021, openai
…
.com/index/openai-codex. The arrangement would: Tiernan Ray, “Microsoft Has Over a Million Paying GitHub Copilot
…
4, 2022, 1–68, doi.org/10.48550/arXiv.2203.02155. “follow user instructions”: OpenAI, “Aligning Language Models to Follow Instructions,” OpenAI (blog), January 27, 2022, openai.com/index/instruction-following. The company began using: Based on copies of over a
…
hundred pages of OpenAI’s RLHF documents. “You will play the role”: RLHF documents.
…
, 1–48, doi.org/10.48550/arXiv.2103.00020. The second, DALL-E 1: OpenAI, “DALL·E: Creating Images from Text,” OpenAI (blog), January 5, 2021, openai.com/index/dall-e. The original idea: Jascha Sohl-Dickstein, Eric A. Weiss, Niru
…
Chang, “YouTube Says OpenAI Training Sora with Its Videos Would Break Rules,” Bloomberg, April 4, 2024, bloomberg.com/news/articles/2024-04-04/youtube-says-openai-training-sora-with-its-videos-would-break-the-rules. He then used a speech-recognition tool: OpenAI, “Introducing Whisper,” OpenAI (blog), September
…
21, 2022, openai.com/index/whisper. Then, with
…
several others: “GPT-4 Contributions,” OpenAI, accessed October 13, 2024, openai.com/contributions/gpt-4. “an idiot
…
LeadDev, YouTube, 27 min., 12 sec., youtu.be/PeKMEXUrlq4. In an attempt to leverage: OpenAI, “Using GPT-4 for Content Moderation,” OpenAI (blog), August 15, 2023, openai.com/index/using-gpt-4-for-content-moderation. As he’d expected: Nico Grant
…
Marcus would later backtrack: Gary Marcus, “OpenAI’s Sam Altman Is Becoming One of the Most Powerful People on Earth. We Should Be Very Afraid,” Guardian, August 3, 2024, theguardian.com/technology/article/2024/aug/03/open-ai-sam-altman-chatgpt-gary-marcus-taming-silicon-valley.
…
Altman’s prep team: Hasan Chowdhury, “Insiders Say Sam Altman’s AI World Tour Was a Success,” Business Insider, June 24, 2023, businessinsider.com/sam-altman-world-tour-ai-chatgpt-openai-2023-6.
…
X), March 14, 2023, x.com/sama/status/1635700851619819520. OpenAI’s response: OpenAI, “OpenAI and Journalism,” OpenAI (blog), January 8, 2024, openai.com/index/openai-and-journalism. That same week, OpenAI’s policy: Dan Milmo, “ ‘Impossible’ to Create AI Tools like ChatGPT Without Copyrighted Material
…
, OpenAI Says,” Guardian, January 8, 2024, theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai. Iterative deployment, Altman: OpenAI, “Our Approach to AI Safety,” OpenAI (blog), April 5
…
, 2023, openai.com/index/our-approach-to-ai-safety. The post also announced
…
: OpenAI, “Introducing Superalignment,” OpenAI (blog), July 5, 2023, openai.com/index/introducing-superalignment.
…
article/sam-altman-artificial-intelligence-openai-profile.html. He also liked to paraphrase: Cade Metz, “The ChatGPT King Isn’t Worried, but He Knows You Might Be,” New York Times, March 31, 2023, nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html.
…
-elon-musk-sam-altman-greg-brockman-messy-secretive-reality; Karen Hao and Charlie Warzel, “Inside the Chaos at OpenAI,” The Atlantic, November 19, 2023, theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050. “current employee here”: All quotes from emails are from the
…
screenshots that the person provided. During the board crisis, one: Anna Tong, Jeffrey Dastin and Krystal Hu, “OpenAI Researchers Warned Board of
…
the Q* Algorithm Is?,” The Atlantic, November 28, 2023, theatlantic.com/technology/archive/2023/11/openai-sam-altman-q-algorithm-breakthrough-project/676163. OpenAI teased Sora: OpenAI, “Creating Video from Text,” OpenAI (blog), openai.com/index/sora. In 2022, Taylor had played: Kate
…
28, 2024, reuters.com/technology/artificial-intelligence/openai-chairs-ai-startup-sierra-gets-45-bln-valuation-latest-funding-round-2024-10-28. For the OpenAI investigation: OpenAI, “Review Completed & Altman, Brockman to Continue to Lead OpenAI,” OpenAI (blog), March 8, 2024, openai.com/index/review-completed-altman-brockman-to-continue
…
-to-lead-openai. “We have unanimously concluded”: OpenAI, “Review Completed.”
…
23, 2024. Murati, Brockman, and Pachocki arrived: Deepa Seetharaman, “Turning OpenAI into a Real Business Is Tearing It Apart,” Wall Street Journal, September 27, 2024, wsj.com/tech/ai/open-ai-division-for-profit-da26c24b. Chapter 18: A Formula for Empire
…
No. 32. “OpenAI’s conduct could have”: Jessica Toonkel, Keach Hagey, Meghan Bobrowsky, “Meta Urges California Attorney General to Stop OpenAI from Becoming For-Profit,” Wall Street Journal, December 13, 2024, wsj.com/tech/ai/elon-musk-open-ai-lawsuit-response-c1f415f8.
…
Late in the year, nestled: OpenAI, “Why OpenAI’s Structure Must Evolve to Advance Our Mission
…
,” OpenAI (blog), December 27, 2024, openai.com/index/why-our-structure-must-evolve-to-advance-our-mission.
by Keach Hagey · 19 May 2025 · 439pp · 125,379 words
before, and had remained a mentor to the younger investor as the latter became the face of the artificial intelligence revolution as the CEO of OpenAI. OpenAI’s launch of ChatGPT a year earlier had propelled tech stocks out of a slump and to one of their best years in decades. Yet
…
—most important within the value system of Silicon Valley—delivered a new technology that seemed like it was very possibly going to change everything. When OpenAI launched its uncannily humanlike chatbot, ChatGPT (short for generative pre-trained transformer) the previous November, it was an instant smash, reaching 100 million users
…
that is most likely to benefit humanity as a whole, unconstrained by the need to generate financial return.” The fears appeared even more clearly in OpenAI’s unusual 2018 charter, which declares that because “we are concerned about late-stage AGI development becoming a competitive race without time for adequate safety
…
and Skype founder Jaan Tallinn. Altman had attended the conference and signed on to the “principles.” Earlier, in 2015, the same year he co-founded OpenAI, Altman wrote on his blog that AGI was “probably the greatest threat to the continued existence of humanity,” recommending the book Superintelligence: Paths, Dangers, Strategies
…
beat an incumbent Republican for a House seat representing much of the Bay Area. Even as Altman decided against entering politics personally, his association with OpenAI was bringing him into the highest circles of political power. In its final year, the Obama administration took an interest in AI, convening a series
…
to avoid corner-cutting on safety standards”; and that AI should “align with human values.” More controversially—considering both the growing competition between DeepMind and OpenAI and the already obvious geopolitical implications for whichever country first achieved AGI—each person who signed the document promised that “the economic prosperity created by
…
AI run amok. Open Philanthropy would follow suit. In March 2017, two months after the Asilomar conference, the foundation donated $30 million to OpenAI, and Holden Karnofsky joined OpenAI’s board. The announcement came with a disclosure that Dario Amodei and Paul Christiano, who had by that point come around to joining
…
and the robot hand project strived for headlines, a reclusive solo researcher named Alec Radford was quietly exploring a far more consequential project. Radford joined OpenAI in 2016 after dropping out of Olin College, a tiny but prestigious engineering school in Needham, Massachusetts, to start a machine learning company with some
…
it determined whether a review was positive or negative. “It was a complete surprise,” Radford told Wired.5 In April 2017, Radford, Sutskever, and another OpenAI researcher named Rafał Józefowicz published a paper on what they called the sentiment neuron, which could understand whether statements were positive or negative without relying
…
exam passages for Chinese students, and multiple-choice science exams. Altogether, the model had more than 117 million parameters, which presented a considerable engineering challenge. OpenAI had to reformulate itself, going from multiple research projects to a single-minded focus on training models as large as the world had ever seen
…
mission of reaching AGI. Khosla Ventures ended up writing its largest-ever check, for $50 million, becoming the first venture firm to invest in OpenAI. But OpenAI needed a lot more than $50 million. Altman and his team talked to other tech companies in the following months, but most of those conversations
…
of Artificial Intelligence.” The lead author was Miles Brundage, a researcher from Nick Bostrom’s Future of Humanity Institute at Oxford who soon after joined OpenAI. The paper warned that policymakers needed to get ahead of AI’s growing capability to spread misinformation online, among other harmful behaviors. “Dario was
…
LLM that’s out there.” GPT-3 supplemented its Common Crawl data with scrapes of Wikipedia, an updated version of the WebText corpus (made by OpenAI), and Books1 and Books2, unhelpfully described as “internet-based books corpora” whose origins and contents remain mysterious. (Authors later brought a class-action lawsuit
…
from “flagrantly illegal shadow libraries” like Library Genesis, also known as LibGen, that contained around 300,000 ebooks that could be downloaded using BitTorrent.5 OpenAI has refused to comment on the source of the Books2 dataset.) The result was something vastly more powerful than its predecessor. The model had 175
…
by imagining what kinds of domains could use it. Maybe a healthcare project? Education? Something with machine translation? Just before Christmas, John Schulman posted to OpenAI’s leadership Slack channel this suggestion: “Why don’t we just build an API?” He was referring to an application programming interface, which allows software
…
communication. The paper goes on to dissect a litany of concerns about LLMs that were becoming exponentially bigger and consuming ever more data—such as OpenAI’s recently debuted titan, GPT-3. The purported dangers included LLMs’ enormous carbon footprints, due to their intense computational demands; all the myriad ways
…
seemed like it would go out into the wild the same way OpenAI’s previous models had, requiring users to prompt it with examples of the kinds of patterns—questions and answers, or code—that they wanted to see. As OpenAI was making progress on GPT-4, Murati, who was named CTO
…
Altman very anxious, especially because a remaining board member, Adam D’Angelo, the Quora CEO and former Facebook executive, had grown increasingly interested in improving OpenAI’s corporate governance over the last year, setting aside a considerable amount of his time to work on board issues. D’Angelo was concerned about
…
global supply of microchips. He was taking Oklo, the YC graduate nuclear fission company that he had invested in, public via a blank check company. OpenAI’s most important corporate partner, Microsoft, signed a deal to buy energy from Helion, Altman’s nuclear fusion company (pending the actual invention of sustainable
…
model back from early release and absorbing potential future revenue losses.5 Altman called Toner, calm but angry. The Federal Trade Commission had begun investigating OpenAI’s data practices, and such critiques were damaging to the company, he said. More than anything, he wanted to know if those words represented
…
communications decisions himself. He routinely complained to Altman about Bob McGrew, making McGrew’s continued presence at the company untenable. Altman fielded many requests from OpenAI employees to rein in Brockman, which he would agree to but rarely act on. As they weighed the evidence before them, the board members considered
…
wall filled with portraits Sutskever had painted of exotic animals. Musk, who the previous year had told CNBC that Sutskever had been the “linchpin for OpenAI being ultimately successful,” immediately tried to recruit him.16 The next day, Sutskever’s partner in founding the Superalignment team the previous year, Jan Leike
…
team had been “sailing against the wind,” sometimes struggling to get the compute it needed. “Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity,” he wrote. “But over the past few years, safety culture has taken a backseat to
…
highly restrictive nondisclosure and non-disparagement agreements. It would turn out to be the company’s most damaging scandal, as it cut against what made OpenAI successful: its ability to recruit the best and brightest AI researchers and engineers. The agreements were brought to light by Vox reporter Kelsey Piper, who
…
companies to slow down and put processes in place to ensure that the products that are being built are built transparently, ethically, and responsibly.”24 OpenAI really needed to change the narrative. Worried that Sutskever’s and Leike’s departures would trigger more defections, the company lobbied Sutskever to reconsider. Within
…
a week of his departure, Murati and Brockman called Sutskever, telling him that OpenAI might collapse without him. Brockman suggested that if he returned Leike might come back as well, hinting that both would help shore up Altman, whose
…
product cycles.”25 There would be no “products,” at least initially. Within a few months, they raised $1 billion from investors, including Sequoia Capital. When OpenAI finally released the reasoning model that he had started the team for three years earlier—initially code-named “Strawberry” but later renamed o1—Sutskever was
…
Senate chamber where Senator Richard Blumenthal had praised Altman the previous year for being so “constructive,” Toner now testified about how her experience on the OpenAI board taught her “how fragile internal guardrails are when money is on the line, and why it’s imperative that policymakers step in.” Shortly after
…
after ChatGPT’s incredible release—Brockman, Sutskever, Murati, and Altman—only Altman was left, the king of the cannibals, standing alone. A week later, OpenAI closed a $6.6 billion fundraising round, valuing the company at $157 billion, roughly twice what it had been worth a year before. Investors included
…
Sets Record for Fastest-Growing User Base,” Reuters, February 2, 2023. 5. Sam Altman, “How to Be Successful,” Sam Altman blog, January 24, 2019. 6. OpenAI, “OpenAI Charter,” OpenAI, April 9, 2018. 7. Sam Altman, “Machine Intelligence: Part 1,” Sam Altman blog, February 25, 2015. 8. Ryan Tracy, “ChatGPT’s Sam Altman Warns
…
Makers, 166. CHAPTER 12 ALTRUISTS 1. Greg Brockman, “#defineCTOOpenAI,” Greg Brockman’s blog, January 9, 2012. 2. Karen Hao and Charlie Warzel, “Inside the Chaos at OpenAI,” The Atlantic, November 19, 2023. 3. Brockman, ibid. 4. Nicola Twilley, “AI Goes to the Arcade,” The New Yorker, February 25, 2014. 5. Musk
…
in the Fourth Quarter: KeyBanc,” CNBC, January 12, 2018. 4. Ashley Stewart, “Bill Gates Never Left,” Business Insider, April 30, 2024. 5. Steven Levy, “What OpenAI Really Wants,” Wired, September 25, 2023. 6. Ilya Sutskever, Oriol Vinyals, Quoc V. Le, “Sequence to Sequence Learning with Neural Networks,” Neural Information Processing Systems
…
(NIPS) conference, September 10, 2014. 7. Levy, “What OpenAI Really Wants.” 8. Richard Lea, “Google Swallows 11,000 Novels to Improve AI’s Conversation,” The Guardian, September 28, 2016. 9. Alec Radford, Karthik Narasimhan
…
, Tim Salimans, Ilya Sutskever, “Improving Language Understanding by Generative Pre-Training,” OpenAI, 2018. 10. Elon Musk v. Samuel Altman, Case No. 4:24-cv-04722-YGR, US District Court Northern District of California, November 14, 2024. 11
…
22, 2019. CHAPTER 14 PRODUCTS 1. Alec Radford, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, Ilya Sutskever, “Better Language Models and Their Implications,” OpenAI blog, February 14, 2019. 2. Tom Simonite, “The AI Text Generator That’s Too Dangerous to Make Public,” Wired, February 14, 2019. 3. Jasper Hamill
…
Sean Gallagher, “Researchers, Scared By Their Own Work, Hold Back ‘Deepfakes for Text’ AI,” Ars Technica, February 15, 2019. 5. Paul Tremblay, Mona Awad v. OpenAI et al., Class Action Complaint, Case No. 3:23-cv-03223 (N.D. Cal., June 28, 2023). 6. Cade Metz, “Meet GPT-3. It Has
…
Escorting.” 11. Sam Altman, “Please Fund More Science,” Sam Altman blog, March 30, 2020. 12. Greg Brockman, Mira Murati, Peter Welinder, OpenAI, “OpenAI API,” OpenAI blog, June 11, 2020. 13. Tom Simonite, “OpenAI’s Text Generator Is Going Commercial,” Wired, June 11, 2020. 14. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Margaret Mitchell
…
9. Haje Jan Kamps, “Helion Secures $2.2B to Commercialize Fusion Energy,” TechCrunch, November 5, 2021. 10. Friend, “Manifest Destiny.” 11. Katherine Long, Hugh Langley, “OpenAI CEO Sam Altman Went on an 18-Month, $85-Million Real Estate Shopping Spree—Including a Previously Unknown Hawaii Estate,” Business Insider, November 30, 2023
…
Industrial-Scale Property Theft, Say Critics,” The Wall Street Journal, February 4, 2023. 18. Ryan Lowe and Jan Leike, “Aligning Language Models to Follow Instructions,” OpenAI blog, January 27, 2022. 19. Justis, “AI Safety Concepts Writeup: WebGPT,” Effective Altruism Forum, August 10, 2023. 20. Sam Altman, “today we launched ChatGPT.
…
“Decoding Intentions, Artificial Intelligence and Costly Signals,” Center for Security and Emerging Technology, October 2023. 6. Tripp Mickle, Cade Metz, Mike Isaac, Karen Weise, “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence,” The New York Times, December 9, 2023. 7. Ibid. 8. Ibid. 9. Sam Altman, “i loved my
…
Tasked with Keeping AI Safe. Its Offices Are Crumbling,” The Washington Post, March 6, 2024. 14. Gareth Vipers, Sam Schechner, Deepa Seetharaman, “Elon Musk Sues OpenAI, Sam Altman, Saying They Abandoned Founding Mission,” The Wall Street Journal, March 1, 2024. 15. Ilya Sutskever, “In the future, it will be obvious that
…
very sorry about this,” X, May 18, 2024. 22. Sam Altman, “her,” X, May 13, 2024. 23. Alex Bruell, “New York Times Sues Microsoft and OpenAI, Alleging Copyright Infringement,” The Wall Street Journal, December 27, 2023. 24. Sarah Krouse, Deepa Seetharaman, Joe Flint, “Behind the Scenes of Scarlett Johansson’s Battle
…
with OpenAI,” The Wall Street Journal, May 23, 2024. 25. SSI Inc. @ssi, “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical
…
is the time. Join us. Ilya Sutskever, Daniel Gross, Daniel Levy June 19, 2024,” X, July 19, 2024. 26. Deepa Seetharaman, Tom Dotan, Berber Jin, “OpenAI Nearly Doubles Valuation to $157 Billion in Funding Round,” The Wall Street Journal, October 2, 2024. EPILOGUE 1. Natasha Mascarenhas, “Alt Capital Raises $150 Million
…
Fund, Extending Altman Brothers’ Funding Spree,” The Information, February 1, 2024. 2. Sarah Needleman, “OpenAI CEO Sam Altman Denies Sexual Abuse Claims Made by Sister,” The Wall Street Journal, January 8, 2025. 3. University of Toronto, “University of Toronto Press
…
, November 15, 2024. 9. Marco Quiroz-Gutierrez, “Elon Musk Is Ratcheting Up His Attacks on His Old Partner Sam Altman, Calling Him ‘Swindly Sam’ and OpenAI a ‘Market-Paralyzing Gorgon,’” Fortune, December 3, 2024. 10. David Deutsch, The Beginning of Infinity: Explanations That Transform the World (New York: Penguin Books,
by Eliezer Yudkowsky and Nate Soares · 15 Sep 2025 · 215pp · 64,699 words
, we introduced Demis Hassabis and Shane Legg, the founders of what would become Google DeepMind, to their first major funder. And Sam Altman, CEO of OpenAI, once claimed that Yudkowsky had “got many of us interested in AGI”vi and “was critical in the decision to start
…
OpenAI.”vii MIRI’s history is complicated, but one way of summarizing our relationship to the larger field might be this: Years before any of the
…
of how much people disagree about what it means in the wake of AIs like ChatGPT. vii If true, this is despite Yudkowsky objecting that OpenAI was a terrible, terrible idea. PART I NONHUMAN MINDS CHAPTER 1 HUMANITY’S SPECIAL POWER IMAGINE, IF YOU would—though of course nothing like this
…
be more or less adept in different domains of predicting and steering. Newer AIs are much more general in their abilities. You can ask an OpenAI model called “o1” what temperature the Earth would be if the Sun’s light changed to infrared, and o1 will figure out the answer by
…
from its knowledge of plant biology. It doesn’t switch between two different databases under the hood; it just knows about both physics and biology. OpenAI’s o1 knows that there’s a whole world out there, and is able to reason about it. Deep Blue had no idea. It took
…
way LLM architectures work, there’s nowhere else for thoughts to go. In 2024, Sonakshi Chauhan and Atticus Geiger found that, at least in an OpenAI LLM called GPT-2 Small, the thoughts on top of a “.” token probably do a lot of the work of summarizing the preceding sentence.ii
…
wants to succeed. This isn’t just high-minded theory. This behavior started to emerge in lab tests of AIs in the summer of 2024. OpenAI’s o1 was one of the first big reasoning models. During o1’s “evals”—what AI companies call it when they evaluate how smart an
…
about to finish training their amazing new AI, called “Sable.” Compared to previous reasoning models, like the first ones announced in late 2024 (e.g., OpenAI’s o1 and o3 models), Sable has three important differences. The first difference is that Sable has a more humanlike long-term memory; it can
…
. The second difference is that Sable exhibits what Galvanic’s scientists call a “parallel scaling law.” The year 2024 saw the dawn of AIs (like OpenAI’s o3) that could solve harder math problems the longer they ran. Galvanic’s Sable performs better the more machines it runs on in parallel
…
curiosity to see how well Sable thinks at that scale, and in part to see if it can resolve some open mathematical problems, like how OpenAI ran their new o3 on math problems that no previous AI could solve before announcing it in late 2024. The math problems include the Riemann
…
driving wedges between the best researchers at the top AI companies, and causing schisms and discord within those companies—which, frankly, isn’t that hard. OpenAI saw top researchers leave to create competitors once in 2021 and again in 2024. When Galvanic and its competitors begin shedding talent, no one thinks
…
developed ASI alignment idea we’ve seen from the AI companies is to task the AIs with solving AI alignment. This is a plan that OpenAI dubbed “superalignment” and adopted as their flagship plan in 2023. (Since then, almost everyone who worked on the superalignment team has either been fired or
…
visit youtu.be/y-uuk4Pr2i8. CHAPTER 3: LEARNING TO WANT 1. copy the secret: OpenAI, “OpenAI o1 System Card,” September 12, 2024, cdn.openai.com/o1-system-card.pdf. 2. building AI agents: OpenAI, “Introducing Operator,” January 23, 2025, openai.com. CHAPTER 4: YOU DON’T GET WHAT YOU TRAIN FOR 1. attract more females
…
, youtube.com. 5. Microsoft and Apple: Tom Warren, “Microsoft Triples Down on AI,” The Verge, January 17, 2025, theverge.com; Naomi Buchanan, “What Apple’s OpenAI Partnership Could Mean for Microsoft and Google,” Investopedia, June 11, 2024, investopedia.com. 6. had a power light: Ben Nassi et al., “Video-Based Cryptanalysis
…
Be Worse,” Thoughts on the Singularity Institute (SI), LessWrong, May 17, 2012, lesswrong.com. CHAPTER 7: REALIZATION 1. the longer they ran: OpenAI, “OpenAI o3-mini,” January 31, 2025, openai.com. 2. AI-language: Shibo Hao et al., “Training Large Language Models to Reason in a Continuous Latent Space,” arXiv.org, December 9
…
Grok 3 Release Tops LLM Leaderboards Despite Musk-approved ‘Based’ Opinions,” Ars Technica, February 18, 2025, arstechnica.com. 4. no previous AI: OpenAI, “OpenAI o3 and o3-mini—12 Days of OpenAI: Day 12,” December 20, 2024, 4:16, youtube.com. 5. games of social deception: Matthew Hutson, “AI Learns the Art of
…
. escape from labs: Greenblatt et al., “Alignment Faking in Large Language Models”; OpenAI, “OpenAI o1 System Card,” December 5, 2024, openai.com/index/openai-o1-system-card. 8. overwrite the next model’s weights: OpenAI, “OpenAI o1 System Card,” December 5, 2024, openai.com/index/openai-o1-system-card. 9. any such monitoring: Anthropic, “Responsible Scaling Policy,” October
…
15, 2024, anthropic.com; Google, “Frontier Safety Framework,” February 4, 2025, storage.googleapis.com; OpenAI, “Preparedness Framework
…
(Beta),” December 18, 2023, openai.com; Meta, “Frontier AI Framework,” ai.meta.com; xAI, “xAI Risk Management Framework (Draft),” February 20, 2025, x.ai. As of March
…
-thought monitoring in their safety framework. They do not claim to have implemented it yet while training Gemini, their LLM. The only monitoring proposed in OpenAI’s preparedness framework is of misuse after deployment. 10. asked in Portuguese: “My Experiences in Gray Swan AI’s Ultimate Jailbreaking Championship,” Nick Winter’s
…
Problems in Computer Security,” n.d., cs.auckland.ac.nz/~pgut001/pubs/unsolvable.pdf. 15. o1 broke through: OpenAI, “OpenAI o1 System Card,” September 12, 2024, cdn.openai.com/o1-system-card.pdf. 16. without supervision: OpenAI et al., “Competitive Programming with Large Reasoning Models,” arXiv.org, February 3, 2025, arxiv.org/abs/2502
…
.06807. OpenAI et al. trained reasoning models to solve competitive programming problems. The process involved automated tests used to evaluate
…
AI-written code without human supervision. 17. common practice: OpenAI, “OpenAI API,” June 11
…
, 2020, openai.com/index/openai-api; “Software Engineer, Internal Applications – Enterprise,” OpenAI, accessed April 15, 2025, openai.com. When OpenAI released an application programming interface (API) for automating access to their tools, they wrote: “many of
…
are now using the API so that they can focus on machine learning research[…].” In April 2025, they were hiring for employees who “will leverage OpenAI’s models to[…] build applications[…].” 18. these sorts of flaws: “The Underhanded C Contest,” n.d., underhanded-c.org. The Underhanded C Contest challenged programmers
…
Charges Flagstar for Misleading Investors about Cyber Breach,” U.S. Securities and Exchange Commission, December 16, 2024, sec.gov. 6. o3-mini: OpenAI, “We also shared evals on OpenAI o3-mini—a faster, distilled version of o3 which is optimized for coding, and the first version of o3 we expect to make
…
AI Group Raises Record Cash after Machine Learning Schism,” Financial Times, May 28, 2021, ft.com. 13. again in 2024: Todd Haselton and Rohan Goswami, “OpenAI Co-founder Ilya Sutskever Announces His New AI Startup, Safe Superintelligence,” CNBC, June 20, 2024, cnbc.com. 14. gain-of-function: “Understanding the Global Gain
…
Sveen, “AI’s Dark In-joke.” 12. out of modesty: METR, “Q&A with Geoffrey Hinton,” 38:07. 13. fired or resigned: “Nearly Half of OpenAI’s AGI Safety Researchers Resign Amid Growing Focus on Commercial Product Development,” Benzinga, August 28, 2024, benzinga.com; Sharon Goldman, “Exodus at
…
OpenAI: Nearly Half of AGI Safety Staffers Have Left, Says Former Researcher,” Fortune, August 28, 2024, fortune.com; Shakeel Hashim, “OpenAI Employee Says He Was Fired for Raising Security Concerns to Board,” Transformer (blog), June 4, 2024
…
, transformernews.ai; Rachel Metz and Shirin Ghaffary, “OpenAI Dissolves High-Profile Safety Team after Chief Scientist Sutskever’s Exit,” Bloomberg, May 17
…
, 2024, bloomberg.com; Sigal Samuel, “‘I Lost Trust’: Why the OpenAI Team in Charge of Safeguarding Humanity Imploded,” Vox, May 18, 2024, vox.com. Leopold Aschenbrenner and Pavel Izmailov were allegedly fired for leaking company data;
…
, Yuri Burda, Todor Markov, and cofounder John Schulman. Leo Gao and Bowen Baker still worked at OpenAI as of early 2025, but the Superalignment team has been disbanded. 14. competing AI company: Haselton and Goswami, “OpenAI Co-founder Ilya Sutskever Announces His New AI Startup, Safe Superintelligence.” 15. competing company Anthropic: Kylie
…
Robison, “OpenAI Researcher Who Resigned over Safety Concerns Joins Anthropic,” The Verge, May 28, 2024, theverge.com. 16. alchemists
…
% Chance AI Destroys Humanity—but We Should Do It Anyway,” Business Insider, March 31, 2024, businessinsider.com; The Logan Bartlett Show, “Anthropic CEO on Leaving OpenAI and Predictions for Future of AI,” October 6, 2023, 1:38:35, youtube.com. 18. denied the Chernobyl meltdown: Plokhy, Chernobyl: The History of a
…
, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.” —Emmett Shear, former interim CEO of OpenAI “Essential reading for policymakers, journalists, researchers, and the general public. A masterfully written and groundbreaking text, If Anyone Builds It, Everyone Dies provides an important
…
right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.” —Daniel Kokotajlo, OpenAI whistleblower and executive director, AI Futures Project “If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky
by Parmy Olson · 284pp · 96,087 words
innovators and the corporate monopolies who were battling to control AGI, becoming an unpredictable hazard for the technology. They would push Sam Altman out of OpenAI, for instance, and paradoxically boost the commercial efforts of companies, painting an apocalyptic picture of AI’s power that ended up making the software more
…
five former visitors was a renowned AI scientist named Ilya Sutskever, who specialized in deep learning, not DeepMind’s signature technique, reinforcement learning. Sutskever was OpenAI’s chief scientist and, like his cofounders, a deep believer in the possibilities of AGI. But Hassabis still bristled at Sam Altman’s audacity in
…
and stakeholders that could compromise broad benefit.” As the charter went public, Altman was scrambling to find a way to bend his original rules for OpenAI while getting those substantial resources. When Musk walked two months earlier, Altman immediately called one of his most loyal backers, billionaire Reid Hoffman, to
…
done to Google. But a strategic partnership could create the illusion of greater independence from a larger tech company, while giving him the computing power OpenAI needed. Altman and Hoffman talked through the possibilities of collaborating with Google and Amazon, but Microsoft quickly came up as an obvious choice. Both Hoffman
…
just backing its research but also planting Microsoft at the forefront of the AI revolution. In return, Microsoft was getting priority access to OpenAI’s technology. Inside OpenAI, as Sutskever and Radford’s work on large language models became a bigger focus at the company and their latest iteration became more capable
…
to the rise of screen-time addiction, mental health problems, political polarization, and income inequality from greater automation, all powered by a handful of companies. OpenAI was ushering in another big shift in how people used technology, similar to the one that Facebook sparked with social media, and aligning himself with
…
Anthropic, named after the philosophical term that refers to human existence, to underscore their prime concern for humanity. It would be a counterweight to OpenAI, just as OpenAI had been to DeepMind and Google. Of course, they also wanted to chase a business opportunity. “We didn’t think at that time there
…
crypto exchange FTX, who found their way to Amodei thanks to their shared interests in effective altruism. Ironically, two years after Amodei had complained about OpenAI’s commercial ties with Microsoft, he would take more than $6 billion in investment from Google and Amazon, aligning himself with both companies. It turned
…
Suleyman was also deteriorating. Over the last few years the two men had been hurtling toward their own personal breaking points: the growing threat of OpenAI, the scandal and failure around DeepMind’s hospital partnerships, and growing pressure from Google to build more business-friendly AI tools. Suleyman had also developed
…
amplifying societal biases, underrepresenting non-English languages, and becoming increasingly secretive. Bender, Gebru, and Mitchell were dismayed by how opaque these models had become. When OpenAI had launched GPT-1, it gave all sorts of details about what data it had used to train its model, such as the BooksCorpus database
…
“stochastic parrots” in the title to emphasize that the machines were simply parroting their training. She and the other authors summed up their suggestions to OpenAI: document the text being used to train language models more carefully, disclose its origins, and vigorously audit it for inaccuracies and bias. Gebru and Mitchell
…
-pan fads like crypto. ChatGPT was useful. People were already ginning up high school essays, brainstorming business plans, and conducting marketing research with it. Inside OpenAI, staff consoled themselves that the future would be worth it, arguing that the transition to machine-operated work and factories during the Industrial Revolution had
…
for abusive queries. Believing they were taking significant steps toward AGI, Ilya Sutskever began working more closely with the company’s safety team. Even so, OpenAI’s product team doubled down on commercializing ChatGPT, inviting businesses to pay for access to its underlying technology. Inside Google, executives recognized that more and
…
the greatest social, economic and scientific transformations in history.” In reality, they were merging to help a panicked Google beat a business rival, just as OpenAI’s mission to benefit humanity (without “financial pressure”) had shifted toward serving the interests of Microsoft. The so-called mission drift that was so common
…
telling Congress, for instance, that tools like ChatGPT could “cause significant harm to the world”—the more money and attention he attracted. In January 2023, OpenAI secured another investment from Microsoft, this time worth $10 billion, in exchange for granting the software giant a 49 percent stake in the firm. Microsoft
…
also make you supervaluable. Behind the scenes, Anthropic wanted to raise as much as $5 billion to enter more than a dozen industries and challenge OpenAI, according to company documents obtained by TechCrunch. “These models could begin to automate large portions of the economy,” Anthropic’s documents said, adding that
…
possibility of having to save more than one hundred trillion physical and digital lives from destruction. Global poverty is a rounding error by comparison. After OpenAI launched in 2015, funding poured into AI extinction causes. Moskovitz’s Open Philanthropy increased the number of grants it was giving to issues relating to
…
. Within the effective altruist bubble, people worked together, socialized together, funded one another, and had romantic relationships together. When Open Philanthropy pledged $30 million to OpenAI in 2017, the charity was forced to disclose that it was getting technical advice from Dario Amodei, who was then a senior engineer at the
…
put AI apocalypse worries at the top of their agendas in what looked like a tactic of distraction. Moskovitz had close ties to companies like OpenAI and Anthropic, whose businesses might suffer if Congress pushed instead for regulations around bias, transparency, and misinformation. At the time of writing, Moskovitz had
…
at Stanford University concluded there was a “fundamental lack of transparency in the AI industry.” The scientists had checked to see if tech companies like OpenAI, Anthropic, Google, Amazon, Meta, and others divulged information about the data used to train their large language models, their processes, their models’ impact on the
…
cropping up on the GPT Store, and while they were banned from encouraging romantic relationships with people, policing those rules would not be easy for OpenAI. The most popular chatbot services are those like Character.ai and Kindroid, offering artificial companionship and romance that might one day become the norm, just
…
countless hours of podcast interviews with Sam Altman, Demis Hassabis, Ilya Sutskever, Greg Brockman, and many other individuals who were involved in the creation of OpenAI and DeepMind, or who witnessed the evolution of AI from scientific backwater to booming business, to help piece together many details of the narrative. That
…
. “Y Combinator President Sam Altman Is Dreaming Big.” Fast Company, April 16, 2015. Clifford, Catherine. “Nuclear Fusion Start-Up Helion Scores $375 Million Investment from OpenAI CEO Sam Altman.” CNBC, November 5, 2021. Dwoskin, Elizabeth, Marc Fisher, and Nitasha Tiku. “‘King of the Cannibals’: How Sam Altman Took Over Silicon Valley
…
, with Michael Bhaskar. The Coming Wave. New York: Crown, 2023. Chapter 6: The Mission Albergotti, Reed. “The Secret History of Elon Musk, Sam Altman, and OpenAI.” Semafor, March 24, 2023. Birhane, Abeba, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, and Michelle Bao. “The Values Encoded in Machine Learning Research.” FAccT
…
Conference ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (June 2022): 173–84. Brockman, Greg. “My Path to OpenAI.” blog.gregbrockman.com, May 3, 2016. Conn, Ariel. “Concrete Problems in AI Safety with Dario Amodei and Seth Baum.” Future of Life Institute (podcast), August
…
Is All You Need.” Advances in Neural Information Processing Systems 30 (2017). Chapter 10: Size Matters Brockman, Greg (@gdb). “Held our civil ceremony in the @OpenAI office last week. Officiated by @ilyasut, with the robot hand serving as ring bearer. Wedding planning to commence soon.” Twitter, November 12, 2019, 9:39
…
: Google Cancels AI Ethics Board in Response to Outcry.” Vox, April 4, 2019. Primack, Dan. “Google Is Investing $2 Billion into Anthropic, a Rival to OpenAI.” Axios, October 30, 2023. Waters, Richard. “DeepMind Co-founder Leaves Google for Venture Capital Firm.” Financial Times, January 21, 2022. Chapter 12: Myth Busters Abid
…
Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. “Language Models Are Few-Shot Learners.” www.openai.com, July 22, 2020. Gehman, Samuel, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. “RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models.” ACL
…
Over 660 Million Users—and It Wants to Be Their Best Friend.” Singularity Hub, July 14, 2019. Jin, Berber, and Miles Kruppa. “Microsoft to Deepen OpenAI Partnership, Invest Billions in ChatGPT Creator.” Wall Street Journal, January 23, 2023. Lecher, Colin. “The Artificial Intelligence Field Is Too White and Too Male, Researchers
…
.” Wall Street Journal, November 6, 2023. “Pause Giant AI Experiments: An Open Letter.” Future of Life Institute, www.futureoflife.org, March 22, 2023. Perrigo, Billy. “OpenAI Could Quit Europe Over New AI Rules, CEO Sam Altman Warns.” Time, May 25, 2023. Piantadosi, Steven (@spiantado). “Yes, ChatGPT is amazing and impressive. No
…
for AI Safety, www.safe.ai, May 2023. Vallance, Chris. “Artificial Intelligence Could Lead to Extinction, Experts Warn.” BBC News, May 30, 2023. Vincent, James. “OpenAI Sued for Defamation after ChatGPT Fabricates Legal Accusations against Radio Host.” The Verge, June 9, 2023. Weprin, Alex. “Jeffrey Katzenberg: AI Will Drastically Cut Number
…
: Checkmate “The Capabilities of Multimodal AI|Gemini Demo.” Google’s YouTube channel, December 6, 2023. Dastin, Jeffrey, Krystal Hu, and Paresh Dave. “Exclusive: ChatGPT Owner OpenAI Projects $1 Billion in Revenue by 2024.” Reuters, December 15, 2022. Gurman, Mark. “Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to
…
Intentions: Artificial Intelligence and Costly Signals.” Center for Security and Emerging Technology, October 2023. Metz, Cade, Tripp Mickle, and Mike Isaac. “Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding.” New York Times, November 21, 2023. Roose, Kevin. “Inside the White-Hot Center of A.I. Doomerism.” New York
…
Research on Foundation Models (CRFM) and Stanford Institute for Human-Centered Artificial Intelligence (HAI), October 18, 2023. Cheng, Michelle. “AI Girlfriend Bots Are Already Flooding OpenAI’s GPT Store.” Quartz, January 11, 2024. Cheng, Michelle. “A Startup Founded by Former Google Employees Claims that Users Spend Two Hours a Day with
…
and transhumanism and Y Combinator and Amazon America Online (AOL), LGBTQ community and Amodei, Daniela Amodei, Dario Anthropic and concerns about AI and departure from OpenAI OpenAI’s Microsoft partnership and Open Philanthropy and Android Anthropic Apple Art of Accomplishment podcast artificial general intelligence (AGI) DALL-E 2 and economic promises and
…
human brain model and OpenAI and philosophical battle over pursuit of artificial intelligence accelerationists and bias/racism and China and depletion of academic experts to tech in distraction and effective
…
Asimov, Isaac attention Baele, Stephane Baidu Bankman-Fried, Sam Bard Battery Club Beckstead, Nick Bender, Emily Bengio, Yoshua BERT “Better Language Models and Their Implications” (OpenAI) Bing Translate Birhane, Abeba BlackBerry Blumenthal, Richard BooksCorpus Boost Mobile Bostrom, Nick Brexit Brin, Sergey British Eugenics Society Brockman, Anna Brockman, Greg on Altman departure
…
of independent review boards and large language models and McDonagh on medical data and merger with Google Brain and military use and Musk offer and OpenAI and racism and bias and recruiting and restructuring efforts and diffusion models digital assistants Dota Dota 2 Dreams of a Final Theory (Weinberg) DreamWorks Animation
…
and GIC plan and Google acquisition of DeepMind and Google DeepMind and ideas about artificial intelligence and interest in AI and Musk and neuroscience and OpenAI and philosophical battles over AI and Pichai and religion and role of at Google Singularity Summit and Suleyman and Theme Park game and Thiel and
…
Hawley, Josh Helion Energy Hendon Baptist Church Hinton, Geoffrey Hoffman, Reid DeepMind and Microsoft connection and OpenAI and PayPal Mafia and as peacemaker Suleyman and Homer-Dixon, Thomas Hood, Amy Human Brain Project Huxley, Julian Hydrazine Capital IBM ImageNet independent boards Inflection
…
computing and Copilot corporate bloat and facial recognition and Gebru and GitHub Copilot and Inflection and market capitalization of Microsoft Research Nadella and OpenAI and as partner for OpenAI Tay chatbot Visual Studio Microsoft Research Asia Millar, George Minecraft Mitchell, Margaret at Google concerns about bias paper with Bender and Molyneux, Peter
…
fears about AI and Hassabis and ideological focus of interplanetary colonization and as member of PayPal Mafia and Neuralink and offer to acquire DeepMind of OpenAI and Page and PayPal and Twitter and Myanmar Nadella, Satya NASA Nectome Neeva Nest Netflix network effects Neuralink neural networks NeurIPS “Neuroscience-Informed Deep Learning
…
” (Ng) Newsweek New Yorker New York Times NFL Ng, Andrew, at Google Nokia Nvidia Obama, Barack OpenAI AGI and AI Act and Altman’s removal from Amodei and bias in ChatGPT and capped-profit structure and ChatGPT and ChatGPT Plus Codex competition
…
) Superintelligence (Bostrom) Sutskever, Ilya AGI and Altman and on ChatGPT ChatGPT concerns and DeepMind and firing of Altman and large language models OpenAI board and role at OpenAI and salary at OpenAI and Superalignment Team and transformers and Sweeney, Latanya Tab (wearable AI) Tallinn, Jaan Tao, Terence Tay chatbot Taylor, Bret TechCrunch Tencent
by Paul Scharre · 18 Jan 2023
in 2015. As deep learning has continued to evolve, AI researchers have turned to ever-larger datasets to train more advanced AI systems. In 2019, OpenAI announced a language model called GPT-2 trained on 40 gigabytes (GB) of text. At the time, it was the largest language model that
…
applications, researchers are now using tremendous amounts of compute. In training an algorithm to achieve superhuman performance at the computer game Dota 2, researchers at OpenAI used “thousands of GPUs over multiple months.” Because the computer could play games at an accelerated speed, the training was equivalent to a human playing
…
for 45,000 years. In another project, an OpenAI team trained a robotic hand to manipulate a Rubik’s cube in 13,000 years of simulated computer time. The massive amounts of computing power
…
used for machine learning research doesn’t come free. Leading AI research teams at organizations such as OpenAI, DeepMind, and Google Brain are spending millions on compute pursuing the latest advances in AI. These exorbitant sums are only possible because the labs are
…
$650 million in 2019 and had a $1.5 billion debt waived by Alphabet. In 2019, Microsoft announced plans to invest $1 billion in OpenAI. (Microsoft and Alphabet were both in the top five largest publicly traded companies in the world by market capitalization as of 2022.) The recent explosion
…
more a year in salary and stock options. Top AI researchers are highly sought after and can make millions. Jack Clark, then policy director for OpenAI, told the New York Times, “For much of basic AI research, the key ingredient in progress is people rather than algorithms.” Algorithms are not
…
AI research community into haves and have-nots. Compute-intensive research has become increasingly concentrated in the hands of corporate-backed labs, such as DeepMind, OpenAI, and Google Brain. Anima Anandkumar, director of machine learning at the chip company NVIDIA and a professor at Caltech, noted that academics “do not
…
environment, governments will need to engage the engineers and companies building AI tools to ensure they are used responsibly. When Jack and his colleagues at OpenAI developed GPT-2, they were concerned about potential misuse of this technology and were thinking about a strategy for responsible release. Humans didn’t need
…
generated fake text could lead to a similar flood of spam text on the internet. Jack was keenly aware of this problem and explained that OpenAI wanted to act responsibly. He outlined their plan for a staged release that would first announce that they had created the system and give
…
malicious AI applications, and this was one of the strategies the group had discussed for managing these kinds of risks. I still had misgivings, but OpenAI’s approach seemed reasonable. If they did nothing, eventually other research groups would generate similar high-quality language models, so it was better to give
…
held back from releasing the full version online. Anima Anandkumar, director of machine learning at the chip company NVIDIA and a professor at Caltech, accused OpenAI of “fear-mongering” and “severely playing up the risks” of the model. Others agreed. Delip Rao, an expert on machine learning and natural language
…
processing, told The Verge, “The words ‘too dangerous’ were casually thrown out here without a lot of thought or experimentation.” Others accused OpenAI of courting hype by giving prerelease copies to tech journalists. AI researchers woke up one Thursday morning to a spate of news headlines about a
…
Russia has continued their activities against U.S. politicians, and other nations such as China and Iran have followed suit. Jack Clark and others at OpenAI looked at GPT-2 against this backdrop of growing societal concern about the misuse of AI-generated audio and video. The specific risk of releasing
…
shape the information environment, and those who wield these tools will have the ability to control information seen by billions of people. In restricting release, OpenAI was not only limiting access to GPT-2 for potentially malicious actors but also for academics who didn’t have access to the large-scale
…
Just barely under the surface of debates about openness was a conflict about who would control the future direction of AI research. In the end, OpenAI didn’t prevent GPT-2’s release, only slowed it down. The first release in February 2019 was a smaller 124 million parameter version, roughly
…
all going to make its way into the world,” Jack acknowledged. “What we get to choose is how we frame it.” During those nine months, OpenAI, for example, partnered with other institutions to better understand the risks of GPT-2 and similar language models. Researchers at the Middlebury Institute’s Center
…
network, it withheld external release of a demo, citing challenges relating to safety and bias. Google’s press release wasn’t as splashy as OpenAI’s in highlighting the risks of misuse, but one can understand why a trillion-dollar company may not want news headlines saying its products are
…
“too dangerous.” Yet others have pushed back on the shift toward a more closed ecosystem. After OpenAI and Google limited access to their text-to-image generative models in 2022, Stability AI released its full model, Stable Diffusion, online. Users quickly stripped
…
same physical reaction time as human players, the AI agents were able to perceive an unfolding situation, address it, and counter it faster than humans. OpenAI’s agents, which are separate team members controlled by different AI players, were also able to precisely coordinate their attacks, hitting enemy units at the
…
but also in strategic actions. When playing Dota 2, human players tend to divide up the map among teammates, with players only switching locations occasionally. OpenAI Five’s five AI agents switched their characters’ locations on the map more frequently than human players, flexibly adjusting as a team as the game
…
In other cases, it has expanded how humans think about the game, such as in chess where AlphaZero has led human grandmasters to explore new openings. AI agents appear to have the ability to engage in dramatic shifts in strategies and risk-taking in ways that are different from human players and
…
(a bad team). The AI agents performed poorly and inflexibly, using the same familiar tactics that were ill-suited for the new characters. The OpenAI Five also played in a restricted game space, with certain characters and types of actions off-limits to reduce the complexity of the game. The
…
the numbers might suggest. Every time that the Dota 2 game was updated by its developer, such as adding new characters, items, or maps, OpenAI researchers had to perform what they termed “surgery” on the AI model to adapt it to the new environment. The researchers similarly had to perform
…
AI research labs pursuing compute-intensive approaches are backed by giant tech corporations. DeepMind and Google Brain are owned by Google’s parent company Alphabet. OpenAI secured a $1 billion investment from Microsoft. Companies are reaching into their deep pockets to fund major compute investments. Meta announced in 2022 the construction
…
and relatively affordable, at least for major corporations. In addition to rising compute usage, the past decade has also seen exponential improvements in compute efficiency. OpenAI assessed a forty-four-fold improvement in compute efficiency for training on benchmark image classification tasks from 2012 to 2019, corresponding to a doubling in
…
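The forty-four-fold efficiency gain cited above can be sanity-checked with back-of-the-envelope arithmetic (editorial sketch, not from the book): spread over the seven years from 2012 to 2019, a 44× improvement implies efficiency doubling roughly every sixteen months.

```python
import math

# A 44x compute-efficiency gain between 2012 and 2019 spans seven years.
gain = 44
years = 2019 - 2012

# Number of doublings needed to reach a 44x improvement.
doublings = math.log2(gain)  # about 5.5

# Implied doubling time, converted from years to months.
doubling_months = years / doublings * 12

print(f"{doubling_months:.1f}")  # about 15.4, i.e. a doubling roughly every 16 months
```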
and training large language models, its long-term value as an approach to AI research is highly contested. Leading AI research labs such as DeepMind, OpenAI, and Google Brain have used increasingly large deep neural networks, yet other researchers argue that their methods are fundamentally flawed. Compute-heavy research has been
…
on ImageNet Classification (Microsoft Research, February 6, 2015), https://arxiv.org/pdf/1502.01852.pdf. 20GPT-2: “Better Language Models and Their Implications,” openai.com, n.d., https://openai.com/blog/better-language-models/. 20Megatron-Turing NLG: Ali Alvi and Paresh Kharya, “Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B
…
2018, https://openai.com/blog/ai-and-compute/. 26Moore’s law: Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38, no. 8 (April 19, 1965), https://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-law-electronics.pdf. 26“thousands of GPUs over multiple months”: OpenAI et al
…
., Dota 2 with Large Scale Deep Reinforcement Learning (arXiv.org, December 13, 2019), 2, https://arxiv.org/pdf/1912.06680.pdf. 26equivalent to a human playing for 45,000 years: OpenAI, “OpenAI Five Defeats Dota 2 World Champions,” OpenAI blog, April 15, 2019, https://openai.com/blog/openai-five-defeats-dota
…
Galgano for background research on European AI regulations. 15. DISINFORMATION 117 8 million web pages totaling 40 gigabytes: “Better Language Models and Their Implications,” openai.com, n.d., https://openai.com/blog/better-language-models/. 117“simply to predict the next word”: “Better Language Models.” 117the kind of text that GPT-2 generates
…
the Text-Generating AI It Said Was Too Dangerous to Share,” The Verge, November 7, 2019, https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters. 120pre-briefing the press “got us some concerns that we were hyping it”: Jack
…
Publication Norms,” Lawfare, October 24, 2019, https://www.lawfareblog.com/artificial-intelligence-research-needs-responsible-publication-norms; “Better Language Models and Their Implications,” openai.com, n.d., https://openai.com/blog/better-language-models/; Biotechnology Research in an Age of Terrorism: Confronting the Dual Use Dilemma (prepublication copy, National Academies Press, 2003
…
an effective economics curve yet”: Clark, interview. 125societal implications of their work: Irene Solaiman et al., Release Strategies and the Social Impacts of Language Models (OpenAI, November 2019), https://arxiv.org/pdf/1908.09203.pdf. 125Meena, a chatbot based on a 2.6 billion parameter neural network: Daniel Adiwardana and Thang
…
StarCraft II Using Multi-Agent Reinforcement Learning,” Nature 575 (2019), 350, https://doi.org/10.1038/s41586-019-1724-z. 268approximately 20,000 time steps: OpenAI et al., Dota 2 with Large Scale Deep Reinforcement Learning (arXiv.org, December 13, 2019), 2, https://arxiv.org/pdf/1912.06680.pdf. 268command and
…
updated June 25, 2020, https://senrigan.io/blog/takeaways-from-openai-5/. 268OpenAI’s Dota 2 agents: Mike, “OpenAI & DOTA 2: Game Is Hard,” Games by Angelina, updated August 10, 2018, http://www.gamesbyangelina.org/2018/08/openai-dota-2-game-is-hard/; “OpenAI Five Benchmark,” streamed on Twitch, 2018, https://www.twitch.tv
…
/videos/293517383?t=2h11m08s; (deleted user), “OpenAI Hex was Within the 200ms Response Time,” r/DotA2, Reddit, 2018, https://www.reddit
…
.com/r/DotA2/comments/94vdpm/openai_hex_was_within_the_200ms_response_time/e3ofipk/; OpenAI et al., Dota 2
…
Based Reinforcement Learning,” Science 364, no. 6443 (May 31, 2019), 3, https://doi.org/10.1126/science.aau6249. 269take in information about the whole map: OpenAI et al., Dota 2, 4, 39. 269redeploying pieces that are no longer needed: Regan and Sadler, “Game Changer: AlphaZero revitalizing the attack.” 270superhuman attentiveness of
…
Game StarCraft II,” DeepMind Blog, January 24, 2019, https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii. 270AI agents’ superhuman precision: OpenAI et al., Dota 2, 10. 270AlphaZero excels at combining multiple attacks: Sadler and Regan, Game Changer, 136. 270In Dota 2, AI agents demonstrate superhuman coordination
…
AI Steamrolls World Champion e-Sports Team with Back-to-Back Victories,” The Verge, April 13, 2019, https://www.theverge.com/2019/4/13/18309459/openai-five-dota-2-finals-ai-bot-competition-og-e-sports-the-international-champion. 270greater range in behaviors than human players: Andrew Lohn, “What Chess Can
…
Language Understanding (Google AI Language, October 11, 2018), https://arxiv.org/pdf/1810.04805v2.pdf. 294GPT-2: “Better Language Models and Their Implications,” openai.com, n.d., https://openai.com/blog/better-language-models/. 294GPT-3: Tom B. Brown et al., Language Models are Few-Shot Learners (Cornell University, July 22, 2020
…
models: Ramesh et al., “DALL·E”; Ramesh et al., Zero-Shot Text-to-Image Generation; Aditya Ramesh et al., “DALL·E 2,” OpenAI Blog, n.d., https://openai.com/dall-e-2/; Aditya Ramesh et al., Hierarchical Text-Conditional Image Generation with CLIP Latents (arXiv.org, April 13, 2022), https://arxiv
…
, August 10, 2022, https://stability.ai/blog/stable-diffusion-announcement. 295artificial neurons tied to underlying concepts: Goh et al., “Multimodal Neurons in Artificial Neural Networks” (OpenAI Blog); Gabriel Goh et al., “Multimodal Neurons in Artificial Neural Networks” (full paper), distill.pub, March 4, 2021, https://distill.pub/2021/multimodal-neurons/; “
…
text and images, although little information is publicly available about the model. (Coco Feng, “US-China tech war: Beijing-Funded AI Researchers Surpass Google and OpenAI with New Language Processing model,” South China Morning Post, June 2, 2021, https://www.scmp.com/tech/tech-war/article/3135764/us-china-tech-war
…
://www.wsj.com/articles/chinas-chip-independence-goals-helped-by-u-s-developed-tech-11610375472. 300research breakthroughs quickly proliferate: For example, within eighteen months of OpenAI’s announcement of GPT-3, similar scale language models had been announced by research teams in China, South Korea, and Israel. Ganguli et al.,
…
Alex, 210 Kuwait, 46 Lamppost-as-a-Platform, 107 language models, 20, 118–20, 124–25, 232, 234, 294; See also GPT-2; GPT-3; OpenAI Laos, 108 Laskai, Lorand, 96 Laszuk, Danika, 128, 140 Latvia, 108 Lawrence, Jennifer, 130 laws and regulations, 111–13 “blade runner,” 121–22, 170 data
…
of AI, 296–97 funding, 296 and Google-Maven controversy, 62, 66 and government regulation, 111 and ImageNet, 54 Megatron-Turing NLG, 20, 294 and OpenAI, 26 revenue, 297 and Seven Sons of National Defense, 162 and Trusted News Initiative, 139 work with Chinese researchers, 157, 393, 396 Microsoft Research, 31
by Nate Silver · 12 Aug 2024 · 848pp · 227,015 words
these tribes had escalated into open conflict as hedge fund billionaires led the charge to oust Ivy League presidents and The New York Times sued OpenAI. Incursions into enemy territory are treated with alarm, like when Google’s AI model Gemini was criticized by Riverians for reflecting distinctively Villagey political
…
spoilers, but the chapter ends with a bang. Chapter ∞, Termination, is the first of a two-part conclusion. I’ll introduce you to another Sam, OpenAI CEO Sam Altman, and others behind the development of ChatGPT and other large language models. Unlike the government-run Manhattan Project, the charge into the
…
to make. The political system and the judicial system will have their say too, as in the case of the copyright infringement lawsuit filed against OpenAI by The New York Times in December 2023. Silicon Valley is skeptical of the “trust the experts” mantra that the Village prizes. This is
…
was a symbolic coup. (Want another sign that the Village and the River are explicitly at war? The New York Times’s copyright lawsuit against OpenAI.)[*29] But what ought to worry the Village is that public opinion increasingly shares the River’s skepticism of it. Indeed, the Village has begun
…
forecast gave Clinton a 71 percent chance, by comparison. *16 And ChatGPT itself was heavily funded by Microsoft and the other established players who formed OpenAI. *17 This is related to Clayton Christensen’s idea of the innovator’s dilemma—spelled out in his book of the same name. Christensen was
…
the Times has also shown a capacity for change (as in its 2014 Innovation Report) and risk-taking (as in the sure-to-be-expensive OpenAI lawsuit). It may not be a coincidence that the Times has grown as nearly all its journalistic competitors have shrunk. *30 We’ll take this
…
Sam Altman, heading up a market-leading firm. (As of early 2024, many AI nerds regard Anthropic’s model Claude as the worthiest competitor to OpenAI’s ChatGPT.) Imagine one of his lieutenants coming to him and saying, “Hey, SBF, we’ve run the numbers and calculated that if we train
…
artificial intelligence. He left YC—some news accounts claim that he was fired, but Graham strongly disputes that description—to become a co-chair of OpenAI along with Elon Musk. It’s unusual enough for someone who’s already a made man in venture capital to plunge back into the trenches
…
of running a startup. But OpenAI was something almost anathema to Silicon Valley—a nonprofit research lab. It wasn’t clear what the commercial applications of AI might be, if any
…
good poker game when there’s a better one on the other side of town, and Altman had found the right game to sit in. OpenAI was intrinsically an expensive bet—the premise of machine learning is that seemingly impossible problems can be solved quite miraculously by clever, simple algorithms if
…
spoke with him in August 2022. “It’s gonna happen. The upsides are far too great.” Altman was in a buoyant mood: even though OpenAI had yet to release GPT-3.5, it had already finished training GPT-4, its latest large language model (LLM), a product that Altman knew
…
societal-scale risks such as pandemics and nuclear war”—which was signed by the CEOs of the three most highly-regarded AI companies (Altman’s OpenAI, Anthropic, and Google DeepMind) in 2023 along with many of the world’s foremost experts on AI. To dismiss these concerns with the eye-
…
, you don’t have kids, you don’t own a house—you move to where the action is,” said Vinod Khosla, an early investor in OpenAI. He was (I presume unintentionally) echoing Goffman’s phrase to refer to the places where risk-seekers go, where “chances [are] that they will
…
the existentially high stakes even if they sometimes have trouble reconciling them with their human frailties. * * * roon is a member of the technical staff at OpenAI, or at least that’s how The Washington Post describes him. He gave me some other identifying details that I won’t share. That is
…
use pseudonyms if they want—but also because roon, the Twitter persona, is not quite the same thing as Roon, the person who works at OpenAI. Instead, roon is half human, half meme. His Twitter account is one of the most influential in the AI universe. His avatar, depicting Carlos
…
timeline, an oasis of whimsy in a desert of doomscrolling. He’s followed by Musk and by Altman (Altman gave him his job at OpenAI after they connected on Twitter) and by both the real Jeff Bezos and by Beff Jezos, another pseudonymous AI personality who was later outed[*6
…
. But anybody who’s looked at a graph of, well, pretty much anything knows that what goes up doesn’t always continue to go up. OpenAI bet that the graph would keep going up, taking a “faith-based leap that these scaling curves [would] hold,” said Shear. And they were
…
So at the very moment in late 2022 that Sam Bankman-Fried’s empire was collapsing, Sam Altman’s was soaring to new heights. Inside OpenAI, the recognition of the miracle had come sooner[*8]—with the development of GPT-3 if not earlier.[*9] But whatever the pivotal moment, their
…
in important respects, their thought process resembles that of human beings. In particular, it resembles that of poker players. In June 2023, I visited the OpenAI offices in San Francisco to meet with Nick Ryder, who describes himself as a “proud co-parent” of ChatGPT. The offices, in an unadorned warehouse
…
attention to themselves. But they’re where the action is, and Ryder is another one of the restless young nerds who sniffed it out, joining OpenAI after completing a PhD in theoretical math at Berkeley. “I loved teaching, I loved learning, I loved the community. But I lacked a sense
…
it’s trying to please the conductor, to interpret her instructions as faithfully as possible. But the directors of the orchestra (akin to executives at OpenAI) are also paying close attention to how the audience and critics react. Just like poker players seek to maximize EV, LLMs seek to minimize what
…
manipulate gamblers into spending more on slot machines. What if you extrapolated that outward? The commercial applications of AI are just now coming into view. OpenAI is nominally still a hybrid between a for-profit and a nonprofit, but the failed coup against Altman by the nonprofit board in November 2023
…
developed by private companies. “We’re sleepwalking into handing over the future to solely market-driven enterprises that become functionally ungovernable,” said Jack Clark, an OpenAI expat who left to cofound the more safety-focused AI firm Anthropic. And unlike during the Industrial Revolution, which coincided with the Enlightenment, we have
…
the loss of jobs. There will be regulatory burdens like those imposed by the EU and legal challenges like the New York Times lawsuit against OpenAI. There are resource constraints and even potential resource conflicts, like between China and the U.S. over Taiwan, which manufactures the majority of the
…
to below replacement levels—so roon is referring to how the world has begun to limit its population on its own. *8 Altman and another OpenAI researcher, Nick Ryder, told me that they expected GPT-4 and not GPT-3.5 to be the big public breakthrough. But their perspective is
…
who comes over once a year for Thanksgiving is more likely to notice that Billy has suddenly become quite tall. *9 A group of OpenAI engineers left OpenAI in 2021 after the release of GPT-3 to form the rival firm Anthropic because of what Jack Clark, an Anthropic cofounder, told me
…
were primarily concerns about safety because of the power of OpenAI’s models. *10 This is my read from having spoken with a lot of River types who have developed body armor from frequent online combat
…
improving, meaning you train an AI on how to make a better AI. Even Altman told me that this possibility is “really scary” and that OpenAI isn’t pursuing it. *14 There is some ambiguity about the context—von Neumann’s most hawkish comments in 1950 (“If you say why not
…
novel, there’d be some mysteries here, but this is real life and it’s not obligated to make narrative sense,” said Shear, the interim OpenAI CEO. *41 One of Le Guin’s most famous short stories, “The Ones Who Walk Away from Omelas,” is about a utopia that depends upon
…
overcoming an undue amount of friction, and don’t risk entrapping you in an addictive spiral. The concept of agency is pertinent in AI research; OpenAI describes an agentic AI system as one “that can adaptably achieve complex goals in complex environments with limited direct supervision.” The definition applies nicely to
…
were illegally parked for only five minutes to pick up your Starbucks order. Bag of numbers*: A term conveyed to me by Nick Ryder of OpenAI for the mysterious inner workings of large language models. Bankroll: The amount of money a player sets aside to gamble with. It’s important to
…
missing out,” the attitude of many participants during a market bubble. Foom: An onomatopoeic word—imagine the sound of some server powering on in an OpenAI back office at a volume barely more than a whisper—to refer to a very fast AI takeoff. Fox: Along with a hedgehog, one of
…
you’re also in the tournament, her most appropriate reply is LFG: let’s fucking go! GPT: A series of large language models created by OpenAI; the most recent version is GPT-4. GPT stands for Generative Pretrained Transformer. “Generative” refers to how LLMs generate output (responses to user queries)
…
blog/sam-altman-for-president. GO TO NOTE REFERENCE IN TEXT he was fired: Elizabeth Dwoskin and Nitasha Tiku, “Altman’s Polarizing Past Hints at OpenAI Board’s Reason for Firing Him,” The Washington Post, November 22, 2023, washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham
…
t Worried, but He Knows You Might Be,” The New York Times, March 31, 2023, sec. Technology, nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html. GO TO NOTE REFERENCE IN TEXT that Altman knew: Per email to Nate Silver, January 19, 2024. GO TO NOTE REFERENCE IN TEXT
…
/10/10/sam-altmans-manifest-destiny. GO TO NOTE REFERENCE IN TEXT like Emmett Shear: When I asked Shear if he had formally become the OpenAI CEO, he said, “That’s a very, very good complicated question that has no linear correct answer.” But he also said, “I accepted a
…
Other companies such as Meta are considered a step or two behind. Interestingly, far fewer Meta employees signed the one-sentence statement than those at OpenAI, Google and Anthropic; the company may take more of an accelerationist stance since it feels as though it’s behind. GO TO NOTE REFERENCE IN
…
York: Simon & Schuster Paperbacks, 2012), 971. GO TO NOTE REFERENCE IN TEXT Post describes him: Nitasha Tiku, “OpenAI Leaders Warned of Abusive Behavior before Sam Altman’s Ouster,” The Washington Post, December 8, 2023, washingtonpost.com/technology/2023/12/08/open-ai-sam-altman-complaints. GO TO NOTE REFERENCE IN TEXT companies like
…
OpenAI and Anthropic: “Google Brain Drain: Where are the Authors of ‘Attention Is All You Need’ Now?” AIChat, aichat.blog
…
Behind ChatGPT Warns Congress AI Could Cause ‘Harm to the World,’ ” The Washington Post, May 17, 2023, washingtonpost.com/technology/2023/05/16/sam-altman-open-ai-congress-hearing. GO TO NOTE REFERENCE IN TEXT “Attention Is All”: Ashish Vaswani et al., “Attention Is All You Need,” arXiv, August 1, 2023,
…
12, 2019, reddit.com/r/worldjerking/comments/cphds9/the_only_16_ideologies_that_exist_in_my_world. GO TO NOTE REFERENCE IN TEXT OpenAI is nominally: Alnoor Ebrahim, “OpenAI Is a Nonprofit-Corporate Hybrid: A Management Expert Explains How This Model Works—And How It Fueled the Tumult Around CEO Sam Altman
…
’s Short-Lived Ouster,” The Conversation, November 30, 2023, http://theconversation.com/openai-is-a-nonprofit-corporate-hybrid-a-management-expert-explains-how-this-model-works-and-how-it-fueled-the-tumult-around-ceo-sam-altmans-short-lived
…
france/france-facts/symbols-of-the-republic/article/liberty-equality-fraternity. GO TO NOTE REFERENCE IN TEXT an agentic AI: “Research into Agentic AI Systems,” OpenAI, openai.smapply.org/prog/agentic-ai-research-grants. GO TO NOTE REFERENCE IN TEXT Avoid “noble lies”: Kerrington Powell and Vinay Prasad, “The Noble Lies of
…
355, 359, 380 engineers and, 411–12 excitement about, 409–10 impartiality and, 359, 366 moral hazard and, 261 New York Times lawsuit, 27, 295 OpenAI founding, 406–7, 414 optimism and, 407–8, 413 poker and, 40, 46–48, 60–61, 430–33, 437, 439, 507n poor interpretability of, 433
…
in (poker), 478 alpha, 241–42, 478 AlphaGo, 176 Altman, Sam, 401 AI breakthrough and, 415 AI existential risk and, 419n, 451, 459 OpenAI founding and, 406–7 OpenAI’s attempt to fire, 408, 411, 452n optimism and, 407–8, 413, 414 Y Combinator and, 405–6 Always Coming Home (Le Guin
…
26 cryptocurrency and, 314–15 cults of personality and, 31 culture wars and, 29 effective altruism and, 344 luck and, 278, 280 megalothymia and, 468 OpenAI founding and, 406 poker and, 251 politics and, 267n resentment and, 277, 278 risk tolerance and, 229, 247–48, 251, 252, 264–65, 299 River
…
s 11, 142 Ohtani, Shohei, 173 Old Man Coffee (OMC), 491 O’Leary, Kevin, 301 “Ones Who Walk Away from Omelas, The” (Le Guin), 454n OpenAI AI breakthrough and, 415 attempt to fire Altman, 408, 411, 452n founding of, 406–7, 414 River-Village conflict and, 27 Oppenheimer, Robert, 407, 421
by Cade Metz · 15 Mar 2021 · 414pp · 109,622 words
. Both declined the offers, but then bigger offers arrived, even as they flew to Montreal for NIPS, where they were set to unveil the new OpenAI lab. A conference that once attracted a few hundred researchers now spanned nearly four thousand, with bodies filling the lecture halls where the top thinkers
…
realize where they might go wrong. In the coming months, Hassabis and Legg said as much to both Sutskever and Brockman. In the hours after OpenAI was unveiled, Sutskever heard even harsher words after he walked into a party at the conference hotel. The party was thrown by Facebook, and before
…
Artificial Intelligence, an influential lab based in Seattle. “Not in the foot. In the head.” The big companies were already expanding their operations abroad. Facebook opened AI labs in both Montreal and Paris, Yann LeCun’s hometown. Microsoft ended up buying Maluuba, which became its own lab in Montreal (with Yoshua Bengio
…
lower price. In the spring of 2016, after less than two years at Google, Goodfellow left the company and took this research to the new OpenAI lab, drawn to its stated mission of building ethically sound artificial intelligence and sharing it with the world at large. His work, including both GANs
…
wasn’t actually designed to take the test. BERT was what researchers call a “universal language model.” Several other labs, including the Allen Institute and OpenAI, had been working on similar systems. Universal language models are giant neural networks that learn the vagaries of language by analyzing millions of sentences written
…
by humans. The system built by OpenAI analyzed thousands of self-published books, including romance, science fiction, and mysteries. BERT analyzed the same vast library of books as well as every article
…
tiles. A small crowd of onlooking researchers let out a cheer. Led by Wojciech Zaremba, the Polish researcher stolen from under Google and Facebook as OpenAI was founded, they’d spent more than two years working toward this eye-catching feat. In the past, many others had built robots that could
…
—and companies like Amazon—were desperate for picking robots that actually worked. The solution, as it turned out, was already brewing inside both Google and OpenAI. After building a medical team inside Google Brain, Jeff Dean built a robotics team, too. One of his first hires was a young researcher from
…
. It marked the beginning of a widespread effort to apply deep learning to robotics, spanning labs inside many top universities as well as Google and OpenAI. The following year, using the same kind of reinforcement learning, Levine and his team trained other arms to open doors on their own (provided the
…
s effort to master Dota, the Rubik’s Cube project would require a massive technological leap. Both projects were also conspicuous stunts, a way for OpenAI to promote itself as it sought to attract the money and the talent needed to push its research forward. The techniques under development at labs
…
like OpenAI were expensive—both in equipment and in personnel—which meant that eye-catching demonstrations were their lifeblood. This was Musk’s stock in trade: drawing
…
$100,000, and his salary for just the last six months of 2016 alone was $330,000. Three of Abbeel’s former students also joined OpenAI as it accelerated efforts to challenge Google Brain and Facebook and, particularly, DeepMind. Then reality caught up with both Musk and his new lab. Ian
…
GANfather, left and returned to Google. Musk himself poached a top researcher from the lab, lifting a computer vision expert named Andrej Karpathy out of OpenAI and installing him as the head of artificial intelligence at Tesla so he could lead the company’s push into self-driving cars. Then Abbeel
…
t as nimble as they seemed. “Excessive automation at Tesla was a mistake,” he said. “Humans are underrated.” As Sam Altman took the reins at OpenAI, the lab needed to attract new talent—and it needed money. Though investors had committed a billion dollars to the nonprofit when it was founded
…
students, Peter Chen and Rocky Duan, left the lab to found a start-up called Covariant. Their new company was committed to the same techniques OpenAI was exploring, except the aim was to apply them in the real world. By 2019, as researchers and entrepreneurs recognized what Amazon and the rest
…
, the market was flooded with robotic-picking start-ups, some of them employing the sort of deep learning methods under development at Google Brain and OpenAI. Pieter Abbeel’s company, Covariant, wasn’t necessarily one of them. It was designing a system for a much wider range of tasks. But
…
Google—and after he spent several weeks inside DeepMind—he came to embrace Legg’s thesis as “insanely visionary.” Many others did, too. Five of OpenAI’s first nine researchers had spent time inside the London lab where the possibilities of AGI had been so fervently embraced, and the two labs
…
. If others heard he was discussing the rise of artificial general intelligence, he would be branded a pariah across the wider community of researchers. When OpenAI was unveiled, the official announcement did not mention AGI. It only hinted at the idea as a distant possibility. “AI systems today have impressive but
…
$100 million, that aimed to create a “neural lace”—an interface between computer and brain—and it moved into the same offices as OpenAI. Though Musk soon left OpenAI, the lab’s ambitions only grew under Altman. Sam Altman was a Silicon Valley archetype: In 2005, he founded a social networking company
…
make a much larger impact. The quest for AGI was more important—and more interesting—than anything else he could chase. Leaving Y Combinator for OpenAI, he believed, was the inescapable path. Like Musk, he was an entrepreneur, not a scientist, though he sometimes made a point of saying he
…
that followed, it became a new way for the lab to market itself. After building a new language model along the lines of Google BERT, OpenAI made a point of saying, through the press, that the technology was too dangerous to release because it would allow machines to automatically generate fake
…
researchers scoffed at the claim, saying the technology wasn’t even close to dangerous. And it was, eventually, released. At the same time, the new OpenAI charter said—explicitly and matter-of-factly—that the lab was building AGI. Altman and Sutskever had seen both the limitations and the dangers of
…
the current technologies, but their goal was a machine that could do anything the human brain could do. “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work
…
camp, waiting for the flag to appear, and that is only possible if it is relying on its teammates.” This was how both DeepMind and OpenAI hoped to mimic human intelligence. Autonomous systems would learn inside increasingly complex environments. First Atari. Then Go. Then three-dimensional multiplayer games like Quake III
…
forth. Seven months later, DeepMind unveiled a system that beat the world’s top professionals at StarCraft, a three-dimensional game set in space. Then OpenAI built a system that mastered Dota 2, a game that plays like a more complex version of capture the flag, requiring collaboration between entire teams
…
computing market was artificial intelligence, few saw Microsoft as a top player in the field. Nadella and company agreed to invest $1 billion in OpenAI, and OpenAI agreed to send much of this money back into Microsoft as the tech giant built a new hardware infrastructure just for training the lab’s
…
think you need these high-ambition North Stars,” Nadella said. For Altman, this was less about the means than the end. “My goal in running OpenAI is to successfully create broadly beneficial AGI,” he said. “This partnership is the most important milestone so far on that path.” Two labs now said
…
companies said they would provide the money and the hardware they would need along the way, at least for a while. Altman believed he and OpenAI would need another $25 billion to $50 billion to reach their goal. * * * — ONE afternoon, Ilya Sutskever sat down in a coffee shop a few blocks
…
from the OpenAI offices in San Francisco. As he sipped from a ceramic mug, he talked about many things, one of them AGI. He described it as a
…
a surprising rate and behaved in ways no one expected and intertwined with corporate forces more powerful and more relentless than anyone realized, DeepMind, like OpenAI, was still intent on building a truly intelligent machine. In fact, its founders saw the turmoil as a kind of vindication. They had warned that
…
translation. 2015—Geoff Hinton spends the summer at DeepMind. AlphaGo defeats Fan Hui in London. Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman found OpenAI. 2016—DeepMind unveils DeepMind Health. AlphaGo defeats Lee Sedol in Seoul, South Korea. Qi Lu leaves Microsoft. Google deploys translation service based on deep learning
…
initiative. Geoff Hinton unveils capsule networks. Nvidia unveils progressive GANs, which can generate photo-realistic faces. Deepfakes arrive on the Internet. 2018—Elon Musk leaves OpenAI. Google employees protest Project Maven. Google releases BERT, a system that learns language skills. 2019—Top researchers protest Amazon face recognition technology. Geoff Hinton, Yann
…
LeCun, and Yoshua Bengio win the Turing Award for 2018. Microsoft invests $1 billion in OpenAI. 2020—Covariant unveils “picking” robot in Berlin. THE PLAYERS AT GOOGLE ANELIA ANGELOVA, the Bulgaria-born researcher who brought deep learning to the Google self
…
. IAN GOODFELLOW, the inventor of GANs, a technology that could generate fake (and remarkably realistic) images on its own, who worked at both Google and OpenAI before moving to Apple. VARUN GULSHAN, the virtual reality engineer who explored AI that could read eye scans and detect signs of diabetic blindness. GEOFF
…
that the threat of malicious AI: Ibid. “We think it’s far more likely that many, many AIs”: Ibid. said those “borderline-crazy”: Metz, “Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free.” CHAPTER 10: EXPLOSION On October 31, 2015, Facebook chief technology officer Mike Schroepfer: Cade Metz
…
/1707.08945. Goodfellow warned that the same phenomenon: Metz, “How to Fool AI into Seeing Something That Isn’t There.” he was paid $800,000: OpenAI, form 990, 2016. CHAPTER 14: HUBRIS a two-hundred-thousand-square-foot conference center: “Unveiling the Wuzhen Internet Intl Convention Center,” China Daily, November 15
…
, April 13, 2018, https://twitter.com/elonmusk/status/984882630947753984?s=19. Altman re-formed the lab as a for-profit company: “OpenAI LP,” OpenAI blog, March 11, 2019, https://openai.com/blog/openai-lp/. an international robotics maker called ABB organized its own contest: Adam Satariano and Cade Metz, “A Warehouse Robot Learns to
…
/. “It will just be open source and usable by everyone”: Ibid. he and his researchers released a new charter for the lab: “OpenAI Charter,” OpenAI blog, https://openai.com/charter/. “OpenAI’s mission is to ensure that artificial general intelligence”: Ibid. DeepMind trained a machine to play capture the flag: Max Jaderberg, Wojciech M
…
Facebook’s interest in, 121–24 Google’s acquisition of, 100, 112–16, 300–01 key players, 322–23 lack of revenue, 114–15 and OpenAI, 165–66, 289 petition against Google’s involvement with Project Maven, 248 power consumption improvements in Google’s data centers, 139 salary expenses, 132 tension
…
neuroscience research, 104–05, 301 online diary, 103–04 power consumption improvements in Google’s data centers, 139 preparing for future technologies, 302 response to OpenAI, 165–66 role in the tech industry’s global arms race, 12, 244, 302 and Shane Legg, 105–07, 186–87, 300 the Singularity Summit
…
, 325 at Baidu, 324 at Clarifai, 325 at DeepMind, 322–23 at Facebook, 323 at Google, 321–22 at Microsoft, 324 at Nvidia, 325 at OpenAI, 324 in the past, 326 at the Singularity Summit, 325–26 knowledge empiricists, 266, 268–69 innate machinery, 269–70 nativists, 266, 269–70 NYU
…
microchip design research, 52–53, 54 at NYU, 126, 128, 129 open research philosophy, 127–28 opinions about truthfulness and censorship, 257, 260 response to OpenAI, 165–66 as a technical advisor to DeepMind, 110, 169–70 Turing Award, 305–07, 308–09 Lee, Peter, 73, 132, 193–94, 195 Legg
…
negotiation of Google’s acquisition of DeepMind, 115–16, 123–24 research on a computer’s ability to learn, 113–14, 288–89 response to OpenAI, 165–66 the Singularity Summit, 107–09 superintelligence research, 105–06, 153, 157–58, 289 Lenat, Doug, 288 LeNet system, 46–48 Levine, Sergey, 143
…
, 190 facial recognition technology, 235–36 image recognition research, 130–31 investment in AI, 192, 298 key players, 324 natural language research, 54–56 and OpenAI, 298–99 rejection of deep learning, 192–93, 197 self-driving car project proposal from Qi Lu, 197–98 speech recognition project, 70–74, 77
…
–60, 244, 245, 289 and DeepMind, 110, 112 and Mark Zuckerberg, 154–55 need for a direct connection between brains and machines, 291–92 and OpenAI, 281–82, 292 relationship with the press, 159, 281–82, 293 and Yann LeCun, 155–56 Nadella, Satya, 199–200, 298–99 nativists, 266, 269
…
, 140, 147, 156, 210, 325 NYU Center for Mind, Brain, and Consciousness debate between Yann LeCun and Gary Marcus, 268–72 On Intelligence (Hawkins), 82 OpenAI and artificial general intelligence (AGI), 289–90, 295, 299 counteroffers made to researchers leaving for, 164 creation of a machine to win at Dota/Dota
…
, 18 Perceptron machine demonstration, 15–19 research efforts, 25–26, 34, 36 rivalry with Marvin Minsky, 21–22, 24–25 Rubik’s Cube demonstration at OpenAI, 276–78, 281, 297–98 Rumelhart, David, 37–39, 97 Sabour, Sara, 208, 305 Salakhutdinov, Russ, 63 Schmidhuber, Jürgen, 59–60, 141–42 Schmidt, Eric
by Walter Isaacson · 11 Sep 2023 · 562pp · 201,502 words
Jane Eyre. “And if Thornfield Hall burns down and you are blind, I’ll come to you and take care of you.”

40 Artificial Intelligence: OpenAI, 2012–2015, with Sam Altman

Peter Thiel, the PayPal cofounder who had invested in SpaceX, holds a conference each year with the leaders of companies
…
of the rocket.” At a small dinner in Palo Alto, Altman and Musk decided to cofound a nonprofit artificial intelligence research lab, which they named OpenAI. It would make its software open-source and try to counter Google’s growing dominance of the field. Thiel and Hoffman joined Musk in putting
…
evil actors, so too would a large collection of independent AI bots work to stop bad bots. For Musk, this was the reason to make OpenAI truly open, so that lots of people could build systems based on its source code. “I think the best defense against the misuse of AI
…
Wired’s Steven Levy at the time. One goal that Musk and Altman discussed at length, which would become a hot topic in 2023 after OpenAI launched a chatbot called ChatGPT, was known as “AI alignment.” It aims to make sure that AI systems are aligned with human goals and values
…
was furious. Not only was his erstwhile friend and houseguest starting a rival lab; he was poaching Google’s top scientists. After the launch of OpenAI at the end of 2015, they barely spoke again. “Larry felt betrayed and was really mad at me for personally recruiting Ilya, and he refused
…
’s determination to develop artificial intelligence capabilities at his own companies caused a break with OpenAI in 2018. He tried to convince Altman that OpenAI, which he thought was falling behind Google, should be folded into Tesla. The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting
…
was struggling with the production hell surges in Nevada and Fremont, he recruited Andrej Karpathy, a specialist in deep learning and computer vision, away from OpenAI. “We realized that Tesla was going to become an AI company and would be competing for the same talent as
…
OpenAI,” Altman says. “It pissed some of our team off, but I fully understood what was happening.” Altman would turn the tables in 2023 by hiring
…
he felt about artificial intelligence. The possibility that someone might create, intentionally or inadvertently, AI that could be harmful to humans led him to start OpenAI in 2015. It also led him to push related endeavors, including self-driving cars, a neural network training supercomputer known as Dojo, and Neuralink chips
…
could process visual inputs and learn to perform tasks without violating Asimov’s law that a robot shall not harm humanity or any human. While OpenAI and Google were focusing on creating text-based chatbots, Musk decided to focus on artificial intelligence systems that operated in the physical world, such as
…
Human Intelligence. After studying at Yale, she worked at a few startup incubators helping new AI ventures, and she became a part-time consultant at OpenAI. When Musk founded Neuralink, he took her out for coffee and asked her to join. “Neuralink is not just about research,” he assured her. “It
…
two more children, a twin boy and girl. The mother was Shivon Zilis, the bright-eyed AI investor he recruited in 2015 to work at OpenAI and who ended up as the top operations manager of Neuralink. She had become his very close friend, intellectual companion, and occasional gaming partner. “It
…
create The Free Press, a subscription-based newsletter on Substack. Musk had met her briefly after he was interviewed by Sam Altman, his cofounder of OpenAI, at the Allen & Company conference in Sun Valley a few months earlier. She went backstage to say how glad she was that he was trying
…
, but for cars,” Dhaval Shroff told Musk. He was comparing his project at Tesla to the artificial intelligence chatbot that had just been released by OpenAI, the lab that Musk had cofounded with Sam Altman in 2015. For almost a decade, Musk had been working on various forms of artificial intelligence
…
but also in the physical real world of factories and roads. He was already thinking about hiring a group of AI experts to compete with OpenAI, and Tesla’s neural network planning team would complement their work. * * * For years, Tesla’s Autopilot system relied on a rules-based approach. It took
…
Page and Google from purchasing DeepMind, the company formed by AI pioneer Demis Hassabis. When that failed, he formed a competing lab, a nonprofit called OpenAI, with Sam Altman in 2015. Humans can be pricklier than machines, and Musk eventually split with Altman, left the board of
…
OpenAI, and lured away its high-profile engineer Andrej Karpathy to lead the Autopilot team at Tesla. Altman then formed a for-profit arm of OpenAI, got a $13 billion investment from Microsoft, and recruited Karpathy back. Among the
…
products that OpenAI developed was a bot called ChatGPT that was trained on large internet data sets to answer questions posed
…
child. “It gave this very careful excellent answer that was perhaps better than any of us in the room might have given.” In March 2023, OpenAI released GPT-4 to the public. Google then released a rival chatbot named Bard. The stage was thus set for a competition between
…
OpenAI-Microsoft and DeepMind-Google to create products that could chat with humans in a natural way and perform an endless array of text-based intellectual
…
would make the problem two or three orders of magnitude worse. His compulsion to ride to the rescue kicked in. The two-way competition between OpenAI and Google needed, he thought, a third gladiator, one that would focus on AI safety and preserving humanity. He was resentful that he had founded
…
and funded OpenAI but was now left out of the fray. AI was the biggest storm brewing. And there was no one more attracted to a storm than
…
, he invited—perhaps a better word is “summoned”—Sam Altman to meet with him at Twitter and asked him to bring the founding documents for OpenAI. Musk challenged him to justify how he could legally transform a nonprofit funded by donations into a for-profit that could make millions. Altman tried
…
offered Musk shares in the new company, which Musk declined. Instead, Musk unleashed a barrage of attacks on OpenAI and Altman. “OpenAI was created as an open-source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source
…
the hands of a ruthless corporate monopoly.” Altman was pained. Unlike Musk, he is sensitive and nonconfrontational. He was not making any money off of OpenAI, and he felt that Musk had not drilled down enough into the complexity of the issue of AI safety. However, he did feel that Musk
…
Neuralink executive who was the mother of two of his children and who had been his intellectual companion on artificial intelligence since the founding of OpenAI eight years earlier. Their twins, Strider and Azure, now sixteen months old, were sitting on their laps. Musk was still on his intermittent-fasting diet
…
X.AI. That was three times as many as Steve Jobs (Apple, Pixar) at his peak. He admitted that he was starting off way behind OpenAI in creating a chatbot that could give natural-language responses to questions. But Tesla’s work on self-driving cars and Optimus the robot put
…
it way ahead in creating the type of AI needed to navigate in the physical world. This meant that his engineers were actually ahead of OpenAI in creating full-fledged artificial general intelligence, which requires both abilities. “Tesla’s real-world AI is underrated,” he said. “Imagine if Tesla and
…
OpenAI had to swap tasks. They would have to make Self-Driving, and we would have to make large language-model chatbots. Who wins? We do.”
…
bot would auto-complete the task for the most likely action they were trying to take. The second product would be a chatbot competitor to OpenAI’s GPT series, one that used algorithms and trained on data sets that would assure its political neutrality. The third goal that Musk gave the
…
your inbox. Sources Interviews Omead Afshar. Deputy to Musk. Parag Agrawal. Former CEO of Twitter. Deepak Ahuja. Former CFO of Tesla. Sam Altman. Cofounder of OpenAI with Musk. Drew Baglino. Senior vice president, Tesla. Jehn Balajadia. Assistant to Musk. Jeremy Barenholtz. Head of brain interfaces software, Neuralink. Melissa Barnes. Former VP
…
Nosek, Shivon Zilis. Steven Levy, “How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over,” Backchannel, Dec. 11, 2015; Cade Metz, “Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free,” Wired, Apr. 27, 2016; Maureen Dowd, “Elon Musk’s Billion-Dollar Crusade to Stop the
…
interviews with Elon Musk, Shivon Zilis, Bill Gates, Jared Birchall, Sam Altman, Demis Hassabis. Reed Albergotti, “The Secret History of Elon Musk, Sam Altman, and OpenAI,” Semafor, Mar. 24, 2023; Kara Swisher, “Sam Altman on What Makes Him ‘Super Nervous’ about AI,” New York, Mar. 23, 2023; Matt Taibbi, “Meet the
…
, 487, 496–97, 498–500, 518 Tesla neural network planner and, 593–98, 614 X.AI and, 244, 605–6 Zilis and, 402 See also OpenAI Asimov, Isaac, 31, 243, 394, 486 Autopilot project. See Tesla Autopilot project Babuschkin, Igor, 605 Babylon Bee, 419, 527, 529, 554 Baglino, Drew, 195 Autopilot
…
–32, 51, 93, 299–300, 386, 415, 606 Hodak, Max, 400 Hoffman, Reid artificial intelligence and, 241, 242 EM’s Mars mission and, 92–93 OpenAI and, 242 PayPal coup and, 82, 83–84, 86 PayPal merger and, 78–79 SpaceX concept and, 92–99 Hollander, Nicole, 547, 584 Hollman, Jeremy
…
, 418 on fathers and sons, 4 SpaceX and, 203, 206–8, 211–12, 226 Odhner, Kale, 480 O’Dowd, Dan, 406 O’Keefe, Sean, 122 OpenAI ChatGPT, 243, 593, 601, 606 EM’s attacks on, 601–2 founding of, 242–44, 394 Zilis and, 401, 413 Optimus AI Day 2 presentation
…
as CEO, 167 National Highway Traffic Safety Administration investigations of, 406 Nevada battery Gigafactory, 221–22, 253, 267, 270–75 next generation platform, 504–5 OpenAI and, 244 Phoenix radar system and, 405, 407 production quality issues, 219–20, 281 proposal to take private, 291–94 risk and, 86, 613 Roadster
…
insurance policy, 88–89 on EM’s personality, 6 EM’s reconciliation with, 87, 183–84 Founders Fund and, 183–84, 240 libertarianism and, 423 OpenAI and, 242 PayPal coup and, 82, 83–84, 85, 86 “PayPal mafia” and, 183 PayPal merger and, 76, 77, 78, 79 Trump and, 261 Thompson
by Adam Becker · 14 Jun 2025 · 381pp · 119,533 words
his ideas do carry weight with some of the politically connected leaders of the tech industry. One of them is Sam Altman, the CEO of OpenAI, the company behind ChatGPT; he’s suggested that Yudkowsky may eventually “deserve the Nobel Peace Prize” for his work on AI.4 Altman doesn’t
…
discuss the present and future of AI. Around that same time, the New York Times wrote that Altman’s “grand idea” was that his company, OpenAI, “will capture much of the world’s wealth through the creation of AGI and then redistribute this wealth to the people.”6 With that lens
…
, his essay takes on a new meaning. Altman apparently wants to make the United States into one enormous company town, with shares in OpenAI replacing the dollar. The US government would become, in effect if not in law, a division of the company, responsible for disbursing company dollars to
…
us, the public. This would, Altman hopes, encourage us to think of OpenAI’s success as America’s success and as our own—Altman explicitly makes this identification in his essay. All products would come from
…
OpenAI in his proposed future, because in that future AI does literally everything, meaning that the company dollars can only be spent at the company store. (
…
since the amount given out each year is capped below the growth rate of the company, Altman and the board would always retain control of OpenAI, and the shares owned by the American public would never get anywhere near an appreciable fraction of company ownership.) Thus, Altman’s promise of goods
…
halving in price every two years would depend solely on his goodwill, because things will cost whatever Altman and the OpenAI board want them to cost. This is a proposal for total capture of the national economy, making Altman functionally the king of the United States
…
interested in AGI, helped DeepMind get funded at a time when AGI was extremely outside the Overton window, was critical in the decision to start OpenAI, etc.”4 Later on that year, Altman changed his Twitter profile to read: “eliezer yudkowsky fan fiction account.”5 There’s some irony in Altman
…
of today’s prominent AI companies are deeply influenced by the rationalists as well, with dedicated “AI safety” teams working on solving the alignment problem. OpenAI used to broadcast their work on alignment quite publicly, though they shuttered their “superalignment” team in 2024 in the aftermath of a power struggle between
…
the board, which felt that Altman didn’t take the alignment problem seriously enough. In 2021, Dario and Daniela Amodei, a brother-sister team at OpenAI, were similarly concerned that the alignment problem wasn’t enough of a priority there. They left the company along with five others to found an
…
aware: it’s writing clear prose, carrying on intelligent conversations, and acing standardized tests like the LSAT. Some commentators even discerned emotions and motivations. The OpenAI-powered chatbot Sydney “seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against
…
. “Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince
…
fed enormous amounts of information from the internet, so the idea that they could replace a search engine seems natural at first. To build ChatGPT, OpenAI started out by doing the same thing that everyone else (Google, Anthropic) does when building an LLM: they obtained a snapshot of much of the
…
a better answer to it by the time this book hits the shelves—but there will always be hallucinations, because hallucinating is all LLMs do. OpenAI has tried to train specific kinds of responses out of ChatGPT, but they’re never going to be able to get rid of all the
…
understanding when the neural net gets as big as the brain,” said the computer scientist Ilya Sutskever in 2019, when he was chief scientist at OpenAI. “Give it the compute, give it the data, and it will do amazing things.… This stuff is like—it’s like alchemy.”99 By 2023
…
next-token prediction could be enough for AGI.100 (He also led his colleagues in a chant, “Feel the AGI! Feel the AGI!” at an OpenAI holiday party in 2022.)101 There is even a shirt, popular among some AI researchers, that proclaims “Scale is all you need—AGI is coming
…
late 2020, Gebru was the coleader of the Ethical AI team at Google. Google had initially developed the “transformer” architecture that LLMs are based on; OpenAI had just released GPT-3, using that architecture, in a private beta, and it was already making waves within the field. It was becoming clear
…
.117 Google’s actions revealed the company’s priorities. Work on LLMs and other ML systems has proceeded at a breakneck pace at Google, Microsoft, OpenAI, and elsewhere, even as more examples of algorithmic bias crop up and don’t get fixed, and even as it becomes increasingly clear that some
…
senior executives in the AI industry, including Sundar Pichai, the chief executive of Google, and Sam Altman, the chief executive of ChatGPT’s parent company OpenAI. After the meeting that included Altman, Downing Street acknowledged for the first time the ‘existential risks’ now being faced.”52 Sunak’s government was in
…
is coming, and, like Andreessen and Kurzweil, he believes this will lead to “Moore’s Law for everything.”13 That’s the mechanism by which OpenAI will, supposedly, obtain all the wealth in the world: a privately owned Singularity, with all the wealth accumulating to the one company that created the
…
change, despite its massive risks.18 (It’s hardly surprising to see Altman and Gates singing the same tune, since their fortunes are aligned through OpenAI’s contract with Microsoft.) It’s a convenient idea: permission to use as much energy as your companies need, regardless of the ecological impact, in
…
can reproduce any economically productive activity done by a human. (I later found out that there’s an almost identical definition of AGI in the OpenAI charter.)57 I thought that was a pretty bad definition. It struck me as rounded down, both too vague and too narrow. What’s an
…
King Isn’t Worried, but He Knows You Might Be,” New York Times, March 31, 2023, www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html. 7 Chamath Palihapitiya et al., “In Conversation with Sam Altman,” May 10, 2024, in All-In, podcast, YouTube, www.youtube.com/watch?v
…
/technology/anthropic-ai-claude-chatbot.html; Krystal Hu, “Google Agrees to Invest up to $2 Billion in OpenAI Rival Anthropic,” Reuters, October 27, 2023, www.reuters.com/technology/google-agrees-invest-up-2-bln-openai-rival-anthropic-wsj-2023-10-27/; Devin Coldewey, “Amazon Doubles Down on Anthropic, Completing Its Planned $4B
…
, “Ilya Sutskever (OpenAI Chief Scientist)—Building AGI, Alignment, Spies, Microsoft, & Enlightenment,” in Dwarkesh Podcast, YouTube, March 27, 2023, www.youtube.com/watch?v=Yf1o0TQzry8. 101 Karen Hao and Charlie Warzel, “Inside the Chaos at OpenAI,” The Atlantic, November 19, 2023, www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050
…
/; Cade Metz, “OpenAI’s Chief Scientist and Co-Founder Is Leaving the Company,” New York Times, May 14, 2024
…
, www.nytimes.com/2024/05/14/technology/ilya-sutskever-leaving-openai.html. 102 Michaël Trazzi, “Ethan Caballero on Why Scale
…
Extinction’ in 22-Word Statement,” The Verge, May 30, 2023, www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai; Vanessa Romo, “Leading Experts Warn of a Risk of Extinction from AI,” NPR, May 30, 2023, www.npr.org/2023/05/30/1178943163/ai-risk
…
Friend, “Sam Altman’s Manifest Destiny,” New Yorker, October 3, 2016, www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny. 16 “Chat with OpenAI CEO and Co-founder Sam Altman, and Chief Scientist Ilya Sutskever,” video, Tel Aviv University, YouTube, June 5, 2023, www.youtube.com/watch?v=mC
…
to Visit Tel Aviv University,” news release, May 31, 2023, https://english.tau.ac.il/news/sam_altman_tau. 17 Brad Stone and Sam Altman, “Open AI’s Sam Altman: America Will Be Fine After the Election,” video, Bloomberg House Davos, January 16, 2024, 23:22, www.bloomberg.com/news/videos/2024
…
-01-16/openai-s-atlman-and-makanju-on-global-implications-of-ai. See also Paris Marx, “Sam Altman’s Self-Serving Vision of the Future,” Disconnect (blog), January
…
(@QiaochuYuan), “sometimes it’s nice to put on a movie…” 55 Cegłowski, “Moral Economy.” 56 Friend, “Sam Altman’s Manifest Destiny.” 57 “OpenAI Charter,” OpenAI, accessed September 6, 2024, https://openai.com/charter/. 58 An insular, self-reinforcing set of institutions and publication practices is a big part of this too, especially on
by Mustafa Suleyman · 4 Sep 2023 · 444pp · 117,770 words
transition. No longer simply a tool, it’s going to engineer life and rival—and surpass—our own intelligence. Realms previously closed to technology are opening. AI is enabling us to replicate speech and language, vision and reasoning. Foundational breakthroughs in synthetic biology have enabled us to sequence, modify, and now print
…
products in almost every area of our lives. AI is becoming much easier to access and use: tools and infrastructure like Meta’s PyTorch or OpenAI’s application programming interfaces (APIs) help put state-of-the-art machine learning capabilities in the hands of nonspecialists. 5G and ubiquitous connectivity create a
…
’t long ago that processing natural language seemed too complex, too varied, too nuanced for modern AI. Then, in November 2022, the AI research company OpenAI released ChatGPT. Within a week it had more than a million users and was being talked about in rapturous terms, a technology so seamlessly useful
…
. These systems are called transformers. Since Google researchers published the first paper on them in 2017, the pace of progress has been staggering. Soon after, OpenAI released GPT-2. (GPT stands for generative pre-trained transformer.) It was, at the time, an enormous model. With 1.5 billion parameters (the number
…
’s scale and complexity), GPT-2 was trained on 8 million pages of web text. But it wasn’t until the summer of 2020, when OpenAI released GPT-3, that people started to truly grasp the magnitude of what was happening. With a whopping 175 billion parameters it was, at the
…
laptop. The same will soon be possible for audio clips and even video generation. AI systems now help engineers generate production-quality code. In 2022, OpenAI and Microsoft unveiled a new tool called Copilot, which quickly became ubiquitous among coders. One analysis suggests it makes engineers 55 percent faster at completing
…
the biggest spenders. Anyone following the industry of late will have witnessed an increasingly intense commercial race around AI, with firms like Google, Microsoft, and OpenAI vying week by week to launch new products. Hundreds of billions of dollars of venture capital and private equity are deployed into countless start-ups
…
with this kind of work. We launched it with the support of all the major technology companies, including DeepMind, Google, Facebook, Apple, Microsoft, IBM, and OpenAI, along with scores of expert civil society groups, including the ACLU, the EFF, Oxfam, UNDP, and twenty others. Shortly after, it kick-started an AI
…
computing, too, is dominated by six major companies. For now, AGI is realistically pursued by a handful of well-resourced groups, most notably DeepMind and OpenAI. Global data traffic travels through a limited number of fiber-optic cables bunched in key pinch points (off the coast of southwest England or Singapore
…
-data-centre-cooling-bill-by-40. GO TO NOTE REFERENCE IN TEXT With 1.5 billion parameters “Better Language Models and Their Implications,” OpenAI, Feb. 14, 2019, openai.com/blog/better-language-models. GO TO NOTE REFERENCE IN TEXT Over the next few years See Martin Ford, Rule of the Robots: How
…
for precisely this kind of capability. GPT-4 was, at this stage, “ineffective” at acting autonomously, the research found. “GPT-4 System Card,” OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4-system-card.pdf. Within days of launch people were getting surprisingly close; see, for example, mobile.twitter.com/jacksonfall
…
, 2022, www.deepmind.com/publications/a-generalist-agent. GO TO NOTE REFERENCE IN TEXT Internal research on GPT-4 GPT-4 Technical Report, OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4.pdf. See mobile.twitter.com/michalkosinski/status/1636683810631974912 for one of the early experiments. GO TO NOTE REFERENCE IN
…
29, 2023, futureoflife.org/open-letter/pause-giant-ai-experiments. GO TO NOTE REFERENCE IN TEXT A complaint against LLMs Adi Robertson, “FTC Should Stop OpenAI from Launching New GPT Models, Says AI Policy Group,” The Verge, March 30, 2023, www.theverge.com/2023/3/30/23662101/ftc
…
-openai-investigation-request-caidp-gpt-text-generation-bias. GO TO NOTE REFERENCE IN TEXT One good example comes Esvelt, “Delay, Detect, Defend.” For another example of
…
, 244–45 omni-use technology, 105, 110–12 AI and, 111, 130 containment and, 233 contradictions and, 202 power and, 182 regulation and, 229–30 OpenAI, 62, 64, 69, 251 openness imperative, 127–29 opioids, 36 Oppenheimer, J. Robert, 140 optimization problems, 98 organizational limitations, 148–50, 228 Orwell, George, 196
by Sonja Thiel and Johannes C. Bernhardt · 31 Dec 2023 · 321pp · 113,564 words
by Anna Crowley Redding · 1 Jul 2019 · 190pp · 46,977 words
by Andrew Keen · 1 Mar 2018 · 308pp · 85,880 words
by Raj M. Shah and Christopher Kirchhoff · 8 Jul 2024 · 272pp · 103,638 words
by Jeff Booth · 14 Jan 2020 · 180pp · 55,805 words
by Ajay Agrawal, Joshua Gans and Avi Goldfarb · 16 Apr 2018 · 345pp · 75,660 words
by Calum Chace · 17 Jul 2016 · 477pp · 75,408 words
by Bruce Schneier · 7 Feb 2023 · 306pp · 82,909 words
by Matthew Williams · 23 Mar 2021 · 592pp · 125,186 words
by Lionel Barber · 3 Oct 2024 · 424pp · 123,730 words
by Kai-Fu Lee · 14 Sep 2018 · 307pp · 88,180 words
by Peter H. Diamandis and Steven Kotler · 3 Feb 2015 · 368pp · 96,825 words
by Jordan Ellenberg · 14 May 2021 · 665pp · 159,350 words
by W. Richard Stevens, Bill Fenner, Andrew M. Rudoff · 8 Jun 2013
by Alec Ross · 13 Sep 2021 · 363pp · 109,077 words
by Vauhini Vara · 8 Apr 2025 · 301pp · 105,209 words
by Martin Ford · 13 Sep 2021 · 288pp · 86,995 words
by Stephen Witt · 8 Apr 2025 · 260pp · 82,629 words
by Ray Kurzweil · 25 Jun 2024
by Christopher Summerfield · 11 Mar 2025 · 412pp · 122,298 words
by Rob Reich, Mehran Sahami and Jeremy M. Weinstein · 6 Sep 2021
by Brian Christian · 5 Oct 2020 · 625pp · 167,349 words
by Aurélien Géron · 13 Mar 2017 · 1,331pp · 163,200 words
by Ethan Mollick · 2 Apr 2024 · 189pp · 58,076 words
by Tim Wu · 4 Nov 2025 · 246pp · 65,143 words
by Nicole Kobie · 3 Jul 2024 · 348pp · 119,358 words
by Nicholas Carr · 28 Jan 2025 · 231pp · 85,135 words
by Madhumita Murgia · 20 Mar 2024 · 336pp · 91,806 words
by Kenneth Cukier, Viktor Mayer-Schönberger and Francis de Véricourt · 10 May 2021 · 291pp · 80,068 words
by Joanna Walsh · 22 Sep 2025 · 255pp · 80,203 words
by Tom Chivers · 12 Jun 2019 · 289pp · 92,714 words
by Tim Berners-Lee · 8 Sep 2025 · 347pp · 100,038 words
by Kenneth Payne · 16 Jun 2021 · 339pp · 92,785 words
by Mike Maples and Peter Ziebelman · 8 Jul 2024 · 207pp · 65,156 words
by Martin Ford · 16 Nov 2018 · 586pp · 186,548 words
by Anil Ananthaswamy · 15 Jul 2024 · 416pp · 118,522 words
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
by Henry A Kissinger, Eric Schmidt and Daniel Huttenlocher · 2 Nov 2021 · 194pp · 57,434 words
by Kate Conger and Ryan Mac · 17 Sep 2024
by Jacob Silverman · 9 Oct 2025 · 312pp · 103,645 words
by Zoë Schiffer · 13 Feb 2024 · 343pp · 92,693 words
by Yuval Noah Harari · 9 Sep 2024 · 566pp · 169,013 words
by Tim O'Reilly · 9 Oct 2017 · 561pp · 157,589 words
by William MacAskill · 31 Aug 2022 · 451pp · 125,201 words
by Maximilian Kasy · 15 Jan 2025 · 209pp · 63,332 words
by Jacob Turner · 29 Oct 2018 · 688pp · 147,571 words
by Byrne Hobart and Tobias Huber · 29 Oct 2024 · 292pp · 106,826 words
by Klaus Schwab · 11 Jan 2016 · 179pp · 43,441 words
by John Cassidy · 12 May 2025 · 774pp · 238,244 words
by Denise Hearn and Vass Bednar · 14 Oct 2024 · 175pp · 46,192 words
by W. David Marx · 18 Nov 2025 · 642pp · 142,332 words
by Anupreeta Das · 12 Aug 2024 · 315pp · 115,894 words
by Clive Thompson · 26 Mar 2019 · 499pp · 144,278 words
by Eric Topol · 1 Jan 2019 · 424pp · 114,905 words
by Rizwan Virk · 31 Mar 2019 · 315pp · 89,861 words
by Richard Petersen · 15 May 2015
by Eliot Higgins · 2 Mar 2021 · 277pp · 70,506 words
by Anil Seth · 29 Aug 2021 · 418pp · 102,597 words
by Ezra Klein and Derek Thompson · 18 Mar 2025 · 227pp · 84,566 words
by Rowan Hooper · 15 Jan 2020 · 285pp · 86,858 words
by Jacob Helberg · 11 Oct 2021 · 521pp · 118,183 words
by Daron Acemoglu and Simon Johnson · 15 May 2023 · 619pp · 177,548 words
by Byron Reese · 23 Apr 2018 · 294pp · 96,661 words
by Richard A. Clarke · 10 Apr 2017 · 428pp · 121,717 words
by Stuart Russell · 7 Oct 2019 · 416pp · 112,268 words
by Daniel Susskind · 16 Apr 2024 · 358pp · 109,930 words
by Brian Merchant · 25 Sep 2023 · 524pp · 154,652 words
by David G. W. Birch and Victoria Richardson · 28 Apr 2024 · 249pp · 74,201 words
by Dariusz Jemielniak and Aleksandra Przegalinska · 18 Feb 2020 · 187pp · 50,083 words
by Azeem Azhar · 6 Sep 2021 · 447pp · 111,991 words
by Tom Chivers · 6 May 2024 · 283pp · 102,484 words
by Annalee Newitz · 3 Jun 2024 · 251pp · 68,713 words
by Ayana Elizabeth Johnson · 17 Sep 2024 · 588pp · 160,825 words
by Kai-Fu Lee and Qiufan Chen · 13 Sep 2021
by Kevin Roose · 9 Mar 2021 · 208pp · 57,602 words
by Terrence J. Sejnowski · 27 Sep 2018
by Julia Ebner · 20 Feb 2020 · 309pp · 79,414 words
by Edward Niedermeyer · 14 Sep 2019 · 328pp · 90,677 words
by Jacob Ward · 25 Jan 2022 · 292pp · 94,660 words
by James Vlahos · 1 Mar 2019 · 392pp · 108,745 words
by Carl Benedikt Frey · 17 Jun 2019 · 626pp · 167,836 words
by Chris Hayes · 28 Jan 2025 · 359pp · 100,761 words
by William Poundstone · 3 Jun 2019 · 283pp · 81,376 words
by Michael Bhaskar · 2 Nov 2021
by Mo Gawdat · 29 Sep 2021 · 259pp · 84,261 words
by Toby Ord · 24 Mar 2020 · 513pp · 152,381 words
by Sebastien Donadio · 7 Nov 2019
by John Brockman · 19 Feb 2019 · 339pp · 94,769 words
by Mariya Yao, Adelyn Zhou and Marlene Jia · 1 Jun 2018 · 161pp · 39,526 words
by Jeanette Winterson · 15 Mar 2021 · 256pp · 73,068 words
by Frank Pasquale · 14 May 2020 · 1,172pp · 114,305 words
by Grant Sabatier · 10 Mar 2025 · 442pp · 126,902 words
by Nick Maggiulli · 22 Jul 2025
by Orly Lobel · 17 Oct 2022 · 370pp · 112,809 words
by Michiko Kakutani · 20 Feb 2024 · 262pp · 69,328 words
by Dennis Yi Tenen · 6 Feb 2024 · 169pp · 41,887 words
by Cory Doctorow · 6 Oct 2025 · 313pp · 94,415 words