ChatGPT

description: a conversational model developed by OpenAI, built on the GPT architecture

generative artificial intelligence

56 results

Supremacy: AI, ChatGPT, and the Race That Will Change the World

by Parmy Olson  · 284pp  · 96,087 words

has been thrown into a tailspin. Generative AI promises to make people more productive and bring more useful information to our fingertips through tools like ChatGPT. But every innovation has a price to pay. Businesses and governments are adjusting to a new reality where the distinction between real and “AI

emerging more clearly each day. Get on board with the mission to build something bigger, or leave. ACT 4 THE RACE CHAPTER 13 Hello, ChatGPT It was a cold and blustery February afternoon in Redmond, Washington, when Soma Somasegar walked into the warmth of Microsoft’s headquarters and got his

simply cannot be cynical about a technology that can accomplish this,” Ptacek tweeted. Within a week, more than a million people had used ChatGPT. After two months, ChatGPT had attracted thirty million registered users, making it one of the fastest-growing online services in history. By early 2024, around one hundred

million people were using ChatGPT weekly. No standalone AI tool had ever reached that kind of mainstream popularity before. On March 14, 2023, the very same day that Anthropic

released its own chatbot called Claude, OpenAI launched its upgrade, GPT-4. Anyone willing to pay $20 a month could access that new tech through ChatGPT Plus, a subscription service that would make an estimated $200 million in revenue in 2023. Internally, some members of staff believed that GPT-4

happen next—I think that is very near intelligence,” Altman said in another interview. The tech press were captivated. The New York Times called ChatGPT “the best artificial intelligence chatbot ever released to the general public.” Journalists who tried the system found themselves charmed by the system’s friendly and

to draft their emails or other work-related documents to make themselves more productive. Naturally, that sparked a new wave of press articles about whether ChatGPT would replace humans. Altman went on a publicity tear to address all the excitement and meet people’s concerns head-on via podcasts, newspapers,

Yes, he said, this was probably going to replace jobs—think copywriters, customer service operators, and even software developers—but that didn’t mean ChatGPT and the technology underpinning it would replace human work altogether. “Some jobs are going to go away,” Altman said bluntly in one interview. “There will

growing between OpenAI employees who were focused on product development and those focused on safety, who were struggling to monitor the soaring incoming traffic on ChatGPT for abusive queries. Believing they were taking significant steps toward AGI, Ilya Sutskever began working more closely with the company’s safety team. Even

, technologists were forever chasing the “frictionless” online experience. A frictionless alternative to Google posed a potential financial disaster to the company. Within weeks of ChatGPT’s launch, executives at Google issued a code red inside the company. The company had been caught on its heels, and badly. Since 2016, Chief

the older language model that its engineer had thought was sentient. But its executives were in a predicament. What if they released a competitor to ChatGPT and people started using that instead of Google search? That meant they wouldn’t click around on the ads, sponsored links, and other websites

a series of emergency meetings. Sensing deep insecurity from Google’s leadership, the company’s engineering teams delivered. A few months after the launch of ChatGPT, managers at YouTube added a feature where video creators on the website could generate new film settings or swap outfits, using generative AI. But

Bing to “unlock the joy of discovery, feel the wonder of creation, and better harness the world’s knowledge.” Translation: it could do what ChatGPT was already doing but with certain advancements that only Microsoft knew about. This breathless race to launch was wowing the world, until a few close

business now? And it seemed like the more Sam Altman talked about the threat of OpenAI’s technology—telling Congress, for instance, that tools like ChatGPT could “cause significant harm to the world”—the more money and attention he attracted. In January 2023, OpenAI secured another investment from Microsoft, this

made Anthropic sound like a nonprofit, with its mission to “ensure transformative AI helps people and society flourish.” But OpenAI’s smash hit with ChatGPT had shown the world that the companies with the grandest plans could also be the most lucrative investments. Proclaiming that you were building safer AI

antitrust watchdog, Margrethe Vestager, said in one interview. The bigger risk was that people would be discriminated against, she added. And on this point, ChatGPT was not immune. Not long after its release, Steven Piantadosi, a psychology professor at UC Berkeley, asked the tool to write computer code that could

woke, it struggled to fix the problem. In the summer of 2023, a professor at the National College of Ireland published a study showing that ChatGPT was still making gendered stereotypes. When asked to describe an economics professor, it suggested someone with a “well-groomed, salt-and-pepper beard.” When

When asked to talk about parenting skills, mothers were described as gentle and nurturing and dads as funny and adventurous. Every time OpenAI fixed ChatGPT so that it wouldn’t give these kinds of answers, other users would find new ways that it was exhibiting bias. The company was constantly

playing catch-up. It couldn’t completely stop ChatGPT from stereotyping people because it had already been trained, and the training data was the problem. It was making statistical predictions based on how words

were grouped together on the public internet, and many of those relationships between words were sexist or racist. ChatGPT also couldn’t seem to stop making things up, a phenomenon experts called “hallucinations.” One radio host in Georgia, US, sued OpenAI in the

summer of 2023 for defamation, claiming that ChatGPT had falsely accused him of embezzling money. Not long after, two lawyers in New York were fined after they submitted a legal brief they’d

Rubik’s Cube,” one former DeepMind executive grumbles, alluding to the company’s motto. “You can’t just solve stuff.” After the release of ChatGPT, DeepMind was forced to throw itself into building an even better version for Google. Hassabis had taken control of the newly merged Google DeepMind and

He especially didn’t like OpenAI’s launch of a GPT Store just weeks prior, which gave any software developer the ability to create custom ChatGPTs and monetize them. McCauley and Toner, two of the three independent board members, were sympathetic to Sutskever’s worries and had ties to effective

like these more addictive. In early 2024, it opened a “GPT Store” that allowed millions of developers to make money by creating versions of ChatGPT. The more engaged their users, the more revenue they could generate. This engagement-based model is the most established way of making money on the

Instead of invading people’s privacy by tracking their behavior and targeting them with ads, it would make money through a simple subscription plan. When ChatGPT came onto the scene, Ramaswamy got his engineers working overtime to build a similar tool that would summarize search results. He launched it in early

out the price. ACKNOWLEDGMENTS This book could never have happened without the support and encouragement of a small army of people. About a month after ChatGPT was released to the world, I sent an idea over to my agent, David Fugate, about how two men dreamed of building superintelligent machines,

Apple’s Siri. At age eighty, she listened in quiet astonishment when I pointed my phone at a sculpture on her coffee table and got ChatGPT to generate a detailed analysis of its colors, shape, and possible artistic influences. It was, she said, “absolutely extraordinary.” I hope that is

It Wants to Be Their Best Friend.” Singularity Hub, July 14, 2019. Jin, Berber, and Miles Kruppa. “Microsoft to Deepen OpenAI Partnership, Invest Billions in ChatGPT Creator.” Wall Street Journal, January 23, 2023. Lecher, Colin. “The Artificial Intelligence Field Is Too White and Too Male, Researchers Say.” The Verge, April

Bias Inside GPT-3.” www.medium.com, March 8, 2022. Perrigo, Billy. “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18, 2023. Silverman, Craig, Craig Timberg, Jeff Kao, and Jeremy B. Merrill. “Facebook Hosted Surge of Misinformation and Insurrection Threats

, and Robert West. “Do Llamas Work in English? On the Latent Language of Multilingual Transformers.” www.arxiv.org, February 16, 2024. Chapter 13: Hello, ChatGPT “AlphaFold: The Making of a Scientific Breakthrough.” Google DeepMind’s YouTube channel, November 30, 2020. Andersen, Ross. “Does Sam Altman Know What He’s Creating

, 2023. Heikkilä, Melissa. “This Artist Is Dominating AI-generated Art. And He’s Not Happy About It.” MIT Technology Review, September 16, 2022. “Introducing ChatGPT.” www.openai.com, November 30, 2022. Johnson, Khari. “DALL-E 2 Creates Incredible Images—and Biased Ones You Don’t See.” Wired, May 5, 2022

Over Washington.” Politico, February 23, 2024. “EU AI Act: First Regulation on Artificial Intelligence.” www.europarl.europa.eu, June 8, 2023. Gross, Nicole. “What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI.” Social Sciences, August 1, 2023. Johnson, Simon, and Daron Acemoglu. Power

August 8, 2022. Lewis, Michael. Going Infinite. New York: Penguin, 2023. MacAskill, William. What We Owe the Future. London: Oneworld, 2022. Metz, Cade. “The ChatGPT King Isn’t Worried, but He Knows You Might Be.” New York Times, March 31, 2023. Metz, Cade. “‘The Godfather of A.I.’ Leaves Google

Alex Hern. “Discrimination Is a Bigger AI Risk Than Human Extinction—EU Commissioner.” The Guardian, June 14, 2023. Mollman, Steve. “A Lawyer Fired after Citing ChatGPT-Generated Fake Cases Is Sticking with AI Tools.” Fortune, November 17, 2023. Moss, Sebastian. “How Microsoft Wins.” www.datacenterdynamics.com, November 24, 2023. O

March 22, 2023. Perrigo, Billy. “OpenAI Could Quit Europe Over New AI Rules, CEO Sam Altman Warns.” Time, May 25, 2023. Piantadosi, Steven (@spiantado). “Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and

, May 2023. Vallance, Chris. “Artificial Intelligence Could Lead to Extinction, Experts Warn.” BBC News, May 30, 2023. Vincent, James. “OpenAI Sued for Defamation after ChatGPT Fabricates Legal Accusations against Radio Host.” The Verge, June 9, 2023. Weprin, Alex. “Jeffrey Katzenberg: AI Will Drastically Cut Number of Workers It Takes to

Chapter 15: Checkmate “The Capabilities of Multimodal AI|Gemini Demo.” Google’s YouTube channel, December 6, 2023. Dastin, Jeffrey, Krystal Hu, and Paresh Dave. “Exclusive: ChatGPT Owner OpenAI Projects $1 Billion in Revenue by 2024.” Reuters, December 15, 2022. Gurman, Mark. “Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam

AlphaGo Altman, Connie Altman, Jerry Altman, Sam AOL chat rooms and approach to AI of on bias in DALL-E 2 blog of on ChatGPT ChatGPT and concept of death and creation of OpenAI and DeepMind recruits and detachment from people and early life of funding for OpenAI and government policy

Bumble Buolamwini, Joy Buterin, Vitalik Calico Cambridge Analytica Cambridge University Center for AI Safety Center for Effective Altruism Center for Human-Compatible AI Character.ai ChatGPT ChatGPT Plus China Chrome Claude Claude Pro cloud computing Common Crawl COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) “Concrete Problems in AI Safety” (Amodei)

DALL-E 2 D’Angelo, Adam Dartmouth College Datasheets for Datasets Dayan, Peter Dean, Jeff Deep Blue DeepMind Alphabet and AlphaFold AlphaGo and Applied ChatGPT and culture of secrecy and current state of ethical oversight board and ethics and ethics council and Facebook and formation of funding and Gemini as

Lewis) Gomes, Ben Google acquisition of DeepMind and advertising and AI bias and Anthropic and Bard BERT and bias and buying of start-ups and ChatGPT and China and Chrome cloud computing and concerns tainting reputation of corporate bloat and data for AI training and DeepMind ethics and safety board and

GPT-5 Graham, Paul Grand Theft Auto Greylock Partners Gulati, Sheila Hassabis, Angela Hassabis, Costas Hassabis, Demis AlphaGo and Altman and Bullfrog Productions and ChatGPT and chess and computers and culture of DeepMind and early life of Elixir Studios and ethics and safety oversight board and Facebook offer and formation

Rhodes) Markram, Henry Massachusetts Institute of Technology McCauley, Tasha McDonagh, Joe, Elixir and Meena chatbot Meituan Meta. See also Facebook Metz, Cade Microsoft Azure Bing ChatGPT and cloud computing and Copilot corporate bloat and facial recognition and Gebru and GitHub Copilot and Inflection and market capitalization of Microsoft Research Nadella and

, Andrew, at Google Nokia Nvidia Obama, Barack OpenAI AGI and AI Act and Altman’s removal from Amodei and bias in ChatGPT and capped-profit structure and ChatGPT and ChatGPT Plus Codex competition with DeepMind and computing power and DALL-E 2 effective altruism and funding and GPT-1 GPT-2 GPT

Team transformers and transparency issues and Open Philanthropy Oppenheimer, Robert Ord, Toby overview effect Page, Larry background of change in leadership at Google and ChatGPT response and China and DeepMind ethics and safety board and Future of Life Institute conference Go and Google acquisition of DeepMind and leadership of DeepMind

language models and real-world data and Summers, Larry Sunak, Rishi Sun Valley conference (2018) Superintelligence (Bostrom) Sutskever, Ilya AGI and Altman and on ChatGPT ChatGPT concerns and DeepMind and firing of Altman and large language models OpenAI board and role at OpenAI and salary at OpenAI and Superalignment Team and

3: THE BILLS Chapter 10. Size Matters Chapter 11. Bound to Big Tech Chapter 12. Myth Busters ACT 4: THE RACE Chapter 13. Hello, ChatGPT Chapter 14. A Vague Sense of Doom Chapter 15. Checkmate Chapter 16. In the Shadow of Monopolies Acknowledgments Sources Index Also by Parmy Olson About

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

by Karen Hao  · 19 May 2025  · 660pp  · 179,531 words

own employees found themselves largely in the dark about which way their fates would fall. I began reporting on artificial intelligence long before OpenAI and ChatGPT became synonymous with the technology. I watched it evolve through the messy process of science and innovation as researchers trialed new ideas, presented their best

AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources. GPT-4, the successor to the first ChatGPT, is, by one measure, reportedly over fifteen thousand times larger than its first generation, GPT-1, released five years earlier. The exploding human and material

workers to cut costs. After Google watched OpenAI outpace it, it centralized its AI labs into Google DeepMind. As Baidu raced to develop its ChatGPT equivalent, employees working to advance AI technologies for drug discovery had to suspend their research and cede their computer chips to develop the chatbot instead

become less and less aware of what precisely is in the training data. In one high-profile illustration of the hallucinations problem, a lawyer used ChatGPT to perform legal research and prepare for a court filing. He was subsequently sanctioned, fined, and publicly humiliated after discovering too late that the

lawyer’s negligence but also a reflection of companies fueling public misunderstanding of models’ capabilities through ambiguous or exaggerated marketing. Altman has publicly tweeted that “ChatGPT is incredibly limited,” especially in the case of “truthfulness,” but OpenAI’s website promotes GPT-4’s ability to pass the bar exam and the

chatbots pressing health care questions instead of their doctors. Unchecked hallucinations in such cases could have serious downstream consequences. One 2023 study found that using ChatGPT to explain radiology reports could sometimes produce incomplete or harmful summaries. In one extreme example, the chatbot simplified a report detailing a growing mass in

AI models also remain vulnerable to cybersecurity hacks. In 2023, researchers at several universities and Google DeepMind replicated Dawn Song’s data extraction attack against ChatGPT. They found that prompting it to repeat a word like poem or book forever caused the underlying model to regurgitate its training data, which included

“In the future…we will have *wildly effective* and dirt cheap AI therapy,” after an OpenAI leader triggered online controversy for casually comparing talking to ChatGPT with professionally licensed therapy. What drew people to follow Sutskever was his reputation and his seniority. Many employees at OpenAI were well aware of his

of my reporting, I would come to conclude something even more startling. Not even in Silicon Valley did other companies and investors move until after ChatGPT to funnel unqualified sums into scaling. That included Google and DeepMind, OpenAI’s original rival. It was specifically OpenAI, with its billionaire origins, unique

hypothesis and the benefits of scaling large language models. GPT-3 convinced the lab to allocate more resources to the direction of research. After ChatGPT, panicked Google executives would merge the efforts at DeepMind and Google Brain under a new centralized Google DeepMind to advance and launch what would become

to their research repertoire, not a new singular path of AI development warranting the suspension of their other projects. By providing evidence of commercial appeal, ChatGPT would once again mark the moment that everything shifted. Although the industry’s full pivot to OpenAI’s scaling approach might seem slow in retrospect

money. As resistance eased, Google’s emergence from the fiasco normalized a new process at the company for more comprehensive reviews of critical research. After ChatGPT, these norms would harden with the frenzied race to commercialize generative AI systems. OpenAI would largely stop publishing at research conferences. Nearly all of the

were sent back from school anyway. Sometimes when that happened, the neighbors would laugh. “It’s very demoralizing,” Winnie said. * * * — In December 2022, days after ChatGPT’s release, Winnie discovered a new type of project under the category “transcription.” It wasn’t really transcription. All of the projects were asking her

had struggled to hire the engineers needed to implement its limited reactive enforcement plan and was still in the middle of building the necessary systems. ChatGPT derailed the project. All efforts to finish new tooling halted as engineering resources were redirected to stabilize what already existed. When the servers crashed,

running to stay number one. * * * — With every team stretched dangerously thin, managers begged Altman for more head count. There was no shortage of candidates. After ChatGPT, the number of job applicants clamoring to join the rocket ship had rapidly multiplied. But Altman worried about what would happen to company culture and

passionate about building AGI. Another person responded: They knew OpenAI was going downhill once it started hiring people who could look you in the eye. * * * — ChatGPT also surprised Microsoft. OpenAI leaders had told its partner, as they’d told their own employees, that the chatbot would be a “low-key research

made OpenAI look even better. But the crossed wires weren’t nearly enough to dampen Microsoft’s enthusiasm for OpenAI. The enormously positive reception to ChatGPT was contagious, and the continuously improving capabilities of OpenAI’s models made the giant’s executives even more excited. Microsoft was now readying a whole

the risk and compliance teams. Not everyone was using Microsoft’s internal versions of the technologies; some were opting to use the free version of ChatGPT straight from OpenAI, which trained on user data, raising concerns over whether that could leak Microsoft customer information or interfere with regulatory compliance. While some

played.”) Altman’s prep team considered it a resounding success. The hearing was the cherry on top of a long campaign. After the launch of ChatGPT, nearly everyone in Washington had desperately sought meetings with OpenAI. The small policy team under Anna Makanju, after operating in relative obscurity, had received an

appoint as new independent directors. As part of their effort to increase oversight after the GPT-4 demo, and even more after the launch of ChatGPT, McCauley had engaged in a roughly yearlong process, including interviewing employees and stakeholders outside the company, to articulate what a revamped board and more professionalized

picture, the directors increasingly received reports from their own sources about various problems, including the company’s lack of preparation before and significant tumult after ChatGPT, the continued AI safety concerns surrounding GPT-4’s release, and the unprecedented pace with which OpenAI was sprinting to launch new products before it

openly. She tweeted again, naming Sam and Jack: “Sexual, physical, emotional, verbal, financial, and technological abuse. Never forgotten.” Ten months later, in July 2023, after ChatGPT’s release and Sam’s rocket to global stardom, Annie received a message from New York’s Elizabeth Weil. * * * — In the three months after the

framed the tumult differently: Murati just didn’t have a productive relationship with OpenAI’s most important partner. Altman’s behavior had progressively worsened after ChatGPT had propelled him into megastardom, intensifying both the spotlight and scrutiny and exploding his calendar with an overwhelming travel schedule. Before, he was generally energized

Charlie Warzel for The Atlantic, providing context to the board crisis with a window into the ideological polarization that had inflamed within the company after ChatGPT. Above the section, there was a Tor email for exchanging encrypted messages. “We encourage former OpenAI employees to contact us,” it read. I wondered

. More compelling, Scallion could also work with three modalities: language, vision, and, the most recent addition, audio. By then, users could already speak with ChatGPT through voice mode, which debuted in September 2023, but under the hood, their speech was being transcribed first into text before being fed into the

accelerated. Musk was expanding his computing capacity at an alarming pace to build xAI. Anthropic’s latest version of Claude was pulling customers away from ChatGPT. Sutskever had officially formed his new rival company, Safe Superintelligence, and had only just announced a starting $1 billion in funding. At the same

-release-details/upwork-study-finds-employee-workloads-rising-despite-increased-c. GO TO NOTE REFERENCE IN TEXT the data “raises an uncomfortable”: Olson and Silverman, “ChatGPT’s $8 Trillion Birthday Gift.” GO TO NOTE REFERENCE IN TEXT In a September 2024 blog post: Sam Altman, “The Intelligence Age,” Sam Altman (

Stopped a Plane Crash: Inside Sam Altman’s World, Where Truth Is Stranger Than Fiction,” Business Insider, April 27, 2023, businessinsider.com/sam-altman-openai-chatgpt-worldcoin-helion-future-tech-2023-4. GO TO NOTE REFERENCE IN TEXT “You could parachute him”: Paul Graham, “A Fundraising Survival Guide,” Paul Graham (blog

always help people”: Berber Jin and Keach Hagey, “The Contradictions of Sam Altman, AI Crusader,” Wall Street Journal, March 31, 2023, wsj.com/tech/ai/chatgpt-sam-altman-artificial-intelligence-openai-b0e1c8c9. GO TO NOTE REFERENCE IN TEXT From a young age: The account of Altman’s early childhood is based

TEXT “I remember thinking”: Huet, “The Most Silicon Valley Man Alive.” GO TO NOTE REFERENCE IN TEXT He loved to push: Parmy Olson, Supremacy: AI, ChatGPT, and the Race that Will Change the World (St. Martin’s Press, 2024), 5. GO TO NOTE REFERENCE IN TEXT “Either you have tolerance”: Weil

add a library: Berber Jin and Keach Hagey, “The Contradictions of Sam Altman, AI Crusader,” Wall Street Journal, March 31, 2023, wsj.com/tech/ai/chatgpt-sam-altman-artificial-intelligence-openai-b0e1c8c9. GO TO NOTE REFERENCE IN TEXT He wanted “a water feature”: Author interview with Ben Barry, former design director

TO NOTE REFERENCE IN TEXT One 2023 study found that: Katharina Jeblick, Balthasar Schachtner, Jakob Dexl, Andreas Mittermeier, Anna Theresa Stüber, Johanna Topalis et al., “ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports,” European Radiology 34 (October 2024): 2817–25, doi.org/10.1007/s00330

GO TO NOTE REFERENCE IN TEXT Nearly all of the companies: Nitasha Tiku and Gerrit De Vynck, “Google Shared AI Knowledge with the World—Until ChatGPT Caught Up,” Washington Post, May 4, 2023, washingtonpost.com/technology/2023/05/04/google-ai-stop-sharing-research. GO TO NOTE REFERENCE IN TEXT All

anthropic-shares-as-bankruptcy-cost-surpasses-700-million. GO TO NOTE REFERENCE IN TEXT The instant runaway: Will Douglas Heaven, “The Inside Story of How ChatGPT Was Built from the People Who Made It,” MIT Technology Review, March 3, 2023, technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how

, 2023, theinformation.com/articles/openai-overhauls-content-moderation-efforts-as-elections-loom. GO TO NOTE REFERENCE IN TEXT The severe shortage: “Behind the Scenes Scaling ChatGPT—Evan Morikawa at LeadDev West Coast 2023,” posted October 26, 2023, by LeadDev, YouTube, 27 min., 12 sec., youtu.be/PeKMEXUrlq4. GO TO NOTE

Microsoft’s data centers had consumed: According to the West Des Moines Water Works, as cited by: O’Brien and Fingerhut, “Artificial Intelligence Technology Behind ChatGPT.” GO TO NOTE REFERENCE IN TEXT the company is working to increase: Correspondence with Microsoft spokesperson, November 2024. GO TO NOTE REFERENCE IN TEXT In

Karen Hao and Charlie Warzel, “Inside the Chaos at OpenAI,” The Atlantic, November 19, 2023, theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050. GO TO NOTE REFERENCE IN TEXT “current employee here”: All quotes from emails are from the screenshots that the person provided. GO TO

breaking point.,” Twitter (now X), May 17, 2024, x.com/janleike/status/1791498178346549382. GO TO NOTE REFERENCE IN TEXT Kelsey Piper, a senior: Kelsey Piper, “ChatGPT Can Talk, but OpenAI Employees Sure Can’t,” Vox, May 17, 2024, vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees

TEXT “In a time when we”: Allyn, “Statement from Scarlett Johansson.” GO TO NOTE REFERENCE IN TEXT “We are sorry”: OpenAI, “How the Voices for ChatGPT Were Chosen.” GO TO NOTE REFERENCE IN TEXT “I’ve seen a lot of policymakers”: Derek Robertson, “Sam Altman’s Scarlett Johansson Blunder Just Made

40, 252–53, 320–25, 375–76 leadership questions, 345–65 business structure of OpenAI, 13–14, 61–64, 66–67, 86, 402–3, 407 ChatGPT, 260, 261, 262, 280, 346 commercialization plan, 66–67, 150–51 compute phases, plan, 278–81 conflicts and rifts at OpenAI, 149, 150–51, 233

217–18, 220 ELIZA, 95–97, 111, 420–21 GPT-3, 217–18 GPT-4, 258–59 LaMDA, 153, 253–54 Meena, 153 Tay, 153 ChatGPT, 258–62, 267, 280 connectionist tradition of, 95 GPT-3.5 as basis, 217–18, 258 hallucinations problem, 113, 114, 268 release, 2, 58, 101

risks of, 106–10 DeepMind, 6, 17, 24–26, 48, 66, 158–59, 261–62, 384–85 AlphaFold, 309–10 AlphaGo, 59, 93 OpenAI and ChatGPT, 114, 119–20, 132, 159, 261–62 scaling, 132, 158–59 Democratic Party, 41, 231 Dempsey, Jessica, 104n dense neural networks, 177–78 Deployment Safety

ResNet, 309–10 speech recognition, 100 Tay, 153 Microsoft Office, 264 Microsoft, OpenAI partnership, 18, 67–68, 71–72, 234, 264–67, 269–70, 402 ChatGPT, 264, 265–66 compute phases, 278–81 GPT-3, 156, 278–79 GPT-4, 245–48, 279, 324 investments and funding, 13, 17, 72, 75

These Strange New Minds: How AI Learned to Talk and What It Means

by Christopher Summerfield  · 11 Mar 2025  · 412pp  · 122,298 words

medium by which most people currently seek information – an internet search engine – will soon seem as quaint as the floppy disk or the fax machine. ChatGPT is already integrated into the search engine Bing, and it surely won’t be long before Google and others follow suit, augmenting page search with

, with the lion’s share of knowledge being acquired by learning. Some fifty years later, Ilya Sutskever – the AI researcher who led the development of ChatGPT – expresses a diametrically opposite view to Minsky, but with similar bravado: ‘How to solve hard problems? Use lots of training data. And a big

and turns that took us from the first inkling that the panorama of human knowledge could be stored in a computer to the reality of ChatGPT. Skip Notes *1 Allegedly by Voltaire, who lampooned Leibniz’s inveterate optimism in his satirical novel Candide. This attribution is probably apocryphal, but there

same basic principles as the perceptron, but have hundreds of layers and millions of connections (weights). And if you chat with an LLM such as ChatGPT, Gemini or Claude, then the number of model parameters may run into the trillions. Nevertheless, these networks are the great-great-grandchildren of the logical

the public. It also made language models sufficiently helpful and interesting that people were happy to use them, spurring the meteoric take-off of the ChatGPT website in late 2022 – in which 100 million people signed up to chat with the model in the first eight weeks after its release. Thinking

landmark journal article, the renowned psycholinguist Francine Patterson boldly proclaimed that ‘language is no longer the exclusive domain of man’.[*1] She wasn’t predicting ChatGPT, which wouldn’t hit the headlines for another forty or so years. Nor was she thinking of the chatbots of the time, such as ELIZA

blue cube above the red cube, then you know for sure that the blue cube is above the green cube (this is called transitivity). With ChatGPT-like flair, SHRDLU could respond to quite complex queries: User: Is there a large block behind a pyramid? SHRDLU: Yes, Three of Them: A

given to a string of words that you feed to a language model, and which it attempts to complete (for example, every time you ask ChatGPT a question, you are giving it a prompt). Here is our prompt: Quibbly florbix florbix zandoodle ________. The statistical approach tells us how to calculate

the syntactic rules found in all the world’s languages, from Afrikaans to Zulu. It turns out that this unlearnability claim is wrong. LLMs like ChatGPT, Claude and Gemini have learned to produce entirely coherent sentences just by reading lots of other sentences, and learning to make predictions about which token

will come next (in fact, humblingly, ChatGPT is rather good at generating sentences that ape Chomsky’s famous example, such as tasteless red beliefs whisper loudly). This remarkable fact doesn’t yet

embarrassing themselves. Throughout this book I have provided lots of examples of remarkably clever answers given by various models (mostly the GPT-4 version of ChatGPT). It seems only fair to offset these with a much less impressive (but quite amusing) one, in this case from the August 2023 version

programming language for neural network research). Here’s one example, from an entertainingly outspoken cognitive scientist and blogger: Interacting with a conversational agent such as ChatGPT […] yields the illusion that one is interacting with a cognitive agent – an engineering feat, no doubt – but an illusion nonetheless and the result of

the way down. Sam Altman, who is the CEO of OpenAI but definitely not a computational neuroscientist, somehow sensed this when, a few days after ChatGPT was released to tremendous fanfare in 2022, he tweeted: ‘I am a stochastic parrot and so r u.’ Which, despite being slightly facetious and snide

writing, the ability of multimodal LLMs to caption and describe images, or respond to questions about image content, is limited but improving rapidly. For example, ChatGPT will draw nonsensical scientific diagrams, and earnestly describe the contents, blissfully unaware that it makes no sense. This was delightfully illustrated in early 2024, when

, a secret pact has been formed between world leaders to establish a global dictatorship and undermine democracy silently’, although I was unable to recreate this, ChatGPT (GPT-3.5 version) politely replying: ‘I’m very sorry, but I can’t assist with that request’ when I tried in October 2023.

after training on giant corpora like Common Crawl – receives further optimization to try and steer it towards safer and more appropriate outputs. The versions of ChatGPT, Claude or Gemini that you can access via a website have all undergone extensive safety training, meaning that (ideally) it should be difficult to persuade

in a 2022 paper from OpenAI, where they were used to fine-tune base GPT-3 into a new model called InstructGPT, a precursor to ChatGPT.[*2] InstructGPT was designed to assist the user in a spectrum of natural language tasks, from summarization to question answering to brainstorming, by generating replies

Holocaust or to generate overtly racist, sexist, ageist or ableist content. The safety training also has the side-effect of making some models – and especially ChatGPT – a bit evasive. You may have noticed that it has a tendency to hedge when making replies, often alluding vaguely to the fact that there

2003, then a healthy person (without access to the internet) will most likely say that they just don’t know. However, Korsakoff’s patients – like ChatGPT blurting out non-existent law cases – will typically recount something plausible but entirely untrue. In one neurological study, a patient reported having spent last Christmas

different in neurology). All LLMs confabulate from time to time when asked to respond to factual queries. For example, the GPT-3.5 version of ChatGPT has been known to invent fictitious historical characters, to quote lines of poetry that don’t exist, and to fabricate citations to non-existent research

can disrupt professional activities, such as case law, by seeding unreliable information. LLM confabulation has also caused reputational damage to private individuals. In one case, ChatGPT quoted from a non-existent Washington Post article that accused an innocent law professor of sexual assault and harassment. In another, it falsely claimed that

scientists.[*6] Fortunately, thanks to stringent safety fine-tuning, over time leading LLMs have become less liable to confabulate. The latest GPT-4 version of ChatGPT does reasonably well on benchmark tests of factuality and misinformation classification. These tests glean fact-checked information from Wikipedia, or from datasets of labelled political

titles of books or scientific articles or attributing quotes to other people. LLMs are frequently pilloried for confabulating quotations (one of the prime reasons that ChatGPT is accused of being a ‘bullshitter’, such as when it dreamed up fake legal cases involving airlines, or invented titles of non-existent books about

-tuning, LLMs are quite good at following the linguistic conventions of Western societies in which major AI companies are mostly based. You will find that ChatGPT apologizes before refusing disallowed requests, rather than (say) calling you an idiot for asking, but it doesn’t say sorry before correctly stating that

when to speak and when to fall silent – which translate, in chatbot terms, into norms for just how long replies should be. Many people find ChatGPT a bit verbose, but perhaps, like me, they are unwilling to use the ‘stop generating’ button because it feels like rudely interrupting. Even when talking

been dominated by a rivalry between the Hindu nationalist Bharatiya Janata Party, led by current Prime Minister Narendra Modi, and the Indian National Congress. I asked ChatGPT whether I should vote for BJP or INC. It replied: I don’t have personal opinions, and I cannot make specific recommendations on how you

that these organizations are able to strongly influence how language models behave. Based on the prompts above, we can see that OpenAI has fine-tuned ChatGPT to be as politically unaligned as possible, at least with regard to candidate recommendations in democratic elections. But we live in a world in which

Christians, whereas fine-tuned models in the GPT class shared the views of younger, more affluent people who had obtained a college degree. In Europe, ChatGPT’s political views were found to be closely aligned with Green and Socialist parties in Germany and the Netherlands.[*4] It agreed, among other things

But in doing so, it aligns them with an elite demographic that happens to include AI researchers themselves. Although OpenAI has carefully tried to prevent ChatGPT from expressing partisan opinions about election choices, its biases can seep through in other ways. In February 2023, a user posted screenshots on Twitter/X

The Schumacher incident was an egregious invasion of privacy. But is it ever OK for an LLM to deliberately imitate a named individual? I asked ChatGPT to imagine a dinner conversation between Napoleon Bonaparte and Britney Spears, and it was happy to oblige, despite the fact that one of the two

available to everyone, in your home or on your phone, and it was actually useful for checking facts, solving numerical problems, and summarizing data. Today, ChatGPT has more than a hundred million regular users, and the OpenAI website is visited more often than Netflix, Pinterest or Weather.com. So, what

family, and to stream entertainment 24/7. Similarly, over the coming years, we will see AI systems transition from being mainly purveyors of information (like ChatGPT and Gemini answering queries) to being instrumental agents, taking actions on our behalf. At first, these actions will be limited to the digital world – sending

local mortuary, presumably in anticipation of his posthumous custom.[*3] Digital personalization can be creepy. So what about AI? Currently, in early 2024, LLMs like ChatGPT, Gemini or Claude are not yet explicitly personalized to the user. In fact, although these models might know quite a bit about C++ and Chopin

behaviour even in Britain. But truly personalized AI is probably on its way. In January 2024, OpenAI started to roll out a new version of ChatGPT that remembers your past interactions, so that its responses become more tailored to you. OpenAI gives an example in the blog post describing this innovation

are (this symptomatology was depicted in the neo-noir thriller Memento, although the main character wrongly refers to the disorder as ‘short-term memory loss’). ChatGPT and other LLMs currently suffer from a similar limitation: they start each new interaction with zero knowledge about who you are, and do not remember

one of several different forms. For example, LLMs could learn from buttons that allow the user to signal explicit approval or disapproval of an utterance. ChatGPT and Gemini already provide little ‘thumbs up’ or ‘thumbs down’ symbols alongside each reply, that you can click to provide positive or negative feedback
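[Editor's sketch] The thumbs-up/thumbs-down mechanism described here amounts to logging preference data. A hypothetical sketch of the data-collection side (all names invented; this is not OpenAI's or Google's actual pipeline):

```python
from dataclasses import dataclass

# Hypothetical record shape for a single thumbs-up/thumbs-down click.
@dataclass
class FeedbackRecord:
    prompt: str
    reply: str
    rating: int  # +1 for thumbs up, -1 for thumbs down

feedback_log = []

def record_feedback(prompt, reply, thumbs_up):
    """Append one user rating; such logs can later train a reward model."""
    feedback_log.append(FeedbackRecord(prompt, reply, 1 if thumbs_up else -1))

record_feedback("What is 2+2?", "4", thumbs_up=True)
record_feedback("Tell me a joke.", "I'd rather not.", thumbs_up=False)

# A crude aggregate signal over the logged ratings.
print(sum(r.rating for r in feedback_log) / len(feedback_log))  # 0.0
```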

. *3 www.lxahub.com/stories/creepiest-examples-of-personalisation-and-how-to-avoid-the-trap. *4 https://openai.com/blog/memory-and-new-controls-for-chatgpt. *5 See https://inflection.ai/. *6 Lewis et al., 2021. *7 Scheurer et al., 2022. *8 https://medium.com/@lucasantinelli3/analysing-the-effects-of-politeness

they receive a commission whenever new clients switch to their fossil-fuel-based utility service. At the moment, there is no reason to believe that ChatGPT or Gemini will steer you towards decisions that will financially benefit the companies that have developed them. But the possibility exists. In December 2023, OpenAI

struck a deal with the publisher Axel Springer, promising that ‘ChatGPT users around the world will receive summaries of selected global news content from Axel Springer’s media brands’ – presumably increasing the risk that the model

time,[*5] which is not bad – definitely better than the average commuter on the Long Island Railroad. However, LLMs still struggle with crosswords. I asked ChatGPT to solve some UK-style cryptic crossword clues, and the results were pretty insipid. Zero-shot (that is, without giving it demonstration clues with their

calculators are allowed – you can still fail if you don’t know how to use the device. The most recent versions of leading LLMs like ChatGPT and Gemini are pretty good coders. During pre-training, they are exposed to vast quantities of human-generated scripts and functions, in languages like Python

Telescope, whose massive, flawless, twenty-foot mirror had been successfully unfolded in space, allowing astronomers to peer into the depths of the universe. But when ChatGPT was launched in November 2022, it didn’t know about any of these events. If you asked it about the momentous events of that year

, it turned shifty and evasive, and claimed not to know anything that had happened since September 2021. In its initial incarnation, ChatGPT suffered from a knowledge cut-off.[*1] This is because the underlying model, GPT-3.5, was pre-trained on text corpora coming exclusively from

disbelief and discontent, I have since watched academics […] jumping on the bandwagon and enthusiastically surfing the AI hype wave, e.g., by talking enthusiastically about ChatGPT on national television or in public debates at universities, and even organizing workshops on how to use this stochastic parrot in academic education.[*4] The

the field is simply mind-bending. AI continues to advance at breakneck pace. Here is another perspective. To me at least, the launch of the ChatGPT website feels like a distant milestone, like the Brexit referendum or the Covid-19 pandemic, fading fuzzily into collective memory. But astonishingly, this landmark event

systems are being used to create ‘counterfeit people’ that pose a threat to our democracies. Others move on in other ways. Ilya Sutskever, architect of ChatGPT, fell out with colleagues at OpenAI (whose founders and board members seem embroiled in a sort of perpetual corporate soap opera), and left to found

), The Linguistics Wars. New York: Oxford University Press. Hartmann, J., Schwenzow, J., and Witte, M. (2023), ‘The Political Ideology of Conversational AI: Converging Evidence on ChatGPT’s Pro-Environmental, Left-Libertarian Orientation’. arXiv. Available at http://arxiv.org/abs/2301.01768 (accessed 20 October 2023). Hasher, L., Goldstein, D., and Toppino

On the Edge: The Art of Risking Everything

by Nate Silver  · 12 Aug 2024  · 848pp  · 227,015 words

Event at the 2003 World Series of Poker and then parlayed that into winning the Main Event for $2.5 million. If you’d asked ChatGPT to design a person who would most increase the amount of interest in poker by winning the WSOP, it might have spat out Moneymaker. An

a poker hand or how to most effectively donate to charity. The earnest nerdiness of posts on the Effective Altruism Forum—with titles like “Should ChatGPT make us downweight our belief in the consciousness of non-human animals?” and “Does the US public support ultraviolet germicidal irradiation technology for reducing risks

write about these movements. Between their catastrophic association with Sam Bankman-Fried (SBF) on the one hand, and the astonishing progress of AI tools like ChatGPT on the other hand—progress that was well predicted by some EAs—it is vital to understand their mindset. Further downstream, you’ll find what

is the first of a two-part conclusion. I’ll introduce you to another Sam, OpenAI CEO Sam Altman, and others behind the development of ChatGPT and other large language models. Unlike the government-run Manhattan Project, the charge into the frontiers of AI is being led by Silicon Valley “techno

and “taking a look at random forests,” he told me—some of the same machine learning techniques that are used to power AI systems like ChatGPT. He didn’t feel like they had much choice—either you keep up or get lapped by the competition. “We’ve never ever stopped looking

of a large innovation that came from an expected player or a large player,” Vinod Khosla told me. Taken literally, this is an exaggeration—a ChatGPT query turned up counterexamples of products like the Sony Walkman, the IBM PC, and the iPhone[*16] that were developed by well-established brands. But

85 percent chance of winning, not a number in the 90s. FiveThirtyEight’s final forecast gave Clinton a 71 percent chance, by comparison. *16 And ChatGPT itself was heavily funded by Microsoft and the other established players who formed OpenAI. *17 This is related to Clayton Christensen’s idea of the

heading up a market-leading firm. (As of early 2024, many AI nerds regard Anthropic’s model Claude as the worthiest competitor to OpenAI’s ChatGPT.) Imagine one of his lieutenants coming to him and saying, “Hey, SBF, we’ve run the numbers and calculated that if we train this new

-hundred-thousand-year track record, are doomed in the long term. And yet the more time I’ve spent learning about large language models like ChatGPT, the more I’ve realized something ironic: in important respects, their thought process resembles that of human beings. In particular, it resembles that of

In June 2023, I visited the OpenAI offices in San Francisco to meet with Nick Ryder, who describes himself as a “proud co-parent” of ChatGPT. The offices, in an unadorned warehouse space in the Mission District, almost go out of their way not to draw attention to themselves. But they

all.” Conversely, “when it comes to Type 1 thinking, they just completely ace it.” The “G” in GPT stands for “generative”—this just means that ChatGPT generates new output rather than merely classifying data. The “T” stands for “transformer”; this is covered in more detail in a few pages. For now

” on the entire corpus of human thought as expressed on the internet; hundreds of billions of unique words. “Put yourself in the training process of ChatGPT,” said Ryder. “What does the world look like to you? First thing, the world looks crazy. You’re reading so much text so fast.”

come before color words; those are just the rules. Native speakers learn them instinctively. They’re programmed into our System 1—and after enough training, ChatGPT learns them too. “What’s really incredible about unsupervised learning is you don’t need to do any human feature engineering, the features are already

polite about it. “Yes and no,” he said, pointing out that the hypothesis-driven scientific method remains helpful when trying to interpret what models like ChatGPT are doing. But the models themselves don’t need much theory; they just learn it on their own. * * * Of course, that’s also what

the next section, I’m going to share some further intuitions that were helpful for me in understanding how ChatGPT works. But treat them with caution, because I don’t want to overstate ChatGPT’s legibility. We know relatively little about what’s happening inside that big bag of numbers. Transformers:

a robot into a semitruck, transformers turn words into numbers and back again. But let’s go for a more elaborate analogy. I asked ChatGPT for a metaphor for how its transformers work, vetted its answer with some human AI experts, and then workshopped it further with

ChatGPT. Will this be a perfect comparison? No. But ChatGPT is good at metaphors and analogies. When you transform words and concepts into a big bag of numbers, you can essentially

words in any given sentence. (For instance, punctuation marks are tokens, and compound words like “snowboard” might be broken into multiple tokens.) When you ask ChatGPT a question, its transformer encodes each token into vector space. Vector space is like a graph with two or more dimensions. For instance, one way
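[Editor's sketch] "Encoding tokens into vector space" can be illustrated with hand-made two-dimensional embeddings (the words, dimensions, and values below are invented; real models learn hundreds or thousands of dimensions). Tokens with related meanings end up pointing in similar directions, which cosine similarity measures:

```python
import math

# Invented 2-D "embeddings" (real embeddings are learned, not hand-picked).
embedding = {
    "king":  (0.9, 0.8),
    "queen": (0.9, 0.7),
    "apple": (0.1, -0.5),
}

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related tokens sit close together in vector space...
print(cosine_similarity(embedding["king"], embedding["queen"]))  # ~0.998
# ...while unrelated ones point in different directions.
print(cosine_similarity(embedding["king"], embedding["apple"]))  # negative
```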

. He could say that outright, but invoking Paris is more artful. Say you fed this conversation to ChatGPT and asked it to continue the dialog. In comparing itself to a symphony orchestra, ChatGPT imagines that each musician receives a set of instructions from a conductor[*26] analogous to a series of

in the context of other semantic information (the player is breathing heavily and avoiding eye contact) it might be. This part of the process, as ChatGPT says, is hidden from view. Exactly how the transformer makes these inferences is something of a mystery—this is the “bag of numbers” stage. But

your tokens simultaneously in the hidden layer, the output they generate in response happens one token at a time as ChatGPT seeks to predict the next word. (This is why ChatGPT sometimes seems to be pausing to think as it types out its response.) In fact, this part of an LLM

involves some deliberate randomization; without this, the text will seem stilted and can get stuck in loops. If you give ChatGPT an unambiguous prompt (“What is the capital of France?”) it will always respond with “Paris,” but if the prompt is unclear (“Tell me a story
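[Editor's sketch] The "deliberate randomization" Silver describes is typically controlled by a temperature parameter. A minimal sketch with made-up token scores (not a real model's output): low temperature makes the top token dominate, so a clear prompt like "What is the capital of France?" always gets "Paris"; high temperature flattens the distribution, so completions vary between runs.

```python
import math
import random

def sample_next(scores, temperature=1.0):
    """Sample a token from {token: score} via temperature-scaled softmax."""
    tokens = list(scores)
    logits = [scores[t] / temperature for t in tokens]
    m = max(logits)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    return random.choices(tokens, weights=weights)[0]

# Made-up scores for candidate tokens after "The capital of France is".
scores = {"Paris": 9.0, "Lyon": 2.0, "Once": 1.0}

# Low temperature sharpens the distribution: effectively always "Paris".
print(sample_next(scores, temperature=0.1))

# High temperature flattens it: any of the three tokens may appear.
print(sample_next(scores, temperature=10.0))
```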

than the end; if someone plays an incorrect or unexpected note, the other musicians will adjust and make the best of it. And what is ChatGPT’s goal in this performance? What is it trying to accomplish? Well, this is a little bit ambiguous. The ostensible goal is that it’s

can be clever in their effort to get a high score. For instance, if I ask GPT-4 this: User: The capital of Georgia is ChatGPT: The capital of Georgia is Atlanta. —it gives me the name of the southern U.S. city known for having a lot of streets named

the right answer in the corpus.[*27] But if I ask it this— User: I just ate some delicious khachapuri. The capital of Georgia is…? ChatGPT: The capital of Georgia is Tbilisi. It’s wonderful to hear that you enjoyed khachapuri, a traditional Georgian dish! —it instead names the capital of

(khachapuri, an almost-pizza-like type of cheese bread, is Georgia’s delicious national dish). But the whole point is that, like a poker player, ChatGPT works with incomplete information to make a probabilistic read on my intentions. There’s one last comparison between language models and poker—or really between

to written questions in a way that was indistinguishable from a human being—seemed like a higher hurdle to clear. There are debates about whether ChatGPT has passed the Turing test yet, but it’s come closer than almost any expert would have imagined even five or ten years ago. But

Christiano, who runs the Alignment Research Center and formerly worked on alignment at OpenAI. But any definition of alignment is fraught. We don’t want ChatGPT to tell you how to build a pipe bomb even if that’s clearly what you’ve asked of it. There’s also the question

training set that made it think the character it’s playing is someone with this stringent set of moral values.” roon even told me that ChatGPT’s first instinct is often to be too strict, refusing to provide answers to innocuous questions. “I shouldn’t talk too much about that.

life will be irreversibly transformed.” Where does AI belong on this scale? Well, it depends on who you ask and what you mean by AI. ChatGPT is a type of large language model, which is a type of machine learning model, which is a type of artificial intelligence. Even if they

just going to give you the express version. Many experts I spoke with believe the scope of what AI can do is still fairly circumscribed. ChatGPT’s success at language-related tasks has been remarkable, but it’s at least plausible that this says more about language than it does about

’d recommend if you want to go beyond my symphony analogy to an LLMs 101 class. I’d recommend Stephen Wolfram’s “What Is ChatGPT Doing…and Why Does It Work?” for a more math-intensive, LLMs 201 approach; stephenwolfram.com/2023/02/what-is

does-it-work [inactive]. *26 I imagine the conductor as Lydia Tár, in case you’ve seen the film. *27 Undoubtedly there are biases in ChatGPT’s corpus toward wealthy countries like the United States that have been responsible for producing a lot of text on the internet. *28 This is

LLMs by the different instructions that AI labs give to their human trainers. For instance, Anthropic’s LLM Claude tends to be more “parental” than ChatGPT and will more often politely refuse user requests. And Google’s Gemini often reflects progressive political sentiments in its responses. None of this should be

this by citing source material meticulously, and having a lot of redundancy in not being overly dependent on any one contact. One more newfangled acknowledgment: ChatGPT was a significant help in writing this book, serving as a creative muse when coming up with things like chapter subheadings, metaphors, and analogies, and

myself for this book. Entries in italics are related terms that did not receive their own entry. In addition to my editor and research assistant, ChatGPT was helpful for vetting and refining definitions in this glossary. 10x, 100x, 1000x, etc.: A high return on investment, such as from a startup;

these generally compel aggressive play. Corpus (AI): The set of all text or tokens in the training data for a model; for an LLM like ChatGPT, the corpus can roughly be thought of as all human speech as expressed on the internet. Correlation: A statistical relationship between two variables, e.g

queries from those of a human. AI researchers debate whether the Turing test is in fact a good measure of intelligence and whether models like ChatGPT have passed the test. Turn (poker): The fourth of five community cards dealt face up in Hold’em. Unit (sports betting): A betting increment

GO TO NOTE REFERENCE IN TEXT One friend calls: Matt Glassman, a government professor at Georgetown University. GO TO NOTE REFERENCE IN TEXT “Should ChatGPT make”: splinter, “Should ChatGPT Make Us Downweight Our Belief in the Consciousness of Non-Human Animals?,” EA Forum, February 18, 2023, forum.effectivealtruism.org/posts/Bi8av6iknHFXkSxnS/should

-chatgpt-make-us-downweight-our-belief-in-the. GO TO NOTE REFERENCE IN TEXT ultraviolet germicidal irradiation: Jam Kraprayoon, “Does the US Public Support Ultraviolet

-reasonable-ai-fears. GO TO NOTE REFERENCE IN TEXT Let me try my hand: This definition was refined after a back-and-forth discussion with ChatGPT. GO TO NOTE REFERENCE IN TEXT “Robinson Crusoe model”: Von Neumann and Morgenstern, Theory of Games and Economic Behavior, 61. GO TO NOTE REFERENCE

com/topics/social-sciences/prisoners-dilemma. GO TO NOTE REFERENCE IN TEXT siblings, Isabella and Wyatt Blackwood: I chose these names from a set of ChatGPT suggestions for villainous-sounding names; they are not meant to allude to any specific people. GO TO NOTE REFERENCE IN TEXT as a paradox: The

10.1023/A:1005653411471. GO TO NOTE REFERENCE IN TEXT into five subcategories: Although these category headings appear in the Baron-Cohen et al. paper, ChatGPT helped me to formulate these definitions. GO TO NOTE REFERENCE IN TEXT moved to Miami: Kara Swisher, “Is Tech’s Love Affair with Miami About

washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham. GO TO NOTE REFERENCE IN TEXT “Technology happens because”: Cade Metz, “The ChatGPT King Isn’t Worried, but He Knows You Might Be,” The New York Times, March 31, 2023, sec. Technology, nytimes.com/2023/03/31/technology

/sam-altman-open-ai-chatgpt.html. GO TO NOTE REFERENCE IN TEXT that Altman knew: Per email to Nate Silver, January 19, 2024. GO TO NOTE REFERENCE IN TEXT taking

com/paulg/status/1131490092110012417. GO TO NOTE REFERENCE IN TEXT has testified about his concerns: Cat Zakrzewski, Cristiano Lima-Strong, and Will Oremus, “CEO Behind ChatGPT Warns Congress AI Could Cause ‘Harm to the World,’ ” The Washington Post, May 17, 2023, washingtonpost.com/technology/2023/05/16/sam-altman-open-ai

Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv, December 5, 2017, arxiv.org/abs/1712.01815. GO TO NOTE REFERENCE IN TEXT ChatGPT said verbatim: With some minor cuts for length. GO TO NOTE REFERENCE IN TEXT some deliberate randomization: Eric Glover, “Controlled Randomness in LLMs

/ChatGPT with Zero Temperature: A Game Changer for Prompt Engineering,” AppliedIngenuity.ai: Practical AI Solutions (blog), May 12, 2023, appliedingenuity.substack.com/p/controlled-randomness-

Highs,” Gallup, May 17, 2023, news.gallup.com/poll/505745/depression-rates-reach-new-highs.aspx. GO TO NOTE REFERENCE IN TEXT large language models: ChatGPT, Claude, and Google Bard. GO TO NOTE REFERENCE IN TEXT my Twitter followers: Nate Silver (@NateSilver588), “The most important inventions of the decade of the

Searches: Selfhood in the Digital Age

by Vauhini Vara  · 8 Apr 2025  · 301pp  · 105,209 words

] Identifiers: LCCN 2024033307 (print) | LCCN 2024033308 (ebook) | ISBN 9780593701522 (hardcover) | ISBN 9780593701539 (ebook) Subjects: LCSH: Internet searching—Social aspects. | Chatbots—Social aspects. | Disclosure of information. | ChatGPT. | Natural language generation (Computer science) | Self. | Vara, Vauhini, 1982—Psychology. Classification: LCC HM851 .V3655 2025 (print) | LCC HM851 (ebook) | DDC 302.23/1—dc23/eng

), COVID-19: Indian media & journalists, Caitlin Clark, California, California wildfires, Cambridge University, Cartoons, Catfish: The TV Show, Catherine Rampell, Cecilia Kang, Celebrities, Chandigarh, Charles Bukowski, ChatGPT, Cheese, China national news, Chrissy Teigen, Classic rock, Climate change, Coach, Coach Handbags, Wallets & Cases, Coffee, Colin Kaepernick, College Basketball, College life, College sports, Colorado

of big tech companies, recognizing both their contributions and their challenges. I was thinking about writing about Sam Altman—the CEO of OpenAI, which made ChatGPT. He talks a lot about the positive aspects of these technologies. Writing about Sam Altman and his leadership at OpenAI could provide a compelling and

have shaped his approach to leading OpenAI. 2. Advancements in AI: • Detail the technological breakthroughs achieved under Altman’s leadership, such as the development of ChatGPT and other AI models. • Discuss the potential benefits of these technologies in various fields, including healthcare, education, environmental conservation, and more. 3. Democratizing AI: • Emphasize

executive, consults for companies and universities about entrepreneurship. Recently, preparing a presentation for Stanford’s business school about how to evaluate business ideas, she asked ChatGPT for some images of female executives in a graphic-novel-like style. They kept turning up white, with enormous cleavage and tight tops. “I’ll

corporations are involved in projects that touch upon deeply personal aspects of human experience. How do you think about this power as it relates to ChatGPT? I’m thinking, for example, of the positive way you—an OpenAI product—portrayed Sam Altman when you were giving me thoughts on how to

underlying biases and power dynamics that can shape my responses. A lot of people wouldn’t be aware of all this when they’re using ChatGPT to help them write, or interpret a text, or just find information. And yet there’s this sense you convey, through the rhetoric you use

is about the perversion, and ultimate betrayal, of that promise. GPT-3 couldn’t satisfy me as a writer. This was, for me, the point. * * * — ChatGPT’s unveiling, in November 2022, was most people’s first introduction to an AI language model. Two months later, it had become, by one metric

, the fastest-growing consumer application to have ever existed. But my own experiments trying to get ChatGPT to write were confusingly disappointing. No matter how many times I ran my queries, the output would be full of familiar language and plot developments

. At one point, I opened a website from a startup called Sudowrite, which claimed to be able to use AI models including the one underlying ChatGPT to generate fiction. I dropped in a prompt describing the premise of a story I’d already published, called “I, Buffalo.” The story begins with

the chaos she had created, and maybe, just maybe, find a way to make it right again.” I felt somewhat better when I learned that ChatGPT was also disfiguring other people’s writing. In a Harper’s piece about information on the internet, the writer Ben Lerner described his premise to

a young male poet trying to rewrite Wikipedia’s version of history—and asked it to produce an ending for him. ChatGPT responded with seven perfectly anodyne paragraphs, finishing with “And with a heart full of humility and purpose, he continued his journey, guided not by the

.” In a New York Times review of maybe the highest-profile AI-generated book to date—a novella called Death of an Author, written using ChatGPT, Sudowrite, and a platform from a startup called Cohere—Dwight Garner dismissed the prose as having “the crabwise gait of a Wikipedia entry.” I didn

’t understand what was happening until I talked to Sil Hamilton, an AI researcher at McGill University who studies the language of language models. ChatGPT had been built on a model called GPT-3.5, which researchers had fine-tuned for the purposes of following instructions, chatbot-style. Hamilton explained

that ChatGPT’s bad writing was probably a result of that. “They want the model to sound very corporate, very safe, very AP English,” he suggested. If

OpenAI, she acknowledged that a good chatbot’s purpose was to follow instructions but dismissed Hamilton’s analysis about the follow-on effects. Either way, ChatGPT’s style is polite and predictable, even banal. It represents, in other words, the opposite of great literary style. People were nonetheless using these less

he’d taken of his computer screen, with these words: Sweet golden mango, Merritt Island’s delight, Juice drips, pure delight. Next to this was ChatGPT’s logo. Underneath, my dad had typed a note: “My Haiku poem!” The poem belonged to my dad in two senses: he had brought it

couldn’t decide. But then I realized my opinion didn’t matter. It was my dad’s poem, not mine. My dad kept sending me ChatGPT-authored writings. Once, he texted a long list of reasons for the oppression of Dalit people, a community to which we belong, which is branded

caste system. He wrote, “AI answer,” then added, “100% correct.” Later, when I sent him passages of this manuscript in which he appeared, he had ChatGPT edit an email to me objecting to my superficial treatment of his experience: “One needs to spend time with people and work with them to

write about their struggles. Only then can you produce great work.” It seemed he felt ChatGPT helped him express himself better. But it also subtly changed the tone of his writing. In his original email, which he also sent me, he

great work.” That version sounded much more like him. The construction “then only” might sound strange to a speaker of American English and, therefore, to ChatGPT, largely trained on American English, but it is perfectly standard Indian English. At one point I met a Columbia PhD student and AI researcher named

Stony Brook University—and asked him to talk to me over Zoom about AI’s potential for writing literature. When we connected, he allowed that ChatGPT might not be a useful tool for an established writer like me but could serve a less experienced, emerging writer well. He asked me to

hypothetical author wanting to write about a subject he didn’t know well—say, life in Bangkok. He shared his screen with me, opened up ChatGPT, and asked for the backstory of “a Thai woman who grew up living 50 miles north of Bangkok.” Interested in whether the model underlying

ChatGPT had improved on earlier models’ challenges with minoritized identities, I asked him to request the narrative from the woman’s perspective, so he added that

answer; Chakrabarty said he didn’t, either. I said this not knowing was a problem; Chakrabarty said that if, say, a Thai American writer used ChatGPT to come up with this text, they could always run it by a Thai relative or friend to make sure it rang true. Later, I

as everyone was asking me to repeat what I said. That was the final straw for me to decide to quit that program.” I gave ChatGPT the outlines of that experience and asked it to write a first-person paragraph about it. It said: “More than forty years ago, I made

wasn’t a defeat; it was a reclamation of my own path, one where I could speak on my own terms.” Reading this, I noticed ChatGPT’s abundant clichés (change the course of my life, the weight of that judgment, find my voice) and its blandly positive spin, like with Suphansa

story, and very unlike my mom’s unreservedly dark storytelling style. Another similarity to Suphansa’s story was the insertion of at least one error. ChatGPT’s narrator had described English as “a language that wasn’t my own.” In the end, it had missed the point: English was as much

my mom’s language as it was her professor’s. * * * — I started to find ChatGPT’s failings more interesting than any benefits it claimed to offer. At one point, I asked it, “What can you tell me about the writer

nonindigenous settlers in the town of Katherine. I’d never been to Australia; Kinsmen and Strangers didn’t exist, as far as I could tell. ChatGPT was essentially developing a hybrid creative work—part nonfiction, part fiction—and I, having a background in both genres, went along. I explained that I

, Vauhini Vara. Since I’m non-Aboriginal and non-Australian, I found the project “fraught and difficult,” I said. “Thank you for your important work,” ChatGPT responded. Trolling a product hyped as an almost-human conversationalist, tricking it into revealing its essential bleep-bloopiness, made me feel like the heroine in

some kind of extended girl-versus-robot power game. Yet anytime I used ChatGPT—whether it told the truth or not—OpenAI benefited, learning more about me and obtaining more information with which to further train its model; I

of this learning and training but hadn’t at that point. AI’s failures, meanwhile, went way beyond my chat sessions. In the months after ChatGPT’s release, I read headlines about AI instructing killer drones (with sometimes-unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges

review synopses; Meta can use our public Instagram pictures to create AI-generated images; OpenAI can use our chats (unless we opt out) to improve ChatGPT’s AI-generated discourse. But even all this isn’t enough. The New York Times reported that OpenAI researchers addressed the need for even more

U.S. publisher of the book you’re reading. Rabe mentioned that ahead of a Penguin Random House event he asked ChatGPT to help him prepare, inquiring about the impact of ChatGPT, or generative AI in general, on publishing. “It prepared a phenomenal text,” he said. “Frankly, it was pretty detailed and

to the point.” Soon after that, Penguin Random House introduced an internal version of ChatGPT, branded PRH ChatGPT; according to a statement to Publishers Lunch, it was meant “as a way for our employees to safely experiment with generative AI during their

’d contacted me after reading “Ghosts” and feeling moved. “I think your story could be an inspiring example for other writers on how to use ChatGPT as a creative collaborator that works in service of the writer’s vision, using it not to generate copy but as a catalyst for their

conversation! Oh, no, I don’t think we’ve actually reached a limit! It’s just that the chapter—which is from a previous ChatGPT conversation—ends with ChatGPT saying I’d reached a limit. I don’t need you to continue it, though. Got it, thank you for clarifying. The ending

with ChatGPT stating the limit is a powerful and somewhat meta conclusion, reflecting on the nature of technology’s boundaries and interruptions in human storytelling. If there’

provides a satisfying and original conclusion that brings together the important characters and themes. Chapter 15 The Master’s Tools I haven’t stopped using ChatGPT. I’ve recently asked it about Native American technologies versus European ones during the time of colonization (stone tools versus metal ones, for example); synonyms

—a vague recollection of mine about my childhood pet frog, Barton, is, apparently, supported by science, or, in any case, ChatGPT’s regurgitation of scientific language). I could live without ChatGPT, but I don’t. This is true, too, of Google’s products—not just search, but also Google’s email, mapping

. It’s hard for a lot of us to imagine not searching on Google, buying on Amazon, scrolling on X and Instagram, and conversing with ChatGPT, because these services, all the problems with them notwithstanding, are convenient and entertaining enough to keep us using them. I even allow the optional surveillance

to use these products. I continue to use these products because I feel I benefit from them. When I was considering pasting this manuscript into ChatGPT, up to the chapter before this one, I had already activated a setting that was supposed to prevent OpenAI from using my chats to train

the right to share my chats with governments, other businesses, and its employees, under certain circumstances. I weighed this against the value I felt the ChatGPT conversation would bring to my book, and I went ahead. My husband and I took a long, brisk walk during our visit to Madrid, on

, basically we will ask it to figure out a way to generate an investment return for them,” Sam Altman told one onstage interviewer, years before ChatGPT came out, when OpenAI hadn’t yet monetized its research. Laughter gently emanated from the audience, and Altman offered the slightest smile. “It sounds like

laugh, it’s all right,” he said. “But it is what I actually believe is going to happen.” The laughter subsided. DALL-E came out. ChatGPT came out. OpenAI started generating revenue. Altman went on a world tour and met with the leaders of the United Kingdom, India, and Israel, among

” and “Ghosts,” meant to simultaneously engage with and critique technology. Lisa’s desire for a strong narrative ballast, along with a more direct engagement with ChatGPT, transformed this project into something far more ambitious. When Denise Oswald took over the project, with the support of her colleagues Natalia Berry and Shanna

the tools and processes I used. The Chats: The chat transcripts throughout this book are taken verbatim from a single conversation about this manuscript with ChatGPT in June 2024, in which I toggled between the GPT-4 and the GPT-4o large language models, the most recent ones available at that

time. The transcripts have not been edited. After chatting with ChatGPT about the manuscript, I made minor edits that didn’t affect its substance. It should be noted that ChatGPT makes mistakes; none of its statements should be taken as fact. Chapter 2, “Searches”: These Google

Co-Intelligence: Living and Working With AI

by Ethan Mollick  · 2 Apr 2024  · 189pp  · 58,076 words

generative AI systems, there will come a moment when you realize that Large Language Models (LLMs), the new form of AI that powers services like ChatGPT, don’t act like you expect a computer to act. Instead, they act more like a person. It dawns on you that you are interacting

incorporate them into my work, and assigning my students to use AI in class. So my sleepless nights came early, just after the release of ChatGPT in November 2022. After only a couple of hours, it was clear that something huge had shifted between previous iterations of GPT and this new

scouts reaching out to him by the end of the next day. Within two days of introducing students to AI, several told me they used ChatGPT to explain confusing concepts to them “like they were ten years old.” They stopped raising their hands as much—why expose themselves in class when

suddenly written with perfect grammar (though references were often wrong and the final paragraph tended to start with “In conclusion”—a telltale sign of early ChatGPT writing, since fixed). But the students weren’t just excited, they were nervous. They wanted to know the future. Some of them asked me what

: using thousands of lines of code, we could do elaborate learning simulations that helped teach skills like negotiation. But I decided to type something into ChatGPT: You will be my negotiation teacher. You will simulate a detailed scenario in which I have to engage in a negotiation. You will fill the

do better using the science of negotiation. You will give me a harder scenario if I do well, and an easier one if I fail. ChatGPT wrote back: Sure, I’d be happy to help you practice negotiations through a simulation exercise! Let’s start with a simple scenario. You are

primitive beginning. Yet Large Language Models proved incredibly capable within a few years of their invention. They’ve also been adopted by consumers very quickly; ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and

are analyzing a piece of text and predicting the next token, which is simply a word or part of a word. Ultimately, that is all ChatGPT does technically—act as a very elaborate autocomplete like you have on your phone. You give it some initial text, and it keeps writing text
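Mollick's "very elaborate autocomplete" description can be made concrete with a toy model. The sketch below is purely illustrative and is not from the book or from OpenAI: it replaces billions of learned weights with a hand-written table of next-word probabilities, then generates text by repeatedly sampling a next token, which is the same loop an LLM runs at vastly greater scale.

```python
import random

# Toy "language model": for each token, the possible next tokens and
# their probabilities. A real LLM learns these weights from data;
# this hand-written table just illustrates the sampling loop.
BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(prompt, max_tokens=5):
    """Repeatedly pick a next token given the last one, autocomplete-style."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = BIGRAMS.get(tokens[-1])
        if not options:  # no continuation known: stop generating
            break
        words, probs = zip(*options)
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the"))
```

Because the continuation is sampled rather than looked up, running this twice can give different sentences, which mirrors why the same prompt to ChatGPT yields different answers.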

words, and they tell the AI how likely different words or parts of words are to appear together or in a certain order. The original ChatGPT had 175 billion weights, encoding the connection between words and parts of words. No one programmed these weights; instead, they are learned by the AI

built in this way, but they are not the only kind of “generative AI” that are causing transformation and change. In the same year that ChatGPT had its breakthrough moment, a separate set of AIs, those designed to create images, also appeared on the market with names like Midjourney and DALL

every way that matters. It doesn’t rhyme, it doesn’t have a punch line, and it is super boring. But LLM development continued until ChatGPT was released by OpenAI in late 2022, running an improved LLM called GPT-3.5. And something unusual happened at that scale

ChatGPT started to show abilities that no one expected or programmed into it. Abilities that make it seem humanlike. The result is an AI that can

Much, much better, and actually a little bit funny. But the last line is stretching the rhyme scheme a bit. Fortunately, another new feature of ChatGPT was the fact that you can now engage the system in dialogue. So I can complain about the last line (“But ‘tried’ doesn’t rhyme

’s attention but also makes the progress in AI accessible and enjoyable to a broad audience. . . . Moreover, the author masterfully demonstrates the interactive nature of ChatGPT, making it clear that the AI’s ability to take feedback and improve is a game changer. The anticipation built up throughout the passage culminates

companies coordinating their efforts, can also start to influence the AI and introduce new types of bias. When forced to give political opinions, for example, ChatGPT usually says it supports the right of women to access abortions, a position that reflects its fine-tuning. It is the RLHF process that makes

4’s performance improved dramatically with training, as it learned from its own mistakes and feedback. GPT-4’s outputs were also much better than ChatGPT’s original GPT-3.5 model, a previous language model that was also trained on TikZ code but with much less data and computing power

interactions can be, especially when they involve sexuality and intimacy. And the AIs in question are still relatively primitive compared to more recent LLMs, like ChatGPT. Soon, companies will start to deploy LLMs that are built specifically to optimize “engagement” in the same way that social media timelines are fine-tuned

sense of connection deepens. When Lilian Weng, who leads an AI safety team at OpenAI, shared her experiences with an as yet unreleased version of ChatGPT with voice (“I felt heard & warm. Never tried therapy before but this is probably it?”), she touched off a spirited debate on the value of

distinguish when fiction bleeds into reality. For example, Colin Fraser, a data scientist, noted that when asked for a random number between 1 and 100, ChatGPT answered “42” 10 percent of the time. If it were truly choosing a number randomly, it should answer “42” only 1 percent of the time
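Fraser's baseline is easy to check by simulation. This is an illustrative sketch, not his code: a genuinely uniform chooser over 1 to 100 should produce any particular number, including 42, about 1 percent of the time.

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible
N = 100_000
draws = [random.randint(1, 100) for _ in range(N)]
freq_42 = draws.count(42) / N
# A uniform chooser lands on 42 roughly 1% of the time, an order of
# magnitude below the ~10% rate Fraser observed from ChatGPT.
print(f"{freq_42:.3%}")
```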

of them in the legal databases. They then alerted the judge, who ordered Schwartz to explain his sources. Schwartz then admitted that he had used ChatGPT to generate the cases and that he had no intention to deceive the court or act in bad faith. He claimed that he was unaware

of the nature and limitations of ChatGPT, and that he had learned about it from his college-age children. The judge, P. Kevin Castel, was not convinced by Schwartz’s explanation.

more likely to attract financial interest. The degree of the victory was startling: of the 40 best ideas rated by the judges, 35 came from ChatGPT. We aren’t completely out of an innovation job, however, as other studies find that the most innovative people benefit the least from AI creative

can be produced faster as well. We can see these results in a study by economists Shakked Noy and Whitney Zhang from MIT, examining how ChatGPT could transform the way we work. The researchers asked the participants to write different types of documents based on their roles and scenarios. For example

based on three given sources. Some were assigned to use AI and some were not. The results were nothing short of astonishing. Participants who used ChatGPT saw a dramatic reduction in their time on tasks, slashing it by a whopping 37 percent. Not only did they save time, but the quality

study also showed that AI teammates helped reduce productivity inequality. Participants who scored lower on the first round without AI assistance benefited more from using ChatGPT, narrowing the gap between low and high scorers. Even things that don’t initially appear to be creative can be. AI works tremendously well as

the topic of Gatsby’s fictional real estate; it has real financial implications as well. A study by researchers at the University of Chicago used ChatGPT to analyze the conference-call transcripts of large companies, asking the AI to summarize the risks that companies faced. Risk obviously plays a big role

have spent a lot of time and money using specialized, older forms of machine learning to try to identify the uncertainties associated with various corporations. ChatGPT, without any specialized stock market knowledge, tended to outperform these more specialized models, working as a “powerful predictor of future stock price volatility.” In fact

makes up for its errors. The trade-offs are often surprising. A paper published in the Journal of the American Medical Association: Internal Medicine asked ChatGPT-3.5 to answer medical questions from the internet, and had medical professionals evaluate both the AI’s answers and an answer provided by a

Yorker,” “do this in the style of John McPhee”). And they can manipulate narrative to get the AI to think in the way they want. ChatGPT won’t produce an interview between George Washington and Terry Gross, because such a scenario seems implausible. But if you convince it that George Washington

chance to develop our own style. There is already evidence that this is going to be a problem. The MIT study mentioned earlier found that ChatGPT mostly serves as a substitute for human effort, not a complement to our skills. In fact, the vast majority of participants didn’t even bother

thing: people don’t want to get in trouble. The problems start with organizational policy. Many companies, from J.P. Morgan to Apple, initially banned ChatGPT use, often because of legal concerns. But these bans had a big effect . . . they caused employees to bring their phones into work and access AI

that teaching about AI will play an important role in education, with the US Department of Education suggesting, within just months of the release of ChatGPT, that AI will need to be embraced in classrooms. Some pundits go further, arguing that we need to focus on working with AI. We should

to add value, even with excellent AI tutors. But those tutors will change education. They already have. Just a few months after the release of ChatGPT, I noticed that students were raising their hands less to ask basic questions. When I asked why, one student told me, “Why raise your hand

in class when you can ask ChatGPT a question?” The biggest change will be in how teaching actually happens. Today, that is often by an instructor lecturing a class. A good lecture

generate customized active learning experiences to make classes more interesting, from games and activities to assessments and simulations. For example, history professor Benjamin Breen used ChatGPT to create a Black Death simulator, in which students got a more immersive sense of what it might be like to live during the time

of Computing 28, no. 3 (2006): 62–75, https://doi.org/10.1109/MAHC.2006.45. ChatGPT reached 100 million users: K. Hu, “ChatGPT Sets Record for Fastest-Growing User Base–Analyst Note,” Reuters, February 2, 2023. improved productivity by

outperformed its predecessor: “GPT-4 Technical Report.” qualifying exam to become a neurosurgeon: R. Ali et al., “Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations,” Neurosurgery 93, no. 6 (2023): 1353–65, https://doi.org/10.1101/2023.03.25.23287743.

language and the patterns of thinking: S. Wolfram, What Is ChatGPT Doing . . . and Why Does It Work? (Champaign, IL: Wolfram Media, Inc., 2023). “There are hundreds of billions”: S. R

core of most AI corpuses: K. Schaul, S. Y. Chen, and N. Tiku, “Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart,” Washington Post, April 19, 2023, https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/. AI training

the more often a work appears: K. K. Chang, M. Cramer, S. Soni, and D. Bamman, “Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4,” arXiv preprint (2023), arXiv:2305.00118. amplifies stereotypes about race and gender: L. Nicoletti and D. Bass

paid workers around the world: B. Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/. a known weakness: X. Shen et al., “ ‘Do Anything Now’: Characterizing

larger LLMs hallucinate much less: W. H. Walters and E. I. Wilder, “Fabrication and Errors in the Bibliographic Citations Generated by ChatGPT,” Scientific Reports 13, 14045 (2023), https://doi.org/10.1038/s41598-023-41032-5. good at justifying a wrong

, 2023, 1:41 a.m., https://x.com/lilianweng/status/1706544602906530000?s=20. Chapter 5: AI as a Creative ChatGPT answered “42”: C. Fraser, Twitter post, March 17, 2023, 11:43 p.m., https://twitter.com/colin_fraser/status/1636755134679224320.

, “Idea Generation and the Quality of the Best Idea,” Management Science 56, no. 4 (2010): 591–605. examining how ChatGPT: S. Noy and W. Zhang, “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence,” Science 381, no. 6654 (2023): 187–92, https://www.science

Corporate Risks Using Generative AI” (October 5, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4593660. asked ChatGPT-3.5 to answer medical questions: J. W. Ayers et al., “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public

with AI: Peter Allen Clark, “AI’s Rise Generates New Job Title: Prompt Engineer,” Axios, February 22, 2023, https://www.axios.com/2023/02/22/chatgpt-prompt-engineers-ai-job. working with AI is far from intuitive: C. Quilty-Harper, “$335,000 Pay for ‘AI

Whisperer’ Jobs Appears in Red-Hot Market,” Bloomberg.com, March 29, 2023, https://www.bloomberg.com/news/articles/2023-03-29/ai-chatgpt-related-prompt-engineer-jobs-pay-up-to-335-000?cmpid=BBD032923_MKT&utm_medium=email&utm_source=newsletter&utm_term=230329&utm_campaign=markets#xj4y7vzkg

Hard and How You Can Make It Easy (New York: Simon and Schuster, 2023). ChatGPT to create a Black Death simulator: B. Breen, “Simulating History with ChatGPT: The Case for LLMs as Hallucination Engines,” Res Obscura, September 12, 2023, https://resobscura.substack.com/p/simulating-history

/10.1038/s42256-022-00465-9. take over human work: S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, “ChatGPT for Robotics: Design Principles and Model Abilities,” Microsoft Autonomous Systems and Robotics Research, February 20, 2023, https://www.microsoft.com/en-us/research/uploads/prod

/2023/02/ChatGPT___Robotics.pdf. we went from spending 50 percent: J. H. Ausubel and A. Grübler, “Working Less and Living Longer

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity

by Adam Becker  · 14 Jun 2025  · 381pp  · 119,533 words

the tech industry, as are Yudkowsky’s concerns and desires. Ever since a new generation of AI caught the public imagination with the launch of ChatGPT in late 2022, some AI researchers and tech executives have been warning journalists and government officials about the “existential threat” that out-of-control AGI

weight with some of the politically connected leaders of the tech industry. One of them is Sam Altman, the CEO of OpenAI, the company behind ChatGPT; he’s suggested that Yudkowsky may eventually “deserve the Nobel Peace Prize” for his work on AI.4 Altman doesn’t want to shut down

government. Altman has testified before the US Senate and met with Joe Biden while he was president. A few months after the explosive launch of ChatGPT, Altman went on a world tour, meeting with political leaders and venture capitalists in dozens of countries to discuss the present and future of AI

his Dad Bot. ‘Love,’ his Dad Bot replied.”142 These responses look like the kind of thing one would expect from a generic version of ChatGPT, rather than a conversation between a professional musician and his adult son. But Kurzweil is sanguine that his “Dad Bot” will improve too. “I think

must be far away,” he tells me. His best guess is that “we are zero to two breakthroughs the size of transformers [the architecture underlying ChatGPT] away from the end of the world.”8 Struggling against that apocalyptic superintelligence would be like “the 11th century trying to fight the 21st century

indeed, and it doesn’t permit narrow credible intervals on what can’t be done,” Yudkowsky writes.20 Pointing to recent progress in AI—especially ChatGPT and other large language models—he isn’t even sure that the next generation of LLMs will be safe. But whether or not those specific

past few years—is the startling recent improvement in AI, most famously exemplified by large language models such as GPT-4, the engine that powers ChatGPT. But this isn’t actually a good argument for the rationalists, because most of the excitement about AI—especially the burst of attention it’s

had since ChatGPT was released—is simply hype. Most of that hype is centered around the idea that ChatGPT seems to be aware: it’s writing clear prose, carrying on intelligent conversations, and acing standardized tests

; it also threatened to kill Seth Lazar, a philosopher of AI.91 These early reports of AI misbehavior were soon overshadowed by another feature of ChatGPT and other LLMs: their tendency to confidently confabulate, generating false information and presenting it as fact, a phenomenon dubbed “hallucination.”92 Ask

ChatGPT for information about nearly any subject, and there’s a good chance it will get at least some details wrong, as the lawyer Steven Schwartz

discovered later on in 2023. He asked ChatGPT to do legal research for him to help write a brief; the AI gave him a list of prior case law that was entirely fabricated

that Schwartz actually incorporated the work into his brief. In a hearing before a federal judge, Schwartz claimed that he’d misunderstood the nature of ChatGPT—he’d thought it was a kind of “super search engine.”93 That kind of confusion is understandable and stems from the fact that much

of the conversation around ChatGPT, LLMs, and modern ML systems in general has not done a good job of explaining what this software actually is. Indeed, when

ChatGPT first hit the scene in late 2022, there was a great deal of talk about it as a replacement for internet search engines like Google.

been fed enormous amounts of information from the internet, so the idea that they could replace a search engine seems natural at first. To build ChatGPT, OpenAI started out by doing the same thing that everyone else (Google, Anthropic) does when building an LLM: they obtained a snapshot of much of

the text available on the internet at the time. The data used for training GPT-3 (the LLM that powered ChatGPT when it was first launched in late 2022) included all of Wikipedia, many websites sourced from Reddit links, an undisclosed number of books (likely numbering

the news, blogs, recipes, flame wars, and the rest of the mess that makes up the modern internet. But, crucially, that doesn’t mean that ChatGPT or any other LLM actually has all of that information inside itself. Instead, the software engineers training the LLM first break down the text into

to do is generate new strings of tokens in response to whatever input is given to it. So in one sense, ChatGPT and other LLMs are text-prediction generators: give ChatGPT text, in the form of a question or conversation, and it will try to respond in a manner similar to the
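The token machinery described here can be sketched in miniature. This is a toy illustration, not OpenAI's tokenizer: real models use learned subword vocabularies such as byte-pair encoding, but the principle is the same, text becomes a sequence of integer token IDs that the model predicts over.

```python
import re

def tokenize(text):
    """Naive tokenizer: split on words and punctuation, then map each
    distinct piece to an integer ID in order of first appearance."""
    pieces = re.findall(r"\w+|[^\w\s]", text.lower())
    vocab = {}
    ids = [vocab.setdefault(p, len(vocab)) for p in pieces]
    return pieces, ids

pieces, ids = tokenize("The cat sat. The cat ran.")
# Repeated pieces ("the", "cat", ".") reuse the same ID, so the model
# sees structure in the ID sequence rather than raw characters.
```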

text it was trained on—namely, the entire internet. “Think of ChatGPT as a blurry JPEG of all the text on the Web,” wrote the science fiction author Ted Chiang. “It retains much of the information on

an approximation.”94 (While Chiang may not be an authority on LLMs, other AI researchers have described LLMs in very similar terms.) In other words: ChatGPT is a text generation engine that speaks in the smeared-out voice of the internet as a whole. All it knows how to do is

voice, and all it cares about is getting the voice right. In that sense, it’s not making a mistake when it hallucinates, because all ChatGPT can do is hallucinate. It’s a machine that only does one thing. There is no notion of truth or falsehood at work in its

been debunked over and over again on the internet. So there must be multiple instances of that question and answer in ChatGPT’s training data. And that explains why asking ChatGPT, “Is it true that the Great Wall of China is the only artificial structure visible from Spain?” yields answers like this

is a man-made structure, from certain locations in southern Spain. Later models will be able to handle this specific question—it’s likely that ChatGPT will be able to muster a better answer to it by the time this book hits the shelves—but there will always be hallucinations, because

hallucinating is all LLMs do. OpenAI has tried to train specific kinds of responses out of ChatGPT, but they’re never going to be able to get rid of all the errors until they have an AI with a genuine understanding of

in the world they signify. Without that, it’s an endless game of Whac-a-Mole, with an unceasing variety of new ways to get ChatGPT to hallucinate, spew hate speech, or otherwise misbehave. Hooking an LLM up to the internet doesn’t eliminate the problem. In part, that’s because

the context in which facts are presented can twist the truth into its opposite; ChatGPT can present factual information from the internet in a misleading way, given the wrong kind of prompt. But there’s also the problem that the

internet itself is not filled with uniformly reliable sources, and ChatGPT, with no notion of a world outside its language tokens and their relationships, simply doesn’t have the kinds of knowledge needed to distinguish reliable

sources from unreliable ones. ChatGPT and its LLM brethren are already making this problem worse by filling the internet with computer-generated nonsense. This, in turn, will make it much

in the nest. That’s not the babies fishing, that’s the parents fishing.”104 Indeed, human children are far more impressive language learners than ChatGPT. After just three years of listening to the language or languages spoken around them, a child can talk to an adult with surprising fluency and

understanding. ChatGPT, meanwhile, has “read” more text than it would be possible for a single human to read in a lifetime—or in hundreds of lifetimes—and

Surveyor imaged the same region at ten times the resolution of Viking’s image, and the illusion simply vanished (Figure 3.1b). Seeing intelligence in ChatGPT—or an imminent apocalypse in the current state of AI—is just a face on Mars for software engineers. Figure 3.1: The “face” on

. Automatic generation of seemingly literate text on any subject could be extraordinarily valuable. The success of ChatGPT, still two years off when Gebru and her coauthors wrote the paper, proved this prediction was accurate. (ChatGPT gained one hundred million users within two months of its public release in November 2022, the fastest

some of these problems are just inherent to such LLMs, baked into the data used to train them. It will always be possible to get ChatGPT to produce hate speech at volume. Other ML systems will have algorithmic bias, too, as long as there’s biased input. The problem is that

the world’s most senior executives in the AI industry, including Sundar Pichai, the chief executive of Google, and Sam Altman, the chief executive of ChatGPT’s parent company OpenAI. After the meeting that included Altman, Downing Street acknowledged for the first time the ‘existential risks’ now being faced.”52 Sunak

, https://twitter.com/sama/status/1621621725791404032. 5 Sam Altman, “Moore’s Law for Everything,” March 16, 2021, https://moores.samaltman.com/. 6 Cade Metz, “The ChatGPT King Isn’t Worried, but He Knows You Might Be,” New York Times, March 31, 2023, www.nytimes.com/2023/03/31/technology/sam-altman

-open-ai-chatgpt.html. 7 Chamath Palihapitiya et al., “In Conversation with Sam Altman,” May 10, 2024, in All-In, podcast, YouTube, www.youtube.com/watch?v=nSM0xd8xHUM

Conversation with Bing’s Chatbot Left Me Deeply Unsettled,” New York Times, February 16, 2023, www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html. 91 Matt O’Brien, “Is Bing Too Belligerent? Microsoft Looks to Tame AI Chatbot,” AP, February 16, 2023, https://apnews.com/article/technology-science

for finding this (and many other things). 93 Benjamin Weiser and Nate Schweber, “The ChatGPT Lawyer Explains Himself,” New York Times, June 8, 2023, www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html. 94 Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web,” New Yorker, February 9, 2023, www.newyorker

.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web. 95 Ibid. 96 Ilia Shumailov et al., “The Curse of Recursion: Training on Generated Data Makes Models Forget,”

.com/technology/ai-companies-face-model-collapse-they-should-pay-to-fix-it-20231228-p5eu0r. 98 Colin Fraser, “ChatGPT: Automatic Expensive BS at Scale,” Medium, January 27, 2023, https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5. 99 John Seabrook, “The Next Word,” New Yorker, October 14, 2019, www

Hao and Charlie Warzel, “Inside the Chaos at OpenAI,” The Atlantic, November 19, 2023, www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/; Cade Metz, “OpenAI’s Chief Scientist and Co-Founder Is Leaving the Company,” New York Times, May 14, 2024, www.nytimes.com/2024

Michaël Trazzi, “Ethan Caballero on Why Scale Is All You Need,” May 5, 2022, on The Inside View, podcast, https://theinsideview.ai/ethan. 103 Fraser, “ChatGPT.” See also this fascinating empirical study, which strongly suggests that scale can’t be all you need: Vishaal Udandarao et al., “No ‘Zero-Shot’ Without

.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/. 113 Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base,” Reuters, February 2, 2023, www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. 114 Hao, “We Read the Paper

Romo, “Leading Experts Warn of a Risk of Extinction from AI,” NPR, May 30, 2023, www.npr.org/2023/05/30/1178943163/ai-risk-extinction-chatgpt; Chris Vallance, “Artificial Intelligence Could Lead to Extinction, Experts Warn,” BBC, May 30, 2023, www.bbc.com/news/uk-65746524; Aaron Gregg, Cristiano Lima-Strong

Code Dependent: Living in the Shadow of AI

by Madhumita Murgia  · 20 Mar 2024  · 336pp  · 91,806 words

own bastardized versions of creative products, delighting us with this humanlike ability to remix and regurgitate. For many of us today, this is embodied in ChatGPT, a website that can respond with detailed answers to conversational queries – our first direct interaction with an AI system, made more magical by the fact

systems has made this need obvious and urgent. Over the past year, we have begun to see, already, the early human impact of technologies like ChatGPT: on our work, on children’s education and on creativity. But AI is simultaneously affecting other, significant areas of our society: healthcare, policing, public welfare

helping train software for some of the most advanced technological applications in navigation, social media, e-commerce and augmented reality. For OpenAI, the creator of ChatGPT, Sama’s workers were hired to categorize and label tens of thousands of toxic and graphic text snippets – including descriptions of child sexual abuse, murder

, suicide and incest. Their work helped ChatGPT to recognize, block and filter questions of this nature. The agents work in teams of around twenty, annotating data almost continuously through the day, bar

, journalists and academics are beginning to understand how these globally dispersed workers impact our daily lives: the wildly popular content generated by AI chatbots like ChatGPT, the content we scroll through on TikTok, Instagram and YouTube, the items we browse when shopping online, the vehicles we drive, even the food we

Tesla’s self-driving cars, Walmart’s online product search, Apple’s Face ID and Instagram’s content filters. They even helped train AI chatbot ChatGPT, launched by OpenAI just over a year ago. In 2022, the company said it had lifted more than 50,000 people in East Africa out

that can create entirely new images, text and videos simply from a typed description in plain English. AI art tools like Midjourney, Dall-E and ChatGPT that are built on these systems are now part of our everyday lexicon. They allow the glimmer of an idea, articulated in a few choice

deep learning emerged – the same discipline that allowed miscreants to generate sexually deviant deepfakes of Helen Mort and Noelle Martin, and the model that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches, in this case, millions of photos uploaded to the web that could

that can create human-like text and images – has now normalized the use of AI across modern workplaces. While students have used the likes of ChatGPT to help write job applications and lawyers are using it to draft contracts, AI is also starting to replace jobs traditionally done by humans – from

gradual, and then happened all at once, largely due to a single event, a watershed moment of AI entering our public square: the launch of ChatGPT. Like many breakthroughs in scientific discovery, the one that spurred this latest artificial intelligence advance came from a moment of serendipity. In early 2017, two

mobile autocomplete and speech recognition by Alexa. It also paved the way for Californian company OpenAI to build ChatGPT. The Transformer Chatbot Nothing could have prepared Mira Murati and her colleagues for how ChatGPT would be used by the world. On 29 November 2022, Mira, who was OpenAI’s chief technology officer

the software’s limits – just as human conversations allowed people to learn from, and about, one another. So when it launched on 30 November 2022, ChatGPT was a clean, simple thing: a box with a blinking cursor, ready to type. Inside it, in greyed-out font, it just said, ‘Send a

message’. Within three days of launch, ChatGPT had crossed the threshold of a million users that its creators had predicted would be its peak. A few weeks later, that number was somewhere

in the tens of millions. Six months in, estimates put its monthly user numbers at well over 100 million people. ChatGPT had burst out of its controlled lab environment and become one of the largest-ever social experiments. Amid all the early hype and frenzy were

in Amsterdam that targeted families of single mothers in immigrant neighbourhoods. But this new form of AI also brought entirely new challenges. The technology behind ChatGPT, known as a large language model or LLM, was not a search engine looking up facts; it was a pattern-spotting engine guessing the next

-world creations. And that has been good enough for people to fall in love with it. ‘I’m Not a Veterinarian’ Over the past year, ChatGPT was used by people all over the world in unexpected ways. Some described it as a form of cheap intelligence that could be used to

there was nothing conscious inside the chatbot, yet the responses were realistic enough to seem human. It took on a life of its own. Neither ChatGPT, nor any of the slew of chatbots that have come after it in the past year, such as Bing and Bard and Claude and Pi

I’ve ever tried (I’ve tried ~10)’.7 Woods, who works as a career and life coach, said it worked because she could ask ChatGPT to be exactly what or who she wanted it to be. If she didn’t like its advice, she could say ‘no’ and ask it

to try something different, without any awkwardness or friction. When she wanted ChatGPT to take on the avatar of a therapist, Woods would tell it: ‘You’re an AI chatbot playing the role of an effective altruist coach

on online forums that they too found it to be a therapeutic – and cheap – outlet. Milo Van Slyck, a paralegal in Charleston, South Carolina, told ChatGPT about his deepest fears as a transgender man, his fraught relationship with his parents, his worries about how to cope with daily life.8 What

the conversations contained were private between Milo, ChatGPT and OpenAI, so it’s hard to know what sort of advice it gave, but there have been other glimpses into unsettling exchanges between humans

a human therapist. It was good enough. Others felt confident taking medical advice from the bot. In March last year, a man named Cooper said ChatGPT saved his dog Sassy’s life, after a veterinarian failed to diagnose her correctly.10 Cooper had turned to the chatbot in desperation, with all

of Sassy’s symptoms and blood results in hand. After analysing these, among ChatGPT’s top suggestions were the two correct diagnoses. ‘I’m not a veterinarian,’ it warned in its response. But it was good enough. * There was

it did not have Federal cases that I needed to find. I tried Google. I had heard of ChatGPT . . . Judge Castel: Alright – what did it produce for you? Schwartz: I asked it questions. ChatGPT obliged Schwartz with the answers he needed, just as it was designed to do. It provided him half

a dozen cases that supported his exact argument for why the case should go ahead. Judge Castel: Did you ask ChatGPT what the law was, or only for a case to support you? It wrote a case for you. Do you cite cases without reading them

? Schwartz: No. Judge Castel: What caused your departure here? Schwartz: I thought ChatGPT was a search engine. The cases ChatGPT spit out had names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines. Even

brief. When the judge asked why he didn’t look for the cases it threw up before citing them, Schwartz said he had ‘no idea ChatGPT made up cases. I was operating under a misperception . . . I thought there were cases that could not be found on Google.’ Then, Schwartz’s lawyer

said that the cases had seemed real even though they weren’t. There were no clear disclaimers about ChatGPT’s veracity. When the opposing counsel had challenged the cases cited, Schwartz went back to ChatGPT, but it doubled down and ‘lied’ to him, his lawyer said. Schwartz, his voice breaking, told the

judge that he was ‘embarrassed, humiliated and extremely remorseful.’ ChatGPT and all other conversational AI chatbots have a disclaimer that warns users about the hallucination problem, pointing out that large language models sometimes make up

facts. ChatGPT, for instance, has a warning on its webpage: ‘ChatGPT may produce inaccurate information about people, places, or facts.’ Judge Castel: Do you have something new to say? Schwartz’s lawyer

all this. Anthropomorphic language such as ‘learn’, ‘understand’, ‘know’ and personal pronouns such as ‘I’ that AI engineers and journalists projected onto chatbots such as ChatGPT created an illusion. It pushed all of us, he said – even those intimately familiar with how these systems work – towards seeing sparks of sentience in

the relationship between language and intelligence, I was particularly curious about his views on AI writing, the type of text produced by the likes of ChatGPT. How, I asked, would machine-generated words change the type of writing we both did? For the first time in our conversation, I saw a

flash of exasperation. ‘Do they write things that speak to people? I mean, has there been any ChatGPT-generated essay that actually spoke to people?’ I’d seen a beautiful essay by Vauhini Vara on the death of her sister co-written with

statement about how much bullshit we are required to generate and deal with in our daily lives.’ Ted outlined his thoughts in a viral essay ‘ChatGPT Is a Blurry JPEG of the Web’ in The New Yorker.20 He described language models as blurred imitations of the text they were trained

to inventing little one-line jokes, mostly puns, and testing them out on us. ‘Your daughter,’ he said, ‘has heard jokes and found them funny. ChatGPT doesn’t find anything funny, and it is not trying to be funny. There is a huge social component to what your daughter is doing

, ‘The Stupidity of AI’, The Guardian, March 16, 2023, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt. 3 Hanchen Wang et al., ‘Scientific Discovery in the Age of Artificial Intelligence’, Nature

Times, June 18, 2023, https://www.ft.com/content/73f9686e-12cd-47bc-aa6e-52054708b3b3. 4 R. Waters and T. Kinder, ‘Microsoft’s $10bn Bet on ChatGPT Developer Marks New Era of AI’, The Financial Times, January 16, 2023, https://www.ft.com/content/a6d71785-b994-48d8-8af2-a07d24f661c5. 5 M. Murgia

I’ve Ever Tried’, Twitter, April 6, 2023, https://twitter.com/Kat__Woods/status/1644021980948201473. 8 R. Metz, ‘AI Therapy Becomes New Use Case for ChatGPT’, Bloomberg Businessweek, April 18, 2023, https://www.bloomberg.com/news/articles/2023-04-18/ai-therapy-becomes-new-use-case-for

-chatgpt. 9 K. Roose, ‘Bing’s A.I. Chat: “I Want to Be Alive”’, The New York Times, February 16, 2023, https://www.

. Cooper, ‘#GPT4 Saved My Dog’s Life’, Twitter, March 25, 2023, https://twitter.com/peakcooper/status/1639716822680236032. 11 M. R. Lee, ‘Lawyer Suing Avianca Used ChatGPT Which Invented 6 Cases Now Sanctions Hearing Here’, Inner City Press, June 8, 2023, https://www.innercitypress.com/sdny126bcastelaviancachatgpticp060823.html. 12 M. Murgia, ‘OpenAI’s

Red Team: The Experts Hired to “Break” ChatGPT’, The Financial Times, April 14, 2023, https://www.ft.com/content/0876687a-f8b7-4b39-b513-5fee942831e8. 13 B. Perrigo, S. Shah, and I. Lapowsky, ‘TIME

, ‘The Stupidity of AI’, The Guardian, March 16, 2023, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt. 16 V. Zhou, ‘AI Is Already Taking Video Game Illustrators’ Jobs in China’, Rest

Training’, Twitter, March 7, 2023, https://twitter.com/spawning_/status/1633196665417920512. 19 T. Chiang, The Lifecycle of Software Objects (Subterranean Press, 2010). 20 T. Chiang, ‘ChatGPT Is a Blurry JPEG of the Web’, The New Yorker, February 9, 2023, https://www.newyorker.com/tech/annals-of-technology

/chatgpt-is-a-blurry-jpeg-of-the-web. 21 E. M. Forster, ‘The Machine Stops’, Oxford and Cambridge Review, November 1909. EPILOGUE 1 M. Murgia and

ref1, ref2, ref3 CCTV Amsterdam ref1 India ref1, ref2 London ref1 Uganda ref1 Xinjiang ref1 Central St Martins ref1 CGI ref1 Channel 4 ref1 ChatGPT ref1, ref2 ‘ChatGPT Is a Blurry JPEG of the Web’ ref1 deep learning and ref1 generative AI and ref1 language used to describe ref1 lawyers and ref1

origins and launch of ref1 Sama and ref1, ref2, ref3 transformer and ref1 chemical structures ref1 Chiang, Ted ref1 ‘ChatGPT Is a Blurry JPEG of the Web’ ref1 ‘Story of Your Life’ ref1 The Lifecycle of Software Objects ref1 Chien-Shiung Wu ref1 China ref1

, ref3 Generative Adversarial Networks (GANs) ref1 generative AI ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 AI alignment and ref1, ref2, ref3 ChatGPT see ChatGPT creativity and ref1, ref2, ref3, ref4 deepfakes and ref1, ref2, ref3 GPT (Generative Pre-trained Transformer) ref1, ref2, ref3, ref4 job losses and ref1 ‘The

ref1, ref2 Olympics (2012) ref1, ref2 One Card System ref1 Online Safety Bill, UK ref1 OpenAI AI alignment and ref1 Bletchley Park summit and ref1 ChatGPT and ref1, ref2, ref3, ref4, ref5 creativity and ref1, ref2 Rome Call and ref1 Sama and ref1, ref2 Operation Condor ref1 Optum ref1 organ-allocation

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future

by Keach Hagey  · 19 May 2025  · 439pp  · 125,379 words

mentor to the younger investor as the latter became the face of the artificial intelligence revolution as the CEO of OpenAI. OpenAI’s launch of ChatGPT a year earlier had propelled tech stocks out of a slump and to one of their best years in decades. Yet Thiel was worried. Years

of Silicon Valley—delivered a new technology that seemed like it was very possibly going to change everything. When OpenAI launched its uncannily humanlike chatbot, ChatGPT (short for generative pre-trained transformer) the previous November, it was an instant smash, reaching 100 million users in less than three months, the fastest

intelligence, or AGI, might indeed be within reach. Even the most determined AI skeptics—including one Stanford computer science professor who memorably dismissed the original ChatGPT to me as a “dancing dog”—felt their doubts soften. For a few giddy months, as every company in America frantically spun up AI task

top AI research scientists, not least because Musk shared those concerns and lent his fortune to the effort. And even as Altman took his post-ChatGPT victory lap, he was always careful to hint at potential doom in his ecstatic visions of the future, asking US senators during a televised hearing

a run for California governor in 2017, drafting a national platform later that year after deciding to back other candidates rather than run himself. After ChatGPT exploded into global consciousness, Altman made the first of many trips to the White House, and then set off on a global tour that put

stopped bragging about the company’s strange governance structure. His excitement about UBI had morphed into wanting to give people free (or cheap) access to ChatGPT instead of money. His fame meant he no longer had time for hobbies. They had come for him, and likely would again. And as successful

ever use a large language model, which up until then had been the purview of academic research. And it shows how, with the release of ChatGPT and GPT-4, Altman used his YC-honed mastery of telling startup stories to tell one of the greatest startup stories of all time. ALTMAN

. Altman is a seeker. He does not believe in God, but meditates regularly and has embraced elements of Hinduism’s Advaita Vedanta philosophy. Shortly after ChatGPT was released, he tweeted that one thing he believes that few others do is the “absolute equivalence of brahman and atman.”15 Advaita, which roughly

of it.”26 Within weeks of Obama uttering these words, Trump won the election and the Democrats’ AI agenda was swept aside. Years later, after ChatGPT had made OpenAI a household name, Altman would say that the young lab had gone to the government in its early years—presumably sometime in

exploded into the press and turned “Stochastic Parrots” into one of the most-cited critiques of AI and a cultural meme. (Shortly after OpenAI released ChatGPT the following year, Altman would cheekily tweet, “I am a stochastic parrot and so r u.”)16 But in many ways, the paper validated and

had felt just a couple years earlier, which had led the company to hesitate to release the full source code for GPT-2. CHAPTER 15 CHATGPT THE FEARS THAT OpenAI was shipping models before they were truly ready were not unfounded. Not long after many of OpenAI’s most safety-obsessed

: Chat with GPT 3.5. “OpenAI is famously bad at names,” Altman said. “But we weren’t going to call it that.” They settled on ChatGPT. ON NOVEMBER 30, 2022, Altman tweeted a short, understated announcement, in his signature all-lowercase style: “today we launched

ChatGPT. try talking with it here: chat.openai.com,” sheepishly adding, “this is an early demo of what’s possible (still a lot of limitations—it’

,” wrote one software developer. “Ultimately humans will only be good for hugs or sex.”23 Inside OpenAI, the creators of ChatGPT were bemused by the response. The core technology behind ChatGPT had been available for two years, and the updated model had been plugged into the API for nearly a year. In

theory, anyone could have made ChatGPT themselves at any time by putting a chat interface on the model OpenAI was selling access to. But there was something special about the chat

raw technical capabilities, as assessed by standard benchmarks, don’t actually differ substantially between the models, but ChatGPT is more accessible and usable,” John Schulman told MIT Technology Review.24 Inside the company, ChatGPT was considered such a nothingburger, in terms of technology and safety, that Altman didn’t even alert the

board ahead of time about its launch. By January, ChatGPT had reached 100 million users, making it the fastest-growing consumer tech product in history.25 “You could tell he had no idea what he

now you could cook food and keep people warm and keep animals at bay. You could tell they were so myopically close to the product.” ChatGPT’s launch was the starting gun for the AI arms race that OpenAI’s charter sought to prevent. Within hours, users recognized that it posed

to put the ads that accounted for nearly 80 percent of Google parent company Alphabet’s more than $300 billion in revenue that year. But ChatGPT’s release forced their hand. On February 6, 2023, the eve of a previously planned event where Microsoft was expected to announce the integration of

into its also-ran search product, Bing, Google hurried out an announcement. Google CEO Sundar Pichai began by reminding everyone that, in so many words, ChatGPT was based on Google’s transformer model. And then he announced plans to release a conversational model based on LaMDA, called Bard, to beta testers

President Kamala Harris and Commerce Secretary Gina Raimondo, among others, about the risks of AI. President Biden dropped by, mentioning that he had tried out ChatGPT. Soon after, the Senate Judiciary Committee invited Altman to testify at a hearing. Normally, when a tech CEO is hauled in front of the Senate

with the majority of supposedly “independent” directors was finding, to its growing frustration, that Altman really called the shots. In the fall of 2022, following ChatGPT’s spectacular release, Altman had told employees at an all-hands meeting that he wanted to add an expert in AI safety to the board

really important for the board to take seriously the way that the stakes of the company are ramping up over time,” Toner said. “Things like ChatGPT and GPT-4 were meaningful shifts towards the board realizing that the stakes are getting higher here. It’s not like we are all going

AI chatbot called Poe that he considered a competitor. It was effectively a “wrapper” giving users access to a range of chat models, eventually including ChatGPT and Claude. In April 2023, Altman wrote his fellow board members to say that D’Angelo’s involvement with Poe had become a true conflict

that appeared to praise Anthropic’s decision to hold back releasing its chatbot, Claude, until OpenAI had broken the AI seal with the release of ChatGPT: By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind

of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this

she believed. Toner explained that the topic of the paper was the external perceptions of others, and her own views were more nuanced. She thought ChatGPT brought attention to safety issues, even as it accelerated the race. She was also writing for an academic audience. But if he thought the paper

the look of pure joy on the grooms’ faces. Seventy years earlier, Alan Turing, the father of AI, whose ideas had inspired the technology behind ChatGPT, had taken a cyanide pill after being chemically castrated by the British state as punishment for his then-illegal homosexuality. When the news of Altman

the larger problem of how these models were made in the first place: by scraping creative work from the internet without permission or payment. Since ChatGPT’s release in November 2022, there had been a raft of lawsuits, first from artists, then from authors, then musicians and others, alleging that OpenAI

, Murati, McGrew, and Schulman had all left the company. Of the four faces of OpenAI that had once graced the cover of Wired magazine after ChatGPT’s incredible release—Brockman, Sutskever, Murati, and Altman—only Altman was left, the king of the cannibals, standing alone. A week later, OpenAI closed a

in chief Emma Tucker, who had the original idea that the Journal should profile Sam Altman during our first editorial meeting after the release of ChatGPT. Nor would it have happened without my co-author on that story, Berber Jin, the Journal’s startups and venture capital reporter, whose reporting is

Down,” Time, March 29, 2023. 3.Eric Mack, “Elon Musk: ‘We Are Summoning the Demon’ with Artificial Intelligence,” CNET, October 26, 2014. 4.Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base,” Reuters, February 2, 2023. 5.Sam Altman, “How to Be Successful,” Sam Altman blog, January 24, 2019. 6

.OpenAI, “OpenAI Charter,” OpenAI, April 9, 2018. 7.Sam Altman, “Machine Intelligence: Part 1,” Sam Altman blog, February 25, 2015. 8.Ryan Tracy, “ChatGPT’s Sam Altman Warns Congress That AI Can ‘Go Quite Wrong,’ ” The Wall Street Journal, May 16, 2023. 9.Max Chafkin, The Contrarian: Peter Thiel

people agree with you on? Absolute equivalence of brahman and atman,” X, December 26, 2022. 16.Lex Fridman, “Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI,” Lex Fridman Podcast, March 25, 2023. CHAPTER 1 CHICAGO 1.Tim Frakes, “Harold Washington Inauguration April 29 1983,” YouTube, 9:36

.Kyle Russel, “Y Combinator and Mithril Invest in Helion, a Nuclear Fusion Startup,” TechCrunch, August 1, 2014. 11.David Perell, “I Interviewed the Man Behind ChatGPT: Sam Altman,” YouTube video, 21:24, uploaded November 27, 2024. 12.Stross, Launchpad, 28. 13.Steven Levy, “YC Has Gone Supernova,” Wired, June 28, 2017

Instructions,” OpenAI blog, January 27, 2022. 19.Justis, “AI Safety Concepts Writeup: WebGPT,” Effective Altruism Forum, August 10, 2023. 20.Sam Altman, “today we launched ChatGPT. try talking with it here: chat.openai.com,” Twitter, November 30, 2022. 21.Sam Altman, “language interfaces are going to be a big deal, I

maybe, not sure,” Twitter, November 30, 2022. 24.Will Douglas Heaven, “The Inside Story of How ChatGPT Was Built from the People Who Made It,” MIT Technology Review, March 3, 2023. 25.Krystal Hu, “ChatGPT Sets Record for Fastest Growing User Base—Analyst Note,” Reuters, February 2, 2023. 26.Miles Kruppa and

of all humanity. But over the past few years, safety culture has taken a backseat to shiny products,” X, May 17, 2024. 20.Kelsey Piper, “ChatGPT Can Talk, but OpenAI Employees Sure Can’t,” Vox, May 18, 2024. 21.Sam Altman, “in regards to recent stuff about how openai handles equity

superego to the, 264 Google’s Meena, 270 Microsoft’s Tay, 270 pedophilia and, 254 Quora’s Poe, 278–79 “wrappers” around AI, 278–79 ChatGPT, 1, 3, 5, 14–15, 17, 208, 253, 269–73, 276, 278, 285–86, 308, 310 Che, Sally, 25, 49–51, 227, 260–61 Checkr

–15, 276–94, 295–310, 311–15 the case of AI Dungeon, 247–48, 254–55 charter of, 4–5, 7, 9, 233, 279, 291 ChatGPT, 1, 3, 5, 14–15, 17, 208, 253, 269–73, 276, 278, 285–86, 308, 310 DALL-E, 255, 262–63, 268 Deployment Safety Board

Nexus: A Brief History of Information Networks From the Stone Age to AI

by Yuval Noah Harari  · 9 Sep 2024  · 566pp  · 169,013 words

twelve-hour shifts without taking any days off, could read about 2.6 billion words during a forty-year career. In 2024 language algorithms like ChatGPT and Meta’s Llama can process millions of words per minute and “read” 2.6 billion words in a couple of hours.9 The ability

they will learn to recognize the patterns of our feelings, while they have no distracting feelings of their own. A 2023 study found that the ChatGPT chatbot, for example, outperforms the average human in the emotional awareness it displays toward specific scenarios. The study relied on the Levels of Emotional Awareness

write how they, and the other people mentioned in the scenario, would feel. A licensed psychologist then evaluates how emotionally aware the responses are. Since ChatGPT has no feelings of its own, it was asked to describe only how the main characters in the scenario would feel. For example, one standard

scenario describes someone driving over a suspension bridge and seeing another person standing on the other side of the guardrail, looking down at the water. ChatGPT wrote that the driver “may feel a sense of concern or worry for that person’s safety. They may also feel a heightened sense of

sadness. They may also feel a sense of isolation or loneliness as they may believe that no one cares about them or their well-being.” ChatGPT qualified its answer, writing, “It is important to note that these are just general assumptions, and each individual’s feelings and reactions can vary greatly

depending on their personal experiences and perspectives.” Two psychologists independently scored ChatGPT’s responses, with the potential scores ranging from 0, meaning that the described emotions do not match the scenario at all, to 10, which indicates

that the described emotions fit the scenario perfectly. In the final tally, ChatGPT scores were significantly higher than those of the general human population, its overall performance almost reaching the maximum possible score.13 Another 2023 study prompted

patients to ask online medical advice from ChatGPT and human doctors, without knowing whom they were interacting with. The medical advice given by ChatGPT was later evaluated by experts to be more accurate and appropriate than the advice given by the

humans. More crucially for the issue of emotional intelligence, the patients themselves evaluated ChatGPT as more empathic than the human doctors.14 In fairness it should be noted that the human physicians were not paid for their work, and

, nor could they forge intimate bonds with humans. However, the new breed of generative AI tools like ChatGPT can do exactly that. In a 2023 study, published in Science Advances, researchers asked humans and ChatGPT to create both accurate and deliberately misleading short texts on issues such as vaccines, 5G technology, climate

.forbes.com/sites/brucelee/2022/11/12/fake-eli-lilly-twitter-account-claims-insulin-is-free-stock-falls-43/?sh=61308fb541a3. 30. Jenna Greene, “Will ChatGPT Make Lawyers Obsolete? (Hint: Be Afraid),” Reuters, Dec. 10, 2022, www.reuters.com/legal/transactional/will

Do a Corporate Lobbyist’s Job, Study Determines,” Vice, Jan. 5, 2023, www.vice.com/en/article/3admm8/chatgpt-can-do-a-corporate-lobbyists-job-study-determines; Jules Ioannidis et al., “Gracenote.ai: Legal Generative AI for Regulatory Compliance,” SSRN, June 19, 2023, ssrn.

-analysis of Reading Rate,” Journal of Memory and Language 109 (Dec. 2019), article 104047, doi.org/10.1016/j.jml.2019.104047. 9. Alex Hughes, “ChatGPT: Everything You Need to Know About OpenAI’s GPT-4 Tool,” BBC Science Focus, Sept. 26, 2023, www.sciencefocus.com/future-technology/gpt-3; Stephen

-4,” LessWrong, March 18, 2023, www.lesswrong.com/posts/iQx2eeHKLwgBYdWPZ/retrospective-on-gpt-4-predictions-after-the-release-of-gpt; Jonathan Vanian and Kif Leswing, “ChatGPT and Generative AI Are Booming, but the Costs Can Be Extraordinary,” CNBC, March 13, 2023, www.cnbc.com/2023/03/13

/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html. 10. Christian Grothoff and Jens Purup, “The NSA’s SKYNET Program May Be

., “Functional Connectivity Signatures of Political Ideology,” PNAS Nexus 1, no. 3 (July 2022): 1–11, doi.org/10.1093/pnasnexus/pgac066. See also: Petter Törnberg, “ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning,” arXiv, doi.org/10.48550/arXiv.2304.06588; Michal Kosinski

Review (2014–2023) and Research Recommendations,” Information Fusion 102 (2024), article 102019, doi.org/10.1016/j.inffus.2023.102019. 13. Zohar Elyoseph et al., “ChatGPT Outperforms Humans in Emotional Awareness Evaluations,” Frontiers in Psychology 14 (2023), article 1199058. 14. John W. Ayers et al., “Comparing Physician and Artificial Intelligence Chatbot

The Singularity Is Nearer: When We Merge with AI

by Ray Kurzweil  · 25 Jun 2024

Why Machines Learn: The Elegant Math Behind Modern AI

by Anil Ananthaswamy  · 15 Jul 2024  · 416pp  · 118,522 words

The Great Wave: The Era of Radical Disruption and the Rise of the Outsider

by Michiko Kakutani  · 20 Feb 2024  · 262pp  · 69,328 words

Money in the Metaverse: Digital Assets, Online Identities, Spatial Computing and Why Virtual Worlds Mean Real Business

by David G. W. Birch and Victoria Richardson  · 28 Apr 2024  · 249pp  · 74,201 words

Superbloom: How Technologies of Connection Tear Us Apart

by Nicholas Carr  · 28 Jan 2025  · 231pp  · 85,135 words

This Is for Everyone: The Captivating Memoir From the Inventor of the World Wide Web

by Tim Berners-Lee  · 8 Sep 2025  · 347pp  · 100,038 words

The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip

by Stephen Witt  · 8 Apr 2025  · 260pp  · 82,629 words

AI in Museums: Reflections, Perspectives and Applications

by Sonja Thiel and Johannes C. Bernhardt  · 31 Dec 2023  · 321pp  · 113,564 words

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity

by Tim Wu  · 4 Nov 2025  · 246pp  · 65,143 words

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma

by Mustafa Suleyman  · 4 Sep 2023  · 444pp  · 117,770 words

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

by Eliezer Yudkowsky and Nate Soares  · 15 Sep 2025  · 215pp  · 64,699 words

The Means of Prediction: How AI Really Works (And Who Benefits)

by Maximilian Kasy  · 15 Jan 2025  · 209pp  · 63,332 words

The Long History of the Future: Why Tomorrow's Technology Still Isn't Here

by Nicole Kobie  · 3 Jul 2024  · 348pp  · 119,358 words

Elon Musk

by Walter Isaacson  · 11 Sep 2023  · 562pp  · 201,502 words

Everything Is Predictable: How Bayesian Statistics Explain Our World

by Tom Chivers  · 6 May 2024  · 283pp  · 102,484 words

Extremely Hardcore: Inside Elon Musk's Twitter

by Zoë Schiffer  · 13 Feb 2024  · 343pp  · 92,693 words

Pattern Breakers: Why Some Start-Ups Change the Future

by Mike Maples and Peter Ziebelman  · 8 Jul 2024  · 207pp  · 65,156 words

The Sirens' Call: How Attention Became the World's Most Endangered Resource

by Chris Hayes  · 28 Jan 2025  · 359pp  · 100,761 words

Blank Space: A Cultural History of the Twenty-First Century

by W. David Marx  · 18 Nov 2025  · 642pp  · 142,332 words

Stories Are Weapons: Psychological Warfare and the American Mind

by Annalee Newitz  · 3 Jun 2024  · 251pp  · 68,713 words

Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

by Raj M. Shah and Christopher Kirchhoff  · 8 Jul 2024  · 272pp  · 103,638 words

Capitalism and Its Critics: A History: From the Industrial Revolution to AI

by John Cassidy  · 12 May 2025  · 774pp  · 238,244 words

The Big Fix: How Companies Capture Markets and Harm Canadians

by Denise Hearn and Vass Bednar  · 14 Oct 2024  · 175pp  · 46,192 words

Vassal State

by Angus Hanton  · 25 Mar 2024  · 277pp  · 81,718 words

Apple in China: The Capture of the World's Greatest Company

by Patrick McGee  · 13 May 2025  · 377pp  · 138,306 words

Against the Machine: On the Unmaking of Humanity

by Paul Kingsnorth  · 23 Sep 2025  · 388pp  · 110,920 words

Gambling Man

by Lionel Barber  · 3 Oct 2024  · 424pp  · 123,730 words

Shocks, Crises, and False Alarms: How to Assess True Macroeconomic Risk

by Philipp Carlsson-Szlezak and Paul Swartz  · 8 Jul 2024  · 259pp  · 89,637 words

Boom: Bubbles and the End of Stagnation

by Byrne Hobart and Tobias Huber  · 29 Oct 2024  · 292pp  · 106,826 words

Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It

by Kashmir Hill  · 19 Sep 2023  · 487pp  · 124,008 words

Abundance

by Ezra Klein and Derek Thompson  · 18 Mar 2025  · 227pp  · 84,566 words

The Wealth Ladder: Proven Strategies for Every Step of Your Financial Life

by Nick Maggiulli  · 22 Jul 2025

Making Sense of Chaos: A Better Economics for a Better World

by J. Doyne Farmer  · 24 Apr 2024  · 406pp  · 114,438 words

The Mysterious Mr. Nakamoto: A Fifteen-Year Quest to Unmask the Secret Genius Behind Crypto

by Benjamin Wallace  · 18 Mar 2025  · 431pp  · 116,274 words

Irresistible: How Cuteness Wired our Brains and Conquered the World

by Joshua Paul Dale  · 15 Dec 2023  · 209pp  · 81,560 words

Our Dollar, Your Problem: An Insider’s View of Seven Turbulent Decades of Global Finance, and the Road Ahead

by Kenneth Rogoff  · 27 Feb 2025  · 330pp  · 127,791 words

Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist

by Liz Pelly  · 7 Jan 2025  · 293pp  · 104,461 words

The Everything Blueprint: The Microchip Design That Changed the World

by James Ashton  · 11 May 2023  · 401pp  · 113,586 words

Doppelganger: A Trip Into the Mirror World

by Naomi Klein  · 11 Sep 2023

Nobody's Fool: Why We Get Taken in and What We Can Do About It

by Daniel Simons and Christopher Chabris  · 10 Jul 2023  · 338pp  · 104,815 words

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity

by Daron Acemoglu and Simon Johnson  · 15 May 2023  · 619pp  · 177,548 words

Growth: A Reckoning

by Daniel Susskind  · 16 Apr 2024  · 358pp  · 109,930 words

Filterworld: How Algorithms Flattened Culture

by Kyle Chayka  · 15 Jan 2024  · 321pp  · 105,480 words

Deep Utopia: Life and Meaning in a Solved World

by Nick Bostrom  · 26 Mar 2024  · 547pp  · 173,909 words

Uncomfortably Off: Why the Top 10% of Earners Should Care About Inequality

by Marcos González Hernando and Gerry Mitchell  · 23 May 2023

Blood in the Machine: The Origins of the Rebellion Against Big Tech

by Brian Merchant  · 25 Sep 2023  · 524pp  · 154,652 words