AI safety

description: research area focused on making artificial intelligence systems safe and beneficial

52 results

Extremely Hardcore: Inside Elon Musk's Twitter

by Zoë Schiffer  · 13 Feb 2024  · 343pp  · 92,693 words

, 3:23 p.m., twitter.com/elonmusk/status/1592976585858351105. The hacker believed: “Transcript for George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God,” Lex Fridman Podcast, podcast #387, June 29, 2023, lexfridman.com/george-hotz-3-transcript/#chapter12_working_at_twitter.

The Means of Prediction: How AI Really Works (And Who Benefits)

by Maximilian Kasy  · 15 Jan 2025  · 209pp  · 63,332 words

given reward as large as possible, given limited computational resources and limited data. The computer science perspective has informed much of the public discourse around AI safety and AI ethics, especially regarding topics such as fairness or value alignment: “If there is something wrong, then there must be an optimization error.” In

rest of society. This understanding changes how we should think about possible solutions to the problems of AI. How do we address AI ethics and AI safety if the underlying problems are with the parties that set the objectives for AI? How do we choose these objectives in a way that serves

, we will discuss how to regulate algorithms in the next part of the book. We will revisit debates around problem domains including value alignment and AI safety, privacy and data property rights, automation in the workplace, fairness and algorithmic discrimination, and explainability of algorithmic decisions. The objectives of AI are determined by

has been done will be done again; there is nothing new under the sun.” And so it is with the concerns of AI ethics and AI safety. Many of these concerns are similar to concerns that have been raised in entirely human contexts. This is true, in particular, for the concern about

; explainability of decision functions and, 178; obstacles to, 97–98; optimization as flawed approach to, 7; social welfare as foundation of, 72. See also fairness AI safety: democratic governance and, 7, 123; optimization as flawed approach to, 7; teenage mental health and, 123; value alignment and, 131 alchemy, 44, 51, 92 algorithms

AI; reinforcement learning Robinson, Joan, 154 robots, 6, 23–24, 122, 129, 131, 157, 160–61 robustness, 177–78, 184 Russell, Stuart, 4 safety. See AI safety sample splitting, 11, 42 Scandinavian participatory design approach, 158–60 Sedol, Lee, 61 self-attention, 51 self-driving cars, 63–64, 88 self-supervised learning

The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity

by Tim Wu  · 4 Nov 2025  · 246pp  · 65,143 words

.trgdatacenters.com/resource/ai-chatbots-energy-usage-of-2023s-most-popular-chatbots-so-far/. “Statement on AI Risk,” Center for AI Safety, May 30, 2024, https://www.safe.ai/work/statement-on-ai-risk. Rakesh Kochhar, “Which U.S. Workers Are More

Against the Machine: On the Unmaking of Humanity

by Paul Kingsnorth  · 23 Sep 2025  · 388pp  · 110,920 words

, the lines you have drawn may be not just crossed, but rendered obsolete. Our AI friend Sydney, for example, is already darkly threatening its users. AI safety expert Connor Leahy calls its behaviour ‘a warning shot’. Here, he says, we have ‘an AI system which is accessing the internet, and is threatening

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma

by Mustafa Suleyman  · 4 Sep 2023  · 444pp  · 117,770 words

maybe even the contents of the entire cosmos into paper clips. Start following chains of logic like this and myriad sequences of unnerving events unspool. AI safety researchers worry (correctly) that should something like an AGI be created, humanity would no longer control its own destiny. For the first time, we would

most software businesses for decades. It’s worth remembering just how safe years of effort have made many existing technologies—and building on it. Frontier AI safety research is still an undeveloped, nascent field focusing on keeping ever more autonomous systems from superseding our ability to understand or control them. I see

Weapons Convention, has a budget of just $1.4 million and only four full-time employees—fewer than the average McDonald’s. The number of AI safety researchers is still minuscule: up from around a hundred at top labs worldwide in 2021 to three or four hundred in 2022. Given there are

’s a clear must-do here: encourage, incentivize, and directly fund much more work in this area. It’s time for an Apollo program on AI safety and biosafety. Hundreds of thousands should be working on it. Concretely, a good proposal for legislation would be to require that a fixed portion—say

stakeholders need to be included—senior leaders at Alphabet and elsewhere started having these conversations on a regular basis. Across tech companies the kinds of AI safety discussions that felt fringe a decade ago are now becoming routine. The need to balance profits with a positive contribution and cutting-edge safety is

conference in Puerto Rico in 2015 that aimed to do something similar for AI. With a mixed group, it wanted to raise the profile of AI safety, start building a culture of caution, and sketch real answers. We met again in 2017, at the symbolic venue of Asilomar, to draft a set

bioweapons Toby Ord, The Precipice: Existential Risk and the Future of Humanity (London: Bloomsbury, 2020), 57. The number of AI safety researchers Benaich and Hogarth, State of AI Report 2022. Given there are around For an estimate the number of

Elon Musk

by Walter Isaacson  · 11 Sep 2023  · 562pp  · 201,502 words

’s takeaway was the council was basically bullshit,” says Sam Teller, then his chief of staff. “These Google guys have no intention of focusing on AI safety or doing anything that would limit their power.” Musk proceeded to publicly warn of the danger. “Our biggest existential threat,” he told a 2014 symposium

hosting a series of dinner discussions that included members of his old PayPal mafia, including Thiel and Hoffman, on ways to counter Google and promote AI safety. He even reached out to President Obama, who agreed to a one-on-one meeting in May 2015. Musk explained the risk and suggested that

, and he refused to hang out with me anymore,” Musk says. “And I was like, ‘Larry, if you just hadn’t been so cavalier about AI safety then it wouldn’t really be necessary to have some countervailing force.’ ” Musk’s interest in artificial intelligence would lead him to launch an array

ride to the rescue kicked in. The two-way competition between OpenAI and Google needed, he thought, a third gladiator, one that would focus on AI safety and preserving humanity. He was resentful that he had founded and funded OpenAI but was now left out of the fray. AI was the biggest

was not making any money off of OpenAI, and he felt that Musk had not drilled down enough into the complexity of the issue of AI safety. However, he did feel that Musk’s criticisms came from a sincere concern. “He’s a jerk,” Altman told Kara Swisher. “He has a style

Museum, 121 National Labor Relations Board, 421 NATO, 434 Navteq, 62 Nelson, Bill, 386 Neumann, John von, 604 neural dust (brain implants), 400 Neuralink, 398 AI safety and, 243, 394 chip for, 401–3 FSD and, 614 launch of, 244, 399–401, 457 miracle cures and, 561, 562–63, 564 production algorithm

–44, 394 Zilis and, 401, 413 Optimus AI Day 2 presentation of, 487, 496–97, 498–500 AI Day presentation of, 393, 394, 395–97 AI safety and, 485–86 conception of, 244, 457 design of, 482, 483–85 Dojo and, 595 production of, 486–87 Tesla AI Days and, 487, 495

The Alignment Problem: Machine Learning and Human Values

by Brian Christian  · 5 Oct 2020  · 625pp  · 167,349 words

, where the episode is instantly deemed “hilarious” by all concerned. In its cartoonish, destructive slapstick, it certainly is. But for Amodei—who now leads the AI safety team at San Francisco research lab OpenAI—there is another, more sobering message. At some level, this is exactly what he’s worried about. The

understanding evolution, human motivation, and the delicacy of incentives, with implications for business and parenting alike. Part three takes us to the forefront of technical AI safety research, as we tour some of the best ideas currently going for how to align complex autonomous systems with norms and values too subtle or

he was perhaps one of the last alignment researchers who first had to live a kind of double life before being able to work in AI safety directly: working on more conventional problems to get his academic credentials, while figuring out a way to do the work he felt was truly important

I also, like, really wasn’t clear what I was going to do with my life.”32 Then he started reading about the idea of AI safety, through Nick Bostrom and Milan Ćirković’s book Global Catastrophic Risks, some discussions on the internet forum LessWrong, and a couple papers by Eliezer Yudkowsky

ask for some career advice. “I just randomly emailed him out of the blue, telling him, you know, I want to do a PhD in AI safety, can you give me some advice on where to go? And then I attached some work I’d done or something so he would hopefully

in which such an agent is liable to, as Leike put it, “misbehave drastically.”33 With his doctorate in hand, he prepared to enter the AI safety job market: that is, to join one of the three or four places in the world where one could make a career working on

AI safety. After a six-month stint at the Future of Humanity Institute in Oxford, he settled into a permanent role in London at DeepMind. “At the

look deeply into this question of how machines might learn complicated reward functions from humans. The project would ultimately become one of the most significant AI safety papers of 2017, remarkable not only for what it found but for what it represented: a marquee collaboration between the world’s two most active

AI safety research labs, and a tantalizing path forward for alignment research.34 Together they came up with a plan to implement the largest-scale test of

the same issue.” I remark that there’s a certain irony that his twenty-year-old idea ended up being the foundation for his current AI safety agenda. His idle thoughts on the walk to Safeway became, twenty years later, a plan to avert possible civilization-level catastrophe. “That was a complete

, not because it leads in bad directions, but because it leads in no directions at all.”38 Similar paradoxes and problems of definition haunt the AI safety research community. It would be good, for instance, for there to be a similar kind of precautionary principle: for systems to be designed to err

” or “impactful” behavior precise is a considerable challenge in itself.39 One of the first people to think about these issues in the context of AI safety was Stuart Armstrong, who works at Oxford University’s Future of Humanity Institute.40 Rather than trying to enumerate all of the things we don

theoretical conversation but also to create simple, game-like virtual worlds to illustrate these various problems and make the thought experiments concrete. They call these “AI safety gridworlds”—simple, Atari-like, two-dimensional (hence “grid”) environments in which new ideas and algorithms can be put to a practical test.46 The gridworld

instance, once a box is pushed into a corner, any states of the world that have that box anyplace else now become “unreachable.” In the AI safety gridworlds, agents looking out for stepwise relative reachability, alongside their normal goals and rewards, appear to behave rather conscientiously: agents don’t put boxes into

after it’s done whatever point-scoring actions the game incentivizes. Fascinatingly, the mandate to preserve attainable utility seems to foster good behavior in the AI safety gridworlds even when the auxiliary goals are generated at random.49 When Turner first elaborated the idea, on a library whiteboard at Oregon State, he

things out.”50 Over the course of 2018, he turned the math into working code and tossed his attainable-utility-preserving agent into DeepMind’s AI safety gridworlds. It did work. Acting to maximize each individual game’s rewards while at the same time preserving its future ability to satisfy four or

, we often take actions not only whose unintended effects are difficult to envision, but whose intended effects are difficult to envision. Publishing a paper on AI safety, for instance (or, for that matter, a book): it seems like a helpful thing to do, but who can say or foresee exactly how? I

ask Jan Leike, who coauthored the “AI Safety Gridworlds” paper with Krakovna, what he makes of the response so far to his and Krakovna’s gridworlds research. “I’ve been contacted by lots

of people, especially students, who get into the area and they’re like, ‘Oh, AI safety sounds cool. This is some open-source code I can just throw an agent at and play around with.’ And a lot of people have

. . . . I don’t know. It’s hard to know.” CORRIGIBILITY, DEFERENCE, AND COMPLIANCE One of the most chilling and prescient quotations in the field of AI safety comes in a famous 1960 article on the “Moral and Technical Consequences of Automation” by MIT’s Norbert Wiener: “If we use, to achieve our

and perfectly specified what we did and didn’t want the machine to do, then we had better be sure we can intervene. In the AI safety literature, this concept goes by the name of “corrigibility,” and—soberingly—it’s a whole lot more complicated than it seems.52 Almost any discussion

is so deep that, as we have seen in the last few chapters, much of the work being done in advanced AI applications and in AI safety in particular is about moving beyond systems that take in an explicit objective, and toward systems that attempt to imitate humans (in the case of

the enormity of the stakes, is not haste at all but the opposite. Bostrom’s essay came up more than once as I asked various AI safety researchers how they decided to commit their lives to that cause. “I found that argument pretty weird initially,” says Paul Christiano, “or it seemed off

trained on one set of examples finds itself operating in a different kind of environment, without necessarily realizing it. Amodei et al., “Concrete Problems in AI Safety,” gives an overview of this issue, which comes up in various subsequent chapters of this book. 37. Hardt, “How Big Data Is Unfair.” 38. Jacky

. Training systems which themselves are (or may become) optimizers of some “inner” reward function is a source of concern and of active research among contemporary AI-safety researchers. See Hubinger et al., “Risks from Learned Optimization in Advanced Machine Learning Systems.” 57. Andrew Barto, personal interview, May 9, 2018. 58. See Singh

”; see Ramakrishnan, Zhang, and Shah, “Perturbation Training for Human-Robot Teams.” 54. Murdoch, The Bell. 55. Some researchers at the intersection of cognitive science and AI safety, including the Future of Humanity Institute’s Owain Evans, are working on ways to do inverse reinforcement learning to take into account a person who

as Irreversibility.” See also Sunstein, “Irreversibility.” 38. Sunstein, “Beyond the Precautionary Principle.” See also Sunstein, Laws of Fear. 39. Amodei et al., “Concrete Problems in AI Safety,” has an excellent and broad discussion of “avoiding negative side effects” and “impact regularizers,” and Taylor et al., “Alignment for Advanced Machine Learning Systems,” also

Using Relative Reachability,” June 5, 2018, https://vkrakovna.wordpress.com/2018/06/05/measuring-and-avoiding-side-effects-using-relative-reachability/. 46. Leike et al., “AI Safety Gridworlds.” 47. Victoria Krakovna, personal interview, December 8, 2017. 48. The idea of stepwise baselines was suggested by Alexander Turner in https://www.alignmentforum.org

.” See Turner, “Optimal Farsighted Agents Tend to Seek Power.” For more on the notion of power in an AI safety context, including an information-theoretic account of “empowerment,” see Amodei et al., “Concrete Problems in AI Safety,” which, in turn, references Salge, Glackin, and Polani, “Empowerment: An Introduction,” and Mohamed and Rezende, “Variational Information

Turner, personal interview, July 11, 2019. 51. Wiener, “Some Moral and Technical Consequences of Automation.” 52. According to Paul Christiano, “corrigibility” as a tenet of AI safety began with the Machine Intelligence Research Institute’s Eliezer Yudkowsky, and the name itself came from Robert Miles. See Christiano’s “Corrigibility,” https://ai-alignment

command, see, e.g., Coman et al., “Social Attitudes of AI Rebellion,” and Aha and Coman, “The AI Rebellion.” 62. Smitha Milli, “Approaches to Achieving AI Safety” (interview), Melbourne, Australia, August 2017, https://www.youtube.com/watch?v=l82SQfrbdj4. 63. For more on corrigibility and model misspecification using this paradigm, see also

to the regularization method known as “early stopping”; see Yao, Rosasco, and Caponnetto, “On Early Stopping in Gradient Descent Learning.” For further discussion, in an AI safety context, about when “a metric which can be used to improve a system is used to such an extent that further optimization is ineffective or

?” An intriguing research direction in AI alignment involves developing machine-learning systems able to engage in debate with one another; see Irving, Christiano, and Amodei, “AI Safety via Debate.” 22. Jan Leike, “General Reinforcement Learning” (lecture), Colloquium Series on Robust and Beneficial AI 2016, Machine Intelligence Research Institute, Berkeley, California, June 9

.” Master’s thesis, University of Missouri–Kansas City, 1971. Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety.” arXiv Preprint arXiv:1606.06565, 2016. Ampère, André-Marie. Essai sur la philosophie des sciences; ou, Exposition analytique d’une classification naturelle de toutes les

. “Adversarial Examples Are Not Bugs, They Are Features.” In Advances in Neural Information Processing Systems, 125–36. 2019. Irving, Geoffrey, Paul Christiano, and Dario Amodei. “AI Safety via Debate.” arXiv Preprint arXiv:1805.00899, 2018. Jackson, Frank. “Procrastinate Revisited.” Pacific Philosophical Quarterly 95, no. 4 (2014): 634–47. Jackson, Frank, and Robert

Direction.” arXiv Preprint arXiv:1811.07871, 2018. Leike, Jan, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. “AI Safety Gridworlds.” arXiv Preprint arXiv:1711.09883, 2017. Lenson, David. On Drugs. University of Minnesota Press, 1995. Letham, Benjamin, Cynthia Rudin, Tyler H. McCormick, David Madigan

racial bias Against Prediction (Harcourt), 78–79 age bias, 32, 396n7 AGI. See artificial general intelligence Agüera y Arcas, Blaise, 247 AI Now Institute, 396n9 AI safety artificial general intelligence delay risks and, 310 corrigibility, 295–302, 392–93n51 field growth, 12, 249–50, 263 gridworlds for, 292–93, 294, 295, 390n29

, 305 equiprobabiliorism, 303 equivant (company), 337n5 ergodicity assumption, 320 Ermon, Stefano, 324 ethics actualism vs. possibilism and, 239, 379n71 in reinforcement learning, 149 See also AI safety; fairness; moral uncertainty evaluation function. See value function Evans, Owain, 386–87n55 evolution, 170, 171–74, 368n56 expectations, 138–39, 197 See also TD (temporal

vs. statistical prediction, 91–94, 97–98 transparency and, 319, 397n19 word embedding and, 342n61 See also imitation; value alignment human-machine cooperation, 267–76 AI safety and, 268–69 aspiration and, 275–76, 386–87n55 CIRL, 267–68, 385nn40, 43–44 dangers of, 274–76 demonstration learning for, 271 feedback learning

shaping, 162 training data bias, 29 uncertainty, 288 value alignment, 266–67 UC Davis, 111 UC San Diego, 161 Ullman, Tomer, 345n94 uncertainty, 277–310 AI safety and, 291–92 Bayesian neural networks and, 283–86 confidence and, 281–82 corrigibility and, 295–302 deep learning brittleness, 279–81, 387n8 dissent and

of human values and, 247 inference and, 252–53, 269, 323–24, 385n39 inverse reward design for, 301–02 models and, 325–27 See also AI safety; human-machine cooperation; inverse reinforcement learning value functions, 138–39, 143, 238–39, 241–42, 245, 379n71 variable ratio reinforcement schedules, 153 Vaughan, Jenn Wortman

Artificial Intelligence: A Modern Approach

by Stuart Russell and Peter Norvig  · 14 Jul 2019  · 2,466pp  · 668,761 words

wide variety of games (Mnih et al., 2015). DeepMind in turn open-sourced several agent platforms, including the DeepMind Lab (Beattie et al., 2016), the AI Safety Gridworlds (Leike et al., 2017), the Unity game platform (Juliani et al., 2018), and the DM Control Suite (Tassa et al., 2018). Blizzard released the

these kinds of specification failures and take steps to avoid them. To help them do that, Krakovna was part of the team that released the AI Safety Gridworlds environments (Leike et al., 2017), which allows designers to test how well their agents perform. The moral is that we need to be very

requires explainable decisions for its battlefield systems, and has issued a call for research in the area (Gunning, 2016). AI safety: The book Artificial Intelligence Safety and Security (Yampolskiy, 2018) collects essays on AI safety, both recent and classic, going back to Bill Joy’s Why the Future Doesn’t Need Us (Joy, 2000

. OpenAI blog, blog.openai.com/ai-and-compute/. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety. arXiv:1606.06565. Andersen, S. K., Olesen, K. G., Jensen, F. V., and Jensen, F. (1989). HUGIN—A shell for building Bayesian belief universes for

AI Now Institute, 1046, 1059 Airborne Collision Avoidance System X (ACAS X), 588 aircraft carrier scheduling, 401 airport, driving to, 403 airport siting, 530, 535 AI safety, 1061 AI Safety Gridworlds, 873 AISB (Society for Artificial Intelligence and Simulation of Behaviour), 53 Aitken, S., 799, 1092 AI winter, 42, 45 Aizerman, M., 735, 1085

Army of None: Autonomous Weapons and the Future of War

by Paul Scharre  · 23 Apr 2018  · 590pp  · 152,595 words

that it “runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning.” Dietterich did acknowledge that AI safety was an important issue, but said that risks from AI have more to do with what humans allow AI-enabled autonomous systems to do. “The

Robot Rules: Regulating Artificial Intelligence

by Jacob Turner  · 29 Oct 2018  · 688pp  · 147,571 words

in AI including privacy,111 the Trolley Problem,112 algorithmic bias,113 transparency114 and liability for harm caused by AI.115 In terms of AI safety, the White Paper explains that: Because the achieved goals of artificial intelligence technology are influenced by its initial settings, the goal of artificial intelligence design

Artificial Intelligence Containment”, arXiv preprint arXiv:1707.08476 (2017); Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané, “Concrete Problems in AI Safety”, arXiv preprint arXiv:1606.06565 (2016); Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch, “Alignment for Advanced Machine Learning Systems”, Machine Intelligence Research Institute

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

by Karen Hao  · 19 May 2025  · 660pp  · 179,531 words

Four Battlegrounds

by Paul Scharre  · 18 Jan 2023

The Singularity Is Nearer: When We Merge with AI

by Ray Kurzweil  · 25 Jun 2024

The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future

by Tom Chivers  · 12 Jun 2019  · 289pp  · 92,714 words

The Precipice: Existential Risk and the Future of Humanity

by Toby Ord  · 24 Mar 2020  · 513pp  · 152,381 words

Architects of Intelligence

by Martin Ford  · 16 Nov 2018  · 586pp  · 186,548 words

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity

by Amy Webb  · 5 Mar 2019  · 340pp  · 97,723 words

Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom  · 3 Jun 2014  · 574pp  · 164,509 words

Warnings

by Richard A. Clarke  · 10 Apr 2017  · 428pp  · 121,717 words

Human Compatible: Artificial Intelligence and the Problem of Control

by Stuart Russell  · 7 Oct 2019  · 416pp  · 112,268 words

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future

by Keach Hagey  · 19 May 2025  · 439pp  · 125,379 words

Supremacy: AI, ChatGPT, and the Race That Will Change the World

by Parmy Olson  · 284pp  · 96,087 words

Possible Minds: Twenty-Five Ways of Looking at AI

by John Brockman  · 19 Feb 2019  · 339pp  · 94,769 words

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity

by Adam Becker  · 14 Jun 2025  · 381pp  · 119,533 words

These Strange New Minds: How AI Learned to Talk and What It Means

by Christopher Summerfield  · 11 Mar 2025  · 412pp  · 122,298 words

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

by Eliezer Yudkowsky and Nate Soares  · 15 Sep 2025  · 215pp  · 64,699 words

Nexus: A Brief History of Information Networks From the Stone Age to AI

by Yuval Noah Harari  · 9 Sep 2024  · 566pp  · 169,013 words

Human + Machine: Reimagining Work in the Age of AI

by Paul R. Daugherty and H. James Wilson  · 15 Jan 2018  · 523pp  · 61,179 words

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back

by Bruce Schneier  · 7 Feb 2023  · 306pp  · 82,909 words

On the Edge: The Art of Risking Everything

by Nate Silver  · 12 Aug 2024  · 848pp  · 227,015 words

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence

by John Brockman  · 5 Oct 2015  · 481pp  · 125,946 words

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death

by Mark O'Connell  · 28 Feb 2017  · 252pp  · 79,452 words

Artificial You: AI and the Future of Your Mind

by Susan Schneider  · 1 Oct 2019  · 331pp  · 47,993 words

What We Owe the Future: A Million-Year View

by William MacAskill  · 31 Aug 2022  · 451pp  · 125,201 words

Code Dependent: Living in the Shadow of AI

by Madhumita Murgia  · 20 Mar 2024  · 336pp  · 91,806 words

AI Superpowers: China, Silicon Valley, and the New World Order

by Kai-Fu Lee  · 14 Sep 2018  · 307pp  · 88,180 words

Searches: Selfhood in the Digital Age

by Vauhini Vara  · 8 Apr 2025  · 301pp  · 105,209 words

What If We Get It Right?: Visions of Climate Futures

by Ayana Elizabeth Johnson  · 17 Sep 2024  · 588pp  · 160,825 words

Deep Utopia: Life and Meaning in a Solved World

by Nick Bostrom  · 26 Mar 2024  · 547pp  · 173,909 words

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe

by William Poundstone  · 3 Jun 2019  · 283pp  · 81,376 words

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006

by Ben Goertzel and Pei Wang  · 1 Jan 2007  · 303pp  · 67,891 words

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World

by Cade Metz  · 15 Mar 2021  · 414pp  · 109,622 words

The Quiet Damage: QAnon and the Destruction of the American Family

by Jesselyn Cook  · 22 Jul 2024  · 321pp  · 95,778 words

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)

by Jamie Bartlett  · 4 Apr 2018  · 170pp  · 49,193 words

The Wealth Ladder: Proven Strategies for Every Step of Your Financial Life

by Nick Maggiulli  · 22 Jul 2025

The Dark Cloud: How the Digital World Is Costing the Earth

by Guillaume Pitron  · 14 Jun 2023  · 271pp  · 79,355 words

Superbloom: How Technologies of Connection Tear Us Apart

by Nicholas Carr  · 28 Jan 2025  · 231pp  · 85,135 words

The Mysterious Mr. Nakamoto: A Fifteen-Year Quest to Unmask the Secret Genius Behind Crypto

by Benjamin Wallace  · 18 Mar 2025  · 431pp  · 116,274 words

Ways of Being: Beyond Human Intelligence

by James Bridle  · 6 Apr 2022  · 502pp  · 132,062 words

This Is for Everyone: The Captivating Memoir From the Inventor of the World Wide Web

by Tim Berners-Lee  · 8 Sep 2025  · 347pp  · 100,038 words

Co-Intelligence: Living and Working With AI

by Ethan Mollick  · 2 Apr 2024  · 189pp  · 58,076 words

Applied Artificial Intelligence: A Handbook for Business Leaders

by Mariya Yao, Adelyn Zhou and Marlene Jia  · 1 Jun 2018  · 161pp  · 39,526 words