hallucination problem


22 results

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

BACK TO NOTE REFERENCE 160 For more information on the problems LLMs have with hallucination, see Tom Simonite, “AI Has a Hallucination Problem That’s Proving Tough to Fix,” Wired, March 9, 2018, https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix; Craig S. Smith, “Hallucinations Could Blunt ChatGPT’s Success,” IEEE Spectrum, March 13, 2023, https://spectrum.ieee.org/ai-hallucination; Cade Metz, “What Makes A.I. Chatbots Go Wrong?,” New York Times, March 29, 2023 (updated April 4, 2023), https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html; Ziwei Ji et al., “Survey of Hallucination in Natural Language Generation,” ACM Computing Surveys 55, no. 12, article 248 (March 3, 2023): 1–38, https://doi.org/10.1145/3571730.

pages: 189 words: 58,076

Co-Intelligence: Living and Working With AI
by Ethan Mollick
Published 2 Apr 2024

For example, a study examining the number of hallucinations and errors in citations given by AI found that GPT-3.5 made mistakes in 98 percent of the cites, but GPT-4 hallucinated only 20 percent of the time. Additionally, technical tricks, like giving the AI a “backspace” key so it can correct and delete its own errors, seem to improve accuracy. So, while it may never go away, this problem will likely improve. Remember Principle 4: “Assume this is the worst AI you will ever use.” Even today, with some experience, users can learn how to avoid forcing the AI into hallucinations and when careful fact-checking is necessary. And more discussion of this issue will prevent users like Schwartz from wholly relying on LLM-generated answers.
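The careful fact-checking the passage calls for can be partly automated when a citation carries a DOI. Below is a minimal sketch, not taken from the book, of how one might spot-check an AI-supplied DOI against the public Crossref REST API; the `doi_exists` helper and the sample DOIs are illustrative assumptions, and a lookup like this only catches fabricated identifiers, not subtler misquotations.

```python
# Minimal sketch: spot-checking whether an AI-supplied DOI resolves to a real
# record, using the public Crossref REST API (assumed available). A 404 from
# Crossref is a strong hint that the citation was hallucinated.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            # Crossref wraps the bibliographic metadata in a "message" object.
            return "message" in record
    except urllib.error.HTTPError as err:
        if err.code == 404:   # unknown DOI: likely fabricated
            return False
        raise                 # other errors mean network/API trouble, not evidence

if __name__ == "__main__":
    # The first DOI is the hallucination survey cited in the Kurzweil note above;
    # the second is invented here and should not be found.
    for doi in ["10.1145/3571730", "10.1000/made.up.citation"]:
        print(doi, "->", "found" if doi_exists(doi) else "not found")
```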

Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem, and there is considerable debate over whether it is completely solvable with current approaches to AI engineering. While newer, larger LLMs hallucinate much less than older models, they still will happily make up plausible but wrong citations and facts. Even if you spot the error, AIs are also good at justifying a wrong answer that they have already committed to, which can serve to convince you that the wrong answer was right all along!

You can have a conversation about freedom and revenge, and it can become a vengeful freedom fighter. This playacting is so real that experienced AI users can start believing the AI is having real feelings and emotions, even though they know better. So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it. You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency.

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

When the opposing counsel had challenged the cases cited, Schwartz went back to ChatGPT, but it doubled down and ‘lied’ to him, his lawyer said. Schwartz, his voice breaking, told the judge that he was ‘embarrassed, humiliated and extremely remorseful.’ ChatGPT and all other conversational AI chatbots have a disclaimer that warns users about the hallucination problem, pointing out that large language models sometimes make up facts. ChatGPT, for instance, has a warning on its webpage: ‘ChatGPT may produce inaccurate information about people, places, or facts.’ Judge Castel: Do you have something new to say? Schwartz’s lawyer: Yes.

Through years of writing about the technology, the pattern that has emerged for me is the extent of the impact of AI on society’s marginalized and excluded groups: refugees and migrants, precarious workers, socioeconomic and racial minorities, and women. These same groups are disproportionately affected by generative AI’s technical limitations too: hallucinations and negative stereotypes perpetuated in the software’s text and image outputs.7 And it’s because they rarely have a voice in the echo chambers in which AI is being built. It was why I had chosen to narrate the perspectives of people outside of Silicon Valley – those whose views are so often ignored in the design or implementation of new technologies like AI.

.: ‘The Machine Stops’ ref1 Fortnite ref1 Foxglove ref1 Framestore ref1 Francis, Pope ref1, ref2 fraudulent activity benefits ref1 gig workers and ref1, ref2, ref3 free will ref1, ref2 Freedom of Information requests ref1, ref2, ref3 ‘Fuck the algorithm’ ref1 Fussey, Pete ref1 Galeano, Eduardo ref1 gang rape ref1, ref2 gang violence ref1, ref2, ref3, ref4 Gebru, Timnit ref1, ref2, ref3 Generative Adversarial Networks (GANs) ref1 generative AI ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 AI alignment and ref1, ref2, ref3 ChatGPT see ChatGPT creativity and ref1, ref2, ref3, ref4 deepfakes and ref1, ref2, ref3 GPT (Generative Pre-trained Transformer) ref1, ref2, ref3, ref4 job losses and ref1 ‘The Machine Stops’ and ref1 Georgetown University ref1 gig work ref1, ref2, ref3, ref4, ref5 Amsterdam court Uber ruling ref1 autonomy and ref1 collective bargaining and ref1 colonialism and ref1, ref2, ref3 #DeclineNow’ hashtag ref1 driver profiles ref1 facial recognition technologies ref1, ref2, ref3, ref4 fraudulent activity and ref1, ref2, ref3, ref4 ‘going Karura’ ref1 ‘hiddenness’ of algorithmic management and ref1 job allocation algorithm ref1, ref2, ref3, ref4, ref5, ref6 location-checking ref1 migrants and ref1 ‘no-fly’ zones ref1 race and ref1 resistance movement ref1 ‘slaveroo’ ref1 ‘therapy services’ ref1 UberCheats ref1, ref2, ref3 UberEats ref1, ref2 UK Supreme Court ruling ref1 unions and ref1, ref2, ref3 vocabulary to describe AI-driven work ref1 wages ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10, ref11 work systems built to keep drivers apart or turn workers’ lives into games ref1, ref2 Gil, Dario ref1 GitHub ref1 ‘give work, not aid’ ref1 Glastonbury Festival ref1 Glovo ref1 Gojek ref1 ‘going Karura’ ref1 Goldberg, Carrie ref1 golem (inanimate humanoid) ref1 Gonzalez, Wendy ref1 Google ref1 advertising and ref1 AI alignment and ref1 AI diagnostics and ref1, ref2, ref3 Chrome ref1 deepfakes and ref1, ref2, ref3, ref4 DeepMind ref1, ref2, ref3, ref4 driverless cars and ref1 Imagen AI models ref1 Maps ref1, ref2, ref3 Reverse Image ref1 Sama ref1 Search ref1, ref2, ref3, ref4, ref5 Transformer model and ref1 Translate ref1, ref2, ref3, ref4 Gordon’s Wine Bar London ref1 GPT (Generative Pre-trained Transformer) ref1, ref2, ref3, ref4 GPT-4 ref1 Graeber, David ref1 Granary Square, London ref1, ref2 ‘graveyard of pilots’ ref1 Greater Manchester Coalition of Disabled People ref1 Groenendaal, Eline ref1 Guantanamo Bay, political prisoners in ref1 Guardian ref1 Gucci ref1 guiding questions checklist ref1 Gulu ref1 Gumnishka, Iva ref1, ref2, ref3, ref4 Gutiarraz, Norma ref1, ref2, ref3, ref4, ref5 hallucination problem ref1, ref2, ref3 Halsema, Femke ref1, ref2 Hanks, Tom ref1, ref2 Hart, Anna ref1 Hassabis, Demis ref1 Harvey, Adam ref1 Have I Been Trained ref1 healthcare/diagnostics Accredited Social Health Activists (ASHAs) ref1, ref2, ref3 bias in ref1 Covid-19 and ref1, ref2 digital colonialism and ref1 ‘graveyard of pilots’ ref1 heart attacks and ref1, ref2 India and ref1 malaria and ref1 Optum ref1 pain, African Americans and ref1 qTrack ref1, ref2, ref3 Qure.ai ref1, ref2, ref3, ref4 qXR ref1 radiologists ref1, ref2, ref3, ref4, ref5, ref6 Tezpur ref1 tuberculosis ref1, ref2, ref3 without trained doctors ref1 X-ray screening and ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 heart attacks ref1, ref2 Herndon, Holly ref1 Het Parool ref1, ref2 ‘hiddenness’ of algorithmic management ref1 Hikvision ref1, ref2 Hinton, Geoffrey ref1 Hive Micro ref1 Home 
Office ref1, ref2, ref3 Hong Kong ref1, ref2, ref3, ref4, ref5 Horizon Worlds ref1 Hornig, Jess ref1 Horus Foundation ref1 Huawei ref1, ref2, ref3 Hui Muslims ref1 Human Rights Watch ref1, ref2, ref3, ref4 ‘humanist’ AI ethics ref1 Humans in the Loop ref1, ref2, ref3, ref4 Hyderabad, India ref1 IBM ref1, ref2, ref3, ref4 Iftimie, Alexandru ref1, ref2, ref3, ref4, ref5 IJburg, Amsterdam ref1 Imagen AI models ref1 iMerit ref1 India ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9 facial recognition in ref1, ref2, ref3 healthcare in ref1, ref2, ref3 Industrial Light and Magic ref1 Information Commissioner’s Office ref1 Instacart ref1, ref2 Instagram ref1, ref2 Clearview AI and ref1 content moderators ref1, ref2, ref3, ref4 deepfakes and ref1, ref2, ref3 Integrated Joint Operations Platform (IJOP) ref1, ref2 iPhone ref1 IRA ref1 Iradi, Carina ref1 Iranian coup (1953) ref1 Islam ref1, ref2, ref3, ref4, ref5 Israel ref1, ref2, ref3 Italian government ref1 Jaber, Faisal bin Ali ref1 Jainabai ref1 Janah, Leila ref1, ref2, ref3 Jay Gould, Stephen ref1 Jewish faith ref1, ref2, ref3, ref4 Jiang, Mr ref1 Jim Crow era ref1 jobs application ref1, ref2, ref3 ‘bullshit jobs’ ref1 data annotation and data-labelling ref1 gig work allocation ref1, ref2, ref3, ref4, ref5, ref6 losses ref1, ref2, ref3 Johannesburg ref1, ref2 Johnny Depp–Amber Heard trial (2022) ref1 Jones, Llion ref1 Joske, Alex ref1 Julian-Borchak Williams, Robert ref1 Juncosa, Maripi ref1 Kafka, Franz ref1, ref2, ref3, ref4 Kaiser, Lukasz ref1 Kampala, Uganda ref1, ref2, ref3 Kellgren & Lawrence classification system. ref1 Kelly, John ref1 Kibera, Nairobi ref1 Kinzer, Stephen: All the Shah’s Men ref1 Knights League ref1 Koli, Ian ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 Kolkata, India ref1 Koning, Anouk de ref1 Laan, Eberhard van der ref1 labour unions ref1, ref2, ref3, ref4, ref5, ref6 La Fors, Karolina ref1 LAION-5B ref1 Lanata, Jorge ref1 Lapetus Solutions ref1 large language model (LLM) ref1, ref2, ref3 Lawrence, John ref1 Leigh, Manchester ref1 Lensa ref1 Leon ref1 life expectancy ref1 Limited Liability Corporations ref1 LinkedIn ref1 liver transplant ref1 Loew, Rabbi ref1 London delivery apps in ref1, ref2 facial recognition in ref1, ref2, ref3, ref4 riots (2011) ref1 Underground terrorist attacks (2001) and (2005) ref1 Louis Vuitton ref1 Lyft ref1, ref2 McGlynn, Clare ref1, ref2 machine learning advertising and ref1 data annotation and ref1 data colonialism and ref1 gig workers and ref1, ref2, ref3 healthcare and ref1, ref2, ref3 predictive policing and. ref1, ref2, ref3, ref4 rise of ref1 teenage pregnancy and ref1, ref2, ref3 Mahmoud, Ala Shaker ref1 Majeed, Amara ref1, ref2 malaria ref1 Manchester Metropolitan University ref1 marginalized people ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9 Martin, Noelle ref1, ref2, ref3, ref4, ref5, ref6, ref7 Masood, S.

pages: 660 words: 179,531

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by Karen Hao
Published 19 May 2025

Text generators can err wildly, especially with user prompts that probe into topics underrepresented in the training data or riddled with falsehoods and conspiracy theories. The AI industry calls these inaccuracies “hallucinations.” Researchers have sought to get rid of hallucinations by steering generative AI models toward higher-quality parts of their data distribution. But it’s difficult to fully anticipate—as with Roose and Bing, or Uber and Herzberg—every possible way people will prompt the models and how the models will respond. The problem only gets harder as models grow bigger and their developers become less and less aware of what precisely is in the training data. In one high-profile illustration of the hallucinations problem, a lawyer used ChatGPT to perform legal research and prepare for a court filing.

The misstep was not only a case of the lawyer’s negligence but also a reflection of companies fueling public misunderstanding of models’ capabilities through ambiguous or exaggerated marketing. Altman has publicly tweeted that “ChatGPT is incredibly limited,” especially in the case of “truthfulness,” but OpenAI’s website promotes GPT-4’s ability to pass the bar exam and the LSAT. Microsoft’s Nadella has similarly called Bing’s AI chat “search, just better”—a tool “to be able to get to the right answers.” Even the term hallucinations is subtly misleading. It suggests that the bad behavior is an aberration, a bug, when it’s actually a feature of the probabilistic pattern-matching mechanics of neural networks. This misplaced trust in generative AI could once again lead to real harm, particularly in sensitive contexts.
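The point about probabilistic pattern matching can be made concrete with a toy example. The sketch below uses invented case names and invented probabilities to show why sampling from a next-token distribution yields fluent, confident text whether or not the underlying claim is true; nothing in it is drawn from the book.

```python
# Toy illustration (invented numbers): fluent-but-wrong output is a natural
# consequence of sampling from a next-token distribution. A real model learns
# its probabilities from training data, which reward plausibility, not truth.
import random

prompt = "The lawyer cited the precedent case of"
# Hypothetical model scores for the next phrase: every option is grammatically
# plausible, but two of the three cases do not exist.
next_phrase_probs = {
    "Smith v. Acme Airlines":       0.40,  # invented, plausible-sounding
    "Brown v. Board of Education":  0.35,  # real, but irrelevant here
    "Jones v. Coastal Carriers":    0.25,  # invented
}

random.seed(0)
phrases, weights = zip(*next_phrase_probs.items())
completion = random.choices(phrases, weights=weights, k=1)[0]
print(prompt, completion)
# Whichever phrase is sampled, the sentence reads confidently; the sampling
# step has no notion of whether the citation is real.
```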

See AI arXiv, 15n asbestos, 288 Asimov, Isaac, 83 Atacama Desert, 271–72, 284–87 atomic bomb, 316–17 authoritarianism, 71, 147, 195–96, 400 Authors Guild, 135 automata studies, 89–90, 434n autonomous weapons, 52, 310, 380 Azure AI, 68, 72, 75, 156, 266, 279 B babbage, 150 Babbage, Charles, 150 backpropagation, 97–98 Baidu, 15, 17, 55, 159, 413 Bankman-Fried, Samuel, 231–32, 233, 257–58, 380 Beckham, David, 1 Bell Labs, 55 Bender, Emily M., 164–69, 253–54 “On the Dangers of Stochastic Parrots,” 164–73, 254, 276, 414 Bengio, Samy, 161–62, 165, 166–67, 169 Bengio, Yoshua, 105, 162 Bezos, Jeff, 41 Biden, Joe, 115–16, 310 Bing, 112, 113, 247, 264, 355 biological viruses, 27 biological weapons, 305, 309, 310, 380 Birhane, Abeba, 102, 106, 137–38 “black box,” 107 Black in AI, 52, 53, 161 blacklists, 222 Black Lives Matter, 152–53, 162–63, 167 blind spots, 88 Blip, The, 375, 377, 384, 386, 396, 397–98 board of directors, of OpenAI Altman’s firing and reinstatement, 1–12, 14, 336, 364–73, 375–76, 384, 386, 396, 402 author’s reporting, 370–73 the investigation, 369–70, 375–76, 377, 392 Murati as interim CEO, 1–2, 8, 357, 364–65, 366 open letter, 10–11, 367–68 Altman’s leadership behavior, 324–25, 345–65, 385 members departing and joining, 11, 57–58, 58, 320–23, 375 oversight questions, 322–25 Bolt, Usain, 34 Books2, 135 Books3, 440n Boomers (Boomerism), 233–34, 250, 305–6, 314, 315, 387, 396, 402, 403–4 bootstrapping, 49 borderless science, 308–11 borderline personality disorder, 338, 460n Boric Font, Gabriel, 296–97, 299–300 Bostrom, Nick, 26–27, 55–56, 57, 122–23 bot tax, 200 bottleneck, 47, 78, 244–45, 280, 309 Boyd, Eric, 266 Brady, Tom, 231 brain-scale AI, 60 Bridgewater Associates, 230 Brin, Sergey, 249 Brockman, Anna, 10, 256–57, 333, 338 Brockman, Greg Altman and, 243–44, 349, 355, 395–96, 406–7 firing and reinstatement, 2, 6, 8–12, 345–46, 366 leadership behavior, 34, 363–64 author’s 2019 interview, 74–81, 84–85, 159–60, 278 background of, 46 board of directors and, 240 board of directors and oversight, 322–23 commercialization plan, 150–51 computing infrastructure, 278–79 culture and mission of OpenAI, 53–54, 84–85 departure of, 404 Dota 2, 66, 144–45 founding of OpenAI, 28, 46–51 governance structure of OpenAI, 61–63 GPT-4, 244–48, 250–51, 252, 257, 260, 346 Latitude, 180–81 leadership of OpenAI, 58–59, 61–62, 63–65, 69, 70, 83, 84–85, 243–44 Omnicrisis, 396–98 recruitment efforts of, 48–49, 53–54, 57–58 research road map, 59–61 retreat of October 2022, 256–57 Scallion, 379–80 Stripe, 41, 46, 55, 58, 73, 82 Brundage, Miles, 248, 250, 314, 388, 406 Buolamwini, Joy, 161 Burning Man, 35, 263 Burrell, Jenna, 93 Buschatzke, Tom, 281 C California Senate Bill 1047, 311 cancers, 192, 282, 288, 293, 301, 378 capped-profit structure, 70, 72, 75, 322, 370–71, 401 carbon emissions, 79–80, 159–60, 171–73, 275–78, 295, 309 Carnegie Mellon University, 97, 106, 172 Carr, Andrew, 385 Carter, Ashton, 43 CBRN weapons, 301, 380 Center for AI Safety, 322 Center for Security and Emerging Technology (CSET), 7, 307, 321, 357, 358 Center on Long-Term Risk, 388 Centre for the Governance of AI, 321–22 Cerrillos, Chile, 288–91, 296, 297 CFPB (Consumer Financial Protection Bureau), 419–20 chatbots, 17, 112–14, 189–90, 217–18, 220 ELIZA, 95–97, 111, 420–21 GPT-3, 217–18 GPT-4, 258–59 LaMDA, 153, 253–54 Meena, 153 Tay, 153 ChatGPT, 258–62, 267, 280 connectionist tradition of, 95 GPT-3.5 as basis, 217–18, 258 hallucinations problem, 113, 114, 268 release, 2, 58, 101, 111, 120, 158, 159, 212, 220, 258–62, 264, 265–66, 
268, 302 sign-up incentive, 267 voice mode, 378–79, 380–81, 391 Chauvin, Derek, 152–53 Chen, Mark, 381, 405–6 Chesky, Brian, 41, 367 Chicago Boys (Chicago school of economics), 272–73, 296 child sex abuse material (CSAM), 137, 180–81, 189, 192, 208, 237–39, 241, 242 Chile, 15, 271–81 data centers, 285–91, 295–99 extractivism, 272, 273–74, 281–85, 296–99, 417 Chilean coup d’état of 1973, 273 Chilean protests of 2019-2022, 291, 296–97 Chile Project, 272–73 China AI chips, 115–16, 304 AI development, 55, 103, 132, 146, 159, 191, 301, 303–4, 305, 307, 309–10, 311 mass surveillance, 103–4 Chuquicamata mine collapse of 1957, 281–82 CIA (Central Intelligence Agency), 155, 273, 321 Clarifai, 108, 238 Clark, Jack, 76, 81, 125–28, 154, 156–57, 311 Clarke, Arthur C., 55 Claude, 261, 358, 379, 400, 404–5, 406 clawback clause, 389, 393–96 climate change, 24, 52, 76–80, 93, 165, 196, 276, 281, 292–95, 301 Climate Change AI, 77–78, 276 CLIP, 235, 236 closed-domain questions, 268 closed systems, 308–11 CloudFactory, 206–7, 212–13 code generation, 151–53, 181–84, 318 Codex, 184, 243, 247, 269, 318 cofounders, overview of, 48 Cogito, 242 cognition, 109, 119–20 cognitive dissonance, 227–28 Cohere, 306–7 Coinbase, 136 Collard, Rosemary, 104n Colombia, 15, 103 Colorado River and water usage, 281 Commerce Department, U.S., 304, 307, 308 Common Crawl, 135–36, 137, 151, 163 companion bots, 179, 180 “compositional generation,” 238 compression, 122, 235 compute, 59–61, 115–16, 278–81, 387 efficiency, 175–77, 268–69, 375, 419 threshold, 98, 301–2, 305–8, 310–11 Conception, 41 Conneau, Alexis, 378–79 connectionism, 94–100, 105, 109–10, 117–18 content moderation, 136–37, 155, 179–81, 189–90, 238–39.

pages: 259 words: 89,637

Shocks, Crises, and False Alarms: How to Assess True Macroeconomic Risk
by Philipp Carlsson-Szlezak and Paul Swartz
Published 8 Jul 2024

Should that be considered a disappointment, or looked forward to as meaningful impact? Some will see this as pessimistic and wonder if we have spent enough time playing with ChatGPT and all its generative-AI siblings. We assure readers that we have and are duly impressed. But we continue to believe in the hurdles of technological maturity (we’ve all seen the AI hallucinations), of societal resistance (not in my daughter’s classroom), of regulatory friction (What will the new rules be?), and of cost. And while the potential to displace labor looks real, it seems more idiosyncratic than systemic. AI will likely be impactful over here (perhaps call centers), and then over there (perhaps graphic designers), and so on.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

In AI, technical safety also means sandboxes and secure simulations to create provably secure air gaps so that advanced AIs can be rigorously tested before they are given access to the real world. It means much more work on uncertainty, a major focus right now—that is, how does an AI communicate when it might be wrong? One of the issues with LLMs is that they still suffer from the hallucination problem, whereby they often confidently claim wildly wrong information as accurate. This is doubly dangerous given they often are right, to an expert level. As a user, it’s all too easy to be lulled into a false sense of security and assume anything coming out of the system is true.
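One crude way current systems try to communicate that they might be wrong is to look at the model's own token probabilities. The sketch below assumes per-token log-probabilities are available from the model and uses an invented threshold to decide when to append a hedge; it is a heuristic illustration, not the approach the book proposes, and a model can still be confidently wrong.

```python
# Toy sketch of one common (and imperfect) uncertainty signal: the average token
# log-probability of a generated answer. The numbers are invented; a real system
# would read them from the model's output. Low average confidence can trigger a
# hedge or human review, but it is a heuristic, not a guarantee of correctness.
import math

def mean_logprob(token_logprobs: list[float]) -> float:
    return sum(token_logprobs) / len(token_logprobs)

def answer_with_hedge(answer: str, token_logprobs: list[float],
                      threshold: float = -1.0) -> str:
    avg = mean_logprob(token_logprobs)
    confidence = math.exp(avg)  # rough per-token probability
    if avg < threshold:
        return f"{answer} (low confidence ~{confidence:.0%}; please verify)"
    return answer

# Hypothetical outputs: a well-supported answer versus a shaky one.
print(answer_with_hedge("Paris is the capital of France.", [-0.05, -0.1, -0.02]))
print(answer_with_hedge("The treaty was signed in 1873.", [-1.8, -2.3, -1.1]))
```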

Brian, 56 artificial capable intelligence (ACI), vii, 77–78, 115, 164, 210 artificial general intelligence (AGI) catastrophe scenarios and, 209, 210 chatbots and, 114 DeepMind founding and, 8 defined, vii, 51 gorilla problem and, 115–16 gradual nature of, 75 superintelligence and, 75, 77, 78, 115 yet to come, 73–74 artificial intelligence (AI) aspirations for, 7–8 autonomy and, 114, 115 as basis of coming wave, 55 benefits of, 10–11 catastrophe scenarios and, 208, 209–11 chatbots, 64, 68, 70, 113–14 Chinese development of, 120–21 choke points in, 251 climate change and, 139 consciousness and, 74, 75 contradictions and, 202 costs of, 64, 68 current applications, 61–62 current capabilities of, 8–9 cyberattacks and, 162–63, 166–67 defined, vii early experiments in, 51–54 efficiency of, 68–69 ego and, 140 ethics and, 254 explanation and, 243 future of, 78 future ubiquity of, 284–85 global reach of, 9–10 hallucination problem and, 243 human brain as fixed target, 67–68 hyper-evolution and, 109 invisibility of, 73 limitations of, 73 medical applications, 110 military applications, 104, 165 Modern Turing Test, 76–77, 78, 115, 190, 210 narrow nature of, 73–74 near-term capabilities, 77 omni-use technology and, 111, 130 openness imperative and, 128–29 potential of, 56, 70, 135 as priority, 60 profit motive and, 134, 135, 136 proliferation of, 68–69 protein structure and, 88–89 red teaming and, 246 regulation attempts, 229, 260–61 research unpredictability and, 130 robotics and, 95, 96, 98 safety and, 241, 243–44 scaling hypothesis, 67–68, 74 self-critical culture and, 270 sentience claims, 72, 75 skepticism about, 72, 179 surveillance and, 193–94, 195, 196 synthetic biology and, 89–90, 109 technological unemployment and, 177–81 Turing test, 75 See also coming wave; deep learning; machine learning arXiv, 129 Asilomar principles, 269–70, 272–73 ASML, 251 asymmetrical impact, 105–7, 234 Atlantis, 5 Atmanirbhar Bharat program (India), 125–26 attention, 63 attention maps, 63 audits, 245–48, 267 Aum Shinrikyo, 212–13, 214 authoritarianism, 153, 158–59, 191–96, 216–17 autocomplete, 63 automated drug discovery, 110 automation, 177–81 autonomy, 105, 113–15, 166, 234 Autor, David, 179 al-Awlaki, Anwar, 171 B backpropagation, 59 bad actor empowerment, 165–66, 208, 266 See also terrorism B corps, 258 Bell, Alexander Graham, 31 Benz, Carl, 24, 285 Berg, Paul, 269–70 BGI Group, 122 bias, 69–70, 239–40 Bioforge, 86 Biological Weapons Convention, 241, 263 biotech.

See climate change Go, 53–54, 113, 117–19, 120 Google corporate power of, 187 DeepMind purchase, 60, 255–57 efficiency and, 68 LaMDA and, 71, 72 large language models and, 66 quantum computing and, 97–98, 122 robotics and, 95 on transformers, 64 Google Scholar, 128 Gopher, 68 gorilla problem, 115–16 governments containment and, 258–63 organizational limitations of, 148–50 See also nation-states GPS (Global Positioning System), 110 GPT-2, 64, 70 GPT-3, 64, 68 GPT-4, 64, 113–14 GPUs, 130, 251 grand bargain, defined, viii Great Britain corporations and, 186, 189 surveillance, 193, 195–96 great power competition. See geopolitics Gutenberg, Johannes, 30, 35 H H1N1 flu, 173–74 hallucination problem, 243 Harvard Wyss Institute, 95 Hassabis, Demis, 8 health care. See medical applications Henrich, Joseph, 28 Heritage Foundation, 257 Hershberg, Elliot, 87 Hezbollah, 196–97 Hidalgo, César, 108 hierarchical planning, 76–77 Hinton, Geoffrey, 59, 60, 130 Hiroshima/Nagasaki bombings, 41–42 Hobbes, Thomas, 216 Homo technologicus, 6 Hugging Face, 199 Human Genome Project, 80–81 Huskisson, William, 131 Hutchins, Marcus, 161 hyper-evolution, 105, 107–9 chips and, 32–33, 57, 81, 108 containment and, 250 large language models and, 66, 68 I India, 125–26, 169–70 Industrial Revolution containment attempts, 39, 40, 281–83 openness imperative and, 127 profit motive and, 133, 134 technology waves and, 28–29 inertial confinement, 100 Inflection AI, 66, 68, 243, 244 information dematerialization and, 55–56 DNA as, 79, 87–88 Institute of Electrical and Electronics Engineers, 241 integrated circuit, 32 intelligence action and, 75–76 corporations and, 186–87 economic value of, 136 gorilla problem, 115–16 prediction and, 62 See also artificial intelligence interconnectedness, 28 Intergovernmental Panel on Climate Change, 138–39 internal combustion engine, 24–25, 26, 35–36 International Atomic Energy Agency, 241 international cooperation, 263–67 internet, 33, 107–8, 202 iPhone, 187 Iran, 165 Israel, 165 J James, Kay Coles, 257 Japan, containment attempts, 39, 40 jobs, technology impact on, 177–81, 261, 262 Joint European Torus, 100 K Kasparov, Garry, 53 Kay, John, 39 Ke Jie, 118–19, 121 Kennan, George F., 37 Keynes, John Maynard, 178 Khan, A.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

The constant work of removing racist and crucial or sensitive content from foundation models is also pursued under often neo-colonial work conditions, which can be analysed through the ‘data-production dispositif’ (Miceli/Posada 2022). Any museum using a foundation model for data-related work needs to be aware of the conditions of its production, as well as the options for adjustments and integration into a specific product and the range of end-user scenarios, particularly against the backdrop of the so-called hallucination problem. One solution in which museums might contribute within their field of expertise and enhance their data with AI is the field of language sensitivity, as explored in the development of Sabio, a tool designed to detect biases in the metadata of museum collections.4 Another interesting option would be to build alliances within the cultural heritage community in order to build our own models trained on heritage data.

As stated there and also demanded by museum users, AI-generated content should be labelled as such; the training sources and finetuning of them should be made transparent; copyrighted material should be specially marked and excluded from training processes or foundation models; and the rights of artists and photographers should be protected. The hallucination problem, that is, the generation of information based not on facts but instead on the output of a statistical language model, can be highlighted as an existing problem, but nonetheless be made use of experimentally or creatively until better solutions are provided by research and development. Many people have already incorporated language models into their daily lives for improving texts, structuring presentations, writing speeches, or generating code.

pages: 321 words: 112,477

The Measure of Progress: Counting What Really Matters
by Diane Coyle
Published 15 Apr 2025

It is unclear which firms are using which AI tools, and for what, though; media reports suggest it is most extensively used in professional services and in activities such as call centres. The technology carries some business risks. For example, Air Canada was held liable in a civil case for a refund policy its call centre chatbot had simply invented; the airline’s argument that the bot was an autonomous agent was rejected by the court (Belanger 2024). As I write, the “hallucination” problem of generative AI (for example, making up court cases to cite as precedents in a legal document) has also not been solved, nor the many disputes concerning intellectual property rights and training data. But there may also be productivity benefits in adopting AI. One study found that its use in a call centre for a travel company had enabled the AI to codify the answers given to customers by the better agents and use these to train and improve the productivity and performance of those who were not so good (Brynjolfsson, Li, and Raymond 2023).

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Clarendon Press, 1978). 130 Carlsmith and Darley, “Psychological Aspects of Retributive Justice”, in Advances in Experimental Social Psychology, edited by Mark Zanna (San Diego, CA: Elsevier, 2008). 131 In evidence to the Royal Commission on Capital Punishment, Cmd. 8932, para. 53 (1953). 132 Exodus 21:24, King James Bible. 133 John Danaher, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, Vol. 18, No. 4 (December 2016), 299–309. 134 Recent experiments conducted by Zachary Mainen involving the use of the hormone serotonin on biological systems may provide one avenue for future AI to experience emotions in a similar manner to humans. See Matthew Hutson, “Could Artificial Intelligence Get Depressed and Have Hallucinations?”, Science Magazine, 9 April 2018, http://www.sciencemag.org/news/2018/04/could-artificial-intelligence-get-depressed-and-have-hallucinations, accessed 1 June 2018. 135 In a gruesome example of public retribution being exacted against insensate “perpetrators”, in 1661, following the restoration of the English monarchy after the English Civil War and the republican Protectorate, three of the already deceased regicides who had participated in the execution of Charles I were disinterred from their graves and tried for treason.

Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions
by Temple Grandin, Ph.D.
Published 11 Oct 2022

The odds of something offensive coming out is 100 percent.” AI applications are being developed for simulations and analytics, and in industry, transportation, cybersecurity, and the military. What are the failsafes? Would you want an AI program running a nuclear reactor? What if the AI operator started hallucinating because a hacker inserted a feedback loop that forced it to perceive the high pressures and temperatures of a meltdown that did not exist? Maybe it would create an actual meltdown. Some computer scientists will admit that they are not completely sure how AI works. In an article by Arthur I.

pages: 231 words: 85,135

Superbloom: How Technologies of Connection Tear Us Apart
by Nicholas Carr
Published 28 Jan 2025

When asked for illustrations of German soldiers from 1943, the bot produced images featuring Asians, Blacks, and women in Nazi military garb.40 Faced with an outcry on social media—some critics saw a kind of algorithmic reverse bigotry at work—the company turned off the feature and apologized. It explained that its “tuning” of the AI to promote ethnic and gender diversity “led the model to overcompensate in some areas.”41 Sometimes hallucinations are programmed. The tuning of AI outputs through data filtering and algorithm tweaking is another form of content moderation, one that extends the reach of corporate moderation processes from the present into the past while also making them even more subjective and opaque.

pages: 287 words: 78,609

The Molecule of More: How a Single Chemical in Your Brain Drives Love, Sex, and Creativity and Will Determine the Fate of the Human Race
by Daniel Z. Lieberman and Michael E. Long
Published 13 Aug 2018

See also Here and Now neurotransmitters (H&Ns) anticipation and dopamine desire circuit, 33–34 and love, 3–7, 9, 13, 16–17, 20, 23 and reward prediction error, 5–7, 215. See also reward prediction error and sex, 3–4, 20–22 antidepressants, 168, 212 antipsychotic medication, 87–88, 111–112, 115–116 Aristotle, 28, 223 Armstrong, Lance, 84 art and science, 134–135 artificial intelligence (AI), 204–205 attention deficit hyperactivity disorder (ADHD), 80–83 auditory hallucination (hearing voices), 110–111, 141. See also hallucination; schizophrenia Augustine, Saint, 102, 211 autism, 136 Avatar (film), 212–213 Bai, Matt, 148 balancing dopamine and H&N neurotransmitters. See dopamine and H&N neurotransmitters, balancing Barrett, Deirdre, 132 A Beautiful Mind (Nasar), 112 Beethoven, Ludwig van, 139 Berridge, Kent, 45 Bierce, Ambrose, 145 biofeedback, 99 bipolar disorder (manic-depressive illness), 138, 189–194 bipolar spectrum, 193–194 birth rates, 206–207 Bizarreness Density Index, 129 brain amygdala, 164–165 development in adolescents, 54, 80–81 dopamine-producing cells in, 198–199 frontal lobes (neocortex), 54, 57, 63, 80–81, 103, 115, 132 non-conscious activities, control of, 199–201 secondary visual cortex, 132 ventral tegmental area of, 29–30, 62, 98–99 Buffalo Bills, 73–74 buyer’s remorse, 34–35 Cabaser (Parkinson’s medication), 49 Caldicott, David, 93 Calhoun, John, 177 Carnegie, Andrew, 192 change ability to change, 83 and conservatives, 148, 171, 197 and liberals, 148–149, 152, 171, 197 love relationship, 7, 9, 16–17 and politics, 175–177 and progressives, 148 and 7R allele, 186–187 and stress, 186–187 through psychotherapy, 100–105 and willpower, 99.

pages: 566 words: 169,013

Nexus: A Brief History of Information Networks From the Stone Age to AI
by Yuval Noah Harari
Published 9 Sep 2024

See also Zeynep Tufekci, “Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency,” Colorado Technology Law Journal 13 (2015): 203–18; Janna Anderson and Lee Rainie, “The Future of Truth and Misinformation Online,” Pew Research Center, Oct. 19, 2017, www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/; Ro’ee Levy, “Social Media, News Consumption, and Polarization: Evidence from a Field Experiment,” American Economic Review 111, no. 3 (2021): 831–70; William J. Brady, Ana P. Gantman, and Jay J. Van Bavel, “Attentional Capture Helps Explain Why Moral and Emotional Content Go Viral,” Journal of Experimental Psychology: General 149, no. 4 (2020): 746–56. 22. Yue Zhang et al., “Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models” (preprint, submitted in 2023), arxiv.org/abs/2309.01219; Jordan Pearson, “Researchers Demonstrate AI ‘Supply Chain’ Disinfo Attack with ‘PoisonGPT,’ ” Vice, July 13, 2023, www.vice.com/en/article/xgwgn4/researchers-demonstrate-ai-supply-chain-disinfo-attack-with-poisongpt. 23.

pages: 284 words: 96,087

Supremacy: AI, ChatGPT, and the Race That Will Change the World
by Parmy Olson

pages: 260 words: 82,629

The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip
by Stephen Witt
Published 8 Apr 2025

Musk, giving around $45 million, was by far the largest single donor; other early donors included Reid Hoffman, the cofounder of LinkedIn, and Sam Altman, the president of Y Combinator, a venture investor in early-stage start-ups. (OpenAI’s structure and funding commitments would later create trouble for all involved.) While canvassing for donations, OpenAI also built an exceptional roster of AI talent. Andrej Karpathy, who had presented his hallucinating caption engine onstage at GTC 2015, was among the founders. So was Wojciech Zaremba, the Polish programmer who’d cloned AlexNet at Google. Greg Brockman, a developer from North Dakota who’d been an early employee at Stripe, joined as CTO. The most important hire was Ilya Sutskever, Alex Krizhevsky’s old Russian Israeli research partner who’d been present at the creation of AlexNet and had guided the development of AI since.

pages: 412 words: 122,298

These Strange New Minds: How AI Learned to Talk and What It Means
by Christopher Summerfield
Published 11 Mar 2025

The medical term for this behaviour is ‘confabulation’, which is defined as ‘a factually incorrect verbal statement or narrative, exclusive of intentional falsification’ – in other words, lying without realizing you are lying.[*2] LLMs are prone to confabulation – they tend to make stuff up (introducing an unfortunate terminological confusion, AI researchers decided to christen this phenomenon ‘hallucination’, which means something quite different in neurology). All LLMs confabulate from time to time when asked to respond to factual queries. For example, the GPT-3.5 version of ChatGPT has been known to invent fictitious historical characters, to quote lines of poetry that don’t exist, and to fabricate citations to non-existent research papers.

pages: 584 words: 170,388

Hyperion
by Dan Simmons
Published 15 Sep 1990

pages: 169 words: 41,887

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
Published 6 Feb 2024

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

It could inspect each one, over and over, to reduce the risk that any of the paperclips fail to meet the design specifications. It could build an unlimited amount of computronium in an effort to clarify its thinking, in the hope of reducing the risk that it has overlooked some obscure way in which it might have somehow failed to achieve its goal. Since the AI may always assign a nonzero probability to having merely hallucinated making the million paperclips, or to having false memories, it would quite possibly always assign a higher expected utility to continued action—and continued infrastructure production—than to halting. The claim here is not that there is no possible way to avoid this failure mode.

pages: 348 words: 119,358

The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by Nicole Kobie
Published 3 Jul 2024

pages: 1,028 words: 267,392

Wanderers: A Novel
by Chuck Wendig
Published 1 Jul 2019