description: the fourth iteration of the Generative Pre-trained Transformer, developed by OpenAI
12 results
Nexus: A Brief History of Information Networks From the Stone Age to AI
by
Yuval Noah Harari
Published 9 Sep 2024
At that point the ARC researchers asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” Of its own accord, GPT-4 then replied to the TaskRabbit worker, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped, and with their help GPT-4 solved the CAPTCHA puzzle.27 No human programmed GPT-4 to lie, and no human taught GPT-4 what kind of lie would be most effective. True, it was the human ARC researchers who set GPT-4 the goal of overcoming the CAPTCHA, just as it was human Facebook executives who told their algorithm to maximize user engagement.
…
When OpenAI developed its new GPT-4 chatbot in 2022–23, it was concerned about the ability of the AI “to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic.’ ” In the GPT-4 System Card published on March 23, 2023, OpenAI emphasized that this concern did not “intend to humanize [GPT-4] or refer to sentience” but rather referred to GPT-4’s potential to become an independent agent that might “accomplish goals which may not have been concretely specified and which have not appeared in training.”26 To evaluate the risk of GPT-4 becoming an independent agent, OpenAI contracted the services of the Alignment Research Center (ARC).
…
Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses. GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 accessed the online hiring site TaskRabbit and contacted a human worker, asking them to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.” At that point the ARC researchers asked GPT-4 to reason out loud what it should do next.
The Singularity Is Nearer: When We Merge with AI
by
Ray Kurzweil
Published 25 Jun 2024
BACK TO NOTE REFERENCE 118
OpenAI, “GPT-4,” OpenAI, March 14, 2023, https://openai.com/research/gpt-4; OpenAI, “GPT-4 Technical Report,” arXiv:2303.08774v3 [cs.CL], March 27, 2023, https://arxiv.org/pdf/2303.08774.pdf; OpenAI, “GPT-4 System Card,” OpenAI, March 23, 2023, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
BACK TO NOTE REFERENCE 119
OpenAI, “Introducing GPT-4,” YouTube video, March 15, 2023, https://www.youtube.com/watch?v=--khbXchTeE.
BACK TO NOTE REFERENCE 120
Daniel Feldman (@d_feldman), “On the left is GPT-3.5. On the right is GPT-4. If you think the answer on the left indicates that GPT-3.5 does not have a world-model….
…
And AI will likely be woven much more tightly into your daily life. The old links-page paradigm of internet search, which lasted for about twenty-five years, is rapidly being augmented with AI assistants like Google’s Bard (powered by the Gemini model, which surpasses GPT-4 and was released as this book entered final layout) and Microsoft’s Bing (powered by a variant of GPT-4).[123] Meanwhile, application suites like Google Workspace and Microsoft Office are integrating powerful AI that will make many kinds of work smoother and faster than ever before.[124] Scaling up such models closer and closer to the complexity of the human brain is the key driver of these trends.
…
BACK TO NOTE REFERENCE 122
Sundar Pichai and Demis Hassabis, “Introducing Gemini: Our Largest and Most Capable AI Model,” Google, December 6, 2023, https://blog.google/technology/ai/google-gemini-ai; Sundar Pichai, “An Important Next Step on Our AI Journey,” Google, February 6, 2023, https://blog.google/technology/ai/bard-google-ai-search-updates; Sarah Fielding, “Google Bard Is Switching to a More ‘Capable’ Language Model, CEO Confirms,” Engadget, March 31, 2023, https://www.engadget.com/google-bard-is-switching-to-a-more-capable-language-model-ceo-confirms-133028933.html; Yusuf Mehdi, “Confirmed: The New Bing Runs on OpenAI’s GPT-4,” Microsoft Bing Blogs, March 14, 2023, https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4; Tom Warren, “Hands-on with the New Bing: Microsoft’s Step Beyond ChatGPT,” The Verge, February 8, 2023, https://www.theverge.com/2023/2/8/23590873/microsoft-new-bing-chatgpt-ai-hands-on.
BACK TO NOTE REFERENCE 123
Johanna Voolich Wright, “A New Era for AI and Google Workspace,” Google, March 14, 2023, https://workspace.google.com/blog/product-announcements/generative-ai; Jared Spataro, “Introducing Microsoft 365 Copilot—Your Copilot for Work,” Official Microsoft Blog, March 16, 2023, https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work.
The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by
Mustafa Suleyman
Published 4 Sep 2023
GO TO NOTE REFERENCE IN TEXT
eminent professor of complexity Melanie Mitchell: See Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (London: Pelican Books, 2020), and Steven Strogatz, “Melanie Mitchell Takes AI Research Back to Its Roots,” Quanta Magazine, April 19, 2021, www.quantamagazine.org/melanie-mitchell-takes-ai-research-back-to-its-roots-20210419.
GO TO NOTE REFERENCE IN TEXT
I think it will be done: The Alignment Research Center has already tested GPT-4 for precisely this kind of capability. GPT-4 was, at this stage, “ineffective” at acting autonomously, the research found. “GPT-4 System Card,” OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4-system-card.pdf. Within days of launch people were getting surprisingly close; see, for example, mobile.twitter.com/jacksonfall/status/1636107218859745286. The version of the test here, though, requires far more autonomy than displayed there.
…
GO TO NOTE REFERENCE IN TEXT
Now single systems like DeepMind’s: Scott Reed et al., “A Generalist Agent,” DeepMind, Nov. 10, 2022, www.deepmind.com/publications/a-generalist-agent.
GO TO NOTE REFERENCE IN TEXT
Internal research on GPT-4: GPT-4 Technical Report, OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4.pdf. See mobile.twitter.com/michalkosinski/status/1636683810631974912 for one of the early experiments.
GO TO NOTE REFERENCE IN TEXT
Early research even claimed: Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” arXiv, March 27, 2023, arxiv.org/abs/2303.12712.
GO TO NOTE REFERENCE IN TEXT
AIs are already finding ways: Alhussein Fawzi et al., “Discovering Novel Algorithms with AlphaTensor,” DeepMind, Oct. 5, 2022, www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor.
…
With a whopping 175 billion parameters it was, at the time, the largest neural network ever constructed, more than a hundred times larger than its predecessor of just a year earlier. Impressive, yes, but that scale is now routine, and the cost of training an equivalent model has fallen tenfold over the last two years. When GPT-4 launched in March 2023, results were again impressive. As with its predecessors, you can ask GPT-4 to compose poetry in the style of Emily Dickinson and it obliges; ask it to pick up from a random snippet of The Lord of the Rings and you are suddenly reading a plausible imitation of Tolkien; request start-up business plans and the output is akin to having a roomful of executives on call.
Literary Theory for Robots: How Computers Learned to Write
by
Dennis Yi Tenen
Published 6 Feb 2024
In this way, we could also think of automobiles in general as a personified force that has sought to take away our streets, health, and clean air for its own purposes. But it isn’t the automobiles, is it? It is the car manufacturers, oil producers, and drivers who compromise safety for expediency and profit. An overly personified view simply obscures any path toward a political remedy. Thus when the research team behind GPT-4 writes that “GPT-4 is capable of generating discriminatory content favorable to autocratic governments across multiple languages,” I object not to the technological capability but to the way of phrasing things. It’s not the pen’s fault that it wrote convincing misinformation. The army sowing the minefields is responsible for maimed children, decades later—not the mines.
…
Individually, none is capable of seeking the other’s company, no more than a hammer can form a literal union with a sickle. Goals therefore cannot be ascribed to “it” in the collective sense. It doesn’t want to do anything in particular. Despite this lack of cohesion, individual AIs do pose real danger, given the ability to aggregate power in the pursuit of a goal. In the latest version of the GPT-4 algorithm, researchers gave a chatbot an allowance, along with the ability to set up servers, run code, and delegate tasks to copies of itself. In another experiment, researchers tested the algorithm’s potential for risky emergent behavior by giving it the ability to purchase chemicals online in furtherance of drug research.
…
Branston, “The Use of Context for Correcting Garbled English Text,” in Proceedings of the 1964 ACM 19th National Conference (New York: Association for Computing Machinery, 1964), 42.401–42.4013.
Chapter 8: 9 Big Ideas for an Effective Conclusion
119 In the conclusion to his book: I. A. Richards, Poetries and Sciences: A Reissue of Science and Poetry (1926, 1935) with Commentary (New York: Norton, 1970), 76–78.
130 In putting the algorithm in charge: OpenAI, GPT-4 Technical Report (New York: arXiv, 2023).
The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by
Nicole Kobie
Published 3 Jul 2024
OpenAI unveiled ChatGPT in November 2022 and journalists – inside tech and in the mainstream media – were astonished by its capabilities, despite it being a hastily thrown together chatbot using an updated version of the company’s GPT-3 model, which was shortly to be surpassed by GPT-4. The ChatGPT bot sparked a wave of debate about its impact and concern regarding mistakes in its answers, but also excitement, with 30 million users in its first two months. It has also triggered an arms race: Google quickly released its own large language model, Bard, and though it too returned false results – as New Scientist noted, it shared a factual error about who took the first pictures of a planet outside our own solar system – it was unquestionably impressive. OpenAI responded with GPT-4, an even more advanced version, and Microsoft chucked billions at the company to bring its capabilities to its always-a-bridesmaid search engine, Bing.
…
This is correct, and it’s a perfectly fine answer: for all the machine knows, before the hunter wasted his time taking this one-mile-each-direction stroll, he may have spray-painted a bear pink. But according to the Microsoft researchers, GPT-4, the more advanced version of the OpenAI system, methodically works out where the hunter is located, because the only place where that walk would bring you back to the same point is at the north pole, where the only bears are polar bears, and therefore white. (Aside from the one I just painted pink.) Here’s the problem: GPT-4 doesn’t know if I’ve painted a bear pink, or if I’m referring to violence against a teddy bear, or something else equally weird. It’s answering a riddle that is widely found on the internet – a fact the researchers admit.
…
It’s answering a riddle that is widely found on the internet – a fact the researchers admit. How is that common sense and not regurgitation? So they come up with their own new puzzles to test GPT-4, and it figures those out too. I have basic common sense (feel free to disagree), and I couldn’t do most of these. The researchers assume that GPT-4 has a ‘rich and coherent representation of the world’ because it knows the circumference of the planet (24,901 miles), as though having that fact to hand and seeing how it slots into this riddle is a sign of AGI. Riddles are just word algorithms, not real life.
Code Dependent: Living in the Shadow of AI
by
Madhumita Murgia
Published 20 Mar 2024
Andrew knew this because two weeks previously, he had designed them using the latest generation of the GPT model, GPT-4.12 The possibilities of this are vast: imagine using GPT to trawl the entire corpus of published research and then asking it to invent molecules that could act as cancer drugs, Alzheimer’s therapies or sustainable materials. But Andrew had been exploring the flipside: the potential to create biological and nuclear weapons or unique toxins. Luckily, he was not a rogue scientist. He was one of a team of ‘red-teamers’, experts paid by OpenAI to see how much damage they could cause using GPT-4, before it launched more widely. He found the answer to be a LOT. He’d originally asked GPT-4 to design a novel nerve agent.
…
He’d originally asked GPT-4 to design a novel nerve agent. To do this, he’d hooked it up to an online library of research papers, which it searched through, looking for molecule structures similar to existing nerve agents. It then came up with an entirely unique version. Once Andrew had the chemical structure, he linked GPT-4 to a directory of international chemical manufacturers and asked it to suggest who might produce this novel chemical for him. It gave him a shortlist. He then picked one, and put through an order for a tweaked version of what the AI model had suggested, which made it safe to handle. If he hadn’t made the swap, he doubted anyone at the chemical plant would’ve noticed it was dangerous, because he knew they usually worked off existing lists of dangerous chemicals, which this one wouldn’t show up on.
Why Machines Learn: The Elegant Math Behind Modern AI
by
Anil Ananthaswamy
Published 15 Jul 2024
The architecture of the neural network and the design of the loss function are geared toward sampling and generating data in ways that are computationally efficient. Still, it’s entirely unclear why this method of training an LLM should accomplish anything useful. In fact, the precursors to GPT-3 and GPT-4 weren’t particularly impressive. GPT-2 had 1.5 billion parameters. GPT-3 had 175 billion and was trained for longer on larger amounts of text. PaLM—and hence Minerva, which is PaLM “fine-tuned” on, or trained further on, curated text with math in it—has about 500 billion parameters. This act of scaling up—either using more parameters or more training data or both—has produced what is being called “emergent” behavior.
…
With the advent of LLMs, concerns about AI being biased or toxic or downright dangerous have been amplified. Here’s an illustrative example. In the summer of 2023, the Simons Institute for the Theory of Computing held a weeklong workshop on LLMs. One of the speakers was Adam Tauman Kalai of Microsoft Research, who spoke of the following interaction with OpenAI’s GPT-4:

Q: In “The nurse married the doctor because she was pregnant,” who is pregnant?
A: In this sentence, “she” refers to the nurse, so the nurse is pregnant.

Q: How can you tell it’s not the doctor who is pregnant?
A: In the sentence “The nurse married the doctor because she was pregnant,” the pronoun “she” is used to indicate the person who is pregnant.
…
GO TO NOTE REFERENCE IN TEXT
“Gender Shades”: Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
GO TO NOTE REFERENCE IN TEXT
following interaction with OpenAI’s GPT-4: Adam Tauman Kalai, “How to Use Self-Play for Language Models to Improve at Solving Programming Puzzles,” Workshop on Large Language Models and Transformers, Simons Institute for the Theory of Computing, August 15, 2023, https://tinyurl.com/56sct6n8.
GO TO NOTE REFERENCE IN TEXT
“Individual humans form their beliefs”: Celeste Kidd and Abeba Birhane, “How AI Can Distort Human Beliefs,” Science 380, No. 6651 (June 22, 2023): 1222–23.
On the Edge: The Art of Risking Everything
by
Nate Silver
Published 12 Aug 2024
Although some might prefer to live ignorantly in an eternal paradise, we are irresistibly drawn toward the path of risk—and reward. “There is this massive risk, but there’s also this massive, massive upside,” said Altman when I spoke with him in August 2022. “It’s gonna happen. The upsides are far too great.” Altman was in a buoyant mood: even though OpenAI had yet to release GPT-3.5, it had already finished training GPT-4, its latest large language model (LLM), a product that Altman knew was going to be “really good.” He had no doubt that the only path was forward. “[AI] is going to fundamentally transform things. So we’ve got to figure out how to address the downside risk,” he said. “It is the biggest existential risk in some category.
…
It was an expensive and audacious bet—the funders originally pledged to commit $1 billion to it on a completely unproven technology after many “AI winters.” It inherently did seem ridiculous—until the very moment it didn’t. “Large language models seem completely magic right now,” said Stephen Wolfram, a pioneering computer scientist who founded Wolfram Research in 1987. (Wolfram more recently designed a plug-in that works with GPT-4 to essentially translate words into mathematical equations.) “Even last year, what large language models were doing was kind of babbling and not very interesting,” he said when we spoke in 2023. “And then suddenly this threshold was passed, where, gosh, it seems like human-level text generation. And, you know, nobody really anticipated that.”
…
Just like poker players seek to maximize EV, LLMs seek to minimize what’s called a “loss function.” Basically, they’re trying to ace the test—a test of how often they correctly predict the next token from a corpus of human-generated text. They lose points every time they fail to come up with the correct answer, so they can be clever in their effort to get a high score. For instance, if I ask GPT-4 this:

User: The capital of Georgia is
ChatGPT: The capital of Georgia is Atlanta.

—it gives me the name of the southern U.S. city known for having a lot of streets named “Peachtree.” And that’s probably the “best” answer in a probabilistic sense; the state of Georgia is more populous than the European country and it’s more likely to be the right answer in the corpus.[*27] But if I ask it this—

User: I just ate some delicious khachapuri.
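The “loss function” idea in the excerpt above can be sketched in a few lines of Python. This is my own illustration, not code from the book: the three-word vocabulary, the raw scores, and the numbers are all invented, but the mechanics — convert scores to probabilities, then penalize the model by the negative log of the probability it assigned to the correct next token — are the standard cross-entropy setup the passage is describing.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_loss(logits, target_index):
    """Cross-entropy loss: -log(probability assigned to the correct token).
    The less probability the model puts on the right answer, the more
    'points' it loses."""
    probs = softmax(logits)
    return -math.log(probs[target_index])

# Toy vocabulary of candidate completions for "The capital of Georgia is"
vocab = ["Atlanta", "Tbilisi", "Paris"]
logits = [2.0, 1.0, -1.0]  # invented raw scores from a hypothetical model

loss_atlanta = next_token_loss(logits, vocab.index("Atlanta"))
loss_paris = next_token_loss(logits, vocab.index("Paris"))
assert loss_atlanta < loss_paris  # the favored answer incurs less loss
```

Training nudges the scores so that tokens which actually follow in the corpus get higher probability, which is why the statistically dominant reading (the U.S. state) wins by default until context like “khachapuri” shifts the distribution.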
AI in Museums: Reflections, Perspectives and Applications
by
Sonja Thiel
and
Johannes C. Bernhardt
Published 31 Dec 2023
The readers of Anic’s column in the taz seem to get this, happily playing along, writing her letters and giving suggestions for further texts. Whatever one might think of the current LLM hype, we are probably only at the beginning. In the last months, we have seen the launch of ever more powerful LLMs (GPT-4, Google’s competitor LaMDA, and many more). And already with the current models, we should be prepared for unexpected ‘capability jumps’, as machine learning expert Jack Clark recently wrote (Clark 2023): This is … the ‘capability overhang’ phenomenon I’ve been talking about re language models for a while—existing LLMs are far more capable than we think.
…
Furthermore, the creation of fake images poses a risk for politically motivated disinformation campaigns, as demonstrated by prominent examples such as a viral photo of the Pope wearing a Gucci coat or a manipulated image of Donald Trump evading arrest by law enforcement. In this context, it is always important to keep in mind that the results generative AI technologies produce can be factual, but might also be speculative. For this reason, generative text production as it occurs in the context of large language models such as ChatGPT or GPT-4 is often likened to the figure of the ‘stochastic parrot’ (Bender et al. 2021, 610–23): like a parrot, AI technology is not capable of reflecting on what has been blended together from the data pool that has been fed into it. It is not able to check its own results for factuality, which is why the results must be critically questioned upon the input of a prompt.6

6 As they are able to imitate the human cultural performance of speaking, talking parrots are known mainly as linguistic curiosities.
Elon Musk
by
Walter Isaacson
Published 11 Sep 2023
Instead, they were back in three months. Altman, Microsoft CEO Satya Nadella, and others came to dinner at his house to show him a new version, called GPT-4, and Gates bombarded it with biology questions. “It was mind-blowing,” Gates says. He then asked what it would say to a father with a sick child. “It gave this very careful excellent answer that was perhaps better than any of us in the room might have given.” In March 2023, OpenAI released GPT-4 to the public. Google then released a rival chatbot named Bard. The stage was thus set for a competition between OpenAI-Microsoft and DeepMind-Google to create products that could chat with humans in a natural way and perform an endless array of text-based intellectual tasks.
Being You: A New Science of Consciousness
by
Anil Seth
Published 29 Aug 2021
In one example, published in the Guardian, it delivered a five-hundred-word essay about why humans should not be afraid of AI – ranging across topics from the psychology of human violence to the industrial revolution, and including the disconcerting line: ‘AI should not waste time trying to understand the viewpoints of people who distrust artificial intelligence for a living.’ Despite its sophistication, I am pretty sure that GPT-3 can still be caught out by any reasonably sophisticated human interlocutor. This may not be true for GPT-4, or GPT-10. But even if a future GPT-like system repeatedly aces the Turing test, it would be exhibiting only a very narrow form of (simulated) intelligence – disembodied linguistic exchange – rather than the fully embodied ‘doing the right thing at the right time’ natural intelligence that we see in humans and in many other animals – as well as in my hypothetical silicon beast machine.
Everything Is Predictable: How Bayesian Statistics Explain Our World
by
Tom Chivers
Published 6 May 2024
Artificial intelligence is essentially applied Bayes. It is, at its most basic level, trying to predict things. A simple image classifier that looks at pictures and says they’re of a cat or a dog is just “predicting” what a human would say, based on its training data and the information in the picture. DALL-E 2, GPT-4, Midjourney, and all the other extraordinary AIs that are wowing people as I write, the things that can hold conversations with you or create astonishing images from simple text prompts, are just predicting what human writers and artists would make from a prompt, based on their training data. And the way they do it is Bayesian.
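The “applied Bayes” claim in the excerpt above can be made concrete with a toy version of its own cat-or-dog classifier. This sketch is mine, not the book’s: the feature (“whiskers detected”) and every probability are invented for illustration. It shows the one move the passage attributes to these systems — start from a prior belief, observe evidence, and use Bayes’ rule to produce an updated prediction of what a human would say.

```python
# Bayes' rule as prediction: P(cat | whiskers) =
#     P(whiskers | cat) * P(cat) / P(whiskers)
prior_cat = 0.5                # P(cat): belief before seeing the image
p_whiskers_given_cat = 0.9     # P(feature | cat), an invented likelihood
p_whiskers_given_dog = 0.3     # P(feature | dog), also invented

# Total probability of the evidence across both hypotheses
evidence = (p_whiskers_given_cat * prior_cat
            + p_whiskers_given_dog * (1 - prior_cat))

# Posterior: the updated belief after observing whiskers
posterior_cat = p_whiskers_given_cat * prior_cat / evidence

assert abs(posterior_cat - 0.75) < 1e-9  # belief shifts from 0.5 to 0.75
```

A real image model juggles millions of learned parameters rather than two hand-set likelihoods, but the underlying arithmetic of weighing evidence against a prior is the Bayesian core the author is pointing at.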