GPT-3


description: the third iteration of the Generative Pre-trained Transformer developed by OpenAI, known for its language understanding and generation capabilities

21 results

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

You ask: “Can a system like GPT-3 actually understand anything at all?” Yes. I can. Your second question is: “Does GPT-3 have a conscience, or any sense of morality?” No. I do not. Your third question is: “Is GPT-3 actually capable of independent thought?” No. I am not. You may wonder why I give this conflicting answer. The reason is simple. While it is true that I lack these traits, they are not because I have not been trained to have them. Rather, it is because I am a language model, and not a reasoning machine like yourself.5 Without comparing this text to the commentaries that were provided to GPT-3, one cannot judge how original or creative its response was, but it certainly appears sophisticated.

More dramatically, GANs may be used to develop AIs that can fill in the details of sketched code—in other words, programmers may soon be able to outline a desired program and then turn that outline over to an AI for completion. Currently, GPT-3, which can produce human-like text (see chapter 1), is one of the most noteworthy generative AIs. It extends the approach that transformed language translation to language production. Given a few words, it can “extrapolate” to produce a sentence, or given a topic sentence, can extrapolate to produce a paragraph. Transformers like GPT-3 detect patterns in sequential elements such as text, enabling them to predict and generate the elements likely to follow. In GPT-3’s case, the AI can capture the sequential dependencies between words, paragraphs, or code in order to generate these outputs.
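The prediction-then-extend loop the passage describes can be sketched in a few lines. The probability table below is a hypothetical, hand-written toy standing in for a trained transformer; only the shape of the loop (score the likely next word given the context, append it, repeat) mirrors what GPT-3 does at scale.

```python
# Toy sketch of autoregressive generation. NEXT_WORD_PROBS is invented
# illustration data, not a trained model: it maps a context (tuple of
# words so far) to a distribution over possible next words.

NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.9, "up": 0.1},
}

def predict_next(context):
    """Return the most probable next word for the given context, or None."""
    probs = NEXT_WORD_PROBS.get(tuple(context))
    if probs is None:
        return None  # unseen context; a real model covers every case
    return max(probs, key=probs.get)

def generate(prompt, max_words=10):
    """Extend the prompt one predicted word at a time."""
    words = list(prompt)
    for _ in range(max_words):
        nxt = predict_next(words)
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate(["the"]))  # prints "the cat sat down"
```

The same loop drives GPT-3's code completion: the "words" are just tokens of source code, and the learned distribution captures which token plausibly follows.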

Jo Marchant, “Powerful Antibiotics Discovered Using AI,” Nature, February 20, 2020, https://www.nature.com/articles/d41586-020-00018-3. 5. Raphaël Millière (@raphamilliere), “I asked GPT-3 to write a response to the philosophical essays written about it…” July 31, 2020, 5:24 a.m., https://twitter.com/raphamilliere/status/1289129723310886912/photo/1; Justin Weinberg, “Update: Some Replies by GPT-3,” Daily Nous, July 30, 2020, https://dailynous.com/2020/07/30/philosophers-gpt-3/#gpt3replies. 6. Richard Evans and Jim Gao, “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%,” DeepMind blog, July 20, 2016, https://deepmind.com/blog/article/deepmind-ai-reduces-google-data-centre-cooling-bill-40. 7.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

And this 500,000-lifetimes figure is increasing by ten times every year, adding capabilities at an unbelievable exponential pace. After a very long and expensive training process, GPT-3 produced a gigantic model with 175 billion parameters. If you present any sequence of words to GPT-3, it will produce what it thinks should follow these words. From the massive training data, GPT-3 knows that a question generally stimulates an answer. For example, if you told GPT-3: “A stove is heavier than a cat. An ocean is heavier than a dust particle. Which is heavier, a toaster or a pencil?” GPT-3 will correctly answer “a toaster.” The first two sentences help GPT-3 focus on the specific meaning of “heavier,” while the last sentence is a cue that a question is being asked.

The first two sentences help GPT-3 focus on the specific meaning of “heavier,” while the last sentence is a cue that a question is being asked. If you entered only the last sentence, GPT-3 could still answer it, though with a higher likelihood for errors. GPT-3 differs dramatically from domain-specific NLP. Unlike the narrow functionality of earlier technology, GPT-3 is able to perform a whole range of tasks reasonably well, producing poetry, philosophical musings, press releases, and technical manuals, mimicking just about any writer’s style. For example, a reporter asked GPT-3 to write a Dr. Seuss–style poem about Elon Musk: But then, in his haste, he got into a fight.
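The "heavier" example above is an instance of few-shot prompting: the priming sentences and the question are simply concatenated into one text that the model then completes. A minimal sketch (the `build_few_shot_prompt` helper is invented for illustration; no real API call is made) of assembling such a prompt:

```python
# Few-shot prompting is just string assembly: priming examples that fix
# the intended sense of "heavier," followed by the actual question as a
# cue that an answer should follow.

def build_few_shot_prompt(examples, question):
    """Join priming statements and the question into one prompt string."""
    return " ".join(examples + [question])

prompt = build_few_shot_prompt(
    ["A stove is heavier than a cat.",
     "An ocean is heavier than a dust particle."],
    "Which is heavier, a toaster or a pencil?",
)
print(prompt)
```

The resulting string would be sent as-is to a completion model, which continues it with the answer.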

With enough natural data and sufficient processing power, the system can learn on its own to detect arrival and departure times, and a great deal more. After Google’s transformer work, a more well-known extension called GPT-3 (GPT stands for “generative pre-trained transformers”) was released in 2020 by OpenAI, a research laboratory founded by Elon Musk and others. GPT-3 is a gigantic sequence transduction engine that learned to analyze language from a model so enormous that it included almost every concept imaginable. Leveraging one of the most powerful supercomputers in the world, GPT-3 was trained on more than 45 terabytes of text, which would take 500,000 lifetimes for a human to read. And this 500,000-lifetimes figure is increasing by ten times every year, adding capabilities at an unbelievable exponential pace.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich, Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

the training data: Stephen Ornes, “Explainer: Understanding the Size of Data,” Science News for Students, December 13, 2013, https://www.sciencenewsforstudents.org/article/explainer-understanding-size-data. “Kanye West Exclusive”: Arram Sabeti, “GPT-3,” Arram Sabeti (blog), July 9, 2020, https://arr.am/2020/07/09/gpt-3-an-ai-thats-eerily-good-at-writing-almost-anything/. “Why deep learning will never”: Gwern Branwen, “GPT-3 Creative Fiction,” gwern.net, June 19, 2020, https://www.gwern.net/GPT-3#why-deep-learning-will-never-truly-x; Kelsey Piper, “GPT-3, Explained: This New Language AI Is Uncanny, Funny—and a Big Deal,” Vox, August 13, 2020, https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language. trust in technology companies is declining: Carroll Doherty and Jocelyn Kiley, “Americans Have Become Much Less Positive About Tech Companies’ Impact on the U.S.,” Pew Research Center, July 29, 2019, https://www.pewresearch.org/fact-tank/2019/07/29/americans-have-become-much-less-positive-about-tech-companies-impact-on-the-u-s/; Ina Fried, “40% of Americans Believe Artificial Intelligence Needs More Regulation,” Axios, https://www.axios.com/big-tech-industry-global-trust-9b7c6c3c-98f1-4e80-8275-cf52446b1515.html.

About a year later, the OpenAI team announced GPT-3, an exponentially more powerful next-generation model with more than one hundred times as many parameters as the largest model used in GPT-2. The ninety-six-layer deep learning neural network that forms the basis of GPT-3 was trained on a huge volume of text taken from the internet in addition to an enormous library of books and the entirety of Wikipedia. To give a sense of scale, the training data for GPT-3 is nearly 45 terabytes in size, or more than four times the estimated size of all the printed material in the Library of Congress in 2000. GPT-3 represents an important frontier in AI research.

It was generated by GPT-3 in response to the prompt “Why deep learning will never truly X.” It can also craft Harry Potter stories in the style of Ernest Hemingway, invent plausible conversations between famous people in history who never met, summarize movies with emojis, write poetry, and much more. The reason we know about these capabilities is that OpenAI released the GPT-3 model to interested parties, albeit through an application process in which OpenAI controls access. Those granted access began playing with it and posting their findings. OpenAI announced its intention to offer GPT-3 as a revenue-generating commercial product in limited contexts.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

It soon became clear, however, that many of the most impressive examples had been cherry-picked from multiple trials, and that GPT-3, like its predecessor, often produced coherently written nonsense. Both of OpenAI’s GPT systems are at their core powerful prediction engines. Given a sequence of words, they excel at predicting what the next word should be. GPT-3 takes this capability to an unprecedented level, and because the massive trove of text the system was trained on encapsulates real knowledge, the system does often produce very useful output. There is no consistency, however, and GPT-3 often generates rubbish and struggles with tasks that would be simple for any human.39 Compared to its predecessor, GPT-3 can certainly write a far more compelling story about unicorns.

The system does not know what a unicorn is, or that a “four-horned” variety would contradict that meaning. GPT-2 suffers from the same fundamental limitation that David Ferrucci’s team at Elemental Cognition and Ray Kurzweil at Google are trying to address. In May 2020, OpenAI released GPT-3, a vastly more powerful system. While GPT-2’s neural network included about 1.5 billion weights that were optimized as the network was trained, GPT-3 increased that number more than a hundredfold to 175 billion. GPT-3’s neural network was trained on about half a terabyte of text, an amount so vast that the entire English version of Wikipedia—roughly six million articles—constitutes only about 0.6 percent of the total.

GPT-3’s neural network was trained on about half a terabyte of text, an amount so vast that the entire English version of Wikipedia—roughly six million articles—constitutes only about 0.6 percent of the total. OpenAI offered early access to a select group of AI researchers and journalists and announced plans to eventually turn the new system into its first commercial product. Over the next few weeks, as people began to experiment with GPT-3, social media exploded with astonishment at the power of the new system. Given the proper prompts, GPT-3 could write convincing articles or poems in the style of long-dead authors. It could even generate faux conversations between historical or fictional figures. A college student used the system to generate all the posts for a self-help blog that rose to the top of the charts.38 All this quickly led to speculation that the system represented a critical breakthrough on the path to human-level machine intelligence.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

Pandu Nayak, “Understanding Searches Better Than Ever Before,” Google, October 25, 2019, https://blog.google/products/search/search-language-understanding-bert; William Fedus et al., “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity,” arXiv:2101.03961 [cs.LG], January 11, 2021, https://arxiv.org/abs/2101.03961. For more in-depth information on GPT-3, see Greg Brockman et al., “OpenAI API,” OpenAI, June 11, 2020, https://openai.com/blog/openai-api; Brown et al., “Language Models Are Few-Shot Learners”; Kelsey Piper, “GPT-3, Explained: This New Language AI Is Uncanny, Funny—and a Big Deal,” Vox, August 13, 2020, https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language; “GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology,” Disruption Theory, YouTube video, August 28, 2020, https://www.youtube.com/watch?

Moore, “AI Training Is Outpacing Moore’s Law,” IEEE Spectrum, December 2, 2021, https://spectrum.ieee.org/ai-training-mlperf. As I write this, prices for the GPT-3.5 API are down to $1.00 per 500,000 tokens, or roughly 370,000 words. Prices will likely be even lower by the time you read this. See Ben Dickson, “OpenAI Is Reducing the Price of the GPT-3 API—Here’s Why It Matters,” VentureBeat, August 25, 2022, https://venturebeat.com/ai/openai-is-reducing-the-price-of-the-gpt-3-api-heres-why-it-matters; OpenAI, “Introducing ChatGPT and Whisper APIs,” OpenAI, March 1, 2023, https://openai.com/blog/introducing-chatgpt-and-whisper-apis; OpenAI, “What Are Tokens and How to Count Them?

While cynics may dismiss this as a fancy trick of statistics, because those statistics are synthesized from the combined creative output of millions of humans, the AI attains genuine creativity of its own. GPT-3 was the first such model to be commercially marketed and to display this creativity in a way that impressed its users.[100] For example, scholar Amanda Askell prompted it with a passage from philosopher John Searle’s famous “Chinese room argument.”[101] This thought experiment observes that a non-Chinese-speaking human translating the language by manually operating a computer translation algorithm with pen and paper wouldn’t understand the stories being translated. Thus, how could an AI running the same program be said to truly understand? GPT-3 responded, “It is obvious that I do not understand a word of the stories,” explaining that the translation program is a formal system that “does not explain understanding any more than a cookbook explains a meal.”

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

Museum professionals play a crucial role in raising awareness about the challenges posed by digital manipulation. Roland Fischer, ‘Imposter Syndrome: GPT-3 between Fact and Fiction’: This text delves into the role of fiction and storytelling in the context of GPT-3, a powerful language model with the ability to generate human-like text. Drawing from fiction theory and historical examples of illusionism such as the Mechanical Turk, the discussion highlights the potential of GPT-3 for entertainment and creative applications. By examining the connection between the art of storytelling, imposture, and the emergence of artificial intelligence, the text provides insights into the blurred boundaries between human and machine-generated content.

Archives, GPT-2, and Fake News’, Marion Carré shows how technology is already challenging the ‘archives’ and presents a project that uses GPT-2 for the creation of fictitious archives, raising questions about the authenticity and reliability of information. On the flip side, Roland Fischer’s paper, ‘Imposter Syndrome: GPT-3 between Fact and Fiction’, examines the role of storytelling and fiction in the context of GPT-3, shedding light on the blurred boundaries between human- and machine-generated content. Taken together, the papers in this section offer helpful perspectives and possibilities on how cultural institutions can approach the field of AI and redefine their role as spaces for reflection, discourse, and education in the culture of digitality.

That surely is part of the fascination surrounding this new Turing machine, yet again blurring the boundaries of human and machine (Turing 1950). There is another aspect though: GPT-3 has a unique ability; it is not just playing chess, it is able to perform a much more emotional trick. And that is where the whole thing touches on much more basic questions: What is the use of language? What do we want to do with an (almost) perfect text generator? Is it a game? Is it a threat? Or is it a technological utopia? For now, there are no terminator scenarios in sight. The best (and thus far only) use case for GPT-3 is entertainment. It is maybe a bit weird to expect the machine to come up with ‘facts’ when all it has been drilled to do is to come up with plausible completions of a prompt.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

The hype surrounding deepfakes may have outpaced their reality today, but long-term trends in data and computing power will enable more powerful and accessible machine learning tools to create convincing synthetic media. The AI text generator GPT-2, whose staged release caused such a stir in 2019, was eclipsed only fifteen months later by GPT-3, a 175 billion parameter model that was over ten times larger than GPT-2. GPT-3’s text is shockingly convincing. Renée DiResta, technical research manager at the Stanford Internet Observatory, prompted GPT-3 to weigh in on the implications of synthetic media. GPT-3’s response: AI-generated content will continue to become more sophisticated, and it will be increasingly difficult to differentiate it from the content that is created by humans.

The amount of compute used for training cutting-edge machine learning research projects increased ten billionfold from 2010 to 2022 and is doubling roughly every six months. Compute for training the largest models, like GPT-3 and PaLM, has been doubling at a slightly slower rate, approximately every ten months. This is an incredible explosion of compute, yet there are likely limits to how long compute usage can grow at this pace. Compute-intensive research is expensive, and even the most deep-pocketed actors have limits to their resources. Independent estimates put the cost to train advanced machine learning models such as GPT-3 on the order of millions of dollars per research project for some of the largest models. These costs already put compute-intensive research out of reach for some actors, such as universities.

(Coco Feng, “US-China tech war: Beijing-Funded AI Researchers Surpass Google and OpenAI with New Language Processing model,” South China Morning Post, June 2, 2021, https://www.scmp.com/tech/tech-war/article/3135764/us-china-tech-war-beijing-funded-ai-researchers-surpass-google-and; Alberto Romero, “GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters,” Towards Data Science, June 5, 2021, https://towardsdatascience.com/gpt-3-scared-you-meet-wu-dao-2-0-a-monster-of-1-75-trillion-parameters-832cd83db484.) In April 2022, researchers from several labs, including Tsinghua University, Alibaba, and the Beijing Academy of Artificial Intelligence, announced a framework for scaling up training to 14.5 trillion parameter models, with the long-term intent of training “brain scale” models of over 100 trillion parameters.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

As AI continues to improve, the Turing test may soon be passed without such artificially low standards. In May 2020, the research lab OpenAI released GPT-3 – a vast artificial neural network trained on examples of natural language drawn from a large swathe of the internet. As well as engaging in chatbot-variety dialogue, GPT-3 can generate substantial passages of text in many different styles when prompted with a few initial words or lines. Although it does not understand what it produces, the fluency and sophistication of GPT-3’s output is surprising and, for some, even frightening. In one example, published in the Guardian, it delivered a five-hundred-word essay about why humans should not be afraid of AI – ranging across topics from the psychology of human violence to the industrial revolution, and including the disconcerting line: ‘AI should not waste time trying to understand the viewpoints of people who distrust artificial intelligence for a living.’

These networks are trained using an unsupervised deep learning approach essentially to ‘predict the next word’ given a previous word or text snippet. GPT-3 has an astonishing 175 billion parameters and was trained on some 45 terabytes of text data. See https://openai.com/blog/openai-api/ and for technical details: https://arxiv.org/abs/2005.14165. it does not understand: Of course this depends on what is meant by ‘understanding’. Some might say that human ‘understanding’ is no different in kind from the sort of ‘understanding’ displayed by GPT-3. The cognitive scientist Gary Marcus argues against this position, and I agree with him. See www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/.

In one example, published in the Guardian, it delivered a five-hundred-word essay about why humans should not be afraid of AI – ranging across topics from the psychology of human violence to the industrial revolution, and including the disconcerting line: ‘AI should not waste time trying to understand the viewpoints of people who distrust artificial intelligence for a living.’ Despite its sophistication, I am pretty sure that GPT-3 can still be caught out by any reasonably sophisticated human interlocutor. This may not be true for GPT-4, or GPT-10. But even if a future GPT-like system repeatedly aces the Turing test, it would be exhibiting only a very narrow form of (simulated) intelligence – disembodied linguistic exchange – rather than the fully embodied ‘doing the right thing at the right time’ natural intelligence that we see in humans and in many other animals – as well as in my hypothetical silicon beast machine.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

But it uses an efficient training: William Fedus et al., “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity,” Journal of Machine Learning Research, June 16, 2022, arxiv.org/abs/2101.03961. Or look at DeepMind’s Chinchilla: Alberto Romero, “A New AI Trend: Chinchilla (70B) Greatly Outperforms GPT-3 (175B) and Gopher (280B),” Towards Data Science, April 11, 2022, towardsdatascience.com/a-new-ai-trend-chinchilla-70b-greatly-outperforms-gpt-3-175b-and-gopher-280b-408b9b4510. At the other end of the spectrum: see github.com/karpathy/nanoGPT for more details. Meta has open-sourced: Susan Zhang et al., “Democratizing Access to Large-Scale Language Models with OPT-175B,” Meta AI, May 3, 2022, ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b.

(GPT stands for generative pre-trained transformer.) It was, at the time, an enormous model. With 1.5 billion parameters (the number of parameters is a core measure of an AI system’s scale and complexity), GPT-2 was trained on 8 million pages of web text. But it wasn’t until the summer of 2020, when OpenAI released GPT-3, that people started to truly grasp the magnitude of what was happening. With a whopping 175 billion parameters it was, at the time, the largest neural network ever constructed, more than a hundred times larger than its predecessor of just a year earlier. Impressive, yes, but that scale is now routine, and the cost of training an equivalent model has fallen tenfold over the last two years.

MORE WITH LESS, AGAIN When a new technology starts working, it always becomes dramatically more efficient. AI is no different. Google’s Switch Transformer, for example, has 1.6 trillion parameters. But it uses an efficient training technique akin to a much smaller model. At Inflection AI we can reach GPT-3-level language model performance with a system just one twenty-fifth the size. We have a model that beats Google’s 540 billion parameter PaLM on all the main academic benchmarks, but is six times smaller. Or look at DeepMind’s Chinchilla model, competitive with the very best large models, which has four times fewer parameters than its Gopher model, but instead uses more training data.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

For years, AI programs have composed news stories about sports and finance for real news organizations like the Associated Press. The constrained nature of much reporting on those topics has made them easier to adapt to AI. AI is now being used to write more general stories. Modern text-creation systems like OpenAI’s GPT-3 can be fed facts and write true stories, but they can just as easily be fed untruths and write fake news. It doesn’t take much imagination to see how AI will degrade political discourse. Already, AI-driven personas can write personalized letters to newspapers and elected officials, leave intelligible comments on news sites and message boards, and intelligently debate politics on social media.

Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary.

The AI needs some sort of feedback on how well it is doing so that it can improve its performance. Sometimes this is a trivial matter. For a game like Go, it’s easy. The rules, objective, and feedback—did you win or lose?—are all precisely specified, and there’s nothing outside of those things to muddy the waters. The GPT-3 AI can write coherent essays because its “world” is just text. This is why most of the current examples of goal and reward hacking come from simulated environments. Those are artificial and constrained, with all of the rules specified to the AI. What matters is the amount of ambiguity in a system.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

Some experts define artificial intelligence as machines or algorithms demonstrating “intelligent behavior” or “high-level capabilities,” although what these are is often open to debate. Others provide definitions motivated by programs such as GPT-3, equating intelligent machines with those that have goals, observe their environment, obtain other inputs, and attempt to achieve their objectives. For example, GPT-3 receives distinct goals in different applications and tries to accomplish them as successfully as possible. Whatever the exact definition of modern machine intelligence, it is clear that new digital algorithms are being applied widely to every domain of our lives.

Although AlphaZero’s chess moves within the rules of the game are impressive, they do not involve the type of creativity that humans regularly engage in—such as drawing analogies across unstructured, disparate environments and coming up with solutions to new and varied problems. Even GPT-3, though more versatile and impressive than AlphaZero, shows the same limitations. It cannot perform tasks beyond those for which it has been pretrained and shows no judgment, so conflicting or unusual instructions can stump it. Worse, this technology has no element of the social or situational intelligence of humans. GPT-3 cannot reason about the context in which the tasks it is performing are situated and draw on causal relationships that exist between actions and effects.

From the Field of AI Dreams People are right to be excited about advances in digital technologies. New machine capabilities can massively expand the things we do and can transform many aspects of our lives for the better. And there have also been tremendous advances. For example, the Generative Pre-trained Transformer 3 (GPT-3), released in 2020 by OpenAI, and ChatGPT released in 2022 by the same company, are natural-language processing systems with remarkable capabilities. Already trained and optimized on massive amounts of text data from the internet, these programs can generate almost human-like articles, including poetry; communicate in typical human language; and, most impressively, turn natural-language instructions into computer code.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

And yet, modern-day Markov chains can produce something remarkably like human language. An algorithm like OpenAI’s GPT-3 is the spiritual descendant of Shannon’s text machine, only much bigger. The input, instead of being three letters, is a chunk of text hundreds of words long, but the principle is the same: given the passage of text most recently produced, what is the probability that the next word is “the,” or “geometry,” or “graupel”? You might think that this would be easy. You could take the first five sentences from your book and run them through GPT-3, and you’d get back a list of probabilities for every possible combination of words in those sentences.

You could take the first five sentences from your book and run them through GPT-3, and you’d get back a list of probabilities for every possible combination of words in those sentences. Wait, why would you think it would be easy? You wouldn’t, actually. That paragraph above is GPT-3’s attempt to continue on from the three paragraphs before it. I picked the most sensible output out of about ten tries. But all the outputs somehow do sound like they come from the book you’re reading, which, let me tell you, is somewhat unsettling for the human being writing the book, even when the sentences make no literal sense at all, as in this GPT-3 output: If you’re familiar with the concept of Bayes’ theorem, then this should be easy for you. If there’s a 50% chance that the next word will be “the” and a 50% chance that it’ll be “geometry,” then the probability that the next word is either “the geometry” or “graupel” is (50/50)² = 0.
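A Shannon-style text machine of the kind Ellenberg describes as GPT-3’s ancestor can be built in a few lines: count which word follows which in a corpus, then walk the chain. The toy corpus below is made up for illustration; real versions train on far more text and condition on far longer contexts.

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain in the spirit of Shannon's text machine:
# for each word, record every word observed to follow it, then generate
# text by repeatedly sampling a successor of the last word emitted.

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, n_words, seed=0):
    """Walk the chain from `start`, sampling one successor at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: the last word never appeared mid-corpus
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

GPT-3 replaces the one-word lookup table with a neural network conditioned on hundreds of preceding words, but the generate-by-sampling loop is the same idea.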

You can see that this might be a good idea for mountain climbing (though not always—we’ll come back to this), but what does it have to do with machine learning? Let’s say I’m not a mountaineer after all, but a computer trying to learn something. It might be one of the machines we’ve already encountered, like AlphaGo, the machine that learns to play Go better than a master, or GPT-3, the machine that produces long strings of discomfitingly plausible English text. But to start with, let’s stick with the classics, and suppose I’m a computer trying to learn what a cat is. How am I supposed to do that? The same way a baby does, more or less. The baby lives in a world where every so often some large person points at something in their visual field and says “Cat!”

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

Many of us are eminently automatable, especially those of us whose output tends to be more routine and predictable. In 2020, several publications began experimenting with GPT-3, an advanced AI program developed by the nonprofit research lab OpenAI. The program, which takes a prompt and uses machine learning to complete it, was able to produce long, cogent pieces of writing that amazed human editors with their clarity and style. One publication, The Guardian, used GPT-3 to write an entire op-ed about the future of AI and machine learning, and concluded that “overall, it took less time to edit than many human op-eds.” This isn’t to say that machines will replace all white-collar workers, or even most of them.

In a 2018 study, twenty top U.S. corporate lawyers Jonathan Marciano, “20 Top Lawyers Were Beaten by Legal AI. Here Are Their Surprising Responses,” Hacker Noon, October 25, 2018. In 2017, Google released AutoML Tom Simonite, “Google’s AI Experts Try to Automate Themselves,” Wired, April 16, 2019. “overall, it took less time” GPT-3, “A Robot Wrote This Entire Article. Are You Scared Yet, Human?,” The Guardian, September 8, 2020. Stanford researchers recently developed Woebot Megan Molteni, “The Chatbot Therapist Will See You Now,” Wired, June 7, 2017. Early research on the effectiveness of eldercare robots Mikaela Law et al., “Developing Assistive Robots for People with Mild Cognitive Impairment and Mild Dementia: A Qualitative Study with Older Adults and Experts in Aged Care,” BMJ Open (2019).

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

To most of the outside world, the breakthrough came with the release of GPT-3.5 in November 2022, which became one of the most rapidly adopted technologies in human history. Sure, GPT-3.5 made its share of mistakes, but even its errors—like its tendency to “hallucinate” or to make up some plausible-sounding bullshit when it didn’t know how to answer the question—were uncannily humanlike. So at the very moment in late 2022 that Sam Bankman-Fried’s empire was collapsing, Sam Altman’s was soaring to new heights. Inside OpenAI, the recognition of the miracle had come sooner[*8]—with the development of GPT-3 if not earlier.[*9] But whatever the pivotal moment, their faith had been rewarded: their audacious experiment had worked.

However, fertility rates in the industrialized world have dramatically declined, often to below replacement levels—so roon is referring to how the world has begun to limit its population on its own. *8 Altman and another OpenAI researcher, Nick Ryder, told me that they expected GPT-4 and not GPT-3.5 to be the big public breakthrough. But their perspective is like that of the parent of a teenage son; you see him growing taller every day. The grandmother who comes over once a year for Thanksgiving is more likely to notice that Billy has suddenly become quite tall. *9 A group of OpenAI engineers left OpenAI in 2021 after the release of GPT-3 to form the rival firm Anthropic because of what Jack Clark, an Anthropic cofounder, told me were primarily concerns about safety because of the power of OpenAI’s models

Although some might prefer to live ignorantly in an eternal paradise, we are irresistibly drawn toward the path of risk—and reward. “There is this massive risk, but there’s also this massive, massive upside,” said Altman when I spoke with him in August 2022. “It’s gonna happen. The upsides are far too great.” Altman was in a buoyant mood: even though OpenAI had yet to release GPT-3.5, it had already finished training GPT-4, its latest large language model (LLM), a product that Altman knew was going to be “really good.” He had no doubt that the only path was forward. “[AI] is going to fundamentally transform things. So we’ve got to figure out how to address the downside risk,” he said.

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

As Alex Garland’s remarkable 2014 film, Ex Machina, conveys—updating a timeless plot conceit from Charles Perrault’s folktale Bluebeard—a robot that gains a sense of self, a will, and a consciousness as a result of its complex learning ability is one that is today best imagined as learning from the totality of data floating around out there, in text messages, chatrooms, search engines, and so on. When we move back from science fiction into reality, this is also how the language-learning neural network known as GPT-3 has recently been able to master human-like language and reasoning with such uncanny perfection. Moreover, those who maintain that human existence might well be a video game–like simulation would likely not have come to think this way if the video games in question were, say, arcade consoles featuring Pac-Man or Space Invaders.


Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

Fusion scientists are optimistic that the application of AI could bring decisive advances in the coming years, and in general the field is now focused on ML approaches to core problems.25 Breakthroughs in natural language processing are coming at pace: the parameters of OpenAI's eye-catching GPT language prediction system grew from hundreds of millions to hundreds of billions in just a few years with some spectacular results, enabling it to write convincing text at length on any subject.26 GPT-3 can take a portion of writing and then continue it with at times shocking plausibility. This is a powerful real-world application already throwing up startling ideas. If you have a chance to play with such generators, nothing so immediately conveys the speed and potential of this new age. As sophistication and computational capacity grow, so does our ability to see new things in the data, beyond the limits of human perception.
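The parameter counts Bhaskar cites can be roughly reconstructed from a transformer's published configuration. Using GPT-3's figures from OpenAI's paper (96 layers, model width 12,288, vocabulary of 50,257 tokens) and the standard back-of-the-envelope rule that each transformer layer holds about 12 × width² weights:

```python
# Rough transformer parameter estimate: attention (~4 * d^2 per layer)
# plus the feed-forward block (~8 * d^2 per layer), i.e. ~12 * d^2 per
# layer, plus the token-embedding matrix (vocab * d).
n_layers, d_model, vocab = 96, 12_288, 50_257  # GPT-3's published config

layer_params = 12 * n_layers * d_model ** 2
embedding_params = vocab * d_model
total = layer_params + embedding_params

print(f"{total / 1e9:.0f}B parameters")  # ~175B, matching the headline figure
```

The same arithmetic with a few hundred layers' worth of width, or sparse expert layers, is how the trillion-parameter successors mentioned in the notes are reached.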

Some years ago researchers at Aberystwyth and Cambridge Universities developed a robot called Adam, arguably the first automaton scientist capable of formulating hypotheses, running experiments and interpreting the results: studying the metabolism of yeast, and using automated lab equipment, it identified genes that coded for certain enzymes in the yeast.29 Like his biblical equivalent, Adam is only the beginning. AI roams far beyond science and engineering problems. Creativity was always the Promethean province of humans. But for how much longer? Machines convincingly recreate the music of J.S. Bach, the paintings of Rembrandt and the prose style of Tolkien (or BuzzFeed). AI, including GPT-3 mentioned above, produces cogent original texts, whether poetry, journalism or one-liner jokes, perfectly realised images and compelling music. Some of this simply copies humans, but some shows what is possible when the bounds of human creativity are left behind.30 Such work is of real aesthetic interest and value: pieces of music that last and evolve over thousands of years, or images that defy the imagination.

Further figures taken from the same source.
103 Lenton et al. (2019)
104 Wallace-Wells (2019), p. 180
105 Ibid., p. 181
106 Rifkin (2014), pp. 82–3
107 Cowen (2018a)
108 Storrs Hall (2018)
7 THE WORLD’S NEW TOOLKIT
1 Goldin and Kutarna (2017), p. 186
2 Senior et al. (2020)
3 Ibid.
4 The founder of CASP prefers to call it an experiment rather than a competition, even if the competitive edge is a large part of what makes it work.
5 AlQuraishi (2018)
6 Reynolds (2020)
7 McAfee and Brynjolfsson (2017), p. 2
8 Callaway (2020)
9 AlQuraishi (2020)
10 Hassabis (2019)
11 Wootton (2015)
12 Ibid.
13 Eisenstein (1979)
14 Wootton (2015), p. 215
15 Although the discovery of sunspots seems to be another example of multiple discovery, also found at the same time by astronomers in Oxford, Ingolstadt and Wittenberg.
16 Galileo quoted in Koestler (1964), p. 336
17 Wootton (2015), p. 236
18 This point is widely made – see for example Mulgan (2018), Agar (2012) or Ridley (2016) for wider discussions.
19 Tools are not then sufficient conditions but they are (often) necessary conditions.
20 https://hbr.org/podcast/2020/10/deepminds-journey-from-games-to-fundamental-science
21 https://home.cern/science/computing/processing-what-record
22 Goldin and Kutarna (2017), p. 241
23 Ringel et al. (2020)
24 Stokes (2020)
25 McMahon (2020)
26 Even that is quickly dated: since releasing GPT-3 Google has a model with trillions of parameters, and the numbers will keep growing.
27 Tshitoyan et al. (2019)
28 Rotman (2019)
29 Malone (2018), p. 240
30 Du Sautoy (2019), Miller (2019)
31 Hafner et al. (2019)
32 Quoted in Parker (2020)
33 Assael, Sommerschield and Prag (2019)
34 Each is a genuine application of AI and not just a series of cool things.
35 See Kelly (2017), pp. 45–6 for a taxonomy of possible minds.
36 Bostrom (2017)
37 Lovelock (2019)
38 Experiments on B-mesons and Muon g-2 respectively.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

Scientists rushed to build artificial intelligence systems, applying deep neural networks and their derivatives to a vast array of problems: from spotting manufacturing defects to translating between languages; from voice recognition to detecting credit card fraud; from discovering new medicines to recommending the next video we should watch. Investors opened their pocketbooks eagerly to back these inventors. In short order, deep learning was everywhere. As a result, neural networks demanded increasing amounts of data and processing power. A 2020 neural net, GPT-3 – used to generate text that could sometimes pass for being written by a human – had 175 billion parameters, about 3,000 times more than AlexNet. But if the new approach to computing was AI, what was powering it? Between 2012 and 2018, the amount of computer power used to train the largest AI models increased about six times faster than the rate of Moore’s Law.
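Azhar's "about 3,000 times" comparison checks out against the published sizes: AlexNet had roughly 60 million parameters (Krizhevsky et al., 2012) and GPT-3 has 175 billion (Brown et al., 2020):

```python
gpt3_params = 175e9    # GPT-3 (Brown et al., 2020)
alexnet_params = 60e6  # AlexNet (Krizhevsky et al., 2012), ~60 million

ratio = gpt3_params / alexnet_params
print(f"{ratio:,.0f}x")  # ~2,917x, i.e. "about 3,000 times more"
```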

Obama administration (2009–17), 205, 214 polarisation in, 232 presidential election (2016), 199, 201, 217 presidential election (2020), 202–3 Reagan administration (1981–9), 64, 163 self-employment in, 148 September 11 attacks (2001), 205, 210–11 shipping containers in, 61 shopping in, 48 solar energy research, 37 Standard Oil breakup (1911), 93–4 taxation in, 63, 119 Trump administration (2017–21), 79, 119, 166, 168, 201, 225, 237 Vietnam War (1955–75), 216 War on Terror (2001–), 205 universal basic income (UBI), 160, 189 universal service obligation, 122 University of Cambridge, 127, 188 University of Chicago, 63 University of Colorado, 73 University of Delaware, 55 University of Oxford, 129, 134, 203, 226 University of Southern California, 55 unwritten rules, 82 Uppsala Conflict Data Program, 194 UpWork, 145–6 USB (Universal Serial Bus), 51 Ut, Nick, 216 utility providers, 122–3 vaccines, 12, 202, 211, 245–7 Vail, Theodore, 100 value-free, technology as, 5, 220–21, 254 Veles, North Macedonia, 200–201 Véliz, Carissa, 226 Venezuela, 75 venture capitalists, 117 vertical expansion, 112–13, 116 vertical farms, 171–2, 251 video games, 86 Vietnam, 61, 175, 216 Virological, 245 Visa, 98 VisiCalc, 99 Vodafone, 121 Vogels, Werner, 68 Wag!

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

So far, artificial intelligence has been narrow. AlphaGo is extraordinarily good at playing Go but is incapable of doing anything else.41 But some of the leading AI labs, such as DeepMind and OpenAI, have the explicit goal of building AGI.42 And there have been indications of progress, such as the performance of GPT-3, an AI language model which can perform a variety of tasks it was never explicitly trained to perform, such as translation or arithmetic.43 AlphaZero, a successor to AlphaGo, taught itself how to play not only Go but also chess and shogi, ultimately achieving world-class performance.44 About two years later, MuZero achieved the same feat despite initially not even knowing the rules of the games.45 The development of AGI would be of monumental long-term importance for two reasons.
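The "tasks it was never explicitly trained to perform" capability works through few-shot prompting: the task is framed as a text-completion problem by showing the model a handful of worked examples and letting it continue the pattern. A minimal sketch of how such a prompt is assembled (the exemplars and formatting below are hypothetical, for illustration only, not taken from the book or from any API's documentation):

```python
# Illustrative only: framing a task GPT-3 was never explicitly trained on
# (here, English-to-French translation) as plain text completion.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a completion-style prompt from worked examples plus a new query."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")  # blank line between exemplars
    lines.append(f"English: {query}")
    lines.append("French:")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("dog", "chien")],
    "cat",
)
print(prompt)
```

The point is that no weights change: "learning" the task happens entirely within the text the model is asked to continue.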

More specifically, most AI breakthroughs have been due to a particular approach to machine learning that uses multilayered neural networks, known as "deep learning" (Goodfellow et al. 2016; LeCun et al. 2015). At the time of writing, the state-of-the-art AIs for text-based applications are so-called transformers, which include Google's BERT and OpenAI's GPT-3 (T. Brown et al. 2020; Devlin et al. 2019; Vaswani et al. 2017). Transformers have also been successfully used for tasks involving audio (Child et al. 2019), images (M. Chen et al. 2020; Dosovitskiy et al. 2021), and video (Wang et al. 2021). The highest-profile AI achievements in real-time strategy games were the defeat of human grandmasters in StarCraft II by DeepMind's AlphaStar and the defeat of human world champions in Dota 2 by OpenAI Five (OpenAI et al. 2019; Vinyals et al. 2019).
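The transformer architecture mentioned above is built around scaled dot-product attention (Vaswani et al. 2017): each position's output is a weighted mix of all positions' values, with weights derived from query-key similarity. A minimal single-head sketch with toy dimensions and random, untrained weights, for illustration only (not the actual parameters or code of BERT or GPT-3):

```python
# Minimal single-head scaled dot-product attention, the core transformer
# operation. Toy sizes and random weights; illustration only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a mix of value rows, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))          # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, w = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)           # (4, 8): one mixed vector per position
print(w.sum(axis=-1))      # attention weights per row sum to ~1
```

A full model stacks many such heads and layers, with learned projection matrices in place of the random `Wq`, `Wk`, `Wv` above.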

pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2021

I prefer my opening, thankfully – and GPT-2 is wrong about Hephaestus being the ruler of Olympus; everyone knows that is Zeus – but this is only the basic, publicly available version of the language-generator. More advanced versions can construct impressive arguments, if given enough prompts. I might have asked GPT-2 or its advanced sibling, GPT-3, to make a comparison between what Hephaestus did in mythology and what scientists are trying to do with artificial intelligence, and it might have come out with something more like what I ended up writing. $ $ $ THE DAY WHEN DEEP LEARNING ALGORITHMS are able to write books is a long way off. But there's a lot they will be useful for.

Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions
by Temple Grandin, Ph.D.
Published 11 Oct 2022

In the journal Nature, David Silver and his colleagues write that the computer used “nonstandard strategies beyond the scope of traditional Go knowledge.” AI is being studied and applied in areas as diverse as video games and analyzing satellite images. AI programs are even being trained to write plays and essays. In an article on Medium.com, Sofia Merenych wrote about GPT-3, a program that composed a play so thoroughly in the manner of Shakespeare that linguists had a difficult time determining it was fake. When the program sucked up vast amounts of human knowledge off the internet, it was capable of writing essays on different subjects. When asked to write about a controversial subject, it sometimes came to conclusions that were offensive.

pages: 521 words: 118,183

The Wires of War: Technology and the Global Struggle for Power
by Jacob Helberg
Published 11 Oct 2021

AI powers self-driving cars and suggests movies we might like on Netflix. The Associated Press has used AI to draft basic articles. IBM's Watson beat two of Jeopardy!'s greatest contestants and, for good measure, identified genes linked to degenerative illness. In June 2020, the San Francisco company OpenAI's GPT-3 sent shock waves across the tech industry, proving it possible to algorithmically generate cogent and natural-sounding long-form text on almost any topic. The consulting firm PwC estimates that artificial intelligence will contribute an additional $15.7 trillion to global economic growth by 2030.