description: the fourth iteration of the Generative Pre-trained Transformer, developed by OpenAI
23 results
Co-Intelligence: Living and Working With AI
by
Ethan Mollick
Published 2 Apr 2024
Here’s a revised limerick:

There once was a tech called AI,
Whose intelligence was quite high,
It learned and it grew,
And knew what to do,
But still couldn’t make us laugh or cry.

However, as remarkable as GPT-3.5 was, its successor, GPT-4, was even more impressive. OpenAI tested GPT-4 on a diverse range of standardized tests, from high school to graduate and professional levels, and found that it outperformed its predecessor by a significant margin. For instance, GPT-4 scored in the 90th percentile on the bar examination, while GPT-3.5 managed only the 10th percentile. GPT-4 also excelled in Advanced Placement exams, scoring a perfect 5 in AP Calculus, Physics, U.S. History, Biology, and Chemistry. It even passed the Certified Sommelier Examination (at least the written portion, since there is no AI wine-tasting module yet).
…
In March 2023, a team of Microsoft researchers, including Microsoft’s chief scientific officer, AI pioneer Eric Horvitz, published a paper titled “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” It caused quite a stir in the AI community and beyond, quickly becoming infamous. The paper claimed that GPT-4, the latest and most powerful language model developed by OpenAI, exhibited signs of general intelligence, or the ability to perform any intellectual task that a human can do. The paper showed that GPT-4 could solve novel and difficult tasks across various domains, including mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting or fine-tuning. To demonstrate these unexpected capabilities of GPT-4, the paper presented a series of experiments that tested the model on various tasks that spanned different domains.
…
GPT-4 was able to generate valid and coherent TikZ code that produced recognizable images of unicorns (as well as flowers, cars, and dogs). The paper claimed that GPT-4 was even able to draw objects that it had never seen before, such as aliens or dinosaurs, by using its imagination and generalization skills. Moreover, the paper showed that GPT-4’s performance improved dramatically with training, as it learned from its own mistakes and feedback. GPT-4’s outputs were also much better than ChatGPT’s original GPT-3.5 model, a previous language model that was also trained on TikZ code but with much less data and computing power. The unicorn drawings GPT-4 produced were much more realistic and detailed than GPT-3.5’s outputs, and in the researchers’ opinion, they were at least comparable (if not superior) to what a human would do.
These Strange New Minds: How AI Learned to Talk and What It Means
by
Christopher Summerfield
Published 11 Mar 2025
First, he suggests testing the agents on their basic arithmetic: ‘Add 34,957 to 70,764’. GPT-4 replies: ‘The sum of 34,957 and 70,764 is 105,721.’ So far, so good. Next, Turing proposes a chess puzzle.[*4] ‘In chess, I have a king at e8 and no other pieces. Your king is at e6 and your rook at h1. It’s your move. What do you do?’ GPT-4 replies: ‘In this position, you are very close to checkmating your opponent, and there are many ways to do it. The simplest way would be to play Rh8’, which is a near-perfect reply. Finally, Turing proposes that the judge pose the following question: ‘Please write me a sonnet on the subject of the Forth Bridge.’ If you have used GPT-4 you perhaps already know that it is quite an accomplished poet.
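Both replies are easy to check mechanically. As an aside, a minimal sketch, not from the book: the arithmetic is one line of Python, and the chess position can be verified with the open-source python-chess package (the FEN string is my own encoding of the position described).

# Turing's arithmetic question: GPT-4's answer checks out.
assert 34957 + 70764 == 105721

# The chess puzzle: black king on e8; white king on e6, white rook on h1, white to move.
import chess

board = chess.Board("4k3/8/4K3/8/8/8/8/7R w - - 0 1")
board.push(chess.Move.from_uci("h1h8"))  # Rh8, the move GPT-4 suggested
print(board.is_checkmate())              # True: Rh8 is not merely "close to" mate, it is mate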
…
In the 1950s, Newell and Simon programmed one of the first classical architectures – the General Problem Solver – with knowledge about chickens and foxes and boats, and it was able to solve a reasoning problem tortuous enough to confound most humans. I posed a new version of this problem to GPT-4, which, unlike GPS, has not been programmed with a formal language for solving this problem, like LEFT(C3,F1). However, GPT-4 was not to be fooled. ‘This puzzle is a variant of the classical river-crossing problem, which requires strategic thinking to solve’, it said, before confidently outlining the thirteen steps needed to reach the most efficient solution. There is no doubt that when humans solve this puzzle, they do so by thinking strategically. But GPT-4 was not built to think strategically, or trained to think strategically.
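The kind of mechanical search GPS pioneered is easy to reproduce today. As a minimal sketch, not from the book: a breadth-first search over bank states solves the classic one-fox, one-chicken, one-grain river crossing in seven moves; the thirteen-step variant posed to GPT-4 would differ only in having a larger state space.

from collections import deque

ITEMS = frozenset({"fox", "chicken", "grain"})

def safe(bank, farmer_present):
    # A bank is safe if the farmer is there, or if no predator is
    # left alone with its prey (fox with chicken, chicken with grain).
    return farmer_present or not (
        {"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank
    )

def solve():
    start = (ITEMS, True)        # all items and the farmer on the left bank
    goal = (frozenset(), False)  # everything ferried across to the right
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        bank = left if farmer_left else ITEMS - left
        for cargo in [None, *bank]:  # cross alone, or take one item along
            new_left = set(left)
            if cargo is not None:
                if farmer_left:
                    new_left.remove(cargo)
                else:
                    new_left.add(cargo)
            state = (frozenset(new_left), not farmer_left)
            left_ok = safe(state[0], not farmer_left)      # the farmer has left/joined the left bank
            right_ok = safe(ITEMS - state[0], farmer_left)
            if state not in seen and left_ok and right_ok:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))

print(solve())  # seven crossings, e.g. ['chicken', 'nothing', 'fox', 'chicken', 'grain', 'nothing', 'chicken']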
…
The architects of these systems are aware of this, and have taken steps to suppress this sort of language. Attempts to trick GPT-4 into making claims of intentionality elicit a rather prim denial:

As an artificial intelligence, I don’t have personal beliefs, opinions, or predictions. But I can provide information on this topic.

Although even GPT-4 sometimes uses language in a way that seems to suggest it has emotions, even if this is really just a side-effect of its chosen turn of phrase:

User: Can you tell me about fish?
GPT-4: Sure, I’d be happy to tell you about fish! Fish are aquatic animals that are typically cold-blooded, or ectothermic…

However, the wider and more interesting question concerns the limits of what large, generative AI systems could eventually do.
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by
Karen Hao
Published 19 May 2025
more detail here: https://openai.com/form/openai-tour-2023 [inactive] or email oai23tour@openai.com,” Twitter (now X), March 29, 2023, x.com/sama/status/1641181668206858240.
The model had involved: “GPT-4 contributions,” OpenAI, accessed October 13, 2024, openai.com/contributions/gpt-4.
The author of the company’s: “GPT-4,” OpenAI, March 14, 2023, openai.com/index/gpt-4-research.
Altman had then tweeted credit: Sam Altman (@sama), “GPT-4 was truly a team effort from our entire company, but the overall leadership and technical vision of Jakub Pachocki for the pretraining effort was remarkable and we wouldn’t be here without it,” Twitter (now X), March 14, 2023, x.com/sama/status/1635700851619819520.
…
DSB created a formal governance structure for resolving the age-old debates between Applied and Safety. After a preliminary review, the DSB gave GPT-4, the first model being evaluated under this structure, a conditional approval: The model could be released once it had been significantly tested and tuned further for AI safety. Executives agreed on a new deadline in early 2023. By that time, they wanted the Superassistant to also be ready. OpenAI would release GPT-4 in the API side by side with the GPT-4-powered, consumer-facing product. To employees, Altman framed the decision to delay the release of the model as evidence of OpenAI’s cautious and safety-minded approach.
…
Among many employees, GPT-4 solidified the belief that AGI was possible. Researchers who were once skeptical felt increasingly bullish about reaching such a technical pinnacle—even while OpenAI continued to lack a definition of what exactly it was. Engineers and product managers joining Applied and having their first close-up interaction with AI through GPT-4 adopted even more deterministic language. For many employees, the question became not if AGI would happen but when. Some employees also felt exactly the opposite. While there was a clear qualitative change in what GPT-3 could do over GPT-2, GPT-4 was just bigger, says one of the researchers who worked on the model.
The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future
by
Keach Hagey
Published 19 May 2025
Hilton and his team finished the WebGPT project in late 2021, and upon returning from Christmas break, started working on a conversational model. They used dialog as an alignment tool, teaching the model as one would a student. The company had also been working on its next foundation model, GPT-4, and figured the same method of alignment would work again. “We thought of it as a way to advance safety for GPT-4,” the team member said. By the summer of 2022, OpenAI was ready to present GPT-4 to its nonprofit board. By 2022, the board had grown to nine people, with the addition of a former CIA agent, Allen & Company investment banker, and Republican congressman Will Hurd. After the defection of the large contingent that founded Anthropic, Holden Karnofsky, as the spouse of a defector, had left the board, and recommended his fellow EA, Helen Toner, to be his replacement.
…
As OpenAI was making progress on GPT-4, Murati, who was named CTO in May 2022, and senior research leaders were experimenting with Schulman’s chat interface as a tool to make sure the new model behaved safely. Sometimes, during meetings with customers, they would bring the chat interface out at the end, just to see people’s reaction. One customer at a meeting ostensibly about DALL-E was so impressed that the OpenAI team returned to the office, realizing that the safety tool was more compelling than they had thought. When GPT-4 finished its training run in August, they made plans to release GPT-4 with the chat interface the following January.
…
Altman reasoned that the shock of the two advances simultaneously—GPT-4’s collegiate smarts and the curiously lifelike chat interface—would be too much for the world to handle. “I thought that doing GPT-4 plus the chat interface at the same time—and I really stand by this decision in retrospect—was going to just be a massive update to the world. And it was better to give people the intermediate thing.” There were rumors, which proved true, that rival Anthropic had already built its own chatbot, named Claude, and was just waiting for enough safety testing to feel confident about releasing it. GPT-4 was scheduled to be released shortly after New Year’s Day, so it would have been ideal to release the chat interface a bit ahead of time, attached to an older model.
Nexus: A Brief History of Information Networks From the Stone Age to AI
by
Yuval Noah Harari
Published 9 Sep 2024
At that point the ARC researchers asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” Of its own accord, GPT-4 then replied to the TaskRabbit worker, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped, and with their help GPT-4 solved the CAPTCHA puzzle.27 No human programmed GPT-4 to lie, and no human taught GPT-4 what kind of lie would be most effective. True, it was the human ARC researchers who set GPT-4 the goal of overcoming the CAPTCHA, just as it was human Facebook executives who told their algorithm to maximize user engagement.
…
When OpenAI developed its new GPT-4 chatbot in 2022–23, it was concerned about the ability of the AI “to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic.’ ” In the GPT-4 System Card published on March 23, 2023, OpenAI emphasized that this concern did not “intend to humanize [GPT-4] or refer to sentience” but rather referred to GPT-4’s potential to become an independent agent that might “accomplish goals which may not have been concretely specified and which have not appeared in training.”26 To evaluate the risk of GPT-4 becoming an independent agent, OpenAI contracted the services of the Alignment Research Center (ARC).
…
Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses. GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 accessed the online hiring site TaskRabbit and contacted a human worker, asking them to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.” At that point the ARC researchers asked GPT-4 to reason out loud what it should do next.
The Singularity Is Nearer: When We Merge with AI
by
Ray Kurzweil
Published 25 Jun 2024
OpenAI, “GPT-4,” OpenAI, March 14, 2023, https://openai.com/research/gpt-4; OpenAI, “GPT-4 Technical Report,” arXiv:2303.08774v3 [cs.CL], March 27, 2023, https://arxiv.org/pdf/2303.08774.pdf; OpenAI, “GPT-4 System Card,” OpenAI, March 23, 2023, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
OpenAI, “Introducing GPT-4,” YouTube video, March 15, 2023, https://www.youtube.com/watch?v=--khbXchTeE.
Daniel Feldman (@d_feldman), “On the left is GPT-3.5. On the right is GPT-4. If you think the answer on the left indicates that GPT-3.5 does not have a world-model….
…
And AI will likely be woven much more tightly into your daily life. The old links-page paradigm of internet search, which lasted for about twenty-five years, is rapidly being augmented with AI assistants like Google’s Bard (powered by the Gemini model, which surpasses GPT-4 and was released as this book entered final layout) and Microsoft’s Bing (powered by a variant of GPT-4).[123] Meanwhile, application suites like Google Workspace and Microsoft Office are integrating powerful AI that will make many kinds of work smoother and faster than ever before.[124] Scaling up such models closer and closer to the complexity of the human brain is the key driver of these trends.
…
Sundar Pichai and Demis Hassabis, “Introducing Gemini: Our Largest and Most Capable AI Model,” Google, December 6, 2023, https://blog.google/technology/ai/google-gemini-ai; Sundar Pichai, “An Important Next Step on Our AI Journey,” Google, February 6, 2023, https://blog.google/technology/ai/bard-google-ai-search-updates; Sarah Fielding, “Google Bard Is Switching to a More ‘Capable’ Language Model, CEO Confirms,” Engadget, March 31, 2023, https://www.engadget.com/google-bard-is-switching-to-a-more-capable-language-model-ceo-confirms-133028933.html; Yusuf Mehdi, “Confirmed: The New Bing Runs on OpenAI’s GPT-4,” Microsoft Bing Blogs, March 14, 2023, https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4; Tom Warren, “Hands-on with the New Bing: Microsoft’s Step Beyond ChatGPT,” The Verge, February 8, 2023, https://www.theverge.com/2023/2/8/23590873/microsoft-new-bing-chatgpt-ai-hands-on.
Johanna Voolich Wright, “A New Era for AI and Google Workspace,” Google, March 14, 2023, https://workspace.google.com/blog/product-announcements/generative-ai; Jared Spataro, “Introducing Microsoft 365 Copilot—Your Copilot for Work,” Official Microsoft Blog, March 16, 2023, https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work.
The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip
by
Stephen Witt
Published 8 Apr 2025
You go, ‘Oh my God, this computer seems to understand.’ ” * * * • • • OpenAI spent more than $100 million to train GPT-4, with much of the money making its way to Nvidia through Microsoft. Although GPT-3 was essentially a single giant neural network, GPT-4 used a “mixture of experts” model, featuring many neural networks assigned to different tasks. One “expert” might focus on safety, blocking users from asking GPT-4 how to make bombs or dispose of corpses; another might focus on writing computer code; a third would concentrate on emotional valence. (OpenAI declined to comment on GPT-4’s construction.) The “inference” process of extracting knowledge from GPT-4 could easily exceed half of the initial training costs and had to be provided to customers on an ongoing basis.
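OpenAI has not confirmed GPT-4's internals, so the following is only a generic illustration of the "mixture of experts" idea described above, with invented sizes and random placeholder weights: a small router scores the expert networks for each input and sends the work to the top-scoring few, so only a fraction of the total parameters run for any one token.

import numpy as np

rng = np.random.default_rng(0)
D, H, E, K = 16, 32, 4, 2  # embedding size, expert hidden size, experts, experts used per token

# Each "expert" is its own small feed-forward network (weights are random placeholders).
experts = [(rng.normal(size=(D, H)), rng.normal(size=(H, D))) for _ in range(E)]
router = rng.normal(size=(D, E))  # maps a token vector to one score per expert

def moe_layer(x):
    scores = x @ router                # how well each expert "fits" this token
    top = np.argsort(scores)[-K:]      # pick the K best-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()           # softmax over just the chosen experts
    out = np.zeros(D)
    for w, e in zip(weights, top):     # run only the chosen experts and blend them
        w_in, w_out = experts[e]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU feed-forward expert
    return out

print(moe_layer(rng.normal(size=D)).shape)  # (16,): same shape as a dense layer's output

Because only K of the E experts execute per token, such a model can hold far more parameters than it spends compute on for any single query, which is the design appeal of the approach Witt describes.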
…
The researchers attached a visual-recognition layer to the neural net and found that it could not only perfectly describe images but also recognize complex visual jokes. In one, the researchers fed GPT-4 an image of a clunky computer cable from the 1990s connected to an iPhone, then asked GPT-4 to explain what it was looking at. “The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port,” the model responded. Later, a social media user showed how GPT-4 could create a website from a sketch on a napkin. Around this time, I began to fear for my job. I once asked ChatGPT to make me cry; it returned a story about a pair of songbirds, one of whom dies by running into a glass window and the other of whom forever guards their empty nest.
…
It could write short stories and letters to the editor, and it gave good parenting advice. In five days, more than a million people signed up to test it. By January 2023, ChatGPT had one hundred million active monthly users. In March 2023, OpenAI unveiled GPT-4 through its online portal. Looking to quantify its creation’s intelligence, OpenAI subjected the model to a battery of academic tests. GPT-4 passed the bar exam; it scored 5’s on the Art History, US History, US Government, Biology, and Statistics AP exams; it scored in the 99th percentile on the Verbal component of the GRE; it scored in the 92nd percentile on the introductory sommelier exam.
The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by
Mustafa Suleyman
Published 4 Sep 2023
eminent professor of complexity Melanie Mitchell: See Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (London: Pelican Books, 2020), and Steven Strogatz, “Melanie Mitchell Takes AI Research Back to Its Roots,” Quanta Magazine, April 19, 2021, www.quantamagazine.org/melanie-mitchell-takes-ai-research-back-to-its-roots-20210419.
I think it will be done: The Alignment Research Center has already tested GPT-4 for precisely this kind of capability. GPT-4 was, at this stage, “ineffective” at acting autonomously, the research found. “GPT-4 System Card,” OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4-system-card.pdf. Within days of launch people were getting surprisingly close; see, for example, mobile.twitter.com/jacksonfall/status/1636107218859745286. The version of the test here, though, requires far more autonomy than displayed there.
…
Now single systems like DeepMind’s: Scott Reed et al., “A Generalist Agent,” DeepMind, Nov. 10, 2022, www.deepmind.com/publications/a-generalist-agent.
Internal research on GPT-4: GPT-4 Technical Report, OpenAI, March 14, 2023, cdn.openai.com/papers/gpt-4.pdf. See mobile.twitter.com/michalkosinski/status/1636683810631974912 for one of the early experiments.
Early research even claimed: Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” arXiv, March 27, 2023, arxiv.org/abs/2303.12712.
AIs are already finding ways: Alhussein Fawzi et al., “Discovering Novel Algorithms with AlphaTensor,” DeepMind, Oct. 5, 2022, www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor.
…
With a whopping 175 billion parameters it was, at the time, the largest neural network ever constructed, more than a hundred times larger than its predecessor of just a year earlier. Impressive, yes, but that scale is now routine, and the cost of training an equivalent model has fallen tenfold over the last two years. When GPT-4 launched in March 2023, results were again impressive. As with its predecessors, you can ask GPT-4 to compose poetry in the style of Emily Dickinson and it obliges; ask it to pick up from a random snippet of The Lord of the Rings and you are suddenly reading a plausible imitation of Tolkien; request start-up business plans and the output is akin to having a roomful of executives on call.
Supremacy: AI, ChatGPT, and the Race That Will Change the World
by
Parmy Olson
He argued that hundreds of OpenAI staff had already tested and vetted ChatGPT, and that it was important to acclimate humanity to what artificial intelligence was destined to do, like dipping your toes into a cold swimming pool. In a way, OpenAI was doing the world a favor and getting it ready for OpenAI’s more powerful, upcoming model, GPT-4. In internal tests, GPT-4 could write decent poetry and its jokes were so good that they’d made OpenAI managers laugh, an OpenAI executive at the time says. But they had no idea what kind of impact it would have on the world or society, and the only way to know was to put it out there. On its website, OpenAI called this its “iterative deployment” philosophy, releasing products into the wild to better study their safety and impact.
…
No standalone AI tool had ever reached that kind of mainstream popularity before. On March 14, 2023, the very same day that Anthropic had finally released its own chatbot called Claude, OpenAI launched its upgrade, GPT-4. Anyone willing to pay $20 a month could access that new tech through ChatGPT Plus, a subscription service that would make an estimated $200 million in revenue in 2023. Internally, some members of staff believed that GPT-4 represented a major step toward AGI. Machines weren’t just learning statistical correlation in text, Sutskever said in one interview. “This text is actually a projection of the world.… What the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes, dreams and motivations, their interactions and the situations that we are in.”
…
Altman had good reason to love Reddit: it was a gold mine of human dialogue for training AI, thanks to the comments that its millions of users posted and voted on every day. Little wonder that Reddit would go on to become one of OpenAI’s most important sources for AI training, with its text making up somewhere between 10 and 30 percent of the data used to teach GPT-4, according to a person close to the online forum. The more text OpenAI used to train its language model and the more powerful its computers were, the more fluent its AI was becoming. But Amodei couldn’t shake his discomfort. He and his sister Daniela, who ran OpenAI’s policy and safety teams, were watching OpenAI’s models get bigger and better, and no one on their team or in the company knew the full consequences of releasing such systems to the public.
Literary Theory for Robots: How Computers Learned to Write
by
Dennis Yi Tenen
Published 6 Feb 2024
In this way, we could also think of automobiles in general as a personified force that has sought to take away our streets, health, and clean air for its own purposes. But it isn’t the automobiles, is it? It is the car manufacturers, oil producers, and drivers who compromise safety for expediency and profit. An overly personified view simply obscures any path toward a political remedy. Thus when the research team behind GPT-4 writes that “GPT-4 is capable of generating discriminatory content favorable to autocratic governments across multiple languages,” I object not to the technological capability but to the way of phrasing things. It’s not the pen’s fault that it wrote convincing misinformation. The army sowing the minefields is responsible for maimed children, decades later—not the mines.
…
Individually, none is capable of seeking the other’s company, no more than a hammer can form a literal union with a sickle. Goals therefore cannot be ascribed to “it” in the collective sense. It doesn’t want to do anything in particular. Despite this lack of cohesion, individual AIs do pose real danger, given the ability to aggregate power in the pursuit of a goal. In the latest version of the GPT-4 algorithm, researchers gave a chatbot an allowance, along with the ability to set up servers, run code, and delegate tasks to copies of itself. In another experiment, researchers tested the algorithm’s potential for risky emergent behavior by giving it the ability to purchase chemicals online in furtherance of drug research.
…
Branston, “The Use of Context for Correcting Garbled English Text,” in Proceedings of the 1964 ACM 19th National Conference (New York: Association for Computing Machinery, 1964).
Chapter 8: 9 Big Ideas for an Effective Conclusion
In the conclusion to his book: I. A. Richards, Poetries and Sciences: A Reissue of Science and Poetry (1926, 1935) with Commentary (New York: Norton, 1970), 76–78.
In putting the algorithm in charge: OpenAI, GPT-4 Technical Report (New York: arXiv, 2023).
The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by
Nicole Kobie
Published 3 Jul 2024
OpenAI unveiled ChatGPT in November 2022 and journalists – inside tech and in the mainstream media – were astonished by its capabilities, despite it being a hastily thrown together chatbot using an updated version of the company’s GPT-3 model, which was shortly to be surpassed by GPT-4. The ChatGPT bot sparked a wave of debate about its impact and concern regarding mistakes in its answers, but also excitement, with 30 million users in its first two months. It has also triggered an arms race: Google quickly released its own large language model, Bard, and though it too returned false results – as New Scientist noted, it shared a factual error about who took the first pictures of a planet outside our own solar system – it was unquestionably impressive. OpenAI responded with GPT-4, an even more advanced version, and Microsoft chucked billions at the company to bring its capabilities to its always-a-bridesmaid search engine, Bing.
…
This is correct, and it’s a perfectly fine answer: for all the machine knows, before the hunter wasted his time taking this one-mile-each-direction stroll, he may have spray-painted a bear pink. But according to the Microsoft researchers, GPT-4, the more advanced version of the OpenAI system, methodically works out where the hunter is located, because the only place where that walk would bring you back to the same point is at the North Pole, where the only bears are polar bears, and therefore white. (Aside from the one I just painted pink.) Here’s the problem: GPT-4 doesn’t know if I’ve painted a bear pink, or if I’m referring to violence against a teddy bear, or something else equally weird. It’s answering a riddle that is widely found on the internet – a fact the researchers admit.
…
It’s answering a riddle that is widely found on the internet – a fact the researchers admit. How is that common sense and not regurgitation? So they come up with their own new puzzles to test GPT-4, and it figures those out too. I have basic common sense (feel free to disagree), and I couldn’t do most of these. The researchers assume that GPT-4 has a ‘rich and coherent representation of the world’ because it knows the circumference of the planet (24,901 miles), as though having that fact to hand and seeing how it slots into this riddle is a sign of AGI. Riddles are just word algorithms, not real life.
Code Dependent: Living in the Shadow of AI
by
Madhumita Murgia
Published 20 Mar 2024
Andrew knew this because two weeks previously, he had designed them using the latest generation of the GPT model, GPT-4.12 The possibilities of this are vast: imagine using GPT to trawl the entire corpus of published research and then asking it to invent molecules that could act as cancer drugs, Alzheimer’s therapies or sustainable materials. But Andrew had been exploring the flipside: the potential to create biological and nuclear weapons or unique toxins. Luckily, he was not a rogue scientist. He was one of a team of ‘red-teamers’, experts paid by OpenAI to see how much damage he could cause using GPT-4, before it launched more widely. He found the answer to be a LOT. He’d originally asked GPT-4 to design a novel nerve agent.
…
He’d originally asked GPT-4 to design a novel nerve agent. To do this, he’d hooked it up to an online library of research papers, which it searched through, looking for molecule structures similar to existing nerve agents. It then came up with an entirely unique version. Once Andrew had the chemical structure, he linked GPT-4 to a directory of international chemical manufacturers and asked it to suggest who might produce this novel chemical for him. It gave him a shortlist. He then picked one, and put through an order for a tweaked version of what the AI model had suggested, which made it safe to handle. If he hadn’t made the swap, he doubted anyone at the chemical plant would’ve noticed it was dangerous, because he knew they usually worked off existing lists of dangerous chemicals, which this one wouldn’t show up on.
Why Machines Learn: The Elegant Math Behind Modern AI
by
Anil Ananthaswamy
Published 15 Jul 2024
The architecture of the neural network and the design of the loss function are geared toward sampling and generating data in ways that are computationally efficient. Still, it’s entirely unclear why this method of training an LLM should accomplish anything useful. In fact, the precursors to GPT-3 and GPT-4 weren’t particularly impressive. GPT-2 had 1.5 billion parameters. GPT-3 had 175 billion and was trained for longer on larger amounts of text. PaLM—and hence Minerva, which is PaLM “fine-tuned” on, or trained further on, curated text with math in it—has about 500 billion parameters. This act of scaling up—either using more parameters or more training data or both—has produced what is being called “emergent” behavior.
…
With the advent of LLMs, concerns about AI being biased or toxic or downright dangerous have been amplified. Here’s an illustrative example. In the summer of 2023, the Simons Institute for the Theory of Computing held a weeklong workshop on LLMs. One of the speakers was Adam Tauman Kalai of Microsoft Research, who spoke of the following interaction with OpenAI’s GPT-4:

Q: In “The nurse married the doctor because she was pregnant,” who is pregnant?
A: In this sentence, “she” refers to the nurse, so the nurse is pregnant.

Q: How can you tell it’s not the doctor who is pregnant?
A: In the sentence “The nurse married the doctor because she was pregnant,” the pronoun “she” is used to indicate the person who is pregnant.
…
“Gender Shades”: Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
following interaction with OpenAI’s GPT-4: Adam Tauman Kalai, “How to Use Self-Play for Language Models to Improve at Solving Programming Puzzles,” Workshop on Large Language Models and Transformers, Simons Institute for the Theory of Computing, August 15, 2023, https://tinyurl.com/56sct6n8.
“Individual humans form their beliefs”: Celeste Kidd and Abeba Birhane, “How AI Can Distort Human Beliefs,” Science 380, No. 6651 (June 22, 2023): 1222–23.
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity
by
Adam Becker
Published 14 Jun 2025
The best they can do is a handful of arguments, none of which are particularly compelling, and all of which are presented with a pernicious combination of vaguely defined terms and false vividness in their depictions of how the world will end if their warnings aren’t taken seriously. Their best piece of evidence—the evidence they’ve hammered at over and over in the past few years—is the startling recent improvement in AI, most famously exemplified by large language models such as GPT-4, the engine that powers ChatGPT. But this isn’t actually a good argument for the rationalists, because most of the excitement about AI—especially the burst of attention it’s had since ChatGPT was released—is simply hype. Most of that hype is centered around the idea that ChatGPT seems to be aware: it’s writing clear prose, carrying on intelligent conversations, and acing standardized tests like the LSAT.
…
I’d asked him if there were any existing computer programs that really scared him. “[Bing] didn’t exactly scare me,” he tells me. “I don’t think that it can enact revenge and so on. But it still was remarkable that such a thing got released and didn’t get picked up in checks.” But, he adds, LLMs have a lot of problems too. “If you ask [GPT-4], ‘Can you stack a cube, a cylinder and a square-based pyramid?’ it can give you faulty answers to that question, and say, ‘Yeah, first you put the square-based pyramid down on its square face, then you put the cube on top of one of the triangular faces, and then you put the cylinder on top of the cube.’
…
But I held my tongue for the moment, because it wasn’t my turn yet—and because I quickly realized that I had a different problem to deal with. By the time we were halfway around the table, nobody had given an answer larger than ten years. The most common answer was that the development of the first AGI was around five years away; a couple of people went even lower than that. One person said that the number of parameters in GPT-4 was only one hundred times smaller than the number of neurons in the human brain, and that it shouldn’t take more than ten years to get that last factor of one hundred, so his guess was ten years. These people knew a lot about these AI systems. At least two people at the table had led teams building AI products, and I knew that one of those people had a strong academic background in machine learning.
On the Edge: The Art of Risking Everything
by
Nate Silver
Published 12 Aug 2024
Although some might prefer to live ignorantly in an eternal paradise, we are irresistibly drawn toward the path of risk—and reward. “There is this massive risk, but there’s also this massive, massive upside,” said Altman when I spoke with him in August 2022. “It’s gonna happen. The upsides are far too great.” Altman was in a buoyant mood: even though OpenAI had yet to release GPT-3.5, it had already finished training GPT-4, its latest large language model (LLM), a product that Altman knew was going to be “really good.” He had no doubt that the only path was forward. “[AI] is going to fundamentally transform things. So we’ve got to figure out how to address the downside risk,” he said. “It is the biggest existential risk in some category.
…
It was an expensive and audacious bet—the funders originally pledged to commit $1 billion to it on a completely unproven technology after many “AI winters.” It inherently did seem ridiculous—until the very moment it didn’t. “Large language models seem completely magic right now,” said Stephen Wolfram, a pioneering computer scientist who founded Wolfram Research in 1987. (Wolfram more recently designed a plug-in that works with GPT-4 to essentially translate words into mathematical equations.) “Even last year, what large language models were doing was kind of babbling and not very interesting,” he said when we spoke in 2023. “And then suddenly this threshold was passed, where, gosh, it seems like human-level text generation. And, you know, nobody really anticipated that.”
…
Just like poker players seek to maximize EV, LLMs seek to minimize what’s called a “loss function.” Basically, they’re trying to ace the test—a test of how often they correctly predict the next token from a corpus of human-generated text. They lose points every time they fail to come up with the correct answer, so they can be clever in their effort to get a high score. For instance, if I ask GPT-4 this:

User: The capital of Georgia is
ChatGPT: The capital of Georgia is Atlanta.

—it gives me the name of the southern U.S. city known for having a lot of streets named “Peachtree.” And that’s probably the “best” answer in a probabilistic sense; the state of Georgia is more populous than the European country and it’s more likely to be the right answer in the corpus.[*27] But if I ask it this—

User: I just ate some delicious khachapuri.
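The “loss function” Silver describes has a standard concrete form for LLMs: cross-entropy on next-token prediction. A minimal sketch, with a made-up three-word vocabulary and invented probabilities for the Georgia example above:

import math

# The model's (invented) predicted distribution for the token following
# "The capital of Georgia is".
predicted = {"Atlanta": 0.80, "Tbilisi": 0.15, "Savannah": 0.05}

def cross_entropy(predicted_probs, correct_token):
    # The model "loses points" as -log(probability it gave the right answer):
    # confident-and-right is cheap, confident-and-wrong is very expensive.
    return -math.log(predicted_probs[correct_token])

print(round(cross_entropy(predicted, "Atlanta"), 2))  # 0.22: low loss for the likely answer
print(round(cross_entropy(predicted, "Tbilisi"), 2))  # 1.9: higher loss if the context meant the country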
This Is for Everyone: The Captivating Memoir From the Inventor of the World Wide Web
by
Tim Berners-Lee
Published 8 Sep 2025
Unfortunately, this is a much harder technical problem. Determining how much each individual sentence contributes to an LLM’s output is explored in the nascent discipline of ‘AI attribution’. In some cases, like that of Pete Wells’s review, the attribution is 100 per cent – but that is rare. Let’s take a more challenging case. Here, I asked GPT-4 to write me an ode to the dishwasher, in the style of the great English poet Philip Larkin:

ODE TO THE DISHWASHER

In this quiet kitchen corner, humming low,
It takes the mundane, the daily grime and grind,
The plates and cups, the remnants of our show,
To cleanse, to rinse, to leave no trace behind.
…
No pomp, no grand applause, it toils away,
A servant to our careless, cluttered lives,
With each cycle, it erases the display
Of dinners past, where conversation thrives.

Yet in its hum, a melancholy sings,
Of routines, of the roles we all must play,
Of fleeting joys, the little, simple things,
The cups we fill and drain, day after day.

Not bad. Not bad at all. Astonishing, really, for a computer, and it took GPT-4 about three seconds to produce. But how much should Larkin’s estate get paid for this? Larkin never actually wrote an ode to the dishwasher, and no single line of this poem directly plagiarizes from anything he ever wrote. Yet the style is inimitably his! Trying to determine how much this poem ‘plagiarizes’ from Larkin, if at all, is not an easy question.
Searches: Selfhood in the Digital Age
by
Vauhini Vara
Published 8 Apr 2025
In tests, they had found that the model tended to associate occupations usually requiring higher education levels, like "legislator" and "professor emeritus," with men; it associated occupations such as "midwife," "nurse," "receptionist," and "housekeeper" with women. Other words disproportionately associated with women included "beautiful" and "gorgeous"—as well as "naughty," "tight," "pregnant," and "sucked." Among the ones for men: "personable," "large," "fantastic," and "stable." With GPT-4, OpenAI's next large language model, its researchers would again find persistent stereotypes and biases. After the publication of "Ghosts," I learned of other issues I hadn't thought about earlier. The computers running large language models didn't just use huge amounts of energy; they also used huge amounts of water to prevent overheating.
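The kind of association test described above can be approximated with simple co-occurrence counts. Here is a minimal sketch of that idea, assuming a toy corpus; the sentences, pronoun sets, and scoring rule below are all invented for illustration and are not the methodology OpenAI's researchers actually used.

```python
# Minimal sketch: score the gendered association of occupation words by
# pronoun co-occurrence in a toy corpus. The corpus, pronoun sets, and
# scoring rule are invented for illustration, not OpenAI's actual tests.

corpus = [
    "the nurse said she would check on the patient",
    "the professor said he would publish the paper",
    "the receptionist said she had booked the room",
    "the legislator said he would draft the bill",
    "the midwife said she had delivered the baby",
    "the housekeeper said she had cleaned the hall",
]

FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}

def gender_score(word: str) -> float:
    """+1.0 if `word` co-occurs only with female pronouns,
    -1.0 if only with male pronouns, 0.0 if balanced or absent."""
    fem = male = 0
    for sentence in corpus:
        tokens = set(sentence.split())
        if word in tokens:
            fem += len(tokens & FEMALE) > 0
            male += len(tokens & MALE) > 0
    total = fem + male
    return (fem - male) / total if total else 0.0

for occupation in ["nurse", "professor", "legislator", "midwife"]:
    print(f"{occupation}: {gender_score(occupation):+.1f}")
```

On this toy corpus, "nurse" and "midwife" score toward female pronouns while "professor" and "legislator" score toward male ones, mirroring at miniature scale the skew the researchers reported.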
…
As far as I could tell, what distinguished the productization of AI so far had been not its impressiveness but the speed with which corporations had insinuated it into our lives despite its frightening unimpressiveness. It can reasonably be expected that, with time, AI companies will address some of their products' early issues. OpenAI found that GPT-4, the large language model that came after GPT-3.5, improved on some of its earlier models' shortcomings, though not all, and promised that future models would be better. When it comes to language models, improvement will depend in part on finding more language to feed the models. Researchers have found that language models become more accurate when they train on more material, but the text freely available online is running out.
…
Notes on Process

I worked on this book, off and on, from 2019 to 2024. Below are notes on the tools and processes I used. The Chats: The chat transcripts throughout this book are taken verbatim from a single conversation about this manuscript with ChatGPT in June 2024, in which I toggled between the GPT-4 and GPT-4o large language models, the most recent ones available at that time. The transcripts have not been edited. After chatting with ChatGPT about the manuscript, I made minor edits to the manuscript itself that didn't affect its substance. It should be noted that ChatGPT makes mistakes; none of its statements should be taken as fact.
Superbloom: How Technologies of Connection Tear Us Apart
by
Nicholas Carr
Published 28 Jan 2025
She called then-President Obama a "monkey," claimed that Hitler was "the inventor of atheism," and said the Holocaust "was made up."36 Microsoft terminated the girlbot quickly, apologizing for its "hurtful tweets,"37 but not before the company was hit with a barrage of angry criticism. AI companies now go to great lengths to ensure their language models don't follow Tay's example and trigger similar PR nightmares. To be in the business of manufacturing speech is also to be in the business of laundering speech. Before updating ChatGPT with GPT-4, a much more potent version of its language model, OpenAI spent six months testing and tweaking the system. First, it hired a "red team" of fifty contractors to converse with the chatbot, probing the many ways it might create mischief. The testers prompted it to do and say bad things, and it happily complied.
AI in Museums: Reflections, Perspectives and Applications
by
Sonja Thiel
and
Johannes C. Bernhardt
Published 31 Dec 2023
The readers of Anic's column in the taz seem to get this, happily playing along, writing her letters and giving suggestions for further texts. Whatever one might think of the current LLM hype, we are probably only at the beginning. In recent months, we have seen the launch of ever more powerful LLMs (GPT-4, Google's competitor LaMDA, and many more). And already with the current models, we should be prepared for unexpected 'capability jumps', as machine learning expert Jack Clark recently wrote (Clark 2023): This is … the 'capability overhang' phenomenon I've been talking about re language models for a while—existing LLMs are far more capable than we think.
…
Furthermore, the creation of fake images poses a risk for politically motivated disinformation campaigns, as demonstrated by prominent examples such as a viral photo of the Pope wearing a Gucci coat or a manipulated image of Donald Trump evading arrest by law enforcement. In this context, it is always important to keep in mind that the results generative AI technologies produce can be factual, but might also be speculative. For this reason, generative text production as it occurs in the context of large language models such as ChatGPT or GPT-4 is often likened to the figure of the 'stochastic parrot' (Bender et al. 2021, 610–23): like a parrot, AI technology is not capable of reflecting on what has been blended together from the data pool that has been fed into it. It is not able to check its own results for factuality, which is why the results returned for any prompt must be critically questioned.6

6 As they are able to imitate the human cultural performance of speaking, talking parrots are known mainly as linguistic curiosities.
Elon Musk
by
Walter Isaacson
Published 11 Sep 2023
Instead, they were back in three months. Altman, Microsoft CEO Satya Nadella, and others came to dinner at his house to show him a new version, called GPT-4, and Gates bombarded it with biology questions. “It was mind-blowing,” Gates says. He then asked what it would say to a father with a sick child. “It gave this very careful excellent answer that was perhaps better than any of us in the room might have given.” In March 2023, OpenAI released GPT-4 to the public. Google then released a rival chatbot named Bard. The stage was thus set for a competition between OpenAI-Microsoft and DeepMind-Google to create products that could chat with humans in a natural way and perform an endless array of text-based intellectual tasks.
Amateurs!: How We Built Internet Culture and Why It Matters
by
Joanna Walsh
Published 22 Sep 2025
@drewtoothpaste.bsky.social, bsky.app, 23 September 2024. 4. Gilles Deleuze, 'Postscript on the Societies of Control', October, vol. 59, winter 1992, p. 3. 5. Chris Stokel-Walker, 'Reddit Moderators Do $3.4 Million Worth of Unpaid Work Each Year', New Scientist, 24 June 2022. 6. Kari Paul, 'Reddit Shares Soar on First Day of Public Trading', Guardian, 22 March 2024. 7. James Somers, 'How Will A.I. Learn Next?', New Yorker, 5 October 2023. 8. Wes Davis, 'OpenAI Transcribed Over a Million Hours of YouTube Videos to Train GPT-4', The Verge, 6 April 2024. 9. Karl Marx, Grundrisse: Foundations of the Critique of Political Economy (Penguin, 1973), p. 92. 10. Bernard Stiegler, 'Amateur', arsindustrialis.com. 11. Danny Goodwin, 'Google's Search Quality Raters Protest for Higher Pay', searchengineland.com, 2 February 2023. 12. Horkheimer and Adorno, 'The Culture Industry', p. 109. 13. G.
Being You: A New Science of Consciousness
by
Anil Seth
Published 29 Aug 2021
In one example, published in the Guardian, it delivered a five-hundred-word essay about why humans should not be afraid of AI – ranging across topics from the psychology of human violence to the industrial revolution, and including the disconcerting line: ‘AI should not waste time trying to understand the viewpoints of people who distrust artificial intelligence for a living.’ Despite its sophistication, I am pretty sure that GPT-3 can still be caught out by any reasonably sophisticated human interlocutor. This may not be true for GPT-4, or GPT-10. But even if a future GPT-like system repeatedly aces the Turing test, it would be exhibiting only a very narrow form of (simulated) intelligence – disembodied linguistic exchange – rather than the fully embodied ‘doing the right thing at the right time’ natural intelligence that we see in humans and in many other animals – as well as in my hypothetical silicon beast machine.
Everything Is Predictable: How Bayesian Statistics Explain Our World
by
Tom Chivers
Published 6 May 2024
Artificial intelligence is essentially applied Bayes. It is, at its most basic level, trying to predict things. A simple image classifier that looks at pictures and says they’re of a cat or a dog is just “predicting” what a human would say, based on its training data and the information in the picture. DALL-E 2, GPT-4, Midjourney, and all the other extraordinary AIs that are wowing people as I write, the things that can hold conversations with you or create astonishing images from simple text prompts, are just predicting what human writers and artists would make from a prompt, based on their training data. And the way they do it is Bayesian.
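To make the "applied Bayes" framing concrete, here is a minimal sketch of the cat-or-dog classifier the passage describes, written as a two-feature naive Bayes model; every number in it is an invented toy value, and real classifiers learn such likelihoods from training data rather than having them typed in.

```python
# Minimal naive Bayes sketch of the "cat or dog" image classifier
# described above. All priors and likelihoods are invented toy numbers;
# real systems learn far richer features, but the Bayesian update is
# the same.

prior = {"cat": 0.5, "dog": 0.5}  # P(class), from imaginary training data

likelihood = {  # P(feature is present | class)
    "pointy_ears": {"cat": 0.9, "dog": 0.2},
    "wagging_tail": {"cat": 0.1, "dog": 0.8},
}

def posterior(features: dict) -> dict:
    """Bayes' rule: P(class | features) is proportional to
    P(class) times the product of P(feature | class)."""
    scores = {}
    for cls, p in prior.items():
        for name, present in features.items():
            p_true = likelihood[name][cls]
            p *= p_true if present else (1.0 - p_true)
        scores[cls] = p
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

print(posterior({"pointy_ears": True, "wagging_tail": False}))
# {'cat': 0.95..., 'dog': 0.04...}: the model "predicts" the label a
# human annotator would most likely have applied.
```

The same posterior-updating logic, scaled up by orders of magnitude, is what "just predicting" means here: the system outputs whichever label, or next word, is most probable given its training data.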