by William Langewiesche · 1 Jan 2002 · 221pp · 70,413 words
by Karen Hao · 19 May 2025 · 660pp · 179,531 words
: him and Sutskever, arms around each other, smiling widely. In the office, the company’s artist in residence would hook up OpenAI’s image generator DALL-E to a color printer to create tiny kaleidoscopic heart-shaped stickers. Next to the printer would be a giant pink heart emblazoned with the line
…
all-in approach to deep learning would lead it to fall short of true AI advancements. A month later, OpenAI released DALL-E 2 to immense fanfare, and Brockman cheekily tweeted a DALL-E 2–generated image using the prompt “deep learning hitting a wall.” The following day, Altman followed with another tweet: “Give
…
models amplify discriminatory and hateful content. Bloomberg, Rest of World, The Washington Post, and many others have shown how image generators like Stable Diffusion and DALL-E reify and regurgitate racist and sexist tropes and cultural stereotypes. “Attractive people” are young and white. “Housekeepers” are Black and brown. “Engineers” are men. “Doctors
…
directly into the hands of consumers. The model even had an eye-catching name from the original researchers who’d developed it in the company: DALL-E 2, a play off the Spanish surrealist artist Salvador Dalí and the titular robot in the Disney Pixar movie WALL-E
…
. DALL-E had spun out of a trend in the broader field of AI research to develop multimodal models—models that combine at least two different “modalities,”
…
first, called CLIP, developed once again by Alec Radford, used the original Transformer and Vision Transformer together to match images with detailed text captions. The second, DALL-E 1, from Aditya Ramesh, a researcher who had studied at New York University and for a time under Meta’s Yann LeCun, trained a twelve-billion-parameter Transformer to accept text and generate novel images. In a blog post, OpenAI highlighted DALL-E 1’s capabilities with a series of playful prompts, including “an avocado armchair,” which produced various green and brown armchairs aesthetically inspired by avocados. The
…
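The CLIP setup described in the excerpt above — a text Transformer and a Vision Transformer trained so that images and their captions land near each other in a shared embedding space — can be sketched in miniature. Everything below is illustrative: the vectors are hand-picked stand-ins for real learned embeddings, not anything from OpenAI's actual model.

```python
import numpy as np

# Toy CLIP-style matching. A real CLIP embeds images and captions with
# two Transformers; here we fake the embeddings with tiny made-up vectors
# and keep only the matching step: cosine similarity in a shared space.

def normalize(v):
    # Unit-normalize each row so dot products become cosine similarities.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend embeddings for two "images" and three candidate captions.
image_embeds = normalize(np.array([
    [0.9, 0.1, 0.0],   # photo of a dog
    [0.0, 0.2, 0.9],   # photo of an armchair
]))
caption_embeds = normalize(np.array([
    [1.0, 0.0, 0.1],   # "a photo of a dog"
    [0.1, 1.0, 0.0],   # "a photo of a cat"
    [0.0, 0.1, 1.0],   # "a photo of an armchair"
]))

# Score every image against every caption in one matrix product.
similarity = image_embeds @ caption_embeds.T

# The best caption for each image is the highest-scoring column.
best = similarity.argmax(axis=1)
print(best)  # image 0 -> caption 0 (dog), image 1 -> caption 2 (armchair)
```

A real CLIP learns these embeddings contrastively over hundreds of millions of image-caption pairs; the matching step itself, though, is essentially this normalized dot product.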
compressed 250 million images to feed them into the Transformer, losing some of their high-resolution details in the process. As the team started on DALL-E 2, a new method for generating images was gaining traction. Known as diffusion, it was a technique inspired by physics that made it possible for
…
patterns within their training data at a deep enough level to perform a broader range of tasks in visual processing. OpenAI changed tack to building DALL-E 2 with diffusion and Radford’s CLIP. Ramesh and other researchers gradually scaled up the model and added the ability to inpaint—allowing a user
…
. Using diffusion created much sharper and more photorealistic images; the method also significantly reduced the amount of compute needed to achieve the same performance as DALL-E 1. Researchers outside of OpenAI would shrink the compute intensity of diffusion models even further. Stable Diffusion, the popular open-source image generator, would require
…
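The diffusion technique the excerpts above describe can be sketched with toy numbers. This shows only the forward (noising) direction, with a made-up schedule; the actual trick these books allude to is training a network to run the process in reverse, step by step, from pure noise back to an image.

```python
import numpy as np

# Minimal sketch of the diffusion forward process (illustrative only,
# not OpenAI's training code): an image is gradually corrupted with
# Gaussian noise over T steps; a model is later trained to reverse it.

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)    # per-step noise amounts (a common toy schedule)
alpha_bar = np.cumprod(1.0 - betas)   # how much of the original signal survives to step t

def noise_image(x0, t):
    """Jump straight to step t: x_t = sqrt(a)*x0 + sqrt(1-a)*eps."""
    eps = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

x0 = rng.standard_normal((8, 8))      # stand-in for an image
early = noise_image(x0, 10)           # still mostly signal
late = noise_image(x0, 999)           # almost pure noise

# Early steps keep most of the signal; by the last step almost none is left.
print(round(float(alpha_bar[10]), 3), float(alpha_bar[999]))
```

The compute savings mentioned above come partly from this closed-form jump: training never has to simulate all T steps one by one, it can sample any step t directly.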
big tech companies would have the required resources to run and to host those models?” OpenAI wouldn’t adopt latent diffusion until much later, leaving DALL-E 2 and 3 much more computationally expensive than Stable Diffusion or Midjourney, which many users deemed the higher-quality products. It was just one example
…
, even within the narrow realm of generative AI, scale was not the only, or even the highest-performing, path to more expanded AI capabilities. * * * — With DALL-E 2’s remarkable jump in performance, the Applied division began working in late 2021 and early 2022 on different ideas for productization. It settled on
…
demand they noticed from GitHub Copilot that people had for engaging directly with generative AI models. It would also help serve the company’s mission: DALL-E 2 was fun and delightful, a great way to ease people’s fears about powerful AI systems and pave the way for OpenAI to deliver
…
for serving up to users. After the experience of firefighting text-based child sex abuse with AI Dungeon, of particular concern was the possibility of DALL-E 2 being used to manipulate real or create synthetic child sexual abuse material, or CSAM. As with each GPT model, the training data for each subsequent DALL-E model was growing more and more polluted. For DALL-E 2, the research team had signed a licensing deal with stock photo platform Shutterstock and done a massive scrape of Twitter
…
training data, however, meant the model would still be able to produce synthetic CSAM. In the same way DALL-E could generate an avocado armchair having only ever seen avocados and armchairs, DALL-E 2 and DALL-E 3 could combine children and porn to produce child pornography, a capability known as “compositional
…
and safety, Dave Willner, who as an early employee at Facebook had written that platform’s very first content standards. Later, during the development of DALL-E 3, when the data imperative had grown even larger, the research team decided that sexual images were no longer just a “nice to have” but
…
at Microsoft, Shane Jones, would discover the downstream consequences of those decisions. As he played around with Copilot Designer, Microsoft’s image generator built on DALL-E 3, he was horrified by how quickly it spit out offensive and sexualized images with little prompting. Just adding the term “pro-choice” into the
…
release of the AI model last October.” Microsoft did not comment on the latest status or outcome of Jones’s letter. * * * — As the launch of DALL-E 2 drew closer, the fighting between OpenAI’s Applied division and the newly restocked Safety clan returned. For those on Safety, now dispersed across various
…
teams under the Research division, the unprecedented realism of DALL-E 2 brought with it a wide array of unknowns. How could it be weaponized to produce synthetic CSAM or political deepfakes? To manipulate and persuade
…
contact with real users. Just as Safety worried about the limitations of OpenAI’s foresight, Applied believed this was precisely why it needed to release DALL-E 2. Releasing AI models in controlled ways to gain real-world feedback would take away that guesswork and was thus a necessary part of improving
…
Murati played the role of negotiator, smoothing out the fault lines between different factions and searching for ways to thread the needle between them. On DALL-E 2, she struck a compromise: The web app would be released not as a product but as a “low-key research preview.” Such branding would
…
paid users, to buy time for developing more sophisticated filters. The company moved forward with implementing a series of aggressive abuse-prevention mechanisms, including disabling DALL-E 2’s ability to generate any photorealistic faces or edit any real photos with faces to completely circumvent the synthetic CSAM and political misinformation problem
…
. In March 2022, OpenAI released DALL-E 2 via the Labs web app to overwhelming public enthusiasm. As people gushed over and grappled with the model’s capabilities, to a degree that
…
the experience in a podcast. Over the next few months, the Applied division, which hadn’t yet thought much at all about how to monetize DALL-E 2, raced to turn the web app into a paid offering. It worked with artists and creative professionals around the world to incorporate DALL-E 2 into their practice. It rolled out a beta program, inviting one million people around the world to get access to the model with free
…
Diffusion. Both image generators were free to use and just as good as, if not better than, DALL-E 2, and had fewer safety measures, including allowing users to generate and edit faces, even of politicians. As DALL-E 2 rapidly lost traction in the market, the experience left Applied with a nagging sense that
…
the effort carefully. ChatGPT—the name they settled on—would not in fact be a product launch but a “low-key research preview,” just like DALL-E 2. In the same way, it wouldn’t be monetized but “get the data flywheel going”—in other words, amass more data from people using
…
one truly fathomed the societal phase shift they were about to unleash. They expected the chatbot to be a flash in the pan. Much like DALL-E 2, it would generate a lot of fanfare on social media and then quiet down after a few weeks. The night before the release, things
…
effort left some senior Microsoft executives disappointed. There was also a new awkward reality: OpenAI and Microsoft were beginning to compete for contracts. Codex and DALL-E 2 had convinced OpenAI to retain control of delivering its technologies directly to users. ChatGPT and GPT-4 were showing that OpenAI could also make
…
order to fulfill its mission. It was the Boomers and Doomers incarnate—within OpenAI’s walls. The split reached all the way to leadership. After DALL-E and ChatGPT, most executives and senior managers had grown increasingly comfortable with models as beneficial tools to be put into the world through “iterative deployment
…
, 2021, 1–48, doi.org/10.48550/arXiv.2103.00020. The second, DALL-E 1: OpenAI, “DALL·E: Creating Images from Text,” OpenAI (blog), January 5, 2021, openai.com/index/dall-e. The original idea: Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan
…
, 2023, quantamagazine.org/the-physics-principle-that-inspired-modern-ai-art-20230105. OpenAI changed tack: “DALL·E 2,” OpenAI, accessed September 17, 2024, openai.com/index/dall-e-2. Ramesh and other researchers: Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela
…
.2022.01042. 256 Nvidia A100s: Author interview with Björn Ommer, March 2024. With DALL-E 2’s remarkable: Fraser Kelton and Nabeel Hyatt, hosts, Hallway Chat, podcast, “Launch Stories of ChatGPT,” December 2, 2023, hallwaychat.co/launch-stories-of-chatgpt
…
, 150 Curry, Steph, 231 cybersecurity, 114, 147, 148, 179–80, 380 Cyc, 97 D DAIR (Distributed AI Research Institute), 414–15, 419 Dalí, Salvador, 234 DALL-E, 11, 114, 234–39, 241–42, 258–59, 269 avocado armchair, 235, 237–38 Damon, Matt, 317–18 D’Angelo, Adam, 321 Altman’s firing
…
, 1–2, 8, 357, 364–65, 366 leadership behavior, 345–51, 362, 363–64 background of, 69, 343–44 chief technology officer, 343, 345–46 DALL-E and, 241 departure of, 404, 405–6 hiring of, 69, 344 Johansson and equity crises, 392–93 Microsoft and, 182, 184, 270 Omnicrisis, 396–98
…
Murati at, 69, 344 Test of Time Award, 259, 374 text generation, 112, 113, 121, 124 text-to-image, 176–77, 234–38. See also DALL-E Thiel, Peter Altman and, 26–27, 36, 38–39, 39–42 Founders Fund, 38 founding of OpenAI, 12–13, 50 “monopoly” strategy of, 39–40
by Vauhini Vara · 8 Apr 2025 · 301pp · 105,209 words
AI—specifically, non-white female artists? Best Tool for AI Art One of the most renowned tools for creating AI-generated art is DALL-E by OpenAI. DALL-E, and its successor DALL-E 2, are known for their ability to generate highly detailed and imaginative images from textual descriptions. Another popular tool is DeepArt and
…
individual or cultural differences and potentially reducing the representation of diverse body types and appearances,” they had written in a publication accompanying the release of DALL-E 3, the image-generation model Dana was using. Finally, she tried specifying a race while referencing explicit symbols of financial power: “Show an Asian woman
…
get it, you can laugh, it’s all right,” he said. “But it is what I actually believe is going to happen.” The laughter subsided. DALL-E came out. ChatGPT came out. OpenAI started generating revenue. Altman went on a world tour and met with the leaders of the United Kingdom, India
…
the stories; because it was not edited beyond this, inconsistencies and untruths appear. Chapter 12, “Resurrections”: For this piece, I queried the image-generation tools Dall-E 3, GPT-4o, and Bing Image Creator in June, July, and September 2024. In each of these queries, I wrote, “Please generate an image to
…
.2.565) 9Domestic Data Streamers Synthetic Memories Project 10Federico Bianchi et al., generated using Stable Diffusion XL in 2022 11Dana Mauriello, generated using OpenAI’s Dall-E 3 in 2024 12© Silvano de Gennaro 13Courtesy of Vauhini Vara 14Vauhini Vara 15Vauhini Vara, generated using OpenAI’s GPT-4o in September 2024 16Vauhini
…
-4o in September 2024
25 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
26 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
27 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
28 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
29 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
30 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
31 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
32 Vauhini Vara, generated using OpenAI’s GPT-4o in September 2024
33 Vauhini Vara, generated using Microsoft Image Creator in July 2024
34 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
35 Vauhini Vara, generated using OpenAI’s GPT-4o in September 2024
36 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
37 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
38 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
39 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
40 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
41 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
42 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
43 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
44 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
45 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
46 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
47 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
48 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
49 Vauhini Vara, generated using OpenAI’s Dall-E 3 in July 2024
50 Vauhini
by Parmy Olson · 284pp · 96,087 words
artwork. This diffusion approach, combined with an image labeling tool known as CLIP, became the basis of an exciting new model that the researchers called DALL-E 2. The name was an homage to both WALL-E, the 2008 animated film about a robot that escapes planet Earth, and the surrealist painter Salvador Dalí. DALL-E’s images sometimes looked surreal, but the tool itself was extraordinary to those seeing it for the first time. If you typed in a text
…
just that, many of them uncannily photorealistic. The images were such faithful representations of even the most complicated prompts that within days of its launch, DALL-E 2 was trending on Twitter, with users trying to outdo one another by creating the most outlandish images they could: “a hamster Godzilla in a
…
a staunch believer. The best way to test a product was to set it loose. Over the next few months, OpenAI would gradually roll out DALL-E 2, first to a waitlist of about one million people, just in case the system produced offensive or harmful images. Five months later, in an
…
“Whew, that was fine” verdict that GPT-2 didn’t pose a threat to the world, it threw open the doors for anyone to try DALL-E 2. DALL-E 2 had been trained on millions of images scraped from the public web, but as before, OpenAI was vague about what DALL-E had been trained on. When it successfully conjured images in the style of Picasso, that meant artwork by Picasso had probably been thrown into the
…
fantasy landscapes of fanged, fire-breathing dragons and wizards. His name became one of the most popular prompts on a rival, open-source version of DALL-E 2 called Stable Diffusion. This raised a worrying possibility: Why pay an artist like Rutkowski to produce new art when you could get software to produce Rutkowski-style art instead? People started to notice another issue with DALL-E 2. If you asked it to produce some photorealistic images of CEOs, nearly all of them would be white men. The prompt “nurse” led to
…
characteristically leaned into the controversy, admitting it was a problem, but that OpenAI was working on it. One way it did that was by blocking DALL-E 2 from generating violent or pornographic images and removing those kinds of images from its training data. It also employed human contractors in developing nations
…
the model toward more appropriate answers. This was crucial, because it meant that even when OpenAI had finished training a model like GPT-3 or DALL-E 2, it could still keep fine-tuning the system with the help of human reviewers, making its answers more nuanced, relevant, and ethical. By ranking
…
DALL-E 2’s responses on a scale of good to bad, the humans could guide it toward answers that were better overall. But those reviewers weren’t always consistent in how they scored the system, and weeding out the problem images from DALL-E 2’s training data could also be like a game of whack-a-mole. At first, OpenAI’s researchers tried removing all the overly sexualized images of women they could find in the training set so that DALL-E 2 wouldn’t portray women as sexual objects. But doing that had a price: it cut the number of women in the dataset “by quite
…
’t say by how much. “We had to make adjustments because we don’t want to lobotomize the model. It’s really a tricky thing.” DALL-E 2’s photorealistic faces were its biggest liability when it came to stereotypes, and OpenAI seemed fully aware of the problem. So much so that when an internal group of four hundred people started testing the system—mostly OpenAI and Microsoft employees—OpenAI banned them from publicly sharing any of DALL-E 2’s realistic portraits. Some of OpenAI’s employees worried about the speed at which OpenAI was releasing a tool that could generate fake photos
…
had crossed an important threshold on the path to AGI. “It seems to really understand concepts,” he said in one interview, “which feels like intelligence.” DALL-E 2 was so magical that it could make skeptics of AGI start taking the idea seriously, he added. The magic here wasn’t
…
’s capabilities alone. It was the impact the tool was having on people. “Images have an emotional power,” he said. DALL-E 2 was generating buzz. And unlike GitHub Copilot, which could finish writing code that someone had already started, this was creating content fully formed, from
…
human started typing. But GPT-3 and its latest upgrade, GPT-3.5, created brand-new prose, just like how DALL-E 2 made images from scratch. As the world gawked at DALL-E 2, rumors swirled that rival Anthropic was working on a chatbot, sparking the competitive juices at OpenAI. In early November
…
experiments into a public competition to see who could get ChatGPT to write the funniest, smartest, or weirdest text. It was like the fanfare around DALL-E 2 all over again, but bigger. Over the next few days, people flooded Twitter with screenshots of ChatGPT’s poems, raps, sitcom scenes, and emails
…
-risk” category. But if you used AI to evaluate credit scores or loans and housing, that was “high risk” and subject to strict rules. When DALL-E 2 and ChatGPT exploded on the scene, EU policymakers quickly got to work updating their new law, and ChatGPT appeared to have a lot of
…
, and Microsoft would take on much more risk. Till now, OpenAI had taken all the reputational and legal flak for putting tools like ChatGPT and DALL-E 2 into the world, and as a start-up, it could get away with that. But Microsoft couldn’t, and neither could Altman if he
…
all parts of life. Social media companies have for years refused to disclose how their algorithms worked. Now creators of AI models like GPT-4, DALL-E, and Google’s Gemini were doing the same. How were the models trained? How were people using them? Who were the workers helping to build
…
-generated Art. And He’s Not Happy About It.” MIT Technology Review, September 16, 2022. “Introducing ChatGPT.” www.openai.com, November 30, 2022. Johnson, Khari. “DALL-E 2 Creates Incredible Images—and Biased Ones You Don’t See.” Wired, May 5, 2022. McLaughlin, Kevin, and Aaron Holmes. “How Microsoft’s Stumbles Led
…
also Google AlphaFold AlphaFold Protein Structure Database AlphaGo Altman, Connie Altman, Jerry Altman, Sam AOL chat rooms and approach to AI of on bias in DALL-E 2 blog of on ChatGPT ChatGPT and concept of death and creation of OpenAI and DeepMind recruits and detachment from people and early life of
…
about AI and departure from OpenAI OpenAI’s Microsoft partnership and Open Philanthropy and Android Anthropic Apple Art of Accomplishment podcast artificial general intelligence (AGI) DALL-E 2 and economic promises and human brain model and OpenAI and philosophical battle over pursuit of artificial intelligence accelerationists and bias/racism and China and
…
computing Common Crawl COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) “Concrete Problems in AI Safety” (Amodei) Copilot Coppin, Ben coreference resolution Cotra, Ajeya Cruise DALL-E 2 D’Angelo, Adam Dartmouth College Datasheets for Datasets Dayan, Peter Dean, Jeff Deep Blue DeepMind Alphabet and AlphaFold AlphaGo and Applied ChatGPT and culture
…
’s removal from Amodei and bias in ChatGPT and capped-profit structure and ChatGPT and ChatGPT Plus Codex competition with DeepMind and computing power and DALL-E 2 effective altruism and funding and GPT-1 GPT-2 GPT-3 GPT-3.5 GPT-4 GPT-5 GPT Store and hallucination in ChatGPT
by Joanna Walsh · 22 Sep 2025 · 255pp · 80,203 words
’s Ethical Artificial Intelligence Team, sacked 2020 Shawn Presser, Books3 2020 Nadeem, Bethke, Reddy, StereoSet 2020 Venvonis, Vowlenu 2021 Harney, Moten, All Incomplete 2021 OpenAI, Dall-E 2021 NFT boom 2022 Kane Parsons, The Backrooms 2022 @pharmapsychotic, Clip Interrogator 2022 Elon Musk buys Twitter 2022 OpenAI, Outpainting 2022 Residents of Des Moines
by Keach Hagey · 19 May 2025 · 439pp · 125,379 words
.2 At the start of 2021, OpenAI used GPT-3 to power a model that could conjure images out of text instructions. They called it DALL-E, a nod to both Disney’s WALL-E and Salvador Dalí. Its first publicly available image was “a baby daikon radish in a tutu walking
…
robots would be coming for the fancy jobs first. In spring 2022, OpenAI dazzled with an update of its image generator, dubbed DALL-E 2. While the original DALL-E had been based on GPT-3, the new version was a diffusion model trained by adding digital “noise” to an image and then
…
.” Keenly aware of the potential for abuse, OpenAI proceeded slowly, dribbling out access to a waitlist of a million users over five months before offering DALL-E 2 to everyone. Observing from afar, Brian Chesky, the Airbnb CEO whose worlds had overlapped with Altman’s for years, became both excited and alarmed
…
to talk to Chesky more about how to run a company. Chesky said he’d love to talk to Altman more about the implications of DALL-E. “This can either be a tool for creatives or it can replace creatives,” Chesky told Altman. “It depends if you build it with the creative
…
community or not.” Chesky started visiting Altman’s office for regular talks. Altman had mentored Chesky. Now Chesky would mentor Altman. DALL-E 2 did, in fact, outrage many creative types. A few months after Chesky’s warning to Altman in Sun Valley, a Polish artist named Greg
…
to OpenAI, after learning that his art style had been requested more than Picasso’s on the tool.17 But OpenAI’s biggest fears about DALL-E were over its ability to convince people of things that weren’t true with deepfakes. The company had similar fears for text. Its staff worried
…
with customers, they would bring the chat interface out at the end, just to see people’s reaction. One customer at a meeting ostensibly about DALL-E was so impressed that the OpenAI team returned to the office, realizing that the safety tool was more compelling than they had thought. When GPT
…
, 106, 140, 154, 200 DAG Ventures, 117 Dahar, Robin, 158–59 Dai, Wei, 142 Daily Caller, 204 Daily News, 29 Daley, Richard J., 21, 34 DALL-E, 255, 262–63, 268 Daly City, CA, 162 D’Angelo, Adam, 234, 278–79, 284–86, 288, 289, 293 Danzeisen, Matt, 1 De Freitas, Daniel
…
of, 4–5, 7, 9, 233, 279, 291 ChatGPT, 1, 3, 5, 14–15, 17, 208, 253, 269–73, 276, 278, 285–86, 308, 310 DALL-E, 255, 262–63, 268 Deployment Safety Board (DSB) at, 279–80, 287 development of application programming interfaces (APIs)s, 245–48, 250–51, 264–65
by Ray Kurzweil · 25 Jun 2024
systems analyzed audio, and LLMs conversed in natural language. The next step was connecting multiple forms of data in a single model. So OpenAI introduced DALL-E (a pun on surrealist painter Salvador Dalí and the Pixar movie WALL-E),[105] a transformer trained to understand the relationship between words and images
…
illustrations of totally novel concepts (e.g., “an armchair in the shape of an avocado”) based on text descriptions alone. In 2022 came its successor, DALL-E 2,[106] along with Google’s Imagen and a flowering of other models like Midjourney and Stable Diffusion, which quickly extended these capabilities to essentially
…
required), and getting it to recognize new unicorn images, or even create unicorn images of its own. But DALL-E and Imagen took this a dramatic step further by excelling at “zero-shot learning.” DALL-E and Imagen could combine concepts they’d learned to create new images wildly different from anything they had
…
ever seen in their training data. Prompted by the text “an illustration of a baby daikon radish in a tutu walking a dog,” DALL-E spat out adorable cartoon images of exactly that. Likewise for “a snail with the texture of a harp.” It even created “a professional high quality
…
tasks than it did decades ago.[84] This sort of effect may soon happen in the art world. Starting in 2022, publicly available systems like DALL-E 2, Midjourney, and Stable Diffusion used AI to create high-quality graphic art based on text-based prompts from humans.[85] As this technology advances
…
this transformation will accelerate dramatically. Think of the creativity that AI has achieved over the past few years in visual images thanks to systems like DALL-E, Midjourney, and Stable Diffusion. These capabilities will become more sophisticated and will expand to music, video, and games, radically democratizing creative expression. People will be
…
. For examples of DALL-E’s remarkably creative images, see Aditya Ramesh et al., “Dall-E: Creating Images from Text,” OpenAI, January 5, 2021, https://openai.com/research/dall-e. “Dall-E 2,” OpenAI, accessed June 30, 2022, https://openai.com/dall-e-2. Chitwan
…
.-Generated Art Is Already Transforming Creative Work,” New York Times, October 21, 2022, https://www.nytimes.com/2022/10/21/technology/ai-generated-art-jobs-dall-e-2.html. For quick, accessible explainers on the differences between capital and labor, see BBC, “Methods of Production: Labour and
…
, 100 cultured meat, 169–70, 171 Curtiss, Susan, 88–89 cyberattacks, 193 cybersecurity, 228 Cycorp, 17 D Dafoe, Willem, 100 Dalí, Salvador, 49 DALL-E, 49–50, 221 DALL-E 2, 209 Dartmouth College workshop on AI, 12–13, 14 Darwin, Charles, 38–39, 48 data collection and analysis, 58–59 Data General Nova
…
–99 occupational therapists, 198 On the Origin of Species (Darwin), 39 OpenAI. See also large language models ChatGPT, 52–53, 198 CLIP, 44 Codex, 50 DALL-E, 49–50 GPT-2, 47 GPT-3, 47–48, 49, 52, 55, 239, 324n GPT-3.5, 52, 55 GPT-4, 2, 9, 52–56
by Paul Scharre · 18 Jan 2023
models: Ilya Sutskever, “Multimodal,” OpenAI Blog, January 2021, https://openai.com/blog/tags/multimodal/; Aditya Ramesh et al., “DALL·E: Creating Images from Text,” OpenAI Blog, January 5, 2021, https://openai.com/blog/dall-e/; Aditya Ramesh et al., Zero-Shot Text-to-Image Generation (arXiv.org, February 26, 2021), https://arxiv.org/pdf
…
/multimodal-neurons/; Romero, “GPT-3 Scared You?” 295Text-to-image models: Ramesh et al., “DALL·E”; Ramesh et al., Zero-Shot Text-to-Image Generation; Aditya Ramesh et al., “DALL·E 2,” OpenAI Blog, n.d., https://openai.com/dall-e-2/; Aditya Ramesh et al., Hierarchical Text-Conditional Image Generation with CLIP Latents (arXiv.org
by Henry A Kissinger, Eric Schmidt and Daniel Huttenlocher · 2 Nov 2021 · 194pp · 57,434 words
, “Fusion of Language and Vision,” The Batch, December 20, 2020, https://read.deeplearning.ai/the-batch/issue-72/. 6. “Dall·E 2,” OpenAI.com, https://openai.com/dall-e-2/. 7. Cade Metz, “Meet Dall-E, the A.I. That Draws Anything at Your Command,” New York Times, April 6, 2022, https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html 8. Robert Service, “Protein Structures for All,” Science, December 16, 2021, https://www.science.org/content/article/breakthrough-2021. 9. David F. Carr, “Hungarian
by Madhumita Murgia · 20 Mar 2024 · 336pp · 91,806 words
generative AI, software that can create entirely new images, text and videos simply from a typed description in plain English. AI art tools like Midjourney, Dall-E and ChatGPT that are built on these systems are now part of our everyday lexicon. They allow the glimmer of an idea, articulated in a
…
spend weeks perfecting each one, a job requiring artistry and digital skills. But in February 2023, a few months after AI image-makers such as Dall-E and Midjourney were launched, the jobs she relied on began to disappear. Instead, she was asked to tweak and correct AI-generated images. She was
…
J. Bridle, ‘The Stupidity of AI’, The Guardian, March 16, 2023, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt. 3 Hanchen Wang et al., ‘Scientific Discovery in the Age of Artificial Intelligence
…
J. Bridle, ‘The Stupidity of AI’, The Guardian, March 16, 2023, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt. 16 V. Zhou, ‘AI Is Already Taking Video Game Illustrators’ Jobs in China
…
agencies ref1 Crider, Cori ref1 Crime Anticipation System (CAS) ref1 culture of secrecy/climate of fear ref1 Curling, Rosa ref1 CV Dazzle ref1 DaimlerChrysler ref1 Dall-E ref1, ref2 Dalrymple, William: The Anarchy ref1 Daoud, Abdullah ref1, ref2 Daoud, Ghazwan ref1, ref2, ref3, ref4, ref5 Daoud, Hiba Hatem ref1, ref2, ref3, ref4
by Sonja Thiel and Johannes C. Bernhardt · 31 Dec 2023 · 321pp · 113,564 words
by Nicholas Carr · 28 Jan 2025 · 231pp · 85,135 words
by Brian Merchant · 25 Sep 2023 · 524pp · 154,652 words
by Ethan Mollick · 2 Apr 2024 · 189pp · 58,076 words
by Mike Maples and Peter Ziebelman · 8 Jul 2024 · 207pp · 65,156 words
by Stephen Witt · 8 Apr 2025 · 260pp · 82,629 words
by W. David Marx · 18 Nov 2025 · 642pp · 142,332 words
by Christopher Summerfield · 11 Mar 2025 · 412pp · 122,298 words
by Kai-Fu Lee and Qiufan Chen · 13 Sep 2021
by Tom Chivers · 6 May 2024 · 283pp · 102,484 words
by Daniel Susskind · 16 Apr 2024 · 358pp · 109,930 words
by Denise Hearn and Vass Bednar · 14 Oct 2024 · 175pp · 46,192 words
by Daniel Simons and Christopher Chabris · 10 Jul 2023 · 338pp · 104,815 words