AI boom

description: rapid progress in generative AI since mid-2010s

19 results

pages: 260 words: 82,629

The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip
by Stephen Witt
Published 8 Apr 2025

Working with pioneers like OpenAI, Nvidia has sped up deep-learning applications more than a thousand times in the last ten years. All major artificial-intelligence applications—Midjourney, ChatGPT, Copilot, all of it—were developed on Nvidia machines. It is this unprecedented increase in computing power that has made the modern AI boom possible. With a near-monopoly on the hardware, Huang is arguably the most powerful person in AI. Certainly, he’s made more money from it than anyone else. In the strike-it-rich tradition, he most closely resembles California’s first millionaire, Samuel Brannan, the celebrated vendor of prospecting supplies who lived in San Francisco in 1849.

Image-recognition tools promised to save billions of dollars a year in MRI analysis costs, putting most radiologists out of a job. Amazon used AI to determine whether the strawberries it was delivering were bruised, and farm owners were experimenting with AI robots to pick the fruit they shipped. One analysis predicted that meeting the needs of the generative AI boom might require doubling US nuclear plant capacity in under ten years. Even conservative estimates projected a 20 percent increase in required total demand. There was no realistic way to supply the Nvidia GPUs with the electricity they needed while simultaneously hitting carbon-neutrality targets. Dominion, in addition to upgrading the high-voltage lines, was discussing reviving mothballed coal-burning facilities.

But his work laid the foundation for AlexNet, and by the mid-2010s, Bengio was enjoying a sense of vindication. Obscure research papers he’d published in niche journals years before now served as the basis for a new scientific field. Bengio was a pure academic with little interest in commerce, but he watched the AI boom with a kind of fatherly pride. Like many researchers in the field, though, Bengio sometimes imagined a world where AI grew too powerful. For most of his career, this thought experiment was kind of a joke: If AI was going to conquer the planet, why was it so hard to get a $10,000 research grant to study it?

pages: 284 words: 96,087

Supremacy: AI, ChatGPT, and the Race That Will Change the World
by Parmy Olson

To process so many billions of pieces of data for training, and quickly, it needed the powerful chips found only in servers and typically rented from cloud providers like Amazon Web Services, Google Cloud, or Microsoft’s Azure. These were the companies that had endless football fields of computers enclosed in vast warehouses, and their ownership of these “cloud” computers would make them the biggest financial winners of the AI boom. By early 2024, the market value of Nvidia would start closing in on $2 trillion as demand raced ahead for its GPU chips for training AI models. It was virtually impossible to build AI outside the orbit of tech giants, which meant developers had little choice but to use those companies to help create their systems.

In May 2023, Nvidia became the latest technology company, after Google, Microsoft, Amazon, Meta, and Apple, to reach a market capitalization of $1 trillion. The biggest companies in the world by a huge margin were building tech and AI. But rather than create a thriving market for innovative new companies, the AI boom was helping these firms consolidate their power. Having strengthened their grip on infrastructure, talent, data, computing power, and profits, these firms have left no question that they alone will control our AI future. The AGI dreamers helped make that happen. In June 2023, Microsoft’s chief financial officer Amy Hood told investors that AI-powered services from OpenAI would contribute at least $10 billion to its revenue.

There were more interviews that I could not include in the story due to constraints on space but that were valuable in giving me context on the lives of Sam Altman and Demis Hassabis, their work, and the field of AI, as well as conversations with experts who helped me understand and translate the workings of machine-learning systems, neural networks, diffusion systems, and transformers into accessible language. My conversations with hundreds of industry experts, entrepreneurs, venture capitalists, and employees and former employees of tech companies, both for this book and over the last few years of reporting about the new AI boom for Bloomberg Opinion, the Wall Street Journal, and Forbes, also greatly informed my research. I exploited my love of running to listen to countless hours of podcast interviews with Sam Altman, Demis Hassabis, Ilya Sutskever, Greg Brockman, and many other individuals who were involved in the creation of OpenAI and DeepMind, or who witnessed the evolution of AI from scientific backwater to booming business, to help piece together many details of the narrative.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

These four individuals, and the AI systems that they and their students built, are totems for AI researchers of my generation. But there was a good deal of naivety in the Golden Age, with researchers making reckless and grandiose predictions about the likely speed of progress in the field, which have haunted AI ever since. By the mid-1970s, the good times were over, and a vicious backlash began – an AI boom and bust cycle destined to be repeated over the coming decades. But, however critically history may judge this period, it is hard for me to contemplate the characters of this time, and the work they did, with anything other than affection.

Divide and Conquer

As we’ve seen, General AI is a large and very nebulous target – it is hard to approach directly.

Typically, these were problems in which a human expert had acquired expertise over a long period of time, and where that human expertise was scarce. For the next decade, knowledge-based expert systems were the main focus of AI research. Enormous investment from industry flowed into the field. By the early 1980s, the AI winter was over – another, much bigger AI boom was underway. In this chapter, we’ll look at the expert systems boom, which lasted from the late 1970s to the late 1980s. We’ll start by seeing how human expert knowledge can be captured and given to computers, and I’ll tell you the story of MYCIN – one of the most celebrated expert systems of the period.

pages: 660 words: 179,531

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by Karen Hao
Published 19 May 2025

Chile is among the most unequal countries in the world today, with nearly a quarter of the country’s income concentrated among a few powerful families in the 1 percent. Having never meaningfully industrialized, it also remains tethered to the extraction economy that makes it relevant to higher geopolitical powers. And so, as the AI boom arrived, Chile would become ground zero for a new scale of extractivism, as the supplier of the industry’s insatiable appetite for raw resources, not just its copper and lithium in the north but also its land, water, and energy resources for a growing crop of data centers in the Santiago metropolitan region.

According to an estimate from researchers at the University of California, Riverside, surging AI demand could consume 1.1 trillion to 1.7 trillion gallons of fresh water globally a year by 2027, or half the water consumed annually in the UK. Those effects will not be felt evenly. Another study found that in the US, one-fifth of data centers were already drawing water, even before the generative AI boom, from watersheds that were moderately or highly stressed due to drought or other factors. And in Global South countries like Chile, it’s often the most vulnerable communities who have borne the brunt of these accelerating economies of extraction. As more and more communities have watched data centers affect their lives, a growing number have pushed back vehemently against their unfettered development.

Arrakis felt like: Jon Victor and Aaron Holmes, “OpenAI Dropped Work on New ‘Arrakis’ AI Model in Rare Setback,” The Information, October 17, 2023, theinformation.com/articles/openai-dropped-work-on-new-arrakis-ai-model-in-rare-setback.

There was also a new: Tom Dotan and Deepa Seetharaman, “The Awkward Partnership Leading the AI Boom,” Wall Street Journal, June 13, 2023, wsj.com/articles/microsoft-and-openai-forge-awkward-partnership-as-techs-new-power-couple-3092de51.

Nadella would tell: Karen Weise and Cade Metz, “How Microsoft’s Satya Nadella Became Tech’s Steely Eyed A.I.

pages: 189 words: 58,076

Co-Intelligence: Living and Working With AI
by Ethan Mollick
Published 2 Apr 2024

But hype cycles have always plagued AI, and as these promises went unfulfilled, disillusionment set in, one of many “AI winters” in which AI progress stalls and funding dries up. Other boom-and-bust cycles followed, each boom accompanied by major technological advances, such as artificial neural networks that mimicked the human brain, followed by collapse as AI could not deliver on expected goals. The latest AI boom started in the 2010s with the promise of using machine learning techniques for data analysis and prediction. Many of these applications used a technique called supervised learning, which means these forms of AI needed labeled data to learn from. Labeled data is data that has been annotated with the correct answers or outputs for a given task.
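A minimal sketch of the labeled-data idea, assuming scikit-learn (our illustration, not Mollick's; the feature names and numbers are invented for the example):

```python
# Supervised learning on labeled data, using scikit-learn.
# Each example pairs a feature vector with a human-provided label.
from sklearn.linear_model import LogisticRegression

# Features: [hours_studied, hours_slept]; label: 1 = passed, 0 = failed.
X = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
y = [0, 0, 1, 1, 0, 1]  # the annotated "correct answers" for each row

model = LogisticRegression()
model.fit(X, y)  # learning = inferring a mapping from features to labels

# The fitted model can then predict labels for inputs it has never seen.
print(model.predict([[7, 7]]))
```

Without the `y` labels there would be nothing to supervise: the model's entire training signal comes from the annotated answers.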

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

But this is not a new Cold War. AI today has numerous potential military applications, but its true value lies not in destruction but in creation. If understood and harnessed properly, it can truly help all of us generate economic value and prosperity on a scale never before seen in human history. In this sense, our current AI boom shares far more with the dawn of the Industrial Revolution or the invention of electricity than with the Cold War arms race. Yes, Chinese and American companies will compete with each other to better leverage this technology for productivity gains. But they are not seeking the conquest of the other nation.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

How these approaches relate can be understood by how they scale—that is, how their performance depends on the difficulty of the problem they’re addressing. Both a light switch and a self-driving car must determine their operators’ intentions, but the former has just two options to choose from, whereas the latter has many more. The AI-boom phases have started with promising examples in limited domains; the bust phases came with the failure of those demonstrations to handle the complexity of less-structured, practical problems. Less apparent is the steady progress we’ve made in mastering scaling. This progress rests on the technological distinction between linear and exponential functions—a distinction that was becoming evident at the dawn of AI but with implications for AI that weren’t appreciated until many years later.
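A toy calculation, not from the essay, makes that distinction concrete: an operator choosing among n independent binary options faces 2^n possible states, so the light switch and the self-driving car sit at opposite ends of an exponential curve.

```python
# Linear vs. exponential scaling: n independent binary choices
# yield 2**n possible states for the system to handle.
for n in [1, 5, 10, 20, 30]:
    print(f"{n:>2} binary choices -> {2**n:>13,} possible states")

# n = 1 is the light switch (2 states); a controller weighing even
# 30 independent yes/no decisions faces over a billion states.
```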

pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future
by Geoffrey Cain
Published 28 Jun 2021

US telecommunications company Qualcomm set up a relationship with Megvii: it supplied semiconductors to Megvii and could in turn use Megvii’s artificial intelligence software for its own devices. “We see explosive demand in China,” Li Xu, cofounder and CEO of SenseTime, said at a business conference in June 2016, on the stage with Jeff Herbst, vice president for business development of Nvidia’s venture investment arm. Founded in 1993, Nvidia emerged in the 1990s and 2000s as the leading manufacturer of GPUs. Now, it was poised to cash in on the emerging AI boom. Nvidia soon began inking high-profile deals with Chinese facial recognition firms. Chips made by Nvidia and its major competitor, Intel, were used to build some of the world’s most powerful surveillance computers at the Urumqi Cloud Computing Center, which opened in 2016. These computers watched more surveillance footage in a day than humans could in a year. “When I’m in China, every lightpost has a camera,” Herbst said.

Smart Mobs: The Next Social Revolution
by Howard Rheingold
Published 24 Dec 2011

We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.” However, in 1976, AT&T halted publication of Unix source code; the original, eventually banned, books became “possibly the most photocopied works in computing history.” At around the same time the Unix community was coalescing, MIT’s Artificial Intelligence research laboratory changed the kind of computers it used. This was a blow to the MIT hacker culture, because their software tools were rendered useless. At the same time, many of the early AI researchers were leaving for private industry to get involved in the techno-bubble of the time, the commercial AI boom and eventual bust. One holdout at MIT, deprived of his beloved programming environment, resistant to the commercialization of what he considered public property by AT&T and Microsoft, was Richard Stallman. Stallman vowed to write an OS that would be as portable and open as Unix, but which would be licensed in a way that would maintain its status as a public good.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

After the community spent years ignoring their ideas, many deep learning researchers felt the urge to trumpet their own personal contributions to a very real technological revolution. “Everybody has a little Trump inside them,” Hinton says. “You can see this in yourself, and it is good to be aware.” One exception was Alex Krizhevsky. As Hinton said: “He hasn’t got enough Trump in him.” Sitting at his desk inside Chauffeur, Krizhevsky was at the heart of this AI boom, but he didn’t see his role as all that important, and he didn’t see any of it as artificial intelligence. It was deep learning, and deep learning was just mathematics, pattern recognition, or, as he called it, “nonlinear regression.” These techniques had been around for decades. It was merely that people like him had come along at the right time, when there was enough data and enough processing power to make it all work.
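Krizhevsky's framing is easy to make concrete. A minimal sketch of deep learning as "nonlinear regression," assuming scikit-learn (our example, not his): a small neural network fit to noisy samples of a curve.

```python
# Deep learning viewed as nonlinear regression: fit a tiny neural
# network to noisy samples of a nonlinear function.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# The machinery is decades old; what changed is the amount of data
# and processing power available to scale it up.
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[1.0]]))  # should land near sin(1.0) ~ 0.84
```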

pages: 272 words: 103,638

Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War
by Raj M. Shah and Christopher Kirchhoff
Published 8 Jul 2024

We need to make it easier for non-citizens to obtain an H-1B visa and work in the United States, and, relatedly, we need to lift caps on immigration. In the field of cybersecurity alone, there are nearly half a million open positions that companies can’t fill owing to the lack of candidates. The new AI boom is producing a similar gap. The U.S. could hire ten thousand top AI scientists from around the world and we’d still need more. More proactively, if the U.S. offered visas to the top twenty AI scientists, and their families, to settle in America, we could appreciably shift the global AI race in favor of democracies.

pages: 292 words: 106,826

Boom: Bubbles and the End of Stagnation
by Byrne Hobart and Tobias Huber
Published 29 Oct 2024

While they have been in the making for decades, the recent breakthroughs in generative AI achieved by upstarts like OpenAI, Stability AI, and Anthropic have disrupted big tech’s R&D complacency for now. Whereas Meta, Alphabet, Microsoft, and Amazon together spent $109 billion on R&D in 2019, they channeled $223 billion into R&D amid the AI boom in 2022. And they’re not just building their own tools but also partnering with or acquiring AI companies. (Antitrust is less complicated when a business is being bought for its potential to create new markets rather than its share of an existing one.) So the dawn of another multi-generational corporate R&D bubble remains a possibility.

pages: 423 words: 21,637

On Lisp: Advanced Techniques for Common Lisp
by Paul Graham
Published 8 Sep 1993

In fact, the association between Lisp and AI is just an accident of history. Lisp was invented by John McCarthy, who also invented the term "artificial intelligence." His students and colleagues wrote their programs in Lisp, and so it began to be spoken of as an AI language. This line was taken up and repeated so often during the brief AI boom in the 1980s that it became almost an institution. Fortunately, word has begun to spread that AI is not what Lisp is all about. Recent advances in hardware and software have made Lisp commercially viable: it is now used in Gnu Emacs, the best Unix text-editor; Autocad, the industry standard desktop CAD program; and Interleaf, a leading high-end publishing program.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

See Laura Cooper and Preeti Singh, “Private Equity Backs Record Volume of Tech Deals,” Wall Street Journal, Jan. 3, 2022, www.wsj.com/articles/private-equity-backs-record-volume-of-tech-deals-11641207603.

Investment in AI technologies: See, for example, Artificial Intelligence Index Report 2021, although the numbers have certainly grown in the generative AI boom since then.

PwC forecasts AI will add: “Sizing the Prize—PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution,” PwC, 2017, www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html.

McKinsey forecasts a $4 trillion boost: Jacques Bughin et al., “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy,” McKinsey, Sept. 4, 2018, www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy; Michael Chui, “The Bio Revolution: Innovations Transforming Economies, Societies, and Our Lives,” McKinsey Global Institute, May 13, 2020, www.mckinsey.com/industries/pharmaceuticals-and-medical-products/our-insights/the-bio-revolution-innovations-transforming-economies-societies-and-our-lives.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

In 1977 at SRI, Peter Hart, who began his career in artificial intelligence working on Shakey the robot, and Richard Duda, another pioneering artificial intelligence researcher, built Prospector to aid in the discovery of mineral deposits. That work would eventually get CBS’s overheated attention. In the midst of all of this, in 1982, Japan announced its Fifth Generation Computer program. Heavily focused on artificial intelligence, it added an air of competition and inevitability to the AI boom that would lead to a market in which newly minted Ph.D.s could command unheard-of $30,000 annual salaries right out of school. The genie was definitely out of the bottle. Developing expert systems was becoming a discipline called “knowledge engineering”—the idea was that you could package the expertise of a scientist, an engineer, or a manager and apply it to the data of an enterprise.

pages: 348 words: 119,358

The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by Nicole Kobie
Published 3 Jul 2024

Nvidia didn’t make those; instead it made GPUs, or graphics processing units, which were designed to handle specific and intense workloads such as graphics rendering. Dally figured he could help Ng with just a handful of GPUs. In the end, it took 12 to replace those thousands of CPUs, with the ensuing shift to GPUs sparking an AI boom. Research got cheaper and easier, and Nvidia now dominates the market for AI chips.

* * *

The summer of AI had returned, and it was all about neural networks. Six months after buying Hinton’s DNNresearch, Google unveiled a new tool at its developer conference: the ability to search your own photos using image recognition.

pages: 381 words: 119,533

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity
by Adam Becker
Published 14 Jun 2025

“An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” Seventy years later, we still don’t have any computer program that can do all the things on their list. There have been AI booms—like the one that exploded into public consciousness in 2022, powered by large language models (LLMs)—and, to date, they have all been followed by “AI winters,” when progress slows and the state of the art stagnates for years or decades until the next breakthrough. And Brooks thinks winter is coming.

pages: 439 words: 125,379

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future
by Keach Hagey
Published 19 May 2025

That opened the door to the next paradigm, in which progress would be driven by rapidly increasing amounts of data and compute. But, like all neural networks, they are largely black boxes: as of this writing, even experts don’t truly understand how they work. These dynamics, unleashed by the combination of Sutskever’s insight and Radford’s language research, would come to define the generative AI boom. “There’s this thing that Ilya and I used to talk about, the Feynman method of being a genius,” Brockman said, referring to the famous physicist Richard Feynman, who worked on the Manhattan Project and won the Nobel Prize. “The Feynman method is you have a set of problems you care about, you kind of know what the missing piece is, and you wait for yourself or some other researcher to invent one of those missing pieces.

pages: 589 words: 147,053

The Age of Em: Work, Love and Life When Robots Rule the Earth
by Robin Hanson
Published 31 Mar 2016

In 1984, as a 24-year-old physics graduate student, I read about exciting developments in AI; it seemed to me that human level AI could be feasible soon. So I quit my physics graduate school, headed to Silicon Valley, and got a job doing AI at Lockheed. I stayed in AI for 9 years, and was part of the AI “boom” then. We’ve seen similar booms of excitement and anxiety regarding rapid automation progress every few decades for centuries, and we are seeing another such boom today (Mokyr et al. 2015). Since the 1950s, a few people have gone out of their way to publish forecasts on the duration of time it would take AI developers to achieve human level abilities.