by Cade Metz · 15 Mar 2021 · 414pp · 109,622 words
acknowledged its limitations. Still, the idea slipped from his grasp. The Perceptron was one of the first neural networks, an early incarnation of the technology Geoff Hinton would auction to the highest bidder more than fifty years later. But before it reached that $44 million moment, let alone the extravagant future predicted
…
Minsky still reigned over the international community of AI researchers. They sat at a big wooden table in the center of the room, and as Geoff Hinton walked around the table, he handed each of them a long, rhetorical, math-laden academic paper describing something he called the Boltzmann Machine. Named
…
.” This theory—known as Hebb’s Law—helped inspire the artificial neural networks built by scientists like Frank Rosenblatt in the 1950s. It also inspired Geoff Hinton. Every Saturday, he would carry a notebook to his local public library in Islington, North London, and spend the morning filling its pages with
…
nine-year-old computer scientist and electrical engineer from Paris, was developing a new kind of image-recognition system, building on the ideas developed by Geoff Hinton and David Rumelhart a few years earlier. LeNet learned to recognize handwritten numbers by analyzing what was scribbled on the envelopes of dead letters from
…
building them into larger patterns as information moved through its web of (faux) neurons. It was an idea that would define LeCun’s career. “If Geoff Hinton is a fox, then Yann LeCun is a hedgehog,” says University of California–Berkeley professor Jitendra Malik, borrowing a familiar analogy from the philosopher Isaiah
…
uncertainty he felt was there on the page. Neural networks did need more computing power, but no one realized just how much they needed. As Geoff Hinton later put it: “No one ever thought to ask: ‘Suppose we need a million times more?’” * * * — WHILE Yann LeCun was building his bank scanner
…
what was about to unfold inside Google Brain. In a roundabout way, Ng’s departure catalyzed the project. Before he left, he recommended a replacement: Geoff Hinton. Years later, with the benefit of hindsight, it seemed like the natural step for everyone involved. Hinton was not only a mentor to Ng,
…
in the right direction. Neural networks learned in more powerful ways if someone showed them exactly where the cats were. * * * — IN the spring of 2012, Geoff Hinton phoned Jitendra Malik, the University of California–Berkeley professor who had publicly attacked Andrew Ng over his claims that deep learning was the future of
…
Chess tournaments I used to attend had wooden boards under the tables preventing competitors from kicking each other. Don’t be fooled—this is warfare.” Geoff Hinton later said that Hassabis could lay claim to being the greatest games player of all time, before adding, pointedly, that his prowess demonstrated not
…
University College London lab that sat at the intersection of neuroscience and AI. Funded by David Sainsbury, the British supermarket magnate, its founding professor was Geoff Hinton. Hinton left his new post after only three years, returning to his professorship in Toronto while Hassabis was still running his games company. Several years
…
—a place where he could explore what he called the “connections between the brain and machine learning.” Years later, when asked to describe Shane Legg, Geoff Hinton compared him to Demis Hassabis: “He’s not as bright, not as competitive, and not as good at social interactions. But then, that applies
…
the night before the conference began, he revealed the news to the rest of the world with his speech in the main hall. * * * — WHEN Geoff Hinton sold his company to Google, he arranged to keep his professorship at the University of Toronto. He didn’t want to abandon his students or
…
began with calls to several leaders in the field, including Yoshua Bengio, the University of Montreal professor who helped foster the deep learning movement alongside Geoff Hinton and Yann LeCun, but he made it clear that he was still committed to academia. Bengio drew up a list of promising young researchers
…
build a system capable of beating the world champion. “I thought that was impossible,” Brin said. In that moment, Hassabis resolved to do it. Geoff Hinton compared Demis Hassabis to Robert Oppenheimer, the man whose stewardship of the Manhattan Project during the Second World War led to the first atomic bomb
…
their disparate strengths to feed the larger project, and to somehow accommodate their foibles as well. He knew how to move men (and women, including Geoff Hinton’s cousin, Joan Hinton). Hinton saw the same combination of skills in Hassabis. “He ran AlphaGo like Oppenheimer ran the Manhattan Project. If anybody
…
the development of new medicines. Two hundred thirty-six teams entered the contest, which was scheduled to run for two months. When George Dahl, Geoff Hinton’s student, discovered the contest while riding on a train from Seattle to Portland, he decided to enter. He had no experience with drug discovery
…
reviewing papers for inclusion in the upcoming NIPS conference, Deng was laying eyes on the blueprints weeks before the rest of the world. After bringing Geoff Hinton and his students to the Microsoft Research lab, where they built a neural network that could recognize spoken words with unprecedented accuracy, Deng had watched
…
a speech at Carnegie Mellon University in November 2016, Yann LeCun called GANs “the coolest idea in deep learning in the last twenty years.” When Geoff Hinton heard this, he pretended to count backwards through the years, as if to make sure GANs weren’t any cooler than backpropagation, before acknowledging
…
Li Deng had once again made the drive from the NIPS conference in Vancouver to the NIPS workshops in Whistler. A year after running into Geoff Hinton inside the Whistler Hilton and stumbling onto his research with deep learning and speech recognition, Deng had organized a new workshop around the idea at
…
of researchers who turned up at academic gatherings like the one in Whistler. That year, they were also in the same carpool. Yu already knew Geoff Hinton, from having organized a deep learning workshop alongside Yann LeCun and Yoshua Bengio the previous summer in Montreal, and now, as the SUV climbed
…
over Project Maven was potentially even higher. Many of the scientists overseeing Google’s deep learning research were fundamentally opposed to autonomous weapons, including both Geoff Hinton and the founders of DeepMind. That said, many of the executives at the highest rungs of the Google hierarchy very much wanted to work with
…
by over a hundred people in the field. They included Elon Musk, who had so frequently warned against the threat of superintelligence, as well as Geoff Hinton, Demis Hassabis, and Mustafa Suleyman. For Suleyman, these were technologies that required a new kind of oversight. “Who’s making the decisions that will
…
T-shirt, Pichai told the crowd that the company’s talking digital assistant could make its own phone calls. Thanks to the methods pioneered by Geoff Hinton and his students in Toronto, the Google Assistant could recognize spoken words nearly as well as people could. Thanks to WaveNet, the speech-generation technology
…
learning, he responded with a column for the New Yorker arguing that the change was not as big as it seemed. The techniques espoused by Geoff Hinton, he said, were not powerful enough to understand the basics of natural language, much less duplicate human thought. “To paraphrase an old parable, Hinton
…
neural networks. Yann LeCun joins Bell Labs in Holmdel, New Jersey, where he begins building LeNet, a neural network that can recognize handwritten digits. 1987—Geoff Hinton leaves Carnegie Mellon for the University of Toronto. 1989—Carnegie Mellon graduate student Dean Pomerleau builds ALVINN, a self-driving car based on a neural
…
Jeff Dean, and Greg Corrado publish the Cat Paper. Andrew Ng leaves Google. Geoff Hinton “interns” at Google Brain. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky publish the AlexNet paper. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky auction their company, DNNresearch. 2013—Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky join Google. Mark Zuckerberg and Yann LeCun found the
…
publishes the GAN paper, describing a way of generating photos. Ilya Sutskever unveils the Sequence to Sequence paper, a step forward for automatic translation. 2015—Geoff Hinton spends the summer at DeepMind. AlphaGo defeats Fan Hui in London. Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman found OpenAI. 2016—DeepMind
…
based on deep learning. Donald Trump defeats Hillary Clinton. 2017—Qi Lu joins Baidu. AlphaGo defeats Ke Jie in China. China unveils national AI initiative. Geoff Hinton unveils capsule networks. Nvidia unveils progressive GANs, which can generate photo-realistic faces. Deepfakes arrive on the Internet. 2018—Elon Musk leaves OpenAI. Google employees
…
protest Project Maven. Google releases BERT, a system that learns language skills. 2019—Top researchers protest Amazon face recognition technology. Geoff Hinton, Yann LeCun, and Yoshua Bengio win the Turing Award for 2018. Microsoft invests $1 billion in OpenAI. 2020—Covariant unveils “picking” robot in Berlin.
…
OpenAI before moving to Apple. VARUN GULSHAN, the virtual reality engineer who explored AI that could read eye scans and detect signs of diabetic blindness. GEOFF HINTON, the University of Toronto professor and founding father of the “deep learning” movement who joined Google in 2013. URS HÖLZLE, the Swiss-born engineer
…
who oversaw a team that applied artificial intelligence to healthcare. SUNDAR PICHAI, CEO. SARA SABOUR, the Iran-born researcher who worked on “capsule networks” alongside Geoff Hinton at the Google lab in Toronto. ERIC SCHMIDT, chairman. AT DEEPMIND ALEX GRAVES, the Scottish researcher who built a system that could write in longhand
…
the researcher who worked alongside Yann LeCun at both NYU and Facebook. YANN LECUN, the French-born NYU professor who helped nurture deep learning alongside Geoff Hinton before overseeing the Facebook Artificial Intelligence Research lab. MARC’AURELIO RANZATO, the former professional violinist who Facebook poached from Google Brain to seed its AI
…
ZUCKERBERG, founder and CEO. AT MICROSOFT CHRIS BROCKETT, the former professor of linguistics who became a Microsoft AI researcher. LI DENG, the researcher who brought Geoff Hinton’s ideas to Microsoft. PETER LEE, head of research. SATYA NADELLA, CEO. AT OPENAI SAM ALTMAN, the president of Silicon Valley start-up incubator Y
…
Stripe who helped build OpenAI. ELON MUSK, the CEO of electric car maker Tesla and rocket company SpaceX who helped create OpenAI. ILYA SUTSKEVER, the Geoff Hinton protégé who left Google Brain to join OpenAI, the San Francisco AI lab created in response to DeepMind. WOJCIECH ZAREMBA, the former Google and Facebook
…
systems at MIT. MATTHEW ZEILER, founder and CEO. IN ACADEMIA YOSHUA BENGIO, the University of Montreal professor who carried the torch for deep learning alongside Geoff Hinton and Yann LeCun in the 1990s and 2000s. JOY BUOLAMWINI, the MIT researcher who explored bias in face recognition services. GARY MARCUS, the NYU psychologist
…
recognize images, in the early 1960s. DAVID RUMELHART, the University of California–San Diego psychologist and mathematician who helped revive Frank Rosenblatt’s ideas alongside Geoff Hinton in the 1980s. ALAN TURING, the founding father of the computer age who lived on the staircase at King’s College Cambridge that was later
…
al., “ImageNet Large Scale Visual Recognition Challenge,” 2014, https://arxiv.org/abs/1409.0575. It was nearly twice as accurate: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems 25 (NIPS 2012), https://papers.nips.cc/paper/4824-imagenet-classification-with
…
/12/03/technology/google-alphabet-ceo-larry-page-sundar-pichai.html. CHAPTER 21: X FACTOR “When I was an undergrad at King’s College Cambridge”: Geoff Hinton, tweet, March 27, 2019, https://twitter.com/geoffreyhinton/status/1110962177903640582?s=19. This was the paper that helped launch the computer age: A. M.
…
Robotic Vision, 278 autonomous weapons, 240, 242, 244, 308 backpropagation ability to handle “exclusive-or” questions, 38–39 criticism of, 38 family tree identification, 42 Geoff Hinton’s work with, 41 Baidu auction for acquiring DNNresearch, 5–9, 11, 218–19 as competition for Facebook and Google, 132, 140 interest in neural
…
, 139 salary expenses, 132 tension between Google Brain and, 185–86 venture capital investments in, 110 WaveNet, 260 Defense Innovation Board, 242 Deng, Li and Geoff Hinton, 69–73, 218–19 role at Microsoft, 11, 66–67, 191–92 speech recognition research, 66–68, 193–94, 218–19 Department of Defense.
…
–36 racial profiling by machines, 233 within training data, 231–32 DNNresearch acquisition by Google, 9, 198 auction for, 5–9, 11, 197 founding by Geoff Hinton, 2, 5, 98, 193 Domingos, Pedro, 192 Dota/Dota 2 (game), 281, 297 dreaming as a way of learning, 200 Duan, Rocky, 283 Duda,
…
defense of neural network research, 58–59, 96–97, 206 and Elon Musk, 155–56 and Facebook, 120–21, 126–28, 129, 254–55 and Geoff Hinton, 49–52 investment in Covariant, 284 LeNet system, 46–48 and Mark Zuckerberg, 126–28, 155 microchip design research, 52–53, 54 at NYU,
…
evaluations, 193 “Microsoft’s Lost Decade” (article), 192–93 military applications of AI. See Project Maven Milner, Yuri, 286–87 Minsky, Marvin education, 21 and Geoff Hinton, 28–30 opposition to neural networks, 22, 24–26, 33–34, 44, 65, 194 personality, 29–30 rivalry with Frank Rosenblatt, 21–22, 24–25
…
122, 307 Ng, Andrew at Baidu, 132, 140, 219–20, 222, 308 Coursera development, 88 deep learning proposal for Google, 82–84 education, 81 and Geoff Hinton, 81–82 at Google, 83–84 neural network projects, 57–58, 81–82 research interests, 54, 81–82, 196 NIPS (Neural Information Processing Systems) Conference
…
fake news, 208–09 immigration policies’ effect on foreign researchers, 207–08 photo manipulation, 209–11 Turing Award 2018 presentation, 305–07 about, 305 and Geoff Hinton, 305–10 Jeff Dean’s introduction of the winners, 306 Turing lecture, 309–10 and Yann LeCun, 305–07, 308–09 and Yoshua Bengio, 305
by Martin Ford · 16 Nov 2018 · 586pp · 186,548 words
Intelligence Introduction 1. MARTIN FORD A Brief Introduction to the Vocabulary of AI How AI Systems Learn 2. YOSHUA BENGIO 3. STUART J. RUSSELL 4. GEOFFREY HINTON 5. NICK BOSTROM 6. YANN LECUN 7. FEI-FEI LI 8. DEMIS HASSABIS 9. ANDREW NG 10. RANA EL KALIOUBY 11. RAY KURZWEIL 12.
…
in the 1980s, a very small group of research scientists continued to believe in and advance the technology of neural networks. Foremost among these were Geoffrey Hinton, Yoshua Bengio and Yann LeCun. These three men not only made seminal contributions to the mathematical theory underlying deep learning, they also served as the
…
I interviewed, seven have current or former affiliations with Google or its parent, Alphabet. Other major concentrations of talent are found at MIT and Stanford. Geoff Hinton and Yoshua Bengio are based at the Universities of Toronto and Montreal respectively, and the Canadian government has leveraged the reputations of their research organizations
…
a recalibration of the settings (or weights) for the individual neurons. The result is that the entire network gradually homes in on the correct answer. Geoff Hinton co-authored the seminal academic paper on backpropagation in 1986. He explains backprop further in his interview. An even more obscure term is GRADIENT DESCENT
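The weight “recalibration” this glossary entry describes is gradient descent in miniature. Below is a minimal sketch, assuming a single linear neuron trained on one example with a squared-error loss; every name here (w, x, lr, target) is an illustrative assumption, not something from the book.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)     # one training input
    target = 1.0               # the correct answer for that input
    w = np.zeros(3)            # the neuron's adjustable weights
    lr = 0.1                   # learning rate: size of each downhill step

    for epoch in range(50):
        y = w @ x              # the network's current answer
        error = y - target     # how wrong it is
        grad = error * x       # gradient of 0.5 * error**2 w.r.t. each weight
        w -= lr * grad         # recalibrate the weights a little downhill

    print(w @ x)               # the output has gradually homed in on ~1.0

Each pass nudges every weight against the error gradient, which is why the network “gradually homes in on the correct answer”; backpropagation is the bookkeeping that supplies these gradients for the deeper layers.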
…
was discovered in the early 1980s, at the time when I started my own work. Yann LeCun independently discovered it around the same time as Geoffrey Hinton and David Rumelhart. It’s an old idea, but we didn’t practically succeed in training these deeper networks until around 2006, over a quarter
…
. MARTIN FORD: I know that there was an “AI Winter” where most people had dismissed deep learning, but a handful of people, like yourself, Geoffrey Hinton, and Yann LeCun, kept it alive. How did that then evolve to the point where we find ourselves today? YOSHUA BENGIO: By the end of
…
do with modeling language from data uses these ideas. The big question was how we could train deeper networks, and the breakthrough was made by Geoffrey Hinton and his work with Restricted Boltzmann Machines (RBMs). In my lab, we were working on autoencoders, which are very closely related to RBMs, and
…
in computer vision, and people in that field would only believe in our deep learning methods if we could show good results on that dataset. Geoffrey Hinton’s group actually did it, following up on earlier work by Yann LeCun on convolutional networks—that is, neural networks which were specialized for
…
to the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. Chapter 4. GEOFFREY HINTON In the past when AI has been overhyped—including backpropagation in the 1980s—people were expecting it to do great things, and it didn’t
…
great things, so it can’t possibly all be just hype. EMERITUS DISTINGUISHED PROFESSOR OF COMPUTER SCIENCE, UNIVERSITY OF TORONTO VICE PRESIDENT & ENGINEERING FELLOW, GOOGLE Geoffrey Hinton is sometimes known as the Godfather of Deep Learning, and he has been the driving force behind some of its key technologies, such as backpropagation
…
of the Vector Institute for Artificial Intelligence. MARTIN FORD: You’re most famous for working on the backpropagation algorithm. Could you explain what backpropagation is? GEOFFREY HINTON: The best way to explain it is by explaining what it isn’t. When most people think about neural networks, there’s an obvious algorithm
…
s much faster than the evolutionary approach. MARTIN FORD: The backpropagation algorithm was originally created by David Rumelhart, correct, and you took that work forward? GEOFFREY HINTON: Lots of different people invented different versions of backpropagation before David Rumelhart. They were mainly independent inventions, and it’s something I feel I’ve
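Hinton’s contrast can be made concrete. The “obvious” algorithm he alludes to perturbs one weight at a time and re-measures the error, costing one forward pass per weight, while backpropagation recovers every derivative from a single backward pass. A toy sketch of the comparison, with all names and sizes assumed for illustration (this is not code from the interview):

    import numpy as np

    rng = np.random.default_rng(1)
    x, t = rng.normal(size=4), 0.5            # one input and its target
    W1, w2 = rng.normal(size=(3, 4)), rng.normal(size=3)

    def loss(W1, w2):
        h = np.tanh(W1 @ x)                   # hidden layer
        return 0.5 * (w2 @ h - t) ** 2        # squared error at the output

    # "Obvious" algorithm: perturb each weight and re-run the network (slow).
    eps = 1e-6
    g_slow = np.zeros_like(W1)
    for i in range(W1.shape[0]):
        for j in range(W1.shape[1]):
            Wp = W1.copy()
            Wp[i, j] += eps
            g_slow[i, j] = (loss(Wp, w2) - loss(W1, w2)) / eps

    # Backpropagation: all derivatives from one backward pass (fast).
    h = np.tanh(W1 @ x)
    err = w2 @ h - t
    g_fast = np.outer(err * w2 * (1 - h ** 2), x)   # chain rule through tanh

    print(np.allclose(g_slow, g_fast, atol=1e-4))   # True: identical gradients

Both routes yield the same derivatives; backpropagation simply gets them all at once instead of one forward pass per weight, which is the speed advantage Hinton points to.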
…
developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well. MARTIN FORD: This was the ImageNet competition? GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now
…
do it without using a neural network now. MARTIN FORD: This was back in 2012, I believe. Was that the inflection point for deep learning? GEOFFREY HINTON: For computer vision, that was the inflection point. For speech, the inflection point was a few years earlier. Two different graduate students at Toronto showed
…
read the press now, you get the impression that neural networks and deep learning are equivalent to artificial intelligence—that it’s the whole field. GEOFFREY HINTON: For most of my career, there was artificial intelligence, which meant the logic-based idea of making intelligent systems by putting in rules that allowed
…
the term “AI.” MARTIN FORD: Do you really think that AI should just be focused on neural networks and that everything else is irrelevant? GEOFFREY HINTON: I think we should say that the general idea of AI is making intelligent systems that aren’t biological, they are artificial, and they can
…
start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html) That created a lot of disturbance, so I wanted to ask what you meant by that. GEOFFREY HINTON: The problem was that the context of the conversation wasn’t properly reported. I was talking about trying to understand the brain, and I
…
crazy. MARTIN FORD: How did you become interested in artificial intelligence? What was the path that took you to your focus on neural networks? GEOFFREY HINTON: My story begins at high school, where I had a friend called Inman Harvey who was a very good mathematician who got interested in the
…
idea that the brain might work like a hologram. MARTIN FORD: A hologram being a three-dimensional representation? GEOFFREY HINTON: Well, the important thing about a proper hologram is that if you take a hologram and you cut it in half, you do not
…
basically the whole time. MARTIN FORD: When you were at high school? Wow. So how did your thinking develop when you went to university? GEOFFREY HINTON: One of the things I studied at university was physiology. I was excited by physiology because I wanted to learn how the brain worked. Toward
…
MARTIN FORD: Well, given the other path that opened up for you, it’s probably a good thing that you weren’t a great carpenter! GEOFFREY HINTON: Following my attempt at carpentry, I worked as a research assistant on a psychology project trying to understand how language develops in very young children
…
there was a consensus that they were just nonsense. MARTIN FORD: When was this in relation to Marvin Minsky and Seymour Papert’s Perceptrons book? GEOFFREY HINTON: This was in the early ‘70s, and Minsky and Papert’s book came out in the late ‘60s. Almost everybody in artificial intelligence thought
…
it’s important to study the brain and be informed by that, and to incorporate those insights into what you’re doing with neural networks? GEOFFREY HINTON: Capsules is a combination of half a dozen different ideas, and it’s complicated and speculative. So far, it’s had some small successes,
…
of applications of deep learning rely heavily on labeled data, or what’s called supervised learning, and that we still need to solve unsupervised learning? GEOFFREY HINTON: That’s not entirely true. There’s a lot of reliance on labeled data, but there are some subtleties in what counts as labeled
…
: If you look at the way a child learns, though, it’s mostly wandering around the environment and learning in a very unsupervised way. GEOFFREY HINTON: Going back to what I just said, the child is wandering around the environment trying to predict what happens next. Then when what happens next
…
confusion. MARTIN FORD: Would you view solving a general form of unsupervised learning as being one of the primary obstacles that needs to be overcome? GEOFFREY HINTON: Yes. But in that sense, one form of unsupervised learning is predicting what happens next, and my point is that you can apply supervised
…
aspect of it is going to be essential. MARTIN FORD: Do you envision it as being an emergent property of connected intelligences on the internet? GEOFFREY HINTON: No, it’s the same with people. The reason that you know most of what you know is not because you yourself extracted that
…
with artificial intelligence too. MARTIN FORD: Do you think AGI, whether it’s an individual system or a group of systems that interact, is feasible? GEOFFREY HINTON: Oh, yes. I mean OpenAI already has something that plays quite sophisticated computer games as a team. MARTIN FORD: When do you think it
…
by other things before the next 100 years occurs. MARTIN FORD: Do you mean through other existential threats like a nuclear war or a plague? GEOFFREY HINTON: Yes, I think so. In other words, I think there are two existential threats that are much bigger than AI. One is global nuclear
…
completely transform the job market? If so, is that something we need to worry about, or is that another thing that’s perhaps overhyped? GEOFFREY HINTON: If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out
…
—in particular, jobs that are predictable and easily automated. One social response to that is a basic income, is that something that you agree with? GEOFFREY HINTON: Yes, I think a basic income is a very sensible idea. MARTIN FORD: Do you think, then, that policy responses are required to address
…
this? Some people take a view that we should just let it play out, but that’s perhaps irresponsible. GEOFFREY HINTON: I moved to Canada because it has a higher taxation rate and because I think taxes done right are good things. What governments ought to
…
making sure that AI benefits everybody. MARTIN FORD: What about some of the other risks that you would associate with AI, such as weaponization? GEOFFREY HINTON: Yes, I am concerned by some of the things that President Putin has said recently. I think people should be very active now in trying
…
chemical warfare and weapons of mass destruction. MARTIN FORD: Would you favor some kind of a moratorium on that type of research and development? GEOFFREY HINTON: You’re not going to get a moratorium on that type of research, just as you haven’t had a moratorium on the development of
…
have stopped them being widely used. MARTIN FORD: What about other risks, beyond the military weapon use? Are there other issues, like privacy and transparency? GEOFFREY HINTON: I think using it to manipulate elections and to manipulate voters is worrying. Cambridge Analytica was set up by Bob Mercer who was a machine
…
Should we have some form of industrial policy? Should the United States and other Western governments focus on AI and make it a national priority? GEOFFREY HINTON: There are going to be huge technological developments, and countries would be crazy not to try and keep up with that, so obviously, I
…
to be a hub of deep learning coalescing in Canada. Is that just random, or is there something special about Canada that helped with that? GEOFFREY HINTON: The Canadian Institute for Advanced Research (CIFAR) provided funding for basic research in high-risk areas, and that was very important. There’s also
…
could really share unpublished ideas. MARTIN FORD: So, it was a strategic investment on the part of the Canadian government to keep deep learning alive? GEOFFREY HINTON: Yes. Basically, the Canadian government is significantly investing in advanced deep learning by spending half a million dollars a year, which is pretty efficient for
…
s Bell Labs, where he is credited with developing convolutional neural networks—a machine learning architecture inspired by the brain’s visual cortex. Along with Geoff Hinton and Yoshua Bengio, Yann is part of a small group of researchers whose effort and persistence led directly to the current revolution in deep learning
…
were working on neural nets, and I connected with them and ended up discovering things like backpropagation in parallel with people like David Rumelhart and Geoffrey Hinton. MARTIN FORD: So, in the early 1980s there was a lot of research in this area going on in Canada? YANN LECUN: No, this
…
was the United States. Canada was not on the map for this type of research yet. In the early 1980s, Geoffrey Hinton was a postdoc at the University of California, San Diego where he was working with cognitive scientists like David Rumelhart and Jay McClelland. Eventually they
…
community. You couldn’t publish a paper that even mentioned the phrase neural networks because it would immediately be rejected by your peers. In fact, Geoffrey Hinton and Terry Sejnowski published a very famous paper in 1983 called Optimal Perceptual Inference, which described an early deep learning or neural network model. Hinton
…
allow a machine to manipulate symbols, or if we need explicit structures for representing hierarchical structures in language. A lot of my colleagues, like Geoffrey Hinton and Yoshua Bengio, agree that in the long run we won’t need precise specific structures for this. It might be useful in the short
…
LECUN is a Vice President and Chief AI Scientist at Facebook, as well as a professor of computer science at New York University. Along with Geoff Hinton and Yoshua Bengio, Yann is part of the so-called “Canadian Mafia”—the trio of researchers whose effort and persistence led directly to the current
…
in Paris, and a PhD in Computer Science from Université Pierre et Marie Curie in 1987. He later worked as a post-doctoral researcher in Geoff Hinton’s lab at the University of Toronto. He joined Facebook in 2013 to establish and run the Facebook AI Research (FAIR) organization, headquartered in New
…
lot of people. The winner of the 2012 ImageNet competition created a convergence of ImageNet, GPU computing power, and convolutional neural networks as an algorithm. Geoffrey Hinton wrote a seminal paper that, for me, was Phase One in achieving the holy grail of object recognition. MARTIN FORD: Did you continue this project
…
a result, I wound up doing that work at Stanford University rather than at Google. In fact, I remember a conversation that I had with Geoff Hinton at NIPS, the annual conference on Neural Information Processing Systems, where I was trying to use GPUs, and I think that later influenced his work
…
, which is basically where the dynamic range of the values of the coefficients would decline because the numbers got too big or too small. Geoffrey Hinton and a group of mathematicians solved that problem and now we can go to any number of levels. Their solution was that you recalibrate the
…
t have to decide which to use. They’re all part of a single language framework at this point. MARTIN FORD: When I talked to Geoff Hinton, I suggested a hybrid approach to him, but he was very dismissive of that idea. I get the sense that people in the deep learning
by Karen Hao · 19 May 2025 · 660pp · 179,531 words
just a month before being admitted to the University of Toronto in 2003 as a third-year undergraduate student. It was there that Sutskever met Geoffrey Hinton, a British Canadian professor who had done seminal work in AI research. Hinton became the only person whom Sutskever would call a mentor and who
…
small band of connectionists held fast to Rosenblatt’s pursuit of machine learning systems and continued to advance it. They included Sutskever’s PhD adviser, Geoffrey Hinton, who, as a professor at Carnegie Mellon University in the 1980s, made a key improvement to early neural networks along with colleagues from the University
…
had long had a paramount belief in deep learning, one that began soon after he showed up unannounced one day, only seventeen years old, at Geoffrey Hinton’s office. At the time, Sutskever was still an undergraduate studying math at the University of Toronto and working the french fry station at a
…
60, no. 6 (May 2017): 84–90, doi.org/10.1145/3065386. GO TO NOTE REFERENCE IN TEXT “We thought we were”: Author interview with Geoff Hinton, August 2023. GO TO NOTE REFERENCE IN TEXT Even to Sutskever, who secretly: Cade Metz, Genius Makers: The Mavericks Who Brought AI to Google, Facebook
…
Discovery at the Dawn of AI (Flatiron Books, 2023), 89–90. GO TO NOTE REFERENCE IN TEXT Tech giants were already seeing: Author interview with Geoffrey Hinton, August 2023. GO TO NOTE REFERENCE IN TEXT But alongside these impressive advances: Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human
…
, 2020. A write-up of the conversation is in: Karen Hao, “AI Pioneer Geoff Hinton: ‘Deep Learning Is Going to Be Able to Do Everything,’ ” MIT Technology Review, November 3, 2020, technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything. GO TO NOTE REFERENCE IN TEXT “We actually
…
5: Scale of Ambition “How about now?”: Cade Metz, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World (Dutton, 2021), 93; “Geoffrey Hinton | On Working with Ilya, Choosing Problems, and the Power of Intuition,” posted May 20, 2024, by Sana, YouTube, 45 min., 45 sec., youtu.be/n4IQOBka8bc
…
. GO TO NOTE REFERENCE IN TEXT He stunned Hinton: Author interview with Geoffrey Hinton, November 2023. GO TO NOTE REFERENCE IN TEXT At times he grew: Metz, Genius Makers, 94. GO TO NOTE REFERENCE IN TEXT “One doesn’t
…
Nonsense,” Big Think, May 11, 2022, bigthink.com/life/ape-sign-language. GO TO NOTE REFERENCE IN TEXT In conversations with Hinton: Author interview with Geoff Hinton. GO TO NOTE REFERENCE IN TEXT Chapter 11: Apex For a photo: The photo in question, October 2022. GO TO NOTE REFERENCE IN TEXT Financial
by Martin Ford · 13 Sep 2021 · 288pp · 86,995 words
University of California at Berkeley and OpenAI, has received investments and publicity from some of the brightest lights in deep learning, including Turing Award winners Geoffrey Hinton and Yann LeCun, Google’s Jeff Dean and ImageNet founder Fei-Fei Li.22 In 2019, Covariant defeated nineteen other companies in a competition organized
…
experts it’s often taken almost as a given that AI systems will completely replace human radiologists in the relatively near future. Turing Award winner Geoffrey Hinton, arguably the most prominent advocate of deep learning, said in 2016 that “we should stop training radiologists now” because “it’s just completely obvious that
…
high specificity and sensitivity using only one image (or a few contiguous images) without access to clinical information or prior studies.”48 I suspect that Geoff Hinton would argue that these limitations will inevitably be overcome, and he will very likely turn out to be right in the long run, but I
…
comes with a $1 million financial award, which is funded primarily by Google. In June 2019, the 2018 Turing Award was awarded to three men—Geoffrey Hinton, Yann LeCun and Yoshua Bengio—in recognition of their lifetime contributions to the advancement of deep neural networks. This technology—also known as deep learning
…
scientists, the technology came to be viewed as a research backwater and a likely career dead end. Everything changed in 2012 when a team from Geoff Hinton’s research lab at the University of Toronto entered the ImageNet Large Scale Visual Recognition Challenge. In this annual event, teams from many of the
…
,” which is still the primary learning algorithm used in multilayered neural networks today. Rumelhart, along with Ronald Williams, a computer scientist at Northeastern University, and Geoffrey Hinton, then at Carnegie Mellon, described how the algorithm could be used in what is now considered to be one of the most important scientific papers
…
lead deep learning to dominate the field of AI, but it would be decades before computers would become fast enough to truly leverage the approach. Geoffrey Hinton, who had been a young postdoctoral researcher working with Rumelhart at UC San Diego in 1981,11 would go on to become perhaps the most
…
ImageNet Large Scale Visual Recognition Competition held two years later, in September 2012, arguably represents the inflection point for the technology of deep learning.13 Geoff Hinton, along with Ilya Sutskever and Alex Krizhevsky from his research lab at the University of Toronto, entered a many-layered convolutional neural network that dramatically
…
I’ve sketched out here represents roughly what you might call the “standard history” of deep learning. In this telling, the 2018 Turing Award recipients Geoff Hinton, Yann LeCun and Yoshua Bengio, a professor at the University of Montreal, loom especially large—so much so that they are often referred to as
…
recognized the disruptive potential of deep neural networks and began to build research teams and incorporate the technology into their products and operations. Google hired Geoff Hinton, Yann LeCun became the director of Facebook’s new AI research lab, and the entire industry began waging a full-on talent war that pushed
…
instrumental in bringing awareness of deep learning technology to the broader public sphere. The article, written by reporter John Markoff, ends with a quote from Geoff Hinton: “The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get
…
to humanity. From the onset, OpenAI has attracted some of the field’s top researchers, including Ilya Sutskever, who was part of the team from Geoff Hinton’s University of Toronto Lab that built the neural network that triumphed at the 2012 ImageNet competition. In 2019, Sam Altman, who was then in
…
be to “solve some of the same problems that classical AI was trying to solve but using the building blocks coming from deep learning.”44 Geoff Hinton is even more disparaging of the idea, saying he doesn’t “believe hybrids are the answer” and comparing such a system to a Rube Goldberg
…
stay,” MIT Technology Review, April 23, 2020, www.technologyreview.com/2020/04/23/1000410/ai-triage-covid-19-patients-health-care. 47. Creative Destruction Lab, “Geoffrey Hinton: On radiology (video),” YouTube, November 24, 2016, www.youtube.com/watch?reload=9&v=2HMPRXstSvQ. (Part of the Machine Learning and the Market for Intelligence
…
by back-propagating errors,” Nature, volume 323, issue 6088, pp. 533–536 (1986), October 9, 1986, www.nature.com/articles/323533a0. 11. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 73. 12. Dave Gershgorn, “The data that transformed AI research—and possibly the world,” Quartz, July 26, 2017, qz.com
…
/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 13. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 77. 14. Email from Jürgen Schmidhuber to Martin Ford, January 28, 2019. 15. Jürgen Schmidhuber, “Critique of paper by ‘Deep
…
.com/2018/02/19/technology/ai-researchers-desks-boss.html. CHAPTER 5. DEEP LEARNING AND THE FUTURE OF ARTIFICIAL INTELLIGENCE 1. Martin Ford, Interview with Geoffrey Hinton, in Architects of Intelligence: The Truth about AI from the People Building It, Packt Publishing, 2018, pp. 72–73. 2. Matt Reynolds, “New computer vision
…
.digitaltrends.com/cool-tech/neuro-symbolic-ai-the-future/. 44. Ford, Interview with Yoshua Bengio, in Architects of Intelligence, p. 22. 45. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, pp. 84–85. 46. Ford, Interview with Yann LeCun, in Architects of Intelligence, p. 123. 47. Anthony M. Zador, “A critique
by Luke Dormehl · 10 Aug 2016 · 252pp · 74,167 words
, who formed an artificial neural network group which became incredibly influential in its own right. There was also a man named Geoff Hinton. The Patron Saint of Neural Networks Born in 1947, Geoff Hinton is one of the most important figures in modern neural networks. An unassuming British computer scientist, Hinton has influenced
…
,’ Sejnowski says. ‘Not least because computers at the time had less computing power than your watch does today.’ The Connectionists Aided by the work of Geoff Hinton and others, the field of neural nets boomed. In the grand tradition of each successive generation renaming themselves, the new researchers described themselves as ‘connectionists
…
, Dean Pomerleau had proved his point. Welcome to Deep Learning The next significant advance for neural networks took place in the mid-2000s. In 2005, Geoff Hinton was working at the University of Toronto, having recently returned from setting up the Gatsby Computational Neuroscience Unit at University College London. By this time
…
is what is known as ‘supervised learning’. But with so many unlabelled or incorrectly labelled images in circulation, how is the computer to learn? Fortunately, Geoff Hinton triggered a revolution in what is called ‘unsupervised learning’, in which no labels at all are provided to the computer. All the machine has access
…
I will get to see what we can do with very large-scale computation.’ After thirty years’ ploughing an often lonely furrow with neural networks, Geoff Hinton was finally playing a key role in the biggest AI company in the world. The New AI Mainstream Today, deep learning neural nets have become
…
that could never be said for classic AI. ‘A lot of the things that are now working are because people are using neural nets,’ observes Geoff Hinton. ‘The rule of thumb is that if there’s a task you want to do and you know it involves huge amounts of knowledge, that
…
been some tweaks to the underlying technology, many of today’s big advances come back to the same back-propagation algorithm that David Rumelhart and Geoff Hinton rediscovered in the 1980s. What has changed is the amount of computing power, which in turn means bigger neural networks, with more hidden layers. The
…
– let alone that it can be pinned to an exact timeframe. ‘I’m extremely impressed by his ability to predict it to the nearest year,’ Geoff Hinton says when I ask him about Kurzweil’s ideas about the Singularity. There is a pause and then he clarifies: ‘This is called sarcasm.’ ‘Seeing
…
Interviews (Conducted 2014–2016) David Ackley, Bandar Antabi, William Sims Bainbridge, Selmer Bringsjord, Adam Cheyer, Diane Cook, Gunnar Grímsson, Michael Grothaus, Ken Hayworth, Rob High, Geoff Hinton, John Hopfield, Ken Jennings, Ron Kaplan, Ross King, Carlos Laorden, Hugh Loebner, Jason Lohn, Dean Pomerleau, Mark Riedl, John Sculley, Terry Sejnowski, Lior Shamir, Richard
…
or Many?’, Daedalus, 1988. 6 Hernandez, Daniela, ‘Meet the Man Google Hired to Make AI a Reality’, Wired, 16 January 2014: wired.com/2014/01/geoffrey-hinton-deep-learning/ 7 McCarthy, John, ‘Computer-Controlled Cars’, 1968: http://www-formal.stanford.edu/jmc/progress/cars.ps
by Brian Christian · 5 Oct 2020 · 625pp · 167,349 words
. By 1973, both the US and British governments have pulled their funding support for neural network research, and when a young English psychology student named Geoffrey Hinton declares that he wants to do his doctoral work on neural networks, again and again he is met with the same reply: “Minsky and Papert
…
out hot exhaust, for two weeks. “It was very hot,” he says. “And it was loud.”11 He is teaching the machine how to see. Geoffrey Hinton, Krizhevsky’s mentor, is now 64 years old and has not given up. There is reason for hope. By the 1980s it became understood that
…
, Alison Gopnik, Samir Goswami, Hilary Greaves, Joshua Greene, Tom Griffiths, David Gunning, Gillian Hadfield, Dylan Hadfield-Menell, Moritz Hardt, Tristan Harris, David Heeger, Dan Hendrycks, Geoff Hinton, Matt Huebert, Tim Hwang, Geoffrey Irving, Adam Kalai, Henry Kaplan, Been Kim, Perri Klass, Jon Kleinberg, Caroline Knapp, Victoria Krakovna, Frances Kreimer, David Kreuger, Kaitlyn
…
once more to penetrate their subtle & profound paradoxes about the knower & the known.” And then, in a trembling script, all caps: “BE THOU WELL.” 10. Geoff Hinton, “Lecture 2.2—Perceptrons: First-generation Neural Networks” (lecture), Neural Networks for Machine Learning, Coursera, 2012. 11. Alex Krizhevsky, personal interview, June 12, 2019. 12
…
. Spielvogel, Carl. “Advertising: Promoting a Negative Quality.” New York Times, September 4, 1957. Spignesi, Stephen J. The Woody Allen Companion. Andrews McMeel, 1992. Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” Journal of Machine Learning Research 15, no. 1
by Stuart Russell and Peter Norvig · 14 Jul 2019 · 2,466pp · 668,761 words
to solve real-world problems; Judea Pearl (2011) for developing probabilistic reasoning techniques that deal with uncertainty in a principled manner; and finally Yoshua Bengio, Geoffrey Hinton, and Yann LeCun (2019) for making “deep learning” (multilayer neural networks) a critical part of modern computing. The rest of this section goes into more
…
manipulate symbols—in fact, the anthropologist Terrence Deacon’s book The Symbolic Species (1997) suggests that this is the defining characteristic of humans. Against this, Geoff Hinton, a leading figure in the resurgence of neural networks in the 1980s and 2010s, has described symbols as the “luminiferous aether of AI”—a reference
…
. In the 2012 ImageNet competition, which required classifying images into one of a thousand categories (armadillo, bookshelf, corkscrew, etc.), a deep learning system created in Geoffrey Hinton’s group at the University of Toronto (Krizhevsky et al., 2013) demonstrated a dramatic improvement over previous systems, which were based largely on handcrafted features
…
“Kelley–Bryson gradient procedure.” Although Werbos had applied it to neural networks, this idea did not become widely known until a paper by David Rumelhart, Geoff Hinton, and Ron Williams (1986) appeared in Nature giving a nonmathematical presentation of the algorithm. Mathematical respectability was enhanced by papers showing that multilayer feedforward networks
…
networks waned as other techniques such as Bayes nets, ensemble methods, and kernel machines came to the fore. Interest in deep models was sparked when Geoff Hinton’s research on deep Bayesian networks—generative models with category variables at the root and evidence variables at the leaves—began to bear fruit, outperforming
…
-on guides associated with the various open-source software packages for deep learning. Three of the leaders of the field—Yann LeCun, Yoshua Bengio, and Geoff Hinton—introduced the key ideas to non-AI researchers in an influential Nature article (2015). The three were recipients of the 2018 Turing Award. Schmidhuber (2015
…
optimized through gradient descent. Our computer models will learn from conversations with human experts as well as by using all the available data. Yann LeCun, Geoffrey Hinton, and others have suggested that the current emphasis on supervised learning (and to a lesser extent reinforcement learning) is not sustainable—that computer models will
…
a value function over states. He suggests that GANs (generative adversarial networks) can be used to learn to minimize the difference between predictions and reality. Geoffrey Hinton stated in 2017 that “My view is throw it all away and start again,” meaning that the overall idea of learning by adjusting parameters in
by Terrence J. Sejnowski · 27 Sep 2018
self-driving cars, Google Glass, and Google Brain.2 Google was one of the first Internet companies to embrace deep learning; in 2013, it hired Geoffrey Hinton, the father of deep learning, and other companies are racing to catch up. The recent progress in artificial intelligence (AI) was made by reverse engineering
…
Seymour Papert published Perceptrons, which pointed out the computational limitations of a single artificial neuron and marked the beginning of a neural network winter. 1979—Geoffrey Hinton and James Anderson organized the Parallel Models of Associative Memory workshop in La Jolla, California, which brought together a new generation of neural network pioneers
…
Musk and other Silicon Valley entrepreneurs set up a new company, OpenAI, with a one-billion-dollar nest egg and hired Ilya Sutskever, one of Geoffrey Hinton’s former students, to be its first director. Although OpenAI’s stated goal was to ensure that future AI discoveries would be publicly available for
…
intelligence. Drawing on this classic musical, the theme of this chapter is “If AI Only Had a Brain and a Heart.” How the Brain Works Geoffrey Hinton (figure 4.1) and I had similar beliefs about the promise of neural network models when we met at a workshop that Geoffrey organized in
…
of the world’s tallest mountain, which now bears his name. (B) Hinton in 1994. These two photos were taken fifteen years apart. Courtesy of Geoffrey Hinton. Geoffrey received an undergraduate degree in psychology at the University of Cambridge and a doctorate in artificial intelligence from the University of Edinburgh. His thesis
…
Carnegie Mellon took neural networks seriously. Rule-based symbol processing received most of the funding—and generated most of the jobs. Early Pioneers In 1979, Geoffrey Hinton and James Anderson, a psychologist at Brown University, organized the Parallel Models of Associative Memory workshop in La Jolla, California.6 Most participants were meeting
…
manipulate logical expressions are at the heart of digital computing and were a natural starting point for fledgling efforts in artificial intelligence in the 1950s. Geoffrey Hinton, who happens to be Boole’s great-great-grandson, is proud to have a pen once used by Boole and handed down in his family
…
us what its purpose was. Neurons are in the business of processing signals that carry information, and computation was Figure 4.6 Terry Sejnowski and Geoffrey Hinton discussing network models of vision in Boston in 1980. This was one year after Geoffrey and I met at the Parallel Models of Associative Memory
…
one year before I started my lab at Johns Hopkins in Baltimore and Geoffrey started his research group at Carnegie Mellon in Pittsburgh. Courtesy of Geoffrey Hinton. the missing link in trying to understand nature. I have over the last forty years been pursuing this goal, pioneering a
…
new field called “computational neuroscience.” After his stint as a postdoctoral fellow at UCSD, Geoffrey Hinton returned to England, where he had a research position with the Applied Psychology Unit of the Medical Research Council (MRC) at Cambridge. One day in
…
Hebb rule for synaptic plasticity. 1982—John Hopfield publishes “Neural Networks and Physical Systems with Emergent Collective Computational Abilities,” which introduced the Hopfield net. 1985—Geoffrey Hinton and Terry Sejnowski publish “A Learning Algorithm for Boltzmann Machines,” which was a counterexample to Marvin Minsky and Seymour Papert’s widely accepted belief that
…
no learning algorithm for multilayer networks was possible. 1986—David Rumelhart and Geoffrey Hinton publish “Learning Internal Representations by Error-Propagation,” which introduced the “backprop” learning algorithm now used for deep learning. 1988—Richard Sutton publishes “Learning to Predict
…
. 1995—Anthony Bell and Terrence Sejnowski publish “An Information-Maximization Approach to Blind Separation and Blind Deconvolution,” describing an unsupervised algorithm for Independent Component Analysis. 2013—Geoffrey Hinton’s NIPS 2012 paper “ImageNet Classification with Deep Convolutional Neural Networks” reduces the error rate for correctly classifying objects in images by 18 percent. 2017
…
I think of him whenever I’m stranded at an airport. Jerry distinguished between “scruffy” and “neat” connectionist models. Scruffy models, like the ones that Geoffrey Hinton and I worked on, distributed the representation of objects and concepts across many units in the network, whereas neat models, like the ones Jerry believed
…
,6 engineers found it hard to get the networks to solve complex computational problems. A Network with Content-Addressable Memories In the summer of 1983, Geoffrey Hinton, John Hopfield (figure 7.1), and I were at a workshop at the University of Rochester organized by Jerry
…
Energy Minimum Dana Ballard, who with Christopher Brown had written a classic book on computer vision in 1982,11 was also at the 1983 workshop. Geoffrey Hinton and I were working with Dana on a review of a new approach to analyzing images for Nature.12 The idea was that the nodes
…
finished my postdoctoral fellowship at Harvard Medical School with Stephen Kuffler and moved to my first job in the Department of Biophysics at Johns Hopkins; Geoffrey Hinton had taken a faculty position in the Computer Science Department at Carnegie Mellon, where he was fortunate to have the support of Allen Newell, who
…
Language and Speech Processing. Ben has a consulting group on data science for political and corporate clients. Learning to Recognize Handwritten Zip Codes More recently, Geoffrey Hinton and his students at the University of Toronto trained a Boltzmann machine with three layers of hidden units to classify handwritten zip codes with high
…
be used either in its supervised version, where both inputs and outputs are clamped, or in its unsupervised version, where only the inputs are clamped. Geoffrey Hinton used the unsupervised version to build up a deep Boltzmann machine one layer at a time.22 Starting with a layer of hidden units connected
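The layer-at-a-time recipe described here rests on training one restricted Boltzmann machine at a time with contrastive divergence. A minimal sketch of a single CD-1 weight update, assuming binary units, no bias terms, and illustrative names throughout (not code from the book):

    import numpy as np

    rng = np.random.default_rng(2)
    v0 = (rng.random(6) < 0.5).astype(float)   # a binary "data" vector
    W = 0.01 * rng.normal(size=(6, 4))         # visible-to-hidden weights
    lr = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Positive phase: hidden activities driven by the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(4) < ph0).astype(float)

    # Negative phase: one reconstruction step (CD-1).
    pv1 = sigmoid(W @ h0)
    v1 = (rng.random(6) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)

    # Hebbian-style update: raise data correlations, lower reconstruction ones.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

Once one layer has been trained this way, its hidden activities become the “data” for the next restricted Boltzmann machine, which is how the deep network gets built up one layer at a time.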
…
, rule-based tradition that was dominant in artificial intelligence research during the 1970s. When I first met David in 1979 at the workshop organized by Geoffrey Hinton at UC San Diego, he was pioneering a new approach to human psychology that he and James McClelland called “parallel distributed processing” (PDP). David thought
…
that multilayer networks could be trained using a Boltzmann machine was out, there was an explosion of new learning algorithms. At the same time that Geoffrey Hinton and I were working on the Boltzmann machine, David Rumelhart had developed another learning algorithm for multilayer networks that proved to be even more productive
…
machine learning algorithm has, backprop is more efficient, and it has made possible much more rapid progress. The classic backprop paper, coauthored by David Rumelhart, Geoffrey Hinton, and Ronald Williams, appeared in Nature in 1986,4 and since then, it has been cited more than 40,000 times in other research papers
…
down.”5 To make matters worse, not everyone pronounces a word the same way. There are many dialects, each with its own set of rules. Geoffrey Hinton visited Charlie and me at Johns Hopkins during this early planning period and told us he thought that English pronunciation would be too hard to
…
of language further supports the possibility that brains do not need to use explicit rules for language, even though behavior might suggest that they do. Geoffrey Hinton, David Touretzky, and I organized the first Connectionist Summer School at Carnegie Mellon in 1986 (figure 8.3), at a time when Figure 8.3
…
Students at the 1986 Connectionist Summer School at Carnegie Mellon University. Geoffrey Hinton is in the first row, third from right, flanked by Terry Sejnowski and James McClelland. This photo is a who’s who in neural computing
…
today. Neural networks in the 1980s were a bit of twenty-first-century science in the twentieth century. Courtesy of Geoffrey Hinton. only a few universities had faculty who offered courses on neural networks. In a skit based on NETtalk, the students lined up
…
others. When you are at a saddle, there is always a direction to go downhill. One particularly clever regularization technique, called “dropout,” was invented by Geoffrey Hinton.15 On every learning epoch, when the gradient is estimated from a number of training examples and a step is made in weight space, half
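A minimal sketch of the dropout idea being introduced here, using the “inverted” scaling convention so that test-time code needs no change; the names and sizes are illustrative assumptions, not from the book:

    import numpy as np

    rng = np.random.default_rng(3)
    h = rng.normal(size=8)                # hidden-layer activities
    keep = 0.5                            # keep each unit with probability 0.5

    mask = rng.random(h.shape) < keep     # a fresh random mask on every pass
    h_train = h * mask / keep             # silenced units contribute nothing
    h_test = h                            # at test time, all units stay on

Because half the hidden units vanish on each pass, no unit can rely on fixed partners; dividing by the keep probability preserves each unit’s expected contribution, so the full network can be used unchanged at test time.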
…
once characterized the time between scientific revolutions as the normal work of scientists theorizing, observing, and experimenting within a settled paradigm or explanatory framework.1 Geoffrey Hinton moved to the University of Toronto in 1987 and continued with a steady stream of incremental improvements, although none of them had the magic that
…
each new category of objects requires a domain expert to identify the pose-invariant features needed to distinguish them from other objects. Then, in 2012, Geoffrey Hinton and two students, Alex Krizhevsky and Ilya Sutskever, submitted a paper to the NIPS conference on object recognition in images that used deep learning to
…
rate in many ways resembles the visual cortex; it was introduced by Yann LeCun, who originally called it “Le Net.” A student in France when Geoffrey Hinton and I first met him in the 1980s, Yann LeCun (figure 9.1, right) was inspired by HAL 9000, the mission computer in the epic
…
he trained a network that could read handwritten zip codes on letters, using the Modified National Institute of Standards and Technology (MNIST) Figure 9.1 Geoffrey Hinton and Yann LeCun have mastered deep learning. This photo was taken at a meeting of the Neural Computation and Adaptive Perception Program of the Canadian
…
Institute for Advanced Research around 2000, a program that was an incubator for what became the field of deep learning. Courtesy of Geoffrey Hinton. database, a labeled data benchmark. Millions of letters each day have to be routed to mailboxes; today this is fully automated. The
…
shed much welcome light on the learning algorithms discovered by nature. Yoshua Bengio28 (figure 9.8) at the University of Montreal and Yann LeCun succeeded Geoffrey Hinton as the directors of CIFAR’s Neural Computation and Adaptive Perception (NCAP) Program when it passed its ten-year review and was renamed “Learning in Machines
…
Brains Program. A French-born Canadian computer scientist, Yoshua has been a leader in applying deep learning to problems in natural language. Advances made by Geoffrey Hinton, Yann LeCun, and Yoshua Bengio were seminal for the successes of deep learning. Courtesy of Yoshua Bengio. They are coddled by researchers
…
Deep Learning at the Gaming Table Deep learning came of age at the 2012 NIPS Conference at Lake Tahoe (figure 11.3). Geoffrey Hinton, an early pioneer in neural networks, and his students presented a paper reporting that neural networks with many layers were remarkably good at recognizing objects
…
start-up self-driving company like Otto or Cruise is bought by a larger company, the cost is $10 million per machine learning expert.40 Geoffrey Hinton became an employee of Google in 2013 when it bought his company, DNNresearch, which consisted of Geoffrey and two of his graduate students at the
…
Cool Chips
I first met Carver Mead (figure 14.1) at a workshop held at a resort outside Pittsburgh in 1983. Geoffrey Hinton had assembled a small group to explore where neural networks were heading. Mead was famous for his major contributions in computer science. He was the
…
shared their insights and advice; and I have had the privilege of working with a generation of exceptionally talented students. I am especially grateful to Geoffrey Hinton, John Hopfield, Bruce Knight, Stephen Kuffler, Michael Stimac, and John Wheeler, as I am to my father-in-law, Solomon Golomb, who at various turning points
…
ideas for the book, including Yoshua Bengio, Sydney Brenner, Andrea Chiba, Gary Cottrell, Kendra Crick, Rodney Douglas, Paul Ekman, Michaela Ennis, Jerome Feldman, Adam Gazzaley, Geoffrey Hinton, Jonathan C. Howard, Irwin Jacobs, Scott Kirkpatrick, Mark and Jack Knickrehm, Te-Won Lee, David Linden, James McClelland, Saket Navlakha, Barbara Oakley, Tomaso Poggio, Charles
…
the first NIPS conference that I would be standing here today addressing 8,000 attendees—I thought it would only take 10 years.” I visited Geoff Hinton at Mountain View in April 2016. Google Brain has an entire floor of a building. We reminisced about the old days and came to the conclusion
…
of Medicine, the National Academy of Engineering, the National Academy of Inventors, and the American Academy of Arts and Sciences, a rare honor. I owe Geoffrey Hinton a great debt of gratitude for sharing his insights into computing with networks over many years. As a graduate student at Princeton University, I pursued
…
Talent Is $10 Million per Person,” Recode, September 17, 2016. https://www.recode.net/2016/9/17/12943214/sebastian-thrun-self-driving-talent-pool. 41. Geoffrey Hinton is the chief scientific advisor of the Vector Institute. See http://vectorinstitute.ai/. 42. Paul Mozur and John Markoff, “Is China Outsmarting America in A
…
, 99
Boltzmann learning, unsupervised, 106
Boltzmann machine
    backpropagation of errors contrasted with, 112
    Charles Rosenberg on, 112
    criticisms of, 106
    diagram, 98b
    at equilibrium, 99
    Geoffrey Hinton and, 49, 79, 104, 105f, 106, 110, 112, 127
    hidden units, 98b, 101, 102, 104, 106, 109
    learning mirror symmetries, 102, 104
    limitations, 107
    multilayer
…
Golomb, Solomon “Sol” Wolf (Beatrice’s father), 220–224, 222f, 271, 273
Goodfellow, Ian, 135
Google, 20, 191, 205
    deep learning and, ix, 7, 192
    Geoffrey Hinton and, ix, 191, 273
    PageRank algorithm, 311n4
    self-driving cars, ix, 4, 6
    TensorFlow and, 205–206
    tensor processing unit (TPU), 7, 205
Google Assistant
by Keach Hagey · 19 May 2025 · 439pp · 125,379 words
in the world of AI with several breakthroughs under his belt, most notably the 2012 “AlexNet” paper that he co-authored with Alex Krizhevsky and Geoff Hinton, which had revived interest in neural networks. He went on to work at Google after the company bought their small neural networks startup. At Google
…
parents to get the university to accept him without a high school degree, but as soon as it did, he headed for the door of Geoff Hinton, the legendary AI researcher. He was seventeen, and while his main job was selling french fries at the nearby Paramount Wonderland amusement park, he asked
…
and profound ways,” McCauley said in an interview with the Chicago Toy & Game Group.27 The Vivarium program’s advisory council included such heavyweights as Geoff Hinton, Marvin Minsky, Douglas Adams (the author of The Hitchhiker’s Guide to the Galaxy), and Koko, the sign language–speaking gorilla. Kay remained a lifelong
…
statement, Connie, Sam, Max, and Jack said the allegations were “utterly untrue.”2 All around him, Altman’s critics and enemies seemed ascendant. In October, Geoff Hinton, the “godfather of AI” who had mentored Sutskever, was awarded the Nobel Prize in Physics for his work in machine learning. During the press conference
…
Altman Denies Sexual Abuse Claims Made by Sister,” The Wall Street Journal, January 8, 2025. 3. University of Toronto, “University of Toronto Press Conference—Professor Geoffrey Hinton, Nobel Prize in Physics 2024,” YouTube, October 8, 2024. 4. Cade Metz, “‘The Godfather of AI’ Leaves Google and Warns of Danger Ahead,” The New
by Anil Ananthaswamy · 15 Jul 2024 · 416pp · 118,522 words
to people with only a rudimentary knowledge of the field, but he is also a very good writer who brings the social history to life.” —GEOFFREY HINTON, deep learning pioneer, Turing Award winner, former VP at Google, and professor emeritus at the University of Toronto
“After just a few minutes of reading
…
behind ChatGPT. More than a decade ago, as a young undergraduate student looking for an academic advisor at the University of Toronto, Sutskever knocked on Geoffrey Hinton’s door. Hinton was already a well-known name in the field of “deep learning,” a form of machine learning, and Sutskever wanted to work
…
dealt to the field by Marvin Minsky and Seymour Papert in their 1969 book, Perceptrons. (We’ll meet other researchers in subsequent chapters, in particular Geoff Hinton and Yann LeCun, who also kept the faith.) Recall that Frank Rosenblatt and others had shown, using the perceptron convergence theorem, that the perceptron will
…
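As a concrete companion to the perceptron convergence theorem mentioned above, here is a minimal sketch (mine, not from the book) of Rosenblatt's mistake-driven learning rule on a toy linearly separable problem, the AND function; the theorem guarantees that this loop terminates whenever a separating hyperplane exists:

```python
import numpy as np

# Toy linearly separable data: the AND function, with labels in {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, +1])

w = np.zeros(2)   # weights
b = 0.0           # bias

# Perceptron rule: whenever an example is misclassified, nudge the weights
# toward (for positive labels) or away from (for negative labels) that example.
converged = False
while not converged:
    converged = True
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # a mistake, or sitting on the boundary
            w += yi * xi
            b += yi
            converged = False

print("weights:", w, "bias:", b)
print("predictions:", np.sign(X @ w + b))   # [-1, -1, -1, +1]
```

On data that is not linearly separable, such as XOR, the same loop never finds a pass with zero mistakes and never terminates, which is exactly the limitation Minsky and Papert made precise.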
of researchers had begun elucidating the fundamental elements of an algorithm that could be used to train multi-layer networks. Then, in 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a seminal paper in the journal Nature, showing off the strengths of a training algorithm called backpropagation, thus greasing the wheels
…
-layer perceptrons could not solve something as simple as the XOR problem. I brought up the Minsky-Papert proof early on in my conversation with Geoffrey Hinton, one of the key figures behind the modern deep learning revolution. Hinton got interested in neural networks in the mid-1960s, when he was still
…
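To make the Minsky-Papert point in the excerpt above concrete, here is a small illustration of my own, with hand-picked weights rather than anything from the interview: no single linear threshold unit can compute XOR, but one hidden layer is enough, because the hidden units can compute OR and AND and the output unit can combine them:

```python
import numpy as np

def step(z):
    """Linear threshold unit: 1 if the weighted sum is positive, else 0."""
    return (z > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: column 0 computes OR(x1, x2), column 1 computes AND(x1, x2).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output unit: fires when OR is true and AND is false, which is exactly XOR.
w_out = np.array([1.0, -1.0])
b_out = -0.5

h = step(X @ W_hidden + b_hidden)
print(step(h @ w_out + b_out))   # [0 1 1 0], the XOR truth table
```

No choice of weights for a single threshold unit can reproduce that output column, because the four input points are not linearly separable; with the hidden layer, two linear boundaries are combined into the region XOR requires.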
, other models built using different types of deep neural networks, such as CNNs, are showing surprising correspondence with at least some aspects of brain function. Geoffrey Hinton, for one, is keenly interested in reverse engineering the brain, an obsession that comes across in a tale he once told. In 2007, before neural
…
, Philip Stark, Patrick Juola, Marcello Pelillo, Peter Hart, Emery Brown, John Abel, Bernhard Boser, Isabelle Guyon, Manfred K. Warmuth, David Haussler, John Hopfield, George Cybenko, Geoffrey Hinton, Yann LeCun, Mikhail Belkin, Alethea Power, Peter Bartlett, and Alexei Efros. Also, thanks to Demis Hassabis for an inspiring conversation. I’m also grateful to
…
(December 1989): 303–14.
CHAPTER 10: THE ALGORITHM THAT PUT PAID TO A PERSISTENT MYTH
“Yes”: Zoom interview with Geoffrey Hinton on October 1, 2021. This and subsequent quotes by Hinton are from this author interview.
a theoretical chemist: Chris
by Azeem Azhar · 6 Sep 2021 · 447pp · 111,991 words
by Eric Topol · 1 Jan 2019 · 424pp · 114,905 words
by John Markoff · 24 Aug 2015 · 413pp · 119,587 words
by Stuart Russell · 7 Oct 2019 · 416pp · 112,268 words
by Christopher Summerfield · 11 Mar 2025 · 412pp · 122,298 words
by Kevin Kelly · 6 Jun 2016 · 371pp · 108,317 words
by Kashmir Hill · 19 Sep 2023 · 487pp · 124,008 words
by Ajay Agrawal, Joshua Gans and Avi Goldfarb · 16 Apr 2018 · 345pp · 75,660 words
by Stephen Witt · 8 Apr 2025 · 260pp · 82,629 words
by Aurélien Géron · 13 Mar 2017 · 1,331pp · 163,200 words
by Steven Pinker · 1 Jan 1997 · 913pp · 265,787 words
by Nicole Kobie · 3 Jul 2024 · 348pp · 119,358 words
by James Vlahos · 1 Mar 2019 · 392pp · 108,745 words
by Melanie Mitchell · 14 Oct 2019 · 350pp · 98,077 words
by Eliezer Yudkowsky and Nate Soares · 15 Sep 2025 · 215pp · 64,699 words
by Kai-Fu Lee · 14 Sep 2018 · 307pp · 88,180 words
by Parmy Olson · 284pp · 96,087 words
by Tim Wu · 4 Nov 2025 · 246pp · 65,143 words
by Trevor Hastie, Robert Tibshirani and Jerome Friedman · 25 Aug 2009 · 764pp · 261,694 words
by Jordan Ellenberg · 14 May 2021 · 665pp · 159,350 words
by Adam Becker · 14 Jun 2025 · 381pp · 119,533 words
by Mustafa Suleyman · 4 Sep 2023 · 444pp · 117,770 words
by Michiko Kakutani · 20 Feb 2024 · 262pp · 69,328 words
by Paul Scharre · 18 Jan 2023
by Yuval Noah Harari · 9 Sep 2024 · 566pp · 169,013 words
by Peter H. Diamandis and Steven Kotler · 3 Feb 2015 · 368pp · 96,825 words
by Maximilian Kasy · 15 Jan 2025 · 209pp · 63,332 words
by John Brockman · 19 Feb 2019 · 339pp · 94,769 words
by George Gilder · 16 Jul 2018 · 332pp · 93,672 words
by Richard Yonck · 7 Mar 2017 · 360pp · 100,991 words
by Kenneth Payne · 16 Jun 2021 · 339pp · 92,785 words
by Robert Elliott Smith · 26 Jun 2019 · 370pp · 107,983 words
by Hannah Fry · 17 Sep 2018 · 296pp · 78,631 words
by Steven Levy · 25 Feb 2020 · 706pp · 202,591 words
by Daron Acemoglu and Simon Johnson · 15 May 2023 · 619pp · 177,548 words
by Rutger Bregman · 9 Mar 2025 · 181pp · 72,663 words
by Sonja Thiel and Johannes C. Bernhardt · 31 Dec 2023 · 321pp · 113,564 words
by Igor Tulchinsky · 30 Sep 2019 · 321pp
by Tim Berners-Lee · 8 Sep 2025 · 347pp · 100,038 words
by Jeff Hawkins · 15 Nov 2021 · 253pp · 84,238 words
by Ash Fontana · 4 May 2021 · 296pp · 66,815 words
by Hod Lipson and Melba Kurman · 22 Sep 2016
by Nick Bostrom · 3 Jun 2014 · 574pp · 164,509 words
by Aaron Bastani · 10 Jun 2019 · 280pp · 74,559 words
by Martin Ford · 4 May 2015 · 484pp · 104,873 words
by Michael Kearns and Aaron Roth · 3 Oct 2019
by David Sumpter · 18 Jun 2018 · 276pp · 81,153 words
by Madhumita Murgia · 20 Mar 2024 · 336pp · 91,806 words
by Jeff Booth · 14 Jan 2020 · 180pp · 55,805 words
by Geoffrey Cain · 28 Jun 2021 · 340pp · 90,674 words
by William Poundstone · 3 Jun 2019 · 283pp · 81,376 words
by Kevin Roose · 9 Mar 2021 · 208pp · 57,602 words
by Paul R. Daugherty and H. James Wilson · 15 Jan 2018 · 523pp · 61,179 words
by Mariya Yao, Adelyn Zhou and Marlene Jia · 1 Jun 2018 · 161pp · 39,526 words
by Aurélien Géron · 14 Aug 2019
by Ethan Mollick · 2 Apr 2024 · 189pp · 58,076 words
by Pedro Domingos · 21 Sep 2015 · 396pp · 117,149 words
by Michael Wooldridge · 2 Nov 2018 · 346pp · 97,890 words
by Calum Chace · 17 Jul 2016 · 477pp · 75,408 words
by Rob Reich, Mehran Sahami and Jeremy M. Weinstein · 6 Sep 2021
by Amy Webb · 5 Mar 2019 · 340pp · 97,723 words
by Andrew McAfee and Erik Brynjolfsson · 26 Jun 2017 · 472pp · 117,093 words
by Clive Thompson · 26 Mar 2019 · 499pp · 144,278 words
by Calum Chace · 28 Jul 2015 · 144pp · 43,356 words
by Martin J. Rees · 14 Oct 2018 · 193pp · 51,445 words
by Erik J. Larson · 5 Apr 2021
by Brian Christian and Tom Griffiths · 4 Apr 2016 · 523pp · 143,139 words
by Gary Marcus and Jeremy Freeman · 1 Nov 2014 · 336pp · 93,672 words
by Franklin Foer · 31 Aug 2017 · 281pp · 71,242 words
by Brett King · 5 May 2016 · 385pp · 111,113 words
by Nick Chater · 28 Mar 2018 · 263pp · 81,527 words