Geoffrey Hinton

69 results

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

TIMELINE
1960—Cornell professor Frank Rosenblatt builds the Mark I Perceptron, an early “neural network,” at a lab in Buffalo, New York.
1969—MIT professors Marvin Minsky and Seymour Papert publish Perceptrons, pinpointing the flaws in Rosenblatt’s technology.
1971—Geoff Hinton starts a PhD in artificial intelligence at the University of Edinburgh.
1973—The first AI winter sets in.
1978—Geoff Hinton starts a postdoc at the University of California–San Diego.
1982—Carnegie Mellon University hires Geoff Hinton.
1984—Geoff Hinton and Yann LeCun meet in France.
1986—David Rumelhart, Geoff Hinton, and Ronald Williams publish their paper on “backpropagation,” expanding the powers of neural networks. Yann LeCun joins Bell Labs in Holmdel, New Jersey, where he begins building LeNet, a neural network that can recognize handwritten digits.
1987—Geoff Hinton leaves Carnegie Mellon for the University of Toronto.
1989—Carnegie Mellon graduate student Dean Pomerleau builds ALVINN, a self-driving car based on a neural network.
1992—Yoshua Bengio meets Yann LeCun while doing postdoctoral research at Bell Labs.
1993—The University of Montreal hires Yoshua Bengio.
1998—Geoff Hinton founds the Gatsby Computational Neuroscience Unit at University College London.
1990s–2000s—Another AI winter.
2000—Geoff Hinton returns to the University of Toronto.
2003—Yann LeCun moves to New York University.
2004—Geoff Hinton starts “neural computation and adaptive perception” workshops with funding from the Canadian government.


Andrew Ng, Jeff Dean, and Greg Corrado found Google Brain. Google deploys speech recognition service based on deep learning.
2012—Andrew Ng, Jeff Dean, and Greg Corrado publish the Cat Paper. Andrew Ng leaves Google. Geoff Hinton “interns” at Google Brain. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky publish the AlexNet paper. Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky auction their company, DNNresearch.
2013—Geoff Hinton, Ilya Sutskever, and Alex Krizhevsky join Google. Mark Zuckerberg and Yann LeCun found the Facebook Artificial Intelligence Research lab.
2014—Google acquires DeepMind. Ian Goodfellow publishes the GAN paper, describing a way of generating photos.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

Rumelhart, along with Ronald Williams, a computer scientist at Northeastern University, and Geoffrey Hinton, then at Carnegie Mellon, described how the algorithm could be used in what is now considered to be one of the most important scientific papers in artificial intelligence, published in the journal Nature in 1986.10 Backpropagation represented the fundamental conceptual breakthrough that would someday lead deep learning to dominate the field of AI, but it would be decades before computers would become fast enough to truly leverage the approach. Geoffrey Hinton, who had been a young postdoctoral researcher working with Rumelhart at UC San Diego in 1981,11 would go on to become perhaps the most prominent figure in the deep learning revolution.

Williams, “Learning representations by back-propagating errors,” Nature, volume 323, issue 6088, pp. 533–536 (1986), October 9, 1986, www.nature.com/articles/323533a0. 11. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 73. 12. Dave Gershgorn, “The data that transformed AI research—and possibly the world,” Quartz, July 26, 2017, qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/. 13. Ford, Interview with Geoffrey Hinton, in Architects of Intelligence, p. 77. 14. Email from Jürgen Schmidhuber to Martin Ford, January 28, 2019. 15. Jürgen Schmidhuber, “Critique of paper by ‘Deep Learning Conspiracy’ (Nature 521 p 436),” June 2015, people.idsia.ch/~juergen/deep-learning-conspiracy.html. 16.

The systems, he wrote, have no ability to integrate information from “clinical notes, laboratory values, prior images” and the like. As a result, the technology has so far excelled only with “entities that can be detected with high specificity and sensitivity using only one image (or a few contiguous images) without access to clinical information or prior studies.”48 I suspect that Geoff Hinton would argue that these limitations will inevitably be overcome, and he will very likely turn out to be right in the long run, but I think it will be a gradual process rather than a sudden disruption. An additional reality is that there are a variety of challenging hurdles beyond the capability of the technology itself that will probably make it very difficult to send radiologists—or any other medical specialists—to the unemployment line anytime soon.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

MARTIN FORD: So, it was a strategic investment on the part of the Canadian government to keep deep learning alive?
GEOFFREY HINTON: Yes. Basically, the Canadian government is significantly investing in advanced deep learning by spending half a million dollars a year, which is pretty efficient for something that’s going to turn into a multi-billion-dollar industry.
MARTIN FORD: Speaking of Canadians, do you have any interaction with your fellow faculty member, Jordan Peterson? It seems like there’s all kinds of disruption coming out of the University of Toronto...
GEOFFREY HINTON: Ha! Well, all I’ll say about that is that he’s someone who doesn’t know when to keep his mouth shut.
GEOFFREY HINTON received his undergraduate degree from King’s College, Cambridge, and his PhD in Artificial Intelligence from the University of Edinburgh in 1978.

His research has covered many topics related to AI, such as machine learning, knowledge representation, and computer vision, and he has received numerous awards and distinctions, including the IJCAI Computers and Thought Award and election as a fellow of the American Association for the Advancement of Science, the Association for the Advancement of Artificial Intelligence, and the Association for Computing Machinery.
Chapter 4. GEOFFREY HINTON
In the past when AI has been overhyped—including backpropagation in the 1980s—people were expecting it to do great things, and it didn’t actually do things as great as they hoped. Today, it’s already done great things, so it can’t possibly all be just hype.
EMERITUS DISTINGUISHED PROFESSOR OF COMPUTER SCIENCE, UNIVERSITY OF TORONTO
VICE PRESIDENT & ENGINEERING FELLOW, GOOGLE
Geoffrey Hinton is sometimes known as the Godfather of Deep Learning, and he has been the driving force behind some of its key technologies, such as backpropagation, Boltzmann machines, and the Capsules neural network.

They got almost half the error rate of the best computer vision systems, and they were using mainly techniques developed in Yann LeCun’s lab but mixed in with a few of our own techniques as well.
MARTIN FORD: This was the ImageNet competition?
GEOFFREY HINTON: Yes, and what happened then was what should happen in science. One method that people used to think of as complete nonsense had now worked much better than the method they believed in, and within two years, they all switched. So, for things like object classification, nobody would dream of trying to do it without using a neural network now.
MARTIN FORD: This was back in 2012, I believe. Was that the inflection point for deep learning?
GEOFFREY HINTON: For computer vision, that was the inflection point. For speech, the inflection point was a few years earlier.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

These included the likes of David Rumelhart and James McClelland, two cognitive scientists at the University of California San Diego, who formed an artificial neural network group which became incredibly influential in its own right. There was also a man named Geoff Hinton.
The Patron Saint of Neural Networks
Born in 1947, Geoff Hinton is one of the most important figures in modern neural networks. An unassuming British computer scientist, Hinton has influenced the development of his chosen field on a level few others can approach. He comes from a long line of impressive mathematical thinkers: his great-great-grandfather is the famous logician George Boole, whose Boolean algebra laid the foundations for modern computer science.

The noise it produces sounds like vocal exercises a singer might perform to warm up his or her voice. After training on 1,000 words, NETtalk’s speech became far more recognisably human. ‘We were absolutely amazed,’ Sejnowski says. ‘Not least because computers at the time had less computing power than your watch does today.’
The Connectionists
Aided by the work of Geoff Hinton and others, the field of neural nets boomed. In the grand tradition of each successive generation renaming themselves, the new researchers described themselves as ‘connectionists’, since they were interested in replicating the neural connections in the brain. By 1991, there were 10,000 active connectionist researchers in the United States alone.

It would be another fifteen years, until October 2010, before Google announced its own self-driving car initiative. However, thanks to his groundbreaking work in neural nets, Dean Pomerleau had proved his point.
Welcome to Deep Learning
The next significant advance for neural networks took place in the mid-2000s. In 2005, Geoff Hinton was working at the University of Toronto, having recently returned from setting up the Gatsby Computational Neuroscience Unit at University College London. By this time it was clear that the Internet was helping to generate enormous data sets which would have been unimaginable even a decade before.

The Deep Learning Revolution (The MIT Press)
by Terrence J. Sejnowski
Published 27 Sep 2018

At the opening session at NIPS 2017 in Long Beach, I marveled at the growth of NIPS: “Little did I know 30 years ago at the first NIPS conference that I would be standing here today addressing 8,000 attendees—I thought it would only take 10 years.” I visited Geoff Hinton at Mountain View in April 2016. Google Brain has an entire floor of a building. We reminisced about the old days and came to the conclusion that we had won, but it took a lot longer than we had expected. Along the way, Geoff was elected to the Royal Societies of both England and Canada and I was elected to the National Academy of Sciences, the National Academy of Medicine, the National Academy of Engineering, the National Academy of Inventors, and the American Academy of Arts and Sciences, a rare honor. I owe Geoffrey Hinton a great debt of gratitude for sharing his insights into computing with networks over many years.

Recent experiments on neural network learning of language support the gradual acquisition of inflectional morphology, consistent with human learning.12 The success of deep learning with Google Translate and other natural language applications in capturing the nuances of language further supports the possibility that brains do not need to use explicit rules for language, even though behavior might suggest that they do. Geoffrey Hinton, David Touretzky, and I organized the first Connectionist Summer School at Carnegie Mellon in 1986 (figure 8.3), at a time when only a few universities had faculty who offered courses on neural networks. [Figure 8.3: Students at the 1986 Connectionist Summer School at Carnegie Mellon University. Geoffrey Hinton is in the first row, third from right, flanked by Terry Sejnowski and James McClelland. This photo is a who’s who in neural computing today. Neural networks in the 1980s were a bit of twenty-first-century science in the twentieth century. Courtesy of Geoffrey Hinton.]

Even a perfect physical model of how a neuron worked wouldn’t tell us what its purpose was. Neurons are in the business of processing signals that carry information, and computation was the missing link in trying to understand nature. I have over the last forty years been pursuing this goal, pioneering a new field called “computational neuroscience.” [Figure 4.6: Terry Sejnowski and Geoffrey Hinton discussing network models of vision in Boston in 1980. This was one year after Geoffrey and I met at the Parallel Models of Associative Memory workshop in La Jolla and one year before I started my lab at Johns Hopkins in Baltimore and Geoffrey started his research group at Carnegie Mellon in Pittsburgh. Courtesy of Geoffrey Hinton.]

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

That souring, along with a serious reduction of research output and grant support, led to the “AI winter,” as it became known, which lasted about twenty years. It started to come out of hibernation when the term “deep learning” was coined by Rina Dechter in 1986 and later popularized by Geoffrey Hinton, Yann LeCun and Yoshua Bengio. By the late 1980s, multilayered or deep neural networks (DNN) were gaining considerable interest, and the field came back to life. A seminal Nature paper in 1986 by David Rumelhart and Geoffrey Hinton on backpropagation provided an algorithmic method for automatic error correction in neural networks and reignited interest in the field.15 It turned out this was the heart of deep learning, adjusting the weights of the neurons of prior layers to achieve maximal accuracy for the network output.

But instead of the static BLT, we’ve got data moving through layers of computations, extracting high-level features from raw sensory data, a veritable sequence of computations. Importantly, the layers are not designed by humans; indeed, they are hidden from the human users, and they are adjusted by techniques like Geoff Hinton’s backpropagation as a DNN interacts with the data. We’ll use an example of a machine being trained to read chest X-rays. Thousands of chest X-rays, read and labeled with diagnoses by expert radiologists, provide the ground truths for the network to learn from (Figure 4.5). Once trained, the network is ready for an unlabeled chest X-ray to be input.
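The passage above describes supervised training and inference only in outline. As a purely illustrative sketch (not from the book), the following PyTorch-style loop shows the same idea: the network's output on radiologist-labeled images is compared with the ground-truth labels, backpropagation computes each weight's contribution to the error, the weights of earlier layers are adjusted, and only after training is an unlabeled image fed through for a prediction. The tiny model, the two-class label set, and all hyperparameters are hypothetical placeholders, not anything from the book.

    # Minimal sketch (assumptions only): supervised training of an image
    # classifier on labeled chest X-rays, then inference on an unlabeled one.
    import torch
    import torch.nn as nn

    model = nn.Sequential(                        # tiny stand-in for a deep CNN
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),                         # e.g. "normal" vs. "pneumonia"
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    def train_step(images, labels):
        """One pass of the loop described above: forward, compare with the
        radiologists' labels (the ground truth), backpropagate, adjust weights."""
        logits = model(images)
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()      # backpropagation computes the weight gradients
        optimizer.step()     # weights of earlier layers are adjusted
        return loss.item()

    # After training, an unlabeled X-ray (a random tensor here, as a stand-in)
    # is fed forward and the network outputs a predicted diagnosis.
    with torch.no_grad():
        new_xray = torch.randn(1, 1, 224, 224)
        prediction = model(new_xray).argmax(dim=1)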

Versus M.D.”17 The adversarial relationship between humans and their technology, which had a long history dating back to the steam engine and the first Industrial Revolution, had been rekindled.
1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Predicted ten years for AI to beat human at chess (Herbert Simon)
1958—Perceptron (single-layer neural network) (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—We know more than we can tell (Michael Polanyi’s paradox)
1969—Question AI viability (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN (Sepp Hochreiter, Jürgen Schmidhuber)
1997—Deep Blue wins in chess (Garry Kasparov)
2004—Self-driving vehicle, Mojave Desert (DARPA Challenge)
2007—ImageNet launches
2011—IBM vs. Jeopardy! champions
2011—Speech recognition NN (Microsoft)
2012—University of Toronto ImageNet classification and cat video recognition (Google Brain, Andrew Ng, Jeff Dean)
2014—DeepFace facial recognition (Facebook)
2015—DeepMind vs.

pages: 416 words: 118,522

Why Machines Learn: The Elegant Math Behind Modern AI
by Anil Ananthaswamy
Published 15 Jul 2024

NEURAL NETWORKS: THE REVIVAL BEGINS
John Hopfield was among the few researchers who did not give up on neural networks, despite the blow dealt to the field by Marvin Minsky and Seymour Papert in their 1969 book, Perceptrons. (We’ll meet other researchers in subsequent chapters, in particular Geoff Hinton and Yann LeCun, who also kept the faith.) Recall that Frank Rosenblatt and others had shown, using the perceptron convergence theorem, that the perceptron will always find a linearly separating hyperplane if the dataset can be cleanly divided into two categories. Teaching the perceptron using training data involves finding the correct set of weights for the perceptron’s inputs.

This book presents the mathematics in the context of the social history. It is a masterpiece. The author is very good at explaining the mathematics in a way that makes it available to people with only a rudimentary knowledge of the field, but he is also a very good writer who brings the social history to life.” —GEOFFREY HINTON, deep learning pioneer, Turing Award winner, former VP at Google, and professor emeritus at the University of Toronto “After just a few minutes of reading Why Machines Learn, you’ll feel your own synaptic weights getting updated. By the end you will have achieved your own version of deep learning—with deep pleasure and insight along the way.”

Fortunately, the “intellectual discomfort” in store for us is eminently endurable and more than assuaged by the intellectual payoff, because underlying modern ML is some relatively simple and elegant math—a notion that’s best illustrated with an anecdote about Ilya Sutskever. Today, Sutskever is best known as the co-founder of OpenAI, the company behind ChatGPT. More than a decade ago, as a young undergraduate student looking for an academic advisor at the University of Toronto, Sutskever knocked on Geoffrey Hinton’s door. Hinton was already a well-known name in the field of “deep learning,” a form of machine learning, and Sutskever wanted to work with him. Hinton gave Sutskever some papers to read, which he devoured. He remembers being perplexed by the simplicity of the math, compared to the math and physics of his regular undergrad coursework.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

For the task of recognizing objects in photographs, deep learning algorithms have demonstrated remarkable performance. The first inkling of this came in the 2012 ImageNet competition, which provides training data consisting of 1.2 million labeled images in one thousand categories, and then requires the algorithm to label one hundred thousand new images.4 Geoff Hinton, a British computational psychologist who was at the forefront of the first neural network revolution in the 1980s, had been experimenting with a very large deep convolutional network: 650,000 nodes and 60 million parameters. He and his group at the University of Toronto achieved an ImageNet error rate of 15 percent, a dramatic improvement on the previous best of 26 percent.5 By 2015, dozens of teams were using deep learning methods and the error rate was down to 5 percent, comparable to that of a human who had spent weeks learning to recognize the thousand categories in the test.6 By 2017, the machine error rate was 2 percent.

Blog post on inceptionism research at Google: Alexander Mordvintsev, Christopher Olah, and Mike Tyka, “Inceptionism: Going deeper into neural networks,” Google AI Blog, June 17, 2015. The idea seems to have originated with J. P. Lewis, “Creation by refinement: A creativity paradigm for gradient descent learning networks,” in Proceedings of the IEEE International Conference on Neural Networks (IEEE, 1988). 8. News article on Geoff Hinton having second thoughts about deep networks: Steve LeVine, “Artificial intelligence pioneer says we need to start over,” Axios, September 15, 2017. 9. A catalog of shortcomings of deep learning: Gary Marcus, “Deep learning: A critical appraisal,” arXiv:1801.00631 (2018). 10. A popular textbook on deep learning, with a frank assessment of its weaknesses: François Chollet, Deep Learning with Python (Manning Publications, 2017). 11.

The Baldwin effect in evolution is usually attributed to the following paper: James Baldwin, “A new factor in evolution,” American Naturalist 30 (1896): 441–51. 8. The core idea of the Baldwin effect also appears in the following work: Conwy Lloyd Morgan, Habit and Instinct (Edward Arnold, 1896). 9. A modern analysis and computer implementation demonstrating the Baldwin effect: Geoffrey Hinton and Steven Nowlan, “How learning can guide evolution,” Complex Systems 1 (1987): 495–502. 10. Further elucidation of the Baldwin effect by a computer model that includes the evolution of the internal reward-signaling circuitry: David Ackley and Michael Littman, “Interactions between learning and evolution,” in Artificial Life II, ed.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

By 1973, both the US and British governments have pulled their funding support for neural network research, and when a young English psychology student named Geoffrey Hinton declares that he wants to do his doctoral work on neural networks, again and again he is met with the same reply: “Minsky and Papert,” he is told, “have proved that these models were no good.”10
THE STORY OF ALEXNET
It is 2012 in Toronto, and Alex Krizhevsky’s bedroom is too hot to sleep. His computer, attached to twin Nvidia GTX 580 GPUs, has been running day and night at its maximum thermal load, its fans pushing out hot exhaust, for two weeks. “It was very hot,” he says. “And it was loud.”11 He is teaching the machine how to see. Geoffrey Hinton, Krizhevsky’s mentor, is now 64 years old and has not given up.

But I am certain that, at a minimum, conversations and exchanges with the following people have made the book what it is: Pieter Abbeel, Rebecca Ackerman, Dave Ackley, Ross Exo Adams, Blaise Agüera y Arcas, Jacky Alciné, Dario Amodei, McKane Andrus, Julia Angwin, Stuart Armstrong, Gustaf Arrhenius, Amanda Askell, Mayank Bansal, Daniel Barcay, Solon Barocas, Renata Barreto, Andrew Barto, Basia Bartz, Marc Bellemare, Tolga Bolukbasi, Nick Bostrom, Malo Bourgon, Tim Brennan, Miles Brundage, Joanna Bryson, Krister Bykvist, Maya Çakmak, Ryan Carey, Joseph Carlsmith, Rich Caruana, Ruth Chang, Alexandra Chouldechova, Randy Christian, Paul Christiano, Jonathan Cohen, Catherine Collins, Sam Corbett-Davies, Meehan Crist, Andrew Critch, Fiery Cushman, Allan Dafoe, Raph D’Amico, Peter Dayan, Michael Dennis, Shiri Dori-Hacohen, Anca Drăgan, Eric Drexler, Rachit Dubey, Cynthia Dwork, Peter Eckersley, Joe Edelman, Owain Evans, Tom Everitt, Ed Felten, Daniel Filan, Jaime Fisac, Luciano Floridi, Carrick Flynn, Jeremy Freeman, Yarin Gal, Surya Ganguli, Scott Garrabrant, Vael Gates, Tom Gilbert, Adam Gleave, Paul Glimcher, Sharad Goel, Adam Goldstein, Ian Goodfellow, Bryce Goodman, Alison Gopnik, Samir Goswami, Hilary Greaves, Joshua Greene, Tom Griffiths, David Gunning, Gillian Hadfield, Dylan Hadfield-Menell, Moritz Hardt, Tristan Harris, David Heeger, Dan Hendrycks, Geoff Hinton, Matt Huebert, Tim Hwang, Geoffrey Irving, Adam Kalai, Henry Kaplan, Been Kim, Perri Klass, Jon Kleinberg, Caroline Knapp, Victoria Krakovna, Frances Kreimer, David Kreuger, Kaitlyn Krieger, Mike Krieger, Alexander Krizhevsky, Jacob Lagerros, Lily Lamboy, Lydia Laurenson, James Lee, Jan Leike, Ayden LeRoux, Karen Levy, Falk Lieder, Michael Littman, Tania Lombrozo, Will MacAskill, Scott Mauvais, Margaret McCarthy, Andrew Meltzoff, Smitha Milli, Martha Minow, Karthika Mohan, Adrien Morisot, Julia Mosquera, Sendhil Mullainathan, Elon Musk, Yael Niv, Brandie Nonnecke, Peter Norvig, Alexandr Notchenko, Chris Olah, Catherine Olsson, Toby Ord, Tim O’Reilly, Laurent Orseau, Pedro Ortega, Michael Page, Deepak Pathak, Alex Peysakhovich, Gualtiero Piccinini, Dean Pomerleau, James Portnow, Aza Raskin, Stéphane Ross, Cynthia Rudin, Jack Rusher, Stuart Russell, Anna Salamon, Anders Sandberg, Wolfram Schultz, Laura Schulz, Julie Shah, Rohin Shah, Max Shron, Carl Shulman, Satinder Singh, Holly Smith, Nate Soares, Daisy Stanton, Jacob Steinhardt, Jonathan Stray, Rachel Sussman, Jaan Tallinn, Milind Tambe, Sofi Thanhauser, Tena Thau, Jasjeet Thind, Travis Timmerman, Brian Tse, Alexander Matt Turner, Phebe Vayanos, Kerstin Vignard, Chris Wiggins, Cutter Wood, and Elana Zeide.

“We shall then pull our wheel chairs together, look at the tasteless cottage cheese in front of us, & recount the famous story of the conversation at the house of old GLAUCUS, where PROTAGORAS & the sophist HIPPIAS were staying: & try once more to penetrate their subtle & profound paradoxes about the knower & the known.” And then, in a trembling script, all caps: “BE THOU WELL.” 10. Geoff Hinton, “Lecture 2.2—Perceptrons: First-generation Neural Networks” (lecture), Neural Networks for Machine Learning, Coursera, 2012. 11. Alex Krizhevsky, personal interview, June 12, 2019. 12. The method for determining the gradient update in a deep network is known as “backpropagation”; it is essentially the chain rule from calculus, although it requires the use of differentiable neurons, not the all-or-nothing neurons considered by McCulloch, Pitts, and Rosenblatt.
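Note 12 above says that backpropagation is essentially the chain rule from calculus applied to differentiable neurons. As a minimal illustrative sketch (not from the book), here is that chain rule written out by hand for a two-weight network with sigmoid units; the input, target, weights, and learning rate are made-up numbers chosen only for illustration.

    # Sketch (assumptions only): the chain rule behind one backprop update for
    # a network with one sigmoid hidden unit and one sigmoid output unit.
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x, target = 1.0, 1.0      # one input and one desired output (made-up values)
    w1, w2 = 0.5, -0.3        # weight into the hidden unit, weight into the output

    h = sigmoid(w1 * x)       # hidden activation
    y = sigmoid(w2 * h)       # network output
    loss = 0.5 * (y - target) ** 2

    # Chain rule, applied from the output back toward the input:
    dloss_dy = y - target
    dy_dw2 = y * (1 - y) * h                   # sigmoid'(z) = y * (1 - y)
    dy_dh  = y * (1 - y) * w2
    dh_dw1 = h * (1 - h) * x

    grad_w2 = dloss_dy * dy_dw2                # dL/dw2
    grad_w1 = dloss_dy * dy_dh * dh_dw1        # dL/dw1, reusing the upstream term

    lr = 0.1                                   # one gradient-descent step
    w1, w2 = w1 - lr * grad_w1, w2 - lr * grad_w2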

pages: 371 words: 108,317

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future
by Kevin Kelly
Published 6 Jun 2016

thousand games of chess: Personal correspondence with Daylen Yang (author of the Stockfish chess app), Stefan Meyer-Kahlen (developed the multiple award-winning computer chess program Shredder), and Danny Kopec (American chess International Master and cocreator of one of the standard computer chess testing systems), September 2014. “akin to building a rocket ship”: Caleb Garling, “Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines,” Wired, May 5, 2015. In 2006, Geoff Hinton: Kate Allen, “How a Toronto Professor’s Research Revolutionized Artificial Intelligence,” Toronto Star, April 17, 2015. he dubbed “deep learning”: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep Learning,” Nature 521, no. 7553 (2015): 436–44. the network effect: Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy (Boston: Harvard Business Review Press, 1998).

The next level might group two eyes together and pass that meaningful chunk on to another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

, Time, 7 February 1972 <http://content.time.com/time/subscriber/article/0,33009,905747,00.html> [accessed 3 April 2021]. 4 John Maynard Keynes, ‘Economic Possibilities for Our Grandchildren’, in Essays in Persuasion (London: Palgrave Macmillan UK, 2010), pp. 321–332 <https://doi.org/10.1007/978-1-349-59072-8_25>. 5 Creative Destruction Lab, ‘Geoff Hinton: On Radiology’, 24 November 2016 <https://www.youtube.com/watch?v=2HMPRXstSvQ> [accessed 24 February 2021]. 6 Paul Daugherty, H. James Wilson and Paul Michelman, ‘Revisiting the Jobs That Artificial Intelligence Will Create’, MIT Sloan Management Review (Summer 2017). 7 Lana Bandoim, ‘Robots Are Cleaning Grocery Store Floors During the Coronavirus Outbreak’, Forbes, 8 April 2020 <https://www.forbes.com/sites/lanabandoim/2020/04/08/robots-are-cleaning-grocery-store-floors-during-the-coronavirus-outbreak/> [accessed 24 February 2021]. 8 Jame DiBiasio, ‘A.I.

By 2010, Moore’s Law had resulted in enough power to facilitate a new kind of machine learning, ‘deep learning’, which involved creating layers of artificial neurons modelled on the cells that underpin human brains. These ‘neural networks’ had long been heralded as the next big thing in AI. Yet they had been stymied by a lack of computational power. Not any more, however. In 2012, a group of leading AI researchers – Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton – developed a ‘deep convolutional neural network’ which applied deep learning to the kinds of image-sorting tasks that AIs had long struggled with. It was rooted in extraordinary computing clout. The neural network contained 650,000 neurons and 60 million ‘parameters’, settings you could use to tune the system.

The rise of newly automated workplaces raises the prospect of mass redundancy. And it is framed as a more existential threat than Keynes’s fears of technological unemployment. Soon, we are told, we’ll reach a point where automated systems will render most of us unemployed and unemployable. In 2016, for example, Geoffrey Hinton – one of the AI pioneers we met earlier – publicly mused on the prospects of radiologists, the specialist doctors who deal with X-rays, computerised tomography and magnetic resonance imaging scans. Radiologists, Hinton told a small crowd of AI researchers and founders, were ‘like the coyote that’s already over the edge of the cliff, but hasn’t yet looked down, so doesn’t know there’s no ground underneath him.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

Also, we thank our colleagues for discussions and feedback, including Nick Adams, Umair Akeel, Susan Athey, Naresh Bangia, Nick Beim, Dennis Bennie, James Bergstra, Dror Berman, Vincent Bérubé, Jim Bessen, Scott Bonham, Erik Brynjolfsson, Andy Burgess, Elizabeth Caley, Peter Carrescia, Iain Cockburn, Christian Catalini, James Cham, Nicolas Chapados, Tyson Clark, Paul Cubbon, Zavain Dar, Sally Daub, Dan Debow, Ron Dembo, Helene Desmarais, JP Dube, Candice Faktor, Haig Farris, Chen Fong, Ash Fontana, John Francis, April Franco, Suzanne Gildert, Anindya Ghose, Ron Glozman, Ben Goertzel, Shane Greenstein, Kanu Gulati, John Harris, Deepak Hegde, Rebecca Henderson, Geoff Hinton, Tim Hodgson, Michael Hyatt, Richard Hyatt, Ben Jones, Chad Jones, Steve Jurvetson, Satish Kanwar, Danny Kahneman, John Kelleher, Moe Kermani, Vinod Khosla, Karin Klein, Darrell Kopke, Johann Koss, Katya Kudashkina, Michael Kuhlmann, Tony Lacavera, Allen Lau, Eva Lau, Yann LeCun, Mara Lederman, Lisha Li, Ted Livingston, Jevon MacDonald, Rupam Mahmood, Chris Matys, Kristina McElheran, John McHale, Sanjog Misra, Matt Mitchell, Sanjay Mittal, Ash Munshi, Michael Murchison, Ken Nickerson, Olivia Norton, Alex Oettl, David Ossip, Barney Pell, Andrea Prat, Tomi Poutanen, Marzio Pozzuoli, Lally Rementilla, Geordie Rose, Maryanna Saenko, Russ Salakhutdinov, Reza Satchu, Michael Serbinis, Ashmeet Sidana, Micah Siegel, Dilip Soman, John Stackhouse, Scott Stern, Ted Sum, Rich Sutton, Steve Tadelis, Shahram Tafazoli, Graham Taylor, Florenta Teodoridis, Richard Titus, Dan Trefler, Catherine Tucker, William Tunstall-Pedoe, Stephan Uhrenbacher, Cliff van der Linden, Miguel Villas-Boas, Neil Wainwright, Boris Wertz, Dan Wilson, Peter Wittek, Alexander Wong, Shelley Zhuang, and Shivon Zilis.

Long term, however, Kindred is using a prediction machine trained on many observations of a human grasping via teleoperation to teach the robot to do that part itself. Should We Stop Training Radiologists? In October 2016, standing on stage in front of an audience of six hundred at our annual CDL conference on the business of machine intelligence, Geoffrey Hinton—a pioneer in deep learning neural networks—declared, “We should stop training radiologists now.” A key part of a radiologist’s job is to read images and detect the presence of irregularities that suggest medical problems. In Hinton’s view, AI would soon be better able to identify medically important objects in an image than any human.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Finding only a small ministry of science laboratory and a professor who was working in a related field, LeCun obtained funding and laboratory space. His new professor told him, “I’ve no idea what you’re doing, but you seem like a smart guy so I’ll sign the papers.” But he didn’t stay long. First he went off to Geoff Hinton’s neural network group at the University of Toronto, and when the Bell Labs offer arrived he moved to New Jersey, continuing to refine his approach known as convolutional neural nets, initially focusing on the problem of recognizing handwritten characters for automated mail-sorting applications.

Interest in neural networks would not reemerge until 1978, with the work of Terry Sejnowski, a postdoctoral student in neurobiology at Harvard. Sejnowski had given up his early focus on physics and turned to neuroscience. After taking a summer course in Woods Hole, Massachusetts, he found himself captivated by the mystery of the brain. That year a British postdoctoral psychologist, Geoffrey Hinton, was studying at the University of California at San Diego under David Rumelhart. The older UC scientist had created the parallel-distributed processing group with Donald Norman, the founder of the cognitive psychology department at the school. Hinton, who was the great-great-grandson of logician George Boole, had come to the United States as a “refugee” as a direct consequence of the original AI Winter in England.

Known as the Neural Computation and Adaptive Perception project, it permitted him to handpick the most suitable researchers in the world across a range of fields stretching from neuroscience to electrical engineering. It helped crystallize a community of people interested in the neural network research. Terry Sejnowski, Yann LeCun, and Geoffrey Hinton (from left to right), three scientists who helped revive artificial intelligence by developing biologically inspired neural network algorithms. (Photo courtesy of Yann LeCun) This time they had something else going for them—the pace of computing power had accelerated, making it possible to build neural networks of vast scale, processing data sets orders of magnitude larger than before.

pages: 487 words: 124,008

Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It
by Kashmir Hill
Published 19 Sep 2023

Ton-That got the OpenFace code up and running, but it wasn’t perfect, so he kept searching, wandering through the academic literature and code repositories, trying out this and that. He was like a person walking through an orchard, sampling the fruit of decades of research, ripe for the picking and gloriously free. “I couldn’t have figured it all out from scratch, but these other guys, like Geoff Hinton, they stuck with it and it was like a snowball,” he said. “There was a lot of stuff we could mine.” When Ton-That didn’t understand something he read in an academic paper, he wasn’t afraid to exercise his curiosity. He would go to the professors’ websites, find their phone numbers, and call them up to ask questions.

Ever since then, a small group of neural network believers had toiled away in spite of Minsky, convinced that he was wrong and that the biggest breakthroughs in the field would come from programs that could teach themselves through trial and error. Most AI researchers thought the neural network researchers were delusional, but those technologists, who included university professors with nerd-famous names such as Yann LeCun and Geoffrey Hinton, were determined. They kept tinkering with their neural networks, going to conferences and publishing papers about their work, in the hope of recruiting others to their technological cause. And eventually, thanks to faster computers, new techniques, and loads more data, their neural networks started to work.

GO TO NOTE REFERENCE IN TEXT “Let’s go to one million”: Mark Zuckerberg’s response as recalled by Yaniv Taigman in ibid. GO TO NOTE REFERENCE IN TEXT “No way”: Yaniv Taigman’s recollection of the conversation in ibid. GO TO NOTE REFERENCE IN TEXT blew the competition: The SuperVision developers were Geoffrey Hinton and his two graduate students, Ilya Sutskever and Alex Krizhevsky. For more on this, see Cade Metz, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World (New York: Dutton, 2021). GO TO NOTE REFERENCE IN TEXT Taigman realized: Author’s interview with Yaniv Taigman, 2021.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

These so-called connectionist models were seen by some as direct competitors both to the symbolic models promoted by Newell and Simon and to the logicist approach of McCarthy and others. It might seem obvious that at some level humans manipulate symbols—in fact, the anthropologist Terrence Deacon’s book The Symbolic Species (1997) suggests that this is the defining characteristic of humans. Against this, Geoff Hinton, a leading figure in the resurgence of neural networks in the 1980s and 2010s, has described symbols as the “luminiferous aether of AI”—a reference to the non-existent medium through which many 19th-century physicists believed that electromagnetic waves propagated. Certainly, many concepts that we name in language fail, on closer inspection, to have the kind of logically defined necessary and sufficient conditions that early AI researchers hoped to capture in axiomatic form.

The back-propagation algorithm was discovered independently several times in different contexts (Kelley, 1960; Bryson, 1962; Dreyfus, 1962; Bryson and Ho, 1969; Werbos, 1974; Parker, 1985) and Stuart Dreyfus (1990) calls it the “Kelley–Bryson gradient procedure.” Although Werbos had applied it to neural networks, this idea did not become widely known until a paper by David Rumelhart, Geoff Hinton, and Ron Williams (1986) appeared in Nature giving a nonmathematical presentation of the algorithm. Mathematical respectability was enhanced by papers showing that multilayer feedforward networks are (subject to technical conditions) universal function approximators (Cybenko, 1988, 1989). The late 1980s and early 1990s saw a huge growth in neural network research: the number of papers mushroomed by a factor of 200 between 1980–84 and 1990–94.

In the late 1990s and early 2000s, interest in neural networks waned as other techniques such as Bayes nets, ensemble methods, and kernel machines came to the fore. Interest in deep models was sparked when Geoff Hinton’s research on deep Bayesian networks—generative models with category variables at the root and evidence variables at the leaves—began to bear fruit, outperforming kernel machines on small benchmark data sets (Hinton et al., 2006). Interest in deep learning exploded when Krizhevsky et al. (2013) used deep convolutional networks to win the ImageNet competition (Russakovsky et al., 2015).

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

Another big issue that Hopfield’s model ignored is that real neurons are statistical: they don’t deterministically turn on and off as a function of their inputs; rather, as the weighted sum of inputs increases, the neuron becomes more likely to fire, but it’s not certain that it will. In 1985, David Ackley, Geoff Hinton, and Terry Sejnowski replaced the deterministic neurons in Hopfield networks with probabilistic ones. A neural network now had a probability distribution over its states, with higher-energy states being exponentially less likely than lower-energy ones. In fact, the probability of finding the network in a particular state was given by the well-known Boltzmann distribution from thermodynamics, so they called their network a Boltzmann machine.
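The Boltzmann distribution mentioned above can be made concrete with a small sketch (not from the book): for a hypothetical two-neuron network with one symmetric connection, each joint state's probability is proportional to exp(−E), so higher-energy states are exponentially less likely. The weight and bias values below are arbitrary assumptions.

    # Sketch (assumptions only): Boltzmann probabilities for a toy two-neuron net.
    import math
    from itertools import product

    w12 = 1.5                  # symmetric connection between neuron 1 and neuron 2
    b = [0.2, -0.4]            # biases

    def energy(s):
        # Boltzmann-machine energy: E(s) = -(sum of w_ij*s_i*s_j) - (sum of b_i*s_i)
        return -(w12 * s[0] * s[1]) - (b[0] * s[0] + b[1] * s[1])

    states = list(product([0, 1], repeat=2))          # (0,0), (0,1), (1,0), (1,1)
    unnormalized = {s: math.exp(-energy(s)) for s in states}
    Z = sum(unnormalized.values())                    # partition function
    probabilities = {s: p / Z for s, p in unnormalized.items()}

    for s, p in probabilities.items():
        print(s, round(energy(s), 3), round(p, 3))    # lower energy -> higher probability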

If two neurons tend to fire together during the day but less so while asleep, the weight of their connection goes up; if it’s the opposite, it goes down. By doing this day after day, the predicted correlations between sensory neurons evolve until they match the real ones. At this point, the Boltzmann machine has learned a good model of the data and effectively solved the credit-assignment problem. Geoff Hinton went on to try many variations on Boltzmann machines over the following decades. Hinton, a psychologist turned computer scientist and great-great-grandson of George Boole, the inventor of the logical calculus used in all digital computers, is the world’s leading connectionist. He has tried longer and harder to understand how the brain works than anyone else.

A linear brain, no matter how large, is dumber than a roundworm. S curves are a nice halfway house between the dumbness of linear functions and the hardness of step functions.
The perceptron’s revenge
Backprop was invented in 1986 by David Rumelhart, a psychologist at the University of California, San Diego, with the help of Geoff Hinton and Ronald Williams. Among other things, they showed that backprop can learn XOR, enabling connectionists to thumb their noses at Minsky and Papert. Recall the Nike example: young men and middle-aged women are the most likely buyers of Nike shoes. We can represent this with a network of three neurons: one that fires when it sees a young male, another that fires when it sees a middle-aged female, and another that fires when either of those does.
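The claim that backprop can learn XOR is easy to check with a toy sketch (not from the book): a network with two sigmoid hidden units and one sigmoid output, trained by gradient descent in NumPy. The initialization, learning rate, and iteration count are arbitrary choices made only for illustration.

    # Sketch (assumptions only): backprop learning XOR with a 2-2-1 sigmoid network.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

    W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))     # input -> 2 hidden units
    W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))     # hidden -> 1 output unit
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backward pass: chain rule
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out);  b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h);    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    # With most random initializations the outputs approach [0, 1, 1, 0];
    # occasionally a run stalls in a poor local minimum with so few hidden units.
    print(out.round(2).ravel())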

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

Most famously, it paid $500m in January 2014 for DeepMind, a two-year-old company employing just 75 people which builds AIs that can learn to play video games better than people. Later in the year it paid another eight-figure sum to hire the seven academics who had established Dark Blue Labs and Vision Factory, two more AI start-ups based in the UK. Before that, in March 2013, it had hired Geoff Hinton, one of the pioneers of machine learning, based in Toronto. All this activity is partly a matter of economic ambition, but it goes wider than that. Google’s founders and leaders want the company to be financially successful, but they also want it to make a difference to people’s lives. Founders Larry Page and Sergey Brin think the future will be a better place for humans than the present, and they are impatient for it to arrive.

The first experiments with ANNs were made in the 1950s, and Frank Rosenblatt used them to construct the Mark I Perceptron, the first computer which could learn new skills by trial and error. Early hopes for the quick development of thinking machines were dashed, however, and neural nets fell into disuse until the late 1980s, when they experienced a renaissance along with what came to be known as deep learning thanks to pioneers Yann LeCun (now at Facebook), Geoff Hinton (now at Google) and Yoshua Bengio, a professor at the University of Montreal. Yann LeCun describes deep learning as follows. “A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on.

We are able to learn about categories of items at a higher level of abstraction. AGI optimists think that we will work out how to do that with computers too. There are plenty of serious AI researchers who do believe that the probabilistic techniques of machine learning will lead to AGI within a few decades rather than centuries. The veteran AI researcher Geoff Hinton, now working at Google, forecast in May 2015 that the first machine with common sense could be developed in ten years. (34) Part of the reason for the difference of opinion may be that the latter group take very seriously the notion that exponential progress in computing capability will speed progress towards the creation of an AGI.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

A neuron in a state-of-the-art neural network at the time of writing would have about as many connections as there are in a cat brain; a human neuron has on average about 10,000. So, deep neural networks have more layers, and more, better-connected neurons. To train such networks, techniques beyond backprop were needed, and these were provided in 2006 by Geoff Hinton, a British-Canadian researcher who, more than anyone else, is identified with the deep learning movement. Hinton is, by any reckoning, a remarkable individual. He was one of the leaders of the PDP movement in the 1980s, and one of the inventors of backprop. What I find personally so remarkable is that Hinton didn’t lose heart when PDP research began to lose favour.

If you look at the ‘frisbee’ category, then you’ll see that really the only thing they feature in common is, well, frisbees. In some images, of course, the frisbees are being thrown from one person to another, but in some, the frisbee is on a table, with nobody in view. They are all different – except that they all feature frisbees. The eureka moment for image classification came in 2012, when Geoff Hinton and two colleagues, Alex Krizhevsky and Ilya Sutskever, demonstrated a system called AlexNet, a neural net that dramatically improved performance in an international image recognition competition.10 The final ingredient required to make deep learning work was raw computer-processing power. Training a deep neural net requires a huge amount of computer-processing time.

In the remainder of this chapter, I want to look in more detail at two of the most prominent opportunities for AI: the first is the use of AI in healthcare; the second is the long-held dream of driverless cars.
AI-Powered Healthcare
People should stop training radiologists now. It is just completely obvious that within five years deep learning is going to do better than radiologists. –– Geoff Hinton (2016)
Cardiogram is building your personal healthcare assistant. We want to turn your wearable device into a continuous health monitor that can be used to not only track sleep and fitness, but one day may also prevent a stroke and save your life. –– Cardiogram company website4
Anybody with even the vaguest interest in politics and economics will recognize that the provision of healthcare is one of the most important global financial problems for private citizens and for governments.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

Polyak (1964). 12 “A Method for Unconstrained Convex Minimization Problem with the Rate of Convergence O(1/k²),” Yurii Nesterov (1983). 13 “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” J. Duchi et al. (2011). 14 This algorithm was created by Tijmen Tieleman and Geoffrey Hinton in 2012, and presented by Geoffrey Hinton in his Coursera class on neural networks (slides: http://goo.gl/RsQeis; video: https://goo.gl/XUbIyJ). Amusingly, since the authors have not written a paper to describe it, researchers often cite “slide 29 in lecture 6” in their papers. 15 “Adam: A Method for Stochastic Optimization,” D.
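The unnamed algorithm in note 14 is the optimizer generally known as RMSProp (the "slide 29 in lecture 6" it cites). As an illustrative sketch (not from the book), its update rule keeps a decaying average of squared gradients and scales each step by that average's square root; the hyperparameter values below are commonly quoted defaults, stated here as an assumption.

    # Sketch (assumptions only) of the RMSProp update rule.
    import numpy as np

    def rmsprop_step(theta, grad, s, lr=0.01, rho=0.9, eps=1e-8):
        """One RMSProp update; `s` is the running average of squared gradients."""
        s = rho * s + (1 - rho) * grad ** 2                # s <- rho*s + (1-rho)*g^2
        theta = theta - lr * grad / (np.sqrt(s) + eps)     # theta <- theta - lr*g/sqrt(s)
        return theta, s

    # Toy usage: minimize f(theta) = theta^2, whose gradient is 2*theta.
    theta, s = np.array([5.0]), np.zeros(1)
    for _ in range(2000):
        theta, s = rmsprop_step(theta, 2.0 * theta, s)
    print(theta)   # ends up near 0, oscillating within roughly the learning rate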

Preface
The Machine Learning Tsunami
In 2006, Geoffrey Hinton et al. published a paper1 showing how to train a deep neural network capable of recognizing handwritten digits with state-of-the-art precision (>98%). They branded this technique “Deep Learning.” Training a deep neural net was widely considered impossible at the time,2 and most researchers had abandoned the idea since the 1990s.

Deep Learning is best suited for complex problems such as image recognition, speech recognition, or natural language processing, provided you have enough data, computing power, and patience. Other Resources Many resources are available to learn about Machine Learning. Andrew Ng’s ML course on Coursera and Geoffrey Hinton’s course on neural networks and Deep Learning are amazing, although they both require a significant time investment (think months). There are also many interesting websites about Machine Learning, including of course Scikit-Learn’s exceptional User Guide. You may also enjoy Dataquest, which provides very nice interactive tutorials, and ML blogs such as those listed on Quora.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

But the networks themselves were still severely limited in what they could do. Accurate results to complex problems required many layers of artificial neurons, but researchers hadn’t found a way to efficiently train those layers as they were added. Deep learning’s big technical break finally arrived in the mid-2000s, when leading researcher Geoffrey Hinton discovered a way to efficiently train those new layers in neural networks. The result was like giving steroids to the old neural networks, multiplying their power to perform tasks such as speech and object recognition. Soon, these juiced-up neural networks—now rebranded as “deep learning”—could outperform older models at a variety of tasks.

People are so excited about deep learning precisely because its core power—its ability to recognize a pattern, optimize for a specific outcome, make a decision—can be applied to so many different kinds of everyday problems. That’s why companies like Google and Facebook have scrambled to snap up the small core of deep-learning experts, paying them millions of dollars to pursue ambitious research projects. In 2013, Google acquired the startup founded by Geoffrey Hinton, and the following year scooped up British AI startup DeepMind—the company that went on to build AlphaGo—for over $500 million. The results of these projects have continued to awe observers and grab headlines. They’ve shifted the cultural zeitgeist and given us a sense that we stand at the precipice of a new era, one in which machines will radically empower and/or violently displace human beings.

That’s a process that requires well-trained AI scientists, the tinkerers of this age. Today, those tinkerers are putting AI’s superhuman powers of pattern recognition to use making loans, driving cars, translating text, playing Go, and powering your Amazon Alexa. Deep-learning pioneers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio—the Enrico Fermis of AI—continue to push the boundaries of artificial intelligence. And they may yet produce another game-changing breakthrough, one that scrambles the global technological pecking order. But in the meantime, the real action today is with the tinkerers.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

In all of these cases, the computers would make incomprehensible moves, or they’d play too aggressively, or they’d miscalculate their opponent’s posture. Sometime in the middle of all that work were a handful of researchers who, once again, were workshopping neural networks, an idea championed by Marvin Minsky and Frank Rosenblatt during the initial Dartmouth meeting. Cognitive scientist Geoff Hinton and computer scientists Yann LeCun and Yoshua Bengio each believed that neural net–based systems would not only have serious practical applications—like automatic fraud detection for credit cards and automatic optical character recognition for reading documents and checks—but that they would become the basis for what artificial intelligence would become.

This tribe of groundbreaking, brilliant comics laid the foundation for the future of American entertainment.14 Collectively, this group of men still wields influence today. In a way, AI went through a similar radical transformation because of a modern-day tribe that shared the same values, ideas, and goals. Those three deep-learning pioneers discussed earlier—Geoff Hinton, Yann LeCun, and Yoshua Bengio—were the Sam Kinisons and Richard Pryors of the AI world in the early days of deep neural nets. LeCun studied under Hinton at the University of Toronto where the Canadian Institute for Advanced Research (CIFAR) inculcated a small group of researchers, which included Yoshua Bengio.

But the white man is currently serving an eight-year prison term for yet another crime—breaking into a warehouse and stealing thousands of dollars’ worth of electronics.16 ProPublica looked at the risk scores assigned to more than 7,000 people arrested in Florida to see whether this was an anomaly—and again, they found significant bias encoded within the algorithms, which were twice as likely to incorrectly flag Black defendants as future criminals while mislabeling white defendants as low risk. The optimization effect sometimes causes brilliant AI tribes to make dumb decisions. Recall DeepMind, which built the AlphaGo and AlphaGo Zero systems and stunned the AI community as it dominated grandmaster Go matches. Before Google acquired the company, it sent Geoff Hinton (the University of Toronto professor who was on leave working on deep learning there) and Jeff Dean, who was in charge of Google Brain, to London on a private jet to meet its supernetwork of top PhDs in AI. Impressed with the technology and DeepMind’s remarkable team, they recommended that Google make an acquisition.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

In the next chapter, I’ll recount the extraordinary ascent of ConvNets from relative obscurity to near-complete dominance in machine vision, a transformation made possible by a concurrent technological revolution: that of “big data.”
5 ConvNets and ImageNet
Yann LeCun, the inventor of ConvNets, has worked on neural networks all of his professional life, starting in the 1980s and continuing through the winters and springs of the field. As a graduate student and postdoctoral fellow, he was fascinated by Rosenblatt’s perceptrons and Fukushima’s neocognitron, but noted that the latter lacked a good supervised-learning algorithm. Along with other researchers (most notably, his postdoctoral advisor Geoffrey Hinton), LeCun helped develop such a learning method—essentially the same form of back-propagation used on ConvNets today.1 In the 1980s and ’90s, while working at Bell Labs, LeCun turned to the problem of recognizing handwritten digits and letters. He combined ideas from the neocognitron with the back-propagation algorithm to create the semi-eponymous “LeNet”—one of the earliest ConvNets.

LeNet and its successor ConvNets did not do well in scaling up to more complex vision tasks. By the mid-1990s, neural networks started falling out of favor in the AI community, and other methods came to dominate the field. But LeCun, still a believer, kept working on ConvNets, gradually improving them. As Geoffrey Hinton later said of LeCun, “He kind of carried the torch through the dark ages.”2 LeCun, Hinton, and other neural network loyalists believed that improved, larger versions of ConvNets and other deep networks would conquer computer vision if only they could be trained with enough data. Stubbornly, they kept working on the sidelines throughout the 2000s.

What’s more, the winning entry did not use support vector machines or any of the other dominant computer-vision methods of the day. Instead, it was a convolutional neural network. This particular ConvNet has come to be known as AlexNet, named after its main creator, Alex Krizhevsky, then a graduate student at the University of Toronto, supervised by the eminent neural network researcher Geoffrey Hinton. Krizhevsky, working with Hinton and a fellow student, Ilya Sutskever, created a scaled-up version of Yann LeCun’s LeNet from the 1990s; training such a large network was now made possible by increases in computer power. AlexNet had eight layers, with about sixty million weights whose values were learned via back-propagation from the million-plus training images.7 The Toronto group came up with some clever methods for making the network training work better, and it took a cluster of powerful computers about a week to train AlexNet.

pages: 262 words: 69,328

The Great Wave: The Era of Radical Disruption and the Rise of the Outsider
by Michiko Kakutani
Published 20 Feb 2024

The Bad News Is It’s Not for Us’: Why the Godfather of AI Fears for Humanity,” The Guardian, May 5, 2023, theguardian.com/technology/2023/may/05/geoffrey-hinton-godfather-of-ai-fears-for-humanity. “If there’s any way”: Manuel G. Pascual, “Geoffrey Hinton: ‘We Need to Find a Way to Control Artificial Intelligence Before It’s Too Late,’ ” El País, May 12, 2023, english.elpais.com/science-tech/2023-05-12/geoffrey-hinton-we-need-to-find-a-way-to-control-artificial-intelligence-before-its-too-late.html. “annotated for scientists, engineers”: Mary Shelley, Frankenstein (Cambridge, Mass.: MIT Press, 2017).

But it’s clear that the race by Microsoft, Google, and other Silicon Valley companies to capitalize on AI will mean that many of these systems will be released without adequate guardrails and without a full understanding of AI’s terrifying and still emerging abilities. In the spring of 2023, Geoffrey Hinton—the computer scientist often called “the godfather of AI”—left Google to warn the public of the perils of artificial intelligence. Startled by the rapid advances made by ChatGPT, he said he now believes that AI will surpass humans in intelligence in five to twenty years, possibly even in one or two, and that it will be smarter than people by the same measure that “we’re more intelligent than a frog.”

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

“artificial intelligence (AI) system”: Scott Mayer McKinney et al., “International Evaluation of an AI System for Breast Cancer Screening,” Nature 577 (January 2020): 89–94, https://doi.org/10.1038/s41586-019-1799-6. “an algorithm that can detect”: Pranav Rajpurkar et al., “Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning,” CheXNet, December 25, 2017, http://arxiv.org/abs/1711.05225. “people should stop training radiologists”: Geoff Hinton, “Geoff Hinton: On Radiology,” Creative Destruction Lab, uploaded to YouTube November 24, 2016, https://www.youtube.com/watch?v=2HMPRXstSvQ. the work radiologists and other medical professionals do: Hugh Harvey, “Why AI Will Not Replace Radiologists,” Medium, April 7, 2018, https://towardsdatascience.com/why-ai-will-not-replace-radiologists-c7736f2c7d80.

They noted, “In an independent study of six radiologists, the AI system outperformed all of the human [mammogram] readers.” Similarly, a team from Stanford developed “an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists.” Developments such as these led Geoff Hinton, a pioneer in neural networks and deep learning and a winner of the 2018 A. M. Turing Award, to state that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” That was in 2016. Since that time, it has been noted that the work radiologists and other medical professionals do is much broader than just interpreting X-rays.

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

This became known as symbolic AI, or Good Old-Fashioned AI (GOFAI). Machine learning, by contrast, is the process of creating and refining algorithms which can produce conclusions based on data without being explicitly programmed to do so. The turning point came in 2012 when researchers in Toronto led by Geoff Hinton won an AI image recognition competition called ImageNet.[lxiv] Hinton is a British researcher now at Toronto University and Google, and perhaps the most important figure behind the rise of deep learning as the most powerful of today's AI techniques. (The word algorithm comes from the name of a 9th-century Persian mathematician, Al-Khwarizmi.

Even the worst case predictions envisage continued rapid improvement in computer processing power, albeit perhaps slower than previously. In December 2015, Microsoft's chief speech scientist Xuedong Huang noted that speech recognition has improved 20% a year consistently for the last 20 years. He predicted that computers would be as good as humans at understanding human speech within five years. Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. Common sense can be described as having a mental model of the world which allows you to predict what will happen if certain actions are taken.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

Within a few years, Judea’s Bayesian networks had completely overshadowed the previous rule-based approaches to artificial intelligence. The advent of deep learning—in which computers, in effect, teach themselves to be smarter by observing tons of data—has given him pause, because this method lacks transparency. While recognizing the impressive achievements in deep learning by colleagues such as Michael I. Jordan and Geoffrey Hinton, he feels uncomfortable with this kind of opacity. He set out to understand the theoretical limitations of deep-learning systems and points out that basic barriers exist that will prevent them from achieving a human kind of intelligence, no matter what we do. Leveraging the computational benefits of Bayesian networks, Judea realized that the combination of simple graphical models and data could also be used to represent and infer cause-effect relationships.

Another strong incentive to turn a blind eye to the AI risk is the (very human) curiosity that knows no bounds. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb,” said J. Robert Oppenheimer. His words were echoed recently by Geoffrey Hinton, arguably the inventor of deep learning, in the context of AI risk: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.” Undeniably, we have both entrepreneurial attitude and scientific curiosity to thank for almost all the nice things we take for granted in the modern era.

We tend to underestimate the complexity and creativity of the human brain and how amazingly general it is. If AI is to become more humanlike in its abilities, the machine-learning and neuroscience communities need to interact closely, something that is happening already. Some of today’s greatest exponents of machine learning—such as Geoffrey Hinton, Zoubin Ghahramani, and Demis Hassabis—have backgrounds in cognitive neuroscience, and their success has been at least in part due to attempts to model brainlike behavior in their algorithms. At the same time, neurobiology has also flourished. All sorts of tools have been developed to watch which neurons are firing and genetically manipulate them and see what’s happening in real time with inputs.

pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy
by George Gilder
Published 16 Jul 2018

Google, meanwhile, under its new CEO, Sundar Pichai, pivoted away from its highly publicized “mobile first” mantra, which had led to its acquisitions of Android and Ad Mob, and toward “AI first.” Google was the recognized intellectual leader of the industry, and its AI ostentation was widely acclaimed. Indeed it signed up most of the world’s AI celebrities, including its spearheads of “deep learning” prowess, from Geoffrey Hinton and Andrew Ng to Jeff Dean, the beleaguered Anthony Levandowski, and Demis Hassabis of DeepMind. If Google had been a university, it would have utterly outshone all others in AI talent. It must have been discouraging, then, to find that Amazon had shrewdly captured much of the market for AI services with its 2014 Alexa and Echo projects.

The most prominent participants were the bright lights of Google: Larry Page, Eric Schmidt, Ray Kurzweil, Demis Hassabis, and Peter Norvig, along with former Googler Andrew Ng, later of Baidu and Stanford. Also there was Facebook’s Yann LeCun, an innovator in deep-learning math and a protégé of Google’s Geoffrey Hinton. A tenured contingent consisted of the technologist Stuart Russell, the philosopher David Chalmers, the catastrophe theorist Nick Bostrom, the nanotech prophet Eric Drexler, the cosmologist Lawrence Krauss, the economist Erik Brynjolfsson, and the “Singularitarian” Vernor Vinge, along with scores of other celebrity scientists.1 They gathered at Asilomar preparing to alert the world to the dire threat posed by . . . well, by themselves—Silicon Valley.

The blog post laconically presented “some simple techniques for peeking inside these [neural] networks” and then showed a series of increasingly trippy photos, as if the machine were hallucinating. A little gray kitten became the stuff of nightmares: a shaggy beast with forehead and haunches bubbling with dark dog eyes and noses. To Balaban, the code and its results were a visual confirmation of what Yoshua Bengio, a colleague of Geoffrey Hinton in the Montreal crucible of AI, calls the “manifold learning hypothesis.” Bengio sees the essential job of a neural network as learning a hierarchy of representations in which each new layer is built up out of representations resolved in a previous layer. The machine begins with raw pixels and combines them into lines and curves transitioning from dark to light and then into geometrical shapes, which finally can be encoded into elements of human faces or other targeted figures.

pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World
by Peter H. Diamandis and Steven Kotler
Published 3 Feb 2015

Now imagine that this same AI also has contextual understanding—meaning the system recognizes that your conversation with your friend is heading in the direction of family life—so the AI reminds you of the names of each of your friend’s family members, as well as any upcoming birthdays they might have. Behind many of the AI successes mentioned in this section is an algorithm called Deep Learning. Developed by University of Toronto’s Geoffrey Hinton for image recognition, Deep Learning has become the dominant approach in the field. And it should come as no surprise that in spring of 2013, Hinton was recruited, like Kurzweil, to join Google41—a development that will most likely lead to even faster progress. More recently, Google and NASA Ames Research Center—one of NASA’s field centers—jointly acquired a 512 qubit (quantum bit) computer manufactured by D-Wave Systems to study machine learning.

v=6adugDEmqBk. 30 John Ward, “The Services Sector: How Best To Measure It?,” International Trade Administration, October 2010, http://trade.gov/publications/ita-newsletter/1010/services-sector-how-best-to-measure-it.asp. 31 AI with Jeremy Howard, 2013. 32 For information on the German Traffic Sign Recognition Benchmark see http://benchmark.ini.rub.de. 33 Geoffrey Hinton et al., “ImageNet Classification with Deep Convolutional Neural Networks,” http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf. 34 John Markoff, “Armies of Expensive Lawyers, Replaced By Cheaper Software,” New York Times, March 4, 2011, http://www.nytimes.com/2011/03/05/science/05legal.html?

pagewanted=all. 37 “IBM Watson’s Next Venture: Fueling New Era of Cognitive Apps Built in the Cloud by Developers,” IBM Press Release, November 14, 2013, http://www-03.ibm.com/press/us/en/pressrelease/42451.wss. 38 Nancy Dahlberg, “Modernizing Medicine, supercomputer Watson partner up,” Miami Herald, May 16, 2014. 39 AI with Daniel Cane, 2014. 40 Ray Kurzweil, “The Law of Accelerating Returns.” 41 Daniela Hernandez, “Meet the Man Google Hired to Make AI a Reality,” Wired, January 2014, http://www.wired.com/2014/01/geoffrey-hinton-deep-learning/. 42 AI with Geordie Rose, 2014. 43 See http://1qbit.com. 44 John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” AI Magazine, August 31, 1955, 12–14. 45 Jim Lewis, “Robots of Arabia,” Wired, Issue 13.11 (November 2005). 46 Garry Mathiason et al., “The Transformation of the Workplace Through Robotics, Artificial Intelligence, and Automation,” The Littler Report, February 2014, http://documents.jdsupra.com/d4936b1e-ca6c-4ce9-9e83-07906bfca22c.pdf. 47 See http://www.rethinkrobotics.com. 48 All Dan Barry quotes in this section come from an AI conducted 2013. 49 The Cambrian explosion was an evolutionary event beginning about 542 million years ago, during which most of the major animal phyla appeared. 50 See “Amazon Prime Air,” Amazon.com, http://www.amazon.com/b?

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

For example, Uber could advertise its AI, self-driving cars as “The Safest Drivers in the World”, knowing that statistically an autonomous vehicle will be 20 times safer than a human out of the gate. Key to this future is the need for AIs to learn language, to learn to converse. In an interview with the Guardian newspaper in May 2015, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.” Google is currently working to encode thoughts as vectors described by a sequence of numbers. These “thought vectors” could endow AI systems with a human-like “common sense” within a decade, according to Hinton.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits...” Professor Geoff Hinton, from an interview with the Guardian newspaper, 21st May 2015 These types of algorithms, which allow for leaps in cognitive understanding for machines, have only been possible with the application of massive data processing and computing power. Is the Turing Test or a machine that can mimic a human the required benchmark for human interactions with a computer?

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Keep doing this, modifying the weights again and again, and you gradually improve the performance of the neural network so that eventually it’s able to go all the way from taking in single pixels to learning the existence of lines, edges, shapes, and then ultimately entire objects in scenes. This, in a nutshell, is deep learning. And this remarkable technique, long derided in the field, cracked computer vision and took the AI world by storm. AlexNet was built by the legendary researcher Geoffrey Hinton and two of his students, Alex Krizhevsky and Ilya Sutskever, at the University of Toronto. They entered the ImageNet Large Scale Visual Recognition Challenge, an annual competition designed by the Stanford professor Fei-Fei Li to focus the field’s efforts around a simple goal: identifying the primary object in an image.
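
The iterative weight adjustment described at the start of this excerpt can be sketched in a few lines. This is a deliberately crude illustration (a random nudge-and-keep hill climb on a tiny made-up network, not the backpropagation actually used to train deep networks, and not Suleyman's own example): score the network, perturb the weights, keep the perturbation only if the error drops, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny 2-4-1 network. The point is only the loop:
# score the network, nudge the weights, keep the nudge if the error goes down.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(params, X):
    W1, b1, W2, b2 = params
    hidden = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output

def error(params):
    return np.mean((forward(params, X).ravel() - y) ** 2)

params = [rng.normal(0, 1, (2, 4)), np.zeros(4),
          rng.normal(0, 1, (4, 1)), np.zeros(1)]
best = error(params)

for step in range(20000):
    # Nudge every weight a little; keep the change only if the error drops.
    candidate = [p + rng.normal(0, 0.05, p.shape) for p in params]
    e = error(candidate)
    if e < best:
        params, best = candidate, e

print("final mean squared error:", best)
print("predictions:", forward(params, X).ravel().round(2))
```

Actual deep learning computes gradients instead of random nudges, but the outer loop, adjust the weights and keep what lowers the error, has the same shape.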

It helps fly drones, flags inappropriate content on Facebook, and diagnoses a growing list of medical conditions: at DeepMind, one system my team developed read eye scans as accurately as world-leading expert doctors. Following the AlexNet breakthrough, AI suddenly became a major priority in academia, government, and corporate life. Geoffrey Hinton and his colleagues were hired by Google. Major tech companies in both the United States and China put machine learning at the heart of their R&D efforts. Shortly after DQN, we sold DeepMind to Google, and the tech giant soon switched to a strategy of “AI first” across all its products. Industry research output and patents soared.

These unlikely avenues are the foundation for arguably the biggest biotech story of the twenty-first century. Likewise, fields can stall for decades but then change dramatically in months. Neural networks spent decades in the wilderness, trashed by luminaries like Marvin Minsky. Only a few isolated researchers like Geoffrey Hinton and Yann LeCun kept them going through a period when the word “neural” was so controversial that researchers would deliberately remove it from their papers. It seemed impossible in the 1990s, but neural networks came to dominate AI. And yet it was also LeCun who said AlphaGo was impossible just days before it made its first big breakthrough.

pages: 348 words: 119,358

The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by Nicole Kobie
Published 3 Jul 2024

It’s unclear who first originated this idea, though it may well have been invented at multiple different times. In 1974, Paul Werbos published his dissertation explaining how to use backpropagation of errors to train neural networks, though his work was little noticed until 1986 when it was cited in a paper by Rumelhart, Ronald Williams and Geoffrey Hinton, the latter now considered one of the ‘fathers of deep learning’. So in the mid-1980s, multiple researchers came to the same solution for neural networks. But you may have noticed that the mid-80s is not when neural networks took off. There’s a good reason for this: computers couldn’t handle them.

Li followed up one excellent idea with another: a competition. The first ImageNet Large Scale Visual Recognition Challenge was held in 2010 but it wasn’t until 2012 that everything changed. That year, the winning team was the first to post an accuracy rate above 75 per cent and did it using deep learning – the team was made up of Geoffrey Hinton, Ilya Sutskever and Alex Krizhevsky (note all of those names). The next year, the competition was won by one of Hinton’s students, Matthew Zeiler, and in 2014 not only was the winner using deep neural networks, but so were all the high scorers of the challenge; specifically, these were convolutional neural networks.

Shared on the website of the Future of Life Institute we discussed at the top of this chapter, the letter does warn about harms happening now – in particular disinformation and the risk to jobs – but focuses largely on concerns about non-human minds replacing us and taking control of our civilisation. ‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,’ the letter says. Geoffrey Hinton didn’t sign that letter, but the so-called godfather of neural networks did something even more unexpected: he stepped down from his role at Google so he could more openly discuss the threats. To be clear, he didn’t quit because he was worried about Google in particular, but about the wider industry.

pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Published 26 Jun 2017

They did this with a combination of sophisticated math, ever-more-powerful computer hardware, and a pragmatic approach that allowed them to take inspiration from how the brain works but not to be constrained by it. Electric signals flow in only one direction through the brain’s neurons, for example, but the successful machine learning systems built in the eighties by Paul Werbos, Geoff Hinton, Yann LeCun, and others allowed information to travel both forward and backward through the network. This “back-propagation” led to much better performance, but progress remained frustratingly slow. By the 1990s, a machine learning system developed by LeCun to recognize numbers was reading as many as 20% of all handwritten checks in the United States, but there were few other real-world applications.

Byrne, “Introduction to Neurons and Neuronal Networks,” Neuroscience Online, accessed January 26, 2017, http://neuroscience.uth.tmc.edu/s1/introduction.html. 73 “the embryo of an electronic computer”: Mikel Olazaran, “A Sociological Study of the Official History of the Perceptrons Controversy,” Social Studies of Science 26 (1996): 611–59, http://journals.sagepub.com/doi/pdf/10.1177/030631296026003005. 74 Paul Werbos: Jürgen Schmidhuber, “Who Invented Backpropagation?” last modified 2015, http://people.idsia.ch/~juergen/who-invented-backpropagation.html. 74 Geoff Hinton: David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, “Learning Representations by Back-propagating Errors,” Nature 323 (1986): 533–36, http://www.nature.com/nature/journal/v323/n6088/abs/323533a0.html. 74 Yann LeCun: Jürgen Schmidhuber, Deep Learning in Neural Networks: An Overview, Technical Report IDSIA-03-14, October 8, 2014, https://arxiv.org/pdf/1404.7828v4.pdf. 74 as many as 20% of all handwritten checks: Yann LeCun, “Biographical Sketch,” accessed January 26, 2017, http://yann.lecun.com/ex/bio.html. 74 “a new approach to computer Go”: David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Search Trees,” Nature 529 (2016): 484–89, http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html. 75 approximately $13,000 by the fall of 2016: Elliott Turner, Twitter post, September 30, 2016 (9:18 a.m.), https://twitter.com/eturner303/status/781900528733261824. 75 “the teams at the leading edge”: Andrew Ng, interview by the authors, August 2015. 76 “Retrospectively, [success with machine learning]”: Paul Voosen, “The Believers,” Chronicle of Higher Education, February 23, 2015, http://www.chronicle.com/article/The-Believers/190147. 76 His 2006 paper: G.

pages: 193 words: 51,445

On the Future: Prospects for Humanity
by Martin J. Rees
Published 14 Oct 2018

Successive layers of processing identify horizontal and vertical lines, sharp edges, and so forth; each layer processes information from a ‘lower’ layer and then passes its output to other layers.8 The basic machine-learning concepts date from the 1980s; an important pioneer was the Anglo-Canadian Geoff Hinton. But the applications only really ‘took off’ two decades later, when the steady operation of Moore’s law—a doubling of computer speeds every two years—led to machines with a thousand times faster processing speed. Computers use ‘brute force’ methods. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

Even better, computers in the ’00s were running faster and faster, at cheaper and cheaper prices. You could now create neural nets with many layers, or even dozens: “deep learning,” as it’s called, because of how many layers are stacked up. By 2012, the field had a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images.

(One of the more talented AI coders I know spends his time training neural nets on TV and movie scripts, autogenerating new scripts, then shooting the best ones.) All coders adore the “Hello, World!” moment, but with AI, the romance is decidedly Promethean. Matt Zeiler was a young engineering science student at the University of Toronto when one of Geoff Hinton’s students showed him a video of a flickering candle flame and told him it had been automatically generated by a neural net. “I was like, ‘Holy crap!’” Zeiler told me. The flame was so freakily lifelike that he took Hinton’s course and did his undergraduate thesis with Hinton, intent on absorbing deep learning.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

And if you change the weights on the lines—that is, if you turn the fourteen knobs—you change the strategy. The picture gives you a fourteen-dimensional landscape you can explore, looking for a strategy that fits best whatever data you already have. If you’re finding it hard to imagine what a fourteen-dimensional landscape looks like, I recommend following the advice of Geoffrey Hinton, one of the founders of the modern theory of neural nets: “Visualize a 3-space and say ‘fourteen’ to yourself very loudly. Everyone does it.” Hinton comes from a lineage of high-dimension enthusiasts: his great-grandfather Charles wrote an entire book in 1904 about how to visualize four-dimensional cubes, and invented the word “tesseract” to describe them.* If you’ve seen Dalí’s painting Crucifixion (Corpus Hypercubus), that’s one of Hinton’s visualizations.

by Frank Rosenblatt: Frank Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain.” Psychological Review 65, no. 6 (1958): 386. Rosenblatt’s perceptron was a generalization of a less refined mathematical model of neural processing developed in the 1940s by Warren McCulloch and Walter Pitts. “Visualize a 3-space”: Lecture 2c of Geoffrey Hinton’s notes for “Neural Networks for Machine Learning.” Available at www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec2.pdf. his great-grandfather: For the familial relation between the two Hintons, see K. Onstad, “Mr. Robot,” Toronto Life, Jan. 28, 2018. Chapter 8: You Are Your Own Negative-First Cousin, and Other Maps The geometry of chords: Dmitri Tymoczko, A Geometry of Music (New York: Oxford University Press, 2010).

) * The sequence that counts the number of paraffins with more and more carbon atoms is, of course, recorded in the On-Line Encyclopedia of Integer Sequences: it is sequence A000602. * It’s linear algebra that provides us the theory of “vectors” that’s so central to machine learning, and which gave Geoffrey Hinton the wherewithal to describe fourteen-dimensional space as just like a three-dimensional space to which you loudly say “fourteen!” every so often. * Daniel Brown, in his extremely interesting book The Poetry of Victorian Scientists, argues that this poem can be read as addressing Sylvester’s exclusion from the university system on account of his faith, casting Sylvester himself as the “missing member.”

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5, (1943): 115–33, https://goo.gl/aFejrr. 87 He called it the Mark I Perceptron: Perceptron information primarily from: Frank Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review 65, no. 6 (1958): 386–408; and “Mark I Perceptron Operators’ Manual,” a report by the Cornell Aeronautical Laboratory, February 15, 1960. 88 “The Navy revealed the embryo”: “New Navy Device Learns By Doing,” New York Times, July 8, 1958, https://goo.gl/Jnf6n9. 89 “Canadian Mafia”: Mark Bergen and Kurt Wagner, “Welcome to the AI Conspiracy: The ‘Canadian Mafia’ Behind Tech’s Latest Craze,” Recode, July 15, 2015, https://goo.gl/PeMPYK. 91 But when Rumelhart, Hinton, and Williams: David Rumelhart et al., “Learning representations by back-propagating errors,” Nature 323 (October 9, 1986): 533–36. 92 The result, Bengio and LeCun announced: Yann LeCun et al., “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, November 1998, 1, https://goo.gl/NtNKJB. 92 Toward the end of the 1990s: email from Geoffrey Hinton to author, July 28, 2018. 92 “Smart scientists,” he said: Bergen and Wagner, “Welcome to the AI Conspiracy.” 92 What’s more, they needed more layers: Yoshua Bengio, email to author, August 3, 2018. 92 In 2006 a groundbreaking pair of papers: Geoffrey Hinton and R. R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science 313 (July 28, 2006): 504–07, https://goo.gl/Ki41L8; and Yoshua Bengio et al., “Greedy Layer-Wise Training of Deep Networks,” Proceedings of the 19th International Conference on Neural Information Processing Systems (2006): 153–60, https://goo.gl/P5ZcV7. 93 Then, in 2012, a team of computer scientists from Stanford and Google Brain: Quoc Le et al., “Building High-level Features Using Large Scale Unsupervised Learning,” Proceedings of the 29th International Conference on Machine Learning, 2012, https://goo.gl/Vc1GeS. 93 The next breakthrough came in 2012: Alex Krizhevsky et al., “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems 25 (2012): 1097–105, https://goo.gl/x9IIwr. 94 In 2018 Google announced that one of its researchers: Kaz Sato, “Noodle on this: Machine learning that can identify ramen by shop,” Google blog, April 2, 2018, https://goo.gl/YnCujn. 94 “They said, ‘Okay, now we buy it’”: Tom Simonite, “Teaching Machines to Understand Us,” MIT Technology Review, August 6, 2015, https://goo.gl/nPkpll. 94 But with the efficacy of the technique: Among the many sources consulted for the science of speech recognition and language understanding, some of the most helpful were: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Noida, India: Pearson Education, 2015); Lane Greene, “Finding a Voice,” The Economist, May 2017, https://goo.gl/hss3oL; and Hongshen Chen et al., “A Survey on Dialogue Systems: Recent Advances and New Frontiers,” ACM SIGKDD Explorations Newsletter 19, no. 2 (December 2017), https://goo.gl/GVQUKc. 95 To pinpoint those, an iPhone: “Hey Siri: An On-device DNN-powered Voice Trigger for Apple’s Personal Assistant,” Apple blog, October 2017, https://goo.gl/gWKjQN. 
97 But in 2016 IBM and Microsoft independently announced: Allison Linn, “Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition,” Microsoft blog, October 18, 2016, https://goo.gl/4Vz3YF. 98 Apple has patented a technique: “Digital Assistant Providing Whispered Speech,” United States Patent Application by Apple, December 14, 2017, https://goo.gl/3QRddB. 98 In 2016 researchers at Google and Oxford University: Yannis Assael et al., “LipNet: End-to-End Sentence-level Lipreading,” conference paper submitted for ICLR 2017 (December 2016), https://goo.gl/Bhoz7N. 101 Neural networks need much more compact word embeddings: Tomas Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” proceedings of workshop at ICLR, September 7, 2013, https://goo.gl/gHURjZ. 102 “Deep learning,” he says: Steve Young, interview with author, September 19, 2017. 104 The method, which is known as sequence-to-sequence: Ilya Sutskever et al., “Sequence to Sequence Learning with Neural Networks,” Advances in Neural Information Processing Systems 27 (December 14, 2014), https://goo.gl/U3KtxJ. 105 When Vinyals and Le published the results: Oriol Vinyals and Quoc Le, “A Neural Conversational Model,” Proceedings of the 31st International Conference on Machine Learning 37 (2015): https://goo.gl/sZjDy1. 106 “can home in on the part of the incoming email”: Greg Corrado, “Computer, respond to this email,” Google AI blog, November 3, 2015, https://goo.gl/YHMvnA. 108 “This organic writer, for one, could hardly tell one from the other”: Siddhartha Mukherjee, “The Future of Humans?

pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech
by Franklin Foer
Published 31 Aug 2017

Google has spearheaded the revival of a concept first explored in the sixties, one that has failed until recently: neural networks, which involve computing modeled on the workings of the human brain. Algorithms replicate the brain’s information processing and its methods for learning. Google has hired the British-born professor Geoff Hinton, who has made the greatest progress in this direction. It also acquired a London-based company called DeepMind, which created neural networks that taught themselves, without human instruction, to play video games. Because DeepMind feared the dangers of a single company possessing such powerful algorithms, it insisted that Google never permit its work to be militarized or sold to intelligence services.

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

This is a good way to get international talent to work on your problem and will also build your reputation as a company that supports AI. As with any industry, like attracts like. Dominant tech companies build strong AI departments by hiring superstar leaders. Google and Facebook attracted university professors and AI research pioneers such as Geoffrey Hinton, Fei-Fei Li, and Yann LeCun with plum appointments and endless resources. These professors either take a sabbatical from their universities or split their time between academia and industry. Effective Alternatives to Hiring Despite your best efforts, hiring new AI talent may prove to be slow or impossible.

pages: 263 words: 81,527

The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind
by Nick Chater
Published 28 Mar 2018

This requires a systematic rethink of large parts of psychology, neuroscience and the social sciences, but it also requires a radical shake-up of how each of us thinks about ourselves and those around us. I have had a lot of help writing this book. My thinking has been shaped by decades of conversations with Mike Oaksford and Morten Christiansen, and discussions over the years with John Anderson, Gordon Brown, Ulrike Hahn, Geoff Hinton, Richard Holton, George Loewenstein, Jay McClelland, Adam Sanborn, Jerry Seligman, Neil Stewart, Josh Tenenbaum and James Tresilian, and so many other wonderful friends and colleagues. Writing this book has been supported by generous financial support through grants from the ERC (grant 295917-RATIONALITY), the ESRC Network for Integrated Behavioural Science (grant number ES/K002201/1) and the Leverhulme Trust (grant number RP2012-V-022).

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

No longer constrained by human knowledge, it took only three days of the computer playing itself to best previous AlphaGo versions developed by top researchers and it continued to improve from there. It mastered the masters, then mastered itself, and kept on going. How does this relate to our own intelligence? Geoffrey Hinton has long been trying to understand how our brains work. Hinton, the “godfather of deep learning,” is a cognitive psychologist and computer scientist who moved to Canada because of its continued research funding through the second AI winter in the early 1990s. He currently divides his time between his work at Google and as a professor at the University of Toronto.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

They are either unaware of or unconcerned with the ground-level consequences of their work, and although they might pledge to care about the responsible use of AI, they’re not doing anything to slow down or consider how the tools they build could enable harm. Trust me, I would love to be an AI optimist again. But right now, humans are getting in the way. Two The Myth of the Robot-Proof Job We humans are neural nets. What we can do, machines can do. —Geoffrey Hinton, computer scientist and AI pioneer A few years ago, I got invited to dinner with a big group of executives. It was an unusually fancy spread—expensive Champagne, foie gras, beef tenderloin—and as our entrées arrived, the conversation turned, as it often does in these circles, to AI and automation.

pages: 336 words: 93,672

The Future of the Brain: Essays by the World's Leading Neuroscientists
by Gary Marcus and Jeremy Freeman
Published 1 Nov 2014

But as Steven Pinker and I showed, the details were rarely correct empirically; more than that, nobody was ever able to turn a neural network into a functioning system for understanding language. Today neural networks have finally found a valuable home—in machine learning, especially in speech recognition and image classification, due in part to innovative work by researchers such as Geoff Hinton and Yann LeCun. But the utility of neural networks as models of mind and brain remains marginal, useful, perhaps, in aspects of low-level perception but of limited utility in explaining more complex, higher-level cognition. Why is the scope of neural networks so limited if the brain itself is so obviously a neural network?

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

Classical AI scientists dismissed these as “shallow” or “empirical,” because statistical approaches using data didn’t use knowledge and couldn’t handle reasoning or planning very well (if at all). But with the web providing the much-needed data, the approaches started showing promise. The deep learning “revolution” began around 2006, with early work by Geoff Hinton, Yann LeCun, and Yoshua Bengio. By 2010, Google, Microsoft, and other Big Tech companies were using neural networks for major consumer applications such as voice recognition, and by 2012, Android smartphones featured neural network technology. From about this time up through 2020 (as I write this), deep learning has been the hammer causing all the problems of AI to look like a nail—problems that can be approached “from the ground up,” like playing games and recognizing voice and image data, now account for most of the research and commercial dollars in AI.

pages: 913 words: 265,787

How the Mind Works
by Steven Pinker
Published 1 Jan 1997

But that poses a problem the perceptron did not have to worry about: how to adjust the connections from the input units to the hidden units. It is problematic because the teacher, unless it is a mind reader, has no way of knowing the “correct” states for the hidden units, which are sealed inside the network. The psychologists David Rumelhart, Geoffrey Hinton, and Ronald Williams hit on a clever solution. The output units propagate back to each hidden unit a signal that represents the sum of the hidden unit’s errors across all the output units it connects to (“you’re sending too much activation,” or “you’re sending too little activation,” and by what amount).
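
A numerical sketch of the error signal described above, under assumptions of my own (a 3-4-2 sigmoid network, squared error, and one invented training example); the key line is the one where each hidden unit's error is computed as a weighted sum of the output units' errors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny network: 3 inputs -> 4 hidden units (sigmoid) -> 2 output units (sigmoid).
W1, W2 = rng.normal(0, 0.5, (3, 4)), rng.normal(0, 0.5, (4, 2))
x = np.array([0.2, 0.7, -0.4])        # one made-up training example
target = np.array([1.0, 0.0])
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(500):
    # Forward pass.
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)

    # Error signal at the output units: how far off, scaled by the sigmoid slope.
    delta_out = (output - target) * output * (1 - output)

    # Each hidden unit's error is the sum of the output units' error signals,
    # weighted by its connection to each output unit (the "too much / too little
    # activation, and by this amount" signal in the passage).
    delta_hidden = (W2 @ delta_out) * hidden * (1 - hidden)

    # Gradient-descent updates of the connection weights.
    W2 -= 0.5 * np.outer(hidden, delta_out)
    W1 -= 0.5 * np.outer(x, delta_hidden)

print("output after training:", output.round(3), "target:", target)
```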

The mind needs a representation for the proposition itself. In this example, the model needs an extra layer of units—most straightforwardly, a layer dedicated to representing the entire proposition, separately from the concepts and their roles. The bottom of page 121 shows, in simplified form, a model devised by Geoffrey Hinton that does handle the sentences. The bank of “proposition” units light up in arbitrary patterns, a bit like serial numbers, that label complete thoughts. It acts as a superstructure keeping the concepts in each proposition in their proper slots. Note how closely the architecture of the network implements standard, language-like mentalese!

Simulated evolution gives the networks a big head start in their learning careers. So evolution can guide learning in neural networks. Surprisingly, learning can guide evolution as well. Remember Darwin’s discussion of “the incipient stages of useful structures”—the what-good-is-half-an-eye problem. The neural-network theorists Geoffrey Hinton and Steven Nowlan invented a fiendish example. Imagine an animal controlled by a neural network with twenty connections, each either excitatory (on) or neutral (off). But the network is utterly useless unless all twenty connections are correctly set. Not only is it no good to have half a network; it is no good to have ninety-five percent of one.
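
The passage is setting up Hinton and Nowlan's well-known 1987 simulation. The sketch below uses my own simplified assumptions (an arbitrary target wiring, all-or-nothing success, and learning modeled as random guessing of the unspecified connections) just to make the point concrete: a genome that hard-wires 19 of 20 connections correctly is still worthless, while one that hard-wires only half of them but leaves the rest learnable succeeds surprisingly often.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CONN, TRIALS = 20, 1000
target = rng.integers(0, 2, N_CONN)            # the one useful wiring

def lifetime_success(genome):
    """genome entries: 0/1 = hard-wired, -1 = plastic (set by guessing).
    Returns True if the animal ever hits the target wiring in its lifetime."""
    fixed_ok = all(g in (-1, t) for g, t in zip(genome, target))
    if not fixed_ok:
        return False                           # one wrong hard-wired connection ruins it
    plastic = [i for i, g in enumerate(genome) if g == -1]
    for _ in range(TRIALS):                    # learning = repeated random guessing
        guess = rng.integers(0, 2, len(plastic))
        if np.array_equal(guess, target[plastic]):
            return True
    return False

# 19 of 20 connections hard-wired correctly, one wrong, no learning: useless.
almost = target.copy()
almost[0] = 1 - almost[0]
print("95%-correct, no learning:", lifetime_success(almost))

# 10 connections hard-wired correctly and 10 left plastic: guessing 10 bits over
# 1000 trials succeeds with probability about 1 - (1 - 2**-10)**1000, roughly 0.62.
half_plastic = target.copy()
half_plastic[:10] = -1
hits = sum(lifetime_success(half_plastic) for _ in range(200))
print("50% hard-wired plus learning, success rate:", hits / 200)
```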

pages: 764 words: 261,694

The Elements of Statistical Learning (Springer Series in Statistics)
by Trevor Hastie , Robert Tibshirani and Jerome Friedman
Published 25 Aug 2009

Just as we have learned a great deal from researchers outside of the field of statistics, our statistical viewpoint may help others to better understand different aspects of learning: There is no true interpretation of anything; interpretation is a vehicle in the service of human comprehension. The value of interpretation is in enabling others to fruitfully think about an idea. –Andreas Buja We would like to acknowledge the contribution of many people to the conception and completion of this book. David Andrews, Leo Breiman, Andreas Buja, John Chambers, Bradley Efron, Geoffrey Hinton, Werner Stuetzle, and John Tukey have greatly influenced our careers. Balasubramanian Narasimhan gave us advice and help on many computational problems, and maintained an excellent computing environment. Shin-Ho Bang helped in the production of a number of the figures. Lee Wilkinson gave valuable tips on color production.

Compute the leading principal component and factor analysis directions. Hence show that the leading principal component aligns itself in the maximal variance direction $X_3$, while the leading factor essentially ignores the uncorrelated component $X_3$, and picks up the correlated component $X_2 + X_1$ (Geoffrey Hinton, personal communication).

Ex. 14.16 Consider the kernel principal component procedure outlined in Section 14.5.4. Argue that the number $M$ of principal components is equal to the rank of $\mathbf{K}$, which is the number of non-zero elements in $\mathbf{D}$. Show that the $m$th component $\mathbf{z}_m$ (the $m$th column of $\mathbf{Z}$) can be written (up to centering) as $z_{im} = \sum_{j=1}^{N} \alpha_{jm} K(x_i, x_j)$, where $\alpha_{jm} = u_{jm}/d_m$.
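
A quick numerical check of the claim in the first exercise above, using an invented data-generating process that matches its description ($X_1$ and $X_2$ highly correlated, $X_3$ uncorrelated but with the largest variance); the scikit-learn calls are my choice of tooling, not the textbook's:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(3)
n = 2000
z1, z2, z3 = rng.normal(size=(3, n))
X1 = z1
X2 = z1 + 0.1 * z2          # nearly a copy of X1, so X1 and X2 are highly correlated
X3 = 5.0 * z3               # uncorrelated with the others, but largest variance
X = np.column_stack([X1, X2, X3])

pca = PCA(n_components=1).fit(X)
fa = FactorAnalysis(n_components=1).fit(X)

print("leading principal component direction:", pca.components_[0].round(2))
# essentially all weight on the X3 coordinate (up to sign)
print("leading factor loadings:", fa.components_[0].round(2))
# concentrated on X1 and X2, with essentially no loading on X3
```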

The restricted form of this model simplifies the Gibbs sampling for estimating the expectations in (17.37), since the variables in each layer are independent of one another, given the variables in the other layers. Hence they can be sampled together, using the conditional probabilities given by expression (17.30). The resulting model is less general than a Boltzmann machine, but is still useful; for example it can learn to extract interesting features from images.5

5 We thank Geoffrey Hinton for assistance in the preparation of the material on RBMs.

By alternately sampling the variables in each layer of the RBM shown in Figure 17.6, it is possible to generate samples from the joint density model. If the V1 part of the visible layer is clamped at a particular feature vector during the alternating sampling, it is possible to sample from the distribution over labels given V1.
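
The alternating, layer-at-a-time Gibbs sampling described above can be sketched directly. This is a toy binary RBM with random placeholder parameters (not a trained model, and not in the textbook's notation); the point is only that each layer can be sampled in one block because its units are conditionally independent given the other layer:

```python
import numpy as np

rng = np.random.default_rng(4)
n_visible, n_hidden = 6, 3

# Placeholder parameters; in practice these would come from training.
W = rng.normal(0, 0.5, (n_visible, n_hidden))   # pairwise weights
b = np.zeros(n_visible)                          # visible biases
c = np.zeros(n_hidden)                           # hidden biases
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def sample_hidden(v):
    # Given the visible layer, the hidden units are conditionally independent,
    # so the whole layer is sampled in one shot.
    p = sigmoid(c + v @ W)
    return (rng.random(n_hidden) < p).astype(float)

def sample_visible(h):
    # The same trick in the other direction.
    p = sigmoid(b + W @ h)
    return (rng.random(n_visible) < p).astype(float)

# Alternate between the two layers to draw approximate samples from the joint model.
v = rng.integers(0, 2, n_visible).astype(float)
for _ in range(1000):
    h = sample_hidden(v)
    v = sample_visible(h)

print("one sample from the model:", v, h)
```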

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

Many other researchers provided relevant findings and insights that enriched our thinking, including Mark Purdy, Ladan Davarzani, Athena Peppes, Philippe Roussiere, Svenja Falk, Raghav Narsalay, Madhu Vazirani, Sybille Berjoan, Mamta Kapur, Renee Byrnes, Tomas Castagnino, Caroline Liu, Lauren Finkelstein, Andrew Cavanaugh, and Nick Yennaco. We owe a special debt to the many visionaries and pioneers who have blazed AI trails and whose work has inspired and informed us, including Herbert Simon, John McCarthy, Marvin Minsky, Arthur Samuel, Edward Feigenbaum, Joseph Weizenbaum, Geoffrey Hinton, Hans Moravec, Peter Norvig, Douglas Hofstadter, Ray Kurzweil, Rodney Brooks, Yann LeCun, and Andrew Ng, among many others. And huge gratitude to our colleagues who provided insights and inspiration, including Nicola Morini Bianzino, Mike Sutcliff, Ellyn Shook, Marc Carrel-Billiard, Narendra Mulani, Dan Elron, Frank Meerkamp, Adam Burden, Mark McDonald, Cyrille Bataller, Sanjeev Vohra, Rumman Chowdhury, Lisa Neuberger-Fernandez, Dadong Wan, Sanjay Podder, and Michael Biltz.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

The technical name for the algorithmic framework we have been describing is a generative adversarial network (GAN), and the approach we’ve outlined above indeed seems to be highly effective: GANs are an important component of the collection of techniques known as deep learning, which has resulted in qualitative improvements in machine learning for image classification, speech recognition, automatic natural language translation, and many other fundamental problems. (The Turing Award, widely considered the Nobel Prize of computer science, was recently awarded to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun for their pioneering contributions to deep learning.)

Fig. 21. Synthetic cat images created by a generative adversarial network (GAN), from https://ajolicoeur.wordpress.com/cats.

But with all of this discussion of simulated self-play and fake cats, it might seem like we have strayed far from the core topic of this book, which is the interaction between societal norms and values and algorithmic decision-making.

pages: 296 words: 66,815

The AI-First Company
by Ash Fontana
Published 4 May 2021

AI researchers made breakthroughs in stringing neurons together in a network at the start of the millennium. The Canadian computer scientist Yoshua Bengio devised a language model based on a neural network that figured out the next best word to use among all the available words in a language based on where that word usually appeared with respect to other words. Geoffrey Hinton, a British-born computer scientist and psychologist, developed a neural network that linked many layers of neurons together, the precursor to deep learning. Importantly, researchers worked to get these neural networks running efficiently on the available computer chips, settling on the chips used for computer graphics because they are particularly good at running many numerical computations in parallel.

pages: 280 words: 74,559

Fully Automated Luxury Communism
by Aaron Bastani
Published 10 Jun 2019

Incredibly, it has a self-teaching neural network which constantly adds to its knowledge of how the heart works with each new case it examines. It is in areas such as this where automation will make initial incursions into medicine, boosting productivity by accompanying, rather than replacing, existing workers. Yet such systems will improve with each passing year and some, like ‘godfather of deep learning’ Geoffrey Hinton, believe that medical schools will soon stop training radiologists altogether. Perhaps that is presumptuous – after all, we’d want a level of quality control and maybe even the final diagnosis to involve a human – but even then, this massively upgraded, faster process might need one trained professional where at present there are dozens, resulting in a quicker, superior service that costs less in both time and money.

pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms
by Hannah Fry
Published 17 Sep 2018

Neural networks have been around since the middle of the twentieth century, but until quite recently we’ve lacked the widespread access to really powerful computers necessary to get the best out of them. The world was finally forced to sit up and take them seriously in 2012 when computer scientist Geoffrey Hinton and two of his students entered a new kind of neural network into an image recognition competition.12 The challenge was to recognize – among other things – dogs. Their artificially intelligent algorithm blew the best of its competitors out of the water and kicked off a massive renaissance in deep learning.

Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurelien Geron
Published 14 Aug 2019

However, after a while the validation error stops decreasing and actually starts to go back up. This indicates that the model has started to overfit the training data. With early stopping you just stop training as soon as the validation error reaches the minimum. It is such a simple and efficient regularization technique that Geoffrey Hinton called it a “beautiful free lunch.”

Figure 4-20. Early stopping regularization

Tip: With Stochastic and Mini-batch Gradient Descent, the curves are not so smooth, and it may be hard to know whether you have reached the minimum or not. One solution is to stop only after the validation error has been above the minimum for some time (when you are confident that the model will not do any better), then roll back the model parameters to the point where the validation error was at a minimum.
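
A minimal sketch of the patience-and-rollback variant described in the tip; the model (scikit-learn's SGDRegressor on made-up data), the patience of 20 epochs, and the other hyperparameters are illustrative choices, not necessarily the book's own example:

```python
import numpy as np
from copy import deepcopy
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.rand(200, 1)
y = 3 * X.ravel() + rng.randn(200) * 0.1
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

# max_iter=1 plus warm_start=True means each .fit() call runs one more epoch
# from where the previous call left off.
model = SGDRegressor(max_iter=1, tol=None, warm_start=True,
                     learning_rate="constant", eta0=0.01, random_state=42)

best_val_error = float("inf")
best_model = None
patience, epochs_since_best = 20, 0

for epoch in range(1000):
    model.fit(X_train, y_train)
    val_error = mean_squared_error(y_val, model.predict(X_val))
    if val_error < best_val_error:
        best_val_error, best_model = val_error, deepcopy(model)  # remember the minimum
        epochs_since_best = 0
    else:
        epochs_since_best += 1
        if epochs_since_best >= patience:   # confident it will not do better
            break

# "Rolling back" means using best_model, the parameters at the validation minimum.
print("stopped at epoch", epoch, "best validation MSE:", round(best_val_error, 5))
```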

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

His career covered computer-assisted detection applications in medicine and national security, from improving 3D mammography to remotely scanning cargo containers coming into U.S. ports for contraband. He was doing deep learning with CPUs to map mouse brains before what he referred to as “the Big Bang” in 2012, when Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton published a paper showing groundbreaking performance on ImageNet. Before then, John explained, “It took a month to train” the models he was using and “error rates were poor.” Yet he said, “The moment ImageNet happens, everybody in the computer vision community changed from whatever they were doing to deep learning, which was appropriate.”

: Brett Darcey, interview by author, October 6, 2021. 250Training safe and robust AI agents: Jack Clark and Dario Amodei, “Faulty Reward Functions in the Wild,” OpenAI Blog, December 21, 2016, https://openai.com/blog/faulty-reward-functions/; Victoria Krakovna et al., “Specification Gaming: the Flip Side of AI Ingenuity,” Deepmind Blog, April 21, 2020, https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity. 250Micro Air Vehicle Lab (MAVLab): Micro Air Vehicle Lab—TUDelft (website), 2021, https://mavlab.tudelft.nl/. 250“mix of neural networks and control theory”: Federico Paredes, interview by author, January 15, 2019. 250“for a lot of what we want to do”: Chuck Howell, interview by author, May 25, 2021. 251“model distillation”: Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, Distilling the Knowledge in a Neural Network (arXiv.org, March 9, 2015), https://arxiv.org/pdf/1503.02531.pdf 251some pharmaceuticals that are approved: Paul Gerrard and Robert Malcolm, “Mechanisms of Modafinil: A Review of Current Research,” Neuropsychiatric Disease and Treatment 3, no. 3 (June 2007): 349–64, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2654794/; PROVIGIL(R) (Modafinil) Tablets [C-IV], package insert, October 2010, https://www.accessdata.fda.gov/drugsatfda_docs/label/2010/020717s030s034s036lbl.pdf; Jonathan Zittrain, “Intellectual Debt: With Great Power Comes Great Ignorance,” Berkman Klein Center, July 24, 2019, https://medium.com/berkman-klein-center/from-technical-debt-to-intellectual-debt-in-ai-e05ac56a502c; Jonathan Zittrain, “The Hidden Costs of Automated Thinking,” New Yorker, July 23, 2019, https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking. 251“We rely on a complex socio-technical system”: Howell, interview. 251necessary processes for AI assurance to establish justified confidence: Pedro A.

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

Today, AI and robotics are largely separate fields of research, although the line is starting to blur. Once AI researchers understand the essential role of movement and reference frames for creating AGI, the separation between artificial intelligence and robotics will disappear completely. One AI scientist who understands the importance of reference frames is Geoffrey Hinton. Today’s neural networks rely on ideas that Hinton developed in the 1980s. Recently, he has become critical of the field because deep learning networks lack any sense of location and, therefore, he argues, they can’t learn the structure of the world. In essence, this is the same criticism I am making, that AI needs reference frames.

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

In 2014 Google paid more than $500 million for the British AI start-up DeepMind. Corporate parent Alphabet is establishing well-funded AI centers across the globe. “I don’t buy into the killer robot [theory],” Google director of research Peter Norvig told CNBC. Another Google researcher, the psychologist and computer scientist Geoffrey Hinton, said, “I am in the camp that it is hopeless.” Mark Zuckerberg and several Facebook executives went so far as to stage an intervention for Musk, inviting him to dinner at Zuckerberg’s house so they could ply him with arguments that AI is okay. It didn’t work. Ever since, Musk and Zuckerberg have waged a social media feud on the topic.

pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies
by Igor Tulchinsky
Published 30 Sep 2019

Another alternative is FloatBoost, which incorporates the backtracking mechanism of floating search and repeatedly performs a backtracking to remove unfavorable weak classifiers after a new weak classifier is added by AdaBoost; this ensures a lower error rate and reduced feature set at the cost of about five times longer training time.

Deep Learning

Deep learning (DL) is a popular topic today – and a term that is used to discuss a number of rather distinct things. Some data scientists think DL is just a buzz word or a rebranding of neural networks. The name comes from Canadian scientist Geoffrey Hinton, who created an unsupervised method known as the restricted Boltzmann machine (RBM) for pretraining NNs with a large number of neuron layers. That was meant to improve on the backpropagation training method, but there is no strong evidence that it really was an improvement. Another direction in deep learning is recurrent neural networks (RNNs) and natural language processing.
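
The backtracking idea in the FloatBoost sentence at the top of this excerpt can be sketched as follows. This is a simplified illustration, not the published FloatBoost algorithm (for one thing, it does not recompute the sample weights after a removal): ordinary discrete AdaBoost over decision stumps, plus a pruning step after each round that drops a weak classifier whenever doing so lowers the ensemble's training error.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_error(stumps, alphas, X, y):
    """0-1 error of the weighted-vote ensemble; labels are assumed to be +/-1."""
    agg = np.zeros(len(y))
    for stump, alpha in zip(stumps, alphas):
        agg += alpha * stump.predict(X)
    return float(np.mean(np.sign(agg) != y))

def adaboost_with_backtracking(X, y, n_rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # Standard AdaBoost step: fit a stump to the weighted sample.
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        stumps.append(stump)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        # Backtracking: repeatedly remove the weak classifier whose removal
        # most reduces training error, as long as some removal helps.
        while len(stumps) > 1:
            current = ensemble_error(stumps, alphas, X, y)
            trial_errors = [
                ensemble_error(stumps[:i] + stumps[i + 1:],
                               alphas[:i] + alphas[i + 1:], X, y)
                for i in range(len(stumps))
            ]
            best = int(np.argmin(trial_errors))
            if trial_errors[best] < current:
                del stumps[best], alphas[best]
            else:
                break
    return stumps, alphas

# Toy usage: an XOR-like problem that a single stump cannot solve.
rng = np.random.RandomState(0)
X = rng.randn(300, 2)
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost_with_backtracking(X, y)
print(len(stumps), "weak classifiers kept, training error:",
      ensemble_error(stumps, alphas, X, y))
```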

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

They realised that convolutional neural networks solved problems at the heart of their businesses. An algorithm that can automatically recognise our friends’ faces, our favourite cute animals and the exotic places that we have visited can allow these companies to better target our interests. Alex and his PhD supervisor, Geoffrey Hinton, were recruited by Google. The following year, one of the competition winners, Rob Fergus, was offered a position at Facebook. In 2014, Google put together its own winning team, and promptly recruited Oxford PhD student Karen Simonyan, who came in second place. In 2015, it was Microsoft researcher Kaiming He and his colleagues who took the prize.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

Connectionism redux Even with symbolic logic dominant, research on connectionist AI continued in the background. Some of today’s superstar researchers began academic life toiling away in what was often seen as a relatively unglamorous backwater. Facebook’s Yann LeCun spent the late 1980s, working on ConvNets, a neural network specialised in visual tasks. Geoffrey Hinton, another titan of the field today, was also plugging away on neural networks in the 1980s—making important contributions to a vital breakthrough in the maths underpinning some of today’s connectionism. In the last decade, though, these relative outsiders have emphatically moved to the mainstream.

Driverless: Intelligent Cars and the Road Ahead
by Hod Lipson and Melba Kurman
Published 22 Sep 2016

A neural network named SuperVision, created by a team of researchers from the University of Toronto, correctly identified objects 85 percent of the time, a phenomenal performance in the world of image-recognition software.9 A drop from a 25 percent to 15 percent error rate might not sound like a lot, but for the computer-vision community, which was used to seeing annual improvements of a fraction of a percent each year, it was like seeing a man run the first four-minute mile. SuperVision’s creators were students Alex Krizhevsky and Ilya Sutskever, and their professor, Geoffrey Hinton. SuperVision was a type of neural network called a convolutional network. Many of the convolutional network’s features were based on techniques laid out more than thirty years earlier by Dr. Fukushima for the Neocognitron. Additional refinements stemmed from work conducted by the research groups of Yann LeCun at NYU, Andrew Ng at Stanford, and Yoshua Bengio at the University of Montreal.
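To make the term concrete, the short sketch below (plain NumPy, an illustrative assumption rather than anything from SuperVision itself) shows the core operation a convolutional network repeats: sliding a small learned filter across an image and applying a nonlinearity to produce a feature map.

import numpy as np

def conv2d_valid(image, kernel):
    # Naive "valid" 2D convolution (strictly, cross-correlation, as deep nets use it).
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))                 # a toy grayscale input
kernel = rng.normal(size=(5, 5))             # one learned 5x5 filter
feature_map = np.maximum(conv2d_valid(image, kernel), 0.0)   # ReLU nonlinearity
print(feature_map.shape)                     # (24, 24)

A real network such as SuperVision stacks many such filters across many layers and learns the filter values from data rather than drawing them at random.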

pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future
by Geoffrey Cain
Published 28 Jun 2021

The Chinese, he said, had less than 10 percent of their population linked up to the internet in 2005, but had rapidly become the world’s most enthusiastic users of social media, mobile apps, and mobile payments.7 In 2011, almost 40 percent of the population, or about 513 million people, had their own internet connections.8 All those internet users were producing the data, through their purchases and clicks, that could train the neural networks to solve myriad tasks, including surveilling the users. That same year, a pair of research assistants working for the famed AI researcher Geoffrey Hinton, a computer science professor at the University of Toronto who would later also become a Google AI researcher, achieved a hardware breakthrough that made these advances possible. The researchers realized they could repurpose graphics processing units (GPUs), the components that had driven advances in computer game graphics, to improve the processing speed of a deep neural net.9 With GPUs, AI developers could take the same massively parallel arithmetic used to display shapes and images on a computer screen and use it to train a neural network to find patterns.
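As a rough sketch of what that repurposing looks like with today's tools (PyTorch here, an assumed modern library rather than the researchers' 2011 code; sizes and names are illustrative), the same training step runs on a GPU when one is present and falls back to the CPU otherwise.

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"   # use the GPU if present

# A small classifier; on a GPU, the matrix multiplications inside nn.Linear run
# as massively parallel kernels, the same kind of arithmetic used to render graphics.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random toy batch.
x = torch.rand(64, 784, device=device)
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())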

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

These included some of the respected computer scientists I had come across in my readings on data colonialism, researchers such as Timnit Gebru, Emily Bender and Deborah Raji.13 They were worried people were missing the real, human harms enacted by these AI systems, in the pursuit of some foolhardy dream of creating a super-intelligent machine. Others like Stuart Russell and Geoffrey Hinton worried that AI was advancing too quickly, without enough knowledge or careful thought about how to design advanced systems that also protect human safety in the long term. Apart from ethical concerns, there were more prosaic ones too. Creative professionals, from writers to voice actors and visual artists, were suddenly being faced with mutant versions of their craft that were cheaper and quicker to create.14 The idea of a machine that ingests and rehashes the world’s creativity wasn’t particularly palatable to them.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

One of the reasons for this was that pattern recognition technology and other branches of artificial intelligence had already changed so much in the short time since the software had first been built. For instance, though artificial neural networks (ANNs) had fallen out of favor since the 1990s, two important papers on machine learning by Geoffrey Hinton and Ruslan Salakhutdinov in 2006 presented major improvements that returned ANNs to the forefront of AI research.3 Their work and that of others introduced important new methods for setting up and training many-layered neural networks that would go on to transform entire fields. From voice recognition and language translation to image search and fraud detection, these new methods began to be used seemingly everywhere.

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

Researchers at Facebook have likewise developed an experimental system—consisting of nine levels of artificial neurons—that can correctly determine whether two photographs are of the same person 97.25 percent of the time, even if lighting conditions and orientation of the faces vary. That compares with 97.53 percent accuracy for human observers.9 Geoffrey Hinton of the University of Toronto, one of the leading researchers in the field, notes that deep learning technology “scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better.”10 In other words, even without accounting for likely future improvements in their design, machine learning systems powered by deep learning networks are virtually certain to see continued dramatic progress simply as a result of Moore’s Law.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

The winner of the game is the player who captures the largest territory of the board, based on various scoring rules that evaluate the territories occupied by the stones.14 Although it has simple elements and rules, Go is considered one of the most intellectually challenging games ever devised, with a complexity that dwarfs Chess. Thus, it was a great surprise when, in 2016, AlphaGo beat South Korean Go grandmaster Lee Sedol four out of five times and was declared the winner in that five-game match.15 It was a victory that no one thought possible for an algorithm, prompting Geoffrey Hinton, professor and senior Google AI researcher, to rather ambitiously explain the victory’s significance to a questioning reporter thus:16 It relies on a lot of intuition. The really skilled players just sort of see where a good place to put a stone would be. They do a lot of reasoning as well, which they call reading, but they also have very good intuition about where a good place to go would be, and that’s the kind of thing that people just thought computers couldn’t do.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

New York/London, Routledge, 386–407. https://doi.org/10.4324/9780429505188-33.
Kirschenbaum, Matthew (2023). Prepare for the Textpocalypse. The Atlantic, 8 March 2023. https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/.
LeCun, Yann/Bengio, Yoshua/Geoffrey Hinton (2015). Deep Learning. Nature 521.7553, 436–44. https://doi.org/10.1038/nature14539.
Moretti, Franco (2013). Distant Reading. London/New York, Verso. https://doi.org/10.3366/ccs.2013.0105.

29 Why AI Cannot Think: A Theoretical Approach (Daniel M. Feige)

In June 2022, Blake Lemoine, then an employee at Google, published a sensational announcement: According to him, LaMDA, the chatbot that he was working on, had developed consciousness and feelings (Wertheimer 2022).

pages: 523 words: 143,139

Algorithms to Live By: The Computer Science of Human Decisions
by Brian Christian and Tom Griffiths
Published 4 Apr 2016

Cringely, Peter Denning, Raymond Dong, Elizabeth Dupuis, Joseph Dwyer, David Estlund, Christina Fang, Thomas Ferguson, Jessica Flack, James Fogarty, Jean E. Fox Tree, Robert Frank, Stuart Geman, Jim Gettys, John Gittins, Alison Gopnik, Deborah Gordon, Michael Gottlieb, Steve Hanov, Andrew Harbison, Isaac Haxton, John Hennessy, Geoff Hinton, David Hirshliefer, Jordan Ho, Tony Hoare, Kamal Jain, Chris Jones, William Jones, Leslie Kaelbling, David Karger, Richard Karp, Scott Kirkpatrick, Byron Knoll, Con Kolivas, Michael Lee, Jan Karel Lenstra, Paul Lynch, Preston McAfee, Jay McClelland, Laura Albert McLay, Paul Milgrom, Anthony Miranda, Michael Mitzenmacher, Rosemarie Nagel, Christof Neumann, Noam Nisan, Yukio Noguchi, Peter Norvig, Christos Papadimitriou, Meghan Peterson, Scott Plagenhoef, Anita Pomerantz, Balaji Prabhakar, Kirk Pruhs, Amnon Rapoport, Ronald Rivest, Ruth Rosenholtz, Tim Roughgarden, Stuart Russell, Roma Shah, Donald Shoup, Steven Skiena, Dan Smith, Paul Smolensky, Mark Steyvers, Chris Stucchio, Milind Tambe, Robert Tarjan, Geoff Thorpe, Jackson Tolins, Michael Trick, Hal Varian, James Ware, Longhair Warrior, Steve Whittaker, Avi Wigderson, Jacob Wobbrock, Jason Wolfe, and Peter Zijlstra.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

For many applications, however, the learning that takes place in a neural network is little different from the learning that takes place in linear regression, a statistical technique developed by Adrien-Marie Legendre and Carl Friedrich Gauss in the early 1800s.
24. The basic algorithm was described by Arthur Bryson and Yu-Chi Ho as a multi-stage dynamic optimization method in 1969 (Bryson and Ho 1969). The application to neural networks was suggested by Paul Werbos in 1974 (Werbos 1994), but it was only after the work by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986 (Rumelhart et al. 1986) that the method gradually began to seep into the awareness of a wider community.
25. Nets lacking hidden layers had previously been shown to have severely limited functionality (Minsky and Papert 1969).
26. E.g., MacKay (2003).
27.

pages: 566 words: 169,013

Nexus: A Brief History of Information Networks From the Stone Age to AI
by Yuval Noah Harari
Published 9 Sep 2024

We have a moral imperative to realize this promise of new technologies.” Kurzweil is keenly aware of the technology’s potential perils, and analyzes them at length, but believes they could be mitigated successfully.15 Others are more skeptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs like Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk, and Mustafa Suleyman have warned the public that AI could destroy our civilization.16 A 2024 article co-authored by Bengio, Hinton, and numerous other experts noted that “unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity.”17 In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10 percent chance to advanced AI leading to outcomes as bad as human extinction.18 In 2023 close to thirty governments—including those of China, the United States, and the U.K.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

This makes the customer-service representative less effective and may encourage managers and technologists to seek additional ways of reducing the tasks allocated to them even further. These lessons about human intelligence and adaptability are often ignored in the AI community, which rushes to automate a range of tasks, regardless of the role of human skill. The triumph of AI in radiology is much trumpeted. In 2016 Geoffrey Hinton, cocreator of modern deep-learning methods, Turing Award winner, and Google scientist, suggested that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” Nothing of the sort has yet happened, and demand for radiologists has increased since 2016, for a very simple reason.

pages: 706 words: 202,591

Facebook: The Inside Story
by Steven Levy
Published 25 Feb 2020

He wasn’t thinking about content moderation then, but rather improvement in things like News Feed ranking, better targeting in ad auctions, and facial recognition to better identify your friends in photographs, so you’d engage more with those posts. But the competition to hire AI wizards was fierce. The godfather of deep learning was a British computer scientist working in Toronto named Geoffrey Hinton. He was like the Batman of this new and irreverent form of AI, and his acolytes were a trio of brilliant Robins who individually were making their own huge contributions. One of the Robins, a Parisian named Yann LeCun, jokingly dubbed Hinton’s movement “the Conspiracy.” But the potential of deep learning was no joke to the big tech companies who saw it as a way to perform amazing tasks at scale, everything from facial recognition to instant translation from one language to another.