deep learning



pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

MARTIN FORD: How does all of that thinking relate to the current overwhelming focus on deep learning? Clearly, deep neural networks have transformed AI, but lately I’ve been hearing more pushback against deep learning hype, and even some suggestions that we could be facing a new AI Winter. Is deep learning really the primary path forward, or is it just one tool in the toolbox?

JOSH TENENBAUM: What most people think of as deep learning is one tool in the toolbox, and a lot of deep learning people realize that too. The term “deep learning” has expanded beyond its original definition.

MARTIN FORD: I would define deep learning broadly as any approach using sophisticated neural networks with lots of layers, rather than using a very technical definition involving specific algorithms like backpropagation or gradient descent.

Going one step further, deep learning is where we have neural networks that have many layers. There is no required minimum for a neural network to be deep, but we would usually say that two or three layers is not a deep learning network, while four or more layers is deep learning. Some deep learning networks get up to one thousand layers or more. By having many layers in deep learning, we can represent a very complex transformation between the input and output, by a composition of much simpler transformations, each represented by one of those layers in the network. The deep learning hypothesis suggests that many layers make it easier for the learning algorithm to find a predictor, to set all the connection strengths in the network so that it does a good job.
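The idea described here, that a deep network represents a complex input-to-output transformation as a composition of simple per-layer transformations, can be sketched in a few lines of NumPy. This is an illustrative toy only: the layer sizes, random (untrained) weights, and ReLU activation are all arbitrary choices, not anything from the books quoted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(w, b, x):
    # One layer: an affine transform followed by a simple nonlinearity (ReLU).
    return np.maximum(0.0, w @ x + b)

# A "deep" network is just a composition of such layers. Here: four layers
# of random weights mapping an 8-dim input down to a 2-dim output.
sizes = [8, 16, 16, 8, 2]
params = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def network(x):
    for w, b in params:
        x = layer(w, b, x)
    return x

y = network(rng.normal(size=8))
print(y.shape)  # (2,)
```

Learning would then consist of adjusting every entry of those weight matrices so that the composed function makes good predictions, which is exactly the job the passage assigns to the learning algorithm.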

MARTIN FORD: Do you think there will be a backlash against all the hype surrounding deep learning when its limitations are more widely recognized?

BARBARA GROSZ: I have survived numerous AI Winters in the past and I’ve come away from them feeling both fearful and hopeful. I’m fearful that people, once they see the limitations of deep learning, will say, “Oh, it doesn’t really work.” But I’m hopeful that, because deep learning is so powerful for so many things and in so many areas, there won’t be an AI Winter around deep learning. I do think, however, that to avoid an AI Winter for deep learning, people in the field need to put deep learning in its correct place, and be clear about its limitations.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

he published what he called a trilogy of papers critiquing: Gary Marcus, “Deep Learning: A Critical Appraisal,” 2018, https://arxiv.org/abs/1801.00631; Gary Marcus, “In Defense of Skepticism About Deep Learning,” 2018, https://medium.com/@GaryMarcus/in-defense-of-skepticism-about-deep-learning-6e8bfd5ae0f1; Gary Marcus, “Innateness, AlphaZero, and Artificial Intelligence,” 2018, https://arxiv.org/abs/1801.05667.
would eventually lead to a book: Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust (New York: Pantheon, 2019).
he agreed that deep learning alone could not achieve true intelligence: “Artificial Intelligence Debate—Yann LeCun vs.

* * * — IN the spring of 2012, Geoff Hinton phoned Jitendra Malik, the University of California–Berkeley professor who had publicly attacked Andrew Ng over his claims that deep learning was the future of computer vision. Despite the success of deep learning with speech recognition, Malik and his colleagues questioned whether the technology would ever master the art of identifying images. And because he was someone who generally assumed that incoming calls were arriving from telemarketers trying to sell him something, it was surprising that he even picked up the phone. When he did, Hinton said: “I hear you don’t like deep learning.” Malik said this was true, and when Hinton asked why, Malik said there was no scientific evidence to back any claim that deep learning could outperform any other technology on computer vision.

Hinton pointed to recent papers showing that deep learning worked well when identifying objects on multiple benchmark tests. Malik said these datasets were too old. No one cared about them. “This is not going to convince anyone who doesn’t share your ideological predilections,” he said. So Hinton asked what would convince him. At first, Malik said deep learning would have to master a European dataset called PASCAL. “PASCAL is too small,” Hinton told him. “To make this work, we need a lot of training data.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Bar, “Artificial Intelligence: Driving the Next Technology Cycle,” in Next Generation (Zurich: Julius Baer Group, 2017); Chollet, F., Deep Learning with Python (Shelter Island, New York: Manning, 2017); T. L. Fonseca, “What’s Happening Inside the Convolutional Neural Network? The Answer Is Convolution,” buZZrobot (2017); A. Geitgey, “Machine Learning Is Fun! Part 3: Deep Learning and Convolutional Neural Networks,” Medium (2016); Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature (2015): 521(7553), 436–444; R. Raicea, “Want to Know How Deep Learning Works? Here’s a Quick Guide for Everyone,” Medium (2017); P. Voosen, “The AI Detectives,” Science (2017): 357(6346), 22–27.

Neural networks aren’t actually very neural. François Chollet, a deep learning expert at Google, points out in Deep Learning with Python, “There is no evidence that the brain implements anything like the learning mechanisms in use in modern deep learning models.”2 Of course, there’s no reason why machines should mimic the brain; that’s simplistic reverse-anthropomorphic thinking. And, when we see machines showing some semblance of smartness, we anthropomorphize and think that our brains are just some kind of CPU equivalent, cognitive processing units. Deep learning AI is remarkably different from and complementary to human learning.

Cruz-Roa, A., et al., “Accurate and Reproducible Invasive Breast Cancer Detection in Whole-Slide Images: A Deep Learning Approach for Quantifying Tumor Extent.” Sci Rep, 2017. 7: p. 46450.
51. Ehteshami Bejnordi, B., et al., “Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer.” JAMA, 2017. 318(22): pp. 2199–2210.
52. Golden, J. A., “Deep Learning Algorithms for Detection of Lymph Node Metastases from Breast Cancer: Helping Artificial Intelligence Be Seen.” JAMA, 2017. 318(22): pp. 2184–2186.
53. Yang, S. J., et al., “Assessing Microscope Image Focus Quality with Deep Learning.” BMC Bioinformatics, 2018. 19(1): p. 77.
54.

The Deep Learning Revolution (The MIT Press)
by Terrence J. Sejnowski
Published 27 Sep 2018

When you get Google to translate for you, it now uses deep learning designed by Dean’s Google Brain team. When you google a search term, deep learning helps to rank the results. When you talk to the Google assistant, it uses deep learning to recognize the words you are saying, and as it gets better at holding a conversation with you, it will be using deep learning to serve you better.

[Figure 12.9: NASCAR jacket Terry Sejnowski wore to open the 2015 NIPS Conference in Montreal. Sponsors ranged from top-tier Internet companies to financial and media companies. They all have a stake in deep learning. Courtesy of the NIPS Foundation.]

If you use voice recognition on an Android phone or Google Translate on the Internet, you have communicated with neural networks1 trained by deep learning. In the last few years, deep learning has generated enough profit for Google to cover the costs of all its futuristic projects at Google X, including self-driving cars, Google Glass, and Google Brain.2 Google was one of the first Internet companies to embrace deep learning; in 2013, it hired Geoffrey Hinton, the father of deep learning, and other companies are racing to catch up. The recent progress in artificial intelligence (AI) was made by reverse engineering brains.

All doctors will become better at diagnosing rare skin diseases with the help of deep learning.16

Deep Cancer

The detection of metastatic breast cancer in images of lymph node biopsies on slides is done by experts who make mistakes, mistakes that have deadly consequences. This is a pattern recognition problem for which deep learning should excel. And indeed, a deep learning network trained on a large dataset of slides for which ground truth was known reached an accuracy of 0.925, good but not as good as experts who achieved 0.966 on the same test set.17 However, when the predictions of deep learning were combined

[Figure 1.5: Artist’s impression of a deep learning network diagnosing a skin lesion with high accuracy, cover of February 2, 2017, issue of Nature.]

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

John Markoff, “Scientists see promise in deep-learning programs,” New York Times, November 23, 2012, www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html.
10. Dario Amodei and Danny Hernandez, “AI and Compute,” OpenAI Blog, May 16, 2018, openai.com/blog/ai-and-compute/.
11. Will Knight, “Facebook’s head of AI says the field will soon ‘hit the wall,’” Wired, December 4, 2019, www.wired.com/story/facebooks-ai-says-field-hit-wall/.
12. Kim Martineau, “Shrinking deep learning’s carbon footprint,” MIT News, August 7, 2020, news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807.
13.

While the central processing chip that powers your laptop computer might have two, or perhaps four, computational “cores,” a contemporary high-end GPU would likely have thousands of specialized cores, all of which can crunch numbers at high speed simultaneously. Once researchers discovered that the calculations required by deep learning applications were broadly similar to those needed to render graphics, they began to turn en masse to GPUs, which rapidly evolved into the primary hardware platform for artificial intelligence. Indeed, this transition was a key enabler of the deep learning revolution that took hold beginning in 2012. In September of that year, a team of AI researchers from the University of Toronto put deep learning on the technology industry’s radar by prevailing at the ImageNet Large Scale Visual Recognition Challenge, an important annual event focused on machine vision.

We’ll delve further into the history of deep learning in Chapter 4. The University of Toronto’s team used GPUs manufactured by NVIDIA, a company founded in 1993 whose business focused exclusively on designing and manufacturing state-of-the-art graphics chips. In the wake of the 2012 ImageNet competition and the ensuing widespread recognition of the powerful synergy between deep learning and GPUs, the company’s trajectory shifted dramatically, transforming it into one of the most prominent technology companies associated with the rise of artificial intelligence. Evidence of the deep learning revolution manifested directly in the company’s market value: between January 2012 and January 2020 NVIDIA’s shares soared by more than 1,500 percent.

Driverless: Intelligent Cars and the Road Ahead
by Hod Lipson and Melba Kurman
Published 22 Sep 2016

In the past few years, in search of deep-learning expertise, entire divisions of automotive companies have migrated to Silicon Valley. Deep learning is why software giants like Google and Baidu, already armed with expertise in managing huge banks of data and building intelligent software, are giving the once-invincible automotive giants a run for their money. Deep learning has been so revolutionary to the AI community that its reverberations are still unfolding as we write this book and will likely continue to unfold in the years ahead. Cars won’t be the only technology that’s transformed by deep learning. We predict that deep learning will change the developmental trajectory of mobile robotics in general.

To encourage third-party developers to build intelligent applications using their software tools, Google, Microsoft, and Facebook have each launched their own version of an open source deep-learning development platform. Despite its usefulness, it has taken decades for deep-learning software to develop. Like other types of machine-learning software, a deep-learning network is data driven, consuming huge amounts of visual data in the form of digital images or videos. Improved performance, enabled by the recent maturation of enabling technologies such as high-speed computers and digital cameras, isn’t the only reason deep-learning software has recently gained acceptance, however. The other reason is political.

Inside the network

There are several different types of deep-learning networks for image recognition, each with its own twist on the basic architecture and unique refinements in the way the training algorithm is applied. Deep learning is a fast-growing field and new architectures and algorithms appear every few weeks. One common characteristic, however, is that deep-learning networks use a cascade of many layers of artificial neurons to “extract” features from digital images that the software then identifies and labels. Cutting-edge deep-learning networks often have more than 100 layers of artificial neurons (contrast this to Rosenblatt’s Perceptron, which had only a single layer of eight neurons). Some people believe that deep-learning networks recognize objects the way humans do, by first recognizing a particular tiny feature, then abstracting that to a broader, more abstract concept.
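The "cascade of layers extracting features" can be caricatured with a one-dimensional filter in NumPy: an early stage responds to a local pattern (an "edge" in the input), and a later stage pools those responses into a more abstract summary. The filter values here are hand-picked purely for illustration; a real deep-learning network learns its filters from data.

```python
import numpy as np

signal = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float)

# Stage 1: an "edge detector" filter slid along the input.
edge_filter = np.array([-1.0, 1.0])
edges = np.array([signal[i:i + 2] @ edge_filter
                  for i in range(len(signal) - 1)])

# Stage 2: a more abstract feature built on top of stage 1 —
# "how much edge activity does this input contain?"
abstract_feature = np.abs(edges).sum()

print(edges)            # +1 at the rising edge, -1 at the falling edge
print(abstract_feature)
```

Stacking a hundred such stages, each combining the outputs of the one below, is what lets the cutting-edge networks described above move from tiny local features to whole-object concepts.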

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

This acqui-hire instantly put Google at the forefront of deep learning. Soon after, Yann LeCun was lured away from his full-time New York University professorship by Facebook to head up its newly formed AI lab. It didn’t take long before all the big tech companies (as well as many smaller ones) were snapping up deep-learning experts and their graduate students as fast as possible. Seemingly overnight, deep learning became the hottest part of AI, and expertise in deep learning guaranteed computer scientists a large salary in Silicon Valley or, better yet, venture capital funding for their proliferating deep-learning start-up companies.

A recent AI survey paper summed it up: “Because we don’t deeply understand intelligence or know how to produce general AI, rather than cutting off any avenues of exploration, to truly make progress we should embrace AI’s ‘anarchy of methods.’”14 But since the 2010s, one family of AI methods—collectively called deep learning (or deep neural networks)—has risen above the anarchy to become the dominant AI paradigm. In fact, in much of the popular media, the term artificial intelligence itself has come to mean “deep learning.” This is an unfortunate inaccuracy, and I need to clarify the distinction. AI is a field that includes a broad set of approaches, with the goal of creating machines with intelligence. Deep learning is only one such approach. Deep learning is itself one method among many in the field of machine learning, a subfield of AI in which machines “learn” from data or from their own “experiences.”

… There’s only a few hundred people in the world that can do that really well.”6 Actually, the number of deep-learning experts is growing quickly; many universities now offer courses in the subject, and a growing list of companies have started their own deep-learning training programs for employees. Membership in the deep-learning club can be quite lucrative. At a recent conference I attended, a leader of Microsoft’s AI product group spoke to the audience about the company’s efforts to hire young deep-learning engineers: “If a kid knows how to train five layers of neural networks, the kid can demand five figures.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

Those changes include the Chinese AI frenzy that AlphaGo’s matches sparked and the underlying technology that powered it to victory. AlphaGo runs on deep learning, a groundbreaking approach to artificial intelligence that has turbocharged the cognitive capabilities of machines. Deep-learning-based programs can now do a better job than humans at identifying faces, recognizing speech, and issuing loans. For decades, the artificial intelligence revolution always looked to be five years away. But with the development of deep learning over the past few years, that revolution has finally arrived. It will usher in an era of massive productivity increases but also widespread disruptions in labor markets—and profound sociopsychological effects on people—as artificial intelligence takes over human jobs across all sorts of industries.

To understand why, we must first grasp the basics of the technology and how it is set to transform our world.

A BRIEF HISTORY OF DEEP LEARNING

Machine learning—the umbrella term for the field that includes deep learning—is a history-altering technology but one that is lucky to have survived a tumultuous half-century of research. Ever since its inception, artificial intelligence has undergone a number of boom-and-bust cycles. Periods of great promise have been followed by “AI winters,” when a disappointing lack of practical results led to major cuts in funding. Understanding what makes the arrival of deep learning different requires a quick recap of how we got here. Back in the mid-1950s, the pioneers of artificial intelligence set themselves an impossibly lofty but well-defined mission: to recreate human intelligence in a machine.

After decades spent on the margins of AI research, neural networks hit the mainstream overnight, this time in the form of deep learning. That breakthrough promised to thaw the ice from the latest AI winter, and for the first time truly bring AI’s power to bear on a range of real-world problems. Researchers, futurists, and tech CEOs all began buzzing about the massive potential of the field to decipher human speech, translate documents, recognize images, predict consumer behavior, identify fraud, make lending decisions, help robots “see,” and even drive a car.

PULLING BACK THE CURTAIN ON DEEP LEARNING

So how does deep learning do this? Fundamentally, these algorithms use massive amounts of data from a specific domain to make a decision that optimizes for a desired outcome.

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

This explains why it is impossible to manually specify all the information that would be necessary for an automatic machine translation system, but also why the translation task has remained highly challenging and computationally expensive up to the present time. In this context, deep learning provides an interesting approach that seems especially suited to the challenges involved in improving human language processing.

An Overview of Deep Learning for Machine Translation

Deep learning achieved its first success in image recognition. Rather than using a group of predefined characteristics, deep learning generally operates from a very large set of examples (hundreds of thousands of images of faces, for example) to automatically extract the most relevant characteristics (called features in machine learning).

A very promising research trend, however, is to try to obtain meaningful representations of the internal model computed by the neural network, so as to better understand how the whole approach works. The deep learning approach to machine translation (or neural machine translation) has proven effective, first on short sentences in closely related languages, and more recently on long sentences as well as more diverse languages. Progress is very quick, and the deep learning approach can be considered a revolution for the domain, as the statistical approach was at the beginning of the 1990s. It is interesting to note that deep learning approaches spread very quickly. All the major players in the domain (Google, Bing, Facebook, Systran, etc.) are moving to deep learning, and 2016 saw the deployment of the first online systems based on this approach.

Deep learning, on the contrary, makes it possible, at least in theory, to learn complex characteristics fully autonomously and gradually from the data, without any prior human effort. In the case of machine translation, deep learning makes it possible to envision systems where very few elements are specified manually, the idea being to let the system infer by itself the best representation from the data. A translation system based solely on deep learning (aka “deep learning machine translation” or “neural machine translation”) thus simply consists of an “encoder” (the part of the system that analyzes the training data) and a “decoder” (the part of the system that automatically produces a translation from a given sentence, based on the data analyzed by the encoder).
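The encoder/decoder split described here can be caricatured in NumPy: the encoder folds a source token sequence into a single fixed-size vector, and the decoder unrolls that vector into target tokens. Everything below — the vocabulary size, dimensions, random untrained weights, and greedy decoding — is invented for illustration; a real neural MT system learns these weights from parallel text and uses far richer architectures.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, vocab = 16, 50
embed = rng.normal(size=(vocab, dim))        # token embeddings (untrained)
w_enc = rng.normal(size=(dim, dim)) * 0.1    # encoder recurrence weights
w_dec = rng.normal(size=(dim, dim)) * 0.1    # decoder recurrence weights
w_out = rng.normal(size=(vocab, dim)) * 0.1  # maps decoder state to vocab scores

def encode(src_tokens):
    # Fold the whole source sentence into one state vector.
    h = np.zeros(dim)
    for t in src_tokens:
        h = np.tanh(w_enc @ h + embed[t])
    return h

def decode(h, max_len=5):
    # Greedily emit target-token ids from the encoded state.
    out = []
    for _ in range(max_len):
        h = np.tanh(w_dec @ h)
        out.append(int(np.argmax(w_out @ h)))
    return out

translation = decode(encode([3, 7, 12]))
print(translation)  # a list of five target-token ids
```

Training such a system means adjusting all four weight arrays so that, across millions of sentence pairs, the decoder's output matches the reference translation — which is exactly the "very few elements specified manually" property the passage describes.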

pages: 688 words: 107,867

Python Data Analytics: With Pandas, NumPy, and Matplotlib
by Fabio Nelli
Published 27 Sep 2018

“Science leads us forward in knowledge, but only analysis makes us more aware”

This book is dedicated to all those who are constantly looking for awareness

Table of Contents

Chapter 1: An Introduction to Data Analysis Data Analysis Knowledge Domains of the Data Analyst Computer Science Mathematics and Statistics Machine Learning and Artificial Intelligence Professional Fields of Application Understanding the Nature of the Data When the Data Become Information When the Information Becomes Knowledge Types of Data The Data Analysis Process Problem Definition Data Extraction Data Preparation Data Exploration/Visualization Predictive Modeling Model Validation Deployment Quantitative and Qualitative Data Analysis Open Data Python and Data Analysis Conclusions
Chapter 2: Introduction to the Python World Python—The Programming Language Python—The Interpreter Python 2 and Python 3 Installing Python Python Distributions Using Python Writing Python Code IPython PyPI—The Python Package Index The IDEs for Python SciPy NumPy Pandas matplotlib Conclusions
Chapter 3: The NumPy Library NumPy: A Little History The NumPy Installation Ndarray: The Heart of the Library Create an Array Types of Data The dtype Option Intrinsic Creation of an Array Basic Operations Arithmetic Operators The Matrix Product Increment and Decrement Operators Universal Functions (ufunc) Aggregate Functions Indexing, Slicing, and Iterating Indexing Slicing Iterating an Array Conditions and Boolean Arrays Shape Manipulation Array Manipulation Joining Arrays Splitting Arrays General Concepts Copies or Views of Objects Vectorization Broadcasting Structured Arrays Reading and Writing Array Data on Files Loading and Saving Data in Binary Files Reading Files with Tabular Data Conclusions
Chapter 4: The pandas Library—An Introduction pandas: The Python Data Analysis Library Installation of pandas Installation from Anaconda Installation from PyPI Installation on Linux Installation from Source A Module Repository for Windows Testing Your pandas Installation Getting Started with pandas Introduction to pandas Data Structures The Series The DataFrame The Index Objects Other Functionalities on Indexes Reindexing Dropping Arithmetic and Data Alignment Operations Between Data Structures Flexible Arithmetic Methods Operations Between DataFrame and Series Function Application and Mapping Functions by Element Functions by Row or Column Statistics Functions Sorting and Ranking Correlation and Covariance “Not a Number” Data Assigning a NaN Value Filtering Out NaN Values Filling in NaN Occurrences Hierarchical Indexing and Leveling Reordering and Sorting Levels Summary Statistic by Level Conclusions
Chapter 5: pandas: Reading and Writing Data I/O API Tools CSV and Textual Files Reading Data in CSV or Text Files Using RegExp to Parse TXT Files Reading TXT Files Into Parts Writing Data in CSV Reading and Writing HTML Files Writing Data in HTML Reading Data from an HTML File Reading Data from XML Reading and Writing Data on Microsoft Excel Files JSON Data The Format HDF5 Pickle—Python Object Serialization Serialize a Python Object with cPickle Pickling with pandas Interacting with Databases Loading and Writing Data with SQLite3 Loading and Writing Data with PostgreSQL Reading and Writing Data with a NoSQL Database: MongoDB Conclusions
Chapter 6: pandas in Depth: Data Manipulation Data Preparation Merging Concatenating Combining Pivoting Removing Data Transformation Removing Duplicates Mapping Discretization and Binning Detecting and Filtering Outliers Permutation Random Sampling String Manipulation Built-in Methods for String Manipulation Regular Expressions Data Aggregation GroupBy A Practical Example Hierarchical Grouping Group Iteration Chain of Transformations Functions on Groups Advanced Data Aggregation Conclusions
Chapter 7: Data Visualization with matplotlib The matplotlib Library Installation The IPython and IPython QtConsole The matplotlib Architecture Backend Layer Artist Layer Scripting Layer (pyplot) pylab and pyplot pyplot A Simple Interactive Chart The Plotting Window Set the Properties of the Plot matplotlib and NumPy Using the kwargs Working with Multiple Figures and Axes Adding Elements to the Chart Adding Text Adding a Grid Adding a Legend Saving Your Charts Saving the Code Converting Your Session to an HTML File Saving Your Chart Directly as an Image Handling Date Values Chart Typology Line Charts Line Charts with pandas Histograms Bar Charts Horizontal Bar Charts Multiserial Bar Charts Multiseries Bar Charts with pandas Dataframe Multiseries Stacked Bar Charts Stacked Bar Charts with a pandas Dataframe Other Bar Chart Representations Pie Charts Pie Charts with a pandas Dataframe Advanced Charts Contour Plots Polar Charts The mplot3d Toolkit 3D Surfaces Scatter Plots in 3D Bar Charts in 3D Multi-Panel Plots Display Subplots Within Other Subplots Grids of Subplots Conclusions
Chapter 8: Machine Learning with scikit-learn The scikit-learn Library Machine Learning Supervised and Unsupervised Learning Training Set and Testing Set Supervised Learning with scikit-learn The Iris Flower Dataset The PCA Decomposition K-Nearest Neighbors Classifier Diabetes Dataset Linear Regression: The Least Square Regression Support Vector Machines (SVMs) Support Vector Classification (SVC) Nonlinear SVC Plotting Different SVM Classifiers Using the Iris Dataset Support Vector Regression (SVR) Conclusions
Chapter 9: Deep Learning with TensorFlow Artificial Intelligence, Machine Learning, and Deep Learning Artificial intelligence Machine Learning Is a Branch of Artificial Intelligence Deep Learning Is a Branch of Machine Learning The Relationship Between Artificial Intelligence, Machine Learning, and Deep Learning Deep Learning Neural Networks and GPUs Data Availability: Open Data Source, Internet of Things, and Big Data Python Deep Learning Python Frameworks Artificial Neural Networks How Artificial Neural Networks Are Structured Single Layer Perceptron (SLP) Multi Layer Perceptron (MLP) Correspondence Between Artificial and Biological Neural Networks TensorFlow TensorFlow: Google’s Framework TensorFlow: Data Flow Graph Start Programming with TensorFlow Installing TensorFlow Programming with the IPython QtConsole The Model and Sessions in TensorFlow Tensors Operation on Tensors Single Layer Perceptron with TensorFlow Before Starting Data To Be Analyzed The SLP Model Definition Learning Phase Test Phase and Accuracy Calculation Multi Layer Perceptron (with One Hidden Layer) with TensorFlow The MLP Model Definition Learning Phase Test Phase and Accuracy Calculation Multi Layer Perceptron (with Two Hidden Layers) with TensorFlow Test Phase and Accuracy Calculation Evaluation of Experimental Data Conclusions
Chapter 10: An Example—Meteorological Data A Hypothesis to Be Tested: The Influence of the Proximity of the Sea The System in the Study: The Adriatic Sea and the Po Valley Finding the Data Source Data Analysis on Jupyter Notebook Analysis of Processed Meteorological Data The RoseWind Calculating the Mean Distribution of the Wind Speed Conclusions
Chapter 11: Embedding the JavaScript D3 Library in the IPython Notebook The Open Data Source for Demographics The JavaScript D3 Library Drawing a Clustered Bar Chart The Choropleth Maps The Choropleth Map of the U.S.

This chapter gives an introductory overview of the world of deep learning and of the artificial neural networks on which its techniques are based. Furthermore, among the new Python frameworks for deep learning, you will use TensorFlow, which is proving to be an excellent tool for the research and development of deep learning techniques. With this library you will see how to develop several of the neural network models that form the basis of deep learning.

Artificial Intelligence, Machine Learning, and Deep Learning

For anyone dealing with the world of data analysis, these three terms are very common on the web, in texts, and in seminars related to the subject.

The Relationship Between Artificial Intelligence, Machine Learning, and Deep Learning

To sum up, in this section you have seen that machine learning and deep learning are in fact subclasses of artificial intelligence. Figure 9-1 shows a schematization of this relationship.

Figure 9-1 Schematization of the relationship between artificial intelligence, machine learning, and deep learning

Deep Learning

In this section, you will learn about some of the significant factors that led to the development of deep learning, and why so many advances have come only in the last few years.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

“The Golden Elephant” explores deep learning’s stunning potential—as well as its potential pitfalls, like perpetuating bias. So how do researchers develop, train, and use deep learning? What are its limitations? How is deep learning fueled by data? Why are Internet and finance the two most promising initial industries for AI? In what conditions does deep learning optimally work? When it does work, why does it seem to work so well? And what are the downsides and pitfalls of AI?

WHAT IS DEEP LEARNING?

Inspired by the tangled webs of neurons in our brains, deep learning constructs software layers of artificial neural networks with input and output layers.

That question about trade-offs lies at the heart of “The Golden Elephant,” which introduces the foundational AI concept of deep learning. Deep learning is a recent AI breakthrough. Among the many subfields of AI, machine learning is the field that has produced the most successful applications, and within machine learning, the biggest advance is “deep learning”—so much so that the terms “AI,” “machine learning,” and “deep learning” are sometimes used interchangeably (if imprecisely). Deep learning supercharged excitement in AI in 2016 when it powered AlphaGo’s stunning victory over a human competitor in Go, Asia’s most popular intellectual board game. After that headline-grabbing turn, deep learning became a prominent part of most commercial AI applications, and it is featured in most of the stories in AI 2041.

The figure on this page shows such a “cat recognition” deep learning neural network. During this process, deep learning is mathematically trained to maximize the value of an “objective function.” In the case of cat recognition, the objective function is the probability of correct recognitions of “cat” vs. “no cat.” Once “trained,” this deep learning network is essentially a giant mathematical equation that can be tested on images it hasn’t seen, and it will perform “inference” to determine the presence or absence of cats. The advent of deep learning pushed AI capabilities from unusable to usable for many domains.
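The training-then-inference loop described above can be sketched with a toy single-neuron classifier. This is a minimal illustration, not the book's network: the two-dimensional feature vectors below are synthetic stand-ins for real cat images, and the "objective function" is the log-likelihood of correct "cat" vs. "no cat" labels.

```python
import numpy as np

# Synthetic stand-in for "cat vs. no cat" data: 2 features per example.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)),    # "cat" examples
               rng.normal(-2.0, 1.0, (50, 2))])  # "no cat" examples
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(2)  # tunable connection strengths
b = 0.0

def predict_proba(X):
    """Probability of 'cat' for each row of X."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# "Training": gradient ascent on the log-likelihood objective.
for _ in range(200):
    p = predict_proba(X)
    w += 0.1 * X.T @ (y - p) / len(y)
    b += 0.1 * np.mean(y - p)

# "Inference" on an example the model has never seen.
unseen = np.array([[1.5, 2.5]])
print(predict_proba(unseen)[0] > 0.5)  # classified as "cat"
```

A real cat recognizer stacks many such neurons into layers, but the pattern is the same: adjust parameters to maximize the objective on training data, then apply the resulting function to new inputs.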

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

You can also find updated technical resources on our book website, appliedaibook.com.
Deep Learning
Deep learning is a subfield of machine learning that builds algorithms by using multi-layered artificial neural networks, which are mathematical structures loosely inspired by how biological neurons fire. Neural networks were invented in the 1950s, but recent advances in computational power and algorithm design—as well as the growth of big data—have enabled deep learning algorithms to approach human-level performance in tasks such as speech recognition and image classification. Deep learning, in combination with reinforcement learning, enabled Google DeepMind’s AlphaGo to defeat human world champions of Go in 2016, a feat that many experts had considered to be computationally impossible.

Much media attention has been focused on deep learning, and an increasing number of sophisticated technology companies have successfully implemented deep learning for enterprise-scale products. Google replaced previous statistical methods for machine translation with neural networks to achieve superior performance.(4) Microsoft announced in 2017 that they had achieved human parity in conversational speech recognition.(5) Promising computer vision startups like Clarifai employ deep learning to achieve state-of-the-art results in recognizing objects in images and video for Fortune 500 brands.(6) While deep learning models outperform older machine learning approaches to many problems, they are more difficult to develop because they require robust training of data sets and specialized expertise in optimization techniques.

Operationalizing and productizing models for enterprise-scale usage also requires different but equally difficult-to-acquire technical expertise. In practice, using simpler AI approaches like older, non-deep-learning machine learning techniques can produce faster and better results than fancy neural nets can. Rather than building custom deep learning solutions, many enterprises opt for Machine Learning as a Service (MLaaS) solutions from Google, Amazon, IBM, Microsoft, or leading AI startups. Deep learning also suffers from technical drawbacks. Successful models typically require a large volume of reliably-labeled data, which enterprises often lack.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

They had huge resources for managing the nontechnical aspects of government contracting. With just two total employees, Deep Learning Analytics had none. However, Deep Learning Analytics did have better technology. “We didn’t know anything about radar,” John said, but “it turned out not to be as important as some of the other things” such as “knowing how to do deep learning well” and “a really disciplined software engineering approach.” He said, “That’s one of the recurring stories of . . . deep learning since 2012, is that domain expertise isn’t always the thing that’s going to matter so much. . . . The labeled data was sufficient.” Deep Learning Analytics won a $6 million contract from DARPA for the TRACE program, beating out competitors that had better human expert knowledge on radar imaging.

Many of the datasets used to train deep neural networks are massive. ImageNet, the image database that kicked off the deep learning revolution in 2012, includes 14 million images. In order for a neural network to learn what an object looks like, such as a “cat,” “car,” or “chair,” it needs many examples to develop an internal representation of that object. For any given object, ImageNet contains roughly 500 to 1,000 images of that object to allow for a rich set of examples. Deep learning is a more data-intensive process than writing a set of rules for behavior, but deep learning can also be vastly more effective at building intelligent systems for some tasks.

While AI models are often trained at large data centers, the lower compute requirements mean that inference can increasingly be done on edge devices, such as smartphones, IoT devices, intelligent video cameras, or autonomous cars. Both training and inference are done on computer chips, and advances in computing hardware have been fundamental to the deep learning revolution. Graphics processing units (GPUs) have emerged as a key enabler for deep learning because of their ability to do parallel computation (which is valuable for neural networks) better than traditional central processing units (CPUs). A McKinsey study estimated that 97 percent of deep learning training in data centers in 2017 used GPUs. As machine learning researchers have turned to training bigger models on ever-larger datasets, they have also needed increasingly massive amounts of compute.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

Within a few years, Judea’s Bayesian networks had completely overshadowed the previous rule-based approaches to artificial intelligence. The advent of deep learning—in which computers, in effect, teach themselves to be smarter by observing tons of data—has given him pause, because this method lacks transparency. While recognizing the impressive achievements in deep learning by colleagues such as Michael I. Jordan and Geoffrey Hinton, he feels uncomfortable with this kind of opacity. He set out to understand the theoretical limitations of deep-learning systems and points out that basic barriers exist that will prevent them from achieving a human kind of intelligence, no matter what we do.

The new excitement about AI comes because AI researchers have recently produced powerful and effective versions of both of these learning methods. But there is nothing profoundly new about the methods themselves. BOTTOM-UP DEEP LEARNING In the 1980s, computer scientists devised an ingenious way to get computers to detect patterns in data: connectionist, or neural-network, architecture (the “neural” part was, and still is, metaphorical). The approach fell into the doldrums in the 1990s but has recently been revived with powerful “deep-learning” methods like Google’s DeepMind. For example, you can give a deep-learning program a bunch of Internet images labeled “cat,” others labeled “house,” and so on. The program can detect the patterns differentiating the two sets of images and use that information to label new images correctly.

For example, researchers at Google’s DeepMind used a combination of deep learning and reinforcement learning to teach a computer to play Atari video games. The computer knew nothing about how the games worked. It began by acting randomly and got information only about what the screen looked like at each moment and how well it had scored. Deep learning helped interpret the features on the screen, and reinforcement learning rewarded the system for higher scores. The computer got very good at playing several of the games, but it also completely bombed on others that were just as easy for humans to master. A similar combination of deep learning and reinforcement learning has enabled the success of DeepMind’s AlphaZero, a program that managed to beat human players at both chess and Go, equipped with only a basic knowledge of the rules of the game and some planning capacities.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

So-called adversarial attacks are not unique to AlexNet, either. Deep learning systems showing impressive performance on image recognition in fact do not understand what they are perceiving. It is therefore easy to expose the brittleness of the approach. Other experiments have drastically degraded performance by simply including background objects, easily ignored by humans, but problematic for deep learning systems. In other experiments, images that look like salt-and-pepper static on TVs—random assemblages of black and white pixels—fool deep learning systems, which might classify them as pictures of armadillos, cheetahs, or centipedes.

The system produced question-answer pairs, accumulating evidence in the processing pipeline, and scored the list of pairs using statistical techniques—all possible because of data about past games, where the outcome is known. It’s notable that Watson didn’t use deep learning—at least not the version of Watson that outplayed human champions in the televised event in 2011. Deep learning would not have helped—and here again, this is a nod to the ingenuity of the IBM team. A relatively simple machine-learning technique known as regularized logistic regression was used, even though more powerful learning algorithms were available. (Deep learning in 2011 was still relatively unknown.) More powerful learning systems would simply incur more computational training and testing expense—AI is a toolkit, in the end.

Classical AI scientists dismissed these as “shallow” or “empirical,” because statistical approaches using data didn’t use knowledge and couldn’t handle reasoning or planning very well (if at all). But with the web providing the much-needed data, the approaches started showing promise. The deep learning “revolution” began around 2006, with early work by Geoff Hinton, Yann LeCun, and Yoshua Bengio. By 2010, Google, Microsoft, and other Big Tech companies were using neural networks for major consumer applications such as voice recognition, and by 2012, Android smartphones featured neural network technology. From about this time up through 2020 (as I write this), deep learning has been the hammer causing all the problems of AI to look like a nail—problems that can be approached “from the ground up,” like playing games and recognizing voice and image data, now account for most of the research and commercial dollars in AI.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

But it is not obvious that just continuing to refine deep learning techniques will address this problem. Deep learning will be part of the solution, but a proper solution will, I think, require something much more than just a larger neural net, or more processing power, or more training data in the form of boring French novels. It will require breakthroughs at least as dramatic as deep learning itself. I suspect those breakthroughs will require explicitly represented knowledge as well as deep learning: somehow, we will have to bridge the gap between the world of explicitly represented knowledge, and the world of deep learning and neural nets.

Like the story of AI itself, the story of neural networks is a troubled one: there have been two ‘neural net winters’, and as recently as the turn of the century, many in AI regarded neural networks as a dead or dying field. But neural nets ultimately triumphed, and the new idea driving their resurgence is a technique called deep learning. Deep learning is the core technology of DeepMind. I will tell you the DeepMind story, and how the systems that DeepMind built attracted global adulation. But while deep learning is a powerful and important technique, it isn’t the end of the story for AI, so, just as we did with other AI technologies, we’ll discuss its limitations in detail too. Machine Learning, Briefly The goal of machine learning is to have programs that can compute a desired output from a given input, without being given an explicit recipe for how to do this.

But a resurgence did indeed begin, around the year 2006, and it led to the biggest and most highly publicized expansion ever in the history of AI. The big idea that drove the third wave of neural net research went by the name of deep learning.6 I would love to tell you that there was a single key idea that characterizes deep learning, but in truth the term refers to a collection of related ideas. Deep learning means deep in at least three different senses. Of these, perhaps the most important, as the name suggests, is simply the idea of having more layers. Each layer can process a problem at a different level of abstraction—layers close to the input layer handle low-level concepts in the data (such as the edges in a picture), and as we move deeper into the network, we find more abstract concepts being handled.
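The "more layers" idea amounts to composing simple transformations into one complex one. A minimal forward pass through a stack of layers might look like the following sketch; the layer sizes and random weights here are arbitrary illustrations (in a real network they would be learned by training):

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    """A simple nonlinearity applied after each linear layer."""
    return np.maximum(0.0, x)

# Input of size 8, three hidden layers, output of size 2 (e.g. class scores).
layer_sizes = [8, 16, 16, 8, 2]
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Each layer: a linear map followed by a simple nonlinearity.
    for W in weights[:-1]:
        x = relu(x @ W)
    return x @ weights[-1]  # final layer left linear

x = rng.normal(size=(1, 8))   # one input vector
scores = forward(x)
print(scores.shape)
```

Early layers in such a stack operate close to the raw input; each further layer works on the previous layer's output, which is how the "levels of abstraction" described above arise.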

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

Having a computer that knows what a cat is may not sound like a particularly useful achievement, but the ability to use deep learning for computer vision has a host of real-world uses. One startup called Dextro is using deep learning to create better tools for online video searches. Instead of relying on keyword tags, Dextro’s neural nets scan through live videos, analysing both audio and image. Ask it about David Cameron, for example, and it will bring up not just Conservative Party videos, but also video in which the UK Prime Minister is only mentioned in passing. Facebook, meanwhile, uses deep learning to automatically tag images. In June 2014, the social network published a paper describing what it refers to as its ‘DeepFace’ facial recognition technology.

Thanks to deep learning, Facebook’s algorithms have proven almost as accurate as the human brain when it comes to looking at two photos and saying whether they show the same person, regardless of whether different lighting or camera angles are used. Facebook is also using deep learning to create technology able to describe images to blind users – such as verbalising the fact that an image shows a particular friend riding a bicycle through the English countryside on a summer’s day. Other projects combine deep learning with robotics. One group of researchers from the University of Maryland has taught a robot how to cook a simple meal by simply showing it ‘how-to’ cooking videos available on YouTube.

Nobody, not even the smartest neural network, can be expected to learn what something is if they are never explicitly told. In fact, what Hinton discovered was that unsupervised learning could be used to train up layers of features, one layer at a time. This was the catalyst in the field of ‘deep learning’, currently the hottest area in AI. You can think of a deep learning network a bit like a factory line. After the raw materials are input, they are passed down the conveyor belt, with each subsequent stop or layer extracting a different set of high-level features. To continue the example of an image recognition network, the first layer may be used to analyse pixel brightness.

pages: 1,082 words: 87,792

Python for Algorithmic Trading: From Idea to Cloud Deployment
by Yves Hilpisch
Published 8 Dec 2020

For example, assume that a stock trades 10 USD under its 200-day SMA level of 100. It is then expected that the stock price will return to its SMA level sometime soon.
Machine and Deep Learning
With machine and deep learning algorithms, one generally takes a more black box approach to predicting market movements. For simplicity and reproducibility, the examples in this book mainly rely on historical return observations as features to train machine and deep learning algorithms to predict stock market movements. This book does not introduce algorithmic trading in a systematic fashion. Since the focus lies on applying Python in this fascinating field, readers not familiar with algorithmic trading should consult dedicated resources on the topic, some of which are cited in this chapter and the chapters that follow.
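The SMA mean-reversion idea described above is straightforward to express with pandas. This is a hedged sketch: the price series below is synthetic, and the 10 USD threshold is taken from the example rather than from any tested strategy.

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices (a random walk standing in for market data).
rng = np.random.default_rng(1)
prices = pd.Series(100 + rng.normal(0, 1, 300).cumsum())

sma200 = prices.rolling(200).mean()   # 200-day simple moving average
distance = prices - sma200            # how far the price sits from its SMA

# Mean-reversion signal: go long when the price is well below the SMA,
# expecting it to revert toward the SMA level; stay flat otherwise.
threshold = 10
signal = (distance < -threshold).astype(int)  # 1 = long, 0 = flat
print(signal.tail())
```

The first 199 entries of `sma200` are NaN because the rolling window needs 200 observations before it can produce a value.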

With this reasoning, the prediction problem basically boils down to a classification problem of deciding whether there will be an upwards or downwards movement. Different machine learning algorithms have been developed to attack such classification problems. This chapter introduces logistic regression, as a typical baseline algorithm, for classification.
Deep learning-based strategies
Deep learning has been popularized by such technological giants as Facebook. Similar to machine learning algorithms, deep learning algorithms based on neural networks allow one to attack classification problems faced in financial market prediction. The chapter is organized as follows. “Using Linear Regression for Market Movement Prediction” introduces linear regression as a technique to predict index levels and the direction of price movements.
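A baseline of the kind described, logistic regression on lagged returns, can be sketched with scikit-learn. The price data here is synthetic and the choice of five lags is an illustrative assumption, not a recommendation from the book.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic price series; in practice this would be historical market data.
rng = np.random.default_rng(2)
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum())
returns = np.log(prices / prices.shift(1))

# Features: the five most recent daily log returns preceding each day.
lags = 5
features = pd.concat([returns.shift(i) for i in range(1, lags + 1)],
                     axis=1).dropna()
# Label: the direction (+1 up, -1 down) of that day's return.
y = np.sign(returns.loc[features.index]).replace(0, 1).astype(int)

model = LogisticRegression()
model.fit(features, y)
pred = model.predict(features)     # in-sample up/down predictions
print((pred == y).mean())          # in-sample hit ratio
```

Note that this evaluates in-sample only; as the surrounding text warns, an out-of-sample test with transaction costs typically paints a much less flattering picture.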

For example, testing the very same strategy instead of in-sample on an out-of-sample data set and adding transaction costs—as two ways of getting to a more realistic picture—often shows that the performance of the considered strategy “suddenly” trails the base instrument performance-wise or turns to a net loss.
Using Deep Learning for Market Movement Prediction
Right from its open sourcing and publication by Google, the deep learning library TensorFlow has attracted much interest and widespread application. This section applies TensorFlow in the same way that the previous section applied scikit-learn to the prediction of stock market movements modeled as a classification problem. However, TensorFlow is not used directly; it is rather used via the equally popular Keras deep learning package. Keras can be thought of as providing a higher level abstraction to the TensorFlow package with an easier to understand and use API.

Succeeding With AI: How to Make AI Work for Your Business
by Veljko Krunic
Published 29 Mar 2020

During discussions, the team has mentioned that this problem could be solved using either an SVM, a decision tree, logistic regression, or a deep learning-based classification. Should you use deep learning? After all, it’s an exceedingly popular technology, has a substantial mindshare, and could solve the problem. Or should you use one of the other suggested options?
Answer to question 5:
- You can use a deep learning-based classifier, but I typically wouldn’t try it as my first (or even second) choice. Unless your survey is a monster with a thousand questions, it’s unclear that you’ll be able to train a large deep learning network at all.
- I’m not persuaded that for the typical survey of only a few questions, more complicated methods are going to produce better results.

Question 6: You answered question 5 using an algorithm of your choice. Suppose the algorithm you chose didn’t provide a good enough prediction of a customer returning the product. Should you use a better ML algorithm? Is it now time to use the latest and greatest from the field of deep learning?
Summary
- Every AI project uses some form of the ML pipeline.

Harris, Murphy, and Vaisman’s book [66] provides a good summary of the state of data science before the advancement of deep learning.
Data scientist—A practitioner of the field of data science. Many sources (including this book) classify AI practitioners as data scientists.
Database administrator (DBA)—A professional responsible for the maintenance of a database. Most commonly, a DBA would be responsible for maintaining an RDBMS-based database.
Deep learning—A subfield of AI that uses artificial neural networks arranged in a significant number of layers. In the last few years, deep learning algorithms have been successful in a large number of highly visible applications, including image processing and speech and audio recognition.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

The current champion learning algorithm for machine translation is a form of so-called deep learning, and it produces a rule in the form of an artificial neural network with hundreds of layers and millions of parameters. Other deep learning algorithms have turned out to be very good at classifying the objects in images and recognizing the words in a speech signal. Machine translation, speech recognition, and visual object recognition are three of the most important subfields in AI, which is why there has been so much excitement about the prospects for deep learning. One can argue almost endlessly about whether deep learning will lead directly to human-level AI.

News article on Geoff Hinton having second thoughts about deep networks: Steve LeVine, “Artificial intelligence pioneer says we need to start over,” Axios, September 15, 2017. 9. A catalog of shortcomings of deep learning: Gary Marcus, “Deep learning: A critical appraisal,” arXiv:1801.00631 (2018). 10. A popular textbook on deep learning, with a frank assessment of its weaknesses: François Chollet, Deep Learning with Python (Manning Publications, 2017). 11. An explanation of explanation-based learning: Thomas Dietterich, “Learning at the knowledge level,” Machine Learning 1 (1986): 287–315. 12. A superficially quite different explanation of explanation-based learning: John Laird, Paul Rosenbloom, and Allen Newell, “Chunking in Soar: The anatomy of a general learning mechanism,” Machine Learning 1 (1986): 11–46.

Computers are also made of circuits, both in their memories and in their processing units; but those circuits have to be arranged in certain ways, and layers of software have to be added, before the computer can support the operation of high-level programming languages and logical reasoning systems. At present, however, there is no sign that deep learning systems can develop such capabilities by themselves—nor does it make scientific sense to require them to do so. There are further reasons to think that deep learning may reach a plateau well short of general intelligence, but it’s not my purpose here to diagnose all the problems: others, both inside8 and outside9 the deep learning community, have noted many of them. The point is that simply creating larger and deeper networks and larger data sets and bigger machines is not enough to create human-level AI.

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

You could now create neural nets with many layers, or even dozens: “deep learning,” as it’s called, because of how many layers are stacked up. By 2012, the field had a seismic breakthrough. Up at the University of Toronto, the British computer scientist Geoff Hinton had been beavering away for two decades on improving neural networks. That year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images. That year, Hinton’s deep-learning neural net got only 15.3 percent of the images wrong.

At first, the result surprised them, though on reflection it made sense: There are a lot of cats on YouTube, so any self-learning algorithm told to pick out salient features that recur over and over again might, in essence, discover humanity’s online obsession with felines. Still, it was a spookily humanlike bit of reasoning. The Terminator was coming to life, and it could grasp the concept of cats! Google soon began throwing enormous resources at deep learning, developing its abilities and integrating it into as many products as possible. They trained deep-learning nets on language pairings—showing it, say, all the Canadian parliamentary proceedings that were translated into both English and French, or their own collections of crowdsourced translations. When they were done, Google Translate became, in a single night, remarkably better—so much improved that Japanese scholars were marveling at the machine’s deft ability to translate complex literary passages between their language and English.

A few short years later, deep learning had swept the world of software. Companies everywhere were rushing to incorporate it into their services. Ng was snapped up by Baidu, the Chinese search giant, as it frantically sought to catch up to Google’s AI wave. Facebook engineers had long been using many different styles of machine learning to help recognize faces in photos, filter stories in the News Feed, and predict whether users would click on an ad; it set up an experimental AI research lab, and soon Facebook was producing a deep-learning model that could recognize faces with 97.35 percent accuracy, 27 percent better than the state of the art (“closely approaching human-level performance,” as they noted.)

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

It seems that every week there is news of a new AI application approaching or exceeding human performance, often accompanied by speculation of either accelerated success or a new AI winter. Deep learning relies heavily on powerful hardware. Whereas a standard computer CPU can do 10^9 or 10^10 operations per second, a deep learning algorithm running on specialized hardware (e.g., GPU, TPU, or FPGA) might consume between 10^14 and 10^17 operations per second, mostly in the form of highly parallelized matrix and vector operations. Of course, deep learning also depends on the availability of large amounts of training data, and on a few algorithmic tricks (see Chapter 22). 1.4 The State of the Art Stanford University’s One Hundred Year Study on AI (also known as AI100) convenes panels of experts to provide reports on the state of the art in AI.

See Bernardo and Smith (1994). 5. It is better in practice to choose them randomly, to avoid local maxima due to symmetry.
CHAPTER 22: DEEP LEARNING
In which gradient descent learns multistep programs, with significant implications for the major subfields of artificial intelligence.
Deep learning is a broad family of techniques for machine learning in which hypotheses take the form of complex algebraic circuits with tunable connection strengths. The word “deep” refers to the fact that the circuits are typically organized into many layers, which means that computation paths from inputs to outputs have many steps. Deep learning is currently the most widely used approach for applications such as visual object recognition, machine translation, speech recognition, speech synthesis, and image synthesis; it also plays a significant role in reinforcement learning applications (see Chapter 23).

Deep learning has its origins in early work that tried to model networks of neurons in the brain (McCulloch and Pitts, 1943) with computational circuits. For this reason, the networks trained by deep learning methods are often called neural networks, even though the resemblance to real neural cells and structures is superficial. While the true reasons for the success of deep learning have yet to be fully elucidated, it has self-evident advantages over some of the methods covered in Chapter 19—particularly for high-dimensional data such as images.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

They can even create wholly convincing photographs of people who’ve never actually existed. Artificial neuroscience The third factor in the deep learning revival was more arcane, but critical. This was the increasing technical sophistication of the neural networks that underpinned connectionism. One important development was the discovery of ‘backprop’, or backward propagation. This was a key bit of maths that allowed the artificial neurons in the connectionist AI to learn effectively. With multiple layers in the modern ‘deep learning network’, and with many more neurons and connections between them, working out the optimum connections between them had been fiendishly difficult.
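Backward propagation is just the chain rule applied layer by layer: the error at the output is pushed backwards to compute how each connection strength should change. A hand-rolled sketch for a tiny two-layer network, with the analytic gradient checked against a numerical one, might look like this (the sizes and data are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # one input vector
target = 1.0                     # desired output
W1 = rng.normal(size=(3, 4))     # layer 1 connection strengths
W2 = rng.normal(size=(4, 1))     # layer 2 connection strengths

def forward(W1, W2):
    h = np.tanh(x @ W1)          # hidden layer activations
    out = (h @ W2)[0]            # network output (scalar)
    return 0.5 * (out - target) ** 2, h, out

# Backward pass: chain rule from the loss back to each weight matrix.
loss, h, out = forward(W1, W2)
d_out = out - target                      # dLoss/d_out
grad_W2 = np.outer(h, d_out)              # dLoss/dW2
d_h = (W2[:, 0] * d_out) * (1 - h ** 2)   # back through tanh
grad_W1 = np.outer(x, d_h)                # dLoss/dW1

# Sanity check: compare one entry against a numerical gradient.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
num = (forward(W1p, W2)[0] - loss) / eps
print(abs(num - grad_W1[0, 0]) < 1e-4)
```

A training step would then subtract a small multiple of each gradient from the corresponding weight matrix; repeating this over many examples is what "learning effectively" means in the passage above.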

In 2017, the Pentagon stood up an ‘algorithmic warfare cross functional team’, known as Project Maven. The team would consolidate ‘all initiatives that develop, employ, or field artificial intelligence, automation, machine learning, deep learning, and computer vision algorithms’.12 It was a small team, initially, but would grow rapidly. And one of its main contractors on image recognition? Google, of course. The stage was set for the arrival of deep warbots. Hype or hope? The deep learning revolution has produced a new wave of AI hype, only some of which is justified. In rapid succession, connectionist AI delivered results that would have amazed earlier students of symbolic logic.

While autonomous weapons and AI systems have been around for more than half a century, the recent surge in deep learning AI creates the possibility of swarming behaviours, and this more than any other feature, will likely have an impact on warbot tactics. The swarm attacks! Swarms don’t require a particularly sophisticated intelligence. At least not sophisticated in the sense we often use the term—carefully weighing goals and ways before deciding. In nature, swarming is often an instinctive behaviour. In robotics it’s the sort of skilful control challenge that deep learning AI excels at. Above the deserts of the US Navy’s vast China Lake research facility, Perdix and the Defense Department recently demonstrated the state of the art.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

They are particularly useful in speech recognition and handwriting recognition systems.
Deep learning
Deep learning is a subset of machine learning. Its algorithms use several layers of processing, each taking data from previous layers and passing an output up to the next layer. The nature of the output may vary according to the nature of the input, which is not necessarily binary, just on or off, but can be weighted. The number of layers can vary too, with anything above ten layers seen as very deep learning. Artificial neural nets (ANN) are an important type of deep learning system – indeed some people argue that deep learning is simply a re-branding of neural networks.

Early hopes for the quick development of thinking machines were dashed, however, and neural nets fell into disuse until the late 1980s, when they experienced a renaissance along with what came to be known as deep learning thanks to pioneers Yann LeCun (now at Facebook), Geoff Hinton (now at Google) and Yoshua Bengio, a professor at the University of Montreal. Yann LeCun describes deep learning as follows. “A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on.

The speculation that a system containing enough of the types of operations involved in machine learning might generate a conscious mind intrigues some neuroscientists, and strikes others as wildly implausible, or as something that is many years away. Gary Marcus, a psychology professor at New York University, says “deep learning is only part of the larger challenge of building intelligent machines. Such techniques [are] still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson, . . . use techniques like deep learning as just one element in a very complicated ensemble of techniques . . .” (33) Andrew Ng, formerly head of the Google Brain project and now in charge of Baidu’s AI activities, says that current machine learning techniques are like a “cartoon version” of the human brain.

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

The way that some game-playing AIs become superhuman in their field is by playing millions of games against versions of themselves and learning from the outcomes.) In deep learning, the algorithms operate in several layers, each layer processing data from previous ones and passing the output up to the next layer. The output is not necessarily binary, just on or off: it can be weighted. The number of layers can vary too, with anything above ten layers seen as very deep learning – although in December 2015 a Microsoft team won the ImageNet competition with a system which employed a massive 152 layers.[lxvi] Deep learning, and especially artificial neural nets (ANNs), are in many ways a return to an older approach to AI which was explored in the 1960s but abandoned because it proved ineffective.

“Then you teach it Mandarin: it learns Mandarin, but it also becomes better at English, and quite frankly none of us know exactly why.”[xciii] In December 2015, Baidu announced that its speech recognition system Deep Speech 2 performed better than humans with short phrases out of context.[xciv] It uses deep learning techniques to recognise Mandarin. Learning and innovating It can no longer be said that machines do not learn, or that they cannot invent. In December 2013, DeepMind demonstrated an AI system which used a technique called deep reinforcement learning to teach itself to play old-style Atari video games like Breakout and Pong.[xcv] These are games which previous AI systems found hard to play because they involve hand-to-eye co-ordination.

doi=10.1257/jep.29.3.3 [lvi] https://reason.com/archives/2015/03/03/how-to-survive-a-robot-uprisin [lvii] http://www.politico.com/magazine/story/2013/11/the-robots-are-here-098995 [lviii] http://www.forbes.com/sites/danschawbel/2015/08/04/geoff-colvin-why-humans-will-triumph-over-machines/2/ [lix] http://www.eastoftheweb.com/short-stories/UBooks/BoyCri.shtml [lx] German academic Marcus Hutter, and Shane Legg, co-founder of DeepMind [lxi] http://www.savethechimps.org/about-us/chimp-facts/ [lxii] The Shape of Automation for Men and Management by Herbert Simon, 1965 [lxiii] Computation: Finite and Infinite Machines by Marvin Minsky, 1967 [lxiv] http://www.wired.com/2016/01/microsoft-neural-net-shows-deep-learning-can-get-way-deeper/ [lxv] http://www.etymonline.com/index.php?term=algorithm [lxvi] http://www.wired.com/2016/01/microsoft-neural-net-shows-deep-learning-can-get-way-deeper/ [lxvii] Moravec wrote about this phenomenon in his 1988 book “Mind Children”. A possible explanation is that the sensory motor skills and spatial awareness that we develop as children are the product of millions of years of evolution.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

By 2009, Hinton’s lab had applied deep neural nets to speech recognition, and a chance meeting with a Microsoft researcher named Li Deng meant that the technology could be piloted in a meaningful way. Deng, a Chinese deep-learning specialist, was a pioneer in speech recognition using large-scale deep learning. By 2010, the technique was being tested at Google. Just two years later, deep neural nets were being used in commercial products. If you used Google Voice and its transcription services, that was deep learning, and the technique became the basis for all the digital assistants we use today. Siri, Google, and Amazon’s Alexa are all powered by deep learning. The AI community of interdisciplinary researchers had grown significantly since the Dartmouth summer.

We can’t evolve significantly on our own, and the existing evolutionary timeframe doesn’t suit our current technological aspirations. The promise of deep learning was an acceleration of the evolution of intelligence itself, which would only temporarily involve humans. A deep neural net would be given a basic set of parameters about the data by a person, and then the system would go out and learn on its own by recognizing patterns using many layers of processing. For researchers, the attraction of deep learning is that by design, machines make decisions unpredictably. Thinking in ways we humans have never imagined—or been able to do ourselves—is vitally important when trying to solve big problems for which there haven’t ever been clear solutions.

With all those simulated neurons and layers, exactly what happened and in which order can’t be easily reverse-engineered. One team of Google researchers did try to develop a new technique to make AI more transparent. In essence, the researchers ran a deep-learning image recognition algorithm in reverse to observe how the system recognized certain things such as trees, snails, and pigs. The project, called DeepDream, used a network created by MIT’s Computer Science and AI Lab and ran Google’s deep-learning algorithm in reverse. Instead of training it to recognize objects using the layer-by-layer approach—to learn that a rose is a rose, and a daffodil is a daffodil—it was trained to warp the images and generate objects that weren’t there.

pages: 301 words: 85,126

AIQ: How People and Machines Are Smarter Together
by Nick Polson and James Scott
Published 14 May 2018

See also Lee Bell, “Nvidia to Train 100,000 Developers in ‘Deep Learning’ AI to Bolster Healthcare Research,” Forbes.com, May 11, 2017, https://www.forbes.com/sites/leebelltech/2017/05/11/nvidia-to-train-100000-developers-in-deep-learning-ai-to-bolster-health-care-research/. 50.  See, e.g., Tom Simonite, “The Recipe for the Perfect Robot Surgeon,” MIT Technology Review, October 14, 2016, https://www.technologyreview.com/s/602595/the-recipe-for-the-perfect-robot-surgeon/. 51.  

But then Koike realized that he could use a piece of open-source AI software from Google, called TensorFlow, to accomplish the same task, by coding up a “deep-learning” algorithm that could classify a cucumber based on a photograph. Koike had never used AI or TensorFlow before, but with all the free resources out there, he didn’t find it hard to teach himself how. When a video of his AI-powered sorting machine hit YouTube, Koike became an international deep-learning/cucumber celebrity. It wasn’t merely that he had given people a feel-good story, saving his mother from hours of drudgery. He’d also sent an inspiring message to students and coders across the world: that if AI can solve problems in cucumber farming, it can solve problems just about anywhere.

We’re omitting a lot of details here that are essential for making this strategy succeed, but they’re all just minutiae, the kind of thing you learn if you study AI in graduate school. If you just think “trial and error,” you’re 90% of the way there. Factor 4: Deep Learning In addition to the richness of our models, the size of our data sets, and the speed of our computers, there’s a fourth major way in which prediction rules have improved dramatically: people have learned how to extract useful information from vastly more complicated inputs. If you’ve heard the term “deep learning” and wondered what it means, we’re about to explain. We said at the beginning of the chapter that computers are agnostic about the type of input you give them.
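Polson and Scott's "trial and error" intuition can be made concrete with a toy sketch (ours, not the book's): start with a random prediction rule, perturb it randomly, and keep the perturbation only when it reduces the error on the data.

```python
import random

random.seed(0)
# Toy data: y is exactly 2*x + 1; the prediction rule is y = a*x + b.
data = [(x, 2 * x + 1) for x in range(10)]

def error(a, b):
    """Sum of squared prediction errors over the data set."""
    return sum((a * x + b - y) ** 2 for x, y in data)

a, b = 0.0, 0.0
for _ in range(50000):                  # trial...
    a2 = a + random.uniform(-0.1, 0.1)
    b2 = b + random.uniform(-0.1, 0.1)
    if error(a2, b2) < error(a, b):     # ...and keep only what reduces error
        a, b = a2, b2
print(a, b)                             # should drift toward a near 2 and b near 1
```

Real deep learning replaces this blind random search with gradient-based updates, but the keep-what-reduces-error loop is the 90% of the idea the authors point to.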

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

The vertical axis measures the error rate, so lower is better. In 2010, the best machine predictions made mistakes in 28 percent of images. In 2012, the contestants used deep learning for the first time and the error rate plunged to 16 percent. As Princeton professor and computer scientist Olga Russakovsky notes, “2012 was really the year when there was a massive breakthrough in accuracy, but it was also a proof of concept for deep learning models, which had been around for decades.”8 Rapid improvements in the algorithms continued, and a team beat the human benchmark in the competition for the first time in 2015.

“Mastercard Rolls Out Artificial Intelligence across Its Global Network,” Mastercard press release, November 30, 2016, https://newsroom.mastercard.com/press-releases/mastercard-rolls-out-artificial-intelligence-across-its-global-network/. 3. Adam Geitgey, “Machine Learning Is Fun, Part 5: Language Translation with Deep Learning and the Magic of Sequences,” Medium, August 21, 2016, https://medium.com/@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa. 4. Yiting Sun, “Why 500 Million People in China Are Talking to This AI,”MIT Technology Review, September 14, 2017, https://www.technologyreview.com/s/608841/why-500-million-people-in-china-are-talking-to-this-ai/. 5.

Tests That Were Found to Discriminate,” New York Times, July 23, 2009, https://cityroom.blogs.nytimes.com/2009/07/23/the-fire-dept-tests-that-were-found-to-discriminate/?mcubz=0&_r=0; US v. City of New York (FDNY), https://www.justice.gov/archives/crt-fdny/overview. 6. Paul Voosen, “How AI Detectives Are Cracking Open the Black Box of Deep Learning,” Science, July 6, 2017, http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning. 7. T. Blake, C. Nosko, and S. Tadelis, “Consumer Heterogeneity and Paid Search Effectiveness: A Large-Scale Field Experiment,” Econometrica 83 (2015): 155–174. 8. Hossein Hosseini, Baicen Xiao, and Radha Poovendran, “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos” (paper presented at CVPR Workshops, March 31, 2017), https://arxiv.org/pdf/1703.09793.pdf; see also “Artificial Intelligence Used by Google to Scan Videos Could Easily Be Tricked by a Picture of Noodles,” Quartz, April 4, 2017, https://qz.com/948870/the-ai-used-by-google-to-scan-videos-could-easily-be-tricked-by-a-picture-of-noodles/. 9.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

For more recent work, see Graves, “Practical Variational Inference for Neural Networks”; Blundell et al., “Weight Uncertainty in Neural Networks”; and Hernández-Lobato and Adams, “Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks.” For a more detailed history of these ideas, see Gal, “Uncertainty in Deep Learning.” For an overview of probabilistic methods in machine learning more generally, see Ghahramani, “Probabilistic Machine Learning and Artificial Intelligence.” 17. Yarin Gal, personal interview, July 11, 2019. 18. Yarin Gal, “Modern Deep Learning Through Bayesian Eyes” (lecture), Microsoft Research, December 11, 2015, https://www.microsoft.com/en-us/research/video/modern-deep-learning-through-bayesian-eyes/. 19. For a look at using dropout-ensemble uncertainty to detect adversarial examples, see Smith and Gal, “Understanding Measures of Uncertainty for Adversarial Example Detection.” 20.

But on this matter, too, Bellemare would soon come around. Simply plugging deep learning into a classic RL algorithm and running it on seven of the Atari games, Mnih was able to beat every previous RL benchmark in six of them. Not only that: in three of the games, their program appeared to be as good as a human player. They submitted a workshop paper in late 2013, marking their progress.6 “It was just sort of a proof-of-concept paper,” says Bellemare, “that convolutional nets could do this.” “Really,” he says, “it was bringing the deep-learning part to solve what reinforcement-learning researchers hadn’t been able to do in ages, which is to generate these features on the fly.

In better coming to understand our own motivations and drives, we then, in turn, have a chance for complementary and reciprocal insights about how to build an artificial intelligence as flexible, resilient, and intellectually omnivorous as our own. Deepak Pathak looks at the success of deep learning and sees one glaring weakness: each system—be it for machine translation, or object recognition, or even game playing—is purpose-built. Training a huge neural network on a heap of manually labeled images was, as we have seen, the paradigm in which deep learning first truly showed its promise. Explicitly drilling a system to categorize images made a system that could categorize images. Fair enough, he says. “But the problem is, these artificial intelligence systems are not actually intelligent.

pages: 362 words: 97,288

Ghost Road: Beyond the Driverless Car
by Anthony M. Townsend
Published 15 Jun 2020

“Contemporary neural networks do well on challenges that remain close to their core training data,” writes New York University computer scientist Gary Marcus in an exhaustive 2018 critique of deep learning, “but start to break down on cases further out in the periphery.” Nowhere are the limits of deep learning becoming clearer than in the development of self-driving vehicles. The most catastrophic failures involving automated driving so far have all occurred around so-called edge cases, those unexpected events where data to train deep-learning models was insufficient or simply nonexistent—a pedestrian walking a bicycle across a darkened street midblock (Tempe, Arizona, March 18, 2018); a white truck trailer occluded against a brightly lit sky (Williston, Florida, May 7, 2016); an unusual set of road-surface markings at a highway off-ramp (Mountain View, California, March 23, 2018).

Chips like Pegasus are shifting the center of effort inside your car, from powertrain to CPU, and changing the kinds of “fuel” needed. The motor in your old car harnessed the power of internal combustion. It sucked in gasoline and transformed it into mechanical power to move you down the road. This new motor in your AV is powered by deep learning. It ingests gigabytes of data and spits out a stream of insights to guide you on your way. Deep learning sounds more mysterious than it is. The artificial neural networks that make it work were first invented more than 70 years ago. These algorithms, loosely based on mammalian brains, were the basis of a promising early branch of AI research. But after several high-profile failures, the mainstream research community largely abandoned the approach.

Even more remarkable was their seemingly intuitive power (learning). You didn’t have to program a deep learning model with descriptions of exactly what to look for to, say, identify photographs of cats. All you had to do was wind the mechanism up with a million pictures of cats and it could deduce the fundamental indicators of cat-ness all by itself. This process, called “training,” works by slowly calibrating the nodes within and between the stack’s various layers, strengthening the connections that contribute to accurate results and pruning those that don’t. Deep learning does have at least one enormous drawback, however. It is a ravenous consumer of computer power.
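Townsend's description of training (strengthening connections that contribute to accurate results, weakening those that don't) is, loosely, what the classic perceptron update rule does. A hypothetical miniature, not from the book, that learns a simple AND function:

```python
# Each training example nudges the weights: up when the unit underpredicts,
# down when it overpredicts, untouched when it is already right.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(10):                  # a few passes over the data
    for x, target in examples:
        err = target - predict(x)    # +1: strengthen, -1: weaken, 0: leave alone
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        bias += 0.1 * err

print([predict(x) for x, _ in examples])  # learns AND: [0, 0, 0, 1]
```

Deep networks apply the same strengthen-or-weaken idea across millions of connections at once, via backpropagation rather than this single-unit rule.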

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

Indeed, given the density of the historical information found in each provenance event, this feature enables us to extract more detailed knowledge from individual event components. It also represents a necessary precondition for complex querying and large-scale analysis. A deep learning model can successfully perform the task of span categorization. As defined above, this type of AI model learns from output examples annotated by experts. When training a deep learning model for span categorization, it is then necessary for an expert to first annotate provenance events by identifying the different portions of text and assigning appropriate categories to them. To address this challenge, we have developed a provenance-specific annotation scheme, that is, a set of categories with which to annotate provenance texts for span categorization.
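Span categorization of a provenance event can be pictured as labeled character spans over the text. The snippet below is an illustrative mock-up: the category names and the example sentence are ours, not the project's actual annotation scheme.

```python
# A provenance event, with expert annotations as (start, end, label) character spans.
text = "Sold by Galerie X, Paris, to the collector Jane Doe, 1938."
spans = [
    (0, 4, "TRANSFER_METHOD"),   # "Sold"
    (8, 17, "ACTOR"),            # "Galerie X"
    (19, 24, "LOCATION"),        # "Paris"
    (43, 51, "ACTOR"),           # "Jane Doe"
    (53, 57, "DATE"),            # "1938"
]

# Such annotations are what a span-categorization model is trained to reproduce;
# once extracted, they also support structured queries, e.g. every actor in the event:
actors = [text[start:end] for start, end, label in spans if label == "ACTOR"]
print(actors)  # ['Galerie X', 'Jane Doe']
```

Large-scale analysis then becomes a matter of querying these structured spans across the whole corpus rather than re-reading free text.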

Part 1: Reflections The Role of Culture in the Intelligence of AI Mercedes Bunz Artificial Intelligence (AI) clearly suffers from its name, which easily leads to misunderstandings. The name suggests that its intelligence is like human intelligence, only ‘artificial’; it would have been far better and more precise to call it ‘machine intelligence’. The term ‘neural networks’, denoting the technology that partly underpins the current boom and facilitated deep learning approaches, adds to this confusion: it suggests that machine learning systems are built on ‘neurons’ that operate just like those in the brain. If you ask experts in the field, however, they will quickly explain that biological neurons function very differently.

Available online: https://www.radicalphilosophy.com/article/dismantling-the-apparatus-o f-domination. Azar, Mitra/Cox, Geoff/Impett, Leonardo (2021). Introduction: Ways of Machine Seeing. AI & SOCIETY 36 (4), 1093–104. https://doi.org/10.1007/s00146-020-01 124-6. Buckner, Cameron (2020). Understanding Adversarial Examples Requires a Theory of Artefacts for Deep Learning. Nature Machine Intelligence 2 (12), 731–36. http s://doi.org/10.1038/s42256-020-00266-y. Bunz, Mercedes (2022). How Not to Be Governed Like That by Our Digital Technologies. In: Kathrin Thiele/Birgit Mara Kaiser/Timothy O’Leary (Eds.). The Ends of Critique. Methods, Institutions, Politics. Lanham, Rowman & Littlefield, 179–200.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

[76] Around 2010 it finally reached a threshold where it became able to unlock the hidden power of a connectionist approach modeled on the many-layered hierarchical computation that takes place in the neocortex: deep learning. It is deep learning that has enabled the startling and seemingly sudden breakthroughs that the AI field has achieved since The Singularity Is Near was published. The first of these breakthroughs to signal the radically transformative potential of deep learning was AI’s mastery of the board game Go. Because Go has a vastly larger number of possible moves than chess, and it is harder to judge whether any given move is a good one, the AI approaches that had worked to beat human chess grandmasters were making almost no progress on Go.

BACK TO NOTE REFERENCE 23 For further information on the black box problem and AI transparency, see Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai; “AI Detectives Are Cracking Open the Black Box of Deep Learning,” Science Magazine, YouTube video, July 6, 2017, https://www.youtube.com/watch?v=gB_-LabED68; Paul Voosen, “How AI Detectives Are Cracking Open the Black Box of Deep Learning,” Science, July 6, 2017, https://doi.org/10.1126/science.aan7059; Harry Shum, “Explaining AI,” a16z, YouTube video, January 16, 2020, https://www.youtube.com/watch?v=rI_L95qnVkM; Future of Life Institute, “Neel Nanda on What Is Going On Inside Neural Networks,” YouTube video, February 9, 2023, https://www.youtube.com/watch?

See “AlphaGo,” Google DeepMind, accessed January 30, 2023, https://deepmind.com/research/case-studies/alphago-the-story-so-far; “AlphaGo Zero: Starting from Scratch,” Google DeepMind, October 18, 2017, https://deepmind.com/blog/article/alphago-zero-starting-scratch; Tom Simonite, “This More Powerful Version of AlphaGo Learns On Its Own,” Wired, October 18, 2017, https://www.wired.com/story/this-more-powerful-version-of-alphago-learns-on-its-own; David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (January 27, 2016): 484–89, https://doi.org/10.1038/nature16961; Christof Koch, “How the Computer Beat the Go Master,” Scientific American, March 19, 2016, https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master; Josh Patterson and Adam Gibson, Deep Learning: A Practitioner’s Approach (Sebastopol, CA: O’Reilly, 2017), 6–8, https://books.google.com/books?id=qrcuDwAAQBAJ; Thomas Anthony, Zheng Tian, and David Barber, “Thinking Fast and Slow with Deep Learning and Tree Search,” 31st Conference on Neural Information Processing Systems (NIPS 2017), revised December 3, 2017, https://arxiv.org/pdf/1705.08439.pdf; Kaiming He et al., “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition, December 10, 2015, https://arxiv.org/pdf/1512.03385.pdf.

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

Facebook snapped up LeCun to lead its AI efforts. Bengio remained an independent academic and founded the world’s largest academic research institute for deep learning—the Montreal Institute for Learning Algorithms (MILA). LeCun was gratified to see that the experts who had once shunned his methods had come around. “They said, ‘Okay, now we buy it,’” LeCun later told a reporter. “‘That’s it, now—you won.’” Image recognition was the first problem to succumb to deep learning’s powers. But with the efficacy of the technique no longer in doubt, many of its practitioners turned to an even more enticing task than identifying pictures: understanding words.

They might not correspond to meanings in human-understandable terms; they were the dimensions that proved most useful to the neural network when it sifted through the data. The beauty of deep learning is that—whether with images, the sounds of speech, or the meanings of words—humans don’t have to pick out the key identifying features. That task inevitably eludes our grasp, says Steve Young, a senior member of the technical staff for Siri and a professor of information engineering at Cambridge University. “Deep learning,” he says, “just avoids the problem by essentially throwing the entire signal into the classifier and letting the classifier work out what features are significant.”

Today Siri pulls from an archive of more than a million sound samples, many of which are as small as half a phoneme. The system uses deep learning to select the optimal sound units—anywhere from a dozen to a hundred or more per sentence—so the puzzle pieces fit cleanly. Because it is trained on examples of real people speaking, the neural network also expresses prosody. Alex Acero, who leads the Siri speech team, says that his ultimate aspiration is to make the assistant sound as natural as the Scarlett Johansson–voiced one in the movie Her. The emergence of powerful deep learning techniques for synthesizing voices means, among other things, that these voices are proliferating.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

The breakthrough moment took nearly half a century, finally arriving in 2012 in the form of a system called AlexNet. AlexNet was powered by the resurgence of an old technique that has now become fundamental to AI, one that has supercharged the field and was integral to us at DeepMind: deep learning. Deep learning uses neural networks loosely modeled on those of the human brain. In simple terms, these systems “learn” when their networks are “trained” on large amounts of data. In the case of AlexNet, the training data consisted of images. Each red, green, or blue pixel is given a value, and the resulting array of numbers is fed into the network as an input.
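The pixel-to-number step Suleyman describes can be illustrated with a toy 2×2 image (an invented example, far smaller than AlexNet's real inputs): each red, green, and blue value becomes a number, scaled to the range 0 to 1, and the flattened array is what the network receives.

```python
# A 2x2 "image": each pixel is an (R, G, B) triple of 0-255 values.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (128, 128, 128)],
]

# Flatten into one array of numbers, scaled to [0, 1], ready to feed to a network.
inputs = [channel / 255 for row in image for pixel in row for channel in pixel]
print(len(inputs))   # 12 numbers: 2 x 2 pixels x 3 channels
print(inputs[:3])    # first pixel, pure red: [1.0, 0.0, 0.0]
```

From the network's point of view there is no "image" at all, only this array of numbers; everything it learns about cats or stop signs is learned from such arrays.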

In 1987 there were just ninety academic papers published at Neural Information Processing Systems, which became the field’s leading conference. By the 2020s there were almost two thousand. In the last six years there was a six-fold increase in the number of papers published on deep learning alone, tenfold if you widen the view to machine learning as a whole. With the blossoming of deep learning, billions of dollars poured into AI research at academic institutions and private and public companies. Starting in the 2010s, the buzz, indeed the hype, around AI was back, stronger than ever, making headlines and pushing the frontiers of what’s possible.

GO TO NOTE REFERENCE IN TEXT In 2012, AlexNet beat Jerry Wei, “AlexNet: The Architecture That Challenged CNNs,” Towards Data Science, July 2, 2019, towardsdatascience.com/​alexnet-the-architecture-that-challenged-cnns-e406d5297951. GO TO NOTE REFERENCE IN TEXT Thanks to deep learning Chanan Bos, “Tesla’s New HW3 Self-Driving Computer—It’s a Beast,” CleanTechnica, June 15, 2019, cleantechnica.com/​2019/​06/​15/​teslas-new-hw3-self-driving-computer-its-a-beast-cleantechnica-deep-dive. GO TO NOTE REFERENCE IN TEXT It helps fly drones Jeffrey De Fauw et al., “Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease,” Nature Medicine, Aug. 13, 2018, www.nature.com/​articles/​s41591-018-0107-6. GO TO NOTE REFERENCE IN TEXT By the 2020s there were almost two thousand “Advances in Neural Information Processing Systems,” NeurIPS, papers.nips.cc.

pages: 533 words: 125,495

Rationality: What It Is, Why It Seems Scarce, Why It Matters
by Steven Pinker
Published 14 Oct 2021

Computer scientists could put multilayer networks on megavitamins, giving them two, fifteen, even a thousand hidden layers, and training them on billions or even trillions of examples. The networks are called deep learning systems because of the number of layers between the input and the output (they’re not deep in the sense of understanding anything). These networks are powering “the great AI awakening” we are living through, which is giving us the first serviceable products for speech and image recognition, question-answering, translation, and other humanlike feats.33 Deep learning networks often outperform GOFAI (good old-fashioned artificial intelligence), which executes logic-like deductions on hand-coded propositions and rules.34 The contrast in the way they work is stark: unlike logical inference, the inner workings of a neural network are inscrutable.

That is why many technology critics fear that as AI systems are entrusted with decisions about the fates of people, they could perpetuate biases that no one can identify and uproot.35 In 2018 Henry Kissinger warned that since deep learning systems don’t work on propositions we can examine and justify, they portend the end of the Enlightenment.36 That is a stretch, but the contrast between logic and neural computation is clear. Is the human brain a big deep learning network? Certainly not, for many reasons, but the similarities are illuminating. The brain has around a hundred billion neurons connected by a hundred trillion synapses, and by the time we are eighteen we have been absorbing examples from our environments for more than three hundred million waking seconds.

New York: Basic Books. Marcus, G. F. 2000. Two kinds of representation. In E. Dietrich & A. B. Markman, eds., Cognitive dynamics: Conceptual and representational change in humans and machines. Mahwah, NJ: Erlbaum. Marcus, G. F. 2018. The deepest problem with deep learning. Medium, Dec. 1. https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695. Marcus, G. F., & Davis, E. 2019. Rebooting AI: Building artificial intelligence we can trust. New York: Pantheon. Marlowe, F. 2010. The Hadza: Hunter-gatherers of Tanzania. Berkeley: University of California Press. Martin, G.

Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps
by Valliappa Lakshmanan , Sara Robinson and Michael Munn
Published 31 Oct 2020

Neural networks with more than one hidden layer (layers other than the input and output layer) are classified as deep learning (see Figure 1-1). Machine learning models, regardless of how they are depicted visually, are mathematical functions and can therefore be implemented from scratch using a numerical software package. However, ML engineers in industry tend to employ one of several open source frameworks designed to provide intuitive APIs for building models. The majority of our examples will use TensorFlow, an open source machine learning framework created by Google with a focus on deep learning models. Within the TensorFlow library, we’ll be using the Keras API in our examples, which can be imported through tensorflow.keras.

One bit of intuition as to why this works comes from the Universal Approximation Theorem of deep learning, which, loosely put, states that any function (and its derivatives) can be approximated by a neural network with at least one hidden layer and any “squashing” activation function, like sigmoid. This means that no matter what function we are given, so long as it’s relatively well behaved, there exists a neural network with just one hidden layer that approximates that function as closely as we want.1 Deep learning approaches to solving differential equations or complex dynamical systems aim to represent a function defined implicitly by a differential equation, or system of equations, using a neural network.
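The theorem's flavor can be seen constructively: a single hidden layer of steep sigmoid units can lay down a staircase that tracks any well-behaved function. A toy sketch (ours, not the book's), approximating x² on [0, 1]:

```python
import math

def sigmoid(z):
    """Numerically safe sigmoid: clamps extreme inputs to avoid overflow."""
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1 / (1 + math.exp(-z))

f = lambda x: x ** 2   # target function to approximate on [0, 1]

# One hidden layer of steep sigmoid units, one per small step of the input range.
# Each unit "switches on" past its threshold, adding that step's change in f.
N, k = 50, 1000.0
thresholds = [(i - 0.5) / N for i in range(1, N + 1)]
heights = [f(i / N) - f((i - 1) / N) for i in range(1, N + 1)]

def approx(x):
    return sum(h * sigmoid(k * (x - t)) for h, t in zip(heights, thresholds))

worst = max(abs(approx(j / N) - f(j / N)) for j in range(N + 1))
print(worst < 0.01)  # the staircase of sigmoids tracks x^2 closely: True
```

Here the hidden-layer weights are constructed by hand rather than learned, but the point stands: one hidden layer with a squashing activation already suffices to approximate the function as closely as we want.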

While increasing the resolution requires substantially more compute power using finite-difference methods, the neural network is able to maintain high performance with only marginal additional cost. Techniques like the Deep Galerkin Method can then use deep learning to provide a mesh-free approximation of the solution to the given PDE. In this way, solving the PDE is reduced to a chained optimization problem (see “Design Pattern 8: Cascade”). The Deep Galerkin Method is a deep learning algorithm for solving partial differential equations. The algorithm is similar in spirit to Galerkin methods used in the field of numerical analysis, except that the solution is approximated using a neural network instead of a linear combination of basis functions.

pages: 292 words: 85,151

Exponential Organizations: Why New Organizations Are Ten Times Better, Faster, and Cheaper Than Yours (And What to Do About It)
by Salim Ismail and Yuri van Geest
Published 17 Oct 2014

After allowing it to browse ten million randomly selected YouTube video thumbnails for three days, the network began to recognize cats, without actually knowing the concept of “cats.” Importantly, this was without any human intervention or input. In the two years since, the capabilities of Deep Learning have improved considerably. Today, in addition to improving speech recognition, creating a more effective search engine (Ray Kurzweil is working on this within Google) and identifying individual objects, Deep Learning algorithms can also detect particular episodes in videos and even describe them in text, all without human input. Deep Learning algorithms can even play video games by figuring out the rules of the game and then optimizing performance. Think about the implications of this revolutionary breakthrough.

The contest ended early, in September 2009, when one of the 44,014 valid submissions achieved the goal and was awarded the prize. Deep Learning is a new and exciting subset of Machine Learning based on neural net technology. It allows a machine to discover new patterns without being exposed to any historical or training data. Leading startups in this space are DeepMind, bought by Google in early 2014 for $500 million, back when DeepMind had just thirteen employees, and Vicarious, funded with investment from Elon Musk, Jeff Bezos and Mark Zuckerberg. Twitter, Baidu, Microsoft and Facebook are also heavily invested in this area. Deep Learning algorithms rely on discovery and self-indexing, and operate in much the same way that a baby learns first sounds, then words, then sentences and even languages.

To implement platforms, ExOs follow four steps in terms of data and APIs: Gather: The algorithmic process starts with harnessing data, which is gathered via sensors, people, or imported from public datasets. Organize: The next step is to organize the data. This is known as ETL (extract, transform and load). Apply: Once the data is accessible, algorithms such as machine or deep learning extract insights, identify trends and tune new algorithms. These are realized via tools such as Hadoop and Pivotal, or even (open source) deep learning algorithms like DeepMind or Skymind. Expose: The final step is exposing the data in the form of an open platform. Open data and APIs can be used such that an ExO’s community develops valuable services, new functionalities and innovations layered on top of the platform by remixing published data with their own.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

The extent to which AI technology will threaten to displace highly skilled workers such as doctors ultimately remains to be seen. A 2019 study in the United Kingdom summarized the state of affairs, stating “Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals,” but then went on to conclude that “poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy.” In other words, the deep learning models may be able to match human performance at the narrow task of making a diagnosis from an X-ray, but the fact that such studies don’t then use the results of the algorithm in an actual medical setting means it’s not possible to determine whether the model’s prediction would have actually led to a better outcome for the patient.

“an algorithm that can detect”: Pranav Rajpurkar et al., “Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning,” CheXNet, December 25, 2017, http://arxiv.org/abs/1711.05225. “people should stop training radiologists”: Geoff Hinton, “Geoff Hinton: On Radiology,” Creative Destruction Lab, uploaded to YouTube November 24, 2016, https://www.youtube.com/watch?v=2HMPRXstSvQ. the work radiologists and other medical professionals do: Hugh Harvey, “Why AI Will Not Replace Radiologists,” Medium, April 7, 2018, https://towardsdatascience.com/why-ai-will-not-replace-radiologists-c7736f2c7d80. “deep learning models to be equivalent”: Xiaoxuan Liu et al., “A Comparison of Deep Learning Performance Against Health-Care Professionals in Detecting Diseases from Medical Imaging: A Systematic Review and Meta-Analysis,” Lancet Digital Health 1, no. 6 (October 1, 2019): e271–97, https://doi.org/10.1016/S2589-7500(19)30123-2.

AI systems can identify patterns in huge pools of data that humans can’t discern and can therefore frequently make more accurate predictions. But such systems are often black boxes, unable to explain why they generate particular outputs. And the scientists who build the systems can’t always explain the outputs either, making the decisions inscrutable. Deep learning refers not to the capacity to generate insights but to a spatial metaphor for the architecture of the AI system. The idea behind deep learning is that inputs to a system form sets of simple patterns that are then combined into increasingly more complex patterns using patterns derived from the previous layer. The moniker deep comes from the fact that such systems now contain many more layers than they did just a decade ago—a result of greater computational power to model the patterns in each layer and vast increases in data that enable these more complex patterns to be discovered.
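The layer-by-layer composition described here can be made concrete with a tiny hand-built example (weights chosen by hand, purely for illustration): two simple ReLU features from the first layer, an “at least one input on” detector and a “both inputs on” detector, are combined by the second layer into XOR, a pattern no single linear layer can represent on its own.

```python
def relu(z):
    return max(0.0, z)

def xor_net(x1, x2):
    # Layer 1: two simple feature detectors over the raw inputs.
    h1 = relu(x1 + x2)        # grows when at least one input is on
    h2 = relu(x1 + x2 - 1.0)  # positive only when both inputs are on
    # Layer 2: combine the simple layer-1 patterns into a more complex one.
    return h1 - 2.0 * h2

# XOR truth table: 0^0=0, 0^1=1, 1^0=1, 1^1=0.
truth_table = {(a, b): xor_net(a, b) for a in (0, 1) for b in (0, 1)}
```

Deeper networks repeat this trick many times over, which is why extra layers (given enough data and compute) let them discover far more intricate patterns.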

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

The current wave of AI has attracted thousands of researchers and billions of dollars of investment. Almost all these people and dollars are being applied to improving deep learning technologies. Will this investment lead to human-level machine intelligence, or are deep learning technologies fundamentally limited, leading us once again to reinvent the field of AI? When you are in the middle of a bubble, it is easy to get swept up in the enthusiasm and believe it will go on forever. History suggests we should be cautious. I don’t know how long the current wave of AI will continue to grow. But I do know that deep learning does not put us on the path to creating truly intelligent machines. We can’t get to artificial general intelligence by doing more of what we are currently doing.

They claimed that we could not make truly intelligent machines until we solved how to represent everyday knowledge in a computer. Today’s deep learning networks don’t possess knowledge. A Go-playing computer does not know that Go is a game. It doesn’t know the history of the game. It doesn’t know if it is playing against a computer or a human, or what “computer” and “human” mean. Similarly, a deep learning network that labels images may look at an image and say it is a cat. But the computer has limited knowledge of cats. It doesn’t know that cats are animals, or that they have tails, legs, and lungs. It doesn’t know about cat people versus dog people, or that cats purr and shed fur. All the deep learning network does is determine that a new image is similar to previously seen images that were labeled “cat.”

AI scientists disagree as to whether these language networks possess true knowledge or are just mimicking humans by remembering the statistics of millions of words. I don’t believe any kind of deep learning network will achieve the goal of AGI if the network doesn’t model the world the way a brain does. Deep learning networks work well, but not because they solved the knowledge representation problem. They work well because they avoided it completely, relying on statistics and lots of data instead. How deep learning networks work is clever, their performance is impressive, and they are commercially valuable. I am only pointing out that they don’t possess knowledge and, therefore, are not on the path to having the ability of a five-year-old child.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights. 978-1-491-96229-9 [LSI] Preface The Machine Learning Tsunami In 2006, Geoffrey Hinton et al. published a paper1 showing how to train a deep neural network capable of recognizing handwritten digits with state-of-the-art precision (>98%). They branded this technique “Deep Learning.” Training a deep neural net was widely considered impossible at the time,2 and most researchers had abandoned the idea since the 1990s. This paper revived the interest of the scientific community and before long many new papers demonstrated that Deep Learning was not only possible, but capable of mind-blowing achievements that no other Machine Learning (ML) technique could hope to match (with the help of tremendous computing power and great amounts of data).

Part II, Neural Networks and Deep Learning, covers the following topics: What are neural nets? What are they good for? Building and training neural nets using TensorFlow. The most important neural net architectures: feedforward neural nets, convolutional nets, recurrent nets, long short-term memory (LSTM) nets, and autoencoders. Techniques for training deep neural nets. Scaling neural networks for huge datasets. Reinforcement learning. The first part is based mostly on Scikit-Learn while the second part uses TensorFlow. Caution Don’t jump into deep waters too hastily: while Deep Learning is no doubt one of the most exciting areas in Machine Learning, you should master the fundamentals first.

Moreover, most problems can be solved quite well using simpler techniques such as Random Forests and Ensemble methods (discussed in Part I). Deep Learning is best suited for complex problems such as image recognition, speech recognition, or natural language processing, provided you have enough data, computing power, and patience. Other Resources Many resources are available to learn about Machine Learning. Andrew Ng’s ML course on Coursera and Geoffrey Hinton’s course on neural networks and Deep Learning are amazing, although they both require a significant time investment (think months). There are also many interesting websites about Machine Learning, including of course Scikit-Learn’s exceptional User Guide.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

Some of those patterns are complex, but most are fairly simple. Great effort goes into parsing our speech and deciphering our handwriting. The current fad in thinking machines goes by the name of deep learning. When I first heard of deep learning, I was excited by the idea that machines were finally going to reveal to us deep aspects of existence—truth, beauty, and love. I was rapidly disabused. The deep in deep learning refers to the architecture of the machines doing the learning: They consist of many layers of interlocking logical elements, analogous to the “deep” layers of interlocking neurons in the brain.

Today’s algorithm has nothing like human-level competence in understanding images. Work is under way to add focus of attention and handling of consistent spatial structure to deep learning. That’s the hard work of science and research, and we have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a dead end. It took some thirty years to go from backpropagation to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it wouldn’t have been surprising if they were right, as we knew all along that the backpropagation algorithm is not what happens inside people’s heads.

After thirty years of research, a million-times improvement in computer power, and vast data sets from the Internet, we now know the answer to this question: Neural networks scaled up to twelve layers deep, with billions of connections, are outperforming the best algorithms in computer vision for object recognition and have revolutionized speech recognition. It’s rare for any algorithm to scale this well, which suggests that they may soon be able to solve even more difficult problems. Recent breakthroughs have been made that allow the application of deep learning to natural-language processing. Deep recurrent networks with short-term memory were trained to translate English sentences into French sentences at high levels of performance. Other deep-learning networks could create English captions for the content of images with surprising and sometimes amusing acumen. Supervised learning using deep networks is a step forward, but still far from achieving general intelligence.

pages: 241 words: 70,307

Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
by David de Cremer
Published 25 May 2020

AI witnessed a comeback in the last decade, primarily because the world woke up to the realization that deep learning by machines is possible to the level where they can actually perform many tasks better than humans. Where did this wake-up call come from? From a simple game called Go. In 2016, AlphaGo, a program developed by Google DeepMind, beat the human world champion in the Chinese board game, Go. This was a surprise to many, as Go – because of its complexity – was considered the territory of human, not AI, victors. In a decade when the human desire to connect globally, execute tasks faster, and accumulate massive amounts of data was omnipresent, such deep learning capabilities were, of course, quickly embraced.

‘Most of AI’s Business Uses Will Be in Two Areas.’ Harvard Business Review. July 20. Retrieved from: https://hbr.org/2018/07/most-of-ais-business-uses-will-be-in-two-areas 8 McKinsey (2018). ‘Notes from the AI frontier: Applications and value of deep learning.’ Retrieved from: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning 9 Bloomberg (2018, January 15th). ‘Alibaba's AI Outguns Humans in Reading Test.’ Retrieved from https://www.bloomberg.com/news/articles/2018-01-15/alibaba-s-ai-outgunned-humans-in-key-stanford-reading-test 10 Gee, K. (2017).

In that respect, an interesting study from the US National Bureau of Economic Research demonstrated that low-skill service-sector workers (where retention rates are low) stayed in the job 15% longer when an algorithm was used to judge their employability.¹⁴ Automation and innovation Automation and the corresponding use of algorithms with deep learning abilities are also penetrating other industries. The legal sector is another area where many discussions are taking place about how and whether to automate services. Legal counsellors have started to use automated advisors to contest relatively small fines such as parking tickets. The legal sector is also considering the use of AI to help judges go through evidence collected to reach a verdict in court cases.

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

Some readers may be surprised to see that I have placed present-day learning machines squarely on rung one of the Ladder of Causation, sharing the wisdom of an owl. We hear almost every day, it seems, about rapid advances in machine learning systems—self-driving cars, speech-recognition systems, and, especially in recent years, deep-learning algorithms (or deep neural networks). How could they still be only at level one? The successes of deep learning have been truly remarkable and have caught many of us by surprise. Nevertheless, deep learning has succeeded primarily by showing that certain questions or tasks we thought were difficult are in fact not. It has not addressed the truly difficult questions that continue to prevent us from achieving humanlike AI.

Another advantage causal models have that data mining and deep learning lack is adaptability. Note that in Figure I.1, the estimand is computed on the basis of the causal model alone, prior to an examination of the specifics of the data. This makes the causal inference engine supremely adaptable, because the estimand computed is good for any data that are compatible with the qualitative model, regardless of the numerical relationships among the variables. To see why this adaptability is important, compare this engine with a learning agent—in this instance a human, but in other cases perhaps a deep-learning algorithm or maybe a human using a deep-learning algorithm—trying to learn solely from the data.

A few months later it played sixty online games against top human players without losing a single one, and in 2017 it was officially retired after beating the current world champion, Ke Jie. The one game it lost to Sedol is the only one it will ever lose to a human. All of this is exciting, and the results leave no doubt: deep learning works for certain tasks. But it is the antithesis of transparency. Even AlphaGo’s programmers cannot tell you why the program plays so well. They knew from experience that deep networks have been successful at tasks in computer vision and speech recognition. Nevertheless, our understanding of deep learning is completely empirical and comes with no guarantees. The AlphaGo team could not have predicted at the outset that the program would beat the best human in a year, or two, or five.

pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Published 26 Jun 2017

November 11, 2016, http://metricviews.org.uk/2007/11/how-big-hectare. 80 Makoto was impressed: Kaz Sato, “How a Japanese Cucumber Farmer Is Using Deep Learning and TensorFlow,” Google, August 31, 2016, https://cloud.google.com/blog/big-data/2016/08/how-a-japanese-cucumber-farmer-is-using-deep-learning-and-tensorflow. 80 “I can’t wait to try it”: Ibid. 80 “It’s not hyperbole”: Ibid. 80 “If intelligence was a cake”: Carlos E. Perez, “ ‘Predictive Learning’ Is the New Buzzword in Deep Learning,” Intuition Machine, December 6, 2016, https://medium.com/intuitionmachine/predictive-learning-is-the-key-to-deep-learning-acceleration-93e063195fd0#.13qh1nti1. 81 Joshua Brown’s Tesla crashed: Anjali Singhvi and Karl Russell, “Inside the Self-Driving Tesla Fatal Accident,” New York Times, July 12, 2016, https://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html. 82 it appears that neither Brown: Tesla, “A Tragic Loss,” June 30, 2016, https://www.tesla.com/blog/tragic-loss. 82 “Conventional wisdom would say”: Chris Urmson, “How a Driverless Car Sees the Road,” TED Talk, June 2015, 15:29, https://www.ted.com/talks/chris_urmson_how_a_driverless_car_sees_the_road/transcript?

Because both supervised and unsupervised machine learning approaches use the algorithms described by Hinton and his colleagues in their 2006 paper, they’re now commonly called “deep learning” systems. Demonstrations and Deployments Except for a very small number of cases, such as the system LeCun built for recognizing handwritten numbers on checks, the business application of deep learning is only a few years old. But the technique is spreading with extraordinary speed. The software engineer Jeff Dean,** who heads Google’s efforts to use the technology, notes that as recently as 2012 the company was not using it at all to improve products like Search, Gmail, YouTube, or Maps. By the third quarter of 2015, however, deep learning was being used in approximately 1,200 projects across the company, having surpassed the performance of other methods.

By 2013, these challenges had been broadly addressed (Erik Bernhardsson, “When Machine Learning Matters,” Erik Bernhardsson [blog], August 5, 2016, https://erikbern.com/2016/08/05/when-machine-learning-matters.html), and the company shifted focus toward using machine learning to deliver highly personalized music recommendations (Jordan Novet, “Spotify Intern Dreams Up Better Music Recommendations through Deep Learning,” VentureBeat, August 6, 2014, http://venturebeat.com/2014/08/06/spotify-intern-dreams-up-better-music-recommendations-through-deep-learning). Spotify launched its algorithm-powered Daily Mix option in September 2016 (Spotify, “Rediscover Your Favorite Music with Daily Mix,” September 27, 2016, https://news.spotify.com/us/2016/09/27/rediscover-your-favorite-music-with-daily-mix).

pages: 579 words: 76,657

Data Science from Scratch: First Principles with Python
by Joel Grus
Published 13 Apr 2015

Our documents are our users’ interests, which look like:

documents = [
    ["Hadoop", "Big Data", "HBase", "Java", "Spark", "Storm", "Cassandra"],
    ["NoSQL", "MongoDB", "Cassandra", "HBase", "Postgres"],
    ["Python", "scikit-learn", "scipy", "numpy", "statsmodels", "pandas"],
    ["R", "Python", "statistics", "regression", "probability"],
    ["machine learning", "regression", "decision trees", "libsvm"],
    ["Python", "R", "Java", "C++", "Haskell", "programming languages"],
    ["statistics", "probability", "mathematics", "theory"],
    ["machine learning", "scikit-learn", "Mahout", "neural networks"],
    ["neural networks", "deep learning", "Big Data", "artificial intelligence"],
    ["Hadoop", "Java", "MapReduce", "Big Data"],
    ["statistics", "R", "statsmodels"],
    ["C++", "deep learning", "artificial intelligence", "probability"],
    ["pandas", "R", "Python"],
    ["databases", "HBase", "Postgres", "MySQL", "MongoDB"],
    ["libsvm", "regression", "support vector machines"]
]

And we’ll try to find K = 4 topics. In order to calculate the sampling weights, we’ll need to keep track of several counts.

In particular, we’ll look at the data set of users_interests that we’ve used before:

users_interests = [
    ["Hadoop", "Big Data", "HBase", "Java", "Spark", "Storm", "Cassandra"],
    ["NoSQL", "MongoDB", "Cassandra", "HBase", "Postgres"],
    ["Python", "scikit-learn", "scipy", "numpy", "statsmodels", "pandas"],
    ["R", "Python", "statistics", "regression", "probability"],
    ["machine learning", "regression", "decision trees", "libsvm"],
    ["Python", "R", "Java", "C++", "Haskell", "programming languages"],
    ["statistics", "probability", "mathematics", "theory"],
    ["machine learning", "scikit-learn", "Mahout", "neural networks"],
    ["neural networks", "deep learning", "Big Data", "artificial intelligence"],
    ["Hadoop", "Java", "MapReduce", "Big Data"],
    ["statistics", "R", "statsmodels"],
    ["C++", "deep learning", "artificial intelligence", "probability"],
    ["pandas", "R", "Python"],
    ["databases", "HBase", "Postgres", "MySQL", "MongoDB"],
    ["libsvm", "regression", "support vector machines"]
]

And we’ll think about the problem of recommending new interests to a user based on her currently specified interests.

After asking around, you manage to get your hands on this data, as a list of pairs (user_id, interest):

interests = [
    (0, "Hadoop"), (0, "Big Data"), (0, "HBase"), (0, "Java"),
    (0, "Spark"), (0, "Storm"), (0, "Cassandra"),
    (1, "NoSQL"), (1, "MongoDB"), (1, "Cassandra"), (1, "HBase"), (1, "Postgres"),
    (2, "Python"), (2, "scikit-learn"), (2, "scipy"),
    (2, "numpy"), (2, "statsmodels"), (2, "pandas"),
    (3, "R"), (3, "Python"), (3, "statistics"), (3, "regression"), (3, "probability"),
    (4, "machine learning"), (4, "regression"), (4, "decision trees"), (4, "libsvm"),
    (5, "Python"), (5, "R"), (5, "Java"), (5, "C++"),
    (5, "Haskell"), (5, "programming languages"),
    (6, "statistics"), (6, "probability"), (6, "mathematics"), (6, "theory"),
    (7, "machine learning"), (7, "scikit-learn"), (7, "Mahout"), (7, "neural networks"),
    (8, "neural networks"), (8, "deep learning"),
    (8, "Big Data"), (8, "artificial intelligence"),
    (9, "Hadoop"), (9, "Java"), (9, "MapReduce"), (9, "Big Data")
]

For example, Thor (id 4) has no friends in common with Devin (id 7), but they share an interest in machine learning. It’s easy to build a function that finds users with a certain interest:

def data_scientists_who_like(target_interest):
    return [user_id
            for user_id, user_interest in interests
            if user_interest == target_interest]

This works, but it has to examine the whole list of interests for every search.
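One standard fix for that full scan is to precompute an inverted index from each interest to the users who report it. A minimal sketch using a small slice of the pairs (the slice and the name user_ids_by_interest are mine, not necessarily the book’s):

```python
from collections import defaultdict

# A small slice of the (user_id, interest) pairs from the text.
interests = [
    (0, "Hadoop"), (0, "Big Data"), (0, "HBase"),
    (1, "NoSQL"), (1, "MongoDB"), (1, "HBase"),
    (4, "machine learning"), (7, "machine learning"),
]

# Build the index once: each interest maps to the ids of users who report it.
user_ids_by_interest = defaultdict(list)
for user_id, interest in interests:
    user_ids_by_interest[interest].append(user_id)

def data_scientists_who_like(target_interest):
    # Each lookup is now a dict access instead of a scan of every pair.
    return user_ids_by_interest[target_interest]
```

Building the index costs one pass over the data; every subsequent query is then effectively constant time.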

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

However, the last few years have seen a number of dramatic breakthroughs that have resulted in significant advances in performance, especially when multiple layers of neurons are employed—a technology that has come to be called “deep learning.” Deep learning systems already power the speech recognition capability in Apple’s Siri and are poised to accelerate progress in a broad range of applications that rely on pattern analysis and recognition. A deep learning neural network designed in 2011 by scientists at the University of Lugano in Switzerland, for example, was able to correctly identify more than 99 percent of the images in a large database of traffic signs—a level of accuracy that exceeded that of human experts who competed against the system.

That compares with 97.53 percent accuracy for human observers.9 Geoffrey Hinton of the University of Toronto, one of the leading researchers in the field, notes that deep learning technology “scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better.”10 In other words, even without accounting for likely future improvements in their design, machine learning systems powered by deep learning networks are virtually certain to see continued dramatic progress simply as a result of Moore’s Law. Big data and the smart algorithms that accompany it are having an immediate impact on workplaces and careers as employers, particularly large corporations, increasingly track a myriad of metrics and statistics regarding the work and social interactions of their employees.

Tom Simonite, “Facebook Creates Software That Matches Faces Almost as Well as You Do,” MIT Technology Review, March 17, 2014, http://www.technologyreview.com/news/525586/facebook-creates-software-that-matches-faces-almost-as-well-as-you-do/. 10. As quoted in John Markoff, “Scientists See Promise in Deep-Learning Programs,” New York Times, November 23, 2012, http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html. 11. Don Peck, “They’re Watching You at Work,” The Atlantic, December 2013, http://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/. 12. United States Patent No. 8,589,407, “Automated Generation of Suggestions for Personalized Reactions in a Social Network,” November 19, 2013, http://patft.uspto.gov/netacgi/nph-Parser?

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

This suggests—but certainly does not prove—that without us machine users to interpret the results, critically and insightfully, deep-learning machines may grow in competence, surpassing animal brains (including ours) by orders of magnitude in the bottom-up task of finding statistical regularities, but never achieve (our kind of) comprehension. “So what?” some might respond. “The computer kind of bottom-up comprehension will eventually submerge the human kind, overpowering it with the sheer size and speed of its learning.” The latest breakthrough in AI, AlphaGo, the deep-learning program that has recently beaten Lee Sedol, regarded by many as the best human player of Go in the world, supports this expectation in one regard if not in others.

Dehaene and Naccache (2001) note “the impossibility for subjects [i.e., executives] to strategically use the unconscious information.” My claim, then, is that deep learning (so far) discriminates but doesn’t notice. That is, the flood of data that a system takes in does not have relevance for the system except as more “food” to “digest.” Being bedridden, not having to fend for itself, it has no goals beyond increasing its store of well-indexed information. Beyond the capacity we share with Watson and other deep learning machines to acquire know-how that depends on statistical regularities that we extract from experience, there is the capacity to decide what to search for and why, given one’s current aims.

A conscious human mind is not a miracle, not a violation of the principles of natural selection, but a novel extension of them, a new crane that adjusts evolutionary biologist Stuart Kauffman’s concept of the adjacent possible: many more places in Design Space are adjacent to us because we have evolved the ability to think about them and either seek them or shun them. The unanswered question for Domingos and other exponents of deep learning is whether learning a sufficiently detailed and dynamic theory of agents with imagination and reason-giving capabilities would enable a system (a computer program, a Master Algorithm) to generate and exploit the abilities of such agents, that is to say, to generate all the morally relevant powers of a person.103 My view is (still) that deep learning will not give us—in the next fifty years—anything like the “superhuman intelligence” that has attracted so much alarmed attention recently (Bostrom 2014; earlier invocations are Moravec 1988; Kurzweil 2005; and Chalmers 2010; see also the annual Edge world question 2015; and Katchadourian 2015).

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

Grainy SAR images of tanks, artillery, or airplanes parked on a runway often push the limits of human abilities to recognize objects, and historically ATR algorithms have fallen far short of human abilities. The poor performance of military ATR stands in stark contrast to recent advances in computer vision. Artificial intelligence has historically struggled with object recognition and perception, but the field has seen rapid gains recently due to deep learning. Deep learning uses neural networks, a type of AI approach that is analogous to biological neurons in animal brains. Artificial neural networks don’t directly mimic biology, but are inspired by it. Rather than follow a script of if-then steps for how to perform a task, neural networks work based on the strength of connections within a network.

Thus the response of the network to all possible inputs is unknowable. Part of this is due to the early stage of research in neural nets, but part of it is due to the sheer complexity of the deep learning. The JASON group argued that “the very nature of [deep neural networks] may make it intrinsically difficult for them to transition into what is typically recognized as a professionally engineered product.” AI researchers are working on ways to build more transparent AI, but Jeff Clune isn’t hopeful. “As deep learning gets even more powerful and more impressive and more complicated and as the networks grow in size, there will be more and more and more things we don’t understand. . . .

Goodfellow, Jonathan Shlens, and Christian Szegedy, “Explaining and Harnessing Adversarial Examples,” March 20, 2015, https://arxiv.org/abs/1412.6572; Ian Goodfellow, Presentation at Re-Work Deep Learning Summit, 2015, https://www.youtube.com/watch?v=Pq4A2mPCB0Y. 184 “infinitely far to the left”: Jeff Clune, interview, September 28, 2016. 184 “real-world images are a very, very small”: Ibid. 184 present in essentially every deep neural network: “Deep neural networks are easily fooled.” Goodfellow et al., “Explaining and Harnessing Adversarial Examples.” 184 specially evolved noise: Corey Kereliuk, Bob L. Sturm, and Jan Larsen, “Deep Learning and Music Adversaries,” http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/6904/pdf/imm6904.pdf. 184 News-reading trading bots: John Carney, “The Trading Robots Really Are Reading Twitter,” April 23, 2013, http://www.cnbc.com/id/100666302.

pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy
by George Gilder
Published 16 Jul 2018

CHAPTER 3 Google’s Roots and Religions Under the leadership of Larry Page and Sergey Brin, Google developed the integrated philosophy that currently shapes our lives and fortunes, combining a theory of knowledge (nicknamed “Big Data”), a technological vision (centralized cloud computing), a cult of the commons (rooted in “open source” software), a concept of money and value (based on free goods and automated advertising), a theory of morality as “gifts” rather than profits, and a view of progress as evolutionary inevitability and an ever diminishing “carbon footprint.” This philosophy rules our economic lives in America and, increasingly, around the globe. With its development of “deep learning” by machines and its hiring of the inventor-prophet Raymond Kurzweil in 2014, Google enlisted in a chiliastic campaign to blend human and machine cognition. Kurzweil calls it a “singularity,” marked by the triumph of computation over human intelligence. Google networks, clouds, and server farms could be said to have already accomplished much of it.

Google, meanwhile, under its new CEO, Sundar Pichai, pivoted away from its highly publicized “mobile first” mantra, which had led to its acquisitions of Android and AdMob, and toward “AI first.” Google was the recognized intellectual leader of the industry, and its AI ostentation was widely acclaimed. Indeed, it signed up most of the world’s AI celebrities, including its spearheads of “deep learning” prowess, from Geoffrey Hinton and Andrew Ng to Jeff Dean, the beleaguered Anthony Levandowski, and Demis Hassabis of DeepMind. If Google had been a university, it would have utterly outshone all others in AI talent. It must have been discouraging, then, to find that Amazon had shrewdly captured much of the market for AI services with its 2014 Alexa and Echo projects.

Or do you diffuse the memory and processing all through the machine? In a massively parallel spread like Dally’s J-machine, the memory is always close to the processor. Twenty-six years later, Dally and Jouppi are still at it. At the August 2017 Hot Chips in Cupertino, all the big guys were touting their own chips for what they call “deep learning,” the fashionable Silicon Valley term for the massive acceleration of multi-layered pattern recognition, correlation, and correction tied to feedback that results in a cumulative gain in performance. What they call “learning” originated in earlier ventures in AI. Guess, measure the error, adjust the answer, feed it back are the canonical steps followed in Google’s data centers, enabling such applications as Google Translate, Google Soundwriter, Google Maps, Google Assistant, Waymo cars, search, Google Now, and so on, in real time.4 As recently as 2012, Google was still struggling with the difference between dogs and cats.

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

These individuals are responsible for making important judgment calls about which AI technologies might best be deployed for specific applications. A huge consideration here is accuracy versus “explainability.” A deep-learning system, for example, provides a high level of prediction accuracy, but companies may have difficulty explaining how those results were derived. In contrast, a decision tree may not lead to results with high prediction accuracy but will enable a significantly greater explainability. So, for instance, an internal system that optimizes a supply chain with small tolerances for scheduling deliveries might best deploy deep-learning technology, whereas a health-care or consumer-facing application that will have to stand up to considerable regulatory scrutiny may be better off utilizing falling rule list algorithms.10 In addition, the explainability strategist might also decide that, for a particular application, the company might be better off avoiding the use of AI altogether.
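The accuracy-versus-explainability trade-off described above can be made concrete. A falling rule list, one of the interpretable alternatives the passage mentions, is just an ordered set of if-then rules with decreasing risk scores, so every prediction can be read straight off the rule that fired. The sketch below is illustrative only; the field names, rule conditions, and risk numbers are all invented for the example.

```python
# A "falling rule list"-style predictor: ordered if-then rules with
# monotonically decreasing risk estimates. The first matching rule wins,
# which makes every prediction fully auditable.

def falling_rule_list_predict(record, rules, default_risk):
    """Return (risk, explanation) from the first rule that matches."""
    for condition, risk, label in rules:
        if condition(record):
            return risk, f"matched rule: {label}"
    return default_risk, "no rule matched (default)"

# Invented rules for a hypothetical patient-readmission screen.
RULES = [
    (lambda r: r["prior_admissions"] >= 3, 0.70, "3+ prior admissions"),
    (lambda r: r["age"] >= 75, 0.45, "age 75 or older"),
    (lambda r: r["chronic_conditions"] >= 2, 0.30, "2+ chronic conditions"),
]

risk, why = falling_rule_list_predict(
    {"prior_admissions": 1, "age": 80, "chronic_conditions": 0}, RULES, 0.10)
print(risk, "-", why)  # 0.45 - matched rule: age 75 or older
```

A deep network would give a number with no such "why"; the rule list trades some accuracy for exactly this audit trail.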

See personalization cybersecurity, 56–58, 59 Darktrace, 58 DARPA Cyber Grand Challenges, 57, 190 Dartmouth College conference, 40–41 dashboards, 169 data, 10 in AI training, 121–122 barriers to flow of, 176–177 customization and, 78–80 discovery with, 178 dynamic, real-time, 175–176 in enterprise processes, 59 exhaust, 15 in factories, 26–27, 29–30 leadership and, 180 in manufacturing, 38–39 in marketing and sales, 92, 98–99, 100 in R&D, 69–72 in reimagining processes, 154 on supply chains, 33–34 supply chains for, 12, 15 velocity of, 177–178 data hygienists, 121–122 data supply-chain officers, 179 data supply chains, 12, 15, 174–179 decision making, 109–110 about brands, 93–94 black box, 106, 125, 169 employee power to modify AI, 172–174 empowerment for, 15 explainers and, 123–126 transparency in, 213 Deep Armor, 58 deep learning, 63, 161–165 deep-learning algorithms, 125 DeepMind, 121 deep neural networks (DNN), 63 deep reinforcement learning, 21–22 demand planning, 33–34 Dennis, Jamie, 158 design at Airbus, 144 AI system, 128–129 Elbo Chair, 135–137 generative, 135–137, 139, 141 product/service, 74–77 Dickey, Roger, 52–54 digital twins, 10 at GE, 27, 29–30, 183–184, 194 disintermediation, brand, 94–95 distributed learning, 22 distribution, 19–39 Ditto Labs, 98 diversity, 52 Doctors Without Borders, 151 DoubleClick Search, 99 Dreamcatcher, 136–137, 141, 144 drones, 28, 150–151 drug interactions, 72–74 Ducati, 175 Echo, 92, 164–165 Echo Voyager, 28 Einstein, 85–86, 196 Elbo Chair, 136–137, 139 “Elephants Don’t Play Chess” (Brooks), 24 Elish, Madeleine Clare, 170–171 Ella, 198–199 embodied intelligence, 206 embodiment, 107, 139–140 in factories, 21–23 of intelligence, 206 interaction agents, 146–151 jobs with, 147–151 See also augmentation; missing middle empathy engines for health care, 97 training, 117–118, 132 employees agency of, 15, 172–174 amplification of, 138–139, 141–143 development of, 14 hiring, 51–52 job satisfaction in, 46–47 marketing and sales, 90, 
92, 100–101 on-demand work and, 111 rehumanizing time and, 186–189 routine/repetitive work and, 26–27, 29–30, 46–47 training/retraining, 15 warehouse, 31–33 empowerment, 137 bot-based, 12, 195–196 in decision making, 15 of salespeople, 90, 92 workforce implications of, 137–138 enabling, 7 enterprise processes, 45–66 compliance, 47–48 determining which to change, 52–54 hiring and recruitment, 51–52 how much to change, 54–56 redefining industries with, 56–58 reimagining around people, 58–59 robotic process automation (RPA) in, 50–52 routine/repetitive, 46–47 ergonomics, 149–150 EstherBot, 199 ethical, moral, legal issues, 14–15, 108 Amazon Echo and, 164–165 explainers and, 123–126 in marketing and sales, 90, 100 moral crumple zones and, 169–172 privacy, 90 in R&D, 83 in research, 78–79 ethics compliance managers, 79, 129–130, 132–133 European Union, 124 Ewing, Robyn, 119 exhaust data, 15 definition of, 122 experimentation, 12, 14 cultures of, 161–165 in enterprise processes, 59 leadership and, 180 learning from, 71 in manufacturing, 39 in marketing and sales, 100 in process reimagining, 160–165 in R&D, 83 in reimagining processes, 154 testing and, 74–77 expert systems, 25, 41 definition of, 64 explainability strategists, 126 explaining outcomes, 107, 114–115, 179 black-box concerns and, 106, 125, 169 jobs in, 122–126 sustaining and, 130 See also missing middle extended intelligence, 206 extended reality, 66 Facebook, 78, 79, 95, 177–178 facial recognition, 65, 90 factories, 10 data flow in, 26–27, 29–30 embodiment in, 140 job losses and gains in, 19, 20 robotic arms in, 21–26 self-aware, 19–39 supply chains and, 33–34 third wave in, 38–39 traditional assembly lines and, 1–2, 4 warehouse management and, 30–33 failure, learning from, 71 fairness, 129–130 falling rule list algorithms, 124–125 Fanuc, 21–22, 128 feedback, 171–172 feedforward neural networks (FNN), 63 Feigenbaum, Ed, 41 financial trading, 167 first wave of business transformation, 5 Fletcher, Seth, 49 food 
production, 34–37 ForAllSecure, 57 forecasts, 33–34 Fortescue Metals Group, 28 Fraunhofer Institute of Material Flow and Logistics (IML), 26 fusion skills, 12, 181, 183–206, 210 bot-based empowerment, 12, 195–196 developing, 15–16 holistic melding, 12, 197, 200–201 intelligent interrogation, 12, 185, 193–195 judgment integration, 12, 191–193 potential of, 209 reciprocal apprenticing, 12, 201–202 rehumanizing time, 12, 186–189 relentless reimagining, 12, 203–205 responsible normalizing, 12, 189–191 training/retraining for, 211–213 Future of Work survey, 184–185 Garage, Capital One, 205 Gaudin, Sharon, 99 GE.

From the Mechanistic to the Organic The potential power of AI to transform businesses is unprecedented, and yet there is an urgent and growing challenge. Companies are now reaching a crossroads in their use of AI, which we define as systems that extend human capability by sensing, comprehending, acting, and learning. As businesses deploy such systems—spanning from machine learning to computer vision to deep learning—some firms will continue to see modest productivity gains over the short run, but those results will eventually stall out. Other companies will be able to attain breakthrough improvements in performance, often by developing game-changing innovations. What accounts for the difference? It has to do with understanding the true nature of AI’s impact.

pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies
by Igor Tulchinsky
Published 30 Sep 2019

Another alternative is FloatBoost, which incorporates the backtracking mechanism of floating search and repeatedly performs a backtracking to remove unfavorable weak classifiers after a new weak classifier is added by AdaBoost; this ensures a lower error rate and reduced feature set at the cost of about five times longer training time. Deep Learning Deep learning (DL) is a popular topic today – and a term that is used to discuss a number of rather distinct things. Some data scientists think DL is just a buzz word or a rebranding of neural networks. The name comes from Canadian scientist Geoffrey Hinton, who created an unsupervised method known as the restricted Boltzmann machine (RBM) for pretraining NNs with a large number of neuron layers. That was meant to improve on the backpropagation training method, but there is no strong evidence that it really was an improvement. Another direction in deep learning is recurrent neural networks (RNNs) and natural language processing.
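The AdaBoost loop that FloatBoost extends can be sketched in a few lines: weak classifiers (here one-dimensional threshold "stumps") are added one at a time, and the examples the latest stump misclassifies are up-weighted so the next round focuses on them. This is an illustrative sketch on invented toy data, without FloatBoost's backtracking step.

```python
import math

# Minimal AdaBoost with 1-D threshold stumps as the weak classifiers.
# A stump predicts sign * (+1 if x > t else -1).

def train_adaboost(xs, ys, rounds=5):
    """xs: floats; ys: labels in {-1, +1}. Returns [(threshold, sign, alpha)]."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for t in sorted(xs):                 # candidate thresholds
            for sign in (+1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if sign * (1 if xi > t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-10)                # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((t, sign, alpha))
        # Re-weight: boost the examples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * sign * (1 if xi > t else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    score = sum(alpha * sign * (1 if x > t else -1) for t, sign, alpha in model)
    return 1 if score >= 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # [-1, -1, -1, 1, 1, 1]
```

FloatBoost would add a backtracking pass after each round, removing previously added stumps that no longer reduce the ensemble error.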

This is called the vanishing gradient problem. These days, the words “deep learning” more often refer to convolutional neural networks (CNNs). The architecture of CNNs was introduced by computer scientists Kunihiko Fukushima, who developed the neocognitron model (feed-forward NN), and Yann LeCun, who modified the backpropagation algorithm for neocognitron training. CNNs require a lot of resources for training, but they can be easily parallelized and therefore are a good candidate for parallel computations. When applying deep learning, we seek to stack several independent neural network layers that by working together produce better results than the shallow individual structures.
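The vanishing gradient problem named above can be demonstrated numerically: backpropagation multiplies one sigmoid derivative per layer, and since the sigmoid's derivative never exceeds 0.25, the error signal reaching the earliest layers shrinks geometrically with depth. A minimal sketch, with weights and input chosen arbitrarily:

```python
import math

# The gradient of a deep chain of sigmoid layers w.r.t. its input,
# computed by the chain rule: one derivative factor per layer.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_through_layers(depth, x=0.5, weight=1.0):
    grad, a = 1.0, x
    for _ in range(depth):
        a = sigmoid(weight * a)
        grad *= weight * a * (1.0 - a)  # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    return grad

for depth in (2, 10, 50):
    print(depth, gradient_through_layers(depth))
```

Each extra layer multiplies the gradient by a factor of roughly 0.22 to 0.25 here, so by 50 layers the signal is vanishingly small; this is what made very deep networks hard to train before tricks like ReLU activations and careful initialization.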

For example, in alpha research the task of predicting stock prices can be a good application of supervised learning, and the task of selecting stocks for inclusion in a portfolio is an application of unsupervised learning.

Machine Learning in Alpha Research

[Figure 16.1: The most developed directions of machine learning (the most popular in black): unsupervised methods (clusterization algorithms) and supervised methods (statistical models, support vector machines, neural networks, deep learning algorithms, fuzzy logic, and ensemble methods such as random forest and AdaBoost).]

Statistical Models

Models like naive Bayes, linear discriminant analysis, the hidden Markov model, and logistic regression are good for solving relatively simple problems that do not need high precision of classification or prediction.
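The portfolio-selection task mentioned above maps naturally onto clusterization, the unsupervised branch of the taxonomy. A minimal sketch, with invented tickers and return figures, grouping stocks by average daily return using one-dimensional k-means:

```python
import random

# 1-D k-means: alternate between assigning each value to its nearest
# center and moving each center to the mean of its assigned values.

def kmeans_1d(values, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Keep the old center if a cluster ends up empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical average daily returns for six made-up tickers.
returns = {"AAA": 0.021, "BBB": 0.019, "CCC": 0.002,
           "DDD": -0.018, "EEE": 0.001, "FFF": -0.022}
centers, clusters = kmeans_1d(list(returns.values()), k=3)
print(sorted(round(c, 4) for c in centers))
```

As with k-means generally, the result depends on the initial centers; the seed is fixed here only for repeatability.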

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

These processors made it possible to speed up the network training by orders of magnitude, performing the major number crunching required to reduce what once took weeks to a matter of days or hours. Different approaches led to further refining of these deep learning techniques, using methods with names such as restricted Boltzmann machines and recurrent neural networks. All of these factors vastly improved the deep learning algorithms being used for many kinds of pattern recognition work. Continuing advances contributed to the significant gains seen by artificial intelligence during this past decade, including Facebook’s development of DeepFace, which identifies human faces in images with 97 percent accuracy.

In 2012, a University of Toronto artificial intelligence team made up of Hinton and two of his students won the annual ImageNet Large Scale Visual Recognition Competition with a deep learning neural network that blew the competition away.5 More recently, Google DeepMind used deep learning to develop the Go-playing AI, AlphaGo, training it by using a database of thirty million recorded moves from expert-level games. In March 2016, AlphaGo beat the world Go grandmaster, Lee Sedol, in four out of five games. Playing Go is considered a much bigger AI challenge than playing chess.

The camera technology wasn’t strong enough for us to actually measure the microexpressions that are on people’s faces, those subconscious reactions that show up on the muscles on our face before we can shut it down with our consciousness, because they’re impulse.” Continuing, Denman notes that the processing power is also now available to us to be able to run the deep learning neural networks needed to make this happen. Because of this, dozens of companies are entering the space, focusing not only on facial information but on the other ways we engage the world emotionally. Tel Aviv–based Beyond Verbal is an emotions analytics company that extracts and identifies feelings conveyed by intonations in the human voice.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

When artificial intelligence reduces that waste and increases the benefits to society, as a by-product of removing the waste in the system, it reduces the number of jobs in healthcare. With more than $3.5 trillion of annual spending and 19 percent of the US GDP in healthcare, that could mean a lot of jobs. For instance, as reported in a May 2019 Nature Medicine article, researchers created a 3D volumetric deep learning model to screen for lung cancer.55 When comparing a single image, the deep learning model outperformed six radiology experts, with an 11 percent reduction in false positives and a 5 percent reduction in false negatives. According to Dr. Mozziyar Etemadi, one of the study’s coauthors, “AI in 3D can be much more sensitive in its ability to detect early lung cancer than the human eye looking at 2D images.

All of that digitization is also creating some impressive data capture, much more than we are even aware of, and the data collection from connected computers, people, cameras, and sensors has only just started. Connecting those devices to learn from data is arguably a far easier job than that of building the original network. The rate of growth in today’s deep learning in artificial intelligence is largely driven by data collection and large data sets. In fact, every platform company today is really a data company with AI at its core. Other data, too, is moving out of its previous silos, giving rise to an intelligence that can be combined with other data sets to learn at a rate far faster than humans.

Until 2014, even top AI researchers believed top human competitors would beat computers for years to come because of the complexity of the game and the fact that algorithms had to compare every move, which required enormous compute power. But in 2016, Google’s DeepMind program AlphaGo beat one of the top players in the world, Lee Sedol, in a match that made history. AlphaGo’s program was based on deep learning, which was “trained” using thousands of human amateur and professional games. It made history not only because it was the first time a computer beat a top Go master, but also because of the way it did so. In game 2, on the thirty-seventh move, the computer made a move that defied logic, placing a black stone in the middle of an open area—away from the other stones.

pages: 256 words: 67,563

Explaining Humans: What Science Can Teach Us About Life, Love and Relationships
by Camilla Pang
Published 12 Mar 2020

Compared to traditional machine learning, a neural network is more independent and requires less input from the programmer to define what it should be searching for, since, through internal layers of logic, it is able to create its own connections. All of the more radical examples of artificial intelligence you may have read about – from fully driverless cars to mass automation of people’s jobs – ultimately rely on deep learning, the closest we have so far got to developing a computer program that can think (within considerable limitations) like a human. Deep learning is also responsible for applications, including criminal checks, drug design, and the computer programs that rival the most competent chess players, all of which depend on an ability to simulate the connective capability of the human mind.

How not to follow the crowd: Molecular dynamics, conformity and individuality
7. How to achieve your goals: Quantum physics, network theory and goal setting
8. How to have empathy with others: Evolution, probability and relationships
9. How to connect with others: Chemical bonds, fundamental forces and human connection
10. How to learn from your mistakes: Deep learning, feedback loops and human memory
11. How to be polite: Game theory, complex systems and etiquette
Afterword
Acknowledgements
Index
About the Author
Dr Camilla Pang holds a PhD in Biochemistry from University College London and is a Postdoctoral Scientist specialising in Translational Bioinformatics.

No one can ever be entirely dispassionate, objective, or dare I say scientific, about how they form new relationships. But chemistry can give us a new outlook and a fresh perspective: one that provides the confidence to make, break and sometimes re-make the connections that define us.

10. How to learn from your mistakes: Deep learning, feedback loops and human memory

With ADHD, you’re always forgetting what you’re meant to be doing. My working memory – the part where we hold information for short-term, immediate use – is constantly undermined by new thoughts, impulses or emotional responses. It feels as though everywhere you go, even just to the room next door, the working memory is always being refreshed, losing immediate context.

pages: 296 words: 66,815

The AI-First Company
by Ash Fontana
Published 4 May 2021

Reinforcement learning is a functionally different approach to supervised and unsupervised ML. Transfer and deep learning overlap with the other types. The table below shows what might be applicable to a particular situation depending on the data at hand, required interpretability, and existing knowledge of the prediction problem.

Supervised: learns from inputs given outputs; needs training and feedback data; good when data is available but the algorithm is missing. Selected methods include random forest trees, decision trees (including random forest and gradient boosted types), regression, support vector machines (SVMs), and neural networks.

Unsupervised: learns from inputs without outputs; needs lots of data; good when it’s unclear what is being looked at and/or there are no labels. Selected methods include clustering (k-means, hierarchical, and others) and Gaussian mixture models.*

Reinforcement: learns from objectives; needs objectives; good when it’s possible to articulate the state, action, reward, and how to modify the state based on the rewards. Selected methods: various, but all forms of reinforcement learning.

Transfer: learns from inputs; needs existing models; good when problems are similar, training time and computational resources are limited, and results are needed fast. Selected methods include Bayesian networks and Markov logic networks.

Deep: learns from other layers in the network; needs lots of data and computational resources; good when there is lots of unstructured time series data (for convolutional neural networks) or data that’s not independent (for recurrent neural networks).* Selected methods include convolutional neural networks and recurrent neural networks.

COMPOUNDING

There are many different methods for making predictions, each one generating and accumulating data in various ways.

Data is uploaded from the system of record, predictive models are applied, and actionable insights made available to the user.

Data type: Unstructured. Deep learning over unstructured data to turn it into structured data and extract predictive features. There’s just not enough data about most jobs available for the training of deep learning models.

Data categorization: Exhaust. Higher accuracy by removing reporting bias. A machine-generated summary of the last email from a sales lead is perhaps more indicative of a likelihood to close a deal with that lead than a salesperson’s subjective opinion expressed in a single, categorical input.

The Canadian computer scientist Yoshua Bengio devised a language model based on a neural network that figured out the next best word to use among all the available words in a language based on where that word usually appeared with respect to other words. Geoffrey Hinton, a British-born computer scientist and psychologist, developed a neural network that linked many layers of neurons together, the precursor to deep learning. Importantly, researchers worked to get these neural networks running efficiently on the available computer chips, settling on the chips used for computer graphics because they are particularly good at running many numerical computations in parallel. The result was a trainable neural network: programmable neurons, connected in a weblike network, passing the computations onto another web sitting below it—all computed on a chip that could perform the necessary operations on a reasonable timescale: mere days instead of months.
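Bengio's model was a neural network, but its objective can be illustrated in its simplest statistical form: predict the next word from where words usually appear relative to one another. The bigram counter below is a stand-in for that idea, not his architecture; the training sentence is invented.

```python
from collections import Counter, defaultdict

# A bigram table: for each word, count which words follow it in the
# training text, then predict the most frequent follower.

def train_bigrams(corpus):
    table = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table, word):
    """Most frequent follower of `word`, or None if unseen."""
    followers = table[word]
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"
table = train_bigrams(corpus)
print(next_word(table, "the"))  # "cat" (follows "the" twice, vs "mat" once)
```

A neural language model replaces the raw counts with learned word representations, which lets it generalize to word sequences it has never seen; the counting version can only look up exact histories.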

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Perez summed up these developments: “So not only are researcher[s] who hand optimize gradient descent solutions out of business, so are folks who make a living designing neural architectures! This is actually just the beginning of Deep Learning systems just bootstrapping themselves… This is absolutely shocking and there’s really no end in sight as to how quickly Deep Learning algorithms are going to improve. This meta capability allows you to apply it on itself, recursively creating better and better systems.”137 As noted in Chapter 1, various companies and researchers announced in 2017 that they had created AI software which could itself develop further AI software.138 In May 2017, Google demonstrated a meta-learning technology called AutoML.

See Cade Metz, “Google’s Dueling Neural Networks Spar to Get Smarter, No Humans Required”, Wired, 4 November 2017, https://www.wired.com/2017/04/googlesdueling-neural-networks-spar-get-smarter-no-humans-required/, accessed 16 August 2018. 127Yann LeCun, “Answer to Question: What are Some Recent and Potentially Upcoming Breakthroughs in Deep Learning?”, Quora, 28 July 2016, https://www.quora.com/What-are-some-recent-and-potentially-upcoming-breakthroughs-in-deep-learning, accessed 16 August 2018. 128Andrea Bertolini, “Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules”, Law Innovation and Technology, Vol. 5, No. 2 (2013), 214–247, 234–235. 129See Chapter 1 at s. 5 and FN 111.

id=rJY0-Kcll, accessed 1 June 2018. 136Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen, Adam Coates, Andrew Maas, Awni Hannun, Brody Huval, Tao Wang, and Sameep Tando, “Optimization: Stochastic Gradient Descent”, Stanford UFLDL Tutorial, http://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/, accessed 1 June 2018. 137Carlos E. Perez, “Deep Learning: The Unreasonable Effectiveness of Randomness”, Medium, 6 November 2016, https://medium.com/intuitionmachine/deep-learning-the-unreasonable-effectiveness-of-randomness-14d5aef13f87, accessed 1 June 2018. 138See Chapter 1 at s. 5. 139See also Sundar Pichai, “Making AI Work for Everyone”, Google Blog, 17 May 2017, https://blog.google/topics/machine-learning/making-ai-work-for-everyone/, accessed 1 June 2018. 140At present, many AI systems require a significant amount of human fine-tuning, especially when they are produced by companies interested in achieving striking results even at a high cost in terms of resources.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

We could make all the books available electronically and have the students access the books on their phones, because all students have phones. Wrong. Phones are great for reading short works, but long works are difficult and uncomfortable to read on a phone. Studies show that reading on a screen is worse than reading on paper in an educational context. Speed, accuracy, and deep learning all suffer when research subjects read on screens. Paper is simply a superior technology for the kind of deep learning that we want students to engage in as part of their education. Reading on a screen is fun and convenient, yes—but reading for comprehension is not about fun or convenience. It’s about learning. When it comes to learning, students generally prefer paper to screens.3 Another technochauvinist might suggest giving all the students iPads or Chromebooks or some kind of e-reader and making all the books available electronically: another good idea, but the obstacles are obvious.

“Reflecting on One Very, Very Strange Year at Uber.” Susan Fowler (blog), February 19, 2017. https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-one-very-strange-year-at-uber. Gomes, Lee. “Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter.” IEEE Spectrum (blog), February 18, 2015. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning. Gray, Jonathan, Liliana Bounegru, and Lucy Chambers, eds. The Data Journalism Handbook: How Journalists Can Use Data to Improve the News. Sebastopol, CA: O’Reilly Media, 2012. Grazian, David. Mix It Up: Popular Culture, Mass Media, and Society. 2nd ed.

We can take the model and run new data through it to get a numerical answer that predicts something: how likely it is that a squiggle on a page is the letter A; how likely it is that a given customer will pay back the mortgage money a bank loans to him; which is the best next move to make in a game of tic-tac-toe, checkers, or chess. Machine learning, deep learning, neural networks, and predictive analytics are some of the narrow AI concepts that are currently popular. For every AI system that exists today, there is a logical explanation for how it works. Understanding the computational logic can demystify AI, just like dismantling a computer helps to demystify hardware.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

Avoiding the stretched metaphors of neural networks, the mathematical functions represented now look like this: The typical word used to describe these networks is ‘deep’ and the weight-tuning algorithms employed are now called ‘deep learning’. That term is certainly justified in terms of the layers of mathematics being utilized in these deeply nested functions. In deep learning networks, layers upon layers of numerical functions (often Bell Curves, or functions that combine into other, similar, simple shapes) are being moved around fields of numbers, to produce intricately linked mathematical ‘atoms’ of emergent representations.
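Those "deeply nested functions" can be written out literally: each layer is a simple parameterized function, and the network is their composition. The weights and biases below are arbitrary illustrative numbers, not a trained model.

```python
import math

# A "deep" network on a single scalar input, built as literal function
# composition: net(x) = f4(f3(f2(f1(x)))).

def layer(weight, bias):
    """One layer: a squashing function applied to a scaled, shifted input."""
    return lambda x: math.tanh(weight * x + bias)

def compose(*layers):
    def network(x):
        for f in layers:
            x = f(x)
        return x
    return network

net = compose(layer(1.5, 0.1), layer(-2.0, 0.0),
              layer(0.7, -0.3), layer(1.0, 0.2))
print(round(net(0.5), 4))
```

"Deep learning" then amounts to tuning all the weights and biases in such a composition at once, with the gradient for each inner function obtained by the chain rule through every function wrapped around it.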

The main thing that brains and connectionist algorithms share in common is opacity, but it would be a huge mistake to think that the algorithms have reached the complexity of brains. The largest known deep learning networks have something like 10^10 parameters (the numerical weights in those matrices).19 Most have far fewer, but to get a feel for their maximum size, consider that the length of a year is around 10^10 milliseconds. To imagine how much larger and more complex the human brain is than the largest deep learning neural network, imagine stretching each of the milliseconds in that year to be the length of a year themselves. That’s the comparable size of the brain (10^20), if we just compare the number of real neural synapses to the number of numbers in the biggest ever deep learning connectionist algorithm.

New York: Wiley. 16Ironically, while Babbage’s key innovation of introducing Jacquard loom cards were included in most tabulating machines from the time IBM standardized them in 1928, they only began to be widely used to program general-purpose computers just before Rosenblatt developed the Mark I Perceptron. 17Unfortunately, Turing’s boss, NPL Director Sir Charles Galton Darwin (named for his grandfather Charles Darwin and great-uncle Francis Galton), dismissed the paper as a ‘schoolboy essay’, and it was only published posthumously, in 1968, 14 years after Turing was driven to suicide by the British authorities’ persecution of his crime of homosexuality. 18M. Minsky and S. Papert, 1969, Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press. 19Jeremy Hsu, 2015, Biggest Neural Network Ever Pushes AI Deep Learning. IEEE Spectrum, https://spectrum.ieee.org/tech-talk/computing/software/biggest-neural-network-ever-pushes-ai-deep-learning 20Esther M. Sternberg, 2001, The Balance Within: The Science Connecting Health and Emotions. Times Books. 21Antonio Damasio, 2018, The Strange Order of Things: Life, Feeling, and the Making of Cultures. New York: Pantheon. 22Antonio Damasio and Gil Carvalho, 2013, The Nature of Feelings: Evolutionary and Neurobiological Origins.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

Didn’t their fears always end up being overblown? Several years ago, when I started as a tech columnist for the Times, most of what I heard about AI mirrored my own optimistic views. I met with start-up founders and engineers in Silicon Valley who showed me how new advances in fields like deep learning were helping them build all kinds of world-improving tools: algorithms that could increase farmers’ crop yields, software that would help hospitals run more efficiently, self-driving cars that could shuttle us around while we took naps and watched Netflix. This was the euphoric peak of the AI hype cycle, a time when all of the American tech giants—Google, Facebook, Apple, Amazon, Microsoft—were pouring billions of dollars into developing new AI products and shoving machine learning algorithms into as many of their apps as possible.

A 2019 report by Wells Fargo estimated that as many as two hundred thousand finance employees will lose their jobs over the next decade, thanks to tools like these. Medicine is undergoing a machine makeover, as AI learns to do much of the work that used to require trained human specialists. In 2018, a Chinese tech company built a deep learning algorithm that diagnosed brain cancer and other diseases faster and more accurately than a team of fifteen top doctors. The same year, American researchers developed an algorithm capable of identifying malignant tumors on a CT scan with an error rate twenty times lower than a human radiologist.

In fact, several companies are already using AI to generate fashion designs. In 2017, an Amazon research team developed a machine learning algorithm that analyzes images of garments in a particular style and learns to generate new garments in that style. Glitch, an AI fashion company started by two MIT graduates, sells pieces that are entirely designed by deep learning algorithms. Will AI spare all TSA agents, or replace all fashion designers? Of course not. But the fallout from automation probably won’t be as tidy as watching some occupations go extinct while others survive without a scratch. * * * — In short, what I should have told the executives at the fancy dinner was that they were asking the wrong question.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

Learning Deep Architectures for AI,* by Yoshua Bengio (Now, 2009), is a brief introduction to deep learning. The problem of error signal diffusion in backprop is described in “Learning long-term dependencies with gradient descent is difficult,”* by Yoshua Bengio, Patrice Simard, and Paolo Frasconi (IEEE Transactions on Neural Networks, 1994). “How many computers to identify a cat? 16,000,” by John Markoff (New York Times, 2012), reports on the Google Brain project and its results. Convolutional neural networks, the current deep learning champion, are described in “Gradient-based learning applied to document recognition,”* by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (Proceedings of the IEEE, 1998).

Hinton, a psychologist turned computer scientist and great-great-grandson of George Boole, the inventor of the logical calculus used in all digital computers, is the world’s leading connectionist. He has tried longer and harder to understand how the brain works than anyone else. He tells of coming home from work one day in a state of great excitement, exclaiming “I did it! I’ve figured out how the brain works!” His daughter replied, “Oh, Dad, not again!” Hinton’s latest passion is deep learning, which we’ll meet later in this chapter. He was also involved in the development of backpropagation, an even better algorithm than Boltzmann machines for solving the credit-assignment problem that we’ll look at next. Boltzmann machines could solve the credit-assignment problem in principle, but in practice learning was very slow and painful, making this approach impractical for most applications.

A hard core of connectionists soldiered on, but by and large the attention of the machine-learning field moved elsewhere. (We’ll survey those lands in Chapters 6 and 7.) Today, however, connectionism is resurgent. We’re learning deeper networks than ever before, and they’re setting new standards in vision, speech recognition, drug discovery, and other areas. The new field of deep learning is on the front page of the New York Times. Look under the hood, and . . . surprise: it’s the trusty old backprop engine, still humming. What changed? Nothing much, say the critics: just faster computers and bigger data. To which Hinton and others reply: exactly, we were right all along! In truth, connectionists have made genuine progress.

pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

According to Google, RankBrain’s opinion has become the third most important among the more than two hundred factors that it uses to rank pages. Google has also applied deep learning to language translation. The results were so startlingly better that after a few months of testing, the team stopped all work on the old Google Translate system discussed earlier in this chapter and replaced it entirely with the new one based on deep learning. It isn’t yet quite as good as human translators, but it’s close, at least for everyday functional use, though perhaps not for literary purposes. Deep learning is also used in Google Photos. If you have tried Google Photos, you’ve seen how it can recognize objects in your photos.

See “We shape our tools and thereafter our tools shape us,” McLuhan Galaxy, April 1, 2013, https://mcluhangalaxy.wordpress.com/2013/04/01/we-shape-our-tools-and-thereafter-our-tools-shape-us/. 165 “what a typical Deep Learning system is”: Lee Gomes, “Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter,” IEEE Spectrum, February 28, 2015, http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning. 165 “can’t run it faster than real time”: Yann LeCun, Facebook post, December 5, 2016, retrieved March 31, 2017, https://m.facebook.com/story.php?story_fbid=10154017359117143&id=722677142. 165 third most important: Sullivan, “FAQ: All About the Google RankBrain Algorithm.” 165 stopped all work on the old Google Translate system: Gideon Lewis-Kraus, “The Great A.I.

The trick is to figure out in which direction to tweak each knob and by how much without actually fiddling with them. This involves computing a “gradient,” which for each knob indicates how the light changes when the knob is tweaked. Now, imagine a box with 500 million knobs, 1,000 light bulbs, and 10 million images to train it with. That’s what a typical Deep Learning system is. Deep learning uses layers of recognizers. Before you can recognize a dog, you have to be able to recognize shapes. Before you can recognize shapes, you have to be able to recognize edges, so that you can distinguish a shape from its background. These successive stages of recognition each produce a compressed mathematical representation that is passed up to the next layer.
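That knob-and-gradient picture can be sketched in a few lines of code. This is a toy illustration under stated assumptions (a linear "box" with just three knobs and synthetic data), not the backprop implementation behind any real deep learning system:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "box": 3 knobs (weights) mix 3 input signals into one output light.
X = rng.normal(size=(10, 3))             # 10 tiny inputs, 3 "pixels" each
true_knobs = np.array([0.5, -1.0, 2.0])  # the setting we hope to recover
y = X @ true_knobs                       # target brightness for each input

knobs = np.zeros(3)                      # start with every knob at zero

def loss(k):
    # How far the light is from the target, averaged over the inputs.
    return np.mean((X @ k - y) ** 2)

before = loss(knobs)
for _ in range(200):
    # The gradient says, for each knob, which way and by how much the
    # error changes when that knob is tweaked -- without fiddling with it.
    grad = 2 * X.T @ (X @ knobs - y) / len(X)
    knobs -= 0.1 * grad                  # nudge every knob a little downhill
after = loss(knobs)

print(after < before)  # True: the tweaks brought the light toward the target
```

A real system does exactly this, but with hundreds of millions of knobs arranged in layers, and it computes the gradient for all of them efficiently by backpropagation.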

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

The technical name for the algorithmic framework we have been describing is a generative adversarial network (GAN), and the approach we’ve outlined above indeed seems to be highly effective: GANs are an important component of the collection of techniques known as deep learning, which has resulted in qualitative improvements in machine learning for image classification, speech recognition, automatic natural language translation, and many other fundamental problems. (The Turing Award, widely considered the Nobel Prize of computer science, was recently awarded to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun for their pioneering contributions to deep learning.) Fig. 21. Synthetic cat images created by a generative adversarial network (GAN), from https://ajolicoeur.wordpress.com/cats.
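As a sketch of that adversarial game (a generator learning to fool a discriminator, while the discriminator learns to tell real from fake), here is a deliberately tiny GAN written from scratch with NumPy. Everything in it is an illustrative assumption: one-dimensional Gaussian "real" data, an affine generator, a logistic discriminator. Real image-generating GANs put deep networks on both sides of the same game.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

a, b = 1.0, 0.0   # generator: fake = a*z + b, with z standard normal noise
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c), P(x is real)

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(size=n)
    fake, real = a * z + b, real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        g = p - label                  # d(cross-entropy)/d(logit)
        w -= lr * np.mean(g * x)
        c -= lr * np.mean(g)

    # Generator step: adjust (a, b) so the discriminator calls fakes real.
    z = rng.normal(size=n)
    fake = a * z + b
    p = sigmoid(w * fake + c)
    g = (p - 1.0) * w                  # gradient of -log D(fake) wrt fake
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# The generator's offset b should have drifted toward the real mean (4.0).
print(round(b, 2))
```

The two updates pull in opposite directions, which is exactly the tension that makes GAN training famously delicate.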

To see a particularly egregious example, let’s go back to 2015, when the market for machine learning talent was heating up. The techniques of deep learning had recently reemerged from relative obscurity (its previous incarnation was called backpropagation in neural networks, which we discussed in the introduction), delivering impressive results in computer vision and image recognition. But there weren’t yet very many experts who were good at training these algorithms—which was still more of a black art, or perhaps an artisanal craft, than a science. The result was that deep learning experts were commanding salaries and signing bonuses once reserved for Wall Street.

But money alone wasn’t enough to recruit talent—top researchers want to work where other top researchers are—so it was important for AI labs that wanted to recruit premium talent to be viewed as places that were already on the cutting edge. In the United States, this included research labs at companies such as Google and Facebook. One way to do this was to beat the big players in a high-profile competition. The ImageNet competition was perfect—focused on exactly the kind of vision task for which deep learning was making headlines. The contest required each team’s computer program to classify the objects in images into a thousand different and highly specific categories, including “frilled lizard,” “banded gecko,” “oscilloscope,” and “reflex camera.” Each team could train their algorithm on a set of 1.5 million images that the competition organizers made available to all participants.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

Determining the meaning and significance of these has always been the province of human beings—and a key aspect of human cognition. But now a wide variety of tools are capable of it, too. Words are increasingly “understood”—counted, classified, interpreted, predicted, etc.—through technologies such as machine learning, natural language processing, neural networks, deep learning, and so forth. Some of the same technologies are being used to analyze and identify images. Humans are still better able to make subjective judgments on unstructured data, such as interpreting the meaning of a poem, or distinguishing between images of good neighborhoods and bad ones. But computers are making headway even on these fronts.

Machine learning and neural network analysis is the most promising technology for this application. One branch of machine learning, for example, is particularly well suited to analyzing data in multiple dimensions. Images and video are an example of this type of data—any individual pixel has x and y coordinates, color, intensity, and in videos, time. “Deep learning” neural network approaches have been developed to deal with data in multiple dimensions; the “deep” refers not to “profound,” but rather to a hierarchy of dimensions in the data. It’s this technology that is letting Google engineers identify photos of cats on the Internet. Although it’s difficult to imagine more important tasks than that, perhaps in the near future it will let smart machines watch video taken by drones and security cameras and determine whether something bad is happening.

There are already, as we have discussed, systems to understand text and speech, systems to engage in intelligent Q&A with humans, and systems to recognize a variety of images. It’s just that they’re not yet embedded in the brain of a robot. Jim Lawton, the head of products at Rethink Robotics, commented to us in an interview: “An important area of experimentation today is around the intersection of collaborative robots, big data, and deep learning. The goal is to combine automation of physical tasks and cognitive tasks. For example, a robot could start combining all the information about how much torque is applied in a screw. Robots are, after all, a big bucket of sensors. A truly intelligent robot could begin to see what works in terms of how much torque in a screw leads to field failures.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

AlexNet had a success rate as high as 87 per cent. Deep learning worked. The triumph of deep learning sparked an AI feeding frenzy. Scientists rushed to build artificial intelligence systems, applying deep neural networks and their derivatives to a vast array of problems: from spotting manufacturing defects to translating between languages; from voice recognition to detecting credit card fraud; from discovering new medicines to recommending the next video we should watch. Investors opened their pocketbooks eagerly to back these inventors. In short order, deep learning was everywhere. As a result, neural networks demanded increasing amounts of data and processing power.

By 2010, Moore’s Law had resulted in enough power to facilitate a new kind of machine learning, ‘deep learning’, which involved creating layers of artificial neurons modelled on the cells that underpin human brains. These ‘neural networks’ had long been heralded as the next big thing in AI. Yet they had been stymied by a lack of computational power. Not any more, however. In 2012, a group of leading AI researchers – Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton – developed a ‘deep convolutional neural network’ which applied deep learning to the kinds of image-sorting tasks that AIs had long struggled with. It was rooted in extraordinary computing clout.

This giant crane-block battery depends on a particular mix of four, very well-understood technologies – the cranes, the building aggregate, the generator which converts the dropping of blocks to energy, and the shipping systems that let us move these things around. And then there’s a fifth, more unexpected technology: an automated ‘machine vision system’ using deep learning. Each crane has a set of cameras whose input is automatically processed by a computer. This computer controls the cranes and the lifting and placing of the blocks. It obviates any human operators – and it is the absence of a human operator that allows Energy Vault to hit a competitive price. Of course, technologies have always combined.

pages: 588 words: 131,025

The Patient Will See You Now: The Future of Medicine Is in Your Hands
by Eric Topol
Published 6 Jan 2015

Underlying its predictive capabilities was quite a portfolio of machine learning systems, including Bayesian nets, Markov chains, support vector machine algorithms, and genetic algorithms.33 I won’t go into any more depth; my brain is not smart enough to understand it all, and fortunately it’s not particularly relevant to where we are going here. Another subtype of AI and machine learning,2,20,34–48 known as deep learning, has deep importance to medicine. Deep learning is behind Siri’s ability to decode speech as well as Google Brain experiments to recognize images. Researchers at Google X extracted ten million still images from YouTube videos and fed them to the network of one thousand computers to see what the Brain, with its one million simulated neurons and one billion simulated synapses, would come up with on its own.35,36 The answer—cats.

M. van Rijmenam, “How Machine Learning Could Result in Great Applications for Your Business,” Big Data-Startups Blog, January 10, 2014, http://www.bigdata-startups.com/machine-learning-result-great-applications-business/. 35. N. Jones, “The Learning Machines,” Nature 505 (2014): 146–148. 36. J. Markoff, “Scientists See Promise in Deep-Learning Programs,” New York Times, November 24, 2012, http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html. 37. “Don’t Be Evil, Genius,” The Economist, February 1, 2014, http://www.economist.com/node/21595462/print. 38. J. Pearson, “Superintelligent AI Could Wipe Out Humanity, If We’re Not Ready for It,” Motherboard, April 23, 2014, http://motherboard.vice.com/read/super-intelligent-ai-could-wipe-out-humanity-if-were-not-ready-for-it. 39.

McAfee, “The Dawn of the Age of Artificial Intelligence,” The Atlantic, February 2014, http://www.theatlantic.com/business/print/2014/02/the-dawn-of-the-age-of-artificial-intelligence/283730/. 40. S. Schneider, “The Philosophy of ‘Her,’” New York Times, March 2, 2014, http://opinionator.blogs.nytimes.com/2014/03/02/the-philosophy-of-her/?ref=opinion. 41. A. Vance, “The Race to Buy the Human Brains Behind Deep Learning Machines,” Bloomberg Businessweek, January 27, 2014, http://www.businessweek.com/printer/articles/180155-the-race-to-buy-the-human-brains-behind-deep-learning-machines. 42. G. Satell, “Why the Future of Technology Is All Too Human,” Forbes, February 23, 2014, http://www.forbes.com/sites/gregsatell/2014/02/23/why-the-future-of-technology-is-all-too-human/. 43. D. Auerbach, “A.I.

Virtual Competition
by Ariel Ezrachi and Maurice E. Stucke
Published 30 Nov 2016

Antonio Regalado, “Is Google Cornering the Market on Deep Learning?” MIT Technology Review, January 29, 2014, http://www.technologyreview.com/news/524026/is-google-cornering-the-market-on-deep-learning/; Nicola Jones, “Computer Science: The Learning Machines,” Nature, January 8, 2014, http://www.nature.com/news/computer-science-the-learning-machines-1.14481. European Data Protection Supervisor, Towards a New Digital Ethics: Data, Dignity and Technology, Opinion 4/2015 (September 11, 2015), 9. Robert D. Hof, “Deep Learning,” MIT Technology Review, April 23, 2013, http://www.technologyreview.com/featuredstory/513696/deep-learning/. Tereza Pultarova, “Jaguar Land Rover to Lead Driverless Car Research,” E&T (October 9, 2015), http://eandt.theiet.org/news/2015/oct/jaguar-land-rover-driverless-cars.cfm; David Talbot, “CES 2015: Nvidia Demos a Car Computer Trained with ‘Deep Learning,’” MIT Technology Review, January 6, 2015, http://www.technologyreview.com/news/533936/ces-2015-nvidia-demos-a-car-computer-trained-with-deep-learning/; David Levitin, 2015.

Recent years have witnessed groundbreaking research and progress in the design and development of smart, self-learning algorithms to assist in pricing decisions, planning, trade, and logistics. The field has attracted significant investment in deep learning by leading market players.38 In 2011, International Business Machines Corp.’s Jeopardy!-winning Watson computer showcased the power of its deep-learning techniques, which enabled the computer to optimize its strategy following trials and feedback.39 Since then, IBM has invested in widening the capacity and functionality of the technology, with the aim of making it “the equivalent of a computing operating system for an emerging class of data-fueled artificial intelligence applications.”40 Recently, the launch of the Deep Q network by Google showcased enhanced self-learning capacity.

David Levitin, “The Sum of Human Knowledge,” Wall Street Journal, September 18, 2015, http://www.wsj.com/articles/the-sum-of-human-knowledge-1442610803. Lohr, “IBM’s AI System Watson to Get Second Home.” European Data Protection Supervisor, Towards a New Digital Ethics.

pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2020

The librarians search mostly in vain their entire lives for books that contain even one meaningful sentence, such is the size of the pool they sift through. But what if there was a computer that could search the library at lightning speed? Deep learning can be that speedy librarian. Certainly in chemistry it is already performing that function. The ‘possibility space’ for different potential drugs is huge, with around 10^60 different potential drugs that could be made. That’s more than the number of atoms in the solar system. But deep learning has been used to search this space, this ‘chemistry of Babel’, and discover a powerful new kind of antibiotic, a class of drug that we desperately need more of.11 The researchers, with a healthy sense of irony, named the AI-discovered drug ‘halicin’, after Hal from 2001: A Space Odyssey.

What, then, if the almost supernatural processing power of quantum computing was paired with the form of AI known as machine learning, which is at the basis of most of the examples of AI we’ve discussed so far. Machine learning is computationally expensive: it’s why OpenAI will spend $1 billion mostly on data processing. If we could train algorithms using quantum computers, we could do it faster, more cheaply and more efficiently than we do at the moment. As yet, making quantum neural networks for deep learning is only an emerging field.9 There are formidable technical barriers. But we can get in at the start. Europe, the US and China are each putting billions into developing quantum computing. Europe has the Quantum Technologies Flagship project, and China, which wants to pull clear of the US in the race for quantum supremacy, has just opened the National Laboratory for Quantum Information Sciences.

Rather than program all the possible outcomes into the software – which is what software engineers used to try to do, with inevitable shortcomings – in machine learning with a neural network, the computer learns on its own. There has been spectacular success with a turbo form of machine learning called deep learning; it’s behind the ability of DeepMind’s AlphaGo and AlphaZero, and it’s the basis of a system developed by OpenAI called Generative Pre-trained Transformer, or GPT. A publicly available version called GPT-2 can generate original text, perhaps a sports report, a movie review, or maybe even poetry, when given a prompt.

pages: 371 words: 108,317

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future
by Kevin Kelly
Published 6 Jun 2016

It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson; DeepMind; Google’s search engine; and Facebook’s algorithms.
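The stack-of-layers idea is mechanically simple: each layer applies one modest transformation and hands the result up. In the sketch below, the layer sizes, random weights, and ReLU nonlinearity are all illustrative stand-ins (trained networks learn their weights rather than drawing them at random):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity; without one, a stack of layers would
    # collapse into a single linear transformation.
    return np.maximum(0.0, x)

# Four stacked layers of 8 nodes each (vision networks are far larger,
# and their weights come from training rather than a random draw).
layers = [rng.normal(scale=0.5, size=(8, 8)) for _ in range(4)]

x = rng.normal(size=8)   # the input (say, a tiny patch of an image)
for W in layers:         # each layer's output feeds the next layer up
    x = relu(W @ x)      # one simple transformation per layer

print(x.shape)  # (8,)
```

Stacking more such layers, and setting their weights by learning rather than chance, is what lets the composition recognize edges, then shapes, then faces.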

thousand games of chess: Personal correspondence with Daylen Yang (author of the Stockfish chess app), Stefan Meyer-Kahlen (developed the multiple award-winning computer chess program Shredder), and Danny Kopec (American chess International Master and cocreator of one of the standard computer chess testing systems), September 2014. “akin to building a rocket ship”: Caleb Garling, “Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines,” Wired, May 5, 2015. In 2006, Geoff Hinton: Kate Allen, “How a Toronto Professor’s Research Revolutionized Artificial Intelligence,” Toronto Star, April 17, 2015. he dubbed “deep learning”: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep Learning,” Nature 521, no. 7553 (2015): 436–44. the network effect: Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy (Boston: Harvard Business Review Press, 1998).

Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny–looking image, you are teaching the AI what an Easter Bunny looks like. Each of the 3 billion queries that Google conducts each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousandfold more data and a hundred times more computing resources, Google will have an unrivaled AI. In a quarterly earnings conference call in the fall of 2015, Google CEO Sundar Pichai stated that AI was going to be “a core transformative way by which we are rethinking everything we are doing. . . .

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

As a consultant Hinton had introduced the deep learning neural net approach early on at Microsoft, and he was vindicated in 2012, when Microsoft’s head of research Richard Rashid gave a lecture in a vast auditorium in Tianjin, China. As the research executive spoke in English he paused after each sentence, which was then immediately translated by software into spoken Chinese in a simulation of his own voice. At the end of the talk, there was silence and then stunned applause from the audience. The demonstration hadn’t been perfect, but by adding deep learning algorithm techniques the company had adopted from Hinton’s research, it had been able to reduce recognition errors by more than 30 percent.

Thinking of themselves as the “three musketeers,” Hinton, LeCun, and Bengio set out to change that. Beginning in 2004 they embarked on a “conspiracy”—in LeCun’s words—to boost the popularity of the networks, complete with a rebranding campaign offering more alluring concepts of the technology such as “deep learning” and “deep belief nets.” LeCun had by this time moved to New York University, partly for closer ties with neuroscientists and with researchers applying machine-learning algorithms to the problem of vision. Hinton approached a Canadian foundation, the Canadian Institute for Advanced Research, for support to organize a research effort in the field and to hold several workshops each year.

It had a broad portfolio of research projects, stretching from Thrun’s work in autonomous cars to efforts to scale up neural networks, loosely identified as “brain” projects, evoking a new wave of AI. The Human Brain Project was initially led by Andrew Ng, who had been a colleague with Thrun at the resurrected Stanford Artificial Intelligence Laboratory. Ng was an expert in machine learning and adept in some of the deep learning neural network techniques that Hinton and LeCun had pioneered. In 2011, he began spending time at Google building a machine vision system and the following year it had matured to the point where Google researchers presented a paper on how the network performed in an unsupervised learning experiment using YouTube videos.

Text Analytics With Python: A Practical Real-World Approach to Gaining Actionable Insights From Your Data
by Dipanjan Sarkar
Published 1 Dec 2016

Dipanjan’s interests include learning about new technology, financial markets, disruptive startups, data science, and more recently, artificial intelligence and deep learning. In his spare time he loves reading, gaming, and watching popular sitcoms and football. About the Technical Reviewer: Shanky Sharma. Currently leading the AI team at Nextremer India, Shanky Sharma’s work entails implementing various AI and machine learning–related projects and working on deep learning for speech recognition in Indic languages. He hopes to grow and scale new horizons in AI and machine learning technologies. Statistics intrigue him and he loves playing with numbers, designing algorithms, and giving solutions to people.

Besides these, there are several other frameworks and libraries that are not dedicated towards text analytics but that are useful when you want to use machine learning techniques on textual data. These include the scikit-learn, numpy, and scipy stack. Besides these, deep learning and tensor-based libraries like theano, tensorflow, and keras also come in handy if you want to build advanced deep learning models based on deep neural nets, convnets, and LSTM-based models. You can install most of these libraries using the pip install <library> command from the command prompt or terminal. We will talk about any caveats if present in the upcoming chapters when we use these libraries.
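To make the first step of "machine learning on textual data" concrete, here is a minimal pure-Python bag-of-words featurizer. It is a toy stand-in for what scikit-learn provides ready-made (its CountVectorizer), and the tiny corpus is illustrative:

```python
from collections import Counter

def bag_of_words(docs):
    # Build a sorted vocabulary, then count each word's occurrences
    # per document, yielding one numeric vector per document.
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["Deep learning works", "deep nets learn"])
print(vocab)    # ['deep', 'learn', 'learning', 'nets', 'works']
print(vectors)  # [[1, 0, 1, 0, 1], [1, 1, 0, 1, 0]]
```

Once text is in this numeric form, the scikit-learn/numpy/scipy stack mentioned above can take over for modeling.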

Besides being a multipurpose language, the wide variety of frameworks, libraries, and platforms that have been developed by using Python and to be used for Python form a complete robust ecosystem around Python. These libraries make life easier by giving us a wide variety of capabilities and functionality to perform various tasks with minimal code. Some examples would be libraries for handling databases, text data, machine learning, signal processing, image processing, deep learning, artificial intelligence—and the list goes on. Open source: As open source, Python is actively developed and updated constantly with improvements, optimizations, and new features. Now the Python Software Foundation (PSF) owns all Python-related intellectual property (IP) and administers all license-related issues.

pages: 287 words: 95,152

The Dawn of Eurasia: On the Trail of the New World Order
by Bruno Macaes
Published 25 Jan 2018

The comparison forces itself upon the visitor to Baidu, who is greeted in the garden and then the lobby by massive slides connecting to the upper levels, the symbol of fast-paced internet companies worldwide. The Institute of Deep Learning is one of the ways that Baidu, the Chinese search giant, is trying to keep ahead of the innovation pack, notably by making sure that the latest technological advances can quickly be used across its different businesses, including the core search algorithm. Deep learning is an old idea in artificial intelligence and many think it is our best hope in building software that will get us very near to – and in some cases surpass – human abilities.

In this, machine intelligence comes to resemble the way a large array of neurons works in the human brain. Speech and image recognition are among the most immediate applications of deep learning. Yuanqing told me how Baidu had been able to develop practically infallible speech recognition applications, even if the user chooses to whisper to his device rather than speak. They were now concentrating their efforts on how to apply deep learning to automated driving. Applying it to prediction systems still lies considerably in the future, but the future is getting closer each day. ‘How would you describe what is different about the way the Chinese approach technology?’

‘It learns to recognize speech and the learning process works equally for every language. Feed it the data and it will learn Latin or Sanskrit. Some algorithms have even invented their own languages.’ I had come to the Baidu Technology Park in the Haidian District of Beijing to meet Yuanqing Lin, Director of the Baidu Institute of Deep Learning. The complex consists of five individual buildings connected by gallery bridges and overlooking a central botanical garden where small streams of water and recently planted trees gradually reveal a space capsule in the centre. It is easy to get lost here. Haidian hosts a number of technology parks, each with tens or hundreds of both established companies and start-ups, a scale perhaps still half of Silicon Valley but fast approaching it.

pages: 326 words: 88,968

The Science and Technology of Growing Young: An Insider's Guide to the Breakthroughs That Will Dramatically Extend Our Lifespan . . . And What You Can Do Right Now
by Sergey Young
Published 23 Aug 2021

Now this algorithm is hard at work for the British National Health System, proactively analyzing data livestreamed from outpatients. When a potential flare-up is indicated by KenSci’s AI, doctors are alerted so that they can intervene before the situation becomes an emergency. 2. AI Case Study #2: Deep Learning and Computer Vision for Diagnosis Diabetic retinopathy (DR) is a complication of diabetes. Over time, excess sugar damages tiny blood vessels connected to the retina. The body grows new blood vessels, but they rupture easily. Untreated, this eventually leads to total blindness. If caught early, it is highly treatable, but it is fairly asymptomatic at first, and ophthalmologists who can identify the disease are rare.

With thirteen million people in his home country of India suffering from DR, Google scientist Varun Gulshan knew there must be a better way to diagnose and treat the disease using AI. His team first obtained one million retinal scans that had already been analyzed and graded by ophthalmologists. They then used the AI techniques of deep learning and computer vision to teach their algorithm to recognize DR, just like a qualified ophthalmologist would. Today, the shortage of doctors to monitor the retinal condition of diabetic patients is less of a problem, thanks to AI. 3. AI Case Study #3: Natural Language Processing and Taking AI Health Care to the Next Level Using AI to analyze raw data and even images is one thing.

Medical records can be monitored to identify patterns associated with negative health events like hospital-acquired infections, heart attacks, and so on. On a pretty narrow basis, NLP can already perform the detailed analysis I described above to enhance physician decision making. In time, AI will be able to combine computer vision, deep learning, natural language processing, and other techniques to provide extremely reliable diagnostic outcomes. It will take all of the guesswork and inconsistency out of medical care and make our old one-size-fits-all approach seem barbaric in retrospect. We have a long way to go, but within the Near Horizon of Longevity, precision medicine will become, without a great deal of hyperbole, perfect medicine.

pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms
by Hannah Fry
Published 17 Sep 2018

In that perfect storm of misplaced trust and power and influence, the consequences have the potential to fundamentally alter our society. * This is paraphrased from a comment made by the computer scientist and machine-learning pioneer Andrew Ng in a talk he gave in 2015. See Tech Events, ‘GPU Technology Conference 2015 day 3: What’s Next in Deep Learning’, YouTube, 20 Nov. 2015, https://www.youtube.com/watch?v=qP9TOX8T-kI. † Simulating the brain of a worm is precisely the goal of the international science project OpenWorm. They’re hoping to artificially reproduce the network of 302 neurons found within the brain of the C. elegans worm. To put that into perspective, we humans have around 100,000,000,000 neurons.

This is another ‘machine-learning algorithm’, like the random forests we met in the ‘Justice’ chapter. It goes beyond what the operators program it to do and learns for itself from the images it’s given. It’s this ability to learn that endows the algorithm with ‘artificial intelligence’. And the many layers of knobs and dials also give the network a deep structure, hence the term ‘deep learning’. Neural networks have been around since the middle of the twentieth century, but until quite recently we’ve lacked the widespread access to really powerful computers necessary to get the best out of them. The world was finally forced to sit up and take them seriously in 2012 when computer scientist Geoffrey Hinton and two of his students entered a new kind of neural network into an image recognition competition.12 The challenge was to recognize – among other things – dogs.

Their artificially intelligent algorithm blew the best of its competitors out of the water and kicked off a massive renaissance in deep learning. An algorithm that works without our knowing how it makes its decisions might sound like witchcraft, but it might not be all that dissimilar from how we learn ourselves. Consider this comparison. One team recently trained an algorithm to distinguish between photos of wolves and pet huskies. They then showed how, thanks to the way it had tuned its own dials, the algorithm wasn’t using anything to do with the dogs as clues at all.
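Fry’s ‘layers of knobs and dials’ are just numeric weights adjusted against examples. As a loose sketch of the idea (a tiny two-layer network, nothing like the competition-winning model), plain NumPy is enough to learn XOR, a pattern no single layer can capture, purely by nudging those weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: a pattern a single layer cannot capture, but two layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The "knobs and dials": two layers of randomly initialized weights.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(8000):
    # Forward pass: the input flows through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the error a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(out.round().ravel())  # the outputs should approach 0, 1, 1, 0
```

Stack many more such layers and you have, in miniature, the ‘deep structure’ the passage describes.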

pages: 475 words: 134,707

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt
by Sinan Aral
Published 14 Sep 2020

This characterization may seem dramatic, but there is no doubt that technological innovation in the fabrication of falsity is advancing at a breakneck pace. The development of “deepfakes” is generating exceedingly convincing synthetic audio and video that is even more likely to fool us than textual fake news. Deepfake technology uses deep learning, a form of machine learning based on multilayered neural networks, to create hyperrealistic fake video and audio. If seeing is believing, then the next generation of falsity threatens to convince us more than any fake media we have seen so far. In 2018 movie director (and expert impersonator) Jordan Peele teamed up with BuzzFeed to create a deepfake video of Barack Obama calling Donald Trump a “complete and total dipshit.”

Peele added a tongue-in-cheek nod to the obvious falsity of his deepfake when he made Obama say, “Now, I would never say these things…at least not in a public address.” But what happens when the videos are not made to be obviously fake, but instead made to convincingly deceive? Deepfake technology is based on a specific type of deep learning called generative adversarial networks, or GANs, which was first developed by Ian Goodfellow while he was a graduate student at the University of Montreal. One night while drinking beer with fellow graduate students at a local watering hole, Goodfellow was confronted with a machine-learning problem that had confounded his friends: training a computer to create photos by itself.

Conventional methods were failing miserably. But that night while enjoying a few pints, Goodfellow had an epiphany. He wondered if they could solve the problem by pitting two neural networks against each other. It was the origin of GANs—a technology that Yann LeCun, former head of Facebook AI Research, dubbed “the coolest idea in deep learning in the last 20 years.” It’s also what manipulated Barack Obama to call Donald Trump a “dipshit.” GANs pit two networks against each other: a “generator,” whose job is to generate synthetic media, and a “discriminator,” whose job is to determine if the content is real or fake. The generator learns from the discriminator’s decisions and optimizes its media to create more and more convincing video and audio.
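The generator-versus-discriminator game Aral describes can be shown in miniature. Below is a deliberately toy sketch, not Goodfellow’s architecture: the ‘generator’ is a single number mu that shifts noise, the ‘discriminator’ is a one-variable logistic classifier, and all of the numbers are illustrative. The real data come from a normal distribution centred at 3; the generator starts at 0 and gets dragged toward the real distribution purely by the discriminator’s verdicts:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mu = 0.0          # generator parameter: fake = mu + noise
w, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(5000):
    real = rng.normal(3.0, 1.0)
    fake = mu + rng.normal(0.0, 1.0)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: adjust mu so a fresh fake fools the discriminator.
    fake = mu + rng.normal(0.0, 1.0)
    mu += lr * (1 - sigmoid(w * fake + b)) * w

print(round(mu, 1))  # mu should end up near 3, the real mean
```

Note that the generator never sees the real data directly; it learns only from the discriminator’s decisions, which is exactly the feedback loop that lets deepfakes grow more convincing over time.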

Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

We have private companies like SpaceX but precious little crewed space flight; we have the Standard Model of physics but haven't gone beyond it; we have libraries of humanities research that no one reads; we have a decades-long war on cancer, but still have cancer. Our time comes with a litany of big ideas: blockchain, mobile social networks, supermaterials like graphene, deep learning neural networks, quantum biology, massive multiplayer online games, molecular machines, behavioural economics, algorithmic trading, gravitational wave and exoplanet astronomy, parametric architecture, e-sports, the ending of taboos around gender and sexuality, to name a few. But execution and purchase are more problematic than in the past.

5 There was a sense of ‘melancholy’, ‘existential angst’, at how it was possible for outsiders to make a jump that, in AlQuraishi's words, worked at twice the pace of regular advance, and possibly more. It was ‘an anomalous leap’ in one of the core scientific problems of our time. What did just happen? The artificial intelligence company DeepMind, part of the Alphabet group, had been quietly working on software called AlphaFold. DeepMind uses deep learning neural networks, a newly potent technique of machine learning (ML), to predict how proteins fold. These networks aim to mimic the functioning of the human brain, using layers of mathematical functions that can, by changing their weightings, appear to learn. This makes predictions which are scored against what is already known.

This is true of everything since – tools and ideas work in tandem.19 In the twenty-first century our capacity to develop big ideas will rest on the development of our tools more than any other factor. Hence the significance of AI. It is the calculus, the telescope, the compass of our time. Demis Hassabis himself makes the link explicit, calling AI a sort of general-purpose Hubble space telescope for science.20 Big ideas like AlphaFold and AlphaGo, instances of the big idea of deep learning neural networks, are steadily making a difference at the coalface. To see how AI reshapes ideas, consider the volume of data produced by contemporary experiments. At CERN, the Large Hadron Collider produces 25 gigabytes of data every second.21 NASA missions churn out more than 125 gigabytes of data every second.22 Climate scientists, particle physicists, population ecologists, derivatives traders and economic forecasters all generate and must process vast amounts of data.

pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World
by Peter H. Diamandis and Steven Kotler
Published 3 Feb 2015

Now imagine that this same AI also has contextual understanding—meaning the system recognizes that your conversation with your friend is heading in the direction of family life—so the AI reminds you of the names of each of your friend’s family members, as well as any upcoming birthdays they might have. Behind many of the AI successes mentioned in this section is an algorithm called Deep Learning. Developed by University of Toronto’s Geoffrey Hinton for image recognition, Deep Learning has become the dominant approach in the field. And it should come as no surprise that in spring of 2013, Hinton was recruited, like Kurzweil, to join Google41—a development that will most likely lead to even faster progress. More recently, Google and NASA Ames Research Center—one of NASA’s field centers—jointly acquired a 512 qubit (quantum bit) computer manufactured by D-Wave Systems to study machine learning.

v=9pmPa_KxsAM. 40 Joann Muller, “No Hands, No Feet: My Unnerving Ride in Google’s Driverless Car,” Forbes, March 21, 2013, http://www.forbes.com/sites/joannmuller/2013/03/21/no-hands-no-feet-my-unnerving-ride-in-googles-driverless-car/. 41 Robert Hof, “10 Breakthrough Technologies 2013: Deep Learning,” MIT Technology Review, April 23, 2013, http://www.technologyreview.com/featuredstory/513696/deep-learning/. 42 Steven Levy, “Google’s Larry Page on Why Moon Shots Matter,” Wired, January 17, 2013, http://www.wired.com/2013/01/ff-qa-larry-page/all/. 43 Larry Page, “Beyond Today—Larry Page—Zeitgeist 2012.” 44 Larry Page, “Google+: Calico Announcement,” Google+, September 2013, https://plus.google.com/+LarryPage/posts/Lh8SKC6sED1. 45 Harry McCracken and Lev Grossman, “Google vs.

Instead, the point is that AI has been in a deceptive phase for the past fifty years, ever since 1956, when a bunch of top brains came together for the first time at the Dartmouth Summer Research Project44 and made a “spectacularly wrong prediction” about their ability to crack AI over a single hot New England summer. But today, couple the successes of Deep Learning and IBM’s Watson to the near-term predictions of technology oracles like Ray Kurzweil, and we find a field reaching the knee of the exponential growth curve—that is, a field ready to run wild in disruption. So what does this mean to you, the exponential entrepreneur? This is a multibillion-dollar question.

Artificial Whiteness
by Yarden Katz

Even from a traditional cognitivist perspective, it is possible to critique these systems for having an inductive bias that diverges wildly from people’s behavior in the same contexts. Indeed, cognitive scientists have challenged the claims made about deep learning–based systems. One study evaluated DeepMind’s systems and offered several important objections.28 For one, the Atari-playing system received the equivalent of roughly thirty-eight days’ worth of play time. This extensive training allowed the system to obtain high scores, especially in games that do not require longer-term planning. However, a person who gets only two hours of play time can beat the deep learning system in games that do require longer-term planning. More important, such systems do not acquire the same knowledge about games that people do.

People, by contrast, can flexibly adopt different goals and styles of play: if asked to play with a different goal, such as losing as quickly as possible, or reaching the next level in the game but just barely, many people have little difficulty doing so. The AlphaGo system suffers from similar limitations. It is highly tuned to the configuration of the Go game on which it was trained. If the board size were to change, for example, there would be little reason to expect AlphaGo to work without retraining. AlphaGo also reveals that these deep learning systems are not as radically empiricist as advertised. The rules of Go are built into AlphaGo, a fact that is typically glossed over. This is hard-coded, symbolic knowledge, not the blank slate that was trumpeted. Nonetheless, the idea of a radically empiricist and general system (which in actuality is confined to narrow domains) is taken to mean DeepMind’s approach is ready for grand quests.

Political and social contexts, which are generally of no interest to AI practitioners, shape how people see their world. The historical power dynamics among people can be read in photographs, although AI systems are blind to such dynamics. The blind spots can be exposed by probing vision systems in a different way from that intended by their developers. To illustrate this, I have used Google’s deep learning–based image captioning system called “Show and Tell”—representative of the systems that have been claimed to outperform people in the visual arena—to analyze a series of images.34 Show and Tell was trained on thousands of photographs and can produce a label for an image it has not processed before.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

This was also the decade when, for the first time, statistical learning techniques were used to recognize faces in images. All of the above, however, were based on traditional computer programming, and while they delivered impressive results, they failed to offer the accuracy and scale today’s computer vision can offer, due to the advancement of Deep Learning artificial intelligence techniques, which have completely surpassed and replaced all prior methods. This intelligence did not learn to see by following a programmer’s list of instructions, but rather through the very act of seeing itself. With AI helping computers see, they can now do it much better than we do, specifically when it comes to individual tasks.

In the 1980s, efforts to revive AI, mostly led by Japan, channelled investment into research, once more leading to the development of very little real intelligence (as compared to the hype and excitement surrounding it) until it was all halted again in 1987 – again, due to an economic crisis. This is known as the second AI winter. Sporadic attempts at AI followed the economic recovery, but it wasn’t until the turn of the millennium, when we stumbled upon the biggest breakthrough in the history of AI, that we started to make real progress. This breakthrough has become known as Deep Learning. My first eye-opening exposure to the topic was through a white paper that was published by Google in 2009. The paper discussed how Google deployed a bit of its abundant computer power to run an experiment in which the machine was asked to ‘watch’ YouTube videos frame by frame and try to observe recurring patterns.

Once that pattern was labelled as a cat, the machine could easily find every single one of those felines across the hundreds of millions of videos on YouTube. It wasn’t too long afterwards that the machine could find letters, words, humans, nudity, cars and most of the other recurring entities that exist online. These neural networks, as we call them, built through Deep Learning, truly were the beginning of AI as we know it today. Everything before that can be considered almost negligible, though as I will show you in the next chapter, it was actually typical of the type of build-up needed to finally find the breakthrough. Since then, funding has flooded into the field of AI.
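What Gawdat describes is unsupervised learning: the machine was never told what a cat is, it simply noticed a recurring pattern. As a drastically simplified sketch of the same idea (clustering points rather than video frames, with made-up numbers), a few lines of NumPy can discover recurring groups in unlabeled data without being told what any group means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data containing two recurring "patterns" (clusters).
data = np.concatenate([rng.normal(0.0, 0.5, (100, 2)),
                       rng.normal(5.0, 0.5, (100, 2))])

# Bare-bones k-means: nobody tells it what the clusters are.
centres = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(20):
    # Assign each point to its nearest centre...
    labels = np.argmin(((data[:, None] - centres) ** 2).sum(axis=2), axis=1)
    # ...then move each centre to the mean of its assigned points.
    centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(centres.round(1))  # two centres, one near (0, 0), one near (5, 5)
```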

pages: 284 words: 84,169

Talk on the Wild Side
by Lane Greene
Published 15 Dec 2018

So the language model, seeing both “a nice man” and “an ice man” as possibilities, will rank the more common phrase, “a nice man”, as more plausible. * These big, statistics-driven approaches made language technologies a lot better, but only gradually. The next big leap came with a new kind of machine learning, called “deep learning”. Deep learning relies on digital neural networks, meant to mimic the human brain at a simple level. Virtual “neurons” are connected in several layers. As the network is fed training data, the connections between the many neurons in the various layers are strengthened or weakened, in a way scientists believe is analogous to how humans learn.
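Greene’s ‘a nice man’ versus ‘an ice man’ ranking is easy to reproduce. A minimal sketch of such a statistical language model, using a toy corpus in place of the web-scale text a real system would be trained on:

```python
from collections import Counter

# Toy corpus standing in for the vast text a real system learns from.
corpus = ("he was a nice man . she met a nice man . it was a nice day . "
          "an ice sculpture stood in the hall . the ice man cometh .").split()

# Bigram counts: how often each word follows another.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def score(phrase):
    """Product of conditional probabilities P(word | previous word)."""
    words = phrase.split()
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]
    return p

print(score("a nice man"), score("an ice man"))  # the more common phrase wins
```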

That same year, Microsoft announced a speech-recognition system that made as few errors as a human transcriber. The system was powered by six neural networks, each of which tackled some parts of the problem better than others. None of these systems are perfect at the time of writing, and they almost certainly won’t be any time soon. “Deep learning” brought a sudden jump in quality in many language technologies, but it still cannot flexibly handle language like humans can. Translation and speech-recognition systems perform much better when their tasks are limited to a single domain, like medicine or law. This allows the software to make much better guesses about the appropriate output, by focusing on vocabulary and turns of phrase that are common to those domains.

Why are even the world’s best computers, programmed by the world’s best artificial-intelligence scientists, still struggling to do what children seem to do by magic? The rules-based approaches in artificial intelligence didn’t “scale up”, as the geeks say, and now it’s the turn of neural networks and deep learning to make the next round of progress. What humans may do when they learn language is combine these two approaches in ways that still elude the artificial-intelligence engineers. The 1980s saw a debate between those who believe the mind manipulates abstract symbols computationally – a bit like rules-based, “good old-fashioned” AI – and the early pioneers of digital neural networks, who thought that learning language was merely a matter of strengthening some neural connections here, and weakening some there, as the mind was presented with new data, a bit like today’s digital neural networks.

pages: 569 words: 156,139

Amazon Unbound: Jeff Bezos and the Invention of a Global Empire
by Brad Stone
Published 10 May 2021

Persuaded, Prasad joined to work on the problems of far-field speech recognition, but he ended up as an advocate for the deep learning model. Evi’s knowledge graphs were too regimented to be Alexa’s foundational response model; if a user says, “Play music by Sting,” such a system may think he is trying to say “bye” to the artist and get confused, Prasad later explained. By using the statistical training methods of deep learning, the system could quickly ascertain that when the sentence is uttered, the intent is almost certainly to blast “Every Breath You Take.” But Evi’s Tunstall-Pedoe argued that knowledge graphs were the more practical solution and mistrusted the deep learning approach. He felt it was error-prone and would require an endless diet of training data to properly mold Alexa’s learning models.

Tunstall-Pedoe said he had to fight with colleagues in the U.S. over the unusual idea of having Alexa respond to such social cues, recalling that “People were uncomfortable with the idea of programming a machine to respond to ‘hello.’ ” Integrating Evi’s technology helped Alexa respond to factual queries, such as requests to name the planets in the solar system, and it gave the impression that Alexa was smart. But was it? Proponents of another method of natural language understanding, called deep learning, believed that Evi’s knowledge graphs wouldn’t give Alexa the kind of authentic intelligence that would satisfy Bezos’s dream of a versatile assistant that could talk to users and answer any question. In the deep learning method, machines were fed large amounts of data about how people converse and what responses proved satisfying, and then were programmed to train themselves to predict the best answers.
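The contrast between the two camps can be sketched in code. Below is a toy count-based intent classifier, nowhere near Alexa’s actual models, with made-up utterances and intent names: instead of matching against a hand-built knowledge graph, it predicts the intent statistically, from whichever training examples share the most words with the input.

```python
from collections import Counter

# Illustrative labelled utterances (not Amazon's training data).
examples = [
    ("play music by sting", "play_music"),
    ("play some songs by the police", "play_music"),
    ("say bye to my friend", "farewell"),
    ("goodbye alexa", "farewell"),
]

def bag(text):
    """Bag-of-words representation: word counts, order ignored."""
    return Counter(text.split())

def classify(utterance):
    """Pick the intent whose examples share the most words with the input."""
    words = bag(utterance)
    scores = Counter()
    for text, intent in examples:
        scores[intent] += sum((words & bag(text)).values())
    return scores.most_common(1)[0][0]

print(classify("play every breath you take by sting"))  # play_music
```

With enough examples, statistical overlap swamps any single ambiguous word, which is the intuition behind the deep learning camp’s position.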

“giant treelike structure”: James Vlahos, “Amazon Alexa and the Search for the One Perfect Answer,” Wired, February 18, 2018, https://www.wired.com/story/amazon-alexa-search-for-the-one-perfect-answer/ (January 19, 2021). harness a large number of high-powered computer processors to train its speech models: Nikko Ström, “Nikko Ström at AI Frontiers: Deep Learning in Alexa,” Slideshare, January 14, 2017, https://www.slideshare.net/AIFrontiers/nikko-strm-deep-learning-in-alexa (January 19, 2021). a patent on the idea was filed: Amazon. Techniques for mobile device charging using robotic devices. U.S. Patent 9711985, filed March 30, 2015. https://www.freepatentsonline.com/9711985.html (January 19, 2021).

pages: 406 words: 109,794

Range: Why Generalists Triumph in a Specialized World
by David Epstein
Published 1 Mar 2019

The economists suggested that the professors who caused short-term struggle but long-term gains were facilitating “deep learning” by making connections. They “broaden the curriculum and produce students with a deeper understanding of the material.” It also made their courses more difficult and frustrating, as evidenced by both the students’ lower Calculus I exam scores and their harsher evaluations of their instructors. And vice versa. The calculus professor who ranked dead last in deep learning out of the hundred studied—that is, his students underperformed in subsequent classes—was sixth in student evaluations, and seventh in student performance during his own class.

Silver et al., “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm,” arXiv (2017): 1712.01815. “In narrow enough worlds”: In addition to an interview with Gary Marcus, I used video of his June 7, 2017, lecture at the AI for Good Global Summit in Geneva, as well as several of his papers and essays: “Deep Learning: A Critical Appraisal,” arXiv: 1801.00631; “In Defense of Skepticism About Deep Learning,” Medium, January 14, 2018; “Innateness, AlphaZero, and Artificial Intelligence,” arXiv: 1801.05667. IBM’s Watson: For a balanced take on Watson’s challenges in healthcare—from one critic calling it “a joke,” to others suggesting it falls far short of the original hype but does indeed have value—see: D.

In one of Kornell and Bjork’s interleaving studies, 80 percent of students were sure they had learned better with blocked than mixed practice, whereas 80 percent performed in a manner that proved the opposite. The feeling of learning, it turns out, is based on before-your-eyes progress, while deep learning is not. “When your intuition says block,” Kornell told me, “you should probably interleave.” Interleaving is a desirable difficulty that frequently holds for both physical and mental skills. A simple motor-skill example is an experiment in which piano students were asked to learn to execute, in one-fifth of a second, a particular left-hand jump across fifteen keys.

Bulletproof Problem Solving
by Charles Conn and Robert McLean
Published 6 Mar 2019

Experiments are a powerful and often overlooked tool in the big gun arsenal; if you can't make one, sometimes you can find a natural experiment. Machine learning is emerging as a powerful tool in many problem spheres; we argue to understand problem structure and develop hypotheses before employing deep learning algorithms (huge mistakes can come from bad data and bad structuring, and these models offer little transparency). You can outsource problem solving, including deep learning, through crowdsourcing on platforms such as Kaggle. Where there is an adversary whose behavior may change in reaction to your choices, you can look to game theory approaches with logic trees to work out the best course of action.

The most important defining question at the outset is to understand the nature of your problem: Are you primarily trying to understand the drivers of causation of your problem (how much each element contributes and in what direction), or are you primarily trying to predict a state of the world in order to make a decision? The first question leads you mostly down the left‐hand branch into various statistical analyses, including creating or discovering experiments. The second question leads you mostly down the right‐hand side of the tree into forecasting models, the family of machine or deep learning algorithms, and game theory. Some problems have elements of both sides, and require combining tools from both branches of the decision tree. And simulations and forecasting models can be found on both sides of the tree (see Exhibit 6.1). EXHIBIT 6.1 When you are focused on understanding the complex causes of your problem, so that you can develop strategies for intervention, you are usually in the world of statistics.

Precision here is the share of images flagged as sharks that really are sharks (Exhibit 6.9). This is an impressive result. But it leaves open the question of false negatives—how many sharks aren't spotted (which seems important!)—though the ability to detect other species, like dolphins, with high precision is thought to minimize the number of false negatives. Like other deep‐learning algorithms, the expectation is that results will improve further with additional data.12 In addition, trialing technology, such as multispectral cameras, is expected to provide better ocean penetration, particularly for cloudy days. EXHIBIT 6.9 The beach of the future, as The Ripper team pictures it, will incorporate three technologies: A drone tethering system that allows the drone to be powered and sit atop a cable with full view of the beach, able to operate 24/7, but most likely for the hours the beach has lifeguards on duty.
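Precision and the false-negative question the authors raise both fall out of a simple confusion-matrix count. A minimal sketch with made-up detection results (the counts are illustrative, not The Ripper team's data):

```python
# Illustrative detector outputs: 1 = "shark", 0 = "not shark".
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # correct shark flags
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false alarms
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # missed sharks

precision = tp / (tp + fp)  # of everything flagged as a shark, how much was right
recall    = tp / (tp + fn)  # of the real sharks, how many were spotted

print(precision, recall)  # 0.75 0.75
```

A detector can have high precision yet still miss sharks; recall is the metric that tracks the false negatives the passage worries about.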

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

Each year, they made incremental, single-digit improvements in precision. Then, a new type of AI known as deep learning emerged – the same discipline that allowed miscreants to generate sexually deviant deepfakes of Helen Mort and Noelle Martin, and the model that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches, in this case, millions of photos uploaded to the web that could be used to train new image-recognition algorithms. Deep learning catapulted the small gains Karl was seeing into real progress. All of a sudden, what used to be a 1 per cent improvement was now 10 per cent each year.

Since about 2009, a boom in technical advancements has been fuelled by the voluminous data generated from our intensive use of connected devices and the internet, as well as the growing power of silicon chips. In particular, this has led to the rise of a subtype of AI known as machine learning, and its descendant deep learning, methods of teaching computer software to spot statistical correlations in enormous pools of data – be they words, images, code or numbers. One way to spot patterns is to show AI models millions of labelled examples. This method requires humans to painstakingly label all this data so they can be analysed by computers.

If he hadn’t demonstrated this, someone else would have. And there were no legal issues with it either. At the time, creating deepfake images wasn’t illegal in most parts of the world, and neither was posting and distributing them, so there was no one to police use of the software. Deepfakes are generated using ‘deep’ learning: a subset of artificial intelligence where algorithms perform tasks, such as image generation, by learning patterns in millions of training samples. The models learn to generate faces in a hierarchical way – they start by mapping individual image pixels, and then recognize higher-order structures, like the shape of a specific face or figure.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

Malice or Competence: What We Fear For all the talk about the possibilities of AI and robotics, we’re really only at the embryonic stage of our grand machine-human integration. And AI means different things in different conversations. The most common use refers to machine learning—using statistical models to analyze large quantities of data. The next step from basic machine learning, referred to as deep learning, uses a multilayered architecture of networks, making connections and modeling patterns across data sets. AI can be understood as any machine—defined for our purposes as hardware running digital software—that mimics human behavior (i.e., human reactions). It makes decisions and judgments based on learning derived from data inputs and can mimic our senses, such as vision.

Thanks to algorithmic data mining, we can gather quite a bit of information simply from browsing the web. Recently, a group of researchers from Europe and the United States collaborated to develop automated assessments of organizational diversity and detection of discrimination by race, sex, age, and other parameters. The researchers applied the deep-learned predictors of gender and race—algorithms that use facial and name recognition to predict identity—to the executive management and board member profiles of the 500 largest companies on the 2016 Forbes Global 2000 list. They then ranked the companies by a sex and race diversity index.34 Overall, in the photos found online, women represented only 21.2 percent of all corporate executives.

The year after she was diagnosed, she created a system that uses computer vision technology to independently learn about the patterns of diagnosing breast cancer. She partnered with Dr. Constance Lehman, chief of breast imaging at Boston’s Massachusetts General Hospital. Lehman herself serves on several key national committees and was eager to apply deep learning to all aspects of breast cancer care, from prevention to detection to treatment. Barzilay and Lehman fed the algorithm both the image and the outcome over time so that it could teach itself what can be detected—what a human eye might miss. They fed the algorithm 70,000 images of lesions with known outcomes, both malignant and benign.

pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)
by Jamie Bartlett
Published 4 Apr 2018

More data fed in makes ML better, which allows it to make more sense of new data, which makes it better still, and so on. More sophisticated ML is being developed all the time. The latest involves teaching machines to solve problems for themselves rather than just feeding them examples, by setting out rules and leaving them to get on with it. This is sometimes called ‘deep learning’, which attempts to mimic the activity that occurs in layers of brain neurons through a neural network to spot patterns or images very effectively.2 To understand how this is different and potentially more powerful than classic ML, consider the ancient Chinese game Go. Machines have been beating humans at chess for years, but Go is more difficult for machines because of the sheer number of possible moves: in the course of a match, there are more possible combinations than there are atoms in the universe.

In 2016, to the surprise of many experts, AlphaGo decisively beat the world’s best Go player, Lee Sedol. This stunning result was quickly surpassed when, in late 2017, DeepMind released AlphaGo Zero, software that was given the rules of the game but no human examples at all, and taught itself to win using a deep learning technique. It started off dreadfully bad but improved slightly with each game, and within 40 days of constant self-play it had become so strong that it thrashed the original AlphaGo 100–0. Go is now firmly in the category of ‘games that humans will never win against machines again’.

Although the specific application is very different, driverless vehicles like Stefan’s Starsky trucks use similar techniques of data extraction and analysis as AI-powered crime-prediction technology or CV analysis. Google’s DeepMind, for example, doesn’t just win at Go – it is currently pioneering exciting new medical research and has already dramatically cut the energy bills at Google’s huge data centres by using deep learning to optimise the air conditioning systems.6 There are countervailing tendencies, of course – some experts have got together to develop ‘open source’ AI which is more transparent and, hopefully, carefully designed, but the direction of progress is clear – just follow the money. Over the past few years, big tech firms have bought promising AI start-ups by the truckload.

pages: 288 words: 73,297

The Biology of Desire: Why Addiction Is Not a Disease
by Marc Lewis Phd
Published 13 Jul 2015

The learning spiral would have first quickened, showing a snowball effect in behaviour and a cascade of neural changes, when Donna and Brian began to pursue drugs for the feelings they provided, not as a means to an end—when desire kicked in and their quest for drugs overrode their other goals. That’s when their lives started to unravel. Figure 2. Deep learning: a profile of addiction onset, including phases of accelerated learning, stability, and reduction. Because the onset of addiction must include one or more phases of accelerated learning, but can also simmer for long periods, I’ve settled on the phrase deep learning. This is meant to describe the overall profile of addictive learning, including periods of rapid change, periods of coasting, and temporary remissions (in medical parlance).

That’s how brains change over the course of development, and that’s how habits are formed. So, what exactly is addiction? It’s a habit that grows and self-perpetuates relatively quickly, when we repeatedly pursue the same highly attractive goal. Or, in a phrase, motivated repetition that gives rise to deep learning. Addictive patterns grow more quickly and become more deeply entrenched than other, less compelling habits because of the intensity of the attraction that motivates us to repeat them, especially when they leave us gasping for more and other goals have lost their appeal. The neurobiological mechanics of this process involve multiple brain regions, interlaced to form a web that holds the addiction in place.

Fletcher, Inside Rehab: The Surprising Truth About Addiction Treatment—and How to Get Help That Works (New York: Viking, 2013). 5. David Sack, “Addiction Is a Disease and Needs to Be Treated as Such,” Room for Debate blog, New York Times, February 11, 2014.

pages: 410 words: 119,823

Radical Technologies: The Design of Everyday Life
by Adam Greenfield
Published 29 May 2017

Over time, it will learn to recognize what distinguishes a good performance from an unacceptable one, and how to improve the odds of success next time out. It will refine its ability to detect what is salient in any given situation, and act on that insight. This process is called “machine learning.” What distinguishes this from “deep” learning, as some would have us call the process through which a machine develops insight? And why does it seem to have become so prominent in recent years? In the beginning was the program. Classically, using computers to solve problems in the real world meant writing programs, and that meant expressing those problems in terms that could be parsed and executed by machine.

Stripped of its mystification, then, machine learning is the process by way of which algorithms are taught to recognize patterns in the world, through the automated analysis of very large data sets. When neural networks are stacked in multiple layers, each stocked with neurons responsible for discerning a particular kind of pattern, they are capable of modeling high-level abstractions. (This stacking accounts for the “deep” in deep learning, in at least one of the circulating definitions.) It is this that ultimately gives the systems running these algorithms the ability to perform complicated tasks without being explicitly instructed in how to do so, and it is how they now stand to acquire the capabilities we have previously thought of as the exclusive province of the human.
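The layer-stacking described here can be made concrete in a few lines. This is a minimal sketch, not an example from the book — the layer sizes, random weights, and ReLU activation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One stacked layer: a weighted sum of inputs, then a nonlinearity."""
    return np.maximum(0.0, x @ w + b)  # ReLU activation

# A three-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))   # one input example
h1 = layer(x, w1, b1)         # first layer discerns low-level patterns
h2 = layer(h1, w2, b2)        # next layer combines them into abstractions
out = h2 @ w3 + b3            # final linear readout
print(out.shape)              # (1, 2)
```

Each layer on its own computes something simple; it is the composition of layers — the "depth" — that lets the whole model represent high-level abstractions.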

The software controlling a moving vehicle must integrate in real time a highly unstable environment, engine conditions, changes in weather, and the inherently unpredictable behavior of animals, pedestrians, bicyclists, other drivers and random objects it may encounter.8 (Now the significance of those reports you may have encountered of Google pre-driving nominally autonomous vehicles through the backstreets of its Peninsular domain becomes clearer: its engineers are training their guidance algorithm in what to expect from its first environment.) For autonomous vehicles, drones, robots and other systems intended to reckon with the real world in this way, then, the grail is unsupervised deep learning. As the name implies, the algorithms involved are neither prompted nor guided, but are simply set loose on vast fields of data. Order simply emerges. The equivalent of classification for unsupervised learning is clustering, in which an algorithm starts to develop a sense for what is significant in its environment via a process of accretion.

Autonomous Driving: How the Driverless Revolution Will Change the World
by Andreas Herrmann , Walter Brenner and Rupert Stadler
Published 25 Mar 2018

GPUs) and enormous data volumes. Complex, multilevel artificial neural networks are also often described as deep learning. Deep learning is applied nowadays primarily in the areas of speech recognition, image recognition and autonomous driving. Box 11.3. Artificial Intelligence, Deep Learning and Neural Networks Lutz Junge, Principal Engineer, Electronics Research Lab, Volkswagen Group of America End-to-end deep neural networks could learn to drive a car just by monitoring multiple human drivers and adopting the rules of driving. Deep learning has produced promising results when applied to tasks that require bridging the gap between two different domains.

LTE-V and 5G will play an important role in this future ecosystem and the telecommunications operators need to improve performance to fulfil the needs of the car industry, specifically the delay time, coverage, end-to-end security and reliability of their networks. Ultimately, we need a well-standardised system based on future technologies like 5G, artificial intelligence, deep learning, big data, high-resolution maps and high-resolution GPS. The key question is: Who is the owner of these data and who could earn money in which part of this ecosystem? Founding the 5G Automotive Association and developing this ecosystem in collaboration is a big step in the right direction. The vision for future 5G networks is: 1 millisecond delay end-to-end + 10 gigabits per second speed + 99.9999% reliability + E2E security + 10 years lifespan for embedded M2M devices with one battery + capacity for about 500 billion devices + lower costs.

However, it now seems clear that Google is primarily interested in collecting data on drivers and their vehicles, and that it sees the software as a new business model and does not want to produce cars. Early in 2016, Nvidia surprisingly announced its own computing platform for controlling autonomous vehicles. This platform has sufficient processing power to support deep learning, sensor fusion and surround vision, all of which are key elements for a self-driving car. It also announced that its PX2 would be used as a standard computer in the Roborace series for self-driving race cars. Nvidia has also already built autonomous test vehicles, which have only been driven on test routes to date.

pages: 487 words: 124,008

Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It
by Kashmir Hill
Published 19 Sep 2023

But there were limits to what Ton-That could do, both because he wasn’t a machine learning expert and because he didn’t have the best equipment for the job. Part of the reason that neural networks had come into their own was the development of new hardware, including powerful computer chips called graphics processing units, or GPUs, that had been developed for video gaming but that turned out to be incredibly useful for training deep-learning neural networks. Ton-That couldn’t afford state-of-the-art hardware, but luckily for him, he met someone who could access it for free: Smartcheckr’s most important early collaborator, a brilliant mathematician named Terence Z. Liu. After getting a degree in engineering at Nanjing University in China, Liu moved to Ohio to get a doctorate at the University of Toledo.

Ton-That didn’t know who that was, but he was happy when he looked the model up and saw that she had millions of followers on Twitter and Instagram. He was less enthusiastic when he heard that the potential investor had shown the app to Jeff Bezos. The ruthless billionaire founder and chairman of Amazon was legendary for kneecapping rivals, and Ton-That knew that Amazon had recently developed a deep learning–based image analysis system called Rekognition that, among other things, searched and analyzed faces. “Whoa, dude, what are you doing?” Ton-That thought. To make things worse, the oversharer never invested in Clearview. Clearview’s biggest problem was its misfit founders. Ton-That had a couple of liabilities: the Gawker articles about his being a hacker who had created a “worm” and online photos of him in a MAGA hat hanging out with the Trump crowd—just the sort of photos a tool like his was designed to unearth.

Spokespeople at Facebook were quick to reassure journalists that DeepFace wasn’t being actively rolled out on the site. “This is theoretical research, and we don’t currently use the techniques discussed in the paper on Facebook,” a spokesperson told Slate. But less than a year later, Facebook quietly rolled out a deep learning algorithm similar to DeepFace on the site, dramatically improving its ability to find and accurately tag faces in photos. In recognition of what they had achieved and the company’s interest in this new development in AI, Taigman and his team were relocated to sit closer to Zuckerberg at Facebook HQ.

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before.73 This deep learning gives the networks the ability to learn subtle concepts and distinctions. Not only can they now recognize a cat, they have outperformed humans in distinguishing different breeds of cats.74 They recognize human faces better than we can ourselves, and distinguish identical twins.75 And we have been able to use these abilities for more than just perception and classification. Deep learning systems can translate between languages with a proficiency approaching that of a human translator.

Games have been a central part of AI since the days of the Dartmouth conference. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond.77 Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, deep learning was applied to chess with impressive results. A team of researchers at the AI company DeepMind created AlphaZero: a neural network–based system that learned to play chess from scratch. It went from novice to grand master in just four hours.78 In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs.

The world’s best Go players had long thought that their play was close to perfection, so were shocked to find themselves beaten so decisively.80 As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go.”81 It is this generality that is the most impressive feature of cutting edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. This goal is sometimes known as artificial general intelligence (AGI), to distinguish it from the narrow approaches that had come to dominate. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through the Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of extremely different Atari games at levels far exceeding human ability.82 Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learned and mastered these games directly from the score and the raw pixels on the screen.

pages: 282 words: 63,385

Attention Factory: The Story of TikTok and China's ByteDance
by Matthew Brennan
Published 9 Oct 2020

66 https://techcrunch.com/2012/12/05/prismatic/ 67 http://yingdudasha.cn/ 68 Image source: https://m.weibo.cn/2745813247/3656157740605616 * “Real stuff” is my imperfect translation of 干货 gānhuò, which could also be translated as “the real McCoy” or “something of substance” Chapter 3 Recommendation, From YouTube to TikTok Chapter Timeline 2009 – Netflix awards a $1 million prize for an algorithm that increased the accuracy of their video recommendation by 10% 2011 – YouTube introduces machine learning algorithmic recommendation engine, Sibyl, with immediate impact 2012 Aug – ByteDance launches news aggregation app Toutiao 2012 Sept – AlexNet breakthrough at the ImageNet challenge triggers a global explosion of interest in AI 2013 Mar – Facebook changes its newsfeed to a “personalized newspaper” 2014 April – Instagram begins using an “explore” tab of personalized content 2015 – Google Brain’s deep learning algorithms begin supercharging a wide variety of Google products, including YouTube recommendations It was 2010, and YouTube had a big problem. Despite being the third most visited website on the internet, “YouTube.com as a homepage was not driving a ton of engagement,” 69 admitted John McFadden, technical lead for YouTube recommendations.

The machine learning worked so well that soon, more people were choosing what to watch based on the “recommended videos” list than any other way of picking videos, such as web searches or email referrals. Google continued to iterate and further optimize the recommendation system, later switching from Sibyl to Google Brain, developed by the company’s now-famous moonshot laboratory group Google X, led by Stanford professor Andrew Ng. Google Brain leveraged groundbreaking new advances in deep learning. Whereas Sibyl’s impact had already been impressive, the results of Google Brain were nothing short of astounding. Over the three years from 2014 to 2017, the aggregate time spent watching videos on YouTube’s homepage grew twenty times. Recommendations drove over 70% of all time on YouTube. 76 There was increasing acknowledgment across social media that YouTube’s suggested videos had become eerily accurate at guessing what would interest you.

Above: The immediate impact made by YouTube’s use of machine learning algorithmic recommendation engine, Sibyl, in 2011. 77 Rapid progress in the field of AI, including the breakthrough known as “deep learning,” meant this method of content distribution would rapidly come of age with profound implications. For a company like ByteDance, the timing of these new advances could not have been better. They were in the early innings of a new era that would see the effectiveness and accuracy of algorithmic recommendation jump forward in leaps and bounds.

Seeking SRE: Conversations About Running Production Systems at Scale
by David N. Blank-Edelman
Published 16 Sep 2018

If you want to go deeper into your investigation, I suggest exploring some open source code, some of which you can find in my repository and two reference books. Deep Learning you can read online for free at deeplearningbook.org, or get the print version. It’s time to apply machine learning in your organization. My GitHub Repository https://github.com/ricardoamaro/MachineLearning4SRE Recommended Books Russell, Stuart J., Peter Norvig, and John F. Canny. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Pearson International (2003). Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge, MA: MIT Press (2016). www.mitpress.mit.edu/books/deep-learning. Contributor Bio Ricardo Amaro is currently performing senior site reliability engineering functions in Acquia, one of the largest companies in the world of Free Software with around 20,000 servers in production.

The rise of AI is happening now because we have enough data5 from all of the big data initiatives out there and cheap GPUs6 for machine learning algorithms to churn on. In addition, what’s surprising about deep learning is how simple it is. Last decade, no one suspected that we would achieve these incredible results with respect to machine perception problems. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. It’s not complicated, it’s just a lot of it. Richard Feynman, in the 1972 interview, Take the World from a Different Point of View So that gives us a fairly good idea of why we now can do deep learning in such a reachable way and who to blame for the whole thing — gamers and games.
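The claim that "all you need is sufficiently large parametric models trained with gradient descent" can be demonstrated at toy scale. A hedged sketch with made-up data, not from the chapter: fitting a single parameter w in the model y = w·x by gradient descent on squared error.

```python
# Fit y = w * x to data generated with the "true" value w = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w, lr = 0.0, 0.01  # initial parameter and learning rate
for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill

print(round(w, 3))  # 2.0
```

Real deep learning models do exactly this, only with millions or billions of parameters instead of one — which is why it took data volumes and GPU horsepower, not new mathematics, to make it work.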

Neural networks are a bit complicated to begin with, so let’s take a moment to dive into them separately. What Are Neural Networks? In this section, we try to answer this question by briefly walking you through the basics of neural networks, deep learning techniques, and how to apply them to real data problems. Neurons and Neural Networks Unfortunately, it’s not possible for us to jump right into some deep learning magic in Python for SRE without first talking a bit about neural networks to get a basis for our code. A neural network is composed of artificial neurons. An artificial neuron is just a mathematical function conceived as a model of the biological neuron.
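That definition — a neuron as "just a mathematical function" — can be written out directly. A minimal sketch (the inputs, weights, bias, and sigmoid choice here are illustrative, not taken from the chapter's code):

```python
import math

def neuron(inputs, weights, bias):
    """An artificial neuron: a weighted sum of its inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1)
print(0.0 < out < 1.0)  # True: the activation keeps the output bounded
```

A network is just many of these functions wired together, the output of one feeding the inputs of the next.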

pages: 385 words: 112,842

Arriving Today: From Factory to Front Door -- Why Everything Has Changed About How and What We Buy
by Christopher Mims
Published 13 Sep 2021

In the truck’s perceptual system are the classic object-recognition algorithms, made possible by deep learning, that allow it to identify people, obstacles, and other vehicles. It’s the same sort of technology that allows Facebook to recognize your friends’ faces in photos, or Google to cough up pictures of felines when you type in “cats.” Then there are the many niche algorithms the truck is running, each purpose-built. One discerns what traffic lights at intersections are conveying; another interprets turn signals on cars. To interpret the truck’s raw sensory input, its algorithms rely mostly on deep learning. One level above that, in the middle of the truck’s complicated algorithm of interpretation and decision-making, it uses Bayesian analysis.

$3.6 billion into trucking: Cyndia Zwahlen, “Freight Tech VC on Track to Top 2018’s Record $3.6 billion,” Trucks.com, April 29, 2019, https://www.trucks.com/2019/04/29/freight-tech-vc-top-record-3-6-billion. Chapter 13: The Future of Trucking brittle and shallow: Jason Pontin, “Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning,” Wired, February 2, 2018, https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning. a map unlike any created before it: Christopher Mims, “The Key to Autonomous Driving? An Impossibly Perfect Map,” Wall Street Journal, October 11, 2018, https://www.wsj.com/articles/the-key-to-autonomous-driving-an-impossibly-perfect-map-1539259260.

Controlling this truck is a computer that’s dreaming, in a way—a dream not so different from the one you’re experiencing at this very moment, and in every moment of your waking life. Our waking dream, and the truck’s, is a fable about the state of our bodies, the world outside, and possible futures both might soon inhabit. The truck is using deep learning to construct a reference view of reality, including the state of the road ahead, the vehicles around us, and the tarmac beneath our eighteen rumbling wheels, each spinning sixteen times a second. Within 100 milliseconds, the truck’s AI needs to digest all its sensory inputs by converting pixels captured by its cameras, and a point cloud generated by its lidar, into cars, trucks, and motorcycles.

pages: 480 words: 123,979

Dawn of the New Everything: Encounters With Reality and Virtual Reality
by Jaron Lanier
Published 21 Nov 2017

Here is how we build systems today: A bit-precise structure of communication abstractions surrounds the “pay dirt” modules like “deep learning”9 ones that accomplish the most valuable functions. These critical “AI-like” algorithms are not bit-perfect, but even though they’re approximate, they’re still robust. They provide the capabilities at the core of the programs that run our lives these days. They analyze the results of medical trials and operate self-driving vehicles. In a phenotropic architecture, the roles of the bit-perfect and approximate/robust components of a program are often reversed. Modules are connected in a phenotropic system by approximate but robust methods like deep learning and other ideas usually associated with “artificial intelligence.”

The Wisdom of Imperfection Since the modules of an ideal future phenotropic system would be connected through approximate means, using machine vision and other techniques usually associated with artificial intelligence, a lot of the manic, tricky hacking games that go on today wouldn’t even get off the ground. It would be hard, for instance, to inject malware into a computer through a deep learning network, say, by pointing a camera at an image that is supposed to cause the infection. Hard is not the same thing as impossible, but the quest for perfection in security is a fool’s game. To be clear, you can inject malware using an image (it’s done all the time), but it’s only easy to do that when software ingests the image bit by bit and processes it using a precise protocol.

But if an image is ingested only as an analog-style approximation and only analyzed statistically, as if a camera had been pointed at it, then there is much less vulnerability.12 The image is not the problem; the rigidity of the protocol is the problem. Sometimes it’s best when engineers don’t know exactly how software works. The approximate nature of modern algorithms associated with “deep learning” and related terms is inherently resistant to the tricks of the hacker trade, but we apply those capabilities only to performing specialized tasks, not for building architectures. Another way of framing the phenotropic idea is that we should use them in architecture. Just like in biology, security is enhanced when a system becomes robust, which is not the same thing as perfect.

pages: 482 words: 121,173

Tools and Weapons: The Promise and the Peril of the Digital Age
by Brad Smith and Carol Ann Browne
Published 9 Sep 2019

This approach uses statistical methods for pattern recognition, prediction, and reasoning, in effect building systems through algorithms that learn from data. During the last decade, leaps in computer and data science have led to the expanded use of so-called deep learning, built on neural networks. Our human brains contain neurons with synaptic connections that make possible our ability to discern patterns in the world around us.7 Computer-based neural networks contain computational units referred to as neurons, and they’re connected artificially so that AI systems can reason.8 In essence, the deep learning approach feeds huge amounts of relevant data to train a computer to recognize a pattern, using many layers of these artificial neurons.

Terrence J. Sejnowski, The Deep Learning Revolution (Cambridge, MA: MIT Press, 2018), 31; in 1986 Eric Horvitz coauthored one of the leading papers that made the case that expert systems would not be scalable. D.E. Heckerman and E.J. Horvitz, “The Myth of Modularity in Rule-Based Systems for Reasoning with Uncertainty,” Conference on Uncertainty in Artificial Intelligence, Philadelphia, July 1986; https://dl.acm.org/citation.cfm?id=3023728. Back to note reference 5. Ibid. Back to note reference 6. Charu C. Aggarwal, Neural Networks and Deep Learning: A Textbook (Cham, Switzerland: Springer, 2018), 1.

Aggarwal, Neural Networks and Deep Learning: A Textbook (Cham, Switzerland: Springer, 2018), 1. The convergence of intellectual disciplines involved and affected by these developments in recent decades is described in S.J. Gershman, E.J. Horvitz, and J.B. Tenenbaum, Science 349, 273–78 (2015). Back to note reference 7. Aggarwal, Neural Networks and Deep Learning, 1. Back to note reference 8. Ibid., 17–30. Back to note reference 9. See Sejnowski for a thorough history of the developments that have led to advances in neural networks over the past two decades. Back to note reference 10. Dom Galeon, “Microsoft’s Speech Recognition Tech Is Officially as Accurate as Humans,” Futurism, October 20, 2016, https://futurism.com/microsofts-speech-recognition-tech-is-officially-as-accurate-as-humans/; Xuedong Huang, “Microsoft Researchers Achieve New Conversational Speech Recognition Milestone,” Microsoft Research Blog, Microsoft, August 20, 2017, https://www.microsoft.com/en-us/research/blog/microsoft-researchers-achieve-new-conversational-speech-recognition-milestone/.

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

Alemi, Alex A., et al., ‘DeepMath: Deep Sequence Models for Premise Selection’, arXiv:1606.04442v2 (2017) Athalye, Anish, et al., ‘Synthesizing Robust Adversarial Examples’, in Proceedings of the 35th International Conference on Machine Learning, arXiv:1707.07937v3 (2018) Bancerek, Grzegorz, et al., ‘Mizar: State-of-the-Art and Beyond’, in Intelligent Computer Mathematics, pp. 261–79, Springer, 2015 Barbieri, Francesco, Horacio Saggion and Francesco Ronzano, ‘Modelling Sarcasm in Twitter: A Novel Approach’, WASSA@ACL (2014) Bellemare, Marc, et al., ‘Unifying Count-Based Exploration and Intrinsic Motivation’, in Advances in Neural Information Processing Systems, pp. 1471–9, NIPS Proceedings, 2016 Bokde, Dheeraj, Sheetal Girase and Debajyoti Mukhopadhyay, ‘Matrix Factorization Model in Collaborative Filtering Algorithms: A Survey’, Procedia Computer Science, vol. 49, 136–46 (2015) Briot, Jean-Pierre and François Pachet, ‘Music Generation by Deep Learning: Challenges and Directions’, arXiv:1712. 04371 (2017) Briot, Jean-Pierre, Gaëtan Hadjeres and François Pachet, ‘Deep Learning Techniques for Music Generation: A Survey’, arXiv:1709.01620 (2017) Brown, Tom B., et al., ‘Adversarial Patch’, arXiv:1712.09665 (2017) Cavallo, Flaminia, Alison Pease, Jeremy Gow and Simon Colton, ‘Using Theory Formation Techniques for the Invention of Fictional Concepts’, in Proceedings of the Fourth International Conference on Computational Creativity (2013) Clarke, Eric F., ‘Imitating and Evaluating Real and Transformed Musical Performances’, Music Perception: An Interdisciplinary Journal, vol. 10, 317–41 (1993) Colton, Simon, ‘Refactorable Numbers: A Machine Invention’, Journal of Integer Sequences, vol. 2, article 99.1.2 (1999) , ‘The Painting Fool: Stories from Building an Automated Painter’, in Jon McCormack and Mark d’Inverno (eds.), Computers and Creativity, Springer, 2012 and Stephen Muggleton, ‘Mathematical Applications of Inductive Logic Programming’, Machine Learning, vol. 
64(1), 25–64 (2006) and Dan Ventura, ‘You Can’t Know My Mind: A Festival of Computational Creativity’, in Proceedings of the Fifth International Conference on Computational Creativity (2014) , et al., ‘The “Beyond the Fence” Musical and “Computer Says Show” Documentary’, in Proceedings of the Seventh International Conference on Computational Creativity (2016) d’Inverno, Mark and Arthur Still, ‘A History of Creativity for Future AI Research’, in Proceedings of the Seventh International Conference on Computational Creativity (2016) du Sautoy, Marcus, ‘Finitely Generated Groups, p-Adic Analytic Groups and Poincaré Series’, Annals of Mathematics, vol. 137, 639–70 (1993) du Sautoy, Marcus, ‘Counting Subgroups in Nilpotent Groups and Points on Elliptic Curves’, J. reine angew.

Interestingly, some of the traits that the model picked out could be clearly identified: for example, action films or drama films. But others were much subtler and had no obvious label, and yet the computer had picked up a trend in the data. For me this is what is so exciting about these new algorithms. They have the potential to tell us something new about ourselves. In a way the deep-learning algorithm is picking up traits in our human code that we still haven’t been able to articulate in words. It’s as if we didn’t know what colour was and had no words to distinguish red from blue, but through the expression of our likes and dislikes the algorithm divided objects in front of us into two groups that correspond to blue and red.

Allen, 2016 Eagleton, Terry, The Ideology of the Aesthetic, Blackwell, 1990 Ford, Martin, The Rise of the Robots: Technology and the Threat of Mass Unemployment, Oneworld, 2015 Fuentes, Agustín, The Creative Spark: How Imagination Made Humans Exceptional, Dutton, 2017 Gaines, James, Evening in the Palace of Reason: Bach Meets Frederick the Great in the Age of Enlightenment, Fourth Estate, 2005 Ganesalingam, Mohan, The Language of Mathematics: A Linguistic and Philosophical Investigation, Springer, 2013 Gaut, Berys and Matthew Kieran (eds.), Creativity and Philosophy, Routledge, 2018 Goodfellow, Ian, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016 Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow, Harvill Secker, 2016 Hardy, G. H., A Mathematician’s Apology, CUP, 1940 Harel, David, Computers Ltd: What They Really Can’t Do, OUP, 2000 Hayles, N. Katherine, Unthought: The Power of the Cognitive Nonconscious, University of Chicago Press, 2017 Hofstadter, Douglas, Gödel, Escher, Bach: An Eternal Golden Braid, Penguin Books, 1979 , Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, Basic Books 1995 , I am a Strange Loop, Basic Books, 2007 Kasparov, Garry, Deep Thinking: Where Artificial Intelligence Ends and Human Creativity Begins, John Murray, 2017 McAfee, Andrew and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future, Norton, 2017 McCormack, Jon and Mark d’Inverno (eds.), Computers and Creativity, Springer, 2012 Monbiot, George, Out of the Wreckage: A New Politics for an Age of Crisis, Verso, 2017 Montfort, Nick, World Clock, Bad Quarto, 2013 Moretti, Franco, Graphs, Maps, Trees: Abstract Models for Literary History, Verso, 2005 Paul, Elliot Samuel and Scott Barry Kaufman (eds.), The Philosophy of Creativity: New Essays, OUP, 2014 Shalev-Shwartz, Shai and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, CUP, 2014 Steels, Luc, The Talking Heads 
Experiment: Origins of Words and Meanings, Language Science Press, 2015 Steiner, Christopher, Automate This: How Algorithms Took Over the Markets, Our Jobs, and the World, Penguin Books, 2012 Tatlow, Ruth, Bach and the Riddle of the Number Alphabet, CUP, 1991 , Bach’s Numbers: Compositional Proportions and Significance, CUP, 2015 Tegmark, Max, Life 3.0: Being Human in the Age of Artificial Intelligence, Allen Lane, 2017 Wilson, Edward O., The Origins of Creativity, Allen Lane, 2017 Yorke, John, Into the Woods: A Five Act Journey into Story, Penguin Books, 2013 Papers For papers with references to arXiv visit the open access archive of papers at https://arxiv.org/.

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

Connect enough of these perceptrons together in the right ways, and you can build a chess-playing computer, a self-driving car, or an algorithm that translates speech in real time like Douglas Adams’s Babel Fish. You don’t hear the term “perceptron” often these days, but these circuits are the building blocks for the convolutional neural networks and deep learning technologies that appear in headlines daily. The same old magic is still selling tickets. The inventor of the perceptron, Frank Rosenblatt, was a psychologist by training, with broad interests in astronomy and neurobiology. He also had a knack for selling big ideas. While working at the Cornell Aeronautical Laboratory, he used a two-million-dollar IBM 704 computer to simulate his first perceptron.

The newspapers are full of breathless articles gushing about the latest breakthrough that someone promises is just around the corner. AI jobs are paying superstar salaries. Tech firms are wooing professors with AI expertise away from campus. Venture capital firms are throwing money at anyone who can say “deep learning” with a straight face. Here Rosenblatt deserves credit, because many of his ambitious predictions have come true. The algorithms and basic architecture behind modern AI—machines that mimic human intelligence—are pretty much the same as he envisioned. Facial recognition technology, virtual assistants, machine translation systems, and stock-trading bots are all built upon perceptron-like algorithms.

Because the data are central to these systems, one rarely needs professional training in computer science to spot unconvincing claims or problematic applications. Most of the time, we don’t need to understand the learning algorithm in detail. Nor do we need to understand the workings of the program that the learning algorithm generates. (In so-called deep learning models, no one—including the creators of the algorithm—really understands the workings of the program that algorithm generates.) All you have to do to spot problems is to think about the training data and the labels that are fed into the algorithm. Begin with bad data and labels, and you’ll get a bad program that makes bad predictions in return.
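West and Bergstrom’s point is mechanical enough to demonstrate: feed a learner bad labels and even a sound algorithm returns bad predictions. A minimal sketch in Python (the toy data and the 1-nearest-neighbour learner are invented for illustration, not taken from the book):

```python
# Toy illustration of "garbage in, garbage out": the same learning
# algorithm (1-nearest-neighbour) trained on clean vs. mislabelled data.
# All data here are invented for illustration.

def nearest_neighbour_predict(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, test):
    hits = sum(nearest_neighbour_predict(train, x) == y for x, y in test)
    return hits / len(test)

# Points below 5.0 are class 0, points above are class 1.
clean_train = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
# Same inputs, but every label recorded incorrectly ("bad labels").
bad_train = [(x, 1 - y) for x, y in clean_train]

test = [(1.5, 0), (2.5, 0), (7.5, 1), (8.5, 1)]

print(accuracy(clean_train, test))  # 1.0
print(accuracy(bad_train, test))    # 0.0
```

Nothing about the algorithm changed between the two runs; only the labels did, and the predictions collapsed with them.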

pages: 280 words: 74,559

Fully Automated Luxury Communism
by Aaron Bastani
Published 10 Jun 2019

While the world economy may be much bigger now than it was in 1900, employing more people and enjoying far higher output per person, the lines of work nearly everyone performs – drivers, nurses, teachers and cashiers – aren’t particularly new.

Actually Existing Automation

In March 2017 Amazon launched its Amazon Go store in downtown Seattle. Using computer vision, deep learning algorithms, and sensor fusion to identify selected items, the company set out to build a nearly fully automated store without cashiers. Here Amazon customers would be able to buy items simply by swiping in with a phone, choosing the things they wanted and swiping out to leave, their purchases automatically debited to their Amazon account.

Before Amazon Go was even announced, the British Retail Consortium predicted almost a third of the country’s 3 million retail jobs would disappear by 2025, resulting in 900,000 lost jobs as companies turn to technology to replace workers. As with self-driving cars and Atlas, all of this is possible because of extreme supply in information – from things like image and range sensors, to stereo cameras, deep learning algorithms, and the ubiquity of smartphones and online accounts. The same holds true elsewhere in the supply chain, from the warehousing robots using sensors and barcodes controlled by a central server, to the autonomous vehicles set to oversee distribution and delivery – whether by vehicle or drone.

Incredibly, it has a self-teaching neural network which constantly adds to its knowledge of how the heart works with each new case it examines. It is in areas such as this where automation will make initial incursions into medicine, boosting productivity by accompanying, rather than replacing, existing workers. Yet such systems will improve with each passing year and some, like ‘godfather of deep learning’ Geoffrey Hinton, believe that medical schools will soon stop training radiologists altogether. Perhaps that is presumptuous – after all, we’d want a level of quality control and maybe even the final diagnosis to involve a human – but even then, this massively upgraded, faster process might need one trained professional where at present there are dozens, resulting in a quicker, superior service that costs less in both time and money.

pages: 269 words: 70,543

Tech Titans of China: How China's Tech Sector Is Challenging the World by Innovating Faster, Working Harder, and Going Global
by Rebecca Fannin
Published 2 Sep 2019

Determined to keep a lead in cutting-edge AI technology, Baidu budgeted $300 million for a second Silicon Valley research lab in 2017, supplementing its first in 2014, and the Beijing-based titan has set up an engineering office in Seattle to focus on autonomous driving and internet security. Baidu has pumped loads of capital into AI startups in the United States with technologies for deep learning, data analytics, and computer vision. See table 2-3. “Having missed out on the social mobile and e-commerce waves of the past few years, Baidu is trying not to repeat the same mistake by going all in on AI, on all fronts,” observes Evdemon of Sinovation Ventures, the Beijing-based venture capital firm headed by AI expert and investor Kai-Fu Lee.

Users open the app and access news through Toutiao’s 4,000 media partnerships without following other accounts, unlike Facebook or Twitter. Anu Hariharan, a partner with Y Combinator’s Continuity Fund in San Francisco, likens Toutiao to YouTube and technology news aggregator Techmeme in one. She finds the most interesting thing about Toutiao to be how it uses machine- and deep-learning algorithms to serve up personalized, high-quality content without any user inputs, social graphs, or product purchase history to rely on.19

From Sea to Shining Sea

ByteDance has been moving up in recent years with content deals and smart acquisitions, fulfilling founder Zhang’s mission of making his startup borderless.

The New York City Police Department is reportedly monitoring citizens using cameras and facial recognition software developed in China, from SenseTime partner Hikvision.1 In the United States, tech giants Google, Microsoft, Amazon, Facebook, and IBM dominate AI for many futuristic and practical uses. Google self-driving cars are being tested on California’s Highway 101; Facebook spins out posts based on deep learning of content preferences; Amazon’s Alexa powers lights, TVs, and speakers by voice activation; and Microsoft’s Azure relies on cognitive computing for speech and language applications, while IBM Watson’s AI-based computer system increases productivity and improves customer service for call centers, production lines, and warehouses.

pages: 626 words: 167,836

The Technology Trap: Capital, Labor, and Power in the Age of Automation
by Carl Benedikt Frey
Published 17 Jun 2019

In particular, the narrow focus meant that the algorithm often lost the broader context. A solution to this problem has been found in so-called deep learning, which uses artificial neural networks with more layers. These advances allow machine translators to better capture the structure of complex sentences. Neural Machine Translation (NMT), as it is called, used to be computationally expensive both in training and in translation inference. But due to the progression of Moore’s Law and the availability of larger data sets, NMT has now become viable. In machine translation, deep learning is not without its own drawbacks. One major challenge relates to the translation of rare words.

Today, some 3.5 million Americans work as cashiers across the country. But if you go to an Amazon Go store, you will not see a single cashier or even a self-service checkout stand. Customers walk in, scan their phones, and walk out with what they need. To achieve this, Amazon is leveraging recent advances in computer vision, deep learning, and sensors that track customers, the items they reach for, and take with them. Amazon then bills the credit card passed through the turnstile when the customer leaves the store and sends the receipt to the Go app. While the rollout of the first Seattle, Washington, prototype store was delayed because of issues with tracking multiple users and objects, Amazon now runs three Go stores in Seattle and another in Chicago, Illinois, and plans to launch another three thousand by 2021.

Agrawal, Joshua Gans, and Avi Goldfarb (Chicago: University of Chicago Press), figure 1. 14. “Germany Starts Facial Recognition Tests at Rail Station,” 2017, New York Post, December 17. 15. N. Coudray et al., 2018, “Classification and Mutation Prediction from Non–Small Cell Lung Cancer Histopathology Images Using Deep Learning,” Nature Medicine 24 (10): 1559–1567. 16. A. Esteva et al., 2017, “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks,” Nature 542 (7639): 115. 17. W. Xiong et al., 2017, “The Microsoft 2017 Conversational Speech Recognition System,” Microsoft AI and Research Technical Report MSR-TR-2017-39, August, https://www.microsoft.com/en-us/research/wp-content/uploads/2017/08/ms_swbd17-2.pdf. 18.

pages: 291 words: 80,068

Framers: Human Advantage in an Age of Technology and Turmoil
by Kenneth Cukier , Viktor Mayer-Schönberger and Francis de Véricourt
Published 10 May 2021

In Germany: Alexander Frölich, “Rechtsextremisten steuern die Corona-Proteste zum Teil schon,” Der Tagesspiegel, November 16, 2020, https://www.tagesspiegel.de/berlin/berliner-sicherheitsbehoerden-alarmiert-rechtsextremisten-steuern-die-corona-proteste-zum-teil-schon/26627460.html; Tilma Steffen and Ferdinand Otto, “Aktivisten kamen als Gäste der AfD in den Bundestag,” Die Zeit, November 19, 2020, https://www.zeit.de/politik/deutschland/2020-11/bundestag-afd-stoerer-corona-protest-einschleusung. On François Chollet: François Chollet, Deep Learning with Python (Shelter Island, NY: Manning, 2017). See https://blog.keras.io/the-limitations-of-deep-learning.html. In an interview with Kenneth Cukier in February 2021, he elaborated on how to improve “extreme generalization,” or framing: “The way you learn and adapt is by constantly making analogies with past situations and concepts. If you have a very rich and diverse bank of past situations and concepts to leverage, you will be able to make more powerful analogies.”

The machine, as Dennett suggests, can do a lot of calculating with an immense amount of formal logic and processing reams of data, but it cannot frame. Much has changed in AI since Dennett wrote his three scenarios. AI no longer relies on humans feeding abstract rules into machines. Instead, the most popular methods today, such as machine learning and deep learning, involve systems partially self-optimizing from massive amounts of data. But although the process is different, the difficulty hasn’t gone away. Even with lots of training data, when a robot encounters a novel situation like a ticking bomb, it can be at an utter loss. Framing—capturing some essence of reality through a mental model in order to devise an effective course of action—is something humans do and machines cannot.

On a death every three seconds: Joe Myers, “This Is How Many People Antibiotic Resistance Could Kill Every Year by 2050 If Nothing Is Done,” World Economic Forum, September 23, 2016, https://www.weforum.org/agenda/2016/09/this-is-how-many-people-will-die-from-antimicrobial-resistance-every-year-by-2050-if-nothing-is-done/. Coolidge’s son’s infection: Chelsea Follett, “U.S. President’s Son Dies of an Infected Blister?,” HumanProgress, March 1, 2016, https://www.humanprogress.org/u-s-presidents-son-dies-of-an-infected-blister/. AI to identify antibiotics: Jonathan M. Stokes et al., “A Deep Learning Approach to Antibiotic Discovery,” Cell 180, no. 4 (February 20, 2020): 688–702. Barzilay quotes: Regina Barzilay, in an interview with Kenneth Cukier, February and November 2020. On Colin Kaepernick: Eric Reid, “Why Colin Kaepernick and I Decided to Take a Knee,” New York Times, September 25, 2017, https://www.nytimes.com/2017/09/25/opinion/colin-kaepernick-football-protests.html.

pages: 328 words: 84,682

The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power
by Michael A. Cusumano , Annabelle Gawer and David B. Yoffie
Published 6 May 2019

Both involve a dramatic change in platform ecosystems.

VOICE WARS: RAPID GROWTH BUT CHAOTIC PLATFORM COMPETITION

Although artificial intelligence has been around for decades, one branch has made exceptional progress: machine learning (using software algorithms that analyze and learn from data) and its subfield, deep learning (training many-layered neural networks, software models loosely inspired by the brain, on massively parallel hardware). Applications of these technologies have led to dramatic improvements in certain forms of pattern recognition, especially for images and voice. Apple got the world excited about a voice interface when it introduced Siri in 2011.

No company in 2018 seemed to have a clear path to making a profit directly from this technology. As we finished this book, it was too early to tell how the voice wars will play out. The market was still like the Wild West—more chaos than order. Between 2017 and 2018, improvements in machine learning and deep learning were creating better voice experiences across all competitors. Google appeared to be the technical leader in AI, with many applications in search, advertisements, and machine translation, among others. Apple, which lagged behind in early benchmarks, was improving quickly, as were the benchmarks for Microsoft’s Cortana and Amazon’s Alexa.4 In 2018, Google had the advantage of hundreds of millions of devices (Android smartphones) that have Google’s voice capabilities embedded.


pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

Rashida Richardson, Jason M. Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review 94 (2019): 192–233. 37. Fabio Ciucci, “AI (Deep Learning) Explained Simply,” Data Science Central (blog), November 20, 2018, https://www.datasciencecentral.com/profiles/blogs/ai-deep-learning-explained-simply (“MLs fail when the older data gets less relevant or wrong very soon and often. The task or rules learned must keep the same, or at most rarely updated, so you can re-train.”). 38. Kiel Brennan-Marquez, “Plausible Cause: Explanatory Standards in the Age of Powerful Machines,” Vanderbilt Law Review 70 (2017): 1249–1302. 39.

Certainly narrow AI, designed to make specific predictions, is based on quantifying probability.23 It is but one of many steps taken over the past two decades to modernize medicine with a more extensive evidence base.24 Medical researchers have seized on predictive analytics, big data, artificial intelligence, machine learning, and deep learning as master metaphors for optimizing system performance. Literature in each of these areas can help regulators identify problematic data in AI. Moreover, critiques of the limits of AI itself (including lack of reproducibility, narrow validity, overblown claims, and opaque data) should also inform legal standards.25 The key idea here is that AI’s core competence—helping humans avoid errors—must now be turned on the humans who create AI.

Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Cambridge, MA: Harvard Business Review Press, 2018). 10. For example, a self-driving car’s “vision” system may interpret a stop sign as a “45 miles per hour” sign if some pieces of tape are placed on the sign. Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, et al., “Robust Physical-World Attacks on Deep Learning Visual Classification,” arXiv:1707.08945v5 [cs.CR] (2018). 11. Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019). 12. Kim Saverno, “Ability of Pharmacy Clinical Decision-Support Software to Alert Users about Clinically Important Drug-Drug Interactions,” Journal of American Medical Informatics Association 18, no. 1 (2011): 32–37. 13.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

In the case of halicin, a neural network captured the association between molecules (the inputs) and their potential to inhibit bacterial growth (the output). The AI that discovered halicin did this without information about chemical processes or drug functions, discovering relationships between the inputs and outputs through deep learning, in which layers of a neural network closer to the input tend to reflect aspects of the input while layers farther from the input tend to reflect broader generalizations that are predictive of the desired output. Deep learning allows neural networks to capture complex relationships such as those between antibiotic effectiveness and aspects of molecular structure reflected in the training data (atomic weight, chemical composition, types of bonds, and the like).
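The layered structure described here can be sketched in a few lines: each layer is a simple transformation (weighted sums followed by a nonlinearity), and the network is their composition. The weights, biases, and inputs below are invented for illustration; this is a minimal sketch, not the halicin model:

```python
import math

# Minimal sketch of a deep network as a composition of simple layer
# transformations. All weights and inputs are made up for illustration.

def layer(weights, biases, inputs):
    """One layer: weighted sums of the inputs, then a sigmoid nonlinearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(network, inputs):
    """Pass the input through each layer in turn (a composition)."""
    activations = inputs
    for weights, biases in network:
        activations = layer(weights, biases, activations)
    return activations

# A 3-input network: one hidden layer (2 units) and a single output.
network = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.2]),                               # output layer
]

score = forward(network, [0.7, 0.1, 0.3])  # e.g. three molecular features
print(0.0 < score[0] < 1.0)  # a probability-like output
```

Layers closer to the input transform the raw features directly; later layers only ever see the earlier layers’ outputs, which is why they come to encode broader combinations of the input.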

For example, the 540-billion-parameter version of PaLM is able to provide explanations of a joke such as the following: “Did you see that Google just hired an eloquent whale for their TPU team? It showed them how to communicate between different pods.” PaLM offers the explanation: “TPUs are a type of computer chip that Google uses for deep learning. A ‘pod’ is a group of TPUs. A ‘pod’ is also a group of whales. The joke is that the whale is able to communicate between two groups of whales, but the speaker is pretending that the whale is able to communicate between two groups of TPUs.” Beyond human language, these systems can predict completions of partial computer code, provide potential fixes to errors in computer code, and even generate translations from human text to computer code.

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
by Adrienne Mayor
Published 27 Nov 2018

Contents: Made, Not Born; 1. The Robot and the Witch: Talos and Medea; 2. Medea’s Cauldron of Rejuvenation; 3. The Quest for Immortality and Eternal Youth; 4. Beyond Nature: Enhanced Powers Borrowed from Gods and Animals; 5. Daedalus and the Living Statues; 6. Pygmalion’s Living Doll and Prometheus’s First Humans; 7. Hephaestus: Divine Devices and Automata; 8. Pandora: Beautiful, Artificial, Evil; 9. Between Myth and History: Real Automata and Lifelike Artifices in the Ancient World; Epilogue. Awe, Dread, Hope: Deep Learning and Ancient Stories; Glossary; Notes; Bibliography; Index. Illustrations (color plates): 1. Death of Talos; 2. Jason uses a tool to destroy Talos; 3. Foundry workers making a statue of an athlete; 4. Blacksmith at work with tools; 5. Medea rejuvenates a ram in her cauldron; 6.

Bpk Bildagentur / Photo by Johannes Laurentius / Antikensammlung, Staatliche Museen, Berlin / Art Resource, NY. PLATE 14 (FIG. 8.7). Detail, Pandora admired by gods and goddesses, on the red-figure calyx krater by the Niobid Painter, about 460 BC, inv. 1856,1213.1. © The Trustees of the British Museum.

EPILOGUE. Awe, Dread, Hope: Deep Learning and Ancient Stories

Ancient myths articulated timeless hopes and fears about artificial life, human limits, and immortality. What could we—and Artificial Intelligence—learn from the classical tales? The mix of exuberance and anxiety aroused by a blurring of the lines between nature and machines might seem a uniquely modern response to the juggernaut of scientific progress in the age of technology.

As we saw, Prometheus warned humankind that Pandora’s jar should never be opened. Are Stephen Hawking, Elon Musk, Bill Gates, and other prescient thinkers the Promethean Titans of our era? They have warned scientists to halt or at least slow the reckless pursuit of AI, because they foresee that once it is set in motion, humans will be unable to control it. “Deep learning” algorithms allow AI computers to extract patterns from vast data, extrapolate to novel situations, and decide on actions with no human guidance. Inevitably AI entities will ask—and answer—questions of their own devising. Computers have already developed altruism and deceit on their own. Will AI become curious to discover hidden knowledge and make decisions by its own logic?

pages: 442 words: 94,734

The Art of Statistics: Learning From Data
by David Spiegelhalter
Published 14 Oct 2019

Measures that lack value for prediction or classification may be identified by data visualization or regression methods and then discarded, or the numbers of features may be reduced by forming composite measures that encapsulate most of the information. Recent developments in extremely complex models, such as those labelled as deep learning, suggest that this initial stage of data reduction may not be necessary and the total raw data can be processed in a single algorithm. Classification and Prediction A bewildering range of alternative methods are now readily available for building classification and prediction algorithms.
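The data-reduction step described above can be sketched simply. Spiegelhalter does not prescribe a method; here, as one illustration, two strongly correlated measures are standardized and averaged into a single composite (the data are invented):

```python
import statistics

# Sketch of data reduction via a composite measure: replace two
# correlated features with one score that captures their shared
# information. The method (mean of standardized features) and the
# data are illustrative assumptions, not taken from the book.

def standardize(values):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

heights = [150.0, 160.0, 170.0, 180.0, 190.0]
weights = [52.0, 61.0, 70.0, 79.0, 88.0]   # strongly correlated with height

composite = [
    (h + w) / 2.0
    for h, w in zip(standardize(heights), standardize(weights))
]
print(composite[0] < composite[-1])  # ordering of cases preserved: True
```

Each case is now described by one number instead of two, which is exactly the kind of preprocessing that very deep models are claimed to make unnecessary.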

Neural networks comprise layers of nodes, each node depending on the previous layer by weights, rather like a series of logistic regressions piled on top of each other. Weights are learned by an optimization procedure, and, rather like random forests, multiple neural networks can be constructed and averaged. Neural networks with many layers have become known as deep-learning models: Google’s Inception image-recognition system is said to have over twenty layers and over 300,000 parameters to estimate. K-nearest-neighbour classifies according to the majority outcome among close cases in the training set. The results of applying some of these methods to the Titanic data, with tuning parameters chosen using tenfold cross-validation and ROC as an optimization criterion, are shown in Table 6.4.
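The K-nearest-neighbour rule described here fits in a few lines: find the K training cases closest to the new case and take the majority outcome among them. A sketch with invented, Titanic-flavoured toy data (not the actual passenger records used in the book):

```python
from collections import Counter

# Minimal K-nearest-neighbour classifier: predict the majority outcome
# among the K closest cases in the training set. Toy data invented for
# illustration.

def knn_predict(train, features, k=3):
    """train: list of (feature_vector, label) pairs."""
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, features))
    neighbours = sorted(train, key=lambda case: dist(case[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Features: (age, fare paid); label: 1 = survived, 0 = died (invented).
train = [
    ((22.0, 7.25), 0), ((38.0, 71.3), 1), ((26.0, 7.9), 0),
    ((35.0, 53.1), 1), ((28.0, 8.05), 0), ((4.0, 16.7), 1),
]

print(knn_predict(train, (30.0, 60.0), k=3))  # 1: near the high-fare survivors
```

Here K is a tuning parameter of exactly the kind the passage says is chosen by cross-validation: small K follows local quirks of the training data, large K smooths them out.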

data literacy: the ability to understand the principles behind learning from data, carry out basic data analyses, and critique the quality of claims made on the basis of data. data science: the study and application of techniques for deriving insights from data, including constructing algorithms for prediction. Traditional statistical science forms part of data science, which also includes a strong element of coding and data management. deep learning: a machine-learning technique that extends standard artificial neural network models to many layers representing different levels of abstraction, say going from individual pixels of an image through to recognition of objects. dependent events: when the probability of one event depends on the outcome of another event.

The Internet Trap: How the Digital Economy Builds Monopolies and Undermines Democracy
by Matthew Hindman
Published 24 Sep 2018

Google has even built new globally distributed database systems called Spanner and F1, in which operations across different data centers are synced using atomic clocks.22 The latest iteration of Borg, Google’s cluster management system, coordinates “hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.”23 In recent years Google’s data centers have expanded their capabilities in other ways, too. As Google has increasingly focused on problems like computer vision, speech recognition, and natural language processing, it has worked to deploy deep learning, a variant of neural network methods. Google’s investments in deep learning have been massive and multifaceted, including (among other things) major corporate acquisitions and the development of the TensorFlow high-level programming toolkit.24 But one critical component has been the development of a custom computer chip built specially for machine learning.

costs and, 8, 30, 34, 37, 41–43, 63, 72–74, 168, 181–82; search engines and, 64, 79–81; software and, 65–66; stickiness and, 74; subscriptions and, 65, 67; television and, 66, 70; traffic and, 63, 77–81; video and, 69, 76; Yahoo!

pages: 1,409 words: 205,237

Architecting Modern Data Platforms: A Guide to Enterprise Hadoop at Scale
by Jan Kunigk , Ian Buss , Paul Wilkinson and Lars George
Published 8 Jan 2019

Data scientists typically also need extensive experience with SQL as a tool to drill down into the datasets that they require to build statistical models, via SparkSQL, Hive, or Impala. Machine learning and deep learning Simply speaking, machine learning is where the rubber of big data analytics hits the road. While certainly a hyped term, machine learning goes beyond classic statistics, with more advanced algorithms that predict an outcome by learning from the data—often without explicitly being programmed. The most advanced methods in machine learning, referred to as deep learning, are able to automatically discover the relevant data features for learning, which essentially enables use cases like computer vision, natural language processing, or fraud detection for any corporation.
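The idea of predicting an outcome "by learning from the data—often without explicitly being programmed" can be sketched in a few lines. This is a hypothetical toy illustration, not code from the book: instead of hand-coding the rule y = 2x, a single parameter is fitted from examples by gradient descent.

```python
# Minimal sketch of "learning from data": fit y ~ w*x by
# gradient descent instead of programming the rule explicitly.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]  # roughly y = 2x

w = 0.0     # the parameter to be learned
lr = 0.01   # learning rate

for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges close to the true slope of 2
```

Deep learning applies the same principle, but with millions of parameters arranged in layers, which is what lets it discover the relevant features of the data on its own.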

Many, if not most, enterprises have already embarked on their data-driven journeys and are making serious investments in hardware, software, and services. The big data market is projected to continue growing apace, reaching somewhere in the region of $90 billion of annual revenue by 2025. Related markets, such as deep learning and artificial intelligence, that are enabled by data platforms are also set to see exponential growth over the next decade. The move to Hadoop, and to modern data platforms in general, has coincided with a number of secular trends in enterprise IT, a selection of which are discussed here. Some of these trends are directly caused by the focus on big data, but others are a result of a multitude of other factors, such as the desire to reduce software costs, consolidate and simplify IT operations, and dramatically reduce the time to procure new hardware and resources for new use cases.

It is now generally accepted that, for storage and data processing, the right way to scale a platform is to do so horizontally using distributed clusters of commodity (which does not necessarily mean the cheapest) servers rather than vertically with ever more powerful machines. Although some workloads, such as deep learning, are more difficult to distribute and parallelize, they can still benefit from plenty of machines with lots of cores, RAM, and GPUs, and the data to drive such workloads will be ingested, cleaned, and prepared in horizontally scalable environments. Adoption of Open Source Although proprietary software will always have its place, enterprises have come to appreciate the benefits of placing open source software at the center of their data strategies, with its attendant advantages of transparency and data freedom.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

An unsupervised machine can therefore be used to ‘discover knowledge’, that is, to make connections of which its human programmers were totally unaware.36 In reinforcement learning, the machine is given ‘rewards’ and ‘punishments’ telling it whether what it did was right or wrong. The machine self-improves. Many of the advances described in this chapter, particularly those involving images, speech, and text, are the result of so-called ‘deep learning’ techniques that use ‘neural networks’ inspired by the structure of animal brains. Google launched one in 2012, integrating 1,000 large computers with more than a billion connections. This computer was presented with 10 million ‘random’ images from YouTube videos. It was not told what to look for, and the images were not labelled.

After three days, one unit had learned to identify human faces and another had learned to respond to images of a cat’s face (this is YouTube after all).37 Engineers at Google now use ‘duelling’ neural networks to train each other: one AI system creates realistic images while a second AI system plays the role of critic, trying to work out whether they’re fake or real.38 The rapid increase in the use of deep learning can be seen from the AI systems used in games. The version of Deep Blue that beat Garry Kasparov at chess in 1997 was programmed with many general principles of good play. What’s most remarkable about AlphaGo Zero—the latest and most powerful incarnation of the Go-playing AI systems—however, is that it ‘learned’ not by playing against the very best humans or even learning from human play, but by playing against itself over and over again, starting from completely random moves, and rapidly improving over time.39 Machine learning has been around for a while.
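The reward-and-punishment loop described here can be sketched with tabular Q-learning on a toy "corridor" problem. This is a hypothetical illustration of the self-improvement principle only; AlphaGo Zero's actual method combines self-play with deep networks and tree search.

```python
import random

# Tabular Q-learning on a 5-state corridor: the agent starts at
# state 0 and receives a "reward" only for reaching state 4.
# It self-improves from that signal alone, with no other guidance.
N_STATES = 5
ACTIONS = [-1, +1]                      # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward signal
        # standard Q-learning update toward reward plus discounted future value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told which moves are right; the pattern of rewards alone shapes its behaviour, which is the core of the reinforcement-learning idea the passage describes.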

Cade Metz,‘Google’s Dueling Neural Networks Spar to Get Smarter, No Humans Required’, Wired, 11 April 2017 <https://www.wired. com/2017/04/googles-dueling-neural-networks-spar-get-smarterno-humans-required/> (accessed 28 November 2017). 39. Silver et al., ‘Mastering’. 40. Domingos, Master Algorithm, 7. 41. Neil Lawrence, quoted in Alex Hern, ‘Why Data is the New Coal’, The Guardian, 27 September 2016 <https://www.theguardian.com/ technology/2016/sep/27/data-efficiency-deep-learning> (accessed 28 November 2017). 42. Ray Kurzweil, The Singularity is Near (New York:Viking, 2005), 127, cited in Susskind and Susskind, Future of the Professions, 157; Peter H. Diamandis and Steven Kotler, Abundance:The Future is Better Than You Think (New York: Free Press, 2014), 55. 43. Paul Mason, Postcapitalism: A Guide to Our Future (London: Allen Lane, 2015), 121. 44.

pages: 439 words: 131,081

The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World
by Max Fisher
Published 5 Sep 2022

In a 2016 paper, Google’s engineers announced a “fundamental paradigm shift” to a new kind of machine learning they called “deep learning.” In the earlier A.I., an automated system had built the programs that picked videos. But, as with the spam-catching A.I.s, humans oversaw that system, intervening as it evolved to guide it and make changes. Now, deep learning was sophisticated enough to assume that oversight job, too. As a result, in most cases, “there’s going to be no humans actually making algorithmic tweaks, measuring those tweaks, and then implementing those tweaks,” the head of an agency that developed talent for YouTube wrote in an article deciphering the deep-learning paper. “So, when YouTube claims they can’t really say why the algorithm does what it does, they probably mean that very literally.”

It was as if Coca-Cola stocked a billion soda machines with some A.I.-designed beverage without a single human checking the bottles’ contents—and if the drink-filling A.I. was programmed only to boost sales, without regard for health or safety. As one of YouTube’s deep-learning engineers told an industry conference, “Product tells us that we want to increase this metric, and then we go and increase it.” The average user’s time on the platform skyrocketed. The company estimated that 70 percent of its time on site, an astronomical share of its business, was the result of videos pushed by its algorithm-run recommendation system.

Most were young, save one: the suit-wearing Bolsonaro, whom Dominguez first saw as a guest on one of Moura’s videos. It was 2016. At the time, Bolsonaro, a longtime lawmaker in Brazil’s version of the U.S. House of Representatives, was shunned even in his own party. But YouTube, chasing its billion-hour watch-time goal, had just installed its new, deep-learning A.I. In Brazil, far-right YouTubers—Bolsonaro’s real party—saw their exposure skyrocket. “It all started from there,” said Dominguez, now a lanky eighteen-year-old with glasses and a ponytail, calling YouTube the new home of the Brazilian right. The recommendation algorithm had “woken up Brazilians,” he said.

pages: 404 words: 95,163

Amazon: How the World’s Most Relentless Retailer Will Continue to Revolutionize Commerce
by Natalie Berg and Miya Knights
Published 28 Jan 2019

Addresses the perennial headache that is online returns, while driving footfall to Kohl’s. We expect this to be rolled out internationally.
International: No. Launched: 2018. Amazon Go (Retail): First checkout-free store. Shoppers scan their Amazon app to enter. The high-tech convenience store uses a combination of computer vision, sensor fusion and deep learning to create a frictionless customer experience.
International: No. Expected: 2019 and beyond. Fashion or furniture stores would be a logical next step.
NOTE: Amazon Go officially opened its doors to the public in 2018.
SOURCE: Amazon; author research as of June 2018.
However, it was Amazon’s rather ironic launch of physical bookstores in 2015 that marked a genuine shift in strategy, as this was the first time Amazon mimicked digital merchandising and pricing in a physical setting.

Autonomous computing Where connectivity and interfaces have, to date, been hardware-based, the third global technology driver is predicated on the development of increasingly ‘intelligent’ software that can almost think for itself and come up with answers to questions without necessarily being programmed with the necessary information. Instead, autonomous computing systems can cross-reference and correlate disparate data sources, augment their own algorithms, and answer complex ‘what if?’ sorts of questions. As such, AI, including machine learning and deep learning techniques, could not exist without autonomous computing development as the last global technology driver. AI development is, in fact, responsible for many of the functional computing advances of the last 15 years, from search algorithms, spam filters and fraud prevention systems to self-driving vehicles and smart personal assistants.

In all of these applications, it can use the massive computing power of its AWS division to crunch billions of data points in support of testing a variety of options and outcomes to rapidly work out what will and won’t cost-effectively work with customers. McKinsey estimates put the proportion of Amazon purchases driven by product recommendations at 35 per cent.3 In 2016, it made its AI framework, DSSTNE (pronounced as ‘destiny’) free, to help expand the ways deep learning can extend beyond speech and language understanding and object recognition to areas such as search and recommendations. The decision to open source DSSTNE also demonstrates when Amazon recognizes the need to collaborate over making gains with the vast potential of AI. On the Amazon site, these recommendations can be personalized, based on categories and ranges previously searched or browsed, to increase conversion.
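Product recommendations of the kind described are often built on collaborative filtering, scoring items by how often they are bought alongside a customer's existing purchases. Below is a minimal item-based sketch with hypothetical data and function names; it is unrelated to Amazon's actual systems or to DSSTNE, which is a neural-network library.

```python
from math import sqrt

# Toy purchase history: each user maps to the set of items they bought.
purchases = {
    "alice": {"book", "lamp"},
    "bob":   {"book", "lamp", "desk"},
    "carol": {"book", "desk"},
}
items = sorted({i for basket in purchases.values() for i in basket})

def cosine(a, b):
    """Cosine similarity between two items' buyer sets."""
    buyers_a = {u for u, basket in purchases.items() if a in basket}
    buyers_b = {u for u, basket in purchases.items() if b in basket}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(user, k=1):
    """Rank items the user has not bought by similarity to their basket."""
    basket = purchases[user]
    scores = {
        item: sum(cosine(item, owned) for owned in basket)
        for item in items if item not in basket
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # "desk" co-occurs with alice's purchases
```

Real systems replace the toy similarity with models learned from billions of data points, but the shape of the problem, ranking unseen items against a customer's history, is the same.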

pages: 234 words: 67,589

Internet for the People: The Fight for Our Digital Future
by Ben Tarnoff
Published 13 Jun 2022

Market for cloud infrastructure services: As of Q4 2020, AWS has 32 percent of the global market for cloud infrastructure services, followed by Microsoft Azure at 20 percent and Google Cloud at 9 percent, according to Felix Richter, “Amazon Leads $130-Billion Cloud Market,” Statista, February 4, 2021. 108, These dynamics accelerated … The 2010s saw the revival of neural networks under the banner of “deep learning,” which involves the use of many-layered networks. This revival was made possible by a number of factors, foremost among them advances in computing power and the abundance of training data that could be sourced from the internet. Deep learning is the paradigm that underlies much of what is currently known as “artificial intelligence,” and has centrally contributed to significant breakthroughs in computer vision and natural language processing. See Andrey Kurenkov, “A Brief History of Neural Nets and Deep Learning,” Skynet Today, September 27, 2020, and Alex Hanna et al., “Lines of Sight,” Logic, December 20, 2020. 109, The sophistication of these systems … “Data imperative”: Marion Fourcade and Kieran Healy, “Seeing Like a Market,” Socio-Economic Review 15, no. 1 (2017): 9–29. 110, The same individual … Smartphone usage: “Mobile Fact Sheet,” April 7, 2021, Pew Research Center.

pages: 94 words: 33,179

Novacene: The Coming Age of Hyperintelligence
by James Lovelock
Published 27 Aug 2019

It now seems probable that a new form of intelligent life will emerge from an artificially intelligent (AI) precursor made by one of us, perhaps from something like AlphaZero. The signs of the increasing power of AI are all around us. If you read science and technology news feeds, you will be bombarded daily with astounding developments. Here is an example I just spotted. Using ‘deep learning’ technology such as AlphaGo, scientists in Singapore have made a computer that can predict your risk of having a heart attack by looking into your eyes. Not only that, it can tell the gender of a person, also just by looking into the eyes. You might ask, who needs a machine to do that? But the point is, we didn't know it could be done.

(Rossum's Universal Robots) 91, 114 capitalism, global 68 carbon 109 compounds 72 carbon dioxide 5, 10, 11, 57, 64, 105, 110 interglacial levels 55 possible implications of over-reducing levels 56 chalk 64 chlorofluorocarbons (CFCs) 38, 69, 71 cities 40, 50–53 Clarke, Arthur C. 59 climate change global warming 54, 57–8, 60–62, 65, 111 Paris Conference on (2016) 55 possible implications of over-reducing carbon dioxide levels 56 Clynes, Manfred 29 coal 33–5 communication cyborg 99–101 and evolution of speech 96–9 Marconi and electronic information transfer 128 telepathic 100–101 computer code 95 computer programs 79–80, 82, 83, 84 design limitations, lacking intuitive awareness 92 computing systems 92 based on an adaptive neural network 113–14 evolution of adaptive 114 parallel 82 PC chips 92 cosmos age of 3–4 and anthropic principle 25–7, 75, 89, 123 awakening to consciousness 3, 23, 26–8, 121–3 conventional scientific view of 15 dependence of the knowing cosmos on human survival 23–4 Grand Unification epoch 39 information as fundamental property of 26, 75, 87–8 purpose 26–8, 123 size of 3 cyborgs 29–30, 88–9, 101–3, 106, 119–20, 123 and Asimov's three laws of robotics 94 communication 99–101 cooperation with humans 102–3, 104–5, 106–8, 110, 115 and Earth's temperature 106–8, 115 emergence of 85–6, 118–20 evolution 29–30, 94–5, 101, 118, 123 human diplomacy and life with 118–20 as masters over humans 118–20, 123 parallel processing 82 and the quantum world 102 self-written code 94–5 space travel possibilities 108–9 telepathy 100–101 and war 112–17 Daisyworld 13 ‘Data’ (Star Trek android) 93 DeepMind (AI technology company) 79, 80 Deep Blue (computer) 79–80 deep learning technology 82 Delhi 50 diamond 109 chips 44, 109 diesel 72 digital technology 82–3 dimethyl sulphide 60 DNA 109, 127 dolphin intelligence 23 drones 91, 101, 113, 114, 116 Earth/Gaia age of 3, 4, 57–8 in Anthropocene 37–40; see also Anthropocene and asteroid strikes 6–8, 58–9, 63, 65–6 beginning of 
life on 3, 129–30 capacity to know itself 130 cooling mechanisms 11, 15, 30, 64, 66, 105 feedback loops 65 Gaia theory 12–13, 14–17, 26–7, 106 geoengineering 106–8 glaciation 55 Great Dying 6–7 in Novacene 30, 106–11; see also Novacene as an ocean planet 59–60, 64, 105–6; see also oceans population growth 67 radiation of excessive heat 11, 15, 30 temperature, see temperature of the Earth ecomodernism 67–9 The Economist 112–13 ecosystems 7, 19 Einstein, Albert 20, 21, 102 electrical power 70 electromagnetic pulses (EMPs) 114 electron capture detector (ECD) 38 electronic life 85, 105, 109, 114, 118 cyborgs, see cyborgs exoplanet possibilities of 9–10 European Geophysical Union 17 evolution 27, 28, 129–30 of adaptive computer systems 114 Anthropocene 35–6, 43, 70 chance, necessity and 85 cybernetic 29–30, 94–5, 101, 118, 123 and entry to the Novacene 83–6 of the eye 97 by intentional selection 43, 84, 86, 88–9 by natural selection 3–4, 70, 114; see also natural selection of nesting insects, and city life 51–2 Novacene 29–30, 94–5, 101, 110–11 from self-written code 94–5 of speech 96–9 exoplanets 3, 9–10, 121–2 extremophiles 62, 106 eyes, evolution of 97 Faraday, Michael 70 feedback loops 65 feminine intuition 18, 20 Fermi, Enrico 121 Feynman, Richard 102 First World War 45 fossil fuels 49 Freedman, Sir Lawrence 48 Frege, Gottlob 16 Freud, Sigmund 90 Gaia, see Earth/Gaia galaxies 3, 27 Gatling, Richard, rotary cannon 45 geoengineering 106–8 glaciation 55 global warming 54, 57–8, 60–62, 65, 111 Go (game) 79–80, 84, 88 God 24, 26, 68 Goldilocks Zone 10–11 graphene 109 greenhouse effect 5, 12, 60–61, 107 Greens, the 57, 71, 72 Guernica bombing 45, 48 Hamilton, Clive 68 Hansen, James 63 Hardy, Thomas 124 Havel, Václav 26–7 Hawking, Frank 62–3 Hawking, Isobel Eileen 63 Hawking, Stephen 63 Heaviside, Oliver 128 helium 11 Hiroshima 46 Hooke, Robert 33 human race age of species, Homo sapiens 3 aloneness of 3–5, 121, 122 and the Anthropocene, see Anthropocene at edge of 
extinction 6–13 guilt feelings over achievements 56 as prime understanders of cosmos 5 temperature tolerance 62, 63–4 hunter-gatherers 21, 67, 125 hydrogen 11, 63–4, 110 Indonesia 7 industrial pollution 37–8 Industrial Revolution 34–5, 37, 70 information 21, 74–5, 87 and anthropic principle 26, 75, 89, 123 capture and storage of 28, 74–5 communicating, see communication conversion of sunlight into 28, 39, 74–5, 87 cyborg retrieval of 101 as a fundamental property of the universe 26, 75, 87–8 junk information 111 Marconi and electronic information transfer 128 maximum transmission rate 81 and neurons 81 theory 88 unconscious 14; see also intuition units of 88, 89 instinct 17, 19, 20, 93 see also intuition intelligence/intelligent life 3, 4, 26, 102–3 AI, see artificial intelligence of animal species 23, 51–2 and cosmic purpose 26–8 cybernetic, see cyborgs distinguishing feature of human intelligence 23 dolphin intelligence 23 electronic, see artificial intelligence; cyborgs; electronic life humanoid ideas of intelligent beings 90–92, 93–4 intelligence/intelligent life (Cont.) 
hyperintelligence 29, 117, 122 natural selection for 27 social intelligence of bees 51–2 intentional selection 43, 84, 86, 88–9 intuition 13, 14, 18–20, 22, 38; see also instinct AI 80 computer design limitations, lacking intuitive awareness 92 denigration of 20, 99 feminine 18, 20 and invention 38, 99 and parallel processing 92–3 and telepathy 100–101 iodine 60 Jakarta 50 Jefferson, Thomas 52 Jet Propulsion Laboratory, California 47–8 junk information 111 Kasparov, Garry 79–80 Kline, Nathan 29 Laki volcano, Iceland 40 Laplace, Pierre-Simon 18–19 Latour, Bruno 17 LAWS (lethal autonomous weapons systems) 115–17 Lidwell, Owen 62 life chance, necessity and the appearance of 85 and Earth's temperature 11, 15, 30, 62, 63–4, 105–6 electronic, see cyborgs; electronic life evolution of, see evolution on exoplanets 3, 9–10, 121–2 intelligent, see intelligence/intelligent life longevity of life forms 4 Novacene 86, 88–9; see also cyborgs; electronic life spotting life on another planet 127 and zone of habitability/Goldilocks Zone 10–11 limestone 64 linear logic, see logic, linear logic, linear 13, 14, 15, 16, 18, 21, 93, 94, 100, 102 Lotka, Alfred 19 Lovelace, Ada 83 Lovelock, Nell 124 Lovelock, Sandy 125 Lovelock, Tom 124–5 Lovelock family 124–5 Luftwaffe 45 Lynas, Mark 67–8, 69 Marconi, Guglielmo 128–9 Mars 6, 8–9, 10, 59, 108 Mariner mission 126–7 The Matrix 108 Maxwell, James Clerk 16 megacities 40, 50 memory 120 Mercury 10, 21, 22 methane 65, 72 methyl iodide 60 Monod, Jacques 85 Moon 6, 58, 126–7 soft landing on 25, 126 Moore, Gordon 43 Moore's Law 43–4, 82–3, 86 Morton, Oliver: The Planet Remade 107 multiverse theory 26 Mumford, Lewis 45 Musk, Elon 115 Nagasaki 46 NASA 24–5, 126–7 natural selection 3–4, 70, 84, 88, 98, 114 for intelligence 27 neurons 81 new atheists 27 Newcomen, Thomas 33, 128 steam engine 34–6, 87, 124 Newton, Isaac 18–19 laws of planetary motion 21 Novacene and AlphaZero 80, 82 and autonomous weapons 115–17 and the conscious cosmos 121–3 cyborgs, 
see cyborgs and engineering 83–6; see also cyborgs evolution 29–30, 94–5, 101, 110–11 and Gaia 30, 106–11 likely duration 39 and Marconi 128–9 and Moore's Law 43–4, 82–3, 86 rise of 30, 80, 82–6 speech and writing delaying emergence of 98–9 nuclear power/energy 46, 48–9, 61, 73 nuclear weapons 7, 46–9, 114 oceans 55, 59–60, 63, 64, 107, 110, 129 original sin 56 Orwell, George 4 oxygen 28, 35, 48, 63–4, 108, 109, 110 packaging 72 Palaeocene/Eocene Thermal Maximum 65 parallel processing 82, 92–3 Paris Conference on Climate Change (2016) 55 permafrost 65 petrol 72 Philosophical Transactions of the Royal Society 19 photosynthesis 28, 39, 87, 109 Planck, Max 18–19 plastics 71–2 Poincaré, Henri 18–19 polar ice caps 65 pollution 54, 55, 74 industrial 37–8 radioactive 46 Popper, Karl 16 population growth 67 quantum theory 26, 96, 102 radioactivity 46 railways 41–2 re-wilding 72 reforestation 72 religion 24; see also God green 69 and original sin 56 Rhodes, Richard 46 robots 90, 91, 93–4 rocket speed 42 Roswell incident 121 Russell, Bertrand 16 Second World War 46, 112 selenium 60 Shanghai 50 Shannon, Claude 88 Shelley, Mary: Frankenstein 7 shipbuilding 33–4 Siberian Traps 6–7 silicon 109 chips 43–4 Silverstein, Abe 126 Socrates 20 solar power 73 see also Sun/solar energy solar system 4, 17 speech, evolution of 96–9 stars 3, 4, 27, 121 main sequence 4, 11, 13, 105; see also Sun/solar energy steam engines 70 Newcomen's 34–6, 87, 124 Watt's governor 15–16 Stockfish (computer program) 80 Stoermer, Eugene 37 sulphur 60 Sun/solar energy 4–5, 28, 35, 39, 48, 75; see also sunlight heat emission, and Earth's temperature 5, 10–13, 105, 106–7, 111 sunlight 5, 30, 35, 61 conversion into information 28, 39, 74–5, 87 and the Industrial Revolution 34, 35 and photosynthesis 28, 87 supercritical steam 63, 64 Szilard, Leo 46 Tambora, Mount 7 telegraphy, wireless 128 telepathy 100–101 Tellus journal 17 temperature of the Earth 5, 55–6, 57–66 critical upper limit for life 62, 63–4, 105 
current average 65 and cyborgs 106–8, 115 and extremophiles 62 Gaia's cooling mechanisms 11, 15, 30, 64, 66, 105 global warming 54, 57–8 and greenhouse effect 5, 12, 60–61, 107 and human skin cells 62 and life 11, 15, 30, 62, 63–4, 105–6 Palaeocene/Eocene Thermal Maximum 65 possible implications of over-reducing carbon dioxide levels 56 and radiation of excessive heat 11, 15, 30, 60, 107–8 and sea temperature 60, 64 and the Sun 5, 10–13, 105, 106–7, 111 supercritical state 63–4 and water vapour 60–61, 107 Tennyson, Alfred, Lord 130 thinking and anthropic principle 25–7, 89 intuitive, see intuition logical, see logic, linear Tipler, Frank (with John Barrow): The Anthropic Cosmological Principle 24, 25–6, 27, 123 Tokyo, Greater 50 trains 42 transport 42–3 trench warfare 45 Tsar Bomba 46 UFOs 121 unconscious mind 19, 20 see also intuition Venus 10, 64 volcanic events 66 devastating 6–7, 63 Laki 40 Vulcan 21–2 warfare 45–9, 54; see also nuclear weapons and cyborgs 112–17 water vapour 60–61, 107 Watson, Andrew 13 Watt, James 15, 70 Watt governor 15–16 White, Gilbert 39, 41 The Natural History of Selborne 39–40, 41 Wilson, Edward O. 51 wind power 73 Wittgenstein, Ludwig 16, 96 Wood, Lowell 106 Wordsworth, William 42, 54 zone of habitability 10–11

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

Int-$ (the “international dollar”) is a hypothetical currency that tries to take account of different price levels across different countries. 29.  For instance, Daron Acemoglu and Pascual Restrepo, “Artificial Intelligence, Automation and Work” in Ajay Agrawal, Joshua Gans, and Avi Goldfarb, eds., Economics of Artificial Intelligence (Chicago: Chicago University Press, 2018). 30.  Dayong Wang, Aditya Khosla, Rishab Gargeya, et al., “Deep Learning for Identifying Metastatic Breast Cancer,” https://arxiv.org, arXiv:1606.05718 (2016). 31.  Maura Grossman and Gordon Cormack, “Technology-Assisted Review in e-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review,” Richmond Journal of Law and Technology 17, no. 3 (2011). 32.  

Though by no means limited to diagnosis. See Eric Topol, “High-Performance Medicine: The Convergence of Human and Artificial Intelligence,” Nature 25 (2019), 44–56, for a broader overview of the uses of AI in medicine. 41.  Jeffrey De Fauw, Joseph Ledsam, Bernardino Romera-Paredes, et al., “Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease,” Nature Medicine 24 (2018), 1342–50. 42.  Pallab Ghosh, “AI Early Diagnosis Could Save Heart and Cancer Patients,” BBC News, 2 January 2018. 43.  Echo Huang, “A Chinese Hospital Is Betting Big on Artificial Intelligence to Treat Patients,” Quartz, 4 April 2018. 44.  

Davis, Abe, Michael Rubinstein, Neal Wadhwa, et al. “The Visual Microphone: Passive Recovery of Sound from Video.” ACM Transactions on Graphics (TOG) 33, no. 4 (2014). Dawkins, Richard. The Blind Watchmaker. London: Penguin Books, 2016. De Fauw, Jeffrey, Joseph Ledsam, Bernardino Romera-Paredes, et al. “Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease.” Nature Medicine 24 (2018): 1342–50. Deloitte. “From Brawn to Brains: The Impact of Technology on Jobs in the UK” (2015). Deming, David. “The Growing Importance of Social Skills in the Labor Market.” Quarterly Journal of Economics 132, no. 4 (2017): 1593–1640.

pages: 501 words: 114,888

The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives
by Peter H. Diamandis and Steven Kotler
Published 28 Jan 2020

Once we develop this capability, the term “turbo-boost” doesn’t even come close. Yet, we’re already close. AI in the cloud provides the necessary power for JARVIS-like performance. Blending Xiaoice’s conversational friendliness with AlphaGo Zero’s decision-making precision takes this even further. Add in the latest deep learning developments and you get a system that is starting to be able to think for itself. Is it JARVIS? Not yet. But it’s JARVIS-lite—and yet another reason why technological acceleration is itself accelerating. Networks Networks are means of transportation. They’re how goods, services, and, more critically, information and innovation, move from Point A to Point B.

Online retailers are also in the mix, with Amazon acquiring the 3-D body-scanning startup Body Labs in 2017 as a way to make bespoke clothing just another feature available through Prime Wardrobe. As far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During their annual Singles’ Day shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s $25 billion in sales. Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior. And the VR system itself?

Already, affective computing is seeping into e-learning, where AIs adjust the presentation style if the learner gets bored; robotic caregiving, where it improves the quality of robo-nursing; and social monitoring, like a car that engages additional safety measures should the driver become angry. But its biggest impact is on entertainment, where things are getting personal. Facial expressions, hand gestures, eye gaze, vocal tone, head movement, speech frequency and duration are all signals thick with emotional information. By coupling next generation sensors with deep learning techniques, we can read these signals and employ them to analyze a user’s mood. And the basic technology is here. Affectiva, a startup created by Rosalind Picard, the head of MIT’s Affective Computing Group, is an emotional recognition platform used by both the gaming and the marketing industry.

pages: 301 words: 89,076

The Globotics Upheaval: Globalisation, Robotics and the Future of Work
by Richard Baldwin
Published 10 Jan 2019

The deep answer is that Moore’s law and Gilder’s law have shifted into their eruptive growth phases when it comes to machine translation.

WHY NOW? THE DEEP LEARNING TAKEOVER

For a decade, hundreds of Google engineers made incremental progress on translation using the traditional, hands-on approach. In February 2016, Google’s AI maharishi, Jeff Dean, turned the Google Translate team on to Google’s homegrown machine-learning technique, deep learning. The job required huge amounts of computer muscle, but Google had that thanks to Moore’s law. The missing link was the data. That changed in 2016 when the United Nations (UN) posted online a data set with nearly 800,000 documents that had been manually translated into the six official UN languages: Arabic, English, Spanish, French, Russian, and Chinese.

Job Destruction Is the Business Model

We should listen to Andrew Ng. He is one of the intellectual high priests of digital technology. He was the chief scientist at the Chinese online search giant Baidu, leading over a thousand researchers. Before that, he worked at Google, where he founded Google Brain, the project behind the company’s breakthrough machine-learning approach, deep learning. This is the thing behind many of Google’s wonders, including its self-driving cars. As if all that wasn’t enough for one person’s career, when he was a professor at Stanford University, he co-founded the online education platform Coursera. His YouTube lecture on AI has been watched over 1.5 million times.

Know Thyself
by Stephen M Fleming
Published 27 Apr 2021

Fleming, Marta Saez Garcia, Gabriel Weindel, and Karen Davranche. “Revealing Subthreshold Motor Contributions to Perceptual Confidence.” Neuroscience of Consciousness 2019, no. 1 (2019): niz001. Gal, Yarin, and Zoubin Ghahramani. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” arXiv.org, October 4, 2016. Galvin, Susan J., John V. Podd, Vit Drga, and John Whitmore. “Type 2 Tasks in the Theory of Signal Detectability: Discrimination Between Correct and Incorrect Decisions.” Psychonomic Bulletin & Review 10, no. 4 (2003): 843–876. Garrison, Jane R., Emilio Fernandez-Egea, Rashid Zaman, Mark Agius, and Jon S.

International Journal of Law and Psychiatry 62 (2019): 56–76. Kelley, W. M., C. N. Macrae, C. L. Wyland, S. Caglar, S. Inati, and T. F. Heatherton. “Finding the Self? An Event-Related fMRI Study.” Journal of Cognitive Neuroscience 14, no. 5 (2002): 785–794. Kendall, Alex, and Yarin Gal. “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?” arXiv.org, October 5, 2017. Kentridge, R. W., and C. A. Heywood. “Metacognition and Awareness.” Consciousness and Cognition 9, no. 2 (2000): 308–312. Kepecs, Adam, Naoshige Uchida, Hatim A. Zariwala, and Zachary F. Mainen. “Neural Correlates, Computation and Behavioural Impact of Decision Confidence.”

Samaha, Jason, Missy Switzky, and Bradley R. Postle. “Confidence Boosts Serial Dependence in Orientation Estimation.” Journal of Vision 19, no. 4 (2019): 25. Samek, Wojciech, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Cham, Switzerland: Springer, 2019. Schäfer, Anton Maximilian, and Hans-Georg Zimmermann. “Recurrent Neural Networks Are Universal Approximators.” International Journal of Neural Systems 17, no. 4 (2007): 253–263. Schechtman, Marya. The Constitution of Selves. Ithaca, NY: Cornell University Press, 1996.

pages: 428 words: 121,717

Warnings
by Richard A. Clarke
Published 10 Apr 2017

DARPA (the Defense Advanced Research Projects Agency), whose mission is to ensure that the U.S. military is “the initiator and not the victim of strategic technological surprises,”19 launched a program for “explainable AI.” “Machine learning and deep learning algorithms . . . we don’t fully understand today how they work.” The new explainable-AI initiative “will give the human operator more details about how the machine used deep learning to come up with the answer.”20 In 2015, business tycoons Elon Musk and Sam Altman created the OpenAI Institute, a nonprofit company that focuses on researching AI. Musk and Altman believe that by making all of OpenAI’s findings open-source and funding it by private donations, eliminating the need for financial return, they can ensure that AI will be developed for the benefit of all people, not for self-interested or destructive aims.

Then, about as quickly as they slipped, the markets recovered. Later investigations suggested that errors in the autonomous algorithms of high-frequency traders were at least partly to blame. While AI has fundamentally shifted life on Wall Street, so too is it changing Main Street in new and potentially profound ways. Andrew Ng, a pioneer of deep learning (a branch of machine learning that attempts to mirror human cognition), believes that focusing on the threat from superintelligence is misplaced. Ng is behind Google Brain, one of the most aspirational AI systems yet, and he feels that “worrying about evil AI superintelligence today is like worrying about overpopulation on the planet Mars.

Deep Impact (movie), 313–14
Deep learning, 210, 212
Demon core, 83
Deutsche Bank, 157
Devil’s advocates, 359, 379n
Explainable AI, 210

pages: 521 words: 118,183

The Wires of War: Technology and the Global Struggle for Power
by Jacob Helberg
Published 11 Oct 2021

But the recent explosion in AI applications has been driven by major advances in what’s known as machine learning, which, as the AI expert Pedro Domingos puts it, “automates automation itself.”10 Key to these machine learning advances is “deep learning,” powered by “neural networks.” In essence, these neural networks mimic how our brains function. Take the process of identifying the image of a cat. In the past, an engineer might have meticulously spelled out certain rules: two triangles on top of a circle likely means “cat.” With deep learning, however, you’d set a neural network loose on an immense dataset of millions of images labeled “cat” or “no cat” and allow the algorithm to puzzle out patterns for itself.11 (Neural networks have yet to learn to generate good names for cats, however.)
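The labeled-dataset approach Helberg describes can be sketched in a few lines. In this toy example (everything is illustrative: random 64-"pixel" vectors stand in for images, and the "cat" pattern is synthetic), a small neural network trained by gradient descent discovers the labeling rule on its own rather than following hand-written rules:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled images: a "cat" simply means the middle
# pixels are brighter on average -- a pattern the network must discover,
# not a rule we spell out in advance.
def make_data(n):
    X = rng.normal(0.0, 1.0, size=(n, 64))
    y = (X[:, 24:40].mean(axis=1) > 0).astype(float)
    return X, y

X, y = make_data(500)

# One hidden layer of 16 units; weights start random.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # forward pass: hidden activations
    p = sigmoid(h @ W2 + b2).ravel()     # predicted probability of "cat"
    d_out = (p - y)[:, None] / len(y)    # cross-entropy gradient at the output
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # backpropagate through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1       # nudge all weights downhill
    W2 -= lr * dW2; b2 -= lr * db2

preds = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real image classifiers use far deeper networks and millions of examples, but the principle is the same: the engineer supplies labels and an architecture; the patterns come out of the data.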

Because of the COVID-19 pandemic, our conversation took place by Zoom, and Daniel offered a timely illustration. “Today, I took it for granted that the voice I hear over Zoom is your voice, and that the face I see over Zoom is your face,” he said. “Now there’s nifty prototypes that people are using to do deepfakes live.”31 A society disrupted by deepfakes, he suggested, was not far off. Using deep learning, deepfakes mimic visual and speech patterns to create eerily realistic images, audio, and video. The believability of synthetic content has progressed along with advances in neural networks. As recently as 2015, algorithms trying to generate the original face of a man produced results that looked only somewhat more realistic than a painting produced by a talented ten-year-old.

And as disturbing as a world awash in deepfakes would be, that’s just the beginning of what the front-end future holds in store. The Language of Deception For millennia, language has been what sets us apart and makes us human. But that’s changing. We now face security risks stemming from unprecedented advances in “natural language processing”—basically, applying those deep learning neural networks to process or generate human-sounding speech. When you ask, “Hey Siri, what’s the weather today?” or when your wife says “Alexa, play Hamilton” for the 500th time during lockdown, your device’s natural language processing abilities are what enable it to interpret your voice and act on those commands.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

We could try to solve the problem by taking in more variables as input (you have to figure the size of the team’s stadium would be relevant, for instance), but in the end, linear strategies only get you so far. That class of strategies is just not big enough to, for instance, tell you which images are cats. For that, you have to venture into the wild world of the nonlinear. DX21 The biggest thing going on right now in machine learning is the technique called deep learning. It powers AlphaGo, the computer that beat Lee Se-dol, it powers Tesla’s fleet of sort-of-self-driving cars, and it powers Google Translate. It is sometimes presented as a kind of oracle, offering superhuman insight automatically and at scale. Another name for the technique, neural networks, makes it sound as if the method is somehow capturing the workings of the human brain itself.

Maybe—hopefully—there is some strategy obtainable by setting the fourteen knobs just right that will assign large values to all the points with an X and small values to all the points with an O, and thereby allow me to make educated guesses about other points in the plane I haven’t yet labeled. And if there is such a strategy, hopefully I can learn it by gradient descent, twiddling each knob a bit and seeing how much that diminishes my strategy’s wrongness about the examples it’s already been given. Find the best small twiddle you can make, make it, repeat. The “deep” in deep learning just means the network has a lot of columns. The number of boxes in each column is called the width, and that number can get pretty big, too, in practice, but “wide learning” just doesn’t have the same terminological zing. Today’s deep networks are more complicated than the ones in these pictures, to be sure.
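Ellenberg's "find the best small twiddle you can make, make it, repeat" loop is gradient descent in miniature. A minimal sketch, with an invented wrongness function and three knobs rather than fourteen: twiddle each knob slightly, measure how the wrongness changes, then nudge every knob downhill.

```python
# Toy loss: how far the knobs are from a hidden best setting.
# The target values are made up purely for illustration.
def wrongness(knobs):
    target = [3.0, -1.0, 0.5]
    return sum((k - t) ** 2 for k, t in zip(knobs, target))

def twiddle_step(knobs, eps=1e-4, lr=0.1):
    new = []
    for i, k in enumerate(knobs):
        up = knobs[:i] + [k + eps] + knobs[i + 1:]
        # Effect of twiddling knob i: slope of the wrongness in that direction.
        slope = (wrongness(up) - wrongness(knobs)) / eps
        new.append(k - lr * slope)  # nudge downhill
    return new

knobs = [0.0, 0.0, 0.0]
for _ in range(100):
    knobs = twiddle_step(knobs)
print([round(k, 2) for k in knobs])  # prints [3.0, -1.0, 0.5]
```

Deep learning frameworks compute the slopes exactly via backpropagation instead of twiddling, and the knobs number in the millions or billions, but the descent itself is this simple loop.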

artificial intelligence (AI)
  car key search analogy, 186–87
  and chess-playing computers, 145
  and deep learning, 177–86
  and Go-playing programs, 141
  and gradient descent, 166–68, 169–73, 174–76
  and image analysis, 168–73
  See also machine learning; neural networks
autonomous vehicles, 177–78, 204–5

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

Indeed, androids are already being built to tug at our heartstrings. Can we look beneath the surface and tell whether an AI is truly conscious? You might think that we should just examine the architecture of the Samantha program. But even today, programmers are having difficulties understanding why today’s deep-learning systems do what they do (this has been called the “Black Box Problem”). Imagine trying to make sense of the cognitive architecture of a superintelligence that can rewrite its own code. And even if a map of the cognitive architecture of a superintelligence was laid out in front of us, how would we recognize certain architectural features as being those central to consciousness?

Although many superintelligences would be beyond our grasp, perhaps we can be more confident when speculating on the nature of “early” superintelligences—that is, those that emerge from a civilization that was previously right on the cusp of developing superintelligence. Some of the first superintelligent AIs could have cognitive systems that are modeled after biological brains—the way, for instance, that deep-learning systems are roughly modeled on the brain’s neural networks. So their computational structure might be comprehensible to us, at least in rough outlines. They may even retain goals that biological beings have, such as reproduction and survival. I will turn to this issue of early superintelligence in more detail shortly.9 Dyson sphere But superintelligent AIs, being self-improving, could quickly transition to an unrecognizable form.

pages: 344 words: 94,332

The 100-Year Life: Living and Working in an Age of Longevity
by Lynda Gratton and Andrew Scott
Published 1 Jun 2016

Even here, however, some technology experts argue that the advantage of humans over machines will be short-lived. Fast developments in Cloud Robotics and Deep Learning could close the gap between human and machine performance. Developments in Cloud Robotics, where networks of robots have access to each other’s learning through the cloud network, could result in learning at an exponential rate – certainly far faster than human learning. In Deep Learning, the technology attempts to mimic the way humans reason inductively, through association by experience, again potentially by leveraging the experience of every other robot via the cloud.
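The cloud-robotics idea the passage describes can be sketched very simply. In this illustrative toy (the fleet size, the quantity being estimated, and the noise level are all invented), each "robot" learns from its own noisy trials and the fleet periodically pools its knowledge through a shared cloud average, so every robot benefits from every other robot's experience:

```python
import random

random.seed(42)
TRUE_VALUE = 0.7   # the quantity every robot is trying to learn
ROBOTS = 5

estimates = [0.0] * ROBOTS
counts = [0] * ROBOTS

for round_ in range(200):
    for i in range(ROBOTS):
        # Each robot makes a noisy local observation and updates its
        # running estimate from its own experience.
        obs = TRUE_VALUE + random.gauss(0, 0.2)
        counts[i] += 1
        estimates[i] += (obs - estimates[i]) / counts[i]
    # Cloud sync: every robot adopts the fleet-wide average, so each one
    # effectively learns from all five robots' trials at once.
    fleet_avg = sum(estimates) / ROBOTS
    estimates = [fleet_avg] * ROBOTS

print(f"fleet estimate: {estimates[0]:.2f}")
```

A lone robot's error shrinks with its own trial count; a synced fleet's error shrinks with the trial count of the whole fleet, which is why shared learning compounds so much faster than individual learning.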

Index (page references in this ebook edition render as “here” links). Entries include Artificial Intelligence (AI), Cloud Robotics, Deep Learning, machine learning, Moore’s Law, Moravec’s Paradox, and robotics.
here, here–here, here case studies here, here–here, here, here, here–here, here, here, here, here transitions and here–here website here week, the here–here weekend, the here, here weight loss here welfare here–here see also benefits Wharton School of the University of Pennsylvania here–here, here WhatsApp here Wolfran, Hans-Joachim here women see also gender children and here–here relationships and here, here, here work and here–here Women and Love here work see employment working hours here–here, here, here–here working week, the here–here, here Yahoos here–here youthfulness here–here Bloomsbury Information An imprint of Bloomsbury Publishing Plc 50 Bedford Square 1385 Broadway London New York WC1B 3DP NY 10018 UK USA www.bloomsbury.com BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc First published 2016 © Lynda Gratton and Andrew Scott, 2016 Lynda Gratton and Andrew Scott have asserted their right under the Copyright, Designs and Patents Act, 1988, to be identified as Author of this work.

pages: 336 words: 93,672

The Future of the Brain: Essays by the World's Leading Neuroscientists
by Gary Marcus and Jeremy Freeman
Published 1 Nov 2014

But in retrospect, PDP networks were too far away from such things; they were right to emphasize the brain’s parallelism but wrong to throw away the computational baby along with the serial bathwater. Today, “neural network” models have become more sophisticated, but only in narrow ways; they have more layers and better learning algorithms, but they still remain too unstructured. The technique known as deep learning offers innovative ideas about how unsupervised systems can form categories for themselves, but it still yields little insight into higher level cognition, like language, planning, and abstract reasoning. If there is no reason to believe that the essence of human cognition is step-by-step sequential computation in a serial computer with a stored program (à la von Neumann’s ubiquitous computer architecture), there is also no reason to dismiss computation itself.

The “Aniston” cells even seem to respond cross-modally, responding to written words as well as to photographs. Hierarchies of feature detectors have now also found practical application, in the modern-day neural networks that I mentioned earlier, in speech recognition and image classification. So-called deep learning, for example, is a successful machine-learning variation on the theme of hierarchical feature detection, using many layers of feature detectors. But just because some of the brain is composed of feature detectors doesn’t mean that all of it is. Some of what the brain does can’t be captured well by feature detection; for example, human beings are glorious generalizers.


pages: 404 words: 92,713

The Art of Statistics: How to Learn From Data
by David Spiegelhalter
Published 2 Sep 2019

Measures that lack value for prediction or classification may be identified by data visualization or regression methods and then discarded, or the numbers of features may be reduced by forming composite measures that encapsulate most of the information. Recent developments in extremely complex models, such as those labelled as deep learning, suggest that this initial stage of data reduction may not be necessary and the total raw data can be processed in a single algorithm. Classification and Prediction A bewildering range of alternative methods are now readily available for building classification and prediction algorithms.

• Neural networks comprise layers of nodes, each node depending on the previous layer by weights, rather like a series of logistic regressions piled on top of each other. Weights are learned by an optimization procedure, and, rather like random forests, multiple neural networks can be constructed and averaged. Neural networks with many layers have become known as deep-learning models: Google’s Inception image-recognition system is said to have over twenty layers and over 300,000 parameters to estimate. • K-nearest-neighbour classifies according to the majority outcome among close cases in the training set. The results of applying some of these methods to the Titanic data, with tuning parameters chosen using tenfold cross-validation and ROC as an optimization criterion, are shown in Table 6.4.
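The K-nearest-neighbour rule described above is simple enough to sketch from scratch. This is an illustrative toy, not Spiegelhalter's actual Titanic analysis; the training points and labels below are invented.

```python
# A minimal k-nearest-neighbour classifier: classify a case by the majority
# outcome among its k closest cases in the training set.
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Return the majority label among the k nearest training cases."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy training data: (age, fare) pairs with survival labels, loosely in the
# spirit of the Titanic example; these numbers are made up.
train_X = [(22, 7), (38, 71), (26, 8), (35, 53), (54, 52), (2, 21)]
train_y = [0, 1, 0, 1, 1, 1]

print(knn_predict(train_X, train_y, (30, 60), k=3))  # → 1
```

In practice the tuning parameter k would itself be chosen by cross-validation, as the text describes.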

data literacy: the ability to understand the principles behind learning from data, carry out basic data analyses, and critique the quality of claims made on the basis of data. data science: the study and application of techniques for deriving insights from data, including constructing algorithms for prediction. Traditional statistical science forms part of data science, which also includes a strong element of coding and data management. deep learning: a machine-learning technique that extends standard artificial neural network models to many layers representing different levels of abstraction, say going from individual pixels of an image through to recognition of objects. dependent events: when the probability of one event depends on the outcome of another event.

pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future
by Geoffrey Cain
Published 28 Jun 2021

I am grateful to a former Google AI developer for assisting with the phrasing of this explanation of deep neural nets. 5. Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra. “Notes from the AI Frontier: Applications and Value of Deep Learning,” McKinsey Global Institute, April 17, 2018, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning#. 6. Andrew Ross Sorkin and Steve Lohr, “Microsoft to Buy Skype for $8.5 Billion,” New York Times, May 10, 2011, https://dealbook.nytimes.com/2011/05/10/microsoft-to-buy-skype-for-8-5-billion/. 7. The World Bank, “Individuals Using the Internet (% of Population)—China,” https://data.worldbank.org/indicator/IT.NET.USER.ZS?

Nvidia, “AI Powered Facial Recognition for Computers with SenseTime,” posted by YouTube user NVIDIA on June 6, 2016, https://www.youtube.com/watch?v=wMUmPumXtpw. The video was recorded at Nvidia’s Emerging Companies Summit in 2016. 17. Aaron Tilley, “The New Intel: How Nvidia Went from Powering Video Games to Revolutionizing Artificial Intelligence,” Forbes, November 30, 2016, https://www.forbes.com/sites/aarontilley/2016/11/30/nvidia-deep-learning-ai-intel/?sh=ba1b3777ff1e. 18. Paul Mozur and Don Clark, “China’s Surveillance State Sucks Up Data,” New York Times, November 24, 2020, https://www.nytimes.com/2020/11/22/technology/china-intel-nvidia-xinjiang.html. 19. Nvidia, “AI Powered Facial Recognition.” 20. Nvidia, “AI Powered Facial Recognition.” 21.

pages: 282 words: 93,783

The Future Is Analog: How to Create a More Human World
by David Sax
Published 15 Jan 2022

That trust extends to a belief that letting students evolve in a way that encourages them to love learning and to see education as a part of their humanity will invariably lead to what Heinonen calls “deep learning.” Deep learning is what happens when education transcends information retention. It is more about knowing what to do with the knowledge you gain than just memorizing it. This is the same emotional learning that Mary Helen Immordino-Yang wrote about, where students genuinely care about the process of learning. Heinonen told me that deep learning actually makes the skills-based learning of core subjects, like math, science, reading, and writing, more effective. Compared to students in other countries, Finns spend far less time learning these subjects and use far less technology when doing so.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

These lessons about human intelligence and adaptability are often ignored in the AI community, which rushes to automate a range of tasks, regardless of the role of human skill. The triumph of AI in radiology is much trumpeted. In 2016 Geoffrey Hinton, cocreator of modern deep-learning methods, Turing Award winner, and Google scientist, suggested that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” Nothing of the sort has yet happened, and demand for radiologists has increased since 2016, for a very simple reason. Full radiological diagnosis requires even more situational and social intelligence than, for example, customer service, and it is currently beyond the capabilities of machines.

Although talk of intelligent machines has been around for two decades, these technologies started spreading only after 2015. The takeoff is visible in the amount that firms spend on AI-related activities and in the number of job postings for workers with specialized AI skills (including machine learning, machine vision, deep learning, image recognition, natural-language processing, neural networks, support vector machines, and latent semantic analysis). Tracking this indelible footprint, we can see that AI investments and the hiring of AI specialists concentrate in organizations that rely on tasks that can be performed by these technologies, such as actuarial and accounting functions, procurement and purchasing analysis, and various other clerical jobs that involve pattern recognition, computation, and basic speech recognition.

There have also been major advances in data storage, reducing the cost of storing and accessing massive data sets, and improvements in the ability to perform large amounts of computation distributed across many devices, aided by rapid advances in microprocessors and cloud computing. Equally important has been progress in machine learning, especially “deep learning,” by using multilayer statistical models, such as neural networks. In traditional statistical analysis a researcher typically starts with a theory specifying a causal relationship. A hypothesis linking the valuation of the US stock market to interest rates is a simple example of such a causal relationship, and it naturally lends itself to statistical analysis for investigating whether it fits the data and for forecasting future movements.
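The contrast the authors draw can be made concrete. In the traditional approach, the analyst specifies the functional form up front, say a straight-line relation between interest rates and valuations, and the data supply only two numbers. A minimal ordinary-least-squares fit (the figures are invented, purely for illustration):

```python
# Traditional hypothesis-driven statistics in miniature: posit y = a*x + b
# and estimate a and b from data by ordinary least squares.
def ols(xs, ys):
    """Slope and intercept minimising squared error for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

rates = [1.0, 2.0, 3.0, 4.0]            # hypothetical interest rates (%)
valuations = [40.0, 35.0, 30.0, 25.0]   # hypothetical market valuations

slope, intercept = ols(rates, valuations)
print(slope, intercept)  # → -5.0 45.0
```

A deep-learning model, by contrast, would stack many such parameterised transformations and let the data determine the shape of the relationship rather than assuming a line.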

pages: 417 words: 103,458

The Intelligence Trap: Revolutionise Your Thinking and Make Wiser Decisions
by David Robson
Published 7 Mar 2019

Part 3 turns to the science of learning and memory. Despite their brainpower, intelligent people sometimes struggle to learn well, reaching a kind of plateau in their abilities that fails to reflect their potential. Evidence-based wisdom can help to break that vicious cycle, offering three rules for deep learning. Besides helping us to meet our own personal goals, this cutting-edge research also explains why East Asian education systems are already so successful at applying these principles, and the lessons that Western schooling can learn from them to produce better learners and wiser thinkers. Finally, Part 4 expands our focus beyond the individual, to explore the reasons that talented groups act stupidly – from the failings of the England football team to the crises of huge organisations like BP, Nokia and NASA.

But then you say, “here’s a new fact”, and they’ll go, “Oh, well, that changes things; you’re right.” ’ Bock’s comments show us that there is now a movement away from considering SAT scores and the like as the sum total of our intellectual potential. But the old and new ways of appraising the mind do not need to be in opposition, and in Chapter 8 we will explore how some of the world’s best schools already cultivate these qualities and the lessons they can teach us all about the art of deep learning. If you have been inspired by this research, one of the simplest ways to boost anyone’s curiosity is to become more autonomous during learning. This can be as simple as writing out what you already know about the material to be studied and then setting down the questions you really want to answer.

When Leighton finally reached Kyzyl himself, he left a small plaque in memory of Feynman, and his daughter Michelle would make her own visit in the late 2000s. ‘Like [Ferdinand] Magellan, Richard Feynman completed his last journey in our minds and hearts’, Leighton wrote in his memoir. ‘Through his inspiration to others, his dream took on a life of its own.’ 8 The benefits of eating bitter: East Asian education and the three principles of deep learning James Stigler’s heart was racing and his palms were sweaty – and he wasn’t even the one undergoing the ordeal. A graduate student at the University of Michigan, Stigler was on his first research trip to Japan, and he was now observing a fourth-grade lesson in Sendai. The class were learning how to draw three-dimensional cubes, a task that is not as easy as it might sound for many children, and as the teacher surveyed the students’ work, she quickly singled out a boy whose drawings were particularly sloppy and ordered him to transfer his efforts to the blackboard – in front of everyone.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

The computer scientist David Marr’s classic 1982 computational theory of vision is both a standard reference for the bottom-up view of perception and a practical cookbook for the design and construction of artificial vision systems. More recent machine vision systems implementing artificial neural networks – such as ‘deep learning’ networks – are nowadays achieving impressive performance levels, in some situations comparable to what humans can do. These systems, too, are frequently based on bottom-up theories. With all these points in its favour, the bottom-up ‘how things seem’ view of perception seems to be on pretty solid ground.

See www.nytimes.com/2015/09/21/technology/personaltech/software-is-smart-enough-for-sat-but-still-far-from-intelligent.html. vast artificial neural network: GPT stands for ‘Generative Pre-trained Transformer’ – a type of neural network specialised for language prediction and generation. These networks are trained using an unsupervised deep learning approach essentially to ‘predict the next word’ given a previous word or text snippet. GPT-3 has an astonishing 175 billion parameters and was trained on some 45 terabytes of text data. See https://openai.com/blog/openai-api/ and for technical details: https://arxiv.org/abs/2005.14165. it does not understand: Of course this depends on what is meant by ‘understanding’.
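Stripped of scale, the "predict the next word" objective this note describes can be illustrated with a toy bigram table. GPT-3 performs conceptually similar prediction with a 175-billion-parameter transformer rather than simple counts; this sketch is ours, not OpenAI's.

```python
# A toy next-word predictor: count which word follows each word in a tiny
# training text, then predict the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Tally continuations for every adjacent word pair in the training text.
following = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    following[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

The same objective, scaled up to terabytes of text and billions of parameters, is what lets GPT-3 generate fluent continuations without, as the passage goes on to note, any guarantee of understanding.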

‘Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects’. Nature Neuroscience, 2(1), 79–87. Reep, R. L., Finlay, B. L., & Darlington, R. B. (2007). ‘The limbic system in mammalian brain evolution’. Brain, Behavior and Evolution, 70(1), 57–70. Richards, B. A., Lillicrap, T. P., Beaudoin, P., et al. (2019). ‘A deep learning framework for neuroscience’. Nature Neuroscience, 22(11), 1761–70. Riemer, M., Trojan, J., Beauchamp, M., et al. (2019). ‘The rubber hand universe: On the impact of methodological differences in the rubber hand illusion’. Neuroscience and Biobehavioral Reviews, 104, 268–80. Rosas, F., Mediano, P.

Demystifying Smart Cities
by Anders Lisdorf

The basic premise is that intelligence is something humans can recreate and that we can imbue other entities with this intelligence. There are a number of more or less well-defined subfields of artificial intelligence that are sometimes used interchangeably with the term such as machine learning, deep learning, data mining, neural networks, and so on. In practical terms, they all build applications with computer code that implements particular algorithms. An algorithm is a set of instructions or rules that will provide the solution to a problem. Clearly, not all algorithms are AI. A recipe qualifies as an algorithm but hardly as an intelligent one.

If the output does not match the expected, the weights are open to change, but the more they are successful, the more fixed the weights become. In the end the system consisting of multiple layers of “neurons” adapts such that the input is transformed to elicit the correct output. This is also what is behind the term deep learning where the number of layers is increased. Contrast this to the decision tree where each layer of the tree gives you a good and comprehensible information about how decisions are made in terms of classification. In a neural net, all you have are layers consisting of weights and connections. This is why it is considered a black box.
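The "layers consisting of weights and connections" can be sketched in a few lines. The weights below are arbitrary, chosen only to show the structure of a two-layer network, not a trained model:

```python
# A bare-bones two-layer neural network: inputs flow through successive
# layers of weighted sums with a simple threshold activation.
def layer(inputs, weights):
    """One layer: each neuron outputs 1 if its weighted sum exceeds 0."""
    return [
        1.0 if sum(w * x for w, x in zip(neuron, inputs)) > 0 else 0.0
        for neuron in weights
    ]

hidden_weights = [[0.5, -0.6], [-0.4, 0.9]]   # two hidden neurons
output_weights = [[1.0, 1.0]]                 # one output neuron

def forward(inputs):
    return layer(layer(inputs, hidden_weights), output_weights)

print(forward([1.0, 1.0]))  # → [1.0]
```

All a reader of these numbers sees is weights and connections, which is exactly the black-box point: unlike a decision tree, nothing in the parameters explains the classification in human terms. "Deep" networks simply chain more such layers.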

pages: 223 words: 60,909

Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech
by Sara Wachter-Boettcher
Published 9 Oct 2017

Sorelle Friedler, phone interview with the author, January 30, 2017. 9. “Why Google ‘Thought’ This Black Woman Was a Gorilla,” Note to Self, WNYC, September 28, 2015, http://www.wnyc.org/story/deep-problem-deep-learning. 10. Jacky Alciné, email to the author, January 27, 2017. 11. Google Photos, [product tour screens], accessed January 28, 2017, https://photos.google.com. 12. For a walk through the basics, see the free online book by Michael Nielsen: Neural Networks and Deep Learning (Determination Press, 2015), http://neuralnetworksanddeeplearning.com. 13. Daniela Hernandez, “The New Google Photos App Is Disturbingly Good at Data-Mining Your Photos,” Fusion, June 4, 2015, http://fusion.net/story/142326/the-new-google-photos-app-is-disturbingly-good-at-data-mining-your-photos. 14.

pages: 196 words: 61,981

Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside
by Xiaowei Wang
Published 12 Oct 2020

Artificial intelligence is a broad category, and that broadness makes it susceptible to slippery usages, to being malleable to any kind of political or economic end. Machine learning is technically a subset of AI. And within artificial intelligence, one of the most exciting areas over the past ten years has been work done on neural networks, which are used in deep learning. These artificial neural networks rely on models of the brain that have been formalized into mathematical operations. Research into these “artificial neurons” began as early as 1943, with a paper by Warren McCulloch and Walter Pitts on a simplified model neuron that computed a binary (yes/no) output, which would serve as the foundation of contemporary neural networks.
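That early line of work led to the perceptron, Frank Rosenblatt's 1958 learning rule for a binary (yes/no) classifier, which fits in a few lines. The data and learning rate below are illustrative choices, not anything from the book:

```python
# The perceptron learning rule: predict 0 or 1 from a weighted sum, then
# nudge the weights whenever the prediction is wrong.
def train_perceptron(samples, epochs=10, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w0 * x1 + w1 * x2 + b > 0 else 0
            err = target - pred
            # Move the weights toward the correct answer.
            w0 += lr * err * x1
            w1 += lr * err * x2
            b += lr * err
    return w0, w1, b

# Learn the logical AND function (linearly separable, hence learnable).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w0 * x1 + w1 * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Contemporary deep networks replace the hard threshold with smooth activations and stack many such units into layers, but the train-by-correction idea survives.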

The seduction of AI is already palpable in China and the United States, across the political spectrum, as people advocate for a fully automated world. The attraction is not simply about rationality and the level of control provided by making systems automated. It’s also about scale: once implemented, certain applications of deep learning, like image recognition, have been shown to be faster and more accurate than humans. It’s no surprise that these qualities make AI the ideal worker. Many of us live in a world where machine learning and forms of artificial intelligence already pervade our everyday lives—recommendation algorithms, fun cosmetic and face filters on Snapchat and Meitu, automated checkouts using image-recognition cameras.

pages: 458 words: 116,832

The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism
by Nick Couldry and Ulises A. Mejias
Published 19 Aug 2019

The Intercept, April 30, 2017. https://theintercept.com/2017/04/30/taser-will-use-police-body-camera-videos-to-anticipate-criminal-activity/. Kollewe, Julia. “Marmite Maker Unilever Threatens to Pull Ads from Facebook and Google.” Guardian, February 12, 2018. Krazit, Tom. “Amazon Web Services Backs Deep-Learning Format Introduced by Microsoft and Facebook.” Geekwire, November 16, 2017. https://www.geekwire.com/2017/amazon-web-services-backs-deep-learning-format-introduced-microsoft-facebook/. Krishna, Sankaran. Globalization and Postcolonialism. Lanham, MD: Rowman & Littlefield Publishers, 2008. Kuchler, Hannah. “Facebook Investors Wake Up to Era of Slower Growth.” Financial Times, July 26, 2018.

In terms of human users, these processes and ecologies of data collection are vast, and the resulting concentration of advertising power correspondingly huge: 72 percent of global advertising spending is in the hands of Google and Facebook.47 From this base, huge investments in data analysis techniques become possible, with so-called deep learning increasingly central to the largest IT businesses. From early on, the vision of Google’s founders was “AI complete.” Much later, IBM announced in 2017 that it “is now a cognitive solutions and cloud platform company,” while Microsoft reorganized itself in March 2018 to prioritize its cloud services and AI businesses.48 The retailers Amazon and Walmart are giant data processing operations.

pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech
by Franklin Foer
Published 31 Aug 2017

The aphorism became widely known only: Josh McHugh, “Google vs. Evil,” Wired, January 2003. “We’re at maybe 1%”: Greg Kumparak, “Larry Page Wants Earth to Have a Mad Scientist Island,” TechCrunch, May 15, 2013. “This is the culmination of literally 50 years”: Robert D. Hof, “Deep Learning,” Technology Review, www.technologyreview.com/s/513696/deep-learning. “The Google policy on a lot of things is to get right up to the creepy line”: Sara Jerome, “Schmidt: Google gets ‘right up to the creepy line’,” The Hill, October 1, 2010. Singularity University: David Rowan, “On the Exponential Curve: Inside Singularity University,” Wired, May 2013.

pages: 592 words: 125,186

The Science of Hate: How Prejudice Becomes Hate and What We Can Do to Stop It
by Matthew Williams
Published 23 Mar 2021

While some mainstream news sources were recommended as the top video, sources from the alt-right often dominated the top twenty, especially following events such as terror attacks.8 These videos racked up hundreds of thousands of views by ‘issue hijacking’.‡ Since 2016, Google and YouTube have been altering their algorithms to focus on recommending more authoritative news sources. But the use of new ‘deep learning’ technology that is informed by billions of user behaviours a day means extreme videos will continue to be recommended if they are popular with site visitors.

Filter bubbles and our bias

Research on internet ‘filter bubbles’, often used interchangeably with the term ‘echo chambers’,§ has established that partisan information sources are amplified in online networks of like-minded social media users, where they go largely unchallenged due to ranking algorithms filtering out any challenging posts.9 Data science shows these filter bubbles are resilient accelerators of prejudice, reinforcing and amplifying extreme viewpoints on both sides of the spectrum.

These can be members of the general public, or experts in particular forms of hate (e.g. race, transgender, disability). Posts that get at least three out of four votes for hate are then put into a training dataset. This is our gold standard that trains the machine to mimic the human judgement in the annotation task. Various algorithms are then run across the dataset, including deep learning varieties popular with Google, Facebook, Twitter and Microsoft. But unlike their use of these algorithms, ours are developed in a closed workshop, meaning they can’t be gamed by new data being sent to them from mischievous internet users. Once we determine the algorithm that produces the most accurate results, we deploy it on live social media data streams.
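The annotate-then-train pipeline described here, where a post enters the training set only when at least three of four human annotators judge it hateful, can be sketched as follows. This is an illustrative reconstruction, not HateLab’s actual code: the function name `build_gold_standard`, the binary vote encoding, and the threshold parameter are all assumptions made for the example.

```python
def build_gold_standard(posts, n_annotators=4, threshold=3):
    """Aggregate per-post annotator votes into a labelled training set.

    `posts` is a list of (text, votes) pairs, where `votes` is a list of
    binary judgements (1 = hate, 0 = not hate), one per annotator.
    A post is labelled hate (1) only if at least `threshold` of the
    `n_annotators` voted hate -- the '3 out of 4 votes' rule in the text.
    """
    gold = []
    for text, votes in posts:
        if len(votes) != n_annotators:
            raise ValueError(f"expected {n_annotators} votes, got {len(votes)}")
        label = 1 if sum(votes) >= threshold else 0
        gold.append((text, label))
    return gold


# Hypothetical usage: two annotated posts, one clearing the threshold.
annotated = [
    ("example post A", [1, 1, 1, 0]),  # 3 of 4 say hate -> labelled 1
    ("example post B", [1, 0, 0, 0]),  # only 1 of 4     -> labelled 0
]
training_set = build_gold_standard(annotated)
```

The labelled `training_set` would then be fed to the competing algorithms (including the deep learning varieties the text mentions), with the most accurate model chosen for deployment on live streams.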

3 Protestants, 1 protons, 1 psychoanalysis, 1 psychological development, 1, 2, 3, 4 psychological profiles, 1 psychological training, 1 psychology, 1, 2, 3, 4 psychosocial criminology, 1, 2 psy-ops (psychological operations), 1 PTSD, see post-traumatic stress disorder Public Order Act, 1 pull factor, 1, 2, 3, 4, 5 Pullin, Rhys, 1n Purinton, Adam, 1, 2, 3, 4, 5, 6, 7 push/pull factor, 1, 2, 3, 4, 5, 6 pyramid of hate, 1, 2 Q …, 1 al-Qaeda, 1, 2 quality of life, 1 queer people, 1, 2 quest for significance, 1, 2, 3 Quran burning, 1 race: author’s brain and hate, 1, 2, 3, 4; brain and hate, 1, 2, 3, 4, 5, 6, 7; brain and signs of prejudice, 1; far-right hate, 1, 2, 3; Google searches, 1; group threat, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; hate counts, 1, 2, 3; online hate speech, 1; predicting hate crime, 1; pyramid of hate, 1; race relations, 1, 2, 3; race riots, 1, 2; race war, 1, 2, 3, 4, 5; steps to stop hate, 1, 2, 3; trauma and containment, 1, 2, 3, 4n, 5, 6; trigger events, 1, 2; unconscious bias, 1; unlearning prejudiced threat detection, 1 racism: author’s experience, 1; brain and hate, 1, 2, 3, 4, 5, 6; far-right hate, 1, 2; group threat, 1, 2, 3, 4, 5, 6, 7, 8; Kansas shooting, 1; NYPD racial bias, 1; online hate speech, 1, 2, 3, 4; steps to stop hate, 1n, 2, 3; Tay chatbot, 1; trauma and containment, 1, 2, 3, 4, 5, 6, 7; Trump election, 1; victim perception of motivation, 1n; white flight, 1 radicalisation: far-right hate, 1, 2, 3; group threat, 1; subcultures of hate, 1, 2, 3, 4, 5; trigger events, 1 rallies, 1, 2, 3; see also Charlottesville rally Ramadan, 1, 2 rape, 1, 2, 3, 4, 5 rap music, 1 realistic threats, 1, 2, 3, 4, 5 Rebel Media, 1 rebels, 1 recategorisation, 1 recession, 1, 2, 3, 4, 5 recommendation algorithms, 1, 2 recording crime, 1, 2, 3, 4 red alert, 1 Reddit, 1, 2, 3, 4 red-pilling, 1, 2, 3, 4 refugees, 1, 2, 3, 4, 5 rejection, 1, 2, 3, 4, 5, 6 releasers of prejudice, 1, 2 religion: group threat, 1, 2, 3; homosexuality, 1; online hate speech, 1, 
2, 3; predicting hate crime, 1; pyramid of hate, 1; religion versus hate, 1; steps to stop hate, 1, 2; subcultures of hate, 1, 2; trauma and containment, 1n, 2; trigger events, 1, 2, 3, 4, 5; victim perception of motivation, 1n reporting crimes, 1, 2, 3, 4, 5, 6, 7 repression, 1 Republicans, 1, 2, 3, 4, 5 research studies, 1 responsibility, 1, 2, 3 restorative justice, 1 retaliatory haters, 1, 2, 3 Reuters, 1 Rieder, Bernhard, 1 Rigby, Lee, 1 rights: civil rights, 1, 2, 3, 4; gay rights, 1, 2, 3, 4; human rights, 1, 2, 3; men’s rights, 1; tipping point, 1; women’s rights, 1, 2 right wing, 1, 2, 3, 4, 5, 6; see also far right Right-Wing Authoritarianism (RWA) scale, 1 riots, 1, 2, 3, 4 risk, 1, 2, 3 rites of passage, 1, 2 rituals, 1, 2, 3 Robb, Thomas, 1 Robbers Cave Experiment, 1, 2, 3, 4, 5, 6 Robinson, Tommy (Stephen Yaxley-Lennon), 1, 2, 3, 4 Rohingya Muslims, 1, 2 Roof, Dylann, 1, 2 Roussos, Saffi, 1 Rudolph, Eric, 1 Rushin, S,, 1n Russia, 1, 2, 3, 4, 5, 6, 7, 8 Russian Internet Research Agency, 1 RWA (Right-Wing Authoritarianism) scale, 1 Rwanda, 1 sacred value protection, 1, 2, 3, 4, 5, 6, 7, 8 Saddam Hussein, 1 safety, 1, 2 Sagamihara care home, Japan, 1, 2 Salah, Mohamed, 1, 2, 3 salience network, 1, 2 salmon, brain imaging of, 1 Salt Lake City, 1 same-sex marriage, 1, 2 same-sex relations, 1, 2, 3 San Bernardino attack, 1n, 2, 3 Scanlon, Patsy, 1 scans, see brain imaging Scavino, Dan, 1n schizophrenia, 1, 2, 3, 4 school shootings, 1, 2 science, 1, 2, 3 scripture, 1, 2 SDO, see Social Dominance Orientation (SDO) scale Search Engine Manipulation Effect (SEME), 1 search queries, 1, 2, 3, 4 Second World War, 1, 2, 3 Section 1, Local Government Act, 1, 2, 3 seed thoughts, 1 segregation, 1, 2, 3 seizures, 1, 2, 3 selection bias problem, 1n self-defence, 1, 2 self-esteem, 1, 2, 3, 4 self-sacrifice, 1, 2, 3 Senior, Eve, 1 serial killers, 1, 2, 3 7/7 attack, London, 1 seven steps to stop hate, 1; becoming hate incident first responders, 1; bursting our filter 
bubbles, 1; contact with others, 1; not allowing divisive events to get the better of us, 1; overview, 1; putting ourselves in the shoes of ‘others’, 1; questioning prejudgements, 1; recognising false alarms, 1 sexism, 1, 2 sexual orientation, 1, 2, 3, 4, 5, 6, 7 sexual violence, 1, 2, 3, 4, 5 sex workers, 1, 2, 3, 4 Shakespeare, William, Macbeth, 1 shame, 1, 2, 3, 4, 5, 6, 7, 8, 9 shared trauma, 1, 2, 3 sharia, 1, 2 Shepard, Matthew, 1, 2 Sherif, Muzafer, 1, 2, 3, 4, 5, 6, 7 shitposting, 1, 2, 3n shootings, 1, 2, 3, 4, 5, 6, 7, 8 ‘signal’ hate acts, 1 significance, 1, 2, 3 Simelane, Eudy, 1 skin colour, 1, 2, 3n, 4, 5, 6, 7 Skitka, Linda, 1, 2 slavery, 1 Slipknot, 1 slurs, 1, 2, 3, 4, 5, 6 Snapchat, 1 social class, 1, 2 social desirability bias, 1, 2 Social Dominance Orientation (SDO) scale, 1 social engineering, 1 socialisation, 1, 2, 3, 4, 5 socialism, 1, 2 social media: chatbots, 1; COVID-19 pandemic, 1; far-right hate, 1, 2, 3, 4; filter bubbles and bias, 1; HateLab Brexit study, 1; online hate speech, 1, 2, 3, 4, 5; online news, 1; pyramid of hate, 1; steps to stop hate, 1, 2, 3; subcultures of hate, 1; trigger events, 1, 2; see also Facebook; Twitter; YouTube Social Perception and Evaluation Lab, 1 Soho, 1 soldiers, 1n, 2, 3 Sorley, Isabella, 1 South Africa, 1 South Carolina, 1 Southern Poverty Law Center, 1n, 2 South Ossetians, 1 Soviet Union, 1, 2 Spain, 1, 2, 3 Spencer, Richard B., 1 Spengler, Andrew, 1, 2, 3, 4 SQUIDs, see superconducting quantum interference devices Stacey, Liam, 1, 2 Stanford University, 1 Star Trek, 1, 2, 3 statistics, 1, 2, 3, 4, 5, 6, 7, 8 statues, 1 Stephan, Cookie, 1, 2 Stephan, Walter, 1, 2 Stephens-Davidowitz, Seth, Everybody Lies, 1 Stereotype Content Model, 1 stereotypes: brain and hate, 1, 2, 3, 4, 5, 6, 7; cultural machine, group threat and stereotypes, 1; definitions, 1; feeling hate together, 1, 2; group threat, 1, 2, 3, 4; homosexuality, 1; NYPD racial bias, 1; steps to stop hate, 1, 2, 3, 4, 5; study of prejudice, 1; 
tipping point, 1; trigger events, 1 Stoke-on-Trent, 1, 2 Stormfront website, 1, 2, 3 storytelling, 1 stress, 1, 2, 3, 4, 5, 6, 7, 8 striatum, 1, 2, 3n, 4 subcultures, 1, 2, 3, 4, 5 subcultures of hate, 1; collective quests for significance and extreme hate, 1; extremist ideology and compassion, 1; fusion and generosity towards the group, 1; fusion and hateful murder, 1; fusion and hateful violence, 1; fusion and self-sacrifice in the name of hate, 1; quest for significance and extreme hatred, 1; religion/belief, 1; warrior psychology, 1 subhuman, 1, 2 Sue, D.

pages: 254 words: 76,064

Whiplash: How to Survive Our Faster Future
by Joi Ito and Jeff Howe
Published 6 Dec 2016

Held in one of the university’s largest lecture halls, the DeepMind event drew a standing-room-only crowd—students were all but hanging off the walls to hear Hassabis describe how their approach to machine learning had allowed their team to prove wrong the experts who had predicted it would take ten years for a computer to beat a virtuoso like Sedol. The key was a clever combination of deep learning—a kind of pattern recognition, similar to how a human brain (or Google) can recognize a cat or a fire truck after seeing many images—and “learning” so that it could guess statistically what something was likely to be, or in the case of Go, what a human player, considering all of the games of the past, was likely to play in a particular situation.
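The statistical core of that idea, guessing what a human expert would likely play next, can be sketched in a few lines. The records and function below are invented for illustration; AlphaGo's actual policy network was a deep convolutional net that generalizes to positions it has never seen, not a frequency table.

```python
from collections import Counter

def move_policy(game_records, position):
    """Estimate how likely each move is in a given position by counting
    what human players did there in past games. A frequency table, not a
    neural network: just the 'guess statistically' intuition."""
    counts = Counter(move for pos, move in game_records if pos == position)
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()}

# Made-up (position, move-played) pairs standing in for an archive of games.
records = [("empty", "D4"), ("empty", "D4"), ("empty", "Q16"), ("corner", "C3")]
policy = move_policy(records, "empty")
print(policy)  # D4 seen twice as often as Q16 in this position
```

A learned network replaces the exact-match lookup with a function that scores every legal move from the raw board, which is what lets it handle positions absent from the archive.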

There are dozens of advances in machine learning and other fields still standing between us and an AGI, but AlphaGo has already realized several of them. It appears to be creative; it appears to be capable of deriving some sort of symbolic logic through a statistical system. It’s hard to overstate the significance of this accomplishment—many people didn’t believe you could get to symbolic reasoning from deep learning. However, while AlphaGo is very smart and very creative, it can only beat you at Go—not at checkers. Its entire universe of expression and vision is a grid of nineteen lines and black and white stones. It will take many more technological breakthroughs before AlphaGo will be interested in going to nightclubs or running for office.

pages: 240 words: 78,436

Open for Business Harnessing the Power of Platform Ecosystems
by Lauren Turner Claire , Laure Claire Reillier and Benoit Reillier
Published 14 Oct 2017

Search neural networks can learn from analysing large amounts of data and build their own rules to match users and producers better and faster. RankBrain, Google’s deep neural network, which helps generate responses to search queries, now handles about 15% of Google’s daily search queries.8 All major platforms, from Facebook to Microsoft, have invested in deep learning, with Amazon even having released deep learning open-source software for search and product recommendations.9 Search technology, increasingly powered by deep neural networks, is evolving fast beyond text and geolocalized data to include voice and images. No doubt the integration of new technology such as messaging bots (automated search) and augmented reality (visual search) will redefine existing platforms’ user search experience.
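One common way such a network "matches users and producers" is to embed queries and catalog items as vectors and rank items by similarity. The sketch below assumes the embeddings were already learned offline by a neural network; the catalog, vectors, and query are all invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank(query_vec, items):
    """Rank producer items by similarity to the user's query embedding."""
    return sorted(items, key=lambda item: cosine(query_vec, item[1]), reverse=True)

# Hypothetical 3-dimensional embeddings; real systems use hundreds of dimensions.
catalog = [("running shoes", [0.9, 0.1, 0.0]),
           ("trail boots",   [0.7, 0.3, 0.1]),
           ("office chair",  [0.0, 0.2, 0.9])]
query = [1.0, 0.1, 0.0]  # embedding of a user query such as "sneakers"
print([name for name, _ in rank(query, catalog)])
```

Because voice and image inputs can be embedded into the same vector space as text, this one ranking mechanism extends naturally beyond typed queries.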

pages: 271 words: 79,355

The Dark Cloud: How the Digital World Is Costing the Earth
by Guillaume Pitron
Published 14 Jun 2023

‘Strong AI’ is a super intelligence so powerful that it can supposedly experience ‘emotions, intuitions, and feelings to the point of becoming aware of its own existence’, says the Dutchman Lex Coors, one of the stars of the data-centre industry.49 The more optimistic believe that such an entity will become a reality in the next five to 10 years, once humanity has produced 175 zettabytes of data — enough for an AI to learn and perfect itself by processing that data through ‘deep learning’. Seth Shostak, a senior astronomer at the Search for Extraterrestrial Intelligence Institute with NASA (the National Aeronautics and Space Administration), has even put forward a theory that is equally fascinating and disturbing: the main form of intelligence in the universe is already electronic in nature.

Heading towards a ‘capacity crunch’? The reality is that the true environmental impact of the web lies in what this optical underworld allows us to do. Just as railways were the first step in conquering the American west, the Dunant’s commissioning in January 2021 made virtual reality, IoT, and ‘deep learning’ possible. ‘We can make the analogy with the road network: as it grows, so do the number of cars that use it. Similarly, more capacity begets more appetite for capacity’, according to an undersea systems expert.57 ‘The data market is maintained by people — the FAANGs — who build their own highways, and more and more of them’, agreed another expert.

pages: 469 words: 132,438

Taming the Sun: Innovations to Harness Solar Energy and Power the Planet
by Varun Sivaram
Published 2 Mar 2018

The first priority is minimizing the amount of expensive reserves needed to accommodate renewable energy unpredictability. Better weather forecasting technology is emerging that can help by enabling grid operators to predict more accurately what solar and wind output will be like hours or days in advance and calling in compensating resources accordingly.42 The advent of “deep learning”—the artificial intelligence algorithms that run your Alexa device at home and learn from past experience—could make forecasts of solar production even more accurate and precise.43 In addition, markets should operate on a quicker cadence, reducing from an hour to five minutes or less the time interval between successive decisions on which generators to dispatch and how much to pay them.
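The work cited in note 43 uses autoencoder and LSTM neural networks; the one-weight autoregression below is a deliberately minimal stand-in, with all numbers invented, that shows the shape of the forecasting task: fit a model to past output, then predict the next interval.

```python
def fit_ar1(series, lr=0.5, steps=200):
    """Fit y[t] ~ w * y[t-1] by gradient descent on squared error.
    A toy forecaster; real systems learn from weather and irradiance
    features with recurrent networks, not a single lag weight."""
    w = 0.0
    pairs = list(zip(series, series[1:]))
    for _ in range(steps):
        grad = sum(2 * (w * prev - cur) * prev for prev, cur in pairs) / len(pairs)
        w -= lr * grad
    return w

# Made-up normalized solar output over five late-afternoon hours.
output = [1.0, 0.8, 0.64, 0.512, 0.4096]
w = fit_ar1(output)
forecast = w * output[-1]  # predicted output for the next hour
print(round(w, 3), round(forecast, 4))
```

The grid operator's benefit comes from the final line: a numeric prediction of the next interval's output, against which compensating reserves can be scheduled.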

Oxford Institute for Energy Studies, December 2016, https://www.oxfordenergy.org/wpcms/wp-content/uploads/2016/12/EU-energy-policy-4th-time-lucky.pdf. 42.  C. K. Woo et al., “Merit-Order Effects of Renewable Energy and Price Divergence in California’s Day-Ahead and Real-Time Electricity Markets,” Energy Policy 92 (2016): 299–312, doi:10.1016/j.enpol.2016.02.023. 43.  Andre Gensler et al., “Deep Learning for Solar Power Forecasting—An Approach Using AutoEncoder and LSTM Neural Networks,” IEEE International Conference on Systems, Man, and Cybernetics, 2016, http://ieeexplore.ieee.org/document/7844673/. 44.  E. Ela et al., “Wholesale Electricity Market Design with Increasing Levels of Renewable Generation: Incentivizing Flexibility in System Operations,” The Electricity Journal 29, no. 4 (2016): 51–60, doi:10.1016/j.tej.2016.05.001. 45.  

See also Deep decarbonization Decentralization, of power, 119, 219 Decentralized control algorithms, 214–215 Decentralized grids in California, 211 expanding central grid vs. building, 193 in Reforming the Energy Vision program, 208–210 solar power in, 215 systemic innovation to accommodate, 199–200 Deep decarbonization global electricity mix for, 60–63, 62f power sources for, 232–239 U.S. roadmap for, 245 Deep learning, 241 Defense Advanced Research Projects Agency (DARPA), 249, 259, 265 Delos, 29 Demand for electricity in cross-national grids, 202 “duck curve” of, 74–78 forecasted growth of, 60–61 in India, 14 Demand response, 212–216, 284g Demonstration projects, 264–265, 289g Denmark, 200–201, 244 Department of Water and Power (Los Angeles, California), xiii Deregulation, of energy industry, 106–107 Derisking insurance, 105 Desalination, 82, 246, 284g DESERTEC, 205 Developed countries, 64, 122 Developing countries applications of elastic solar materials in, 162 funding/capital for solar projects in, 64, 65, 105, 113, 114, 126 future of solar power in, 4–5 microgrids in, 129 SunEdison YieldCo for, 88 Development banks, 45, 65, 102, 111, 113, 290g Devi, Shaiyra, 119 Diesel-powered microgrids, 122 Direct current (DC), 217, 279g.

pages: 345 words: 84,847

The Runaway Species: How Human Creativity Remakes the World
by David Eagleman and Anthony Brandt
Published 30 Sep 2017

When dormant, the bacteria B. pasteurii can survive for decades even in extreme conditions such as the hearts of volcanoes; when active, they secrete calcite, one of concrete’s key ingredients. 5 The hybrid approach between humans and computers is quickly changing, as companies take on superhuman recognition engines (e.g. deep learning algorithms). But note that these new approaches are entirely trained up by previously human-tagged pictures. 6 Julian Franklyn, A Dictionary of Rhyming Slang, 2nd ed. (London: Routledge, 1991). 7 Reprinted by arrangement with the Heirs to the Estate of Martin Luther King Jr. c/o The Writers House as agent for the proprietor New York, NY © 1963 Dr Martin Luther King Jr. © Renewed 1991 Coretta Scott King. 8 Carmel O’Shannessy, “The role of multiple sources in the formation of an innovative auxiliary category in Light Warlpiri, a new Australian mixed language,” Language 89 (2) pp. 328–353. 9 <http://www.whosampled.com/Dr.


pages: 297 words: 84,447

The Star Builders: Nuclear Fusion and the Race to Power the Planet
by Arthur Turrell
Published 2 Aug 2021

Arnoux, “How Fritz Wagner ‘Discovered’ the H-Mode,” Iter Newsline 86 (2009), https://www.iter.org/newsline/86/659; “Thirty Years of H-Mode,” EUROfusion.org (2012), https://www.euro-fusion.org/news/detail/thirty-years-of-h-mode/?. 4. J. Kates-Harbeck, A. Svyatkovskiy, and W. Tang, “Predicting Disruptive Instabilities in Controlled Fusion Plasmas Through Deep Learning,” Nature 568 (2019): 526; G. Kluth et al., “Deep Learning for NLTE Spectral Opacities,” Physics of Plasmas 27 (2020): 052707. 5. T. Boisson, “British Nuclear Fusion Reactor Relaunched for the First Time in 23 Years,” Trust My Science (2020), https://trustmyscience.com/reacteur-fusion-anglais-relance-premiere-fois-depuis-23-ans/. 6.

pages: 282 words: 85,658

Ask Your Developer: How to Harness the Power of Software Developers and Win in the 21st Century
by Jeff Lawson
Published 12 Jan 2021

Investors talk about rewarding a founder who fails by funding their next company. There’s a zealotry toward failure that’s baked so deep into the DNA of Silicon Valley that you’d almost imagine highly successful entrepreneurs walking around sulking, with dreams of eventual failure dancing in their heads. But it’s not the failure that’s celebrated, it’s the deep learnings that advance the mission. Failure is merely accepted as a natural consequence of the learning. When people talk about accepting failure, they’re talking about accepting the journey of discovery. Notice above, when I talked about running experiments, it’s not about success or failure, it’s about accelerated learning.

How we as leaders, and the company as a whole, handle these situations makes a big difference to how employees treat mistakes, and whether the company actually gets better and better at these things. Or, as Chee would say, “suck less.” When things go wrong, it’s either a time to blame, or a time to learn. I believe each failure is an opportunity to uncover deep learnings about how the organization operates, and what could strengthen it systematically, and then take action. We, and many other software companies, do this via a ritual called the “blameless postmortem.” The purpose of the blameless postmortem is to dig below the surface of some kind of bad outcome to the true root cause, and address that as an organization.

Industry 4.0: The Industrial Internet of Things
by Alasdair Gilchrist
Published 27 Jun 2016

The problem was that this required the machine to process vast quantities of data and look for patterns, and such data was not always readily available at the time. We have since discovered that ‘simple neural networks’ is a bit of a misnomer, as they bear little resemblance to real networks of neurons. In fact, this approach is now termed deep learning, and is suitable for analysis of large, static data sets. The alternative is the biological neural network, which expands on the neural theme and takes it several steps further. With this AI model, the biological neural network does actually try to mimic the brain’s way of learning, using what is termed sparse distributed representation.

Currently in robotics, we are a long way from the objective; in software, however, machine learning is coming along very well. Presently the state of machine learning and artificial intelligence is defined by the latest innovations. In November 2015, Google launched its machine learning system called TensorFlow. Interest in deep learning continues to gain momentum, especially following Google’s purchase of DeepMind Technologies, which has since been renamed Google DeepMind. In February 2015, DeepMind scientists revealed how a computer had taught itself to play almost 50 video games, by figuring out what to do through deep neural networks and reinforcement learning.

pages: 357 words: 95,986

Inventing the Future: Postcapitalism and a World Without Work
by Nick Srnicek and Alex Williams
Published 1 Oct 2015

These are tasks that computers are perfectly suited to accomplish once a programmer has created the appropriate software, leading to a drastic reduction in the numbers of routine manual and cognitive jobs over the past four decades.22 The result has been a polarisation of the labour market, since many middle-wage, mid-skilled jobs are routine, and therefore subject to automation.23 Across both North America and Western Europe, the labour market is now characterised by a predominance of workers in low-skilled, low-wage manual and service jobs (for example, fast-food, retail, transport, hospitality and warehouse workers), along with a smaller number of workers in high-skilled, high-wage, non-routine cognitive jobs.24 The most recent wave of automation is poised to change this distribution of the labour market drastically, as it comes to encompass every aspect of the economy: data collection (radio-frequency identification, big data); new kinds of production (the flexible production of robots,25 additive manufacturing,26 automated fast food); services (AI customer assistance, care for the elderly); decision-making (computational models, software agents); financial allocation (algorithmic trading); and especially distribution (the logistics revolution, self-driving cars,27 drone container ships and automated warehouses).28 In every single function of the economy – from production to distribution to management to retail – we see large-scale tendencies towards automation.29 This latest wave of automation is predicated upon algorithmic enhancements (particularly in machine learning and deep learning), rapid developments in robotics and exponential growth in computing power (the source of big data) that are coalescing into a ‘second machine age’ that is transforming the range of tasks that machines can fulfil.30 It is creating an era that is historically unique in a number of ways. 
New pattern-recognition technologies are rendering both routine and non-routine tasks subject to automation: complex communication technologies are making computers better than humans at certain skilled-knowledge tasks, and advances in robotics are rapidly making technology better at a wide variety of manual-labour tasks.31 For instance, self-driving cars involve the automation of non-routine manual tasks, and non-routine cognitive tasks such as writing news stories or researching legal precedents are now being accomplished by robots.32 The scope of these developments means that everyone from stock analysts to construction workers to chefs to journalists is vulnerable to being replaced by machines.33 Workers who move symbols on a screen are as at risk as those moving goods around a warehouse.

A Critique of Rifkin and Negri’, in In Letters of Blood and Fire (Oakland, CA: PM Press, 2012), p. 78. 46.It should be mentioned that, increasingly, tacit knowledge tasks are being automated through environmental control and machine learning, with more recent innovations eliminating even the need for a controlled environment. Frey and Osborne, Future of Employment, p. 27; Autor, Polanyi’s Paradox; Sarah Yang, ‘New “Deep Learning” Technique Enables Robot Mastery of Skills via Trial and Error’, Phys.org, 21 May 2015, at phys.org. 47.As Marx notes, because of this ‘the field of application for machinery would therefore be entirely different in a communist society from what it is in bourgeois society.’ Marx, Capital, Volume I, p. 515 n. 33. 48.Silvia Federici, ‘Permanent Reproductive Crisis: An Interview’, Mute, 7 March 2013, at metamute.org. 49.For an excellent overview of historical experiences of alternative domestic arrangements, see Dolores Hayden, Grand Domestic Revolution: A History of Feminist Designs for American Homes, Neighbourhoods and Cities (Cambridge: MIT Press, 1996). 50.However, it is important to recognise that, historically, domestic labour-saving devices have tended to place greater demands on household maintenance, rather than allowing more free time.

pages: 324 words: 91,653

The Quantum Thief
by Hannu Rajaniemi
Published 1 Jan 2010

They gather bits of your gevulot so they can decrypt your mind.’ ‘Why would they want him? He was nothing special. He could make chocolate. I don’t even like chocolate.’ ‘I think your husband was exactly the kind of person the gogol pirates would be interested in, a specialised mind,’ Isidore says. ‘The Sobornost have an endless appetite for deep learning models, and they are obsessed with human sensory modalities, especially taste and smell.’ He takes care to include Élodie in the conversation’s gevulot. ‘And his chocolate certainly is special. His assistant was kind enough to let me try some when I visited the shop: freshly made, a sliver of that dress that arrived from the factory this morning.

His floor and desk are covered in three-dimensional building sketches, both imaginary and real, dominated by a scale model of the Ares Cathedral. The green creature hides behind it. Smart move, little fellow. It’s a big, bad world out there. Many of his fellow students find studying frustrating. As perfect as exomemory is, it only gives you short-term memories. Deep learning still comes from approximately ten thousand hours of work on any given subject. Isidore does not mind: on a good day, he can get lost in the purity of form for hours, exploring tempmatter models of buildings, feeling each detail under his fingertips. He summons up a text on the Tendai sect and the Daidairi Palace and starts reading, waiting for the contemporary world to fade.

pages: 351 words: 93,982

Leading From the Emerging Future: From Ego-System to Eco-System Economies
by Otto Scharmer and Katrin Kaufer
Published 14 Apr 2013

What strategies can help us to function as vehicles for shifting the whole? In exploring these questions, we laid out three big ideas. The first is that there are two fundamentally different modes of learning: learning from the past and learning from the emerging future. In order to learn from the emerging future, we have to activate a deep learning cycle that involves not only opening the mind (transcending the cognitive boundaries), but also opening the heart (transcending our relational boundaries) and opening the will (transcending the boundaries of our small will). The U process of learning from the emerging future follows three movements: “Observe, observe,” “Retreat and reflect: allow the inner knowing to emerge,” and “Act in an instant.”

These methodologies combine state-of-the-art organizational learning tools with participatory innovation techniques and blend them with awareness-based leadership practices. Mastery of these blended new leadership technologies, such as presencing, to sense and actualize emerging future possibilities is the methodological backbone of the school. 4. Presencing coaching circles. One of the most important mechanisms for holding the space for deep learning is peer circles that use deep listening–based coaching practices. A coaching circle usually consists of five to seven members and applies a version of the case clinic process that we described at the end of chapter 7. We have found that the power of these peer group circles is simply amazing. They hold the space for individual and shared renewal.

Likewar: The Weaponization of Social Media
by Peter Warren Singer and Emerson T. Brooking
Published 15 Mar 2018

In turn, the next layer might discover “circles”; the layer after that, “faces”; the layer after that, “noses.” Each layer allows the network to approach a problem with more and more granularity. But each layer also demands exponentially more neurons and computing power. Neural networks are trained via a process known as “deep learning.” Originally, this process was supervised. A flesh-and-blood human engineer fed the network a mountain of data (10 million images or a library of English literature) and slowly guided the network to find what the engineer was looking for (a “car” or a “compliment”). As the network went to work on its pattern-sorting and the engineer judged its performance and tweaked the synapses, it got a little better each time.
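That supervised loop of guessing, being judged, and having the synapses tweaked can be shown with the smallest possible "network": a single artificial neuron. The features and labels below are invented for illustration; a real image network stacks millions of such units across many layers.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Supervised learning in miniature: guess, compare with the human
    label, and nudge the weights ('synapses') whenever the guess is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in data:
            guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = label - guess           # the engineer's judgment
            w[0] += lr * error * x[0]       # tweak the synapses
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# Hypothetical human-labelled examples: [has_wheels, has_windows] -> "car" (1) or not (0).
labelled = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
w, b = train_perceptron(labelled)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in labelled])  # [1, 0, 0, 0]
```

Deep learning repeats this correction step across stacked layers via backpropagation, which is what lets early layers settle into edge detectors and later layers into face detectors.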

And yet, just as in the Terminator movies, if humans are to be spared from this encroaching, invisible robot invasion, their likely savior will be found in other machines. Recent breakthroughs in neural network training hint at what will drive machine evolution to the next level, but also save us from algorithms that seek to manipulate us: an AI survival of the fittest. Newer, more advanced forms of deep learning involve the use of “generative adversarial networks.” In this type of system, two neural networks are paired off against each other in a potentially endless sparring match. The first network strains to create something that seems real—an image, a video, a human conversation—while the second network struggles to determine if it’s fake.
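The sparring dynamic can be caricatured with one number per player. This is emphatically not a real GAN (both players here are scalars with made-up update rules, not neural networks trained by gradients), but it shows the alternating loop: the discriminator redraws its boundary, then the generator moves to fool it.

```python
real_mean = 5.0  # stand-in for the "real" data the forger tries to imitate
g = 0.0          # generator's lone parameter: the value it currently fakes

for step in range(100):
    # Discriminator's turn: place its decision boundary midway between
    # the real data and the current fakes.
    boundary = (real_mean + g) / 2
    # Its verdict: how far the fake sits on the "fake" side of the boundary.
    fake_score = boundary - g
    # Generator's turn: nudge its output in the direction that fools
    # the discriminator.
    g += 0.2 * fake_score

print(round(g, 2))  # the fake converges toward the real value
```

In a genuine GAN both players are deep networks and the "boundary" is a learned classifier, but the endless sparring match has the same structure: each improvement by one side creates the training signal for the other.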

See information data localization, 89 Dawkins, Richard, 189, 190 de Tocqueville, Alexis, 121 deaths Chicago gangs, 13–14 Duterte’s “drug war,” 15 Gaza City, 194, 196 ISIS and, 153 Nairobi’s Westgate mall, 235 Russia and Ukraine, 201–2, 204 Turkey coup, 91–92 deep fakes, 253 deep learning, 249, 250, 256 deep web, 52 democracy (vs. trolls), 211, 241, 262–63, 265–66 Denver Guardian (fake newspaper), 132 Dewey, Caitlin, 137 Digital Forensic Research Lab, 138 Digital Millennium Copyright Act (DMCA), 225–26 digital trail, 60–61, 66 digital wildfires, 137 digital world war, 172 DigitaShadow, 213 diplomacy, 15, 202 discrediting anti-vaxxers, 124–25 campaigners, 116 of information, 123 journalists and activists, 15, 114 of truth, 140 See also trolls and trolling disinformation, 103–14, 137 botnets, 141–47 cybersecurity, 241 dangers of, 261 Russian botnets, 144–45 See also fake news; propaganda disintermediation, 54–55 diversity, 101 Dixson, Angee, 138–39, 140 Dogood, Silence, 29 Domino’s Pizza, hack, 195 Donbass News International (DNI), 108 Dorsey, Jack, 48 dot-com bubble, 44 Drew, Lori, 227–28 Ducca, Lauren, 116 Dumbledore’s Army, 172 Dupin, C.

pages: 411 words: 98,128

Bezonomics: How Amazon Is Changing Our Lives and What the World's Best Companies Are Learning From It
by Brian Dumaine
Published 11 May 2020

We place buy orders for millions of items automatically.” Under the old system, Wilke and his managers only had the bandwidth to focus on Amazon’s top-selling items, but at the scale it operates today, those conversations wouldn’t be possible. Now the original retail buying model that used to be stored in human brains is stored in deep learning algorithms—the thinking process is the same, but Amazon’s managers don’t have to repeat the same analyses over and over again. The other advantage is that the machines produce more consistent results. In the past, Amazon managers had their own spreadsheets and their unique models for making guesses about supply and demand.

Amazon spent more than two decades accumulating data on its customers and honing its AI programs to get to the point where the software is the business model. It comes as little surprise, then, that a 2019 survey by the research firm IDC found that only 25 percent of global corporations have an enterprise-wide AI strategy. Even at Amazon, machines are still far from perfect. If there is an aberration, the deep learning algorithms still aren’t smart enough to adjust on the fly. Say a hurricane hits New Orleans: the machines won’t know to stock more food and water there because it’s a random event. And the programs sometimes become outmoded. Wilke and his AI team are constantly evaluating the algorithms to make sure they’re maximizing business.
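The replenishment logic Wilke describes — forecast demand, compare with stock on hand, place the buy order automatically — can be sketched in miniature. A naive moving average stands in here for Amazon's actual deep-learning forecasters, and the function names and safety margin are illustrative assumptions:

```python
def forecast_demand(weekly_sales, window=4):
    """Naive moving-average forecast of next week's demand for one item."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def buy_order_quantity(weekly_sales, on_hand, safety_factor=1.2):
    """Units to order so stock covers forecast demand plus a safety margin."""
    needed = forecast_demand(weekly_sales) * safety_factor - on_hand
    return max(0, round(needed))

# one item's recent weekly sales, with 5 units currently in stock
order = buy_order_quantity([10, 12, 11, 13], on_hand=5)
```

A rule like this also shows why the hurricane example fails: the forecast only extrapolates the sales history it has seen, so a one-off demand spike is invisible to it until after the fact.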

pages: 350 words: 109,379

How to Run a Government: So That Citizens Benefit and Taxpayers Don't Go Crazy
by Michael Barber
Published 12 Mar 2015

There is widespread and significant progress which is becoming irreversible.21 If the routines are in place, the political leader can move from crucial detail to big picture, from nuts and bolts to overall design, from individual to nation because he or she, or at least the head of delivery, really knows what’s happening now. RULE 36 A FULL-SCALE REVIEW OF THE PROGRAMME AT LEAST ONCE A YEAR PROVIDES DEEP LEARNING (which can be acted on immediately) There is another walk in the Lake District that I love, this one less rugged, less dramatic and less hard work, not least because Rossett Gill is not involved. Even so, it is achingly beautiful and it has a personal connection because it involves crossing a wooded hillside once owned by my great-grandfather.

ROUTINES DON’T BE SPOOKED BY THE DEAFENING SILENCE (but keep listening) ANTICIPATE THE IMPLEMENTATION DIP (and demonstrate the leadership required to get through it) DEAL WITH CRISES (but don’t use them as an excuse) GOVERNMENT BY ROUTINE BEATS GOVERNMENT BY SPASM (it’s not even close) PREPARE MONTHLY NOTES FOR THE LEADER (and make them ‘deeply interesting’) ROUTINE MEETINGS OR STOCKTAKES CREATE FALSE DEADLINES (and solve problems before they become crises) A FULL-SCALE REVIEW OF THE PROGRAMME AT LEAST ONCE A YEAR PROVIDES DEEP LEARNING (which can be acted on immediately) UNDERSTAND THE WOOD AND THE TREES (and the view beyond) 6. PROBLEM-SOLVING CATEGORIZE PROBLEMS BY THEIR INTENSITY (and act accordingly) DIAGNOSE PROBLEMS PRECISELY (and act accordingly) TAKE ALL THE EXCUSES OFF THE TABLE LEARN ACTIVELY FROM EXPERIENCE (failure is a great teacher) NEGOTIATE ON THE BASIS OF PRINCIPLE (but don’t depend on it) GUARD AGAINST FOLLY (it has been common throughout history) 7.

pages: 489 words: 106,008

Risk: A User's Guide
by Stanley McChrystal and Anna Butrico
Published 4 Oct 2021

Google eagerly signed the contract. Now identifying as an AI company (not a data company, as it had formerly been known), Google would create a "customized AI surveillance engine" to scour the DoD's massive amount of footage. Google's computer vision, which incorporated both machine learning and deep learning, would analyze the data to track the movements of vehicles and other objects. As they quietly engaged with Project Maven, Google's AI services showed initial progress—Google's software had greater success than humans in detecting important footage. aunt jemima ■ Aunt Jemima is a brand of syrup and pancake mix and other foods, whose packaging features the image of the eponymous character originally appropriated from nineteenth-century minstrel shows.

“AI arms race”: Cheryl Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” US Department of Defense, July 21, 2017, https://defense.gov/Explore/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/. identifying as an AI company: Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End.” “customized AI surveillance engine”: Letter to Sundar Pinchai, https://static01.nyt.com/files/2018/technology/googleletter.pdf. both machine learning and deep learning: Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End.” track the movements: Letter to Sundar Pinchai. software had greater success: Scheiber and Conger, “Great Google Revolt.” nineteenth-century minstrel shows: Beatrice Dupuy, “No Evidence Former Slave Who Helped Launch Aunt Jemima Products Became a Millionaire,” AP, June 19, 2020, https://apnews.com/afs:Content:9030960288.

pages: 374 words: 111,284

The AI Economy: Work, Wealth and Welfare in the Robot Age
by Roger Bootle
Published 4 Sep 2019

Over the last decade, however, a number of key developments have come together to power AI forward:
• Enormous growth in computer processing power.
• Rapid growth in available data.
• The development of improved technologies, including advances in text, image (including facial), and voice recognition.
• The development of "deep learning".
• The advent of algorithm-based decision-making.
So now AI seems close to its "James Watt moment." Just as the steam engine was in existence for some time before Watt developed it and it came to transform production, so AI, which has been on the scene for some time, is about to stage a leap forward.

But, of course, computers are used everywhere in the world. Moreover, if a country decided to eschew the use of computers because it did not produce them it would consign itself to the economic scrap heap. The same is true of AI. Just because your country does not produce AI – none of the algorithms, deep learning apps that are driving AI, nor the physical entities, such as robots – this does not mean that you cannot benefit by employing them. Indeed, if you don’t, you risk falling into economic irrelevance. That said, there is a marked difference of opinion among the technological cognoscenti about how innovation, including with regard to AI – and the gains from it – will be distributed globally.

pages: 390 words: 109,519

Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media
by Tarleton Gillespie
Published 25 Jun 2018

See also Tracy Ith, "Microsoft's PhotoDNA: Protecting Children and Businesses in the Cloud," Microsoft News Center, July 15, 2015, https://news.microsoft.com/features/microsofts-photodna-protecting-children-and-businesses-in-the-cloud/.
83. YouTube also uses a tool called ContentID to automatically identify copyrighted music in user videos that works in much the same way, by matching the "fingerprint" of the music to a database of copyrighted works.
84. Sarah Perez, "Facebook, Microsoft, Twitter and YouTube Collaborate to Remove 'Terrorist Content' from Their Services," TechCrunch, December 5, 2016, https://techcrunch.com/2016/12/05/facebook-microsoft-twitter-and-youtube-collaborate-to-remove-terrorist-content-from-their-services/.
85. My thanks to Sarah Myers West for this insight.
86. Ap-Apid, "An Algorithm for Nudity Detection."
87. Ibid.
88. Nude.js (https://github.com/pa7/nude.js, 2010) and Algorithmia's tool (https://isitnude.com/, 2015) both explicitly acknowledge the Ap-Apid paper as the core of their tools.
89. Ma et al., "Human Skin Detection via Semantic Constraint."
90. Lee et al., "Naked Image Detection Based on Adaptive and Extensible Skin Color Model"; Platzer, Stuetz, and Lindorfer, "Skin Sheriff"; Sengamedu, Sanyal, and Satish, "Detection of Pornographic Content in Internet Images."
91. Lee et al., "Naked Image Detection Based on Adaptive and Extensible Skin Color Model."
92. James Sutton, "Improving Nudity Detection and NSFW Image Recognition," KD Nuggets, June 25, 2016, http://www.kdnuggets.com/2016/06/algorithmia-improving-nudity-detection-nsfw-image-recognition.html.
93. Sengamedu, Sanyal, and Satish, "Detection of Pornographic Content in Internet Images."
94. Jay Mahadeokar and Gerry Pesavento, "Open Sourcing a Deep Learning Solution for Detecting NSFW Images," Yahoo Engineering blog, September 30, 2016, https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for.
95. Agarwal and Sureka, "A Focused Crawler"; Djuric et al., "Hate Speech Detection with Comment Embeddings"; Sood, Antin, and Churchill, "Profanity Use in Online Communities"; Warner and Hirschberg, "Detecting Hate Speech on the World Wide Web."
96. Brendan Maher, "Can a Video Game Company Tame Toxic Behaviour?"
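Several of the papers cited in these notes build on per-pixel skin-color models. A classic hand-tuned RGB rule of the kind those systems refine looks roughly like the sketch below — an illustrative baseline, not the method of any cited paper, with the threshold values and function names as assumptions:

```python
def is_skin_pixel(r, g, b):
    # hand-tuned RGB bounds of the sort used as a skin-color baseline
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_fraction(pixels):
    """pixels: iterable of (r, g, b) tuples; returns share classified as skin."""
    pixels = list(pixels)
    hits = sum(is_skin_pixel(*p) for p in pixels)
    return hits / len(pixels)

# a toy "image": a high share of skin-toned pixels trips the flag
sample = [(200, 120, 90)] * 7 + [(30, 80, 200)] * 3
flagged = skin_fraction(sample) > 0.5
```

The brittleness of rules like this — they misfire on deserts, wood, and varied skin tones — is precisely why the later work cited here (note 94) moved to trained deep-learning classifiers.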

Reset
by Ronald J. Deibert
Published 14 Aug 2020

Central Asian countries like Uzbekistan and Kazakhstan have even gone so far as to advertise for Bitcoin mining operations to be hosted in their jurisdictions because of cheap and plentiful coal and other fossil-fuelled energy sources.349 Some estimates put electric energy consumption associated with Bitcoin mining at around 83.67 terawatt-hours per year, more than that of the entire country of Finland, with carbon emissions estimated at 33.82 megatons, roughly equivalent to those of Denmark.350 To put it another way, the Cambridge Centre for Alternative Finance says that the electricity consumed by the Bitcoin network in one year could power all the teakettles used to boil water in the entire United Kingdom for nineteen years.351 A similar energy-sucking dynamic underlies other cutting-edge technologies, like “deep learning.” The latter refers to the complex artificial intelligence systems used to undertake the fine-grained, real-time calculations associated with the range of social media experiences, such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, and so on.
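The figures quoted above imply a carbon intensity for the Bitcoin network that is easy to check with unit conversion:

```python
energy_twh = 83.67       # estimated annual consumption, terawatt-hours (from the passage)
emissions_mt = 33.82     # estimated annual emissions, megatons CO2 (from the passage)

grams_co2 = emissions_mt * 1e12   # 1 megaton = 1e12 grams
kwh = energy_twh * 1e9            # 1 TWh = 1e9 kWh
intensity = grams_co2 / kwh       # grams CO2 per kWh, roughly 404
```

Around 404 g CO2 per kWh is in the neighborhood of a fossil-heavy electricity mix, which is consistent with the coal-powered mining operations the passage describes.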

Training a single AI model can emit as much carbon as five cars in their lifetimes. Retrieved from https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/; Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Retrieved from https://arxiv.org/abs/1906.02243 Data centres are “hidden monuments” to our excessive data consumption: Hogan, M. (2015). Facebook data storage centers as the archive’s underbelly. Television & New Media, 16(1), 3–18. https://doi.org/10.1177/1527476413509415; See also Hogan, M. (2015).

pages: 334 words: 109,882

Quit Like a Woman: The Radical Choice to Not Drink in a Culture Obsessed With Alcohol
by Holly Glenn Whitaker
Published 9 Jan 2020

This practice gives you the chance to stay with the parts of yourself that are hurting, the parts that need you not to go somewhere else, like down the neck of a bottle or the rabbit hole of social media. RASINS also gives you something else, access to a practice known as deep learning. People learn better when they are challenging themselves just outside their ability (where things are not too hard and not too easy). In this state of deep learning, when we are trying to accomplish something difficult, we lay thicker neural networks, meaning that—with practice—our skill at not drinking becomes more potent than our skill of drinking. Lastly, it breaks the cycle of cause and effect: not reacting means not feeding the habit.

pages: 492 words: 118,882

The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory
by Kariappa Bheemaiah
Published 26 Feb 2017

Those jobs are going to get decimated, literally.” Advantages: greater inclusion, increased competition, data standardization Risks: compliance costs, regulation blocks risk monitoring, and technological unemployment 4. Capital Markets Stance: Business-facing Main technologies: Trading Algorithms, Big Data, Neural Nets, Machine/Deep Learning, AI If we were to increase the scale, speed, and volume of the transactions and services stated in the private wealth management industry, we would find ourselves in the high-frequency trading (HFT) world of capital markets , which encompasses the trade and management of private equity, commodities, and derivatives.

As technologies such as the Blockchain begin to remove central points of control, the evolving digital and decentralized structure of markets today are challenging the predefined theories on productivity, risk allocation, and labor requirements. Increased automation, propelled by rapid advancements in machine/deep learning, mobile payments, robotics, and the exponential increase in the computerization of tasks, is leading to the development of networked, on- demand businesses which are transforming and reorganizing firms and establishing new skill requirements across the entire economy. As tasks are digitized and operations are networked, processes can be codified and then replicated.

pages: 706 words: 202,591

Facebook: The Inside Story
by Steven Levy
Published 25 Feb 2020

He wasn’t thinking about content moderation then, but rather improvement in things like News Feed ranking, better targeting in ad auctions, and facial recognition to better identify your friends in photographs, so you’d engage more with those posts. But the competition to hire AI wizards was fierce. The godfather of deep learning was a British computer scientist working in Toronto named Geoffrey Hinton. He was like the Batman of this new and irreverent form of AI, and his acolytes were a trio of brilliant Robins who individually were making their own huge contributions. One of the Robins, a Parisian named Yann LeCun, jokingly dubbed Hinton’s movement “the Conspiracy.” But the potential of deep learning was no joke to the big tech companies who saw it as a way to perform amazing tasks at scale, everything from facial recognition to instant translation from one language to another.

In the earliest days Facebook did hire some people adept in AI, and both the News Feed and the ad auction were fueled by learning algorithms. But beginning in the mid-2010s one particular approach known as machine learning began to accumulate amazing results, suddenly putting AI to use in a number of practical cases. This supercharged iteration on machine learning was called deep learning. It worked by training networks of artificial neurons—working somewhat like the actual neurons in the human brain—to rapidly identify things like objects in images, or spoken words. Zuckerberg felt that this was another moment like mobile, where the winners would be those who had the best machine-learning engineers.
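Levy's description — networks of artificial neurons stacked in layers — amounts to composing simple transformations. A minimal forward pass makes the structure concrete (a numpy sketch with untrained random weights; the layer sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)   # each neuron's simple nonlinearity

# three stacked layers: each is just "multiply by weights, add bias, squash"
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h = relu(x @ W1 + b1)       # layer 1
    h = relu(h @ W2 + b2)       # layer 2
    return h @ W3 + b3          # output layer, e.g. scores for two classes

scores = forward(rng.normal(size=(5, 4)))   # 5 inputs of 4 features each
```

Training would adjust every W and b by backpropagation against labeled examples; the "deep" in deep learning is nothing more than stacking many such layers.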

pages: 159 words: 42,401

Snowden's Box: Trust in the Age of Surveillance
by Jessica Bruder and Dale Maharidge
Published 29 Mar 2020

— Leonard Cohen, “Everybody Knows” On a September night in 2017, New York Times tech columnist Farhad Manjoo and his wife were getting ready to sleep when a blood-curdling shriek arose from the bedside. It was Alexa. “The voice assistant began to wail, like a child screaming in a horror-movie dream,” Manjoo later recalled. His Twitter followers greeted the news with sarcasm and satirical advice: “You have an always-on, deep-learning supercomputer node in your house always listening and you are surprised it screams?” wrote one. “Why voluntarily have CIA spy tech in your home?” asked another. “If I were you, I’d keep all the network access wires in one place and keep an axe nearby,” advised a third. Suspicions about Alexa were already running high; a hacker had recently demonstrated how an Echo could be transformed into a wiretap.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

After correcting for the unprecedented amount of hardware DeepMind was willing to employ, it is not clear whether AlphaGo deviates from the trend of algorithmic improvements at all (Brundage 2016). 37. More specifically, most AI breakthroughs have been due to a particular approach to machine learning that uses multilayered neural networks, known as “deep learning” (Goodfellow et al. 2016; LeCun et al. 2015). At the time of writing, the state-of-the-art AI for text-based applications are so-called transformers, which include Google’s BERT and OpenAI’s GPT-3 (T. Brown et al. 2020; Devlin et al. 2019; Vaswani et al. 2017). Transformers have also been successfully used for tasks involving audio (Child et al. 2019), images (M.

The highest-profile AI achievements in real-time strategy games were DeepMind’s AlphaStar defeat of human grandmasters in the game StarCraft II and the OpenAI Five’s defeat of human world champions in Dota 2 (OpenAI et al. 2019; Vinyals et al. 2019). Early successes in image classification (see, e.g., Krizhevsky et al. 2012) are widely seen as having been key for demonstrating the potential of deep learning. See also the following: speech recognition, Abdel-Hamid et al. (2014); Ravanelli et al. (2019); music, Briot et al. (2020); Choi et al. (2018); Magenta (n.d.); visual art, Gatys et al. (2016); Lecoutre et al. (2017). Building on astonishing progress demonstrated by Ramesh et al. (2021), the ability to create images from text descriptions by combining two AI systems known as VQGAN (Esser et al. 2021) and CLIP (OpenAI 2021b; Radford et al. 2021) caused a Twitter sensation (Miranda 2021). 38.

pages: 492 words: 141,544

Red Moon
by Kim Stanley Robinson
Published 22 Oct 2018

He sat down and began to ponder again the problem of programming self-improvement into an AI. New work from Chengdu on rather simple Monte Carlo tree searches and combinatorial optimization had given him some ideas. Deep learning was alas very shallow whenever it left closed sets of rules and data; the name was a remnant of early AI hype. If you wanted to win a game like chess or go, fine, but when immersed in the larger multivariant world, AI needed more than deep learning. It needed to incorporate the symbolic logic of earlier AI attempts, and the various programs that instructed an AI to pursue “child’s play,” meaning randomly created activities and improvements.

Doppelganger: A Trip Into the Mirror World
by Naomi Klein
Published 11 Sep 2023

Yet now we find ourselves neck-deep in a system where, as with my own real-life doppelganger, the stakes are distinctly higher. Personal data, extracted without full knowledge or understanding, is sold to third parties and can influence everything from what loans we are eligible for to what job postings we see—to whether our jobs are replaced by deep learning bots that have gotten shockingly good at impersonating us. And those helpful recommendations and eerie impersonations come from the same algorithms that have led countless people down perilous information tunnels that end in comparing a vaccine app to the Holocaust and may yet end up somewhere far more dangerous.

“An invented past can never be used; it cracks and crumbles under the pressures of life like clay in a season of drought,” James Baldwin wrote. However, “to accept one’s past—one’s history—is not the same thing as drowning in it; it is learning how to use it.” Many Indigenous friends and neighbors I spoke with, though raw with grief and rage, expressed cautious hope that this kind of deep learning might actually be afoot. In an interview with The Globe and Mail, Norman Retasket, a survivor of the Kamloops school, observed, “If I told the same story three years ago,” about what happened at the school, it would have been seen as “fiction.” Now his stories are believed. “The story hasn’t changed,” he said.

pages: 181 words: 52,147

The Driver in the Driverless Car: How Our Technology Choices Will Create the Future
by Vivek Wadhwa and Alex Salkever
Published 2 Apr 2017

To me, the crux of this matter will be maintaining the ability of humans to understand robots and stop them from going too far. Google is looking at building in a kill switch on its A.I. systems.12 Other researchers are developing tools to visualize the otherwise impenetrable code in machine-generated algorithms built using Deep Learning systems. So the question that we must always be able to answer in the affirmative is whether we can stop it. With both A.I. and robotics, we must design all systems with this key consideration in mind, even if that reduces the capabilities and emergent properties of those systems and robots. Will All Benefit Equally?

Mastering Machine Learning With Scikit-Learn
by Gavin Hackeling
Published 31 Oct 2014

For these reasons, this representation is ineffective for tasks that involve photographs or other natural images. Modern computer vision applications frequently use either hand-engineered feature extraction methods that are applicable to many different problems, or automatically learn features without supervision using techniques such as deep learning. We will focus on the former in the next section.

Extracting points of interest as features

The feature vector we created previously represents every pixel in the image; all of the informative attributes of the image are represented and all of the noisy attributes are represented too.
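The idea behind a point-of-interest detector can be sketched directly: score each pixel by how strongly the image changes in every direction around it, as in a Harris-style corner response. This is a simplified numpy illustration of the principle, not the implementation used by any particular library (scikit-image, for instance, ships ready-made detectors):

```python
import numpy as np

def harris_response(img, k=0.05, win=3):
    """Harris-style corner score: high where gradients vary in both directions."""
    Iy, Ix = np.gradient(img.astype(float))      # image gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # sum each gradient product over a small window around every pixel
        out = np.zeros_like(a)
        h, w = a.shape
        r = win // 2
        for i in range(h):
            for j in range(w):
                out[i, j] = a[max(0, i - r):i + r + 1,
                              max(0, j - r):j + r + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0        # a bright square: its corners are interest points
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)   # lands at a corner of the square
```

Flat regions score zero and edges score low; only corners — where the image changes in both directions — score high, which is why such points make compact, informative features.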

pages: 543 words: 153,550

Model Thinker: What You Need to Know to Make Data Work for You
by Scott E. Page
Published 27 Nov 2018

He discovered the orbits to be consistent with the presence of a large planet in the outer region of the solar system. On September 18, 1846, he sent his prediction to the Berlin Observatory. Five days later, astronomers located the planet Neptune exactly where Le Verrier had predicted it would be. That said, prediction differs from explanation. A model can predict without explaining. Deep-learning algorithms can predict product sales, tomorrow’s weather, price trends, and some health outcomes, but they offer little in the way of explanation. Such models resemble bomb-sniffing dogs. Even though a dog’s olfactory system can determine whether a package contains explosives, we should not look to the dog for an explanation of why the bomb is there, how it works, or how to disarm it.

In this example, a straight line classifies nearly perfectly.7

Figure M1: Using a Linear Model to Classify Voting Behavior

Nonlinear classifications: In figure M2, positives (+) represent frequent flyers, consumers who fly more than 10,000 miles per year, and negatives (-) represent all other customers of an airline. People of middle age and higher income are more likely to fly. To classify these data requires a nonlinear model, which could be estimated using deep-learning algorithms, such as neural networks. Neural networks include more variables so that they can fit almost any curve.

Figure M2: Using a Nonlinear Model to Classify Frequent Flyers

Forests of decision trees: In figure M3, positives (+) represent people who attended a science fiction convention based on their age and the hours per week they spend on the internet.
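Page's frequent-flyer case can be reproduced in miniature: no single linear cut on age separates a middle-aged band of flyers, while one nonlinear feature (squared distance from middle age) separates it perfectly. The toy data below is an illustrative assumption, not the book's dataset:

```python
ages  = [20, 30, 40, 45, 50, 60, 70]
flyer = [0,  0,  1,  1,  1,  0,  0]   # frequent flyers cluster in middle age

# best single linear threshold on age, trying both directions
best_linear = 0
for t in ages:
    ge = sum((a >= t) == bool(y) for a, y in zip(ages, flyer))
    lt = sum((a < t) == bool(y) for a, y in zip(ages, flyer))
    best_linear = max(best_linear, ge, lt)   # at most 5 of 7 correct

# one nonlinear feature makes the classes perfectly separable
sq_dist = [(a - 45) ** 2 for a in ages]
correct = sum((s <= 100) == bool(y) for s, y in zip(sq_dist, flyer))
nonlinear_acc = correct / len(ages)          # 7 of 7 correct
```

Neural networks earn their keep by learning transformations like `sq_dist` automatically instead of requiring an analyst to hand-pick them.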

pages: 215 words: 59,188

Seriously Curious: The Facts and Figures That Turn Our World Upside Down
by Tom Standage
Published 27 Nov 2018

This narrows the system’s guesswork considerably. In recent years, machine-learning approaches have made rapid progress, for three reasons. First, computers are far more powerful. Second, they can learn from huge and growing stores of data, whether publicly available on the internet or privately gathered by firms. Third, so-called “deep learning” methods have combined faster computers and more abundant data with new training algorithms and more complex architectures that can learn from example even more efficiently. All this means that computers are now impressively competent at handling spoken requests that require a narrowly defined reply.

pages: 918 words: 257,605

The Age of Surveillance Capitalism
by Shoshana Zuboff
Published 15 Jan 2019

The company then issued nonvoting class “C” shares in 2016, solidifying Zuckerberg’s personal control over every decision.14 While financial scholars and investors debated the consequences of these share structures, absolute corporate control enabled the Google and Facebook founders to aggressively pursue acquisitions, establishing an arms race in two critical arenas.15 State-of-the-art manufacturing depended on machine intelligence, compelling Google and later Facebook to acquire companies and talent representing its disciplines: facial recognition, “deep learning,” augmented reality, and more.16 But machines are only as smart as the volume of their diet allows. Thus, Google and Facebook vied to become the ubiquitous net positioned to capture the swarming schools of behavioral surplus flowing from every computer-mediated direction. To this end the founders paid outsized premiums for the chance to corner behavioral surplus through acquisitions of an ever-expanding roster of key supply routes.

If the company had tried to process the growing computational workload with traditional CPUs, he explained, “We would have had to double the entire footprint of Google—data centers and servers—just to do three minutes or two minutes of speech recognition per Android user per day.”27 With data center construction as the company’s largest line item and power as its highest operating cost, Google invented its way through the infrastructure crisis. In 2016 it announced the development of a new chip for “deep learning inference” called the tensor processing unit (TPU). The TPU would dramatically expand Google’s machine intelligence capabilities, consume only a fraction of the power required by existing processors, and reduce both capital expenditure and the operational budget, all while learning more and faster.28 Global revenue for AI products and services is expected to increase 56-fold, from $644 million in 2016 to $36 billion in 2025.29 The science required to exploit this vast opportunity and the material infrastructure that makes it possible have ignited an arms race among tech companies for the 10,000 or so professionals on the planet who know how to wield the technologies of machine intelligence to coax knowledge from an otherwise cacophonous data continent.

In 2017 Facebook boasted two billion monthly users uploading 350 million photos every day, a supply operation that the corporation’s own researchers refer to as “practically infinite.”56 In 2018 a Facebook research team announced that it had “closed the gap” and was now able to recognize faces “in the wild” with 97.35 percent accuracy, “closely approaching human-level performance.” The report highlights the corporation’s supply and manufacturing advantages, especially the use of “deep learning” based on “large training sets.”57 Facebook announced its eagerness to use facial recognition as a means to more powerful ad targeting, but even more of the uplift would come from the immense machine training opportunities represented by so many photos. By 2018, its machines were learning to discern activities, interests, mood, gaze, clothing, gait, hair, body type, and posture.58 The marketing possibilities are infinite.

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

Given the current crazy levels of hype over the power of AI systems, my three-year-old daughter may well grow up believing that her best friend in university will be an android. Sadly, she, like her uncle and me, will be disappointed. While there has been a step change in the power of AI systems, brought about in the last decade by advances in deep learning techniques, AI systems are not nearly as intelligent as the press, politicians and philosophers would like us to believe. The hype is understandable: technology leaders have to hugely overstate the life-changing power of their AI systems to have any chance of gaining venture capital these days; journalists have to overstate the strength of results from AI projects, to compete in a clickbait environment; and in order to make a name for themselves, politicians and philosophers need to take an extreme and short-term view of AI in order for it to appear relevant and timely.

pages: 200 words: 71,482

The Meaning of Everything: The Story of the Oxford English Dictionary
by Simon Winchester
Published 1 Jan 2003

The English establishment of the day may be rightly derided at this remove as having been class-ridden and imperialist, bombastic and blimpish, racist and insouciant—but it was marked undeniably also by a sweeping erudition and confidence, and it was peopled by men and women who felt they were able to know all, to understand much, and in consequence to radiate the wisdom of deep learning. It is worth pointing this out simply because it was such people—such remarkable, polymathic, cultured, fascinated, wise, and leisured people—who were primarily involved in the creation of the mighty endeavour that the following account celebrates. On that idyllic and blissfully warm Derby Day evening, two magnificent social events were due to be staged, at the beating heart of the nation's social life.

pages: 205 words: 71,872

Whistleblower: My Journey to Silicon Valley and Fight for Justice at Uber
by Susan Fowler
Published 18 Feb 2020

I called up editors of small and large print magazines and learned from their mistakes and their triumphs; I took long walks through the city with tech journalists and editors who shared their hard-earned lessons. I had a long list of themes I wanted to cover in my new magazine—everything from biotech to deep learning, from fundraising and hiring software engineers to serverless architecture—and narrowing it down was difficult. Eventually, I landed on the topic of on-call best practices, because I knew that even in the worst-case scenario where nobody else could write for the magazine, the engineering team at Stripe and I could put something together pretty quickly.

pages: 287 words: 69,655

Don't Trust Your Gut: Using Data to Get What You Really Want in LIfe
by Seth Stephens-Davidowitz
Published 9 May 2022

(These days, people are increasingly buying products on services such as Amazon Live, which allows people to pitch their products by video to potential customers.) Researchers were given videos of each sales pitch along with data on how much product was sold afterward. (They also had data on the product being sold, the price of the product, and whether they offered free shipping.) The methods: artificial intelligence and deep learning. The researchers converted their 62.32 million frames of video into data. In particular, the AI was able to code the emotional expression of the salesperson during the video. Did the salesperson appear angry? Disgusted? Scared? Surprised? Sad? Or happy? The result: the researchers found that the emotional expression of a salesperson was a major predictor of how much product they sold.

pages: 602 words: 177,874

Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations
by Thomas L. Friedman
Published 22 Nov 2016

So, for instance, in October 2015, Google released the basic algorithms for a program called TensorFlow for public consumption by the open-source community. TensorFlow is a set of algorithms that enable fast computers to do “deep learning” with big data sets to perform tasks better than a human brain. “By January 2016 we had a course online on how to use the TensorFlow open-source platform to write deep learning algorithms to teach a machine to do anything—copyediting, flying a plane, or legal discovery from documents,” explained Thrun. This is a huge new field of computer science. TensorFlow was released into the wild in October, and by January, Udacity, working directly with Google engineers, was teaching the skill on its platform.
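At its core, what TensorFlow automates is a loop: compute a loss, differentiate it, nudge the parameters — plus automatic differentiation and hardware acceleration at scale. A hand-rolled miniature of the same idea, fitting a line by gradient descent (a numpy sketch of the concept, not the TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 3.0 * x + 1.0 + 0.01 * rng.normal(size=50)   # data from y = 3x + 1, light noise

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y                 # prediction error
    w -= lr * 2.0 * np.mean(err * x)    # gradient of mean squared error w.r.t. w
    b -= lr * 2.0 * np.mean(err)        # ... and w.r.t. b

# w and b land near the true slope 3 and intercept 1
```

Swap the one-parameter line for millions of neural-network weights, and hand-derived gradients for automatic ones, and this is the "deep learning with big data sets" the course taught.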

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

While most chatbots are narrow AI – an algorithm designed to do one thing only, like order the pizza or run through your ‘choices’ before being transferred to a human – some chatbots seem smarter. Google engineer, inventor and futurist Ray Kurzweil’s Ramona will chat with you on a variety of topics. She’s a deep-learning system whose data-set is continuously augmented by her chats with humans. Kurzweil believes that Ramona will pass the Turing Test by 2029 – that is, she will be indistinguishable, online, from a human being. And that will be the big difference, because communication is not just about asking for information or issuing commands: humans like to do exactly what chatbots don’t do well right now – which is chat – and that implies purposeless, not goal-oriented, diverse, random, often low-level, yet pleasurable communication.

The Smartphone Society
by Nicole Aschoff

Popular weariness and distrust of Silicon Valley and the technology it is developing are eloquently expressed in works of popular culture: television shows such as Silicon Valley, Westworld, and Black Mirror and novels such as Whiskey Tango Foxtrot, The Circle, and the uncannily prescient Super Sad True Love Story. These pop explorations of how technology is shaping society range from dyspeptic satire to terrified (and terrifying) dystopian depictions of the future, should we continue down our current path. They function as real-time critique. Despite all the mystique surrounding deep learning and the dark web, our pop culture dystopias reveal a society well on its way to articulating a clear critique of what we don’t like about the Silicon Valley vision of the future.29 Our clarity is in part linked to the peculiar fact that many Silicon Valley visions about what technology should look like, and what our aspirations regarding technology should be, were originally located in science fiction.

Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurelien Geron
Published 14 Aug 2019

A simpler approach to maximizing the ELBO is called black box stochastic variational inference (BBSVI): at each iteration, a few samples are drawn from q and they are used to estimate the gradients of the ELBO with regards to the variational parameters λ, which are then used in a gradient ascent step. This approach makes it possible to use Bayesian inference with any kind of model (provided it is differentiable), even deep neural networks: this is called Bayesian deep learning. Tip If you want to dive deeper into Bayesian statistics, check out the Bayesian Data Analysis book by Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Gaussian mixture models work great on clusters with ellipsoidal shapes, but if you try to fit a dataset with different shapes, you may have bad surprises.
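As a loose illustration of the BBSVI recipe described in this excerpt (not the book's implementation), the sketch below fits a Gaussian variational distribution q = N(λ, s²) to the posterior of a toy conjugate model: at each iteration it draws samples from q, forms a score-function (REINFORCE) estimate of the ELBO gradient with respect to λ, and takes a gradient ascent step. The model, data, step size, and sample counts are all invented for illustration.

```python
import random

random.seed(0)

# Toy conjugate model: theta ~ N(0, 1) prior, y_i ~ N(theta, 1) likelihood.
# (Data and hyperparameters are invented for illustration.)
y = [1.2, 0.8, 1.0, 1.4, 0.6]

def log_joint(theta):
    """log p(y, theta) up to an additive constant."""
    return -0.5 * theta ** 2 + sum(-0.5 * (yi - theta) ** 2 for yi in y)

s = 0.5                # fixed std of the variational family q = N(lam, s^2)
lam, lr = 0.0, 0.01    # variational parameter and gradient-ascent step size
history = []
for _ in range(2000):
    thetas = [random.gauss(lam, s) for _ in range(100)]   # samples from q
    # f(theta) = log p(y, theta) - log q(theta), up to constants
    f = [log_joint(t) + 0.5 * ((t - lam) / s) ** 2 for t in thetas]
    b = sum(f) / len(f)                      # baseline for variance reduction
    # Score-function estimate of d(ELBO)/d(lam): E_q[grad_lam log q * (f - b)]
    grad = sum(((t - lam) / s ** 2) * (fi - b)
               for t, fi in zip(thetas, f)) / len(f)
    lam += lr * grad                         # gradient ascent step
    history.append(lam)

estimate = sum(history[-200:]) / 200         # average out late-stage noise
posterior_mean = sum(y) / (len(y) + 1)       # exact conjugate answer: 5/6
```

Because the model is conjugate, the exact posterior mean is available in closed form, which makes it easy to check that the noisy stochastic gradient ascent lands in the right place.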

pages: 280 words: 76,638

Rebel Ideas: The Power of Diverse Thinking
by Matthew Syed
Published 9 Sep 2019

Further research is taking place, not just in Segal’s lab but elsewhere, seeking to build more evidence.22 The goal is to use not merely the microbiome and genome to make dietary recommendations, but other personal factors such as medication, sleep and stress. Topol writes: What we really need to do is pull in multiple types of data . . . from multiple devices, like skin patches and smartwatches. With advanced algorithms, this is eminently doable. In the next few years, you could have a virtual health coach that is deep learning about your relevant health metrics and providing you with customized dietary recommendations. Yet diet is merely one branch of this conceptual revolution. In almost all areas of our lives, we will find ourselves moving from the era of standardisation to the era of personalisation. If this transformation is guided with wisdom, it has the potential to improve health, happiness and productivity, too.

pages: 472 words: 80,835

Life as a Passenger: How Driverless Cars Will Change the World
by David Kerrigan
Published 18 Jun 2017

Driverless car makers have put a lot of effort into solving this seemingly simple problem. There is also the risk of a computer being confused by signs like this, which they could mistake for an actual traffic signal: Could a driverless car mistake these signs for a traffic signal? However, driverless cars are being taught through deep learning to identify this and similar signs so as not to confuse them with traffic lights. The best solution to date is for driverless car manufacturers to create a prior map of traffic signals, enabling the driverless car and its perception systems to anticipate the locations of traffic lights and improve detection of the light state.[270] Thus the vehicle can predict when it should expect traffic lights and concentrate its search.

pages: 308 words: 84,713

The Glass Cage: Automation and Us
by Nicholas Carr
Published 28 Sep 2014

That’s what computer automation often does today, and it’s why Whitehead’s observation has become misleading as a guide to technology’s consequences. Rather than extending the brain’s innate capacity for automaticity, automation too often becomes an impediment to automatization. In relieving us of repetitive mental exercise, it also relieves us of deep learning. Both complacency and bias are symptoms of a mind that is not being challenged, that is not fully engaged in the kind of real-world practice that generates knowledge, enriches memory, and builds skill. The problem is compounded by the way computer systems distance us from direct and immediate feedback about our actions.

pages: 247 words: 81,135

The Great Fragmentation: And Why the Future of All Business Is Small
by Steve Sammartino
Published 25 Jun 2014

The new general practitioners of business need to make the decision to keep up, just as doctors must with their journals and conferences. There’s no choice. It’s what the new market demands; every youngster entering our industry keeps up to date by default. It’s not even a task for them; they enjoy it. It’s the world they were born into. So unless we decide to enjoy it too, and go for deep learning by using the tools, we’ll not only be left behind, but probably replaced. Now that we’re escaping the industrial machine, it’s about time marketers realised that people are not interchangeable widgets and that they would rather be spoken to and about by a human voice. What is fragmenting We’re being freed from our life in boxes.

pages: 304 words: 80,143

The Autonomous Revolution: Reclaiming the Future We’ve Sold to Machines
by William Davidow and Michael Malone
Published 18 Feb 2020

But some kinds of institutions are almost totally information proxies in disguise. Retail, banking, finance, and monetary systems are examples of institutions with extremely high information proxy content. One would expect them to be significantly transformed. INTELLIGENCE EQUIVALENCE Advances in artificial intelligence, deep learning, neural network processing, and big data have unleashed the forces of intelligence equivalence. Machines are now capable of intelligent behavior. In many applications, they can substitute for humans’ brains, minds, and senses. For more than one hundred years, technologists involved in computation have speculated about and attempted to construct machines that would exhibit intelligent behavior.

pages: 282 words: 81,873

Live Work Work Work Die: A Journey Into the Savage Heart of Silicon Valley
by Corey Pein
Published 23 Apr 2018

But SU also staged two-day traveling seminars called Singularity Summits. The next summit was to be held in Amsterdam. The promotional materials promised fantastical revelations to all who attended. Sessions on the “revolution in robotics and artificial intelligence” would cover the latest in drones, “telepresence,” and something called “deep learning.” Other speakers would explore the possibilities of bodily implants, exoskeletons, 3-D-printed organs, and nanomedicine. There would be sessions on “organizing society for accelerating change,” which was to tackle government and the relationship of technology to “unemployment and inequality.” Finally, for those looking to cash in on this sneak peek at the future, the summit would feature sessions on startups and entrepreneurship in the era of “exponential technology”—a shorthand phrase describing, per Kurzweil’s theories, how the pace of invention has allegedly accelerated through history, bringing us to this moment on the cusp of the Singularity.

pages: 263 words: 81,527

The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind
by Nick Chater
Published 28 Mar 2018

Hopfield (1982), ‘Neural networks and physical systems with emergent collective computational abilities’, Proceedings of the National Academy of Sciences of the United States of America, 79(8), 2554–8). Importantly, there are powerful theoretical ideas concerning how such networks learn the constraints that govern the external world from experience (e.g. Y. LeCun, Y. Bengio and G. Hinton (2015), ‘Deep learning’, Nature, 521(7553): 436–44).
4 Although in a digital computer, cooperative computation across the entire web of constraints is not so straightforward – more sequential methods of searching the web are often used instead.
5 The idea of ‘direct’ perception, which has been much discussed in psychology, is appealing, I think, precisely because we are only ever aware of the output of the cycle of thought: we are oblivious to the calculations involved, and the speed with which the cycle of thought can generate the illusion that our conscious experience must be in immediate contact with reality.
6 H. von Helmholtz, Handbuch der physiologischen Optik, vol. 3 (Leipzig: Voss, 1867).

pages: 289 words: 86,165

Ten Lessons for a Post-Pandemic World
by Fareed Zakaria
Published 5 Oct 2020

PublicationDocumentID=6322.
104 “25% of our workforce”: Sonal Khetarpal, “Post-COVID, 75% of 4.5 Lakh TCS Employees to Permanently Work from Home by ’25; from 20%,” Business Today India, April 30, 2020.
104 issued a correction: Saunak Chowdhury, “TCS Refutes Claims of 75% Employees Working from Home Post Lock-Down,” Indian Wire, April 28, 2020.
104 450,000 employees: Tata Consultancy Services, “About Us,” https://www.tcs.com/about-us.
106 up one billion: Jeff Becker and Arielle Trzcinski, “US Virtual Care Visits to Soar to More Than 1 Billion,” Forrester Analytics, April 10, 2020, https://go.forrester.com/press-newsroom/us-virtual-care-visits-to-soar-to-more-than-1-billion/.
106 “greatest contribution to mankind”: Lizzy Gurdus, “Tim Cook: Apple’s Greatest Contribution Will Be ‘About Health,’ ” CNBC Mad Money, January 8, 2019.
107 97% accuracy: “Using Artificial Intelligence to Classify Lung Cancer Types, Predict Mutations,” National Cancer Institute, October 10, 2018, https://www.cancer.gov/news-events/cancer-currents-blog/2018/artificial-intelligence-lung-cancer-classification.
107 up to 11% fewer false positives: D. Ardila, A. P. Kiraly, S. Bharadwaj et al., “End-to-End Lung Cancer Screening with Three-Dimensional Deep Learning on Low-Dose Chest Computed Tomography,” Nature Medicine 25 (2019): 954–61, https://doi.org/10.1038/s41591–019–0447-x.
107 designing proteins to block the virus: Kim Martineau, “Marshaling Artificial Intelligence in the Fight Against Covid-19,” MIT Quest for Intelligence, MIT News, May 19, 2020, http://news.mit.edu/2020/mit-marshaling-artificial-intelligence-fight-against-covid-19–0519.
108 hoped that AI might find solutions . . .

pages: 290 words: 85,847

A Brief History of Motion: From the Wheel, to the Car, to What Comes Next
by Tom Standage
Published 16 Aug 2021

Having gathered and combined the data from its sensors, the car needs to work out what everything is. In particular, it must identify other vehicles, pedestrians, cyclists, road markings, traffic lights, road signs, and so forth. Humans find this easy, and machines used to find it difficult. But machine vision has in recent years improved enormously, thanks to the use of deep learning, an artificial-intelligence technique in which systems learn to perform particular tasks by analyzing thousands of labeled examples. For autonomous cars, this means getting hold of thousands of images of street scenes, with each element carefully labeled, so that a perception system can be trained to recognize them.
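The excerpt's core idea, a system that learns a task by analyzing labeled examples rather than following hand-written rules, can be reduced to a toy sketch. The following perceptron (a deliberately tiny stand-in for the deep networks real perception systems use; the patterns and labels are invented) learns from labeled 3x3 binary "images" to tell a vertical bar from a horizontal one:

```python
# Toy "learning from labeled examples": a perceptron is shown labeled 3x3
# binary images and adjusts its weights whenever it misclassifies one.
# (A cartoon of supervised learning, not a real perception stack.)
V = [0, 1, 0,
     0, 1, 0,
     0, 1, 0]            # vertical bar, label +1
H = [0, 0, 0,
     1, 1, 1,
     0, 0, 0]            # horizontal bar, label -1
training_set = [(V, 1), (H, -1)]

w, b = [0.0] * 9, 0.0    # one weight per pixel, plus a bias term

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

for _ in range(10):                      # a few passes over the labeled data
    for x, label in training_set:
        if classify(x) != label:         # perceptron rule: update on mistakes
            w = [wi + label * xi for wi, xi in zip(w, x)]
            b += label
```

In this tiny case the learned weights also happen to classify mildly corrupted bars correctly, which is the point of learning from examples instead of hard-coding a rule for every input.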

pages: 335 words: 86,900

Empire of Ants: The Hidden Worlds and Extraordinary Lives of Earth's Tiny Conquerors
by Susanne Foitzik and Olaf Fritsche
Published 5 Apr 2021

parasitic fungi can turn ants into zombies
Araújo, J. P. M. et al. (2018). Zombie-ant fungi across continents: 15 new species and new combinations within Ophiocordyceps. I. Myrmecophilous hirsutelloid species. Studies in Mycology, 90, 119–60.
Fredericksen, M. A. et al. (2017). Three-dimensional visualization and a deep-learning model reveal complex fungal parasite networks in behaviorally manipulated ants. Proceedings of the National Academy of Sciences USA, 114, 12590–95.
Hughes, D. P. et al. (2011). Ancient death-grip leaf scars reveal ant-fungal parasitism. Biology Letters, 7, 67–70.
Kobmoo, N. et al. (2019). Population genomics revealed cryptic species within host-specific zombie-ant fungi (Ophiocordyceps unilateralis).

pages: 295 words: 81,861

Road to Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation
by Paris Marx
Published 4 Jul 2022

As the date when autonomous vehicles were supposed to arrive came and went, and the challenges facing the technology became apparent both to those in the industry who were trying to make progress with their driving systems and to the public who began to see a growing number of stories about autonomous vehicles crashing in troubling ways, some experts started to discuss how more than just a smart AI would be necessary to bring their fantasies to life. In 2018, the Verge reported that Andrew Ng, one of the co-founders of the Google Brain deep-learning AI team, said, “the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior.” He added, “we should partner with the government to ask people to be lawful and considerate. Safety isn’t just about the quality of the AI technology.”15 What Ng described was a far cry from what people like Brin or Musk were saying autonomous vehicles would do.

pages: 209 words: 81,560

Irresistible: How Cuteness Wired our Brains and Conquered the World
by Joshua Paul Dale
Published 15 Dec 2023

The earlier models were supposed to ‘learn’ and develop a unique personality over time, but owners of the original AIBO didn’t notice many changes in their robot dogs. However, this time Sony has added an AI neural network. Every day an aibo’s experiences are uploaded to the cloud, processed through deep learning and downloaded back to the aibo while it recharges – or ‘sleeps’, in Sony’s parlance.26 The new aibo employs this ‘emotional tech’ to register the feelings of its owner and elicit an emotional response. Will it be successful? I’m not sure if I could love a robot dog. My friends who owned an AIBO said they grew tired of it after a while, which made me think about the difference between robot pets, however engaging, and real ones.

pages: 366 words: 94,209

Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity
by Douglas Rushkoff
Published 1 Mar 2016

Other companies have opted to become what are known as “flexible purpose” corporations, which allows them to emphasize pretty much any priority over profits—it doesn’t even have to be explicitly beneficial to society at large.74 Flexible purpose corporations also enjoy looser reporting standards than do benefit corporations.75 Vicarious, a tech startup based in the Bay Area, is the sort of business for which the flex corp structure works well. Vicarious operates in the field of artificial intelligence and deep learning; its most celebrated project to date is an attempt to crack CAPTCHAs (those annoying tests of whether a user is human) using AI. Vicarious claims to have succeeded, and its first Turing test demonstrations appear to back up its claim.76 How would such a technology be deployed or monetized? Vicarious doesn’t need to worry about that just yet.

pages: 292 words: 94,660

The Loop: How Technology Is Creating a World Without Choices and How to Fight Back
by Jacob Ward
Published 25 Jan 2022

And then there were the fundamental limitations of the era: a lack of computing power to crunch numbers, insufficient database capacity for the awesome amounts of information necessary to train an algorithm, and the fact that those limitations made it impossible to scale up small, one-off experiments into useful real-world systems. It took two funding winters, countless research dead ends, and exponentially greater computing power and data-storage capabilities to arrive at the present moment. Today, various flavors of machine learning, from deep-learning neural networks to the generative adversarial networks that pit two neural nets against one another, can do everything from read a printed menu to steer a car along a winding mountain road. Again, this all sounds very hot—and it is amazing stuff. But what is actually being delivered into your life needs to be understood clearly, so we can see what it does and, more important, what it doesn’t do.

Learn Algorithmic Trading
by Sebastien Donadio
Published 7 Nov 2019

Other Books You May Enjoy

If you enjoyed this book, you may be interested in these other books by Packt:

Mastering Python for Finance - Second Edition
James Ma Weiming
ISBN: 9781789346466
- Solve linear and nonlinear models representing various financial problems
- Perform principal component analysis on the DOW index and its components
- Analyze, predict, and forecast stationary and non-stationary time series processes
- Create an event-driven backtesting tool and measure your strategies
- Build a high-frequency algorithmic trading platform with Python
- Replicate the CBOT VIX index with SPX options for studying VIX-based strategies
- Perform regression-based and classification-based machine learning tasks for prediction
- Use TensorFlow and Keras in deep learning neural network architecture

Hands-On Machine Learning for Algorithmic Trading
Stefan Jansen
ISBN: 9781789346411
- Implement machine learning techniques to solve investment and trading problems
- Leverage market, fundamental, and alternative data to research alpha factors
- Design and fine-tune supervised, unsupervised, and reinforcement learning models
- Optimize portfolio risk and performance using pandas, NumPy, and scikit-learn
- Integrate machine learning models into a live trading strategy on Quantopian
- Evaluate strategies using reliable backtesting methodologies for time series
- Design and evaluate deep neural networks using Keras, PyTorch, and TensorFlow
- Work with reinforcement learning for trading strategies in the OpenAI Gym

Leave a review - let other readers know what you think
Please share your thoughts on this book with others by leaving a review on the site that you bought it from.

pages: 299 words: 88,375

Gray Day: My Undercover Mission to Expose America's First Cyber Spy
by Eric O'Neill
Published 1 Mar 2019

Imagine a collaboration of consumers, organizations, agencies, and businesses aligned in a common network of shared information. Any attempt to breach a single laptop or execute malware through an unfortunate mouse click by one member will instantly inoculate every other device on the network. Deep-learning analysis of all these devices in the cloud will allow a cybersecurity AI to identify and even predict attacks. An entire community of cybersecurity operations will simplify down to a single recurring OODA loop, one that continually resets and defeats attackers. Nobody wants to go back to file cabinets and typewriters.

pages: 374 words: 94,508

Infonomics: How to Monetize, Manage, and Measure Information as an Asset for Competitive Advantage
by Douglas B. Laney
Published 4 Sep 2017

In the examples of Dollar General and Kroger, consumer packaged goods (CPG) companies and other suppliers may find they prefer doing business with these retailers because of the transparency and value afforded by the sales and other data made available. And maybe Amazon’s new brick-and-mortar store will outmaneuver grocery giants by using cameras, sensors, deep learning, and automatic payments to track what shoppers are selecting and eliminate the checkout process altogether. This information is monetized also by eliminating the cost of checkers and point-of-sale systems, saving shoppers time during which they’re likely to shop more, and licensing or generating insights from the data.

pages: 345 words: 92,063

Power, for All: How It Really Works and Why It's Everyone's Business
by Julie Battilana and Tiziana Casciaro
Published 30 Aug 2021

As one of them put it, “It is one thing to see a VP get a promotion in the corporate sector; it is another to see a woman who hit rock bottom blossom. How do you measure that!?” Lia—who was completely dependent on the certified coaches to achieve her mission—had finally sorted out what the coaches valued most: inspirational purpose, transformative impact, deep learning, and a community of like-minded colleagues. Over time, she had made Up With Women irreplaceable for the coaches to access those valued resources all at once. It’s no wonder that you couldn’t find a more loyal group of volunteers if you tried. By understanding what the coaches needed and wanted, and then figuring out how she could give them access to those resources, Lia introduced a level of mutual dependence into the relationship.

pages: 340 words: 91,416

Lost in Math: How Beauty Leads Physics Astray
by Sabine Hossenfelder
Published 11 Jun 2018

Finding patterns and organizing information are tasks that are central to science, and those are the exact tasks that artificial neural networks are built to excel at. Such computers, designed to mimic the function of natural brains, now analyze data sets that no human can comprehend and search for correlations using deep-learning algorithms. There is no doubt that technological progress is changing what we mean by “doing science.” I try to imagine the day when we’ll just feed all cosmological data to an artificial intelligence (AI). We now wonder what dark matter and dark energy are, but this question might not even make sense to the AI.

pages: 279 words: 87,875

Underwater: How Our American Dream of Homeownership Became a Nightmare
by Ryan Dezember
Published 13 Jul 2020

She was working on a follow-up to the 2017 paper
Stefania Albanesi, “Investors in the 2007–2009 Housing Crisis: An Anatomy,” University of Pittsburgh, National Bureau of Economic Research, and Center for Economic and Policy Research, September 17, 2018.
The surge of people going from two first mortgages to three
Stefania Albanesi and Domonkos F. Vamossy, “Predicting Consumer Default: A Deep Learning Approach,” Working Papers 26165, National Bureau of Economic Research, August 20, 2019.
“The great misnomer of the 2008 crisis”
Manuel Adelino, Antoinette Schoar, and Felipe Severino, “The Role of Housing and Mortgage Markets in the Financial Crisis,” Annual Review of Financial Economics 10 (2018): 25–41.
25.

pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks
by Joshua Cooper Ramo
Published 16 May 2016

For a discussion of the relation between coding and the real world, see Bret Victor, “Inventing on Principle,” Speech, Canadian University Software Engineering Conference, January 20, 2012, Montreal, Quebec, available at https://www.youtube.com/watch?v=PUv66718DII.
“Early intercontinental travellers”: Peter Sloterdijk, In the World Interior of Capital: Towards a Philosophical Theory of Globalization, trans. Wieland Hoban (Cambridge: Polity Press, 2013), 77.
“When building”: Andrew Ng, “Deep Learning: What’s Next” (speech at GPU Technology Conference, San Jose, CA, March 19, 2015).
The French philosopher: Bruno Latour, “On Technical Mediation—Philosophy, Sociology, Genealogy,” Common Knowledge 3, no. 2 (Fall 1994): 37.
The immense possibility: Ryan Gallagher, “Profiled: From Radio to Porn, British Spies Track Web Users’ Online Identities,” The Intercept, September 25, 2015; GCHQ documents, “PullThrough Steering Group Meeting #16,” at https://theintercept.com/document/2015/09/25/pull-steering-group-minutes/.

pages: 349 words: 98,868

Nervous States: Democracy and the Decline of Reason
by William Davies
Published 26 Feb 2019

As the former Pentagon employee Rosa Brooks has observed, one reason why the US military spreads its tentacles ever further into American policymaking is that “Americans increasingly treat the military as an all-purpose tool for fixing anything that happens to be broken.”14 The challenge of fixing a violent and rapidly self-destructive relationship to the natural environment has greater historic importance than any other. Whatever confronts this task, if not the actual military, will have to be something with many of the same characteristics as the military. Making promises Thanks to the sudden progress of “neural networking” techniques of AI (or deep learning), we now face the potential prospect of computers matching the powers of the human mind. This is perhaps the most daunting prospect for expertise today, threatening to replace a wide range of “white collar” and “knowledge-intensive” jobs. The professional work of journalists, lawyers, accountants, and architects is already vulnerable to automation, as machine learning grows in sophistication, thanks partly to the vast quantities of data we produce.

pages: 463 words: 105,197

Radical Markets: Uprooting Capitalism and Democracy for a Just Society
by Eric Posner and E. Weyl
Published 14 May 2018

Chris Anderson, Free: The Future of a Radical Price (Hyperion, 2009).
13. Jakob Nielsen, The Case for Micropayments, Nielsen Norman Group (January 25, 1998), https://www.nngroup.com/articles/the-case-for-micropayments/.
14. Daniela Hernandez, Facebook’s Quest to Build an Artificial Brain Depends on this Guy, Wired (2014), https://www.wired.com/2014/08/deep-learning-yann-lecun/.
15. “Complexity” is often used in academic parlance to refer to the difficulty of a problem in the worst case. Often these worst-case bounds are very “conservative” in the sense that they dramatically overstate the requirements in typical real-world applications. With a slight abuse of nomenclature, we use complexity to refer to what a problem requires in a typical or “average” case in practice rather than what it can be proven to require in the worst case.
16. https://news.microsoft.com/features/democratizing-ai/.
17.

Falter: Has the Human Game Begun to Play Itself Out?
by Bill McKibben
Published 15 Apr 2019

Ayn Rand, Fountainhead, p. 11.

PART THREE: THE NAME OF THE GAME

CHAPTER 13
1. Personal conversation, November 22, 2017.
2. James Bridle, “Known Unknowns,” Harper’s, July 2018.
3. “Rise of the Machines,” The Economist, May 22, 2017.
4. “On Welsh Corgis, Computer Vision, and the Power of Deep Learning,” microsoft.com, July 14, 2014.
5. Andrew Roberts, “Elon Musk Says to Forget North Korea Because Artificial Intelligence Is the Real Threat to Humanity,” uproxx.com, August 12, 2017.
6. Tom Simonite, “What Is Ray Kurzweil Up to at Google? Writing Your Emails,” Wired, August 2, 2017.
7.

pages: 328 words: 96,141

Rocket Billionaires: Elon Musk, Jeff Bezos, and the New Space Race
by Tim Fernholz
Published 20 Mar 2018

“Entertainment turns out to be the driver of technologies that then become very practical and utilitarian for other things,” the Amazon founder said in 2017. “Even in the early days of aviation, one of the first uses of the very first planes was barnstorming; they would go around and land in farmers’ fields and sell tickets. Likewise, more recently, the GPUs that are now used for machine learning and deep learning: they were really invented by Nvidia for video games. New Shepard, that tourism mission, because we can fly it so frequently, is going to be a real driver of our technology.” Sercel’s estimates of the potential space tourism market are bigger than you might think. “There are nearly 250,000 people on planet earth who have more than $30 million in spare change,” he says.

pages: 405 words: 103,723

The Government of No One: The Theory and Practice of Anarchism
by Ruth Kinna
Published 31 Jul 2019

Anarchists typically understand education as an approach to life, tapping into long-established conventions that emphasize processes of socialization and moral development as well as learning or knowledge acquisition.50 Expressing a widely held anarchist view, Lucy Parsons defined education as creation of ‘self-thinking individuals’.51 Working on the other side of the Pacific in late Qing dynasty China, the foremost anarchist organizer Shifu likewise distinguished ‘formal education’ from ‘education in the transformation of quotidian life’.52 Distancing himself from campaigns his comrades promoted to instruct people about the basics of anarchism, he pushed for an education that demanded understanding of the ‘causes of the vileness of society’, the abandonment of ‘false morality and corrupt systems’. This kind of deep learning required the eradication of ‘the clever people’ and the disregard of ‘the teachings of so-called sages’. Shifu’s was a programme of disobedience and anti-government activism intended to restore ‘the essential beauty’ of ‘human morality’.53 ‘We must learn to think differently,’ said, in a similar vein, Alexander Berkman, editor of the Blast, ‘before the revolution can come’.

pages: 346 words: 97,330

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass
by Mary L. Gray and Siddharth Suri
Published 6 May 2019

The annual ImageNet competition saw a roughly 10x reduction in error and a roughly 3x increase in precision in recognizing images over the course of eight years. Eventually the vision algorithms achieved a lower error rate than the human workers. The algorithmic and engineering advances that scientists achieved over the eight years of competition fueled much of the recent success of neural networks, the so-called deep learning revolution, which would impact a variety of fields and problem domains. 13. Djellel Difallah, Elena Filatova, and Panos Ipeirotis, “Demographics and Dynamics of Mechanical Turk Workers,” in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (New York: ACM, 2018), 135–43, https://doi.org/10.1145/3159652.3159661.

Mindf*ck: Cambridge Analytica and the Plot to Break America
by Christopher Wylie
Published 8 Oct 2019

He had been doing his Ph.D. research on modeling and predicting the behavior of C. elegans roundworms and said that he simply swapped out the worms for people. Jucikas proposed pulling a wide variety of data by building automated data-harvesting utilities, using algorithmic imputations to consolidate different data sources into a single unified identity for each individual, and then using deep-learning neural networks to predict our desired behaviors. We would still need a team of psychologists, he said, to create the narratives needed to change behaviors, but his pipeline served as the first sketch of the targeting system. But what I loved most was that he color-coded it to make the journey look like the London tube map.

pages: 285 words: 98,832

The Premonition: A Pandemic Story
by Michael Lewis
Published 3 May 2021

To inject a snake’s heart with a virus requires two postdocs and one full professor: one to hold the snake in a death grip, one to use a Doppler radar to find the snake’s heart, and a third to plunge the needle into it. It seemed exactly the sort of mission that might test the loyalty of a graduate student. The postdocs who spent time in the DeRisi Lab were, at Joe’s insistence, a mixed bag: biologists, chemists, deep learning specialists, medical doctors of every sort. But they had one thing in common: they were up for anything. “I try to recruit all kinds of people,” said Joe. “But the people who are attracted to us would have zero reservations about jumping onto that ship.” The professor and the students injected many boa constrictors, and many pythons, with the arenavirus.

pages: 337 words: 96,666

Practical Doomsday: A User's Guide to the End of the World
by Michal Zalewski
Published 11 Jan 2022

On this topic, the history of AI research offers a cautionary tale: after the initial exuberance and some stunning early successes of artificial neural networks in the 1950s and 1960s, the field slid into a prolonged “AI winter” of broken promises and constant disappointments. Funding dwindled, and few academics would take pride in associating themselves with the discipline. It wasn’t until the late 2000s that AI research made a comeback, aided with vastly superior computing resources and significant refinements to the architecture of neural networks and to deep learning algorithms. But the field focused on humble, utilitarian goals: building systems custom-tailored to perform highly specialized tasks, such as voice recognition, image classification, or the translation of text. Such architectures, although quite successful, still require quite a few quantum leaps to get anywhere close to AGI, and tellingly, the desire to build a digital “brain in a jar” is not an immediate goal for any serious corporate or academic research right now.

pages: 599 words: 98,564

The Mutant Project: Inside the Global Race to Genetically Modify Humans
by Eben Kirksey
Published 10 Nov 2020

Many IVF clinics in the world still do not have such a fancy setup, so most embryologists are still eyeballing it. The company that makes this incubator, Vitrolife, claims to have “the world’s largest morphokinetic database,” meaning that they keep track of the embryos as they grow, move, and develop a form. Deep-learning technologies are constantly feeding the algorithm new data, linking real-world events like a failed implantation, a miscarriage, and a live birth back into their patented EmbryoScope system. Embryos deemed “high-risk” are flagged for biopsy procedures and DNA testing.4 Adding CRISPR into the workflow would be easy, O’Neill said.

pages: 320 words: 95,629

Decoding the World: A Roadmap for the Questioner
by Po Bronson
Published 14 Jul 2020

There is now a large variety of algorithms, but they all use the concept of Perceptrons at their heart. Cutting-edge research in artificial intelligence today is about getting computers to create new goals for themselves and seek ways to attain them. This approach has led to the development of deep-learning techniques that have created programs that defeated grand masters at Go and chess, and fighter pilots in aerial dogfights. Today’s computers have billions of transistors to process the data they ingest. They are trained with a goal in mind, on millions of games or dogfights, and they exhibit nonhuman creativity, trying things humans never have.

pages: 328 words: 96,678

MegaThreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them
by Nouriel Roubini
Published 17 Oct 2022

“You can get into semantics about what does reasoning mean, but clearly the AI system was reasoning at that point,” says New York Times journalist Craig Smith, who now hosts the podcast Eye on AI.5 A year later, AlphaGo Zero bested AlphaGo by learning the rules of the game and then generating billions of data points in just three days. Deep learning has progressed with mind-bending speed. In 2020, Deep Mind’s AlphaFold2 revolutionized the field of biology by solving “the protein-folding problem” that had stumped medical researchers for five decades. Besides probing massive volumes of molecular data on protein structures, AlphaFold deployed “transformers,” an innovative neural network that Google Brain scientists unveiled in a 2017 paper.

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

In 1997, Bill Gates was pretty bullish on speech recognition, predicting that “In this 10-year time frame, I believe that we’ll not only be using the keyboard and the mouse to interact, but during that time we will have perfected speech recognition and speech output well enough that those will become a standard part of the interface.”25 In the year 2000, it was still a decade away. The big breakthroughs came with the application of Markov models and Deep Learning models or neural networks, basically better computer performance and bigger source databases. However, the models that we have today are limited because they still don’t learn language. These algorithms don’t learn language like a human; they identify a phrase through recognition, look it up on a database and then deliver an appropriate response.

pages: 363 words: 109,077

The Raging 2020s: Companies, Countries, People - and the Fight for Our Future
by Alec Ross
Published 13 Sep 2021

Uber China was sold to Didi: Alyssa Abkowitz and Rick Carew, “Uber Sells China Operations to Didi Chuxing,” Wall Street Journal, August 1, 2016, https://www.wsj.com/articles/china-s-didi-chuxing-to-acquire-rival-uber-s-chinese-operations-1470024403. After beating out Uber: Sarah Dai, “‘China’s Uber’ Ramps Up AI Arms Race, Says It Will Open Third Deep Learning Research Lab,” South China Morning Post, January 26, 2018, https://www.scmp.com/tech/start-ups/article/2130793/didi-chuxing-ramps-artificial-intelligence-arms-race-says-it-will; Jonathan Cheng, “China’s Ride-Hailing Giant Didi to Test Beijing’s New Digital Currency,” Wall Street Journal, July 8, 2020, https://www.wsj.com/articles/chinas-ride-hailing-giant-didi-to-test-beijings-new-digital-currency-11594206564.

pages: 344 words: 104,077

Superminds: The Surprising Power of People and Computers Thinking Together
by Thomas W. Malone
Published 14 May 2018

One of the most impressive recent examples of a computer doing unsupervised learning was when a group of Stanford University and Google researchers gave a computer system 10 million digital images from YouTube videos and let the system look for patterns. Without the researchers ever telling the system what to look for, it learned to identify 20,000 categories of objects, including human faces, human bodies, and… cat faces.19 This system used a particularly promising approach to machine learning called deep learning, which loosely simulates the way the different layers of neurons in a brain are connected to one another. Neuromorphic Computing Still another intriguing approach to creating more intelligent computers is to create new kinds of computer hardware that more closely resemble the structure of a human brain.

pages: 363 words: 109,834

The Crux
by Richard Rumelt
Published 27 Apr 2022

These technologies represented engineering problems of mixing hardware and software, closeness to customers, and lower margins that were not in synch with the bulk of the company’s x86 high-margin, down-the-nodes business. A majority felt that the AI challenge was both important and addressable. The group judged that if Habana were kept separate, it had a chance to carve out a good position in the deep-learning market. But how big would this specialized market be? The culture issue was intertwined with each upside challenge. The group believed that internal beliefs, habits, and processes had been ingrained over many years by market dominance, high margins, and clockwork scaling down through the steps of the Moore’s Law nodes.

pages: 371 words: 107,141

You've Been Played: How Corporations, Governments, and Schools Use Games to Control Us All
by Adrian Hon
Published 14 Sep 2022

But other school-based scoring and reward systems have taken their place. Take the facial recognition systems being deployed in schools in China, as reported by Xue Yujie on the Sixth Tone news site.90 Hanwang Education is responsible for the Class Care System (CCS) which claims to use cameras and deep-learning algorithms to identify students and classify their behaviour into categories including listening, writing, sleeping, answering questions, and interacting with other students. The sum of this behaviour becomes a score, accessible to teachers via an app. Zhang Haopeng, general manager at Hanwang Education, demonstrated the system used by Chifeng No. 4 Middle School in 2018 to Xue: “The parents can see [the score], too.

pages: 358 words: 106,951

Diverse Bodies, Diverse Practices: Toward an Inclusive Somatics
by Don Hanlon Johnson
Published 10 Sep 2018

There is work for us to do, to change our practices so more types of bodies feel invited to this work. A place of beginning might be deepening our awareness of how much release or constriction we need in any given moment. While it hurts and angers me when I think about all the ways I navigate whether or not my body can physically fit somewhere, there is also a deep learning. It has made me very aware of how my body interacts with my environment and the people around me. It has offered me the chance to explore in deep ways the physical and energetic space I take up. I do not take for granted that I am impacting those around me, and that I need to be paying attention to how my behavior and presence do that.

pages: 443 words: 112,800

The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World
by Jeremy Rifkin
Published 27 Sep 2011

Knowledge, according to Bruffee and other educational reformers, is a social construct, a consensus among the members of a learning community.17 If knowledge is something that exists between people and comes out of their shared experiences, then the way our educational process is set up is inimical to deep learning. Our schooling is often little more than a stimulus-response process, a robotic affair in which students are programmed to respond to the instructions fed into them—much like the standard operating procedures of scientific management that created the workers of the First and Second Industrial Revolutions.

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

These global companies move towards smarter machine technologies because they understand the challenges and opportunities entailed in owning big data. They also understand that it is not enough to own the data. The real game changer lies in understanding the data’s true significance. Take, for instance, Professor LeCun, a pioneer in developing deep learning algorithms that can interpret meanings and contexts of symbols and images. This technology is valuable for Facebook as it aspires to increase the ways in which it serves its billions of customers – and the advertising industry – by extracting meaning from its colossal and ever-expanding archive of user-generated content.

pages: 309 words: 114,984

The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age
by Robert Wachter
Published 7 Apr 2015

Studies have shown that computers can detect significant numbers of breast cancers and pulmonary emboli missed by radiologists, although nobody has yet taken the bold step of having the computers completely supplant the humans, partly because there are armadas of malpractice attorneys waiting to pounce, and partly because, at least for now, the combination of human and machine seems to perform better than either alone. But over the long haul, I wouldn’t bet on the humans here, particularly since one of the hottest areas in artificial intelligence research is “deep learning”—research that has created computers that are reasonably skilled at “reading,” “hearing,” and, yes, “seeing.” The same kind of software that now allows Facebook to guess that a certain collection of pixels is a picture of you, or that alerts the casino’s security guards to keep an eye on that guy, is likely to eventually crack the code in radiology, and in similar areas such as dermatology and pathology.

pages: 409 words: 112,055

The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats
by Richard A. Clarke and Robert K. Knake
Published 15 Jul 2019

ANNs can adjust their own “wiring” and weighting based upon patterns in data, in order to improve themselves and their predictive abilities. For instance, an ANN that classifies pictures of cats or dogs will, over time, learn the distinguishing factors of the two types of animals and adjust its “wiring” so that its cat or dog predictions are more accurate. The second type of ML you will also hear about is deep learning, which is in turn a type of ANN that uses multiple layers of “neurons” to analyze data, allowing it to perform very complex analysis. Enough with the definitions. What can AI do defensively or offensively in security and warfare? Artificially Intelligent About Security? AI/ML for the Defense On a large corporate network today, there are between three and six dozen separate cybersecurity software applications in use, each contributing to the security of the overall network in a specific capacity.

pages: 407 words: 116,726

Infinite Powers: How Calculus Reveals the Secrets of the Universe
by Steven Strogatz
Published 31 Mar 2019

They like to capture material. They defend like iron. But although they are far stronger than any human player, they are not creative or insightful. All that changed with the rise of machine learning. On December 5, 2017, the DeepMind team at Google stunned the chess world with its announcement of a deep-learning program called AlphaZero. The program taught itself chess by playing millions of games against itself and learning from its mistakes. In a matter of hours, it became the best chess player in history. Not only could it easily defeat all the best human masters (it didn’t even bother to try), it crushed the reigning computer world champion of chess.

pages: 399 words: 118,576

Ageless: The New Science of Getting Older Without Getting Old
by Andrew Steele
Published 24 Dec 2020

Revisiting the ratio of bacterial to host cells in humans’, Cell 164, 337–40 (2016). DOI: 10.1016/j.cell.2016.01.013 ageless.link/9oeph4 … your intestines can come to be dominated … Buford, 2017 ageless.link/y49t3u … we have managed to build ‘microbial clocks’ … Fedor Galkin et al., ‘Human microbiome aging clocks based on deep learning and tandem of permutation feature importance and accumulated local effects’, bioRxiv (2018). DOI: 10.1101/507780 ageless.link/3wtnuz One study … without a microbiome … Marisa Stebegg et al., ‘Heterochronic faecal transplantation boosts gut germinal centres in aged mice’, Nat. Commun. 10, 2443 (2019).

pages: 424 words: 114,820

Neurodiversity at Work: Drive Innovation, Performance and Productivity With a Neurodiverse Workforce
by Amanda Kirby and Theo Smith
Published 2 Aug 2021

Students expect to be taught and to learn using modern technology and methods and at a pace that they have chosen – not one that is mandated to them. They also demand an education that is tailored to their unique needs and acknowledges the diverse range of factors that need to be considered in nurturing deep learning in its cohorts.’ Gary describes one initiative he has been involved with. ‘Ravensbourne is a small specialist design and media university based in London that attracts a broad and diverse student cohort that largely reflects the ethnic diversity of the capital itself (around 44 per cent). But it also welcomes neurodivergent creatives who require specialist support and an inclusive pedagogy.

pages: 444 words: 118,393

The Nature of Software Development: Keep It Simple, Make It Valuable, Build It Piece by Piece
by Ron Jeffries
Published 14 Aug 2015

Can there be any bigger waste of system resources than burning cycles and clock time only to throw away the result? If the system can determine in advance that it will fail at an operation, it’s always better to fail fast. That way, the caller doesn’t have to tie up any of its capacity waiting and can get on with other work. How can the system tell whether it will fail? Do we need Deep Learning? Don’t worry, you won’t need to hire a cadre of data scientists. It’s actually much more mundane than that. There’s a large class of “resource unavailable” failures. For example, when a load balancer gets a connection request but not one of the servers in its service pool is functioning, it should immediately refuse the connection.

pages: 401 words: 112,589

Flowers of Fire: The Inside Story of South Korea's Feminist Movement and What It Means for Women's Rights Worldwide
by Hawon Jung
Published 21 Mar 2023

Meanwhile, some users have taken cyber sexual abuse of women to a whole new level with fakeporn, which turns ordinary images of women into porn through various technical tools as basic as Photoshop or as advanced as artificial intelligence (AI). These images and videos are also known as cheap fakes, shallow fakes, or deepfake porn, depending on the sophistication of technology involved. Deepfakes, using a form of AI called deep learning, have been a source of growing alarm over their potential use in political disinformation, after fake videos of politicians—including of Nancy Pelosi drunkenly stammering during a speech—went viral in recent years. But despite all the attention paid to disinformation in politics, most deepfakes have nothing to do with politics—96 percent of such videos circulating online are porn, and all of them target women, a 2019 study showed.53 A quarter of such porn features K-pop stars, although celebrities are not the only victims, and most abusers don’t require advanced skills or technology.

Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures
by Merlin Sheldrake
Published 11 May 2020

On the nutritional dependence of certain trees on root symbiosis with belowground fungi (an English translation of A. B. Frank’s classic paper of 1885). Mycorrhiza 15: 267–75. Fredericksen MA, Zhang Y, Hazen ML, Loreto RG, Mangold CA, Chen DZ, Hughes DP. 2017. Three-dimensional visualization and a deep-learning model reveal complex fungal parasite networks in behaviorally manipulated ants. Proceedings of the National Academy of Sciences 114: 12590–595. Fricker MD, Boddy L, Bebber DP. 2007a. “Network Organisation of Mycelial Fungi.” In Biology of the Fungal Cell. Howard RJ, Gow NAR, eds. Springer International Publishing, pp. 309–30.

pages: 416 words: 129,308

The One Device: The Secret History of the iPhone
by Brian Merchant
Published 19 Jun 2017

When Gruber says knowledge, I think he means a firm, robust grasp on how the world works and how to reason. Today, researchers are less interested in developing AI’s ability to reason and more intent on having them do more and more complex machine learning, which is not unlike automated data mining. You might have heard the term deep learning. Projects like Google’s DeepMind neural network work essentially by hoovering up as much data as possible, then getting better and better at simulating desired outcomes. By processing immense amounts of data about, say, Van Gogh’s paintings, a system like this can be instructed to create a Van Gogh painting—and it will spit out a painting that looks kinda-sorta like a Van Gogh.

pages: 527 words: 147,690

Terms of Service: Social Media and the Price of Constant Connection
by Jacob Silverman
Published 17 Mar 2015

In addition to fake accounts, people also post things that are intentionally insincere and misleading, including in their profiles, which further complicates the effort to divide people into the kinds of highly specific categories (e.g., single dads from major cities who don’t belong to gyms) that market researchers like. Of course, these analytical tools are getting better, incorporating the latest discoveries in computational linguistics and deep learning, a form of artificial intelligence in which computers are taught to understand colloquial speech and recognize objects (such as people’s faces). Some sentiment analysis software now applies several different filters to each piece of text in order to consider not only the tone and meaning of the utterance but also whether the source is reliable or somehow biased.

pages: 589 words: 147,053

The Age of Em: Work, Love and Life When Robots Rule the Earth
by Robin Hanson
Published 31 Mar 2016

Many of these people expect traditional artificial intelligence, that is, hand-coded software, to achieve broad human level abilities before brain emulations appear. I think that past rates of progress in coding smart software suggest that at previous rates it will take two to four centuries to achieve broad human level abilities via this route. These critics often point to exciting recent developments, such as advances in “deep learning,” that they think make prior trends irrelevant. More generally, some critics fault me for insufficiently crediting new trends that they expect will soon revolutionize society, even if we don’t yet see strong supporting evidence of these trends. Such revolutions include robots taking most jobs, local sourcing replacing mass production, small firms replacing big ones, worker cooperatives replacing for-profits, ability tests replacing school degrees, and 3D printers replacing manufacturing plants.

pages: 486 words: 150,849

Evil Geniuses: The Unmaking of America: A Recent History
by Kurt Andersen
Published 14 Sep 2020

” There’s a whole new subdiscipline of technologists and economists predicting and debating what work can or can’t be automated partly or entirely and, depending on the cost, what jobs will or won’t be done mainly by smart machines by what year in the twenty-first century. Even discounting for digital enthusiasts’ habitual overoptimism, the recent rate of progress in AI and robotics has been astounding. The exponential growth of digital data and cheapening of computer power reached a point in the last decade that allowed so-called deep learning on so-called neural networks—extremely smart machines—to achieve remarkable technical feat after remarkable technical feat. A common task in creating AI software, for instance, is training a system to recognize and classify millions of images. In the fall of 2017 that task typically took engineers three hours to do, but by the summer of 2019 it took only eighty-eight seconds and thus cost 99 percent less.

The New Map: Energy, Climate, and the Clash of Nations
by Daniel Yergin
Published 14 Sep 2020

Department of Transportation, Assuring America’s Leadership in Automated Vehicles Technologies: Automated Vehicles 4.0 (Washington D.C., 2020); Rebecca Yergin, “NHTSA Continues to Ramp Up Exploration of Automated Driving Technologies,” Covington & Burling, Blog, April 2020. 13. Marco della Cava, “Garage Startup Uses Deep Learning to Teach Cars to Drive,” USA Today, August 30, 2016. Chapter 39: Hailing the Future 1. Interview with Garrett Camp; “UberCab” pitch deck, December 2008. 2. Adam Lashinsky, Wild Ride: Inside Uber’s Quest for World Domination (New York: Portfolio/Penguin, 2017), pp. 80–81, 91. 3.

Spies, Lies, and Algorithms: The History and Future of American Intelligence
by Amy B. Zegart
Published 6 Nov 2021

For nuclear threat intelligence, machine learning techniques offer particular promise in analyzing satellite imagery of known missile sites or facilities to detect changes over time.53 In 2017, for example, U.S. intelligence officials from the National Geospatial-Intelligence Agency (NGA) asked researchers at the University of Missouri to develop machine learning tools to see how fast and accurately they could identify surface-to-air missile sites over a huge area in Southwest China. The research team developed a deep learning neural network (essentially, a collection of algorithms working together) and used only commercially available satellite imagery with one-meter resolution. Both the computer and the human team correctly identified 90 percent of the missile sites. But the computer completed the job eighty times faster than humans, taking just forty-two minutes to scan an area of approximately ninety thousand square kilometers (about three-fourths the size of North Korea).54 As noted in chapter 5, machine learning also holds promise for faster sifting of large quantities of written information—everything from trade documents that might suggest illicit financing schemes to the metadata of photos online—such as the date and time stamp on the picture, the type of camera used, the software that processed the image, and where the camera was placed when the picture was taken.55 In addition, computer modeling is enabling analysts to better understand the specifications and functions of structures already built.

pages: 1,239 words: 163,625

The Joys of Compounding: The Passionate Pursuit of Lifelong Learning, Revised and Updated
by Gautam Baid
Published 1 Jun 2020

Identify the core ideas and learn them deeply. This deeply ingrained knowledge base can serve as a meaningful springboard for more advanced learning and action in your field. Be brutally honest with yourself. If you do not understand something, revisit the core concepts again and again. Remember that merely memorizing stuff is not deep learning. 2. Make mistakes. Mistakes highlight unforeseen opportunities as well as gaps in our understanding. And mistakes are great teachers. As Michael Jordan once said, “I’ve missed more than nine thousand shots in my career. I’ve lost almost three hundred games. Twenty-six times, I’ve been trusted to take the game-winning shot and missed.

Alpha Trader
by Brent Donnelly
Published 11 May 2021

PART THREE METHODOLOGY AND MATHEMATICS “Nothing in the world is worth having or worth doing unless it means effort, pain, difficulty… I have never in my life envied a human being who led an easy life.” THEODORE ROOSEVELT In Part Three I will outline the specific steps you must take to master your market. Your goal must be to become an expert in the product or markets that you trade. Stay narrow (focus on one or two markets) and go deep (learn as much as possible, right down to the nitty-grittiest details). This approach gives you the best shot at a sustainable edge. Wide and shallow can work, but has a much lower probability of success than narrow and deep. Here is the abstract of an interesting study of Twitter sentiment that supports the idea that narrow focus yields higher returns99.

pages: 618 words: 180,430

The Making of Modern Britain
by Andrew Marr
Published 16 May 2007

Not refreshed enough, however, for in 1908 he became ill, resigned and died three weeks later in Downing Street – the only prime minister to do so. Asquith’s succession meant a steelier, tougher figure at the top, and a much harder time ahead for the monarch. Known to admiring cartoonists as ‘the Last of the Romans’ and later, to colleagues alarmed by his convivial habits, as ‘old Squiffy’, his deep learning awed other MPs just as much as his private life intrigued them. He had been happily married to Helen, a quiet, gentle woman who had produced five children; but in the same year he met Margot Tennant at a Commons dinner party, Helen caught typhoid and died on holiday in Scotland. Asquith was by then a rising political star and successful barrister, but Margot was slow to yield.

Blueprint: The Evolutionary Origins of a Good Society
by Nicholas A. Christakis
Published 26 Mar 2019

,” Behavioral Ecology 16 (2005): 656–660. 26. S. A. Adamo, “The Strings of the Puppet Master: How Parasites Change Host Behavior,” in D. P. Hughes, J. Brodeur, and F. Thomas, eds., Host Manipulation by Parasites (Oxford: Oxford University Press, 2012), pp. 36–53. 27. M. A. Fredericksen et al., “Three-Dimensional Visualization and a Deep-Learning Model Reveal Complex Fungal Parasite Networks in Behaviorally Manipulated Ants,” PNAS: Proceedings of the National Academy of Sciences 114 (2017): 12590–12595. 28. D. P. Hughes, T. Wappler, and C. C. Labandeira, “Ancient Death-Grip Leaf Scars Reveal Ant-Fungal Parasitism,” Biology Letters 7 (2011): 67–70. 29.

pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It
by Marc Goodman
Published 24 Feb 2015

Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing. In 2014, Google purchased DeepMind Technologies for more than $500 million in order to strengthen its already strong capabilities in deep learning AI. In the same vein, Facebook created a new internal division specifically focused on advanced AI. Optimists believe that the arrival of AGI may bring with it a period of unprecedented abundance in human history, eradicating war, curing all disease, radically extending human life, and ending poverty.

pages: 685 words: 203,949

The Organized Mind: Thinking Straight in the Age of Information Overload
by Daniel J. Levitin
Published 18 Aug 2014

For example, in English, silverfish is an insect, not a type of fish; prairie dog is a rodent, not a dog; and a toadstool is neither a toad nor a stool that a toad might sit on. Our hunger for knowledge can be at the roots of our failings or our successes. It can distract us or it can keep us engaged in a lifelong quest for deep learning and understanding. Some learning enhances our lives, some is irrelevant and simply distracts us—tabloid stories probably fall into this latter category (unless your profession is as a tabloid writer). Successful people are expert at categorizing useful versus distracting knowledge. How do they do it?

pages: 562 words: 201,502

Elon Musk
by Walter Isaacson
Published 11 Sep 2023

The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting a for-profit arm that was able to raise equity funding. So Musk decided to forge ahead with building a rival AI team to work on Tesla Autopilot. Even as he was struggling with the production hell surges in Nevada and Fremont, he recruited Andrej Karpathy, a specialist in deep learning and computer vision, away from OpenAI. “We realized that Tesla was going to become an AI company and would be competing for the same talent as OpenAI,” Altman says. “It pissed some of our team off, but I fully understood what was happening.” Altman would turn the tables in 2023 by hiring Karpathy back after he became exhausted working for Musk. 41 The Launch of Autopilot Tesla, 2014–2016 Franz von Holzhausen with an early “Robotaxi” Radar Musk had discussed with Larry Page the possibility of Tesla and Google working together to build an autopilot system that would allow cars to be self-driving.

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
by Martin Kleppmann
Published 17 Apr 2017

Porup: “‘Internet of Things’ Security Is Hilariously Broken and Getting Worse,” arstechnica.com, January 23, 2016. [96] Bruce Schneier: Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. W. W. Norton, 2015. ISBN: 978-0-393-35217-7 [97] The Grugq: “Nothing to Hide,” grugq.tumblr.com, April 15, 2016. [98] Tony Beltramelli: “Deep-Spying: Spying Using Smartwatch and Deep Learning,” Masters Thesis, IT University of Copenhagen, December 2015. Available at arxiv.org/abs/1512.05616 [99] Shoshana Zuboff: “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization,” Journal of Information Technology, volume 30, number 1, pages 75–89, April 2015. doi:10.1057/jit.2015.5 [100] Carina C.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

What Walters preaches more than anything else—apart from the value of hard work—is the importance of seeking out consensus. Domain knowledge? Betting knowledge? Analytical skills? He’ll take all of the above, thank you very much. Even in his late seventies, Walters and his partners were “experimenting with deep learning algorithms” and “taking a look at random forests,” he told me—some of the same machine learning techniques that are used to power AI systems like ChatGPT. He didn’t feel like they had much choice—either you keep up or get lapped by the competition. “We’ve never ever stopped looking for different angles,” he said.

pages: 768 words: 252,874

A History of Judaism
by Martin Goodman
Published 25 Oct 2017

Following his death in 1038, authority within rabbinic Judaism was dispersed to a number of new centres in the Mediterranean world and northern Europe, where Jews came under the hegemony not just of Islamic rulers in Palestine, Egypt, North Africa and Spain but also of a multiplicity of Christian states united by recognition of papal jurisdiction in religious matters from Rome. In Spain, France and Germany, rabbis with a shared respect for, and deep learning in, the Babylonian Talmud as well as the biblical texts consolidated the expression of the law as guidance for everyday life while evolving, through mystical speculation as well as philosophical analysis, novel theologies about the relation of God to his creation. The connection of intellectual talmudic scholarship to the practical concerns of European Jews was facilitated by a new role for individual rabbis as local communal arbitrators in Jewish communities in the Rhineland and in France from the eleventh century.

pages: 898 words: 236,779

Digital Empires: The Global Battle to Regulate Technology
by Anu Bradford
Published 25 Sep 2023

This jump reveals a staggering growth in China’s AI research activity and shows China surpassing all other countries, including the US.191 On the other hand, another 2020 study that examines the distribution of top AI talent by analyzing the contributions to one of the largest and most selective AI conferences for deep learning indicates the US has far more of the world’s top AI talent than any other country. This Neural Information Processing Systems 2019 conference saw a record-high 15,920 AI researchers submit 6,614 papers. Out of those submissions, 21.6 percent were accepted, which the researchers used as a proxy to identify the top 20 percent of global AI research talent.

pages: 1,330 words: 372,940

Kissinger: A Biography
by Walter Isaacson
Published 26 Sep 2005

When a civilization does decay, a new one with higher values tends to be erected on the ruins of the old. Toynbee ultimately failed, according to Kissinger, because he claimed to view human progress in a Christian framework but he relied on empirical methods that left no room for the role of free will. It was an approach “whose exhibition of deep learning tends to obscure its methodological shallowness,” Kissinger wrote.16 Man’s knowledge of freedom, Kissinger argued, must come from an inner intuition. This led him to Immanuel Kant, the German philosopher whose main treatises were written in the 1780s. Kissinger got off to a troublesome start by asserting that the connections between causes and effects exist only in the human mind: “Causality expresses the pattern which the mind imposes on a sequence of events in order to make their appearance comprehensible.”