ImageNet competition

description: the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual computer-vision competition (2010–2017) in which object-recognition algorithms were evaluated on the ImageNet dataset

27 results

Artificial Intelligence: A Guide for Thinking Humans

by Melanie Mitchell  · 14 Oct 2019  · 350pp  · 98,077 words

widely used by AI researchers for creating data sets; nowadays, academic grant proposals in AI commonly include a line item for “Mechanical Turk workers.” The ImageNet Competitions In 2010, the ImageNet project launched the first ImageNet Large Scale Visual Recognition Challenge, in order to spur progress toward more general object-recognition algorithms

of them—and a list of possible categories. The task for the trained programs was to output the correct category of each input image. The ImageNet competition had a thousand possible categories, compared with PASCAL’s twenty. The thousand possible categories were a subset of WordNet terms chosen by the organizers. The

continue; computer-vision research would chip away at the problem, with gradual improvement at each annual competition. However, these expectations were upended in the 2012 ImageNet competition: the winning entry achieved an amazing 85 percent correct. Such a jump in accuracy was a shocking development. What’s more, the winning entry did

guaranteed computer scientists a large salary in Silicon Valley or, better yet, venture capital funding for their proliferating deep-learning start-up companies. The annual ImageNet competition began to see wider coverage in the media, and it quickly morphed from a friendly academic contest into a high-profile sparring match for tech

is merely an interesting footnote to the larger history of deep learning in computer vision, I tell it to illustrate the extent to which the ImageNet competition came to be seen as the key symbol of progress in computer vision, and AI in general. Cheating aside, progress on ImageNet continued. The final

integrate vision and language. What was it that enabled ConvNets, which seemed to be at a dead end in the 1990s, to suddenly dominate the ImageNet competition, and subsequently most of computer vision in the last half decade? It turns out that the recent success of deep learning is due less

which the correct category is at the top of the list—was about 82 percent, compared with 98 percent top-5 accuracy, in the 2017 ImageNet competition. No one, as far as I know, has reported a comparison between machines and humans on top-1 accuracy. Here’s another caveat. Consider the
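The top-1/top-5 distinction this excerpt relies on can be sketched in a few lines of Python. The function name and the toy scores below are purely illustrative, not taken from any of the quoted books; real ILSVRC evaluation covers 1,000 classes, but the metric works the same way:

```python
def top_k_accuracy(scores, labels, k=1):
    """Fraction of examples whose true label is among the k highest-scoring classes.

    scores: list of per-example score lists (one score per class)
    labels: list of true class indices
    """
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores for this example
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# toy example: 3 "images", 4 classes
scores = [
    [0.1, 0.6, 0.2, 0.1],  # highest score: class 1
    [0.5, 0.1, 0.3, 0.1],  # highest score: class 0
    [0.2, 0.3, 0.4, 0.1],  # highest score: class 2
]
labels = [1, 2, 2]

print(top_k_accuracy(scores, labels, k=1))  # 2 of 3 correct at top-1
print(top_k_accuracy(scores, labels, k=2))  # all 3 correct within top-2
```

Because top-5 only requires the true label to appear somewhere in the five best guesses, it is always at least as high as top-1, which is why the 98 percent and 82 percent figures above can describe the same system.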

also learn to draw a box around the target object, so we know the machine has actually “seen” the object. This is precisely what the ImageNet competition started doing in its second year with its “localization challenge.” The localization task provided training images with such boxes drawn (by Mechanical Turk workers) around

are at once subtler and more troubling. Remember AlexNet, which I discussed in chapter 5? It was the convolutional neural network that won the 2012 ImageNet challenge and that set in motion the dominance of ConvNets in much of today’s AI world. If you’ll recall, AlexNet’s (top-5) accuracy

performance on a more general task (for example, “reading comprehension”). If this recipe doesn’t ring a bell, look back at my description of the ImageNet competition in chapter 5. Some popular media outlets were admirably restrained in describing the SQuAD results. The Washington Post, for example, gave this careful assessment: “AI

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass

by Mary L. Gray and Siddharth Suri  · 6 May 2019  · 346pp  · 97,330 words

, gold-standard data set of high-resolution images, each with highly accurate labels of the objects in the image. Li called it ImageNet. Thanks to ImageNet competitions held annually since its creation, research teams use the data set to develop more sophisticated image recognition algorithms and to advance the state of the

an AI, only to have the AI ultimately take over the task entirely. Researchers could then open up even harder problems. For example, after the ImageNet challenge finished, researchers turned their attention to finding where an object is in an image or video. These problems needed yet more training data, generating another

the image recognition problem, from various research teams around the world, against one another. The progress scientists made toward this goal was staggering. The annual ImageNet competition saw a roughly 10x reduction in error and a roughly 3x increase in precision in recognizing images over the course of eight years. Eventually the

Prediction Machines: The Simple Economics of Artificial Intelligence

by Ajay Agrawal, Joshua Gans and Avi Goldfarb  · 16 Apr 2018  · 345pp  · 75,660 words

problems emerged. Many were nearly impossible before the recent advances in machine intelligence technology, including object identification, language translation, and drug discovery. For example, the ImageNet Challenge is a high-profile annual contest to predict the name of an object in an image. Predicting the object in an image can be a

, http://cs.stanford.edu/people/karpathy/ilsvrc/. 8. Aaron Tilley, “China’s Rise in the Global AI Race Emerges as It Takes Over the Final ImageNet Competition,” Forbes, July 31, 2017, https://www.forbes.com/sites/aarontilley/2017/07/31/china-ai-imagenet/#dafa182170a8. 9. Dave Gershgorn, “The Data That Transformed AI

–173 IBM’s Watson, 146 identity verification, 201, 219–220 iFlytek, 26–27 if-then logic, 91, 104–109 image classification, 28–29 ImageNet, 7 ImageNet Challenge, 28–29 imitation of algorithms, 202–204 income inequality, 19, 212–214 independent variables, 45 inequality, 19, 212–214 initial public offerings (IPOs), 9–10

Why Machines Learn: The Elegant Math Behind Modern AI

by Anil Ananthaswamy  · 15 Jul 2024  · 416pp  · 118,522 words

dataset of millions of hand-labeled images consisting of thousands of categories (immense by the standards of 2009). In 2010, the team put out the ImageNet challenge: Use 1.2 million ImageNet images, binned into 1,000 categories, to train your computer vision system to correctly categorize those images, and then test

” alongside a more established contest, the PASCAL Visual Object Classes Challenge 2010. Standard computer vision still ruled the roost then. In recognition of this, the ImageNet challenge provided users with so-called scale invariant feature transforms (SIFTs). Developers could use these SIFTs to extract known types of low-level features from images

on theoretically principled mathematical frameworks (the kind that gave us support vector machines and kernel methods, for example). But by 2012, when AlexNet won the ImageNet competition, things had changed. AlexNet was a stupendous experimental success; there was no adequate theory to explain its performance. According to Goldstein, the AI community said

hyperplanes hypothesis class, 391 I idiot Bayes classifier, 138, 142 image recognition systems, 354, 359–60, 377–80. See also pattern recognition “ImageNet” (Li), 378 ImageNet challenge, 377–80 imprinting, 7–8 integral calculus, 285 Intel, 93 intromission theory, 150–51 Iris dataset, 195–200 Ising, Ernst, 246 Ising model, 246–47

Four Battlegrounds

by Paul Scharre  · 18 Jan 2023

6, 2015), https://arxiv.org/pdf/1502.01852.pdf; Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/blog/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/. 160 team’s 2015 paper on “deep residual learning”: Bec Crew, “Google Scholar Reveals Its Most

Army of None: Autonomous Weapons and the Future of War

by Paul Scharre  · 23 Apr 2018  · 590pp  · 152,595 words

/GoogLeNet.pdf. 87 error rate of only 4.94 percent: Richard Eckel, “Microsoft Researchers’ Algorithm Sets ImageNet Challenge Milestone,” Microsoft Research Blog, February 10, 2015, https://www.microsoft.com/en-us/research/microsoft-researchers-algorithm-sets-imagenet-challenge-milestone/. Kaiming He et al., “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again

by Eric Topol  · 1 Jan 2019  · 424pp  · 114,905 words

in images and video to be the dark matter of the Internet.”40 Many different convolutional DNNs were used to classify the images with annual ImageNet Challenge contests to recognize the best (such as AlexNet, GoogLeNet, VGG Net, and ResNet). Figure 4.6 shows the progress in reducing the error rate over

Nexus: A Brief History of Information Networks From the Stone Age to AI

by Yuval Noah Harari  · 9 Sep 2024  · 566pp  · 169,013 words

ImageNet Large Scale Visual Recognition Challenge. If you have no idea what a convolutional neural network is, and if you have never heard of the ImageNet challenge, you are not alone. More than 99 percent of us are in the same situation, which is why AlexNet’s victory was hardly front-page

incredibly valuable. The AI race was on, and the competitors were running on cat images. At the same time that AlexNet was preparing for the ImageNet challenge, Google too was training its AI on cat images, and even created a dedicated cat-image-generating AI called the Meow Generator.6 The technology

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

by Aurélien Géron  · 13 Mar 2017  · 1,331pp  · 163,200 words

been developed, leading to amazing advances in the field. A good measure of this progress is the error rate in competitions such as the ILSVRC ImageNet challenge. In this competition the top-5 error rate for image classification fell from over 26% to barely over 3% in just five years. The top

a Performance Measure manifold, Manifold Learning hypothesis boosting (see boosting) hypothesis function, Linear Regression hypothesis, null, Regularization Hyperparameters I identity matrix, Ridge Regression, Quadratic Programming ILSVRC ImageNet challenge, CNN Architectures image classification, CNN Architectures impurity measures, Making Predictions, Gini Impurity or Entropy? in-graph replication, In-Graph Versus Between-Graph Replication inception modules

Coders: The Making of a New Tribe and the Remaking of the World

by Clive Thompson  · 26 Mar 2019  · 499pp  · 144,278 words

year he and a team of students showed off the most impressive neural net yet—by soundly beating competitors at an annual AI shootout. The ImageNet challenge, as it’s known, is an annual competition among AI researchers to see whose system is best at recognizing images. That year, Hinton’s deep

, ref2, ref3 HTTP protocol, ref1 Huang, Victoria, ref1 Hurlburt, Stephanie, ref1 Hustle, ref1 Huston, Cate, ref1 Hutchins, Marcus, ref1, ref2 IBM, ref1 IBM 704, ref1 ImageNet challenge, ref1 India, ref1 Industrial Revolution, ref1 infosec workers, ref1, ref2 malware, fighting, ref1 penetration testers, ref1 Infosystems, ref1 Inman, Bobby, ref1 Instacart, ref1 Instagram, ref1

Attention Factory: The Story of TikTok and China's ByteDance

by Matthew Brennan  · 9 Oct 2020  · 282pp  · 63,385 words

Artificial Intelligence: A Modern Approach

by Stuart Russell and Peter Norvig  · 14 Jul 2019  · 2,466pp  · 668,761 words

AI Superpowers: China, Silicon Valley, and the New World Order

by Kai-Fu Lee  · 14 Sep 2018  · 307pp  · 88,180 words

Rule of the Robots: How Artificial Intelligence Will Transform Everything

by Martin Ford  · 13 Sep 2021  · 288pp  · 86,995 words

Architects of Intelligence

by Martin Ford  · 16 Nov 2018  · 586pp  · 186,548 words

Bold: How to Go Big, Create Wealth and Impact the World

by Peter H. Diamandis and Steven Kotler  · 3 Feb 2015  · 368pp  · 96,825 words

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It

by Azeem Azhar  · 6 Sep 2021  · 447pp  · 111,991 words

Driverless: Intelligent Cars and the Road Ahead

by Hod Lipson and Melba Kurman  · 22 Sep 2016

The Ethical Algorithm: The Science of Socially Aware Algorithm Design

by Michael Kearns and Aaron Roth  · 3 Oct 2019

The Alignment Problem: Machine Learning and Human Values

by Brian Christian  · 5 Oct 2020  · 625pp  · 167,349 words

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

by Erik J. Larson  · 5 Apr 2021

The Economic Singularity: Artificial Intelligence and the Death of Capitalism

by Calum Chace  · 17 Jul 2016  · 477pp  · 75,408 words

The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip

by Stephen Witt  · 8 Apr 2025  · 260pp  · 82,629 words

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future

by Keach Hagey  · 19 May 2025  · 439pp  · 125,379 words

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

by Karen Hao  · 19 May 2025  · 660pp  · 179,531 words

Supremacy: AI, ChatGPT, and the Race That Will Change the World

by Parmy Olson  · 284pp  · 96,087 words

Human Compatible: Artificial Intelligence and the Problem of Control

by Stuart Russell  · 7 Oct 2019  · 416pp  · 112,268 words