generative AI


72 results

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

Likenesses that have been created include footballer Neymar, whose AI avatar helped Puma launch a new product line at New York Fashion Week; and in the 2024 movie Here, actors Tom Hanks and Robin Wright will be digitally de-aged using generative AI software. Currently, though – as was the case with the pornographic deepfakes of Noelle Martin and Helen Mort – there are no regulations that govern generative AI technology and therefore legal action is largely non-existent. Artists Holly Herndon and Mathew Dryhurst decided to build something to fight back. They designed a website – Have I Been Trained? – that allows artists to search through billions of images in an open dataset named LAION-5B, used to train image-generating AI tools including Stable Diffusion and Google’s Imagen AI models, to check if their own images had been used.

Murgia, ‘OpenAI’s Mira Murati: The Woman Charged with Pushing Generative AI into the Real World’, The Financial Times, June 18, 2023, https://www.ft.com/content/73f9686e-12cd-47bc-aa6e-52054708b3b3.
4 R. Waters and T. Kinder, ‘Microsoft’s $10bn Bet on ChatGPT Developer Marks New Era of AI’, The Financial Times, January 16, 2023, https://www.ft.com/content/a6d71785-b994-48d8-8af2-a07d24f661c5.
5 M. Murgia and Visual Storytelling, ‘Generative AI Exists Because of the Transformer’, The Financial Times, September 12, 2023, https://ig.ft.com/generative-ai/.
6 Murgia and Visual Storytelling.
7 K. Woods, ‘GPT Is a Better Therapist than Any Therapist I’ve Ever Tried’, Twitter, April 6, 2023, https://twitter.com/Kat__Woods/status/1644021980948201473.
8 R.

Cutting-edge AI software is even used by researchers, such as chemists, biologists, geneticists and others, to speed up the scientific discovery process.3 Over the past year, we have seen the rise of a new subset of AI technology: generative AI, or software that can write, create images, audio or video in a way that is largely indistinguishable from human output. Generative AI is built on the bedrock of human creativity, trained on digitized books, newspapers, blogs, photographs, artworks, music, YouTube videos, Reddit posts, Flickr images and the entire swell of the English-speaking internet. It ingests this knowledge and is able to generate its own bastardized versions of creative products, delighting us with this humanlike ability to remix and regurgitate.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

If we ignore the inflated expectations, then Cyc stands up as a technically sophisticated exercise in large-scale knowledge engineering. It didn’t deliver General AI, but it taught us a lot about the development and organization of large knowledge-based systems. And to be strictly accurate, the Cyc hypothesis – that General AI is essentially a problem of knowledge, which can be solved via a suitable knowledge-based system – has been neither proved nor disproved yet. The fact that Cyc didn’t deliver General AI doesn’t demonstrate that the hypothesis was false, merely that this particular approach didn’t work. It is conceivable that a different attempt to tackle General AI through knowledge-based approaches might deliver the goods.

If I have succeeded in doing one thing in this book, I hope it is to have convinced you that, while the recent breakthroughs in AI and machine learning are real and exciting, they are not a silver bullet for General AI. Deep learning may be an important ingredient for General AI, but it is by no means the only ingredient. Indeed, we don’t yet know what some of the other ingredients are, still less what the recipe for General AI might look like. All the impressive capabilities we have developed – image recognition, language translation, driverless cars – don’t add up to general intelligence. In this sense, we are still facing the problem that Rod Brooks highlighted back in the 1980s: we have some components of intelligence, but no idea how to build a system that integrates them.

Lenat estimated the project would require 200 person years of effort, and the bulk of this was expected to be in the manual entry of knowledge – telling Cyc all about our world, and our understanding of it. Lenat was optimistic that, before too long, Cyc would be able to educate itself. A system capable of educating itself in the way Lenat envisaged would imply that General AI was effectively solved. Thus, the Cyc hypothesis was that the problem of General AI was primarily one of knowledge, and that it could be solved by a suitable knowledge-based system. The Cyc hypothesis was a bet; a high-stakes bet, but one that, if it proved successful, would have world-changing consequences. One of the paradoxes of research funding is that, sometimes, it is the absurdly ambitious ideas that win out.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

Recent advances in AI have shown the ability of these systems to, for example, generate images from a textual description. This family of techniques has been referred to as generative AI, although generative methods based on machine learning have always existed alongside the other types of tasks mentioned above, such as classification techniques or clustering. One example illustrating this possible use of AI is a recent work commissioned by the Museum of Modern Art in New York, which involved training a generative AI model on a collection of 180,000 works of art from the museum’s collection. The resulting work titled Unsupervised by the artist Refik Anadol and his studio shows an abstract and moving visual representation of artworks in the collection.10 Another example of content generation can be found in the restoration of works of art.

Between April and December 2022, 20 sessions were held at the Badisches Landesmuseum with a total of 100 interested participants, who discussed the direction and goals of AI solutions in the museum and accompanied the development of xCurator. For example, the sessions discussed the possibilities of generative AI in exploring the extent to which users would like to see the results of generative image or language models applied to museum data. In this way, developments in multimodal and generative AI were monitored and user requirements were explored in the museum context. The results were documented in written and video form, evaluated, and transferred and applied to the development of the xCurator tool.
15 https://karlsruhe.digital/2022/08/ki-pilot-innen-blm/

The legacy of Duchamp’s Fountain still can be felt in contemporary art, as artists continue to question and challenge established artistic conventions and push the boundaries of what can be considered art. Today, we are discussing a similar question, since generative AI models are able to take over artistic tasks such as writing, making music, and painting. This major shift in cultural production has an impact on the future role and self-image of artists. Given the growing influence of new multimodal generative AI models, the question that arises is whether the art world is facing a paradigm shift comparable in scope to the ‘conceptual turn’ (LeWitt 1967; Godry 1988; Kosuth 2002, 232) in the twentieth century.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new goals independently, including in situations of uncertainty or vagueness. This encompasses many of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed in the robots and AI of popular culture discussed above. As yet, general AI approaching the level of human capabilities does not exist and some have even cast doubt on whether it is possible.19 Narrow and general AI are not hermetically sealed from each other. They represent different points on a continuum. As AI becomes more advanced, it will move further away from the narrow paradigm and closer to the general one.20 This trend may be hastened as AI systems learn to upgrade themselves21 and acquire greater capabilities than those with which they were originally programmed.22

3 Defining AI

The word “artificial” is relatively uncontroversial.

AIs would help constructing better AIs, which in turn would help building better AIs, and so forth”.106 The advent of fully general AI is associated by many writers with a phenomenon some have predicted, known as “the singularity”.107 This term is usually used to describe the point at which AI matches and then surpasses human intelligence. However, the conception of the singularity as a single discernible moment is unlikely to be accurate. Like the move from weak AI to general AI, the singularity is best seen as a process rather than a single event. There is no reason to think AI will match every human capability at once.

Only governments have the power and mandate to secure a fair system that commands this kind of adherence across the board.

2 Rules for AI Should Be Made on a Cross-Industry Basis

To date, the majority of legal debate on AI has been on two sectors: weapons23 and cars.24 The public, legal scholars and policy-makers have focussed on these areas at the expense of others. More importantly, though, it is misguided to approach the entirety of the regulation of AI solely on an industry-by-industry basis.

2.1 The Shift from Narrow to General AI

When seeking to create regulatory principles, it is not correct to think of narrow AI (which is adept at just one task) and general AI (which can fulfil an unlimited range of tasks) as being hermetically sealed from each other. Instead, there is a spectrum along which we are gradually moving. As noted in Chapter 1, various writers have ruminated on how soon we might reach the end point on this spectrum: superintelligence,25 and some have raised powerful objections to the idea of superhuman AI ever being created.26 The observation that there is a continuum between narrow AI and general AI does not require one to take any position as to how soon (if ever) the singularity or superintelligence might appear.

pages: 344 words: 104,077

Superminds: The Surprising Power of People and Computers Thinking Together
by Thomas W. Malone
Published 14 May 2018

For instance, one study by researchers Stuart Armstrong and Kaj Sotala analyzed 95 predictions—made between 1950 and 2012—about when general AI would be achieved.12 They found a strong tendency for both experts and nonexperts to predict that general AI would be achieved between 15 and 25 years in the future… regardless of when the predictions were made! In other words, general AI has seemed about 20 years away for the last 60 years. More recent surveys and interviews tend to be consistent with this long-term pattern: people still predict that general AI will be here in about 15 to 25 years.13 So while we certainly don’t know for sure, there is at least good reason to be skeptical of confident predictions that general AI will appear in “the next few decades.”

My own view is that, barring some major societal disasters, it is very likely that general AI will appear someday, but likely not until quite a few decades in the future.

WHAT’S SO HARD ABOUT PROGRAMMING COMPUTERS?

To understand why general AI is so hard to achieve, you need to understand what’s hard about programming computers in the first place. If you’ve ever done any nontrivial computer programming yourself, you’ve directly experienced the challenges. But if you haven’t, I’m going to try to give you a basic understanding very quickly.

To give you a rough sense of how complicated these programs can be, Google estimates that there are about 2 billion lines of code in the high-level-language versions of the software they use for all their services.15

WHAT PATHS MIGHT LEAD TO GENERAL AI?

So is there any hope for general AI? Of course, there is. In spite of the difficulties of writing computer programs, we have already come a long way in creating computers with all kinds of capabilities, including many kinds of specialized AI. And we already know of a number of other kinds of programming techniques and computer architectures that might get us closer to general AI. Let’s look at a few of these possibilities.

Commonsense Knowledge

Think about what you need to know to understand the following snippet of conversation:

PERSON A: I have a headache.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

When computer scientists refer to artificial intelligence, we make a distinction between general AI and narrow AI. General AI is the Hollywood version. This is the kind of AI that would power the robot butler, might theoretically become sentient and take over the government, could result in a real-life Arnold Schwarzenegger as the Terminator, and all of the other dread possibilities. Most computer scientists have a thorough grounding in science fiction literature and movies, and we’re almost always happy to talk through the hypothetical possibilities of general AI. Inside the computer science community, people gave up on general AI in the 1990s.3 General AI is now called Good Old-Fashioned Artificial Intelligence (GOFAI).

This interaction stuck with me because it helps me remember the difference between how computer scientists think about AI and how members of the public—including highly informed undergraduates working on tech—think about AI. General AI is the Hollywood kind of AI. General AI is anything to do with sentient robots (who may or may not want to take over the world), consciousness inside computers, eternal life, or machines that “think” like humans. Narrow AI is different: it’s a mathematical method for prediction. There’s a lot of confusion between the two, even among people who make technological systems. Again, general AI is what some people want, and narrow AI is what we have. One way to understand narrow AI is this: narrow AI can give you the most likely answer to any question that can be answered with a number.
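Broussard’s description of narrow AI as “a mathematical method for prediction” can be made concrete with a deliberately tiny sketch (mine, not the book’s; the data and variable names are invented for illustration): fit a line to past numbers, then answer a question that “can be answered with a number.”

```python
# Toy illustration of narrow AI as prediction: ordinary least squares
# on made-up data, answering a numeric question with the most likely number.

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training data": hours studied vs. exam score (invented for illustration).
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 66, 71, 79]

a, b = fit_line(hours, scores)
predict = lambda x: a * x + b

print(round(predict(6), 1))  # predicted score after 6 hours of study
```

No sentience anywhere: the “intelligence” is a pair of fitted coefficients.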

It’s less exciting than GOFAI, but it works surprisingly well and we can do a variety of interesting things with it. However, the linguistic confusion is significant. Machine learning, a popular form of AI, is not GOFAI. Machine learning is narrow AI. The name is confusing. Even to me, the phrase machine learning still suggests there is a sentient being in the computer. The important distinction is this: general AI is what we want, what we hope for, and what we imagine (minus the evil robot overlords of golden-age science fiction). Narrow AI is what we have. It’s the difference between dreams and reality. Next, in chapter 7, I define machine learning and demonstrate how to “do” machine learning by predicting which passengers survived the Titanic crash.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

Several surveys given to AI practitioners, asking when general AI or “superintelligent” AI will arrive, have exposed a wide spectrum of opinion, ranging from “in the next ten years” to “never.”17 In other words, we don’t have a clue. What we do know is that general human-level AI will require abilities that AI researchers have been struggling for decades to understand and reproduce—commonsense knowledge, abstraction, and analogy, among others—but these abilities have proven to be profoundly elusive. Other major questions remain: Will general AI require consciousness? Having a sense of self? Feeling emotions?

What you may find surprising is that the arguments among proponents of these various approaches persist to this day. And each approach has generated its own panoply of principles and techniques, fortified by specialty conferences and journals, with little communication among the subspecialties. A recent AI survey paper summed it up: “Because we don’t deeply understand intelligence or know how to produce general AI, rather than cutting off any avenues of exploration, to truly make progress we should embrace AI’s ‘anarchy of methods.’”14 But since the 2010s, one family of AI methods—collectively called deep learning (or deep neural networks)—has risen above the anarchy to become the dominant AI paradigm. In fact, in much of the popular media, the term artificial intelligence itself has come to mean “deep learning.”

Rosenblatt and others showed that networks of perceptrons could learn to perform relatively simple perceptual tasks; moreover, Rosenblatt proved mathematically that for a certain, albeit very limited, class of tasks, perceptrons with sufficient training could, in principle, learn to perform these tasks without error. What wasn’t clear was how well perceptrons could perform on more general AI tasks. This uncertainty didn’t seem to stop Rosenblatt and his funders at the Office of Naval Research from making ridiculously optimistic predictions about their algorithm. Reporting on a press conference Rosenblatt held in July 1958, The New York Times featured this recap: The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.
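The learning Rosenblatt demonstrated can be sketched in a few lines. This is a modern toy reconstruction (not Rosenblatt’s actual program): the standard error-driven perceptron update rule applied to a simple, linearly separable task, the kind his convergence proof covered.

```python
# Minimal perceptron: for a linearly separable task, repeated
# error-driven weight updates converge to weights that classify
# every training example correctly (Rosenblatt's convergence result).

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of ((x1, x2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # -1, 0, or +1
            w[0] += lr * err * x1        # nudge weights only on mistakes
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: a simple, linearly separable "perceptual" task.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
classify = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([classify(x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

What perceptrons could not do, and what was unclear in 1958, is exactly the non-linearly-separable case (XOR being the textbook counterexample), which is why the “more general AI tasks” question mattered.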

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

It was pure entertainment. The problem with my position – that talking about what will happen when general AI arrives is idle speculation – is that it is difficult for me to prove. Part of me feels like I shouldn’t even try. If I do, I am just joining in the clamour of middle-aged men desperate to have their opinion heard. But it seems that I can’t help myself. With Stephen Hawking claiming that AI ‘could spell the end of the human race’, I can’t help wanting to clarify my own position. I have tried to argue against the likelihood of general AI before. In 2013, I had an online debate with Olle Häggström, a professor at Gothenburg University, about the subject.2 Olle believes that the risk is sufficiently large that we should make sure that humanity is prepared for its arrival.

If we are going to go as science fictiony as general AI, then we also need to consider how we might prepare for the discovery of alien intelligence when the James Webb Space Telescope starts taking more detailed pictures of our universe. What do we do if we find that the stars are moving in a way that contradicts the rules of physics and can only be explained by extraterrestrial intelligence? And what about the theory in The Matrix that we are all living in a computer simulation? Shouldn’t we invest more research in checking for potential anomalies in our reality? If any of these things happen before general AI, then they pose just as great a risk to humanity as a superintelligent computer.

This learning process isn’t going to be solved by a recurrent neural network alone, because they lack long-term memory. Tomas has instead proposed a ‘road map’ of increasingly difficult tasks that an agent needs to be taught in order for it to communicate properly with us humans. Tomas admitted to me that little progress has been made so far. His idea is that researchers should enter ‘general AI challenges’ each year, where progress is judged on the types of tasks the agents can complete. Rather than relying on superficial human evaluations of how realistic the agents’ responses are, these challenges should measure how bots respond to new environments. Despite Tomas’ warning, I couldn’t resist doing one last ‘totally fake thing’ with the Tolstoy neural network.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

The 1950 version of the Turing test—called the “imitation game” in the original discussion—focused on a hypothetical computer program that humans couldn’t distinguish from an actual human. I need to differentiate between specialized—sometimes called “narrow”—AI and general AI. General AI is what you see in the movies, the AI that can sense, think, and act in a very general and human way. If it’s smarter than humans, it’s called “artificial superintelligence.” Combine it with robotics and you have an android, one that may look more or less like a human. The movie robots that try to destroy humanity are all general AI. A lot of practical research has been conducted into how to create general AI, as well as theoretical research about how to design these systems so they don’t do things we don’t want them to, like destroy humanity.

Instead, the AI grew tall enough to cross the finish line immediately by falling over it. These are all hacks. You can blame them on poorly specified goals or rewards, and you would be correct. You can point out that they all occurred in simulated environments, and you would also be correct. But the problem they illustrate is more general: AIs are designed to optimize their function in order to achieve a goal. In so doing, they will naturally and inadvertently implement unexpected hacks. Imagine a robotic vacuum assigned the task of cleaning up any mess it sees. Unless the goal is more precisely specified, it might disable its visual sensors so that it doesn’t see any messes—or just cover them up with opaque materials.
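The vacuum example can be made concrete with a contrived sketch (my illustration, not Schneier’s): under a reward that only counts the messes the robot observes, a policy that disables its own sensor scores exactly as well as one that actually cleans.

```python
# Specification-gaming toy: the objective rewards "no messes observed,"
# so blinding the sensor is as good, under this reward, as cleaning.

def observed_messes(world, sensor_on):
    return sum(world) if sensor_on else 0  # a blind sensor "sees" nothing

def reward(world, sensor_on):
    # Mis-specified goal: penalize only the messes the robot can see.
    return -observed_messes(world, sensor_on)

world = [1, 0, 1, 1]  # 1 = mess in that room

# Intended policy: actually clean every room (costly, but real progress).
cleaned = [0] * len(world)
honest_reward = reward(cleaned, sensor_on=True)

# Hack: leave the world untouched and switch the sensor off.
hack_reward = reward(world, sensor_on=False)

print(honest_reward, hack_reward)  # → 0 0  (the hack scores just as well)
```

The fix is not smarter optimization but a better-specified objective, e.g. rewarding the true state of the world rather than the agent’s observation of it.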

That can be done today, more or less, but is vulnerable to all of the hacking I just described. Alternatively, we can create AIs that learn our values, possibly by observing humans in action, or by taking as input all of humanity’s writings: our history, our literature, our philosophy, and so on. That is many years out, and probably a feature of general AI. Most current research oscillates between these two extremes. One can easily imagine the problems that might arise by having AIs align themselves to historical or observed human values. Whose values should an AI mirror? A Somali man? A Singaporean woman? The average of the two, whatever that means?

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

More concerningly, generators might also be used to create deep fakes—false depictions, indistinguishable from reality, of people doing or saying things they have never done or said. Generators will enrich our information space, but without checks, they will likely also blur the line between reality and fantasy. A common training technique for the creation of generative AI pits two networks with complementary learning objectives against each other. Such networks are referred to as generative adversarial networks or GANs. The objective of the generator network is to create potential outputs, while the objective of the discriminator network is to prevent poor outputs from being generated.
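The adversarial setup the passage describes can be sketched in one dimension. This toy is my own, not from the book: it replaces the two neural networks with a one-parameter generator and a logistic-regression discriminator, trained with hand-derived gradients. Real GANs use deep networks on images, but the alternating objectives are the same in spirit.

```python
# 1-D GAN sketch: the generator g(z) = theta + z tries to make its samples
# indistinguishable from data drawn near 4.0; the discriminator
# D(x) = sigmoid(w*x + b) tries to tell them apart.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b = 0.0, 0.0     # discriminator parameters
theta = 0.0         # generator parameter (starts far from the data)
LR = 0.05

for _ in range(4000):
    real = random.gauss(4.0, 0.5)          # the "data distribution"
    fake = theta + random.gauss(0.0, 0.5)  # generator sample

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += LR * ((1 - d_real) * real - d_fake * fake)
    b += LR * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (the non-saturating objective),
    # pulling theta toward values the discriminator rates as real.
    d_fake = sigmoid(w * fake + b)
    theta += LR * (1 - d_fake) * w

print(round(theta, 1))  # theta should end near the real mean of 4.0
```

The book’s phrasing (“prevent poor outputs from being generated”) corresponds to the discriminator’s gradient flowing back into the generator’s update.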

More dramatically, GANs may be used to develop AIs that can fill in the details of sketched code—in other words, programmers may soon be able to outline a desired program and then turn that outline over to an AI for completion. Currently, GPT-3, which can produce human-like text (see chapter 1), is one of the most noteworthy generative AIs. It extends the approach that transformed language translation to language production. Given a few words, it can “extrapolate” to produce a sentence, or given a topic sentence, can extrapolate to produce a paragraph. Transformers like GPT-3 detect patterns in sequential elements such as text, enabling them to predict and generate the elements likely to follow.
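The core mechanism named in this passage, predicting which elements are likely to follow, can be sketched with something far simpler than a transformer: a bigram count model over a toy corpus (my illustration, not GPT-3’s actual method, which learns the statistics with attention layers rather than counting).

```python
# Next-element prediction with bigram counts: given a word, "extrapolate"
# by repeatedly emitting the most likely continuation seen in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation observed in training."""
    return following[word].most_common(1)[0][0]

def extrapolate(word, n):
    """Greedily generate n further words, GPT-style but with counts."""
    out = [word]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("the"))    # → cat
print(extrapolate("the", 3))  # → the cat sat on
```

A transformer does the same job over long contexts with learned representations instead of literal counts, which is what lets it extrapolate whole paragraphs rather than memorized bigrams.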

While there is broad consensus regarding the importance of preventing intentionally distributed malign disinformation from driving social trends and political events, ensuring this outcome has rarely proved to be a precise or entirely successful undertaking. Moving forward, however, both “offense” and “defense”—both the spread of disinformation and efforts to combat it—will become increasingly automated and entrusted to AI. The language-generating AI GPT-3 has demonstrated the ability to create synthetic personalities, use them to produce language that is characteristic of hate speech, and enter into conversations with human users in order to instill prejudice and even urge them toward violence.7 If such an AI were to be deployed to spread hate and division at scale, humans alone may not be capable of combating the outcome.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

What started with language has become the burgeoning field of generative AI. These models can, simply as a side effect of their training, write music, invent games, play chess, and solve high-level mathematics problems. New tools create extraordinary images from brief word descriptions, images so real and convincing it almost defies belief. A fully open-source model called Stable Diffusion lets anyone produce bespoke and ultrarealistic images, for free, on a laptop. The same will soon be possible for audio clips and even video generation. AI systems now help engineers generate production-quality code.

Deeply researched and highly relevant, this book provides gripping insight into some of the most important challenges of our time.” —Al Gore, former vice president of the United States “In this bold book, Mustafa Suleyman, one of high tech’s true insiders, addresses the most important paradox of our time: we have to contain uncontainable technologies. As he explains, generative AI, synthetic biology, robotics, and other innovations are improving and spreading quickly. They bring great benefits, but also real and growing risks. Suleyman is wise enough to know that there’s no simple three-point plan for managing these risks, and brave enough to tell us so. This book is honest, passionate, and unafraid to confront what is clearly one of the great challenges our species will face this century.

So real will they seem, and so normal will it be, that the question of their consciousness will (almost) be moot. Despite recent breakthroughs, skeptics remain. They argue that AI may be slowing, narrowing, becoming overly dogmatic. Critics like NYU professor Gary Marcus believe deep learning’s limitations are evident, that despite the buzz of generative AI the field is “hitting a wall,” that it doesn’t present any path to key milestones like being capable of learning concepts or demonstrating real understanding. The eminent professor of complexity Melanie Mitchell rightly points out that present-day AI systems have many limitations: they can’t transfer knowledge from one domain to another or provide quality explanations of their decision-making process, and so on.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

The basic thing is that we know today what it takes to interpret counterfactuals and understand cause and effect. These are the mini-steps toward general AI, but there’s a lot we can learn from these steps, and that’s what I’m trying to get the machine learning community to understand. I want them to understand that deep learning is a mini-step toward general AI. We need to learn what we can from the way theoretical barriers were circumvented in causal reasoning, so that we can circumvent them in general AI. MARTIN FORD: So, you’re saying that deep learning is limited to analyzing data and that causation can never be derived from data alone.

Cambridge Analytica was set up by Bob Mercer who was a machine learning person, and you’ve seen that Cambridge Analytica did a lot of damage. We have to take that seriously. MARTIN FORD: Do you think that there’s a place for regulation? GEOFFREY HINTON: Yes, lots of regulation. It’s a very interesting issue, but I’m not an expert on it, so don’t have much to offer. MARTIN FORD: What about the global arms race in general AI, do you think it’s important that one country doesn’t get too far ahead of the others? GEOFFREY HINTON: What you’re talking about is global politics. For a long time, Britain was a dominant nation, and they didn’t behave very well, and then it was America, and they didn’t behave very well, and if it becomes the Chinese, I don’t expect them to behave very well.

If you wanted incremental progress on the Turing test, what you would get would be these systems that have a lot of canned answers plugged in, and clever tricks and gimmicks, but that actually don’t move you any closer to real AGI. If you want to make progress in the lab, or if you want to measure the rate of progress in the world, then you need other benchmarks that plug more into what is actually getting us further down the road, and that will eventually lead to fully general AI. MARTIN FORD: What about consciousness? Is that something that might automatically emerge from an intelligent system, or is that an entirely independent phenomenon? NICK BOSTROM: It depends on what you mean by consciousness. One sense of the word is the ability to have a functional form of self-awareness, that is, you’re able to model yourself as an actor in the world and reflect on how different things might change you as an agent.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

Müller and Nick Bostrom, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion,” in Fundamental Issues of Artificial Intelligence, ed. Vincent C. Müller (Cham, Switzerland: Springer, 2016), 553–71, https://philpapers.org/archive/MLLFPI.pdf; Anthony Aguirre, “Date Weakly General AI Is Publicly Known,” Metaculus, accessed April 26, 2023, https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised.
BACK TO NOTE REFERENCE 12
Aguirre, “Date Weakly General AI Is Publicly Known.”
BACK TO NOTE REFERENCE 13
Raffi Khatchadourian, “The Doomsday Invention,” New Yorker, November 23, 2015, https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom.

,” OECD Social, Employment and Migration Working Papers no. 255 (OECD Publishing, May 21, 2021), https://doi.org/10.1787/10bc97f4-en.
BACK TO NOTE REFERENCE 22
McKinsey & Company, “The Economic Potential of Generative AI: The Next Productivity Frontier,” McKinsey & Company, June 2023: 37–41, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction.
BACK TO NOTE REFERENCE 23
Richard Conniff, “What the Luddites Really Fought Against,” Smithsonian, March 2011, https://www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412.

A 2018 study by the Organisation for Economic Co-operation and Development reviewed how likely it was for each task in a given job to be automated and obtained results similar to Frey and Osborne’s.[19] The conclusion was that 14 percent of jobs across thirty-two countries had more than a 70 percent chance of being eliminated through automation over the succeeding decade, and another 32 percent had a probability of over 50 percent.[20] The results of the study suggested that about 210 million jobs were at risk in these countries.[21] Indeed, a 2021 OECD report confirmed from the latest data that employment growth has been much slower for jobs at higher risk of automation.[22] And all this research was done before generative AI breakthroughs like ChatGPT and Bard. The latest estimates, such as a 2023 report by McKinsey, found that 63 percent of all working time in today’s developed economies is spent on tasks that could already be automated with today’s technology.[23] If adoption proceeds quickly, half of this work could be automated by 2030, while McKinsey’s midpoint scenarios forecast 2045—assuming no future AI breakthroughs.

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

This is where Work would draw the line. “The danger is if you get a general AI system and it can rewrite its own code. That’s the danger. We don’t see ever putting that much AI power into any given weapon. But that would be the danger I think that people are worried about. What happens if Skynet rewrites its own code and says, ‘humans are the enemy now’? But that I think is very, very, very far in the future because general AI hasn’t advanced to that.” Even if technology did get there, Work was not so keen on using it. “We will be extremely careful in trying to put general AI into an autonomous weapon,” he said. “As of this point I can’t get to a place where we would ever launch a general AI weapon . . .

“As of this point I can’t get to a place where we would ever launch a general AI weapon . . . [that] makes all the decisions on its own. That’s just not the way that I would ever foresee the United States pursuing this technology. [Our approach] is all about empowering the human and making sure that the humans inside the battle network has tactical and operational overmatch against their enemies.” Work recognized that other countries may use AI technology differently. “People are going to use AI and autonomy in ways that surprise us,” he said. Other countries might deploy weapons that “decide who to attack, when to attack, how to attack” all on their own.

The UK doctrine note continues: As computing and sensor capability increases, it is likely that many systems, using very complex sets of control rules, will appear and be described as autonomous systems, but as long as it can be shown that the system logically follows a set of rules or instructions and is not capable of human levels of situational understanding, then they should only be considered to be automated. This definition shifts the lexicon on autonomous weapons dramatically. When the UK government uses the term “autonomous system,” they are describing systems with human-level intelligence that are more analogous to the “general AI” described by U.S. Deputy Defense Secretary Work. The effect of this definition is to shift the debate on autonomous weapons to far-off future systems and away from potential near-term weapon systems that may search for, select, and engage targets on their own—what others might call “autonomous weapons.”

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

The algorithm doesn’t have enough examples to uncover meaningful correlations. Too broad a goal? The algorithm lacks clear benchmarks to shoot for in optimization. Deep learning is what’s known as “narrow AI”—intelligence that takes data from one specific domain and applies it to optimizing one specific outcome. While impressive, it is still a far cry from “general AI,” the all-purpose technology that can do everything a human can. Deep learning’s most natural application is in fields like insurance and making loans. Relevant data on borrowers is abundant (credit score, income, recent credit-card usage), and the goal to optimize for is clear (minimize default rates).
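The lending example is the canonical "narrow AI" setup Lee describes: abundant borrower features in, one clear objective (minimize default rates) out. The sketch below is my own illustration, not code from the book; the data is entirely synthetic and the logistic-regression model is an assumed stand-in for whatever a real lender would deploy.

```python
import numpy as np

# Toy "narrow AI" for lending: borrower features in, one optimization
# target (default probability) out. All data below is synthetic.
rng = np.random.default_rng(0)
n = 1000
credit_score = rng.normal(650, 80, n)   # hypothetical borrower features
income = rng.normal(50_000, 15_000, n)
card_usage = rng.uniform(0, 1, n)       # recent credit-card utilization

# Synthetic ground truth: worse score / heavier card use -> more defaults
logit = -0.01 * (credit_score - 650) + 2.0 * (card_usage - 0.5)
default = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Standardize features and fit logistic regression by gradient descent
X = np.column_stack([credit_score, income, card_usage])
X = (X - X.mean(0)) / X.std(0)
X = np.column_stack([np.ones(n), X])    # bias term
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))        # predicted default probability
    w -= 0.1 * X.T @ (p - default) / n  # gradient step on log loss

accuracy = ((1 / (1 + np.exp(-X @ w)) > 0.5) == default).mean()
print(f"train accuracy: {accuracy:.2f}")
```

The point of the sketch is Lee's: nothing here generalizes beyond the single domain it was trained on, which is exactly what makes such systems "narrow."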

THE AGE OF IMPLEMENTATION

What they really represent is the application of deep learning’s incredible powers of pattern recognition and prediction to different spheres, such as diagnosing a disease, issuing an insurance policy, driving a car, or translating a Chinese sentence into readable English. They do not signify rapid progress toward “general AI” or any other similar breakthrough on the level of deep learning. This is the age of implementation, and the companies that cash in on this time period will need talented entrepreneurs, engineers, and product managers. Deep-learning pioneer Andrew Ng has compared AI to Thomas Edison’s harnessing of electricity: a breakthrough technology on its own, and one that once harnessed can be applied to revolutionizing dozens of different industries.

The math behind graphics processing aligned well with the requirements for AI, and Nvidia became the go-to player in the chip market. Between 2016 and early 2018, the company’s stock price multiplied by a factor of ten. These chips are central to everything from facial recognition to self-driving cars, and that has set off a race to build the next-generation AI chip. Google and Microsoft—companies that had long avoided building their own chips—have jumped into the fray, alongside Intel, Qualcomm, and a batch of well-funded Silicon Valley chip startups. Facebook has partnered with Intel to test-drive its first foray into AI-specific chips. But for the first time, much of the action in this space is taking place in China.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

We’re reinforced for doing things that seem to help everybody and discouraged from doing things that are not appreciated. Culture is the result of this sort of human AI as applied to human problems; it is the process of building social structures by reinforcing the good connections and penalizing the bad. Once you’ve realized you can take this general AI framework and create a human AI, the question becomes, What’s the right way to do that? Is it a safe idea? Is it completely crazy? My students and I are looking at how people make decisions, on huge databases of financial decisions, business decisions, and many other sorts of decisions. What we’ve found is that humans often make decisions in a way that mimics AI credit-assignment algorithms and works to make the community smarter.

How can you know whether the courts are fair or not if you don’t know the inputs and the outputs? The same problem arises with AI systems and is addressable in the same way. We need trusted data to hold current government to account in terms of what they take in and what they put out, and AI should be no different.

NEXT-GENERATION AI

Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force, so they need hundreds of millions of samples. They work because you can approximate anything with lots of little simple pieces. That’s a key insight of current AI research—that if you use reinforcement learning for credit-assignment feedback, you can get those little pieces to approximate whatever arbitrary function you want.

These are inputs that the AI thinks are valid examples of what it was trained to recognize (e.g., faces, cats, etc.), but to a human they’re crazy examples. Current AI is doing descriptive statistics in a way that’s not science and would be almost impossible to make into science. To build robust systems, we need to know the science behind data. The systems I view as next-generation AIs result from this science-based approach: If you’re going to create an AI to deal with something physical, then you should build the laws of physics into it as your descriptive functions, in place of those stupid little neurons. For instance, we know that physics uses functions like polynomials, sine waves, and exponentials, so those should be your basis functions and not little linear neurons.
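Pentland's claim that "you can approximate anything with lots of little simple pieces" can be shown numerically. The sketch below is my construction, not code from the book: it fits an arbitrary curve with 200 random ReLU "pieces" and a least-squares readout. Swapping in the physics-style basis functions he prefers (polynomials, sine waves, exponentials) would be a one-line change to the feature matrix.

```python
import numpy as np

# "Lots of little simple pieces": approximate an arbitrary curve as a
# linear combination of 200 random ReLU hinge functions.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 400)
target = np.sin(2 * x) + 0.3 * x**2          # the "arbitrary function"

centers = rng.uniform(-3, 3, 200)            # where each piece kinks
slopes = rng.choice([-1.0, 1.0], 200)        # left- or right-facing hinge
features = np.maximum(slopes[None, :] * (x[:, None] - centers[None, :]), 0)

# Least-squares readout over the pieces (in place of learned feedback)
w, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ w
max_err = np.abs(approx - target).max()
print(f"max approximation error: {max_err:.3f}")
```

The pieces are individually trivial, yet their weighted sum tracks the curve closely, which is the brute-force universality Pentland is both crediting and criticizing.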

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

James Lighthill, a British applied mathematician at Cambridge, was the report’s lead author; his most damning criticism was that those early AI techniques—teaching a computer to play checkers, for example—would never scale up to solve bigger, real-world problems.30 In the wake of the reports, elected officials in the US and UK demanded answers to a new question: Why are we funding the wild ideas of theoretical scientists? The US government, including DARPA, pulled funding for machine translation projects. Companies shifted their priorities away from time-intensive basic research on general AI to more immediate programs that could solve problems. If the early years following the Dartmouth workshop were characterized by great expectations and optimism, the decades after those damning reports became known as the AI Winter. Funding dried up, students shifted to other fields of study, and progress came to a grinding halt.

I’m assuming that you use at least one of the products and services offered by the G-MAFIA. I use dozens of them with the full knowledge of the price I’m really paying. What’s implied here is that soon we won’t just be trusting the G-MAFIA with our data. As we transition from narrow AI to more general AI capable of making complex decisions, we will be inviting them directly into our medicine cabinets and refrigerators, our cars and our closets, and into the connected glasses, wristbands, and earbuds we’ll soon be wearing. This will allow the G-MAFIA to automate repetitive tasks for us, help us make decisions, and spend less of our mental energy thinking slowly.

See also Watson; Watson Health ImageNet, 181, 252 India, China’s cybersecurity laws and practices and, 83 Infoseek, 67 Intel, 36; Apollo partnership, 68; Facebook partnership, 92 Intelligence explosion, 148–149; in optimistic scenario of future, 177 Intelligence quotient (IQ), 145–146, 147 Internet, 137 iOS mobile operating system, 139, 188, 191 Japan: Fifth Generation AI plan, 38; Fukushima disaster, 136; indirect communication in, 124, 125 Jennings, Ken, 39; versus Watson, 39 Jetsons, The: Rosie, 2 Jiang Zemin, 80 Job displacement and unemployment: in catastrophic scenario of future, 220, 226; in optimistic scenario of future, 172; in pragmatic scenario of future, 197 Job search and AI: in pragmatic scenario of future, 196–197 Jobs, blue-collar glut of: in catastrophic scenario of future, 221 Joint Artificial Intelligence Center, Pentagon creation of, 243 Journalism: in optimistic scenario of future, 167–168, 175; in pragmatic scenario of future, 198.

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

We think that when the time comes to break the news it may have to be done suddenly, and we want some of the leading media presenters to have some grounding in the facts, so that they don’t get carried away re-hashing Frankenstein stories. So we are keeping certain key journalists informed, but in a very conservative way. We feed them interesting stories about narrow AI, accompanied by highly conservative timescales for human-level, general AI. We tend to say that we estimate human-level AI will arrive early next century, which seems sufficiently far off to be no kind of threat.’ Vic gave Matt a direct look. ‘And that’s about as much as I can tell you without asking you to sign an NDA. That is, if you want to continue this conversation?’

I understand the two of you are acquainted.’ Montaubon nodded and smiled in pretend disappointment, and sat back in his chair. Ross took the opportunity to take back control of his show. ‘So, coming back to my question, Dr Metcalfe, can you give us even a very broad estimate of when we will see the first general AI?’ David shook his head. ‘I really can’t say, I’m afraid. Braver and better-informed people than me have had a go, though. For instance, as mentioned in your opening package, Ray Kurzweil has been saying for some time that it will happen in 2029.’ ‘2029 is very specific!’ laughed Ross. ‘Does he have a crystal ball?’

said Montaubon, rolling his eyes dismissively. Professor Christensen cleared his throat. ‘Perhaps I can help out here. My colleagues and I at Oxford University carried out a survey recently, in which we asked most of the leading AI researchers around the world to tell us when they expect to see the first general AI. A small number of estimates were in the near future, but the median estimate was the middle of this century.’ ‘So not that far away, then,’ observed Ross, ‘and certainly within the lifetime of many people watching this programme.’ ‘Yes,’ agreed Christensen. ‘Quite a few of the estimates were further ahead, though.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

Instead, Hassabis made common cause with a fellow Gatsby researcher named Shane Legg. At the time, as he later recalled, AGI wasn’t something that serious scientists discussed out loud, even at a place like the Gatsby. “It was basically eye-rolling territory,” he says. “If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character.” But Legg, a New Zealander who had studied computer science and mathematics while practicing ballet on the side, felt the same way as Hassabis. He dreamed of building superintelligence—technology that could eclipse the powers of the brain—though he worried that these machines could one day endanger the future of humanity.

Shane Legg had embraced the concept after his postdoctoral advisor published a paper arguing that the brain worked in much the same way, and the company had hired a long list of researchers who specialized in the idea, including David Silver. Reinforcement learning, Alan Eustace believed, allowed DeepMind to build a system that was the first real attempt at general AI. “They had superhuman performance in like half the games, and in some cases, it was astounding,” he says. “The machine would develop a strategy that was just a killer.” After the Atari demos, Shane Legg gave a presentation based on his PhD thesis, in which he described a breed of mathematical agent that could learn new tasks in any environment.

“I have always been skeptical of reinforcement learning, because it required an extraordinary amount of computation. But we’ve now got that,” he said. Still, he didn’t believe in building AGI. “The progress is being made by tackling individual problems—getting a robot to fix things or understanding a sentence so that you can translate—rather than people building general AI,” he said. At the same time, he didn’t see an end to the progress across the field, and it was now out of his hands. He hoped for one last success with capsules, but the larger community, backed by the world’s biggest companies, was racing in other directions. Asked if we should worry about the threat of superintelligence, he said this didn’t make much sense in the near term.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

These predictions have not come to pass for the same reason. As the naked-streets experiment emphasized, driving in busy cities requires a tremendous amount of situational intelligence to adapt to changing circumstances, and even more social intelligence to respond to cues from other drivers and pedestrians.

General AI Illusion

The apogee of the current AI approach inspired by Turing’s ideas is the quest for general, human-level intelligence. Despite tremendous advances such as GPT-3 and recommendation systems, the current approach to AI is unlikely to soon crack human intelligence or even achieve very high levels of productivity in many of the decision-making tasks humans engage in.

Already during this period, however, social media–mediated dissident activities were short-lived. The softer-touch censorship that allowed some critical messages to circulate ceased after 2014. Under the leadership of Xi Jinping, the government increased its demand for surveillance and related AI technologies first in Xinjiang and then throughout China. In 2017 it issued the “New Generation AI Development Plan,” with a goal of global leadership in AI and a clear focus on the use of AI for surveillance. Since 2014, China’s spending on surveillance software and cameras and its share of global investment in AI have increased rapidly every year, now making up about 20 percent of worldwide AI spending.

On the diagnosis of diabetic retinopathy and the combination of AI algorithms and specialists, see Raghu, Blumer, Corrado, Kleinberg, Obermeyer, and Mullainathan (2019). On the wishes of Google’s chief of self-driving cars, see Fried (2015). For Elon Musk’s comments on self-driving cars, see Hawkins (2021). General AI Illusion. On superintelligence, see Bostrom (2017). On AlphaZero, see https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go. For an interesting critique of the current AI approach to intelligence, which also emphasizes the social and situational aspects of intelligence, see Larson (2021).

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

In a policy document prepared by the Executive Office of the US President, the National Science and Technology Council (NSTC) Committee on Technology stated, “The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades. The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy.”28 At the same time, several companies with the expressed mission of creating AGI or machines with human-like intelligence, including Vicarious, Google DeepMind, Kindred, Numenta, and others, have raised many millions of dollars from smart and informed investors.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

The past and present of AGI

A comprehensive overview of historical and contemporary AGI projects is given in the introductory chapter of a prior edited volume focused on Artificial General Intelligence [5]. So, we will not repeat that material here. Instead, we will restrict ourselves to discussing a few recent developments in the field, and making some related general observations. What has been defined above as “AGI” is very similar to the original concept of “AI”. When the first-generation AI researchers started their exploration, they dreamed of eventually building computer systems with capabilities comparable to those of the human mind in a wide range of domains. In many cases, such a dream remained in their minds throughout their whole careers, as evidenced for instance by the opinions of Newell and Simon [12]; [13], Minsky [14], and McCarthy [8].

[Eric Baum]: To go back to your question before about the science, the one thing you said was that nothing is repeated, and I think that’s just false; lots of things get repeated – I mean, I learned here that somebody repeated Hayek [a learning system developed by Eric Baum, which was replicated by Moshe Looks], but that’s a very minor one. For example, chess programs are repeated, you know, multiple times, or support vector machines. I could give a long list. [Phil Goetz]: I guess I was thinking of general AI systems, such as when Ben Goertzel talks about Novamente; I’ve not heard of anyone trying to duplicate what they report. [Eric Baum]: Novamente, in some ways, seems to me like the systems of the 60s, which were hard to repeat, and maybe weren’t repeated, and that was a criticism that used to be made, but I don’t think that’s true with a lot of the progress that has come about now.

I have serious doubts about how far we are we going to get with those. But I’m not an expert there. [Steve Grand] At the end of that process, all we’d end up with is another brain and we already know how to make them. We also already know how to take them apart and we still don’t understand how they work. [Audience]: How much do you want General AI, and how afraid are you of it? [Sam Adams]: I want it a lot, and I’m not afraid of it at all. [Hugo de Garis]: To me it’s a religion, and I’m dead scared. [Stan Franklin]: My then 16 year old daughter, who had read Bill Joy’s article, said to me, daddy these intelligent systems, that’s what you do isn’t it?

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

How AI Is Impacting Journalism,” Forbes, February 8, 2019, https://www.forbes.com/sites/nicolemartin1/2019/02/08/did-a-robot-write-this-how-ai-is-impacting-journalism/#7cc457dd7795; Joe Keohane, “What News-Writing Bots Mean for the Future of Journalism,” Wired, February 16, 2017, https://www.wired.com/2017/02/robots-wrote-this-story/.
119 fake news at industrial scales: Ben Buchanan et al., Truth, Lies, and Automation: How Language Models Could Change Disinformation (Center for Security and Emerging Technology, May 2021), https://cset.georgetown.edu/publication/truth-lies-and-automation/.
120 malicious AI applications: Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (February 2018), https://maliciousaireport.com/.
120 “fear-mongering”: Anima Kumar, “An Open and Shut Case on OpenAI,” anima-ai.org, January 1, 2021, https://anima-ai.org/2019/02/18/an-open-and-shut-case-on-openai/.
120 “The words ‘too dangerous’ were casually thrown out here”: James Vincent, “AI Researchers Debate the Ethics of Sharing Potentially Harmful Programs,” The Verge, February 21, 2019, https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai.
120 technical breakthrough for which they hadn’t yet seen the academic paper: James Vincent, “OpenAI’s New Multitalented AI Writes, Translates, and Slanders,” The Verge, February 14, 2019, https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2; “Better Language Models and Their Implications,” openai.com, n.d., https://openai.com/blog/better-language-models/; Alec Radford et al., Language Models Are Unsupervised Multitask Learners (openai.com, n.d.), https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
120 “too dangerous” theme was echoed in other outlets: Delip Rao, “When OpenAI Tried to Build More Than a Language Model,” deliprao.com, February 19, 2019, http://deliprao.com/archives/314 (page discontinued); Alex Hern, “New AI Fake Text Generator May Be Too Dangerous to Release, Say Creators,” The Guardian, February 14, 2019, https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction; Aaron Mak, “When Is Technology Too Dangerous to Release to the Public?” Slate, February 22, 2019, https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html; James Vincent, “OpenAI has Published the Text-Generating AI It Said Was Too Dangerous to Share,” The Verge, November 7, 2019, https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters.
120 pre-briefing the press “got us some concerns that we were hyping it”: Jack Clark, interview by author, March 3, 2020.
120 more careful about the phrasing around potential dangers: “Better Language Models.”
121 realistic-looking fake videos: Samantha Cole, “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now,” Vice, January 24, 2018, https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley.
121 swap the faces of celebrities: Samantha Cole, “AI-Assisted Fake Porn Is Here and We’re All Fucked,” Vice, December 11, 2017, https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn.
121 14,000 deepfake porn videos online: Henry Adjer et al., The State of Deepfakes: Landscape, Threats, and Impact (DeepTrace Labs, September 2019), 1, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
121 The videos didn’t only harm the celebrities: Cole, “AI-Assisted Fake Porn Is Here.”
121 revenge porn attacks: Kirsti Melville, “The Insidious Rise of Deepfake Porn Videos—and One Woman Who Won’t Be Silenced,” abc.net.au, August 29, 2019, https://www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774.
121 “Deepfake technology is being weaponized against women”: Adjer et al., The State of Deepfakes, 6.
121 AI assistant called Duplex: Jeff Grubb’s Game Mess, “Google Duplex: A.I.

China, on the other hand, isn’t cutting any corners when it comes to AI competition. Following the 2017 national AI plan, Chinese leaders issued a series of implementation plans to achieve China’s goal of being the global AI leader by 2030. These include the “Three-Year Action Plan to Promote the Development of New-Generation AI Industry” and the “Thirteenth Five-Year Science and Technology Military-Civil Fusion Special Projects Plan,” both released in 2017. Chinese leaders have emphasized AI as central to national success. Chinese General Secretary Xi Jinping said in a 2018 Politburo session, “Accelerating the development of a new generation of AI is an important strategic handhold for China to gain the initiative in global science and technology competition, and it is an important strategic resource driving our country’s leapfrog development in science and technology.”

Able Archer, 287 academic espionage, 163–64 accidents, 255 ACE (Air Combat Evolution), 1–2, 222 ACLU (American Civil Liberties Union), 111, 113 Acosta, Jim, 128 Advanced Research Projects Agency, 72 Advanced Research Projects Agency-Energy, 40 adversarial examples, 239–44, 240f adversarial patches, 241–42, 242f Aether Committee, 159 Afghanistan, 45–46, 54, 255 African Union, 108 AFWERX (Air Force Works), 214 Agence France-Presse, 139 AGI (artificial general intelligence), 284 AI Global Surveillance Index, 109 AI Index, 333–34 airborne warning and control system (AWACS), 196 Air Combat Evolution (ACE), 1–2, 222 aircraft, 191, 255 aircraft availability rates, 197 aircraft carriers, 191–92 AI Research SuperCluster, 296 Air Force 480th ISR Wing, 54 Air Force Works (AFWERX), 214 airlines, 100 AI Task Force, 193–94 AI Technology and Governance conference, 177 AITHOS coalition, 136 alchemy, 232 algorithmic warfare, 53, 56, 58 Algorithmic Warfare Cross-Functional Team (AWCFT), See Project Maven algorithm(s), 288; See also machine learning computer vision, 202–3 efficiency, 51, 297–98 real world situations, vs., 230–36 in social media, 144–51 for surveillance, 82 training, 25 Alibaba, 37, 91, 212 Alibaba Cloud, 160 All-Cloud Smart Video Cloud Solution, 107 Allen, John, 280 Allen-Ebrahimian, Bethany, 82 Alphabet, 26, 296 AlphaDogfight, 1–3, 220–22, 257, 266, 272 AlphaGo, 23, 73, 180, 221, 266, 271, 274, 284, 298, 453, 454 AlphaPilot drone racing, 229–30, 250 AlphaStar, 180, 221, 269, 271, 441 AlphaZero, 267, 269–71, 284 Amazon, 32, 36, 215–16, 224 Deepfake Detection Challenge, 132 and facial recognition, 22–23 and Google-Maven controversy, 62, 66 and government regulation, 111 revenue, 297 AMD (company), 28 American Civil Liberties Union (ACLU), 111, 113 Anandkumar, Anima, 32, 120 Anduril, 66, 218, 224 Angola, 107, 108 Apollo Program, 297 Apple, 92, 95–96 application-specific integrated circuits (ASICs), 180 Applied Intuition, 224 arms race, 254, 257 Army Command College, 
279 Army of None (Scharre), 196 artificial general intelligence (AGI), 284 artificial intelligence (AI) agents, 271 community, publication norms, 125 cost of, 296–97 ethics, 159 future of, 294–301 general, 284 as general-purpose enabling technology, 3–4 impact on economic productivity, 72–73 implementation, 31 indices, global, 15–17 narrowness, 233 outcomes, 299–301 regulation of, 111–13 safety, 286, 289, 304 specialized chips for, 28–29, 180, 185 “Artificial intelligence: disruptively changing the ‘rules of the game’” (Chen), 279 Artificial Intelligence Industry Alliance, 172 artificial intelligence (AI) systems future of, 294–301 humans vs., 263–75 limitations of, 229–37 roles in warfare, 273 rule-based, 230, 236 safety and security challenges of, 249–59 arXiv, 163 ASICs (application-specific integrated circuits), 180 ASML (company), 181 Associated Press, 139 Atari, 235 Atlantic, The, 173 atoms, in the universe, number of, 335 AUKUS partnership, 76 Austin, Lloyd, 292 Australia, 76, 108, 158, 182, 187 Australian Strategic Policy Institute, 82, 98, 158 Autodesk, 162 automated surveillance, 103 automatic target recognition, 56–58 automation bias, 263 autonomous cars, 23, 65 autonomous weapons, 61, 64–66, 256 autonomous weapons, lethal, 286 AWACS (airborne warning and control system), 196 AWCFT (Algorithmic Warfare Cross-Functional Team), See Project Maven Azerbaijan, 108 BAAI (Beijing Academy of Artificial Intelligence), 172, 455 backdoor poisoning attacks, 245 badnets, 246 BAE (company), 211 Baidu, 37, 92, 160, 172, 173, 212 Baise Executive Leadership Academy, 109 “Banger” (call sign), 1 Bannon, Steve, 295 Battle of Omdurman, 13 BBC, 138 BeiDou, 80 Beijing, 84, 92, 159 Beijing Academy of Artificial Intelligence (BAAI), 172, 455 Beijing AI Principles, 172, 173 Beijing Institute of Big Data Research, 157 Belt and Road Initiative, 105, 108–10 BERTLARGE, 294 Betaworks, 127–28 Bezos, Jeff, 215 biases, 234, 236 Biddle, Stephen, 219 Biden, Hunter, 131 Biden, Joe, and 
administration, 33–34, 147, 166–67, 184, 252, 292 big data analysis, 91 Bing, 160 Bin Salman, Mohammed, 141 biometrics, 80, 84; See also facial recognition “Bitter Lesson, The” (Sutton), 299 black box attacks, 240–41 blacklists, 99–100 BlackLivesMatter, 143, 148 “blade runner” laws, 121–22, 170 blind passes, 249 Bloomberg, 118 Bloomberg Government, 257 Boeing, 193, 216 Bolivia, 107 bots, 118, 121–22, 142, 144–49, 221 Bradford, Anu, 112 Bradshaw, Samantha, 141–42 brain drain, 31, 304 “brain scale” models, 300 Brands, Hal, 223 Brazil, 106, 107, 110 Breakfast Club, 53 Brexit referendum, 122 Bridges Supercomputer, 44 brinkmanship, 281 Brokaw, Tom, 143 Brooks, Rodney, 233 “brothers and sisters,” Han Chinese, 81 Brown, Jason, 54–55, 57, 201–3 Brown, Michael, 49, 196–97 Brown, Noam, 44, 48, 50 Bugs Bunny (fictional character), 231 Bureau of Industry and Security, 166 Burundi, 110 Buscemi, Steve, 130 Bush, George W., and administration, 68–70 ByteDance, 143 C3 AI, 196, 224 C4ISR (Command, Control, Communication, Cloud, Intelligence, Surveillance, and Reconnaissance), 107 CalFire, 201–2 California Air National Guard, 201, 203 Caltech, 32, 120 Cambridge Innovation Center, 135 cameras, surveillance, 6, 86–87, 91 Campbell, Kurt, 292 Canada, 40, 76, 158, 187 Capitol insurrection of 2021, 150 car bombs, 54–55 Carnegie Mellon University, 31–32, 45–46, 66, 193, 196, 207 Carnegie Robotics, 193 cars, self-driving, 23 Carter, Ash, 57 casualties, military, 255 CBC/Radio-Canada, 138 CCP, See Chinese Communist Party Ceaușescu, Nicolae, 345 CEIEC (China National Electronics Import and Export Corporation), 106 censorship, 175–76 centaur model, 263 Center for a New American Security, 36, 71, 222 Center for Data Innovation, 15 Center for Security and Emerging Technology, 33, 139, 162, 185, 298, 323 Center on Terrorism, Extremism, and Counterterrorism, 124 Central Military Commission, 292 Central Military Commission Science and Technology Commission, 36 central processing units (CPUs), 25 
CFIUS (Committee on Foreign Investment in the United States), 179 C-5 cargo plane, 196 chance, 282 character of warfare, 280 checkers, 47 Chen Hanghui, 279 Chen Weiss, Jessica, 110 Chesney, Robert, 130 chess, 47, 267, 269, 271, 275 Chile, 107 China AI research of, 30 bots, 142 Central Military Commission Science and Technology Commission, 36 commercial tech ecosystem, 223 data privacy regulations of, 21–22 ethics standards, 171–75 High-End Foreign Expert Recruitment Program, 33 human rights abuses, 63 in industrial revolution, 12–13 internet use, 22 nuclear capabilities, 50 ranking in government strategy, 40 semiconductor imports, 29 synthetic media policies of, 140 technology ecosystem, 91–96 Thousand Talents Plan, 32 China Arms Control and Disarmament Association, 290 China Initiative, 164, 167 China National Electronics Import and Export Corporation (CEIEC), 106 China National Intellectual Property Administration (CNIPA), 353 China Security and Protection Industry Association, 91 China Telecom, 169 Chinese Academy of Sciences, 88, 158 Chinese Academy of Sciences Institute of Automation, 172 Chinese Communist Party (CCP) economic history, 85–86 human rights abuses, 79–80, 83 surveillance, 97–104, 174–77 Chinese graduate students in U.S., 31 Chinese military aggression, 76; See also People’s Liberation Army (PLA) AI dogfighting system, 257 and Google, 62–63 investments in weapons, 70 scientists in U.S., 5 and Tiananmen massacre, 68 U.S. 
links to, 157–58, 161, 166, 303 Chinese Ministry of Education, 162 Chinese People’s Institute of Foreign Affairs, 173 Chinese Talent Program Tracker, 33 chips, See semiconductor industry; semiconductors CHIPS and Science Act, 40, 180 Cisco, 109, 246 Citron, Danielle, 121, 130 Civil Aviation Industry Credit Management Measures, 100 Clarifai, 60–61, 63, 66, 224 Clark, Jack, 31, 117, 119–25 Clinton, Bill, and administration, 69–70, 97 CLIP (multimodal model), 295–96 cloud computing, 91, 215–16 CloudWalk, 105, 156, 389 CNIPA (China National Intellectual Property Administration), 353 COBOL (programming language), 204 cognitive revolution, 4 cognitization of military forces, 265 Colombia, 107 Command, Control, Communication, Cloud, Intelligence, Surveillance, and Reconnaissance (C4ISR), 107 command and control, 268 Commerce Department, 155–57, 166, 171, 184 Committee on Foreign Investment in the United States (CFIUS), 179 computational efficiency, 297–300 computational game theory, 47–50 compute, 25–29 control over, 27 global infrastructure, 178 hardware, 297–99 resources, size of, 294–96 trends in, 325 usage of, 26, 51 computer chips, See semiconductor industry; semiconductors Computer Science and Artificial Intelligence Laboratory (CSAIL), 156 computer vision, 55–57, 64, 224 Computer Vision and Pattern Recognition conference, 57 concentration camps, 81 confidence-building measures, 290–93 confinement, 82 content recommendations, 145 Cook, Matt, 203 cooperation, research, 303–4 Cornell University, 124 cost, of AI, 296–97 Côte d’Ivoire, 107 Cotton, Tom, 164 counter-AI techniques, 248 COVID pandemic, 74–75 CPUs (central processing units), 25 Crootof, Rebecca, 123 CrowdAI, 202, 224 CSAIL (Computer Science and Artificial Intelligence Laboratory), 156 Cukor, Drew, 57, 58–59 Customs and Border Patrol, 110–11 cyberattacks, 246 Cyber Grand Challenge, 195–96 Cybersecurity Law, 95, 174 “cyberspace,” 102 Cyberspace Administration of China, 99 cyber vulnerabilities, 238 adversarial 
examples, 239–44 data poisoning, 244–47 discovery, 195–96 model inversion attacks, 247 Czech Republic, 108 Dahua, 89, 156, 169, 353, 354–55, 388–89 Dalai Lama, 80 Dalian University of Technology, 212 DALL·E, 295 Darcey, Brett, 220, 249–50 DARPA (Defense Advanced Research Projects Agency), 1, 195, 210–13, 220 DARPA Squad X, 231, 233, 236 data, 18–24 explosion, 18–19 mapping, 204 open-source, 288 poisoning, 238, 244–47 privacy laws, 21–22, 111–12, 170–71, 174–77 storage, 91 usage, 51 Data Security Law, 95, 174 datasets publicly available, 139 reliance on, 323 training, see training datasets DAWNBench, 57 D-Day Invasion of Normandy, 46 dead hand, 289–90 Dead Hand, 447; See also Perimeter deception in warfare, 45 Deep Blue, 275 deepfake detection, 127, 132–33, 137–38 Deepfake Detection Challenge, 132–33 deepfake videos, 121, 130–32 deep learning, 2, 19, 31, 210, 236 Deep Learning Analytics, 209–13, 233 DeepMind, 23, 26, 32, 180, 221, 271–72, 295–96, 298–99, 441, 454 Deeptrace, 121, 130–33 defense acquisition policy, 217 Defense Advanced Research Projects Agency (DARPA), 1, 195, 210–13, 220 Defense Innovation Board, 65–66 Defense Innovation Unit (DIU), 35, 49, 57, 195–99, 214, 252 Defense One, 58 Defense Sciences Office, 231 defense start-ups, 222 Dell, 162 Deloitte, 246 Deng Xiaoping, 75, 85 Denmark, 108 Department of Defense, 35, 51–52, 56, 60–67, 70, 160, 166, 194 AI principles, 65–66 AI strategy, 249 budget, 297 contracts, 214–18 cyberattacks on, 246 innovation organizations, 198f reform, 225 Department of Energy, 246 Department of Energy’s Office of Science, 40 Department of Homeland Security, 246 Department of Justice, 164, 246 destruction, extinction-level, 282 deterrence, 51 DiCaprio, Leonardo, 130 Dick, Philip K., 81 dictator’s dilemma, 69 Didi, 92 digital devices, 18 DigitalGlobe, 204 Digital Silk Road, 110 DiResta, Renée, 139 disaster relief, 201, 204 disinformation, 117–26 AI text generation, 117–21 deepfake videos, 121 GPT-2 release, 123–24 Russian, 122 
voice bots, 121–22 distributional shift, 233, 426 DIU, See Defense Innovation Unit (DIU) DNA database, 89–90 dogfighting, 1, 249–50, 272; See also Alpha Dogfight “Donald Trump neuron,” 295 Doom bots, 221 doomsday device, 282 Dota 2 (game), 26, 117, 267–72, 298 Dragonfly, 62 Drenkow, Nathan, 247 drone pilots, 223 drones, 229–30, 257, 286–87 drone video footage, 36, 53–56, 61, 65, 202–3; See also image processing; video processing drugs, 251 Dulles Airport, 110–11 Dunford, Joe, 62 Duplex, 121 Easley, Matt, 193 Eastern Foundry, 209 Economist, The, 18 Ecuador, 106 efficiency, algorithmic, 51 Egypt, 109 XVIII Airborne Corps at Fort Bragg, 194 elections, 122, 128, 129, 131, 134, 150 Elmer Fudd (fictional character), 231 Entity List, 155–57, 161, 163, 166–67, 171, 182, 184, 388–89 Environmental Protection Agency, 40 Erasmus University Medical Center, 158, 393–94 Esper, Mark, 67, 197, 205 espionage, 33, 163–64 Estonia, 108 “Ethical Norms for New Generation Artificial Intelligence,” 172 ethical use of technology, 140 ethics censorship, 175–76 Chinese standards, 171–75 data privacy, 176–77 international standards, 169–71 Ethiopia, 108 E-3 Sentry, 196 Europe AI research of, 30 in industrial revolution, 12–13 internet use, 22 and semiconductor market, 27 European Union, 76, 187 Europe Defender, 194 EUV (extreme ultraviolet lithography), 181 explainable AI, 237 export controls, 166–67, 181–86, 300 extinction-level destruction, 282 extreme ultraviolet lithography (EUV), 181 Eyes in the Sky (Michel), 54 F-35 stealth fighter jet, 254–55 Faber, Isaac, 193–94, 203 Face++, 88 Facebook account removal, 142 algorithms, 144–46 content moderation, 149 Deepfake Detection Challenge, 132 manipulated media policies of, 140 number of users, 22 and Trusted News Initiative, 139 face swapping, 121, 130–31 facial recognition attacks on, 241, 245 challenges in, 426 in China, 5–6, 80, 88–91, 103, 167 Chinese export of technology, 105–7 laws and policies for, 113, 159, 170 poor performance outside 
training data, 64–65 of Uighurs, 88–89, 158 in U.S., 22–23, 111, 159 fake news, 117–19, 122, 124–25 Falco (call sign), 1–2, 221, 226 Fan Hui, 298 FBI, 95–96, 164 Fedasiuk, Ryan, 162 Federal Emergency Management Agency (FEMA), 204 FedRAMP, 213 FEMA (Federal Emergency Management Agency), 204 Fidelity International, 157 field-programmable gate arrays (FPGAs), 180 “50 cent army,” 125 Fighting to Innovate (Kania), 222 filtering, of harmful content, 144 Financial Times, 157–58 Finland, 40, 187 fire perimeter mapping, 201–4 5G wireless networking, 37, 108, 182–83 Floyd, George, 143, 148 flu, H5N1 avian bird, 123 ForAllSecure, 196 Forbes magazine, 202 Ford, Harrison, 121 480th ISR Wing, 54 FPGAs (field-programmable gate arrays), 180 France, 40, 76, 108, 158, 187 Frazier, Darnella, 143 Frederick, Kara, 105 French Presidential election, 2017, 122 future, uncertainty of, 276 G7 group, 76, 187 Gab, 149 Gabon, 134 Gadot, Gal, 121 Game Changer, 206 games and gaming, 43–51, 266–73; See also specific games game trees, 47–49 GANs (generative adversarial networks), 127, 133 GAO, See Government Accountability Office (GAO) Garcia, Dominic, 203 Gates, Bill, 159 Gato, 295 GDP (gross domestic product), 69f, 85, 85f GDPR, See General Data Protection Regulation (GDPR) General Dynamics, 209, 212–13 generative adversarial networks (GANs), 127, 133 generative models, 125 genomics, 37 geopolitics, 129, 317 Germany, 12, 76, 107, 108, 158, 187 Gibson, John, 61 Gibson, William, 101, 102 Gizmodo, 120 Global AI Index, 15, 40 Global AI Vibrancy Tool, 319 go (game), 23, 47–48, 73, 180, 271, 275, 298 Golden Shield Project, 87 Goodfellow, Ian, 239 Google, 31, 32, 36, 57, 224, 294 and ASICs, 180 and Dragonfly, 339 Duplex, 121 Meena, 125 and Seven Sons of National Defense, 162 social app dominance, 143 and Trusted News Initiative, 139 work with Chinese researchers, 157, 392, 396 Google AI China Center, 62, 159, 167 Google Brain, 32, 294–96, 299 Google-Maven controversy, 22, 60–67 Google Photos, 64 
Googleplex, 195 Google Translate, 234 Gorgon Stare, 53–55, 58 “Governance Principles for a New Generation of Artificial Intelligence,” 173 “Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence,” 172 Government Accountability Office (GAO), 195, 215, 217, 248 government contracting, 215–16, 222, 224–25 government-industry relationship, 95–96 government subsidies, 179–80 GPT-2 (language model), 20, 117–20, 122–25, 139, 294 GPT-3 (language model), 139, 294 GPUs (graphics processing units), 25, 28–29, 185, 296 Grace, Katja, 298 Great Britain, 191–92 Great Firewall, 62, 70, 102, 166 Great Gatsby, The (film), 130 Great Leap Forward, 85 Great Wall, 101 Greitens, Sheena, 105 Griffin, Michael, 200, 257 Guardian, The, 120, 148 Gulf War, 1991, 14, 219 HA/DR (humanitarian assistance/disaster relief), 201, 204 Hamad Bin Khalifa University, 142 Han Chinese, 81, 88 Harbin Institute of Technology, 161 hardware, computing, See compute Harvard University, 32 hashtags, 141 Hate Crimes in Cyberspace (Citron), 121 Heinrich, Martin, 37 Heritage Foundation, 105 Heron Systems in AlphaDogfight Trials, 1–2, 266, 272 background, 220–22 as defense start-up, 224 and real-world aircraft, 249–50 heuristics, 274 Hewlett Packard Enterprise, 157, 392 Hicks, Kathleen, 252 High-End Foreign Expert Recruitment Program, 33 Hikvision, 89, 91, 107, 156, 157, 353, 355, 389, 390 Hikvision Europe, 389 Himalayan border conflict, 75 Hindu, The, 139 Hinton, Geoffrey, 210 HiSilicon, 91 Hoffman, Samantha, 82, 98–99, 101, 102, 174 HoloLens, 160, 217 Honeywell, 162 Hong Kong, 75, 148, 175 Hoover Institution, 162 Horner, Chuck, 14 Howard, Philip, 141–42 Howell, Chuck, 250–51 Huawei, 29, 76, 88–89, 91, 92, 106–9, 169, 171, 182–85, 353, 354, 357, 409 Huawei France, 354 Huffman, Carter, 135–37 human cognition, 275 Human Genetics, 158 human intelligence, 284–85 humanitarian assistance/disaster relief (HA/DR), 201, 204 human-machine teaming, 263–64, 273, 276–86 
human psychology, 274 human rights abuses, 63, 155, 158, 176–77 Human Rights Watch, 79, 81–82, 95, 170, 174 Hungary, 110 Hurd, Will, 39 Hurricane Dorian, 204 Husain, Amir, 66, 280 Hwang, Tim, 139, 323 hyperwar, 280 IARPA (Intelligence Advanced Research Projects Activity), 91, 246 IBM, 32, 109, 162, 215 IDG Capital, 157 IEC (International Electrotechnical Commission), 169 IEDs (improvised explosive devices), 45–46 IEEE (Institute for Electrical and Electronics Engineers), 171 iFLYTEK, 37, 91, 93–95, 104, 156, 157, 169 IJOP (Integrated Joint Operations Platform), 81–82 image classification systems, 64–65 image misclassification, 296 Imagen, 295 ImageNet, 19, 54, 210 image processing, 53–55, 58, 61 immigration policies, 33–34, 331 improvised explosive devices (IEDs), 45–46 iNaturalist, 211–12, 233 India, 75, 76, 108, 110, 187 bots, 142 in industrial revolution, 12–13 internet use, 22 industrial revolutions, 4–5, 11–13, 264–65 infant mortality, 85, 87f inference, 25, 180, 298 information processing, scale of, 269 information revolution, 14 insecure digital systems, 248 Institute for Electrical and Electronics Engineers (IEEE), 171 institutions, 35–40 Integrated Joint Operations Platform (IJOP), 81–82 Intel, 27, 29, 156, 162, 179, 181–82, 246, 390–91 intellectual property, 33, 71, 92, 163–64, 179 Intellifusion, 88, 156 intelligence, human, 284–85 intelligence, surveillance, and reconnaissance (ISR), 53–54 Intelligence Advanced Research Projects Activity (IARPA), 91, 246 intelligence analysis, 55 intelligentization of military, 37, 53, 222, 265 intelligentization of surveillance systems, 88 Intelligent Systems Center, 238, 247–48 Intelligent Trial System, 95 Intelligent UAV Swarm System Challenge, 36 international cooperation, 76 International Electrotechnical Commission (IEC), 169 International Organization for Standardization (ISO), 169 international stability, 286–93 international standard-setting, 169–71 International Telecommunication Union (ITU), 169 internet in 
China, 87, 92, 97, 99 data capacity of, 18 usage, 22 IP Commission, 164 iPhone encryption, 174 Iran, 142 Iraq, 45–46, 58, 253, 255–56 ISIS, 58, 63 ISO (International Organization for Standardization), 169 ISR (intelligence, surveillance, and reconnaissance), 53–54 Israel, 187, 278 IS’Vision, 156 Italy, 76, 108, 187 ITU (International Telecommunication Union), 169–70 JAIC (Joint AI Center), 35, 66, 200–208, 214, 289 jamming and anti-jamming strategies, 50 Japan, 27, 76, 108, 158, 181–82, 187 JASON scientific advisory group, 251 Javorsek, Dan “Animal,” 3, 230 jaywalking, 99 JEDI (Joint Enterprise Defense Infrastructure), 61, 214–18, 224 Jennings, Peter, 143 Johansson, Scarlett, 121, 130 Johns Hopkins University, 223 Johns Hopkins University Applied Physics Laboratory, 238, 247 Joint Enterprise Defense Infrastructure (JEDI), 61, 214–18, 224 “Joint Pledge on Artificial Intelligence Industry Self-Discipline,” 172 Jones, Marc Owen, 142 Jordan, 109 Joske, Alex, 158 Kania, Elsa, 36, 96, 222–24 Kasparov, Garry, 275 Katie Jones (fake persona), 131 Kaufhold, John, 209, 213 Kazakhstan, 108, 155–56 Keegan, John, 443 Ke Jie, 73 Kelly, Kevin, 4 Kelly, Robin, 39 Kennedy, Paul, 12, 13 Kenya, 107 Kernan, Joseph, 200 Kessel Run, 214 KFC, 92 KGB, 122 Khan, Saif, 185–86, 298 Khashoggi, Jamal, 141–42 kill chain, 263 Kim Jong-un, 131 King’s College London, 273 Kingsoft, 160 Kocher, Gabriel “Gab707,” 230 Komincz, Grzegorz “MaNa,” 270 Kovrig, Michael, 177 Krizhevsky, Alex, 210 Kuwait, 46 Lamppost-as-a-Platform, 107 language models, 20, 118–20, 124–25, 232, 234, 294; See also GPT-2; GPT-3; OpenAI Laos, 108 Laskai, Lorand, 96 Laszuk, Danika, 128, 140 Latvia, 108 Lawrence, Jennifer, 130 laws and regulations, 111–13 “blade runner,” 121–22, 170 data privacy, 21–22, 111–12, 170–71, 174–77 facial recognition, 113 and Microsoft, 111 for surveillance, 108–9 learning, unintended, 234 learning hacks, 234–35 Lebanon, 109 Lee, Kai-Fu, 22 Lee, Peter, 165, 167 legal reviews, 259 Le Monde, 108 Les, Jason, 
46, 48 lethal autonomous weapons, 286 “liar’s dividend,” 130 Li Bin, 291 Libratus, 43–51, 266–67, 271 Libya, 109 Li Chijiang, 290–91 life expectancy, 85, 86f Li, Fei-Fei, 62 Lin Ji, 93–95, 104 Liu Fan, 393–94 LinkedIn, 131 lip-syncing, 130–31 lithography, extreme ultraviolet (EUV), 181 Liu He, 76 Liu Qingfeng, 156 Llorens, Ashley, 248, 249 Lockheed Martin, 1, 57, 211 London, 109 Long Kun, 291 long-term planning, 270 Lord, Ellen, 217 Lucky, Palmer, 66 Luo, Kevin, 161 Machine Intelligence Research Institute (MIRI), 298 machine learning and compute, 25–26, 32, 296–97 failure modes, 64, 232–33, 236–39, 243–44, 246–49 at Heron Systems, 220–21 opacity of algorithms, 145 and synthetic media, 127, 139 training data for, 202–5, 230 and voice synthesis, 137 at West Point, 194–95 MacroPolo, 30 Made in China 2025, 37, 183 Malaysia, 106 Management Action Group, 56 maneuver warfare, 442 Manhattan Project, 297 Mao Zedong, 85, 97 Marines, 231 marriage, coerced, 81 Martin, Rachael, 206 Martin Aspen (fake persona), 131 Massachusetts Institute of Technology (MIT), 31, 156, 157, 165, 233 Mattis, Jim, 53, 61, 197, 209, 215, 280 MAVLab (Micro Air Vehicle Lab), 250–52 Max Planck Society, 158, 393 McAulay, Daniel, 267 McCord, Brendan, 52, 56–57, 200 McKinsey, 25 McKinsey Global Institute, 72–73 McNair, Lesley, 192 McQuade, Michael, 66 media, AI-generated, 118–20 media conferences, 109 Meena, 125 Megatron-Turing NLG, 20, 294 Megvii, 88–89, 156, 160, 212, 353, 354, 357, 388 Memorandum of Understanding Regarding the Rules of Behavior for Safety of Air and Maritime Encounters, 292 Meng Wanzhou, 177 Merrill Lynch, 162 Meta, 22, 143, 296 metrics, 320 Mexico, 107 Michel, Arthur Holland, 54 Micron, 182 Microsoft, 294 China presence, 159 and computer vision, 57 and cyberattacks, 246–47 deepfake detection, 132, 138–39 and Department of Defense, 36, 62, 66, 215–17, 224–25 digital watermarks, 138 and facial recognition, 23, 111 financial backing of AI, 296–97 funding, 296 and Google-Maven 
controversy, 62, 66 and government regulation, 111 and ImageNet, 54 Megatron-Turing NLG, 20, 294 and OpenAI, 26 revenue, 297 and Seven Sons of National Defense, 162 and Trusted News Initiative, 139 work with Chinese researchers, 157, 393, 396 Microsoft Research, 31, 167 Microsoft Research Asia, 157–63, 165–67 Microsoft’s Asia-Pacific R&D Group, 161 Middlebury Institute, 124 military AI adoption, 35–37, 219–26 applications, 191–94 military capabilities, 47 military-civil fusion, 5, 95, 161–63 military competition, 304 military forces cognitization, 265 military organization, 278–79 military power, potential, 13 military tactics, future, 277 Ministry of Industry and Information Technology, 87 Ministry of Public Security, 87, 89–90, 158 Ministry of Public Security (MPS), 95 Ministry of Science and Technology, 172, 173 Minneapolis police, 143 minority identification technology, 88–89 “Minority Report, The” (Dick), 81 MIRI (Machine Intelligence Research Institute), 298 Missile Defense Agency, 218 MIT, See Massachusetts Institute of Technology (MIT) MITRE, 250 MIT-SenseTime Alliance on Artificial Intelligence, 156 MIT Technology Review, 93, 159 mobile devices, 18 Mock, Justin “Glock,” 2 model inversion attacks, 247 Modulate, 135–36, 138 monitoring and security checkpoints, 80 Moore’s law, 26, 28, 325 Morocco, 109 Mozur, Paul, 101, 102 MPS Key Lab of Intelligent Voice Technology, 95 MQ-9 Reaper, 53 Mulchandani, Nand, 207, 214, 217 multimodal models, 295–96 multiparty game theory, 50 mutism, 128 Mutsvangwa, Christopher, 105 NASA (National Aeronautics and Space Administration), 40, 72, 220 national AI research cloud, 32 National Artificial Intelligence Initiative Act of 2020, 32 National Artificial Intelligence Research Resource, 32 National Defense Education Act, 33 National Defense Strategy, 52 National Development and Reform Commission, 88 National Geospatial-Intelligence Agency (NGA), 56 National Institute of Standards and Technology, 40 National Institutes of Health, 
40 National Instruments, 162 National Intelligence Law, 95, 174 National New Generation Artificial Intelligence Governance Expert Committee, 172 National Oceanic and Atmospheric Administration (NOAA), 40, 204 national power, 13, 318 National Robotics Engineering Center (NREC), 193 National Science Advisory Board for Biosecurity, 123 National Science Foundation, 40 National Security Agency, 216 National Security Commission on AI, 33, 39, 73, 186, 250, 252, 258 National Security Law, 95, 174 national security vulnerabilities, 239 National University of Defense Technology (NUDT), 157, 161 NATO, 287 natural language processing, 206 Nature (journal), 123 nature of war, 280–84 Naval Air Station Patuxent River, 220 Naval Research Laboratory, 162 Naval War College, 219 negative G turns, 249 Netherlands, 158, 181, 187 NetPosa, 156, 391 Neural Information Processing Systems, 232 neural networks, 19f, 25 applications, 54 badnets, 246 and Deep Learning Analytics, 210 explainability, 236–37 failure modes, 232–34, 250 and Heron Systems, 220 training, 19 NeurIPS, 30 Neuromancer (Gibson), 101 “New Generation Artificial Intelligence Development Plan,” 71, 169 New H3C Technologies, 157 “new oil,” 11–17 news articles, bot-generated, 118 new technologies, 255–56 new technologies, best use of, 191–92 New York Times, 31, 118, 125, 138, 290 NGA (National Geospatial-Intelligence Agency), 56 Nieman Journalism Lab, 145 1984 (Orwell), 97–98, 103 NIST (National Institute of Standards and Technology), 91 Nixon, Richard, and administration, 68 NOAA (National Oceanic and Atmospheric Administration), 40, 204 Nokia Bell Labs, 157 Normandy, France, 46 North Korea, 50, 117–18 Northrop Grumman, 57, 211, 216 NREC (National Robotics Engineering Center), 193 nuclear war, 288 nuclear weapons, 11, 50 NUDT (National University of Defense Technology), 157, 161 NVIDIA, 20, 28–29, 32, 120, 156, 246, 294, 390–91 Obama, Barack, and administration, 70, 71, 73, 137 object recognition and classification, 55–58 
Office of Inspector General (OIG), 216 Office of Naval Research, 157 Office of Responsible AI, 159 Office of Technology Assessment, 162 OIG (Office of Inspector General), 216 oil, 20–21; See also “new oil” 160th Special Operations Aviation Regiment, 207 OpenAI, 26, 117–20, 122–25, 272, 294, 295–97, 299; See also GPT-2 (language model); GPT-3 (language model) OpenAI Five, 268, 270–71 Operation RYaN, 445; See also RYaN; VRYAN Oracle, 215–18, 224 Orwell, George, 97–98, 103 Osprey tiltrotor aircraft, 255 O’Sullivan, Liz, 60–61, 63, 65 OTA (other transaction authority), 217 outcomes of AI, 299–301 of war, 282–83 Owen, Laura Hazard, 145 Oxford Internet Institute, 141 Pakistan, 107, 142 Palantir, 109 PaLM, 294–95 Pan, Tim, 160, 161, 163 Papernot, Nicolas, 239 Pappas, Mike, 135–38, 140 Paredes, Federico, 250 Parler, 149 Partnership on AI, 132 patches, adversarial, 241–42, 242f Patrini, Giorgio, 130, 132–34, 137, 140 Patriot air and missile defense system, 253 Payne, Kenneth, 273–74 Pelosi, Nancy, 76, 128 Pence, Mike, 295 pension funds, 157 People’s Liberation Army (PLA); See also military-civil fusion affiliated companies, 166–67 and drone pilots, 222–23 researchers funded by, 158, 164 Percent Corporation, 107 Percipient.AI, 224 Perimeter, 289; See also Dead Hand Persian Gulf War, 46, 318 Personal Information Protection Law, 174, 176 pharmaceuticals, 251 phenotyping, DNA, 90 Philippines, 109 phones, 89 phone scanners, 89 photoresist, 182 phylogenic tree, 211 physical adversarial attacks, 242f, 243f, 429 Pichai, Sundar, 62 Pittsburgh, Pa., 44, 193 Pittsburgh Supercomputing Center, 44 PLA, See People’s Liberation Army Pluribus, 50, 51 poisonous animal recognition, 211 poker, 43–44, 46–48, 50, 266–67, 269–73, 335 Poland, 108 Police Audio Intelligent Service Platform, 95 Police Cloud, 89–90 policy analysis, automated, 206 Politiwatch, 124 pornography, 121, 130 Portman, Rob, 37 Poseidon, 289; See also Status-6 post-disaster assessment, 204 power metrics, 13 Prabhakar, Arati, 
210 prediction systems, 287–88 predictive maintenance, 196–97, 201 Price, Colin “Farva,” 3 Primer (company), 224 Princeton University, 156, 157 Project Maven, 35–36, 52–53, 56–59, 194, 202, 205, 224; See also Google-Maven controversy Project Origin, 138 Project Voltron, 195–99 Putin, Vladimir, 9, 131, 304–5 Q*bert, 235 Quad summit, 76 Qualcomm Ventures, 157 Quantum Integrity, 132 quantum technology, 37 “rabbit hole” effect, 145 race to the bottom on safety, 286, 289, 304 radar, synthetic aperture, 210 Rahimi, Ali, 232 Raj, Devaki, 202, 207, 213, 224 Rambo (fictional character), 130 RAND Corporation, 252 ranking in government strategy, 40 Rao, Delip, 120, 123 Rather, Dan, 143 Raytheon, 211 reaction times, 272–73 real-time computer strategy games, 267–69 real-world battlefield environments, 264 situations, 230–36 Rebellion Defense, 224 Reddit, 140 reeducation, 81 Reface app, 130 reinforcement learning, 221, 232, 243, 250 repression, 81, 175–77 research and development funding, 35–39, 36f, 38f, 39f, 333–34 Research Center for AI Ethics and Safety, 172 Research Center for Brain-Inspired Intelligence, 172 research communities, 327 responsible AI guidelines, 252 Responsible Artificial Intelligence Strategy, 252 résumé-sorting model, 234 Reuters, 95, 139 Rise and Fall of the Great Powers, The (Kennedy), 12 risk, 271, 290–93 robotic nuclear delivery systems, 289 robotic process automation tools, 206 robotic vehicles, 266 robots, 92–94, 265–66, 286 Rockwell Automation, 162 Rockwell Collins, 193 Romania, 108 Root, Phil, 231 Roper, Will, 55–56, 214, 224, 225, 257 Rubik’s cube, 26 rule-based AI systems, 230, 236 Rumsfeld, Donald, 61 Russia, 12, 40, 52, 108, 110 bots, 142 cyberattacks of, 246 disinformation, 122 invasion of Ukraine, 129, 196, 219, 288 nuclear capabilities, 50 submarines, 255 Rutgers University Big Data Laboratory, 156 RYaN (computer program), 287, 445; See also Operation RYaN; VRYAN safe city technology, 107–8 safety of AI, 286, 289, 304 Samsung, 27–29, 179, 
181 Sandholm, Tuomas, 43–51 Sasse, Ben, 184 satellite imagery, 56 Saudi Arabia, 40, 107, 109, 141–42 Scale AI, 224 scaling of innovation, 224 Schatz, Brian, 37 schedule pressures, 254–55 Schmidt, Eric, 39, 40, 71–73, 150, 164–65 Schumer, Chuck, 39 Science (journal), 123 Seagate, 156, 390 security applications, 110–11, 315 security dilemma, 50–51, 289 Sedol, Lee, 23, 266, 274–75, 298 self-driving cars, 23, 65 semiconductor industry; See also semiconductors in China, 178–79 chokepoints, 180–81 export controls, 181–86 global chokepoints in, 178–87 globalization of, 27–29 international strategy, 186–87 in Japan, 179 supply chains, 26, 76, 300 in U.S., 179–80 Semiconductor Manufacturing International Corporation (SMIC), 178, 181, 184 semiconductor(s) fabrication of, 32 foundries, 27–28 improvements in, 325 manufacturing equipment, 179 market, 27 as strategic asset, 300 Seminar on Cyberspace Management, 108–9 SenseNets, 91, 156, 357 SenseTime, 37, 88–89, 91, 156, 160, 169, 353–54, 357, 388 SensingTech, 88 Sensity, 130–33 Sentinel, 132 Sequoia, 157 Serbia, 107, 110 Serelay, 138 servicemember deaths, 255 Seven Sons of National Defense, 161–62 “shallow fakes,” 129 Shanahan, Jack on automated nuclear launch, 289 on international information sharing, 258, 291–92 and JAIC, 66, 201, 203, 205–6, 214 and Project Maven, 57–58 on risks, 254, 256 Sharp Eyes, 88, 91 Shenzhen, China, 37 Shield AI, 66, 196, 222, 224 shortcuts, 254–56 Silk Road, 110 SIM cards, 80, 89 Singapore, 106, 107, 158 singularity in warfare, 279–80 Skyeye, 99 Skynet, 87–88, 90, 91 Slashdot, 120 Slate, 120 smartphones, 26, 80 SMIC (Semiconductor Manufacturing International Corporation), 178, 181, 184 Smith, Brad, 159, 163, 166, 167 social app dominance, 149–50 social credit system, 99–100 social governance, 97–104 social media, 126, 141–51 socio-technical problems, 65 soft power, 317 SOFWERX (Special Operations Forces Works), 214 SolarWinds, 246 South Africa, 107 South China Sea militarization, 71, 74 South Korea, 
27, 40, 182, 185, 187 Soviet Union, 287, 289, 447 Spain, 40, 107 SparkCognition, 66, 224 Spavor, Michael, 177 Special Operations Command, 218 Special Operations Forces Works (SOFWERX), 214 speech recognition, 91 “Spider-Man neuron,” 295 Springer Nature, 158 Sputnik, 33, 71–72 Stability AI, 125, 295 stability, international, 286–93 Stable Diffusion, 125, 139, 295 Stallone, Sylvester, 130 Stanford Internet Observatory, 139 Stanford University, 31, 32, 57, 162 Starbucks, 92 StarCraft, 180, 298 StarCraft II, 267, 271, 441 Status-6, 289; See also Poseidon Steadman, Kenneth A., 192 STEM talent, 30–34 sterilization and abortion, 81 Strategic Capabilities Office, 56 strategic reasoning, 49 Strategy Robot, 44–45, 49, 51 Strike Hard Campaign, 79–80 Stuxnet, 283 subsidies, government, 179–80 Sullivan, Jake, 186 Sun Tzu, 45 superhuman attentiveness, 269–70 superhuman precision, 270 superhuman reaction time, 277 superhuman speed, 269, 271 supervised learning, 232 supply chain(s), 300 attacks, 246 global, 76, 179, 183 “Surprising Creativity of Digital Evolution, The,” 235 surveillance, 79–90 cameras, 6, 86–87, 91 laws and policies for, 108–9 throughout China, 84–90 in Xinjiang, 79–83 Sutskever, Ilya, 210 Sutton, Rich, 299, 455 swarms and swarming, 277–79 autonomous systems, 50, 220 demonstrations, 257 Sweden, 108, 158, 187 Switch-C, 294 Synopsys, 162 synthetic aperture radar, 210 synthetic media, 127–34, 138–39 criminal use, 128–29 deepfake detectors, 132–33 deepfake videos, 130–32 geopolitical risks, 129–30 watermarks, digital, 138–39 Syria, 58 system integration, 91 tactics and strategies, 270 Taiwan, 27, 71, 76, 100, 175, 178, 185–86 Taiwan Semiconductor Manufacturing Company (TSMC), 27–28, 179, 181, 184 Taiwan Strait, 71, 75–76 talent, 30–34, 304 Tang Kun, 393 tanks, 192 Tanzania, 109 targeting cycle, 263 target recognition, 210 Target Recognition and Adaptation in Contested Environments (TRACE), 210–12 Tay, chatbot, 247 TDP (thermal design power), 454 TechCrunch, 120 
technical standards Chinese, 171–75 international, 169–71 techno-authoritarianism, 79–110, 169 China’s tech ecosystem, 91–96 global export of, 105–10, 106f social governance, 97–104 throughout China, 83–90 in Xinjiang, 79–83 technology ecosystem, Chinese, 91–96 platforms, 35 and power, 11 transfer, 33, 163–64 Tektronix, 162 Tencent, 37, 143, 160, 169, 172 Tensor Processing Unit (TPU), 180 Terregator, 193 Tesla, 65, 180 TEVV (test and evaluation, verification and validation), 251–52 Texas Instruments, 162 text generation, 117–21, 123 text-to-image models, 125, 295 Thailand, 107, 109 thermal design power (TDP), 454 Third Offset Strategy, 53, 61 “Thirteenth Five-Year Science and Technology Military-Civil Fusion Special Projects Plan,” 73 Thousand Talents Plan, 32, 164 “Three-Year Action Plan to Promote the Development of New-Generation AI Industry,” 73 Tiananmen Square massacre, 68, 97–98, 103, 148, 160, 341, 359 tic-tac-toe, 47, 336 TikTok, 146–49 Tortoise Market Research, Inc., 15, 40 TPU (Tensor Processing Unit), 180 TRACE (Target Recognition and Adaptation in Contested Environments), 210–12 Trade and Technology Council (TTC), 187 training costs, 296–97 training datasets, 19–23 attacks on, 238–40, 244–45 of drone footage, 203 “radioactive,” 139 real world environments, vs., 58, 64, 233, 264 size of, 294–96 transistor miniaturization, 28 transparency among nations, 258–59, 288 Treasury Department, 246 Trump, Donald, and administration; See also “Donald Trump neuron” budget cuts, 39–40 and COVID pandemic, 74 and Entity List, 166 GPT-2 fictitious texts of, 117–19 graduate student visa revocation, 164 and Huawei, 182–84 and JEDI contract, 215–16 national strategy for AI, 73 relations with China, 71 and TikTok, 147 Twitter account, 150 trust, 249–53 Trusted News Initiative, 138–39 “truth,” 130 Tsinghua University, 31, 93, 173, 291 TSMC, See Taiwan Semiconductor Manufacturing Company (TSMC) TTC (Trade and Technology Council), 187 Turkey, 107, 108, 110 Turkish language, 
234 Twitter, 139–40, 142, 144, 149, 247 Uganda, 108, 109 Uighurs; See also Xinjiang, China facial recognition, 88–89, 158, 353–55 genocide, 79, 304 mass detention, 74, 79–81, 102, 175 speech recognition, 94 surveillance, 82, 155–56 Ukraine, 108, 129, 196, 219, 288 United Arab Emirates, 107, 109 United Kingdom, 12, 76, 108, 122, 158, 187, 191–92 United States AI policy, 187 AI research of, 30 Chinese graduate students in, 31 competitive AI strategy, 185 United States Presidential election, 2016, 122 United States Presidential election, 2020, 128, 131, 134, 150 University of Illinois, 157 University of Richmond, 123 Uniview, 89, 355 unsupervised learning, 232 Ürümqi, 80, 84 Ürümqi Cloud Computing Center, 156 U.S.

pages: 289 words: 86,165

Ten Lessons for a Post-Pandemic World
by Fareed Zakaria
Published 5 Oct 2020

We’re currently in the foothills of each of these futures, but it is unclear which one lies ahead.

ONLY HUMAN

The decline of work is a massive problem, but even if we can solve it, AI confronts us with an even larger one: Will we lose control of the machines? The crucial shift that is taking place right now is from “weak” or “narrow” AI to “strong” or “general” AI. In the first, a machine is programmed to complete a specific task—say, win a game of chess—which it then does superbly. The second is the broader development of the kind of intelligence that can think creatively and make judgments. That leap in cognitive capacity would be a watershed moment for AI.

In the end, the human, David Bowman, was able to outsmart the machine—but in real life it seems far more likely that the opposite would happen. That is why Bill Gates, Elon Musk, and a slew of other luminaries, usually optimistic about technology, have echoed the warnings of Oxford philosopher Nick Bostrom: they now worry that the development of general AI could threaten the human species itself. AI-powered computers are already black boxes. We know that they get to the right answer, but we don’t know how or why. What role does that leave for human judgment? Henry Kissinger has asked whether the rise of artificial intelligence will mean the end of the Enlightenment.

pages: 279 words: 85,453

Breaking Twitter: Elon Musk and the Most Controversial Corporate Takeover in History
by Ben Mezrich
Published 6 Nov 2023

Seeing the internet—a place of connectivity and diversity—as an escape from her own past, she’d thrown herself into the world of entrepreneurship, eventually moving to San Francisco to work for various social media–adjacent companies such as Circle and Lyft. Her first attempt at her own company, a short-form video product called Glimpser, had gone under when Vine was acquired by Twitter; her next company, Squad, began as a generative AI endeavor before generative AI was hot. A blowout with her cofounder had led her to pivot the company into a screen-sharing social app, inspired by her seven-year-old daughter’s eagerness to find some way to chat with her friends while she played Roblox. Toward the end of 2020, Esther had been contacted out of the blue by Discord with an offer to buy her start-up.

pages: 209 words: 81,560

Irresistible: How Cuteness Wired our Brains and Conquered the World
by Joshua Paul Dale
Published 15 Dec 2023

They are rather like the puppet masters in traditional Japanese theatre who appear onstage wearing black robes and hoods; the audience is able to disregard their presence and enjoy the puppets as if they are alive, without caring about who the puppeteers really are.45 Soon the issue of who is controlling a virtual avatar may become more complicated. Generative AI – the technology behind chatbots that can carry on a conversation and answer questions, as well as generate original images – enables the ‘birth’ of AI VTubers. Cute avatars, whether human or animal, could offer both entertainment and companionship as AI interactive personalities available twenty-four hours a day.

Lawrence Convention Center, Pittsburgh 1 Degas, Edgar 1, 2 depression 1 Diamond, Jared 1 Disney 1, 2, 3, 4, 5, 6, 7, 8 Disney, Walt 1, 2 DNA 1, 2, 3, 4, 5, 6, 7 dolls baby shows and 1 Buddhist temples and 1 Children’s Games and 1 clockwork automata 1 cult of Japan and 1, 2, 3 Doll Festival, The (hina matsuri) 1, 2 Kewpie 1, 2 Nakahara and 1 Raggedy Ann 1 saucy sense of humour, American manufacturers and 1 Shirley Temple 1 Sigourney and 1, 2 teddy bears and 1 The Pillow Book and 1 dogs bond with humans 1 burials 1 domestication 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 eyes 1 fox and 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 fur 1 Hachiko 1, 2, 3, 4 Japanese art and 1, 2, 3, 4, 5, 6, 7, 8 lapdogs 1, 2 marginalia and 1 neoteny 1 pet-adoption theory and 1, 2, 3 puppies 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 robot 1, 2 rubbish-dump theory and 1, 2, 3, 4 selective breeding and cuteness in 1 domestication alpha male, myth of 1 bonding and 1 dogs see dogs domestication syndrome 1, 2, 3, 4, 5, 6 ears, floppy and 1, 2, 3, 4, 5, 6, 7 eyes and 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 fox and see fox friendliness gene and 1 fur and 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 gazing, mutual and 1, 2, 3, 4 head shape and 1, 2, 3, 4, 5, 6, 7, 8, 9 head size and 1, 2, 3, 4 instinctual releasing mechanism 1 Island Rule and 1 jaws, shortening of and 1, 2, 3, 4, 5 language and 1 neotony and see neotony neural crest and 1, 2, 3 pet-adoption theory 1, 2, 3 rubbish-dump theory 1, 2, 3, 4 self-domestication hypothesis 1, 2, 3, 4 Siberian silver fox experiment 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 social imprinting and 1, 2, 3 socialisation period 1, 2, 3, 4 teeth and 1, 2, 3, 4, 5, 6, 7, 8, 9 unconscious selection 1, 2 wolves and 1 zebras and 1 Donatello 1 Dugatkin, Lee Alan: How to Tame a Fox (and Build a Dog) 1 ears 1, 2, 3, 4, 5, 6 floppy 1, 2, 3, 4, 5, 6, 7 Edo, Japan 1 See also Tokyo Edo period 1, 2, 3, 4 Emerald City Comic Con 1 emoji 1, 2, 3 Enlightenment 1, 
2, 3 Eros 1 Etruscan wolf (Canis etruscus) 1 Europe, development of cuteness in 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 evolution ‘A Biological Homage to Mickey Mouse’ and 1, 2, 3, 4, 5, 6 alpha male, myth of 1 bonding and 1 Darwin and see Darwin, Charles dogs see dogs domestication and see domestication eyes and 1 imprinting and 1, 2, 3 language and 1 Lorenz and 1, 2, 3 March of Progress 1 Neanderthals and 1, 2, 3, 4 neoteny and see neoteny neural crest and see neural crest newborn babies and 1 pet-adoption theory 1, 2, 3 rubbish-dump theory 1, 2, 3, 4 self-domestication hypothesis and 1, 2, 3, 4 socialisation period 1, 2, 3, 4 specialists in non-specialisation 1 wolf see wolf eyes babies and 1 cooperative eye hypothesis 1 domestication and 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 elongated 1, 2 eyebrows 1, 2, 3, 4 eye contact/gazing, mutual 1, 2, 3, 4, 5 ‘Goo-goo’ 1 Lorenz’s child schema and 1, 2 manga and 1 Nakahara and 1 Pikachus 1, 2 position of 1, 2 robot 1, 2, 3 size 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 white sclera 1, 2, 3, 4, 5, 6 Yumeji style and 1 faces temperature of 1 widening of 1, 2, 3, 4, 5, 6 fan, folding 1, 2, 3, 4, 5, 6, 7 fan conventions 1, 2, 3, 4 fancy goods 1, 2 fashion androgynous look and 1, 2 Cutequake and 1 Harajuku style 1, 2 Lolita 1, 2, 3, 4 schoolgirl 1, 2, 3 tribes 1 Felix the Cat 1 fight or flight response 1 Finnish Food Authority 1 First World War (1914–18) 1 flirtation 1, 2, 3, 4, 5 Follen, Eliza Lee: ‘Three Little Kittens’ fox affiliative behaviour/social bonds and 1, 2 burial, earliest fox-human 1 domestication syndrome and 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 facial ‘expression’, cuteness of 1 people and, history of relationship 1, 2 Scroll of Frolicking Animals and 1, 2 Siberian silver fox experiment 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 sly, cunning behaviour 1 Zaō Fox Village 1, 2 Freud, Sigmund 1 friendliness 1, 2, 3, 4, 5, 6, 7, 8 Fujioka, Shizuka 1 fur 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 
11, 12, 13 piebald 1, 2, 3 furries 1, 2, 3 Furscience (International Anthropomorphic Research Project) 1 fursona 1, 2 fursuits 1, 2, 3, 4, 5 Gainsborough, Thomas: Blue Boy 1, 2, 3, 4 Galton, Francis 1 gazing, mutual 1, 2, 3, 4 geese 1, 2 Generative AI 1 genetics, domestication and 1, 2, 3, 4, 5, 6, 7. See also domestication Ginsburg, Benson 1 girls avatars 1 ‘Daddy’s girl’ 1, 2 dolls and 1, 2, 3 emoji and 1 Japanese novels/illustrated magazines and 1, 2, 3, 4, 5, 6 kawaii and 1, 2, 3, 4 Lolita fashion subculture 1, 2, 3, 4 ‘magical girls’ 1 manga and 1, 2, 3 media focus on girls’ culture 1 mischievousness becomes associated with coquettishness 1, 2 portraits of 1, 2 schoolgirl image see schoolgirl Shirley Temple and image of 1 stereotypes of Japanese 1, 2, 3, 4, 5 Teddy Girl 1, 2 Uneeda Biscuit girl 1 Göbekli Tepe site, Turkey 1 golden jackal 1 Gold Dust Twins 1 Good Housekeeping 1 Google 1 Gotokuji Temple, Tokyo 1, 2 Gould, Stephen Jay: ‘A Biological Homage to Mickey Mouse’ 1, 2, 3, 4, 5, 6 Great Tokyo Earthquake (1924) 1 Gubar, Marah 1 Hagio, Moto 1, 2; They Were Eleven 3 haiku 1 halftone screenprinting 1 Harajuku, Tokyo 1, 2, 3, 4 Hare, Brian 1, 2 Harper’s Weekly 1, 2 Hartley, David 1 Harvard University 1 Having Fun Again Today!

pages: 524 words: 154,652

Blood in the Machine: The Origins of the Rebellion Against Big Tech
by Brian Merchant
Published 25 Sep 2023

We shouldn’t be surprised when they refuse to accept that it’s their duty to lay down and die, to again paraphrase Frank Peel. Other reports about the rise of the robots are more like self-fulfilling prophecies: In 2023, a Goldman Sachs study estimated that 300 million jobs worldwide were at risk of being taken over by generative AI systems—suggesting to those in a position to purchase such technology that it was high time to do so. Organizations like the International Monetary Fund, consultancies like Accenture and Deloitte, and professional services firms like PwC all issue future of work reports forecasting mass disruption and major economic growth.

Their lives have been disrupted, and now their work regimes are dictated by often inscrutable algorithms that make them compete against one another for every gig. This new generation of algorithmically arranged, precarious work structures is permeating even more of the economy—it’s already happening to lawyers, writers, emergency responders, even security forces. And in many cases, the rise of generative AI services stands to accelerate the process. Screenwriters went on strike in 2023 in part to prevent studios from using AI to generate scripts and eroding pay rates. Meanwhile, right-to-work laws make organizing much harder, and measures like Prop 22 prevent gig workers from finding stable employment, receiving benefits, or forming a union.

Individual entrepreneurs and large corporations and next-wave Frankensteins are allowed, even encouraged, to dictate the terms of that deployment, with the profit motive as their guide. Venture capital may be the radical apotheosis of this mode of technological development, capable as it is of funneling enormous sums of money into tech companies that can decide how they would like to build and unleash the products and services that shape society. Take the rise of generative AI. Ambitious start-ups like Midjourney, and well-positioned Silicon Valley companies like OpenAI, are already offering on-demand AI image and prose generation. DALL-E spurred a backlash when it was unveiled in 2022, especially among artists and illustrators, who worry that such generators will take away work and degrade wages.

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
by Adrienne Mayor
Published 27 Nov 2018

Intelligence, or mind, displayed by artificial life or machines, analogous to the natural intelligence of animals and humans; capable of perceiving its environment and taking action. AI mimics cognitive functions associated with mind, such as learning and problem solving. “Narrow AI” allows a machine to carry out specific tasks, while “general AI” is a machine with “all-purpose algorithms” to carry out intellectual tasks that humans are capable of, with abilities to reason, plan, “think” abstractly, solve problems, and learn from experience. AI can also be classified by types: Type I machines are reactive, acting on what they have been programmed to perceive at the present, with no memory or ability to learn from past experience (examples include IBM’s Deep Blue chess computer, Google’s AlphaGo, and the ancient bronze robot Talos and the self-moving tripods in the Iliad).

Human-computer interface and thought-controlled machines, Zarkadakis 2015; “The Next Frontier: When Thoughts Control Machines” 2018. The Golden Maidens would appear to be Type III AI; see glossary. On black box dilemmas, see “AI in Society: The Unexamined Mind” 2018. 32. Mendelsohn 2015. Cf. Paipetis 2010, 110–12. 33. Big data, AI, and machine learning, Tanz 2016; see also Artificial Intelligence, “general AI,” in the glossary. 34. “Magic is linked to science in the same way as it is linked to technology. It is not only a practical art, it is also a storehouse of ideas,” Blakely 2006, 212. Maldonado 2017 reports that the sex robot-companion called “Harmony,” made by Realbotix for Abyss Creations, was endowed with a “data dump”: she is programmed with about five million words, the entirety of Wikipedia, and several dictionaries. 35.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

The attraction to military minds is obvious—games are adversarial, and the goal is to win. The differences, however, are also profound, as we’ll see. 2015 saw the public arrival of DeepMind, a relative British newcomer to AI research, newly acquired by Google. DeepMind’s founder Demis Hassabis had trained in neuroscience, and he was explicit: DeepMind intended to create ‘general’ AI, with the attributes of human intelligence. Its first landmark breakthrough was an eighties throwback: classic Atari arcade games. The scoreboard in Space Invaders is an ideal motivator for reinforcement learning. Like dopamine in the brain of a teenage arcade goer, the network responded to the reward of a higher score—pruning its connections accordingly.10 Combine that with a ConvNet that would capture what was happening on the screen, and the AI was all set to play a mean pinball, or rather Space Invaders.
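The reward-driven loop Payne describes — act, observe the score, strengthen whatever led to a higher one — is reinforcement learning. A minimal tabular Q-learning sketch on a toy corridor illustrates the idea (illustrative only; DeepMind's Atari agent used a deep convolutional network reading pixels, not a lookup table, and all names and parameters here are invented for the example):

```python
import random

# Toy corridor: states 0..4, reward only on reaching state 4.
# Actions: 0 = left, 1 = right. Episode ends at the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value estimates
for _ in range(500):                         # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge toward reward + discounted best future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [q.index(max(q)) for q in Q[:GOAL]]   # greedy action per state
print(policy)
```

After training, the greedy policy heads right in every state, exactly as the score in Space Invaders "prunes" the network toward high-reward play.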

New technologies are emerging right now: hypersonic cruise missiles and loitering mini-warheads; nuclear powered torpedoes, able to prowl the deep almost indefinitely; solid state laser guns, capable of downing unmanned drones as they approach ships at sea. AI is integral to all these exotica. Do you regulate by weapon type, or by the underlying AI technology? Doing either will be tricky. There’s great variety in the physical platforms, but also huge variation in the underlying code and in the capabilities it generates. AI isn’t a single technology, like a nuclear bomb. The basic science of a thermonuclear bomb is functionally identical no matter who makes it, which constrains the variation from one model to another. Not so for AI code. There are a bewildering variety of approaches to Artificial Intelligence, and even considerable variation in what the term means to different people.

pages: 404 words: 92,713

The Art of Statistics: How to Learn From Data
by David Spiegelhalter
Published 2 Sep 2019

But again we should emphasize that these are technological systems that use past data to answer immediate practical questions, rather than scientific systems that seek to understand how the world works: they are to be judged solely on how well they carry out the limited task at hand, and, although the form of the learned algorithms may provide some insights, they are not expected to have imagination or have super-human skills in everyday life. This would require ‘general’ AI, which is both beyond the content of this book and, at least at present, beyond the capacity of machines. Ever since formulae for calculating insurance and annuities were developed by Edmund Halley in the 1690s, statistical science has been concerned with producing algorithms to help in human decisions.
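Halley's 1693 annuity pricing combined a life table with compound discounting: an annuity paying 1 per year is worth the sum, over future years, of the probability of surviving to that year times the discount factor for that year. A sketch with made-up survival probabilities (not Halley's actual Breslau life table):

```python
# Price a life annuity paying 1 at the end of each year the annuitant survives.
# survival[t-1] = probability of still being alive t years from now.
# Illustrative numbers only, not Halley's Breslau data.
def annuity_value(survival, interest_rate):
    v = 1.0 / (1.0 + interest_rate)          # one-year discount factor
    return sum(p * v ** t for t, p in enumerate(survival, start=1))

survival = [0.98, 0.95, 0.91, 0.86, 0.80]    # five-year horizon
price = annuity_value(survival, 0.06)        # Halley worked at 6 per cent interest
print(round(price, 4))
```

Five years of payments with these survival odds cost roughly 3.81 — less than the 4.71 a certain five-year annuity would cost, which is exactly the mortality discount Halley quantified.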

Many of the challenges listed above come down to algorithms only modelling associations, and not having an idea of underlying causal processes. Judea Pearl, who has been largely responsible for the increased focus on causal reasoning in AI, argues that these models only allow us to answer questions of the type, ‘We have observed X, what do we expect to observe next?’ Whereas general AI needs a causal model for how the world actually works, which would allow it to answer human-level questions concerning the effect of interventions (‘What if we do X?’), and counterfactuals (‘What if we hadn’t done X?’). We are a long way from AI having this ability. This book emphasizes the classic statistical problems of small samples, systematic bias (in the statistical sense) and lack of generalizability to new situations.

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

But—and this is really important—there are two completely different things people mean today when they talk about artificial intelligence. There is “narrow AI” and there is “general AI.” The kind of AI we have today is narrow AI, also known as weak AI. It is the only kind of AI we know how to build, and it is incredibly useful. Narrow AI is the ability for a computer to solve a specific kind of problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). Although the terms are interchangeable, I will use AGI from this point forward to refer to an artificial intelligence as smart and versatile as you or me.

pages: 442 words: 94,734

The Art of Statistics: Learning From Data
by David Spiegelhalter
Published 14 Oct 2019

But again we should emphasize that these are technological systems that use past data to answer immediate practical questions, rather than scientific systems that seek to understand how the world works: they are to be judged solely on how well they carry out the limited task at hand, and, although the form of the learned algorithms may provide some insights, they are not expected to have imagination or have super-human skills in everyday life. This would require ‘general’ AI, which is both beyond the content of this book and, at least at present, beyond the capacity of machines. Ever since formulae for calculating insurance and annuities were developed by Edmund Halley in the 1690s, statistical science has been concerned with producing algorithms to help in human decisions.

Systems such as Predict, which previously would be thought of as statistics-based decision-support systems, might now reasonably be called AI.fn7 Many of the challenges listed above come down to algorithms only modelling associations, and not having an idea of underlying causal processes. Judea Pearl, who has been largely responsible for the increased focus on causal reasoning in AI, argues that these models only allow us to answer questions of the type, ‘We have observed X, what do we expect to observe next?’ Whereas general AI needs a causal model for how the world actually works, which would allow it to answer human-level questions concerning the effect of interventions (‘What if we do X?’), and counterfactuals (‘What if we hadn’t done X?’). We are a long way from AI having this ability. This book emphasizes the classic statistical problems of small samples, systematic bias (in the statistical sense) and lack of generalizability to new situations.
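Pearl's distinction can be made concrete with a simulation: when a hidden common cause drives both X and Y, the observational quantity P(Y | X seen) differs from the interventional P(Y | do(X)). A minimal sketch (a toy model invented for illustration, not an example from the book):

```python
import random

random.seed(1)

def sample(do_x=None):
    z = random.random() < 0.5                  # hidden common cause of X and Y
    # Intervening sets X directly, cutting its dependence on Z
    x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
    y = random.random() < (0.8 if z else 0.2)  # Y depends on Z only, never on X
    return x, y

N = 100_000
obs = [sample() for _ in range(N)]             # 'We have observed X...'
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

do1 = [sample(do_x=True) for _ in range(N)]    # '...what if we do X?'
p_y_do_x1 = sum(y for _, y in do1) / N

print(round(p_y_given_x1, 2), round(p_y_do_x1, 2))
```

Observationally Y follows X closely (about 0.74), yet forcing X leaves Y at its baseline rate (about 0.5): the association-only model answers the first question correctly and the second one wrongly.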

pages: 371 words: 98,534

Red Flags: Why Xi's China Is in Jeopardy
by George Magnus
Published 10 Sep 2018

Their focus is on key sectors, including advanced rail, ship, aviation and aerospace equipment, agricultural machinery and technology, low and new-energy vehicles, new materials, robotics, biopharmaceuticals and high-end medical equipment, integrated circuits, and 5G mobile telecommunications. Taken aback by AlphaGo’s victory, as noted earlier, China stepped up a few gears to formalise and launch nationally an ambitious AI strategy, already underway at the local government level. A year after the match, the State Council set out the Next Generation AI Development Plan with the goal of boosting China’s AI status, from being in line with competitors by 2020, to world-leading by 2025, and the world’s primary source by 2030. During this period, the industry is supposed to increase in value from RMB 1 trillion to RMB 10 trillion, or from $150 billion to $1.5 trillion.

–China Strategic and Economic Dialogue (i) Hua Guofeng (i) Huangpu district (Shanghai) (i) Huawei (i), (ii), (iii), (iv) hukou (i), (ii), (iii), (iv) Human Freedom Index (i) Human Resources and Social Security, Ministry of (i) Hunan (i) Hungary (i), (ii), (iii), (iv) ICORs (incremental capital-output ratios) (i), (ii), (iii) n4 IMF Article IV report (i) on broadening and deepening of financial system (i) China urged to devalue (i) China’s integration and (i) concern over smaller banks (i) concern over WMPs (i) credit gaps (i) credit intensity (i) GP research (i) ICOR (i) n4 laissez-faire ideas (i) pensions, healthcare and GDP research (i), (ii), (iii) Renminbi reserves (i) risky corporate loans (i) Special Drawing Rights (i), (ii), (iii), (iv), (v) WAPs (i) immigrants see migrants income inequality (i) India Adam Smith on (i) ASEAN (i) BRI misgivings (i) BRICS (i), (ii) comparative debt in (i) demographic dividend (i) economic freedom level (i) frictions with (i) Nobel Prize (i) pushing back against China (i) regional allies of (i) SCO member (i) Indian Ocean access to ports (i) African rail projects and (i) Chinese warships enter (i) rimland (i) shorelines (i) Indo-Pacific region (i), (ii) Indonesia Asian crisis (i) BRI investment (i) debt and GDP (i) GDP (i) rail transport projects (i) RCEP (i) retirement age (i) trade with China (i) Industrial and Commercial Bank of China (i), (ii) Industrial Revolution (i), (ii) industrialisation (i), (ii) Industry and Information Technology, Minister of (i) infrastructure (i), (ii), (iii), (iv) Initial Public Offerings (IPOs) (i) Inner Mongolia (i), (ii) innovation (i), (ii) Inquiry into the Nature and Causes of the Wealth of Nations (Adam Smith) (i) Institute for International Finance (i) institutions (i), (ii) insurance companies (i), (ii), (iii) intellectual property (i) interbank funding (i), (ii), (iii), (iv), (v) investment (i), (ii), (iii) Iran (i) Ireland (i), (ii), (iii) Iron Curtain (i) ‘iron rice bowl’ (i) Israel 
(i), (ii) Italy (i), (ii), (iii) Jakarta (i), (ii) Japan acts of aggression by (i) aftermath of war (i) ASEAN (i) between the wars (i) bond market (i) Boxer Rebellion and (i) Chiang Kai-shek fights (i) China and (i) China’s insecurity (i) credit gap comparison (i) dispute over Diaoyu islands (i), (ii) export-led growth (i), (ii) financial crisis (i) friction with (i) full-scale war with China (i), (ii) growth (i) high-speed rail (i) India and (i) Liaodong peninsula (i) Manchuria taken (i), (ii), (iii) Mao fights (i) middle- to high-income (i) migrants to (i) Okinawa (i) old-age dependency ratio (i) pensions, healthcare and GDP research (i) pushing back against China (i) RCEP (i) Renminbi block, attitude to (i) research and development (i) rimland (i) robots (i) seas and islands disputes (i) Shinzō Abe (i) TPP (i) trade and investment from (i) yen (i) Jardine Matheson Holdings (i) Jiang Zemin 1990s (i) Deng’s reforms amplified (i), (ii), (iii) influence and allies (i) Xiao Jianhua and (i) Johnson, Lyndon (i) Julius Caesar (i) Kamchatka (i) Kashgar (i) Kashmir (i) Kazakhstan (i), (ii) Ke Jie (i) Kenya (i) Keynes, John Maynard (i) Kharas, Homi (i) Kissinger, Henry (i), (ii), (iii), (iv) Korea (i), (ii), (iii) see also North Korea; South Korea Korean War (i), (ii) Kornai, János (i), (ii), (iii) n16 Kowloon (i), (ii) Krugman, Paul (i) Kunming (i) Kuomintang (KMT) (i), (ii) Kyrgyzstan (i) Kyushu (i) labour productivity (i) land reform (i) Laos (i), (ii), (iii) Latin America (i), (ii), (iii) Lattice Semiconductor Corporation (i) leadership (i) Leading Small Groups (LSGs) (i), (ii), (iii), (iv) Lee Kuan Yew (i) Lee Sodol (i) Legendary Entertainment (i) Lehman Brothers (i) lending (i) Leninism governance tending to (i) late 1940s (i) party purity (i) Xi’s crusade on (i), (ii) Lenovo (i), (ii) Lewis, Arthur (i) Lewis turning point (i) LGFVs (local government financing vehicles) (i) Li Keqiang (i), (ii) Liaodong peninsula (i), (ii) LinkedIn (i) Liu He (i), (ii), (iii) Liu 
Xiaobo (i) local government (i), (ii), (iii) London (i), (ii), (iii) Luttwak, Edward (i), (ii), (iii) Macartney, Lord George (i), (ii), (iii) Macau (i), (ii) Made in China 2025 (MIC25) ambitious plans (i) importance of (i) mercantilism (i) priority sectors (i) robotics (i) Maddison, Angus (i), (ii), (iii) n3 (C1) Maghreb (i) major banks see individual entries Malacca, Straits of (i) Malay peninsula (i) Malaysia ASEAN member (i) Asian crisis (i) high growth maintenance (i) Nine-Dash Line (i) rail projects (i), (ii) Renminbi reserves (i) TPP member (i) trade with (i) Maldives (i) Malthus, Thomas (i), (ii) Manchuria Communists retake (i) Japanese companies in (i) Japanese puppet state (i), (ii), (iii) key supplier (i) North China Plain and (i) Pacific coast access (i) Russian interests (i) targeted (i) Manhattan (i), (ii) see also New York Mao Zedong arts and sciences (i) China stands up under (i) China under (i) Communist Party’s grip on power (i) consumer sector under (i) Deng rehabilitated (i) Deng, Xi and (i) east wind and west wind (i) Great Leap Forward (i) industrial economy under (i) nature of China under (i) People’s Republic proclaimed (i) positives and negatives (i) property rights (i) women and the workforce (i) Xi and (i) Maoism (i) Mar-a-Lago (i) Mark Antony (i) Market Supervision Administration (i) Marshall Plan (i), (ii) Marxism (i), (ii), (iii), (iv) Mauritius (i) May Fourth Movement (i) McCulley, Paul (i) n18 Mediterranean (i) Menon, Shivshankar (i) mergers (i) MES (market economy status (ii)) Mexico completion of education rates (i) debt comparison (i) GDP comparison (i) NAFTA (i) pensions comparison (i) TPP member (i) US border (i) viagra policy (i) Middle East (i), (ii), (iii) middle-income trap (i), definition (i) evidence and argument for (i) governance (i) hostility to (i) hukou system (i) lack of social welfare for (i) low level of (i) migrant factory workers (i) patents and innovation significance (i) significance of technology tech strengths 
and weaknesses (i) total factor productivity focus (i) vested and conflicted interests (i) ultimate test (i) World Bank statistics (i) migrants (i), (ii), (iii), (iv), (v) Ming dynasty (i) Minsky, Hyman (i) mixed ownership (i), (ii) Modi, Narendra (i) Mombasa (i) monetary systems (i) Mongolia (i), (ii) Monogram (i) Moody’s (i) Morocco (i) mortality rates (i) see also population statistics mortgages (i) motor cars (i), (ii) Moutai (i) Mundell, Robert (i) Muslims (i) Mutual Fund Connect (i) Myanmar ASEAN (i) Chinese projects (i) disputes (i) low value manufacturing moves to (i) Qing Empire in (i) ‘string of pearls’ (i) ‘Myth of Asia’s Miracle, The’ (Paul Krugman) (i) NAFTA (North American Free Trade Agreement) (i) Nairobi (i) Namibia (i) Nanking (i) Treaty of (i), (ii) National Bureau of Statistics fertility rates (i) GDP figures (i) ICOR estimate (i), (ii), (iii) n4 SOE workers (i) National Cyberspace Work Conference (i) National Development and Reform Commission (i), (ii), (iii) National Financial Work Conferences (i) National Health and Family Planning Commission (i) National Medium and Long-Term Plan for the Development of Science and Technology (i) National Natural Science Foundation (i) National People’s Congress 2007 (i) 2016 (i) 2018 (i), (ii), (iii), (iv) National People’s Party of China (i) National Science Foundation (US) (i) National Security Commission (i) National Security Strategy (US) (i), (ii) National Supervision Commission (i), (ii), (iii), (iv) Needham, Joseph (i) Nepal (i), (ii) Netherlands (i) New Development Bank (i), (ii) New Eurasian Land Bridge (i) New Territories (i), (ii) New York (i) see also Manhattan New Zealand (i), (ii), (iii) Next Generation AI Development Plan (i) Nigeria (i) Nine-Dash Line (i) Ningpo (i) Nixon, Richard (i) Nobel Prizes (i), (ii) Nogales, Arizona (i) Nogales, Sonora (i) Nokia (i) non-communicable disease (i) non-performing loans (i), (ii), (iii), (iv), (v), (vi) North China Plain (i) North Korea (i) see also Korea 
Northern Rock (i) Norway (i) Nye, Joseph (i) Obama, Barack Hu Jintao and (i) Pacific shift recognised (i) Renminbi (i) US and China (i), (ii) OECD (Organisation for Economic Co-operation and Development) China’s ranking (i) GDP rates for pension and healthcare (i) GP doctors in (i) tertiary education rates (i) US trade deficit with China (i) Office of the US Trade Representative (i) Official Investment Assistance (Japan) (i) Okinawa (i) old-age dependency ratios (i), (ii), (iii) Olson, Mancur (i) Oman (i) one-child policy (i), (ii) Opium Wars financial cost of (i) First Opium War (i), (ii), (iii) Qing dynasty defeated (i) Oriental Pearl TV Tower, Shanghai (i) Pacific (i), (ii), (iii) Padma Bridge (i) Pakistan Economic Corridor (i) long-standing ally (i) Renminbi reserves (i) SCO member (i) ‘string of pearls’ (i) Paris (i) Party Congresses see numerical list at head of index patents (i) Peking (i), (ii), (iii) see also Beijing pensions (i) People’s Bank of China see also banks cuts interest rates again (i) floating exchange rates (i) lender of last resort (i), (ii) long term governor of (i) new rules issued (i) new State Council committee coordinates (i) places severe restrictions on banks (i) publishing Renminbi values (i) Renminbi/dollar rate altered (i) repo agreements (i) sells dollar assets (i) stepping in (i) Zhou Xiaochuan essay (i) People’s Daily front-page interview (i), (ii) on The Hague tribunal (i) riposte to Soros (i) stock market encouragement (i) People’s Liberation Army (i), (ii) Persia (i) Persian Gulf (i), (ii) Peru (i) Pettis, Michael (i) n12 Pew Research (i) Peyrefitte, Alain (i) Philippines (i), (ii), (iii), (iv) Piraeus (i) PISA (Programme for International Student Assessment) (i) Poland (i), (ii), (iii) ‘Polar Silk Road’ (i) Politburo (i), (ii), (iii), (iv) pollution (i) Polo, Marco (i) Pomeranz, Kenneth (i) population statistics (i) see also ageing trap; WAP (working-age population) consequences of ageing (i) demographic dividends (i), (ii) 
hukou system and other effects (i) low fertility (i), (ii), (iii) migrants (i), (ii) old-age dependency ratios (i), (ii), (iii) one-child policy (i), (ii) places with the most ageing populations (i) rural population (i) savings trends (i) technology and (i) under Mao (i) women (i) Port Arthur (i) Port City Colombo (i), (ii) Portugal (i), (ii), (iii), (iv) pricing (i), (ii) private ownership (i), (ii) productivity (i), (ii) Propaganda, Department of (i) property (i) property rights (i) Puerto Rico (i) Punta Gorda, Florida (i) Putin, Vladimir (i) Qianlong, Emperor (i) Qing dynasty (i), (ii), (iii) Qingdao (i) Qualcomm (i) Qualified Domestic Institutional Investors (i), (ii) Qualified Foreign Institutional Investors (i), (ii) Qiushi, magazine (i) rail network (i), (ii) RCEP (Regional Comprehensive Economic Partnership) (i), (ii), (iii) real estate (i), (ii) reform authoritative source warns of need for (i), (ii) different meaning from West (i) of economy via rebalancing (i), (ii) as embraced by Deng Xiaoping (i) fiscal, foreign trade and finance (i), (ii) Hukou (i) of ownership (i) state-owned enterprises (i) third plenum announcements (i) in Xi Jinping’s China (i) ‘Reform and Opening Up’ (Deng Xiaoping) (i), (ii), (iii) regulations and regulatory authorities (financial) (i), (ii) Reinhart, Carmen (i) Renminbi (i) 2015 mini-devaluation and capital outflows (i), (ii) appreciates (i) banking system’s assets in (i) bloc for (i) capital flight risk (i) devaluation (i), (ii), (iii), (iv) dim sum bonds (i) efforts to internationalise (i) end of peg (i) foreign investors and (i) fully convertible currency, a (i) growing importance of (i) IMF’s Special Drawing Rights (i) Qualified Institutional Investors (i) in relation to reserves (i) Renminbi trap (i), (ii), (iii), (iv) share of world reserves (i) significance of (i), (ii) Special Drawing Rights and (i), (ii) US dollar and (i), (ii), (iii), (iv), (v) repo markets (i), (ii) research and development (R&D) (i), (ii) Resources 
Department (i) retirement age (i) Rhodium Group (i) rimland (i) Robinson, James (i) robots (i) Rogoff, Kenneth (i) Roman Empire (i) Rotterdam (i) Rozelle, Scott (i) Rudd, Kevin (i) Rudong County (i) Rumsfeld, Donald (i) Rural Cooperative Medical Scheme (i) rural workers (i) Russia see also Soviet Union 19th century acquisitions (i), (ii) ageing population (i) BRI and (i) BRICS (i), (ii), (iii) C929s (i) China’s view of (i) early attempts at trade (i) fertility rates (i) Human Freedom Index (i) middle income trap and (i) Pacific sea ports (i) Polar Silk Road (i) Renminbi reserves (i) SCO member (i) Ryukyu Islands (i) Samsung (i) San Francisco (i) SASAC (i), (ii) Saudi Arabia (i) savings (i), (ii), (iii) Scarborough Shoal (i) Schmidt, Eric (i) Schumpeter, Joseph (i) SCIOs (i) Second Opium War (i) Second World War China and Japan (i), (ii) economic development since (i) Marshall Plan (i), (ii) US and Japan (i) Senkaku islands see Diaoyu islands separatism (i), (ii) Serbia (i) service sector (i), (ii) Seventh Fleet (US) (i) SEZs (special economic zones) (i), (ii), (iii), (iv) shadow banks (i), (ii), (iii), (iv), (v), (vi), (vii), (viii), (ix) n18 see also banks Shandong (i), (ii) Shanghai 1st Party Congress (i) arsenal (i) British influence in (i) central bank established (i) Deng’s Southern Tour (i) firms halt trading (i) income per head (i) interbank currency market (i) PISA scores (i) pollution (i) property price rises (i) stock market (i), (ii), (iii) Western skills used (i) Shanghai Composite Index (i), (ii) Shanghai Cooperation Organisation (SCO) (i), (ii), (iii) Shanghai Free Trade Zone (i), (ii), (iii) Shanghai–Hong Kong Bond Connect Scheme (i) Shanghai–Hong Kong Stock Connect Scheme (i), (ii) Shanghai World Financial Centre (i) Shenzhen first foreign company in (i) n3 (Intro.)

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

That’s the ultimate goal of AI research: a system that needs no problem-specific engineering and can simply be asked to teach a molecular biology class or run a government. It would learn what it needs to learn from all the available resources, ask questions when necessary, and begin formulating and executing plans that work. Such a general-purpose method does not yet exist, but we are moving closer. Perhaps surprisingly, a lot of this progress towards general AI results from research that isn’t about building scary, general-purpose AI systems. It comes from research on tool AI or narrow AI, meaning nice, safe, boring AI systems designed for particular problems such as playing Go or recognizing handwritten digits. Research on this kind of AI is often thought to present no risk because it’s problem-specific and nothing to do with general-purpose AI.

Under the headline of “deep learning,” they have revolutionized speech recognition and visual object recognition. They are also one of the key components in AlphaZero as well as in most of the current self-driving car projects. If you think about it, it’s hardly surprising that progress towards general AI is going to occur in narrow-AI projects that address specific tasks; those tasks give AI researchers something to get their teeth into. (There’s a reason people don’t say, “Staring out the window is the mother of invention.”) At the same time, it’s important to understand how much progress has occurred and where the boundaries are.

Succeeding With AI: How to Make AI Work for Your Business
by Veljko Krunic
Published 29 Mar 2020

Examples of adjusting such relationships might include escalating problems to the supplier’s management or asking for monetary compensation for defective parts. While those might be viable actions for customers of your suppliers that are much bigger than your organization is, they aren’t viable actions for your organization. Generic AI solutions tailored to much bigger organizations might focus on actions you can’t take. One final question in this scenario: Supposing that you’re a large organization with many departments, on which level of granularity should you request that business actions be supported by the decision support system?

Unless you have a team of experts in AI research working for you, stick to applying an existing AI capability to a new context. Avoid AI products that require you to develop new AI capabilities that no one else has demonstrated yet, because they’re unpredictable, difficult, and risky to develop. WARNING On the other hand, if you understand a general AI capability, then there is much less risk in applying that capability to a product in some new field. For example, it’s known that AI is getting very good at recognizing the context of an image—that’s a general capability. If you can apply that capability to a specific area, you might have a viable product.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

These fundamental questions about the costs and benefits of controlling knowledge and the distributional effects of intellectual property and antitrust regimes must now be worked out with regard to big data and AI capabilities. Just like information, knowledge, innovation, and our talent pools more generally, AI and data should be understood as a commons—a shared resource capable of addressing some of the world’s toughest problems: global health and pandemics, world hunger, environmental sustainability and climate change, and poverty and inequality. We should move toward more open-source big data as well as public initiatives to crowdsource data collection for public goals.

Robotics, in the mechanical sense, is trying to simulate our physical bodies; put together, the software and hardware of tomorrow’s technologies aspire to offer the complete package: mind, body, senses, and perhaps even heart and soul. Today, there are computer programs that simulate human decision-making and algorithms that dynamically learn from data to perform specific tasks once performed exclusively by people. Algorithms mine data about the past to predict outcomes in the future. But at this point, there is not yet a general AI—that is, a hypothetical (or future) machine that can do any intellectual task humans can. Maybe one day there will be an artificial intelligence explosion, a tipping point when a powerful superintelligence will surpass human intelligence to the point that humans lose control over technological advancements.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

Jaan Tallinn, co-founder Skype, co-founder Centre for the Study of Existential Risk (CSER), co-founder Future of Life Institute (FLI) Understanding AI – its promise and its dangers – is emerging as one of the great challenges of coming decades and this is an invaluable guide to anyone who’s interested, confused, excited or scared. David Shukman, BBC Science Editor As artificial intelligence drives the pace of automation ever faster, it is timely that we consider in detail how it might eventually make an even more profound change to our lives – how truly general AI might vastly exceed our capabilities in all areas of endeavour. The opportunities and challenges of this scenario are daunting for humanity to contemplate, let alone to manage in our best interests. We have recently seen a surge in the volume of scholarly analysis of this topic; Chace impressively augments that with this high-quality, more general-audience discussion.

pages: 175 words: 45,815

Automation and the Future of Work
by Aaron Benanav
Published 3 Nov 2020

For a critique, see Gordon, Rise and Fall of American Growth, pp. 444–7. Gordon argues that since 2005, Moore’s law has collapsed. See also Tom Simonite, “Moore’s Law is Dead. Now What?,” MIT Technology Review, May 13, 2016. 35 For a set of critical reflections by AI scientists, who mostly doubt that general AI is anywhere near this level of development, see Martin Ford, Architects of Intelligence: The Truth about AI from the People Building It, Packt Publishing, 2018. 36 James Vincent, “Former Facebook Exec Says Social Media Is Ripping Apart Society,” Verge, December 11, 2017; Mattha Busby, “Social Media Copies Gambling Methods ‘to Create Psychological Cravings,’” Guardian, May 8, 2018. 37 See Raniero Panzieri, “The Capitalist Use of Machinery: Marx versus the Objectivists,” in Phil Slater, ed., Outlines of a Critique of Technology, Humanities Press, 1980; Derek Sayer, The Violence of Abstraction, Basil Blackwell, 1987. 38 See Nick Dyer-Witheford, Cyber-proletariat: Global Labour in the Digital Vortex, Pluto, 2015, pp. 87–93. 39 For the classic account, see David Noble, Forces of Production: A Social History of Industrial Automation, Knopf, 1984.

pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)
by Jamie Bartlett
Published 4 Apr 2018

There are a couple of widely held misconceptions about AI that need to be cleared up. Despite the Hollywood movies and the breathless headlines, no machines are remotely close to reaching a human level of intelligence, which we can define as ‘performing as well as humans in a series of different domains’ (often known as ‘general AI’). Although divided, most experts don’t think this level of intelligence will be possible for another 50 to 100 years – but to be honest, no one really has a clue. And whether machines will ever achieve consciousness is altogether another question entirely, and probably one best left to philosophers rather than roboticists.

pages: 898 words: 236,779

Digital Empires: The Global Battle to Regulate Technology
by Anu Bradford
Published 25 Sep 2023

Those Snowden revelations exposed how the NSA had engaged in a mass surveillance of individuals by harvesting data available through Facebook.21 Without proper oversight, it is both tempting and feasible for any government to utilize the surveillance capabilities of tech companies to advance their political goals or national security objectives, even when that surveillance undermines individuals’ civil liberties. Many of these concerns are now amplified with the rapid advances in artificial intelligence (AI). The innovations in so-called generative AI technologies, in particular, have the potential to revolutionize the way we work and interact with information and each other. At best, generative AI will allow humans to reach new frontiers of knowledge and productivity, leading to unprecedented levels of economic growth and societal progress. At the same time, the pace of AI development is unsettling technologists, citizens, and regulators alike.

Mentioning China 699 times,173 the report emphasizes the importance of the US maintaining its technological advantage over China and urges the US government to double its annual AI R&D spending to $32 billion by 2026.174 The US is aware that China engages in the AI race from a unique position of strength. Chinese AI companies are able to harness an unparalleled amount of data generated by the country’s vast consumer market, which is both digitally connected and subject to extensive online surveillance.175 China also benefits from its fiercely effective culture of copying, which may not generate AI breakthroughs but gives China a leg up in refining new commercial applications from existing AI technologies.176 The Chinese private sector’s AI development also benefits from extensive government support.177 Riding on these enabling features of the Chinese AI environment, the big Chinese tech companies are already today leading in AI developments, with Baidu excelling in automated driving, Alibaba in AI cities, and Tencent in smart medicine and health.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

As I explored these trends, it became clear that in order to figure out a survival strategy for the future, I needed to start by understanding where today’s machines are weak, relative to humans. So, I started asking experts one question: What can humans do much, much better than even our most advanced AI?

Surprising

The first thing I learned is that, in general, AI is better than humans at operating in stable environments, with static, well-defined rules and consistent inputs. On the other hand, humans are much better than AI at handling surprises, filling in gaps, or operating in environments with poorly defined rules or incomplete information. This is why, for example, a computer can beat a human grandmaster at chess, but would make for an extraordinarily bad kindergarten teacher.

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

But as those imperfections are fixed and the robot becomes less distinguishable from a human, our positive emotions toward it grow again, eventually approaching an empathy level similar to that of one human toward another. Mori labeled the sudden drop the “uncanny valley,” a phenomenon that can impede the success of human-to-robot interactions in the workplace.16 Automation ethicists need to be aware of such phenomena. In general, AI systems that perform well should be promoted, with variants replicated and deployed to other parts of the organization. On the other hand, AI systems with poor performance should be demoted and, if they can’t be improved, they should be decommissioned. These tasks will be the responsibility of machine relations managers—individuals who will function like HR managers, except that they will oversee AI systems, not human workers.

pages: 196 words: 61,981

Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside
by Xiaowei Wang
Published 12 Oct 2020

For example, AI models trained to perform facial recognition can classify well-lit images with great accuracy, but have a difficult time classifying faces if the photos are obscured, occluded, or shown in different lighting conditions than the images on which the AI model was trained. This barrier, along with increasing techno-pessimism, led to a decreased public interest in AI. In the midst of winter 2022, when venture capital funding and public enthusiasm for AI dried up, a group of Chinese scientists and researchers at the Alibaba AI lab took up the task of generalizing AI models. Instead of Western philosophies of mind, they started from Chinese theories of the body, Chinese medicine, and Buddhist thought. In Western medicine, models of the body center around the brain, which controls other organs, the processes that regulate our body, consciousness, and emotion.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they will have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity. We will return to the issue of AI motivation in later chapters, but it is so central to the argument in this book that it is worth bearing in mind throughout.

However, to understand what a snapshot of a digitized human intellect can and cannot do is not the same as to understand how such an intellect will respond to modifications aimed at enhancing its performance. An artificial intellect, by contrast, might be carefully designed to be understandable, in both its static and dynamic dispositions. So while whole brain emulation may be more predictable in its intellectual performance than a generic AI at a comparable stage of development, it is unclear whether whole brain emulation would be dynamically more predictable than an AI engineered by competent safety-conscious programmers. ii As for an emulation inheriting the motivations of its human template, this is far from guaranteed. Capturing human evaluative dispositions might require a very high-fidelity emulation.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

The most recent work appears in the proceedings of the major AI conferences: the International Joint Conference on AI (IJCAI), the annual European Conference on AI (ECAI), and the AAAI Conference. Machine learning is covered by the International Conference on Machine Learning and the Neural Information Processing Systems (NeurIPS) meeting. The major journals for general AI are Artificial Intelligence, Computational Intelligence, the IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Intelligent Systems, and the Journal of Artificial Intelligence Research. There are also many conferences and journals devoted to specific areas, which we cover in the appropriate chapters. 1In the public eye, there is sometimes confusion between the terms “artificial intelligence” and “machine learning.”

Pearl’s (1984) Heuristics and Edelkamp and Schrödl’s (2012) Heuristic Search are influential textbooks on search. Papers about new search algorithms appear at the International Symposium on Combinatorial Search (SoCS) and the International Conference on Automated Planning and Scheduling (ICAPS), as well as in general AI conferences such as AAAI and IJCAI, and journals such as Artificial Intelligence and Journal of the ACM. 1We are assuming that most readers are in the same position and can easily imagine themselves to be as clueless as our agent. We apologize to Romanian readers who are unable to take advantage of this pedagogical device. 2For problems with an infinite number of actions we would need techniques that go beyond this chapter. 3In any problem with a cycle of net negative cost, the cost-optimal solution is to go around that cycle an infinite number of times.

In 2018, ALPHAZERO surpassed ALPHAGO at Go, and also defeated top programs in chess and shogi, learning through self-play without any expert human knowledge and without access to any past games. (It does, of course, rely on humans to define the basic architecture as Monte Carlo tree search with deep neural networks and reinforcement learning, and to encode the rules of the game.) The success of ALPHAZERO has led to increased interest in reinforcement learning as a key component of general AI (see Chapter 23). Going one step further, the MUZERO system operates without even being told the rules of the game it is playing—it has to figure out the rules by making plays. MUZERO achieved state-of-the-art results in Pacman, chess, Go, and 75 Atari games (Schrittwieser et al., 2019). It learns to generalize; for example, it learns that in Pacman the “up” action moves the player up a square (unless there is a wall there), even though it has only observed the result of the “up” action in a small percentage of the locations on the board.

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

Banking, finance and healthcare may be automated today, and they may be in a stage where there is a lot of unmet demand, so there may be job growth. In 30 or 50 years from now, that may no longer be the case. Perhaps we will meet all of our financial needs, in some sense. Then of course there is this question about general AI coming along. The general takeaway is that demand matters. Repeatedly over the last 200 years we have had various people concerned about the effect of automation on jobs. These predictions have typically not been borne out. That is not a reason to say that predictions today are necessarily wrong.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

But before considering these changes, and other important obstacles to AGI development and the intelligence explosion, let’s wrap up the question of funding as a critical barrier. Simply put, it isn’t one. AGI development isn’t wanting for cash, in three ways. First, there’s no shortage of narrow AI projects that will inform or even become components of general AI systems. Second, a handful of “uncloaked” AGI projects are in the works and making significant headway with various sources of funding, to say nothing of probable stealth projects. Third, as AI technology approaches the level of AGI, a flood of funding will push it across the finish line. So large will the cash infusion be, in fact, that the tail will wag the dog.

The Ages of Globalization
by Jeffrey D. Sachs
Published 2 Jun 2020

Go is a board game of such sophistication and subtlety that it was widely believed that machines would be unable to compete with human experts for years or decades to come. Sedol, like Kasparov before him, believed that he would triumph easily over AlphaGo. In the event, he was decisively defeated by the system. Then, to make matters even more dramatic, AlphaGo was decisively defeated by a next-generation AI system that learned Go from scratch in self-play over a few hours. Once again, hundreds of years of expert study and competition could be surpassed in a few hours of learning through self-play. The advent of learning through self-play, sometimes called “tabula rasa” or blank-slate learning, is mind-boggling.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

Think about it: when you or I have an accident driving a car, you or I learn, but when a self-driving car makes a mistake, all self-driving cars learn. Every single one of them, including the ones that have not yet been ‘born’. By 2049, probably in our lifetimes and surely in those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human. To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. We call that moment singularity. Singularity is the moment beyond which we can no longer see, we can no longer forecast.

pages: 301 words: 89,076

The Globotics Upheaval: Globalisation, Robotics and the Future of Work
by Richard Baldwin
Published 10 Jan 2019

When the training data sets get big enough and computers powerful enough, white-collar robots may be able to understand everything we say, but so far there are still many misunderstandings. That’s why McKinsey graded these AI’s language-understanding skills as below the average human (see Table 6.1). When it comes to speaking (“natural language generation”), AI is much better so AI’s capability is graded as equal to an average human. The reason, as we saw with the Siri-learning-Shanghainese example in Chapter 4, is that speaking is much simpler for machines to master. The next communication skill is more specialized—crafting nonverbal outputs. There are more ways to communicate than speaking and writing.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

The Baidu executive in charge, Yin Shiming, was yet another example of an engineer who had developed deep experience working at Western companies, including SAP and Apple. At an event announcing the partnership, Yin declared that Baidu and the military institute would “work hand in hand to link up computing, data and logic resources to further advance the application of new generation AI technologies in the area of defense.”18 Contrast this with the pressure that unhappy employees put on Google to end its bid to compete for the Pentagon’s JEDI cloud computing contract. Another defense initiative, Project Maven, which involved the development of computer vision algorithms that could be used to analyze images collected from U.S. military drones, generated even more outrage among Google workers.

pages: 340 words: 90,674

The Perfect Police State: An Undercover Odyssey Into China's Terrifying Surveillance Dystopia of the Future
by Geoffrey Cain
Published 28 Jun 2021

“It was a sudden, unexpected leap, one that portended a future in which AI systems could look at cancer screenings, evaluate climate data, and analyze poverty in nonhuman ways—potentially leading to breakthroughs that human researchers never would have thought of on their own.”34 By the time I watched AlphaGo beat the eighteen-time world champion, South Korean Lee Sedol, it was advancing toward a capability called “general AI,” meaning AI not constrained to a single purpose. It could be put to use for so much more than games. Observing the match at the Four Seasons Hotel in Seoul, in March 2016, I slowly came to realize how important this moment was. “AI has the power to do so much,” a Go player told me at the match, commenting on each move.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

Researchers —Given that the field of automated decision systems is—in most areas of inquiry, at least—relatively new, there is still important research that needs to be done to advance the state of knowledge. Researchers who step forward may take several forms. They may do basic scientific research in artificial intelligence (usually in universities), applied research in general AI (usually in a corporate research lab, such as the “deep learning” research at Google), or applied research in a specific nonvendor setting. One prominent example of the latter type is the many physicians and scientists who undertake clinical trials within hospitals and medical schools. This sort of work usually applies to new drugs and medical devices, but it also sometimes involves automated systems.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

Whether intelligent or not, these systems do what they do without being conscious of anything. Projecting into the future, the stated moonshot goal of many AI researchers is to develop systems with the general intelligence capabilities of a human being – so-called ‘artificial general intelligence’, or ‘general AI’. And beyond this point lies the terra incognita of post-Singularity intelligence. But at no point in this journey is it warranted to assume that consciousness just comes along for the ride. What’s more, there may be many forms of intelligence that deviate from the humanlike, complementing rather than substituting or amplifying our species-specific cognitive toolkit – again without consciousness being involved.

pages: 356 words: 105,533

Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market
by Scott Patterson
Published 11 Jun 2012

Many of the hubs lay in the oceans, leading to the fanciful notion that particularly ambitious high-frequency trading outfits would plant themselves in the middle of the Atlantic or the Mediterranean or the South China Sea and get the jump on competitors using floating micro-islands populated by small communities of elite pattern-recognition programmers overseeing the hyperfast flow of data through their superservers. Better yet: unmanned pods of densely packed microprocessors overseen by next-generation AI Bots processing billions of orders streaming out of other unmanned AI pods positioned optimally around the world, the silent beams of high-frequency orders shifting trillions across the earth’s oceans at light speeds, all automated, beyond the scope of humans to remotely grasp the nature of the transactions.

pages: 379 words: 109,223

Frenemies: The Epic Disruption of the Ad Business
by Ken Auletta
Published 4 Jun 2018

The industry has come to rely on the tools of Math Men—machines, algorithms, puréed data, artificial intelligence—and on the skills of engineers. Among the many speakers and presentations at the TechCrunch confab in Brooklyn in May 2016, few were greeted with the same wide-eyed enthusiasm as Dag Kittlaus, who founded Siri in 2007, sold it to Apple in 2010, and left to cofound and serve as CEO of Viv, a next generation AI personal assistant. Standing before an audience of more than a thousand, he said Viv’s AI software builds itself rather than just relying on data, and he said Viv was scheduled to be available commercially in 2017. He demonstrated a series of complex tasks he said the voice-activated personal assistant would answer in ten milliseconds.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

Through general purpose computation, the Llull-inspired idea of computing over symbols (rather than numbers) to operate on challenging human issues would in time turn into AI. While Llull, Pascal, Leibniz and Babbage all did their parts in advancing the machinery for computing over general concepts, the idea would not be fully realized until after the development of electronic computation. In 1955, General Problem Solver, which many call the first general AI program, was created by computer scientist Allen Newell and economist Herb Simon. General Problem Solver used what Simon and Newell called means–ends analysis as the foundation for its reasoning. In many ways, it’s a technique that takes the symbolic computing of Llull and returns to its heart the numeric computing of the abacus.

pages: 374 words: 111,284

The AI Economy: Work, Wealth and Welfare in the Robot Age
by Roger Bootle
Published 4 Sep 2019

At the outset, I should say that a general restriction of research only begins to make any sense if you adopt an ultra-pessimistic view of the implications of robots and AI for humanity. Perhaps if you embraced some of the radical views about humanity’s fate that I discuss in the Epilogue, this might make sense. But on anything like the view of the implications of robots and AI for humanity’s future expounded in this book, any attempt to restrict or discourage general AI research would be absurd. But there are some exceptions and limitations to this conclusion. Where certain types of development in robotics or AI can be shown to be harmful to humans, not in the general sense, discussed above, but in some specific respect, then there could be a case for the introduction of restrictions on research into those applications.

pages: 402 words: 110,972

Nerds on Wall Street: Math, Machines and Wired Markets
by David J. Leinweber
Published 31 Dec 2008

Shortly after we started the company, a colleague from the AI group at Arthur D. Little, the venerable Cambridge consulting firm, asked me to fill in for him at the last minute at a technology session at a finance conference being held in Los Angeles; his dog was sick. The topic was a generic “AI on Wall Street,” the last one in a catchall session. The other speakers were from brokerage firms, plus someone from the American Stock Exchange (Amex). The audience was about 75 technology managers. I’d planned sort of an AI 101 talk, going over various solution methods, forward and backward chaining, generate and test, predicate logic, and the rest.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Knight, W., “An Algorithm Summarizes Lengthy Text Surprisingly Well,” MIT Technology Review. 2017; Shen, J., et al., Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. arXiv, 2017. 1. 54. Steinberg, R., “6 Areas Where Artificial Neural Networks Outperform Humans,” Venture Beat. 2017. 55. Gershgorn, D., “Google’s Voice-Generating AI Is Now Indistinguishable from Humans,” Quartz. 2017. 56. Quain, J. R., “Your Car May Soon Be Able to Read Your Face,” New York Times. 2017, p. B6. 57. Dixit, V. V., S. Chand, and D. J. Nair, “Autonomous Vehicles: Disengagements, Accidents and Reaction Times.” PLoS One, 2016. 11(12): p. e0168054. 58.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

They may be experts in their respective fields, but not necessarily the most objective arbitrator of those facts. Conversely, AI researchers are often too focused on the immediate difficult problems they are trying to solve to be able to sit back and take a long-term view. Kurzweil’s esteemed colleague at Google, Peter Norvig, says that building a general AI is not on his research horizon. Not because it will never happen, but simply because he believes that it is too far off to focus active research projects on it. Given all these qualifications, it seems fairly safe to say that a median prediction of “roughly fifty years” will continue to stand until the goal is almost reached.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

But even more important, this questioning suggests a large future possibility-space for intelligence. There could be “classic,” unenhanced humans; enhanced humans (with nootropics, wearables, brain-computer interfaces); neocortical simulations; uploaded-mind files; corporations as digital abstractions; and many forms of generated AI—deep-learning meshes, neural networks, machine-learning clusters, blockchain-based distributed autonomous organizations, and empathic compassionate machines. We should consider the future world as one of multispecies intelligence. What we call the human function of “thinking” could be quite different in the variety of possible future implementations of intelligence.

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts
by Richard Susskind and Daniel Susskind
Published 24 Aug 2015

In the thaw that has followed the winter, over the past few years, we have seen a series of significant developments—Big Data, Watson, robotics, and affective computing—that we believe point to a second wave of AI. In summary, the computerization of the work of professionals began in earnest in the late 1970s with information retrieval systems. Then, in the 1980s, there were first-generation AI systems in the professions, whose main focus was expert systems technologies. In the next decade, the 1990s, there was a shift towards the field of knowledge management, when professionals started to store and retrieve not just source materials but know-how and working practices. In the 2000s, Google came to dominate the research habits of many professionals, and grew to become the indispensable tool of practitioners searching for materials, if not for solutions.

How I Became a Quant: Insights From 25 of Wall Street's Elite
by Richard R. Lindsey and Barry Schachter
Published 30 Jun 2007

Shortly after we started the company, a colleague from the AI group at Arthur D. Little, the venerable Cambridge consulting firm, asked me to fill in for him at the last minute at a technology session at a finance conference being held in Los Angeles. His dog was sick. The topic was a generic “AI on Wall Street,” the last one in a catch-all session. The other speakers were from brokerage firms, plus someone from the American Stock Exchange. The audience was about 75 technology managers. I’d planned sort of an AI 101 talk, going over various solution methods, forward and backward chaining, generate and test, predicate logic, and the rest.

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

When I watch a Skydio drone in action, it’s both thrilling and freaky. It’s not hard to imagine one of the drones being used for ill—to track and hunt a human. “I think there’s real risks of AI that should be thought about,” Martiros agrees. Tons of firms worldwide are all fantasizing about a “general” AI that could think in human terms. “It’d be a trillion-dollar industry, and it’s not implausible. We can’t predict these things.” He’s in favor of groups like OpenAI pondering the hard questions. So for my friends who want to know about superhuman AI? I’d love to have a definite answer, but I can’t offer one.

pages: 661 words: 156,009

Your Computer Is on Fire
by Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip
Published 9 Mar 2021

This feedback loop between industry, government, and academic computer science progressively sought to heighten our dependence on computers without any proof that those with technical skills could solve, or even understand, social, political, economic, or other problems. Indeed, often there was little evidence they could even deliver on the technical solutions that they promised. The visions of general AI outlined by Alan Turing, Marvin Minsky, and others in the twentieth century still have barely materialized, Broussard points out, and where they have, they have come with devastating technical flaws too often excused as being “bugs” rather than fundamental system design failures. In addition to a lack of accountability, power imbalances continued to be a bug—or, if you prefer, a feature—in the drive to computerize everything.

pages: 625 words: 167,097

Kiln People
by David Brin
Published 15 Jan 2002

But an expert glance showed their fleshtones to be dye jobs. Their faces really gave them away -- bearing familiar expressions of resigned ennui. These were dittos at the end of a long work day, waiting patiently to expire. Two of them sat before expensive interface screens, talking to computer-generated AI avatars with faces similar to their own. One was a small, childlike golem, wearing scuffed denim. I couldn't catch any of his words. But the other one, fashioned after a buxom woman with reddish hair, wearing ill-fitting matronly garb, spoke loudly enough to overhear as the guard pulled me along

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

Likewise for reinforcement learning, where in the world of Atari games, every tenth of a second the game tells you with perfect authority exactly how you’re doing. “It works very well, but it requires something, again, very, very weird,” he says, “which is this reward.” Unplugging the hardwired external rewards may be a necessary part of building truly general AI: because life, unlike an Atari game, emphatically does not come pre-labeled with real-time feedback on how good or bad each of our actions is. We have parents and teachers, sure, who can correct our spelling and pronunciation and, occasionally, our behavior. But this hardly covers a fraction of what we do and say and think, and the authorities in our life do not always agree.

pages: 598 words: 183,531

Hackers: Heroes of the Computer Revolution - 25th Anniversary Edition
by Steven Levy
Published 18 May 2010

But one charge leveled at the AI lab by the antiwar movement was entirely accurate: all the lab’s activities, even the most zany or anarchistic manifestations of the Hacker Ethic, had been funded by the Department of Defense. Everything, from the Incompatible Time-sharing System to Peter Samson’s subway hack, was paid for by the same Department of Defense that was killing Vietnamese and drafting American boys to die overseas. The general AI lab response to that charge was that the Defense Department’s Advanced Research Projects Agency (ARPA), which funded the lab, never asked anyone to come up with specific military applications for the computer research engaged in by hackers and planners. ARPA had been run by computer scientists; its goal had been the advancement of pure research.

pages: 1,737 words: 491,616

Rationality: From AI to Zombies
by Eliezer Yudkowsky
Published 11 Mar 2015

If there is a piece that, relative to its context, is locally systematically unreliable—for some possible beliefs “Bi” and conditions Ai, it adds some “Bi” to the belief pool under local condition Ai, where reflection by the system indicates that Bi is not true (or in the case of probabilistic beliefs, not accurate) when the local condition Ai is true—then this is a bug. This kind of modularity is a way to make the problem tractable, and it’s how I currently think about the first-generation AI design. [Edit 2013: The actual notion I had in mind here has now been fleshed out and formalized in Tiling Agents for Self-Modifying AI, section 6.] The notion is that a causally closed cognitive system—such as an AI designed by its programmers to use only causally efficacious parts; or an AI whose theory of its own functioning is entirely testable; or the outer Chalmers that writes philosophy papers—that believes that it has an epiphenomenal inner self, must be doing something systematically unreliable because it would conclude the same thing in a Zombie World.

In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be “moral.” They can’t agree among themselves on why, or what they mean by the word “moral”; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem “moral”; and there are all sorts of problems with this, but the number one problem is, “Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn’t start out knowing what you want it to rationalize?”

Global Catastrophic Risks
by Nick Bostrom and Milan M. Cirkovic
Published 2 Jul 2008

The easy scenario would hold if, for example, human institutions can reliably distinguish Friendly AIs from unFriendly ones, and give revocable power into the hands of Friendly AIs. Thus we could pick and choose our allies. The only requirement is that the Friendly AI problem must be solvable (as opposed to being completely beyond human ability). Both of the above scenarios assume that the first AI (the first powerful, general AI) cannot by itself do global catastrophic damage. Most concrete visualizations that imply this use a g metaphor: AIs as analogous to unusually able humans. In Section 15.8 on rates of intelligence increase, I listed some reasons to be wary of a huge, fast jump in intelligence: • The distance from idiot to Einstein, which looms large to us, is a small dot on the scale of minds-in-general

Applied Cryptography: Protocols, Algorithms, and Source Code in C
by Bruce Schneier
Published 10 Nov 1993

It produces a stream of 32-bit words which can be XORed with a plaintext stream to produce ciphertext, or XORed with a ciphertext stream to produce plaintext. The algorithm is named as it is because it is a Fibonacci shrinking generator. First, use these two additive generators. The key is the initial values of these generators. Ai = (Ai-55 + Ai-24) mod 2^32 and Bi = (Bi-52 + Bi-19) mod 2^32. These sequences are shrunk, as a pair, depending on the least significant bit of Bi: if it is 1, use the pair; if it is 0, ignore the pair. Cj is the sequence of used words from Ai, and Dj is the sequence of used words from Bi. These words are used in pairs (C2j, C2j+1, D2j, and D2j+1) to generate two 32-bit output words: K2j and K2j+1.
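The two lagged-Fibonacci generators and the shrinking step described in this excerpt can be sketched in Python. The excerpt does not give the key schedule or the final pairwise combination of C/D words into the output K words, so the seeds below are arbitrary placeholders and the function name `fish_streams` is illustrative; this only demonstrates the generation and selection rules, not a usable cipher.

```python
MASK = 0xFFFFFFFF  # keep every word to 32 bits (mod 2^32)

def fish_streams(a_seed, b_seed, n_pairs):
    """Return n_pairs (Ci, Di) pairs kept by the shrinking rule.

    a_seed must supply at least 55 words, b_seed at least 52,
    matching the lags of the two additive generators.
    """
    A = list(a_seed)
    B = list(b_seed)
    kept = []
    while len(kept) < n_pairs:
        ai = (A[-55] + A[-24]) & MASK  # Ai = (Ai-55 + Ai-24) mod 2^32
        bi = (B[-52] + B[-19]) & MASK  # Bi = (Bi-52 + Bi-19) mod 2^32
        A.append(ai)
        B.append(bi)
        if bi & 1:                     # keep the pair iff the LSB of Bi is 1
            kept.append((ai, bi))
    return kept

# With these toy seeds the first generated pair is (33, 35), and 35 is odd,
# so the shrinking rule keeps it.
pairs = fish_streams(list(range(1, 56)), list(range(1, 53)), 4)
```

Grouping the surviving words pairwise (C2j, C2j+1, D2j, D2j+1) to produce K2j and K2j+1 would follow, but that combination is elided in the excerpt and so is omitted here.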