strong AI


64 results

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology. The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation.

However, I do expect that full MNT will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI). As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.

Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely.

Once we can take down these signs, we'll have Turing-level machines, and the era of strong AI will have started. This era will creep up on us. As long as there are any discrepancies between human and machine performance—areas in which humans outperform machines—strong AI skeptics will seize on these differences. But our experience in each area of skill and knowledge is likely to follow that of Kasparov. Our perceptions of performance will shift quickly from pathetic to daunting as the knee of the exponential curve is reached for each human capability. How will strong AI be achieved? Most of the material in this book is intended to lay out the fundamental requirements for both hardware and software and explain why we can be confident that these requirements will be met in nonbiological systems.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

But that doesn’t imply (to me at least) that machine consciousness is impossible – just that machine consciousness would be different. Nagel’s argument is one of many that have been set out in an attempt to show strong AI to be impossible. Let’s take a look at the best-known of these.

Is Strong AI Impossible?

Nagel’s argument is closely related to a common-sense objection to the possibility of strong AI, which says that it is not possible because there is something special about people. This intuitive response starts from the view that computers are different from people because people are animate objects, but computers are not.

For all that they may excel at what they do, they are nothing more than software components optimized to carry out a specific narrow task. Since I believe we are a long way from General AI, it naturally follows that I should be even more dubious about the prospects for strong AI: the idea of machines that are, like us, conscious, self-aware, truly autonomous beings. Nevertheless, in this final chapter, let’s indulge ourselves. Even though strong AI is not anywhere in prospect, we can still have some fun thinking about it, and speculating about how we might progress towards it. So, let’s take a trip together down the road to conscious machines. We’ll imagine what the landscape might look like, which obstacles we might meet and what sights we can expect to see on the way.

…and to argue that certain forms of consciousness must remain beyond our comprehension (we can’t imagine what it would be like to be a bat). But his test can be applied to computers, and most people seem to believe that it isn’t like anything to be a computer, any more than a toaster. For this reason, Nagel’s ‘What is it like’ argument has been used against the possibility of strong AI. Strong AI is impossible, according to this argument, because, by Nagel’s reasoning, computers cannot be conscious. I am personally not convinced by this argument, because asking ‘What is it like to be a …’ is, for me, nothing more than an appeal to our intuition. Our intuition works well at separating out the obvious cases – orang-utans and toasters – but I don’t see why we should expect it to be a reliable guide in the more subtle cases, or cases that are far outside our own experience of the natural world – such as AI.

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

Typical examples are introducing new price structures or subsidies or changing the minimum wage. In technical terms, machine-learning methods today provide us with an efficient way of going from finite sample estimates to probability distributions, and we still need to get from distributions to cause-effect relations. When we start talking about strong AI, causal models move from a luxury to a necessity. To me, a strong AI should be a machine that can reflect on its actions and learn from past mistakes. It should be able to understand the statement “I should have acted differently,” whether it is told as much by a human or arrives at that conclusion itself. The counterfactual interpretation of this statement reads, “I have done X = x, and the outcome was Y = y.
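Pearl’s counterfactual reading of “I should have acted differently” can be made concrete with the standard three-step recipe of abduction, action, and prediction. Below is a minimal sketch, assuming a toy linear mechanism Y = 2X + U of my own invention; it is not a model from the book:

```python
# Toy illustration (not from the book) of the three-step counterfactual
# recipe, assuming the structural equation Y = 2X + U.

def counterfactual_y(x_observed, y_observed, x_alternative):
    """What would Y have been, had X been x_alternative instead?"""
    # Abduction: infer the background condition U from what actually happened.
    u = y_observed - 2 * x_observed
    # Action: replace the observed X with the alternative (an intervention).
    # Prediction: recompute Y under the same background condition U.
    return 2 * x_alternative + u

# "I have done X = 1, and the outcome was Y = 3."
# "Had I done X = 0 instead, the outcome would have been Y = 1."
print(counterfactual_y(x_observed=1, y_observed=3, x_alternative=0))  # -> 1
```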

Finally, when we start to adjust our own software, that is when we begin to take moral responsibility for our actions. This responsibility may be an illusion at the level of neural activation but not at the level of self-awareness software. Encouraged by these possibilities, I believe that strong AI with causal understanding and agency capabilities is a realizable promise, and this raises the question that science fiction writers have been asking since the 1950s: Should we be worried? Is strong AI a Pandora’s box that we should not open? Recently public figures like Elon Musk and Stephen Hawking have gone on record saying that we should be worried. On Twitter, Musk said that AIs were “potentially more dangerous than nukes.”

In the late 1980s, I realized that machines’ lack of understanding of causal relations was perhaps the biggest roadblock to giving them human-level intelligence. In the last chapter of this book, I will return to my roots, and together we will explore the implications of the Causal Revolution for artificial intelligence. I believe that strong AI is an achievable goal and one not to be feared precisely because causality is part of the solution. A causal reasoning module will give machines the ability to reflect on their mistakes, to pinpoint weaknesses in their software, to function as moral entities, and to converse naturally with humans about their own choices and intentions.

pages: 261 words: 10,785

The Lights in the Tunnel
by Martin Ford
Published 28 May 2011

While narrow AI is increasingly deployed to solve real-world problems and attracts most of the current commercial interest, the Holy Grail of artificial intelligence is, of course, strong AI—the construction of a truly intelligent machine. The realization of strong AI would mean the existence of a machine that is genuinely competitive with, or perhaps even superior to, a human being in its ability to reason and conceive ideas. The arguments I have made in this book do not depend on strong AI, but it is worth noting that if truly intelligent machines were built and became affordable, the trends I have predicted here would likely be amplified, and the economic impact would certainly be dramatic and might unfold in an accelerating fashion.

Research into strong AI has suffered because of some overly optimistic predictions and expectations back in the 1980s—long before computer hardware was fast enough to make true machine intelligence feasible. When reality fell far short of the projections, focus and financial backing shifted away from research into strong AI. Nonetheless, there is evidence that the vastly superior performance and affordability of today’s processors is helping to revitalize the field. Research into strong AI can be roughly divided into two main approaches. The direct computational approach attempts to extend traditional, algorithmic computing into the realm of true intelligence. This involves the development of sophisticated software applications that exhibit general reasoning.

Once researchers gain an understanding of the basic operating principles of the brain, it may be possible to build an artificial intelligence based on that framework. This would not be an exact replication of a human brain; instead, it would be something completely new, but based on a similar architecture. When might strong AI become reality—if ever? I suspect that if you were to survey the top experts working in the field, you would get a fairly wide range of estimates. Optimists might say it will happen within the next 20 to 30 years. A more cautious group would place it 50 or more years in the future, and some might argue that it will never happen.

pages: 797 words: 227,399

Wired for War: The Robotics Revolution and Conflict in the 21st Century
by P. W. Singer
Published 1 Jan 2010

Computers eventually develop to the equivalent of human intelligence (“strong AI”) and then rapidly push past any attempts at human control. Ray Kurzweil explains how this would work. “As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI, but takes less time than the cycle before it, as is the nature of technological evolution. The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating super-intelligence.”
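The arithmetic behind “takes less time than the cycle before it” is worth making explicit: if cycle times shrink geometrically, total elapsed time is bounded by a convergent series even as capability diverges. Here is a minimal sketch with illustrative numbers of my own choosing (a one-year first cycle, capability doubling, cycle time halving); they are not figures from either book:

```python
# Toy numbers (assumptions, not Kurzweil's): the first design cycle takes one
# year, each cycle doubles capability, and each cycle runs twice as fast as
# the one before. Elapsed time follows the geometric series 1 + 1/2 + 1/4 + ...,
# which never exceeds 2 years, while capability grows without bound.
capability, elapsed, cycle_time = 1.0, 0.0, 1.0
for cycle in range(1, 11):
    elapsed += cycle_time
    capability *= 2
    cycle_time /= 2
    print(f"cycle {cycle:2d}: capability x{capability:6.0f}, elapsed {elapsed:.3f} years")
```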

There was even one robot that became the equivalent of artificially stupid or suicidal, that is, a robot that evolved to constantly make the worst possible decision. This idea of robots one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II.

The cost/performance ratio of Internet service providers is doubling every twelve months. Internet bandwidth backbone is doubling roughly every twelve months. The number of human genes mapped per year doubles every eighteen months. The resolution of brain scans (a key to understanding how the brain works, an important part of creating strong AI) doubles every twelve months. And, as a by-product, the number of personal and service robots has so far doubled every nine months. The darker side of these trends has been exponential change in our capability not merely to create, but also to destroy. The modern-day bomber jet has roughly half a million times the killing capacity of the Roman legionnaire carrying a sword in hand.
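Each doubling time quoted above implies a growth factor of 2^(t/T) after t years. A quick sketch of that arithmetic, using the doubling times from the passage and a ten-year horizon of my own choosing:

```python
# Growth factor implied by a doubling time: factor = 2 ** (years / doubling_time).
# Doubling times are the ones quoted above; the 10-year horizon is illustrative.
trends = {
    "ISP cost/performance (doubles every 12 months)": 1.0,
    "Internet backbone bandwidth (12 months)": 1.0,
    "Human genes mapped per year (18 months)": 1.5,
    "Brain-scan resolution (12 months)": 1.0,
    "Personal and service robots (9 months)": 0.75,
}
for name, doubling_years in trends.items():
    factor = 2 ** (10 / doubling_years)
    print(f"{name}: x{factor:,.0f} over 10 years")
```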

pages: 185 words: 43,609

Zero to One: Notes on Startups, or How to Build the Future
by Peter Thiel and Blake Masters
Published 15 Sep 2014

The logical endpoint to this substitutionist thinking is called “strong AI”: computers that eclipse humans on every important dimension. Of course, the Luddites are terrified by the possibility. It even makes the futurists a little uneasy; it’s not clear whether strong AI would save humanity or doom it. Technology is supposed to increase our mastery over nature and reduce the role of chance in our lives; building smarter-than-human computers could actually bring chance back with a vengeance. Strong AI is like a cosmic lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence. But even if strong AI is a real possibility rather than an imponderable mystery, it won’t happen anytime soon: replacement by computers is a worry for the 22nd century.

[Back-of-book index excerpt; among its entries: “singularity,” “strong AI,” and “substitution, complementarity vs.”]

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

And those who do this also tend to see our computers differently than Leibniz saw his stepped reckoner: not as prostheses to which we outsource those computational activities of the mind that can be done without real thought, but as rivals or equals, as artificially generated kin, or as mutant enemies. In this respect, though the defenders of strong AI and of various species of the computational theory of mind might accuse defenders of the mill argument and its variants of attachment to a will-o’-the-wisp, to a vestige of prescientific thinking, they are sooner the ones who follow in the footsteps of the alchemists such as Roger Bacon, and of the people who feared the alchemists and their dark conjurings. The imminent arrival of strong AI is in many respects a neo-alchemist idea, of no more real interest in our efforts to understand the promises and threats of technology than any of the other forces medieval conjurers sought to awaken, and charlatans pretended to awaken, and chiliasts warned against awakening.

Judgment, in Cantwell Smith’s view, is “the normative ideal to which … we should hold full-blooded human intelligence—a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed.”11 Cantwell Smith cites the philosopher and scholar of existential phenomenology John Haugeland, with whom he aligns his own views very closely, according to whom the thing that distinguishes computers from us the most is that, unlike us, “they don’t give a damn.”12 In Cantwell Smith’s gloss of this distinction, things will only begin to matter to computers when “they develop committed and deferential existential engagement with the world.”13 Now, it is not certain that such deferential engagement can only be instantiated in a non-mechanical mind, and it is possible that if reckoning just keeps getting streamlined and quicker, eventually it will cross over into judgment. But mere possibility, as opposed to concrete evidence, is not a very strong foundation for speculation about the inevitable emergence of strong AI, that is, of AI that matches or surpasses human beings in its power of judgment. Few theorists of the coming AI takeover, again, see the dawning of AI consciousness as a necessary or even likely part of this projected scenario. Yet the way they talk about it is often confused enough to make it unclear whether they envision conscious machines as a likely development.

If at least some of the people who are convinced of the likelihood of a singularity moment or an imminent AI takeover are themselves unconvinced that the AI in question must be “strong,” must experience its own consciousness as we do, this only makes it more surprising that those who are attracted to the simulation argument—who overlap to some considerable extent with those who defend the singularity thesis—have at least implicitly allowed strong AI—consciousness, reflective judgment, and so on—to sneak back into the account of what artificial intelligence does or in principle is capable of doing. Again, that this is what they have done is clear from the implicit commitments of the simulation hypothesis we have already considered: we know ourselves, from immediate first-person experience, to be conscious beings; therefore, if it is possible that we are artificial simulations created in the same way that we create our own artificial simulations with our computers, then it is possible that artificial simulations may be conscious beings.

pages: 271 words: 79,355

The Dark Cloud: How the Digital World Is Costing the Earth
by Guillaume Pitron
Published 14 Jun 2023

But with an AI, we as humans will therefore be able to be way ahead in terms of planning an environmental strategy.’57 This outlook comes with risks: given strong AI’s consumption of mineral and energy resources, it could actually do more harm than good to the planet. ‘[L]eft unguided, it also has the capability to accelerate the environment’s degradation’, confirms the PwC report.58 According to pessimistic scenarios, by 2040 AI could monopolise half of the world’s energy production.59 And surely placing all our hopes in AI is tantamount to passing on the responsibility for climate action to future generations? ‘We need major political change over the next 10 to 20 years, but strong AI won’t be ready by that time’, warns a researcher.60 Moreover, the fight against climate change would give internet companies an excellent argument to ramp up their AI research.

Researchers were given the core task of developing an IT system capable of predicting 72 hours in advance how bad pollution would be in Beijing.45 The tool is a source of pride for its developers, for in the first three quarters of 2015, boasted IBM, Beijing’s authorities managed to reduce particle emissions by 20 per cent in the Chinese capital.46 Better still, the company continued, other than predicting pollution levels, Green Horizon could ‘eventually offer specific recommendations on how to reduce pollution to an acceptable level — for example, by closing certain factories or temporarily restricting the number of drivers on the road’.47 We are therefore in the presence of one of the first tools capable of furnishing an environmental strategy, with temporal and spatial scalability, using vast amounts of parameters beyond the processing capacity of one human brain. The tech giant was unequivocal, and called Green Horizon nothing short of AI, artificial intelligence.48 This trendy catch-all phrase covers a variety of definitions. ‘Strong AI’ is a super intelligence so powerful that it can supposedly experience ‘emotions, intuitions, and feelings to the point of becoming aware of its own existence’, says the Dutchman Lex Coors, one of the stars of the data-centre industry.49 The more optimistic believe that such an entity will become a reality in the next five to 10 years, once humanity has produced 175 zettabytes of data — enough for an AI to learn and perfect itself by processing data itself through ‘deep learning’.

Mitigating the dangers demands a holistic response that can simultaneously tackle sectors ranging from electricity generation, transportation, and housing to farming.51 It will take formulating long-term strategies and maintaining a constant course of action over decades, whatever the future holds, to reach the desired outcome. This is the price of the planet’s salvation. When we see how international communities struggle to lower their global carbon dioxide emissions, we might rightly question our own ability to take on such a challenge. Thus, some scientists are considering the hypothesis of a superhuman, or even strong, AI that alone could undertake such a mission.52 It would be the ultimate phase of the ‘sustainable digital’ order discussed at the start of this book: ‘green IT’ in its purest form. Entrepreneurs have already declared their ambitions. One of them is Demis Hassabis, chief founder of the UK company DeepMind, whose mission, he says, is twofold: ‘Step one, solve intelligence.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

In this widely read, controversial piece, Searle introduced the concepts of “strong” and “weak” AI in order to distinguish between two philosophical claims made about AI programs. While many people today use the phrase strong AI to mean “AI that can perform most tasks as well as a human” and weak AI to mean the kind of narrow AI that currently exists, Searle meant something different by these terms. For Searle, the strong AI claim would be that “the appropriately programmed digital computer does not just simulate having a mind; it literally has a mind.”13 In contrast, in Searle’s terminology, weak AI views computers as tools to simulate human intelligence and does not make any claims about them “literally” having a mind.14 We’re back to the philosophical question I was discussing with my mother: Is there a difference between “simulating a mind” and “literally having a mind”?

There is no danger of duplicating it anytime soon.”10 The roboticist (and former director of MIT’s AI Lab) Rodney Brooks agreed, stating that we “grossly overestimate the capabilities of machines—those of today and of the next few decades.”11 The psychologist and AI researcher Gary Marcus went so far as to assert that in the quest to create “strong AI”—that is, general human-level AI—“there has been almost no progress.”12 I could go on and on with dueling quotations. In short, what I found is that the field of AI is in turmoil. Either a huge amount of progress has been made, or almost none at all. Either we are within spitting distance of “true” AI, or it is centuries away.

Is there a difference between “simulating a mind” and “literally having a mind”? Like my mother, Searle believes there is a fundamental difference, and he argued that strong AI is impossible even in principle.15

The Turing Test

Searle’s article was spurred in part by Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” which had proposed a way to cut through the Gordian knot of “simulated” versus “actual” intelligence. Declaring that “the original question ‘Can a machine think?’

pages: 268 words: 109,447

The Cultural Logic of Computation
by David Golumbia
Published 31 Mar 2009

Perhaps because language per se is a much more objective part of the social world than is the abstraction called “thinking,” however, the history of computational linguistics reveals a particular dynamism with regard to the data it takes as its object—exaggerated claims, that is, are frequently met with material tests that confirm or disconfirm theses. Accordingly, CL can claim more practical successes than can the program of Strong AI, but at the same time demonstrates with particular clarity where ideology meets material constraints. Computers invite us to view languages on their terms: on the terms by which computers use formal systems that we have recently decided to call languages—that is, programming languages. But these closed systems, subject to univocal, correct, “activating” interpretations, look little like human language practices, which seem not just to allow but to thrive on ambiguity, context, and polysemy.

(Even in Turing’s original statement of the Test, the interlocutors are supposed to be passing dialogue back and forth in written form, because Turing sees the obvious inability of machines to adequately mimic human speech as a separate question from whether computers can process language.) By focusing on written exemplars, CL and NLP have pursued a program that has much in common with the “Strong AI” programs of the 1960s and 1970s that Hubert Dreyfus (1992), John Haugeland (1985), John Searle (1984, 1992), and others have so effectively critiqued. This program has two distinct aspects, which, although they are joined intellectually, are often pursued with apparent independence from each other—yet at the same time, the mere presence of the phrase “computational linguistics” in a title is often not at all enough to distinguish which program the researcher has in mind.

We find it easy to divorce technology from the raced, gendered, politicized parts of our own world—to think of computers as “pure” technology in much the same way that mechanical rationality is supposed to be pure reason. But when we look closer we see something much more reflective of our world and its political history in this technology than we might think at first. The “strong AI” movements of the late 1960s and 1970s, for example, represent and even implement powerful gender ideologies (Adam 1998). In what turns out in retrospect to be a field of study devoted to a mistaken metaphor (see especially Dreyfus 1992), according to which the brain primarily computes like any other Turing machine, we see advocates unusually invested in the idea that they might be creating something like life—in resuscitating the Frankenstein story that may, in fact, be inapplicable to the world of computing itself.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

He wanted to argue, with John Searle and Roger Penrose, that “Strong AI” is impossible, but there are no good arguments for that conclusion. After all, everything we now know suggests that, as I have put it, we are robots made of robots made of robots . . . down to the motor proteins and their ilk, with no magical ingredients thrown in along the way. Weizenbaum’s more important and defensible message was that we should not strive to create Strong AI and should be extremely cautious about the AI systems that we can create and have already created. As one might expect, the defensible thesis is a hybrid: AI (Strong AI) is possible in principle but not desirable.

I believe that charting these barriers may be no less important than banging our heads against them. Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence. To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city. To be more specific, current learning machines improve their performance by optimizing parameters for a stream of sensory inputs received from the environment.
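The gap Pearl draws between model-blind fitting and causal models can be shown in a few lines. The sketch below is my construction, not Pearl’s code: a hidden confounder Z drives both X and Y, so a fitted curve reports a strong association between them, while the causal model says that intervening on X would change nothing.

```python
# Toy contrast between model-blind curve fitting and a causal model.
# A hidden confounder Z drives both X and Y; X itself has no effect on Y.
import random

random.seed(0)
data = []
for _ in range(10_000):
    z = random.gauss(0, 1)             # hidden confounder
    x = z + random.gauss(0, 0.1)       # X merely tracks Z
    y = 2 * z + random.gauss(0, 0.1)   # Y is caused by Z, not by X
    data.append((x, y))

# Model-blind fit: least-squares slope of Y on X. It finds a slope near 2
# and would wrongly predict that raising X raises Y.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
print(f"observational slope: {slope:.2f}")

# Causal answer: under do(X = x) the X-Z link is severed, so Y does not move.
# The fitted curve answers "what will I see?" but not "what if I act?".
```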

It was this wild modeling strategy, not Babylonian extrapolation, that jolted Eratosthenes (276–194 BC) to perform one of the most creative experiments in the ancient world and calculate the circumference of the Earth. Such an experiment would never have occurred to a Babylonian data fitter. Model-blind approaches impose intrinsic limitations on the cognitive tasks that Strong AI can perform. My general conclusion is that human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models. Data science is a science only to the extent that it facilitates the interpretation of data—a two-body problem, connecting data to reality.

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

To avoid confusion, technical experts in the field of AI prefer to use the term Artificial General Intelligence (AGI) to refer to machines with human-level or higher intelligence, capable of abstracting concepts from limited experience and transferring knowledge between domains. AGI is also called “Strong AI” to differentiate from “Weak AI” or “Narrow AI,” which refers to systems designed for one specific task and whose capabilities are not easily transferable to other systems. We go into more detail about the distinction between AI and AGI in our Machine Intelligence Continuum in Chapter 2. Though Deep Blue, which beat the world champion in chess in 1997, and AlphaGo, which did the same for the game of Go in 2016, have achieved impressive results, all of the AI systems we have today are “Weak AI.”

You can also host competitions on Kaggle or similar platforms. Provide a problem, a dataset, and a prize purse to attract competitors. This is a good way to get international talent to work on your problem and will also build your reputation as a company that supports AI. As with any industry, like attracts like. Dominant tech companies build strong AI departments by hiring superstar leaders. Google and Facebook attracted university professors and AI research pioneers such as Geoffrey Hinton, Fei-Fei Li, and Yann LeCun with plum appointments and endless resources. These professors either take a sabbatical from their universities or split their time between academia and industry.

To meet business needs in the short-term, consider evaluating third-party solutions built by vendors who specialize in applying AI to enterprise functions.(58) Both startups and established enterprise vendors offer solutions to address common pain points for all departments, including sales and marketing, finance, operations and back-office, customer support, and even HR and recruiting. Emphasize Your Company’s Unique Advantages At the end of an interview cycle, a strong AI candidate will have multiple offers in hand. In order to close the candidate, you’ll need to differentiate your company from others. In addition to compensation, culture, and other general fit criteria, AI talent tends to evaluate offers on the following areas: Availability of Data Candidates want to be able to train their models with as much data as possible.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

If you want to survive this coming fourth phase in the next few decades and prepare for it, you cannot afford NOT to read Chace’s book. Prof. Dr. Hugo de Garis, author of The Artilect War, former director of the Artificial Brain Lab, Xiamen University, China. Advances in AI are set to affect progress in all other areas in the coming decades. If this momentum leads to the achievement of strong AI within the century, then in the words of one field leader it would be “the biggest event in human history”. Now is therefore a perfect time for the thoughtful discussion of challenges and opportunities that Chace provides. Surviving AI is an exceptionally clear, well-researched and balanced introduction to a complex and controversial topic, and is a compelling read to boot.

Whether intelligence resides in the machine or in the software is analogous to the question of whether it resides in the neurons in your brain or in the electrochemical signals that they transmit and receive. Fortunately we don’t need to answer that question here.

ANI and AGI

We do need to discriminate between two very different types of artificial intelligence: artificial narrow intelligence (ANI) and artificial general intelligence (AGI (4)), which are also known as weak AI and strong AI, and as ordinary AI and full AI. The easiest way to do this is to say that artificial general intelligence, or AGI, is an AI which can carry out any cognitive function that a human can. We have long had computers which can add up much better than any human, and computers which can play chess better than the best human chess grandmaster.

Informed scepticism about near-term AGI We should take more seriously the arguments of very experienced AI researchers who claim that although the AGI undertaking is possible, it won’t be achieved for a very long time. Rodney Brooks, a veteran AI researcher and robot builder, says “I think it is a mistake to be worrying about us developing [strong] AI any time in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.” Andrew Ng at Baidu and Yann LeCun at Facebook are of a similar mind, as we saw in the last chapter.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

The next major step in this direction was the May 2006 AGIRI Workshop, of which this volume is essentially a proceedings. The term AGI, artificial general intelligence, was introduced as a modern successor to the earlier strong AI.

Artificial General Intelligence

What is artificial general intelligence? The AGIRI website lists several features, describing machines:

• with human-level, and even superhuman, intelligence;
• that generalize their knowledge across different domains;
• that reflect on themselves;
• and that create fundamental innovations and insights.

Even strong AI wouldn’t push for so much, and so general, an intelligence. Can there be such an artificial general intelligence? I think there can be, but that it can’t be done with a brain in a vat, with humans providing input and utilizing computational output.

Machine learning algorithms may be applied quite broadly in a variety of contexts, but the breadth and generality in this case is supplied largely by the human user of the algorithm; any particular machine learning program, considered as a holistic system taking in inputs and producing outputs without detailed human intervention, can solve only problems of a very specialized sort. Specified in this way, what we call AGI is similar to some other terms that have been used by other authors, such as “strong AI” [7], “human-level AI” [8], “true synthetic intelligence” [9], “general intelligent system” [10], and even “thinking machine” [11]. Though no term is perfect, we chose to use “AGI” because it correctly stresses the general nature of the research goal and scope, without committing too much to any theory or technique.

In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons for AGI researchers drawn from the model and its architecture are discussed.

Introduction

Early AI researchers aimed at what was later called “strong AI,” the simulation of human-level intelligence. One of AI’s founders, Herbert Simon, claimed (circa 1957) that “… there are now in the world machines that think, that learn and that create.” He went on to predict that within 10 years a computer would beat a grandmaster at chess, would prove an “important new mathematical theorem,” and would write music of “considerable aesthetic value.”

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

“Chapter eight is the deeply intertwined promise and peril in GNR [genetics, nanotechnology, and robotics] and I go into pretty graphic detail on the downsides of those three areas of technology. And the downside of robotics, which really refers to AI, is the most profound because intelligence is the most important phenomenon in the world. Inherently there is no absolute protection against strong AI.” Kurzweil’s book does underline the dangers of genetic engineering and nanotechnology, but it gives only a couple of anemic pages to strong AI, the old name for AGI. And in that chapter he also argues that relinquishment, or turning our backs on some technologies because they’re too dangerous, as advocated by Bill Joy and others, isn’t just a bad idea, but an immoral one.

We’ve seen how all of these drives will lead to very bad outcomes without extremely careful planning and programming. And we’re compelled to ask ourselves, are we capable of such careful work? Do you, like me, look around the world at expensive and lethal accidents and wonder how we’ll get it right the first time with very strong AI? Three-Mile Island, Chernobyl, Fukushima—in these nuclear power plant catastrophes, weren’t highly qualified designers and administrators trying their best to avoid the disasters that befell them? The 1986 Chernobyl meltdown occurred during a safety test. All three disasters are what organizational theorist Charles Perrow would call “normal accidents.”

It will be capable of thinking, planning, and gaming its makers. No other tool does anything like that. Kurzweil believes that a way to limit the dangerous aspects of AI, especially ASI, is to pair it with humans through intelligence augmentation—IA. From his uncomfortable metal chair the optimist said, “As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such it will reflect our values because it will be us.” And so, the argument goes, it will be as “safe” as we are. But, as I told Kurzweil, Homo sapiens are not known to be particularly harmless when in contact with one another, other animals, or the environment.

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

But this might be all on the surface. Inside, the AI-bot could be empty, what philosophers call a zombie. It would have no soul, no subjectivity, no inner spark of whatever it is that makes us what we are. Bostrom’s trilemma takes strong AI as a given. Maybe it should be called a quadrilemma, with strong AI as the fourth leg of the stool. But for most of those following what Bostrom is saying, strong AI is taken for granted. If simulated people have real feelings, then simulation is an ethically fraught enterprise. A simulation of global history would recreate famine, plague, natural disasters, murders, wars, slavery, and genocide.

Most of today’s AI researchers, and most in the tech community generally, believe that something that acts like a human and talks like a human and thinks like a human—to a sufficiently subtle degree—would have “a mind in exactly the same sense human beings have minds,” in philosopher John Searle’s words. This view is known as “strong AI.” Searle is among a dissenting faction of philosophers, and regular folk, who are not so sure about that. Almost all contemporary philosophers agree in principle that code could pass the Turing test, that it could be programmed to insist on having private moods and emotions, and that it could narrate a stream of consciousness as convincing as any human’s.

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know
by Richard Watson
Published 5 Nov 2013

So what happens when machine intelligence starts to rival that of its human designers? Before we descend into this rabbit hole, we should first split AI in two. “Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are preprogrammed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict then respond to crime (a “Department of Future Crime”—see also Chapter 32 and Biocriminology).

Glossary

3D printer: A way to produce 3D objects from digital instructions and layered materials dispersed or sprayed on via a printer.

Affective computing: Machines and systems that recognize or simulate human affects or emotions.

AGI: Artificial general intelligence, a term usually used to describe strong AI (the opposite of narrow or weak AI). It is machine intelligence that is equivalent to, or exceeds, human intelligence, and it’s usually regarded as the long-term goal of AI research and development.

Ambient intelligence: Electronic or artificial environments that recognize the presence of other machines or people and respond to their needs.

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts
by Richard Susskind and Daniel Susskind
Published 24 Aug 2015

In the language of some AI scientists and philosophers of the 1980s, these systems would be labelled, perhaps a little pejoratively, as ‘weak AI’ rather than ‘strong AI’.8 Broadly speaking, ‘weak AI’ is a term applied to systems that appear, behaviourally, to engage in intelligent human-like thought but in fact enjoy no form of consciousness; whereas systems that exhibit ‘strong AI’ are those that, it is maintained, do have thoughts and cognitive states. On this latter view, the brain is often equated with the digital computer. Today, fascination with ‘strong AI’ is perhaps more intense than ever, even though really big questions remain unanswered and unanswerable.

Undeterred by these philosophical challenges, books and projects abound on building brains and creating minds.9 In the 1980s, in our speeches, we used to joke about the claim of one of the fathers of AI, Marvin Minsky, who reportedly said that ‘the next generation of computers will be so intelligent, we will be lucky if they keep us around as household pets’.10 Today, it is no longer laugh-worthy or science-fictional11 to contemplate a future in which our computers are vastly more intelligent than us—this prospect is discussed at length in Superintelligence by Nick Bostrom, who runs the Future of Humanity Institute at the Oxford Martin School at the University of Oxford.12 Ironically, this growth in confidence in the possibility of ‘strong AI’, at least in part, has been fuelled by the success of Watson itself. The irony here is that Watson in fact belongs in the category of ‘weak AI’, and it is precisely because it cannot meaningfully be said to think that the system is not deemed very interesting by some AI scientists, psychologists, and philosophers. For pragmatists (like us) rather than purists, whether Watson is an example of ‘weak’ or ‘strong’ AI is of little moment. Pragmatists are interested in high-performing systems, whether or not they can think.

pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech
by Franklin Foer
Published 31 Aug 2017

In Kurzweil’s telling, the singularity is when artificial intelligence becomes all-powerful, when computers are capable of designing and building other computers. This superintelligence will, of course, create a superintelligence even more powerful than itself—and so on, down the posthuman generations. At that point, all bets are off—“strong AI and nanotechnology can create any product, any situation, any environment that we can imagine at will.” As a scientist, Kurzweil believes in precision. When he makes predictions, he doesn’t chuck darts; he extrapolates data. In fact, he’s loaded everything we know about the history of human technology onto his computer and run the numbers.

There’s a school of incrementalists, who cherish everything that has been accomplished to date—victories like the PageRank algorithm or the software that allows ATMs to read the scrawled writing on checks. This school holds out little to no hope that computers will ever acquire anything approximating human consciousness. Then there are the revolutionaries who gravitate toward Kurzweil and the singularitarian view. They aim to build computers with either “artificial general intelligence” or “strong AI.” For most of Google’s history, it trained its efforts on incremental improvements. During that earlier era, the company was run by Eric Schmidt—an older, experienced manager, whom Google’s investors forced Page and Brin to accept as their “adult” supervisor. That’s not to say that Schmidt was timid.

he made an appearance on Steve Allen’s game show, I’ve Got a Secret: Ray Kurzweil, “I’ve Got a Secret,” 1965, https://www.youtube.com/watch?v=X4Neivqp2K4.
“to invent things so that the blind could see”: Steve Rabinowitz quoted in Transcendent Man, directed by Barry Ptolemy, 2011.
“profoundly sad, lonely feeling that I really can’t bear it”: Transcendent Man.
“strong AI and nanotechnology can create any product”: Ray Kurzweil, The Singularity Is Near (Viking Penguin, 2005), 299.
“Each epoch of evolution has progressed more rapidly”: Kurzweil, Singularity, 40.
“version 1.0 biological bodies”: Kurzweil, Singularity, 9.
“We will be software, not hardware”: Ray Kurzweil, The Age of Spiritual Machines (Viking Penguin, 1999), 129.

pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy
by Pistono, Federico
Published 14 Oct 2012

A machine able to pass the Turing test is said to have achieved human-level intelligence, or at least perceived intelligence (whether we consider that to be true intelligence or not is irrelevant for the purpose of the argument). Some people call this Strong Artificial Intelligence (Strong AI), and many see Strong AI as an unachievable myth, because the brain is mysterious, and so much more than the sum of its individual components. They claim that the brain operates using unknown, possibly unintelligible quantum mechanical processes, and any effort to reach or even surpass it using mechanical machines is pure fantasy.

Others claim that the brain is just a biological machine, not much different from any other machine, and that it is merely a matter of time before we can surpass it using our artificial creations. This is certainly a fascinating topic, one that would require a thorough examination. Perhaps I will explore it in another book. For now, let us concentrate on the present, on what we know for sure, and on the upcoming future. As we will see, there is no need for machines to achieve Strong AI in order to change the nature of the economy, employment, and our lives, forever. We will start by looking at what intelligence is, how it can be useful, and if machines have become intelligent, perhaps even more so than us.

Chapter 5: Intelligence

There is a great deal of confusion regarding the meaning of the word intelligence, mainly because nobody really knows what it is.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

MARTIN FORD: So, you believe that the capability to think causally is critical to achieving what you’d call strong AI or AGI, artificial general intelligence? JUDEA PEARL: I have no doubt that it is essential. Whether it is sufficient, I’m not sure. However, causal reasoning doesn’t solve every problem of general AI. It doesn’t solve the object recognition problem, and it doesn’t solve the language understanding problem. We basically solved the cause-effect puzzle, and we can learn a lot from these solutions so that we can help the other tasks circumvent their obstacles. MARTIN FORD: Do you think that strong AI or AGI is feasible? Is that something you think will happen someday?

A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to human-level AI. ARTIFICIAL GENERAL INTELLIGENCE (AGI) refers to a true thinking machine. AGI is typically considered to be more or less synonymous with the terms HUMAN-LEVEL AI or STRONG AI. You’ve likely seen several examples of AGI—but they have all been in the realm of science fiction. HAL from 2001: A Space Odyssey, the Enterprise’s main computer (or Mr. Data) from Star Trek, C3PO from Star Wars and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the TURING TEST—in other words, these AI systems could carry out a conversation so that they would be indistinguishable from a human being.

MARTIN FORD: It sounds like your strategy is to attract AI talent in part by offering the opportunity and infrastructure to found a startup venture. ANDREW NG: Yes, building a successful AI company takes more than AI talent. We focus so much on the technology because it’s advancing so quickly, but building a strong AI team often needs a portfolio of different skills ranging from the tech, to the business strategy, to product, to marketing, to business development. Our role is building full stack teams that are able to build concrete business verticals. The technology is super important, but a startup is much more than technology.

pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

As computational neuroscientist and AI entrepreneur Beau Cronin puts it, “In many cases, Google has succeeded by reducing problems that were previously assumed to require strong AI—that is, reasoning and problem-solving abilities generally associated with human intelligence—into narrow AI, solvable by matching new inputs against vast repositories of previously encountered examples.” Enough narrow AI infused with the data thrown off by billions of humans starts to look suspiciously like strong AI. In short, these are systems of collective intelligence that use algorithms to aggregate the collective knowledge and decisions of millions of individual humans.
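The pattern Cronin describes, matching new inputs against repositories of previously encountered examples, is in essence nearest-neighbour lookup. Here is a minimal, hypothetical sketch over toy data; it is an illustration of the general pattern, not Google's actual method:

```python
# Minimal 1-nearest-neighbour sketch of "matching new inputs against
# repositories of previously encountered examples" (toy data).
def classify(query, repository):
    """repository: list of (feature_vector, label) pairs."""
    def sq_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(repository, key=lambda pair: sq_distance(pair[0], query))[1]

repo = [((0.1, 0.2), "cat"), ((0.9, 0.8), "dog"), ((0.2, 0.1), "cat")]
print(classify((0.15, 0.18), repo))  # -> "cat"
```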

And then we need to understand how financial markets (often colloquially, and inaccurately, referred to simply as “Wall Street”) have become a machine that its creators no longer fully understand, and how the goals and operation of that machine have become radically disconnected from the market of real goods and services that it was originally created to support.

THREE TYPES OF ARTIFICIAL INTELLIGENCE

As we’ve seen, when experts talk about artificial intelligence, they distinguish between “narrow artificial intelligence” and “general artificial intelligence,” also referred to as “weak AI” and “strong AI.” Narrow AI burst into the public debate in 2011. That was the year that IBM’s Watson soundly trounced the best human Jeopardy players in a nationally televised match in February. In October of that same year, Apple introduced Siri, its personal agent, able to answer common questions spoken aloud in plain language.

Rather than spelling out every procedure, a base program such as an image recognizer or categorizer is built, and then trained by feeding it large amounts of data labeled by humans until it can recognize patterns in the data on its own. We teach the program what success looks like, and it learns to copy us. This leads to the fear that these programs will become increasingly independent of their creators. Artificial general intelligence (also sometimes referred to as “strong AI”) is still the stuff of science fiction. It is the product of a hypothetical future in which an artificial intelligence isn’t just trained to be smart about a specific task, but to learn entirely on its own, and can effectively apply its intelligence to any problem that comes its way. The fear is that an artificial general intelligence will develop its own goals and, because of its ability to learn on its own at superhuman speeds, will improve itself at a rate that soon leaves humans far behind.

pages: 247 words: 43,430

Think Complexity
by Allen B. Downey
Published 23 Feb 2012

One of the strongest challenges to compatibilism is the consequence argument. What is the consequence argument? What response can you give to the consequence argument based on what you have read in this book? Example 10-7. In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds. John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room. What is the system reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the system reply?

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed
by Ray Kurzweil
Published 13 Nov 2012

The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex. Another objection to the feasibility of “strong AI” (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy.
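
Kurzweil’s point about gradations is easy to demonstrate: an n-bit word approximates an analog value with a worst-case error that roughly halves for each bit added. A minimal sketch (the sample value and bit widths are arbitrary choices):

```python
# A minimal illustration of the reply above: an n-bit word can
# approximate an analog value to any desired accuracy, with the
# worst-case error roughly halving for every bit added.
def quantize(x, bits):
    levels = 2 ** bits - 1          # highest representable step
    return round(x * levels) / levels

analog_value = 0.7183               # an arbitrary "analog" gradation
for bits in (1, 4, 8, 16):
    q = quantize(analog_value, bits)
    print(f"{bits:2d} bits -> {q:.6f} (error {abs(q - analog_value):.2e})")
```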

“IBM Unveils Cognitive Computing Chips,” IBM news release, August 18, 2011, http://www-03.ibm.com/press/us/en/pressrelease/35251.wss. 8. “Japan’s K Computer Tops 10 Petaflop/s to Stay Atop TOP500 List.” Chapter 9: Thought Experiments on the Mind 1. John R. Searle, “I Married a Computer,” in Jay W. Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 2. Stuart Hameroff, Ultimate Computing: Biomolecular Consciousness and Nanotechnology (Amsterdam: Elsevier Science, 1987). 3. P. S. Sebel et al., “The Incidence of Awareness during Anesthesia: A Multicenter United States Study,” Anesthesia and Analgesia 99 (2004): 833–39. 4.

Modha et al., “Cognitive Computing,” Communications of the ACM 54, no. 8 (2011): 62–71, http://cacm.acm.org/magazines/2011/8/114944-cognitive-computing/fulltext. 9. Kurzweil, The Singularity Is Near, chapter 9, section titled “The Criticism from Ontology: Can a Computer Be Conscious?” (pp. 458–69). 10. Michael Denton, “Organism and Machine: The Flawed Analogy,” in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI (Seattle: Discovery Institute, 2002). 11. Hans Moravec, Mind Children (Cambridge, MA: Harvard University Press, 1988). Epilogue 1. “In U.S., Optimism about Future for Youth Reaches All-Time Low,” Gallup Politics, May 2, 2011, http://www.gallup.com/poll/147350/optimism-future-youth-reaches-time-low.aspx. 2.

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

These theorists claim that a suitably programmed computer could imitate conscious mental states, such as self-awareness, understanding or love, but could never actually experience them—it could never be conscious, and hence it could never be self-aware and would never actually understand or love anything. Proponents of Strong AI believe the opposite. They claim that a computer could, given the right programming, possess consciousness and thereby experience conscious mental states. This title is inspired by Dreyfus’s (1992) ‘What Computers Still Can’t Do’.

Indeed, there is no example to be found of an entity producing its opposite by itself (or if there is, let the reader pen it in writing as an objection to this premise). Therefore, it is unreasonable to suppose that the brain would be able to do so: an entirely physical thing (brain/computer) could not produce something that is entirely non-physical (consciousness). Note that although the above examples admit of the possibility of something leading to its opposite if combined with something else, the argument of Strong AI proponents such as Kurzweil entails no such non-physical enabling substance. Such theorists suggest that a physical thing, and that physical thing alone, can produce consciousness. That is precisely the claim I am rejecting. Finally, let us consider an objection from Turing (1950: 445–447).

pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It
by Marc Goodman
Published 24 Feb 2015

Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing.

To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped to resolve programming conflicts and moral judgments than was HAL. It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

These limited goals might include natural language processing functions like translation, or navigating through an unfamiliar physical environment. A narrow AI system is suited only to the task for which it is designed. The great majority of AI systems in the world today are closer to this narrow and limited type. General (or “strong”) AI is the ability to achieve an unlimited range of goals, and even to set new goals independently, including in situations of uncertainty or vagueness. This encompasses many of the attributes we think of as intelligence in humans. Indeed, general AI is what we see portrayed in the robots and AI of popular culture discussed above.

What about if 20%, 50% or 80% of their mental functioning was the result of computer processing powers? On one view, the answer would be the same—a human should not lose rights just because they have added to their mental functioning. However, consistent with his view that no artificial process can produce “strong” AI which resembles human intelligence, the philosopher John Searle argues that replacement would gradually remove conscious experience.118 Replacement or augmentation of human physical functions with artificial ones does not render someone less deserving of rights.119 Someone who loses an arm and has it replaced with a mechanical version is not considered less human.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

These are the sorts of skills that might persuade us it had human-like intelligence. Conversely, perhaps those are the wrong sort of skills to be thinking about when gauging intelligence. We certainly have a tendency to judge by human-centric standards. Perhaps we shouldn’t. A tell-tale is the term ‘strong AI’, used by aficionados to mean AI that can perform like a human—flexibly, socially, emotionally. But that’s certainly not the only yardstick for intelligence. After all, we may be very good at those things, but there are plenty of other things we are poor at. We are, for example, weak at statistical reasoning, and our memory is patchy and selective.

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life
by Daniel C. Dennett
Published 15 Jan 1995

Simpler survival machines — plants, for instance — never achieve the heights of self-redefinition made possible by the complexities of your robot; considering them just as survival machines for their comatose inhabitants leaves no patterns in their behavior unexplained. If you pursue this avenue, which of course I recommend, then you must abandon Searle's and Fodor's "principled" objection to "strong AI." The imagined robot, however difficult or unlikely an engineering feat, is not an impossibility — nor do they claim it to be. They concede the possibility of such a robot, but just dispute its "metaphysical status"; however adroitly it managed its affairs, they say, its intentionality would not be the real thing.

Certainly everybody in AI has always known about Gödel's Theorem, and they have all continued, unworried, with their labors. In fact, Hofstadter's classic Gödel, Escher, Bach (1979) can be read as the demonstration that Gödel is an unwilling champion of AI, providing essential insights about the paths to follow to strong AI, not showing the futility of the field. But Roger Penrose, Rouse Ball Professor of Mathematics at Oxford, and one of the world's leading mathematical physicists, thinks otherwise. His challenge has to be taken seriously, even if, as I and others in AI are convinced, he is making a fairly simple mistake.

As a product of biological design processes (both genetic and individual), it is almost certainly one of those algorithms that are somewhere or other in the Vast space of interesting algorithms, full of typographical errors or "bugs," but good enough to bet your life on — so far. Penrose sees this as a "far-fetched" possibility, but if that is all he can say against it, he has not yet come to grips with the best version of "strong AI." 3. THE PHANTOM QUANTUM-GRAVITY COMPUTER: LESSONS FROM LAPLAND I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

One result of this conservatism has been increased concentration on “weak AI”—the variety devoted to providing aids to human thought—and away from “strong AI”—the variety that attempts to mechanize human-level intelligence.73 Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.74 The last few years have seen a resurgence of interest in AI, which might yet spill over into renewed efforts towards artificial general intelligence (what Nilsson calls “strong AI”). In addition to faster hardware, a contemporary project would benefit from the great strides that have been made in the many subfields of AI, in software engineering more generally, and in neighboring fields such as computational neuroscience.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

Text and signatories available online. https://futureoflife.org/ai-principles/. Gaddis, J. L. The Cold War: A New History. New York: Penguin Press, 2006. . On Grand Strategy. New York: Penguin Press, 2018. Gilder, G. F., and Ray Kurzweil. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI. edited by Jay Wesley Richards. Seattle: Discovery Institute Press, 2001. Goertzel, B., and C. Pennachin, eds. Artificial General Intelligence. Cognitive Technologies Series. Berlin: Springer, 2007. doi:10.1007/978-3-540-68677-4. Gold, E. M. “Language Identification in the Limit.” Information and Control 10, no. 5 (1967): 447–474.

Deciding is a computational activity, something that can be programmed. Choice, however, is the product of judgment, not calculation. It is the capacity to choose that ultimately makes us human. University of California, Berkeley, philosopher John Searle, in his paper “Minds, Brains, and Programs,” argued against the plausibility of general, or what he called “strong,” AI. Searle said a program cannot give a computer a “mind,” “understanding,” or “consciousness,” regardless of how humanlike the program might behave. 34. Jonathan Schaeffer, Robert Lake, Paul Lu, and Martin Bryant, “CHINOOK: The World Man-Machine Checkers Champion,” AI Magazine 17, no. 1 (Spring 1966): 21–29, https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1208/1109.pdf. 35.

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

‘This all sounds like an argument for stopping people working on strong AI?’ asked Matt. ‘Although I guess that would be hard to do. There are too many people working in the field, and as you say, a lot of them show no sign of understanding the danger.’ ‘You’re right,’ Ivan agreed, ‘we’re on a runaway train that cannot be stopped. Some science fiction novels feature a powerful police force – the Turing Police – that keeps watch to ensure that no-one creates a human-level artificial intelligence. But that’s hopelessly unrealistic. The prize – both intellectual and material – for owning an AGI is too great. Strong AI is coming, whether we like it or not.’

Robot Futures
by Illah Reza Nourbakhsh
Published 1 Mar 2013

Eye tracking: A skill enabling a robot to visually examine the scene before it, identify the faces in the scene, mark the location of the eyes on each face, and then find the irises so that the gaze directions of the humans are known. Humans are particularly good at this even when we face other people at acute angles.

Hard AI: Also known as strong AI, this embodies the AI goal of going all the way toward human equivalence: matching natural intelligence along every possible axis so that artificial beings and natural humans are, at least from a cognitive point of view, indistinguishable.

Laser cutting: A rapid-prototyping technique in which flat material such as plastic or metal lies on a table and a high-power laser is able to rapidly cut a complex two-dimensional shape out of the raw material.
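
For the “Eye tracking” entry above, here is a minimal sketch of its first two stages (finding faces, then locating eye regions within each face) using OpenCV’s bundled Haar cascades. Estimating gaze direction from the irises is beyond this sketch, and “scene.jpg” is a hypothetical input image.

```python
# A minimal sketch of the first stages of the eye-tracking pipeline
# described above: detect faces, then locate eye regions within each.
# Uses OpenCV's bundled Haar cascades; "scene.jpg" is hypothetical.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face_region = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_region)
    print(f"face at ({x}, {y}): {len(eyes)} candidate eye region(s)")
```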

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

Thirdly, that intelligence, from its simplest manifestation in a squirming worm to self-awareness and consciousness in sophisticated cappuccino-sipping humans, is a purely material, indeed biological, phenomenon. Finally, that if a material object called ‘brain’ can be conscious then it is theoretically feasible that another material object, made of some other material stuff, can also be conscious. Based on those four propositions, empiricism tells us that ‘strong AI’ is possible. And that’s because, for empiricists, a brain is an information-processing machine, not metaphorically but literally. We have several billion cells in our body.27 If we adopt an empirical perspective, the scientific problem of intelligence – or consciousness, natural or artificial – can be (re)defined as a simple question: how can several billion unconscious nanorobots arrive at consciousness?

And although they produced some very capable systems, none of them could arguably be called intelligent. Of course, how one defines intelligence is also crucial. For the pioneers of AI, ‘artificial intelligence’ was nothing less than the artificial equivalent of human intelligence, a position nowadays referred to as ‘strong AI’. An intelligent machine ought to be one that possessed general intelligence, just like a human. This meant that the machine ought to be able to solve any problem using first principles and experience derived from learning. Early models of general problem-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps.
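
The scale of the problem is easy to show: a naive search over b choices per step must examine b^d sequences at depth d. A toy illustration follows; the branching factor is an arbitrary assumption.

```python
# A toy illustration of combinatorial explosion: a naive search over
# b choices per step must consider b**d sequences at depth d, which
# is why general problem solvers working from first principles stall.
branching_factor = 10  # choices available at each step (illustrative)
for depth in (2, 5, 10, 20):
    count = branching_factor ** depth
    print(f"depth {depth:2d}: {count:,} sequences to examine")
```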

Succeeding With AI: How to Make AI Work for Your Business
by Veljko Krunic
Published 29 Mar 2020

While the logic from the sidebar “Imagine that you’re a CEO” applies to businesses such as Google, Baidu, or Microsoft, there’s an unfortunate tendency for many enterprises to emulate these companies without understanding the rationale behind their actions. Yes, the biggest players make significant money with their AI efforts. They also invest a lot in AI research. Before you start emulating their AI research efforts, ask yourself, “Am I in the same business?” If your company were to invent something important for strong AI/AGI [76], would you know how to monetize it? Suppose you’re a large brick-and-mortar retailer. Could you take full advantage of that discovery? Probably not—the retailer’s business is different from Google’s. Almost certainly, your company would benefit more from AI technology if you used it to solve your own concrete business problems.

AI will make actuarial mistakes that an average human, uninformed about AI, will see as malicious. Juries, whether in court or in the court of public opinion, are made up of humans. WARNING There’s no way to know if AI will ever develop common sense. It may not for quite a while; maybe not even until we get strong AI/Artificial General Intelligence [76]. Accounting for AI’s actuarial view is a part of your problem domain and part of why understanding your domain is crucial. It’s difficult to account for the differences between the actuarial view AI takes and human social expectations. Accounting for those differences is not an engineering problem and something that you should pass on to the engineering team to solve.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Some want to replace humans with machines; some are resigned to the inevitability—“I, for one, welcome our new insect overlords” (later “robot overlords”) was a meme popularized by The Simpsons—and some of them just as passionately want to build machines to extend the reach of humans. The question of whether true artificial intelligence—the concept known as “Strong AI” or Artificial General Intelligence—will emerge, and whether machines can do more than mimic humans, has also been debated for decades. Today there is a growing chorus of scientists and technologists raising new alarms about the possibility of the emergence of self-aware machines and their consequences.

Whether or not Google is on the trail of a genuine artificial “brain” has become increasingly controversial. There is certainly no question that the deep learning techniques are paying off in a wealth of increasingly powerful AI achievements in vision and speech. And there remains in Silicon Valley a growing group of engineers and scientists who believe they are once again closing in on “Strong AI”—the creation of a self-aware machine with human or greater intelligence. Ray Kurzweil, the artificial intelligence researcher and barnstorming advocate for technologically induced immortality, joined Google in 2013 to take over the brain work from Ng, shortly after publishing How to Create a Mind, a book that purported to offer a recipe for creating a working AI.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology. To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, an early example of primitive natural-language processing from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.
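
For a sense of how shallow the underlying technique is, here is a minimal ELIZA-style sketch: keyword-triggered templates with no model of meaning. The patterns are illustrative, not Weizenbaum’s originals.

```python
# A minimal sketch of the ELIZA-style technique alluded to above:
# keyword-triggered response templates with no model of meaning.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().rstrip(".!?"))
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about strong AI"))
```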

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept. I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory. Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades.

pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)
by Jamie Bartlett
Published 4 Apr 2018

The General Data Protection Regulation (GDPR) which is due to come into law across Europe shortly after this book goes to print, is a good example and must be enforced with vigour.* SAFE AI FOR GOOD Artificial Intelligence must not become a proprietary operating system owned and run by a single winner-takes-all company. However, we cannot fall behind in the international race to develop strong AI. Non-democracies must not get an edge on us. We should encourage the sector, but it must be subject to democratic control and, above all, tough regulation to ensure it works in the public interest and is not subject to being hacked or misused.2 Just as the inventors of the atomic bomb realised the power of their creation and so dedicated themselves to creating arms control and nuclear reactor safety, so AI inventors should take similar responsibility.

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

There is a long tradition of hype in AI, going back to the earliest days, and many of us have a well-developed habit of discounting the latest “revolutionary breakthrough” by, say, 70% or more, but when such high-tech mavens as Elon Musk and such world-class scientists as Sir Martin Rees and Stephen Hawking start ringing alarm bells about how AI could soon lead to a cataclysmic dissolution of human civilization in one way or another, it is time to rein in one’s habits and reexamine one’s suspicions. Having done so, my verdict is unchanged but more tentative than it used to be. I have always affirmed that “strong AI” is “possible in principle”—but I viewed it as a negligible practical possibility, because it would cost too much and not give us anything we really needed. Domingos and others have shown me that there may be feasible pathways (technically and economically feasible) that I had underestimated, but I still think the task is orders of magnitude larger and more difficult than the cheerleaders have claimed, for the reasons presented in this chapter, and in chapter 8 (the example of Newyorkabot, p. 164).

I discuss the prospects of such a powerful theory or model of an intelligent agent, and point out a key ambiguity in the original Turing Test, in an interview with Jimmy So about the implications of Her, in “Can Robots Fall in Love” (2013), The Daily Beast, http://www.thedailybeast.com/articles/2013/12/31/can-robots-fall-in-love-and-why-would-they.html. 400, “a negligible practical possibility”: When explaining why I thought strong AI was possible in principle but practically impossible, I have often compared it to the task of making a robotic bird that weighed no more than a robin, could catch insects on the fly, and land on a twig. No cosmic mystery, I averred, in such a bird, but the engineering required to bring it to reality would cost more than a dozen Manhattan Projects, and to what end?

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

And how do we control machines that may become more intelligent than us? 28.1 The Limits of AI In 1980, philosopher John Searle introduced a distinction between weak AI—the idea that machines could act as if they were intelligent—and strong AI—the assertion that machines that do so are actually consciously thinking (not just simulating thinking). Over time the definition of strong AI shifted to refer to what is also called “human-level AI” or “general AI”—programs that can solve an arbitrarily wide variety of tasks, including novel ones, and do so as well as a human. Critics of weak AI who objected to the very possibility of intelligent behavior in machines now appear as shortsighted as Simon Newcomb, who in October 1903 wrote “aerial flight is one of the great class of problems with which man can never cope”—just two months before the Wright brothers’ flight at Kitty Hawk.

In fact, that is the plot of Terry Bisson’s (1990) science fiction story They’re Made Out of Meat, in which alien robots explore Earth and can’t believe that hunks of meat could possibly be sentient. How they can be remains a mystery. 28.2.2 Consciousness and qualia Running through all the debates about strong AI is the issue of consciousness: awareness of the outside world, and of the self, and the subjective experience of living. The technical term for the intrinsic nature of experiences is qualia (from the Latin word meaning, roughly, “of what kind”). The big question is whether machines can have qualia.

We humans would do well to make sure that any intelligent machine we design today that might evolve into an ultraintelligent machine will do so in a way that ends up treating us well. As Erik Brynjolfsson puts it, “The future is not preordained by machines. It’s created by humans.” Summary This chapter has addressed the following issues:

•Philosophers use the term weak AI for the hypothesis that machines could possibly behave intelligently, and strong AI for the hypothesis that such machines would count as having actual minds (as opposed to simulated minds).

•Alan Turing rejected the question “Can machines think?” and replaced it with a behavioral test. He anticipated many objections to the possibility of thinking machines. Few AI researchers pay attention to the Turing test, preferring to concentrate on their systems’ performance on practical tasks, rather than the ability to imitate humans.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

As Alan Turing pointed out with his Turing Test, the question of whether or not a machine can think is ‘meaningless’ in the sense that it is virtually impossible to assess with any certainty. As we saw in the last chapter, the idea that consciousness is some emergent byproduct of faster and faster computers is overly simplistic. Consider the difficulty in distinguishing between ‘weak’ and ‘strong’ AI. Some people mistakenly suggest that, in the former, an AI’s outcome has been pre-programmed and it is therefore the result of an algorithm carrying out a specific series of steps to achieve a knowable outcome. This means an AI has little to no chance of generating an unpredictable outcome, provided that the training process is properly carried out.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

Finally, the school would adjust other elements of the work flow to take advantage of being able to provide instantaneous school admission decisions. 13 Decomposing Decisions Today’s AI tools are far from the machines with human-like intelligence of science fiction (often referred to as “artificial general intelligence” or AGI, or “strong AI”). The current generation of AI provides tools for prediction and little else. This view of AI does not diminish it. As Steve Jobs once remarked, “One of the things that really separates us from the high primates is that we’re tool builders.” He used the example of the bicycle as a tool that had given people superpowers in locomotion above every other animal.

Toast
by Stross, Charles
Published 1 Jan 2002

“Way I see it, we’ve been fighting a losing battle here. Maybe if we hadn’t put a spike in Babbage’s gears he’d have developed computing technology on an ad-hoc basis and we might have been able to finesse the mathematicians into ignoring it as being beneath them—brute engineering—but I’m not optimistic. Immunizing a civilization against developing strong AI is one of those difficult problems that no algorithm exists to solve. The way I see it, once a civilization develops the theory of the general purpose computer, and once someone comes up with the goal of artificial intelligence, the foundations are rotten and the dam is leaking. You might as well take off and nuke them from orbit; it can’t do any more damage.”

pages: 329 words: 95,309

Digital Bank: Strategies for Launching or Becoming a Digital Bank
by Chris Skinner
Published 27 Aug 2013

Social media is creating new currencies and new economic models, and this will be very big and very important in the two to three years downstream from now. The question for the banks is how will they position in this new world of peer-to-peer currencies in social media. That is going to be a key question for banks in innovation for the next few years. The other area is what I call strong AI. This is a modern way of looking at AI. The old way was mechanical and thought of this as expert systems. Today, we have this enormous computational power in our hands now, and we should make a big splash around this for the next four or five years. So social data, social media, alternative currencies and peer-to-peer payments will dominate for the near term, and then big data and AI in four or five years from now.

Noam Chomsky: A Life of Dissent
by Robert F. Barsky
Published 2 Feb 1997

Chomsky, who in fact only attended the conference briefly, preferring to spend his time engaged in the subject in the context of "talks to popular audiences," insists that the Times misrepresented what had occurred at the meetings: "There was scientific interest, but it had nothing whatsoever to do with language translation (MT) and artificial intelligence (AI). MT is a very low level engineering project, and so-called classic strong AI is largely vacuous, dismissed by most serious scientists and lacking any results, as its leading exponents concede" (31 Mar. 1995). Entire research projects, on language acquisition and other topics, were now being conducted with the aim of either establishing or disproving Chomsky's theories. Chomsky himself fuelled these enterprises by maintaining a high level of productivity: he published Reflections on Language (1975), Essays on Form and Interpretation (1977), Rules and Representations (1980), and Modular Approaches to the Study of the Mind (1984).

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

THE AI WORLD ORDER Inequality will not be contained within national borders. China and the United States have already jumped out to an enormous lead over all other countries in artificial intelligence, setting the stage for a new kind of bipolar world order. Several other countries—the United Kingdom, France, and Canada, to name a few—have strong AI research labs staffed with great talent, but they lack the venture-capital ecosystem and large user bases to generate the data that will be key to the age of implementation. As AI companies in the United States and China accumulate more data and talent, the virtuous cycle of data-driven improvements is widening their lead to a point where it will become insurmountable.

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

The kind of AI we have today is narrow AI, also known as weak AI. It is the only kind of AI we know how to build, and it is incredibly useful. Narrow AI is the ability for a computer to solve a specific kind of problem or perform a specific task. The other kind of AI is referred to by three different names: general AI, strong AI, or artificial general intelligence (AGI). Although the terms are interchangeable, I will use AGI from this point forward to refer to an artificial intelligence as smart and versatile as you or me. A Roomba vacuum cleaner, Siri, and a self-driving car are powered by narrow AI. A hypothetical robot that can unload the dishwasher would be powered by narrow AI.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

Deloitte is working with companies like IBM and Cognitive Scale to create not just a single application, but a broad “Intelligent Automation Platform.” Even when progress is made on these types of integration, the result will still fall short of the all-knowing “artificial general intelligence” or “strong AI” that we discussed in Chapter 2. That may well be coming, but not anytime soon. Still, these short-term combinations of tools and methods may well make automation solutions much more useful. Broadening Application of the Same Tools —In addition to employing broader types of technology, organizations that are stepping forward are using their existing technology to address different industries and business functions.

Falter: Has the Human Game Begun to Play Itself Out?
by Bill McKibben
Published 15 Apr 2019

You’ll be able to drink IPAs for hours at your local tavern, and the self-driving car will take you home—and it may well be able to recommend precisely which IPAs you’d like best. But it won’t be able to carry on an interesting discussion about whether this is the best course for your life. That next step up is artificial general intelligence, sometimes referred to as “strong AI.” That’s a computer “as smart as a human across the board, a machine that can perform any intellectual task a human being can,” in Urban’s description. This kind of intelligence would require “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”9 Five years ago a pair of researchers asked hundreds of AI experts at a series of conferences when we’d reach this milestone—more precisely, they asked them to name a “median optimistic year,” when there was a 10 percent chance we’d get there; a “median realistic year,” a 50 percent chance; and a “pessimistic” year, in which there was a 90 percent chance.

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

Automated UAVs, autonomous emergency vehicles and robots, and sensor nets giving feedback loops to the right algorithms or AIs to dispatch those resources. Artificial intelligence will not only be an underpinning of smart cities, it will also be necessary simply to process all of the sensor data coming into smart city operations centres. Humans would simply slow the process down too much. Strong AI involvement in running smart cities is closer to two decades away. Within 20 to 30 years, we will see smart governance at the hands of AI—coded laws and enforcement, resource allocation, budgeting and optimal decision-making made by algorithms that run independent of human committees and voting. The manual counting of votes for elections will be a thing of the past, as citizens will BYOD (bring your own device) to the challenge of casting their votes.

pages: 379 words: 108,129

An Optimist's Tour of the Future
by Mark Stevenson
Published 4 Dec 2010

This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between. But the thing about robots is you can replicate them. Once we’ve got one intelligent robot brain, we can copy it to another machine, and another, and another. The robots have finally arrived, bringing an explosion of ‘strong AI’. Of course, it may not just be us (the humans) doing the copying, it might be the robots themselves. And because technology improves at a startling rate (way faster than biological evolution), one has to consider the possibility that things won’t stop there. Once we achieve a robot with human-level (if not human-like) intelligence, it won’t be very long until robot cognition outstrips the human mind – marrying the human-like intelligence with instant recall, flawless memory and the number-crunching ability of Deep Blue.

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

MIT physicist Max Tegmark, one of the co-authors of the Hawking article, told The Atlantic’s James Hamblin that “this is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”11 Others view a thinking machine as fundamentally possible, but much further out. Gary Marcus, for example, thinks strong AI will take at least twice as long as Kurzweil predicts, but that “it’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.”12 In recent years, speculation about human-level AI has shifted increasingly away from a top-down programming approach and, instead, toward an emphasis on reverse engineering and then simulating the human brain.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

Charles Darwin, On the Origin of Species (London: Penguin Books, 2009), p. 427.

20. See Isaiah Berlin, The Hedgehog and the Fox (New York: Simon & Schuster, 1953).

21. The distinction between AGI and ANI is often conflated with another one made by John Searle, who speaks of the difference between “strong” AI and “weak” AI. But the two are not the same thing at all. AGI and ANI reflect the breadth of a machine’s capability, while Searle’s terms describe whether a machine thinks like a human being (“strong”) or unlike one (“weak”).

22. Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence,” in William Ramsey and Keith Frankish, eds., Cambridge Handbook of Artificial Intelligence (Cambridge: Cambridge University Press, 2011).

23.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

AI systems are designed to operate with varying degrees of autonomy’ (OECD 2019). The European Commission has defined AI rather vaguely as ‘a collection of technologies that combine data, algorithms and computing power’ (European Commission 2020). John Searle’s (1980) differentiation between weak and strong AI leads to the distinction between consciously thinking machines and the mere simulation of thinking. The idea of ‘general artificial intelligence’ is an outgrowth of the idea of an overarching, independently thinking machine, a vision of a machine that possesses abilities beyond human skills and intelligence, or even consciousness—a notion that can best be explored culturally or classified historically.

pages: 463 words: 118,936

Darwin Among the Machines
by George Dyson
Published 28 Mar 2012

The argument over where to draw this distinction has been going on for a long time. Can machines calculate? Can machines think? Can machines become conscious? Can machines have souls? Although Leibniz believed that the process of thought could be arithmetized and that mechanism could perform the requisite arithmetic, he disagreed with the “strong AI” of Hobbes that reduced everything to mechanism, even our own consciousness or the existence (and corporeal mortality) of a soul. “Whatever is performed in the body of man and of every animal is no less mechanical than what is performed in a watch,” wrote Leibniz to Samuel Clarke.51 But, in the Monadology, Leibniz argued that “perception, and that which depends upon it, are inexplicable by mechanical causes,” and he presented a thought experiment to support his views: “Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

Until machines are capable of defining their own goals, the choices of the problems we want to solve with these technologies—what goals are worthy to pursue—are still ours. There is an outer frontier of AI that occupies the fantasies of some technologists: the idea of artificial general intelligence (AGI). Whereas today’s AI progress is marked by a computer’s ability to complete specific narrow tasks (“weak AI”), the aspiration to create AGI (“strong AI”) involves developing machines that can set their own goals in addition to accomplishing the goals set by humans. Although few believe that AGI is on the near horizon, some enthusiasts claim that the exponential growth in computing power and the astonishing advances in AI in just the past decade make AGI a possibility in our lifetimes.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

So, where does AI go next as the wave fully breaks? Today we have narrow or weak AI: limited and specific versions. GPT-4 can spit out virtuoso texts, but it can’t turn around tomorrow and drive a car, as other AI programs do. Existing AI systems still operate in relatively narrow lanes. What is yet to come is a truly general or strong AI capable of human-level performance across a wide range of complex tasks—able to seamlessly shift among them. But this is exactly what the scaling hypothesis predicts is coming and what we see the first signs of in today’s systems. AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

Discussions about potential large-scale impacts from future AI systems suffer from a proliferation of terminology: apart from AGI, people have talked about transformative AI (Cotra 2020; Karnofsky 2016), smarter-than-human AI (Machine Intelligence Research Institute, n.d.), superintelligence (Bostrom 1998, 2014a), ultraintelligent machines (Good 1966), advanced AI (Center for the Governance of AI, n.d.), high-level machine intelligence (Grace et al. 2018; and, using a slightly different definition, V. C. Müller and Bostrom 2016), comprehensive AI services (Drexler 2019), strong AI (J. R. Searle 1980, but since used in a variety of different ways), and human-level AI (AI Impacts, n.d.-c). I’m using the term “AGI” simply because it is probably the most widely used one, and its definition is easy to understand. However, in this chapter, I am interested in any way in which AI could enable permanent value lock-in, and by using “AGI” as opposed to any of the other terms mentioned previously, I do not intend to exclude any possibility for how this could happen.

pages: 492 words: 141,544

Red Moon
by Kim Stanley Robinson
Published 22 Oct 2018

It has a very low bit rate because it’s so hard to detect neutrinos, but his people have a way to send a real flood of them, and the ice flooring this crater is just enough to catch a signal strength that is about the equal of the first telegraphs. So he keeps his messages brief.” “Seems like a lot of trouble for a telegraph,” John Semple observed. Anna nodded. “Just a toy, at least for now. The real power here is the quantum computer, down there in that building you see in the ice. That thing is a monster.” “Strong AI?” Ta Shu asked. “I don’t know what you mean by that, but definitely a lot of AI. Not strong in the philosophical sense, but, you know—fast. Yottaflops fast.” “Yottaflops,” Ta Shu repeated. “I like that word. That means very fast?” “Very fast. Not so much strong, in my opinion, because of how lame we are at programming.

pages: 561 words: 167,631

2312
by Kim Stanley Robinson
Published 22 May 2012

In these years all the bad trends converged in “perfect storm” fashion, leading to a rise in average global temperature of five K, and sea level rise of five meters—and as a result, in the 2120s, food shortages, mass riots, catastrophic death on all continents, and an immense spike in the extinction rate of other species. Early lunar bases, scientific stations on Mars. The Turnaround: 2130 to 2160. Verteswandel (Shortback’s famous “mutation of values”), followed by revolutions; strong AI; self-replicating factories; terraforming of Mars begun; fusion power; strong synthetic biology; climate modification efforts, including the disastrous Little Ice Age of 2142–54; space elevators on Earth and Mars; fast space propulsion; the space diaspora begun; the Mondragon Accord signed. And thus: The Accelerando: 2160 to 2220.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

It’s worth noting that handing an object to another person is itself a surprisingly subtle and complex action that includes making inferences about how the other person will want to take hold of the object, how to signal to them that you are intending for them to take it, etc. See, e.g., Strabala et al., “Toward Seamless Human-Robot Handovers.”

40. Hadfield-Menell et al., “Cooperative Inverse Reinforcement Learning.” (“CIRL” is pronounced with a soft c, homophonous with the last name of strong AI skeptic John Searle (no relation). I have agitated within the community that a hard c “curl” pronunciation makes more sense, given that “cooperative” uses a hard c, but it appears the die is cast.)

41. Dylan Hadfield-Menell, personal interview, March 15, 2018.

42. Russell, Human Compatible.

43.

Global Catastrophic Risks
by Nick Bostrom and Milan M. Cirkovic
Published 2 Jul 2008

The catastrophic scenario that stems from underestimating the power of intelligence is that someone builds a button, and does not care enough what the button does, because they do not think the button is powerful enough to hurt them. Or the wider field of AI researchers will not pay enough attention to risks of strong AI, and therefore good tools and firm foundations for friendliness will not be available when it becomes possible to build strong intelligences. And one should not fail to mention - for it also impacts upon existential risk - that AI could be the powerful solution to other existential risks, and by mistake we will ignore our best hope of survival.

pages: 1,152 words: 266,246

Why the West Rules--For Now: The Patterns of History, and What They Reveal About the Future
by Ian Morris
Published 11 Oct 2010

Becoming Human: Innovation in Prehistoric Material and Spiritual Culture. Cambridge, UK: Cambridge University Press, 2009.

Reynolds, David. One World Divisible: A Global History Since 1945. New York: Norton, 2000.

Richards, Jay, et al. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. Seattle: Discovery Institute, 2002.

Richards, John. Unending Frontier: An Environmental History of the Early Modern World. Berkeley: University of California Press, 2003.

Richardson, Lewis Fry. Statistics of Deadly Quarrels. Pacific Grove, CA: Boxwood Press, 1960.

Richerson, Peter, Robert Boyd, and Robert Bettinger.

pages: 1,737 words: 491,616

Rationality: From AI to Zombies
by Eliezer Yudkowsky
Published 11 Mar 2015

Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies. So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions