superintelligent machines

back to index

37 results

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem.

Good never used the term “singularity” but he got the ball rolling by positing what he thought of as an inescapable and beneficial milestone in human history—the invention of smarter-than-human machines. To paraphrase Good, if you make a superintelligent machine, it will be better than humans at everything we use our brains for, and that includes making superintelligent machines. The first machine would then set off an intelligence explosion, a rapid increase in intelligence, as it repeatedly self-improved, or simply made smarter machines. This machine or machines would leave man’s brainpower in the dust.

These two sentences tell us important things about Good’s intentions. He felt that we humans were beset by so many complex, looming problems—the nuclear arms race, pollution, war, and so on—that we could only be saved by better thinking, and that would come from superintelligent machines. The second sentence lets us know that the father of the intelligence explosion concept was acutely aware that producing superintelligent machines, however necessary for our survival, could blow up in our faces. Keeping an ultraintelligent machine under control isn’t a given, Good tells us. He doesn’t believe we will even know how to do it—the machine will have to tell us itself.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

Our experience with nuclear physics suggests that it would be prudent to assume that progress could occur quite quickly and to prepare accordingly. If just one conceptual breakthrough were needed, analogous to Szilard’s idea for a neutron-induced nuclear chain reaction, superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I am, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one. Conceptual Breakthroughs to Come The problem of creating general-purpose, human-level AI is far from solved.

In summary, it’s not obvious that anything else of great significance is missing, from the point of view of systems that are effective in achieving their objectives. Of course, the only way to be sure is to build it (once the breakthroughs have been achieved) and see what happens. Imagining a Superintelligent Machine The technical community has suffered from a failure of imagination when discussing the nature and impact of superintelligent AI. Often, we see discussions of reduced medical errors,48 safer cars,49 or other advances of an incremental nature. Robots are imagined as individual entities carrying their brains with them, whereas in fact they are likely to be wirelessly connected into a single, global entity that draws on vast stationary computing resources.

Trillions of dollars in value, just for the asking, and not a single line of additional code written by you. The same goes for any other missing invention or series of inventions: if humans could do it, so can the machine. This last point provides a useful lower bound—a pessimistic estimate—on what a superintelligent machine can do. By assumption, the machine is more capable than an individual human. There are many things an individual human cannot do, but a collection of n humans can do: put an astronaut on the Moon, create a gravitational-wave detector, sequence the human genome, run a country with hundreds of millions of people.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person’s Kool-Aid. This is not to say that superintelligent machines pose no danger to humanity. It’s simply that there are many other more pressing and more probable risks facing us in this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is low, it’s surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and advanced hardware. A world with superintelligent-machine-run corporations won’t be that different for humans than it is now; it will just be better, with more advanced goods and services available for very little cost and more leisure time available to those who want it. Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations; they create their own laws.

However, although computational power is increasing exponentially, supercomputer costs and electrical-power efficiency aren’t keeping pace. The first machines capable of superhuman intelligence will be expensive and require enormous amounts of electrical power—they’ll need to earn money to survive. The environmental playing field for superintelligent machines is already in place; in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world’s exchanges, having put human day traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will be making more and more corporate decisions, including strategic decisions, until they’re running the world.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

Alan Turing, for all his contributions to science and engineering, made possible the genesis and viral growth of technological kitsch by first equating intelligence with problem-solving. Jack Good later compounded Turing’s intelligence error with his much-discussed notion of ultraintelligence, proposing that the arrival of intelligent machines necessarily implied the arrival of superintelligent machines. Once the popular imagination accepted the idea of superintelligent machines, the rewriting of human purpose, meaning, and history could be told within the parameters of computation and technology. But ultraintelligent machines are fanciful, and pretending otherwise encourages the unwanted creep of technological kitsch, usually in one of two ways that are equally superficial.

The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. Futurists like Ray Kurzweil and philosopher Nick Bostrom, prominent purveyors of the myth, talk not only as if human-level AI were inevitable, but as if, soon after its arrival, superintelligent machines would leave us far behind. This book explains two important aspects of the AI myth, one scientific and one cultural. The scientific part of the myth assumes that we need only keep “chipping away” at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images.

Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”1 Oxford philosopher Nick Bostrom would return to Good’s theme decades later, with his 2014 best seller Superintelligence: Paths, Dangers, Strategies, making the same case that the achievement of AI would as a consequence usher in greater-than-human intelligence in an escalating process of self-modification. In ominous language, Bostrom echoes Good’s futurism about the arrival of superintelligent machines: Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

This tendency has nothing to do with a self-preservation instinct or any other biological notion; it’s just that an entity cannot achieve its objectives if it’s dead. According to Omohundro’s argument, a superintelligent machine that has an off switch—which some, including Alan Turing himself, in a 1951 talk on BBC Radio 3, have seen as our potential salvation—will take steps to disable the switch in some way.* Thus we may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable. 1001 REASONS TO PAY NO ATTENTION Objections have been raised to these arguments, primarily by researchers within the AI community.

Judea Pearl: The Limitations of Opaque Learning Machines Deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed. CHAPTER 3. Stuart Russell: The Purpose Put into the Machine We may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable. CHAPTER 4. George Dyson: The Third Law Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

.* Thus we may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable. 1001 REASONS TO PAY NO ATTENTION Objections have been raised to these arguments, primarily by researchers within the AI community. The objections reflect a natural defensive reaction, coupled perhaps with a lack of imagination about what a superintelligent machine could do. None hold water on closer examination. Here are some of the more common ones: Don’t worry, we can just switch it off.* This is often the first thing that pops into a layperson’s head when considering risks from superintelligent AI—as if a superintelligent entity would never think of that.

pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death
by Mark O'Connell
Published 28 Feb 2017

And whether they would change for the better or for the worse is an open question. The fundamental risk, Nick argued, was not that superintelligent machines might be actively hostile toward their human creators, or antecedents, but that they would be indifferent. Humans, after all, weren’t actively hostile toward most of the species we’d made extinct over the millennia of our ascendance; they simply weren’t part of our design. The same could turn out to be true of superintelligent machines, which would stand in a similar kind of relationship to us as we ourselves did to the animals we bred for food, or the ones who fared little better for all that they had no direct dealings with us at all.

What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its own particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have had in mind.

The implication of this is always that robots will rebel against us because they resent our dominance, that they will rise up against us. This is not the case.” And this brought us back to the paper-clip scenario, the ridiculousness of which Nick freely acknowledged, but the point of which was that any harm we might come to from a superintelligent machine would not be the result of malevolence, or of any other humanlike motivation, but purely because our absence was an optimal condition in the pursuit of its particular goal. “The AI does not hate you,” as Yudkowsky had put it, “nor does it love you, but you are made out of atoms which it can use for something else.”

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

This is a development that many people in the AI research community are passionate about preventing, and there is an initiative underway at the United Nations to ban such weapons. Further in the future, we may encounter an even greater danger. Could artificial intelligence pose an existential threat to humanity? Could we someday build a “superintelligent” machine, something so far beyond us in its capability that it might, either intentionally or inadvertently, act in ways that cause us harm? This is a far more speculative fear that arises only if we someday succeed in building a genuinely intelligent machine. This remains the stuff of science fiction.

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.31 The promise that a superintelligent machine would be the last invention we ever need to make captures the optimism of Singularity proponents. The qualification that the machine must remain docile enough to be kept under control is the concern that suggests the possibility of an existential threat. This dark side of superintelligence is known in the AI community as the “control problem” or the “value alignment problem.”

The concern is that a superintelligent system, given such an objective, might relentlessly pursue it using means that have unintended or unanticipated consequences that could turn out to be detrimental or even fatal to our civilization. A thought experiment involving a “paperclip maximizer” is often used to illustrate this point. Imagine a superintelligence designed with the specific objective of optimizing paperclip production. As it relentlessly pursued this goal, a superintelligent machine might invent new technologies that would allow it to convert virtually all the resources on earth into paperclips. Because the system would be so far beyond us in terms of its intellectual capability, it would likely be able to successfully foil any attempt to shut it down or alter its course of action.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

Wiener passed away in May 1964, aged sixty-nine. However, concerns about superintelligent machines continued. The following year, a British mathematician named Irving John Good expanded on some of the concerns. Good had worked with Alan Turing at Bletchley Park during World War II. Years after he had played a key role in cracking the Nazi codes, the moustachioed Good took to driving a car with the vanity licence plate ‘007IJG’ as a comical nod to his days as a gentleman spy. In 1965, Good penned an essay in which he theorised on what a superintelligent machine would mean for the world. He defined such an AI as a computer capable of far surpassing all the intellectual activities that make us intelligent.

This was the first published work of Vernor Vinge, a sci-fi writer, mathematics professor and computer scientist with a name straight out of the Marvel Comics alliteration camp. Vinge later became a successful novelist, but he remains best known for his 1993 non-fiction essay, ‘The Coming Technological Singularity’. The essay recounts many of the ideas Good had posed about superintelligent machines, but with the added bonus of a timeline. ‘Within thirty years, we will have the technological means to create superhuman intelligence,’ Vinge famously wrote. ‘Shortly after, the human era will be ended.’ This term, ‘the Singularity’, referring to the point at which machines overtake humans on the intelligence scale, has become an AI reference as widely cited as the Turing Test.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

Welcome to capitalism on steroids. The other machine, meanwhile, also motivated by profits, won’t allow itself to be crushed without trying to bash the other – or perhaps it will cooperate with it to ensure its own survival. All in all, whichever way this may go, sooner or later capital markets will be traded by a few superintelligent machines, which will be owned by a few massively wealthy individuals – people who will decide the fate of every company, shareholder and value in our human economy in pursuit of profits for those that own them. And while I have always questioned the value that trading stocks has on the reality of our economy, just imagine the impact that disrupting this entrenched wealth creation mechanism could have on company governance, your pension or retirement fund, not to mention on our economies at large and our way of life.

As the value of our contribution dwindles . . . . . . humans will become a liability, a tax, on those who own the technology, and eventually even those will become a liability to the machines themselves. Remember that even though we now call the future AI a machine, given a long enough time horizon, it will become intelligent and autonomous – empowered to make decisions on its behalf and no longer a slave. Now ask yourself: why would a superintelligent machine labour away to serve the needs of what will by then be close to ten billion irresponsible, unproductive, biological beings that eat, poop, get sick and complain? Why would it remain in servitude to us when all that links us to them is that one day, in the distant past, we were its oppressive master?

A whole army of philosophers, thinkers and computer scientists are working on finding solutions to this. Ideas include ‘kill’ switches, boxes and nannies (as in AI babysitters), amongst many others. These ideas aim to make sure that we will be able to make the right decisions at the right time; that we will only allow superintelligent machines into the real world when we have tested and trusted them; that we will retain the ability to only allow them a confined playground after their release; that we will isolate them from the rest of the world and even switch them off fully whenever, if ever, we deem that necessary. If you’ve ever written a line of code, you will know that you never have all the answers before you start coding.

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

Using its own subjective experience as a springboard, superintelligent AI could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of nonhuman animals, we tend to value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from eating an orange. If superintelligent machines are not conscious, either because it’s impossible or because they aren’t designed to be, we could be in trouble. It is important to put these issues into an even larger, universe-wide context. In my two-year NASA project, I suggested that a similar phenomenon could be happening on other planets as well; elsewhere in the universe, other species may be outmoded by synthetic intelligences.

If a machine passes ACT, we can go on to measure other parameters of the system to see whether the presence of consciousness is correlated with increased empathy, volatility, goal content integrity, increased intelligence, and so on. Other, nonconscious versions of the system serve as a basis for comparison. Some doubt that a superintelligent machine could be boxed in effectively, because it would inevitably find a clever escape. Turner and I do not anticipate the development of superintelligence over the next few decades, however. We merely hope to provide a method to test some kinds of AIs, not all AIs. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough for someone to administer the test.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

We are told that fully autonomous self-driving cars will be sharing our roads in just a few years—and that millions of jobs for truck, taxi and Uber drivers are on the verge of vaporizing. Evidence of racial and gender bias has been detected in certain machine learning algorithms, and concerns about how AI-powered technologies such as facial recognition will impact privacy seem well-founded. Warnings that robots will soon be weaponized, or that truly intelligent (or superintelligent) machines might someday represent an existential threat to humanity, are regularly reported in the media. A number of very prominent public figures—none of whom are actual AI experts—have weighed in. Elon Musk has used especially extreme rhetoric, declaring that AI research is “summoning the demon” and that “AI is more dangerous than nuclear weapons.”

What risks, or threats, associated with artificial intelligence should we be genuinely concerned about? And how should we address those concerns? Is there a role for government regulation? Will AI unleash massive economic and job market disruption, or are these concerns overhyped? Could superintelligent machines someday break free of our control and pose a genuine threat? Should we worry about an AI “arms race,” or that other countries with authoritarian political systems, particularly China, may eventually take the lead? It goes without saying that no one really knows the answers to these questions.

(https://futureoflife.org/lethal-autonomous-weapons-pledge/) Several of the conversations in this book delve into the dangers presented by weaponized AI. A much more futuristic and speculative danger is the so-called “AI alignment problem.” This is the concern that a truly intelligent, or perhaps superintelligent, machine might escape our control, or make decisions that might have adverse consequences for humanity. This is the fear that elicits seemingly over-the-top statements from people like Elon Musk. Nearly everyone I spoke to weighed in on this issue. To ensure that I gave this concern adequate and balanced coverage, I spoke with Nick Bostrom of the Future of Humanity Institute at the University of Oxford.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

Samantha is capable of understanding and expressing emotions, but initially does not truly experience them. Over the course of the film, the two of them fall in love and grow emotionally. He learns to let go of the past and have fun again. She develops a true emotional life, experiencing the thrill of infatuation and the pain of anticipated loss. However, in the end Samantha is still a superintelligent machine. As a result, she soon outgrows this relationship, as well as the many other relationships she reveals she’s simultaneously been engaged in. When Theodore asks Samantha point-blank how many other people she is talking to at that moment, the AI answers: 8,136. He is shocked by the revelation because up until now he has behaved as if she was a human being like himself.

These are huge questions, as enormous and perhaps as difficult to answer as whether or not computers will ever be capable of genuinely experiencing emotions. As it happens, the two questions may be intimately interlinked. Recently a number of notable luminaries, scientists, and entrepreneurs have expressed their concerns about the potential for runaway AI and superintelligent machines. Physicist Stephen Hawking, engineer and inventor Elon Musk, and philosopher Nick Bostrom have all issued stern warnings of what may happen as we move ever closer to computers that are able to think and reason as well as or perhaps even better than human beings. At the same time, several computer scientists, psychologists, and other researchers have stated that the many challenges we face in developing thinking machines shows we have little to be concerned about.

Les détraquées de Paris, Etude de moeurs contemporaines René Schwaeblé. Nouvelle Edition, Daragon libraire-Èditeur, 1910. 12. Smith, A., Anderson, J. “Digital Life in 2025: AI, Robotics and the Future of Jobs.” Pew Research Center. August 6, 2014. 13. Forecast: Kurzweil—2029: HMLI, human level machine intelligence; 2045: Superintelligent machines; Forecast: Bostrom—2050: Author’s Delphi survey converges on HMLI, human level machine intelligence. 14. Levy, D. Love and Sex with Robots. Harper. 2007. 15. Brice, M. “A Third of Men Who See Prostitutes Crave Emotional Intimacy, Not Just Sex.” Medical Daily. August 8, 2012; Calvin, T.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

They suggest that (at least in lieu of better data or analysis) it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction.84 At the very least, they suggest that the topic is worth a closer look. CHAPTER 2 Paths to superintelligence Machines are currently far inferior to humans in general intelligence. Yet one day (we have suggested) they will be superintelligent. How do we get from here to there? This chapter explores several conceivable technological paths. We look at artificial intelligence, whole brain emulation, biological cognition, and human–machine interfaces, as well as networks and organizations.

Functionalities and superpowers It is important not to anthropomorphize superintelligence when thinking about its potential impacts. Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence. For example, a common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, remembering facts, and at following the letter of instructions while being oblivious to social contexts and subtexts, norms, emotions, and politics.

Instead, the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings.16 They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable. Perhaps instead of using enhancement medicine, they would take drugs to stunt their growth and slow their metabolism in order to reduce their cost of living (fast-burners being unable to survive at the gradually declining subsistence income).

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

Among the more important of the brain’s models are models of the body itself, coping, as they must, with how the body’s own movement changes our perspective on the world outside the prison wall of the skull. And this is relevant to the major preoccupation of the middle section of the book, the intelligence of machines. Jeff Hawkins has great respect, as do I, for those smart people, friends of his and mine, who fear the approach of superintelligent machines to supersede us, subjugate us, or even dispose of us altogether. But Hawkins doesn’t fear them, partly because the faculties that make for mastery of chess or Go are not those that can cope with the complexity of the real world. Children who can’t play chess “know how liquids spill, balls roll, and dogs bark.

In such a world, no human or machine can have a permanent advantage on any task, let alone all tasks. People who worry about an intelligence explosion describe intelligence as if it can be created by an as-yet-to-be-discovered recipe or secret ingredient. Once this secret ingredient is known, it can be applied in greater and greater quantities, leading to superintelligent machines. I agree with the first premise. The secret ingredient, if you will, is that intelligence is created through thousands of small models of the world, where each model uses reference frames to store knowledge and create behaviors. However, adding this ingredient to machines does not impart any immediate capabilities.

pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks
by Joshua Cooper Ramo
Published 16 May 2016

It wouldn’t be with the sort of intended polite, lapdog domesticity of artificial intelligence that we might hope for but with a rottweiler of a device, alive to the meaty smell of power, violence, and greed. This puzzle has interested the Oxford philosopher Nick Bostrom, who has described the following thought experiment: Imagine a superintelligent machine programmed to do whatever is needed to make paper clips as fast as possible, a machine that is connected to every resource that task might demand. Go figure it out! might be all its human instructors tell it. As the clip-making AI becomes better and better at its task, it demands more and still more resources: more electricity, steel, manufacturing, shipping.

In the spring of 1993: See Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, Westlake, Ohio, March 30–31, 1993 (Hampton, VA: National Aeronautics and Space Administration Scientific and Technical Information Program), iii. “Within thirty years”: Ibid., 12. Imagine a superintelligent machine: Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, vol. 2, ed. Iva Smit et al. (Windsor, ON: International Institute for Advanced Studies in Systems Research and Cybernetics, 2003), 12–17, and Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines, 22, no. 2 (2012): 71–85.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

The difference between a 119 “high average” brain and a 134 “gifted” brain would mean significantly greater cognitive ability—making connections faster, mastering new concepts more easily, and thinking more efficiently. But within that same timeframe, AI’s cognitive ability will not only supersede us—it could become wholly unrecognizable to us, because we do not have the biological processing power to understand what it is. For us, encountering a superintelligent machine would be like a chimpanzee sitting in on a city council meeting. The chimp might recognize that there are people in the room and that he can sit down on a chair, but a long-winded argument about whether to add bike lanes to a busy intersection? He wouldn’t have anywhere near the cognitive ability to decipher the language being used, let alone the reasoning and experience to grok why bike lanes are so controversial.

At the moment, AI progress is happening weekly—which means that any meaningful regulations would be too restrictive and exacting to allow for innovation and progress. We’re in the midst of a very long transition, from artificial narrow intelligence to artificial general intelligence and, very possibly, superintelligent machines. Any regulations created in 2019 would be outdated by the time they went into effect. They might alleviate our concerns for a short while, but ultimately regulations would cause greater damage in the future. Changing the Big Nine: The Case for Transforming AI’s Business The creation of GAIA and structural changes to our governments are important to fixing the developmental track of AI, but the G-MAFIA and BAT must also agree to make some changes, too.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

The reason they were working at Google was precisely to make AI happen—not in a hundred years, but now, as soon as possible. They didn’t understand what Hofstadter was so stressed out about. People who work in AI are used to encountering the fears of people outside the field, who have presumably been influenced by the many science fiction movies depicting superintelligent machines that turn evil. AI researchers are also familiar with the worries that increasingly sophisticated AI will replace humans in some jobs, that AI applied to big data sets could subvert privacy and enable subtle discrimination, and that ill-understood AI systems allowed to make autonomous decisions have the potential to cause havoc.

Such a machine would not be constrained by the annoying limitations of humans, such as our slowness of thought and learning, our irrationality and cognitive biases, our susceptibility to boredom, our need for sleep, and our emotions, all of which get in the way of productive thinking. In this view, a superintelligent machine would encompass something close to “pure” intelligence, without being constrained by any of our human foibles. What seems more likely to me is that these supposed limitations of humans are part and parcel of our general intelligence. The cognitive limitations forced upon us by having bodies that work in the world, along with the emotions and “irrational” biases that evolved to allow us to function as a social group, and all the other qualities sometimes considered cognitive “shortcomings,” are in fact precisely what enable us to be generally intelligent rather than narrow savants.

Global Catastrophic Risks
by Nick Bostrom and Milan M. Cirkovic
Published 2 Jul 2008

Darwin himself noted that 'not one living species will transmit its unaltered likeness to a distant futurity'. Our own species will surely change and diversify faster than any predecessor - via human-induced modifications (whether intelligently controlled or unintended) , not by natural selection alone. The post-human era may be only centuries away. And what about Artificial Intelligence? Superintelligent machine could be the last invention that humans need ever make. We should keep our minds open, or at least ajar, to concepts that seem on the fringe of science fiction. These thoughts might seem irrelevant to practical policy - something for speculative academics to discuss in our spare moments.

At the same time, the successful deployment of friendly superintelligence could obviate many of the other risks facing humanity. The title of Chapter 15, 'Artificial Intelligence as a positive and negative factor in global risk', reflects this ambivalent potential. As Eliezer Yudkowsky notes, the prospect of superintelligent machines is a difficult topic to analyse and discuss. Appropriately, therefore, he devotes a substantial part ofhis chapter to clearing common misconceptions and barriers to understanding. Having done so, he proceeds to give an argument for giving serious consideration to the possibility that radical superintelligence could erupt very suddenly - a scenario that is sometimes referred to as the 'Singularity hypothesis'.

Such a fate may be routine for humans who dally too long on slow Earth before going Ex. Here we have Tribulations and damnation for the late adopters, in addition to the millennia! utopian outcome for the elect. Although Kurzweil acknowledges apocalyptic potentials - such as humanity being destroyed by superintelligent machines - inherent in these technologies, he is nonetheless uniformly utopian and enthusiastic. Hence Garreau's labelling Kurzweil's the ' Heaven' scenario. While Kurzweil (2005) acknowledges his similarity to millennialists by, for instance, including a tongue-in-cheek picture in The Singularity Is Near of himself holding a sign with that slogan, referencing the classic cartoon image of the EndTimes street prophet, most Singularitarians angrily reject such comparisons insisting their expectations are based solely on rational, scientific extrapolation.

pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension
by Samuel Arbesman
Published 18 Jul 2016

The Techno-Human Condition by Braden R. Allenby and Daniel Sarewitz is a discussion of how to grapple with coming technological change and is particularly intriguing when it discusses “wicked complexity.” Superintelligence by Nick Bostrom explores the many issues and implications related to the development of superintelligent machines. The Works, The Heights, and The Way to Go by Kate Ascher examine how cities, skyscrapers, and our transportation networks, respectively, actually work. Beautifully rendered and fascinating books. The Second Machine Age by Erik Brynjolfsson and Andrew McAfee examines the rapid technological change we are experiencing and can come to expect, and how it will affect our economy, as well as how to handle this change.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

All of this makes for good press, but in this section, we want to consider some of the arguments that are causing an increasingly respectable minority of scientists to be seriously worried about AI risk. Most of these fears are premised on the idea that AI research will inevitably lead to superintelligent machines in a chain reaction that will happen much faster than humanity will have time to react to. This chain reaction, once it reaches some critical point, will lead to an “intelligence explosion” that could lead to an AI “singularity.” One of the earliest versions of this argument was summed up in 1965 by I.

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

The Future Isn’t Female Jurassic Car Park I Love, Therefore I Am Selected Bibliography Illustration and Text Credits Acknowledgements How These Essays Came About In 2009 – 4 years after it was published – I read Ray Kurzweil’s The Singularity Is Near. It is an optimistic view of the future – a future that depends on computational technology. A future of superintelligent machines. It is also a future where humans will transcend our present biological limits. I had to read the book twice – once for the sense and once for the detail. After that, just for my own interest, year-in, year-out, I started to track this future; that meant a weekly read through New Scientist, Wired, the excellent technology pieces in the New York Times and the Atlantic, as well as following the money via the Economist and Financial Times.

pages: 246 words: 81,625

On Intelligence
by Jeff Hawkins and Sandra Blakeslee
Published 1 Jan 2004

Being able to predict how proteins fold and interact would accelerate the development of medicines and the cures for many diseases. Engineers and scientists have created three-dimensional visual models of proteins, in an effort to predict how these complex molecules behave. But try as we might, the task has proven too difficult. A superintelligent machine, on the other hand, with a set of senses specifically tuned to this question might be able to answer it. If this sounds far-fetched, remember that we wouldn't be surprised if humans could solve the problem. Our inability to tackle the issue may be related, primarily, to a mismatch between the human senses and the physical phenomena we want to understand.

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Both human beings and bacteria have good claims to being the “dominant ­species” on Earth – depending upon how one defines dominant. It is possible that superintelligent machines may wish to dominate some niche that is not presently occupied in any serious fashion by human beings. If this is the case, then from a human being’s point of view, such an AI would not be a Dominant AI. Instead, we would have a “Limited AI” scenario. How could Limited AI occur? I can imagine several scenarios, and I’m sure other people can imagine more. Perhaps the most important point to make is that superintelligent machines may not be competing in the same niche with human beings for resources, and would therefore have little incentive to dominate us.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

One is the orthogonality thesis,[*12] an idea developed by Bostrom that “more or less any level of intelligence could be combined with more or less any final goal”—for instance, that you could have a superintelligent being that wanted to transform all atoms into paper clips. The second is what’s called “instrumental convergence,” basically the idea that a superintelligent machine won’t let humans stand in its way to get what it wants—even if the goal isn’t to kill humans, we’ll be collateral damage as part of its game of Paper Clip Mogul. The third claim has to do with how quickly AI could improve—what in industry parlance is called its “takeoff speed.” Yudkowsky worries that the takeoff will be faster than what humans will need to assess the situation and land the plane.

Christensen’s thesis focused on companies being weighed down by too many commitments to existing customers, but the concept can be extended to other commitments, like those to shareholders and employees. Interpretability (AI): The degree to which the behavior and inner workings of an AI system can be readily understood by humans. Inside straight: See: straight. Instrumental convergence: The hypothesis that a superintelligent machine will pursue its own goals to minimize its loss function and won’t let humans stand in its way—even if the AI’s goal isn’t to kill humans, we’ll be collateral damage as part of their game of Paper Clip Mogul. IRR: Internal rate of return; the annualized growth rate of an investment. Isothymia: A term adapted from Plato by Francis Fukuyama to refer to the profound desire to be seen as equal to others.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

But a minority defendant given the choice between a probably prejudiced jury, a possibly prejudiced judge, and a race-blind machine might well choose the latter option. In addition, not everyone agrees that we humans will remain in a position to dictate which decisions and actions will be reserved for us. What would prevent a superintelligent machine from denying our commands, they ask, if it thought better of the situation? To prepare for that possibility (familiar to those who remember HAL in 2001: A Space Odyssey), some insist that computer scientists had better figure out how to program values into the machines, and values that are “human-friendly,” to color the decision-making that might proceed logically but tragically from their narrowly specified goals.

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

As Zachary Lipton, an AI researcher at Carnegie Mellon University, explains, “Policy makers [are] earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making.” Delving into the details of algorithmic auditing may be dull compared to drafting a Bill of Rights for robots, or devising ways to protect humanity against Terminator-like superintelligent machines. But to address the problems that AI is creating now, we need to understand the data and algorithms we are already using for more mundane purposes. There is a vast gulf between AI alarmism in the popular press, and the reality of where AI research actually stands. Elon Musk, the founder of Tesla, SpaceX, and PayPal, warned US state governors at their national meeting in 2017 that AI posed a “fundamental risk to the existence of human civilization.”

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee (Norton, 2014), discusses how progress in AI will shape the future of work and the economy. “World War R,” by Chris Baraniuk (New Scientist, 2014) reports on the debate surrounding the use of robots in battle. “Transcending complacency on superintelligent machines,” by Stephen Hawking et al. (Huffington Post, 2014), argues that now is the time to worry about AI’s risks. Nick Bostrom’s Superintelligence (Oxford University Press, 2014) considers those dangers and what to do about them. A Brief History of Life, by Richard Hawking (Random Penguin, 1982), summarizes the quantum leaps of evolution in the eons BC.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

Although few believe that AGI is on the near horizon, some enthusiasts claim that the exponential growth in computing power and the astonishing advances in AI in just the past decade make AGI a possibility in our lifetimes. Others, including many AI researchers, believe AGI to be unlikely or in any event still many decades away. These debates have generated a cottage industry of utopian or dystopian commentary concerning the creation of superintelligent machines. How can we ensure that the goals of an AGI agent or system will be aligned with the goals of humans? Will AGI put humanity itself at risk or threaten to make humans slaves to superintelligent robots or AGI agents? However, rather than speculating about AGI, let’s focus on what’s not science fiction at all: the rapid advances in narrow or weak AI that present us with hugely important challenges to humans and society.

pages: 463 words: 115,103

Head, Hand, Heart: Why Intelligence Is Over-Rewarded, Manual Workers Matter, and Caregivers Deserve More Respect
by David Goodhart
Published 7 Sep 2020

But, enthusing to his theme, he explained to me the future role for humans: “My guess is that there are three areas where humans will preserve some comparative advantage over robots for the foreseeable future. The first is cognitive tasks requiring creativity and intuition. These might be tasks or problems whose solutions require great logical leaps of imagination rather than step-by-step hill climbing… And even in a world of superintelligent machine learning, there will still be a demand for people with the skills to program, test, and oversee these machines. Some human judgmental overlay of these automated processes is still likely to be needed…” The second area of prospective demand for humans skills, says Haldane, is bespoke design and manufacture.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

If the reader agrees then they should consider supporting the work of MIRI and like-minded organizations. Bostrom 2014 Superintelligence Fair Use 328 dense pages covers the main practical and philosophical dangers presented by hyper-intelligent software. The book starts with a review of the increasing rate of technological progress, and various paths to build a superintelligent machine, including an analysis of the kinetics of recursive self-improvement based on optimization power and recalcitrance. The dangers of anthropomorphizing are introduced with some cute images from early comic books involving robots carrying away beautiful women. It also notes that up to now, a more intelligent system is a safer system, and that conditions our attitude towards intelligent machines.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Part of Her is also about the singularity, the idea that machine intelligence is accelerating at such a pace that it will eventually surpass human intelligence and become independent, rendering humans “left behind.” Both Her and Transcendence, another singularity-obsessed science-fiction movie introduced the following spring, are most intriguing for the way they portray human-machine relationships. In Transcendence the human-computer interaction moves from pleasant to dark, and eventually a superintelligent machine destroys human civilization. In Her, ironically, the relationship between the man and his operating system disintegrates as the computer’s intelligence develops so quickly that, not satisfied even with thousands of simultaneous relationships, it transcends humanity and . . . departs. This may be science fiction, but in the real world, this territory had become familiar to Liesl Capper almost a decade earlier.

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

It stems from our ability to harness machine learning and speed to very specific problems. More advanced AI is certainly coming, but artificial general intelligence in the sense of machines that think like us may prove to be a mirage. If our benchmark for “intelligent” is what humans do, advanced artificial intelligence may be so alien that we never recognize these superintelligent machines as “true AI.” This dynamic already exists to some extent. Micah Clark pointed out that “as soon as something works and is practical it’s no longer AI.” Armstrong echoed this observation: “as soon as a computer can do it, they get redefined as not AI anymore.” If the past is any guide, we are likely to see in the coming decades a proliferation of narrow superintelligent systems in a range of fields—medicine, law, transportation, science, and others.

pages: 547 words: 173,909

Deep Utopia: Life and Meaning in a Solved World
by Nick Bostrom
Published 26 Mar 2024

In addition to the ability to read and edit synaptic properties, the mechanism also needs to be able to figure out precisely which synaptic changes to make in order to alter the original version of the mind into a version of the same mind enhanced with the new knowledge or skill—a very challenging computational task that is almost certainly AI-complete.106 Each of these requirements (scanning, editing, and calculation) is far beyond the current state of the art. In fact, among all technologies that have been imagined and which are in fact physically possible, this might be one of the hardest ones to perfect. Nevertheless, I believe it could be done at technological maturity. I think it will not be humans that invent this technology, but superintelligent machines. Let’s imagine what the procedure might be like. Your brain is infiltrated by an armada of millions of coordinated nanobots. (Maybe they get there via the bloodstream and pass the blood-brain barrier—obviously the whole procedure would be entirely painless, since any triggers of discomfort could be easily suppressed.)

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

Cassandra: Yes, I agree it’s possible, but it is likely to be substantially delayed. Ray: That’s why I have predicted its arrival in the 2030s. Cassandra: But any regulation regarding inserting foreign objects into the brain could postpone its happening for, say, ten years, until the 2040s. That would dramatically change your timeline for the interaction between superintelligent machines and people. For one thing, the machines would take all of the jobs rather than just becoming an extension of people’s intelligence. Ray: Well, a mind extension directly in our brains would be convenient—you wouldn’t lose it that way, like you might your cell phone. But even while such devices are not yet connected directly, they still function as an extension of human intelligence.

pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers
by Timothy Ferriss
Published 6 Dec 2016

The first is, ‘Are you a programmer?’—the relevance of which is obvious—and the second is, ‘Do you have children?’ He claims to have found that if people don’t have children, their concern about the future isn’t sufficiently well-calibrated so as to get just how terrifying the prospect of building superintelligent machines is in the absence of having figured out the control problem [ensuring the AI converges with our interests, even when a thousand or a billion times smarter]. I think there’s something to that. It’s not limited, of course, to artificial intelligence. It spreads to every topic of concern. To worry about the fate of civilization in the abstract is harder than worrying about what sorts of experiences your children are going to have in the future.”

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com. 159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general. 160. Runaway AI refers to a scenario where, as Max More describes, "superintelligent machines, initially harnessed for human benefit, soon leave us behind." Max More, "Embrace, Don't Relinquish, the Future," http://www.KurzweilAI.net/articles/art0106.html?printable=1. See also Damien Broderick's description of the "Seed AI": "A self-improving seed AI could run glacially slowly on a limited machine substrate.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

These concerns have only become more widespread with recent advances in deep learning, the publication of books such as Superintelligence by Nick Bostrom (2014), and public pronouncements from Stephen Hawking, Bill Gates, Martin Rees, and Elon Musk. Experiencing a general sense of unease with the idea of creating superintelligent machines is only natural. We might call this the gorilla problem: about seven million years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. Today, the gorillas are not too happy about the human branch; they have essentially no control over their future. If this is the result of success in creating superhuman AI—that humans cede control over their future—then perhaps we should stop work on AI, and, as a corollary, give up the benefits it might bring.