Chinese Room


48 results

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

Still, the systems reply strikes me as wrong about one thing. It holds that the Chinese Room is a conscious system. It is implausible that a simplistic system like the Chinese Room is conscious, because conscious systems are far more complex. The human brain, for instance, consists of 100 billion neurons and more than 100 trillion neural connections or synapses (a number which is, by the way, 1,000 times the number of stars in the Milky Way Galaxy). In contrast to the immense complexity of a human brain or even the complexity of a mouse brain, the Chinese Room is a Tinkertoy case. Even if consciousness is a systemic property, not all systems have it.

It might just be that a different type of property, or properties, gives rise to consciousness in machines. As I shall explain in Chapter Four, to tell whether AI is conscious, we must look beyond the chemical properties of particular substrates and seek clues in the AI’s behavior. Another line of argument is more subtle and harder to dismiss. It stems from a famous thought experiment, called “The Chinese Room,” authored by the philosopher John Searle. Searle asks you to suppose that he is locked inside a room. Inside the room, there is an opening through which he is handed cards with strings of Chinese symbols. But Searle doesn’t speak Chinese, although before he goes inside the room, he is handed a book of rules (in English) that allows him to look up a particular string and then write down some other particular string in response.

So Searle goes in the room, and he is handed a note card with Chinese script. He consults his book, writes down Chinese symbols, and passes the card through a second hole in the wall.5

[Figure: Searle in the Chinese Room]

You may ask: What does this have to do with AI? Notice that from the vantage point of someone outside the room, Searle’s responses are indistinguishable from those of a Chinese speaker. Yet he doesn’t grasp the meaning of what he’s written. Like a computer, he’s produced answers to inputs by manipulating formal symbols.
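The excerpts above all describe the same procedure: look up an input string in a rulebook and write down the paired output string. As a rough illustration (the rulebook entries below are invented placeholders, not drawn from Searle), the whole procedure fits in a few lines of Python, and nothing in it depends on what the symbols mean:

```python
# A caricature of Searle's room: the "rulebook" maps input symbol
# strings to output symbol strings. The lookup never consults the
# meaning of any symbol.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # placeholder question -> reply
    "你叫什么名字？": "我叫王明。",    # placeholder question -> reply
}

def chinese_room(card: str) -> str:
    """Return the rulebook's response for a card, or a stock reply."""
    return RULEBOOK.get(card, "对不起，我不明白。")

# The room answers fluently while "understanding" nothing.
print(chinese_room("你好吗？"))
```

From outside, only the input card and the output card are visible, which is exactly the point Searle exploits.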

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

Now the answer is no longer so obvious. What Searle is saying in the Chinese Room argument is that we take a simple "machine" and then consider how absurd it is to consider such a simple machine to be conscious. The fallacy has everything to do with the scale and complexity of the system. Complexity alone does not necessarily give us consciousness, but the Chinese Room tells us nothing about whether or not such a system is conscious. Kurzweil's Chinese Room. I have my own conception of the Chinese Room—call it Ray Kurzweil's Chinese Room. In my thought experiment there is a human in a room. The room has decorations from the Ming dynasty, including a pedestal on which sits a mechanical typewriter.

A machine that could really do what Searle describes in the Chinese Room argument would not merely be manipulating language symbols, because that approach doesn't work. This is at the heart of the philosophical sleight of hand underlying the Chinese Room. The nature of computing is not limited to manipulating logical symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities. Adherents appear to believe that Searle's Chinese Room argument demonstrates that machines (that is, nonbiological entities) can never truly understand anything of significance, such as Chinese.

·The "criticism from ontology": John Searle describes several versions of his Chinese Room analogy. In one formulation a man follows a written program to answer questions in Chinese. The man appears to be answering questions competently in Chinese, but since he is "just mechanically following a written program, he has no real understanding of Chinese and no real awareness of what he is doing. The "man" in the room doesn't understand anything, because, after all, "he is just a computer," according to Searle. So clearly, computers cannot understand what they are doing, since they are just following rules. Searle's Chinese Room arguments are fundamentally tautological, as they just assume his conclusion that computers cannot possibly have any real understanding.

Speaking Code: Coding as Aesthetic and Political Expression
by Geoff Cox and Alex McLean
Published 9 Nov 2012

Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417. 51. Ibid., 418. 52. Diane Proudfoot, “Wittgenstein’s Anticipation of the Chinese Room,” in John Preston and Mark Bishop, eds., Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford: Clarendon Press, 2002), 168. 53. Ibid., 168–169, citing Wittgenstein’s Philosophical Investigations (1953). 54. Proudfoot, “Wittgenstein’s Anticipation of the Chinese Room,” 177–178. 55. John R. Searle, “Twenty-One Years in the Chinese Room,” in Preston and Bishop, Views into the Chinese Room, 56. 56. Ibid. With this statement, Searle is arguing that Turing machines rely on abstract mathematical processes but not on energy transfer like some other machines; and one might extrapolate that the discourse around artificial life reinvigorates the fantasies of artificial intelligence in this way.

His observation is that the syntactical, abstract or formal content of a computer program is not the same as semantic or mental content associated with the human mind. The cognitive processes of the human mind can be simulated but not duplicated as such. Searle develops his thought experiment known as the “Chinese Room argument” as follows: “Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken. . . . To me, Chinese is just so many meaningless squiggles.”50 Given linguistic instruction, Searle imagines that he becomes able to give answers that are indistinguishable from those of native Chinese speakers, but insists, “I produce the answers by manipulating uninterpreted formal symbols.

For the purposes of the Chinese, I am simply an instantiation of the computer program.”51 Searle’s position is based on the linguistic distinction between syntax and semantics as applied to the digital computer or Turing machine as a “symbol-manipulating device,” where the units have no meaning in themselves (a position that follows from semiotics). Even if it is argued that there is some sense of intentionality in the program or a degree of meaning in the unit, it is not the same as human information processing, and this sense of agency is what Searle calls “as-if intentionality.” In the Chinese Room, Searle becomes an instantiation of a computer program, drawing on a database of symbols and arranging them according to program rules. The point to be stressed is that formal principles alone remain insufficient to demonstrate human reason; the Chinese language is a particularly complex kind of invention that challenges human capabilities (so that machines might usefully supplement them), but it also combines images and speech in ways that confound formal logic.

pages: 246 words: 81,625

On Intelligence
by Jeff Hawkins and Sandra Blakeslee
Published 1 Jan 2004

* * * My skepticism of AI's assertions was honed around the same time that I applied to MIT. John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be, intelligent. To prove it, in 1980 he came up with a thought experiment called the Chinese Room. It goes like this: Suppose you have a room with a slot in one wall, and inside is an English-speaking person sitting at a desk. He has a big book of instructions and all the pencils and scratch paper he could ever need. Flipping through the book, he sees that the instructions, written in English, dictate ways to manipulate, sort, and compare Chinese characters.

It wasn't the book, which is just, well, a book, sitting inertly on the writing desk amid piles of paper. So where did the understanding occur? Searle's answer is that no understanding did occur; it was just a bunch of mindless page flipping and pencil scratching. And now the bait-and-switch: the Chinese Room is exactly analogous to a digital computer. The person is the CPU, mindlessly executing instructions, the book is the software program feeding instructions to the CPU, and the scratch paper is the memory. Thus, no matter how cleverly a computer is designed to simulate intelligence by producing the same behavior as a human, it has no understanding and it is not intelligent.
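Hawkins's mapping (person as CPU, book as program, scratch paper as memory) is the fetch-execute loop of a stored-program machine. A minimal sketch, assuming a toy two-instruction "book" invented for illustration:

```python
# Toy fetch-execute loop mirroring the analogy: the "person" (CPU)
# mindlessly executes the "book" (program) using "scratch paper" (memory).
def run(program, memory):
    pc = 0  # which page of the book we are on
    while pc < len(program):
        op, *args = program[pc]
        if op == "write":      # scribble a value onto scratch paper
            memory[args[0]] = args[1]
        elif op == "copy":     # copy one scratch cell to another
            memory[args[1]] = memory[args[0]]
        pc += 1
    return memory

# The "CPU" never knows what a greeting is; it just turns pages.
mem = run([("write", "greeting", "你好"), ("copy", "greeting", "output")], {})
print(mem["output"])
```

Whether the lack of understanding at this level settles anything about the system as a whole is, of course, exactly what the replies to Searle dispute.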

AI defenders came up with dozens of counterarguments to Searle, such as claiming that although none of the room's component parts understood Chinese, the entire room as a whole did, or that the person in the room really did understand Chinese, but just didn't know it. As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere. I was convinced we needed to understand what "understanding" is, a way to define it that would make it clear when a system was intelligent and when it wasn't, when it understands Chinese and when it doesn't.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

(To actually write out such code in a form that a human could read would require tens of thousands of printed volumes.) A computer can retrieve instructions from memory in microseconds, whereas the Chinese room computer would be billions of times slower. Given these practical considerations, the Chinese room and its contents could not convince an interrogator that it was a person: it could not in fact pass the Turing test. Another standard response to the Chinese room is that while the person in the room does not exhibit understanding, and the room itself doesn’t, the system containing the person, the room, the instructions and so on does.

And, as we have seen, computers can learn from experience, and become effective decision-makers, even if they cannot articulate the rationale for their decisions. The most famous argument against the possibility of strong AI is due to the philosopher John Searle, who was in fact the person who coined the terms strong and weak AI. He invented a scenario called the Chinese room in an attempt to show that strong AI is impossible. The Chinese room scenario goes like this: Imagine a man working alone in a room. Through a slot in the door he receives cards on which questions are written in Chinese; he understands no Chinese himself. He takes these cards, and then carefully follows a list of written instructions in order to write an answer in Chinese, which he then passes back out of the room.

While there certainly are areas of the human brain that seem to be responsible for language understanding, we will not find, within these, understanding of the kind Searle asks for. I believe that there is an even simpler response to Searle’s ingenious thought experiment. The Chinese room puzzle, expressed as a kind of Turing test, is a cheat because it does not treat the room as a black box. We only claim that there is no understanding in the Chinese room when we start to look inside it. The Turing test itself insisted that we should only look at the inputs and outputs, and ask whether the behaviour we witness is indistinguishable from that of a human. It seems to me to be pointless to get caught up in an argument about whether a computer ‘really’ understands if, in fact, it is doing something that is indistinguishable from human understanding.
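Wooldridge's black-box point can be made concrete: if two systems produce identical outputs for every input the interrogator tries, nothing in the Turing test distinguishes them. A hypothetical sketch (both "systems" and the question set are invented for illustration):

```python
# Two "systems": one a mere lookup table, one stipulated to "really
# understand". An interrogator who sees only inputs and outputs cannot
# tell them apart when their I/O behaviour coincides.
TABLE = {"2+2?": "4", "capital of France?": "Paris"}

def room(q):            # mere symbol lookup
    return TABLE[q]

def understander(q):    # stipulated to "really understand" (hypothetical)
    return {"2+2?": "4", "capital of France?": "Paris"}[q]

questions = ["2+2?", "capital of France?"]
indistinguishable = all(room(q) == understander(q) for q in questions)
print(indistinguishable)
```

The dispute between Searle and his critics is then over whether anything beyond this behavioural equivalence is a legitimate object of the test.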

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed
by Ray Kurzweil
Published 13 Nov 2012

Searle compares this to a computer and concludes that a computer that could answer questions in Chinese (essentially passing a Chinese Turing test) would, like the man in the Chinese room, have no real understanding of the language and no consciousness of what it was doing. There are a few philosophical sleights of hand in Searle’s argument. For one thing, the man in this thought experiment is comparable only to the central processing unit (CPU) of a computer. One could say that a CPU has no true understanding of what it is doing, but the CPU is only part of the structure. In Searle’s Chinese room, it is the man with his rulebook that constitutes the whole system. That system does have an understanding of Chinese; otherwise it would not be capable of convincingly answering questions in Chinese, which would violate Searle’s assumption for this thought experiment.

Although I’m not prepared to move up my prediction of a computer passing the Turing test by 2029, the progress that has been achieved in systems like Watson should give anyone substantial confidence that the advent of Turing-level AI is close at hand. If one were to create a version of Watson that was optimized for the Turing test, it would probably come pretty close. American philosopher John Searle (born in 1932) argued recently that Watson is not capable of thinking. Citing his “Chinese room” thought experiment (which I will discuss further in chapter 11), he states that Watson is only manipulating symbols and does not understand the meaning of those symbols. Actually, Searle is not describing Watson accurately, since its understanding of language is based on hierarchical statistical processes—not the manipulation of symbols.

Humans in fact do a very poor job of solving the kinds of problems that a quantum computer would excel at (such as factoring large numbers). And if any of this proved to be true, there would be nothing barring quantum computing from also being used in our computers. John Searle is famous for introducing a thought experiment he calls “the Chinese room,” an argument I discuss in detail in The Singularity Is Near.9 In short, it involves a man who takes in written questions in Chinese and then answers them. In order to do this, he uses an elaborate rulebook. Searle claims that the man has no true understanding of Chinese and is not “conscious” of the language (as he does not understand the questions or the answers) despite his apparent ability to answer questions in Chinese.

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

(Brooks, by the way, is convinced it is purely mechanistic and categorically rejects the notion of “the juice” as some attribute beyond normal physics.) What do you think “the juice” is? To try to get some resolution to the question of the possibility of an AGI, I invite you to answer six yes or no questions. Keep track of how many times you answer yes.

1. Does the Chinese room think?
2. Does the Chinese room or the Librarian understand Chinese?
3. Whatever you think “the juice” is, could a machine get it? (If you don’t think it exists at all, count that as a yes answer.)
4. Did you answer the “What are we?” foundational question with “machines”?
5. Did you answer the “What is your ‘self’?”

There is no “I” and there is no understanding. His conclusion is not simply linguistic hairsplitting. The entire question of AGI hinges on this point of understanding something. To get at the heart of this argument, consider the thought experiment offered up in 1980 by the American philosopher John Searle. It is called the Chinese room argument. Here it is in broad form: There is a giant room, sealed off, with one person in it. Let’s call him the Librarian. The Librarian doesn’t know any Chinese. However, the room is filled with thousands of books that allow him to look up any question in Chinese and produce an answer in Chinese.

So that is the basic argument against the possibility of AGI. First, computers simply manipulate ones and zeros in memory. No matter how fast you do that, that doesn’t somehow conjure up intelligence. Second, the computer just follows a program that was written for it, just as in the case of the Chinese room. So no matter how impressive it looks, it doesn’t really understand anything. It is just a party trick. It should be noted that many people in the AI field would most likely scratch their heads at the reasoning of the case against AGI and find it all quite frustrating. They would say that of course the brain is a machine—what else could it be?

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

Three reasons to be doubtful

Having looked at one argument for why it should be possible to create an artificial mind, let’s turn to three arguments that have been advanced to prove that it will not be possible for us to create conscious machines. These are:

- The Chinese Room thought experiment
- The claim that consciousness involves quantum phenomena that cannot be replicated
- The claim that we have souls

The Chinese Room

American philosopher John Searle first described his Chinese Room thought experiment in 1980. It tries to show that a computer which could engage in a conversation would not understand what it was doing, which means that it would not be conscious. He described a computer that takes Chinese sentences as input, processes them by following the instructions of its software, and produces new sentences in Chinese as output.

Rather he was arguing that computers do not process information in the way that human brains do. Until and unless one is built which does this, it will not be conscious, however convincing a simulation it produces. Down the years Searle’s argument has generated a substantial body of commentary, mostly claiming to refute it. Most computer scientists would say that the Chinese room is a very poor analogy for how a conscious machine would actually operate, and that a simple input-output device like this would not succeed in appearing to converse. Many have also claimed that if such a machine were to succeed, there would be an understanding of Chinese somewhere within the system – perhaps in the programme, or in the totality of the room, the person and the programme.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

BACK TO NOTE REFERENCE 99 For more in-depth information on GPT-3, see Greg Brockman et al., “OpenAI API,” OpenAI, June 11, 2020, https://openai.com/blog/openai-api; Brown et al., “Language Models Are Few-Shot Learners”; Kelsey Piper, “GPT-3, Explained: This New Language AI Is Uncanny, Funny—and a Big Deal,” Vox, August 13, 2020, https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language; “GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology,” Disruption Theory, YouTube video, August 28, 2020, https://www.youtube.com/watch?v=8V20HkoiNtc. BACK TO NOTE REFERENCE 100 David Cole, “The Chinese Room Argument,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Winter 2020), https://plato.stanford.edu/archives/win2020/entries/chinese-room; Amanda Askell (@amandaaskell), “GPT-3’s completion of the Chinese room argument from Searle’s ‘Minds, Brains, and Programs’ (original text is in bold),” Twitter, July 17, 2020, https://twitter.com/AmandaAskell/status/1284186919606251521; David J. Chalmers, The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996), 327.

BACK TO NOTE REFERENCE 10 For more on David Chalmers’s concept of zombies and John Searle’s related “Chinese room” thought experiment showing why subjective consciousness cannot be proven from behavior, see John Green, “Where Does Your Mind Reside?,” CrashCourse, YouTube video, August 1, 2016, https://www.youtube.com/watch?v=3SJROTXnmus; John Green, “Artificial Intelligence & Personhood,” CrashCourse, YouTube video, August 8, 2016, https://www.youtube.com/watch?v=39EdqUbj92U; Marcus Du Sautoy, “The Chinese Room Experiment: The Hunt for AI,” BBC Studios, YouTube video, September 17, 2015, https://www.youtube.com/watch?v=D0MD4sRHj1M; Robert Kirk, “Zombies,” in Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Spring 2019), https://plato.stanford.edu/entries/zombies; David Cole, “The Chinese Room Argument,” in Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Spring 2019), https://plato.stanford.edu/entries/chinese-room.

BACK TO NOTE REFERENCE 11 For Chalmers’s more detailed views on the hard and easy problems of consciousness, see David Chalmers, “Hard Problem of Consciousness,” Serious Science, YouTube video, July 5, 2016, https://www.youtube.com/watch?v=C5DfnIjZPGw; David Chalmers, “The Meta-Problem of Consciousness,” Talks at Google, YouTube video, April 2, 2019, https://www.youtube.com/watch?

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

But the point is, it’s not clear if computers will think as we define it, or if they’ll ever possess anything like intention or consciousness. Therefore, some scholars say, artificial intelligence equivalent to human intelligence is impossible. Philosopher John Searle created a thought experiment called the Chinese Room Argument that aims to prove this point: Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols, which, unknown to the person in the room, are questions in Chinese (the input).

At best what researchers will get from efforts to reverse engineer the brain will be a refined mimic. And AGI systems will achieve similarly mechanical results. Searle’s not alone in believing computers will never think or attain consciousness. But he has many critics, with many different complaints. Some detractors claim he is computerphobic. Taken as a whole, everything in the Chinese room, including the man, comes together to create a system that persuasively “understands” Chinese. Seen this way Searle’s argument is circular: no part of the room (computer) understands Chinese, ergo the computer cannot understand Chinese. And you can just as easily apply Searle’s objection to humans: we don’t have a formal description of what understanding language really is, so how can we claim humans “understand” language?

He rejected the criticism that the machine might not be thinking like a human at all. He wrote, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” In other words, he objects to the assertion John Searle made with his Chinese Room Experiment: if it doesn’t think like a human it’s not intelligent. Most of the experts I’ve spoken with concur. If the AI does intelligent things, who cares what its program looks like? Well, there may be at least two good reasons to care. The transparency of the AI’s “thought” process before it evolves beyond our understanding is crucial to our survival.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

“Scientists Propose a Novel Regional Path Tracking Scheme for Autonomous Ground Vehicles.” Phys Org, January 16, 2017. https://phys.org/news/2017-01-scientists-regional-path-tracking-scheme.html. Searle, John R. “Artificial Intelligence and the Chinese Room: An Exchange.” New York Review of Books, February 16, 1989. http://www.nybooks.com/articles/1989/02/16/artificial-intelligence-and-the-chinese-room-an-ex/. Seife, Charles. Proofiness: How You’re Being Fooled by the Numbers. New York: Penguin, 2011. Sharkey, Patrick. “The Destructive Legacy of Housing Segregation.” Atlantic, June 2016. https://www.theatlantic.com/magazine/archive/2016/06/the-eviction-curse/480738/.

The underpinnings are absurd, from a critical perspective, in that both the man and woman are given gender-coded physical and moral attributes. The philosophical underpinnings of Turing’s argument are unsound. One of the most compelling counterarguments was addressed by the philosopher John Searle in an argument known as the Chinese Room. Searle summarized it in a 1989 piece in the New York Review of Books: A digital computer is a device which manipulates symbols, without any reference to their meaning or interpretation. Human beings, on the other hand, when they think, do something much more than that. A human mind has meaningful thoughts, feelings, and mental contents generally.

There are different ways to react to this news: you can be sad that the thing you dreamed of is not possible—or you can be excited and embrace what is possible when artificial devices (computers) work in sync with truly intelligent beings (humans). I prefer the latter approach.

Notes 1. Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” 484. 2. Turing, “Computing Machinery and Intelligence.” 3. Searle, “Artificial Intelligence and the Chinese Room.”

4 Hello, Data Journalism

We are at an exciting moment, when every field has taken a computational turn. We now have computational social science, computational biology, computational chemistry, and digital humanities; visual artists use languages like Processing to create multimedia art; 3-D printing allows sculptors to push further into physical possibilities with art.

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

The man thus passes the test of intelligence proposed by Turing (1950), who argued that a computer can be called intelligent if it can engage in conversation in a way that would pass for the natural language of a human being.1 And yet the man does not actually understand anything of the questions he is asked or the answers he gives, thus proving, Searle argues, that computation by itself is not sufficient for understanding. Searle (1993) later extends this conclusion to consciousness, stating that the Chinese room argument shows the computational model to be insufficient for consciousness, and syntax to be insufficient for semantics. In other words, although a machine may be able to imitate consciousness through advanced symbol manipulation according to formal rules, this (which is all that computers are capable of) will never amount to consciousness. Kurzweil (2005: 458–469) disagrees. He regards Searle’s Chinese room argument as tautological: Searle concludes that a computer could never ‘understand’ anything only because he has already assumed that it is only biological entities which could be conscious and understand things.

The truth of such a view hinges on what Dreyfus coined the ‘biological assumption’ (1992: 156, 159–160) and the ‘psychological assumption’ (1992: 263): the brain is like a very advanced computer, and the mind like computer software; and mental activity can be reduced to symbol manipulation, consisting of no more than a device operating on bits of information according to formal rules. If these assumptions are correct then we should expect a sufficiently advanced computer to be able to manipulate symbols in the same way as the brain, and thereby to be able to produce consciousness. Searle (1980) produced a powerful challenge to this view with his ‘Chinese room argument’: An English speaker is in a room, holding an English instruction manual on how to manipulate Chinese symbols so as to reply to questions posed in Chinese by people outside the room. He is surrounded by boxes of these symbols. By following the instructions, the man is able to pass out symbols which are correct answers to the questions and which are indistinguishable from the answers that would be given by an actual Chinese speaker.

And that, Kurzweil contends, is precisely what happens in the case of the human brain: no single neuron is conscious, but when put together consciousness arises as an emergent property from complex patterns of neuronal activity. Why, he asks, could the same not happen with a sufficiently vast equivalent of the Chinese room? That is, if there were billions of people inside a massive room simulating the different processes of the brain, why should we not say that such a system is conscious? Yet this is implausible. Of course, the collection of people may seem to act as if it has a ‘mind of its own’. That is a well-known sociological phenomenon (see e.g.

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

The machines could speak to one another securely without us humans being able to eavesdrop on their private conversations. Stuck in the Chinese Room These algorithms that are navigating language, translating from English to Spanish, answering Jeopardy! questions and comprehending narrative raise an interesting question which is important for the whole sphere of AI. At what point should we consider that the algorithm understands what it is actually doing? This challenge was captured in a thought experiment created by John Searle and called ‘The Chinese Room’. Imagine that you are put in a room with an instruction manual which gives you an appropriate response to any written string of Chinese characters posted into the room.

His point was that provided the things had the relationship expressed by the axioms, then the deductions would make as much sense for chairs and beer mugs as for geometric lines and planes. This allows the computer to follow rules and create mathematical deductions without really knowing what the rules are about. This will be relevant when we come later to the idea of the Chinese room experiment devised by John Searle. This thought experiment explores the idea of machine translation and tries to illustrate why following rules doesn’t show intelligence or understanding. Nevertheless, follow the rules of the mathematical game and you get mathematical theorems. But where did this urge to establish proof in mathematics come from?

E. 139 Beveridge, Andrew 56 Beyond the Fence (musical) 290–1 Białystok University 236 biases and blind spots, algorithmic 91–5 Birtwistle, Harrison 193 Blake, William 279 Blombos Cave, South Africa 103 Bloom (app) 229 BOB (artificial life form) 146–8 Boden, Margaret 9, 10, 11, 16, 39, 209, 222 Boeing 114 Bonaparte, Napoleon 158 bone carvings 104–5 booksellers 62–5 bordeebook 62–5 Borges, Jorge Luis: ‘The Library of Babel’ 241–4, 253, 304 Botnik 284–6 Boulanger, Nadia 186, 189, 205, 209 Boulez, Pierre 11, 223 brachistochrone 244 Braff, Zach 284 brain: biases and blind spots 91–2; consciousness and 274, 304–5; fractals and 124–5; mathematics and 155, 156, 160–1, 171, 174, 177, 178; musical composition and 187, 189, 193, 203, 205, 231; neural networks and 68–71, 68, 70; pattern recognition and 6, 20–1, 99–101, 155; stroke and 133–4; visual recognition and 76, 79, 143–4 Breakout (game) 26–8, 91, 92, 210 Brew, Jamie 284 Brin, Sergey 48–9, 51–2, 57 Bronowski, Jacob 104 Brown, Glenn 141 Bruner, Jerome 303 Buolamwini, Joy 94 Cage, John 106, 206 Calculus of Constructions (CoC) 173–4 see also Coq Cambridge Analytica 296 Cambridge University 18–19, 23–4, 43, 72, 81, 150, 225, 240, 278, 290 Carpenter, Loren 114, 115 Carré, Benoit 224 cars, driverless 6, 29–30, 79, 91 Cartesian geometry 110–11 Catmull, Ed 115 cave art, ancient 103–4, 105, 156, 230 Cawelti, John: Adventure, Mystery and Romance 252–3 Chang, Alex 23 chaos theory 124 Cheng, Ian 146–8 chess 16, 18–20, 21, 22–3, 29, 32–3, 34, 97, 151, 153, 162, 163, 246, 260–1, 304 child pornography 77 Chilvers, Peter 229 Chinese Room experiment 164, 273–5 Chomsky, Noam 260 Chopin, Frédéric 13, 197, 200, 202, 204, 206–7, 304 Christie’s 141 classemes 138 Classical era of music 10, 12–13, 190, 199, 207 Classification of Finite Simple Groups 18, 172, 175, 177, 244 Coelho, Paulo 302 Cohen, Harold 116–17, 118, 121 Coleridge, Samuel Taylor: ‘Kubla Khan’ 14 Colton, Simon 119, 120, 121–2, 291, 292, 293 Coltrane, John 223 Commodore Amiga 23 
Congo (chimp) 107 consciousness 107, 231, 232, 270, 274, 283, 300, 302–6 Continuator, The 218–21, 286 Conway, John 18–19 Cope, David 195–203, 207, 208, 210, 304 copyright ownership 108–9 Coq 173–6, 177, 184 Coquand, Thierry 173 correlation as causation, mistaking 92–4 Corresponding Society of Musical Sciences 193, 208 Coulom, Rémi 31 Crazy Stone 31 Creative Adversarial Networks 140–1 creativity: algorithmic and rule-based, as 5; animals and 107–9; art, definition of and 103–7; audiences and 303; coder to code, shifting from 7, 102–3, 116–22, 132–42, 219–20; combinational 10–11, 16, 181, 222, 299; commercial incentive and 131–2; competition and 132–42; consciousness and 301–2, 303–5; death and 304; definition of 3–5, 9–13, 301–2; drugs and 181–2; exploratory 9–10, 40, 181, 219, 299; failure as component part of 17; feedback from others and 132; flow and 221–4, 222; Go and see Go; human lives as act of 303–4; Lovelace Test and see Lovelace Test; mathematics and 3, 150–1, 153, 161, 167–8, 170, 181–2, 185, 245–8, 253, 279–80; mechanical nature of 298; music and see music; new/novelty and 3, 4, 7–8, 12, 13, 16, 17, 40–3, 102–3, 109, 138–41, 140, 167–8, 238–9, 291–3, 299, 301; origins of our obsession with 301; political role of 303; randomness and 117–18; romanticising 14–15; self-reflection and 300; storytelling and see storytelling; surprise and 4, 8, 40, 65, 66, 102–3, 148, 168, 202, 241, 248–9; teaching 13–17; three types of 9–13; transformational 11–13, 17, 39, 41, 181, 209, 299; value and 4, 8, 12, 16, 17, 40–1, 102–3, 167–8, 238–9, 301, 304 Csikszentmihalyi, Mihaly 221 Cubism 11, 138, 139 Cybernetic Poet 280–2 Cybernetic Serendipity (ICA exhibition, 1968) 118–19 Dahl, Roald: Tales of the Unexpected 276–7; ‘The Great Automatic Grammatizator’ 276–7, 297 dating/matching 57–61, 58, 59, 60 da Vinci, Leonardo 106, 118, 128; Treatise on Painting 117 Davis, Miles: Kind of Blue 214 Debussy, Claude 1 DeepBach 210–12, 232 DeepBlue 29, 214, 260–1 DeepMind 25–43, 65, 95, 97, 
98, 131, 132, 151, 210, 233–9, 241, 266 Deep Watch 224 Delft University of Technology 127 democracy 165–6 Dennett, Daniel 147 Descartes, René 12, 110–11 Disney 289–90 Duchamp, Marcel 106 du Sautoy, Marcus: attempts to fake a Jackson Pollock 123–5; composes music 186–8; The Music of the Primes 285–6; uses AI to write section of this book 297 Dylan, Bob 223 EEG 125 Egyptians, Ancient 157, 165 eigenvectors of matrices 53 Eisen, Michael 62, 64 Elgammal, Ahmed 132–3, 134, 135, 139, 140, 141 Eliot, George 302 Eliot, T.

pages: 247 words: 43,430

Think Complexity
by Allen B. Downey
Published 23 Feb 2012

Example 10-7. In the philosophy of mind, Strong AI is the position that an appropriately programmed computer could have a mind in the same sense that humans have minds. John Searle presented a thought experiment called The Chinese Room, intended to show that Strong AI is false. You can read about it at http://en.wikipedia.org/wiki/Chinese_room. What is the systems reply to the Chinese Room argument? How does what you have learned about complexity science influence your reaction to the systems reply? Chapter 11. Case Study: Sugarscape Dan Kearney, Natalie Mattison, and Theo Thompson The Original Sugarscape Sugarscape is an agent-based model developed by Joshua M.
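One way to get a feel for Downey's exercise above is to build the room's mechanism yourself. The sketch below (all rules and phrases are invented for illustration, not taken from Downey's text) implements the room as a bare lookup table: it converses after a fashion, yet stores nothing that could count as understanding.

```python
# A minimal sketch of the Chinese Room as pure symbol lookup.
# The rule book is a made-up placeholder: the point is that the
# responder maps input strings to output strings without any
# representation of what they mean.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
}

def chinese_room(symbols, rules=RULE_BOOK):
    """Return the rule book's response, or a stock fallback."""
    return rules.get(symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # 我很好
```

The systems reply asks whether understanding might be a property of the whole (rules plus executor), which is exactly the kind of emergent, system-level property complexity science deals in.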

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

Or the experiment could have been imagined in a cosmopolitan vein, with each human being alive contributing to the “earth brain”—a scenario that would have brought us at least a small step closer to a one-one correspondence between people and neurons. In the case of the Chinese room, Searle seems to have chosen this language in particular because he could attest that he did not know a single word of it. But nor does he know a word of Yukaghir or Tupi, and yet a moment’s reflection is enough to see that using one of these languages in the thought experiment would run the risk of eliciting very different intuitions. Since at least the seventeenth century, European observers have imagined Chinese people as being Chinese rooms, processing information and delivering rational and correct responses without any real conscious understanding of what they were doing; and China as a whole has been understood for equally long as a sort of “China brain,” functioning as a whole and as a true unity, rather than, in contrast with European nations, as a collection of individuals.

In the “China brain” thought experiment, articulated by Lawrence Davis in 197426 and then again by Ned Block in 1978,27 each citizen of China is imagined playing the role of a single neuron, using telecommunication devices to connect them to one another in the same way that axons and dendrites connect the neurons of the brain. In such a scenario, the thought experiment sought to know, would China itself become conscious? In 1980, in turn, John Searle imagined the “Chinese room,” which was meant to show the falsehood of “strong AI,” that is, again, of the view that machines can ever be made to literally understand anything.28 A machine that could be shown to convincingly display “understanding” of Chinese would be indistinguishable from a room in which Searle himself, or some other human being, was holed up, receiving sentences in good Chinese written on paper and passed through a slit in the wall, to which he would then respond, in equally good Chinese, by simply consulting various reference works available in the room.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

Many techniques have been developed to improve the performance of algorithms and avoid or at least delay exponential complexity. It would appear that our human brains have very limited ability to search large numbers of possibilities looking for solutions, and yet, people appear to be able to think. [Figure: Chinese room — a man processing Chinese without any understanding] John Searle provided an alternative argument known as the Chinese Room. Suppose an AGI was implemented as a person in a room full of instructions written on paper cards. Someone outside the room slips pieces of paper through a slot in a door with Chinese questions and assertions written on them. The person inside the room cannot read Chinese, but he can look up the symbols in his list of instructions and perform the steps they contain.

Definitions of singularity 4. Hollywood and HAL 2001 1. Anthropomorphic zap gun vs. virus 2. The two HAL's 3. HAL dialog 5. The Case Against Machine Intelligence 1. Turing halting problem 2. Gödel's incompleteness theorem 3. Incompleteness argument against general AGI 4. Combinatorial explosion 5. Chinese room 6. Simulated vs. real intelligence 7. Emperors new mind 8. Intentionality 9. Brain in a vat 10. Understanding the brain 11. Consciousness and the soul 12. Only what was programmed 13. What computers can't do 14. Over-hyped technologies 15. Nonlinear difficulty, chimpanzees 16. End of Moore's law 17.

Intelligent computers’ moral values will be driven by natural selection for the same reason that human moral values have been driven by natural selection. It is unclear whether the computers will be friendly. There are several objections that have been raised to this line of reasoning. These include theoretical objections based on Turing and Gödel, Chinese Room-style objections based on the nature of computation, and our historical lack of success in building intelligent machines. They will also be examined in detail in a few chapters' time, but they are all easily discounted. The thorny issue of consciousness will also be investigated, as well as the distinction between real intelligence and simulated intelligence.

pages: 913 words: 265,787

How the Mind Works
by Steven Pinker
Published 1 Jan 1997

Grammar in the head: Chomsky, 1991; Jackendoff, 1987, 1994; Pinker, 1994. 90 Mentalese: Anderson & Bower, 1973; Fodor, 1975; Jackendoff, 1987, 1990, 1994; Pinker, 1989, 1994. 90 “Processed” inputs to the hippocampus: Churchland & Sejnowski, 1992, p. 286. “Processed” inputs to the frontal lobe: Crick & Koch, 1995. 90 Programming style: Kernighan & Plauger, 1978. 92 Architecture of complexity: Simon, 1969. 92 Hora and Tempus: Simon, 1969, p. 188. 93 The Chinese Room: Block, 1978; Searle, 1980. 94 Chinese Room commentary: Searle, 1980; Dietrich, 1994. Chinese Room update: Searle, 1992. 94 Chinese Room refutations: Churchland & Churchland, 1994; Chomsky, 1993; Dennett, 1995. 96 They’re made out of meat: Bisson, 1991. 97 The emperor’s new mind: Penrose, 1989, 1990. Update: Penrose, 1994. 97 The emperor’s new book: Penrose, 1989; Wilczek, 1994; Putnam, 1994; Crick, 1994; Dennett, 1995. 98 Tortoise and Achilles: Carroll, 1895/1956. 99 Neuro-logical networks: McCulloch & Pitts, 1943. 101 Neural networks: Hinton & Anderson, 1981; Feldman & Ballard, 1982; Rumelhart, McClelland, & the PDP Research Group, 1986; Grossberg, 1988; Churchland & Sejnowski, 1992; Quinlan, 1992. 106 Necker network: Feldman & Ballard, 1982. 107 Pattern associators: Hinton, McClelland, & Rumelhart, 1986; Rumelhart & McClelland, 1986b. 109 Problems with perceptrons: Minsky & Papert, 1988a; Rumelhart, Hinton, & Williams, 1986. 111 Hidden-layer networks as function approximators: Poggio & Girosi, 1990. 112 Connectionism: Rumelhart, McClelland, & the PDP Research Group, 1986; McClelland, Rumelhart, & the PDP Research Group, 1986; Smolensky, 1988; Morris, 1989.

The first attack comes from the philosopher John Searle. Searle believes that he refuted the computational theory of mind in 1980 with a thought experiment he adapted from another philosopher, Ned Block (who, ironically, is a major proponent of the computational theory). Searle’s version has become famous as the Chinese Room. A man who knows no Chinese is put in a room. Pieces of paper with squiggles on them are slipped under the door. The man has a long list of complicated instructions such as “Whenever you see [squiggle squiggle squiggle], write down [squoggle squoggle squoggle].” Some of the rules tell him to slip his scribbles back out under the door.

Many people have interpreted him as saying that the program is missing consciousness, and indeed Searle believes that consciousness and intentionality are closely related because we are conscious of what we mean when we have a thought or use a word. Intentionality, consciousness, and other mental phenomena are caused not by information processing, Searle concludes, but by the “actual physical-chemical properties of actual human brains” (though he never says what those properties are). The Chinese Room has kicked off a truly unbelievable amount of commentary. More than a hundred published articles have replied to it, and I have found it an excellent reason to take my name off all Internet discussion-group lists. To people who say that the whole room (man plus rule sheet) understands Chinese, Searle replies: Fine, let the guy memorize the rules, do the calculations in his head, and work outdoors.

pages: 210 words: 62,771

Turing's Vision: The Birth of Computer Science
by Chris Bernhardt
Published 12 May 2016

Before you can submit the form, you have to answer a CAPTCHA, which customarily involves reading some deformed text and typing the letters and numbers into a box. The notion of machines thinking naturally leads to the questions of whether machines can understand and be conscious. There have been heated arguments both for and against. The most famous argument against the idea of machines as entities that can think and understand is the Chinese Room Argument. This is a thought experiment invented by the philosopher John Searle in 1980, and is based on the Turing Test. Searle imagines that he has been placed in a room. He doesn’t understand a word of Chinese, but he has a book that tells him what to do in various situations. People can slide pieces of paper into the room through a slot.

When he finds the appropriate string, it tells him what to write on his piece of paper and slide to the people outside. The people on the outside are Chinese interrogators. They are trying to determine whether there is someone or something inside the room who understands Chinese. Since they keep getting correct responses to their questions, they assume that there is. Does the Chinese room understand Chinese, or is it just simulating understanding Chinese? Ever since the argument was first given, it has provoked an enormous number of arguments over whether real understanding is taking place. Clearly, Searle is acting as a universal computer and the book is the program.
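Bernhardt's "universal computer" observation can be made concrete in a few lines. In the sketch below (books and rules are invented placeholders, not from the text), the interpreter is fixed; only the book handed to it as data determines which language the room appears to "speak" — just as one universal machine runs any program.

```python
# The person is a fixed interpreter; the book is the program.
# Swapping the book changes the room's behavior without changing
# the interpreter — the hallmark of a universal computer.

def follow_book(book, note):
    """Blindly apply the first rule whose pattern appears in the note."""
    for pattern, response in book:
        if pattern in note:
            return response
    return "???"  # no rule matches: hand back a blank stare

CHINESE_BOOK = [("吗", "是的"), ("谢谢", "不客气")]
FRENCH_BOOK = [("ça va", "très bien"), ("merci", "de rien")]

# The same "person" runs either program:
print(follow_book(CHINESE_BOOK, "你好吗"))          # 是的
print(follow_book(FRENCH_BOOK, "merci beaucoup"))  # de rien
```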

J., 124 Brown, Gordon, 161 Busy beaver function, 119 Canonical systems, 62 Cantor, Georg, 12, 108, 111, 123 Cantor’s Theorem, 132 CAPTCHA (Completely Automated Public Turing Test To Tell Computers and Humans Apart), 158 Cardinality, 124 computations, 140 real numbers, 136 Cells, 25, 43 Cellular automata, 82 Central Limit Theorem, 1 Central processing unit, 98 Chinese Room Argument, 158 Church, Alonzo, 16, 24, 62, 63, 71, 148 Church-Turing thesis, 61–62 Clay Mathematics Institute, 66 Code breaking, 147, 150, 153, 160 Collatz function, 80–81 Colossus, 153 Compiler, 98, 105, 156 Complement of a language, 33, 40 Complexity theory, 66 Computable function, 59, 120 Computable numbers, 141 Computational power, 12, 25, 63, 71, 101 Computing Machine Laboratory, 147, 153, 160 Concatenation, 38, 91 Configurations, 43, 46 Continuum hypothesis, 22, 139 Control unit, 98 Conway, John, 164 Cook, Matthew, 86, 103 Copeland, Jack, 160, 163 Correspondence problem.

pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy
by Pistono, Federico
Published 14 Oct 2012

If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not be able to understand the conversation. Similarly, Searle concludes, a computer executing the program would not understand the conversation either. http://plato.stanford.edu/entries/chinese-room/ http://en.wikipedia.org/wiki/Chinese_room 28 A ‘facepalm’ is the physical gesture of placing one’s hand flat across one’s face or lowering one’s face into one’s hand or hands. The gesture is found in many cultures as a display of frustration, disappointment, embarrassment, shock, or surprise. It has been popularised as an Internet meme based on an image of the character Captain Jean-Luc Picard performing the gesture in the Star Trek: The Next Generation episode “Déjà Q”.

http://en.wikipedia.org/wiki/File:Wheat_Chessboard_with_line.svg 25 Cramming more components onto integrated circuits, Gordon E. Moore, 1965. Electronics Magazine. p. 4. http://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf 26 The Law of Accelerating Returns March 7, Ray Kurzweil, 2001. http://www.kurzweilai.net/the-law-of-accelerating-returns 27 The Chinese room is a thought experiment presented by John Searle. It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

In 1965, Herbert Simon stated that in just twenty years’ time, machines would be capable ‘of doing any work a man can do’. Not long after, Marvin Minsky added that ‘within a generation … the problem of creating Artificial Intelligence will substantially be solved’. The Chinese Room Philosophical problems were also beginning to be raised concerning Symbolic AI. Perhaps the best-known criticism is the thought experiment known as ‘the Chinese Room’. Put forward by the American philosopher John Searle, it questions whether a machine processing symbols can ever truly be considered intelligent. Imagine, Searle says, that he is locked in a room and given a collection of Chinese writings.

To find a specific word or phrase from the index, please use the search feature of your ebook reader. 2001: A Space Odyssey (1968) 2, 228, 242–4 2045 Initiative 217 accountability issues 240–4, 246–8 Active Citizen 120–2 Adams, Douglas 249 Advanced Research Projects Agency (ARPA) 19–20, 33 Affectiva 131 Age of Industry 6 Age of Information 6 agriculture 150–1, 183 AI Winters 27, 33 airlines, driverless 144 algebra 20 algorithms 16–17, 59, 67, 85, 87, 88, 145, 158–9, 168, 173, 175–6, 183–4, 186, 215, 226, 232, 236 evolutionary 182–3, 186–8 facial recognition 10–11, 61–3 genetic 184, 232, 237, 257 see also back-propagation AliveCor 87 AlphaGo (AI Go player) 255 Amazon 153, 154, 198, 236 Amy (AI assistant) 116 ANALOGY program 20 Analytical Engine 185 Android 59, 114, 125 animation 168–9 Antabi, Bandar 77–9 antennae 182, 183–5 Apple 6, 35, 56, 65, 90–1, 108, 110–11, 113–14, 118–19, 126–8, 131–2, 148–9, 158, 181, 236, 238–9, 242 Apple iPhone 108, 113, 181 Apple Music 158–9 Apple Watch 66, 199 architecture 186 Artificial Artificial Intelligence (AAI) 153, 157 Artificial General Intelligence (AGI) 226, 230–4, 239–40, 254 Artificial Intelligence (AI) 2 authentic 31 development problems 23–9, 32–3 Good Old-Fashioned (Symbolic) 22, 27, 29, 34, 36, 37, 39, 45, 49–52, 54, 60, 225 history of 5–34 Logical Artificial Intelligence 246–7 naming of 19 Narrow/Weak 225–6, 231 new 35–63 strong 232 artificial stupidity 234–7 ‘artisan economy’ 159–61 Asimov, Isaac 227, 245, 248 Athlone Industries 242 Atteberry, Kevan J. 
112 Automated Land Vehicle in a Neural Network (ALVINN) 54–5 automation 141, 144–5, 150, 159 avatars 117, 193–4, 196–7, 201–2 Babbage, Charles 185 back-propagation 50–3, 57, 63 Bainbridge, William Sims 200–1, 202, 207 banking 88 BeClose smart sensor system 86 Bell Communications 201 big business 31, 94–6 biometrics 77–82, 199 black boxes 237–40 Bletchley Park 14–15, 227 BMW 128 body, machine analogy 15 Bostrom, Nick 235, 237–8 BP 94–95 brain 22, 38, 207–16, 219 Brain Preservation Foundation 219 Brain Research Through Advanced Innovative Neurotechnologies 215–16 brain-like algorithms 226 brain-machine interfaces 211–12 Breakout (video game) 35, 36 Brin, Sergey 6–7, 34, 220, 231 Bringsjord, Selmer 246–7 Caenorhabditis elegans 209–10, 233 calculus 20 call centres 127 Campbell, Joseph 25–6 ‘capitalisation effect’ 151 cars, self-driving 53–56, 90, 143, 149–50, 247–8 catering 62, 189–92 chatterbots 102–8, 129 Chef Watson 189–92 chemistry 30 chess 1, 26, 28, 35, 137, 138–9, 152–3, 177, 225 Cheyer, Adam 109–10 ‘Chinese Room, the’ 24–6 cities 89–91, 96 ‘clever programming’ 31 Clippy (AI assistant) 111–12 clocks, self-regulating 71–2 cognicity 68–9 Cognitive Assistant that Learns and Organises (CALO) 112 cognitive psychology 12–13 Componium 174, 176 computer logic 8, 10–11 Computer Science and Artificial Intelligence Laboratory (CSAIL) 96–7 Computer-Generated Imagery (CGI) 168, 175, 177 computers, history of 12–17 connectionists 53–6 connectomes 209–10 consciousness 220–1, 232–3, 249–51 contact lenses, smart 92 Cook, Diane 84–6 Cook, Tim 91, 179–80 Cortana (AI assistant) 114, 118–19 creativity 163–92, 228 crime 96–7 curiosity 186 Cyber-Human Systems 200 cybernetics 71–4 Dartmouth conference 1956 17–18, 19, 253 data 56–7, 199 ownership 156–7 unlabelled 57 death 193–8, 200–1, 206 Deep Blue 137, 138–9, 177 Deep Knowledge Ventures 145 Deep Learning 11–12, 56–63, 96–7, 164, 225 Deep QA 138 DeepMind 35–7, 223, 224, 245–6, 255 Defense Advanced Research Projects Agency (DARPA) 33, 
112 Defense Department 19, 27–8 DENDRAL (expert system) 29–31 Descartes, René 249–50 Dextro 61 DiGiorgio, Rocco 234–5 Digital Equipment Corporation (DEC) 31 Digital Reasoning 208–9 ‘Digital Sweatshops’ 154 Dipmeter Advisor (expert system) 31 ‘do engines’ 110, 116 Dungeons and Dragons Online (video game) 197 e-discovery firms 145 eDemocracy 120–1 education 160–2 elderly people 84–6, 88, 130–1, 160 electricity 68–9 Electronic Numeric Integrator and Calculator (ENIAC) 12, 13, 92 ELIZA programme 129–30 Elmer and Elsie (robots) 74–5 email filters 88 employment 139–50, 150–62, 163, 225, 238–9, 255 eNeighbor 86 engineering 182, 183–5 Enigma machine 14–15 Eterni.me 193–7 ethical issues 244–8 Etsy 161 Eurequa 186 Eve (robot scientist) 187–8 event-driven programming 79–81 executives 145 expert systems 29–33, 47–8, 197–8, 238 Facebook 7, 61–2, 63, 107, 153, 156, 238, 254–5 facial recognition 10–11, 61–3, 131 Federov, Nikolai Fedorovich 204–5 feedback systems 71–4 financial markets 53, 224, 236–7 Fitbit 94–95 Flickr 57 Floridi, Luciano 104–5 food industry 141 Ford 6, 230 Foxbots 149 Foxconn 148–9 fraud detection 88 functional magnetic resonance imaging (fMRI) 211 Furbies 123–5 games theory 100 Gates, Bill 32, 231 generalisation 226 genetic algorithms 184, 232, 237, 257 geometry 20 glial cells 213 Go (game) 255 Good, Irving John 227–8 Google 6–7, 34, 58–60, 67, 90–2, 118, 126, 131, 155–7, 182, 213, 238–9 ‘Big Dog’ 255–6 and DeepMind 35, 245–6, 255 PageRank algorithm 220 Platonic objects 164, 165 Project Wing initiative 144 and self-driving cars 56, 90, 143 Google Books 180–1 Google Brain 61, 63 Google Deep Dream 163–6, 167–8, 184, 186, 257 Google Now 114–16, 125, 132 Google Photos 164 Google Translate 11 Google X (lab) 61 Government Code and Cypher School 14 Grain Marketing Adviser (expert system) 31 Grímsson, Gunnar 120–2 Grothaus, Michael 69, 93 guilds 146 Halo (video game) 114 handwriting recognition 7–8 Hank (AI assistant) 111 Hawking, Stephen 224 Hayworth, Ken 217–21 
health-tracking technology 87–8, 92–5 Healthsense 86 Her (film, 2013) 122 Herd, Andy 256–7 Herron, Ron 89–90 High, Rob 190–1 Hinton, Geoff 48–9, 53, 56, 57–61, 63, 233–4 hive minds 207 holograms 217 HomeChat app 132 homes, smart 81–8, 132 Hopfield, John 46–7, 201 Hopfield Nets 46–8 Human Brain Project 215–16 Human Intelligence Tasks (HITs) 153, 154 hypotheses 187–8 IBM 7–11, 136–8, 162, 177, 189–92 ‘IF THEN’ rules 29–31 ‘If-This-Then-That’ rules 79–81 image generation 163–6, 167–8 image recognition 164 imagination 178 immortality 204–7, 217, 220–1 virtual 193–8, 201–4 inferences 97 Infinium Robotics 141 information processing 208 ‘information theory’ 16 Instagram 238 insurance 94–5 Intellicorp 33 intelligence 208 ambient 74 ‘intelligence explosion’ 228 top-down view 22, 25, 246 see also Artificial Intelligence internal combustion engine 140–1, 150–1 Internet 10, 56 disappearance 91 ‘Internet of Things’ 69, 70, 83, 249, 254 invention 174, 178, 179, 182–5, 187–9 Jawbone 78–9, 92–3, 254 Jennings, Ken 133–6, 138–9, 162, 189 Jeopardy!

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

There is no way to distinguish (so the argument goes) between a machine that stores a dumb question-answer list and one that answers the way that you and I do—that is, by understanding the question and producing an answer using a mental causal model. So what would the mini-Turing test prove, if cheating is so easy? The philosopher John Searle introduced this cheating possibility, known as the “Chinese Room” argument, in 1980 to challenge Turing’s claim that the ability to fake intelligence amounts to having intelligence. Searle’s challenge has only one flaw: cheating is not easy; in fact, it is impossible. Even with a small number of variables, the number of possible questions grows astronomically.
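Pearl's counting point can be sketched with a toy query grammar (my own invention, purely to show the growth rate, not Pearl's formulation): over n binary variables, count the distinct questions of the form "what is the probability of this target, given these observed values?". The total works out to n·3^(n−1), which outruns any conceivable cheat sheet almost immediately.

```python
# Back-of-envelope version of the "cheating is impossible" point,
# with a made-up query grammar: even a tiny world of binary
# variables supports astronomically many distinct questions.

from math import comb

def num_conditional_queries(n):
    """Count queries P(y | evidence) over n binary variables:
    pick one target, pick any subset of the remaining n-1 variables
    as evidence, and fix each evidence variable to true or false.
    Closed form: n * 3**(n - 1)."""
    return n * sum(comb(n - 1, k) * 2**k for k in range(n))

for n in (5, 10, 20, 40):
    print(n, num_conditional_queries(n))
```

Already at n = 20 the count exceeds twenty billion; a stored question-answer list is hopeless, which is Pearl's ground for saying the mini-Turing test cannot be passed by rote.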

Our comparisons between the Ladder of Causation and human cognitive development were inspired by Harari (2015) and by the recent findings by Kind et al. (2014). Kind’s article contains details about the Lion Man and the site where it was found. Related research on the development of causal understanding in babies can be found in Weisberg and Gopnik (2013). The Turing test was first proposed as an imitation game in 1950 (Turing, 1950). Searle’s “Chinese Room” argument appeared in Searle (1980) and has been widely discussed in the years since. See Russell and Norvig (2003); Preston and Bishop (2002); Pinker (1997). The use of model modification to represent intervention has its conceptual roots with the economist Trygve Haavelmo (1943); see Pearl (2015) for a detailed account.

Clarendon Press, Oxford, UK, 697–727. Pearl, J. (2015). Trygve Haavelmo and the emergence of causal calculus. Econometric Theory 31: 152–179. Special issue on Haavelmo centennial. Pinker, S. (1997). How the Mind Works. W. W. Norton and Company, New York, NY. Preston, J., and Bishop, M. (2002). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press, New York, NY. Reichenbach, H. (1956). The Direction of Time. University of California Press, Berkeley, CA. Russell, S. J., and Norvig, P. (2003). Artificial Intelligence: A Modern Approach. 2nd ed. Prentice Hall, Upper Saddle River, NJ.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

These scenarios echo Kurt Vonnegut’s 1961 short story “Harrison Bergeron,” in which exceptional aptitude is suppressed in deference to the mediocre lowest common denominator of society. Thought experiments like John Searle’s Chinese Room and Isaac Asimov’s Three Laws of Robotics all appeal to the sorts of cognitive biases in human brains that Daniel Kahneman, Amos Tversky, and others have documented. The Chinese Room experiment posits that a mind composed of mechanical and Homo sapiens parts cannot be conscious, no matter how competent at intelligent human (Chinese) conversation, unless a human can identify the source of the consciousness and “feel” it.

Ross, 39, 179 Ashby’s Law of Requisite Variety (First Law of Cybernetics), 39, 179, 180 Asilomar AI Principles, 2017, 81, 84 Asimov, Isaac, 250 astonishing corollary (natural intelligence as special case of AI), 67–70 astonishing hypothesis, 66–67 Astonishing Hypothesis (Crick), 66 AUM Conference, xxi–xxii automation, in manufacturing, 4, 154 Barry, Judith, 262 Bateson, Gregory, xx–xxi, 179, 264–65 Bateson, Mary Catherine, 264 Bayesian models, 226–28 Better Angels of Our Nature, The (Pinker), 118 Bostrom, Nick, xxvi, 27, 80 bounded optimality, 132 brain organoids, 245–46 Brand, Lois, xvii Brand, Stewart, xvii, xxv Bricogne, Gérard, 183 Bronowski, Jacob, 118 Brook, Peter, 213 Brooks, Rodney, 54–63 background and overview of work of, 54–55 data gathering and exploitation, computation platforms used for, 61–63 software engineering, lack of standards and failures in, 60–61 on Turing, 57, 60 on von Neumann, 57–58, 60 on Wiener, 56–57, 59–60 buffer overrun, 61 Bush, Vannevar, 163, 179–80 Cage, John, xvi causal reasoning, 17–19 cellular automaton, von Neumann’s, 57–58 Cheng, Ian, 216–18 chess, 8, 10, 119–20, 150, 184, 185 children, learning in, 222, 228–30 Chinese Room experiment, 250 Chomsky, Noam, 223, 226 Church, Alonzo, 180 Church, George M., 49, 240–53 AI safety concerns, 242–43 background and overview of work of, 240–41 conventional computers versus bio-electronic hybrids, 246–48 equal rights, 248–49 ethical rules for intelligent machines, 243–44 free will of machines, and rights, 250–51 genetic red lines, 251–52 human manipulation of humans, 244–46, 252 humans versus nonhumans and hybrids, treatment of, 249–53 non-Homo intelligences, fair and safe treatment of, 247–48 rights for nonhumans and hybrids, 249–53 science versus religion, 243–44 self-consciousness of machines, and rights, 250–51 technical barriers/red lines, malleability of, 244–46 transhumans, rights of, 252–53 clinical (subjective) method of prediction, 233, 234–35 Colloquy of Mobiles (Pask), 259 
Colossus: The Forbin Project (film), 242 competence of superintelligent AGI, 85 computational theory of mind, 102–3, 129–33, 222 computer learning systems Bayesian models, 226–28 cooperative inverse-reinforcement learning (CIRL), 30–31 deep learning (See deep learning) human learning, similarities to, 11 reality blueprint, need for, 16–17 statistical, model-blind mode of current, 16–17, 19 supervised learning, 148 unsupervised learning, 225 Computer Power and Human Reason (Weizenbaum), 48–49, 248 computer virus, 61 “Computing Machinery and Intelligence” (Turing), 43 conflicts among hybrid superintelligences, 174–75 controllable-agent designs, 31–32 control systems beyond human control (control problem) AI designed as tool and not as conscious agent, 46–48, 51–53 arguments against AI risk (See risk posed by AI, arguments against) Ashby’s Law and, 39, 179, 180 cognitive element in, xx–xxi Dyson on, 38–39, 40 Macy conferences, xx–xxi purpose imbued in machines and, 23–25 Ramakrishnan on, 183–86 risk of superhuman intelligence, arguments against, 25–29 Russell on templates for provably beneficial AI, 29–32 Tallinn on, 93–94 Wiener’s warning about, xviii–xix, xxvi, 4–5, 11–12, 22–23, 35, 93, 104, 172 Conway, John Horton, 263 cooperative inverse-reinforcement learning (CIRL), 30–31 coordination problem, 137, 138–41 corporate/AI scenario, in relation of machine superintelligences to hybrid superintelligences, 176 corporate superintelligences, 172–74 credit-assignment function, 196–200 AI and, 196–97 humans, applied to, 197–200 Crick, Francis, 58, 66 culture in evolution, selecting for, 198–99 curiosity, and AI risk denial, 96 Cybernetic Idea, xv cybernetics, xv–xxi, 3–7, 102–4, 153–54, 178–80, 194–95, 209–10, 256–57 “Cybernetic Sculpture” exhibition (Tsai), 258, 260–61 “Cybernetic Serendipity” exhibition (Reichardt), 258–59 Cybernetics (Wiener), xvi, xvii, 3, 5, 7, 56 “Cyborg Manifesto, A” (Haraway), 261 data gathering and exploitation, computation platforms used for, 
61–63 Dawkins, Richard, 243 Declaration of Helsinki, 252 declarative design, 166–67 Deep Blue, 8, 184 Deep Dream, 211 deep learning, 184–85 bottom-up, 224–26 Pearl on lack of transparency in, and limitations of, 15–19 reinforcement learning, 128, 184–85, 225–26 unsupervised learning, 225 visualization programs, 211–13 Wiener’s foreshadowing of, 9 Deep-Mind, 184–85, 224, 225, 262–63 Deleuze, Gilles, 256 Dennett, Daniel C., xxv, 41–53, 120, 191 AI as “helpless by themselves,” 46–48 AI as tool, not colleagues, 46–48, 51–53 background and overview of work of, 41–42 dependence on new tools and loss of ability to thrive without them, 44–46 gap between today’s AI and public’s imagination of AI, 49 humanoid embellishment of AI, 49–50 intelligent tools versus artificial conscious agents, need for, 51–52 operators of AI systems, responsibilities of, 50–51 on Turing Test, 46–47 on Weizenbaum, 48–50 on Wiener, 43–45 Descartes, René, 191, 223 Desk Set (film), 270 Deutsch, David, 113–24 on AGI risks, 121–22 background and overview of work of, 113–14 creating AGIs, 122–24 developing AI with goals under unknown constraints, 119–21 innovation in prehistoric humans, lack of, 116–19 knowledge imitation of ancestral humans, understanding inherent in, 115–16 reward/punishment of AI, 120–21 Differential Analyzer, 163, 179–80 digital fabrication, 167–69 digital signal encoding, 180 dimensionality, 165–66 distributed Thompson sampling, 198 DNA molecule, 58 “Dollie Clone Series” (Hershman Leeson), 261, 262 Doubt and Certainty in Science (Young), xviii Dragan, Anca, 134–42 adding people to AI problem definition, 137–38 background and overview of work of, 134–35 coordination problem, 137, 138–41 mathematical definition of AI, 136 value-alignment problem, 137–38, 141–42 The Dreams of Reason: The Computer and the Rise of the Science of Complexity (Pagels), xxiii Drexler, Eric, 98 Dyson, Freeman, xxv, xxvi Dyson, George, xviii–xix, 33–40 analog and digital computation, distinguished, 35–37 
background and overview of work of, 33–34 control, emergence of, 38–39 electronics, fundamental transitions in, 35 hybrid analog/digital systems, 37–38 on three laws of AI, 39–40 “Economic Possibilities for Our Grandchildren” (Keynes), 187 “Einstein, Gertrude Stein, Wittgenstein and Frankenstein” (Brockman), xxii emergence, 68–69 Emissaries trilogy (Cheng), 216–17 Empty Space, The (Brook), 213 environmental risk, AI risk as, 97–98 Eratosthenes, 19 Evans, Richard, 217 Ex Machina (film), 242 expert systems, 271 extreme wealth, 202–3 fabrication, 167–69 factor analysis, 225 Feigenbaum, Edward, xxiv Feynman, Richard, xxi–xxii Fifth Generation, xxiii–xxiv The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World (Feigenbaum and McCorduck), xxiv Fodor, Jerry, 102 Ford Foundation, 202 Foresight and Understanding (Toulmin), 18–19 free will of machines, and rights, 250–51 Frege, Gottlob, 275–76 Galison, Peter, 231–39 background and overview of work of, 231–32 clinical versus objective method of prediction, 233–35 scientific objectivity, 235–39 Gates, Bill, 202 generative adversarial networks, 226 generative design, 166–67 Gershenfeld, Neil, 160–69 background and overview of work of, 160–61 boom-bust cycles in evolution of AI, 162–63 declarative design, 166–67 digital fabrication, 167–69 dimensionality problem, overcoming, 165–66 exponentially increasing amounts of date, processing of, 164–65 knowledge in AI systems, 164 scaling, and development of AI, 163–66 Ghahramani, Zoubin, 190 Gibson, William, 253 Go, 10, 150, 184–85 goal alignment.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Stuart Russell and Peter Norvig, Artificial Intelligence: International Version: A Modern Approach (Englewood Cliffs, NJ: Prentice Hall, 2010), para. 1.1 (hereafter “Russell and Norvig, Artificial Intelligence”). However, John Searle’s “Chinese Room” thought experiment demonstrates the difficulty of distinguishing between acts and thoughts. In short, the Chinese Room experiment suggests that we cannot distinguish between intelligence of Russell and Norvig’s types (i) and (ii), or types (iii) and (iv). John R. Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, Vol. 3, No. 3 (1980), 417–457. Searle’s experiment has been met with a number of replies and criticisms, which are set out in the entry on The Chinese Room Argument, Stanford Encyclopedia of Philosophy, First published 19 March 2004; substantive revision 9 April 2014, https://plato.stanford.edu/entries/chinese-room/, accessed 1 June 2018. 33Alan M.

Searle’s experiment has been met with a number of replies and criticisms, which are set out in the entry on The Chinese Room Argument, Stanford Encyclopedia of Philosophy, First published 19 March 2004; substantive revision 9 April 2014, https://plato.stanford.edu/entries/chinese-room/, accessed 1 June 2018. 33Alan M. Turing, “Computing Machinery and Intelligence”, Mind: A Quarterly Review of Psychology and Philosophy, Vol. 59, No. 236 (October 1950), 433–460, 460. 34Yuval Harari has offered the interesting explanation that the form of Turing’s Imitation Game resulted in part from Turing’s own need to suppress his homosexuality, to fool society and the authorities into thinking he was something that he was not. The focus on gender and subterfuge in the first iteration of the test is, perhaps, not accidental.

pages: 720 words: 197,129

The Innovators: How a Group of Inventors, Hackers, Geniuses and Geeks Created the Digital Revolution
by Walter Isaacson
Published 6 Oct 2014

Geoffrey Jefferson, “The Mind of Mechanical Man,” Lister Oration, June 9, 1949, Turing Archive, http://www.turingarchive.org/browse.php/B/44. 93. Hodges, Alan Turing, 10983. 94. For an online version, see http://loebner.net/Prizef/TuringArticle.html. 95. John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences, 1980. See also “The Chinese Room Argument,” The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/chinese-room/. 96. Hodges, Alan Turing, 11305; Max Newman, “Alan Turing, An Appreciation,” the Manchester Guardian, June 11, 1954. 97. M. H. A. Newman, Alan M. Turing, Sir Geoffrey Jefferson, and R. B. Braithwaite, “Can Automatic Calculating Machines Be Said to Think?”

When the human player of the Turing Test uses words, he associates those words with real-world meanings, emotions, experiences, sensations, and perceptions. Machines don’t. Without such connections, language is just a game divorced from meaning. This objection led to the most enduring challenge to the Turing Test, which was in a 1980 essay by the philosopher John Searle. He proposed a thought experiment, called the Chinese Room, in which an English speaker with no knowledge of Chinese is given a comprehensive set of rules instructing him on how to respond to any combination of Chinese characters by handing back a specified new combination of Chinese characters. Given a good enough instruction manual, the person might convince an interrogator that he was a real speaker of Chinese.
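Computationally, the rule book Isaacson describes is nothing more than a table pairing input strings with output strings. A minimal sketch of that idea, assuming invented phrase pairings purely for illustration (the Chinese strings are arbitrary stand-ins; any opaque tokens would make the same point):

```python
# Illustrative only: the "comprehensive set of rules" reduced to a lookup table.
# Nothing in this table encodes meaning; the pairings are invented placeholders.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",     # input card -> output card
    "你会说中文吗": "会，当然",
}

def room(card: str) -> str:
    """Return whatever string the rule book pairs with the input card.

    The function succeeds or fails purely on symbol matching; it holds
    no representation of what any card means.
    """
    return RULE_BOOK.get(card, "对不起")  # fallback card for unknown input

print(room("你好吗"))  # 我很好，谢谢
```

To an observer outside the room, the outputs are indistinguishable from answers given with understanding, which is exactly the gap Searle's thought experiment exploits.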

The correct answer was that Eyser was missing a leg. The problem was understanding oddity, explained David Ferrucci, who ran the Watson project at IBM. “The computer wouldn’t know that a missing leg is odder than anything else.”6 John Searle, the Berkeley philosophy professor who devised the “Chinese room” rebuttal to the Turing Test, scoffed at the notion that Watson represented even a glimmer of artificial intelligence. “Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand anything,” Searle contended.

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know
by Richard Watson
Published 5 Nov 2013

Instead scientists and developers focused on specific problems, such as speech and text recognition and computer vision. However, we may now be less than a decade away from seeing the AI vision become a reality.

The Chinese room experiment

In 1980, John Searle, an American philosopher, argued in a paper that a computer, or perhaps more accurately a bit of software, could pass the Turing test and behave much like a human being at a distance without being truly intelligent—that words, symbols or instructions could be interpreted or reacted to without any true understanding. In what has become known as the Chinese room thought experiment (because of the use of Chinese characters to interact with an unknown person—actually a computer), Searle argued that it’s perfectly possible for a computer to simulate the illusion of intelligence, or give the illusion of understanding a human being, without really doing so.

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

Philosophers such as John Searle slammed Turing’s Imitation Game as being too simplistic and downright wrong: for a machine to deceive a human was not enough to make the machine intelligent. The machine processed symbols by following instructions. It had no understanding of the meaning of the symbols. To Turing’s Imitation Game Searle counterpoised a thought experiment that he called the ‘Chinese Room’. The set-up is much the same but the conversation now consists of messages exchanged in Chinese. Searle noted that it was possible to have a system that received the input in Chinese, then matched this input to an output also in Chinese by following a set of rules, without necessarily knowing or understanding Chinese.

The Turing Test blurs the borders between the ‘real’ and the ‘artificial’ on the basis of an emotional perception from a human observer. If the human observer feels that the machine in the other room responds like a human, then the machine must be intelligent. This dimension of the Turing Test is very important and mostly missing from philosopher John Searle’s critical juxtaposition of the Chinese Room. It is not only what happens inside the room, or behind the wall, that is important. Although it is philosophically significant to accept the difference between understanding what you do and simply following a procedure, this is immaterial as far as the external observer is concerned. In Artificial Intelligence, the external observer of an intelligent system cannot be separated from the system.

A. 61–2, 68 Hofstadter, Douglas 186–8 Hohlenstein Stadel lion-man statuette 3–5, 19–20 holistic approach to knowledge 174–5 holistic scientific methods 41–2 Holocene period 10 Holy Scripture, authority of 113–14 homeostasis 173 Homo erectus 6–7, 8, 10 Homo habilis 6, 12 Homo heidelbergensis 7 Homo sapiens archaic species 7, 8, 10 emergence of modern humans 8 Homo neanderthalensis (Neanderthals) 4, 7–8, 9–10 Homo sapiens sapiens 9–10 human ancestors aesthetic practices 9 archaic Homo sapiens 7, 8, 10 arrival in Europe 3–5 australopithecines 5, 6, 22 changes in the Upper Palaeolithic Age 9–10, 11 common ancestor with chimpanzees 5 emergence of art in Europe 3–5 emergence of modern humans 8 exodus from Africa 3–4, 6–7, 8–9 Homo erectus 6–7, 8, 10 Homo habilis 6, 12 Homo heidelbergensis 7 Homo sapiens 7, 8, 10 Homo sapiens sapiens 9–10 in Africa 5–7 Neanderthals (Homo neanderthalensis) 4, 7–8, 9–10 Human Brain Project (HBP) xiv–xvi, 164–5, 287 see also brain (human) human culture, approaches to understanding 74–9 human replicas, disturbing feelings caused by 66–73 humanity becoming like machines (cyborgs) 79–85 future of 304–17 Hume, David 139–40 humors theory of life 31–4 humour, and theory of mind 54 Humphrey, Nicholas 11 hunter-gatherer view of the natural world 20–2 hydraulic and pneumatic automata 32–6 IBM (International Business Machines) 230, 263, 264 Ice Age Europe 4, 10, 21–2 iconoclasm 67 idealism versus materialism 92–4 identity theory 144–5 imagined world of the spirits 22–3, 25, 27 inanimate objects, projection of theory of mind 15–18 Incompleteness Theorem (Gödel) 180, 186, 206–9, 211–16 inductive logic 196, 197 information disembodiment of 146–52 significance of context 151–2 the mind as 123–5 information age 232–4 information theory 147–52 Ingold, Tim 20 intelligence, definitions of 48–9, 52 intelligent machines as objects of love 48–59 Internet brain metaphor 43 collection and manipulation of users’ data 250–3 origins of 238 potential for sentience 
214–15 Internet of things 251–3 intuition 200, 211 Iron Man (film) 82 Ishiguro, Hiroshi 72 Islam 102 Jacquard loom 225 James, William 162 Johnson, Samuel 140 Kasparov, Garry 263 Kauffman, Stuart 295 Kempelen, Wolfgang von 37 Kline, Nathan 79 Koch, Christof 167–8 Krauss, Lawrence 244–5 Krugman, Paul 269 Kubrick, Stanley 56, 257 Kuhn, Thomas 29, 75 Kurzweil, Ray 126, 270–1 Lang, Fritz 50 language and genesis of the modern mind 13–15 and human relationship with objects 15–18 evolution of 13–15 naming of objects 16–17 LeCun, Yann 255 Leibniz, Gottfried Wilhelm 116–17, 218–20 Lettvin, Jerry 293 liberty, end of 313–17 life algorithms of 292–6 origins of 181–3 Life in the Bush of Ghosts (Tutuola) 19 linguistics, descriptions of reality 75 lion-man statuette of Stadel cave 3–5, 19–20 Llull, Ramon (Doctor Illuminatus) 218 Locke, John 139 locked-in syndrome 307 logic x–xi, 195–202 logical substitution method 180, 183, 186 Lokapannatti (early Buddhist story) 34 London forces 107 love conscious artefacts as objects of 48–59 human need for 55–6 human relationships with androids 53–9 Lovelace, Ada 62, 226–7, 228 Luddites 268 Machine Intelligence Research Institute 58–9 machine metaphor for life 36–8 Magdalenian period 21 magnetoencephalography (MEG) 159–60, 161 Maillardet, Henri 218 Marconi, Guglielmo 239 Maria (robot in Metropolis) 50, 51 Marlowe, Christopher 63 Mars colonisation 291 Marx, Groucho 205 materialism versus idealism 92–4 mathematical dematerialisation view 92 mathematical foundations of the universe 103–6 mathematical reflexivity 186–7 mathematics 31 formal logical systems 200–11 views on the nature of 136 Maturana, Humberto 294 McCarthy, John 256, 307 McCorduck, Pamela 45, 67 McCulloch, Warren S. 
36, 175, 176–8, 256, 293 Mead, Margaret 175 mechanical metaphor for life 36–8 mechanical Turk 37 medicine, development of 31–2 meditation 157 memristors 286–7 Menabrea, Luigi 226, 227 Mesmer, Franz Anton 40 mesmerism 40 Mesopotamian civilisations 30 metacognition 184 metamathematics 202, 205, 207 metaphors confusing with the actual 44–5 for life and the mind 28–47 in general-purpose language 75 misunderstanding caused by 308–13 Metropolis (1927 film) 50, 51 Middle Palaeolithic 6 Miller, George 154, 155 Milton, John 1 mind altered states 110, 111 as pure information 123–5 aspects of 85–7 debate over the nature of 91–4 disembodiment of 42 empirical approach 143–6 quantum hypothesis 106–9 scientific theory of 152–3 search for a definition 189–91 self-awareness 86–7 separate from the body 110–15 view of Aristotole 137–8 mind-body problem 32, 114–19, 129–31 Minsky, Marvin 178, 256 modern mind big bang of 10, 12–15 birth of 10–15 impacts of the evolution of language 13–15 monads 117, 119 monism versus dualism 92–3 Moore’s Law 244–5, 263, 270–1, 287 moral decision-making 277–8 Moravec paradox 275–6 Morris, Ian 222 Morse, Samuel 42 mud metaphor for life 29–31, 45 My Life in the Bush of Ghosts (music album) 19 Nabokov, Vladimir 167 Nagel, Thomas 120, 121 Nariokotome boy 7 narratives 18–27, 75 see also metaphor Neanderthals (Homo neanderthalensis) 4, 7–8, 9–10 Negroponte, Nicholas 243–4 neopositivism 141 neural machines 282–7 neural networks theory 36 neural synapses, functioning of 117–19 neuristors 286–7 neurodegenerative diseases xiii–xiv, 163–4 Neuromancer (Gibson) 36 neuromorphic computer archtectures 286–7 neurons, McCulloch and Pitts model 177–8 neuroscience 158, 306–8 Newton, Isaac 38, 103 Nike’s Fuel Band 81 noetic machines (Darwins) 284 nootropic drugs 81 Nouvelle AI concept 288 Offray de La Mettrie, Julien 37 Ogawa, Seiji 158–9 Omo industrial complex 6 On the Origin of Species (Darwin) 289–90 ‘ontogeny recapitulates phylogeny’ concept 10 Otlet, Paul 239–40 
out-of-body experiences 110–11 Ovid 49, 64 Paley, William 289 panpsychism 92, 117, 252 paradigm shifts 75 in the concept of life 29–47 Pascal, Blaise 219–20 Penrose, Roger 106–9, 117, 211–12, 214 Pert, Candace B. 170 physics, gaps in the Standard Model 105 Piketty, Thomas 267, 269 pineal gland 115–16 Pinker, Steven 13, 275 Pinocchio story 56 Pitts, Walter 36, 177–8, 256, 293 Plato 134, 143, 152, 176, 189, 305 central role of mathematics 103–6 idea of reality 78, 83 influence of 95–106 notion of philosopher-kings 98–9 separation of body and mind 112 The Republic 97–101, 309, 310 theory of forms 99–101, 104, 106 Platonism 101–2, 135–7, 139, 142, 146, 147, 182, 189, 190, 242–3, 296 Pleistocene epoch 7 Poe, Edgar Allan 79 Polidori, John William 60, 62 Popov, Alexander 239 Popper, Karl 98 Porter, Rodney 282 posthuman existence 147 postmodernism 208 post-structuralist philosophers 75–9 precautionary principle 64–5 predicate logic 198–200, 206 Principia Mathematica 205–6, 207 Prometheus 29–30, 63–4 psychoanalysis 50 psychons 118, 119 Pygmalion narrative 49–52 qualia of consciousness 120–3, 157 Quantified Self movement 81–2 quantum hypothesis for consciousness 106–9 quantum tunnelling 118–19 Ramachandran, Vilayanur 70 rationalism 116 Reagan, President Ronald 237 reality, impact of acquisition of language 15–18 reductionism 41–2, 104–5, 121, 184 reflexivity 183–4, 186–9 religions condemnation of human replicas 67 seeds of 22–3, 25–6 Renaissance 34, 103, 139, 218 RepRap machines 290 res cogitans (mental substance) 38, 113–14 res extensa (corporeal substance) 38, 113–14 resurrection beliefs 126–7 RoboCop 80 robot swarm experiments 287–8 robots human attitudes towards 50–1 rebellion against humans 53, 57–9 self-replication 289–92 see also androids Rochester, Nathaniel 256 Romans 31 Rubenstein, Michael 287–8, 291 Russell, Bertrand viii, 92, 198, 204, 205–6, 207, 208, 215 Russell, Stuart 270 Sagan, Carl 133 Saygin, Ayse Pinar 69 science as a cultural product 75–9 influence of 
Aristotle 134–8 influence of Descartes 113–19 influence of Plato see Plato scientific method 102–5, 121 scientific paradigms 75 scientific reasoning, as unnatural to us 133–4, 137 scientific theory, definition 166, 196 Scott, Ridley 53 Searle, John, Chinese Room experiment 52, 71 Second Commandment (Bible) 67 second machine age, impact of AI 266–9 Second World War 234–6 self-awareness 16, 86–7, 157, 215–16, 273–5 self-driving vehicles 263–4 self-organisation in cybernetic systems 273–4 in living things 292 self-referencing 186–9, 215–16 see also reflexivity self-referencing paradoxes 204–6 self-replicating machines/systems 179–82, 289–92 sensorimotor skills, deficiency in AI 275–6 servers, dependence on 245–9 Shannon, Claude 147–52, 154, 176, 230–1, 256 Shaw, George Bernard Shaw 49–50 Shelley, Mary, Frankenstein 40, 60–5, 165 Shelley, Percy Bysshe 60, 62, 63–4 Shickard, Wilhem 219 Silvester II, Pope 35 Simmons, Dan 160 simulated universe concept 127–9 smart drugs 81 Snow, C.

pages: 48 words: 12,437

Smarter Than Us: The Rise of Machine Intelligence
by Stuart Armstrong
Published 1 Feb 2014

So a true AI would be able to converse with us about the sex lives of Hollywood stars, compose passable poetry or prose, design an improved doorknob, guilt trip its friends into coming to visit it more often, create popular cat videos for YouTube, come up with creative solutions to the problems its boss gives it, come up with creative ways to blame others for its failure to solve the problems its boss gave it, learn Chinese, talk sensibly about the implications of Searle’s Chinese Room thought experiment, do original AI research, and so on. When we list the things that we expect the AI to do (rather than what it should be), it becomes evident that the creation of AI is a gradual process, not an event that has either happened or not happened. We see sequences of increasingly more sophisticated machines that get closer to “AI.”

pages: 626 words: 181,434

I Am a Strange Loop
by Douglas R. Hofstadter
Published 21 Feb 2011

And each time one of the noble beasts bows humbly down to my triumphant bullets, I give one of those “I’m great” jerks with my arm, which one so often sees when a football player scores a touchdown. And lastly, needless to say, after Switch #5 has been thrown, I totally agree with John Searle’s Chinese Room experiment, and I think that Derek Parfit’s ideas about personal identity are a complete crock. Oh, I forgot — can’t do that, since I never think about philosophical issues at all! You may have noticed that when I discussed Switch #1, I put quotes around the word “my” when talking about the brain in which a veneration for Ludwig, Béla, Elvis, and Eminem flowers.

, he mused, counting his bumps, ‘If I had as many bumps on the left side of my right adenoid as six and three-quarters times seven-eighths of those between the heel of Achilles and the circumference of Adam’s apple, how long would it take a boy rolling a hoop up a moving stairway going down to count the splinters on a boardwalk if a horse had six legs?’ ” And so I thought I’d give a little posthumous hat-tip to Bob. Page 305 Dan calls such carefully crafted fables ‘intuition pumps’… Dennett introduced his term “intuition pump”, I believe, in the Reflections that he wrote on John Searle’s “Chinese room” thought experiment in Chapter 22 of [Hofstadter and Dennett]. Page 308 The term Parfit prefers is “psychological continuity”… See [Nozick] for a lengthy treatment of the closely related concept of “closest continuer”. Page 309 what Einstein accomplished in creating special relativity… See [Hoffmann].

James) Bloomington, Indiana blue humpback blueprint used in self-replicating machine blurriness of everyday concepts boat with endless succession of leaks bodies vs. souls body parts initiate self-representation Bohr atom, as stepping stone en route to quantum mechanics Bohr, Niels boiling water, reliability of Bonaparte, see Napoleon bon mots: by Carol Hofstadter; by David Moser “Book of nature written in mathematics” (Galileo) Boole, George boot-removal analogy boundaries between souls, blurriness of boundaries, macroscopic, as irrelevant to particles box with flaps making loop Brabner, George brain activity: hiddenness of substrate of; modeled computationally; need for high-level view of; obviousness of high-level view of brain-in-vat scenario brain research, nature of brain-scanning gadgets brain structures brains: compared to hearts; complexity of, as relevant to consciousness; controlling bodies directly vs. indirectly; eerieness of; evolution of; as fusion of two half-brains; as inanimate; inhabited by more than one “I”; interacting via ideas; main; as multi-level systems; not responsible for color qualia; perceiving multiple environments simultaneously; receiving sensory input directly or indirectly; resembling inert sponges; unlikely substrate for interiority Braitenberg, Valentino bread becoming a gun Brown, Charlie Brownian motion Brünn, Austria (birthplace of Kurt Gödel) buck stopping at “I” Bugeaud, Yann bunnies as edible beings “burstwise advance in evolution” (Sperry) Bushmiller, Ernie butterflies: not respecting precinct boundaries; in orchard, as metaphor for human soul Buzzaround Betty C caged-bird metaphor; as analogous to Newtonian physics; hints at wrongness of; as ingrained habit; at level of countries and cultures; metaphors opposed to; normally close to correct; as reinforced by language; temptingness of Cagey’s doubly-hearable line cake whose pieces all taste bad, as inferrred by analogy candles cantata aria Cantor, Georg capital punishment 
Capitalized Essences; canceled careenium; growing up; self-image of; two views of; unsatisfactory to skeptics Carnap, Rudolf Carol-and-Doug: as higher-level entity; joint mind of; shared dreads and dreams of Carolness, survival of Carol-symbol in Doug’s brain: being vs. representing a person; triggerability of cars: as high-level objects; pushed around by desires Cartesian Eggo Cartesian Ego; as commonsensical view; fading of Cartesian Ergo Cartier-Bresson, Henri Caspian Gemstones, allegory of casual façade as Searlian ploy Catcher in the Rye, The (Salinger) categories and symbols; see also repertoires categorization mechanisms: converting complexity into simplicity; as determining size of self; efficiency of Caulfield, Holden causality: bottoming out in “I”; buck of, stopping at “I”; of dogmas in triggering wars; and insight; schism between two types of; stochasticity of in everyday life; tradeoffs in; upside-down; see also downward causality causal potency: of ideas in brain; of meanings of PM strings; of patterns “causal powers of the brain”, semantic cell phones as universal machines Center for Research into Consciousness and Cognetics Central Consciousness Bank central loop of cognition cerulean sardine chain of command in brain chainium (dominos), causality in Chaitin, Greg Chalmers, David; zombie twin of chameleonic nature: of integers; of universal machines Chantal Duplessix, seeing pixel-patterns as events; missing second level of Aimable’s remarks chaos, potential, in number theory Chaplin twins, (Freda and Greta) character structure of an individual Chávez, César chemistry: bypassed in explanation of heredity and reproduction; of carbon as supposed key to consciousness; reduced to physics; virtual, inside computers “chemistry” (interpersonal); enabling people to live inside each other; as function of musical taste alignment; as highly real causal agent; as “hooked atoms”; lack of, between people “chicken” (meat) vs. 
chickens chickens as edible creatures children: as catalysts to soul merger of parents; as having fewer hunekers than adults; self-awareness of chimpanzees chinchillas Chinese people, as spread-out entity Chinese Room chirpy notes and deep notes interchanged Chopin, Frédéric; étude Op. 25 no. 4 in A minor; étude Op. 25 no. 11 in A minor; nostalgia of; pieces by, as soul-shards of; survival of chord–angle theorem Christiansen, Winfield church bells Church’s theorem cinnamon-roll aroma in airport corridor circular reasoning, validity of, in video feedback circumventing bans by exploiting flexible substrates clarity as central goal Class A vs.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Turing argues that we would also extend the polite convention to machines, if only we had experience with ones that act intelligently. However, now that we do have some experience, it seems that our willingness to ascribe sentience depends at least as much on humanoid appearance and voice as on pure intelligence.

28.2.1 The Chinese room

The philosopher John Searle rejects the polite convention. His famous Chinese room argument (Searle, 1990) goes as follows: Imagine a human, who understands only English, inside a room that contains a rule book, written in English, and various stacks of paper. Pieces of paper containing indecipherable symbols are slipped under the door to the room.

From the outside, we see a system that is taking input in the form of Chinese sentences and generating fluent, intelligent Chinese responses. Searle then argues: it is given that the human does not understand Chinese. The rule book and the stacks of paper, being just pieces of paper, do not understand Chinese. Therefore, there is no understanding of Chinese. And Searle says that the Chinese room is doing the same thing that a computer would do, so therefore computers generate no understanding. Searle (1980) is a proponent of biological naturalism, according to which mental states are high-level emergent features that are caused by low-level physical processes in the neurons, and it is the (unspecified) properties of the neurons that matter: according to Searle’s biases, neurons have “it” and transistors do not.
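Searle's argument turns on the claim that no component of the system, taken alone, understands Chinese. The decomposition can be sketched with each piece he names as a separate object; the pairings below are invented placeholders, not real conversational data:

```python
# Hedged sketch of Searle's decomposition: operator, rule book, and the
# composite room. Each component alone manipulates shapes, not meanings.

RULE_BOOK = [("什么是智能", "处理符号的能力")]  # inert data: just string pairs

class Operator:
    """The person in the room: compares symbol shapes, knows no Chinese."""
    def follow_instructions(self, rule_book, card):
        for pattern, response in rule_book:
            if card == pattern:       # pure shape matching, no semantics
                return response
        return "请再说一遍"            # default card specified by the book

class Room:
    """The composite system: operator plus rule book plus slot in the wall."""
    def __init__(self, operator, rule_book):
        self.operator = operator
        self.rule_book = rule_book

    def slot(self, card: str) -> str:
        return self.operator.follow_instructions(self.rule_book, card)

room = Room(Operator(), RULE_BOOK)
print(room.slot("什么是智能"))
```

Whether "understanding" can be ascribed to the composite `Room` even though neither `Operator` nor `RULE_BOOK` has it is precisely what the systems reply and Searle dispute.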

The Turing test (Turing, 1950) has been debated (Shieber, 2004), anthologized (Epstein et al., 2008), and criticized (Shieber, 1994; Ford and Hayes, 1995). Bringsjord (2008) gives advice for a Turing test judge, and Christian (2011) for a human contestant. The annual Loebner Prize competition is the longest-running Turing test-like contest; Steve Worswick’s MITSUKU won four in a row from 2016 to 2019. The Chinese room has been debated endlessly (Searle, 1980; Chalmers, 1992; Preston and Bishop, 2002). Hernández-Orallo (2016) gives an overview of approaches to measuring AI progress, and Chollet (2019) proposes a measure of intelligence based on skill-acquisition efficiency. Consciousness remains a vexing problem for philosophers, neuroscientists, and anyone who has pondered their own existence.

What We Cannot Know: Explorations at the Edge of Knowledge
by Marcus Du Sautoy
Published 18 May 2016

But isn’t there a difference between a machine following instructions and my brain’s conscious involvement in an activity? If I type a sentence in English into my smartphone, the apps on board are fantastic at translating it into any language I choose. But no one thinks the smartphone understands what it’s doing. The difference perhaps can be illustrated by an interesting thought experiment called the Chinese Room, devised by philosopher John Searle of the University of California. It demonstrates that following instructions doesn’t prove that a machine has a mind of its own. I don’t speak Mandarin, but imagine I was put in a room with an instruction manual that gave me an appropriate response to any string of Chinese characters posted into the room.

Acta Mathematica (Royal Swedish Academy of Science journal) 39 aeons 292–6 ageing 5, 8, 258, 269, 318 al-Sufi, Abd al-Rahman 203 algebra 89, 372–3, 374 Alhazen: Book of Optics 198 Allen Institute for Brain Science 347, 348 Allen, Paul G. 347 Allen, Woody 303 Alpha Centauri 188 alpha particles 98–100, 119, 131, 133, 166–7, 171, 172, 173, 176 alpha waves 314–16 American Association for the Advancement of Science 46 Amiot, Lawrence 280 anaesthesia 334–5, 345 Anderson, Carl 104 Andromeda nebula 203, 204 animals: consciousness and 317–19, 322, 325; evolution and 56, 57, 61; mathematics of animal kingdom 393–4; population dynamics 48–51; species classification 107 Aniston, Jennifer 4, 324–7, 347, 359 antimatter 104 Apple 321, 322, 355 Aquinas, Thomas 297, 390–1, 406 Arago, Francois 197 Archimedes 86 Aristarchus of Samos 189–90 Aristotle 22, 32, 82, 86, 87, 95, 101, 198, 306, 368, 369, 390; Metaphysics 1, 2 Armstrong, Karen 181, 410 artificial intelligence 8, 281, 303–4, 313, 317, 322, 337–9, 345–6, 417 asteroids 2, 182 Asteroids (game) 205–6, 207, 209 astronomy 10, 40, 63, 187–211, 213–16, 218, 222, 223, 236–7, 238–9, 271, 280, 296, 413 see also under individual area of astronomy asymmetrical twins 269–72, 283 atom 78–9, 80; atomic number 90; Brownian motion and 92, 93–5; charge and 96, 97, 98, 99–101, 104, 105, 106, 107, 108, 109, 110–11, 117, 118, 119, 125, 136, 142, 230, 356; dice and 64, 78, 79, 80, 91–2, 94, 103; discovery of 79–80, 95–101, 103, 104; discovery of smaller constituents that make up 95–127; electron microscopes and 78, 79; experimental justification for atomistic view of the matter, first 80, 89–92; LHC and 3–4, 98; measuring time and 123, 249, 251–2, 252, 254, 269; periodic table and 86–92; quantum microscopes and 79; strangeness and 108, 109–11, 115–16; symmetry and 111–17, 120, 121, 125; theoretical atomistic view of matter, history of 78–88, 93 atomic clock 123, 252, 254, 269 axioms 52, 367–8, 371, 377, 378–9, 383, 384–6, 387, 388, 397–8, 400, 401, 
402, 403, 404, 413 Babylonians 83, 251, 366, 368, 417 Bach 77, 121, 304 Bacon, Francis 399 banking, chaos theory and 54 Barbour, Julian: The End of Time 299–300 Barrow, Professor John 236–40, 242 baryons 107, 108, 109, 110, 115, 119 Beit Guvrin, Israel, archaeological dig in 20–1 Bell, John/Bell’s theorem 170, 171, 173, 174 Berkeley, Bishop: The Analyst 87 Berger, Hans 314 Berlin Academy 382 Berlin Observatory 197 Bessel, Friedrich 201 Besso, Michele 296–7 beta particles 98, 131 Bible 192 Big Bang 237, 377; cosmic microwave background and 226, 228, 289; as creation myth 235; emergence of consciousness and 319, 377, 407; infinite universe and 219–21; singularity 278, 281–2, 284; testing conditions of 234; time before, existence of 7, 9, 248–9, 262–7, 284, 290, 291–6, 407 biology 237, 405, 416; animal see animals; breakthroughs in 4; consciousness see consciousness; emergence concept and 332; evolution of life and 56–62, 230; gene therapy 416; hypothetical theory and 405; limitations of our 406; telomeres, discovery of 5; unknowns in 7–8 Birch–Swinnerton-Dyer conjecture 376 black holes: Big Bang and 293–4; computer simulation of 352; cosmic microwave background and 293–4; Cygnus X–1 276–7; discovery of 274–6; electron creation of tiny, possibility of 126; entropy and 285–7, 288, 290, 293; future of universe and 291, 293; Hawking radiation and 182, 288–90; infinite density and 277–8; information lost inside of 167, 284–5, 287, 288, 289–90, 293, 355; ‘no-hair theorem’ 285; second law of thermodynamics and 285–6, 290; singularity 278, 279, 280, 281–2; time inside 282–4 black swan 239–40 Blair, Tony 52 Bohr, Niels 103, 123, 131, 159, 178, 418 Bois-Reymond, Emil du 382, 383 Boisbaudran, Lecoq de 90–1 Boltzmann, Ludwig 92 Bombelli, Rafael 372 Borges, Jorge Luis: The Library of Babel 187 bottom quark 120, 121 Boyle, Robert 86; The Sceptical Chymist 86–7 Bradwardine, Thomas 391–2 Brady, Nicholas: ‘Ode to Saint Cecilia’ 88 Brahmagupta 371, 372 brain: alpha waves and 314–16; 
Alzheimer’s disease and 313–14; animal 317–19; artificial 351–3; Broca area 308, 352; cells, different types of 348; cerebellum 306, 307, 344; cerebrum 306; consciousness and see consciousness; corpus callosum/corpus callosotomy 309–11; EEG scanner and 305, 314–16, 323, 340; fMRI scanner and 4, 305, 316, 323, 333–9, 350, 351, 354, 357; free will and 335–9; integrated information theory (IIT) and 342–5; left side of 308, 310; limits of understanding 5, 9, 376, 377, 387, 408–9, 415, 416; mind-body problem and 330–2; music and see music; neurons and 4, 5, 258, 259, 309, 311–14, 323–9, 340, 341, 342, 343–6, 347, 348, 349, 350, 351, 353, 359, 376–7; out-of-body experiences and 328–30; pineal gland 307; right side of 308–9, 310; self-recognition test and 317–19; synapses 5, 313, 314, 324, 376; two hemispheres of 308–11; unconscious 315, 336–7, 339–41; vegetative state/locked in and 333–5; ventricles 306–7, 308; visual data processing 320–30 Braudel, Fernand 54–5 British Association of Science 10 Broca, Paul 308 Bronowski, Jacob: Ascent of Man 2, 420 Brown, Robert/Brownian motion 92, 93, 141 Bruno, Giordano: On the Infinite Universe and Worlds 192, 393 Buddhism 113, 354 C. 
elegans worm 4, 345, 349 caesium fountain 252 calculus 30–2, 33, 34, 36, 87, 88, 369 Caltech 104, 105–6, 115, 175, 289, 321, 323, 324, 347 Cambrian period 58 Cambridge University 30, 69, 174–5, 179, 236, 275, 334 cancer 8, 204 Candelas, Philip 155 Cantor, Georg 65–6, 393–402, 406 Cardano, Girolamo 23–4, 25; Liber de Ludo Aleae 24 Carroll, Lewis: Alice’s Adventures in Wonderland 159 Carroll, Sean 236 cascade particles 110 Cassini, Giovanni 199 Castro, Patricia 226 cathode rays 96 Catholic Church 192, 235 cello 77, 78, 79, 80–1, 82, 90, 121, 122, 126, 127, 137, 138, 139, 140, 191, 225, 285, 304, 305, 308, 313, 314, 315 celluloid 91 Cepheid star 202–3, 204 Chadwick, James 100–1 Chalmers, David 347 Chandrasekhar, Subrahmanyan 275 Chaos 67 chaos theory 39–41, 43–53, 54, 55, 56, 58–9, 60, 61, 62–4, 68–72, 157, 168, 178, 179, 242, 402–3, 408, 419 charm quark 120, 121 China 15, 344, 371 chemistry: atomistic view of matter and chemical elements 81, 82, 86–8, 89–92 see also periodic table; brain see brain; breakthroughs in 4; elements and 81–2; emergence concept and 332; Greek, ancient 81–2 Chomsky, Noam 388 Christianity 13, 22, 69, 240, 390–1, 398 see also God and religion Church, Alonzo 414 Cicero 188 Clairaut, Alexis 29 Cleverbot (app) 303, 313, 317, 332 climate change 6, 53 cloud chambers 100, 104–6 Cohen, Paul 401–2 Compton wavelength 167 computers: chaos theory modelling on 61–2, 64; consciousness/artificial intelligence and 8, 281, 303–4, 313, 317, 322, 325, 336, 337–9, 345–6, 349, 351, 352, 355, 417; growth in power of 8, 53, 281 Comte, Auguste 10, 202, 243, 347, 409 Connes, Alain 300 ‘connectome’ 345 consciousness 303–60, 403; anaesthesia and 334–5, 345; animals and 317–20, 322; brain as location of 306–11; brain cell types and 348; brain switching between perceptions and 320–3; Buddhism and 354; building an artificial brain that has 351–3; Chinese Room experiment and 338–9; Cleverbot app and 303–4, 313, 315–16, 317, 332, 338; computers/machines and 8, 303–4, 313,
315–16, 317, 322, 337–9, 345–6; ‘connectome’ and 345; two sides of brain and 308–11; death and 353–5; Descartes and 304, 350, 359; different qualities of 305–6; EEG/fMRI and 305, 314–16, 323, 333–9, 340, 350, 351, 354, 356–7; emergence in child 319; first emergence in universe 319–20; focus and 327; free will and 334–5; God concept and 319–20, 348–9; hard problem of 304–6, 347, 360; Human Brain Project and 352; humanities and expression of 419; integrated information theory (IIT) and 341, 342–5, 346, 347, 349, 350, 352, 353–4; internet and 345–6; language and 356–8; mathematical formula for 341, 342–5, 346, 347, 349, 352, 353–4; mind-body problem and 330–2; mirror recognition test and 317–19; mysterianism and 349–50, 351; Necker cube and 321, 323; neurons and 311–14, 323–9, 340, 341, 342, 343–6, 347, 348, 349, 350, 351, 353, 359, 376–7; out-of-body experiences and 328–30; perceptronium and 356; qualia and 325, 350; sleep and 339–41, 342, 343; synesthesia and 305, 325–6; thalamocortical system and 343–4; transcranial magnetic stimulation (TMS) and 339–41; unconsciousness and brain activity 334–7, 339–41, 342–3; unknowable nature of 347, 349–50, 353 355–60, 407–8; vegetative state/locked in and 333–4; virtual reality goggles and 330; vision and 322–3; wave function and 156; where is?

pages: 244 words: 73,966

Brief Peeks Beyond: Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture
by Bernardo Kastrup
Published 28 May 2015

I can create a computer program that ultimately attributes the logical value ‘true’ to a variable labeled ‘conscious,’ but obviously that doesn’t take the computer any closer to having inner life the way you and I have, no matter how complex the program. As philosopher John Searle demonstrated decades ago with his famous ‘Chinese Room’ thought experiment, the manipulation of variables is utterly unrelated to subjective experience.54 Moreover, eliminative materialists fail to notice that their claim about the non-existence of consciousness is itself the output of their intellectual models; models that, according to their own logic, cannot be trusted.
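The two mechanisms Kastrup invokes can be sketched in a few lines (a minimal illustration only; the variable name and the rule table are invented for the example, not taken from Kastrup or Searle):

```python
# Illustrative sketch: a program can attribute 'true' to a variable labeled
# 'conscious', and can answer Chinese questions by pure table lookup, yet
# nothing in either step involves subjective experience.

# Searle's book of rules, reduced to a lookup table: input symbol strings
# are mapped to output symbol strings with no understanding of either side.
RULES = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你有意识吗？": "当然有。",      # "Are you conscious?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return the rule-book response for a string of Chinese symbols."""
    return RULES.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

conscious = True  # the logical value 'true', assigned to a variable labeled 'conscious'

print(conscious)                     # True -- but no inner life follows from it
print(chinese_room("你有意识吗？"))   # a fluent reply, produced with zero comprehension
```

However complex the rule table becomes, the lookup remains formal symbol manipulation, which is the point both excerpts press.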

Toast
by Stross, Charles
Published 1 Jan 2002

First I sat through a rather odd monologue with only three other attendees (one of them deeply asleep in the front row): a construct shaped like a cross between a coat-rack and a praying mantis was vigorously attacking the conceit of human consciousness, attempting to prove (by way of an updated version of Searle’s Chinese Room attack, lightly seasoned à la Penrose) that dumb neurons can’t possibly be intelligent in the same way as a, well, whatever the thing on the podium was. It was almost certainly a prank, given our proximity to MIT (not to mention the Gates millennium Department of Amplified Intelligence at Harvard), but it was still absorbing to listen to its endless spew of rolling, inspired oratory.

pages: 388 words: 211,314

Frommer's Washington State
by Karl Samson
Published 2 Nov 2010

Opened in 1914, this was Seattle’s first skyscraper and, for 50 years, the tallest building west of Chicago. Although the Smith Tower has only 42 stories, it still offers excellent views from its 35th-floor observation deck, which surrounds the ornate Chinese Room, a banquet hall with a carved ceiling. A lavish lobby and original manual elevators make this a fun and historic place to take in the Seattle skyline. Deck hours vary with the time of year and scheduled events in the Chinese Room; check in advance to be sure it will be open when you want to visit. Admission is $7.50 for adults, $6 for seniors and students, and $5 for children 6 to 12. If you’ve ever seen a photo of the Space Needle framed by Mount Rainier and the high-rises of downtown Seattle, it was probably taken from Kerry Viewpoint, on Queen Anne Hill.

pages: 350 words: 96,803

Our Posthuman Future: Consequences of the Biotechnology Revolution
by Francis Fukuyama
Published 1 Jan 2002

Newton shows that this rule works for celestial bodies like planets and stars, and assumes that it will also work for other natural objects, like animals. o A spandrel is an architectural feature that emerges, unplanned by the architect, from the intersection of a dome and the walls that support it. p Searle’s critique of this approach is contained in his “Chinese room” puzzle, which raises the question of whether a computer could be said to understand Chinese any more than a non-Chinese-speaking individual locked in a room who received instructions on how to manipulate a series of symbols in Chinese. See Searle (1997), p. 11. q The Greek root of sympathy and the Latin root of compassion both refer to the ability to feel another person’s pain and suffering.

pages: 311 words: 94,732

The Rapture of the Nerds
by Cory Doctorow and Charles Stross
Published 3 Sep 2012

I’m drifting off into cyberspace here, becoming a worse and worse pencil-drawn copy of a copy of my original self.” “Thank you,” the djinni says. “I’ll draw your attention to our immediate neighborhood. Next argument, please?” “Whu-well, nothing happens in here that isn’t determined by some algorithm, so it’s not really real. For real spontaneity, you need—” The djinni is sighing and shaking his head. “Chinese room?” Huw offers hopefully. A slot appears in the wall of the kettle, and a slip of paper uncoils from it. The djinni takes the slip and frowns. “Hmm, one General Tso’s chicken to go. And a can of Diet Slurm.” He reaches down into the floor, rummages around for a few seconds, pulls out a delivery bag, and shoves it through the wall next to the slot.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

The story of the original program, AlphaGo, is beautifully told in a film of the same name: https://www.alphagomovie.com/. Some might quibble that these programs are more accurately described as playing ‘the history of Go’ rather than Go itself. there’s a valid question: A more sophisticated version of this argument has been developed by John Searle in his famous ‘Chinese room’ thought experiment. I didn’t use this example here because Searle’s argument is targeted primarily at intelligence (or ‘understanding’) rather than consciousness (Searle, 1980). an empirical dead end: The philosopher John Perry said: ‘If you think about consciousness long enough, you either become a panpsychist or you go into administration.’

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

The first in a series of books seeking physical explanations for consciousness and raising doubts about the ability of AI systems to achieve real intelligence: Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford University Press, 1989). 8. A revival of the critique of AI based on the incompleteness theorem: Luciano Floridi, “Should we be afraid of AI?” Aeon, May 9, 2016. 9. A revival of the critique of AI based on the Chinese room argument: John Searle, “What your computer can’t know,” The New York Review of Books, October 9, 2014. 10. A report from distinguished AI researchers claiming that superhuman AI is probably impossible: Peter Stone et al., “Artificial intelligence and life in 2030,” One Hundred Year Study on Artificial Intelligence, report of the 2015 Study Panel, 2016. 11.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

“That can’t be right,” Searle thought to himself, “because you could give me a story in Chinese with a whole lot of rules for shuffling the Chinese symbols, and I don’t understand a word of Chinese but all the same I could give the right answer.”17 He decided that it just didn’t follow that the computer had the ability to understand anything just because it could interpret a set of rules. While flying to his lecture, he came up with what has been called the “Chinese Room” argument against sentient machines. Searle’s critique was that there could be no simulated “brains in a box.” His argument was different from the original Dreyfus critique, which asserted that obtaining human-level performance from AI software was impossible. Searle simply argued that a computing machine is little more than a very fast symbol shuffler that uses a set of syntactical rules.

pages: 666 words: 131,148

Frommer's Seattle 2010
by Karl Samson
Published 10 Mar 2010

Not far from the Bank of America Tower is the Smith Tower, 506 Second Ave. (206/622-4004; www.smithtower.com). Opened in 1914, this was Seattle’s first skyscraper and, for 50 years, the tallest building west of Chicago. Although the Smith Tower has only 42 stories, it still offers excellent views from its 35th-floor observation deck, which surrounds the ornate Chinese Room, a banquet hall with a carved ceiling. A lavish lobby and original manual elevators make this a fun and historic place to take in the Seattle skyline. May through September, the deck is open daily from 10am to sunset; April and October, it’s open daily from 10am to 5pm; November through March, it’s open Saturday and Sunday from 10am to 4pm.

pages: 311 words: 168,705

The Rough Guide to Vienna
by Humphreys, Rob

(He refused formally to abdicate or to renounce his claim to the throne, and made two unsuccessful attempts to regain the Hungarian half of his title in 1921, before dying in exile on Madeira the following year.) As the name suggests, the room is another Chinoiserie affair – all the rage in the eighteenth century – lined with yellow wallpaper, hand-painted on rice paper, and inset with serene scenes of Chinese life on a deep-blue background. The audience rooms The lightness of the Blue Chinese Room is in complete contrast to the oppressively opulent Vieux-Laque Room, with its black-and-gold lacquer panels, exquisite parquetry and walnut wainscoting. During his two sojourns at Schönbrunn, Napoleon is thought to have slept in the neighbouring walnut-panelled Napoleon Room, lined with Brussels tapestries depicting the Austrian army in Italy.

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

I discuss this design strategy in an unpublished paper, “A Route to Intelligence: Oversimplify and Self-monitor” (1984c) that can be found on my website: http://ase.tufts.edu/cogstud/dennett/recent.html. 395Turing Test. For an analysis and defense of the Turing Test as a test of genuine comprehension, see my “Can Machines Think?” (1985), reprinted in Brainchildren, with two postscripts (1985) and (1997); “Fast Thinking” in The Intentional Stance (1987); and especially “The Chinese Room,” in IP (2013), where I discuss examples of the cognitive layering that must go into some exchanges in a conversation (pp. 326–327). 396Debner and Jacoby (1994). For more on such experiments, see my “Are We Explaining Consciousness Yet?” (2001c) and also Dehaene and Naccache (2001), Smith and Merikle (1999), discussed in Merikle et al. (2001). 399theory of agents with imagination.

pages: 661 words: 156,009

Your Computer Is on Fire
by Thomas S. Mullaney , Benjamin Peters , Mar Hicks and Kavita Philip
Published 9 Mar 2021

And the fact is that the combat model that dominates FPS gameplay is not compatible with a story about interacting with anything human, unless our role is to play a soldier, assassin, or psychopath. This is why environmental storytelling often works better without a combat model added to the spatial model, without “finding cover and chaining headshots.” Art and indie game studios such as Tale of Tales, The Chinese Room, Fullbright, and Galactic Cafe have explored this in the genre sometimes called the “walking simulator.” But when traveling through a walking simulator, if the goal is to tell a story of a traditional shape, the space must be contorted to fit—as seen in the strange, circuitous route through the house of Gone Home and the spaces that wink in and out of existence in The Beginner’s Guide.20 If we actually want players to be able to use the spatial model for more open exploration, environmental storytelling is probably better suited to the kinds of experimental story shapes pioneered by the literary hypertext community, as seen in the town geography of Marble Springs or the spaces of the body in Patchwork Girl.21 All that said—though FPS games are ill-suited to human interaction, and walking simulators have more often focused on storytelling than philosophy—it is not impossible to make a successful game focused on philosophical issues.

Lonely Planet Southern Italy
by Lonely Planet

Villa Lysis HISTORIC BUILDING (www.villalysiscapri.com; Via Lo Capo 12; €2; 10am-7pm Thu-Tue Jun-Aug, to 6pm Apr, May, Sep & Oct, to 4pm Nov & Dec) This beautifully melancholic art-nouveau villa is set on a clifftop on Capri’s northeast tip and was the one-time retreat of French poet Jacques d’Adelswärd-Fersen, who came to Capri in 1904 to escape a gay sex scandal in Paris. Unlike other stately homes, the interior has been left almost entirely empty; this is a place to let your imagination flesh out the details. It’s a 40-minute walk from Piazza Umberto I and rarely crowded. One notable curiosity is the ‘Chinese room’ in the basement, which includes a semicircular opium den with a swastika emblazoned on the floor. Fersen became addicted to opium following a visit to Ceylon in the early 1900s; the swastika is the Sanskrit symbol for well-being. Equally transfixing is the sun-dappled garden, a triumph of classical grandiosity half given over to nature.

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Let’s take understanding which is the basis of Searle’s confusion. I have a rule that works both for Penrose and Searle which is, if it were written by a sophomore he would have gotten a B because there was some pretty good stuff. If he were a junior it would be a C, and so on. Searle has this Chinese room problem in which he talks about a machine that simulates something that people would say requires intelligence, but since the computer doesn’t have any intelligence and the database doesn’t have any understanding, it has to do with understanding sentences in a language you don’t know. Searle’s argument is that if you have a computer and a database, where the computer using the database can translate from Chinese to English then this must be a sham because since the computer processor is just flip-flops and whatever it is and certainly doesn’t understand Chinese and since the database is just a bunch of data in a disk file, it can’t understand Chinese then the combination can’t understand it either and it’s just making you think that it understands.

pages: 778 words: 239,744

Gnomon
by Nick Harkaway
Published 18 Oct 2017

’ – That is an interpretation, the Witness agrees, and the lack of inflection makes it sound ironically bland. ‘Is there a band? A musical group? Check venues close to the Thames.’ – I have. There is not. She’s never comfortable with that ‘I’ – not because she thinks it augurs some sort of awakening, but because she knows it does not. The Chinese room is empty. There is no god in the machine, just a very sophisticated card index. It should not pretend to experience. A while later, she realises she has stopped asking questions and that she is falling asleep. The copper inside wants to push on, but the rest of her is comfortable, and tired. The machine was right: concussion is exhausting.

England
by David Else
Published 14 Oct 2010

Once a Cistercian abbey but dissolved by Henry VIII and awarded to the earl of Bedford, Woburn Abbey (01525-290333; www.woburnabbey.com; adult/child £10.50/6; 11am-5.30pm Apr-Oct, last entry 4pm) is a wonderful country pile set within a 1200-hectare deer park. The house is stuffed with 18th-century furniture, porcelain and silver, and displays paintings by Gainsborough, Van Dyck and Canaletto. Highlights include Queen Victoria’s bedroom, where she slept with Prince Albert; the beautiful wall hangings and cabinets of the Chinese Room; the mysterious story of the Flying Duchess; and the gilt-adorned dining room. An audio tour brings the history of the house and the people who lived here to life. Outside, the gardens are well worth a wander and host theatre and music events during the summer months. On an equally grand scale is Woburn Safari Park (01525-290407; www.woburnsafari.co.uk; adult/child £17.50/13.50; 10am-6pm Apr-Oct, 11am-4pm Sat & Sun Nov-Feb, last entry 1hr before closing), the country’s largest drive-through animal reserve.

Italy
by Damien Simonis
Published 31 Jul 2010

VICENZA SOUTH Head down Viale X Giugno and east along Via San Bastiano and in about 20 minutes you’ll reach the Villa Valmarana ‘ai Nani’ (0444 32 18 03; www.villavalmarana.com; Via dei Nani 8; admission adult/student/child under 12 €8/4/free; 10am-noon & 3-6pm Tue-Sun Mar-Oct, 10am-noon & 2-4.30pm Sat & Sun Nov-Feb), covered with sublime 1757 frescoes by Giambattista Tiepolo and his son Giandomenico. Giambattista painted the Palazzina wing with his signature mythological epics, while his son painted the Foresteria with fanciful themes in rural, carnival and Chinese rooms. Nicknamed ‘ai Nani’ (dwarfs) for the 17 garden-gnome statues around the garden walls, this estate is a wonderful spot for a summer concert; check dates online. From ‘ai Nani’, a path leads to Palladio’s Villa Capra, better known as La Rotonda (0444 32 17 93; Via Rotonda 29; admission villa/gardens €6/3; villa 10am-noon & 3-6pm Wed Mar-Nov, gardens 10am-noon & 3-6pm Tue-Sun Mar-Nov).

pages: 3,292 words: 537,795

Lonely Planet China (Travel Guide)
by Lonely Planet and Shawn Low
Published 1 Apr 2015

Luoyang Yijia International Youth Hostel HOSTEL (Luoyang Yijia Guoji Qingnian Lushe; 6351 2311; 329 Zhongzhou Donglu, dm ¥45-55, d/tw ¥140/180) Located in the busy old town, this hostel hits its stride with a lively communal area, bar and excellent food (pizzas ¥32 to ¥38). Six-bed dorms are a little tight but private rooms are the equivalent of a two-star Chinese room. Rooms facing the main road are noisy, so check first. Transport to town and all the major sights are within walking distance of the hostel. Buses 5 and 41 from the train and bus stations come past. Christian’s Hotel BOUTIQUE HOTEL (Kelisiting Jiudian; 6326 6666; www.5xjd.com; 56 Jiefang Lu, d inc breakfast ¥1390) This boutique hotel scores points for its variety of rooms, each one with a kitchen and dining area, large plush beds, flat-screen TVs, and mini-bar.