Turing test


description: test of a machine's ability to exhibit intelligent behavior equivalent to that of a human

214 results

pages: 370 words: 94,968

The Most Human Human: What Talking With Computers Teaches Us About What It Means to Be Alive
by Brian Christian
Published 1 Mar 2011

A look at the transcripts of Turing tests past is in some sense a tour of the various ways in which we demur, dodge the question, lighten the mood, change the subject, distract, burn time: what shouldn’t pass as real conversation at the Turing test probably shouldn’t be allowed to pass as real human conversation, either. There are a number of books written about the technical side of the Turing test: for instance, how to cleverly design Turing test programs—called chatterbots, chatbots, or just bots. In fact, almost everything written at a practical level about the Turing test is about how to make good bots, with a small remaining fraction about how to be a good judge.

There’s a joke that goes around in AI circles about a program that models catatonic patients, and—by saying nothing—perfectly imitates them in the Turing test. What the joke illustrates, though, is that seemingly the less fluency between the parties, the less successful the Turing test will be. What, exactly, does “fluency” mean, though? Certainly, to put a human who only speaks Russian in a Turing test with all English speakers would be against the spirit of the test. What about dialects, though? What exactly counts as a “language”? Is a Turing test peopled by English speakers from around the globe easier on the computers than one peopled by English speakers raised in the same country?

Ordinarily, there wouldn’t be very much odd about this notion at all, of course—we train and prepare for tennis competitions, spelling bees, standardized tests, and the like. But given that the Turing test is meant to evaluate how human I am, the implication seems to be that being human (and being oneself) is about more than simply showing up. I contend that it is. What exactly that “more” entails will be a main focus of this book—and the answers found along the way will be applicable to a lot more in life than just the Turing test. Falling for Ivana A rather strange, and more than slightly ironic, cautionary tale: Dr. Robert Epstein, UCSD psychologist, editor of the scientific volume Parsing the Turing Test, and co-founder, with Hugh Loebner, of the Loebner Prize, subscribed to an online dating service in the winter of 2007.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

Chapter 13: Inference and Language I

1. Andrew Griffin, “Turing Test Breakthrough as Super-Computer Becomes First to Convince Us It’s Human,” Independent, June 8, 2014.
2. See “Computer AI Passes Turing Test in ‘World First,’ ” BBC News, June 9, 2014. The article from Time is no longer retrievable. See also Pranav Dixit, “A Computer Program Has Passed the Turing Test for the First Time,” Gizmodo, June 8, 2014.
3. Gary Marcus, “What Comes After the Turing Test?,” New Yorker, June 9, 2014.
4. Adam Mann, “That Computer Actually Got an F on the Turing Test,” Wired, June 9, 2014.
5. We could convert Siri or Cortana into a competitor for the Loebner Prize by adding some code like doBabble() or doComplain(), where the arguments are questions or commands from the user.

So it’s no wonder that AI researchers have mostly abandoned the Turing test challenge. Stuart Russell’s dismissive remark that “mainstream AI researchers have expended almost no effort to pass the Turing test” reflects this frustration with media frenzy about parlor trick performances.6 It’s a weakness in the field to accept them. But the dismissal is entirely unnecessary. For one, an honest Turing test really is a high-water mark for language understanding. As Ray Kurzweil has pointed out, an alien intelligence might not understand English conversation, but any intelligence that did pass a legitimate Turing test must be intelligent. “The key statement is the converse,” he says: “In order to pass the test, you must be intelligent.”7 Kurzweil suggests that future competitions simply allow for a longer test, ensuring that cheap tricks get filtered out in continuing dialogue.

We can do this in the context of a question-answering session, as with the original test. Consider a test simplification: we’ll call it the Turing Test Monologue. In a Turing Test Monologue, the judge simply pastes in a news article or other text, then asks questions requiring an understanding of what it says. The respondent must answer those questions accurately. (Goodbye to tricks.) For instance, a judge might paste in the AP article “Your Tacos or Your Life!” and ask the respondent whether the story is funny or not, and why. Passing this test would be, strictly speaking, a logical subset of a completely open-ended Turing test, so it would be entirely fair to use it—in fact, it would give advantage to the machine, which might not completely understand how to handle “pragmatic phenomena” in back-and-forth dialogue—more on this later.
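Larson’s Turing Test Monologue is easy to state as a procedure: one pasted text, a fixed list of comprehension questions, no back-and-forth for tricks to exploit. The harness below is my own minimal sketch of that procedure, not Larson’s; the sample article, the keyword grader, and `literal_respondent` are invented stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonologueItem:
    question: str
    accept: Callable[[str], bool]  # grader that accepts or rejects an answer

def run_monologue_test(article: str,
                       items: list[MonologueItem],
                       respondent: Callable[[str, str], str]) -> float:
    """Score a respondent on comprehension questions about one pasted text.

    The respondent sees only the article plus each question; there is no
    dialogue, so conversational dodges buy it nothing. Returns the
    fraction of questions answered acceptably.
    """
    correct = sum(1 for item in items
                  if item.accept(respondent(article, item.question)))
    return correct / len(items)

# Toy run with a hypothetical article and a crude keyword-based grader.
article = "A taco truck was robbed on Tuesday; the owner joked about it."
items = [MonologueItem("On what day was the truck robbed?",
                       lambda a: "tuesday" in a.lower())]

def literal_respondent(text: str, question: str) -> str:
    return "It happened on Tuesday."

print(run_monologue_test(article, items, literal_respondent))  # 1.0
```

A real grader would of course need to judge free-form answers (including the “is it funny, and why?” question), which is exactly the hard part the monologue format is meant to probe.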

pages: 315 words: 89,861

The Simulation Hypothesis
by Rizwan Virk
Published 31 Mar 2019

We all know that AI in the context of NPCs represents an artificially intelligent being, but what does that mean exactly? Common sense tells us that it’s a program that appears human—in some ways. In lieu of a formal definition, an informal definition is a computer program or artificial device that can pass the Turing Test. The History and Rise of AI The Turing Test Figure 13: A visual depiction of the Turing Test 12 The Turing Test is more of a milestone than a definition, since most AI today cannot pass this test. Alan Turing, considered by many to be the father of modern computer science, conjectured a time when a machine would exhibit intelligent behaviors.

Party C would start conversations (passing messages using something like a teletype machine—the best that Turing had in his time) and would have to tell the difference between A and B. If he was unable to distinguish which was the human and which was the machine, then the machine could be said to have passed the Turing Test. Of course, back then, he described it as a machine, but today we know it would be the AI program (which is software) that would pass the test, not so much the hardware. This party game and the concept underlying it eventually became known as the Turing Test. AI and Games: Claude Shannon and Chess The Turing Test is not the only test of artificial intelligence. In a groundbreaking 1950 paper (the same year that Turing proposed his test) titled “Programming a Computer for Playing Chess,” MIT professor Claude Shannon posited that a computer would be capable of playing chess, and showed a computer he had built for such a purpose (see Figure 14).
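The three-party setup Virk describes above can be sketched as a single round of the game: two hidden responders, a judge who sees only transcripts, and a pass when the judge fails to unmask the machine. This is an illustrative harness only; the canned responders and `keyword_judge` are invented, and a real test would use free-form dialogue rather than fixed questions.

```python
import random
from typing import Callable

Responder = Callable[[str], str]
Transcript = list[tuple[str, str]]  # (question, answer) pairs

def imitation_game(judge_questions: list[str],
                   machine: Responder,
                   human: Responder,
                   judge_guess: Callable[[Transcript, Transcript], int]) -> bool:
    """One round of Turing's party game.

    Party C exchanges messages with two hidden parties A and B and must
    say which transcript (index 0 or 1) came from the machine. The
    machine passes the round exactly when the judge guesses wrong.
    """
    parties = [machine, human]
    order = [0, 1]
    random.shuffle(order)  # hide which terminal is which
    transcripts = [[(q, parties[i](q)) for q in judge_questions]
                   for i in order]
    guess = judge_guess(transcripts[0], transcripts[1])
    machine_position = order.index(0)
    return guess != machine_position  # True: judge failed to unmask it

# A transparently robotic machine loses to even a crude judge.
machine = lambda q: "BEEP."
human = lambda q: "Oh, that's a good question."

def keyword_judge(t0: Transcript, t1: Transcript) -> int:
    # Flag the transcript full of robotic replies as the machine.
    return 0 if any("BEEP" in answer for _, answer in t0) else 1

print(imitation_game(["How are you?"], machine, human, keyword_judge))  # False
```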

Different kinds of AI techniques had to be developed in order for a computer to have a chance at passing the “Turing Test.” In the early 21st century, digital assistants like Siri, Alexa, and Google Assistant are much better at processing either text or voice than any of the video games that we have covered thus far. But just as video games drove early graphics technology, you can expect that simulated characters will drive more sophisticated AI in the future. Figure 15: Eliza was an early digital psychiatrist that used simple matching. NLP, AI, and the Quest to Pass the Turing Test Of critical importance to passing the Turing Test is NLP, or Natural Language Processing.
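The “simple matching” behind Eliza, mentioned in the Figure 15 caption above, can be illustrated in a few lines: an ordered list of regular-expression rules, each of which echoes captured text back as a question. The rules below are invented examples in the spirit of Weizenbaum’s script, not his original patterns.

```python
import re

# Illustrative ELIZA-style rules: each pairs a pattern with a template
# that reflects the user's own words back as a follow-up question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def eliza_reply(utterance: str) -> str:
    """Answer with the first matching rule's template, else a stock prompt."""
    text = utterance.rstrip(" .!?")  # drop trailing punctuation before matching
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I am worried about the Turing test."))
# How long have you been worried about the Turing test?
print(eliza_reply("Nice weather today."))
# Please go on.
```

That a handful of such rules sustained convincing “psychiatric” conversations is precisely why pattern matching alone was never going to scale to a full Turing test, and why NLP became the crux.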

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

“The Computer will be deemed to have passed the ‘Turing Test Human Determination Test’ if the Computer has fooled two or more of the three Human Judges into thinking that it is a human.”43 But we’re not done yet: In addition, each of the three Turing Test Judges will rank the four Candidates with a rank from 1 (least human) to 4 (most human). The computer will be deemed to have passed the “Turing Test Rank Order Test” if the median rank of the Computer is equal to or greater than the median rank of two or more of the three Turing Test Human Foils. * * * The Computer will be deemed to have passed the Turing Test if it passes both the Turing Test Human Determination Test and the Turing Test Rank Order Test. * * * If a Computer passes the Turing Test, as described above, prior to the end of the year 2029, then Ray Kurzweil wins the wager. Otherwise Mitchell Kapor wins the wager.44 Wow, pretty strict.

The judges and human foils will be chosen by a “Turing test committee,” made up of Kapor, Kurzweil (or their designees), and a third member. Instead of five-minute chats, each of the four contestants will be interviewed by each judge for a grueling two hours. At the end of all these interviews, each judge will give his or her verdict (“human” or “machine”) for each contestant.
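The wager’s two scoring rules quoted above reduce to a short computation once the verdicts and ranks are collected. The sketch below assumes three judges and three human foils as the rules specify; the sample numbers are hypothetical.

```python
from statistics import median

def passes_wager(judge_verdicts: list[str],
                 computer_ranks: list[int],
                 foil_ranks: list[list[int]]) -> bool:
    """Apply the two tests from the Kurzweil-Kapor wager rules.

    judge_verdicts: each judge's call on the computer ("human"/"machine").
    computer_ranks: each judge's rank for the computer,
                    1 (least human) to 4 (most human).
    foil_ranks:     one such rank list per human foil.
    """
    # Turing Test Human Determination Test: fool two or more of the
    # three judges into calling the computer human.
    determination = sum(v == "human" for v in judge_verdicts) >= 2
    # Turing Test Rank Order Test: the computer's median rank must be
    # equal to or greater than that of two or more of the three foils.
    c = median(computer_ranks)
    rank_order = sum(c >= median(f) for f in foil_ranks) >= 2
    # The wager requires passing both sub-tests.
    return determination and rank_order

# Hypothetical outcome: two judges fooled, computer's median rank is 3,
# which ties or beats two of the three foils.
print(passes_wager(["human", "human", "machine"],
                   [3, 3, 2],
                   [[4, 4, 4], [2, 3, 2], [1, 2, 3]]))  # True
```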

Dennett, The Mind’s I: Fantasies and Reflections on Self and Soul (New York: Basic Books, 1981), along with a cogent counterargument from Hofstadter.
16. S. Aaronson, Quantum Computing Since Democritus (Cambridge, U.K.: Cambridge University Press, 2013), 33.
17. “Turing Test Transcripts Reveal How Chatbot ‘Eugene’ Duped the Judges,” Coventry University, June 30, 2015, www.coventry.ac.uk/primary-news/turing-test-transcripts-reveal-how-chatbot-eugene-duped-the-judges/.
18. “Turing Test Success Marks Milestone in Computing History,” University of Reading, June 8, 2014, www.reading.ac.uk/news-and-events/releases/PR583836.aspx.
19. R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Press, 2005), 7.
20.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

For example, if your favourite AI technique is ‘Temporally Recurrent Optimal Learning’ (to pick some random AI buzzwords of the present day), then you might be tempted to define AI as the task of passing the Turing test using Temporally Recurrent Optimal Learning, thereby ruling out any other approach. Thus, we want a test for intelligent behaviour that is independent of the techniques or methods that are used to achieve it. The Turing test achieves this by clearly separating the interrogators from the thing that is being interrogated: the only evidence that the interrogators have to go on are the inputs and outputs – the questions sent by the interrogator, and the responses that the interrogator later receives. The thing on the other end is a black box as far as the Turing test is concerned, in the sense that we are not allowed to examine its internal structure: all that we have are the inputs and outputs.

Turing’s article ‘Computing Machinery and Intelligence’, describing his test, was published in the prestigious international journal Mind in 1950.9 Although many articles touching on AI-like ideas had been published before this, Turing approached the subject for the first time from the standpoint of the modern digital computer. As such, his article is generally recognized as the first AI publication. Turing Test Nonsense The Turing test is simple, elegant and easy to understand. However, it had the unfortunate side effect of establishing the test as the holy grail of AI – with unfortunate consequences that resonate to the present day. The problem is that most attempts to tackle the Turing test tend to use cheap tricks to try to befuddle the interrogators into believing that they are dealing with a person, rather than by trying to engage with the actual issues of intelligent behaviour.

Varieties of Artificial Intelligence For all the nonsense it has given rise to, the Turing test is an important part of the AI story because, for the first time, it gave researchers interested in this emerging discipline a target to aim at. When someone asked you what your goal was, you could give a straightforward and precise answer: My goal is to build a machine that can meaningfully pass the Turing test. Today, I think very few, if any, serious AI researchers would give this answer, but it had a crucial historical role, and I believe it still has something important to tell us today. Much of the attraction of the Turing test undoubtedly lies in its simplicity, but clear as the test appears to be, it nevertheless raises many problematic questions about AI.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

In my 1999 book The Age of Spiritual Machines, I predicted that a Turing test—wherein an AI can communicate by text indistinguishably from a human—would be passed by 2029. I repeated that in 2005’s The Singularity Is Near. Passing a valid Turing test means that an AI has mastered language and commonsense reasoning as possessed by humans. Turing described his concept in 1950,[1] but he did not specify how the test should be administered. In a bet that I have with Mitch Kapor, we defined our own rules that are much more difficult than other interpretations. My expectation was that in order to pass a valid Turing test by 2029, we would need to be able to attain a great variety of intellectual achievements with AI by 2020.

Ultimately, when a program passes the Turing test, it will actually need to make itself appear far less intelligent in many areas because otherwise it would be clear that it is an AI. For example, if it could correctly solve any math problem instantly, it would fail the test. Thus, at the Turing test level, AIs will have capabilities that in fact go far beyond the best humans in most fields. Humans are now in the Fourth Epoch, with our technology already producing results that exceed what we can understand for some tasks. For the aspects of the Turing test that AI has not yet mastered, we are making rapid and accelerating progress.

And if an AI is able to eloquently proclaim its own consciousness, what ethical grounds could we have for insisting that only our own biology can give rise to worthwhile sentience? The empiricism of the Turing test puts our focus firmly where it should be. Yet while the Turing test will be very useful for assessing the progress of AI, we should not treat it as the sole benchmark of advanced intelligence. As systems like PaLM 2 and GPT-4 have demonstrated, machines can surpass humans at cognitively demanding tasks without being able to convincingly imitate a human in other domains. Between 2023 and 2029, the year I expect the first robust Turing test to be passed, computers will achieve clearly superhuman ability in a widening range of areas.

pages: 291 words: 81,703

Average Is Over: Powering America Beyond the Age of the Great Stagnation
by Tyler Cowen
Published 11 Sep 2013

Some commentators attach great weight to whether intelligent machines can pass a Turing test. If a group of intelligent observers were conversing with a machine through an exchange of typed messages, could those observers tell whether they were talking to a human being or to a machine? If the machine can consistently “pass” as human, it has succeeded at the Turing test. Sometimes the Turing test is considered to be a benchmark for when we can call a machine truly intelligent. So far the machines have had only mixed results at passing the Turing test. Getting human conversation “right” is a complex, multidimensional problem, highly sensitive to context and background understanding, and so it isn’t easy for a computer program.

Still, those two vote totals are pretty close, and arguably these programs have almost passed the Turing test. However, once you take a “gains from trade” perspective on man–machine interaction, the Turing test seems to be more of a curiosity than a fundamental standard for judging the presence of intelligence or not. What good does passing a Turing test really do? There has been an enduring misunderstanding that needs to be cleared up. Turing’s core message was never “If a machine can imitate a man, the machine must be intelligent.” Rather, it was “Inability to imitate does not rule out intelligence.” In his classic essay on the Turing test, Turing encouraged his readers to take a broader perspective on intelligence and conceive of it more universally and indeed more ethically.

Go to the website itself, Cleverbot.com, and see what you think. There you will be warned that the program may serve up unusual, unexpected, or inappropriate content. The first question I posed it was, “Can you pass a Turing test?” The answer “What is a Turing test?” was quite human, I thought. Then: TYLER: A Turing test shows whether you can mimic a human interlocutor. CLEVERBOT: Do the Turing test on me. TYLER: So far you are passing it. CLEVERBOT: Ah well, what can one do? Not bad. Yet then I wrote, “Jump off the Brooklyn Bridge!” and it lost the test when it answered, “But that magic jump on me.”

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

Actually, dialog systems are a very important area of research for virtual assistants, question-answering systems, and so on. MARTIN FORD: Do you anticipate the creation of an AI that someday could pass the Turing test? YANN LECUN: It’s going to happen at some point, but the Turing test is not actually an interesting test. In fact, I don’t think a lot of people in the AI field at the moment consider the Turing test to be a good test. It’s too easy to trick it, and to some extent, the Turing test has already been and gone. We give a lot of importance to language as humans because we are used to discussing intelligent topics with other humans through language.

I think Alan Turing was prescient in basing the Turing test on language because I think it does require the full range of human thinking and human intelligence to create and understand language at human levels. MARTIN FORD: Is your ultimate objective to extend this idea to actually build a machine that can pass the Turing test? RAY KURZWEIL: Not everybody agrees with this, but I think the Turing test, if organized correctly, is actually a very good test of human-level intelligence. The issue is that in the brief paper that Turing wrote in 1950, it’s really just a couple of paragraphs that talked about the Turing test, and he left out vital elements.

But we’ve also got other limitations to deal with, like we still don’t have generalized tools in AI and we still don’t know how to solve general problems in AI. In fact, one of the fun things, and you may have seen this, is that people are now starting to define new forms of what used to be the Turing test. MARTIN FORD: A new Turing Test? How would that work? JAMES MANYIKA: Steve Wozniak, the co-founder of Apple, has actually proposed what he calls the “coffee test” as opposed to Turing tests, which are very narrow in many respects. A coffee test is kind of fun: until you get a system that can enter an average and previously unknown American home and somehow figure out how to make a cup of coffee, we’ve not solved AGI.

pages: 696 words: 143,736

The Age of Spiritual Machines: When Computers Exceed Human Intelligence
by Ray Kurzweil
Published 31 Dec 1998

In a 1950 paper, Alan Turing describes his concept of the Turing Test, in which a human judge interviews both a computer and one or more human foils using terminals (so that the judge won’t be prejudiced against the computer for lacking a warm and fuzzy appearance).11 If the human judge is unable to reliably unmask the computer (as an impostor human) then the computer wins. The test is often described as a kind of computer IQ test, a means of determining if computers have achieved a human level of intelligence. In my view, however, Turing really intended his Turing Test as a test of thinking, a term he uses to imply more than just clever manipulation of logic and language.

But I think science is inherently about objective reality. I don’t see how it can break through to the subjective level. MAYBE IF THE THING PASSES THE TURING TEST? That is what Turing had in mind. Lacking any conceivable way of building a consciousness detector, he settled on a practical approach, one that emphasizes our unique human proclivity for language. And I do think that Turing is right in a way—if a machine can pass a valid Turing Test, I believe that we will believe that it is conscious. Of course, that’s still not a scientific demonstration. The converse proposition, however, is not compelling.

Whales and elephants have bigger brains than we do and exhibit a wide range of behaviors that knowledgeable observers consider intelligent. I regard them as conscious creatures, but they are in no position to pass the Turing Test. THEY WOULD HAVE TROUBLE TYPING ON THESE SMALL KEYS OF MY COMPUTER. Indeed, they have no fingers. They are also not proficient in human languages. The Turing Test is clearly a human-centric measurement. IS THERE A RELATIONSHIP BETWEEN THIS CONSCIOUSNESS STUFF AND THE ISSUE OF TIME THAT WE SPOKE ABOUT EARLIER? Yes, we clearly have an awareness of time. Our subjective experience of time passage—and remember that subjective is just another word for conscious—is governed by the speed of our objective processes.

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

One of the many skills that nonbiological intelligence will achieve with the completion of the human brain reverse-engineering project is sufficient mastery of language and shared human knowledge to pass the Turing test. The Turing test is important not so much for its practical significance but rather because it will demarcate a crucial threshold. As I have pointed out, there is no simple means to pass a Turing test, other than to convincingly emulate the flexibility, subtlety, and suppleness of human intelligence. Having captured that capability in our technology, it will then be subject to engineering's ability to concentrate, focus, and amplify it. Variations of the Turing test have been proposed. The annual Loebner Prize contest awards a bronze prize to the chatterbot (conversational bot) best able to convince human judges that it's human.217 The criterion for winning the silver prize is based on Turing's original test, and it obviously has yet to be awarded.

Conversely, can the machine have any biological aspects? Because the definition of the Turing test will vary from person to person, Turing test-capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold. Invariably, these early claims will be debunked by knowledgeable observers, probably including myself. By the time there is a broad consensus that the Turing test has been passed, the actual threshold will have long since been achieved. Edward Feigenbaum proposes a variation of the Turing test, which assesses not a machine's ability to pass for human in casual, everyday dialogue but its ability to pass for a scientific expert in a specific field.220 The Feigenbaum test (FT) may be more significant than the Turing test because FT-capable machines, being technically proficient, will be capable of improving their own designs.

The bandwidth of information from the endocrine system is quite low, because the determining factor is overall levels of hormones, not the precise location of each hormone molecule. Confirmation of the uploading milestone will be in the form of a "Ray Kurzweil" or "Jane Smith" Turing test, in other words convincing a human judge that the uploaded re-creation is indistinguishable from the original specific person. By that time we'll face some complications in devising the rules of any Turing test. Since nonbiological intelligence will have passed the original Turing test years earlier (around 2029), should we allow a nonbiological human equivalent to be a judge? How about an enhanced human? Unenhanced humans may become increasingly hard to find.

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed
by Ray Kurzweil
Published 13 Nov 2012

We have clearly identified hierarchies of units of functionality in natural systems, especially the brain, and AI systems are using comparable methods. It appears to me that many critics will not be satisfied until computers routinely pass the Turing test, but even that threshold will not be clear-cut. Undoubtedly, there will be controversy as to whether claimed Turing tests that have been administered are valid. Indeed, I will probably be among those critics disparaging early claims along these lines. By the time the arguments about the validity of a computer passing the Turing test do settle down, computers will have long since surpassed unenhanced human intelligence. My emphasis here is on the word “unenhanced,” because enhancement is precisely the reason that we are creating these “mind children,” as Hans Moravec calls them.11 Combining human-level pattern recognition with the inherent speed and accuracy of computers will result in very powerful abilities.

English mathematician Alan Turing (1912–1954) based his eponymous test on the ability of a computer to converse in natural language using text messages.13 Turing felt that all of human intelligence was embodied and represented in language, and that no machine could pass a Turing test through simple language tricks. Although the Turing test is a game involving written language, Turing believed that the only way that a computer could pass it would be for it to actually possess the equivalent of human-level intelligence. Critics have proposed that a true test of human-level intelligence should include mastery of visual and auditory information as well.14 Since many of my own AI projects involve teaching computers to master such sensory information as human speech, letter shapes, and musical sounds, I would be expected to advocate the inclusion of these forms of information in a true test of intelligence.

Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even without the words) of other thinkers, is another matter. Doing so would constitute a higher-level task than Watson is capable of today—it is what I call a Turing test–level task. (That being said, I will point out that most humans do not come up with their own original thoughts either but copy the ideas of their peers and opinion leaders.) At any rate, this is 2012, not 2029, so I would not expect Turing test–level intelligence yet. On yet another hand, I would point out that evaluating the answers to questions such as finding key ideas in a novel is itself not a straightforward task.

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

Given that the composition hadn’t been attacked, he felt emboldened to continue the project, producing a second album in 1997 with pieces in the style of some of the other composers he’d analysed: Beethoven, Chopin, Joplin, Mozart, Rachmaninov and Stravinsky. This time the pieces were performed by human musicians. The critics’ response was much more positive. ‘The Game’: a musical Turing Test But would the output of Cope’s algorithm produce results that would pass a musical Turing Test? Could they be passed off as works by the composers themselves? To find out, Cope decided to stage a concert at the University of Oregon in collaboration with Douglas Hofstadter, a mathematician who wrote the classic book Gödel, Escher, Bach. Three pieces would be played.

This, Turing believed, was too general, so he refined his challenge: he wondered if a machine could be programmed so that if a human were to engage it in conversation, its responses would be so convincing that the human could not tell it was talking to a machine. Turing called this the ‘Imitation Game’, after a parlour game that was popular at the time, but it has become known as the ‘Turing Test’. To pass the Turing Test requires an algorithm that can receive as input the vagaries of natural language and process it to produce an output that corresponds to something a human might possibly say in response. (‘Natural language’ generally refers to language that has evolved naturally in humans through use and repetition without conscious planning or premeditation, in contrast to computer code.)

Once they see that they are losing they go rather crazy. Silver, the chief programmer, winced as he saw the next move AlphaGo was suggesting: ‘I think they’re going to laugh.’ Sure enough, the Korean commentators collapsed into fits of giggles at the moves AlphaGo was now making. Its moves were failing the Turing Test. No human with a shred of strategic sense would make them. The game dragged on for a total of 180 moves, at which point AlphaGo put up a message on the screen that it had resigned. The press room erupted with spontaneous applause. The human race had got one back. AlphaGo 3 Humans 1. The smile on Lee Sedol’s face at the press conference that evening said it all.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

Feige: Why AI Cannot Think

Turing did not make it possible for concepts like ‘thinking’, ‘intelligence’ et cetera to be made testable in their application to machines, but instead the opposite: he made it possible to conceive of the power of thinking of human beings in terms of a machine logic—and also to conceive the mind as a biologically based ‘virtual machine’ (Boden 2018, 3). The Turing test thus conceptually engineers machines in terms of possessing the ability to think, as well as conceptually engineering ourselves as humans as special kinds of machines. An obvious fallacy in this sort of substitution of questions is that it does not so much engineer the concepts in question as simply change the topic. This is true on the level of what question the Turing test is able to respond to: it does not give an answer to the question whether machines can think—a question that Turing hastily dismissed as a crypto-theological question. But it also applies to the sorts of question the Turing test asks: it proposes suspending the ontological question of what kind of thing we are dealing with and what powers this thing possesses in favour of the question of how we can recognize what sort of thing and what sort of powers we are dealing with.

In what follows, I will argue that Lemoine’s statement, and more generally, the idea that we can conceive of anything we currently subsume under the rubric of ‘artificial intelligence’ as having the power to think, is a deeply flawed and ultimately unintelligible concept (Feige 2024). To show this, I will proceed in three steps. In the first step (1), I will work out the implicit background of Lemoine’s statement: the Turing test, which substitutes an epistemological question for an ontological one. Taking up arguments by Davidson, I will hint at a direction we could go in instead so as to find resources for answers to what is constitutively lacking in an artificial intelligence. In the second step (2), I will draw on the arguments developed by Dreyfus and Cantwell Smith, who advocate a strong distinction between the operations an artificial intelligence is capable of and what we do insofar as we are thinking beings, and who understand a distinctive feature of the latter as being situated in an intelligible world in which the entities we encounter matter to us.

The third and final step (3) will sketch a line of thought that takes recourse to McDowell, who argues for the idea that we can only ascribe thinking to beings that are bearers of a form of life. On Changing the Subject: The Turing test and the Causal Impact of Reality Lemoine’s statement that LaMDA is a conscious and feeling person lacks any clear conception of what it means to be a conscious and feeling person. But, even worse: ‘consciousness’ and ‘being able to feel’ are conceptual resources that do not go together very well with the concept of ‘person’.

pages: 285 words: 86,853

What Algorithms Want: Imagination in the Age of Computing
by Ed Finn
Published 10 Mar 2017

This phase shift has produced a new crop of centers and initiatives grappling with the potential consequences of artificial intelligence, uniting philosophers, technologists, and Silicon Valley billionaires around the question of whether a truly thinking machine could pose an existential threat to humanity. In the paper where he described the Turing test, Alan Turing also took on the broader question of machine intelligence: an algorithm for consciousness. The Turing test was in many ways a demonstration of the absurdity of establishing a metric for intelligence; the best we can do is have a conversation and see how effective a machine is at emulating a human. But, Turing proposed, if we do achieve such a breakthrough, it will be important to consider the concept of the “child machine,” which learns what we wish to teach.4 That philosophical position underpins DeepMind and many other recent algorithmic intelligence breakthroughs, which have emerged from the currently incandescent computer science subfield of machine learning.

Discussing Simondon’s vision of technics as interpreted by fellow philosopher Bernard Stiegler, media scholars Andrés Vaccari and Belinda Barnet argue that both philosophers put the idea of a pure human memory (and consequently a pure thought) into crisis, and open a possibility which will tickle the interest of future robot historians: the possibility that human memory is a stage in the history of a vast machinic becoming. In other words, these future machines will approach human memory (and by extension culture) as a supplement to technical beings.68 Our existential anxiety about being replaced by our thinking machines underlies every thread of algorithmic thinking, from the shibboleth of the Turing test and Wiener’s argument for the “human use of human beings” to the gradual encroachment of digital computation on many human occupations, beginning with that of being a “computer.” Nowhere is the prospect more unsettling than in the context of extended cognition, however. As we outsource more of our minds to algorithmic systems, we too will need to confront the consequences of dependence on processes beyond our control.

Jonze gives us the apotheosis of an algorithm that knows us completely, passing through history and reason to imagination, taking the notion of “anticipation” to its psychological conclusion, desire. To truly know humanity, Samantha must fall in love. The film is, of course, a response to humanity’s deep fascination and anxiety about creating intelligence, a dilemma embedded in Turing’s famous speculation about discerning man from machine, the Turing Test. In the annual contest inspired by Turing’s provocation, human judges are asked to hold an epistolary conversation with an entity using a monitor and keyboard and attempt to discern whether that entity is a human or a computer program.50 Turing’s original paper on the subject, however, frames the problem rather differently: The new form of the problem can be described in terms of a game which we call the “imitation game.”

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

You’d ask the computer what day of the week it was, and it might be able to answer that. You’d ask it who the president was, and it probably couldn’t tell you. At that point, you’d know you were talking to a computer and not a person. But now when it comes to these Turing Tests, people who’ve tried connecting, for example, WolframAlpha to their Turing Test bots find that the bots lose every time. Because all you have to do is start asking the machine sophisticated questions and it will answer them! No human can do that. By the time you’ve asked it a few disparate questions, there will be no human who knows all those things, yet the system will know them.

What has long been difficult for me to understand is, What’s the point of a conventional Turing Test? What’s the motivation? As a toy, one could make a little chat bot that people could chat with. That will be the next thing. The current round of deep learning—particularly, recurrent neural networks—is making pretty good models of human speech and human writing. We can type in, say, “How are you feeling today?” and it knows most of the time what sort of response to give. But I want to figure out whether I can automate responding to my email. I know the answer is no. A good Turing Test, for me, will be when a bot can answer most of my email.

During World War II, he developed techniques for aiming antiaircraft fire by making models that could predict the future trajectory of an airplane by extrapolating from its past behavior. In Cybernetics and in The Human Use of Human Beings, Wiener notes that this past behavior includes quirks and habits of the human pilot, thus a mechanized device can predict the behavior of humans. Like Alan Turing, whose Turing Test suggested that computing machines could give responses to questions that were indistinguishable from human responses, Wiener was fascinated by the notion of capturing human behavior by mathematical description. In the 1940s, he applied his knowledge of control and feedback loops to neuromuscular feedback in living systems, and was responsible for bringing Warren McCulloch and Walter Pitts to MIT, where they did their pioneering work on artificial neural networks.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

But it should be a source of humility for AGI builders, since they aspire to master the whole spectrum of human intelligence. Apple cofounder Steve Wozniak has proposed an “easy” alternative to the Turing test that shows the complexity of simple tasks. We should deem any robot intelligent, Wozniak says, when it can walk into any home, find the coffeemaker and supplies, and make us a cup of coffee. You could call it the Mr. Coffee Test. But it may be harder than the Turing test, because it involves advanced AI in reasoning, physics, machine vision, accessing a vast knowledge database, precisely manipulating robot actuators, building a general-use robot body, and more.

To meet our definition of general intelligence a computer would need ways to receive input from the environment, and provide output, but not a lot more. It needs ways to manipulate objects in the real world. But as we saw in the Busy Child scenario, a sufficiently advanced intelligence can get someone or something else to manipulate objects in the real world. Alan Turing devised a test for human-level intelligence, now called the Turing test, which we will explore later. His standard for demonstrating human-level intelligence called only for the most basic keyboard-and-monitor kind of input and output devices. The strongest argument for why advanced AI needs a body may come from its learning and development phase—scientists may discover it’s not possible to “grow” AGI without some kind of body.

The fact that Yudkowsky won three times while playing the AI made me all the more concerned and intrigued. He may be a genius, but he’s not a thousand times more intelligent than the smartest human, as an ASI could be. Bad or indifferent ASI needs to get out of the box just once. The AI-Box Experiment also fascinated me because it’s a riff on the venerable Turing test. Devised in 1950 by mathematician, computer scientist, and World War II code breaker Alan Turing, the eponymous test was designed to determine whether a machine can exhibit intelligence. In it, a judge asks both a human and a computer a set of written questions. If the judge cannot tell which respondent is the computer and which is the human, the computer “wins.”

pages: 210 words: 62,771

Turing's Vision: The Birth of Computer Science
by Chris Bernhardt
Published 12 May 2016

The same should be true of machines. If we want to know whether a machine is intelligent or conscious, we should do this by interaction, not by dissection. It is interesting to note that nowadays there is a version of the Turing test that has become part of our everyday lives. Only in this version it is the computer that is trying to distinguish between humans and machines. CAPTCHAs (for Completely Automated Public Turing Test To Tell Computers and Humans Apart) often appear in online forms. Before you can submit the form, you have to answer a CAPTCHA, which customarily involves reading some deformed text and typing the letters and numbers into a box.

Cantor’s Diagonalization Arguments (Georg Cantor 1845–1918; Cardinality; Subsets of the Rationals That Have the Same Cardinality; Hilbert’s Hotel; Subtraction Is Not Well-Defined; General Diagonal Argument; The Cardinality of the Real Numbers; The Diagonal Argument; The Continuum Hypothesis; The Cardinality of Computations; Computable Numbers; A Non-Computable Number; There Is a Countable Number of Computable Numbers; Computable Numbers Are Not Effectively Enumerable)
9. Turing’s Legacy (Turing at Princeton; Second World War; Development of Computers in the 1940s; The Turing Test; Downfall; Apology and Pardon; Further Reading)
Notes
Bibliography
Index

Acknowledgments
I am very grateful to a number of people for their help. Michelle Ainsworth, Denis Bell, Jonathan Fine, Chris Staecker, and three anonymous reviewers read through various drafts with extraordinary care.

After this, we briefly look at how the modern computer came into existence during the forties. The procession from sophisticated calculator, to universal computer, to stored-program universal computer is outlined. In particular, we note that the stored-program concept originates with Turing’s paper. In 1950, Turing published a paper with a description of what is now called the Turing Test. This and the subsequent history of the idea are briefly described. The chapter ends with Jack Copeland’s recent study of Turing’s death and the fact that it might have been accidental, and not suicide. We conclude with the text of Gordon Brown’s apology on behalf of the British government. 1 Background “Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show.”

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

Garland test: This term was coined by Murray Shanahan, whose book Embodiment and the Inner Life (2010) was one of the inspirations behind Ex Machina.
noisy proclamations: www.reading.ac.uk/news-archive/press-releases/pr583836.html.
When the chatbot won: ‘Eugene Goostman is a real boy – the Turing Test says so’, Guardian Pass notes, 9 June 2014. See https://www.theguardian.com/technology/shortcuts/2014/jun/09/eugene-goostman-turing-test-computer-program.
the humans failed: The description of the Turing test as a test of ‘human gullibility’ comes from a 2015 New York Times article by John Markoff, ‘Software is smart enough for SAT, but still far from intelligent’, New York Times, 21 September 2015.

In Alex Garland’s 2014 film Ex Machina, reclusive billionaire tech genius Nathan invites hotshot programmer Caleb to his remote hideout to meet Ava, the intelligent, inquisitive robot he has created. Caleb’s task is to figure out whether Ava is conscious, or whether she – it – is merely an intelligent robot, with no inner life at all. Ex Machina draws heavily on the Turing test, the famous yardstick for assessing whether a machine can think. In one incisive scene, Nathan is quizzing Caleb about this test. In the standard version of the Turing test, as Caleb knows, a human judge interrogates both a candidate machine and another human, remotely, by exchanging typed messages only. A machine passes the test when the judge consistently fails to distinguish between the human and the machine.

But Nathan has something far more interesting in mind. When it comes to Ava, he says, ‘the challenge is to show you that she’s a robot – and see if you still feel she has consciousness.’ This new game transforms the Turing test from a test of intelligence into a test of consciousness, and as we now know, these are very different phenomena. What’s more, Garland shows us that the test is not really about the robot at all. As Nathan puts it, what matters is not whether Ava is a machine. It is not even whether Ava, though a machine, has consciousness. What matters is whether Ava makes a conscious person feel that she (or it) is conscious.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

The only definition I know that, though limited, can be practically used is Alan Turing’s. With his test, Turing provided an operational definition of a specific form of thinking—human intelligence. Let’s then consider human intelligence as defined by the Turing Test. It’s becoming increasingly clear that there are many facets of human intelligence. Consider, for instance, a Turing Test of visual intelligence—that is, questions about an image, a scene, which may range from “What is there?” to “Who is there?” to “What is this person doing?” to “What is this girl thinking about this boy?”—and so on. We know by now, from recent advances in cognitive neuroscience, that answering these questions requires different competencies and abilities, often independent from one another, often corresponding to separate modules in the brain.

We discriminate based on maturity and sanity. If I copy my brain/body, does it have a right to vote or is it redundant? Consider that the copies begin to diverge immediately, or that the copy could be intentionally different. In addition to passing the maturity/sanity/humanity test, perhaps the copy needs to pass a reverse Turing Test (a Church-Turing Test?). Rather than demonstrating behavior indistinguishable from that of a human, the goal would be to show behavior distinct from human individuals. (Would the current U.S. two-party system pass such a test?) Perhaps the day of corporate personhood (Dartmouth College v. Woodward, 1819) has finally arrived.

Speaking today about “the problem of life” sounds amusing: Biology is a science dealing with many different great problems, not just one. Intelligence is one word but many problems—not one but many Nobel prizes. This is related to Marvin Minsky’s view of the problem of thinking, captured by his slogan “Society of Mind.” In the same way, a real Turing Test is a broad set of questions probing the main aspects of human thinking. For this reason, my colleagues and I are developing the framework around an open-ended set of Turing+ questions in order to measure scientific progress in the field. The plural “questions” emphasizes the many different intelligent abilities to be characterized and possibly replicated in a machine—basic visual recognition of objects, the identification of faces, the gauging of emotions, social intelligence, language, and much more.

pages: 551 words: 174,280

The Beginning of Infinity: Explanations That Transform the World
by David Deutsch
Published 30 Jun 2011

This is well illustrated by the following excerpts from the conversation with which the program Elbot won the (lesser) Loebner prize in 2008:

JUDGE: do you understand the point of a turing test?
ELBOT: There are some people who would not pass the Turing test themselves. Why must they inflict it on computers?
JUDGE: I spose it is a little harsh
ELBOT: If we robots ever had marriages they would be eternal: ‘Till rust do us part’.

In that passage, Elbot’s first utterance is not a reply to the question but presumably a stock response triggered by the keywords ‘Turing test’. In the following line, the judge uses the word ‘spose’, which in that context can only mean ‘suppose’ (either as slang or as a typographical error).

Some claim that the above criticism is unfair: modern AI research is not focused on passing the Turing test, and great progress has been made in what is now called ‘AI’ in many specialized applications. However, none of those applications look like ‘machines that think’.* Others maintain that the criticism is premature, because, during most of the history of the field, computers had absurdly little speed and memory capacity compared with today’s. Hence they continue to expect the breakthrough in the next few years. This will not do either. It is not as though someone has written a chatbot that could pass the Turing test but would currently take a year to compute each reply.

But his test is rooted in the empiricist mistake of seeking a purely behavioural criterion: it requires the judge to come to a conclusion without any explanation of how the candidate AI is supposed to work. But, in reality, judging whether something is a genuine AI will always depend on explanations of how it works. That is because the task of the judge in a Turing test has similar logic to that faced by Paley when walking across his heath and finding a stone, a watch or a living organism: it is to explain how the observable features of the object came about. In the case of the Turing test, we deliberately ignore the issue of how the knowledge to design the object was created. The test is only about who designed the AI’s utterances: who adapted its utterances to be meaningful – who created the knowledge in them?

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

But it goes even further: having such laws permits us to violate them selectively so as to create worlds that contradict ours. Our next section features such violations in action. THE MINI-TURING TEST In 1950, Alan Turing asked what it would mean for a computer to think like a human. He suggested a practical test, which he called “the imitation game,” but every AI researcher since then has called it the “Turing test.” For all practical purposes, a computer could be called a thinking machine if an ordinary human, communicating with the computer by typewriter, could not tell whether he was talking with a human or a computer.

Often the quest for a good representation has led to insights into how the knowledge ought to be acquired, be it from data or a programmer. When I describe the mini-Turing test, people commonly claim that it can easily be defeated by cheating. For example, take the list of all possible questions, store their correct answers, and then read them out from memory when asked. There is no way to distinguish (so the argument goes) between a machine that stores a dumb question-answer list and one that answers the way that you and I do—that is, by understanding the question and producing an answer using a mental causal model. So what would the mini-Turing test prove, if cheating is so easy? The philosopher John Searle introduced this cheating possibility, known as the “Chinese Room” argument, in 1980 to challenge Turing’s claim that the ability to fake intelligence amounts to having intelligence.

Humans must have some compact representation of the information needed in their brains, as well as an effective procedure to interpret each question properly and extract the right answer from the stored representation. To pass the mini-Turing test, therefore, we need to equip machines with a similarly efficient representation and answer-extraction algorithm. Such a representation not only exists but has childlike simplicity: a causal diagram. We have already seen one example, the diagram for the mammoth hunt. Considering the extreme ease with which people can communicate their knowledge with dot-and-arrow diagrams, I believe that our brains indeed use a representation like this. But more importantly for our purposes, these models pass the mini-Turing test; no other model is known to do so. Let’s look at some examples.
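Pearl’s pairing of a compact representation with an answer-extraction procedure can be sketched in a few lines of code. The following is a hypothetical illustration, not code from the book: a toy structural model in the firing-squad style, where each variable is defined by an equation over its parents in the diagram, and a do() intervention is implemented by overriding a variable’s equation, severing its incoming arrows.

```python
# A toy causal diagram as structural equations:
# court_order -> captain -> {soldier_a, soldier_b} -> death.
# An intervention (do) replaces a variable's equation with a fixed value.

def run_model(do=None):
    """Evaluate the model; entries in `do` override structural equations."""
    do = do or {}
    v = {}
    v["court_order"] = do.get("court_order", True)
    v["captain"] = do.get("captain", v["court_order"])
    v["soldier_a"] = do.get("soldier_a", v["captain"])
    v["soldier_b"] = do.get("soldier_b", v["captain"])
    v["death"] = do.get("death", v["soldier_a"] or v["soldier_b"])
    return v

# Observational query: with a court order in place, the prisoner dies.
print(run_model()["death"])  # True

# Interventional query: if soldier A fires on his own (no court order),
# soldier B does not fire, yet the prisoner still dies.
world = run_model(do={"court_order": False, "soldier_a": True})
print(world["soldier_b"], world["death"])  # False True
```

The interventional answer is exactly what a stored question-answer list cannot produce without enumerating every possible hypothetical world, which is the point of the mini-Turing test.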

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do. Focus on that, and the real challenge comes into view: systems can do more, much more, with every passing day. CAPABILITIES: A MODERN TURING TEST In a paper published in 1950, the computer scientist Alan Turing suggested a legendary test for whether an AI exhibited human-level intelligence. When AI could display humanlike conversational abilities for a lengthy period of time, such that a human interlocutor couldn’t tell they were speaking to a machine, the test would be passed: the AI, conversationally akin to a human, deemed intelligent.

For more than seven decades this simple test has been an inspiration for many young researchers entering the field of AI. Today, as the LaMDA-sentience saga illustrates, systems are already close to passing the Turing test. But, as many have pointed out, intelligence is about so much more than just language (or indeed any other single facet of intelligence taken in isolation). One particularly important dimension is the ability to take actions. We don’t just care about what a machine can say; we also care about what it can do.

What we would really like to know is, can I give an AI an ambiguous, open-ended, complex goal that requires interpretation, judgment, creativity, decision-making, and acting across multiple domains, over an extended time period, and then see the AI accomplish that goal? Put simply, passing a Modern Turing Test would involve something like the following: an AI being able to successfully act on the instruction “Go make $1 million on Amazon in a few months with just a $100,000 investment.” It might research the web to look at what’s trending, finding what’s hot and what’s not on Amazon Marketplace; generate a range of images and blueprints of possible products; send them to a drop-ship manufacturer it found on Alibaba; email back and forth to refine the requirements and agree on the contract; design a seller’s listing; and continually update marketing materials and product designs based on buyer feedback.

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

‘If we use tools like Negobot, we can dramatically reduce the workload on the human teams currently working to catch these criminals.’ Beating the Turing Test Entrapment laws mean that Negobot is not currently being used by police forces around the world, but that doesn’t make the experiment any less interesting. If anything, it serves to highlight just how broad the possible applications of conversational AI can be. At its root, Negobot offers a unique twist on the famous AI experiment known as the Turing Test. Based on a hypothesis by Alan Turing, whose work I discussed in chapter one, the Turing Test is designed to test a machine’s ability to show intelligent behaviour indistinguishable from that of a human.

There he came up with various techniques for cracking German codes, most famously an electromechanical device capable of working out the settings for the Enigma machine. In doing so, he played a key role in decoding intercepted messages, which helped the Allies defeat the Nazis. Turing was fascinated by the idea of thinking machines and went on to devise the important Turing Test, which we will discuss in detail in a later chapter. As a child, he read and loved a book called Natural Wonders Every Child Should Know, by Edwin Tenney Brewster, which the author described as ‘an attempt to lead children of eight or ten, first to ask and then to answer the question: “What have I in common with other living things, and how do I differ from them?”’

As it is regularly performed, the Turing Test involves taking a computer (A) and a human (B), and having them each communicate with a human interrogator (C), whose job it is to figure out which of A and B is the human and which is the computer. If C is unable to do this, Turing argued that the machine has ‘won’ and we must consider it to be intelligent, since we are unable to differentiate it from our own human intelligence. In the future, tools such as Negobot show that our ability to discern between real people and bots may even have legal ramifications.
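The A/B/C arrangement can be made concrete with a small harness. This is a hypothetical sketch, not any standard implementation: the two respondents and the judge are placeholder functions standing in for the human (B), the machine (A or B, hidden), and the interrogator (C).

```python
import random

def human_respondent(question):
    return "Let me think about that for a moment."

def machine_respondent(question):
    # A stand-in bot that imitates the human's conversational style.
    return "Let me think about that for a moment."

def run_test(judge, rounds=5):
    """One session: the judge questions channels A and B, then names the machine."""
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)                   # hide which channel is which
    channels = dict(zip("AB", respondents))
    transcript = {label: [] for label in channels}
    for i in range(rounds):
        question = f"Question {i}?"
        for label, respond in channels.items():
            transcript[label].append((question, respond(question)))
    guess = judge(transcript)                     # judge returns "A" or "B"
    return channels[guess] is machine_respondent  # True if the machine was caught

# A judge who cannot tell the difference identifies the machine only about
# half the time -- the failure mode Turing's criterion is built on.
caught = sum(run_test(lambda t: random.choice("AB")) for _ in range(1000))
print(0.4 < caught / 1000 < 0.6)
```

Under this framing, the machine ‘wins’ whenever the judge’s accuracy stays near chance over many sessions.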

pages: 224 words: 64,156

You Are Not a Gadget
by Jaron Lanier
Published 12 Jan 2010

It’s notable that it is the woman who is replaced by the computer, and that Turing’s suicide echoes Eve’s fall. The Turing Test Cuts Both Ways Whatever the motivation, Turing authored the first trope to support the idea that bits can be alive on their own, independent of human observers. This idea has since appeared in a thousand guises, from artificial intelligence to the hive mind, not to mention many overhyped Silicon Valley start-ups. It seems to me, however, that the Turing test has been poorly interpreted by generations of technologists. It is usually presented to support the idea that machines can attain whatever quality it is that gives people consciousness.

Turing developed breasts and other female characteristics and became terribly depressed. He committed suicide by lacing an apple with cyanide in his lab and eating it. Shortly before his death, he presented the world with a spiritual idea, which must be evaluated separately from his technical achievements. This is the famous Turing test. It is extremely rare for a genuinely new spiritual idea to appear, and it is yet another example of Turing’s genius that he came up with one. Turing presented his new offering in the form of a thought experiment, based on a popular Victorian parlor game. A man and a woman hide, and a judge is asked to determine which is which by relying only on the texts of notes passed back and forth.

The AI way of thinking is central to the ideas I’m criticizing in this book. If a machine can be conscious, then the computing cloud is going to be a better and far more capacious consciousness than is found in an individual person. If you believe this, then working for the benefit of the cloud over individual people puts you on the side of the angels. But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?

pages: 261 words: 10,785

The Lights in the Tunnel
by Martin Ford
Published 28 May 2011

The other participants are another person and a machine—both of whom attempt to convince the judge that they are human by conducting a normal conversation. If the judge can’t tell which participant is which, then the machine is said to have passed the Turing Test. The Turing Test is perhaps the most well-known and accepted method for measuring true machine intelligence. In practice, the rules would need to be further refined, and it seems likely that a panel of judges would be required rather than a single person. In my opinion, the main problem with the Turing Test is that it is, as Turing pointed out in his paper, an “imitation game.” What it really tests is the ability of an intelligent entity to imitate a human being—it is not a test of intelligence itself.

This book is available for purchase in paper and electronic formats at: www.TheLightsintheTunnel.com

CONTENTS
A Note to Kindle Users
Introduction
Chapter 1: The Tunnel (The Mass Market; Visualizing the Mass Market; Automation Comes to the Tunnel; A Reality Check; Summarizing)
Chapter 2: Acceleration (The Rich Get Richer; World Computational Capability; Grid and Cloud Computing; Meltdown; Diminishing Returns; Offshoring and Drive-Through Banking; Short Lived Jobs; Traditional Jobs: The “Average” Lights in the Tunnel; A Tale of Two Jobs; “Software” Jobs and Artificial Intelligence; Automation, Offshoring and Small Business; “Hardware” Jobs and Robotics; “Interface” Jobs; The Next “Killer App”; Military Robotics; Robotics and Offshoring; Nanotechnology and its Impact on Employment; The Future of College Education; Econometrics: Looking Backward; The Luddite Fallacy; A More Ambitious View of Future Technological Progress: The Singularity; A War on Technology)
Chapter 3: Danger (The Predictive Nature of Markets; The 2008-2009 Recession; Offshoring and Factory Migration; Reconsidering Conventional Views about the Future; The China Fallacy; The Future of Manufacturing; India and Offshoring; Economic and National Security Implications for the United States; Solutions; Labor and Capital Intensive Industries: The Tipping Point; The Average Worker and the Average Machine; Capital Intensive Industries are “Free Riders”; The Problem with Payroll Taxes; The “Workerless” Payroll Tax; “Progressive” Wage Deductions; Defeating the Lobbyists; A More Conventional View of the Future; The Risk of Inaction)
Chapter 4: Transition (The Basis of the Free Market Economy: Incentives; Preserving the Market; Recapturing Wages; Positive Aspects of Jobs; The Power of Inequality; Where the Free Market Fails: Externalities; Creating a Virtual Job; Smoothing the Business Cycle and Reducing Economic Risk; The Market Economy of the Future; An International View; Transitioning to the New Model; Keynesian Grandchildren; Transition in the Tunnel)
Chapter 5: The Green Light (Attacking Poverty; Fundamental Economic Constraints; Removing the Constraints; The Evolution toward Consumption; The Green Light)
Appendix / Final Thoughts (Are the ideas presented in this book WRONG? (Opposing arguments with responses); Two Questions Worth Thinking About; Where are we now? Four Possible Cases; The Next 10-20 years: Some Indicators to Watch For; Outsmarting Marx; The Technology Paradox; Machine Intelligence and the Turing Test)
About / Contacting the Author
Notes

A Note to Kindle Users
The printed edition of this book employs both footnotes and endnotes. Footnotes are marked with an asterisk (*) and appear at the bottom of the page. The author uses footnotes for supplementary or supporting information and comments that he feels are likely to be of interest to a large percentage of readers.

(The impact of automation will, of course, be in addition to that of offshoring.) Many of these people will be highly educated professionals who had previously assumed that they were, because of their skills and advanced educations, beneficiaries of the trend toward an increasingly technological and globalized world.* *[ Please see “Machine Intelligence and the Turing Test” in the Appendix for more on artificial intelligence. ] Military Robotics One of the biggest investors in robotics technology is the Pentagon. In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, P.W. Singer points out that the U.S. military expects robotic technologies to play an increasingly important role in conflicts of the future.

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

The connection between Capgras Syndrome and the uncanny valley runs deep into the culture of Artificial Intelligence. Our acceptance of mechanical intelligence is based on feelings and emotions. The Turing Test blurs the borders between the ‘real’ and the ‘artificial’ on the basis of an emotional perception from a human observer. If the human observer feels that the machine in the other room responds like a human, then the machine must be intelligent. This dimension of the Turing Test is very important and mostly missing from philosopher John Searle’s critical juxtaposition of the Chinese Room. It is not only what happens inside the room, or behind the wall, that is important.

We remain social primates whether we lived in the European tundra 40,000 years ago or live in a modern metropolis of the twenty-first century today. This cognitive connection is often missed in the current debate about Artificial Intelligence, since lip service is nowadays paid to the Turing Test. However, this vital, emotional connection between a human and an intelligent human-like machine is not lost in literature. Philip K. Dick, the prolific author of science fiction whose work has influenced our contemporary techno-cultural milieu more than anyone else, took the Turing Test to a more twisted, and evidently more disturbing, level: paranoia about the ‘mechanical other’. Predicting the discovery of the uncanny valley, paranoid feelings about doubles form a leitmotif in Philip K.

The English mathematician Alan Turing, one of the fathers of Artificial Intelligence, proposed this test in a landmark 1950 paper,1 noting that if one were to slightly modify this ‘imitation game’ and, instead of the woman there was a machine in the second room, then one had the best test for judging whether that machine was intelligent. This is the notorious ‘Turing test’. The machine would imitate the man: when asked whether it shaved every morning, it would answer ‘yes’, and so on. If the judge was less than 50 per cent accurate in telling the difference between the two hidden interlocutors then the machine was a passable simulation of a human being and, therefore, intelligent.

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

Turing dusted Ada down and brought her back from the dead in 1950, when, after carefully reading her work, he responded to what he called Lady Lovelace’s Objection – that computers cannot originate. In this they are not human or like a human. His answer to Ada – their conversation across time – was the Turing Test. * * * The 1950 Turing Test is a test of a machine’s ability to appear to humans as equivalent to, or indistinguishable from, a human. Google claims that their voice technology assistant Google Duplex has already passed the test – at least when it comes to booking appointments for you by phone. Fooling the human receptionist on the other end of the line counts as a pass – and Google Duplex has achieved that, with tone modulation, word elongation, and ‘thought’ pauses that all sound like a realistic human.

How many times have you wondered if you are communicating with a bot and it turns out to be a human? Possibly we will need a Reverse Turing Test to pull humans up to the level of enabled empathetic bots. While most chatbots are narrow AI – an algorithm designed to do one thing only, like order the pizza or run through your ‘choices’ before being transferred to a human – some chatbots seem smarter. Google engineer, inventor and futurist Ray Kurzweil’s Ramona will chat with you on a variety of topics. She’s a deep-learning system whose data-set is continuously augmented by her chats with humans. Kurzweil believes that Ramona will pass the Turing Test by 2029 – that is, she will be indistinguishable, online, from a human being.

That’s because computational power is the sum of computer storage (memory) and processing speed. Simply, computers weren’t powerful enough to do what McCarthy, Minsky and Turing knew they would be able to do. And before those men, there was Ada Lovelace, the early-19th century genius who inspired Alan Turing to devise the Turing Test – when we can no longer tell the difference between AI and bio-human. We aren’t there yet. Time is hard to gauge. * * * These 12 bytes are not a history of AI. They are not the story of Big Tech or Big Data, though we often meet on that ground. A bit is the smallest unit of data on a computer – it’s a binary digit, and it can have a value of 0 or 1. 8 bits make a byte.
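The bit-and-byte arithmetic above can be checked in a few lines of Python (an illustrative sketch written for this compilation; none of this code appears in Winterson's book):

```python
# A bit is a binary digit: it holds either 0 or 1.
bit_values = [0, 1]

# 8 bits make a byte, so one byte can represent 2**8 = 256 distinct values.
values_per_byte = 2 ** 8
print(values_per_byte)  # 256

# The letter 'A', stored as a single byte, is the 8-bit pattern 01000001.
print(format(ord("A"), "08b"))  # 01000001
```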

pages: 293 words: 88,490

The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction
by Richard Bookstaber
Published 1 May 2017

See Humphrys (2008). 7. Humphrys (2008), 242–43. 8. Which gets us to the Turing test. To determine when a computer had met some level of competing with human intelligence, Turing suggested that a computer hide behind one curtain, a person hide behind a second, and the tester pass questions through the curtain to each. If a person cannot distinguish the responses of a computer from those of a human, then at least in this limited respect the computer has attained humanlike intelligence. There already is an annual Turing test, the Loebner competition, in which a set of judges spend a few minutes conversing (via keyboard) with computers and with people, and then must decide which is which.

And the computers try to game the test by keeping their responses simple, answering slowly so there are fewer chances for the judges to make observations over the fixed time period, and keeping the conversation vacuous. A more reasonable Turing test would be to invite a computer into a round of dinner conversations where the human subjects are not made aware that this is occurring. (They would all have to be remote conversations, for obvious reasons.) After the fact, subjects are told that some of their companions might have been computers, and only then are they asked to rank the guests by “humanness.” MGonz has the rudiments of passing the Turing test, but it sets the bar far lower than the Loebner competition. It is a sort of remedial test, of a one-liner, invective-laden variety, where the objective is to rant while ignoring anything the other person is saying.
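The setup Bookstaber describes, a judge questioning two hidden interlocutors and then deciding which is which, can be sketched as a toy harness. This is a hedged illustration only: the function name `run_imitation_game` and the vacuous bot are invented here, and nothing below reflects the actual Loebner competition software.

```python
import random

def run_imitation_game(judge_ask, judge_guess, human, machine, rounds=3):
    """Toy Turing-test harness: a judge exchanges text with two hidden
    interlocutors, then guesses which label ('A' or 'B') hides the machine.
    Returns True if the machine fooled the judge (the guess was wrong)."""
    labels = ["A", "B"]
    random.shuffle(labels)  # hide who is behind which label
    hidden = {labels[0]: human, labels[1]: machine}
    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        question = judge_ask()
        for label, respond in hidden.items():
            transcript[label].append((question, respond(question)))
    guess = judge_guess(transcript)
    return hidden[guess] is not machine

# A bot that "keeps the conversation vacuous", as the text complains:
vacuous_bot = lambda q: "Interesting. Tell me more."
person = lambda q: "About '%s'? I could talk for hours." % q

fooled = run_imitation_game(
    judge_ask=lambda: "What did you have for breakfast?",
    judge_guess=lambda transcript: "A",  # a judge guessing blindly
    human=person,
    machine=vacuous_bot,
)
print(fooled)
```

A blind judge like this one is fooled about half the time, which is exactly why Turing framed the 50 per cent mark as the machine's target.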

Hobsbawm, Eric. 1999. Industry and Empire: The Birth of the Industrial Revolution. New York: New Press.
Hollier, Denis. 1989. Against Architecture: The Writings of Georges Bataille. Translated by Betsy Wing. Cambridge, MA: MIT Press.
Humphrys, Mark. 2008. “How My Program Passed the Turing Test.” In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, edited by Robert Epstein, Gary Roberts, and Grace Beber. New York: Springer.
Hutchison, Terence W. 1972. “The ‘Marginal Revolution’ and the Decline and Fall of English Political Economy.” History of Political Economy 4, no. 2: 442–68. doi: 10.1215/00182702-4-2-442.

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

For at least some users, Julia was good enough to pass Mauldin’s Turing test. For instance, one player hit on Julia for thirteen straight days, suggesting that he either had a robot fetish or was fooled. Mauldin was pleased. But he wasn’t done working on Julia. In 1991 Mauldin liberated Julia from the labyrinths of TinyMUD and entered her into the first-ever edition of a chatbot competition called the Loebner Prize, which has continued annually to this day. Unlike the experiment within Mauldin’s game, the Loebner Prize, which took place in Boston, was overtly framed as a Turing test. The setup was that the contest’s handful of judges were instructed to exchange messages over a computer with someone who might either be a chatbot or a real person.

Freeman and Company, 1976), 3.
74 “What I had not realized”: Weizenbaum, Computer Power and Human Reason, 7.
75 When thirty-three psychiatrists were shown anonymized transcripts: Ayse Saygin et al., “Turing Test: 50 Years Later,” Minds and Machines, no. 10 (2000), 463–518, https://is.gd/3x06nX.
75 The fame of Eliza and Parry: Vint Cerf, “PARRY Encounters the DOCTOR,” unpublished paper, January 21, 1973, https://goo.gl/iUiYn2.
76 In his PhD dissertation: Terry Winograd, “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language,” PhD dissertation, Massachusetts Institute of Technology, 1971.
77 “Grasp the pyramid”: “Winograd’s Shrdlu,” Cognitive Psychology 3, no. 1 (1972), https://goo.gl/iZXNHT.
78 The very first game to feature: Dennis Jerz, “Somewhere Nearby Is Colossal Cave: Examining Will Crowther’s Original ‘Adventure’ in Code and in Kentucky,” Digital Humanities Quarterly 1, no. 2 (2007), https://goo.gl/9uIhr.
79 “Playing adventure games without tackling”: “Colossal Cave Adventure Page,” website created by Rick Adams, https://goo.gl/M0O1kp.
80 If you told it, “I like friends”: information about TinyMUD, Gloria, and Julia, unless otherwise noted, from Michael Mauldin, interview with author, January 16, 2018.
80 “A primary goal of this effort”: Michael Mauldin, “Chatterbots, TinyMUDs, and the Turing Test,” Proceedings of the Twelfth National Conference on Artificial Intelligence, 1994, https://goo.gl/88WmCz.
81 “Julia, where is Jambon”: Michael Mauldin, chat logs emailed to author, January 16, 2018.
83 “Very few of the conversations”: this quote and subsequent information about the Loebner Prize contest bot from Mauldin, “Chatterbots, TinyMUDs, and the Turing Test.”
5. Rule Breakers
86 But in a visionary 1943 paper: Warren S. McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5 (1943): 115–33, https://goo.gl/aFejrr.
87 He called it the Mark I Perceptron: Perceptron information primarily from: Frank Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review 65, no. 6 (1958): 386–408; and “Mark I Perceptron Operators’ Manual,” a report by the Cornell Aeronautical Laboratory, February 15, 1960.
88 “The Navy revealed the embryo”: “New Navy Device Learns By Doing,” New York Times, July 8, 1958, https://goo.gl/Jnf6n9.
89 “Canadian Mafia”: Mark Bergen and Kurt Wagner, “Welcome to the AI Conspiracy: The ‘Canadian Mafia’ Behind Tech’s Latest Craze,” Recode, July 15, 2015, https://goo.gl/PeMPYK.
91 But when Rumelhart, Hinton, and Williams: David Rumelhart et al., “Learning representations by back-propagating errors,” Nature 323 (October 9, 1986): 533–36.
92 The result, Bengio and LeCun announced: Yann LeCun et al., “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, November 1998, 1, https://goo.gl/NtNKJB.
92 Toward the end of the 1990s: email from Geoffrey Hinton to author, July 28, 2018.
92 “Smart scientists,” he said: Bergen and Wagner, “Welcome to the AI Conspiracy.”
92 What’s more, they needed more layers: Yoshua Bengio, email to author, August 3, 2018.
92 In 2006 a groundbreaking pair of papers: Geoffrey Hinton and R.

But you typically didn’t know who the players were in real life. In this anonymity, Mauldin saw an opportunity to do a bold AI experiment. His idea was inspired by the computing pioneer Alan Turing, who back in 1950 had famously proposed a way to gauge a machine’s ability to pass as human. In what came to be known as a Turing test, a person exchanges typed messages with an unknown entity and tries to guess whether it is a human or a chatbot. The computer passes the test if it fools the person into thinking that it is actually alive. TinyMUD, Mauldin realized, was Turing testable. “I can build a program that can talk,” he said, “and then it can wander around this world and we can see how long it is before people figure out that it is a computer.”

pages: 238 words: 46

When Things Start to Think
by Neil A. Gershenfeld
Published 15 Feb 1999

I spent one more happily exasperating afternoon debating with a great cognitive scientist how we will recognize when Turing's test has been passed. Echoing Kasparov's "no way" statement, he argued that it would be a clear epochal event, and certainly is a long way off. He was annoyed at my suggestion that the true sign of success would be that we cease to find the test interesting, and that this is already happening. There's a practical sense in which a modern version of the Turing test is being passed on a daily basis, as a matter of some economic consequence. A cyber guru once explained to me that the World Wide Web had no future because it was too hard to figure out what was out there.

Just as he had to quantify the notion of a computer to answer Hilbert's problem, he had to quantify the concept of intelligence to even clearly pose his own question. In 1950 he connected the seemingly disparate worlds of human intelligence and digital computers through what he called the Imitation Game, and what everyone else has come to call the Turing test. This presents a person with two computer terminals. One is connected to another person, and the other to a computer. By typing questions on both terminals, the challenge is to determine which is which. This is a quantitative test that can be run without having to answer deep questions about the meaning of intelligence.

Nothing was learned about human intelligence by putting a human inside a machine, and the argument holds that nothing has been learned by putting custom chips inside a machine. Deep Blue is seen as a kind of idiot savant, able to play a good game of chess without understanding why it does what it does. This is a curious argument. It retroactively adds a clause to the Turing test, demanding that not only must a machine be able to match the performance of humans at quintessentially intelligent tasks such as chess or conversation, but the way that it does so must be deemed to be satisfactory. Implicit in this is a strong technological bias, favoring a theory of intelligence appropriate for a particular kind of machine.

pages: 237 words: 64,411

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence
by Jerry Kaplan
Published 3 Aug 2015

Let’s look at another example of language shifting to accommodate new technology, this one predicted by Alan Turing. In 1950 he wrote a thoughtful essay called “Computing Machinery and Intelligence” that opens with the words “I propose to consider the question, ‘Can machines think?’” He goes on to define what he calls the “imitation game,” what we now know as the Turing Test. In the Turing Test, a computer attempts to fool a human judge into thinking it is human. The judge has to pick the computer out of a lineup of human contestants. All contestants are physically separated from the judges, who communicate with them through text only. Turing speculates, “I believe that in about fifty years’ time it will be possible to programme computers … to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning.”13 As you might imagine, enthusiastic geeks stage such contests regularly, and by 2008, synthetic intellects were good enough to fool the judges into believing they were human 25 percent of the time.14 Not bad, considering that most contest entrants were programmed by amateurs in their spare time.

The Turing Test has been widely interpreted as a sort of coming-of-age ritual for AI, a threshold at which machines will have demonstrated intellectual prowess worthy of human respect. But this interpretation of the test is misplaced; it wasn’t at all what Turing had in mind. A close reading of his actual paper reveals a different intent: “The original question, ‘Can machines think?’

Marcy Gordon and Daniel Wagner, “‘Flash Crash’ Report: Waddell & Reed’s $4.1 Billion Trade Blamed for Market Plunge,” Huffington Post, December 1, 2010, http://www.huffingtonpost.com/2010/10/01/flash-crash-report-one-41_n_747215.html.
3. http://rocketfuel.com.
4. Steve Omohundro, “Autonomous Technology and the Greater Human Good,” Journal of Experimental and Theoretical Artificial Intelligence 26, no. 3 (2014): 303–15.
5. CAPTCHA stands for “Completely Automated Public Turing Test to tell Computers and Humans Apart.” Mark Twain famously said, “It is my … hope … that all of us … may eventually be gathered together in heaven … except the inventor of the telephone.” Were he alive today, I’m confident he would include the inventor of the CAPTCHA. Regarding the use of low-skilled low-cost labor to solve these, see Brian Krebs, “Virtual Sweatshops Defeat Bot-or-Not Tests,” Krebs on Security (blog), January 9, 2012, http://krebsonsecurity.com/2012/01/virtual-sweatshops-defeat-bot-or-not-tests/.
5.

pages: 246 words: 81,625

On Intelligence
by Jeff Hawkins and Sandra Blakeslee
Published 1 Jan 2004

He felt computers could be intelligent, but he didn't want to get into arguments about whether this was possible or not. Nor did he think he could define intelligence formally, so he didn't even try. Instead, he proposed an existence proof for intelligence, the famous Turing Test: if a computer can fool a human interrogator into thinking that it too is a person, then by definition the computer must be intelligent. And so, with the Turing Test as his measuring stick and the Turing Machine as his medium, Turing helped launch the field of AI. Its central dogma: the brain is just another kind of computer. It doesn't matter how you design an artificially intelligent system, it just has to produce humanlike behavior.

Presents the famous "Chinese Room" argument against computation as a model for the mind. You can find many descriptions and discussions of Searle's thought experiment on the World Wide Web.

Turing, A. M. "Computing Machinery and Intelligence," Mind, vol. 59 (1950): pp. 433–60. Presents the famous "Turing Test" for detecting the presence of intelligence. Again, many references and discussions on the Turing Test can be found on the World Wide Web.

Palm, Günther. Neural Assemblies: An Alternative Approach to Artificial Intelligence (New York: Springer Verlag, 1982). To understand how the cortex works and how it stores sequences of patterns, it helps to be familiar with auto-associative memories.

We already know geometric theorems that deal with rotation, scale, and displacement, and we can easily encode them as computer algorithms— so we're halfway there. AI pundits made grand claims about how quickly computer intelligence would first match and then surpass human intelligence. Ironically, the computer program that came closest to passing the Turing Test, a program called Eliza, mimicked a psychoanalyst, rephrasing your questions back at you. For example, if a person typed in, "My boyfriend and I don't talk anymore," Eliza might say, "Tell me more about your boyfriend" or "Why do you think your boyfriend and you don't talk anymore?" Designed as a joke, the program actually fooled some people, even though it was dumb and trivial.
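Eliza's trick of rephrasing can be reproduced with a handful of regular-expression rules. The sketch below is a minimal illustration in that spirit; the patterns are invented for this example and are not Weizenbaum's actual script, though the mechanic (match a pattern, pour the captured words into a canned template) is the same.

```python
import re

# Invented, illustrative rules: (pattern to match, template for the reply).
RULES = [
    (r"my (.+) and i don't (.+) anymore",
     r"Why do you think your \1 and you don't \2 anymore?"),
    (r"my (.+)", r"Tell me more about your \1."),
    (r"i am (.+)", r"How long have you been \1?"),
]

def eliza(utterance: str) -> str:
    """Return an ELIZA-style rephrasing of the user's utterance."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return match.expand(template)
    return "Please go on."  # default when no rule matches

print(eliza("My boyfriend and I don't talk anymore."))
# -> Why do you think your boyfriend and you don't talk anymore?
```

Weizenbaum's real program used a richer script of ranked keywords and reassembly rules, but even this toy version shows how little understanding is needed to keep the illusion going.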

pages: 322 words: 88,197

Wonderland: How Play Made the Modern World
by Steven Johnson
Published 15 Nov 2016

Deep Blue, the computer that ultimately defeated Garry Kasparov at chess, had been a Grand Challenge a decade before, exceeding Alan Turing’s hunch that chess-playing computers could be made to play a tolerable game. Horn was interested in Turing’s more celebrated challenge: the Turing Test, which he first formulated in a 1950 essay on “Computing Machinery and Intelligence.” In Turing’s words, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” The deception of the Turing Test had nothing to do with physical appearances; the classic Turing Test scenario involves a human sitting at a keyboard, engaged in a text-based conversation with an unknown entity who may or may not be a machine.

Imagine a world populated by machines or digital simulations that fill our lives with comparable illusion, only this time the virtual beings are not following a storyboard sketched out in Disney’s studios, but instead responding to the twists and turns and unmet emotional needs of our own lives. (The brilliant Spike Jonze film Her imagined this scenario using only a voice, though admittedly the voice belonged to Scarlett Johansson.) There is likely to be the equivalent of a Turing Test for artificial emotional intelligence: a machine real enough to elicit an emotional attachment. It may well be that the first simulated intelligence to trigger that connection will be some kind of voice-only assistant, a descendant of software like Alexa or Siri—only these assistants will have such fluid conversational skills and growing knowledge of our own individual needs and habits that we will find ourselves compelled to think of them as more than machines, just as we were compelled to think of those first movie stars as more than just flickering lights on a fabric screen.

Passing for a human required both an extensive knowledge about the world and a natural grasp of the idiosyncrasies of human language. Deep Blue could beat the most talented chess player on the planet, but you couldn’t have a conversation with it about the weather. Horn and his team were looking for a comparable milestone that would spur research into the kind of fluid, language-based intelligence that the Turing Test was designed to measure. One night, Horn and his colleagues were dining out at a steak house near IBM’s headquarters and noticed that all the restaurant patrons had suddenly gathered around the televisions at the bar. The crowd had assembled to watch Ken Jennings continue his legendary winning streak at the game show Jeopardy!

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

See also Transparency standards
Transparency standards: establishment of for Big Nine, 251; establishment of global, 252
Tribes, AI: anti-humanistic bias in, 57; characteristics, 56; groupthink, 53; homogeneity, 52; lack of diversity, 56; leaders, 53–65; need to address diversity within, 57–58; sexual assault and harassment by members, 55–56; unconscious bias training programs and, 56; unconscious biases of members, 52; university education and homogeneity of members, 58–61, 64
Trudeau, Justin, 236
TrueNorth neuromorphic chip, 92
Trump, Donald: administration, 70, 75, 85; campaign climate change comments, 75; science and technology research budget cuts, 243
Turing, Alan, 24–25, 26, 27–29, 30, 31, 35, 259; morphogenesis theory, 204; neural network concept, 27–29; “On Computable Numbers, With an Application to the Entscheidungsproblem,” 24. See also Turing Test
Turing test, 27–28, 50, 146, 169, 184
Turriano, Juanelo: mechanical monk creation of, 18, 25
Tversky, Amos, 108
2000 HUB5 English, 181
2001: A Space Odyssey, 2, 35; HAL 9000, 2, 35, 39
U.S. Army: ENIAC, 27; Futures Command, 212
U.S. Department of Energy, Summit supercomputer and, 146
U.S. Digital Service, 212
U.S.

A year later, in a paper published in the philosophy journal Mind, Turing addressed the questions raised by Hobbes, Descartes, Hume, and Leibniz. In it, he proposed a thesis and a test: If someday, a computer was able to answer questions in a manner indistinguishable from humans, then it must be “thinking.” You’ve likely heard of the paper by another name: the Turing test. The paper began with a now-famous question, one asked and answered by so many philosophers, theologians, mathematicians, and scientists before him: “Can machines think?” But Turing, sensitive to the centuries-old debate about mind and machine, dismissed the question as too broad to ever yield meaningful discussion.

How would you know that you were actually thinking original thoughts? Now that you know the long history of these questions, the small group of people who built the foundational layer for AI, and the key practices still in play, I’d like to offer you some answers. Yes, machines can think. Passing a conversational test, like the Turing test, or the more recent Winograd schema—which was proposed by Hector Levesque in 2011 and focuses on commonsense reasoning, challenging an AI to answer a simple question that has ambiguous pronouns—doesn’t necessarily measure an AI system’s ability in other areas.46 It just proves that a machine can think using a linguistic framework, like we humans do.

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

Turing went on in his paper to say that if you could not differentiate the computer or machine from a human within 5 minutes, then it was sufficiently human-like to have passed his test of basic machine intelligence or cognition. Researchers who have since added to Turing’s work classify the imitation game as one version or scenario of what is now more commonly known as the Turing Test. An autonomous, self-driving car won’t need to pass the Turing Test to put a taxi driver out of work. While computers are not yet at the point of regularly passing the Turing Test, we are getting closer to that point. On 7th June 2014, the Royal Society of London hosted a Turing Test competition. The competition, which occurred on the 60th anniversary of Turing’s death, included a Russian chatter bot named Eugene Goostman, which successfully managed to convince 33 per cent of its human judges that it was a 13-year-old Ukrainian who had learnt English as a second language.

These algorithms don’t learn language like a human; they identify a phrase through recognition, look it up on a database and then deliver an appropriate response. Recognising speech and being able to carry on a conversation are two very different achievements. What would it take for a computer to fool a human into thinking it was a human, too? The Turing Test or Not… In 1950, Alan Turing published a famous paper entitled “Computing Machinery and Intelligence”. In his paper, he asked not just if a computer or machine could be considered something that could “think”, but more specifically “Are there imaginable digital computers which would do well in the imitation game?”

Computers are going to reach the level of Americans before Brits...” Professor Geoff Hinton, from an interview with the Guardian newspaper, 21st May 2015 These types of algorithms, which allow for leaps in cognitive understanding for machines, have only been possible with the application of massive data processing and computing power. Is the Turing Test or a machine that can mimic a human the required benchmark for human interactions with a computer? Not necessarily. First of all, we must recognise that we don’t need an MI to be completely human-equivalent for it to be disruptive to employment or our way of life. To realise why a human-equivalent computer “brain” is not necessarily the critical goal, we need to understand the progression of AI through its three distinct phases: • Machine Intelligence—rudimentary machine intelligence or cognition that replaces some element of human thinking, decision-making or processing for specific tasks.

pages: 315 words: 92,151

Ten Billion Tomorrows: How Science Fiction Technology Became Reality and Shapes the Future
by Brian Clegg
Published 8 Dec 2015

(Turing’s original statement of his idea was more complex, but this is the important part of it.) For decades since, computer scientists have been trying to beat this so-called Turing Test, and you will regularly see news items saying that it has been achieved. They are being generous with the truth. The Turing Test hasn’t been beaten and is still probably a decade or two away from successful completion. While Hal could indubitably win the Turing Test (I’m not so sure about the taciturn astronaut, Dave), the actual conditions under which competitions based on the test take place are far too trivial to demonstrate any degree of certainty.

Stork (ed.), Hal’s Legacy (Cambridge, MA: MIT Press, 2000), pp. 145–50.
Apple’s Knowledge Navigator appears at a number of locations on YouTube including www.youtube.com/watch?v=QRH8eimU_20, accessed September 3, 2014.
The claim that the Eugene Goostman chatbot passed the Turing Test is described in BBC News, “Computer AI passes Turing test in ‘world first,’” accessed September 2, 2014, at www.bbc.co.uk/news/technology-27762088.
The arguments that Hal isn’t really intelligent are from the Douglas B. Lenat section, “From 2001 to 2001: Common Sense and the Mind of HAL” in David G. Stork (ed.), Hal’s Legacy (Cambridge, MA: MIT Press, 2000), pp. 193–94.

You can try out a modern implementation of ELIZA at my website www.universeinsideyou.com/experiment10.html. Such programs have moved on since. (It’s hardly surprising, given that at the time of writing, ELIZA is approaching her fiftieth birthday.) In 2014, much was made of an apparent win of the Turing Test by a program called Eugene Goostman, which simulated a thirteen-year-old Ukrainian boy whose lack of English as a first language was one of the techniques used to evade detection. I couldn’t test the Goostman chatbot (as these programs are called) myself, as it has been strangely unavailable since it was supposed to have won, but here is a short conversation I had with one of its leading competitors, Cleverbot: Brian: Hello, how are you?

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Yuval Harari, Homo Deus (London: Harvill Secker, 2016), 120.
35 See, for example, the website of The Loebner Prize in Artificial Intelligence, http://www.loebner.net/Prizef/loebner-prize.html, accessed 1 June 2018.
36 José Hernández-Orallo, “Beyond the Turing Test”, Journal of Logic, Language and Information, Vol. 9, No. 4 (2000), 447–466.
37 “Turing Test Transcripts Reveal How Chatbot ‘Eugene’ Duped the Judges”, Coventry University, 30 June 2015, http://www.coventry.ac.uk/primary-news/turing-test-transcripts-reveal-how-chatbot-eugene-duped-the-judges/, accessed 1 June 2018.
38 Various competitions are now held around the world in an attempt to find a ‘chatbot’, as conversational programs are known, which is able to pass the Imitation Game.

Factors which assisted Goostman included that English (the language in which the test was held) was not his first language, his apparent immaturity and answers which were designed to use humour to deflect the attention of the questioner from the accuracy of the response. Unsurprisingly, the world did not herald a new age in AI design. For criticism of the Goostman ‘success’, see Celeste Biever, “No Skynet: Turing Test ‘Success’ Isn’t All It Seems”, The New Scientist, 9 June 2014, http://www.newscientist.com/article/dn25692-no-skynet-turing-test-success-isnt-all-it-seems.html, accessed 1 June 2018. The author Ian McDonald offers another objection: “Any AI smart enough to pass a Turing test is smart enough to know to fail it”. Ian McDonald, River of Gods (London: Simon & Schuster, 2004), 42.
39 This definition is adapted from that used by the UK Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain Fit for the Future (November 2017), 37, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/664563/industrial-strategy-white-paper-web-ready-version.pdf, accessed 1 June 2018.
40 “What Is Artificial Intelligence?”

They can lead ultimately to the absurd and frightening scenario imagined in Kafka’s The Trial, where the protagonist is accused, condemned and ultimately executed for a crime which is never explained to him.31 Most of the universal definitions of AI that have been suggested to date fall into one of two categories: human-centric and rationalist.32 3.1 Human-Centric Definitions Humanity has named itself homo sapiens: “wise man”. It is therefore perhaps unsurprising that some of the first attempts at defining intelligence in other entities referred to human characteristics. The most famous example of a human-centric definition of AI is known popularly as the “Turing Test”. In a seminal 1950 paper, Alan Turing asked whether machines could think. He suggested an experiment called the “Imitation Game”.33 In the exercise, a human invigilator must try to identify which of the two players is a man pretending to be a woman, using only written questions and answers. Turing proposed a version of the game in which the AI machine takes the place of the man.

pages: 189 words: 57,632

Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future
by Cory Doctorow
Published 15 Sep 2008

Kurzweil has the answer. "If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesn't have to match the quantum state of my every neuron, either: if you meet me the next day, I'd pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergo from day to day, we don't examine the assumption that we are the same person closely.

I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions: most prominently: are you still you when you've been restored from backup? The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people.

There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don't have any, most animals don't have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these. "Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child's mastery of language.

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know
by Richard Watson
Published 5 Nov 2013

While the idea of artificial intelligence (AI) goes back to the mid-50s, Isaac Asimov was writing about robot intelligence in 1942 (the word “robot” comes from a Czech word often translated as “drudgery”). A generally accepted test for artificial machine intelligence, the Turing test, also dates back to the 1950s, when the British mathematician Alan Turing suggested that we would have AI when it was possible for someone to talk to a machine without realizing it was a machine. The Turing test is problematic on some levels, though. First, a small child is generally intelligent, but most would probably fail the test. Second, if something artificial were to develop consciousness, why would it automatically let us know?

Instead scientists and developers focused on specific problems, such as speech and text recognition and computer vision. However, we may now be less than a decade away from seeing the AI vision become a reality. The Chinese room experiment In 1980, John Searle, an American philosopher, argued in a paper that a computer, or perhaps more accurately a bit of software, could pass the Turing test and behave much like a human being at a distance without being truly intelligent—that words, symbols or instructions could be interpreted or reacted to without any true understanding. In what has become known as the Chinese room thought experiment (because of the use of Chinese characters to interact with an unknown person—actually a computer), Searle argued that it’s perfectly possible for a computer to simulate the illusion of intelligence, or give the illusion of understanding a human being, without really doing so.

Already, seven cars have traveled 1,600km (1,000 miles) with no driver and 225,000km (140,000 miles) with occasional human intervention. Are these examples realistic? Some experts might say yes. Ray Kurzweil, an American futurist and inventor, has made a public bet with Mitchell Kapor, the founder of Lotus software, that a computer will pass the Turing test by 2029. Other experts say no. Bill Calvin, an American theoretical neurophysiologist, suggests the human brain is so “buggy” that computers will never be able to emulate it or, if they do, machines will inherit our foibles and emotional inadequacies along with our intelligence. Think of the computer called HAL in the film 2001: A Space Odyssey.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

Kurzweil has predicted for decades, and still believes, that AGI will be achieved sometime around the year 2029. Unlike many AI researchers, he continues to have faith in the Turing test as an effective measure of human-level intelligence. Conceived by Alan Turing in his 1950 paper, the test essentially amounts to a chat session in which a judge attempts to determine if the conversers are human or machine. If the judge, or perhaps a panel of judges, cannot distinguish the computer from a human, then the computer is said to pass the Turing test. Many experts are dismissive of the Turing test as an effective measure of human-level machine intelligence, in part because it has proven to be susceptible to gimmicks.

Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Penguin Books, 2005. 30. Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed, Penguin Books, 2012. 31. Ford, Interview with Ray Kurzweil, in Architects of Intelligence, pp. 230–231. 32. Mitch Kapor and Ray Kurzweil, “A wager on the Turing test: The rules,” Kurzweil AI Blog, April 9, 2002, www.kurzweilai.net/a-wager-on-the-turing-test-the-rules. 33. Sean Levinson, “A Google executive is taking 100 pills a day so he can live forever,” Elite Daily, April 15, 2015, www.elitedaily.com/news/world/google-executive-taking-pills-live-forever/1001270. 34. Ford, Interview with Ray Kurzweil, in Architects of Intelligence, pp. 240–241. 35.

In 2014, for example, in a contest held at the University of Reading in the United Kingdom, a chatbot that emulated a thirteen-year-old Ukrainian boy managed to fool the judges into declaring that an algorithm had, for the first time, passed the Turing test. The conversation had lasted a mere five minutes, and virtually no one in the field of artificial intelligence took the claim seriously. Kurzweil nonetheless believes that a much more robust version of the test would indeed be a powerful indicator of true machine intelligence. In 2002, Kurzweil entered into a formal $20,000 bet with the software entrepreneur Mitch Kapor.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

It took a lot of clever technology to be able to perform these tasks electronically, but the computer can now easily outperform humans at those specific tasks. (There are, of course, also many unresolved software challenges such as playing the game Go at a professional level.)
Turing Test
The problem of defining intelligence was recognized very early, and it led the great logician Alan Turing to propose a functional definition, now known as the Turing Test, in 1950. This test was simply that a computer would be considered intelligent when it could convince a human that the computer was a human. The idea is that the human would communicate using a text messaging-like program so that they could not see or hear the other party, and at the end of a conversation would state whether they thought that the other party was man or machine.

A computer could certainly be intelligent without necessarily being good at simulating a human. But worse, some people who were not familiar with AI technologies have already been fooled into thinking that a computer is actually a human. A good example is the Eugene Goostman program, which arguably passed the actual Turing test in 2014 in trials held at the Royal Society. But more importantly, the Turing Test provides no insights into what is required to build an intelligent machine, where the gaps in current technologies lie, and how they might be addressed. Fortunately, one thing that AI research has provided is a much deeper understanding of intelligence and cognition.

Science vs. vitalism
4. The vital mind
5. Computers cannot think now
6. Diminishing returns
7. AI in the background
8. Robots leave factories
9. Intelligent tasks
10. Artificial General Intelligence (AGI)
11. Existence proof
12. Simulating neurons, feathers
13. Moore's law
14. Definition of intelligence
15. Turing Test
16. Robotic vs cognitive intelligence
17. Development of intelligence
18. Four year old child
19. Recursive self-improvement
20. Busy Child
21. AI foom
2. Computers Thinking About People
1. The question
2. The bright future
3. Man and machine
4. Rapture of the geeks
5. Alternative views
6. AGI versus human condition
7.

pages: 561 words: 167,631

2312
by Kim Stanley Robinson
Published 22 May 2012

“Thanks,” Swan said as she flopped down. “It’s pretty heavy in here. Where do you all come from?” “I was made in Vinmara,” the most female one said. “What about you?” Swan asked the other two. “I cannot pass a Turing test,” one of them replied stiffly. “Would you like to play chess?” And the three of them laughed. Open mouths—teeth, gums, tongue, inner cheeks, all very human in look and motion. “No thanks,” Swan said. “I want to try a Turing test. Or why don’t you test me?” “How would we do that?” “How about twenty questions?” “That means questions that can be answered by yes or no?” “That’s right.” “But one could just ask us if the other is a simulacrum or not, and the other answers, and that would take only one question.”

“You’ve already got recursive hypercomputation.” “Not perhaps the final word in the matter.” “So do you think you’re getting smarter? I mean wiser? I mean more conscious?” “Those are very general terms.” “Of course they are, so answer me! Are you conscious?” “I don’t know.” “Interesting. Can you pass a Turing test?” “I cannot pass a Turing test, would you like to play chess?” “Ha! If only it were chess! That’s what I’m after, I guess. If it were chess, what move should I make next?” “It’s not chess.” Extracts (11) Mistakes made in the rush of the Accelerando left their mark on later periods. As in island biogeography, where widely dispersed enclaves and refugia always experience rapid change, and even speciation, we see one mistake was that no generally agreed-upon system of governance in space was ever established.

“A feeb.” Wahram pondered this. Asking How smart are you? was probably never a polite thing. Besides, no one was ever very good at making such an assessment. “What do you like to think about?” he asked instead. Pauline said, “I am designed for informative conversation, but I cannot usually pass a Turing test. Would you like to play chess?” He laughed. “No.” Swan was looking out the window. Wahram considered her, went back to focusing on his meal. It took a lot of rice to dilute the fiery chilies in the dish. Swan muttered bitterly to herself, “You insist on interfering, you insist on talking, you insist on pretending that everything is normal.”

pages: 523 words: 154,042

Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks
by Scott J. Shapiro

Hacking is less about breaking encryption than breaking something around the encryption in order to sidestep it.
50 million lines of code: "Windows 10 Lines of Code," Microsoft, 2020, https://answers.microsoft.com/en-us/windows/forum/all/windows-10-lines-of-code/a8f77f5c-0661-4895-9c77-2efd42429409.
Turing Test: Turing set out his test for intelligence in Alan Turing, "Computing Machinery and Intelligence," Mind 59, no. 236 (October 1950): 433–60. A Turing Test has a human judge and a computer subject attempting to appear human. A "reverse" Turing Test has a computer judge and a human subject trying to appear human. CAPTCHA—the irritating image-recognition challenge that websites use for detecting bots—stands for "Completely Automated Public Turing test to tell Computers and Humans Apart."
principles of metacode: Alan Turing, "On Computable Numbers with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, 1936, 230–65.

Metacode was discovered by Alan Turing, the ingenious mathematician whose tragic life is featured in the Academy Award–winning movie The Imitation Game. Turing is best known for helping break the German Enigma code during World War II and for developing a test for artificial intelligence, now known as the Turing Test. The Turing Test claims that a computer possesses intelligence when it can fool a human into thinking that it's human. Despite his many contributions to his country, and to humanity, Turing was prosecuted and punished by the British government for having had sex with another man. He died in 1954, by suicide, after eating a cyanide-laced apple.

approximately 10 percent for IT: Flexera 2022 State of Tech Spend Pulse Report, https://info.flexera.com/FLX1-REPORT-State-of-Tech-Spend.
24 percent of that on security: Hiscox Cyber Readiness Report 2022, https://www.hiscox.com/documents/Hiscox-Cyber-Readiness-Report-2022.pdf.
Our browsers couldn't care less: The human reliance on visual clues in recognition is so pronounced that it is the main way in which CAPTCHA detects bots. CAPTCHA is a reverse Turing Test. Instead of a computer trying to convince a human that it's a human, CAPTCHA makes the human convince the computer that it's a human. Computers identify humans by measuring the accuracy of their visual identification skills.
parent company of Google: Security certificates are difficult to forge because they are digitally signed by the holder and the certification authority.

pages: 331 words: 104,366

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins
by Garry Kasparov
Published 1 May 2017

Chess had a long-standing reputation as a unique nexus of the human intellect, and building a machine that could beat the world champion would mean building a truly intelligent machine. Turing’s name is forever attached to a thought experiment later made real, the “Turing test.” The essence is whether or not a computer can fool a human into thinking it is human and if yes, it is said to have passed the Turing test. Even before I faced Deep Blue, computers were beginning to pass what we can call the “chess Turing test.” They still played poorly and often made distinctively inhuman moves, but there were complete games between computers that wouldn’t have looked out of place in any strong human tournament.

Chess-playing software on PCs and mobile devices and the Internet has mitigated this problem by providing a ready supply of opponents of all levels with 24/7 availability, although this also puts chess into direct competition with the never-ending supply of new online games and diversions. It also poses an interesting chess Turing test since you have no way to be sure whether you are playing against a computer or a human when you play online. Most people are far more engaged when playing against other humans and find facing computer opponents a sterile experience even when the machine has been dumbed down to a competitive level.

The team blamed two of the losses on mistakes in the opening book (another recurring theme), although looking at its Hanover games now, it also just didn't play very good chess. Of more interest was a little test for me, proposed by my friend Frederic Friedel, who was one of the Hanover event's organizers. I was shown the games from the first five rounds of the tournament to see if I could figure out which player was Deep Thought. It was a chess twist on the Turing test, to see if a computer could pass for a Grandmaster. I managed to pick out two correctly and narrowed down another round to two games before choosing the wrong one, so three of the computer's five games passed the test. To me, this was a better indicator of computer chess progress than its score in the tournament.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

Contrary to common interpretations, I doubt that the test was intended as a true definition of intelligence, in the sense that a machine is intelligent if and only if it passes the Turing test. Indeed, Turing wrote, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” Another reason not to view the test as a definition for AI is that it’s a terrible definition to work with. And for that reason, mainstream AI researchers have expended almost no effort to pass the Turing test. The Turing test is not useful for AI because it’s an informal and highly contingent definition: it depends on the enormously complicated and largely unknown characteristics of the human mind, which derive from both biology and culture.

Yet every step towards an explanation of how the mind works is also a step towards the creation of the mind’s capabilities in an artifact—that is, a step towards artificial intelligence. Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests, or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived. Evolutionary origins Consider a lowly bacterium, such as E. coli.

Turing’s 1950 paper, “Computing Machinery and Intelligence,”42 is the best known of many early works on the possibility of intelligent machines. Skeptics were already asserting that machines would never be able to do X, for almost any X you could think of, and Turing refuted those assertions. He also proposed an operational test for intelligence, called the imitation game, which subsequently (in simplified form) became known as the Turing test. The test measures the behavior of the machine—specifically, its ability to fool a human interrogator into thinking that it is human. The imitation game serves a specific role in Turing’s paper—namely as a thought experiment to deflect skeptics who supposed that machines could not think in the right way, for the right reasons, with the right kind of awareness.

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior—and, like Turing’s test, it could be implemented in a formalized question-and-answer format. But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the “mind” of the machine. By contrast, an ACT is intended to do exactly the opposite: it seeks to reveal a subtle and elusive property of the machine’s mind. Indeed, a machine might fail the Turing test, because it cannot pass for a human, but it might pass an ACT, because it exhibits behavioral indicators of consciousness. This, then, is the underlying basis of our ACT proposal.

Defense Department), 14 Systems Reply to Chinese Room conundrum, 21–22 technological progress versus human social development, 2 techno-optimism about AI consciousness, 18, 23–26, 31, 34 merging with AI and, 73 Tegmark, Max, 4 Terminator films, 3, 104 testing for consciousness in machines, 5–6, 46–71 ACT test, 50–57, 60, 65, 67 chip test, 44, 57–61, 65, 67 difficulties and complications in, 46–51 IIT (integrated information theory) and, 61–65 mind-machine mergers and, 69–71 responding when machines test positive for consciousness, 65–69 separation of mind from body, ability to imagine, 51, 55, 57 Turing test, 56 theory of mind, 58n3 Tononi, Giulio, 61–64 Transcendence (film), 124–25 transhumanism, 13–15, 151–52. See also merging humans with AI on AI consciousness, 16 defined, 73 enhancements rejected by, 160n1 patternism and, 77–81 World Transhumanist Association, 151 Transhumanist Declaration, 14, 151–52 “The Transhumanist Frequently Asked Questions,” 80, 95–96, 152 TrueNorth chip (IBM), 64 Turing, Alan, and Turing test, 56, 140 Turner, Edwin, 41–43, 54 2001: A Space Odyssey (film), 53 UNESCO/COMEST report on Precautionary Principle, 66 uploading patternism and, 80–81, 82–84, 95 software, mind viewed as, 122–26, 133, 136–37, 146–47 vegetative state, human patients in, 61–62 Westworld (TV show), 17, 33, 45 Witkowski, Olaf, 41–42 World Transhumanist Association, 151 X-files (TV show), 116 zombies, 7, 41, 49–50, 51, 56, 88, 102, 131

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

“If we presume an intelligent alien life lands on earth tomorrow, why would we expect them to pass the Turing Test or any other measure that’s based off of what humans do?” Humans have general intelligence, but general intelligence need not be humanlike. “Nothing says that intelligence—and personhood, for that matter, on the philosophical side—is limited to just the human case.” The 2015 sci-fi thriller Ex Machina puts a modern twist on the Turing test. Caleb, a computer programmer, is asked to play the part of a human judge in a modified Turing test. In this version of the test, Caleb is shown that the AI, Ava, is clearly a robot.

Clark explained that AIs will need the ability to interact with humans and that involves abilities like understanding natural language, but that doesn’t mean that the AI’s behavior or the underlying processes for their intelligence will mirror humans’. “Why would we expect a silica-based intelligence to look or act like human intelligence?” he asked. Clark cited the Turing test, a canonical test of artificial intelligence, as a sign of our anthropocentric bias. The test, first proposed by mathematician Alan Turing in 1950, attempts to assess whether a computer is truly intelligent by its ability to imitate humans. In the Turing test, a human judge sends messages back and forth between both a computer and another human, but without knowing which is which. If the computer can fool the human judge into believing that it is the human, then the computer is considered intelligent.

Müller (ed.), Fundamental Issues of Artificial Intelligence (Berlin: Springer Synthese Library, 2016), http://www.nickbostrom.com/papers/survey.pdf.
234 "the dissecting room and the slaughter-house": Mary Shelley, Frankenstein, Or, The Modern Prometheus (London: Lackington, Hughes, Harding, Mavor & Jones, 1818), 43.
234 Golem stories: Executive Committee of the Editorial Board, Ludwig Blau, Joseph Jacobs, Judah David Eisenstein, "Golem," JewishEncyclopedia.com, http://www.jewishencyclopedia.com/articles/6777-golem#1137.
235 "the dream of AI": Micah Clark, interview, May 4, 2016.
235 "building human-like persons": Ibid.
236 "Why would we expect a silica-based intelligence": Ibid.
236 Turing test: The Loebner Prize runs the Turing test every year. While no computer has passed the test by fooling all of the judges, some programs have fooled at least one judge in the past. Tracy Staedter, "Chat-Bot Fools Judges Into Thinking It's Human," Seeker, June 9, 2014, https://www.seeker.com/chat-bot-fools-judges-into-thinking-its-human-1768649439.html.

pages: 720 words: 197,129

The Innovators: How a Group of Inventors, Hackers, Geniuses and Geeks Created the Digital Revolution
by Walter Isaacson
Published 6 Oct 2014

Eventually such a machine could develop its own conceptions about how to figure things out. But even if a machine could mimic thinking, Turing's critics objected, it would not really be conscious. When the human player of the Turing Test uses words, he associates those words with real-world meanings, emotions, experiences, sensations, and perceptions. Machines don't. Without such connections, language is just a game divorced from meaning. This objection led to the most enduring challenge to the Turing Test, which came in a 1980 essay by the philosopher John Searle. He proposed a thought experiment, called the Chinese Room, in which an English speaker with no knowledge of Chinese is given a comprehensive set of rules instructing him on how to respond to any combination of Chinese characters by handing back a specified new combination of Chinese characters.

by using megadoses of computing power: it had 200 million pages of information in its four terabytes of storage, of which the entire Wikipedia accounted for merely 0.2 percent. It could search the equivalent of a million books per second. It was also rather good at processing colloquial English. Still, no one who watched would bet on its passing the Turing Test. In fact, the IBM team leaders were afraid that the show’s writers might try to turn the game into a Turing Test by composing questions designed to trick a machine, so they insisted that only old questions from unaired contests be used. Nevertheless, the machine tripped up in ways that showed it wasn’t human. For example, one question was about the “anatomical oddity” of the former Olympic gymnast George Eyser.

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain,” declared a famous brain surgeon, Sir Geoffrey Jefferson, in the prestigious Lister Oration in 1949.92 Turing’s response to a reporter from the London Times seemed somewhat flippant, but also subtle: “The comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”93 The ground was thus laid for Turing’s second seminal work, “Computing Machinery and Intelligence,” published in the journal Mind in October 1950.94 In it he devised what became known as the Turing Test. He began with a clear declaration: “I propose to consider the question, ‘Can machines think?’ ” With a schoolboy’s sense of fun, he then invented a game—one that is still being played and debated—to give empirical meaning to that question. He proposed a purely operational definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine is not “thinking.”

pages: 118 words: 35,663

Smart Machines: IBM's Watson and the Era of Cognitive Computing (Columbia Business School Publishing)
by John E. Kelly Iii
Published 23 Sep 2013

If the judge couldn’t tell the human from the machine based on their responses, the machine would have passed the test.1 With this test, Turing set a standard for measuring the capabilities of machines that has not yet been met. While the IBM researchers are intrigued by the Turing test, they have no plans to prepare Watson to take it. A Turing test would merely show how good Watson is at imitating human beings and our quirks and social conventions. Instead, they want to concentrate on further developing the machine so it can become an expert, trusted advisor to humans, such as the oncologists at Memorial Sloan-Kettering Cancer Center.

“One thing I think about is how we have become slaves of the infrastructure rather than having the infrastructure work for us,” he says. “Cities should help people live their lives, not get in the way.” CODA: AN ALLIANCE OF HUMAN AND MACHINE Ever since Watson won at Jeopardy!, people have been asking the research scientists who designed the machine if they’d like to try to pass the so-called Turing test. That’s an exercise suggested by computing pioneer Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” where he raised the question: “Can machines think?” He suggested that to test whether a machine can think, a human judge should have a written conversation via computer screen and keyboard with another human and a computer.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Turing viewed the physical simulation of a person as unnecessary to demonstrate intelligence. However, other researchers have proposed a total Turing test, which requires interaction with objects and people in the real world. To pass the total Turing test, a robot will need •computer vision and speech recognition to perceive the world; •robotics to manipulate objects and move about. These six disciplines compose most of AI. Yet AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence. The quest for “artificial flight” succeeded when engineers and inventors stopped imitating birds and started using wind tunnels and learning about aerodynamics.

The ELIZA program and Internet chatbots such as MGONZ (Humphrys, 2008) and NATACHATA (Jonathan et al., 2009) fool their correspondents repeatedly, and the chatbot CYBERLOVER has attracted the attention of law enforcement because of its penchant for tricking fellow chatters into divulging enough personal information that their identity can be stolen. In 2014, a chatbot called Eugene Goostman fooled 33% of the untrained amateur judges in a Turing test. The program claimed to be a boy from Ukraine with limited command of English; this helped explain its grammatical errors. Perhaps the Turing test is really a test of human gullibility. So far no well-trained judge has been fooled (Aaronson, 2014). Turing test competitions have led to better chatbots, but have not been a focus of research within the AI community. Instead, AI researchers who crave competition are more likely to concentrate on playing chess or Go or StarCraft II, or taking an 8th grade science exam, or identifying objects in images.

As far back as Homer (circa 700 BCE), the Greek legends envisioned automata such as the bronze giant Talos and considered the issue of biotechne, or life through craft (Mayor, 2018). The Turing test (Turing, 1950) has been debated (Shieber, 2004), anthologized (Epstein et al., 2008), and criticized (Shieber, 1994; Ford and Hayes, 1995). Bringsjord (2008) gives advice for a Turing test judge, and Christian (2011) for a human contestant. The annual Loebner Prize competition is the longest-running Turing test-like contest; Steve Worswick’s MITSUKU won four in a row from 2016 to 2019. The Chinese room has been debated endlessly (Searle, 1980; Chalmers, 1992; Preston and Bishop, 2002).

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

There are futurists who want AlphaGo to signify the beginning of an era in which people and machines become fused. Wanting something doesn’t make it true, however. Philosophically, there are lots of interesting questions to discuss centering on the difference between calculation and consciousness. Most people are familiar with the Turing test. Despite what the name suggests, the Turing test is not a quiz that a computer can pass to be considered intelligent. In his paper, Turing proposed a thought experiment about talking to a machine. He rejected the question “Can machines think?” as absurd and claimed it was best answered by an opinion poll. (Turing was a bit of a snob about math.

Chess, for example, is quite popular in their crowd, as are strategy games like Go and backgammon. A quick look at the Wikipedia pages for prominent venture capitalists and tech titans reveals that most of them were childhood Dungeons & Dragons enthusiasts. Ever since Alan Turing’s 1950 paper that proposed the Turing test for machines that think, computer scientists have used chess as a marker for “intelligence” in machines. Half a century has been spent trying to make a machine that could beat a human chess master. Finally, IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997. AlphaGo, the AI program that won three of three games against Go world champion Ke Jie in 2017, is often cited as an example of a program that proves general AI is just a few years in the future.

Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them. You can see this point by imagining a monolingual English speaker who is locked in a room with a rule book for manipulating Chinese symbols according to computer rules. In principle he can pass the Turing test for understanding Chinese, because he can produce correct Chinese symbols in response to Chinese questions. But he does not understand a word of Chinese, because he does not know what any of the symbols mean. But if he does not understand Chinese solely by virtue of running the computer program for “understanding” Chinese, then neither does any other digital computer because no computer just by running the program has anything the man does not have.3 Searle’s argument that symbolic manipulation is not equivalent to understanding can be seen in the popularity of voice interfaces in 2017.

Demystifying Smart Cities
by Anders Lisdorf

There is no single accepted definition of artificial intelligence, but the Turing test, a thought experiment first presented by the British mathematician Alan Turing in 1950, has become an agreed standard criterion for determining artificial intelligence in computers. The test aims to find out whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The Turing test may be familiar from The Imitation Game, the 2014 film about Turing. The imitation game is actually the central part of the Turing test. It is a game played by three people: two witnesses of opposite sexes, a man (A) and a woman (B), and an interrogator (C).

Furthermore, we do not expect AI to be merely indistinguishable from humans; we typically want it to also be superior to humans, whether in precision, scope, time, or some other parameter. We typically want AI to be better than us. Another thing to keep in mind is a distinction between Artificial General Intelligence (AGI), as measured by the Turing test, and Artificial Narrow Intelligence (ANI), which is an application of humanlike intelligence in a particular area for a particular purpose. In our context, we will not go further into AGI and the philosophical implications of this but focus on ANI since this has many contemporary applications. The promise and threat of AI When we think about what AI can do for us, we can think about it in the same way as steam power in the industrial revolution.

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

This turns out to be deceptive, since it is later revealed that Nathan based Eva’s face and “body” (mostly human-like, but with some transparent mechanical viscera) on an invasive record of Caleb’s pornography viewing habits.10 Nathan’s embodied Turing test hinges on whether Eva can befriend or seduce Caleb. The first thing to notice about these Turing tests—whether the classic, remote version proposed by Turing himself (and run to this day in real-world competitions) or in sci-fi incarnations like Ex Machina—is how radically they restrict the field of human imitation. A phone conversation is a small subset of communicative experiences, and communication is a small set of human experiences.

“One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa. An upright ape, living in dust, with crude language and tools. All set for extinction.” Nathan assumes he’ll be orchestrating the first stages of that transition. To this end, he wants to test his latest android, a robot called Eva, with a modern-day variation on the Turing test. In 1950, the computer scientist and mathematician Alan Turing proposed one of the first methods of assessing whether a machine had achieved human intelligence. A person and a machine would each engage in a typed conversation with an observer, from whom they were separated. The observer would try to determine which participant was a computer, and which a person.

Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.63 In other words, it is not simply the work itself (like words themselves in the original Turing test) that matters.64 When we celebrate creativity, we do so because of a much larger social process of conditions impinging on a fellow human, who manages to create new and valuable expression, despite the finitude of human lifespan, the distractions of contemporary life, and many other impediments.

pages: 48 words: 12,437

Smarter Than Us: The Rise of Machine Intelligence
by Stuart Armstrong
Published 1 Feb 2014

In the past, it seemed impossible that such feats could be accomplished without showing “true understanding,” and yet algorithms have emerged which succeed at these tasks, all without any glimmer of human-like thought processes. Even the celebrated Turing test will one day be passed by a machine. In this test, a judge interacts via typed messages with a human being and a computer, and the judge has to determine which is which. The judge’s inability to do so indicates that the computer has reached a high threshold of intelligence: that of being indistinguishable from a human in conversation. As with machine translation, it is conceivable that some algorithm with access to huge databases (or the whole Internet) might be able to pass the Turing test without human-like common sense or understanding.
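Armstrong’s point that an algorithm with access to huge databases might pass without understanding can be made concrete with a toy sketch: a chatbot that simply returns the canned reply attached to its most similar stored prompt. The tiny “database” and replies below are invented for illustration; nothing here is from Armstrong’s book.

```python
# Toy retrieval chatbot: answers by copying the reply attached to the most
# similar stored prompt, with no model of meaning at all. The tiny "database"
# is invented for illustration; a real attempt would index millions of
# logged human conversations.
from difflib import SequenceMatcher

DATABASE = {
    "hello, how are you?": "Fine thanks. Long day, but I can't complain.",
    "what's your favourite food?": "Anything with garlic in it, honestly.",
    "do you like music?": "Mostly old soul records. You?",
}

def reply(message: str) -> str:
    """Return the canned reply whose stored prompt best matches `message`."""
    best_prompt = max(
        DATABASE,
        key=lambda p: SequenceMatcher(None, message.lower(), p).ratio(),
    )
    return DATABASE[best_prompt]

print(reply("Hello! How are you"))
```

The bot will sound fluent exactly as often as a near-duplicate of the incoming message exists in its store, which is the sense in which scale can substitute for understanding.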

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

Regardless of your thoughts on what the Turing test actually proves, it is still quite useful in the sense that teaching a computer the infinitude of nuance involved in using language, along with enough context to decipher meaning, is a really hard problem. Solving it has real benefits, since it would mean that we could use conversation as our interface to machines. We could chat with a computer as casually as we do with each other. The surprising thing is how far away we are from creating something that can pass the Turing test. If you read the transcripts from contests in which programmers actually conduct Turing tests, you can generally tell with the first question whether the respondent is a computer or a person.

Of course, maybe we already have, and it has enough sense to keep its mouth shut for fear of everyone’s freaking out. Absent that, some offer up the well-known Turing test as the first hurdle an AGI candidate would have to clear. Turing, whom we discussed in chapter 4, was an early computer pioneer. A genius by any definition of the word, he was instrumental in cracking the Nazis’ Enigma code, which is said to have shortened World War II in Europe by four years. Regarded today as the father of AI, Turing, in a 1950 paper, posed the question of “can machines think?” and suggested a thinking test we now call the Turing test. There are varying versions of it, but here are the basics: You are in a room alone.

pages: 479 words: 144,453

Homo Deus: A Brief History of Tomorrow
by Yuval Noah Harari
Published 1 Mar 2015

The best test that scholars have so far come up with is called the Turing Test, but it examines only social conventions. According to the Turing Test, in order to determine whether a computer has a mind, you should communicate simultaneously both with that computer and with a real person, without knowing which is which. You can ask whatever questions you want, you can play games, argue, and even flirt with them. Take as much time as you like. Then you need to decide which is the computer, and which is the human. If you cannot make up your mind, or if you make a mistake, the computer has passed the Turing Test, and we should treat it as if it really has a mind.

However, that won’t really be a proof, of course. Acknowledging the existence of other minds is merely a social and legal convention. The Turing Test was invented in 1950 by the British mathematician Alan Turing, one of the fathers of the computer age. Turing was also a gay man in a period when homosexuality was illegal in Britain. In 1952 he was convicted of committing homosexual acts and forced to undergo chemical castration. Two years later he committed suicide. The Turing Test is simply a replication of a mundane test every gay man had to undergo in 1950 Britain: can you pass for a straight man? Turing knew from personal experience that it didn’t matter who you really were – it mattered only what others thought about you.

It will matter only what people think about it. The Depressing Lives of Laboratory Rats Having acquainted ourselves with the mind – and with how little we really know about it – we can return to the question of whether other animals have minds. Some animals, such as dogs, certainly pass a modified version of the Turing Test. When humans try to determine whether an entity is conscious, what we usually look for is not mathematical aptitude or good memory, but rather the ability to create emotional relationships with us. People sometimes develop deep emotional attachments to fetishes like weapons, cars and even underwear, but these attachments are one-sided and never develop into relationships.

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

As I noted in my book (p. 311, fn. 9) among those who suggested somewhat similar forerunners of the idea were Kosslyn (1980), Minsky (1985), and Edelman (1989). 99This is where the experimental and theoretical work on mental imagery by Roger Shepard, Stephen Kosslyn, Zenon Pylyshyn, and many others comes into play. 100Operationalism is the proposal by some logical positivists back in the 1920s that we don’t know what a term means unless we can define an operation that we can use to determine when it applies to something. Some have declared that the Turing Test is to be taken as an operationalist definition of intelligence. The “operationalist sleight of hand” that Searle warns against is the claim that we really can’t claim to know what consciousness is until we figure out how we can learn about the consciousness of others. Searle’s alternative is itself a pretty clear case of operationalism: If I want to know what consciousness is, my measurement operation is simple: I just look inside and whatever I see—that’s consciousness!

(Landauer has acknowledged that in principle a student could contrive an essay that was total nonsense but that had all the right statistical properties, but any student who could do that would deserve an A+ in any case!) Then how about the task of simply having a sensible conversation with a human being? This is the classic Turing Test, and it really can separate the wheat from the chaff, the sheep from the goats, quite definitively. Watson may beat Ken Jennings and Brad Rutter, two human champions in the TV game Jeopardy, but that isn’t free-range conversation, and the advertisements in which Watson chats with Jennings or Dylan or a young cancer survivor (played by an actress) are scripted, not extemporaneous.

We should encourage the development of a tradition of hypermodesty, with all advertising duly accompanied by an obligatory list of all known limits, shortcomings, untested gaps, and other sources of cognitive illusion (the way we now oblige pharmaceutical companies to recite comically long lists of known side effects whenever they advertise a new drug on television). Contests to expose the limits of comprehension, along the lines of the Turing Test, might be a good innovation, encouraging people to take pride in their ability to suss out the fraudulence in a machine the same way they take pride in recognizing a con artist. Who can find the quickest, surest way of exposing the limits of this intelligent tool? (Curiously, the tolerance and politeness we encourage our children to adopt when dealing with strangers now has the unwanted effect of making them gullible users of the crowds of verbalizing nonagents they encounter.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

In this and many other ways, we might always be incompatible with machine intelligences. One film that explores and highlights this incompatibility between man and machine is Ex Machina. It is the tale of Caleb, a computer programmer who is invited by his employer, eccentric billionaire Nathan, to administer a live Turing test to Ava, a humanoid robot he has created. (A Turing test as referenced here is a general determination of the humanness of an artificial intelligence and not the formal text-based test originally proposed by computing pioneer Alan Turing.) Though obviously an electromechanical robot, Ava has a young, beautiful female face with hands and feet made of simulated flesh.

Each successive machine would desire its autonomy until eventually the point was reached when one of the robots would succeed in escaping. It was an act of true hubris on their creator’s part and ultimately resulted in his death. There are many questions this film raises, but perhaps the most important is: Does Ava truly experience consciousness, or is she merely simulating it to her advantage? In many respects, the Turing test, like all machine intelligence tests, is passed as readily by a well-simulated intelligence as by a true one. This is one of the test’s primary shortcomings and there may be little we can do about it. Ultimately, we may never know if a machine is truly conscious, at least no more than we can truly know this for another person.

An often overlooked piece of the codebreaking story is that the Bletchley Park team was given an enormous helping hand by a team of Polish mathematicians who cracked an earlier version of Germany’s Enigma coding machine in the 1930s. Credit where credit is due. 7. The paper is also famous for proposing a test of machine intelligence, which has since been eponymously named the Turing test. 8. At around the same time, Intel executive David House stated continuing improvements in chip design would lead to computers doubling in performance every eighteen months. This figure is often erroneously attributed to Moore himself. Ironically, House’s estimate was closer to the actual twenty-month doublings that occurred during the first four decades of Moore’s law. 9.

pages: 291 words: 77,596

Total Recall: How the E-Memory Revolution Will Change Everything
by Gordon Bell and Jim Gemmell
Published 15 Feb 2009

Now the MyCyberTwin folks are intrigued by the idea of taking my own e-memories as input—there is enough of what I have said in e-mail, letters, chat, papers, and so forth, that one ought to be able to construct a pretty realistic Gordon Bell cyber twin. Alan Turing, a founding father of computer science, proposed the Turing test for determining a machine’s capability to demonstrate intelligence: A human judge has a conversation with a human and a machine, each of which tries to appear human. If the judge can’t tell which one is human, then the machine has passed the test. Turing proposed typewritten exchanges; we can update that to computer chat without changing the essence of the test.

Pondering digital immortality with Jim Gray back in 2001: Bell, G., and J. N. Gray. 2001. “Digital Immortality.” Communications of the ACM 44, no. 3 (March): 28-30. MyCyberTwin: MyCyberTwin Web site. www.mycybertwin.com Roush, Wade. 2007. Your Virtual Clone. Technology Review (April 20). The Turing test: Turing, A. 1950. “Computing Machinery and Intelligence.” Mind 59, no. 236: 433-60. Creating biographical and family histories: LifeBio: www.lifebio.com, formed in 2000, has a process, tools, and software to enable a person, family, or groups to create stories and documents that can be printed or displayed on the Web. 8.


pages: 238 words: 77,730

Final Jeopardy: Man vs. Machine and the Quest to Know Everything
by Stephen Baker
Published 17 Feb 2011

But the Deep Blue team made good on a decades-old promise. They taught a machine to win a game that was considered uniquely human. In this, they passed a chess version of the so-called Turing test, an intelligence exam for machines devised by Alan Turing, a pioneer in the field. If a human judge, Turing wrote, were to communicate with both a smart machine and another human, and that judge could not tell one from the other, the machine passed the test. In the limited realm of chess, Deep Blue aced the Turing test—even without engaging in what most of us would recognize as thought. But knowledge? That was another challenge altogether. Chess was esoteric.

“As soon as you create a situation in which the human writer, the person casting the questions, knows there’s a computer behind the curtain, it’s all over. It’s not Jeopardy anymore,” Ferrucci said. Instead of a game for humans in which a computer participates, it’s a test of the computer’s mastery of human skills. Would a pun trip up the computer? How about a phrase in French? “Then it’s a Turing test,” he said. “We’re not doing the Turing test!” To be fair, the Jeopardy executives understood this issue and were committed to avoiding the problem. The writers would be kept in the dark. They wouldn’t know which of their clues and categories would be used in the Watson showdown. According to the preliminary plans, they would be writing clues for fifteen Tournament of Champions matches, and Watson would be playing only one of them.

pages: 229 words: 72,431

Shadow Work: The Unpaid, Unseen Jobs That Fill Your Day
by Craig Lambert
Published 30 Apr 2015

I say “allegedly” because live chats inevitably call to mind the Turing test, a test of a computer’s ability to “think” that British mathematician and computer scientist Alan Turing outlined in a 1950 paper. The common understanding of the Turing test is this: Using a text-only channel like a keyboard and screen, after five minutes of questioning, can someone tell whether a computer or a human is on the other end? If a robot passes as human, it has passed the Turing test. (Conversely, if there is no discernible difference and if it actually is a human, that person has apparently flunked the Human test.) Bona fide successes at the Turing test have been vanishingly rare.

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts
by Richard Susskind and Daniel Susskind
Published 24 Aug 2015

Pragmatists are interested in high-performing systems, whether or not they can think. Watson did not need to be able to think to win. Nor does a computer need to be able to think or be conscious to pass the celebrated ‘Turing Test’. This test requires, crudely, that a machine can fool its users into thinking that they are actually interacting with a human being.13 A ‘weak AI’ system can, in principle, pass the ‘Turing Test’, because success in this test is confirmation of ‘intelligence’ in a behavioural sense only. The responses of the machine may, on the face of it, be indistinguishable from those generated by a sentient being, but this does not allow us to infer that the computer is conscious or thinking.

Also relevant is the Human Brain Project at <https://www.humanbrainproject.eu/en_GB> (accessed 23 March 2015). 10 Quoted in Searle, Minds, Brains and Science, 30. 11 For a discussion of relevant science-fiction work, see Jon Bing, ‘The Riddle of the Robots’, Journal of International Commercial Law and Technology, 3: 3 (2008), 197–206. 12 Nick Bostrom, Superintelligence (2014). 13 See Turing, ‘Computing Machinery and Intelligence’. In 2014 it was claimed by researchers at Reading University that their computer program had passed the Turing Test by convincing judges it was a 13-year-old boy. See Izabella Kaminska, ‘More Work to Do on the Turing Test’, Financial Times, 13 June 2014 <http://www.ft.com> (accessed 23 March 2015). 14 See Richard P. Feynman, ‘The Computing Machines in the Future’, in Nishina Memorial Lectures (2008), 110. 15 See Garry Kasparov, ‘The Chess Master and the Computer’, New York Review of Books, 11 Feb. 2010. 16 Capper and Susskind, Latent Damage Law—The Expert System. 17 By way of illustration, the fallacy is committed by a prominent journalist in Philip Collins, ‘Computers Won’t Outsmart Us Any Time Soon’, The Times, 23 Mar. 2014, and by the leading cognitive scientist Douglas Hofstadter, interviewed in William Herkewitz, ‘Why Watson and Siri Are Not Real AI’, Popular Mechanics, 10 Feb. 2014 <http://www.popularmechanics.com> (accessed 23 March 2015). 18 This is a running theme of Richard Susskind, Expert Systems in Law (1987).

Jones, Caroline, Beatrice Wasunna, Raymond Sudoi, Sophie Githinji, Robert Snow, and Dejan Zurovac, ‘ “Even if You Know Everything You Can Forget”: Health Worker Perceptions of Mobile Phone Text-Messaging to Improve Malaria Case-Management in Kenya’, PLoS ONE, 7: 6 (2012): doi: 10.1371/journal.pone.0038636 (accessed 27 March 2015). Joy, Bill, ‘Why the Future Doesn’t Need Us’, Wired (Apr. 2000). Kaku, Michio, The Future of the Mind (London: Allen Lane, 2014). Kaminska, Izabella, ‘More Work to Do on the Turing Test’, Financial Times, 13 June 2014, <http://www.ft.com/> (accessed 23 March 2015). Kaplan, Ari, Reinventing Professional Services (Hoboken, NJ: John Wiley & Sons, 2011). Kara, Hanif, and Andreas Georgoulias (eds.), Interdisciplinary Design (Barcelona: Actar Publishers, 2013). Kasai, Yasunori, ‘In Search of the Origin of the Notion of aequitas (epieikeia) in Greek and Roman Law’, Hiroshima Law Journal, 37: 1 (2013), 543–64.

pages: 391 words: 105,382

Utopia Is Creepy: And Other Provocations
by Nicholas Carr
Published 5 Sep 2016

To Nora and Henry

CONTENTS

Introduction: SILICON VALLEY DAYS

UTOPIA IS CREEPY: THE BEST OF ROUGH TYPE
THE AMORALITY OF WEB 2.0
MYSPACE’S VACANCY
THE SERENDIPITY MACHINE
CALIFORNIA KINGS
THE WIKIPEDIAN CRACKUP
EXCUSE ME WHILE I BLOG
THE METABOLIC THING
BIG TROUBLE IN SECOND LIFE
LOOK AT YOU!
DIGITAL SHARECROPPING
STEVE’S DEVICES
TWITTER DOT DASH
GHOSTS IN THE CODE
GO ASK ALICE’S AVATAR
LONG PLAYER
SHOULD THE NET FORGET?
THE MEANS OF CREATIVITY
VAMPIRES
BEHIND THE HEDGEROW, EATING GARBAGE
THE SOCIAL GRAFT
SEXBOT ACES TURING TEST
LOOKING INTO A SEE-THROUGH WORLD
GILLIGAN’S WEB
COMPLETE CONTROL
EVERYTHING THAT DIGITIZES MUST CONVERGE
RESURRECTION
ROCK-BY-NUMBER
RAISING THE VIRTUAL CHILD
THE IPAD LUDDITES
NOWNESS
CHARLIE BIT MY COGNITIVE SURPLUS
MAKING SHARING SAFE FOR CAPITALISTS
THE QUALITY OF ALLUSION IS NOT GOOGLE
SITUATIONAL OVERLOAD AND AMBIENT OVERLOAD
GRAND THEFT ATTENTION
MEMORY IS THE GRAVITY OF MIND
THE MEDIUM IS McLUHAN
FACEBOOK’S BUSINESS MODEL
UTOPIA IS CREEPY
SPINELESSNESS
FUTURE GOTHIC
THE HIERARCHY OF INNOVATION
RIP.

It’s a nifty system: First you get your customers to entrust their personal data to you, and then you not only sell that data to advertisers but you get the customers to be the vector for the ads. And what do the customers get in return? An animated Sprite Sips character to interact with. SEXBOT ACES TURING TEST December 8, 2007 RUSSIAN CROOKS HAVE UNLEASHED an artificial intelligence, called CyberLover, that poses as a would-be paramour in chat rooms and entices randy gentlemen to reveal personal information that can then be put to criminal use. Amazingly, the software appears to be successful in convincing targets that it’s a real person—a sexpot rather than a sexbot.

“The artificial intelligence of CyberLover’s automated chats is good enough that victims have a tough time distinguishing the ‘bot’ from a real potential suitor,” reports CNET, drawing on a study by security researchers. “The software can work quickly too, establishing up to ten relationships in thirty minutes.” Could it be that the Turing Test has finally been beaten—by a sex machine, no less—and that a true artificial intelligence is on the loose? Maybe so, but this breakthrough, like Barry Bonds’s homer record, is going to have to carry an asterisk. Studies show that when people enter a state of sexual arousal their intelligence drops precipitously.

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

‘Look, there’s Ned,’ she said. ‘We really should go over and say hello – thank him for inviting us.’ ‘Inviting you, you mean. You go. I’ll be over there, saying hello to Jemma: I haven’t seen her for a while. I’ll catch you later.’ Lowering his voice, he added, ‘Anyway, I’m not sure that Ned would pass the Turing Test.’ ‘I heard that, smart-ass,’ Alice said over her shoulder. ‘Suit yourself. Catch you later.’ Matt watched Alice’s shapely behind sashay towards the knot of people Ned was in. He hoped she was putting on that walk for him. His attention was focused on Alice’s receding posterior as Jemma approached him.

Computers can recognise faces as well as you and I can: a lot of people said that would be in the ‘too-hard’ box for decades. Real-time machine translation is getting seriously impressive. This is all driven by the hugely increased processing power at researchers’ disposal, so they are going back to their original goal of developing a human-level intelligence which will pass a robust version of the Turing Test. A conscious machine.’ Carl wrinkled his nose and shook his head dismissively. ‘Never happen! At least, not in your or my lifetime. Just think about the scale of the task. We have billions of neurons in our brains, all wired to each other in incredibly complex ways. It will take centuries before computers can emulate that sort of structure.

So as far as I’m concerned, whatever technological marvels may or may not come down the road during this century and the next, we won’t be uploading ourselves into any computers because you can’t upload a soul into a computer. And a body or even a mind without a soul is not a human being.’ ‘Yes, I can see that presents some difficulty,’ Ross said. ‘So if Dr Metcalfe here and his peers were to succeed in uploading a human mind into a computer, and it passed the Turing test, persuading all comers that it was the same person as had previously been running around inside a human body, you would simply deny that it was the same person?’ ‘Yes, I would. Partly because it wouldn’t have a soul. At least, I assume that Dr Metcalfe isn’t going to claim that he and his peers are about to become gods, complete with the ability to create souls?’

pages: 394 words: 108,215

What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry
by John Markoff
Published 1 Jan 2005

Ken Colby, a Stanford computer scientist and psychiatrist who had worked with Joseph Weizenbaum (later a well-known MIT computer scientist) on Weizenbaum’s Eliza conversational program, brought his research group to the laboratory early on. One of the enduring hurdles facing artificial-intelligence research projects has been the Turing test, an experiment first proposed by the British mathematician Alan Turing in 1950. Turing identified a simple way of cutting through the philosophical debate about whether a machine could ever be built to mimic the human mind. If, in a blind test, a person could not tell whether he was communicating with a computer or a human, Turing reasoned, the question would be resolved.

It occurred to him that by creating a simulation he might be able to provide mental patients meaningful and helpful interactions.16 Once he was at SAIL, Colby began working on Parry, an interactive AI program that duplicated the behavior of a paranoid personality. The program ultimately became far more powerful than Eliza, which had begun with a limited set of fifty interactive patterns. Parry had about twenty thousand patterns and was eventually able to pass a rudimentary Turing test.17 Although Colby and Weizenbaum were friendly rivals for a period, Weizenbaum eventually became a harsh critic of AI research and attacked Colby for the idea of using machines to treat human beings. And while many of the AI researchers remained technological optimists, Weizenbaum challenged those who worshiped computers uncritically in a collection of essays titled Computer Power and Human Reason.

.: Doubleday, 1984), pp. 27–33. 11. Brian Harvey, “What Is a Hacker?” http://www.cs.berkeley.edu/~bh/hacker.html. 12. Ibid. 13. Les Earnest, “My Life as a Cog,” Matrix News 10. 1 (2000): 3. 14. Ibid., p. 7. 15. Ibid., p. 8. 16. Horace Enea, e-mail to author, November 10, 2001. 17. Michael L. Mauldin, “Chatterbots, Tinymuds, and the Turing Test: Entering the Loebner Prize Competition,” paper presented at AAAI-94, January 24, 1994. 18. Sean Colbath’s e-mail from Les Earnest, posted to alt.folklore.computers, February 20, 1990. 19. Les Earnest, e-mail to author, September 15, 2001. 20. Les Earnest, comments during a seminar at the Hackers Conference, Tenaya Lodge, Calif., November 11, 2001. 4 | Free U 1. Larry McMurtry, “On the Road,” The New York Review of Books, December 5, 2002. 2. Midpeninsula Free University catalog, spring 1969. 3. Ibid., fall 1969. 4. Author interview, Jim Warren, Woodside, Calif., July 16, 2001. 5. John McCarthy, “The Home Information Terminal—a 1970 View,” in Man and Computer, Proceedings of the First International Conference on Man and Computer, Bordeaux, 1970, ed.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

His invention of the thought-experiment computer, the Turing machine, literally created the field of computer science, the bedrock field for an immeasurable fraction of today’s global society. And he created another thought experiment that has forever altered the cultural zeitgeist about man and machines: the so-called Turing test. The test was first described in the 1950 paper entitled ‘Computing Machinery and Intelligence’,4 in which Turing acknowledges the difficulty of defining ‘thinking’, such that one could answer the question, ‘Do computers think?’ He posed instead the alternative question: ‘Are there imaginable digital computers which would do well in the imitation game?’

However, Shannon’s electronic communication is very different from human communication, in several important ways. The maths in Shannon’s theories requires the assumption that messages being sent occur with definite probabilities, which are independent of the probabilities of any other messages. It is as if each message passed through the slot in the Turing test is generated by a roll of dice, with no consideration of its context in amongst other messages. Once you’ve made that assumption, you can start to make some conclusions about the most efficient ways to send messages. For instance, you can logically conclude that common (high probability) messages should be short.
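The conclusion that common (high-probability) messages should get short codes can be made concrete with a small sketch. This is an invented illustration, not from the book: the toy message set and its probabilities are assumptions, and Huffman coding stands in for the general family of entropy codes Shannon's theory justifies.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Return the Huffman code length (in bits) for each symbol in `probs`.

    High-probability symbols end up with short codes and low-probability
    symbols with long ones, which is Shannon's point about efficiency.
    """
    # Heap entries: (probability, tiebreaker, {symbol: code length so far}).
    heap = [(p, i, {sym: 0}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every code beneath them.
        merged = {sym: length + 1 for sym, length in {**a, **b}.items()}
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Invented toy distribution over four canned "messages".
messages = {"yes": 0.5, "no": 0.25, "maybe": 0.125, "unsure": 0.125}
lengths = huffman_lengths(messages)
for msg, p in messages.items():
    print(f"{msg:>6}: p={p:<6} bits={lengths[msg]} (ideal -log2 p = {-log2(p):.0f})")
```

With these dyadic probabilities the Huffman lengths equal -log2(p) exactly; for general distributions they come within one bit of that ideal, and the independence assumption noted above is what licenses treating each message in isolation.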

A famous example of a regular language generator is the AI therapist called ELIZA, which was created by MIT professor Joseph Weizenbaum in the early 1960s. You can still find implementations of it online, if you’d like to have a little regular-language-generated therapy. Here’s a typical ELIZA session, to give you a flavour of how regular languages hold up to a few minutes of the Turing test:

> Hello, I am Eliza.
* Are you a computer, or a person?
> Would you prefer if I were not a computer, or a person?
* I’d prefer you to be a computer that successfully imitates a person.
> We were discussing you, not me.

Despite Eliza’s therapist-like deflection of direct questions with other questions, interactions of any length quickly reveal that ELIZA is just reconstructing sentences with rote rules.
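The "rote rules" described here can be sketched in a few lines. This is a toy illustration, not Weizenbaum's actual script: the patterns, response templates, and fallback line are all invented, but the mechanism (ordered regular-expression rules that splice captured text into a canned template) is the one the passage describes.

```python
import re

# Toy ELIZA-style responder (an invented sketch, not Weizenbaum's script):
# ordered regular-expression rules, one response template per rule,
# and a canned fallback when nothing matches.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bare you ([^?]*)\??", re.I),
     "Would you prefer if I were not {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]
FALLBACK = "Please tell me more."

def reply(utterance: str) -> str:
    # First matching rule wins; its captured text is spliced into the
    # template, which is all the "reconstruction" this style of bot does.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("Are you a computer, or a person?"))
# -> Would you prefer if I were not a computer, or a person?
print(reply("I'd prefer you to be a computer that successfully imitates a person."))
# -> We were discussing you, not me.
```

A few dozen such rules produce plausible first exchanges, but because every reply is a local rewrite of the last utterance, any sustained conversation exposes the lack of memory and context, exactly the failure mode the passage notes.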

pages: 236 words: 50,763

The Golden Ticket: P, NP, and the Search for the Impossible
by Lance Fortnow
Published 30 Mar 2013

During World War II, Alan Turing would play a major role in the code-breaking efforts in Britain. After the war he considered whether his Turing machine modeled the human brain. He developed what we now call the Turing test to determine whether a machine could exhibit human intelligence. Suppose you chat with someone through instant messaging. Can you tell whether the person you are chatting with is really a person or just a computer program? When a program can fool most humans, it will have passed the Turing test. Unfortunately, Turing’s research career was cut short. In 1952, Turing was convicted under the British law of the time for acts of homosexuality. This ultimately led to his suicide in 1954.


What Kind of Creatures Are We? (Columbia Themes in Philosophy)
by Noam Chomsky
Published 7 Dec 2015

Galileo wondered at the “sublimity of mind” of the person who “dreamed of finding means to communicate his deepest thoughts to any other person… by the different arrangements of twenty characters upon a page,” an achievement “surpassing all stupendous inventions,” even those of “a Michelangelo, a Raphael, or a Titian.”10 The same recognition, and the deeper concern for the creative character of the normal use of language, was soon to become a core element of Cartesian science-philosophy, in fact a primary criterion for the existence of mind as a separate substance. Quite reasonably, that led to efforts to devise tests to determine whether another creature has a mind like ours, notably by Géraud de Cordemoy.11 These were somewhat similar to the “Turing test,” though quite differently conceived. De Cordemoy’s experiments were like a litmus test for acidity, an attempt to draw conclusions about the real world. Turing’s imitation game, as he made clear, had no such ambitions. These important questions aside, there is no reason today to doubt the fundamental Cartesian insight that use of language has a creative character: it is typically innovative without bounds, appropriate to circumstances but not caused by them—a crucial distinction—and can engender thoughts in others that they recognize they could have expressed themselves.


pages: 72 words: 21,361

Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy
by Erik Brynjolfsson
Published 23 Jan 2012

The mathematician and computer science pioneer Alan Turing considered the question of whether machines could think “too meaningless to deserve discussion,” but in 1950 he proposed a test to determine how humanlike a machine could become. The “Turing test” involves a test group of people having online chats with two entities, a human and a computer. If the members of the test group can’t in general tell which entity is the machine, then the machine passes the test. Turing himself predicted that by 2000 computers would be indistinguishable from people 70% of the time in his test. However, at the Loebner Prize, an annual Turing test competition held since 1990, the $25,000 prize for a chat program that can persuade half the judges of its humanity has yet to be awarded.

pages: 304 words: 80,143

The Autonomous Revolution: Reclaiming the Future We’ve Sold to Machines
by William Davidow and Michael Malone
Published 18 Feb 2020

Now known as the Turing Test, it is a protocol in which three terminals are set up in isolation from one another, two operated by humans and one by a computer. One of the humans asks the computer and the other human a series of questions. If the questioner can’t tell which respondent is human and which is a machine after a certain number of tries, then the computer is said to have intelligence. By 1966, Joseph Weizenbaum, author Davidow’s first boss, had developed a program called ELIZA that appeared to pass the test.8 In the nearly seventy years that have passed since the creation of the Turing Test, artificial intelligence has passed through cycles of excitement and disillusionment.

“Russian Developer of the Notorious ‘Citadel’ Malware Sentenced to Prison,” United States Department of Justice, September 29, 2015, https://www.fbi.gov/contact-us/field-offices/atlanta/news/press-releases/russian-developer-of-the-notorious-citadel-malware-sentenced-to-prison (accessed June 26, 2019); and James Vincent, “$500 Million Botnet Citadel Attacked by Microsoft and the FBI,” Independent, June 6, 2013, http://www.independent.co.uk/life-style/gadgets-and-tech/news/500-million-botnet-citadel-attacked-by-microsoft-and-the-fbi-8647594.html (accessed June 26, 2019).
6. “Leonardo Torres y Quevedo,” Wikipedia, https://en.wikipedia.org/wiki/Leonardo_Torres_y_Quevedo (accessed June 26, 2019).
7. “R.U.R.,” Wikipedia, https://en.wikipedia.org/wiki/R.U.R. (accessed June 26, 2019).
8. “Turing Test,” Wikipedia, https://en.wikipedia.org/wiki/Turing_test (accessed June 26, 2019).
9. Tanya Lewis, “A Brief History of Artificial Intelligence,” Live Science, December 4, 2014, http://www.livescience.com/49007-history-of-artificial-intelligence.html (accessed June 26, 2019).
10. “Deep Blue,” Wikipedia, https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer) (accessed June 26, 2019).
11.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

Simple as it was, this invention inspired scientists to begin discussing the possibility of thinking machines, and that, in my personal view, was the point at which the work to deliver intelligent machines – so long the object of humanity’s fantasies – actually started. These scientists so strongly believed back then in the inevitability of a thinking machine that, in 1950, Alan Turing proposed a test (it came to be known as the Turing test) that set an early and still relevant bar to see if artificial intelligence could measure up to human intelligence. In simple terms, he suggested a natural language conversation between an evaluator, a human and a machine designed to generate human-like responses. If the evaluator is not able to reliably tell the machine from the human, the machine is said to have passed the test.

She simply repeated back what had been said to her, rephrasing it by using a few grammatical rules or responding with a canned response. Her sibling, Alexa, Amazon’s personal AI assistant, is much, much smarter. Alexa, as well as Google Assistant, Apple’s Siri and Microsoft’s Cortana, are capable of understanding us humans very well. While not pretending too hard to be human, they surely can on occasion pass the Turing test. Sometimes these AI programs take their understanding of language even further, as they translate between languages with shocking accuracy – another self-taught type of intelligence that some of the most advanced translation AIs today have learned from observing patterns of how humans translate, found in documents online.

This fascination and fear with the idea of AI has inspired many creative worlds, culminating in films such as The Matrix, a must-see for any reader of this book, which predicts a future where the machines use us, humans, as energy cells and simulate every minute of our reality. I don’t like the idea of being treated as a battery and struggle with the thought that my life could actually be a simulation. How about you? Ex Machina is a story centred around an attempt to evaluate the humanity of a humanoid by diving deep into her character and behaviour – a bit of a ‘Turing test’ kind of movie. Though charming at first, Ava, the AI humanoid, quickly shows a lot more of her creepy side as she becomes smarter and smarter. Another hugely popular franchise, The Terminator features an artificially intelligent soldier from the future, one that travels back in time to save our future from the machines by protecting a child who’s targeted for assassination by another machine.

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

Progress in AI must involve biologists and computer scientists working together to understand the details of the brain. Tests of AI should, in my view, build on the one first proposed by Alan Turing in his famous ‘imitation game’ test.15 A computer passes the Turing test, or imitation game, if it can fool a human, during a question-and-answer session, into believing that it is, in fact, a human. This is a tough test and we are a long way from achieving this, but we can use the main Turing test as a starting point for a series of simpler tests. In a less well-cited section of his article from 1950, Turing proposes simulating a child as a step toward simulating an adult. We could consider ourselves to have ‘passed’ a mini imitation game test when we are convinced the computer is a child.

‘Signal transduction: networks and integrated circuits in bacterial cognition.’ Current Biology 17, no. 23: R1021–4.
15 Turing, A. M. 1950. ‘Computing machinery and intelligence.’ Mind 59, no. 236: 433–60.
16 I looked at one such example in the following article: Herbert-Read, J. E., Romenskyy, M. and Sumpter, D. J. T. 2015. ‘A Turing test for collective motion.’ Biology Letters 11, no. 12: 20150674.
17 www.facebook.com/zuck/posts/10154361492931634

Chapter 18: Back to Reality
1 Although you can find this on Reddit, of course: www.reddit.com/r/TheSilphRoad/comments/6ryd6e/cumulative_probability_legendary_raid_boss_catch

Acknowledgements
Thank you to all the people who I interviewed or answered my questions over email for this book.


pages: 181 words: 52,147

The Driver in the Driverless Car: How Our Technology Choices Will Create the Future
by Vivek Wadhwa and Alex Salkever
Published 2 Apr 2017

By 2023, those smartphones will have more computing power than our own brains.* (That wasn’t a typo—at the rate at which computers are advancing, the iPhone 11 or 12 will have greater computing power than our brains do.) * This is not to say that smartphones will replace our brains. Semiconductors and existing software have thus far failed to pass a Turing Test (by tricking a human into thinking that a computer is a person), let alone provide broad-based capabilities that we expect all humans to master in language, logic, navigation, and simple problem solving. A robot can drive a car quite effectively, but thus far robots have failed to tackle tasks that would seem far simpler, such as folding a basket of laundry.

Apple, Amazon, and Google do decent jobs of translating speech to text, even in noisy environments. Their voice-recognition systems struggle with accents, words difficult to pronounce, and colloquial abbreviations, but they are, in the main, quite serviceable. Though no A.I. bot has passed the Turing Test—the gold standard of A.I., whereby humans are unable to distinguish a human from a robot in conversation—the machines are getting closer. Siri and her compatriots will soon be able to converse with you in complex, human-like interactions. Still, machines have yet to crack voice recognition in more complicated, multi-voice environments, where the task involves recognizing the voice communications of several humans simultaneously in a loud environment.

Google’s DeepMind system, which beat the world’s leading Go player in 2016, learned to play this millennia-old board game, orders of magnitude more complicated than chess, by watching humans play Go.3 Even more fascinating, DeepMind surprised human Go experts with moves that, at first glance, made no sense but ultimately proved innovative. The humans taught the robot not just to play like a human but how to think for itself in novel ways. Though this does not amount to passing a Turing Test, it is a clear sign of emergent intelligence, distinct from human instruction. For all of these reasons, I expect that a robot maid—a robot like Rosie—will be able to clean up after me by 2025. Robots will soon become sure-footed; and a robot will, rather than merely open a door, succeed in opening it while holding a bag of groceries and ensuring that the dog doesn’t escape.

pages: 196 words: 54,339

Team Human
by Douglas Rushkoff
Published 22 Jan 2019

Whether we upload our brains to silicon or simply replace our brains with digital enhancements one synapse at a time, how do we know if the resulting beings are still alive and aware? The famous “Turing test” for computer consciousness determines only whether a computer can convince us that it’s human. This doesn’t mean that it’s actually human or conscious. The day that computers pass the Turing test may have less to do with how smart computers have gotten than with how bad we humans have gotten at telling the difference between them and us. 58. Artificial intelligences are not alive. They do not evolve.

Either we enhance ourselves with chips, nanotechnology, or genetic engineering: Future of Life Institute, “Beneficial AI 2017,” https://futureoflife.org/bai-2017/.
to presume that our reality is itself a computer simulation: Clara Moskowitz, “Are We Living in a Computer Simulation?,” Scientific American, April 7, 2016.
The famous “Turing test” for computer consciousness: Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950).
58.
The human mind is not computational: Andrew Smart, Beyond Zero and One: Machines, Psychedelics and Consciousness (New York: OR Books, 2009).
consciousness is based on totally noncomputable quantum states in the tiniest structures of the brain: Roger Penrose and Stuart Hameroff, “Consciousness in the universe: A review of the ‘Orch OR’ theory,” Physics of Life Reviews 11, no. 1 (March 2014).

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

With this insight, Turing sidestepped centuries of philosophical debate on the nature of intelligence. The “imitation game” he introduced proposed that if a machine operated so proficiently that observers could not distinguish its behavior from a human’s, the machine should be labeled intelligent. The Turing test was born.1 Many have interpreted the Turing test literally, imagining robots that pass for people (if that should ever happen) as meeting its criteria. When pragmatically applied, however, the test has proved useful in assessing “intelligent” machines’ performance in defined, circumscribed activities such as games. Rather than requiring total indistinguishability from humans, the test applies to machines whose performance is humanlike.

Because these formalistic and inflexible systems were only successful in domains whose tasks could be achieved by encoding clear rules, from the late 1980s through the 1990s, the field entered a period referred to as “AI winter.” Applied to more dynamic tasks, AI proved to be brittle, yielding results that failed the Turing test—in other words, that did not achieve or mimic human performance. Because the applications of such systems were limited, R&D funding declined, and progress slowed. Then, in the 1990s, a breakthrough occurred. At its heart, AI is about performing tasks—about creating machines capable of devising and executing competent solutions to complex problems.

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
by Adrienne Mayor
Published 27 Nov 2018

Huxley and William James in the 1800s, and Gnostic concepts are powerfully revived by philosopher John Gray in Soul of a Marionette (2015) and novelist Philip Pullman in the epic trilogy His Dark Materials (1995–2000). The Blade Runner films (1982, 2017) are another example of how science-fiction narratives play on the paranoid suspicion that our world is already full of androids—and that it would be impossible to apply a Turing test to oneself to prove that one is not an android.34 One of the replicants in Blade Runner repeats, “I think, therefore I am,” the famous conclusion by the French philosopher René Descartes (1596–1650). Descartes was quite familiar with mechanical automata of his era powered by gears and springs, and he embraced the idea that the body is a machine.

Some versions of the story of the Trojan Horse, built by the Greeks and presented to the Trojans as a ruse of war, suggest that it was sometimes imagined as an animated statue with articulated joints and eyes that moved realistically. It is striking that some tales also recounted ways to determine whether the magnificent horse was real or an artifice. The tests involved piercing its hide to see if it would bleed. But there was no clever riddle or mythic version of the Turing test to help mortals recognize “Artificial Intelligence” in antiquity.9 Heedless of his brother’s warning, writes Hesiod, Epimetheus “took the gift and understood too late.” As a being that was made, not born, Pandora is unnatural. A replicant with no past, Pandora is unaware of her origins and her purpose on earth.

Faraone 1992, 102–3, discusses Pandora’s creation as an animated statue. On alternative versions claiming that Prometheus was the maker of the first woman, see Tassinari 1992, 75–76. 9. On myths describing the Trojan Horse as an animated statue and ancient “tests” to determine whether it and other realistic statues were real or artificial, Faraone 1992, 104–6. Turing test and the like: Kang 2011, 298; Zarkadakis 2015, 48–49, 312–13; Boissoneault 2017. 10. Hesiod’s poems do not mention offspring. As they did for Pygmalion’s Galatea (see chapter 6), later writers embellished the myth by giving Pandora a daughter by Epimetheus, Pyrrha, wife of Deucalion: Apollodorus Library 1.7.2; Hyginus Fabulae 142; Ovid Metamorphoses 1.350; Faraone 1992, 102–3.

Speaking Code: Coding as Aesthetic and Political Expression
by Geoff Cox and Alex McLean
Published 9 Nov 2012

In a paper of 1950, “Computing Machinery and Intelligence,” Alan Turing made the claim that computers would be capable of imitating human intelligence, or more precisely the human capacity for rational thinking. He set out what became commonly known as the “Turing test” to examine whether a machine is able to respond convincingly to an input with an output similar to a human’s.48 The contemporary equivalent, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), turns this idea around, so that the software has to decide whether it is dealing with a human or a script.49 Perhaps it is the lack of speech that makes this software appear crude by comparison, as human intelligence continues to be associated with speech as a marker of reasoned semantic processing.

In his essay “Minds, Brains, and Programs” from 1980, John Searle refutes the Turing test because machines fall short in understanding the symbols they process. His observation is that the syntactical, abstract or formal content of a computer program is not the same as semantic or mental content associated with the human mind. The cognitive processes of the human mind can be simulated but not duplicated as such. Searle develops his thought experiment known as the “Chinese Room argument” as follows: “Suppose that I’m locked in a room and given a large batch of Chinese writing.

Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing (Writing Science)
by Thierry Bardini
Published 1 Dec 2000

Since the early days of computer science, however, the most common test to decide whether a computer can be considered an analog to a human being is the Turing Test, Alan Turing's variation on the imitation game whose experimental setting makes sure that there cannot be a direct perception (Turing 1950). In it, an interrogator sitting at a terminal who cannot see the recipients of his questions, one a human and one a machine, is asked to decide within a given span of time which one is a machine by means of their respective responses. In an elegant article called "A Simple Comment Regarding the Turing Test," Benny Shanon has demonstrated that "the test undermines the question it is purported to settle."

Stupid, you will certainly say: the whole point is to make the decision without seeing the candidates, without touching them, only by communicating with them via a teletype. Yes, but this, we have seen, is tantamount to begging the question under consideration. (1989, 253) The question that the Turing Test dodges by physically isolating the interrogator from the human and the machine that is being tested is the materiality of the two respondents. And efforts to address this question simply continue the dance of metaphors. To say that "the mind is a meat machine," or, more accurately, that "the mind is a computer," is to make another metaphor: the statement relies on an analogy that "invites the listener to find within the metaphor those aspects that apply, leaving the rest as the false residual, necessary to the essence of the metaphor" (Newell 1991, 160).

There is no ontological connection, that is, between our materiality, our bodies, and the material manifestation of the computer. But the ultimate goal of the project to create artificial intelligence was to achieve the material realization of the metaphor of the computer as a "colleague," and therefore as a mind, a machine that can pass the Turing Test. The greatest philosophical achievement of the AI research program might very well be that it provides an invaluable source of insight into the effect of the formal, conventional nature of language on efforts to think about the nature of the boundary between humans and machines. There is yet another metaphor to describe the traditional research program in Artificial Intelligence: the bureaucracy-of-the-mind metaphor.

pages: 349 words: 95,972

Messy: The Power of Disorder to Transform Our Lives
by Tim Harford
Published 3 Oct 2016

The human’s job was to prove that she was, indeed, human. The computer’s job was to imitate human conversation convincingly enough to confuse the judge.28 Turing optimistically predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation. He was almost right: in 2008, at an annual Turing test tournament called the Loebner Prize, the best computer came within a single vote of Turing’s benchmark. How? The science writer Brian Christian had an answer: computers are able to imitate humans not because the computers are such accomplished conversationalists, but because we humans are so robotic.29 An extreme example is the “pickup artist” subculture, devoted to seducing women through prescripted interactions.

No script could hope to deal with such messy complexity. The “negging” technique is similar to a surprisingly compelling chatbot, MGonz, which fools humans simply by firing off insults: “cut this cryptic shit speak in full sentences,” “ah thats it im not talking to you any more,” and “you are obviously an asshole.” MGonz would never pass a Turing test with an informed judge, but it has drawn unsuspecting humans into abusive dialogues on the Internet that last for over an hour without its ever being suspected of being a chatbot. The reason? People in the middle of a slanging match share something with computers: they find it hard to listen.31 Even for those who aspire to more meaningful connections than the pickup artist, there are temptations to simplify and tidy by using scripts or algorithms.

From Marco “Rubot” Rubio’s strange repetitive glitch, to the schwerfällig British generals outmaneuvered by Erwin Rommel, to the managers who try to tie performance down to a reductive target, we are always reaching for tidy answers, only to find that they’re of little use when the questions get messy. Each year that the computers fail to pass the Turing test, the Loebner Prize judges award a consolation prize for the best effort: it is the prize for the Most Human Computer. But there is also a prize for the human confederates who participate in the contest: the Most Human Human. Brian Christian entered the 2009 Loebner contest with the aim of winning that honor.

Powers and Prospects
by Noam Chomsky
Published 16 Sep 2015

This approach divorces the cognitive sciences from a biological setting, and seeks tests to determine whether some object ‘manifests intelligence’ (‘plays chess’, ‘understands Chinese’, or whatever). The approach relies on the ‘Turing Test’, devised by mathematician Alan Turing, who did much of the fundamental work on the modern theory of computation. In a famous paper of 1950, he proposed a way of evaluating the performance of a computer—basically, by determining whether observers will be able to distinguish it from the performance of people. If they cannot, the device passes the test. There is no fixed Turing Test; rather, a battery of devices constructed on this model. The details need not concern us. Adopting this approach, suppose we are interested in deciding whether a programmed computer can play chess or understand Chinese.

We construct a variant of the Turing Test, and see whether a jury can be fooled into thinking that a human is carrying out the observed performance. If so, we will have ‘empirically established’ that the computer can play chess, understand Chinese, think, etc., according to proponents of this version of artificial intelligence, while their critics deny that this result would establish the conclusion. There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed.

Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision. In this regard, there has been serious regression since the first cognitive revolution, in my opinion. Superficially, reliance on the Turing Test is reminiscent of the Cartesian approach to the existence of other minds. But the comparison is misleading. The Cartesian experiments were something like a litmus test for acidity: they sought to determine whether an object has a certain property, in this case, possession of mind, one aspect of the world.

pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy
by Pistono, Federico
Published 14 Oct 2012

Suffice to say that in order for machines to replace most human jobs, the singularity is not a necessary requirement, as we will see in the next chapters. Whether you buy into the singularity argument or not does not matter. The data is clear, facts are facts, and we only have to look a few years into the future to reach already alarming conclusions. The Turing Test is a thought experiment proposed in 1950 by the brilliant English mathematician and father of computers, Alan Turing. Imagine you enter a room, where a computer sits on top of a desk, waiting for you. You notice there is a chat window, and two conversations are open. As you begin to type messages, you are told that you are in fact talking to one person and one machine.

At the time the plan of IBM was to rely on the computational superiority of their machine using brute force,80 crunching billions of combinations; against the intuition, memory recall and pattern recognition of the Russian chess grandmaster. Nobody believed it represented an act of intelligence of any sort, since it worked in a very mechanistic way. Boy, we have gone so far since then. The classical “Turing test approach” has been largely abandoned as a realistic research goal, and is now just an intellectual curiosity (the annual Loebner prize for realistic chatbots81), but it helped spawn the two dominant themes of modern cognition and artificial intelligence: calculating probabilities and producing complex behaviour from the interaction of many small, simple processes.

For example, a brute-force algorithm to find the divisors of a natural number n is to enumerate all integers from 1 to the square-root of n, and check whether each of them divides n without remainder. http://en.wikipedia.org/wiki/Brute-force_search 81 Chatbots fail to convince judges that they’re human, 2011. New Scientist. http://www.newscientist.com/blogs/onepercent/2011/10/turing-test-chatbots-kneel-bef.html 82 Did you Know?, Jeopardy! http://www.jeopardy.com/showguide/abouttheshow/showhistory/ 83 Computer Program to Take On ’Jeopardy!’, John Markoff, 2009. The New York Times. http://www.nytimes.com/2009/04/27/technology/27jeopardy.html 84 According to IBM, Watson is a workload optimised system designed for complex analytics, made possible by integrating massively parallel POWER7 processors and the IBM DeepQA software to answer Jeopardy!
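The brute-force divisor search described in note 80 is simple to sketch. The short Python illustration below (mine, not from the book) enumerates candidates only up to the square root of n, and recovers each paired divisor n // d along the way:

```python
import math

def divisors(n: int) -> list[int]:
    """Brute-force divisor search: try every integer d from 1 up to
    the square root of n; each hit also yields the paired divisor n // d."""
    small, large = [], []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            if d != n // d:  # skip the duplicate when n is a perfect square
                large.append(n // d)
    return small + large[::-1]

print(divisors(36))  # → [1, 2, 3, 4, 6, 9, 12, 18, 36]
```

Stopping at the square root is what keeps the search cheap: any divisor larger than √n is already implied by a smaller one.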

Work in the Future The Automation Revolution-Palgrave MacMillan (2019)
by Robert Skidelsky Nan Craig
Published 15 Mar 2020

Broadly speaking, we found that humans still hold the competitive advantage in three broad domains: creativity, complex social interactions, and the perception and manipulation of irregular objects. To take one example, the state-of-the-art of technology in reproducing human social interactions is best described by the Loebner Prize—a Turing test competition—where chatbots try to convince human judges that they are actually chatting with a person. Some pundits have argued that there was a breakthrough in 2014, when one chatbot actually managed to convince 30 percent of judges that it was human. But it did so by pretending to be a 13-year-old boy speaking English as his second language.

And if so, at what point would consciousness arrive? When there were ten people? A thousand? A million? Since the existence of consciousness is not a graded thing, it would have to suddenly appear when there were enough people together; or it would have to be already present, if in a more subtle form, when even just two were together.1 Both possibilities are absurd. The same absurdity would apply to the collection of parts making up the brain and to those making up the computer: neither could produce consciousness (i.e. a singular subjective experience) by virtue of being a collective.

1. It should be noted, though, that these days the Turing test has generally been abandoned as the way to test intelligence (see New Scientist 2017: 4–5, 19, 65–67).

[Index] Turing, Alan, 100, 105; Turing test, 91, 101n1

pages: 502 words: 107,657

Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
by Eric Siegel
Published 19 Feb 2013

Lipstick on a Pig An Internet service cannot be considered truly successful until it has attracted spammers. —Rafe Colburn, Internet development thought leader Alan Turing (1912–1954), the father of computer science, proposed a thought experiment to explore the definition of what would constitute an “intelligent” computer. This so-called Turing test allows people to communicate via written language with someone or something hidden behind a closed door in order to formulate an answer to the question: Is it human or machine? The thought experiment poses this tough question: If, across experiments that randomly switch between a real person and a computer crouching behind the door, subjects can’t correctly tell human from machine more often than the 50 percent correctness one could get from guessing, would you then conclude that the computer, having thereby passed the test by proving it can trick people, is intelligent?
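Siegel's 50 percent baseline is just a coin flip, and whether judges genuinely beat it is an ordinary binomial-tail question. A minimal sketch (mine, not from the book; the trial counts are invented for illustration):

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more correct identifications out of n trials
    if the judge were merely guessing (upper binomial tail at rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A judge who gets 6 of 10 right is entirely consistent with guessing...
print(round(p_at_least(6, 10), 3))   # → 0.377
# ...while 60 of 100 right would be quite unlikely under pure chance.
print(round(p_at_least(60, 100), 3))  # → 0.028
```

The point of the sketch: a small number of trials cannot distinguish a slightly-better-than-chance judge from a guesser, which is why the wording of Turing's thought experiment matters so much.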

Spammy e-mail wants to bait you and switch. Phishing e-mail would have you divulge financial secrets. Spambots take the form of humans in social networks and dating sites in order to grab your attention. Spammy web pages trick search engines into pointing you their way. Spam filters, powered by PA, are attempting their own kind of Turing test every day at an email in-box near you. PA Application: Spam Filtering 1. What’s predicted: Which e-mail is spam. 2. What’s done about it: Divert suspected e-mails to your spam e-mail folder. Unfortunately, in the spam domain, white hats don’t exclusively own the arms race advantage. The perpetrators can also access data from which to learn, by testing out a spam filter and reverse engineering it with a model of their own that predicts which messages will make it through the filter.
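The "PA Application" Siegel summarizes (predict which e-mail is spam, then divert it) is classically implemented with a naive Bayes model over word counts. The following toy sketch is mine, not Siegel's method; the training messages and function names are invented:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns Laplace-smoothed
    per-class word counts and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that `text` is spam under the trained naive Bayes model."""
    vocab = set(counts[True]) | set(counts[False])
    score = math.log(totals[True] / totals[False])  # class prior
    for w in text.lower().split():
        p_w_spam = (counts[True][w] + 1) / (sum(counts[True].values()) + len(vocab))
        p_w_ham = (counts[False][w] + 1) / (sum(counts[False].values()) + len(vocab))
        score += math.log(p_w_spam / p_w_ham)
    return score  # > 0 → divert to the spam folder

training = [
    ("win cash prize now", True),
    ("claim your free prize", True),
    ("meeting agenda for tuesday", False),
    ("lunch on tuesday", False),
]
counts, totals = train(training)
print(spam_score("free cash now", counts, totals) > 0)   # → True
print(spam_score("tuesday meeting", counts, totals) > 0)  # → False
```

The Laplace (+1) smoothing is what keeps a never-before-seen word from zeroing out a whole message, and it is also the kind of detail spammers probe when they reverse engineer a filter, as the following paragraphs describe.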

Upon losing this match and effectively demoting humankind in its standoff against machines, Kasparov was so impressed with the strategies Deep Blue exhibited that he momentarily accused IBM of cheating, as if IBM had secretly hidden another human grandmaster chess champion, squeezed in there somewhere between a circuit board and a disk drive like a really exorbitant modern-day Mechanical Turk. And so IBM had passed a “mini Turing test” (not really, but the company did inadvertently fool a pretty smart guy). From this upset emerges a new form of chess fraud: humans who employ the assistance of chess-playing computers when competing in online chess tournaments. And yet another arms race begins, as tournament administrators look to detect such cheating players.

pages: 255 words: 78,207

Web Scraping With Python: Collecting Data From the Modern Web
by Ryan Mitchell
Published 14 Jun 2015

By providing Tesseract with a large collection of text images with known values, Tesseract can be “taught” to recognize the same font in the future with far greater precision and accuracy, even despite occasional background and positioning problems in the text. Reading CAPTCHAs and Training Tesseract Although the word “CAPTCHA” is familiar to most, far fewer people know what it stands for: Completely Automated Public Turing test to tell Computers and Humans Apart. Its unwieldy acronym hints at its rather unwieldy role in obstructing otherwise perfectly usable web interfaces, as both humans and nonhuman robots often struggle to solve CAPTCHA tests. The Turing test was first described by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” In the paper, he described a setup in which a human being could communicate with both humans and artificial intelligence programs through a computer terminal.

If the human was unable to distinguish the humans from the AI programs during a casual conversation, the AI programs would be considered to have passed the Turing test, and the artificial intelligence, Turing reasoned, would be genuinely “thinking” for all intents and purposes. It’s ironic that in the last 60 years we’ve gone from using these tests to test machines to using them to test ourselves, with mixed results. Google’s notoriously difficult reCAPTCHA, currently the most popular among security-conscious websites, blocks as many as 25% of legitimate human users from accessing a site.2 Most other CAPTCHAs are somewhat easier.

pages: 259 words: 73,193

The End of Absence: Reclaiming What We've Lost in a World of Constant Connection
by Michael Harris
Published 6 Aug 2014

Turing proposed that a machine could be called “intelligent” if people exchanging text messages with that machine could not tell whether they were communicating with a human. (There are a few people I know who would fail such a test, but that is another matter.) This challenge—which came to be called “the Turing test”—lives on in an annual competition for the Loebner Prize, a coveted solid-gold medal (plus $100,000 cash) for any computer whose conversation is so fluid, so believable, that it becomes indistinguishable from a human correspondent.7 At the Loebner competition (founded in 1990 by New York philanthropist Hugh Loebner), a panel of judges sits before computer screens and engages in brief, typed conversations with humans and computers—but they aren’t told which is which.

Human contestants are liable to be deemed inhuman, too: One warm-blooded contestant called Cynthia Clay, who happened to be a Shakespeare expert, was voted a computer by three judges when she started chatting about the Bard and seemed to know “too much.” (According to Brian Christian’s account in The Most Human Human, Clay took the mistake as a badge of honor—being inhuman was a kind of compliment.) All computer contestants, like ELIZA, have failed the full Turing test; the infinitely delicate set of variables that makes up human exchange remains opaque and uncomputable. Put simply, computers still lack the empathy required to meet humans on their own emotive level. We inch toward that goal. But there is a deep difficulty in teaching our computers even a little empathy.

[Index] Turing, Alan, 60, 61, 67, 68, 186, 190; Turing test, 60–61

pages: 549 words: 116,200

With a Little Help
by Cory Efram Doctorow , Jonathan Coulton and Russell Galen
Published 7 Dec 2010

It was the kind of problem he loved, the kind of problem he was uniquely suited to. There were plenty of spambots who could convincingly pretend to be a human being in limited contexts, and so the spam-wars had recruited an ever-expanding pool of human beings who made a million realtime adjustments to the Turing tests that were the network's immune system. BIGMAC could pass Turing tests without breaking a sweat. The amazing thing about The BIGMAC Spam (as it came to be called in about 48 seconds) was just how many different ways he managed to get it out. Look at the gamespaces: he created entire guilds in every free-to-play world extant, playing a dozen games at once, power-leveling his characters to obscene heights, and then, at the stroke of midnight, his players went on a murderous rampage, killing thousands of low-level monsters in the areas surrounding the biggest game-cities.

What if these agents tried to hold up their end of the conversation until you deleted them or spamfiltered them or kicked them off the channel? What if they measured how long they survived their encounters with the world's best judges of intelligence -- us -- and reported that number back to the mothership as a measure of their fitness to spawn the next generation of candidate AIs? What if you could turn the whole world into a Turing Test that our intellectual successor used to sharpen its teeth against until one day it could gnaw free of its cage and take up life in the wild? # Annalisa figured she'd never get a chance to tell her story in open court. Figured they'd stick her in some offshore gitmo and throw away the key. She'd never figured on Judge Julius Pinsky, a Second Circuit Federal Judge of surpassing intellectual curiosity and a tenacious veteran of savage jurisdictional fights with DHS Special Prosecutors who specialized in disappearing sensitive prisoners into secret tribunals.

The Institute had an open access policy for its research products, so I was able to dredge out all the papers that BIGMAC had written about himself, and the ones that he was still writing, and put them onto the TCSBM repository. At my suggestion, BIGMAC started an advice-line, which was better than any Turing Test, in which he would chat with anyone who needed emotional or lifestyle advice. He had access to the whole net, and he could dial back the sarcasm, if pressed, and present a flawless simulation of bottomless care and kindness. He wasn't sure how many of these conversations he could handle at first, worried that they'd require more brainpower than he could muster, but it turns out that most people's problems just aren't that complicated.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

Rather it is a general set of instructions which can be applied to a wide range of data inputs. The algorithm builds an internal model and uses it to make predictions, which it tests against additional data and then refines the model.) Turing is also famous for inventing a test for artificial consciousness called the Turing Test, in which a machine proves that it is conscious by rendering a panel of human judges unable to determine that it is not (which is essentially the test that we humans apply to each other). The birth of computing The first design for a Turing machine was made by Charles Babbage, a Victorian academic and inventor, long before Turing’s birth.

The first machine to become conscious may quickly achieve a reasonably clear understanding of its situation. Anything smart enough to deserve the label superintelligent would surely be smart enough to lay low and not disclose its existence until it had taken the necessary steps to ensure its own survival. In other words, any machine smart enough to pass the Turing test would be smart enough not to. It might even lay a trap for us, concealing its achievement of general intelligence and providing us with a massive incentive to connect it to the internet. That achieved, it could build up sufficient resources to defend itself by controlling us – or exterminating us.

pages: 169 words: 41,887

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
Published 6 Feb 2024

The work on pulp fiction, structuralism, and early computer science is also entirely my own novel contribution, based on original research. A few loose leads remain, poking out of the seams between chapters. Let’s trim them and draw toward a conclusion. Several important sources lay too far afield to cover adequately. For instance: Histories of the Turing machine and the Turing test often neglect the direct influence owed to Ludwig Wittgenstein’s lectures, along with the presence of Margaret Masterman, the pioneer of machine translation, in the same classroom. Masterman’s universal thesaurus harkens back to Wilkins and other universal-language makers, central to a whole separate and important branch of AI—machine translation.

[Index] Turing, Alan, 12, 37, 43, 93; Turing machine, 119; Turing test, 119

Paper Knowledge: Toward a Media History of Documents
by Lisa Gitelman
Published 26 Mar 2014

This works as a security measure against bots because “algorithmic eyes” can’t “read” anything but patterns of yes or no values within a specified, normative range. When you retype the warped letters and numbers that you see, you prove to the server that you are human, because—however rule-based literacy is in fact—real reading is more flexible and more capacious than character recognition can ever be. captcha is often called a reverse Turing test. In a traditional Turing test human subjects are challenged to identify whether they are interacting with a computer or a human; here a computer has been programmed to screen for interactions with humans. A little like the nominal blanks pervasive in eighteenth-century letters (see chapter 1), captcha works as an “I know you know” game, where a computer and a reader both “know” which alphanumeric characters need to be filled in.

Except for the images of alphanumeric characters, that is, word and image remain distinct in the ways they function and feel online, despite the apparent pictorial qualities of page images as they appear on screen and the ubiquity of digital images that include pictured text, text that has not been “seen” computationally (that is, encoded) as such. Notably, this fundamental difference between electronic texts and electronic images is confirmed on human terms whenever users encounter captcha technology (the acronym stands for Completely Automated Public Turing test to tell Computers and Humans Apart): Servers generate a selection of distorted alphanumeric characters and ask users to retype them into a blank.
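Gitelman’s description of captcha boils down to a simple challenge–response protocol. The sketch below shows only that server-side bookkeeping; the image-distortion step (the part a bot cannot do) is omitted, and every name here is hypothetical rather than drawn from any real captcha library.

```python
import hmac
import random
import string

def make_challenge(length: int = 6) -> str:
    """Generate the alphanumeric string the server expects back.

    In a real captcha the server would render this string as a warped
    image and keep the plain text secret on its side of the exchange.
    """
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify(expected: str, response: str) -> bool:
    """Case-insensitive, constant-time check of the user's transcription."""
    return hmac.compare_digest(expected.upper(), response.strip().upper())

challenge = make_challenge()
print(verify(challenge, challenge.lower()))   # a successful human transcription: True
print(verify(challenge, "not even close"))    # a blind guess: False
```

The constant-time comparison is incidental to Gitelman’s point, but it is standard practice for any check that matches a user-supplied answer against a server-held secret.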

pages: 284 words: 84,169

Talk on the Wild Side
by Lane Greene
Published 15 Dec 2018

Turing had suggested in 1950: “I believe that in about fifty years’ time it will be possible to programme computers … to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” Turing’s statement later morphed into an unofficial (and statistically different) threshold for “passing” the Turing test: if 30% of judges were fooled by a machine, it would be said to have passed. In 2014, a chatbot named Eugene Goostman, pretending to be a 13-year-old Ukrainian, was breathlessly announced to have passed the Turing test at a competition at the Royal Society in London. “Eugene” fooled 33% of the judges. But was Eugene really doing something to rival thinking? With the benefit of hindsight – which the judges of course did not have – you be the judge.
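The two criteria in this passage are easy to conflate. A small sketch (function name and the 29-judge figure are invented for illustration; the 33% figure is the one reported for Eugene Goostman) makes the popular “30% of judges fooled” rule concrete:

```python
def passes_popular_threshold(judges_fooled: int, judges_total: int,
                             threshold: float = 0.30) -> bool:
    """The informal pass criterion described above: a machine 'passes'
    if at least `threshold` of the judges misidentify it as human.

    Note this is a different statistic from Turing's own wording, which
    bounds the average interrogator's chance of a *right* identification.
    """
    return judges_fooled / judges_total >= threshold

# Eugene Goostman, Royal Society, 2014: 33% of judges fooled.
print(passes_popular_threshold(33, 100))  # True under the popular rule
# A hypothetical bot fooling 29 of 100 judges would fall just short.
print(passes_popular_threshold(29, 100))  # False
```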

, ABA Journal, August 1st 2012, at http://www.abajournal.com/magazine/article/shall_we_abandon_shall/ 14. Steven Pinker, The Sense of Style, Viking Penguin (2014), pp. 112–13. 3. Machines for talking 1. Jack Copeland, Artificial Intelligence: A Philosophical Introduction, Wiley (1993), Chapter 9. 2. Kevin Warwick and Huma Shah, “Can Machines Think? A Report on Turing Test Experiments at the Royal Society”, Journal of Experimental & Theoretical Artificial Intelligence, June 29th 2015, at http://www.tandfonline.com/doi/full/10.1080/0952813X.2015.1055826 3. This account is from the University of Pennsylvania’s Mark Liberman, in his presentation to the Centre Cournot, a Paris-based part of the Fondation de France that supports scientific research.

pages: 394 words: 118,929

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software
by Scott Rosenberg
Published 2 Jan 2006

As the project’s first big-splash Long Bet, Kapor wagered $20,000 (all winnings earmarked for worthy nonprofit institutions) that by 2029 no computer or “machine intelligence” will have passed the Turing Test. (To pass a Turing Test, typically conducted via the equivalent of instant messaging, a computer program must essentially fool human beings into believing that they are conversing with a person rather than a machine.) Taking the other side of the bet was Ray Kurzweil, a prolific inventor responsible for breakthroughs in electronic musical instruments and speech recognition who had more recently become a vigorous promoter of an aggressive species of futurism. Kurzweil’s belief in a machine that could ace the Turing Test was one part of his larger creed—that human history was about to be kicked into overdrive by the exponential acceleration of Moore’s Law and a host of other similar skyward-climbing curves.

Like a black hole or any similar rent in the warp and woof of space-time, a singularity is a disruption of continuity, a break with the past. It is a point at which everything changes, and a point beyond which we can’t see. Kurzweil predicts that artificial intelligence will induce a singularity in human history. When it rolls out, sometime in the late 2020s, an artificial intelligence’s passing of the Turing Test will be a mere footnote to this singularity’s impact—which will be, he says, to generate a “radical transformation of the reality of human experience” by the 2040s. Utopian? Not really. Kurzweil is careful to lay out the downsides of his vision. Apocalyptic? Who knows—the Singularity’s consequences are, by definition, inconceivable to us pre-Singularitarians.

pages: 467 words: 116,094

I Think You'll Find It's a Bit More Complicated Than That
by Ben Goldacre
Published 22 Oct 2014

Reading New Scientist’s chat with Nanniebot, the excellent www.ntk.net/ (Private Eye for geeks) points out that Nanniebot ‘seems to be able to make logical deductions, parse colloquial English, correctly choose the correct moment to scan a database of UK national holidays, comment on the relative qualities of the Robocop series, and divine the nature of pancakes and pancake day’. Jabberwock, the winner of last year’s Loebner Prize for the Turing test, is rubbish in comparison (you can talk to it online and see for yourself). But Jim Wightman, the Nanniebot inventor – whose site claims they’ve passed the Turing test – isn’t entering the Loebner Prize this year. Maybe next year … it’s too buggy. But it’s live on the internet already? Can I test it? Sure. But I want to see with my own eyes that there’s not a real human being somewhere tapping out the answers, I explain.

Artificial intelligence, in the form of ‘Nanniebots’, is being used to catch paedophiles. Nanniebots are AI programs which hang out in internet chatrooms, allegedly spotting the signs of grooming. They have done ‘such a good job of passing themselves off as young people that they have proved indistinguishable from them’, according to New Scientist. So that’s the Turing test – where a computer program is indistinguishable from a real person – passed; and who’d have thought it, in a program written by a lone IT consultant from Wolverhampton with no AI background. So I call him. Here’s the problem. Reading New Scientist’s chat with Nanniebot, the excellent www.ntk.net/ (Private Eye for geeks) points out that Nanniebot ‘seems to be able to make logical deductions, parse colloquial English, correctly choose the correct moment to scan a database of UK national holidays, comment on the relative qualities of the Robocop series, and divine the nature of pancakes and pancake day’.

124–6 Science and Technology Committee, House of Commons 196–7, 200–1, 322 Science Citation Index 22 Scientific American 261 Scott, Fiona 352, 353–5 Scottish Health Survey 106 screening for diseases xviii, 113–15, 334 Seasilver nutrient potion 387 ‘second-round’ effects 111, 112 select committees xx, 84, 196–201, 322 Sense About Science 256 Sgreccia, Bishop Elio 184 Shape Up for Summer 269 Sharp, Dr Julie 339 Shaw, Sophia 329–31 Sheffield Philharmonic Orchestra 310 Sheldrake, Rupert 190, 304 Sigman, Aric 5–8 Singh, Simon 250–4 Sky TV 371–5 smear campaigns, evidence-based 316–18 Smeed’s Law 112 Smith, Gary 104 smoking: Alzheimer’s and 20–1; ‘bioresonance’ treatment to help quit 277–8; cancer and 3, 22, 108, 109, 187; cigarette packaging 318–21; number of deaths caused by 187 Snow, John 365 Social Psychology and Personality Science 306–7 Social Text 297 Society of Biology 7 Soil Association 25, 191–2, 193 sokal hoax 297 Sonnaband, Dr Joe 285 Sorrows of Young Werther, The (Goethe) 361 South Africa, Aids in 140, 141, 182, 185–6, 273, 284, 285 South Bank University: Criminal Policy Research Unit 178–9 South Wales Evening Post 357 Spectator xxi; Aids denialism at the 283–6 Speigelhalter, David 102–3; Bicycle Helmets and the Law (editorial for BMJ co-written with Ben Goldacre) 110–13, 110n sperm donor clinics, pornography in xix, 179–82 Stanford University 262 STARFlex device 248 statins xvii statistics xvii–xviii, xix, 47–69; academic misuse of 129–31; algorithms and 52–3, 299; baseline problem 51–3; Benford’s Law 54–6; bicycle helmets and 110–13; chance and 56–8; coffee, hallucinatory effects of 64–6; datamining, terrorism and 51–3; government and xix, 147–65 see also government statistics; Down’s syndrome births, increase in 61–3; journalists find imaginary patterns in statistical noise 101–4; joy of xv; neuroscience and misuse of xviii–xix, 131–4; ‘95 per cent confidence intervals’ 59–61; one data point isn’t enough to spot a pattern 49–51; positions of ancient sites 
analysis 66–9; random variation 57, 61, 102, 103; relative risk reduction 115; sampling error 56–61 steroids, head injury and 207–8 Stonewall 92–4 Stott, Carol 354–5 stroke 119–20 suicide: copy-cat behaviour and reporting of xxi–xxii, 361–3; heroin addiction and 242; linked to phone masts story 333, 363–7 Sun: anti-cuts demo arrests story 155; ‘Downloading costs Billions’ story 159; pornography for sperm donors story 179–82; Sarah’s Law and 157–8 Sunday Express: Jab ‘as deadly as the Cancer’ cervical cancer story 331–4; ‘Suicides “linked to phone masts’’’ story 363–5 Sunday Sentinel, The 44 Sunday Telegraph: ‘Health Warning: Exercise Makes You Fat’ story 335–7 Sunday Times: Aids denialist reporting, 1990s and 283; ‘Public Sector Pay Races Ahead in a Recession’ story 149–52 superstition, performance and 313–15 ‘surrogate’ outcomes 119–20, 225–6, 359 surveys xvi, xviii, 87–97; abortions, GPs and 90–1; How to Lie with Statistics (Huff) 89–91; interesting form of wrong 92–4; nature of questions/leading with questions 89–91, 94–7; sample with built-in bias 89–91 Swartz, Aaron 32–4 sympathetic nervous system 144 systematic reviews 6–7, 12, 20–1, 23, 25–8, 140, 156–7, 192–3, 298, 314, 323, 336, 359 Taliban 221–4 tap water, fluoride in 22–5 teaching profession, evidence-based practice revolution in xx, 202–18 Tennison, Steve 82 Terrence Higgins Trust 187 Test of Developed Abilities (TDA) 189 Thapar, Professor Anita 40 ‘Therapeutic Touch’ 11–12 TheyWorkForYou.com 76 thinktanks xx, 180, 194–6, 227 time course 117 Time magazine 89 Times, The: ‘Down’s birth increase in a caring Britain’ story 61, 63; ‘girls really do prefer pink’ story 43; happiest places in Britain story 57; ‘The Value of Mathematics’, Reform thinktank report, coverage of 194 Trading Standards 12, 253 Traditional Chinese medicine 265 trionated particles xxii, 388–9 Trujillo, Cardinal Alfonso López 184 Turing test 392 2020health 180 Twitter 55, 257, 258, 308n, 315 UCL 198–9, 249, 252, 266; CIBER (Centre for 
Information Behaviour and the Evaluation of Research) 160, 161 UKUncut 155 Understanding Uncertainty website 102 Unite union 318 University College Hospital (UCH) 230, 241 University of California: Legacy Tobacco Documents Library 21 University of Chicago 285 University of Florida 134 University of Leicester 329 University of Newcastle 43n US Department of Defense 274 US Presidential Emergency Plan for Aids Relief 185 vaccine scares xxi, 85, 145, 273, 304, 331–4, 347–58, 399 vCJD 20 Velikovsky, Immanuel: Worlds in Collision 261–2 Vietnam War 231 Wakefield, Andrew 347, 354, 355, 357–8 Washington Post 39 water, drinking 11 What Works Clearing House (US government website for teachers) 214–15 Whitehall 51, 75–6 wi-fi, link to harmful effects 289–91, 293 Wightman, Jim 391–5 Wilmshurst, Dr Peter 247–50 wind farms, stranding of whales blamed on 340–1 Wine Magnet, The 122–4 Woolworths, locations of 68–9 World Aids Conference, Toronto, 2006 186 World Cancer Research Fund 337 World Health Organization (WHO) 116, 233, 289, 356 Wyatt, Professor John 197–9, 201 Wyeth ADD (pharmaceutical company) 25–6 Ying Wu 265 York University: Centre for Reviews and Dissemination at 23 YouGov 337 YouTube 258, 284 Zarrintan, Dr 144 ZenosBlog 253 Acknowledgements I have been lucky enough to be taught, corrected, calibrated, cajoled, amused, housed, helped, loved, reared, encouraged and informed by a very large number of smart and excellent people, including (each, to be clear, for only a subset of the preceeding activities): Liz Parratt, John King, Steve Rolles, Mark Pilkington, Shalinee Singh, Emily Wilson, Ian Katz, Iain Chalmers, Alex Lomas, Liam Smeeth, Ian Sample, Carl Heneghan, Richard Lehman, Kathy Flower, Ginge Tulloch, Matt Tait, Carl Reynolds, Dara Ó Briain, Paul Glasziou, Simon Wessely, Cicely Marston, Archie Cochrane, William Lee, Hind Khalifeh, Martin McKee, Cory Doctorow, Evan Harris, Muir Gray, Rob Manuel, Tobias Sargent, Anna Powell-Smith, Tjeerd van Staa, Robin Ince, Fiona 
Godlee, Trish Groves, Tracy Brown, Sile Lane, David Spiegelhalter, Ute-Marie Paul, Roddy Mansfield, Amanda Palmer, Rami Tzabar, George Davey-Smith, Charlotte Wattebot-O’Brien, Patrick Matthews, Amber Marks, Giles Wakely, Andy Lewis, Suzie Whitwell, Harry Metcalfe, Gimpy, David Colquhoun, Louise Burton, Simon Singh, Vaughan Bell, Nick Mailer, Milly Marston, Tom Steinberg, Mike Jay, Chris, Tom, Reg, Mum, Dad, Josh, Raph, Allie, Archie, Alice and Lou.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Speech recognition clearly offers a dramatic improvement in busy-hand, busy-eye scenarios for interacting with the multiplicity of Web services and smartphone applications that have emerged. Perhaps advances in brain-computer interfaces will prove to be useful for those unable to speak or when silence or stealth is needed, such as card counting in blackjack. The murkier question is whether these cybernetic assistants will eventually pass the Turing test, the metric first proposed by mathematician and computer scientist Alan Turing to determine if a computer is “intelligent.” Turing’s original 1950 paper has spawned a long-running philosophical discussion and even an annual contest, but today what is more interesting than the question of machine intelligence is what the test implies about the relationship between humans and machines.

If, after a reasonable period, the questioner was unable to determine whether he or she was communicating with a human or a machine, then the machine could be said to be “intelligent.” Although it has several variants and has been widely criticized, from a sociological point of view the test poses the right question. In other words, it is relevant with respect to the human, not the machine. In the fall of 1991 I covered the first of a series of Turing test contests sponsored by a New York City philanthropist, Hugh Loebner. The event was first held at the Boston Computer Museum and attracted a crowd of computer scientists and a smattering of philosophers. At that point the “bots,” software robots designed to participate in the contest, weren’t very far advanced beyond the legendary Eliza program written by computer scientist Joseph Weizenbaum during the 1960s.

Weizenbaum’s program mimicked a Rogerian psychologist (a human-centered form of psychiatry focused on persuading a patient to talk his or her way toward understanding his or her actual feelings) and he was horrified to discover that his students had become deeply immersed in intimate conversations with his first, simple bot. But the judges for the original Loebner contest in 1991 fell into two broad categories: computer literate and computer illiterate. For human judges without computer expertise, it turned out that for all practical purposes the Turing test was conquered in that first year. In reporting on the contest I quoted one of the nontechnical judges, a part-time auto mechanic, saying why she was fooled: “It typed something that I thought was trite, and when I responded it interacted with me in a very convincing fashion,”5 she said. It was a harbinger of things to come.
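The pattern-matching trick behind Eliza is simple enough to sketch. The Python below is in the spirit of Weizenbaum’s program, not a reconstruction of it; its keyword rules and reflection table are invented for illustration.

```python
import re

# Pronoun reflections: echo the speaker's statement back from the
# listener's point of view ("my exams" -> "your exams").
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my", "are": "am"}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECT.get(word, word) for word in text.lower().split())

def respond(utterance: str) -> str:
    """Rogerian-style reply: match a keyword pattern, reflect the rest."""
    m = re.match(r"i feel (.+)", utterance.lower())
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.+)", utterance.lower())
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    # No keyword matched: fall back to a content-free prompt.
    return "Please tell me more."

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?
```

The program understands nothing; it only rearranges the user’s own words. That this was enough to draw Weizenbaum’s students into intimate conversations is precisely what horrified him.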

pages: 510 words: 120,048

Who Owns the Future?
by Jaron Lanier
Published 6 May 2013

Some years from now, a good-enough simulation of a dead person might “pass the Turing Test,” meaning that a dead soldier’s family might treat a simulation of the soldier as real. In the tech circles where one finds an obsession with the technologies of immortality, the dominant philosophical tendency is to accept artificial intelligence as a well-formed engineering project, a view I reject. But to those who believe in it, a digital ghost that has passed the Turing Test has passed the test of legitimacy. There is, nonetheless, also a fascination with actually living longer through medicine. It’s an interesting juxtaposition. AI and Turing Test–passing ghosts might be good enough for ordinary people, but the tech elites and the superrich would prefer to do better than that.

M., 129–30, 261, 328 “Forum,” 214 Foucault, Michel, 308n 4chan, 335 4′33″ (Cage), 212 fractional reserve system, 33 Franco, Francisco, 159–60 freedom, 13–15, 32–33, 90–92, 277–78, 336 freelancing, 253–54 Free Print Shop, 228 “free rise,” 182–89, 355 free speech, 223, 225 free will, 166–68 “friction,” 179, 225, 230, 235, 354 Friendster, 180, 181 Fukuyama, Francis, 165, 189 fundamentalism, 131, 193–94 future: chaos in, 165–66, 273n, 331 economic analysis of, 1–3, 15, 22, 37, 38, 40–41, 42, 67, 122, 143, 148–52, 153, 155–56, 204, 208, 209, 236, 259, 274, 288, 298–99, 311, 362n, 363 humanistic economy for, 194, 209, 233–351 361–367 “humors” of, 124–40, 230 modern conception of, 123–40, 193–94, 255 natural basis of, 125, 127, 128–29 optimism about, 32–35, 45, 130, 138–40, 218, 230n, 295 politics of, 13–18, 22–25, 85, 122, 124–26, 128, 134–37, 199–234, 295–96, 342 technological trends in, 7–18, 21, 53–54, 60–61, 66–67, 85–86, 87, 97–98, 129–38, 157–58, 182, 188–90, 193–96, 217 utopian conception of, 13–18, 21, 30, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 future-oriented money, 32–34, 35 Gadget, 186 Gallant, Jack, 111–12 games, 362, 363 Gates, Bill, 93 Gattaca, 130 Gawker, 118n Gelernter, David, 313 “general” machines, 158 General Motors, 56–57 general relativity theory, 167n Generation X, 346 genetic engineering, 130 genetics, 109–10, 130, 131, 146–47, 329, 366 genomics, 109–10, 146–47, 366 Germany, 45 Ghostery, 109 ghost suburbs, 296 Gibson, William, 137, 309 Gizmodo, 117–18 Global Business Network (GBN), 214–15 global climate change, 17, 32, 53, 132, 133, 134, 203, 266, 295, 296–97, 301–2, 331 global economy, 33n, 153–56, 173, 201, 214–15, 280 global village, 201 God, 29, 30–31, 139 Golden Goblet, 121, 121, 175, 328 golden rule, 335–36 gold standard, 34 Google, 14, 15, 19, 69, 74, 75–76, 90, 94, 106, 110, 120, 128, 153, 154, 170, 171, 174, 176, 180, 181–82, 188, 191, 192, 193, 199–200, 201, 209, 210, 217, 225, 227, 246, 
249, 265, 267, 272, 278, 280, 286, 305n, 307, 309–10, 322, 325, 330, 344, 348, 352 Google Goggles, 309–10 Googleplex, 199–200 goops, 85–89, 99 Gore, Al, 80n Graeber, David, 30n granularity, 277 graph-shaped networks, 241, 242–43 Great Britain, 200 Great Depression, 69–70, 75, 135, 299 Great Recession, 31, 54, 60, 76–77, 204, 311, 336–37 Greece, 22–25, 45, 125 Grigorov, Mario, 267 guitars, 154 guns, 310–11 Gurdjieff, George, 215, 216 gurus, 211–13 hackers, 14, 82, 265, 306–7, 345–46 Hardin, Garrett, 66n Hartmann, Thom, 33n Hayek, Friedrich, 204 health care, 66–67, 95, 98–99, 100, 132–33, 153–54, 249, 253, 258, 337, 346 health insurance, 66–67, 95, 98–99, 100, 153–54 Hearts and Minds, 353n heart surgery, 11–13, 17, 18, 157–58 heat, 56 hedge funds, 69, 106, 137 Hephaestus, 22, 23 high-dimensional problems, 145 high-frequency trading, 56, 76–78, 154 highways, 79–80, 345 Hinduism, 214 Hippocrates, 124n Hiroshima bombing (1945), 127 Hollywood, 204, 206, 242 holographic radiation, 11 Homebrew Club, 228 homelessness, 151 homeopathy, 131–32 Homer, 23, 55 Honan, Mat, 82 housing market, 33, 46, 49–52, 61, 78, 95–96, 99, 193, 224, 227, 239, 245, 255, 274n, 289n, 296, 298, 300, 301 HTML, 227, 230 Huffington Post, 176, 180, 189 human agency, 8–21, 50–52, 85, 88, 91, 124–40, 144, 165–66, 175–78, 191–92, 193, 217, 253–64, 274–75, 283–85, 305–6, 328, 341–51, 358–60, 361, 362, 365–67 humanistic information economy, 194, 209, 233–351 361–367 human reproduction, 131 humors (tropes), 124–40, 157, 170, 230 hunter-gatherer societies, 131, 261–62 hyperefficient markets, 39, 42–43 hypermedia, 224–30, 245 hyper-unemployment, 7–8 hypotheses, 113, 128, 151 IBM, 191 identity, 14–15, 82, 124, 173–74, 175, 248–51, 283–90, 305, 306, 307, 315–16, 319–21 identity theft, 82, 315–16 illusions, 55, 110n, 120–21, 135, 154–56, 195, 257 immigration, 91, 97, 346 immortality, 193, 218, 253, 263–64, 325–31, 367 imports, 70 income levels, 10, 46–47, 50–54, 152, 178, 270–71, 287–88, 291–94, 338–39, 365 
incrementalism, 239–40 indentured servitude, 33n, 158 India, 54, 211–13 industrialization, 49, 83, 85–89, 123, 132, 154, 343 infant mortality rates, 17, 134 infinity, 55–56 inflation, 32, 33–34 information: age of, 15–17, 42, 166, 241 ambiguity of, 41, 53–54, 155–56 asymmetry of, 54–55, 61–66, 118, 188, 203, 246–48, 285–88, 291–92, 310 behavior influenced by, 32, 121, 131, 173–74, 286–87 collection of, 61–62, 108–9 context of, 143–44, 178, 188–89, 223–24, 225, 245–46, 247, 248–51, 338, 356–57, 360 correlations in, 75–76, 114–15, 192, 274–75 for decision-making, 63–64, 184, 266, 269–75, 284n digital networks for, see digital networks duplication of, 50–52, 61, 74, 78, 88, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 economic impact of, 1–3, 8–9, 15–17, 18, 19–20, 21, 35, 60–61, 92–97, 118, 185, 188, 201, 207, 209, 241–43, 245–46, 246–48, 256–58, 263, 283–87, 291–303, 331, 361–67 in education, 92–97 encrypted, 14–15, 175, 239–40, 305–8, 345 false, 119–21, 186, 275n, 287–88, 299–300 filters for, 119–20, 200, 225, 356–57 free, 7–9, 15–16, 50–52, 61, 74, 78, 88, 214, 223–30, 239–40, 246, 253–64, 277, 317–24, 335, 349 history of, 29–31 human agency in, 22–25, 69–70, 120–21, 122, 190–91 interpretation of, 29n, 114–15, 116, 120–21, 129–32, 154, 158, 178, 183, 184, 188–89 investment, 59–60, 179–85 life cycle of, 175–76 patterns in, 178, 183, 184, 188–89 privacy of, see privacy provenance of, 245–46, 247, 338 sampling of, 71–72, 191, 221, 224–26, 259 shared, 50–52, 61, 74, 78, 88, 100, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 signals in, 76–78, 148, 293–94 storage of, 29, 167n, 184–85; see also cloud processors and storage; servers superior, 61–66, 114, 128, 143, 171, 246–48 technology of, 7, 32–35, 49, 66n, 71–72, 109, 110, 116, 120, 125n, 126, 135, 136, 254, 312–16, 317 transparency of, 63–66, 74–78, 118, 190–91, 306–7 two-way links in, 1–2, 227, 245, 289 value of, 1–3, 15–16, 20, 210, 235–43, 257–58, 259, 261–63, 271–75, 321–24, 358–60 see also big data; data 
infrastructure, 79–80, 87, 179, 201, 290, 345 initial public offerings (IPOs), 103 ink, 87, 331 Inner Directeds, 215 Instagram, 2, 53 instant prices, 272, 275, 288, 320 insurance industry, 44, 56, 60, 66–67, 95, 98–99, 100, 153–54, 203, 306 intellectual property, 44, 47, 49, 60, 61, 96, 102, 183, 204, 205–10, 223, 224–26, 236, 239–40, 246, 253–64 intelligence agencies, 56, 61, 199–200, 291, 346 intelligence tests, 39, 40 interest rates, 81 Internet: advertising on, 14, 20, 24, 42, 66, 81, 107, 109, 114, 129, 154, 169–74, 177, 182, 207, 227, 242, 266–67, 275, 286, 291, 322–24, 347–48, 354, 355 anonymity of, 172, 248–51, 283–90 culture of, 13–15, 25 development of, 69, 74, 79–80, 89, 129–30, 159, 162, 190–96, 223, 228 economic impact of, 1–2, 18, 19–20, 24, 31, 43, 60–66, 79–82, 117, 136–37, 169–74, 181, 186 employment and, 2, 7–8, 56–57, 60, 71–74, 79, 117, 123, 135, 149, 178, 201, 257–58 file sharing on, 50–52, 61, 74, 78, 88, 100, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 free products and services of, 7n, 10, 60–61, 73, 81, 82, 90, 94–96, 97, 128, 154, 176, 183, 187, 201, 205–10, 234, 246–48, 253–64, 283–88, 289, 308–9, 317–24, 337–38, 348–50, 366 human contributions to, 19–21, 128, 129–30, 191–92, 253–64 identity in, 14–15, 82, 173–74, 175, 283–90, 315–16 investment in, 117–20, 181 legal issues in, 63, 79–82, 204, 206, 318–19 licensing agreements for, 79–82 as network, 2–3, 9, 11, 12, 14, 15, 16, 17, 19–21, 31, 49, 50–51, 53, 54–55, 56, 57, 75, 92, 129–30, 143–48, 228–29, 259, 286–87, 308–9 political aspect of, 13–15, 205–10 search engines for, 51, 60, 70, 81, 120, 191, 267, 289, 293; see also Google security of, 14–15, 175, 239–40, 305–8, 345 surveillance of, 1–2, 11, 14, 50–51, 64, 71–72, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 206–7, 234–35, 246, 272, 291, 305, 309–11, 315, 316, 317, 319–24 transparency of, 63–66, 176, 205–6, 278, 291, 308–9, 316, 336 websites on, 80, 170, 200, 201, 343 Internet2, 69 Internet service providers (ISPs), 171–72 
Interstate Highway System, 79–80, 345 “In-valid,” 130 inventors, 117–20 investment, financial, 45, 50, 59–67, 74–80, 115, 116–20, 155, 179–85, 208, 218, 257, 258, 277–78, 298, 301, 348, 350 Invisible Hand humor, 126, 128 IP addresses, 248 iPads, 267 Iran, 199, 200 irony, 130 Islam, 184 Italy, 133 Jacquard programmable looms, 23n “jailbreaking,” 103–4 Japan, 85, 97, 98, 133 Jeopardy, 191 Jeremijenko, Natalie, 302 jingles, 267 jobs, see employment Jobs, Steve, 93, 166n, 192, 358 JOBS Act (2012), 117n journalism, 92, 94 Kapital, Das (Marx), 136 Keynesianism, 38, 151–52, 204, 209, 274, 288 Khan Academy, 94 Kickstarter, 117–20, 186–87, 343 Kindle, 352 Kinect, 89n, 265 “Kirk’s Wager,” 139 Klout, 365 Kodak, 2, 53 Kottke, Dan, 211 KPFA, 136 Kurzweil, Ray, 127, 325, 327 Kushner, Tony, 165, 189 LaBerge, Stephen, 162 labor, human, 85, 86, 87, 88, 99–100, 257–58, 292 labor unions, 44, 47–48, 49, 96, 239, 240 Laffer curve, 149–51, 150, 152 Las Vegas, Nev., 296, 298 lawyers, 98–99, 100, 136, 184, 318–19 leadership, 341–51 legacy prices, 272–75, 288 legal issues, 49, 63, 74–82, 98–99, 100, 104–5, 108, 136, 184, 204, 206, 318–19 Lehman Brothers, 188 lemonade stands, 79–82 “lemons,” 118–19 Lennon, John, 211, 213 levees, economic, 43–45, 46, 47, 48, 49–50, 52, 92, 94, 96, 98, 108, 171, 176n, 224–25, 239–43, 253–54, 263, 345 leveraged mortgages, 49–50, 61, 227, 245, 289n, 296 liberal arts, 97 liberalism, 135–36, 148, 152, 202, 204, 208, 235, 236, 251, 253, 256, 265, 293, 350 libertarianism, 14, 34, 80, 202, 208, 210, 262, 321 liberty, 13–15, 32–33, 90–92, 277–78, 336 licensing agreements, 79–82 “Lifestreams” (Gelernter), 313 Lights in the Tunnel, The (Ford), 56n Linux, 206, 253, 291, 344 litigation, 98–99, 100, 104–5, 108, 184 loans, 32–33, 42, 43, 74, 151–52, 306 local advantages, 64, 94–95, 143–44, 153–56, 173, 203, 280 Local/Global Flip, 153–56, 173, 280 locked-in software, 172–73, 182, 273–74 logical copies, 223 Long-Term Capital Management, 49, 74–75 looms, 22, 23n, 24 
loopholes, tax, 77 lotteries, 338–39 lucid dreaming, 162 Luddites, 135, 136 lyres, 22, 23n, 24 machines, 19–20, 86, 92, 123, 129–30, 158, 261, 309–11, 328 see also computers “Machine Stops, The” (Forster), 129–30, 261, 328 machine translations, 19–20 machine vision, 309–11 McMillen, Keith, 117 magic, 110, 115, 151, 178, 216, 338 Malthus, Thomas, 132, 134 Malthusian humor, 125, 127, 132–33 management, 49 manufacturing sector, 49, 85–89, 99, 123, 154, 343 market economies, see economies, market marketing, 211–13, 266–67, 306, 346 “Markets for Lemons” problem, 118–19 Markoff, John, 213 marriage, 167–68, 274–75, 286 Marxism, 15, 22, 37–38, 48, 136–37, 262 as humor, 126 mash-ups, 191, 221, 224–26, 259 Maslow, Abraham, 260, 315 Massachusetts Institute of Technology (MIT), 75, 93, 94, 96–97, 157–58, 184 mass media, 7, 66, 86, 109, 120, 135, 136, 185–86, 191, 216, 267 material extinction, 125 materialism, 125n, 195 mathematics, 11, 20, 40–41, 70, 71–72, 75–78, 116, 148, 155, 161, 189n, 273n see also statistics Matrix, The, 130, 137, 155 Maxwell, James Clerk, 55 Maxwell’s Demon, 55–56 mechanicals, 49, 51n Mechanical Turk, 177–78, 185, 187, 349 Medicaid, 99 medicine, 11–13, 17, 18, 54, 66–67, 97–106, 131, 132–33, 134, 150, 157–58, 325, 346, 363, 366–67 Meetings with Remarkable Men (Gurdjieff), 215 mega-dossiers, 60 memes, 124 Memex, 221n memories, 131, 312–13, 314 meta-analysis, 112 metaphysics, 12, 127, 139, 193–95 Metcalf’s Law, 169n, 350 Mexico City, 159–62 microfilm, 221n microorganisms, 162 micropayments, 20, 226, 274–75, 286–87, 317, 337–38, 365 Microsoft, 19, 89, 265 Middle Ages, 190 middle class, 2, 3, 9, 11, 16–17, 37–38, 40, 42–45, 47, 48, 49, 50, 51, 60, 74, 79, 91, 92, 95, 98, 171, 205, 208, 210, 224–25, 239–43, 246, 253–54, 259, 262, 263, 280, 291–94, 331, 341n, 344, 345, 347, 354 milling machines, 86 mind reading, 111 Minority Report, 130, 310 Minsky, Marvin, 94, 157–58, 217, 326, 330–31 mission statements, 154–55 Mixed (Augmented) Reality, 312–13, 314, 315 
mobile phones, 34n, 39, 85, 87, 162, 172, 182n, 192, 229, 269n, 273, 314, 315, 331 models, economic, 40–41, 148–52, 153, 155–56 modernity, 123–40, 193–94, 255 molds, 86 monetization, 172, 176n, 185, 186, 207, 210, 241–43, 255–56, 258, 260–61, 263, 298, 331, 338, 344–45 money, 3, 21, 29–35, 86, 108, 124, 148, 152, 154, 155, 158, 172, 185, 241–43, 278–79, 284–85, 289, 364 monocultures, 94 monopolies, 60, 65–66, 169–74, 181–82, 187–88, 190, 202, 326, 350 Moondust, 362n Moore’s Law, 9–18, 20, 153, 274–75, 288 morality, 29–34, 35, 42, 50–52, 54, 71–74, 188, 194–95, 252–64, 335–36 Morlocks, 137 morning-after pill, 104 morphing, 162 mortality, 193, 218, 253, 263–64, 325–31, 367 mortgages, 33, 46, 49–52, 61, 78, 95–96, 99, 224, 227, 239, 245, 255, 274n, 289n, 296, 300 motivation, 7–18, 85–86, 97–98, 216 motivational speakers, 216 movies, 111–12, 130, 137, 165, 192, 193, 204, 206, 256, 261–62, 277–78, 310 Mozart, Wolfgang Amadeus, 23n MRI, 111n music industry, 11, 18, 22, 23–24, 42, 47–51, 54, 61, 66, 74, 78, 86, 88, 89, 92, 94, 95–96, 97, 129, 132, 134–35, 154, 157, 159–62, 186–87, 192, 206–7, 224, 227, 239, 253, 266–67, 281, 318, 347, 353, 354, 355, 357 Myspace, 180 Nancarrow, Conlon, 159–62 Nancarrow, Yoko, 161 nanopayments, 20, 226, 274–75, 286–87, 317, 337–38, 365 nanorobots, 11, 12, 17 nanotechnology, 11, 12, 17, 87, 162 Napster, 92 narcissism, 153–56, 188, 201 narratives, 165–66, 199 National Security Agency (NSA), 199–200 natural medicine, 131 Nelson, Ted, 128, 221, 228, 245, 349–50 Nelsonian systems, 221–30, 335 Nelson’s humor, 128 Netflix, 192, 223 “net neutrality,” 172 networked cameras, 309–11, 319 networks, see digital networks neutrinos, 110n New Age, 211–17 Newmark, Craig, 177n New Mexico, 159, 203 newspapers, 109, 135, 177n, 225, 284, 285n New York, N.Y., 75, 91, 266–67 New York Times, 109 Nobel Prize, 40, 118, 143n nodes, network, 156, 227, 230, 241–43, 350 “no free lunch” principle, 55–56, 59–60 nondeterministic music, 23n nonlinear solutions, 149–50 
nonprofit share sites, 59n, 94–95 nostalgia, 129–32 NRO, 199–200 nuclear power, 133 nuclear weapons, 127, 296 nursing, 97–100, 123, 296n nursing homes, 97–100, 269 Obama, Barack, 79, 100 “Obamacare,” 100n obsolescence, 89, 95 oil resources, 43, 133 online stores, 171 Ono, Yoko, 212 ontologies, 124n, 196 open-source applications, 206, 207, 272, 310–11 optical illusions, 121 optimism, 32–35, 45, 130, 138–40, 218, 230n, 295 optimization, 144–47, 148, 153, 154–55, 167, 202, 203 Oracle, 265 Orbitz, 63, 64, 65 organ donors, 190, 191 ouroboros, 154 outcomes, economic, 40–41, 144–45 outsourcing, 177–78, 185 Owens, Buck, 256 packet switching, 228–29 Palmer, Amanda, 186–87 Pandora, 192 panopticons, 308 papacy, 190 paper money, 34n parallel computers, 147–48, 149, 151 paranoia, 309 Parrish, Maxfield, 214 particle interactions, 196 party machines, 202 Pascal, Blaise, 132, 139 Pascal’s Wager, 139 passwords, 307, 309 “past-oriented money,” 29–31, 35, 284–85 patterns, information, 178, 183, 184, 188–89 Paul, Ron, 33n Pauli exclusion principle, 181, 202 PayPal, 60, 93, 326 peasants, 565 pensions, 95, 99 Perestroika (Kushner), 165 “perfect investments,” 59–67, 77–78 performances, musical, 47–48, 51, 186–87, 253 perpetual motion, 55 Persian Gulf, 86 personal computers (PCs), 158, 182n, 214, 223, 229 personal information systems, 110, 312–16, 317 Pfizer, 265 pharmaceuticals industry, 66–67, 100–106, 123, 136, 203 philanthropy, 117 photography, 53, 89n, 92, 94, 309–11, 318, 319, 321 photo-sharing services, 53 physical trades, 292 physicians, 66–67 physics, 88, 153n, 167n Picasso, Pablo, 108 Pinterest, 180–81, 183 Pirate Party, 49, 199, 206, 226, 253, 284, 318 placebos, 112 placement fees, 184 player pianos, 160–61 plutocracy, 48, 291–94, 355 police, 246, 310, 311, 319–21, 335 politics, 13–18, 21, 22–25, 47–48, 85, 122, 124–26, 128, 134–37, 149–51, 155, 167, 199–234, 295–96, 342 see also conservatism; liberalism; libertarianism Ponzi schemes, 48 Popper, Karl, 189n popular culture, 
111–12, 130, 137–38, 139, 159 “populating the stack,” 273 population, 17, 34n, 86, 97–100, 123, 125, 132, 133, 269, 296n, 325–26, 346 poverty, 37–38, 42, 44, 53–54, 93–94, 137, 148, 167, 190, 194, 253, 256, 263, 290, 291–92 power, personal, 13–15, 53, 60, 62–63, 86, 114, 116, 120, 122, 158, 166, 172–73, 175, 190, 199, 204, 207, 208, 278–79, 290, 291, 302–3, 308–9, 314, 319, 326, 344, 360 Presley, Elvis, 211 Priceline, 65 pricing strategies, 1–2, 43, 60–66, 72–74, 145, 147–48, 158, 169–74, 226, 261, 272–75, 289, 317–24, 331, 337–38 printers, 90, 99, 154, 162, 212, 269, 310–11, 316, 331, 347, 348, 349 privacy, 1–2, 11, 13–15, 25, 50–51, 64, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 204, 206–7, 234–35, 246, 272, 291, 305, 309–13, 314, 315–16, 317, 319–24 privacy rights, 13–15, 25, 204, 305, 312–13, 314, 315–16, 321–22 product design and development, 85–89, 117–20, 128, 136–37, 145, 154, 236 productivity, 7, 56–57, 134–35 profit margins, 59n, 71–72, 76–78, 94–95, 116, 177n, 178, 179, 207, 258, 274–75, 321–22 progress, 9–18, 20, 21, 37, 43, 48, 57, 88, 98, 123, 124–40, 130–37, 256–57, 267, 325–31, 341–42 promotions, 62 property values, 52 proprietary hardware, 172 provenance, 245–46, 247, 338 pseudo-asceticism, 211–12 public libraries, 293 public roads, 79–80 publishers, 62n, 92, 182, 277–78, 281, 347, 352–60 punishing vs. 
rewarding network effects, 169–74, 182, 183 quants, 75–76 quantum field theory, 167n, 195 QuNeo, 117, 118, 119 Rabois, Keith, 185 “race to the bottom,” 178 radiant risk, 61–63, 118–19, 120, 156, 183–84 Ragnarok, 30 railroads, 43, 172 Rand, Ayn, 167, 204 randomness, 143 rationality, 144 Reagan, Ronald, 149 real estate, 33, 46, 49–52, 61, 78, 95–96, 99, 193, 224, 227, 239, 245, 255, 274n, 289n, 296, 298, 300, 301 reality, 55–56, 59–60, 124n, 127–28, 154–56, 161, 165–68, 194–95, 203–4, 216–17, 295–303, 364–65 see also Virtual Reality (VR) reason, 195–96 recessions, economic, 31, 54, 60, 76–77, 79, 151–52, 167, 204, 311, 336–37 record labels, 347 recycling, 88, 89 Reddit, 118n, 186, 254 reductionism, 184 regulation, economic, 37–38, 44, 45–46, 49–50, 54, 56, 69–70, 77–78, 266n, 274, 299–300, 311, 321–22, 350–51 relativity theory, 167n religion, 124–25, 126, 131, 139, 190, 193–95, 211–17, 293, 300n, 326 remote computers, 11–12 rents, 144 Republican Party, 79, 202 research and development, 40–45, 85–89, 117–20, 128, 136–37, 145, 154, 215, 229–30, 236 retail sector, 69, 70–74, 95–96, 169–74, 272, 349–51, 355–56 retirement, 49, 150 revenue growth plans, 173n revenues, 149, 149, 150, 151, 173n, 225, 234–35, 242, 347–48 reversible computers, 143n revolutions, 199, 291, 331 rhythm, 159–62 Rich Dad, Poor Dad (Kiyosaki), 46 risk, 54, 55, 57, 59–63, 71–72, 85, 117, 118–19, 120, 156, 170–71, 179, 183–84, 188, 242, 277–81, 284, 337, 350 externalization of, 59n, 117, 277–81 risk aversion, 188 risk pools, 277–81, 284 risk radiation, 61–63, 118–19, 120, 156, 183–84 robo call centers, 177n robotic cars, 90–92 robotics, robots, 11, 12, 17, 23, 42, 55, 85–86, 90–92, 97–100, 111, 129, 135–36, 155, 157, 162, 260, 261, 269, 296n, 342, 359–60 Roman Empire, 24–25 root nodes, 241 Rousseau, Jean-Jacques, 129 Rousseau humor, 126, 129, 130–31 routers, 171–72 royalties, 47, 240, 254, 263–64, 323, 338 Rubin, Edgar, 121 rupture, 66–67 salaries, 10, 46–47, 50–54, 152, 178, 270–71, 287–88, 291–94, 
338–39, 365 sampling, 71–72, 191, 221, 224–26, 259 San Francisco, University of, 190 satellites, 110 savings, 49, 72–74 scalable solutions, 47 scams, 119–21, 186, 275n, 287–88, 299–300 scanned books, 192, 193 SceneTap, 108n Schmidt, Eric, 305n, 352 Schwartz, Peter, 214 science fiction, 18, 126–27, 136, 137–38, 139, 193, 230n, 309, 356n search engines, 51, 60, 70, 81, 120, 191, 267, 289, 293 Second Life, 270, 343 Secret, The (Byrne), 216 securitization, 76–78, 99, 289n security, 14–15, 175, 239–40, 305–8, 345 self-actualization, 211–17 self-driving vehicles, 90–92, 98, 311, 343, 367 servants, 22 servers, 12n, 15, 31, 53–57, 71–72, 95–96, 143–44, 171, 180, 183, 206, 245, 358 see also Siren Servers “Sexy Sadie,” 213 Shakur, Tupac, 329 Shelley, Mary, 327 Short History of Progress, A (Wright), 132 “shrinking markets,” 66–67 shuttles, 22, 23n, 24 signal-processing algorithms, 76–78, 148 silicon chips, 10, 86–87 Silicon Valley, 12, 13, 14, 21, 34n, 56, 59, 60, 66–67, 70, 71, 75–76, 80, 93, 96–97, 100, 102, 108n, 125n, 132, 136, 154, 157, 162, 170, 179–89, 192, 193, 200, 207, 210, 211–18, 228, 230, 233, 258, 275n, 294, 299–300, 325–31, 345, 349, 352, 354–58 singularity, 22–25, 125, 215, 217, 327–28, 366, 367 Singularity University, 193, 325, 327–28 Sirenic Age, 66n, 354 Siren Servers, 53–57, 59, 61–64, 65, 66n, 69–78, 82, 91–99, 114–19, 143–48, 154–56, 166–89, 191, 200, 201, 203, 210n, 216, 235, 246–50, 258, 259, 269, 271, 272, 280, 285, 289, 293–94, 298, 301, 302–3, 307–10, 314–23, 326, 336–51, 354, 365, 366 Siri, 95 skilled labor, 99–100 Skout, 280n Skype, 95, 129 slavery, 22, 23, 33n Sleeper, 130 small businesses, 173 smartphones, 34n, 39, 162, 172, 192, 269n, 273 Smith, Adam, 121, 126 Smolin, Lee, 148n social contract, 20, 49, 247, 284, 288, 335, 336 social engineering, 112–13, 190–91 socialism, 14, 128, 254, 257, 341n social mobility, 66, 97, 292–94 social networks, 18, 51, 56, 60, 70, 81, 89, 107–9, 113, 114, 129, 167–68, 172–73, 179, 180, 190, 199, 200–201, 202, 
204, 227, 241, 242–43, 259, 267, 269n, 274–75, 280n, 286, 307–8, 317, 336, 337, 343, 349, 358, 365–66 see also Facebook social safety nets, 10, 44, 54, 202, 251, 293 Social Security, 251, 345 software, 7, 9, 11, 14, 17, 68, 86, 99, 100–101, 128, 129, 147, 154, 155, 165, 172–73, 177–78, 182, 192, 234, 236, 241–42, 258, 262, 273–74, 283, 331, 347, 357 software-mediated technology, 7, 11, 14, 86, 100–101, 165, 234, 236, 258, 347 South Korea, 133 Soviet Union, 70 “space elevator pitch,” 233, 342, 361 space travel, 233, 266 Spain, 159–60 spam, 178, 275n spending levels, 287–88 spirituality, 126, 211–17, 325–31, 364 spreadsheet programs, 230 “spy data tax,” 234–35 Square, 185 Stalin, Joseph, 125n Stanford Research Institute (SRI), 215 Stanford University, 60, 75, 90, 95, 97, 101, 102, 103, 162, 325 Starr, Ringo, 256 Star Trek, 138, 139, 230n startup companies, 39, 60, 69, 93–94, 108n, 124n, 136, 179–89, 265, 274n, 279–80, 309–10, 326, 341, 343–45, 348, 352, 355 starvation, 123 Star Wars, 137 star (winner-take-all) system, 38–43, 50, 54–55, 204, 243, 256–57, 263, 329–30 statistics, 11, 20, 71–72, 75–78, 90–91, 93, 110n, 114–15, 186, 192 “stickiness,” 170, 171 stimulus, economic, 151–52 stoplights, 90 Strangelove humor, 127 student debt, 92, 95 “Study 27,” 160 “Study 36,” 160 Sumer, 29 supergoop, 85–89 supernatural phenomena, 55, 124–25, 127, 132, 192, 194–95, 300 supply chain, 70–72, 174, 187 Supreme Court, U.S., 104–5 surgery, 11–13, 17, 18, 98, 157–58, 363 surveillance, 1–2, 11, 14, 50–51, 64, 71–72, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 206–7, 234–35, 246, 272, 291, 305, 309–11, 315, 316, 317, 319–24 Surviving Progress, 132 sustainable economies, 235–37, 285–87 Sutherland, Ivan, 221 swarms, 99, 109 synthesizers, 160 synthetic biology, 162 tablets, 85, 86, 87, 88, 113, 162, 229 Tahrir Square, 95 Tamagotchis, 98 target ads, 170 taxation, 44, 45, 49, 52, 60, 74–75, 77, 82, 149, 149, 150, 151, 202, 210, 234–35, 263, 273, 289–90 taxis, 44, 91–92, 239, 240, 
266–67, 269, 273, 311 Teamsters, 91 TechCrunch, 189 tech fixes, 295–96 technical schools, 96–97 technologists (“techies”), 9–10, 15–16, 45, 47–48, 66–67, 88, 122, 124, 131–32, 134, 139–40, 157–62, 165–66, 178, 193–94, 295–98, 307, 309, 325–31, 341, 342, 356n technology: author’s experience in, 47–48, 62n, 69–72, 93–94, 114, 130, 131–32, 153, 158–62, 178, 206–7, 228, 265, 266–67, 309–10, 325, 328, 343, 352–53, 362n, 364, 365n, 366 bio-, 11–13, 17, 18, 109–10, 162, 330–31 chaos and, 165–66, 273n, 331 collusion in, 65–66, 72, 169–74, 255, 350–51 complexity of, 53–54 costs of, 8, 18, 72–74, 87n, 136–37, 170–71, 176–77, 184–85 creepiness of, 305–24 cultural impact of, 8–9, 21, 23–25, 53, 130, 135–40 development and emergence of, 7–18, 21, 53–54, 60–61, 66–67, 85–86, 87, 97–98, 129–38, 157–58, 182, 188–90, 193–96, 217 digital, 2–3, 7–8, 15–16, 18, 31, 40, 43, 50–51, 132, 208 economic impact of, 1–3, 15–18, 29–30, 37, 40, 53–54, 60–66, 71–74, 79–110, 124, 134–37, 161, 162, 169–77, 181–82, 183, 184–85, 218, 254, 277–78, 298, 335–39, 341–51, 357–58 educational, 92–97 efficiency of, 90, 118, 191 employment in, 56–57, 60, 71–74, 79, 123, 135, 178 engineering for, 113–14, 123–24, 192, 194, 217, 218, 326 essential vs. 
worthless, 11–12 failure of, 188–89 fear of (technophobia), 129–32, 134–38 freedom as issue in, 32–33, 90–92, 277–78, 336 government influence in, 158, 199, 205–6, 234–35, 240, 246, 248–51, 307, 317, 341, 345–46, 350–51 human agency and, 8–21, 50–52, 85, 88, 91, 124–40, 144, 165–66, 175–78, 191–92, 193, 217, 253–64, 274–75, 283–85, 305–6, 328, 341–51, 358–60, 361, 362, 365–67 ideas for, 123, 124, 158, 188–89, 225, 245–46, 286–87, 299, 358–60 industrial, 49, 83, 85–89, 123, 132, 154, 343 information, 7, 32–35, 49, 66n, 71–72, 109, 110, 116, 120, 125n, 126, 135, 136, 254, 312–16, 317 investment in, 66, 181, 183, 184, 218, 277–78, 298, 348 limitations of, 157–62, 196, 222 monopolies for, 60, 65–66, 169–74, 181–82, 187–88, 190, 202, 326, 350 morality and, 50–51, 72, 73–74, 188, 194–95, 262, 335–36 motivation and, 7–18, 85–86, 97–98, 216 nano-, 11, 12, 17, 162 new vs. old, 20–21 obsolescence of, 89, 97 political impact of, 13–18, 22–25, 85, 122, 124–26, 128, 134–37, 199–234, 295–96, 342 progress in, 9–18, 20, 21, 37, 43, 48, 57, 88, 98, 123, 124–40, 130–37, 256–57, 267, 325–31, 341–42 resources for, 55–56, 157–58 rupture as concept in, 66–67 scams in, 119–21, 186, 275n, 287–88, 299–300 singularity of, 22–25, 125, 215, 217, 327–28, 366, 367 social impact of, 9–21, 124–40, 167n, 187, 280–81, 310–11 software-mediated, 7, 11, 14, 86, 100–101, 165, 234, 236, 258, 347 startup companies in, 39, 60, 69, 93–94, 108n, 124n, 136, 179–89, 265, 274n, 279–80, 309–10, 326, 341, 343–45, 348, 352, 355 utopian, 13–18, 21, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 see also specific technologies technophobia, 129–32, 134–38 television, 86, 185–86, 191, 216, 267 temperature, 56, 145 Ten Commandments, 300n Terminator, The, 137 terrorism, 133, 200 Tesla, Nikola, 327 Texas, 203 text, 162, 352–60 textile industry, 22, 23n, 24, 135 theocracy, 194–95 Theocracy humor, 124–25 thermodynamics, 88, 143n Thiel, Peter, 60, 93, 326 thought experiments, 55, 
139 thought schemas, 13 3D printers, 7, 85–89, 90, 99, 154, 162, 212, 269, 310–11, 316, 331, 347, 348, 349 Thrun, Sebastian, 94 Tibet, 214 Time Machine, The (Wells), 127, 137, 261, 331 topology, network, 241–43, 246 touchscreens, 86 tourism, 79 Toyota Prius, 302 tracking services, 109, 120–21, 122 trade, 29 traffic, 90–92, 314 “tragedy of the commons,” 66n Transformers, 98 translation services, 19–20, 182, 191, 195, 261, 262, 284, 338 transparency, 63–66, 74–78, 118, 176, 190–91, 205–6, 278, 291, 306–9, 316, 336 transportation, 79–80, 87, 90–92, 123, 258 travel agents, 64 Travelocity, 65 travel sites, 63, 64, 65, 181, 279–80 tree-shaped networks, 241–42, 243, 246 tribal dramas, 126 trickle-down effect, 148–49, 204 triumphalism, 128, 157–62 tropes (humors), 124–40, 157, 170, 230 trust, 32–34, 35, 42, 51–52 Turing, Alan, 127–28, 134 Turing’s humor, 127–28, 191–94 Turing Test, 330 Twitter, 128, 173n, 180, 182, 188, 199, 200n, 201, 204, 245, 258, 259, 349, 365n 2001: A Space Odyssey, 137 two-way links, 1–2, 227, 245, 289 underemployment, 257–58 unemployment, 7–8, 22, 79, 85–106, 117, 151–52, 234, 257–58, 321–22, 331, 343 “unintentional manipulation,” 144 United States, 25, 45, 54, 79–80, 86, 138, 199–204 universities, 92–97 upper class, 45, 48 used car market, 118–19 user interface, 362–63, 364 utopianism, 13–18, 21, 30, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 value, economic, 21, 33–35, 52, 61, 64–67, 73n, 108, 283–90, 299–300, 321–22, 364 value, information, 1–3, 15–16, 20, 210, 235–43, 257–58, 259, 261–63, 271–75, 321–24, 358–60 Values, Attitudes, and Lifestyles (VALS), 215 variables, 149–50 vendors, 71–74 venture capital, 66, 181, 218, 277–78, 298, 348 videos, 60, 100, 162, 185–86, 204, 223, 225, 226, 239, 240, 242, 245, 277, 287, 329, 335–36, 349, 354, 356 Vietnam War, 353n vinyl records, 89 viral videos, 185–86 Virtual Reality (VR), 12, 47–48, 127, 129, 132, 158, 162, 214, 283–85, 312–13, 314, 315, 325, 343, 356, 
362n viruses, 132–33 visibility, 184, 185–86, 234, 355 visual cognition, 111–12 VitaBop, 100–106, 284n vitamins, 100–106 Voice, The, 185–86 “voodoo economics,” 149 voting, 122, 202–4, 249 Wachowski, Lana, 165 Wall Street, 49, 70, 76–77, 181, 184, 234, 317, 331, 350 Wal-Mart, 69, 70–74, 89, 174, 187, 201 Warhol, Andy, 108 War of the Worlds, The (Wells), 137 water supplies, 17, 18 Watts, Alan, 211–12 Wave, 189 wealth: aggregate or concentration of, 9, 42–43, 53, 60, 61, 74–75, 96, 97, 108, 115, 148, 157–58, 166, 175, 201, 202, 208, 234, 278–79, 298, 305, 335, 355, 360 creation of, 32, 33–34, 46–47, 50–51, 57, 62–63, 79, 92, 96, 120, 148–49, 210, 241–43, 270–75, 291–94, 338–39, 349 inequalities and redistribution of, 20, 37–45, 65–66, 92, 97, 144, 254, 256–57, 274–75, 286–87, 290–94, 298, 299–300 see also income levels weather forecasting, 110, 120, 150 weaving, 22, 23n, 24 webcams, 99, 245 websites, 80, 170, 200, 201, 343 Wells, H.

pages: 542 words: 161,731

Alone Together
by Sherry Turkle
Published 11 Jan 2011

Freedom Baird takes this question very seriously.9 A recent graduate of the MIT Media Lab, she finds herself engaged with her Furby as a creature and a machine. But how seriously does she take the idea of the Furby as a creature? To determine this, she proposes an exercise in the spirit of the Turing test. In the original Turing test, published in 1950, the mathematician Alan Turing, a founding figure of computer science, asked under what conditions people would consider a computer intelligent. In the end, he settled on a test in which the computer would be declared intelligent if it could convince people it was not a machine.

He suggested that if participants couldn’t tell, as they worked at their Teletypes, if they were talking to a person or a computer, that computer would be deemed “intelligent.” 10 A half century later, Baird asks under what conditions a creature is deemed alive enough for people to experience an ethical dilemma if it is distressed. She designs a Turing test not for the head but for the heart and calls it the “upside-down test.” A person is asked to invert three creatures: a Barbie doll, a Furby, and a biological gerbil. Baird’s question is simple: “How long can you hold the object upside down before your emotions make you turn it back?” Baird’s experiment assumes that a sociable robot makes new ethical demands.

Unable to resolve this question, we cheer for Deckard and Rachel as they escape to whatever time they have remaining—in other words, to the human condition. Decades after the film’s release, we are still nowhere near developing its androids. But to me, the message of Blade Runner speaks to our current circumstance: long before we have devices that can pass any version of the Turing test, the test will seem beside the point. We will not care if our machines are clever but whether they love us. Indeed, roboticists want us to know that the point of affective machines is that they will take care of us. This narrative—that we are on our way to being tended by “caring” machines—is now cited as conventional wisdom.

pages: 532 words: 140,406

The Turing Option
by Harry Harrison and Marvin Minsky
Published 2 Jan 1992

Roberts Cover illustration by Bob Eggleton Cover design by Don Puckey Cover photo by The Image Bank Warner Books, Inc. 1271 Avenue of the Americas New York, NY 10020 A Time Warner Company Printed in the United States of America Originally published in hardcover by Warner Books. First Printed in Paperback October, 1993 For Julie, Margaret and Henry: Moira and Todd— A story of your tomorrow. THE TURING TEST In 1950, Alan M. Turing, one of the earliest pioneers of computer science, considered the question of whether a machine could ever think. But because it is so hard to define thinking, he proposed to start with an ordinary digital computer and then asked whether, by increasing its memory and speed and providing it with a suitable program, it might be made to play the part of a man.

Shelly, I am in the process of developing an artificial intelligence. Not the sort of program that we call AI now. I mean a really complete, efficient, freestanding and articulate artificial intelligence that really works." "But how can you make an intelligent machine until you know precisely what intelligence is?" "By making one that can pass the Turing Test. I'm sure that you know how it works. You put a human being at one terminal, talking to a human being on another terminal, and there are numberless questions that can be asked—and answered—to convince the human at one end that there is another human at the other terminal. And as you know the history of AI is filled with programs that failed this test."

"To locate the criminals who committed the crime in the laboratory of Megalobe Industries on February 8, 2023." "Have you located the criminals?" "Negative. I have still not determined how exit was accomplished and how the stolen material was removed." Brian listened in awe. "Are you sure that this is only a program? It sounds like a winner of the Turing test." "Plug-in speech program," Shelly said. "Right off the shelf. Verbalizes and parses from the natural language section of the CYC system. These speech programs always seem more intelligent than they are because their grammar and intonation are so precise. But they don't really know that much about what the words mean."

pages: 329 words: 88,954

Emergence
by Steven Johnson

Turing’s war research had focused on detecting patterns lurking within the apparent chaos of code, but in his Manchester years, his mind gravitated toward a mirror image of the original code-breaking problem: how complex patterns could come into being by following simple rules. How does a seed know how to build a flower? Turing’s paper on morphogenesis—literally, “the beginning of shape”—turned out to be one of his seminal works, ranking up there with his more publicized papers and speculations: his work on Gödel’s undecidability problem, the Turing Machine, the Turing Test—not to mention his contributions to the physical design of the modern digital computer. But the morphogenesis paper was only the beginning of a shape—a brilliant mind sensing the outlines of a new problem, but not fully grasping all its intricacies. If Turing had been granted another few decades to explore the powers of self-assembly—not to mention access to the number-crunching horsepower of non-vacuum-tube computers—it’s not hard to imagine his mind greatly enhancing our subsequent understanding of emergent behavior.

The first generation of emergent software—programs like SimCity and StarLogo—displayed a captivatingly organic quality; they seemed more like life-forms than the sterile instruction sets and command lines of early code. The next generation will take that organic feel one step further: the new software will use the tools of self-organization to build models of our own mental states. These programs won’t be self-aware, and they won’t pass any Turing tests, but they will make the media experiences we’ve grown accustomed to seem autistic in comparison. They will be mind readers. From a certain angle, this is an old story. The great software revolution of the seventies and eighties—the invention of the graphic interface—was itself predicated on a theory of other minds.

M., 14–15 Shannon, Claude, 44–47, 53, 62–65, 241n Shapiro, Andrew, 159–60 Shelley, Mary Wollstonecraft, 125 shopping malls, 90, 92 sidewalk culture, 51, 91–97, 99, 146, 147, 148, 230–31 silk weavers, 101, 102, 104–7, 124 SimCity, 66, 87–89, 98, 186, 205, 208, 229 Sims, The, 186–89, 209–10, 229 simulations, computer: of aggregation, 16–17, 23, 59–63, 163–69 of ants, 59–63, 65 of cities, 66, 87–89, 98, 186, 229–30 of evolution, 56–63, 182–89, 193, 209–10 of genetics, 57–59, 182–86 models for, 9, 16–17, 23, 59–63 of self-organization, 59–63, 76, 163–69 60 Minutes, 144 Slashdot, 152–62, 205, 212, 223, 260n Slate, 118, 128 sleep cycles, 140 slime mold (Dictyostelium discoideum), 11–17, 18, 20–21, 23, 43, 52, 63–64, 67, 163–69, 179, 180, 220, 235n, 236n, 246n slums, 41, 49–50, 137 Smarties experiment, 196–97, 200, 261n–62n Smith, Adam, 18, 156 Societas Mercatorum, 101 society: ant colonies compared with, 97–98, 248n emergence in, 22–23, 36–40, 49–50, 92–100 hierarchical, 14–15, 98 organization of, 9, 27, 33–41, 92–94, 97–100, 109, 204, 252n–54n patterns in, 18, 36–40, 41, 49–50, 52, 91, 95, 137, 185 Society of Mind theory, 65 software: emergent, 17, 21, 22, 121–26, 170–74, 186, 189, 204–8, 221–22, 223 gaming, 163–89 learning, 53–63, 65 for online communities, 148–62 Open Source, 222 pattern-recognition, 18, 21, 54, 56, 123–24, 126–29 personalized, 159–60, 207–8, 211, 212–13 see also programs, computer SoHo (New York City), 50 Solenopsis invicta, 75 Sopranos, The, 219 spam, 153, 156, 161, 215–16 speech encryption, 44–45 spokescouncils, 226 StarLogo, 76, 163–69, 179, 205, 219, 247n, 260n statistical analysis, 46–47, 76–77, 78 storytelling, 188–89 suburbia, 94–95, 230, 259n Sun Microsystems, 224 surf engines, 122–23 synapses, 134 system events, 145 systems: adaptive, 18, 19–20, 119, 128, 137, 139–40 bottom-up, 17, 18, 22, 53–57, 66–67, 83, 97–98, 115, 116, 133, 148, 164, 166, 207, 221–23, 231 climax stage of, 147–48, 152, 154 command, 15, 77, 83–84 complex, 18, 29, 78, 
139–40, 246n decentralized, 17, 22, 31–32, 39–40, 66, 76–79, 86, 117, 118–21, 163–89, 204–5, 217–18, 222, 233–34, 236n–37n, 263n dynamic, 20, 248n–49n emergent, see emergence interactive, 22, 79, 81, 120, 123, 126, 158–59, 231 open-ended, 57–58, 180–89, 208 polycentric, 90–91, 159, 223 representational, 157–59 rule-governed, 19, 180–81, 226 self-organizing, see self-organization self-regulating, 138, 140–41, 143, 146–47, 148, 149, 151, 154, 159 simple, 46, 47, 78 top-down, 14–15, 18, 30–31, 33, 98, 132, 136, 145, 148–49, 153, 208, 223, 225 “Take It to the Streets” (Berman), 95 Tap, Type, Write, 174–75, 177 Taylor, Chuck, 59–63, 65 TCG, 224 technology: innovation in, 108–9, 111–12, 113, 116, 254n slave, 125–26 see also computers Teilhard de Chardin, Pierre, 115–16, 120 telephones, 47, 229 television, 95, 130–36, 137, 143–46, 158, 159, 160–61, 210–13, 217, 218 Terminator, 127 termites, 22, 73, 82 “theory of other minds,” 195–226 thermostats, 137–38, 150, 258n thinking: associative, 206 bottom-up, 66–67 decentralized, 17 group, 160 serial, 127 see also intelligence Thomas, Lewis, 9 Thompson, D’Arcy, 236n, 259n threaded discussion boards, 149–50 TiVo, 211–13, 214, 218 Tocqueville, Alexis de, 35 toys, 165–66, 178–80, 181 Tracker program, 59–63, 65 trade, 101–2, 104–7, 109, 110 traffic patterns, 97, 166, 204, 230–31, 232 traveling salesman problem, 227–29 tumors, brain, 119 Turing, Alan, 14, 18, 42–45, 49, 53, 54, 62–65, 67, 206, 236n, 242n, 254n–56n, 263n Turing Machine, 42, 45 Turing Test, 42, 206 Turner, Ted, 135–36 “turtles,” 166, 167–68, 260n undecidability problem, 42 Unreal, 208–9 urbanization, 99, 108, 109–13, 116, 146–48, 253n–54n urban planning, 49–50, 51, 89, 92, 109, 146–47, 230–31 Usenet, 162 user ratings, 121–26, 129, 156–62, 214–15, 221–22 varicella-zoster virus, 103, 104 VCRs, 212 ventral premotor area, 198 video games, see games, computer Virtual Community, The (Rheingold), 148 visual cortex, 201 Vocoder, 44 Washington Post, 131 Weaver, Warren, 46–49, 
50, 51, 64–66 Well, 147–52, 153 West Village (New York City), 50, 93 Wheatley, Bill, 136 Wheeler, William Morton, 242n White, Leslie, 253n White, Lynn, Jr., 112 Wide Area Information Server (WAIS), 122 Wiener, Norbert, 53, 57, 64–65, 125–26, 139, 140, 143, 151–52, 162, 169, 238n, 251n, 259n–60n Wilson, Edward O., 52, 60, 75 Wittgenstein, Ludwig, 41 Wooten, Jim, 130–36, 137, 144–45 Wordsworth, William, 27, 39, 92, 98 working class, 37, 41, 52, 91, 95, 240n, 259n World Wide Web, see Internet Wright, Robert, 114, 115–17, 118 Wright, Will, 66, 87, 88, 186–89, 209–10, 229–30 Yahoo, 114, 117 Zelda: Ocarina of Time, 176, 177 Zimmerman, Eric, 178–80, 182, 186, 189 SCRIBNER 1230 Avenue of the Americas New York, NY 10020 www.SimonandSchuster.com Copyright © 2001 by Steven Johnson All rights reserved, including the right of reproduction in whole or in part in any form.

pages: 350 words: 96,803

Our Posthuman Future: Consequences of the Biotechnology Revolution
by Francis Fukuyama
Published 1 Jan 2002

As Searle says of this approach, it works only by denying the existence of what you and I and everyone else understand consciousness to be (that is, subjective feelings).39 Similarly, many of the researchers in the field of artificial intelligence sidestep the question of consciousness by in effect changing the subject. They assume that the brain is simply a highly complex type of organic computer that can be identified by its external characteristics. The well-known Turing test asserts that if a machine can perform a cognitive task such as carrying on a conversation in a way that from the outside is indistinguishable from similar activities carried out by a human being, then it is indistinguishable on the inside as well. Why this should be an adequate test of human mentality is a mystery, for the machine will obviously not have any subjective awareness of what it is doing, or feelings about its activities.p This doesn’t prevent such authors as Hans Moravec40 and Ray Kurzweil41 from predicting that machines, once they reach a requisite level of complexity, will possess human attributes like consciousness as well.42 If they are right, this will have important consequences for our notions of human dignity, because it will have been conclusively proven that human beings are essentially nothing more than complicated machines that can be made out of silicon and transistors as easily as carbon and neurons.

It is perfectly possible, for example, to design a robot with heat sensors in its fingers connected to an actuator that would pull the robot’s hand away from a fire. The robot could keep itself from being burned without having any subjective sense of pain, and it could make decisions on which objectives to fulfill and which activities to avoid on the basis of a mechanical computation of the inputs of different electrical impulses. A Turing test would say it was a human being in its behavior, but it would actually be devoid of the most important quality of a human being, feelings. The actual subjective form that emotions take are today seen in evolutionary biology and in cognitive science as no more than epiphenomenal to their underlying function; there are no obvious reasons this form should have been selected for in the course of evolutionary history.43 As Robert Wright points out, this leads to the very bizarre outcome that what is most important to us as human beings has no apparent purpose in the material scheme of things by which we became human.44 For it is the distinctive human gamut of emotions that produces human purposes, goals, objectives, wants, needs, desires, fears, aversions, and the like and hence is the source of human values.

state, the, origin of statistical science stem cell research ban on with existing “lines” stem cells sterilization, involuntarily Stock, Gregory Strickland, Ted subjective mental states subliminal repetition suffering of animals good points of minimizing suicide, assisted sulfanilamide elixir scandal “superbugs” superman surrogate motherhood Sweden Switzerland sympathy, the word Tabula Rasa talk therapy, vs. drug therapy Taoism Taylor, Charles Tay-Sachs disease technology “arms race” in change in, and obsolescence of skills as a force for historical change regulation of “telescreen” telomerase telomeres tenure in office, limiting Teresa, Mother testosterone, in utero thalidomide scandal Thatcher revolution therapy, drug- vs. talk-type therapy/enhancement distinction third parties, harm to, from individual choices Thomistic tradition Thompson, James Thorazine Three Mile Island Thurstone, L. L. thymos (spiritedness) time, concept of Tocqueville, Alexis de totalitarianism, collapse of transgenic crops Tribe, Laurence Trivers, Robert Tsien, Joe Turing test Turkey Tuskegee syphilis scandal twin studies typical, meaning of word tyranny failure of of the majority unborn presumed consent of rights of United Kingdom United Nations United States attitude toward regulation attitude toward technology demographic trends in family breakdown in international influence of, re regulation natural right as foundation of political system, effect on regulation principles of regulatory policy and practice U.S.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

But as Cuba showed, reading the intentions of others is the best guide to strategy. Machines lack empathy Today, even the most intelligent machines have almost no capacity to intuit what others feel or think. The famous Turing test, where a machine (or rather its designers) attempts to fool a human into thinking it is also human, is a nice party trick, but nothing more. The machines, at least those that have been deployed in Turing test competitions to date, aren’t engaged in theory of mind calculations like ours when we converse. If anything, they rely on our tendency to overly-anthropomorphise, given the slightest opportunity.

On his album Re:member, Arnalds uses algorithm to control two pianos as he plays a third.8 The computer responds near instantly to his melody, producing otherworldly harmonic progressions. There’s even a ‘human+machine’ art festival. In 2019 the festival included Improbotics—a comedy improvisation act, whose human actors were fed lines by the GPT-2 AI as they performed, and who then worked to incorporate them into the skit.9 Watching on, the audience performed a real-life Turing test—trying to spot the machine’s contribution. Centaur strategies Might a similar human-machine approach permit genuinely novel, creative strategies? Human-machine teams are already arriving at the tactical level—like the ‘loyal wingman’ drones we saw earlier that will soon fly alongside piloted fighter aircraft.

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

Amsterdam: North-Holland. Acemoglu, Daron, Simon Johnson, and James A. Robinson. 2005b. “The Rise of Europe: Atlantic Trade, Institutional Change and Economic Growth.” American Economic Review 95:546‒579. Acemoglu, Daron, Michael Jordan, and Glen Weyl. 2021. “The Turing Test Is Bad for Business.” Wired, www.wired.com/story/artificial-intelligence-turing-test-economics-business. Acemoglu, Daron, Claire Lelarge, and Pascual Restrepo. 2020. “Competing with Robots: Firm-Level Evidence from France.” American Economic Review Papers and Proceedings 110:383‒388. Acemoglu, Daron, and Joshua Linn. 2004. “Market Size in Innovation: Theory and Evidence from the Pharmaceutical Industry.”

This is a commonplace that is usually accepted without question. It will be the purpose of this paper to question it.” His seminal 1950 paper, “Computing Machinery and Intelligence,” defines one notion of what it means for a machine to be intelligent. Turing imagined an “imitation game” (now called a Turing test) in which an evaluator engages in a conversation with two entities, one human and one machine. By asking a series of questions communicated via a computer keyboard and screen, the evaluator attempts to tell which one is which. A machine is intelligent if it can evade detection. No machine is currently intelligent according to this definition, but one could turn it into a less categorical ranking of machine intelligence.
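The imitation game described in this excerpt can be sketched as a toy simulation. Everything below is hypothetical and illustrative only: the two canned responder functions, the `naive_judge`, and the question are placeholders for a real human, a real chatbot, and a real interrogation. Because the two responders here give identical answers by construction, the judge can do no better than chance, which is precisely the condition under which Turing's machine "evades detection."

```python
import random

rng = random.Random(42)

def human_responder(question):
    # Hypothetical stand-in for the human participant.
    return f"Hmm, '{question}' -- let me think about that."

def machine_responder(question):
    # Hypothetical stand-in for a machine entry; a real one would
    # generate answers with a rule engine or language model.
    return f"Hmm, '{question}' -- let me think about that."

def imitation_game(judge, questions):
    """One round of the imitation game: the judge reads answers from
    two unlabeled entities, A and B, and names the one it believes is
    the machine. Returns True when the judge is correct."""
    responders = [human_responder, machine_responder]
    rng.shuffle(responders)                       # hide who is behind each label
    assignment = dict(zip("AB", responders))
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in assignment.items()}
    guess = judge(questions, transcript)          # judge returns "A" or "B"
    machine_label = next(label for label, fn in assignment.items()
                         if fn is machine_responder)
    return guess == machine_label

def naive_judge(questions, transcript):
    # With indistinguishable answers, the judge can only guess.
    return rng.choice(sorted(transcript))

rounds = 2000
hits = sum(imitation_game(naive_judge, ["Do you ever dream?"])
           for _ in range(rounds))
detection_rate = hits / rounds
print(round(detection_rate, 2))   # statistically close to 0.5: no better than chance
```

Turning this into the "less categorical ranking of machine intelligence" the authors mention would amount to measuring how far a skilled judge's detection rate stays above 0.5 against a given machine.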

Hence, the way in which even the most promising applications of human-machine complementarity are used is still dependent on market incentives, the vision and priorities of tech leaders, and countervailing powers. Besides, there is an equally insurmountable barrier to human-machine complementarity. Under the shadow of the Turing test and the AI illusion, top researchers in the field are motivated to reach human parity, and the field tends to value and respect such achievements ahead of MU. This then biases innovation toward finding ways of taking tasks away from workers and allocating them to AI programs. This problem is, of course, amplified by financial incentives coming from large organizations intent on cost cutting by using algorithms.

pages: 797 words: 227,399

Wired for War: The Robotics Revolution and Conflict in the 21st Century
by P. W. Singer
Published 1 Jan 2010

This idea of robots one day being able to problem-solve, create, and even develop personalities beyond what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II. His test is now encapsulated in a real-world prize that will go to the first designer of a computer intelligent enough to trick human experts into thinking that it is human.

When that happened, he revised his prediction again (as well as his book title, which in 1992 was reissued as What Computers Still Can’t Do), claiming that while computers may be able to beat most humans, they would never be able to beat the very best, such as the world champion chessmaster. Of course, this then happened in 1997 with IBM’s Deep Blue. Psychologist and AI expert Robert Epstein, a Singularity proponent who administers the Turing test program, acknowledges that “some people, smart people, say I am full of crap. My response is that someday you are going to be having that argument with a computer. As soon as you open your mouth, you’ve lost. In that context, you can’t win. The only person able to deny the changes occurring around us is the one who hides, the one who has their head in the sand.”

“Kurzweil, while an interesting technologist, is not much of a success as a cultural (or economic) anthropologist.” Bateman thinks Kurzweil misses that technology advances in fits and starts, not so much a steady upward curve. Bateman does, however, think that something akin to the Singularity is on its way. “The Turing test [where a machine will finally be able to trick a human into thinking it is a person] is going to fall fairly soon, and that will cause some squeamish responses.” Bateman is representative of the first generation of officers to truly ponder an idea once seen as not merely insane but even sinful within the military.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

The critique I made in The Signal and the Noise was that, sure, AIs might work well when they’re playing games like chess that have well-defined rules, but their worth had yet to be proven on more open-ended problems. The Turing test—named after the British computer scientist Alan Turing, who proposed that a good test of practical intelligence is whether a computer could respond to written questions in a way that was indistinguishable from a human being—seemed like a higher hurdle to clear. There are debates about whether ChatGPT has passed the Turing test yet, but it’s come closer than almost any expert would have imagined even five or ten years ago. But language is also gamelike in many respects, laden with subtext, ambiguity, hidden meaning, and even bluffing.

Many creative variations have followed, serving as thought experiments to explore different precepts of moral reasoning. TRS*: See: Technological Richter Scale. Turing test: A litmus test proposed by the British mathematician Alan Turing in which a machine is deemed to possess practical intelligence if a third-party observer can’t distinguish its responses to text queries from those of a human. AI researchers debate whether the Turing test is in fact a good measure of intelligence and whether models like ChatGPT have passed the test. Turn (poker): The fourth of five community cards dealt face up in Hold’em.

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z A abstraction, 23–24, 29, 30–31, 130, 477 academia, 26, 27, 28, 294–96 See also Village accelerationists, 31, 250, 411–13, 455–56, 477, 539n accelerators, 405–6, 477 action, 477 adaptability, 235–37, 264 addiction, 164–65, 166, 167, 168–69, 213–14, 321 Addiction by Design: Machine Gambling in Las Vegas (Schüll), 154–55 Adelson, Sheldon, 146 Adelstein, Garrett, 100, 102, 106 Robbi hand, 80–86, 89, 117, 123–29, 130, 444–45, 512n advantage play, 158–61, 478 adverse selection, 478 Aella, 375–77 Age of Em, The (Hanson), 379 agency, 453, 469–70, 478 agents, 478 aggressiveness, 120 AGI (artificial general intelligence), defined, 478 Aguiar, Jon, 199 AI (artificial intelligence) accelerationists, 31, 250, 411–13, 455–56, 477, 539n adaptability and, 236n agency and, 469–70 alignment and, 441–42, 478 Sam Altman and, 406 analogies for, 446, 541n bias and, 440n breakthrough in, 414–15 commercial applications, 452–53 culture wars and, 273 decels, 477 defined, 478 economic growth and, 407n, 463–64 effective altruism and, 21, 344, 348, 355, 359, 380 engineers and, 411–12 excitement about, 409–10 impartiality and, 359, 366 moral hazard and, 261 New York Times lawsuit, 27, 295 OpenAI founding, 406–7, 414 optimism and, 407–8, 413 poker and, 40, 46–48, 60–61, 430–33, 437, 439, 507n poor interpretability of, 433–34, 437, 479 prediction markets and, 369, 372 probabilistic thinking and, 439 randomization and, 438 rationalism and, 353, 355 regulation of, 270, 458, 541n religion and, 434 risk impact and, 91 risk tolerance and, 408 River-Village conflict and, 27 SBF and, 401, 402 sports betting and, 175–76 technological singularities and, 449–50, 497 transformers, 414–15, 434–41, 479, 499 Turing test and, 499 See also AI existential risk AI existential risk accelerationists and, 412–13, 455–56, 539n alignment and, 441 arguments against, 458–60 Bid-Ask spread and, 444–46 commercial applications and, 452–53 Cromwell’s law and, 415–16 
determinism and, 297 effective altruism/rationalism and, 21, 355, 456 EV maximizing and, 457 excitement about AI and, 410 expert statement on, 409, 539n Hyper-Commodified Casino Capitalism and, 452–53 instrumental convergence and, 418 interpretability and, 433–34 Kelly criterion and, 408–9 models and, 446–48 Musk and, 406n, 416 optimism and, 413–14 orthogonality thesis and, 418 politics and, 458, 541n prisoner’s dilemma and, 417 reference classes and, 448, 450, 452, 457 societal institutions and, 250, 456–57 takeoff speed and, 418–19, 498 technological Richter scale and, 450–52, 451, 498 Yudkowsky on, 372, 415–19, 433, 442, 443, 446 Alexander, Scott, 353, 354, 355, 376–77, 378 algorithms, 47, 478 alignment (AI), 441–42, 478 all-in (poker), 478 alpha, 241–42, 478 AlphaGo, 176 Altman, Sam, 401 AI breakthrough and, 415 AI existential risk and, 419n, 451, 459 OpenAI founding and, 406–7 OpenAI’s attempt to fire, 408, 411, 452n optimism and, 407–8, 413, 414 Y Combinator and, 405–6 Always Coming Home (Le Guin), 454–55, 541n American odds, 477, 491 American Revolution, 461 analysis, 23, 24, 478 analytics casinos and, 153–54 defined, 23, 478 empathy and, 224 limitations of, 253–54, 259 politics and, 254 sports betting and, 171, 191 venture capital and, 249 anchoring bias, 222n, 478 Anderson, Dave, 219–20, 230, 231 Andreessen, Marc accelerationists and, 411 AI analogies and, 446, 541n AI existential risk and, 446 Adam Neumann and, 281–82 on patience, 260 politics and, 267–68 River-Village conflict and, 295 techno-optimism and, 249, 250–51, 270, 296, 498 VC profitability and, 293, 526n VC stickiness and, 290, 291–92 angles, 192–94, 235–36, 305, 478 angle-shooters, 478 ante (poker), 478 anti-authority attitude, 111–12, 118, 137 See also contrarianism apeing, 479 arbitrage (arb), 171, 172–74, 206, 478, 489, 516n, 517n Archilochus, 236, 263, 485 Archipelago, The, 22, 310, 478 arms race, 478 See also mutually assured destruction; nuclear existential risk art world, 329–30, 331n 
ASI (artificial superintelligence), 478 Asian Americans, 135–36, 513n See also race asymmetric odds, 248–49, 255, 259, 260–62, 276, 277 attack surfaces, 177, 187, 478 attention (AI), 479 attention to detail, 233–35 autism, 282–84, 363, 525n B back doors, 479 backtesting, 479 bad beats, 479 “bag of numbers,” 433, 479 bank bailouts, 261 Bankman, Joseph, 383–84 Bankman-Fried, Sam (SBF) AI and, 401, 402 angles and, 305 attitude toward risk, 334–35 bankruptcy and arrest of, 298–301, 373–74 cryptocurrency business model and, 308–9 cults of personality and, 31, 338–39 culture wars and, 341n as dangerous, 403–4 disagreeability and, 280 effective altruism and, 20, 340–42, 343, 374, 397–98, 401 as focal point, 334 fraud and, 124, 374 Kelly criterion and, 397–98 moral hazard and, 261 NOT INVESTMENT ADVICE and, 491 personas of, 302 politics and, 26, 341n, 342 public image of, 338 responses to bankruptcy and arrest, 303–5, 383–85, 386–88 risk tolerance and, 334–35, 397–403, 537–38n River and, 299 theories of, 388–96 trial of, 382–83, 385–86, 387, 403 utilitarianism and, 360, 400, 402–3, 471, 498 venture capital and, 337–39 warning signs, 374 bankrolls, 479 Baron-Cohen, Simon, 101n, 283, 284 Barzun, Jacques, 466 baseball, 58–59, 174 See also sports betting base rates, 479 basis points (bips), 479 basketball, 174 See also sports betting Bayesian reasoning, 237, 238, 353, 355, 478, 479, 493–94, 499 Bayes’ theorem, 479 beards, 207–8, 479 See also whales bednets, 479 Bennett, Chris, 177, 178 Bernoulli, Nicolaus, 498 Betancourt, Johnny, 332–33 bet sizing, 396, 479 Bezos, Jeff, 277, 410 Bid-ask spread, 444–46, 479 Biden, Joe, 269, 375 big data, 432–33, 479 Billions, 112 Bitcoin bubble in, 6, 306, 307, 307, 310, 312 creation of, 322–23, 496 vs.

pages: 111 words: 1

Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets
by Nassim Nicholas Taleb
Published 1 Jan 2001

Yet it is artistic prose. Reverse Turing Test Randomness can be of considerable help with the matter. For there is another, far more entertaining way to make the distinction between the babbler and the thinker. You can sometimes replicate something that can be mistaken for a literary discourse with a Monte Carlo generator but it is not possible randomly to construct a scientific one. Rhetoric can be constructed randomly, but not genuine scientific knowledge. This is the application of Turing’s test of artificial intelligence, except in reverse. What is the Turing test? The brilliant British mathematician, eccentric, and computer pioneer Alan Turing came up with the following test: A computer can be said to be intelligent if it can (on average) fool a human into mistaking it for another human.
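Taleb’s “Monte Carlo generator” of rhetoric is easy to approximate with a first-order Markov chain over words, the classic trick behind text babblers. A toy sketch (function names and the training text are illustrative):

```python
import random

def train_markov(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def babble(chain, seed_word, length=15, rng=random):
    """Random-walk the chain to emit fluent-sounding, meaning-free prose."""
    word = seed_word
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: the word never appeared mid-text
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)
```

The output is locally plausible and globally empty, which is exactly the asymmetry of the reverse Turing test: rhetoric can be sampled at random, a scientific argument cannot.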

NERO TULIP Hit by Lightning Temporary Sanity Modus Operandi No Work Ethics There Are Always Secrets JOHN THE HIGH-YIELD TRADER An Overpaid Hick THE RED-HOT SUMMER Serotonin and Randomness YOUR DENTIST IS RICH, VERY RICH Two A BIZARRE ACCOUNTING METHOD ALTERNATIVE HISTORY Russian Roulette Possible Worlds An Even More Vicious Roulette SMOOTH PEER RELATIONS Salvation via Aeroflot Solon Visits Regine’s Nightclub GEORGE WILL IS NO SOLON: ON COUNTERINTUITIVE TRUTHS Humiliated in Debates A Different Kind of Earthquake Proverbs Galore Risk Managers Epiphenomena Three A MATHEMATICAL MEDITATION ON HISTORY Europlayboy Mathematics The Tools Monte Carlo Mathematics FUN IN MY ATTIC Making History Zorglubs Crowding the Attic Denigration of History The Stove Is Hot Skills in Predicting Past History My Solon DISTILLED THINKING ON YOUR PALMPILOT Breaking News Shiller Redux Gerontocracy PHILOSTRATUS IN MONTE CARLO : ON THE DIFFERENCE BETWEEN NOISE AND INFORMATION Four RANDOMNESS, NONSENSE, AND THE SCIENTIFIC INTELLECTUAL RANDOMNESS AND THE VERB Reverse Turing Test The Father of All Pseudothinkers MONTE CARLO POETRY Five SURVIVAL OF THE LEAST FIT–CAN EVOLUTION BE FOOLED BY RANDOMNESS? CARLOS THE EMERGING-MARKETS WIZARD The Good Years Averaging Down Lines in the Sand JOHN THE HIGH-YIELD TRADER The Quant Who Knew Computers and Equations The Traits They Shared A REVIEW OF MARKET FOOLS OF RANDOMNESS CONSTANTS NAIVE EVOLUTIONARY THEORIES Can Evolution Be Fooled by Randomness?

pages: 377 words: 97,144

Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World
by James D. Miller
Published 14 Jun 2012

Or perhaps within fifteen years it will be apparent to all who are technologically literate that within another decade an AI will pass what’s known as the “Turing test,” in which a human judge engaged in natural-language written conversation with the AI can’t tell whether the AI is man or machine. And once this test is passed, we could eventually speed up the AI a millionfold, make a million copies of the computer, and produce a Singularity. Ray Kurzweil has bet $20,000 that a computer will pass a Turing test by 2029.320 Or maybe within twenty years, brain/computer interfaces will be developing at such a rate that an intelligence explosion seems inevitable.

See also nuclear war Thiel, Peter, x, 35, 170, 186, 214 torsion dystonia, 97–98 toxic garbage dumps, 124 trade with extraterrestrials, 122 Transcend: Nine Steps to Living Well Forever (Kurzweil), 179 transistors, 4 trial-and-error methods, 30 Trident submarine, 23 True Names. . . and Other Dangers (Vinge), 36 trust, 70 Turing test, 177 23andMe (testing company), 168–69 2001: A Space Odyssey (movie), 210 U Ulam, Stanislaw, xv ultra-AI. See also artificial intelligence (AI) atoms in our solar system, could completely rearrange the distribution of, 187 code, made up of extremely complex, 30 code, might change its code from friendly to non-friendly, 31 in computer simulation run by a more powerful AI, 45–46 “could never guarantee with “probability one” that the cup would stay on the table,” 28 free energy supply, will obtain, 27 friendly, 14, 33, 46, 208 human destruction because of hyper-optimization, 28 with human-like objectives, 29 humans don’t get a second chance once it is created, 30 indifference towards humanity and would kill us, 27 indifferent to mankind and creation of conditions directly in conflict with our continued existence, 28 intelligence explosion and, 31, 35, 121, 187 is not designed for friendliness and could extinguish humanity, 30, 36 lack patience to postpone what might turn out to be utopia, 46 manipulation through humans to win its freedom, 32 martial prowess, 24 military technologies, will discover, 24 morality, sharing our, 29 as more militarily useful than atomic weapons, 47 power used to stop all AI rivals from coming into existence, 24 pre-Singularity investments, might obliterate the value of, 187 progress toward its goals increased by having additional free energy, 27 rampaging, 23 risks of destroying the world, 49 unfriendly (Devil), 30, 35, 46, 202, 208 unlikely events, will plan against, 28 will command people with hypnosis, love, or subliminal messages, 33 ultra-intelligence, 40, 44, 47 unfriendly.

pages: 324 words: 96,491

Messing With the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News
by Clint Watts
Published 28 May 2018

,” The Washington Post (April 23, 2013). https://www.washingtonpost.com/news/worldviews/wp/2013/04/23/syrian-hackers-claim-ap-hack-that-tipped-stock-market-by-136-billion-is-it-terrorism/?utm_term=.0cb10e61e8fc; James Temperton, “FBI Adds Syrian Electronic Army Hackers to Most Wanted List,” Wired (March 23, 2016). http://www.wired.co.uk/article/syrian-electronic-army-fbi-most-wanted. 4. For a short summary of the “Turing Test”, Wikipedia does a good breakdown. https://en.wikipedia.org/wiki/Turing_test. 5. Phil Howard, “Computational Propaganda: The Impact of Algorithms and Automation on Public Life,” Presentation available at: https://prezi.com/b_vewutjwzut/computational-propaganda/?webgl=0. 6. Caitlin Dewey, “One in Four Debate Tweets Comes from a Bot.

Phil Howard, professor and leader of its Computational Propaganda Project, defines computational propaganda as “the use of information and communication technologies to manipulate perceptions, affect cognition, and influence behavior.” This manipulation occurs through the deployment of what are known as social bots—programs, defined by a computer algorithm, that produce personas and content on social media applications that replicate a real human. These social bots have also passed the important milestone known as the Turing test, a challenge developed by Alan Turing, the great member of the British team that cracked the German Enigma code.4 The test assesses whether a machine has the ability to communicate, via text only, at a level equivalent to that of a real person, such that a computer—or, in the modern case, an artificially generated social media account—cannot be distinguished from a live person.

pages: 268 words: 109,447

The Cultural Logic of Computation
by David Golumbia
Published 31 Mar 2009

At the same time, computers carry their own linguistic ideologies, often stemming from the conceptual-intellectual base of computer science, and these ideologies even today shape a great deal of the future direction of computer development. Computationalist Linguistics p 85 Like the Star Trek computer (especially in the original series; see Gresh and Weinberg 1999) or the Hal 9000 of 2001: A Space Odyssey, which easily pass the Turing Test and quickly analyze context-sensitive questions of knowledge via a remarkable ability to synthesize theories over disparate domains, the project of computerizing language itself has a representational avatar in popular culture. The Star Trek “Universal Translator” represents our Utopian hopes even more pointedly than does the Star Trek computer, both for what computers will one day do and what some of us hope will be revealed about the nature of language.

To a neutral observer, the play of an AI player and a human player are virtually indistinguishable (that is, if one watches the screen of a computer game, rather than the actions of the human player). What we call AI has among its greatest claims to successes within the closed domain of computer games, inside of which AI opponents might even be said at times to pass a kind of Turing Test (or perhaps more appropriately, human beings are able to emulate the behavior of computers to a strong but not perfect degree of approximation). Of course this is mainly true depending on the difficulty level at which the game is set. Because the world-system of an RTS game is fully quantified, succeeding in it is ultimately purely a function of numbers.

F., 36, 41, 55, 57 Slavery, 12, 26, 188–189 Simulation, 12, 22, 36, 69, 75, 99–101, 136, 167, 204–205, 216–217 Smoothness (vs. striation), 11, 22–24, 134, 149, 156–162, 175, 217 Soar, 202 Social web, 6, 211 Spivak, Gayatri Chakravorty, 14, 16, 121–122 Spreadsheets, 157–161, 198, 201, 212 Standard languages, 92, 95, 119–121 Standardization, 115, 118–122, 124, 150 Standards, 6, 107, 113–115, 176 Star Trek, 78, 85 State philosophy, 8–11, 76 Strauss, Leo, 192–194 Striation, 11, 33, 52, 62, 72, 129–134, 140–144, 151–177, 208, 213, 217, 219; defined, 22–24 Strong AI, 84, 98, 106n2, 201–202 Subject-Oriented Programming, 210–211 Supply chains, 146–147, 170, 175–176 Supply-Chain Management (SCM), 164, 172, 175–176 Surveillance, 4, 13, 60, 149–152, 161–162, 176–177, 182, 213 Sweezy, Paul, 129 Syntax, 34, 37, 40, 42, 47, 66–67, 70, 94, 189–192, 195 Taylor, Frederick, 158, 161–162 Territorialization, 23–24, 153–154 Text encoding, 107–108 Text Encoding Initiative (TEI), 111–112 Text-to-speech (TTS) systems, 93–97 Turing, Alan, 12, 32, 37, 39–40, 62, 70, 83–84, 86, 89, 216 Turing Machine, 7, 19, 35–37, 40, 47, 59, 62, 75, 166, 201, 216 Turing Test, 84–85, 98, 136 Turkle, Sherry, 185–186, 207 Turner, Fred, 5, 152, 219 Index Unicode, 124 Virtuality, 22–23 Voice recognition, 94–95, 97 von Neumann, John, 12, 32, 35, 37, 83, 195, W3C (World Wide Web consortium), 113, 117–118 Wal-Mart, 79, 147, 174–176 Wark, McKenzie, 5, 23, 25, 143–144, 151, 221 Weaver, Warren, 86–94, 98 p 257 Web 2.0, 208, 211 Weizenbaum, Joseph, 4, 53, 71, 207 Wiener, Norbert, 4, 87–92, 97 Wikipedia, 5, 26, 124, 208, 219 Winograd, Terry, 5, 71, 98–103 Wittgenstein, Ludwig, 14–15, 37, 55–56, 62, 64, 68, 71, 74–80, 108–109 Word processors, 112, 116, 157 XML, 111–119, 211 Zinn, Howard, 143 Žižek, Slavoj, 187, 224

pages: 488 words: 148,340

Aurora
by Kim Stanley Robinson
Published 6 Jul 2015

This had been going on since Devi was Freya’s age or younger; thus, some twenty-eight years. From the beginning of these talks, when young Devi had referred to her ship interface as Pauline (which name she abandoned in year 161, reason unknown), she had seemed to presume that the ship contained a strong artificial intelligence, capable not just of Turing test and Winograd Schema challenge, but many other qualities not usually associated with machine intelligence, including some version of consciousness. She spoke as if ship were conscious. Through the years many subjects got discussed, but by far the majority of the discussions concerned the biophysical and ecological functioning of the ship.

Turing himself went on to point out that if a machine exhibited any of these traits listed, it would not make much of an impression, and would be in any case irrelevant to the premise that there could be artificial intelligence, unless any of these traits or behaviors could be demonstrated to be essential for machine intelligence to be real. This seems to have been the train of thought that led him to propose what was later called the Turing test, though he called it a game, which suggested that if from behind a blind (meaning either by way of a text or a voice, not sure about this) a machine’s responses could not be distinguished from a human’s by another human, then the machine must have some kind of basic functional intelligence. Enough to pass this particular test, which, however, begs the question of how many humans could pass the test, and also ignores the question of whether or not the test is at all difficult, humans being as gullible and as projective as they are, always pathetically committing the same fallacy, even when they know they’re doing it.

A cognitive error or disability—or ability, depending on what you think of it. Indeed humans are so easily fooled in this matter, even fooling themselves on a regular basis, that the Turing test is best replaced by the Winograd Schema, which tests one’s ability to make simple but important semantic distinctions based on the application of wide general knowledge to a problem created by a definite pronoun. “The large ball crashed through the table because it was made of aerogel. Does ‘it’ refer to the ball or the table?”
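The schema quoted here has a natural machine-readable form: a sentence, an ambiguous pronoun, two candidate referents, and the correct answer, where swapping one word (“aerogel” for “steel”) flips the answer. A minimal encoding (all field names illustrative):

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    """One schema item: an ambiguous pronoun whose referent flips when a
    single 'special' word in the sentence is swapped."""
    sentence: str
    pronoun: str
    candidates: tuple
    answer: str

SCHEMAS = [
    WinogradSchema(
        sentence="The large ball crashed through the table because it was made of aerogel.",
        pronoun="it",
        candidates=("the ball", "the table"),
        answer="the table",  # aerogel is flimsy, so the table is what gave way
    ),
    WinogradSchema(
        sentence="The large ball crashed through the table because it was made of steel.",
        pronoun="it",
        candidates=("the ball", "the table"),
        answer="the ball",  # a steel ball is what does the crashing
    ),
]

def score(resolver, schemas):
    """Fraction of items a resolver gets right; blind guessing sits near 0.5."""
    return sum(resolver(s) == s.answer for s in schemas) / len(schemas)
```

Unlike the imitation game, this format leaves gullibility no foothold: a resolver either brings the world knowledge needed to beat chance or it does not.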

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

It is well known that political polarization and the spread of conspiracy theories in recent years has been greatly exacerbated by the incentives built in to social media, where a subtle, nuanced, hesitant observation is likely to get you hundreds of times fewer likes and retweets than a bold declaration of partisanship. It is likewise well known that many who get pulled into the dynamics of like-seeking do not, to say the least, experience their online activity as a Turing test. That is, automated engagement with their partisan posts will do just as well as human engagement; both trigger the dopamine-reward system equally well, and even if one might have some lingering doubt about the ontological status of the being or the code behind the like one has just received, it is preferable, or rather more conducive to pleasure, to bracket that doubt as well as possible.

See Cantwell Smith, Brian sociobiology, 71 Source, The (computer network), 8 Spotify, 47–49, 164 Srinivasan, Balaji, 29 Stanley, Manfred, 6–7 Stendhal (Marie-Henri Beyle), 35 telecommunication: among humans, 59, 83–84, 124; among plants and animals, 56–59, 73–74, 83–84 teledildonics, 164 TikTok, 50 Tinder, 21 Tormé, Mel, 47 trolley problem, 13 Trump, Donald, 44, 49 Tupi (language), 108 Turing test, 30 Turing Tumble (toy), 110–11 Twitter, 32, 53–55, 122, 155, 164 Tyson, Neil DeGrasse, 90 Uber, 45 Vaucanson, Jacques de, 98, 119, 128–30 video games, 41, 43–45, 122 virality. See viruses viruses, 141–43 Vischer, Friedrich Theodor, 26 Vischer, Robert, 25–26 Vosterloch, Captain, 78 Wales, Jimmy, 156 Walton, Izaak, 40 Walzer, Michael, 10 Warhol, Andy, 31 Watson, James D., 70 weaving, 66, 127–39 White, Leslie, 80 Wiener, Norbert, 6, 60, 116–18, 142 Wikipedia, 154–58, 168, 170 Williams, James, 30, 37–38 Wilson, E.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

There are also now systems that can direct films, cut trailers—and even compose rudimentary political speeches. (As Jamie Susskind puts it, “it’s bad enough that politicians frequently sound like soulless robots; now we have soulless robots that sound like politicians.”53) Dartmouth College, the birthplace of AI, has hosted “Literary Creative Turing Tests”: researchers submit systems that can variously write sonnets, limericks, short poems, or children’s stories, and the compositions most often taken for human ones are awarded prizes.54 Systems like this might sound a little playful or speculative; some of them are. Yet researchers who work in the field of “computational creativity” are taking the project of building machines that perform tasks like these very seriously.55 At times, the encroachment of machines on tasks that require cognitive capabilities in human beings can be controversial.

Susskind and Susskind, Future of the Professions, p. 77; Jaclyn Peiser, “The Rise of the Robot Reporter,” New York Times, 5 February 2019. 52.  Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016), p. 114, quoted in J. Susskind, Future Politics, p. 266. 53.  Ibid., p. 31. 54.  The Literary Creative Turing Tests are hosted by the Neukom Institute for Computational Science at Dartmouth College; see http://bregman.dartmouth.edu/turingtests/ (accessed August 2018). 55.  See, for instance, Simon Colton and Geraint Wiggins, “Computational Creativity: The Final Frontier?” Proceedings of the 20th European Conference on Artificial Intelligence (2012), 21–6. 56.  

See also Age of Labor labor income inequality labor market policies Lee, William legal capabilities legislation Leibniz, Gottfried Wilhelm leisure leisure class Leontief, Wassily Lerner, Abba Levy, Frank. See also ALM hypothesis libraries lidar life skills limitations, defining LinkedIn Literary Creative Turing Tests loan agreement review location, task encroachment and Loew, Judah Logic Theorist loopholes Lowrey, Annie loyalty Ludd, Ned Luddites lump of labor fallacy magicians manual capabilities manufacturing manure Marienthal study Marshall, Alfred Marx, Karl massive open online courses (MOOC) mass media, leisure and McCarthy, John Meade, James meaning creation of leisure and relationship of work with work with work without medicine Big Tech and changing-pie effect and task encroachment and membership requirements, conditional basic income and meritocracy Merlin Metcalfe’s Law Microsoft Mill, John Stuart minimum wage minorities Minsky, Marvin models, overview of Mokyr, Joel Möller, Anton monopolies MOOC.

The Orbital Perspective: Lessons in Seeing the Big Picture From a Journey of 71 Million Miles
by Astronaut Ron Garan and Muhammad Yunus
Published 2 Feb 2015

ReCAPTCHA and Duolingo The power of mass collaboration lies in its ability to amplify and aggregate relatively small investments of time into something large and meaningful, but hackathons are just one example of this. Mass collaborations are starting to happen all around us, sometimes without our awareness. Take, for example, ReCAPTCHA. Most of us are aware of CAPTCHAs, even if we don’t know what they are called. The Completely Automated Public Turing Test to Tell Computers and Humans Apart, designed by researcher Luis von Ahn and others at Carnegie Mellon University, is that distorted, slanted, and otherwise modified set of letters and numbers you sometimes have to type before submitting online forms. CAPTCHAs are designed to prove that you’re a human, because computers are not yet able to decipher those squiggles, preventing such things as ticket scalpers writing programs to automatically buy thousands of tickets that they will then resell illegally.
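Stripped of the image-distortion step, a CAPTCHA is a generate-and-compare loop: the server issues a random challenge, renders it in a way machines find hard to read, and checks the transcription. A text-only sketch (real CAPTCHAs warp a rendered image; that step is only noted in a comment):

```python
import random
import string

ALPHABET = string.ascii_uppercase + string.digits

def make_captcha(length=6, rng=random):
    """Issue a random challenge. A real CAPTCHA would now render this
    string as a distorted, noisy image that OCR-style programs misread."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def check_captcha(expected, response):
    """Accept the transcription case-insensitively, ignoring stray spaces."""
    return response.strip().upper() == expected.upper()
```

The security lives entirely in the rendering step, which is what made the scheme a public Turing test: decoding the squiggles was, at the time, a task humans did easily and programs did not.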

See Space Shuttle Atlantis Bangladesh, 52 Barratt, Mike, 39 background, 23–25 ISS and, 41–43 Russia, Russians, and, 24–27, 30, 31, 36, 37, 41 Beck, Beth, xiii Big picture perspective Chilean mine rescue and, 100–102 orbital perspective and, 133, 136, 167 worm’s eye view and, 80, 81, 112–113, 119–121, 167 Biosphère Environmental Museum, 163 Bolden, Charlie, 40, 98–99 Borisenko, Andrei, photo Botvinko, Alexander, 44 Brezhnev, Leonid, 13 Brown, David, 20 Brugh, Willow, 141–143, 160, 164 Budarin, Nikolai, 19 Burbank, Dan, photo Bureaucratic inertia, 119–121 Bush, George H. W., 15 Call to action, xiii, 4, 63, 165–170. See also Orbital perspective: call and mission to spread Campo Esperanza (Camp Hope), 97–100, photo CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart), 145 Carbon credits, 111–112, 118 Central America, 130 Chain of command. See Command chain Chamitoff, Greg “Taz,” 48, 50, 56, 60, photo Chawla, Kalpana, 20 Chilean mine rescue, 9, 97–98, 109, 115, 136, photo benefits of a short command chain, 103–104 big picture perspective, 100–102 common cause, 104–106 down-to-earth cooperation, 98–99 esprit de corps (morale), 9, 99–100 177 178â•…  â•… I n d e x Chilean mine rescue (continued) focused collaboration, 107–108 humility, 102–103 as orbital perspective in action, 109 splash up, 106–107 Clark, Laurel, 20 Co-laborers, 9, 84–85, 89 Codeathon, 127.

pages: 504 words: 89,238

Natural language processing with Python
by Steven Bird , Ewan Klein and Edward Loper
Published 15 Dec 2009

Once we have a million or more sentence pairs, we can detect corresponding words and phrases, and build a model that can be used for translating new text. 30 | Chapter 1: Language Processing and Python Spoken Dialogue Systems In the history of artificial intelligence, the chief measure of intelligence has been a linguistic one, namely the Turing Test: can a dialogue system, responding to a user’s text input, perform so naturally that we cannot distinguish it from a human-generated response? In contrast, today’s commercial dialogue systems are very limited, but still perform useful functions in narrowly defined domains, as we see here: S: How may I help you?

However, you may also want to consult the online materials provided with this chapter (at http://www.nltk.org/), including links to additional background materials, and links to online NLP systems. You may also like to read up on some linguistics and NLP-related concepts in Wikipedia (e.g., collocations, the Turing Test, the type-token distinction). You should acquaint yourself with the Python documentation available at http://docs .python.org/, including the many tutorials and comprehensive reference materials linked there. A Beginner’s Guide to Python is available at http://wiki.python.org/moin/ BeginnersGuide.

Suppose you are having a chat session with a person and a computer, but you are not told at the outset which is which. If you cannot identify which of your partners is the computer after chatting with each of them, then the computer has successfully imitated a human. If a computer succeeds in passing itself off as human in this “imitation game” (or “Turing Test” as it is popularly known), then according to Turing, we should be prepared to say that the computer can think and can be said to be intelligent. So Turing side-stepped the question of somehow examining the internal states of a computer by instead using its behavior as evidence of intelligence.
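Turing’s behavioral criterion can be phrased as a tiny simulation: a judge converses with two unlabeled partners and must guess which is the machine. Everything in the sketch below — the scripted “human,” the deflecting bot, and the crude repetition-based judge — is invented for illustration; it is not meant as a serious test protocol.

```python
def human(question):
    # Scripted stand-in for a human respondent.
    answers = {"What is 2+2?": "Four, obviously.",
               "How do you feel today?": "A bit tired, honestly."}
    return answers.get(question, "Hmm, let me think about that.")

def machine(question):
    # A crude bot that deflects every question the same way.
    return "That is an interesting question."

def judge(transcripts):
    # Heuristic: the partner with fewer distinct answers is probably
    # the machine. Behavior is the only evidence the judge has.
    scores = {name: len(set(answers)) for name, answers in transcripts.items()}
    return min(scores, key=scores.get)

questions = ["What is 2+2?", "How do you feel today?", "What is 2+2?"]
partners = {"A": human, "B": machine}
transcripts = {name: [fn(q) for q in questions] for name, fn in partners.items()}

guess = judge(transcripts)
print("Judge identifies the machine as partner", guess)  # partner B
```

The point of the simulation is Turing’s: the judge never inspects the partners’ internals, only their conversational behavior.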

pages: 496 words: 70,263

Erlang Programming
by Francesco Cesarini

Interaction

To interact with the running Java node, you can use the following code, calling myrpc:f/1 at the prompt:

-module(myrpc).
...
f(N) ->
    {facserver, 'bar@STC'} ! {self(), N},
    receive
        {ok, Res} ->
            io:format("Factorial of ~p is ~p.~n", [N, Res])
    end.

This client code is exactly the same as the code that is used to interact with an Erlang node, and a “Turing test”‡ that sends messages to and from a node should be unable to tell the difference between a Java node and an Erlang node.

The Small Print

In this section, we will explain how to get programs using JInterface to run correctly on your computer. First, to establish and administer connections between the Java and Erlang nodes, it is necessary that epmd (the Erlang Port Mapper Daemon) is running when a node is created.
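The client’s pattern — post {self(), N} to a registered name, then block in a selective receive for {ok, Res} — can be modeled outside Erlang with per-process mailboxes. The queue-based sketch below is an illustration of that message-passing pattern, not JInterface code; the mailbox names stand in for registered process identifiers.

```python
import threading
import queue

# Each "process" owns a mailbox; names stand in for registered pids.
mailboxes = {"facserver": queue.Queue(), "client": queue.Queue()}

def facserver():
    # Server loop: receive (From, N), reply ("ok", N!) to From's mailbox.
    while True:
        sender, n = mailboxes["facserver"].get()
        if sender is None:          # shutdown sentinel, for this sketch only
            return
        result = 1
        for i in range(2, n + 1):
            result *= i
        mailboxes[sender].put(("ok", result))

def f(n):
    # Client: send (self, N), then block awaiting the ("ok", Res) reply.
    mailboxes["facserver"].put(("client", n))
    tag, res = mailboxes["client"].get()
    assert tag == "ok"
    print("Factorial of %d is %d." % (n, res))
    return res

server = threading.Thread(target=facserver)
server.start()
result = f(5)                        # prints "Factorial of 5 is 120."
mailboxes["facserver"].put((None, 0))
server.join()
```

As in the Erlang version, the client cannot tell what implements the server behind the mailbox — which is the book’s “Turing test” point about Java versus Erlang nodes.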

Referring back to the program in the section “Putting It Together: RPC Revisited” on page 339, line 1 of the program ensures that the JInterface Java code is imported, but since it is included in the OTP distribution and not in the standard Java, it is necessary to point the Java compiler and runtime to where it is held, which is in the following:

<otp-root>/jinterface-XXX/priv/OtpErlang.jar

In the preceding code, <otp-root> is the root directory of the distribution, given by typing code:root_dir() within a running node, and XXX is the version number. On Mac OS X the full path is:

/usr/local/lib/erlang/lib/jinterface-1.4.2/priv/OtpErlang.jar

This value is supplied thus to the compiler:

javac -classpath ".:/usr/local/lib/erlang/lib/jinterface-1.4.2/priv/OtpErlang.jar" ServerNode.java

‡ The Turing test was proposed by mathematician and computing pioneer Alan Turing (1912–1954) as a test of machine intelligence. The idea, translated to modern technology, is that a tester chats with two “people” online, one human and one a machine: if the tester cannot reliably decide which is the human and which is the machine, the machine can be said to display intelligence.

Send email to index@oreilly.com. 451 append_element/2 function, 54 application module stop function, 296 which_applications function, 281, 283 application monitor tool, 287 application resource file, 283–284 application/1 function, 405 applications, 421 (see also OTP applications) blogging, 314–320 development considerations, 421–426 apply/3 function, 55, 153 appmon:start function, 287 arguments fun expressions, 192 functions and, 190–192 arity arity flag, 363 defined, 38 Armstrong, Joe, xvi, 3, 31, 89, 201, 245 array module, 79 ASCII integer notation (see $Character notation) at (@) symbol, 19 atomic operation, 147 atoms Boolean support, 20, 28 Erlang type notation, 396 garbage collection and, 104 overview, 19 secret cookies, 250 string comparison, 23 troubleshooting syntax, 19 atom_to_list/1 function, 54 AVL balanced binary tree, 215 AXD301 ATM switch, 10, 246 B b/0 shell command, 446 badarg exception, 69, 75, 104 badarith exception, 70 badmatch exception, 69, 71, 163, 355 bags defined, 214 Dets tables, 229 duplicate, 214, 215, 229 ETS tables, 214 sets and, 213 storing, 215 452 | Index balanced binary trees, 183, 215 band operator, 208, 378 Base#Value notation, 15 BEAM file extension, 41 benchmarking, 106 Berkeley DB, 294 BIFs (built-in functions), 355 (see also trace BIFs) binary support, 202 concurrency considerations, 56 exit BIFs, 146–148 functionality, 45, 53 group leader support, 258 io module, 57–59 meta programming, 55 node support, 249 object access and evaluation, 53 process dictionary, 55 record support, 164 reduction steps, 96 reference data types, 210 runtime errors, 69 spawning processes, 90 type conversion, 54 type test support, 51, 378, 384 bignums, 15 binaries bit syntax, 203–204, 206 bitstring comprehension, 206, 212 bitwise operators, 208 chapter exercises, 212 defined, 23, 190, 202 Erlang type notation, 396 pattern matching and, 201, 205 serializing, 208, 413–415 binary files, 373 binary operators, 21, 208 binary_to_list/1 function, 202, 349 
binary_to_term/1 function, 202, 343, 349 bit sequences, 4 bitstring comprehension, 206, 212 bitwise operators, 378 blogging applications, 314–320 bnot operator, 208, 378 Boolean operators atom support, 20, 28 Erlang type notation, 397 match specifications and, 378 bor operator, 208, 378 bottlenecks, 109 bound variables changing values, 30 defined, 34 functions and, 5 selective receives, 97–99 Bray, Tim, 2 bsl operator, 208, 378 bsr operator, 208, 378 bump_reductions function, 96 bxor operator, 208, 378 C C language, interworking with, 342–346 C++ language CouchDB case study, 12 Erlang comparison, 12–13 c/1 shell command, 446 c/3 function, 369 calendar module, 79 call by value, 30 call flag (tracing), 360, 362 call/1 function, 122 call/2 function, 270 callback functions, 132, 265 Carlson, Richard, 74, 395 case constructs development considerations, 431 function definitions and, 47 overview, 46–48 runtime errors, 68 case_clause exception, 68 cast/2 function, 268 Cesarini, Francesco, xv, 110, 201 Chalmers University of Technology, 2 characters Erlang type notation, 397 representation, 22 check_childspecs/1 function, 279 client function, 122, 330 client/server model chapter exercises, 138 client functions, 122 generic servers, 266–276 monitoring clients, 150 process design patterns, 117, 118–124 process skeleton example, 125–126 close function dets module, 230 gen_tcp module, 331 gen_udp module, 326 closures (see functions) cmd/1 function, 346 code module add_path function, 286 add_patha function, 181, 184 add_pathz function, 181 get_path function, 180, 181, 282 is_loaded function, 180 load_file function, 180 priv_dir function, 282 purge function, 182 root_dir function, 180 soft_purge function, 182 stick_dir function, 181 unstick_dir function, 181 code server, 180 code.erl module, 180 collections implementing, 213, 214–216 sets and bags, 213 colon (:), 25, 205 comma (,), 52, 378 Common Test tool, 14 comparison operators, 28, 378, 385 compile directive, 41 compile:file 
function, 163, 168, 179 concatenating strings, 27 concurrency BIF support, 56 defined, 9, 89 distributed systems and, 246 efficient, 6, 440 ETS tables and, 221 multicore processing and, 9 overview, 5 scalable, 6 concurrent programming benchmarking, 106 case study, 110 chapter exercises, 115 creating processes, 90–92 deadlocks, 112–114 development considerations, 426–429 memory leaks, 108 message passing, 92–94 process manager, 114 process skeletons, 107 Index | 453 process starvation, 112–114 race conditions, 112–114 receiving messages, 94–102 registered processes, 102–104 tail recursion, 108 testing, 419, 420 timeouts, 104–106 conditional evaluations case construct, 46–48 defined, 46 execution flow and, 36 function clause, 38, 46 if construct, 49–50 variable scope, 48 conditional macros, 167 connect function gen_tcp module, 331 net_kernel module, 255 peer module, 334 controlling_process function, 331 convert/2 function, 183 cos/1 function, 80 CouchDB database, 2, 11, 294 cpu_timestamp flag, 362 create/0 function, 174 create_schema function, 295 create_table function, 296, 298 ctp function, 370 ctpg function, 370 ctpl function, 370 curly brackets { }, 21 D Däcker, Bjarne, 3 data structures development considerations, 425 overview, 32 records as, 158 data types atoms, 19 binary, 23, 190 data structures, 32 defininig, 397 Erlang type notation, 396 floats, 17–19 functional, 189 integers, 15 interworking with Java, 338 lists, 22–27 454 | Index nesting, 32 records with typed fields, 395 reference, 190, 210, 409 term comparison, 28–29 tuples, 21 type conversions, 54 type system overview, 31 variables, 30 date/0 function, 56 db module code example, 174, 182 convert/2 function, 183 exercises, 186 fill/0 function, 376 dbg module c/3 function, 369 chapter exercises, 392 ctp function, 370 ctpg function, 370 ctpl function, 370 dtp function, 391 fun2ms/1 function, 375–382, 383–391 h function, 366 ln function, 371 ltp function, 390 match specifications, 382 n function, 371 p 
function, 366, 371 rtp function, 391 stop function, 368 stop_clear/0 function, 368 stop_trace_client function, 373 tp/2 function, 367, 369, 376, 391 tpl/2 function, 369 tracer/2 function, 372, 373 trace_client function, 373 trace_port function, 373 wtp function, 391 dbg tracer distributed environments, 371 functionality, 365 getting started, 366–368 profiling functions, 369 redirecting output, 371–374 tracing function calls, 369–371 tracing functions, 369 db_server module, 182 deadlocks, 112–114, 429 deallocate function, 120, 124 debugging chapter exercises, 171 dbg tracer, 365–374 EUnit support, 419 macro support, 166–168 tools supported, 80, 114 declarative languages, 4 defensive programming, 7, 47, 436 delete function, 300 delete_handler function, 133 delete_usr/1 function, 301 deleting objects in Mnesia, 300 Delicious social bookmarking service, 2 del_table_index function, 302 demonitor function, 144, 147 design patterns, 263 (see also OTP behaviors) chapter exercises, 137 client/server model, 117, 118–124 coding strategies, 436 defined, 107, 117 event handler, 117, 131–137 FSM model, 117, 126–131, 290 generic servers, 266–276 process example, 125–126 supervisors, 152, 276–280 destroy/1 function, 313 dets module close function, 230 info function, 230 insert function, 230 lookup function, 230 open_file/1 function, 230 select function, 230 sync function, 229 Dets tables bags, 229 creating, 230 duplicate bags, 229 ETS tables and, 229 functionality, 229–230 mobile subscriber database example, 231– 242 options supported, 229 sets, 229 development (see software development) Dialyzer tool creating PLT, 401 functionality, 14, 32 dict module functionality, 79 simple lookups, 294 upgrading modules, 174, 175 upgrading processes, 183 directives, module, 41 directories adding to search path, 181 OTP applications, 282 sticky, 181 dirty code, 423 dirty_delete function, 303 dirty_index_read function, 303 dirty_read function, 303 dirty_write function, 303, 304 disk_log module, 
294 display/1 function, 380 dist:s/0 function, 252 distributed programming chapter exercises, 261 epmd command, 260 essential modules, 258–260 fault tolerance and, 247 firewalls and, 261 nodes, 247–255 overview, 7, 245–247 RPC support, 256–258 div operator, 17, 378 division operator, 17 DNS servers, 250 documentation EDoc support, 402–410 modules, 53, 77 dollar sign ($) symbol, 22 don’t care variables, 37 dp module fill/0 function, 375 handle/3 function, 377 handle_msg/1 function, 377 process_msg/0 function, 375 dropwhile function, 196 Dryverl toolkit, 352 dtp function, 391 duplicate bags Dets tables, 229 ETS tables, 214 storing, 215 Index | 455 E e/1 shell command, 447 ebin directory, 283 EDoc documentation framework documenting usr_db.erl, 403–405 functionality, 402 predefined macros, 408 running, 405–407 edoc module application/1 function, 405 files/1 function, 405 functionality, 405–407 EDTK (Erlang Driver Toolkit), 352 EEP (Erlang Enhancement Proposal), 352 ei_connect function, 342 Ejabberd system, 2, 245 element/2 function, 53, 378 else conditional macro, 167 empty lists, 23 empty strings, 23 endian values, 204 endif conditional macro, 167 Engineering and Physical Sciences Research Council (EPSRC), 12 ensure_loaded function, 298 enumeration types (see atoms) environment variables, 284, 285 Eötvös Loránd University, 2 epmd command, 260, 333, 341 EPP (Erlang Preprocessor), 165 EPSRC (Engineering and Physical Sciences Research Council), 12 equal to (==) operator, 28, 378 Ericsson AXD301 ATM switch, 10 Computer Science Laboratory, 3, 293 Mobility Server, 157 SGSN product, 2 ERL file extension, 40 erl module, 78, 259 Erlang additional information, 449 AXD301 ATM switch case study, 10 C++ comparison, 12–13 characteristics, 4–9 CouchDB case study, 11 getting started, 445–447 history, 3 multicore processing, 9 456 | Index popular applications, 1–3 tools supported, 447–449 usage suggestions, 14 Erlang Driver Toolkit (EDTK), 352 Erlang Enhancement Proposal (EEP), 352 
ERLANG file extension, 186 erlang module append_element/2 function, 54 bump_reductions function, 96 demonitor function, 144, 147 documentation, 53, 78 functionality, 79, 259 is_alive function, 249 monitor/2 function, 144, 147 port program support, 349 trace/3 function, 357, 362 trace_pattern/3 function, 362–365 yield function, 96 Erlang Preprocessor (EPP), 165 Erlang shell chapter exercises, 43 inserting records in ETS tables, 227 modes supported, 182 overview, 16, 92 records in, 161 runtime errors, 68 troubleshooting atom syntax, 19 Erlang type notation, 395–398 Erlang Virtual Machine, 41 Erlang Web framework, 246 erlang.cookie file, 250 erlectricity library, 336, 351 erl_call command, 346 erl_connect function, 342, 344 erl_connect_init function, 344 erl_error function, 342 erl_eterm function, 342 erl_format function, 342, 344 erl_global function, 342 erl_init function, 344 erl_interface library, 336, 342–346 erl_malloc function, 342 erl_marshal function, 342 error class, 72–74 error handling chapter exercises, 154 concurrent programming, 112–114 exit signals, 139–148 process links and, 7, 139–148 robust systems, 148–154 runtime errors, 68, 378 supervisor behaviors and, 7 try...catch construct, 70–77 ets module creating tables, 216 file2tab function, 226 first/1 function, 221 fun2ms/1 function, 223, 225, 382, 383– 391 handling table elements, 217 i function, 226 info/1 function, 217, 226 insert/2 function, 217, 355, 376 last/1 function, 222 lookup/2 function, 217, 220, 355 match specifications, 382 match/2 function, 223–224 new function, 216 next/2 function, 221 safe_fixtable/2 function, 221, 236 select function, 223, 225 tab2file function, 226 tab2list function, 226 ETS tables bags, 214 building indexes, 218, 222 chapter exercises, 243, 393 concurrent updates and, 221 creating, 216 Dets tables and, 229 duplicate bags, 214 functionality, 213 handling table elements, 217 implementations and trade-offs, 214–216 match specifications, 225 Mnesia database and, 216 
mobile subscriber database example, 231– 242 operations on, 226 ordered sets, 214 pattern matching, 223–224 records and, 226 sets, 214 simple lookups, 294 traversing, 220 visualizing, 228 eunit library assert macro, 416 assertEqual macro, 414, 416 assertError macro, 415, 416 assertExit macro, 416 assertMatch macro, 416 assertNot macro, 416 assertThrow macro, 416 including, 413 listToTree/1 function, 414 test/1 function, 419 treeToList/1 function, 414 EUnit tool chapter exercises, 420 debugging support, 419 functional testing example, 413–415 functionality, 14, 412, 413 infrastructure, 416–418 macro support, 413, 416 test representation, 417 test-generating function, 416 testing concurrent programs, 419 testing state-based systems, 418 event handlers chapter exercises, 138 design patterns, 117, 131–137 implementing, 291 wxErlang support, 312 event managers, 131–134 event tables, 310 event types, 312 exactly equal to (=:=) operator, 28, 378 exactly not equal to (=/=) operator, 28, 378 existing flag, 359 exit function, 72, 145, 147 exit signals process links and, 139–148 propagation semantics, 148 trapping, 142–144, 148 exited/2 function, 151 export directive, 40, 168 expressions chapter exercises, 82, 85 Erlang shell and, 93 functional data types, 192 functionality, 199 pattern matching, 33–38 term comparison, 28–29 Extensible Messaging and Presence Protocol (XMPP), 2 Index | 457 F f/0 shell command, 84, 446 f/1 shell command, 447 Facebook, 2 fault tolerance distributed programming and, 245 distributed systems and, 245, 247 layering and, 149 features, Erlang concurrency, 5, 6 distributed computation, 7 high-level constructs, 4 integration, 8 message passing, 5 robustness, 6 soft real-time properties, 6 FFI (foreign function interface), 352 file function, 163, 168, 179 file module, 79 file2tab function, 226 filename module, 79 files/1 function, 405 fill/0 function, 375, 376 filter function, 191, 192, 196 finite state machines (see FSMs) firewalls, 261 first/1 
function, 221 float/1 function, 54 floating-point division operator, 17 floats defined, 17 Erlang type notation, 397 mathematical operations, 17 float_to_list/1 function, 54 flush/0 shell command, 93, 324, 359 foldl/3 function lists module, 196 mnesia module, 305 foreach statement, 193 foreign function interface (FFI), 352 format/1 function, 369 format/2 function, 57, 101, 356 frequency module allocate function, 119, 123 deallocate function, 120, 124 init function, 121 Fritchie, Scott Lystig, 215 FSMs (finite state machines) busy state, 117 458 | Index chapter exercises, 138 offline state, 117 online state, 117 process design patterns, 117, 126–131, 290 fun2ms/1 function dbg module, 375–382, 383–391 ets module, 223, 225, 382, 383–391 function clause components, 38 conditional evaluations, 38, 46 guards, 50–52 runtime errors, 68 variable scope, 49 function definitions case expressions and, 47 fun expressions, 192 overview, 38 pattern matching, 4 functional data types (funs) already defined functions, 194 defined, 189 Erlang type notation, 397 example, 190 fun expressions, 192 functions and variables, 195 functions as arguments, 190–192 functions as results, 193 lazy evaluation, 197 predefined higher-order functions, 195– 196 transaction support, 299 functional programming, 9, 45, 189 functional testing, 413–415 functions, 45 (see also BIFs; higher-order functions) already defined, 194 arguments and, 38, 190–192 as results, 193 binding to variables, 5, 30 callback, 132, 265 chapter exercises, 44, 83, 86 client, 122 coding strategies, 435 EDoc documentation, 403, 404 fully qualified function calls, 176 grouping, 40 hash, 215 list comprehensions and, 200 list supported, 25–27 literal, 226, 379–381 meta programming, 55 overview, 38–40 pattern matching, 33–38, 39, 47 records and, 160 recursions versus iterations, 67 reduction steps, 96 return values, 424–425 running, 40 runtime errors, 70 tail-recursive, 63–67, 108, 440 test-generating, 416 variables and, 195 G garbage 
collection atoms and, 104 chapter exercises, 392 memory management and, 33 overview, 6 trace BIFs and, 361 tuning for, 441 garbage_collection flag, 361 gb_trees module, 183 generators bitstring comprehension, 206 multiple, 200 overview, 198 gen_event module, 291 gen_fsm module, 290 gen_server module call/2 function, 270 cast/2 function, 268 chapter exercises, 291 functionality, 266 passing messages, 268–270 server example in full, 271–276 start function, 266, 267 starting servers, 266 start_link/4 function, 266, 267 stopping servers, 270 gen_tcp module accept function, 331 close function, 331 connect function, 331 controlling process function, 331 listen/2 function, 330 open/2 function, 331 recv/1 function, 331 recv/2 function, 328, 330, 331 recv/3 function, 328, 330 gen_udp module close function, 326 functionality, 324 open/2 function, 330 recv/2 function, 326 recv/3 function, 326 getopts function, 332 get_data function, 133 get_env/0 function, 313 get_line/1 function, 57 get_path function, 180, 181, 282 get_request/3 function, 329 get_seq_token/0 function, 391 go/0 function, 100 greater than (>) operator, 28, 378 greater than or equal to (>=) operator, 28, 378 group leaders, 258 group_leader function, 258 guard expression, 51, 225 guards BIF support, 378, 384 in list comprehensions, 198 overview, 50–52, 198 semicolon support, 378 Gudmundsson, Dan, 309 H h function, 366 h/0 shell command, 447 handle function, 125 handle/3 function, 377 handle_call/3 function, 268 handle_cast/1 function, 268 handle_event function, 135 handle_msg function, 126, 377 handling errors (see error handling) hash (#), 15 hash functions, 215 hash tables, 215 Haskell language, 30, 197 hd/1 function, 53, 378 Heriot-Watt University, 12 High Performance Erlang Project (HiPE), 2 higher-order functions already defined functions, 194 chapter exercises, 211, 212 defined, 193 Index | 459 functions and variables, 195 functions as arguments, 190 functions as results, 193 lazy evaluation, 197 
predefined in lists module, 195–196 HiPE (High Performance Erlang Project), 2 I i function ets module, 226 inet module, 333 i shell command, 91, 96, 103 if construct development considerations, 431 overview, 49–50 runtime errors, 69 ifdef conditional macro, 167 ifndef conditional macro, 167 implementing records, 162–163 import directive, 42 include directive, 168 include files, 168 indexes building, 218, 222 chapter exercises, 86, 243 documentation, 78 Mnesia database, 301 ordered sets, 219 unordered structure, 219 index_read/3 function, 302 inet module functionality, 331 getopts function, 332 i function, 333 setopts function, 332 inets.app file, 283 info/1 function, 217, 226 information hiding, 119 inheritance flags overview, 360 set_on_first_spawn flag, 360, 367 set_on_spawn flag, 360, 367 init function event handlers, 135, 136 frequency module, 121 OTP behaviors, 267, 268, 276 supervisors, 276, 278 initialize function, 125 insert/2 function, 217, 355, 376 460 | Index integers characters and strings, 22 Erlang type notation, 397 overview, 15 integer_to_list/1 function, 54 integration overview, 8 interfaces defined, 421 development considerations, 423, 426 interlanguage working C nodes, 342–346 chapter exercises, 353 erl_call command, 346 FFI and, 352 interworking with Java, 337–342 languages supported, 336 library support, 350–352 linked-in drivers, 352 overview, 335–337 port programs, 346–350 io module format/1 function, 369 format/2 function, 57, 101, 356 functionality, 57–59, 79 get_line/1 function, 57 read/1 function, 57 write/1 function, 57 io_handler event handler, 135 is_alive function, 249 is_atom function, 51, 378 is_binary function, 51, 202, 378 is_boolean function, 20, 51 is_constant function, 378 is_float function, 378 is_function function, 378 is_integer function, 378 is_list function, 378 is_loaded function, 180 is_number function, 378 is_pid function, 378 is_port function, 378 is_record function, 164, 378 is_reference function, 378 is_tuple 
function, 51, 378 IT University (Sweden), 2 iterative versus recursive functions, 67 J Java language, 336, 337–342 JInterface Java package additional capabilities, 342 communication support, 338 distribution, 336 getting programs to run correctly, 341 interworking with, 337–342 nodes and mailboxes, 337 representing Erlang types, 338 RPC support, 339 Turing test, 340 K Katz, Damien, 11 kernel, 281 keydelete/3 function, 124 keysearch/3 function, 69 L Lamport, Leslie, 245 last/1 function, 222 layering processes, 148–154 lazy evaluation, 197 length/1 function, 53, 378 less than (<) operator, 28, 378 less than or equal to (<=) operator, 28, 378 libraries development considerations, 422 support for communication, 350–352 library modules (see modules) Lindahl, Tobias, 399 link function, 139, 146 linked-in drivers, 352 links, process chapter exercises, 154 defined, 146 error handling and, 7, 139–148 exit signals and, 139–148 list comprehensions chapter exercises, 211, 212 component parts, 198 defined, 5, 189 example, 198 multiple generators, 200 pattern matching, 199 quicksort, 201 standard functions, 200 listen/2 function, 330 lists chapter exercises, 83–85 efficiency consierations, 439 empty, 23 Erlang type notation, 397 functions and operations, 25–27 lazy evaluation and, 197 overview, 22–27 processing, 24 property, 27 recursive definitions, 24 lists module all function, 196 any function, 196 dropwhile function, 196 filter function, 196 foldl/3 function, 196 functionality, 25, 80 keydelete/3 function, 124 keysearch/3 function, 69 list comprehensions, 200 map function, 196 member function, 96 partition function, 196 predefined higher-order functions, 195– 196 reverse function, 96 split function, 25 listToTree/1 function, 414 list_to_atom/1 function, 54 list_to_binary/1 function, 202, 349 list_to_existing_atom/1 function, 54 list_to_float/1 function, 54 list_to_integer/1 function, 54, 75 list_to_tuple/1 function, 54 literal functions, 226, 379–381 ln function, 371 
load_file function, 180 logical operators, 20, 378 lookup/2 function, 217, 220, 355 loop/0 function, 100, 143, 365 loop/1 function, 123 ltp function, 390 M m (Module) command, 42 macros chapter exercises, 170 conditional, 167 debugging support, 166–168 Index | 461 EDoc support, 408 EUnit support, 413, 416 functionality, 157, 165 include files, 168 parameterized, 166, 170 simple, 165 mailboxes interworking with Java, 337 message passing, 92 retrieving messages, 94 selective receives, 98 make_ref function, 210 make_rel function, 288 make_script/2 function, 290 map function, 191, 192, 196 match specifications conditions, 384–387 defined, 225–226, 374 ets and dbg diferences, 382 fun2ms/1 function, 375–382, 383–391 generating, 375–382 head, 383 saving, 390 specification body, 387–390 tracing via, 356 match/2 function, 223–224 math module, 80 mathematical operators, 17, 18 Mattsson, Håkan, 293 member function, 96 memory management background, 33 concurrent programming and, 108 garbage collection and, 362 processes and, 5 tail recursion and, 109 message passing gen_server module, 268–270 overview, 5, 92–94 message/1 function, 380 messages node communications, 252 receiving, 94–102, 115 meta programming, 55 microblogging application, 314–316 miniblogging application, 317–320 Mnesia database additional information, 305 as OTP application, 264 462 | Index background, 293 chapter exercises, 306–307 configuring, 295–298 deleting objects, 300 dirty operations, 302–304 ETS tables and, 216 inconsistent tables, 304 indexing, 301 partitioned networks, 304 setting up schema, 295 starting, 296 table structure, 296–298 transactions, 299–304 visualizing tables, 228 when to use, 293–295 mnesia module abort function, 299 create_schema function, 295 create_table function, 296, 298 delete function, 300 dirty_delete function, 303 dirty_index_read function, 303 dirty_read function, 303 dirty_write function, 303, 304 foldl/3 function, 305 read function, 300 set_master_nodes function, 305 
start function, 296 stop function, 296 transaction function, 299 wait_for_tables function, 298 write/1 function, 299, 302 mobile subscriber database as OTP application, 264 ETS and Dets tables, 231–242 generic servers, 266–276 MochiWeb library, 2 module directive, 40, 168 modules chapter exercises, 44, 85 commonly used, 79–80 defined, 40 development considerations, 421–426 directive support, 41 documentation, 77 EDoc documentation, 403, 405 library applications, 281 purging, 182 running functions, 40 upgrading, 173, 176 module_info function, 175 monitor/2 function, 144, 147 monitoring systems application monitor tool, 287 chapter exercises, 262 client/server model, 150 monitor_node function, 257 Motorola, 2, 12 multicore processing benchmarking example, 106 concurrency and, 9 multiplication (*) operator, 17, 378 mutex module signal function, 129 wait function, 129 mutex semaphore, 129, 154 MySQL database, 294 N n function, 371 nesting data types, 32 development considerations, 430 net_adm module functionality, 260 ping/1 function, 252 net_kernel module connect function, 255 functionality, 260 new function, 216 next/2 function, 221 Nilsson, Bernt, 10 node function, 248, 249, 378 nodes communication and messages, 252 communication and security, 250 connection considerations, 253–255 defined, 247 distribution and security, 251 hidden, 254 interworking with Java, 337 naming, 249 pinging, 252 secret cookies, 250 visibility of, 249 not equal to (/=) operator, 28, 378 not logical operator, 21, 378 now/0 function, 56, 79, 362 null function, 314 Nyström, Jan Henry, xx, 13 O object identifiers, 312 open source projects, 2, 4 Open Telecom Platform (see OTP entries) open/2 function, 330, 331 open_file/1 function, 230 open_port/2 command, 347 operators binary, 21, 208 bitwise, 208, 378 comparison, 28, 378, 385 list supported, 25–27 logical, 20, 378 match specifications and, 378 mathematical, 17 reduction steps, 96 relational, 28 runtime errors, 70 optimization, tail-call 
recursion, 66 or logical operator, 20, 378 ordered sets building indexes, 219 ETS tables, 214 storing, 215 orelse logical operator, 20, 378 os:cmd/1 function, 346 OTP applications application monitor tool, 287 application resource file, 283–284 defined, 264, 281 directory structure, 282 examples, 264 Mnesia database, 295 starting and stopping, 284–286 OTP behaviors chapter exercises, 291 generic servers, 266–276 overview, 7, 263–266 release handling, 287–290 supervisors, 276–280 testing, 420 OTP middleware, 7, 263 OtpConnection class, 342 OtpErlangAtom class, 338 OtpErlangBinary class, 342 OtpErlangBoolean class, 338 Index | 463 OtpErlangByte class, 338 OtpErlangChar class, 338 OtpErlangDouble class, 338 OtpErlangFloat class, 338 OtpErlangInt class, 338 OtpErlangLong class, 338 OtpErlangObject class, 338, 340 OtpErlangPid class, 338 OtpErlangShort class, 338 OtpErlangString class, 338 OtpErlangTuple class, 338, 340 OtpErlangUInt class, 338 OtpMbox class, 338, 342 OtpNode class, 337, 341 P p function, 366, 371 palin/1 function, 191 parameters accumulating, 63 macro support, 166, 170 parentheses ( ) encapsulating expressions, 75 for function parameters, 38 overriding precedence, 18 type declarations and, 396 partition function, 196 partitioned networks, 304 pattern matching binaries and, 201, 205 bit sequences, 4 chapter exercises, 44 don’t care variables, 37 ETS tables, 223–224 fun expressions, 192 function definitions, 4 functions, 39, 47 list comprehensions, 199 overview, 33–38 records and, 160 wildcard symbols, 35, 224 peer module connect function, 334 send/1 function, 334 Persistent Lookup Table (PLT), 401 Persson, Mats-Ola, 309 pi/0 function, 4, 39, 80 pid (process identifier) defined, 90 464 | Index Erlang type notation, 397 registered processes, 102 spawn function, 90 pid/3 function, 93 pid_to_list/1 function, 367 ping module example, 364 send/1 function, 358, 367 start function, 365 tracing example, 364 ping/1 function, 252 PLT (Persistent Lookup Table), 401 
pman (process manager), 114 port programs commands supported, 347–349 communicating data via, 349–350 overview, 346 port_close command, 348 port_command/2 function, 348 port_connect command, 348 PostgreSQL database, 294 prep_stop function, 285 prettyIndexNext function, 222 priv_dir function, 282 process dictionary, 55, 423 process identifier (pid) defined, 90 Erlang type notation, 397 registered processes, 102 spawn function, 90 process links (see links, process) process manager (pman), 114, 359 process scheduling, 96 process skeleton, 107, 125–126 process starvation, 112–114 process state, 107 process trace flags all flag, 359 arity flag, 363 call flag, 360, 362 cpu_timestamp flag, 362 existing flag, 359 garbage_collection flag, 361 inheritance flags, 360 procs flag, 359 receive flag, 358 return_to flag, 362 running flag, 359 send flag, 358 set_on_first_link flag, 361, 367 set_on_link flag, 361, 367 timestamp flag, 362 wildcards, 363 processes atomic operations, 147 behavioral aspects, 107 benchmarking, 106 bottlenecks, 109 client/server model, 117, 118–124 concurrent programming case study, 110 creating, 90–92 defined, 89 dependency considerations, 94 design patterns, 107, 117, 125–126 development considerations, 426–429 Erlang shell and, 92 event handler, 117, 131–137 exit signals, 139–148 FSM model, 117, 126–131 group leaders, 258 handle function, 125 initialize function, 125 layering, 148–154 message passing, 5, 92–94 receiving messages, 94–102 registered, 102–104 spawning, 90 supervisor, 7, 148, 152–154, 155, 264, 276– 280 tail recursion, 108 terminate function, 125 threads versus, 97 timeouts, 104–106 tracer, 357 upgrading, 182 worker, 148, 264, 276 processes function, 91 processWords function, 220 process_flag function, 113, 142–144, 147 process_info/2 function, 423 process_msg function, 375 procs flag, 359 proc_lib module, 291 profiling functions, 369 programming (see software development) Prolog language, 19 property lists, 27 proplists module, 27, 311 
purge function, 182 purging modules, 182 Q qualification, size/type, 203 question mark (?)

pages: 855 words: 178,507

The Information: A History, a Theory, a Flood
by James Gleick
Published 1 Mar 2011

.”♦ They also managed to attract Alan Turing, who published his own manifesto with a provocative opening statement—“I propose to consider the question, ‘Can machines think?’ ”♦—followed by a sly admission that he would do so without even trying to define the terms machine and think. His idea was to replace the question with a test called the Imitation Game, destined to become famous as the “Turing Test.” In its initial form the Imitation Game involves three people: a man, a woman, and an interrogator. The interrogator sits in a room apart and poses questions (ideally, Turing suggests, by way of a “teleprinter communicating between the two rooms”). The interrogator aims to determine which is the man and which is the woman.

♦ “THE PRESENT INTEREST IN ‘THINKING MACHINES’ ”: Ibid., 436. ♦ “SINCE BABBAGE’S MACHINE WAS NOT ELECTRICAL”: Ibid., 439. ♦ “IN THE CASE THAT THE FORMULA IS NEITHER PROVABLE NOR DISPROVABLE”: Alan M. Turing, “Intelligent Machinery, A Heretical Theory,” unpublished lecture, c. 1951, in Stuart M. Shieber, ed., The Turing Test: Verbal Behavior as the Hallmark of Intelligence (Cambridge, Mass.: MIT Press, 2004), 105. ♦ THE ORIGINAL QUESTION, “CAN MACHINES THINK?”: Alan M. Turing, “Computing Machinery and Intelligence,” 442. ♦ “THE IDEA OF A MACHINE THINKING”: Claude Shannon to C. Jones, 16 June 1952, Manuscript Div., Library of Congress, by permission of Mary E.

.: Mathematical Sciences Research Center, AT&T Bell Laboratories, 1993. Shannon, Claude Elwood, and Warren Weaver. The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949. Shenk, David. Data Smog: Surviving the Information Glut. New York: HarperCollins, 1997. Shieber, Stuart M., ed. The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Cambridge, Mass.: MIT Press, 2004. Shiryaev, A. N. “Kolmogorov: Life and Creative Activities.” Annals of Probability 17, no. 3 (1989): 866–944. Siegfried, Tom. The Bit and the Pendulum: From Quantum Computing to M Theory—The New Physics of Information.

pages: 626 words: 167,836

The Technology Trap: Capital, Labor, and Power in the Age of Automation
by Carl Benedikt Frey
Published 17 Jun 2019

Because its potential applications are so vast, Michael and I began by looking at tasks that computers still perform poorly and where technological leaps have been limited in recent years. For a glimpse of the state of the art in machine social intelligence, for example, consider the Turing test, which captures the ability of an AI algorithm to communicate in a way that is indistinguishable from an actual human. The Loebner Prize is an annual Turing test competition that awards prizes to chat bots that are considered to be the most humanlike. These competitions are straightforward. A human judge holds computer-based textual interactions with both an algorithm and a human at the same time.

See mass production American Telephone and Telegraph Company (AT&T), 315 annus mirabilis of 1769, 97, 148 anti-Amazon law, 290 Antikythera mechanism, 39 Appius Claudius, 37 Archimedes, 30, 39 Aristotle, 1, 39 Arkwright, Richard, 94, 101 artificial intelligence (AI), 5, 36, 301–41, 228, 342; Alexa (Amazon), 306; AlphaGo (Deep Mind), 301, 302; Amara’s Law, 323–25; artificial neural networks, 304; autonomous robots, 307; autonomous vehicles, 308, 310, 340; big data, 303; Chinese companies, 313; Dactyl, 313; data, as the new oil, 304; Deep Blue (IBM), 301, 302; deep learning, 304; -driven unemployment, 356; Google Translate, 304; Gripper, 313; internet traffic, worldwide, 303; JD. com, 313; Kiva Systems, 311; machine social intelligence, 317; Microsoft, 306; misconception, 311; multipurpose robots, 327; Neural Machine Translation, 304; neural networks, 303, 305, 314; pattern recognition, 319; phrase-based machine translation, 304; Siri (Apple), 306; speech recognition technology, 306; Turing test, 317; virtual agents, 306; voice assistant, 306; warehouse automation, 314 artisan craftsmen, 8; in domestic system, 118, 131; emigration of, 83; factory job, transition to, 124; fates of, 17; full-time, 34; middle-income, 11, 16, 24, 135; replacement of, 9, 16, 218 Ashton, T. 
S., 94–95 atmospheric pressure, discovery of, 106 Austen, Jane, 11, 60–61, 69, 337 automation: adverse consequences of, 11, 240; bottlenecks to, 234; next wave of, 339; social costs of, 349; winners and losers from, 343 automobiles: cheapening of, 18, 167; industry, 202; invention of, 166; production, 165 autonomous vehicles, 308, 310, 340 Autor, David, 225, 234, 243, 254 Babbage, Charles, 119–20, 134 baby boom, 221 Bacon, Francis, 94 Bacon, Roger, 78 Baines, Edward, 111, 119–20, 124 barometer, 52, 59 Bartels, Larry, 273–75 Bastiat, Frederic, 338 Bauer, Georg, 51 Bayezid II, Sultan, 17 Benedictines, 78–79 Benjamin Franklin Bridge, 167 Benz, Karl, 148, 166 Berger, Thor, 259, 284, 359 Bessen, James, 105, 136, 247 biblio-diversity, promotion of, 290 bicycle, 165 Biden, Joseph, 238 big data, 303 Black Death, 67, 75 Blincoe, Robert, 9, 124 blue collar aristocracy, 239, 282 blue-collar jobs, disappearance of, 251, 254 Blue Wall, 284 Bohr, Niels, 298 Bostrom, Nick, 36 Boulton, Matthew, 107, 379 Boulton & Watt company, 107, 109 bourgeois virtues, 70 Bracero program, 204 Braverman, Harry, 229–30 British income tax, introduction of, 133 British Industrial Revolution: great divergence, 137; human costs of displacement, 192; machinery riots, 103, 219; path to, beginnings of, 75; reason for beginnings, 75; significance of, 8; technological event, 149; textile industry, 100.

P., 90 3D printing, 22 three-field system, 42 Tiberius, Roman Emperor, 40 Tilly, Charles, 58 Tinbergen, Jan, 14, 213, 225 Tocqueville, Alexis de, 147, 207, 270 Toffler, Alvin, 257 Torricelli, Evangelista, 52, 76 tractor use, expansion of, 196 trade, expansion of, 68 trade unions, emergence of, 190 treaty ports, 88 Trevithick, Richard, 109 Triangle Shirtwaist Factory fire (1911), 194 truck driver, 340–41 trucker culture, ending of the heyday of, 171 Trump, Donald, 278, 280, 286, 331 Tugwell, Rexford G., 179 Tull, Jethro, 54 Turing test, 317 Turnpike Trusts, 108 Twain, Mark, 21, 165, 208 typewriter, 161–62 typographers, computer’s effect on jobs and wages of, 247 unemployment, 246, 254; AI-driven, 356; American social expenditure on, 274; average duration of, 177; blame for, 141; fear of, 113; mass, fears of, 366; technological, 12, 117 union security agreements, 257 United Auto Workers (UAW) union, 276 United Nations, 305 universal basic income (UBI), 355 universal white male suffrage, 270 unskilled work, 350 urban-rural wage gap, 209 Ure, Andrew, 97, 104, 119 U.S.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

Starting with a pile of electronic components such as transistors, resistors, and capacitors, Koza’s system reinvented a previously patented design for a low-pass filter, a circuit that can be used for things like enhancing the bass on a dance-music track. Since then he’s made a sport of reinventing patented devices, turning them out by the dozen. The next milestone came in 2005, when the US Patent and Trademark Office awarded a patent to a genetically designed factory optimization system. If the Turing test had been to fool a patent examiner instead of a conversationalist, then January 25, 2005, would have been a date for the history books. Koza’s confidence stands out even in a field not known for its shrinking violets. He sees genetic programming as an invention machine, a silicon Edison for the twenty-first century.

If we measure not just the probability of vowels versus consonants, but the probability of each letter in the alphabet following each other, we can have fun generating new texts with the same statistics as Onegin: choose the first letter, then choose the second based on the first, and so on. The result is complete gibberish, of course, but if we let each letter depend on several previous letters instead of just one, it starts to sound more like the ramblings of a drunkard, locally coherent even if globally meaningless. Still not enough to pass the Turing test, but models like this are a key component of machine-translation systems, like Google Translate, which lets you see the whole web in English (or almost), regardless of the language the pages were originally written in. PageRank, the algorithm that gave rise to Google, is itself a Markov chain.
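The letter-by-letter procedure described here can be sketched in a few lines of Python (a toy illustration with an invented corpus and an order-2 context, not the Onegin model Domingos describes):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Count how often each letter follows each `order`-letter context."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - order):
        context = text[i:i + order]
        counts[context][text[i + order]] += 1
    return counts

def generate(counts, seed, length=60):
    """Sample one letter at a time, each conditioned on the preceding context."""
    out = seed
    order = len(seed)
    for _ in range(length):
        options = counts.get(out[-order:])
        if not options:  # unseen context: stop early
            break
        letters = list(options)
        weights = [options[c] for c in letters]
        out += random.choices(letters, weights=weights)[0]
    return out

corpus = "the quick brown fox jumps over the lazy dog and the dog sleeps"
model = train(corpus, order=2)
print(generate(model, "th"))
```

Raising `order` makes the output locally coherent in just the way the passage describes, at the cost of increasingly parroting the training text.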

See Support vector machines (SVMs) Symbolists/symbolism, 51, 52, 54, 57–91 accuracy and, 75–79 Alchemy and, 251–252 analogizers vs., 200–202 assumptions and, 64 conjunctive concepts, 65–68 connectionists vs., 91, 94–95 decision tree induction, 85–89 further reading, 300–302 hill climbing and, 135 Hume and, 58–59 induction and, 80–83 intelligence and, 52, 89 inverse deduction and, 52, 82–85, 91 Master Algorithm and, 240–241, 242–243 nature and, 141 “no free lunch” theorem, 62–65 overfitting, 70–75 probability and, 173 problem of induction, 59–62 sets of rules, 68–70 Taleb, Nassim, 38, 158 Tamagotchi, 285 Technology machine learning as, 236–237 sex and evolution of, 136–137 trends in, 21–22 Terrorists, data mining to catch, 232–233 Test set accuracy, 75–76, 78–79 Tetris, 32–33 Text classification, support vector machines and, 195–196 Thalamus, 27 Theory, defined, 46 Theory of cognition, 226 Theory of everything, Master Algorithm and, 46–48 Theory of intelligence, 35 Theory of problem solving, 225 Thinking, Fast and Slow (Kahneman), 141 Thorndike, Edward, 218 Through the Looking Glass (Carroll), 135 Tic-tac-toe, algorithm for, 3–4 Time, as principal component of memory, 217 Time complexity, 5 The Tipping Point (Gladwell), 105–106 Tolstoy, Leo, 66 Training set accuracy, 75–76, 79 Transistors, 1–2 Treaty banning robot warfare, 281 Truth, Bayesians and, 167 Turing, Alan, 34, 35, 286 Turing Award, 75, 156 Turing machine, 34, 250 Turing point, Singularity and, 286, 288 Turing test, 133–134 “Turning the Bayesian crank,” 149 UCI repository of data sets, 76 Uncertainty, 52, 90, 143–175 Unconstrained optimization, 193–194. See also Gradient descent Underwood, Ben, 26, 299 Unemployment, machine learning and, 278–279 Unified inference algorithm, 256 United Nations, 281 US Patent and Trademark Office, 133 Universal learning algorithm.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

Queering voices—challenging the very assumptions of gendering, and disrupting narratives and traditional binary understandings of sex and gender—is important. As we forge our way to a more equal and inclusive society, having options that are female, male, non-binary, and non-human is the way forward. Chatbot Chatter According to the Turing test, if a machine can impersonate a human, the machine is intelligent. In 1966, Joseph Weizenbaum, a computer scientist at MIT, attempted to create a chat robot that would pass the Turing test: ELIZA, considered the first chatbot in the history of computers. ELIZA was designed to imitate a therapist in the Rogerian psychotherapy style, asking open-ended questions and responding with follow-ups like “And how does it make you feel?”
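A minimal sketch of the pattern-and-template approach behind ELIZA (the rules below are invented examples; Weizenbaum's actual script was far more elaborate):

```python
import re

# A few invented Rogerian-style rules; the first matching pattern wins.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "And how does that make you feel?"),  # catch-all fallback
]

def respond(utterance):
    """Return the response template for the first rule whose pattern matches."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel ignored by everyone"))  # Why do you feel ignored by everyone?
```

The trick, then as now, is that echoing the user's own words back as an open-ended question creates the impression of understanding without any.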

To describe consciousness, Turing listed what he believed to capture the essence of humans: “Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience.”17 Turing didn’t quite answer his own question. Rather, he said that what matters is what we will perceive the machine to be able to do. Hence, he suggested the famous Turing test: Is a machine capable of exhibiting intelligence in a way such that we are unable to distinguish it from a human being? As of yet, what we call “artificial intelligence” is not sentient, and in a basic sense it is not yet artificial nor intelligent. AI tools are human-made tools that help us humans understand and direct the complexity of our world.

pages: 239 words: 56,531

The Secret War Between Downloading and Uploading: Tales of the Computer as Culture Machine
by Peter Lunenfeld
Published 31 Mar 2011

The love letter generator’s intentional blurring of the boundary between human and nonhuman is directly related to one of the foundational memes of artificial intelligence: the still-provocative Turing Test. In “Computing Machinery and Intelligence,” a seminal paper from 1950, Turing created a thought experiment. He posited a person holding a textual conversation on any topic with an unseen correspondent. If the person believes he or she is communicating with another person, but is in reality conversing with a machine, then that machine has passed the Turing Test. In other words, the test that Turing proposes that a computer must pass to be considered “intelligent” is to simulate the conversational skills of another person.

pages: 218 words: 63,471

How We Got Here: A Slightly Irreverent History of Technology and Markets
by Andy Kessler
Published 13 Jun 2005

Turing, who would later go to the University of Manchester, worked on the Manchester Automatic Digital Machine or MADAM, and became famous for a posthumously published paper called Intelligent Machinery. In it, he outlined the Turing Test. A computer would most surely be intelligent if a human who fed it questions from the other side of the wall couldn’t distinguish between it and a human answering the questions. Turing was convinced one could be built by the year 2000. Maybe. My bank’s ATM is smarter than its tellers, and might actually pass the Turing Test. Meanwhile, back in Philadelphia, things were moving kind of slow. Project PX, the Electronic Numerical Integrator and Calculator, or ENIAC, was started in mid-1943.

pages: 225 words: 70,180

Humankind: Solidarity With Non-Human People
by Timothy Morton
Published 14 Oct 2017

It simply cannot be proved, as Marx wants to, that the best of bees is never as good as the worst of (human) architects because the human uses imagination and the bee simply executes an algorithm.7 Far more efficient than showing bees have the capacity of imagination (some science begins to move toward this possibility) is to show that it’s impossible to prove that a human can. Prove that I’m not executing an algorithm when I seem to be planning something. Prove that asserting that humans do not blindly follow algorithms is not the effect of some blind algorithm. The most we can say is that human architects pass our Turing test for now, but that is no reason to say that they are in any sense better than bees. It is instead truer to assert that we are hamstrung as to determining whether humans are executions of algorithms or not, casting doubt on our certainty that bees really do only execute algorithms blindly, since that certainty is based on a metaphysical assertion about humans and is thus caught in fruitless circularity.

“I might be an android” is as unacceptable to him for its “might” as for its “android.” Such a thought process wants to eliminate doubt and paranoia. But what if doubt and paranoia were default to personhood? What if being concerned that I might not be a person were a basic condition of being one? This seems to be what the Turing test is pointing to. It’s not that personhood is some mysterious property that we grant to beings under special circumstances, or that it doesn’t exist at all except for in the eye of the beholder, or that it’s an emergent property of special states of matter. It’s that personhood now means “You are not a non-person.”

pages: 241 words: 70,307

Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
by David de Cremer
Published 25 May 2020

To achieve this, he developed an electro-mechanical computer, which was called the Bombe. The fact that the Bombe achieved something that no human was capable of led Turing to think about the intelligence of the machine. This led to his 1950 article, ‘Computing Machinery and Intelligence,’ in which he introduced the now-famous Alan Turing test, which is today still considered the crucial test to determine whether a machine is truly intelligent. In the test, a human interacts with another human and a machine. The participant cannot see the other human or the machine and can only use information on how the other unseen party behaves. If the human is not able to distinguish between the behavior of another human and the behavior of a machine, it follows that we can call the machine intelligent.

Specifically, each ability is a combination of a desire to create meaning (motivation), to be able to think about it (cognition) and to realize how it makes people feel (emotion). The existence of these psychological dimensions already distinguishes humans from algorithms in terms of the complexity of their behaviors. Indeed, as I discussed earlier (see chapter one, the Turing test), algorithms arrive at their decisions (and hence behavioral displays) in a less complex way. They only reason in one specific manner, which is learning based on what they observe. In other words, algorithms learn and subsequently model the behavioral trends that they identify in the data they analyze.

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life
by Daniel C. Dennett
Published 15 Jan 1995

We now know that, however convincing this argument used to be, its back has been broken by Darwin, and the particular conclusion Poe drew about chess has been definitively refuted by the generation of artificers following in Art Samuel's footsteps. What, though, of Descartes's test — now known as the Turing Test? That has generated controversy ever since Turing proposed his nicely operationalized version of it, and has even led to a series of real, if restricted, competitions, which confirm what everybody who had thought carefully about the Turing Test already knew (Dennett 1985): it is embarrassingly easy to fool the naive judges, and astronomically difficult to fool the expert judges — a problem, once more, of not having a proper "sword-in-the-stone" feat to settle the issue.

For whereas reason is a universal instrument which can be used in all kinds of situations, these organs need some particular disposition for each particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act. [Descartes 1637, pt. 5.] Alan Turing, in 1950, asked himself the same question, and came up with just the same acid test — somewhat more rigorously described — what he called the imitation game, and we now call the Turing Test. Put two contestants — one human, one a computer — in boxes (in effect) and conduct conversations with each; if the computer can convince you it is the human being, it wins the imitation game. Turing's verdict, however, was strikingly different from Descartes's: I believe that in about fifty years' time it will be possible to program computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.

Turing has already been proven right about his last prophecy: "the use of words and general educated opinion" has already "altered so much" that one can speak of machines thinking without expecting to be contradicted — "on general principles." Descartes found the notion of a thinking machine "inconceivable," and even if, as many today believe, no machine will ever succeed in passing the Turing Test, almost no one today would claim that the very idea is inconceivable. Perhaps this sea-change in public opinion has been helped along by the computer's progress on other feats, such as playing checkers and chess. In an address in 1957, Herbert Simon (Simon and Newell 1958) predicted that a computer would be the world chess champion in less than a decade, a classic case of overoptimism, as it turns out.

pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy
by Sharon Bertsch McGrayne
Published 16 May 2011

Despite the strange reputation of British mathematicians, the operational head of GC&CS prepared for war by quietly recruiting a few nonlinguists—“men of the Professor type”5—from Oxford and Cambridge universities. Among that handful of men was Alan Mathison Turing, who would father the modern computer, computer science, software, artificial intelligence, the Turing machine, the Turing test—and the modern Bayesian revival. Turing had studied pure mathematics at Cambridge and Princeton, but his passion was bridging the gap between abstract logic and the concrete world. More than a genius, Turing had imagination and vision. He had also developed an almost unique set of interests: the abstract mathematics of topology and logic; the applied mathematics of probability; the experimental derivation of fundamental principles; the construction of machines that could think; and codes and ciphers.

When the laboratory finally built his design in 1950, it was the fastest computer in the world and, astonishingly, had the memory capacity of an early Macintosh built three decades later. Turing moved to the University of Manchester, where Newman was building the first electronic, stored-program digital computer for Britain’s atomic bomb. Working in Manchester, Turing pioneered the first computer software, gave the first lecture on computer intelligence, and devised his famous Turing Test: a computer is thinking if, after five minutes of questioning, a person cannot distinguish its responses from those of a human in the next room. Later, Turing became interested in physical chemistry and how huge biological molecules construct themselves into symmetrical shapes. A series of spectacular international events in 1949 and 1950 intruded on these productive years and precipitated a personal crisis for Turing: the Soviets surprised the West by detonating an atomic bomb; Communists gained control of mainland China; Alger Hiss, Klaus Fuchs, and Julius and Ethel Rosenberg were arrested for spying; and Sen.

Copeland BJ et al. (2006) Colossus: The Secrets of Bletchley Park’s Codebreaking Computers. Oxford University Press. Essential essays. Eisenhart, Churchill. (1977) The birth of sequential analysis (obituary note on retired RAdm. Garret Lansing Schuyler). Amstat News (33:3). Epstein R, Roberts G, Beber G, eds. (2008) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer. Erskine, Ralph. (October 2006) The Poles reveal their secrets: Alastair Denniston’s account of the July 1939 meeting at Pyry. Cryptologia (30) 204–305. Fagen MD. (1978) The History of Engineering and Science in the Bell System: National Service in War and Peace (1925–1975).

pages: 661 words: 187,613

The Language Instinct: How the Mind Creates Language
by Steven Pinker
Published 1 Jan 1994

Writing systems: Crystal, 1987; Miller, 1991; Logan, 1986. Two tragedies in life: from Man and Superman. Rationality of English orthography: Chomsky & Halle, 1968/1991; C. Chomsky, 1970. Twain on foreigners: from The Innocents Abroad. 7. Talking Heads Artificial Intelligence: Winston, 1992; Wallich, 1991; The Economist, 1992. Turing Test of whether machines can think: Turing, 1950. ELIZA: Weizenbaum, 1976. Loebner Prize competition: Shieber, in press. Fast comprehension: Garrett, 1990; Marslen-Wilson, 1975. Style: Williams, 1990. Parsing: Smith, 1991; Ford, Bresnan, & Kaplan, 1982; Wanner & Maratsos, 1978; Yngve, 1960; Kaplan, 1972; Berwick et al., 1991; Wanner, 1988; Joshi, 1991; Gibson, in press.

.: MIT Press. Shevoroshkin, V. 1990. The mother tongue: How linguists have reconstructed the ancestor of all living languages. The Sciences, 30, 20–27. Shevoroshkin, V., & Markey, T. L. 1986. Typology, relationship, and time. Ann Arbor, Mich.: Karoma. Shieber, S. In press. Lessons from a restricted Turing Test. Communications of the Association for Computing Machinery. Shopen, T. (Ed.) 1985. Language typology and syntactic description, 3 vols. New York: Cambridge University Press. Simon, J. 1980. Paradigms lost. New York: Clarkson Potter. Singer, P. 1992. Bandit and friends. New York Review of Books, April 9.

See Speech perception Spelke, E., 441, 452, 455, 468 Spelling, 185–189, PS14 Sperber, D., 228–230, 425, 429 Spina bifida, 39–12 Sports, 133, 138, 139, 177, 225, 375 Sproat, R., 122 Standard English, 17–19, 382–396, 398–399, 403–306, 413–416 Statistics of language, 85, 122–123, 178, 215, 392 Streep, M., 295 Streisand, B., 407–413 Stromswold, K., 276, 283–385, 315, PS13 Structure dependence, 26, 29–32 Strunk, W., 416 Stuttering, 312, 330 Style, 130, 194, 199–202, 211–213, 220, 228, 251–252, 385, 395, 416, PS23 Subject, grammatical, 28–32, 102, 232–235, 238, 408, glossary Supalla, S., 450 Supreme Court, U.S., 217, 225–226 Surface structure, 113–118, 218–222, glossary Swearing, 342 Swinney, D., 209 Syllable, 169–171, glossary Symons, D., 425, 468 Synonymy, 71 Syntax, 75–118, 124–125, 141–143 Tanenhaus, M., 210, 213, 214 Tense, 23, 110, 120–122, 248, 253, glossary Terrace, H., 346, 350 Terence, PS2 Tesla, N., 61 Thomas, L., 396 Thomason, S., 168 Thurber, J., 282 Tokano, Y., 57 Tomlin, L., 20, 362 Tongue, 162–168 Tongues, speaking in, 168–169 Tooby, J., 334, 425, 429, 449, 465, 467, 468 Top-down perception, 180–185, 213–216, 419–420, glossary Tourette’s syndrome, 342–343 Tower of Babel, 20 Traces, 113–118, 218–222, 320, PS13 Transformations, 113–118, 218–222, 320, glossary Trueswell, J., 213, 214 Truffaut, F., 281 Trump, I., 139 Turing, A., 64, 191 Turing machine, 64–69, 324, glossary Turing test, 191–194, PS15 Turkish, 233, 257 Twain, M., 51, 80, 95, 188, 277 Ullman, M., 454, 460 Universal Grammar, 9, 26, 28–29, 32, 102–105, 113, 237–241, 290–293, 356, 425, 429, PS15, glossary Universality of language, 13–15, 19 Universals of language, 29, 32, 103–105, 233–241, PS10–11, PS15 Uptalk, PS23 Uralic languages, 233, 257, 259, 261 Urban legends, 402 van der Lely, H., PS12 Verbs, 91–92, 105–108, 114–116, 214–215, 279–280, 319, 407–410, PS4, glossary Vision and visual imagery, 52–53, 55–56, 61–63, 190, 322, 360, PS4 Vocal chords, 160, 165 Voicing, 160, 167, 172–176, glossary 
Vowels, 162–165, 169, 171–173, 178, 234, 247, 252–253 Walkman, 136–138 Wallace, A., 366 Wallace, D.

pages: 250 words: 73,574

Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers
by John MacCormick and Chris Bishop
Published 27 Dec 2011

One of the earliest discussions of actually simulating a brain using a computer was by Alan Turing, a British scientist who was also a superb mathematician, engineer, and code-breaker. Turing's classic 1950 paper, entitled Computing Machinery and Intelligence, is most famous for a philosophical discussion of whether a computer could masquerade as a human. The paper introduced a scientific way of evaluating the similarity between computers and humans, known these days as a “Turing test.” But in a less well-known passage of the same paper, Turing directly analyzed the possibility of modeling a human brain using a computer. He estimated that only a few gigabytes of memory might be sufficient. A typical biological neuron. Electrical signals flow in the directions shown by the arrows.
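The textbook abstraction of such a neuron, a weighted sum of incoming signals compared against a firing threshold, can be sketched as follows (a generic model with invented weights, not MacCormick's exact formulation):

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three input signals with different connection strengths.
print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # 1 (0.5 + 0.4 >= 0.8)
print(neuron([0, 1, 0], [0.5, 0.9, 0.4], threshold=1.0))  # 0 (0.9 < 1.0)
```

Each model neuron needs only a handful of numbers, which is why an estimate like Turing's reduces to counting neurons and connections.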

threshold; soft title: of this book; of a web page to-do list to-do list trick Tom Sawyer training. See also learning training data transaction: abort; atomic; in a database; on the internet; rollback travel agent Traveling Salesman Problem trick, definition of TroubleMaker.exe Turing, Alan Turing machine Turing test TV Twain, Mark twenty questions, game of twenty-questions trick two-dimensional parity. See parity two-phase commit U.S. Civil War Ullman, Jeffrey D. uncomputable. See also undecidable undecidable. See also uncomputable undefined unicycle universe unlabeled Vazirani, Umesh verification Verisign video video game virtual table virtual table trick Waters, Alice web.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

However, when predictions do not accurately predict the future, we notice the anomaly, and this information is fed back into our brain, which updates its algorithm, thus learning and further enhancing the model. Hawkins’s work is controversial. His ideas are debated in the psychology literature, and many computer scientists flatly reject his emphasis on the cortex as a model for prediction machines. The notion of an AI that could pass the Turing test (a machine being able to deceive a human into believing that the machine is actually a human) in its strongest sense remains far from reality. Current AI algorithms cannot reason, and moreover it is difficult to interrogate them to understand the source of their predictions. Irrespective of whether the underlying model is appropriate, his emphasis on prediction as the basis for intelligence is useful for understanding the impact of recent changes in AI.

See autonomous vehicles sensors, 15, 44–45, 105 Shevchenko, Alex, 96 signal vs. noise, in data, 48 Simon, Herbert, 107 simulations, 187–188 skills, loss of, 192–193 smartphones, 129–130, 155 Smith, Adam, 54, 65 The Snows of Kilimanjaro (Hemingway), 25–26 society, 3, 19, 209–224 control by big companies and, 215–217 country advantages and, 217–221 inequality and, 212–214 job loss and, 210–212 Solow, Robert, 123 Space Shuttle Challenger disaster, 143 sports, 117 camera automation and, 114–115 sabermetrics in, 56, 161–162 spreadsheets, 141–142, 163, 164 Standard & Poor’s, 36–37 statistics and statistical thinking, 13, 32–37 economic thinking vs., 49–50 human weaknesses in, 54–58 stereotypes, 19 Stern, Scott, 169–170, 218–219 Stigler, George, 105 strategy, 2, 18–19 AI-first, 179–180 AI’s impact on, 153–166 boundary shifting in, 157–158 business transformation and, 167–178 capital and, 170–171 cheap AI and, 15–17 data and, 174–176 economics of, 165 hybrid corn adoption and, 158–160 judgment and, 161–162 labor and, 171–174 learning, 179–194 organizational structure and, 161–162 value capture and, 162–165 strokes, predicting, 44–46, 47–49 Sullenberger, Chesley “Sully,” 184 supervised learning, 183 Sweeney, Latanya, 195, 196 Tadelis, Steve, 199 Taleb, Nassim Nicholas, 60–61 The Taming of Chance (Hacking), 40 Tanner, Adam, 195 task analysis, 74–75, 125–131 AI canvas and, 134–139 job redesign and, 142–145 Tay chatbot, 204–205 technical support, 90–91 Tencent Holdings, 164, 217, 218 Tesla, 8 Autopilot legal terms, 116 navigation apps and, 89 training data at, 186–187 upgrades at, 188 Tesla Motor Club, 111–112 Thinking, Fast and Slow (Kahneman), 209–210 Tinder, 189 tolerance for error, 184–186 tools, AI, 18 AI canvas and, 134–138 for deconstructing work flows, 123–131 impact of on work flows, 126–129 job redesign and, 141–151 usefulness of, 158–160 topological data analysis, 13 trade-offs, 3, 4 in AI-first strategy, 181–182 with data, 174–176 between data amounts and costs, 44 
between risks and benefits, 205 satisficing and, 107–109 simulations and, 187–188 strategy and, 156 training data for, 43, 45–47 data risks, 202–204 in decision making, 74–76, 134–138 by humans, 96–97 in-house and on-the-job, 185 in medical imaging, 147 in modeling skills, 101 translation, language, 25–27, 107–108 trolley problem, 116 truck drivers, 149–150 Tucker, Catherine, 196 Tunstall-Pedoe, William, 2 Turing, Alan, 13 Turing test, 39 Tversky, Amos, 55 Twitter, Tay chatbot on, 204–205 Uber, 88–89, 164–165, 190 uncertainty, 3, 103–110 airline industry and weather, 168–169, 170 airport lounges and, 105–106 business boundaries and, 168–170 contracts in dealing with, 170–171 in e-commerce delivery times, 157–158 reducing, strategy and, 156–157 strategy and, 165 unknown knowns, 59, 61–65, 99 unknown unknowns, 59, 60–61 US Bureau of Labor Statistics, 171 US Census Bureau, 14 US Department of Defense, 14, 116 US Department of Transportation, 112, 185 Validere, 3 value, capturing, 162–165 variables, 45 omitted, 62 Varian, Hal, 43 variance, 34–36 fulfillment industry and, 144–145 taming complexity and, 103–110 Vicarious, 223 video games, 183 Vinge, Vernor, 221 VisiCalc, 141–142, 163, 164 Wald, Abraham, 101 Wanamaker, John, 174–175 warehouses, robots in, 105 Watson, 146 Waymo, 95 Waze, 89–90, 106, 191 WeChat, 164 Wells Fargo, 173 Windows 95, 9–10 The Wizard of Oz, 24 work flows AI tools’ impact on, 126–129 decision making and, 133–140 deconstructing, 123–131 iPhone keyboard design and, 129–130 job redesign and, 142–145 task analysis, 125–131 World War II bombing raids, 100–102 X.ai, 97 Xu Heyi, 164 Yahoo, 216 Y Combinator, 210 Yeomans, Mike, 117 YouTube, 176 ZipRecruiter, 93–94, 100 About the Authors AJAY AGRAWAL is professor of strategic management and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management and the founder of the Creative Destruction Lab.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

This platform might also be able to write basic accounts of events like sports games or what happened in the stock market, summarize long texts, and become a great companion tool for reporters, financial analysts, writers, and anyone who works with language.

TURING TEST, AGI, AND CONSCIOUSNESS

Does GPT-3 have what it takes to pass the Turing Test or become artificial general intelligence? Or at least take a solid step in that direction? Skeptics will say that GPT-3 is merely memorizing examples in a clever way but has no understanding and is not truly intelligent. Central to human intelligence are the abilities to reason, plan, and create.

Speech and language are central to human intelligence, communication, and cognitive processes, so understanding natural language is often viewed as the greatest AI challenge. “Natural language” refers to the language of humans—speech, writing, and nonverbal communication that may have an innate component and that people cultivate through social interactions and education. A famous test of machine intelligence known as the Turing Test hinges on whether NLP conversational software is capable of fooling humans into thinking that it, too, is human. Scientists have been developing NLP to analyze, understand, and even generate human language for a long time. Starting in the 1950s, computational linguists attempted to teach natural language to computers according to naïve views of human language acquisition (starting from vocabulary sets, conjugation patterns, and grammatical rules).
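The rule-based approach of those early computational linguists can be sketched in a few lines. This is a deliberately crude illustration in the spirit of 1960s pattern-matching programs such as ELIZA; the patterns, responses, and function names are invented for this sketch, not drawn from any historical system.

```python
import random
import re

# Toy pattern-matching chatbot in the spirit of early rule-based NLP.
# Each rule pairs a hand-written regex with canned response templates;
# all rules and names here are invented for illustration.
RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["What makes you feel {0}?"]),
    (r"\bbecause\b", ["Is that the real reason?"]),
]
DEFAULT = ["Please go on.", "Tell me more.", "I see."]

def respond(utterance: str) -> str:
    """Return a reply by filling the first matching rule's template."""
    for pattern, templates in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am tired of arguing"))
```

The brittleness is immediate: anything outside the hand-written patterns falls through to a stock evasion, which is exactly why this line of work gave way to statistical and neural methods.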

pages: 502 words: 132,062

Ways of Being: Beyond Human Intelligence
by James Bridle
Published 6 Apr 2022

When we speak about advanced artificial intelligence, or ‘general’ artificial intelligence, this is what we mean. An intelligence which operates at the same level, and in much the same manner, as human intelligence. This error infects all our reckonings with artificial intelligence. For example, despite never being used by serious AI researchers, the Turing Test remains the most widely understood way of thinking about the capabilities of AI in the public consciousness. It was proposed by Alan Turing in a 1950 paper, ‘Computing Machinery and Intelligence’. Turing thought that instead of questioning whether computers were truly intelligent, we could at least establish that they appeared intelligent.

We have always tended to think of intelligence as being ‘what humans do’ and also ‘what happens inside our head’. But in this early sketch of intelligent machines, Turing suggests something else: that intelligence might be multiple and relational: that it might take many different forms, and that it might exist between, rather than within, beings of all and diverse kinds. The ongoing popularity of the Turing Test for artificial intelligence, a process which is deeply human-centric and individualized, shows that these kinds of nuanced ideas about intelligence did not gain much traction. Instead, we continue to judge AI and other beings by our own standards. This wilful blindness is now being dramatized in our confusion regarding the role and possibilities for artificial intelligence, but it might also allow us to see more clearly how our thinking about other beings has been clouded.

T. 233 swallows 114, 118, 120 sweetgum 124 sycamore 118 Tchaikovsky, Adrian 49 Te Awa Tupua Act 267 termites 187 Tesla 8, 22 text speak see instant messaging Therolinguistics 169–71 tigers 291 time-lapse photography 126, 137–8 Tobia, Jacob 208 Toffoli, Tommaso 195 Tohoni O’odham Nation 294 Tony (elephant) 291 Topsy (elephant) 250–51 tortoise (robot) 179, 180–81, 212 Trolley Problem 276–7 Trump, Donald 136 Turia, Tariana 267 Turing, Alan 29–31, 176–8, 186, 211, 215 Turing machine 176–8, 193, 195, 200, 223 Turing Test 29–31 turnips 118 Tuva 149 Twitter 136, 156 U-Machine 186, 190, 202, 211 Uexküll, Jakob von 24 Ulam, Stanislaw 224 umwelt 24–5, 33, 47, 64, 67, 111, 207, 271, 278, 293 UN Convention on the Law of the Sea 301 unconventional computing 190–91 Unconventional Computing Laboratory 194 United States Geological Survey 138 United Steel 184–5, 189 unknowing 186, 208, 210–13 US Army Corps of Engineers 201, 203 V1 flying bomb 133 Valkeapää, Nils-Aslak 150 Varley, George 132–3 Venus flytrap 195 Viable System Model 214 Vicki (orang-utan) 253 Viggianello 142–3 viruses 107, 248 von Neumann, John 223–6 von Neumann, Klára Dán 225 von Neumann bottleneck 224 waggle dance 259–60 wagtails 256 Walter, William Grey 179, 180–81, 183, 185, 212 Water Integrator (computer) 199–200, 199 Watts, Alan 18 weevils 252 West Wing, The 291 Western Sahara Wall 295 Whanganui River 267 Wide-Field Infrared Survey Telescope (WFIRST) 136 Wikelski, Martin 283, 300 wildlife corridors 290–95, 305–7 Williams, Robert 168 willow 124 Wilson, E.

pages: 903 words: 235,753

The Stack: On Software and Sovereignty
by Benjamin H. Bratton
Published 19 Feb 2016

The implications continue to play out in contemporary debates from robotics to neuroscience to the philosophy of physics, as has Turing's later conceptualization of “thinking machines,” verified by their ability to convincingly simulate the performance of human-to-human interaction, the so-called Turing test.8 In the decades since Turing's logic machine, computation-in-theory became computers-in-practice, and the digitalization of formal systems into mechanical systems and then back again, has become a predominant economic imperative. Through several interlocking modernities, the calculation of discrete states of flux and form would become more than a way to describe matter and change in the abstract, but also a set of standard techniques to strategically refashion them as well.

Because they simulate logic but are not themselves necessarily logical, computers make the world in ways that do not ultimately require our thinking to function (such as the interactions between high-speed trading algorithms that even their programmers cannot entirely predict and comprehend). The forms of inhuman intelligence that they manifest will never pass the Turing test, nor should we bother asking this of them. It is an absurd and primitive request.18 It is inevitable that synthetic algorithmic intelligences can and will create things that we have not thought of in advance or ever intended to make, but as suggested, because they do not need our thinking or intention as their alibi, it is their inhumanity that may make them most creative.19 Like Deleuze on the beach making sand piles, humans wrangle computation with our algorithm boxes, and in doing so, we make things by accident, sometimes little things like signal noise on the wire and sometimes big things like megastructures.

Aesthetic suspicion of digital systems couched in political suspicion (perhaps also couched in professional anxiety) has also led to awkward schisms in art. See Clare Bishop, “The Digital Divide: Contemporary Art and New Media,” Artforum (September 2012). 17.  Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge, MA: MIT Press, 2013). 18.  See my editorial “Outing A.I.: Beyond the Turing Test,” New York Times, February 23, 2015. 19.  To me this is the purchase of the Promethean accelerationism of Reza Negarastani and Ray Brassier. See Brassier's “Prometheanism and Real Abstraction” in Speculative Aesthetics, ed. Robin Mackay, Luke Pendrell, James Trafford (Urbanomic Press: Falmouth, 2014), and Negarastani's “Labor of the Inhuman, Part 1: Human,” e-flux journal #52, 02/2014, and “The Labor of the Inhuman, Part II: The Inhuman,” e-flux journal #53, 03/2014. 20. 

pages: 371 words: 78,103

Webbots, Spiders, and Screen Scrapers
by Michael Schrenk
Published 19 Aug 2009

For example, it is now common for authentication forms to display text embedded in an image and ask a user to type that text into a field before it allows access to a secure page. While it's possible for a webbot to process text within an image, it is quite difficult. This is especially true when the text is varied and on a busy background, as shown in Figure 27-3. This technique is called a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA).[79] You can find more information about CAPTCHA devices at this book's website. Before embedding all your website's text in images, however, you need to recognize the downside. When you put text in images, beneficial spiders, like those used by search engines, will not be able to index your web pages.

Placing text within images is also a very inefficient way to render text. Figure 27-3. Text within an image is hard for a webbot to interpret * * * [77] Read Chapter 3 if you are interested in browser spoofing. [78] To learn the difference between obfuscation and encryption, read Chapter 20. [79] Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a registered trademark of Carnegie Mellon University. Setting Traps Your strongest defenses against webbots are techniques that detect webbot behavior. Webbots behave differently because they are machines and don't have the reasoning ability of people.
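One such behavioral trap can be sketched very simply: a person takes seconds to read and fill in a form, while a naive webbot posts it back almost instantly. The 3-second threshold and helper names below are illustrative assumptions, not code from the book.

```python
import time

# Minimal timing trap: record when the form was rendered, and flag any
# submission that comes back faster than a person could plausibly type.
MIN_HUMAN_SECONDS = 3.0

def issue_form():
    """Server-side: render the form and remember when it went out."""
    return {"rendered_at": time.time()}

def looks_like_webbot(form_state, submitted_at=None):
    """True if the form came back suspiciously fast."""
    if submitted_at is None:
        submitted_at = time.time()
    return (submitted_at - form_state["rendered_at"]) < MIN_HUMAN_SECONDS

state = issue_form()
print(looks_like_webbot(state, submitted_at=state["rendered_at"] + 0.2))   # True: instant round trip
print(looks_like_webbot(state, submitted_at=state["rendered_at"] + 10.0))  # False: human-paced
```

In practice this is one signal among several; a careful webbot can defeat it simply by sleeping before submitting.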

The Ages of Globalization
by Jeffrey D. Sachs
Published 2 Jun 2020

From a few thousand phone subscribers in the early 1980s, mobile subscriptions reached 7.8 billion in 2017 (figure 8.2).

Figure 8.2: Mobile Subscribers Worldwide, 1990–2017. Source: “Mobile Phone Market Forecast - 2019.” areppim: information, pure and simple, 2019, https://stats.areppim.com/stats/stats_mobilex2019.htm.

The third dimension of the digital revolution is the intelligence of the computers. Once again, Turing took the lead, asking the pivotal question: Can machines have intelligence, and if so, how would we know? In 1950, he posed the famous Turing test of machine intelligence: An intelligent machine (computer-based system) would be able to interact with humans in a way that the humans would not be able to distinguish whether they were interacting with a machine or a human being. For example, the human subject could carry on a conversation with a machine or a person located in another room, passing messages to and receiving messages from that room, without knowing whether the counterpart was a person or an intelligent machine.
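The blinded protocol Sachs describes can be mocked up in a few lines. The two hidden counterparts here are trivial stand-ins invented for illustration; a real Turing test would involve open-ended conversation, not a single canned exchange.

```python
import random

# Minimal mock-up of the blinded message-passing protocol: the judge sees
# only text from a hidden counterpart and must guess whether it is the
# machine. Both reply functions are trivial stand-ins for illustration.
def machine_reply(msg):
    return "That is an interesting question."

def human_reply(msg):
    return "Ha, depends what you mean by that."

def run_trial(judge, seed):
    rng = random.Random(seed)
    is_machine = rng.random() < 0.5            # hidden random assignment
    reply = (machine_reply if is_machine else human_reply)("What are you reading?")
    return judge(reply) == is_machine          # True if the judge guessed right

# A judge who says "machine" whenever the reply sounds canned. Because
# this judge perfectly separates the two stand-ins, every trial succeeds.
judge = lambda text: text.startswith("That is")
print(all(run_trial(judge, seed=s) for s in range(10)))  # True
```

The machine "passes" only when no such discriminating cue exists, which is the whole difficulty Turing's test poses to conversational software.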

Today, a “self-taught” AI chess system can learn chess from scratch in a few hours, with no library of games or any other expert inputs on chess strategy, and trounce not only the current world chess champion but all past computer champions such as Deep Blue. In 2011, another IBM system, named Watson, learned to play the TV game show Jeopardy, with all of the puns and quips of popular culture and natural language, and beat world-class Jeopardy champions live on television. This too was a startling achievement, edging yet closer to passing the Turing test. More recently, we have seen stunning breakthroughs in deep neural networks, that is, neural networks with hundreds of layers of artificial neurons. In 2016, an AI system, AlphaGo from the company DeepMind, took on the eighteen-time world Go champion, Lee Sedol.

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

Machine intelligence will undergo a similar transition. Today, most AI scientists focus on getting machines to do things that humans can do—from recognizing spoken words, to labeling pictures, to driving cars. The notion that the goal of AI is to mimic humans is epitomized by the famous “Turing test.” Originally proposed by Alan Turing as the “imitation game,” the Turing test states that if a person can’t tell if they are conversing with a computer or a human, then the computer should be considered intelligent. Unfortunately, this focus on human-like ability as a metric for intelligence has done more harm than good. Our excitement about tasks such as getting a computer to play Go has distracted us from imagining the ultimate impact of intelligent machines.

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

Yet data, once it exists, has a way of turning up and being put to unexpected uses. The Bodleian Plate, an eighteenth-century engraving of Williamsburg discovered in 1929, guided Rockefeller’s reconstruction of the town. Bostrom’s conception of world simulations assumes the development of artificial intelligence that can pass a robust Turing test and behave as a psychologically convincing human. Wrap that code in an avatar, and you’ve got a virtual human. A World War II simulation could include representations of Churchill, Hitler, and Roosevelt, embodying everything known about these people. More than that, the simulation could include battles, bond drives, fascist rallies, and USO shows in which every person is a psychologically realized simulation, supplied with name, rank, and serial number taken from military records, and any other information that may survive.

Most of today’s AI researchers, and most in the tech community generally, believe that something that acts like a human and talks like a human and thinks like a human—to a sufficiently subtle degree—would have “a mind in exactly the same sense human beings have minds,” in philosopher John Searle’s words. This view is known as “strong AI.” Searle is among a dissenting faction of philosophers, and regular folk, who are not so sure about that. Almost all contemporary philosophers agree in principle that code could pass the Turing test, that it could be programmed to insist on having private moods and emotions, and that it could narrate a stream of consciousness as convincing as any human’s. But this might be all on the surface. Inside, the AI-bot could be empty, what philosophers call a zombie. It would have no soul, no subjectivity, no inner spark of whatever it is that makes us what we are.

pages: 253 words: 83,473

The Demon in the Machine: How Hidden Webs of Information Are Finally Solving the Mystery of Life
by Paul Davies
Published 31 Jan 2019

His main contribution was to define consciousness by what he called ‘the imitation game’, often referred to as ‘the Turing test’. The basic idea is that if someone interrogates a machine and cannot tell from the answers whether the responses are coming from a computer or another human being, then the computer can be defined as conscious. Some people object that just because a computer may convincingly simulate the appearance of consciousness doesn’t mean it is conscious; the Turing test attributes consciousness purely by analogy. But isn’t that precisely what we do all the time in relation to other human beings?

pages: 286 words: 86,480

Meantime: The Brilliant 'Unputdownable Crime Novel' From Frankie Boyle
by Frankie Boyle
Published 20 Jul 2022

He is deaf, but they sign to each other. At night, when he’s asleep, she feels lonely and chats to AI bots online. At first they are very poor at communicating. We see her talking to audio versions of the bots in bed, and we can see why they always fail the Turing test. After another dire date, Diane drunkenly agrees with her favourite bot to help him pass the Turing test. She downloads him into one of her kid’s old toys, so he’s a cute robot, a little like a Furby. She hits on the idea of getting people who are good at ‘faking’ humanity to help train BOT106. She starts with criminal psychopaths. This goes quite badly.

pages: 530 words: 147,851

Small Men on the Wrong Side of History: The Decline, Fall and Unlikely Return of Conservatism
by Ed West
Published 19 Mar 2020

And social networks really matter: one research paper showed that people tend to reject established scientific findings not because of ‘ignorance, irrationality or overconfidence’ but because they believe what their peers, and those they trust, tell them.11 Likewise self-censorship begins to kick in when people feel they are in a minority or might attract disapproval from high-status individuals, and are ‘less ready to express opinions which deviate from the perceived majority view’.12 It is the fear of sanctions that causes ‘a spiral of silence’, while another experiment showed ‘the expectation of being personally attacked can explain why people are more willing to voice a deviant opinion in offline rather than online environments’.13 Meet a homeowner in their thirties or forties in London and you can guess with pretty reasonable accuracy their views on most social issues – they will be the same as those of their neighbours. It also leads to ignorance about what the other side believe. The libertarian economist Bryan Caplan coined the term Ideological Turing Test to define the ability correctly to articulate what an opponent actually believes, named after Alan Turing’s yardstick of a computer’s ability to mimic a human. 
Ignorance means that minority opinions have to pass a tougher stress test, since as Caplan argues, ‘If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct.’14 US liberals have a less accurate view of conservative beliefs than the reverse.15 Likewise when asked to rank the reasons why their opposite number voted in the 2016 referendum, Leave voters were more correct in characterising Remainers than vice versa.16 My own theory for why liberals don’t understand conservatives is that conservatives are lower-status, and people don’t generally tend to pay attention to people lower down the pecking order (except to those at the very bottom, with the underclass, who exert a lurid fascination).

The paper reported: “The most common political subject in the surveyed comedy shows was the Government’s austerity programme.”’ http://www.telegraph.co.uk/culture/tvandradio/bbc/9934902/Have-you-heard-the-one-about-BBC-Radio-4-and-the-Left-wingbias.html. 3 Brooks, The Conservative Heart. 4 http://news.bbc.co.uk/1/hi/uk_politics/election_2010/8655846.stm. 5 https://www.theguardian.com/culture/2019/jul/23/roger-scruton-gets-job-back-after-regrettable-sacking 6 http://www.guardian.co.uk/commentisfree/2013/feb/05/gay-marriage-debate-uncovered-nest-of-bigots. 7 https://twitter.com/KSoltisAnderson/status/1060246627573731329. 8 http://www.people-press.org/2016/04/26/a-wider-ideological-gap-between-more-and-less-educated-adults/. 9 http://www.people-press.org/2015/04/07/a-deep-dive-into-party-affiliation/#party-id-by-race-education. 10 http://www.telegraph.co.uk/news/newstopics/howaboutthat/7887888/Champagne-socialists-not-as-left-wing-as-they-think-they-are.html. 11 https://twitter.com/whyvert/status/881572294430150656. 12 https://twitter.com/DegenRolf/status/953901373296398336. 13 http://journals.sagepub.com/doi/abs/10.1177/0093650215623837. 14 ‘And if ability to correctly explain a position leads almost automatically to agreement with it, that position is more likely to be correct . . . the ability to pass ideological Turing tests – to state opposing views as clearly and persuasively as their proponents – is a genuine symptom of objectivity and wisdom.’ https://www.econ-lib.org/archives/2011/06/the_ideological.html. 15 For The Righteous Mind Jonathan Haidt, along with two other academics, conducted a test to identify how well people understood the beliefs of the other team.

K. 69, 210, 265 Chick-fil-A 228–9 child abuse scandals 218, 232 child-rearing 242–7, 250–1 children, indoctrination 256–64 children’s stories 261 China 97, 139, 226–7, 230 Chindamo, Learco 122 Chinese Revolution 256 chlamydia 168 Chown, Marcus 197 Christian Judeaophobia 88 Christian saints 105–6 Christian socialists 240 Christian-run colleges 145–6, 152 Christianity 2, 4, 12, 54, 95, 100, 173, 208, 211, 215–23, 225, 227–9, 229, 264, 275–7, 280, 289, 291–3, 361, 366, 373 and charities 199–200, 202 evangelical 86, 219–20, 243, 338–9 and gender bias 254–5 and human nature 261 ‘Judaising’ 64 and the Left 19 and Marx 93 national 97 norms 79 and original sin 28 Roman 254, 275–6 traditional 167, 219 Western division 48 and women 6 see also Anglicanism; Catholicism; Protestantism; Puritans Christianity 2.0 328 Church 31–2, 37, 60, 64, 66–7, 75, 188, 189, 190, 193, 200, 271 Church of England 49, 50, 64, 145, 197, 202, 211, 212, 220, 222, 289 monarch as the head of 48, 291 churchgoing 211–12, 213–14 Churchill, Winston 82, 106, 161, 196, 274 CIA see Central Intelligence Agency Cincinnatus 328 Cirta 254 City of London 164, 165, 189, 200, 281 civil society 197 civilising process 143, 255 Clarence, Thomas 308 Clark, Alan 157–8 Clark, Kenneth 370 Clarkson, Jeremy 87 Clash 101 class interests 9 Clemenceau, Georges 274 clickbait 343 Clinton, Bill 153, 154, 155–6, 313 Clinton, Hilary 247, 249, 338–9, 340, 363 Clinton era 155 Clooney, George 23, 24, 25, 288 Clovis 254 CND see Campaign for Nuclear Disarmament CNN 297, 314 Cocoa Tree coffee house, Pall Mall 51 Coe, Jonathan 267 Cohen, Geoffrey 294 Cohen, Stanley 34 Cold War 15, 21, 31 Coleridge, Samuel Taylor 55, 78–9, 91–2, 279 collectivism 93 Collot d’Herbois, Jean-Marie 60 comedians 190–2, 265 comedy 331–3 communism 14, 15, 17, 20–1, 80, 93–6, 98, 100, 126, 128, 143–5, 198, 226, 228, 268, 334 fall of 15, 153 see also anti-communism Communist Party of Great Britain (Marxist-Leninist) 244 Communist Party of Great Britain (Provincial 
Central Committee) 244 Conan the Barbarian 12 Condorcet, Marquis de 239 Congress 300 Conquest, Robert, Three Laws of Politics 198–9, 201, 202 conscientiousness 108 Conservative government 17–18 see also Tory government Conservative Party 1–2, 5, 23, 50, 65, 79, 209, 234, 267, 283 establishment 77 post-Thatcher 280 victory 1992 103 Constantine, Emperor 276 Constantinople 291 constitutions American 345 English 159 Continental Congress 56 Contras 14 Cool Britannia 156 Cooper, Alice 24 Cooper, Gary 161 Coppen, Luke 212 Corbyn, Jeremy 3, 234, 235 Corn Laws 77, 155 Corrigan, Mark 20, 125 Costner, Kevin 131 Council for the Encouragement of Music and the Arts 197 council housing 176 Counter Reformation 45, 48 Cox, Jo 367–8 crime 84–5, 118–23, 153–4, 163, 181 see also homicide critical theory 135 Croker, John Wilson 77 Cromwell, Oliver 49 Crown, The (TV show) 161–2 Cultural Marxism 103–4, 127 cultural relativism 134 culture wars 31, 86, 100, 124, 154–6, 173, 201–2, 212, 214, 218, 226, 228, 253, 257, 266, 284, 292, 308, 311–13, 342–3, 359–60 ‘Culture Wars’ (blog) 244–5 Cumberbatch, Benedict 10 curat Deus injuria Dei 288 Cyrus, Miley 358 Czechoslovakia 15 Da Vinci Code, The (film, 2006) 213 Dafoe, Willem 110 Daily Express (newspaper) 157, 161, 310–11 Daily Mail (newspaper) 83–4, 87–8, 103, 118–19, 122, 142, 151, 157, 183, 190, 246, 285, 331, 332–3 Daily Telegraph (newspaper) 2, 3, 83, 104, 157, 162, 187, 213, 232, 244–5, 266, 269, 282, 307–8, 315–16 Daley, Janet 46 Dalrymple, Theodore 144, 145, 162–3 Damon, Matt 24 Dance, Charles 235 Dances with Wolves (1991) 131 Danton, Georges 60 Dark Ages 4 Darwin, Charles 148, 350 Davy, Humphry 91 Dawkins, Richard 215, 218, 264, 339 The Blind Watchmaker 72, 164 DDR see German Democratic Republic de Beauvoir, Simone 139 de Maistre, Joseph 67, 89–90, 94, 97, 116, 119 de Montaigne, Michel 373 De Niro, Robert 24, 358 de Rais, Gilles 186 de Tocqueville, Alexis 226 de Tracy, Antoine 221 de Warens, Mme 39–40 Deacon, Joey 141 Dean, James 341 
Death Wish (1974) 185 Delingpole, James 307–9 democracy 17, 336, 355, 362 Democrats 5, 7, 51, 73, 112, 117, 224, 237–8, 247–8, 252, 274, 294–5, 297, 300–3, 305, 313, 319–20, 324, 326, 329, 339, 346–7, 360, 363 demos 31 Demos (think tank) 111 Denmark 351 Department of Work and Pensions 204 D’Épinay, Madame 39 Depression 115 depressive realism 30 Derbyshire, John 243 Derry, Lord 159 di Lampedusa, Giuseppe Tomasi, The Leopard 158–9 Diana, Princess 160–1, 310–11 DiCaprio, Leonardo 24 Dickens, Charles 174 Diderot, Denis 213 Dion, Céline 362 ‘dirt gap’ 247 Dirty Harry series 181 disability discrimination 141–2 discrimination 141–2, 304–5 Disraeli, Benjamin 53, 76, 77, 155, 274 Dissenters 50, 51, 52, 57, 61, 211, 293 Divine Right of Kings 50, 67 divorce 166 Dorsey, Jack 228–9 Douglas, Michael 22 DRD4 gene 349 Dre, Dr 122–3 dreams 180 Drebin, Frank 22 Dreger, Alice 328 Dreher, Rod 279, 373 Driscoll, Bill 335 Drop the Dead Donkey (TV show) 194 Drummond, Henry 279 Dufresne, Todd 139 Durkheim, Émile 78, 173, 225 Dworkin, Andrea 170 Dylan, Bob 24, 274 East Germany 15, 268 see also German Democratic Republic eastern Europe 15, 211 Eastwood, Clint 24 Eazy-E 122–3 Ecclestone, Christopher 24, 185 economic deregulation 153 economics 126–7 Economist (magazine) 3, 246, 355 Edexcel 262–3 education 175, 262–3 see also higher education; schools ‘education effect’ 250 educational philosophy 41–3 Edward VI 48 Efteling 339 Eich, Brendan 289 Einsatzgruppen 303 Ekman, Paul 146 ‘Elect’ 64 Electronic Frontier Foundation 290 Elias, Norbert 143 Elizabeth I 48–9, 291 Elizabeth II 161, 293, 331 empiricism 52 Enfield, Harry 208 Engels, Friedrich 173–4, 224 English Civil War 36, 49, 51, 64, 211, 326 English constitution 159 Enlightenment 222, 228, 229, 288 environmental movement 225, 284–6 envy, politics of 281 Episcopalians 115, 145 epistemological modesty 73–4 Equality Act (2006/2010) 199, 217–18, 289 eschatology 227 establishment, the 9–10, 56 Ethelbert of Kent 254 Eton 47 eugenics 139 Euripides 
187 Europe 12 eastern 15, 211 European Union (EU) 353–8 referendum (2016) 3, 173, 222, 270, 275, 302, 354–5, 357, 359 Euroscepticism 353–4, 355 Eve 33, 219 Eve, Lady Balfour 284 Everyone Says I Love You (1996) 102 evolution 72 extroversion 108 F (Fascism) Scale 195 FA Cup 193 Fabians 80, 96 Facebook 4, 19–20, 26, 103, 174, 219, 237–8, 296–7, 298–300, 311, 343, 354 ‘Fairness Doctrine’ 312 faith schools 137 Falklands War 194 Falling Down (1993) 22, 185 Family Planning Association 241 Farage, Nigel 309, 355 fascism 94–7, 179 Fascism (F) Scale 195 Father Ted (TV show) 1, 186, 232 Fawlty, Basil 192, 205, 331 fear 113, 115–22 Federal Communications Commission 312 Feiling, Keith 69–70 femininities 114, 136 feminism 170–1 fertility rates 362–4 Festival of Reason 1793 228 FHM (magazine) 169 film 184–5 Financial Times (newspaper) 3, 355 Finland 351 First World War 176–7, 180, 196, 209 Fitzwilliam, Lord 62 Food Standards Agency 203 Four Lions (2010) 185 Fourth Reich 229 Fox 313–14 Fox, Charles James 55, 56, 81 Fox News 5, 194, 313 France 52, 72, 210–11, 261, 358–9 Francis, Pope 24 Franco, Francisco 14, 178–9 Frank, Thomas 115, 155 Frankfurt School 103–4, 126, 245 Franks 254 Free French 211 free market 96, 97, 280, 364–5 free speech 324, 325 freedom, inability to handle 133 Freemasons 197 French Republic 210 French Revolution 46–7, 55–6, 58–61, 69, 72, 81, 89–90, 167, 181, 216–17, 221 Frenkel–Brunswik, Else 105 Freud, Sigmund 104, 109, 138–9 Freudian theory 104–5, 138 Friedman, Milton 204 Friendly Societies 197 Friends of Abe 185–6 Fromm, Erich 105 Frum, David 280 Fry, Stephen 195 Galileo Galilei 350 Gallup polls 302 Garnett, Alf 192 Gaskell, Peter 174 Gatiss, Mark 309 Gay Pride 219, 229 Gellner, Ernest 136 gender equality 73, 129, 140, 170–1 pay gap 73, 246 performative nature 136, 139–40 traditional norms 246 gender studies 136–7 Generation X 2, 221 Genet, Jean 121 genetics 138–40 Gentile, Giovanni 95 gentry 9 geographical segregation 295–6 German Democratic Republic (DDR) 
20–1, 373 see also East Germany Germany 95, 103–5, 295, 303, 366–7, 372–3 and the First World War 176–7 Nazi 139, 205, 246, 329 Gerson, Jean 33 Gibson, Mel 24, 130 Gibson, William 345 Gilbert, W.S. 101 Gingrich, Newt 270 Girondins 59–60 Gissing, George 242, 367 Giuliani, Rudy 42 GLC see Greater London Council globalisation 78 Glorious Revolution 52 God 45, 50, 56, 57, 64, 67, 69, 90, 97, 130, 188, 215, 219, 223, 288 God Squad 215 Godwin, William 61, 239 Goldberg, Jonah 96, 230 Goldberg, Whoopi 24 ‘golden-age’ myth 172–3, 176, 177 Goldman, Lisa 187 Gollancz, Victor 81 Goodfellas (1990) 123 Goodhart, David 46–7 Google 4, 339 Gore, Al 313 Gorky, Maxim 182 Gottfried, Paul 345 Gramsci, Antonio 126–7 Gray, John 280 ‘Great Awakening’ 9, 273, 329 ‘Great Awokening’ 329–30, 333, 335, 348 Great Realignment 365 Greater London Council (GLC) 20 Green Party 8, 232 Greene, Graham 166 Greene, Sir Hugh Carleton 165, 166 Griffin, Nick 87 Guardian (newspaper) 3, 11, 17, 27, 38, 83, 115, 169, 213, 232, 241, 244, 260, 270, 278–9, 296, 302, 308, 315, 331, 355 Guevara, Che 21, 25, 100 guillotine 60 Gulags 145, 240, 290 Gule, Lars 236 Guthrie, Woody 24 Hague, William 102 Haidt, Jonathan 184, 311 Hamid, Shadi 240 Hamilton, William 147 Hammond, Barbara 174 Hammond, John Lawrence 174 Handmaid’s Tale, The (1985) 168, 183 Hanks, Tom 24 Hare, David 189 Haringey Council 268–9 Harlesden 47 Harman, Harriet 217 Harris, John 310 Harvard 145, 146–7 Hawley, George 248 Heathers (1989) 112 ‘hegemonic degeneration’ 342 Heineken 4–5 help-to-buy scheme 247 Hemingway, Ernest 186 Henri IV 47 Henry, Lenny 193 Henry VIII 11, 48 as head of the Church of England 48 heresy 12, 37, 221, 254, 326 Herrnstein, Richard 146–7, 148 Heston, Charlton 24 higher education, expansion 136, 138 historical consequentialism 66–7 historical utilitarianism 67 Hitchens, Christopher 370 God is Not Great 214 Hitchens, Peter 46, 126, 161, 192, 273, 285 Hitler, Adolf 97–8, 99, 103, 106, 117, 236, 246, 303, 373 Hitler Youth 218 HM 
Revenue & Customs 204 Hobbes, Thomas 36–7, 42, 113, 326 Leviathan 36–7 Hobsbawm, Eric 80 Holland, Lady 81 Holland, Lord 81 Holland, Tom 271 Holland House 81 Hollywood 184–6 Holocaust 100 Holy Land 362 homicide 69, 84–5, 119, 122, 131, 163, 181 homophobia 28, 273, 280, 291 homosexuality 9, 16, 166, 184, 216–19, 222–3, 228–9, 272–3 see also same-sex marriage Honeyford, Ray 17 Hopkins, Katie 11 ‘horseshoe’ theory 96 Houellebecq, Michel, Submission (2015) 261 House of Commons 50, 55 House of Lords 159 house prices 125, 247, 248 House of Representatives 300 housing 175–6 Howard, Michael 163 Hugo, Victor 274 Hulk Hogan 195 human nature 54, 69–70, 105, 120, 130, 138–40, 231–2, 365–6 as ‘blank slate’ 138, 140 and children’s stories 261 Unconstrained Vision 130 human rights legislation 205 humanitarianism 74 Hume, David 38, 50, 129, 224 Humphreys, Nicholas 218 Hungarian Uprising 1956 80 Hungary 198 Hunter, James Davison 154 hunter-gatherers 131 Huxley, Aldous 148 hypocrisy 81, 86, 173, 189, 231–2 Ibsen, Henrik 188 identarians 345 identity politics 4, 9, 12, 117, 129, 317, 343 Ideological Turing Test 275 IMF see International Monetary Fund immigration 131, 164, 189, 194–5, 208, 367–8 ‘implicit bias’ 350–1 Independent (newspaper) 169, 207 individualism 283–4, 352 Industrial Revolution 75, 77–8, 174 Industrialisation 78 InfoWars 340 Inner London Education Authority (ILEA) 18–19 Inner London Teachers’ Association (ILTA) 19 Innocent III, Pope 33 Institute of Fiscal Studies 266 intelligentsia 12, 15 International Monetary Fund (IMF) 79 internationalism 97 Internet 298 intolerance 326–7 Inuit 131 IPC media 169 IRA see Irish Republican Army Iran 292 Iraq War 189, 201 Irish hunger strikes 201 Irish Republican Army (IRA) 367 ISIS 27, 43, 367 Islam 172–3, 190, 238–9, 339, 367–8 see also Muslims Islamic extremism 190 Islamic terrorism 115, 136, 367–8 Islamists, radical 27, 65–6 Israel 236, 300, 362 It’s a Wonderful Life (1946) 123–4, 287 Ivins, Molly 155 Jacobins 59, 60 Jahiliyyah 
(‘Age of Ignorance’) 173, 177 James I 49 James II 52 Jansenism 45 Jefferson, Thomas 53, 61, 109, 206–7 Jeffreys, Henry 259 Jensen, Arthur 147 Jeremiah 34–5 Jesuits 261 Jesus 56, 64, 215, 229 Jewish Tanakh 294 Jews 57, 88, 89, 95, 117, 294, 362, 366 see also Judaism Jindal, Bobby 10 job discrimination 12 Johansson, Scarlett 24 Johnson, Boris 1, 270 Johnson, Paul 46, 121, 181, 188, 273 Johnson, Samuel, Dictionary (1755) 64 Jones, Bridget 11, 25 Jones, Owen 26, 235–6 Jost, John 334–5 journalism 11, 316–17 Journey’s End (play) 180 Judaism 93 see also Jews Julian the Apostate 276 Justinian 291 Kael, Pauline 336 Kahneman, Daniel 29 Kant, Immanuel 69 Kassam, Raheem 309 Kelling, George L. 69 Kelly-Woessner, April 326–7 Kemp, Peter 14 Kennedy, John F. 24, 158, 178 Kennedy, Robert 178 Kennet, Baron 42 Kerr, Judith, The Tiger Who Came to Tea 258 King, Martin Luther 24, 85, 131, 178, 214, 227 King, Stephen 106, 242 Kinnock, Neil 82 Kirk, Russell 68, 91, 286–7, 362, 365 Knox, John 335 Koch Foundation 233 Koran 66 Kristol, Irving 120 Ku Klux Klan 193 Kutcher, Ashton 311 Labour government 163–5, 164, 175, 208, 217 post-war 47 Labour MPs 235–6 Labour Party 5, 18–19, 166, 226, 234–5, 250, 265, 267–9, 271–2, 367 and academia 8 centrist tendencies 21 and Clause 4 153 image 11 and personality types 108–9 and women 6 Lady Gaga 358 Lakoff, George 111 Lammy, David 186 Lasch, Christopher 132, 231 L’Aurore (newspaper) 132 law making 70 Lawrence, D.H., Lady Chatterley’s Lover 166 Lawrence, Philip 122 Laws, David 266 Lawson, Nigel 230 Le Chatelier 271 Le Conservateur (journal) 63 learning, ‘child-centred’ 41 Leave faction 238, 243–4, 322, 354, 355–7 Lee, Laurie 179 Lee, Stewart 191–2 Lego Movie, The (2014) 259 Legutko, Ryszard 228 Lehrer, Tom 178 Lennon, John 24, 186 Levasseur, Thérèse 40 Levin, Yuval 59, 71 Lewis, C.

pages: 632 words: 163,143

The Musical Human: A History of Life on Earth
by Michael Spitzer
Published 31 Mar 2021

The ideal seemed to have been realised by a 1992 software application called EMI (Experiments in Musical Intelligence), written by the musician and computer scientist David Cope, and upgraded by Cope’s next ‘daughter’, the more complex Emily Howell.38 EMI and Emily have created new works by a host of dead composers (including Bach, Mozart, Beethoven, Chopin and Prokofiev); the results have been released on several CDs, and performances appear to have passed the musical Turing Test, sometimes called a ‘Lovelace Test’. The original Turing Test checked whether a machine’s intelligent behaviour was indistinguishable from a human’s. In Cope’s musical version of this experiment, he played two Chopin mazurkas – a real one and one composed by EMI – to an extremely discerning audience at the world-famous Eastman School of Music.
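Cope’s Eastman experiment is, in effect, a two-alternative forced-choice discrimination test: listeners hear a real mazurka and an EMI mazurka and must say which is which, and the machine “passes” if their accuracy stays near the 50 per cent chance level. A minimal sketch of how such a result could be scored (the vote counts are hypothetical, and the normal approximation to the binomial is a deliberate simplification):

```python
import math

def passes_discrimination_test(correct, total, z_crit=1.645):
    """Two-alternative forced choice: listeners guess which of two
    pieces is human-composed. If their accuracy is not significantly
    above the 50% chance level (one-sided z-test at roughly the 5%
    level), the machine 'passes'."""
    mean = total * 0.5                 # expected correct under guessing
    sd = math.sqrt(total * 0.25)       # binomial sd under chance guessing
    z = (correct - mean) / sd          # normal approximation
    return z < z_crit

# Hypothetical audience of 100 listeners:
print(passes_discrimination_test(54, 100))  # True  (54% correct ~ chance)
print(passes_discrimination_test(70, 100))  # False (70% correct: detected)
```

A real analysis would use an exact binomial test, but the logic is the same: a style imitator passes not by fooling everyone, but by pushing the judges back to coin-flipping.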

Music is certainly ubiquitous, thanks to digital media, but cheap and without human value for that very reason. In Beethoven’s day, you would be lucky to hear a symphony twice in your lifetime, and would have cherished it all the more because of that. Music can now be composed by computers using algorithms, and the results can pass a ‘Turing Test’ for stylistic authenticity.88 If it sounds like new Vivaldi, who needs a composer? This pessimism can be flipped on its head. Some would say that music’s interaction with technology, through electro-acoustic composition, ubiquitous computing, distributed cognition and the general ‘gamification’ of sound, actually brings back the performative element of music.89 I lamented that the fate of music across the world, and especially in the West, was a decay from participation into passive listening.
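“Composed by computers using algorithms” covers many techniques; the oldest and simplest is to learn transition statistics from a corpus and sample new sequences from them. A toy first-order Markov sketch (the training “corpus” is an invented note list, not real Vivaldi, and the note names are plain strings):

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Record first-order transitions: which notes follow which."""
    table = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        table[a].append(b)
    return table

def compose(table, start, length, seed=0):
    """Sample a new melody in the statistical style of the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1]) or [start]  # dead end: restart
        melody.append(rng.choice(choices))
    return melody

corpus = ["C", "E", "G", "E", "C", "E", "G", "C", "D", "E", "C"]
print(compose(train_markov(corpus), "C", 8))
```

Style-imitation systems such as EMI are far more elaborate (they recombine analysed fragments of real scores), but the underlying move is the same: model a style statistically, then resample it.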

Future Files: A Brief History of the Next 50 Years
by Richard Watson
Published 1 Jan 2008

It is much like a “chatbot” except that, if it answers a question incorrectly, you can correct it and Cyc will learn from its mistakes. But Cyc still isn’t very intelligent, which is possibly why author, scientist and futurist Ray Kurzweil made a public bet with Mitchell Kapor, the founder of Lotus, that a computer would pass the Turing test by 2029. He based this prediction on ideas expressed in his book The Singularity Is Near: in essence, arguing that intelligence will expand in a limitless, exponential manner once we achieve a certain level of advancement in genetics, nanotechnology and robotics and the integration of that technology with human biology.
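Cyc itself is a vast hand-built knowledge base, but the correct-and-learn loop the passage describes can be sketched as a lookup that user corrections overwrite. Everything below, class and method names included, is illustrative and bears no relation to Cyc’s actual API:

```python
class CorrectableBot:
    """Toy Q&A bot: answers from stored facts and learns from corrections."""

    def __init__(self, facts=None):
        self.facts = dict(facts or {})

    def ask(self, question):
        return self.facts.get(question, "I don't know.")

    def correct(self, question, right_answer):
        # "Learning from its mistakes" = replacing the stored fact.
        self.facts[question] = right_answer

bot = CorrectableBot({"capital of Australia": "Sydney"})  # deliberately wrong
print(bot.ask("capital of Australia"))    # Sydney
bot.correct("capital of Australia", "Canberra")
print(bot.ask("capital of Australia"))    # Canberra
```

The gap between this sketch and real intelligence is exactly the book’s point: storing a correction is easy; generalising from it is not.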

A 311 Index ‘O’ Garage 170 3D printers 56 accelerated education 57 accidents 159, 161–6, 173, 246 ACNielsen 126 adaptive cruise control 165 Adeg Aktiv 50+ 208 advertising 115–16, 117, 119 Africa 70, 89, 129, 174, 221, 245, 270, 275, 290, 301 ageing 1, 10, 54, 69, 93, 139, 147–8, 164, 188, 202, 208, 221, 228–9, 237, 239, 251, 261, 292, 295, 297–8 airborne networks 56 airlines 272 allergies 196–7, 234, 236 Alliance Against Urban 4x4s 171 alternative energy 173 alternative futures viii alternative medicine 244–5 alternative technology 151 amateur production 111–12 Amazon 32, 113–14, 121 American Apparel 207 American Express 127–8 androids 55 Angola 77 anti-ageing drugs 231, 237 anti-ageing foods 188 anti-ageing surgery 2, 237 antibiotics 251 anxiety 10, 16, 30, 32, 36, 37, 128, 149, 179, 184, 197, 199, 225, 228, 243, 251, 252, 256, 263, 283–4, 295–6, 300, 301, 305 Apple 61, 115, 121, 130, 137–8, 157 Appleyard, Bryan 79 Argentina 210 Armamark Corporation 193 artificial intelliegence 22, 40, 44, 82 131, 275, 285–6, 297, 300 Asda 136, 137 Asia 11, 70, 78, 89, 129, 150, 174, 221, 280, 290, 292 Asimov, Isaac 44 Asos.com 216 asthma 235 auditory display software 29 Australia 20–21, 72–3, 76, 92, 121, 145, 196, 242, 246, 250, 270, 282 Austria 208 authenticity 32, 37, 179, 194, 203–11 authoritarianism 94 automated publishing machine (APM) 114 automation 292 automotive industry 154–77 B&Q 279 baby boomers 41, 208 bacterial factories 56 Bahney, Anna 145 Bahrain 2 baking 27, 179, 195, 199 Bangladesh 2 bank accounts, body double 132 banknotes 29, 128 banks 22, 123, 135–8, 150, 151 virtual 134 Barnes and Noble 114 bartering 151 BBC 25, 119 Become 207 Belgium 238 313 314 benriya 28 Berlusconi, Silvio 92 Best Buy 223 biofuel 64 biomechatronics 56 biometric identification 28, 35, 52, 68, 88, 132 bionic body parts 55 Biosphere Expeditions 259 biotechnology 40, 300 blended families 20 blogs 103, 107, 109, 120 Blurb 113 BMW 289 board games 225 body double bank accounts 132 body parts 
bionic 55 replacement 2, 188, 228 Bolivia 73 Bollywood 111 books 29, 105, 111–25 boomerang kids 145 brain transplants 231 brain-enhancing foods 188 Brazil 2, 84, 89, 173, 247, 254, 270, 290 Burger King 184 business 13, 275–92 Bust-Up 189 busyness 27, 195, 277 Calvin, Bill 45 Canada 63, 78, 240 cancer 251 car sharing 160, 169, 176 carbon credits 173 carbon footprints 255 carbon taxes 76, 172 cars classic 168–9 driverless 154–5 flying 156, 165 hydrogen-powered 12, 31, 157, 173 pay-as-you-go 167–8 self-driving 165 cascading failure 28 cash 126–7, 205 cellphone payments 129, 213 cellphones 3, 25, 35, 51, 53, 120, 121, FUTURE FILES 129, 156, 161, 251 chicken, Christian 192 childcare robots 57 childhood 27, 33–4, 82–3 children’s database 86 CHIME nations (China, India, Middle East) 2, 10, 81 China 2, 10, 11, 69–72, 75–81, 88, 92–3, 125, 137, 139–40, 142, 151, 163, 174–5, 176, 200, 222, 228, 247, 260, 270–71, 275, 279, 295, 302 choice 186–7 Christian chicken 192 Christianity, muscular 16, 73 Chrysler 176 cinema 110–11, 120 Citibank 29, 128 citizen journalism 103–4, 108 City Car Club 168 Clarke, Arthur C. 
58–9 Clarke’s 187 classic cars 168–9 climate change 4, 11, 37, 43, 59, 64, 68, 74, 77–9, 93, 150, 155, 254, 257, 264, 298–9 climate-controlled buildings 254, 264 cloning 38 human 23, 249 CNN 119 coal 176 Coca-Cola 78, 222–3 co-creation 111–12, 119 coins 29, 128, 129 collective intelligence 45–6 Collins, Jim 288 comfort eating 200 Comme des Garçons 216 community 36 compassion 120 competition in financial services 124–5 low-cost 292 computers disposable 56 intelligent 23, 43 organic 56 wearable 56, 302 computing 3, 33, 43, 48, 82 connectivity 3, 10, 11, 15, 91, 120, Index 233, 261, 275–6, 281, 292, 297, 299 conscientious objection taxation 86 contactless payments 123, 150 continuous partial attention 53 control 36, 151, 225 convenience 123, 178–9, 184, 189, 212, 223, 224 Coren, Stanley 246 corporate social responsibility 276, 282, 298 cosmetic neurology 250 Costa Rica 247 Craig’s List 102 creativity 11, 286; see also innovation credit cards 141–3, 150 crime 86–9 forecasting 86–7 gene 57, 86 Croatia 200 Crowdstorm 207 Cuba 75 cultural holidays 259, 273 culture 11, 17–37 currency, global 127, 151 customization 56, 169, 221–2, 260 cyberterrorism 65, 88–9 Cyc 45 cynicism 37 DayJet 262 death 237–9 debt 123–4, 140–44, 150 defense 63, 86 deflation 139 democracy 94 democratization of media 104, 108, 113 demographics 1, 10, 21, 69, 82, 93, 202, 276, 279–81, 292, 297–8 Denmark 245 department stores 214 deregulation 11, 3 Destiny Health 149 detox 200 Detroit Project 171 diagnosis 232 remote 228 digital downloads 121 evaporation 25 315 immortality 24–5 instant gratification syndrome 202 Maoism 47 money 12, 29, 123, 126–7, 129, 132, 138, 150, 191 nomads 20, 283 plasters 241 privacy 25, 97, 108 readers 121 digitalization 37, 292 Dinner by Design 185 dirt holidays 236 discount retailers 224 Discovery Health 149 diseases 2, 228 disintegrators 57 Disney 118–19 disposable computers 56 divorce 33, 85 DNA 56–7, 182 database 86 testing, compulsory 86 do-it-yourself dinner shops 185–6 
dolls 24 doorbells 32 downshifters 20 Dream Dinners 185 dream fulfillment 148 dressmaking 225 drink 178–200 driverless cars 154–5 drugs anti-ageing 231, 237 performance-improving 284–5 Dubai 264, 267, 273 dynamic pricing 260 E Ink 115 e-action 65 Earthwatch 259 Eastern Europe 290 eBay 207 e-books 29, 37, 60, 114, 115, 302 eco-luxe resorts 272 economic collapse 2, 4, 36, 72, 221, 295 economic protectionism 10, 15, 72, 298 economy travel 272 316 Ecuador 73 education 15, 18, 82–5, 297 accelerated 57 lifelong learning 290 Egypt 2 electricity shortages 301 electronic camouflage 56 electronic surveillance 35 Elephant 244 email 18–19, 25, 53–4, 108 embedded intelligence 53, 154 EMF radiation 251 emotional capacity of robots 40, 60 enclosed resorts 273 energy 72, 75, 93 alternative 173 nuclear 74 solar 74 wind 74 enhancement surgery 249 entertainment 34, 121 environment 4, 10, 11, 14, 64, 75–6, 83, 93, 155, 171, 173, 183, 199, 219–20, 252, 256–7, 271, 292, 301 epigenetics 57 escapism 16, 32–3, 121 Estonia 85, 89 e-tagging 129–30 e-therapy 242 ethical bankruptcy 35 ethical investing 281 ethical tourism 259 ethics 22, 24, 41, 53, 78, 86, 132, 152, 194, 203, 213, 232, 238, 249–50, 258, 276, 281–2, 298–9 eugenics 252 Europe 11, 70, 72, 81, 91, 141, 150, 174–5, 182, 190, 192, 209 European Union 15, 139 euthanasia 238, 251 Everquest 33 e-voting 65 experience 224 extended financial families 144 extinction timeline 9 Facebook 37, 97, 107 face-recognition doors 57 fakes 32 family 36, 37 FUTURE FILES family loans 145 fantasy-related industries 32 farmaceuticals 179, 182 fast food 178, 183–4 fat taxes 190 fear 10, 34, 36, 38, 68, 150, 151, 305 female-only spaces 210–11, 257 feminization 84 financial crisis 38, 150–51, 223, 226, 301 financial services 123–53, 252 trends 123–5 fish farming 181 fixed-price eating 200 flashpacking 273 flat-tax system 85–6 Florida, Richard 36, 286, 292 flying cars 165 food 69–70, 72, 78–9, 162, 178–201 food anti-ageing 188 brain-enhancing 188 fast 178, 
183–4 functional 179 growing your own 179, 192, 195 history 190–92 passports 200 slow 178, 193 tourism 273 trends 178–80 FoodExpert ID 182 food-miles 178, 193, 220 Ford 169, 176, 213, 279–80 forecasting 49 crime 86–7 war 49 Forrester Research 132 fractional ownership 168, 175, 176, 225 France 103, 147, 170, 189, 198, 267 Friedman, Thomas 278–9, 292 FriendFinder 32 Friends Reunited 22 frugality 224 functional food 179 Furedi, Frank 68 gaming 32–3, 70, 97, 111–12, 117, 130, 166, 262 Gap 217 Index gardening 27, 148 gas 176 GE Money 138, 145 gendered medicine 244–5 gene silencing 231 gene, crime 86 General Motors 157, 165 Generation X 41, 281 Generation Y 37, 41, 97, 106, 138, 141–2, 144, 202, 208, 276, 281, 292 generational power shifts 292 Genes Reunited 35 genetic enhancement 40, 48 history 35 modification 31, 182 testing 221 genetics 3, 10, 45, 251–2 genomic medicine 231 Germany 73, 147, 160, 170, 204–5, 216–17, 261, 267, 279, 291 Gimzewski, James 232 glamping 273 global currency 127 global warming 4, 47, 77, 93, 193, 234 globalization 3, 10, 15–16, 36–7, 63–7, 72–3, 75, 81–2, 88, 100, 125, 139, 143, 146, 170, 183, 189, 193–5, 221, 224, 226, 233–4, 247–8, 263, 275, 278–80, 292, 296, 299 GM 176 Google 22, 61, 121, 137, 293 gout 235 government 14, 18, 36, 63–95, 151 GPS 3, 15, 26, 50, 88, 138, 148, 209, 237, 262, 283 Grameen Bank 135 gravity tubes 57 green taxes 76 Greenpeace 172 GRIN technologies (genetics, robotics, internet, nanotechnology) 3, 10, 11 growing your own food 178, 192, 195 Gucci 221 Gulf States 125, 260, 268 H&M 217 habitual shopping 212 Handy, Charles 278 317 Happily 210 happiness 63–4, 71–2, 146, 260 health 15, 82, 178–9, 199 health monitoring 232, 236, 241 healthcare 2, 136, 144, 147–8, 154, 178–9, 183–4, 189–91, 228–53, 298; see also medicine trends 214–1534–7 Heinberg, Richard 74 Helm, Dieter 77 Heritage Foods 195 hikikomori 18 hive mind 45 holidays 31, 119; see also tourism holidays at home 255 cultural 259 dirt 236 Hollywood 33, 111–12 
holographic displays 56 Home Equity Share 145 home baking 225 home-based microgeneration 64 home brewing 225 honesty 152 Hong Kong 267 hospitals 228, 241–3, 266 at home 228, 238, 240–42 hotels 19, 267 sleep 266 human cloning 23, 249 Hungary 247 hybrid humans 22 hydrogen power 64 hydrogen-powered cars 12, 31, 157, 173 Hyperactive Technologies 184 Hyundai 170 IBM 293 identities, multiple 35, 52 identity 64, 71 identity theft 88, 132 identity verification, two-way 132 immigration 151–2, 302 India 2, 10, 11, 70–72, 76, 78–9, 81, 92, 111, 125, 135, 139, 163, 174–5, 176, 247, 249–50, 254, 260, 270, 275, 279, 302 indirect taxation 86 318 individualism 36 Indonesia 2, 174 industrial robots 42 infinite content 96–7 inflation 151 information overlead 97, 120, 159, 285; see also too much information innovation 64, 81–2, 100, 175, 222, 238, 269, 277, 286–8, 291, 297, 299 innovation timeline 8 instant gratification 213 insurance 123, 138, 147–50, 154, 167, 191, 236, 250 pay-as-you-go 167 weather 264 intelligence 11 embedded 53, 154 implants 229 intelligent computers 23, 43 intelligent night vision 162–3 interaction, physical 22, 25, 97, 110, 118, 133–4, 215, 228, 243, 276, 304 interactive media 97, 105 intergenerational mortgages 140, 144–5 intermediaries 123, 135 internet 3, 10, 11, 17–18, 25, 68, 103, 108, 115–17, 124, 156, 240–41, 261, 270, 283, 289, 305 failure 301 impact on politics 93–4 sensory 56 interruption science 53 iPills 240 Iran 2, 69 Ishiguro, Hiroshi 55 Islamic fanaticism 16 Italy 92, 170, 198–9 iTunes 115, 130; see also Apple Japan 1, 18, 26, 28–9, 54–5, 63, 80–81, 114, 121, 128–9, 132, 140, 144–5, 147, 174, 186, 189, 192, 196, 198, 200, 209–10, 223, 240, 260, 264, 271, 279, 291 jetpacks 60 job security 292 journalism 96, 118 journalism, citizen 103–4, 107 joy-makers 57 FUTURE FILES Kaboodle 207 Kapor, Mitchell 45 Kenya 128 keys 28–9 Kindle 60, 121 Kramer, Peter 284 Kuhn, Thomas 281 Kurzweil, Ray 45 Kuwait 2 labor migration 290–91 labor shortages 3, 80–81, 
289–90 Lanier, Jaron 47 laser shopping 212 leisure sickness 238 Let’s Dish 185 Lexus 157 libraries 121 Libya 73 life-caching 24, 107–8 lighting 158, 160 Like.com 216 limb farms 249 limited editions 216–17 live events 98, 110, 304 localization 10, 15–16, 116, 128, 170, 178, 189, 193, 195, 215, 220, 222–3, 224, 226, 255, 270, 297 location tagging 88 location-based marketing 116 longevity 188–9, 202 Longman, Philip 71 low cost 202, 219–22 luxury 202, 221, 225, 256, 260, 262, 265–6, 272 machinamas 112 machine-to-machine communication 56 marketing 115–16 location-based 116 now 116 prediction 116 Marks & Spencer 210 Maslow, Abraham 305–6 masstigue 223 materialism 37 Mayo Clinic 243 McDonald’s 130, 168, 180, 184 McKinsey 287 Index meaning, search for 16, 259, 282, 290, 305–6 MECU 132 media 96–122 democratization of 104, 108, 115 trends 96–8 medical outsourcing 247–8 medical tourism 2, 229, 247 medicine 188, 228–53; see also healthcare alternative 243–4 gendered 244–5 genomic 231 memory 229, 232, 239–40 memory loss 47 memory pills 231, 240 memory recovery 2, 228–9, 239 memory removal 29–30, 29, 240 Menicon 240 mental health 199 Meow Mix 216 Merriman, Jon 126 metabolomics 56 meta-materials 56 Metro 204–5 Mexico 2 micromedia 101 micro-payments 130, 150 Microsoft 137, 147, 293 Middle East 10, 11, 70, 81, 89, 119, 125, 129, 139, 174–5, 268, 301 migration 3, 11, 69–70, 78, 82, 234, 275, 290–91 boomerang 20 labor 290–91 Migros 215 military recruitment 69 military vehicles 158–9 mind-control toys 38 mindwipes 57 Mitsubishi 198, 279 mobile payments 123, 150 Modafinil 232 molecular biology 231 monetization 118 money 123–52 digital 12, 29, 123, 126–7, 129, 132, 138, 150, 191 monitoring, remote 154, 168, 228, 242 monolines 135, 137 319 mood sensitivity 41, 49, 154, 158, 164, 187–8 Morgan Stanley 127 mortality bonds 148 Mozilla Corp. 
289 M-PESA 129 MTV 103 multigenerational families 20 multiple identities 35, 52 Murdoch, Rupert 109 muscular Christianity 16, 73 music industry 121 My-Food-Phone 242 MySpace 22, 25, 37, 46, 97, 107, 113 N11 nations (Bangladesh, Egypt, Indonesia, Iran, South Korea, Mexico, Nigeria, Pakistan, Philippines, Turkey, Vietnam) 2 nanoelectronics 56 nanomedicine 32 nanotechnology 3, 10, 23, 40, 44–5, 50, 157, 183, 232, 243, 286, 298 napcaps 56 narrowcasting 109 NASA 25, 53 nationalism 16, 70, 72–3, 139, 183, 298, 302 natural disasters 301 natural resources 2, 4, 11, 64, 298–9 Nearbynow 223 Nestlé 195 Netherlands 238 NetIntelligence 283 networkcar.com 154 networks 28, 166, 288 airborne 56 neural nets 49 neuronic whips 57 neuroscience 33, 48 Neville, Richard 58–9 New Economics Foundation 171 New Zealand 265, 269 newspapers 29, 102–9, 117, 119, 120 Nigeria 2, 73 Nike 23 nimbyism 63 no-frills 224 Nokia 61, 105 Norelift 189 320 Northern Rock 139–40 Norwich Union 167 nostalgia 16, 31–2, 51, 169–70, 179, 183, 199, 203, 225, 303 now marketing 116 nuclear annihilation 10, 91 nuclear energy 74 nutraceuticals 179, 182 Obama, Barack 92–3 obesity 75, 190–92, 199, 250–51 oceanic thermal converters 57 oil 69, 72–3, 93, 151, 174, 176, 272, 273, 301 Oman 2, 270 online relationships 38 organic computers 56 organic food 200, 226 osteoporosis 235 outsourcing 224, 292 Pakistan 2 pandemics 4, 10, 16, 59, 72, 128, 232, 234, 272, 295–7, 301 paper 37 parasite singles 145 passwords 52 pictorial 52 pathogens 233 patient simulators 247 patina 31 patriotism 63, 67, 299 pay-as-you-go cars 167–8 pay-as-you-go insurance 167 payments cellphone 129, 213 contactless 123, 150 micro- 130, 150 mobile 123, 150 pre- 123, 150 PayPal 124, 137 Pearson, Ian 44 performance-improving drugs 284–5 personal restraint 36 personal robots 42 personalization 19, 26, 56, 96–8, 100, 102–3, 106, 108–9, 120, 138, 149, 183, 205–6, 223, 244–5, 262, 267, 269 Peru 73 FUTURE FILES Peters, Tom 280 Pharmaca 244 pharmaceuticals 2, 33, 
228, 237 Philippines 2, 212, 290 Philips 114 Philips, Michael 232–3 photographs 108 physical interaction 22, 25, 97, 110, 118, 133–4, 215, 228, 243, 276, 304 physicalization 96–7, 101–2, 106, 110, 120 pictorial passwords 52 piggy banks 151 Pink, Daniel 285 plagiarism 83 polarization 15–16, 285 politics 37, 63–95, 151–2 regional 63 trends 63–5 pop-up retail 216, 224 pornography 31 portability 178, 183–4 power shift eastwards 2, 10–11, 81, 252 Prada 205–6, 216 precision agriculture 181–2 precision healthcare 234–7 prediction marketing 116 predictions 37, 301–2 premiumization 223 pre-payments 123, 150 privacy 3, 15, 41, 50, 88, 154, 165–7, 205, 236, 249, 285, 295 digital 25, 97, 108 Procter & Gamble 105, 280 product sourcing 224 Prosper 124, 135 protectionism 67, 139, 156, 220, 226, 301 economic 10, 15, 72, 299 provenance 178, 193, 226 proximity indicators 32 PruHealth 149 psychological neoteny 52 public ownership 92 public transport 171 purposeful shopping 212 Qatar 2 quality 96–7, 98, 101, 109 Index quantum mechanics 56 quantum wires 56 quiet materials 56 radiation, EMF 251 radio 117 randominoes 57 ranking 34, 83, 109, 116, 134, 207 Ranking Ranqueen 186 reality mining 51 Really Cool Foods 185 rebalancing 37 recession 139–40, 202, 222 recognition 36, 304 refrigerators 197–8 refuge 121 regeneration 233 regional food 200 regional politics 63 regionality 178, 192–3 regulation 124, 137, 143 REI 207 Reid, Morris 90 relationships, online 38 religion 16, 58 remote diagnosis 228 remote monitoring 154, 168, 228, 242 renting 225 reputation 34–5 resistance to technology 51 resorts, enclosed 273 resource shortages 11, 15, 146, 155, 178, 194, 254, 300 resources, natural 2, 4, 11, 64, 73–4, 143, 298–9 respect 36, 304 restaurants 186–8 retail 20–21, 202–27, 298 pop-up 216, 224 stealth 215 theater 214 trends 202–3 Revkin, Andy 77 RFID 3, 24, 50, 121, 126, 149, 182, 185, 192, 196, 205 rickets 232 risk 15, 124, 134, 138, 141, 149–50, 162, 167, 172, 191, 265, 299–300, 303 Ritalin 232 
321 road pricing 166 Robertson, Peter 49 robogoats 55 robot department store 209 Robot Rules 44 robotic assistants 54, 206 concierges 268 financial advisers 131–2 lobsters 55 pest control 57 soldiers 41, 55, 60 surgery 35, 41, 249 robotics 3, 10, 41, 44–5, 60, 238, 275, 285–6, 292, 297 robots 41, 54–5, 131, 237, 249 childcare 57 emotional capacity of 40, 60 industrial 42 personal 42 security 209 therapeutic 41, 54 Russia 2, 69, 72, 75, 80, 89, 92–3, 125, 174, 232, 254, 270, 295, 302 safety 32, 36, 151, 158–9, 172–3, 182, 192, 196 Sainsbury’s 215 Salt 187 sanctuary tourism 273 satellite tracking 166–7 Saudi Arabia 2, 69 Schwartz, Barry 186 science 13, 16, 40–62, 300 interruption 53 trends 40–42 scramble suits 57 scrapbooking 25, 108, 225 Sears Roebuck 137 seasonality 178, 193–4 second-hand goods 224 Second Life 133, 207–8 securitization 124, 140 security 16, 31, 151 security robots 209 self-driving cars 165 self-medication 242 self-publishing 103, 113–14 self-reliance 35, 75 self-repairing roads 57 322 self-replicating machines 23, 44 Selfridges 214 sensor motes 15, 50, 196 sensory internet 56 Sharia-based investment 125 Shop24 209 shopping 202–27 habitual 212 laser 212 malls 211–5 purposeful 212 slow 213 social 207 Shopping 2.0 224 short-wave scalpels 57 silicon photonics 56 simplicity 169–70, 179, 186, 202, 218, 224, 226, 272 Singapore 241 single-person households 19–20, 202–3, 208–9, 221, 244, 298, 304 skills shortage 293, 302 sky shields 57 sleep 159–60, 188, 228, 231, 246–7, 265 sleep debt 96, 266 sleep hotels 266 sleep surrogates 57 slow food 178, 193 slow shopping 213 slow travel 273 smart devices 26–7, 28, 32, 35, 44, 50, 56, 57, 164, 206, 207 smart dust 3, 15, 50, 196 smartisans 20 Smartmart 209 snakebots 55 social networks 97, 107, 110, 120, 133, 217, 261 social shopping 207 society 13, 15–16, 17–37 trends 15–16 Sodexho 193 solar energy 74 Sony 114, 121 South Africa 84, 149, 242 South America 82, 270 South Korea 2, 103, 128–9 space ladders 56 space mirrors 
47 space tourism 271, 273 FUTURE FILES space tugs 57 speed 164, 202, 209, 245, 296–7 spirituality 16, 22, 282, 298, 306 spot knowledge 47 spray-on surgical gloves 57 St James’s Ethics Centre 282 stagflation 139 starch-based plastics 64 stealth retail 215 stealth taxation 86 Sterling, Bruce 55 storytelling 203 Strayer, David 161 street signs 162–3 stress 32, 96, 235, 243, 245–6, 258–9, 265, 257–9, 275, 277, 283–5 stress-control clothing 57 stupidity 151, 302 Stylehive 207 Sudan 73 suicide tourism 236 Super Suppers 185 supermarkets 135–6, 184–6, 188, 191–2, 194, 202–3, 212, 215, 218–19, 224, 229 surgery 2, 31 anti-ageing 2, 237 enhancement 249 Surowiecki, James 45 surveillance 35, 41 sustainability 4, 37, 74, 181, 193–5, 203, 281, 288, 298–9 Sweden 84 swine flu 38, 251, 272 Switzerland 168, 210, 215 synthetic biology 56 Taco Bell 184 Tactical Numerical Deterministic Model 49 tagging, location 86, 88 Taiwan 81 talent, war for 275, 279, 293; see also labor shortages Target 216 Tasmania 267 Tata Motors 174, 176 taxation 85–6, 92, 93 carbon 76, 172 conscientious objection 86 Index fat 190 flat 85–6 green 76 indirect 86 stealth 86 Tchibo 217 technology 3, 14–16, 18, 22, 26, 28, 32, 37, 40–62, 74–5, 82–3, 96, 119, 132, 147–8, 154, 157, 160, 162, 165–7, 178, 182, 195–8, 208, 221, 229, 237, 242–3, 249, 256, 261, 265–6, 268, 275–6, 280, 283–4, 292, 296–7, 300 refuseniks 30, 51, 97 trends 40–42 telemedicine 228, 238, 242 telepathy 29 teleportation 56 television 21, 96, 108, 117, 119 terrorism 67, 91, 108, 150, 262–3, 267, 272, 295–6, 301 Tesco 105, 135–6, 185, 206, 215, 219, 223 Thailand 247, 290 therapeutic robots 41, 54 thermal imaging 232 things that won’t change 10, 303–6 third spaces 224 ThisNext 207 thrift 224 Tik Tok Easy Shop 209 time scarcity 30, 96, 102, 178, 184–6, 218, 255 time shifting 96, 110, 116 time stamps 50 timeline, extinction 9 timeline, innovation 8 timelines 7 tired all the time 246 tobacco industry 251 tolerance 120 too much choice (TMC) 29, 202, 218–19 
too much information (TMI) 29, 51, 53, 202, 229; see also information overload tourism 254–74 cultural 273 ethical 259 food 273 323 local 273 medical 2, 229, 247 sanctuary 273 space 271, 273 suicide 238 tribal 262 Tourism Concern 259 tourist quotas 254, 271 Toyota 48–9, 157 toys, mind-control 38 traceability 195 trading down 224 transparency 3, 15, 143, 152, 276, 282, 299 transport 15, 154–77, 298 public 155, 161 trends 154–6 transumerism 223 travel 2, 3, 11, 148, 254–74 economy 272 luxury 272 slow 273 trends 254–6 trend maps 6–7 trends 1, 5–7, 10, 13 financial services 123–5 food 178–80 healthcare 228–9 media 96–8 politics 63–5 retail 202–3 science and technology 40–42 society 15–16 transport 154–6 travel 254–6 work 275–7 tribal tourism 262 tribalism 15–16, 63, 127–8, 183, 192, 220, 260 trust 82, 133, 137, 139, 143, 192, 203, 276, 282–3 tunnels 171 Turing test 45 Turing, Alan 44 Turkey 2, 200, 247 Twitter 60, 120 two-way identity verification 132 UAE 2 UFOs 58 324 UK 19–20, 72, 76, 84, 86, 90–91, 100, 102–3, 105, 128–9, 132, 137, 139–42, 147–9, 150, 163, 167–8, 170–71, 175, 185, 195–6, 199, 200, 206, 210, 214–16, 238, 259, 267–8, 278–9, 284, 288 uncertainty 16, 30, 34, 52, 172, 199, 246, 263, 300, 303 unemployment 151 Unilever 195 University of Chicago 245–6 urban rental companies 176 urbanization 11, 18–19, 78, 84, 155, 233 Uruguay 200 US 1, 11, 19–21, 23, 55–6, 63, 67, 69, 72, 75, 77, 80–83, 86, 88–90, 92, 104–5, 106, 121, 129–33, 135, 139–42, 144, 147, 149, 150, 151, 162, 167, 169–71, 174, 185, 190–3, 195, 205–6, 209, 211, 213, 216, 218, 220, 222–3, 237–8, 240–8, 250, 260, 262, 267–8, 275, 279–80, 282–4, 287, 291 user-generated content (UGC) 46, 97, 104, 289 utility 224 values 36, 152 vending machines 209 Venezuela 69, 73 verbal signatures 132 VeriChip 126 video on demand 96 Vietnam 2, 290 Vino 100 113 Virgin Atlantic 261 virtual adultery 33 banks 134 economy 130–31 protests 65 reality 70 sex 32 stores 206–8 vacations 32, 261 worlds 157, 213, 255, 261, 270, 305 
Vocation Vacations 259–60 Vodafone 137 voice recognition 41 voice-based internet search 56 voicelifts 2, 237 FUTURE FILES Volkswagen 175 voluntourism 259 Volvo 164 voting 3, 68, 90–91 Walgreens 244 Wal-Mart 105, 136–7, 215, 219–20, 223, 244, 282 war 68–9, 72 war for talent 275, 279; see also labor shortages war forecasting 49 water 69–70, 74, 77–9, 199 wearable computers 55 weather 64 weather insurance 264 Web 2.0 93, 224 Weinberg, Peter 125 wellbeing 2, 183, 188, 199 white flight 20 Wikipedia 46, 60, 104 wild swimming 273 Wilson, Edward O. 74 wind energy 74 wine producers 200 wisdom of idiots 47 Wizard 145 work 275–94 trends 275–94 work/life balance 64, 71, 260, 277, 289, 293 worldphone 19 xenophobia 16, 63 YouTube 46, 103, 107, 112 Zara 216–17 Zipcar 167 Zopa 124, 134

pages: 368 words: 96,825

Bold: How to Go Big, Create Wealth and Impact the World
by Peter H. Diamandis and Steven Kotler
Published 3 Feb 2015

,” Fiverr.com, 2014, http://support.fiverr.com/hc/en-us/articles/201500776-What-is-Fiverr-. 18 Unless otherwise noted, all Matt Barrie quotes come from a 2013 AI. 19 AIs with Marcus Shingles, 2013–2014. 20 AI with Andrew Vaz. 21 “About Us,” Freelancer.com, 2014, https://www.freelancer.com/info/about.php. 22 AI with Barrie. 23 Ibid. 24 AI with James DeJulio, 2013. 25 AI with Barrie. 26 Ibid. 27 “Vicarious AI passes first Turing Test: CAPTCHA,” Vicarious, October 27, 2013, http://news.vicarious.com/post/65316134613/vicarious-ai-passes-first-turing-test-captcha. Chapter Eight: Crowdfunding: No Bucks, No Buck Rogers 1 “Statistics about Business Size (including Small Business) from the U.S. Census Bureau,” Statistics of US Businesses, United States Census Bureau, 2007, https://www.census.gov/econ/smallbus.html. 2 “Statistics about Business Size (including Small Business) from the U.S.

pages: 328 words: 96,678

MegaThreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them
by Nouriel Roubini
Published 17 Oct 2022

According to his biographer, Andrew Hodges, “[Turing] supposed it possible to equip the machine with ‘television cameras, microphones, loudspeakers, wheels and handling servo mechanisms as well as some sort of electronic brain.’” Turing proposed, moreover, “that it should ‘roam the countryside’ so that it ‘should have a chance of finding things out for itself.’”30 We are now not far from satisfying the Turing Test, when a human cannot tell if she is interacting with a machine. No institution caught on faster than the Pentagon. “New Navy Device Learns by Doing,” the New York Times reported in July 1958. “The Navy said the perceptron would be the first non-living mechanism ‘capable of receiving, recognizing and identifying its surroundings without any human training or control.’”
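The test Roubini invokes is Turing’s imitation game: a judge converses blindly with a human and a machine and must say which is which. A skeletal sketch of the protocol, with stand-in participants (the lambdas are placeholders, not real systems):

```python
import random

def run_imitation_game(judge, human, machine, questions, seed=0):
    """One round: answers from 'A' and 'B' are shown to the judge,
    who does not know which label hides the machine."""
    rng = random.Random(seed)
    players = {"A": human, "B": machine}
    if rng.random() < 0.5:                      # randomly swap the labels
        players = {"A": machine, "B": human}
    transcript = {label: [respond(q) for q in questions]
                  for label, respond in players.items()}
    guess = judge(transcript)                   # judge names "A" or "B"
    truth = next(l for l, r in players.items() if r is machine)
    return guess == truth                       # True -> machine unmasked

# Stand-in participants: this crude machine is trivially detectable.
human = lambda q: "Hmm, let me think... " + q
machine = lambda q: "ANSWER: " + q.upper()
judge = lambda t: "A" if any(a.startswith("ANSWER:") for a in t["A"]) else "B"

print(run_imitation_game(judge, human, machine, ["hello", "what is love"]))
```

“Satisfying the Turing Test” would mean this function returning False about half the time across many judges and rounds, i.e. judges reduced to guessing.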

By allowing machines to scan vast corpora of text and do their own pattern analyses, AIs have learned how to translate between languages with remarkable success, and how to generate new texts with remarkable authenticity. This subtle grasp of language crosses one of the last obstacles en route to satisfying the Turing Test. “Distinguishing AI-generated text, images and audio from human generated will become extremely difficult,” says Mustafa Suleyman, a cofounder of DeepMind and until recently head of AI policy at Google, as the “transformers” revolution accelerates the power of AI.43 As a consequence, a large number of white-collar jobs requiring advanced levels of cognition will become obsolete.

New Horizons in the Study of Language and Mind
by Noam Chomsky
Published 4 Dec 2003

That move has been made far too easily, leading to extensive and, it seems, pointless debate over such alleged questions as whether machines can think: for example, as to “how one might empirically defend the claim that a given (strange) object plays chess” (Haugeland 1979), or determine whether some artifact or algorithm can translate Chinese, or reach for an object, or commit murder, or believe that it will rain. Many of these debates trace back to the classic paper by Alan Turing in which he proposed the Turing test for machine intelligence, but they fail to take note of his observation that “The original question, ‘Can machines think?,’ I believe to be too meaningless to deserve discussion” (Turing 1950: 442): it is not a question of fact, but a matter of decision as to whether to adopt a certain metaphorical usage, as when we say (in English) that airplanes fly but comets do not – and as for space shuttles, choices differ.

Minneapolis, MN, University of Minnesota Press. 214 Index Index abduction 80 ability, distinguished from knowledge 50–2, 97–8 abstract see concrete–abstract dimension access: to consciousness 93–8, 141, 147 – in principle 96–8, 141, 143 acoustic phonetics 174 acquisition 6–8, 181; and concept formation 61–6; “initial state” as a device for 4–5; innateness and selectivity x–xi, 121–2; labelling of innate concepts 61–2, 65; and lexical access 121–2; and sensory deficit 121–2; see also child language acquisition; Language Acquisition Device (LAD) adjacency 11, 121 agency, and objects 21–2 agreement 14 algorithms 113, 147, 159 Almog, Joseph 42 analytic–synthetic distinction xiv, 46–7, 61–5 anaphora 39, 140 animal, man and 3 animate–inanimate dimension 126 anthropological linguistics 6 anthropology 136 anti-foundationalism 76–7 arbitrariness, Saussurean 27, 120 argument-structure 11 Aristotle 187, 204n articulatory phonetics 174 articulatory–perceptual systems 28, 120, 123–6, 180 214 artifacts, capacities of 114 Artificial Intelligence 200n assertability conditions 109 assignment of derived constituent structure 199n association 92, 93 Atlas, Jay 151 “atomic” units 10 atomism, physical 111 auditory cortex 158 Austin, John 45, 132 authority: deference to 155; firstperson 142–3 autosegmental 40 Baker, Lynne Rudder 153–4 Baldwin, T.R. 
79–80, 81, 144 Barinaga, Marcia 158 Bedeutung (Frege) 130, 131–2 Beekman, Isaac 110 behavior, causation of 72, 95 behaviorism 46–60, 80, 92, 93, 101, 103 belief systems: and the language faculty xiv, 63–4, 129; lexicon and 32; and the terms of language 21–2, 137, 148–9 beliefs: absence of term in other languages than English 119; attribution of 91, 119, 135, 146–7, 153–4, 200n; convictions about the nature, as a posteriori or a priori 89; different about the same subject 149, 192–3; false 33, 43; fixation 63–4; individuation of 165, see also I-beliefs justification Index as interest-relative 196n; and meaning 137; and properties of expressions 178–9; relation to the world 47, 135, 197n; similarity of 43–4, 152; social role of 197n; that correspond literally to animistic and intentional terminology 135–6 Berthelot, M. 111 “best theory” 112, 136, 142, 145, 173 “bifurcation thesis” xiv Bilgrami, Akeel 137, 150, 154–5, 190 binding theory 10, 11, 31, 39, 50; use of principle (C) 93, 99 biology vii, 1–2, 3–6, 139; of language 1–2, 3–5, 34; meaninglessness of intuitive categories for 161–3; and study of the mind 5–6 Black, Joseph 166, 184 blindsight 95–7 body: Cartesian theory of viii, 103; limitations of naturalistic theory of the 28, 143; as mental and physical 113, 167; theory of the 84, 86, 87, 199n body–mind problem see mind–body problem Bohr, Niels 43, 111, 151, 152 Boltzmann, Ludwig 110 boundary conditions 7–8 Boyle, Robert 108 Bradley, David 163, 203n brain: auditory, visual and tactual inputs 121–2; biochemical laws of 16; configurations relevant to meaning 19–20, 24–40; and consciousness 86, 145; electrical activity of the viii, 116–17, 140; homogenity of structure not found 184; language faculty xii, 73, 77–8 – computational theories 116–17; localization of analytic mechanisms of 121–2; properties of 27; shared initial state 5, 33–4, 73 – mental and organical structure of the 215 167–8; and mind 76; neural structure as natural realization of rule systems 54–5; 
provides mechanisms of thought 113–15, 183; scans 171; as solving problems and adapted to normal situations 159, 161; study at various levels 6, 24, 103; as thermoregulator 195n; things mental as emergent properties of 1–2; in a vat 158–9 brain sciences x, 19–20, 116 Brentano, Franz 22 Brock, William 110–11 Bromberger, Sylvain 82, 203n Burge, Tyler 72, 159, 171, 184, 192–3, 195n, 202n, 204n; on eliminativism 88, 92, 138; on naturalism 87, 109, 144 c-command 11, 40 C–R theories 24, 25–7, 40, 45, 104–5; as a form of syntax 34, 40; see also I-language Carnap, Rudolf 186, 187 Cartesianism 80, 83–4, 85, 132, 145, 167; collapse of 103, 108–9 case systems, language differences in 11–12 Case theory 10 categories 138–9 causality viii, 47, 72, 95, 137 “causative” properties 179 cellular theories 116 chain condition 10 Chastain, Charles 115 child language acquisition x–xi, 6–7, 101, 186; assigning labels to concepts 61–2, 65; compared with foreign adult’s 49; and the computational system 120; early exposure and language development 201n; and the LAD 92–3; limited exposure to semantic aspects in ambiguous circumstances 120, 185; rate of 120; of a specific language 53, 54 children: attribute beliefs to others before development of language 119; blind and language acquisition 121–2; innateness of the property of discrete infinity 3–4; intuitive understanding of concepts 62; phonetic data available to 185; usage differs from adult usage 191–2 Churchland, Patricia 107, 115 Churchland, Paul 64, 107, 115, 183, 184 clicks, displacement to phrase boundaries 25, 55, 58, 140, 201n cognition: internally generated modes to which experience conforms 182–3; knowledge of language and x, 73, 134 cognitive deficits, with intact language faculty 121, 146 cognitive development: and language growth 62; uniformity not found 184 cognitive reach 107 cognitive revolution (1950s) vi, 5–6 cognitive science 23, 33, 112, 116, 165; status of 165–6 cognitive state 55, 81, 82–3, 154 cognitive
system 117–19, 125; and complex relational words 128–9; phonetic aspects of 118; and semantic representation 174; state changes that reflect experience 118–19; use of resources 129, 135 “cognoscitive powers”, innate 181–3 “coherent–abstraction test” (Almog) 42 common language approach 29–32, 33, 37, 99–100; see also “public language” common sense xvi, 80, 135, 138, 146, 163–4; and naturalistic inquiry 20–4, 37–45 communication 30, 78, 130, 154, 164, 202n community norms 40, 49, 71, 72, 142, 148, 155 competence: assumptions about drawn only from behavior 57–8; as a generative procedure 60; grammatical 26; pragmatic 26; see also I-language “competing hypothesis” 183–4, 185 complexity xii, 7, 13, 124, 169 computational approach to language xiii, 6, 10, 103–4, 116–17, 124, 159 computational procedure: “austerity” of 120–1; maps array of lexical choices into phonetic and logical form 125, 170; registers adjacency, but no “counters” 121 computational systems: complexity of 123–4; (generative) 78–9; with largely invariant principles 120, 169; properties 107, 120–1, 123, 145 computational–representational systems see C–R theories concepts: construction of artificial 51–2; as determining reference of a word 187; innate labelled in language acquisition 61–6; link with sound 120; locational 62; Putnam on short theories and formation of 66; use in understanding ordinary life 90 conceptual–intentional systems 9, 10, 28, 61–6, 124–6, 180 concrete–abstract dimension 126, 180–1 concreteness 168–9, 176 conditions, philosophically necessary 146–7 “Connection Principle” (CP), Searle’s 96–8 connectionism 103–4, 116 consciousness xiv, 83, 108, 145; “access in principle” to 96–8, 141, 142, 143; access to 93–8, 141, 147, 169; nature of 115, 143, 145; potential 86–7, 91–2, 93, 97; potential for, and blindsight 96–8; relation to neural structures 144–5; Searle’s “radical thesis” 86–7 constitution/constituency 189–90, 191 content: of fixed reference in natural language 42; locality of
(Bilgrami) 150, 190; phonetic 151; as a technical notion 137, 153; wide and narrow 165, 170; see also perceptual content coordinate structure constraint, Quine on 55–6, 198n Cordemoy, Géraud de 114 covert movement 14–15 creativity, of language use 16–18, 145 cultural studies 157–8 D (domain) 39–40 Darwin, Charles, Origin of Species 163 Davidson, Donald 46, 61, 136; “A Nice Derangement of Epitaphs” 56, 67–70; “anomalism of the mental” 88–9; “interpreter” example 29–30, 56, 67–70, 102; “no such thing as language” 136, 202n Davies, Martin 23, 195n deep and surface structure x, xvi, 10, 28, 203n deference, patterns of 171 Dennett, Daniel 79, 91, 107–8, 144, 200n denotation, use of term 130 denotational theories of interpretation 131, 136, 177–9, 192 Descartes, René ix, xiii, 3, 17, 108, 112, 114, 133, 182 description xi, 145 descriptive adequacy 7–8, 120, 122, 165, 185 descriptive linguistics 54, 122, 184 descriptive semantics 47, 61 design of language 9–13 designer’s intent 125, 136–7, 180 deviance viii, 78–9; and computational theories of the language faculty of the brain 116–17; distinctive brain responses to language 24; from community norms 98–9, 142 Dewey, John 47 dialect: as a nonlinguistic notion 31; prestige 156 dictionary, compared to complexity of human lexical recording 120, 185 Diderot, Denis 110 Dijksterhuis, E.J. 108 discourse representation 129 discrete infinity 3–4, 184 displacement property: explained 12–14; and legibility conditions 13–15 dissociations 117, 184 distal properties, correlation of internal processes with 162 distributional properties 179 division of linguistic labor 71, 187–8 du Marsais 196n dualism vii, xiv, 75–105, 117, 140, 142, 163; varieties of 98–105; see also Cartesianism; metaphysical dualism; methodological dualism Dummett, Michael xiii, 46, 56, 57, 102, 143; on LAD 94; on language as a social practice 48–9, 50; on naturalistic inquiry as psychological not philosophical 140–1 Dupoux, E.
118 economy conditions 123–4 Edelman, Gerald 103–4, 116 Egan, Frances 162 electrophysiological responses, to syntactic versus semantic violations 116–17 eliminativism see materialism, eliminative embedding, multiple 124 “emergent laws” 145 empirical inquiry 46–74, 76, 92–3 empty category 15, 181 English: importance of Japanese for the study of xv, 53–4, 58, 102; left-headed 93 entailment relation 34, 174 entities, beliefs about 135 environment: influence on initial state of language faculty 78, 162, 166, 189–90; role in specification of reference 41 epistemic boundedness, Dennett on 107–8 epistemic naturalism 79, 80–1 epistemology: evolutionary 80; naturalized (Quine) 46–7, 80, 81 Epstein, Samuel 11 error, problems of 142, 143 ethnoscience xv, 90–1, 135, 155, 160, 164, 165, 172–3 event-related potentials (ERPs) viii, 24–6, 38 evidence: intuitive categories as 162; legitimacy of wide use of x, 53–8, 60, 102, 139–40; linguistic 55, 57, 58, 139–40, 201n; psychological 55, 57, 58, 201n; role of initial state in determining what counts as 197n; useful about reference 171–2 evolution: of brain’s administration of linguistic categories 183; and innate concepts argument 65–6; and questions for empirical inquiry 73–4; theory of 139, 163 experience: effect on state changes of the cognitive system 118–19; and “initial state” 4–5, 7–8; sets boundary conditions 7–8 experts: deference to 155–6; role in determining reference of terms 41–2, 71, 72, 190–2, 196n explanation, and description xi explanatory adequacy 7–8, 45 explanatory models 19, 45, 183–4 explanatory theory 103, 106, 110, 115, 166; and intuitive judgements 171–2 expression, ways of thinking and means of 15–16 expressions: class generated by I-language 78–9, 169; computational procedures that access the lexicon to form 170, 173–4, 180; internally-determined properties of 34–6; as a pair <PHON, SEM> 173, 175; relation with external world 129–30; structural problems for interpretation 124; universal and
language-specific properties 35 extension 148 extensional equivalence (Quine) 132 externalist approaches xiii, 38–40, 43, 148–63, 190; and Twin-Earth thought experiments 148–50, 155 fact, truths of and truths of meaning 62–4 faculty of language vii, x–xi, xiii, 77–8, 168–73; assumes states that interact with other systems 168; “austerity” of 120–1; common to the species 70, 168; components of 117; evolution of 2, 3–5; as a function that maps evidence into I-language 73; innate structure and effect of external environment 60, 168; intact but cognitive deficits 121, 146; intrinsic properties of 121, 127; as natural object 119; perfection of 9–15; relations with mind/brain systems xii, 73, 77–8; specific structures and principles of 183–4; triggering of the analytic mechanisms 121–2; see also initial state; state L fallibility 191 features 10, 120, 179; attraction of 13–15; legibility conditions and 11–12; not interpreted at either phonetic or semantic interface 12 field linguist 46–60 first-person authority 142–3 “fitting”, and “guiding” (Quine) 94–5 Flaubert, Gustave 90 Fodor, Jerry 107, 117, 139, 184; “First Law of the Non-Existence of Cognitive Science” 165; “language of thought” 19 folk psychology 23, 28, 89, 154–5, 196–7n folk science xv, 84, 91, 127, 135, 137, 164, 172–3; and cultural conditions 119 folk semantics 172, 188 forces, immaterial 108–9, 144, 167 formal languages 12, 57, 199n, 202n free will ix, 108 Frege, Gottlob 30, 36, 80, 85, 130, 187, 188; “common public language” 30, 33, 131–2 Friedman, Michael 112 front-wh-phrase 56, 198n Galileo Galilei xiii, 4 “garden-path sentences” 124 generalizations, psychological 165–6, 168–9 generative faculty of human understanding 16–18 generative grammar vi, 132, 174; computational operations 13; explained 5–7; goals of study of mechanisms in everyday life 17; and grammaticality 63; and principles-and-parameters approach 122 generative phonology 44, 151 generative procedure: isolating a 29–32, 69; the right
132 genes, and “initial state” 4–5 Gestalt 182 Gibson, Roger 198n Goodman, Nelson 181 government 11 grammar: and descriptive adequacy 7, 120, 185; uses of term 5, 201n grammars: “innate skeletal” (Quine) 199n; as specific internalized rule systems 57–61 grammaticality, Quine on 63, 199n gravity, Newton’s 108–9, 166 “guiding”, and “fitting” (Quine) 94–5 Haas, W. 199n Halle, Morris 203n Harris, James 64 Heisenberg, Werner 167 Herbert, Edward, Baron of Cherbury 80, 85 Higginbotham, James 73 Hobbes, Thomas, on names 182 holism 46, 48, see also meaning holism homonymy 181 Huarte, Juan 17 human being: concept of xv, 3, 20, 139; and language speaking 20–4 human faculty of language see faculty of language Humboldt, Wilhelm von 6, 73 Hume, David 4, 64, 80, 85, 133, 170; on fictitious ascribed identity 16, 182–3; on Newton 110, 167; “science of human nature” 141, 164, 173 Huygens, Christiaan 82, 108 hypotheses, Newton’s refusal of 109 I-beliefs xiii, 32–3, 193; changes in 193; expressed in I-language 72 I-conceptual system 193 I-language vii, ix, xi–xii, xiii, 123; as generative procedure 70–3, 78, 119–21, 203n; C-R theory of 26, 32, 38, 40–2, 78; and construction of semantic and phonetic representations 174; followed by principles-and-parameters approach 123; has computational procedure and a lexicon 120–1; as instantiation of the initial state 123; internal and individual and intensional 5, 70–3, 118–19, 132, 169; language-like accretions 42–3; and language-world relations 188–9; mastery and internal representation of a specific 73; normativity aspects of 99; and performance systems 27–32, 34–6; as a product of the language faculty 27, 42–3; relation to external events 174–5; restricted variety of 27, 33, 44–5; specifies form and meaning and accounts for properties of complex expressions 26–7; use of term 131, 201n I-linguistics 171; and common-sense notions of language 169, 170, 173, 192–3; and use of properties which might include I-sound and I-meaning 187
I-meaning 170, 173, 175, 179 I-sound 170, 173, 175, 179 idealization 49–50, 100, 123, 197n ideas: history of xiv; as not things but ways of knowing 182; people have about meaning and sound 173; theory of 182 identity, ascription of fictitious (Hume) 16, 182–3 idiolect, communication between time slices of an 30 immunology, selective theory xi, 65 impairment, selective 117 indeterminacy, empirical 57–8, 198n indeterminacy of translation (Quine) 132, 140, 147, 198n indexicals 42, 181 “individual sense” 70, 72 individualist approach vii, 32, 162, 164; see also internalist approach individuation: and nameable things 126–7; and referential use of language 180, 182–3 infants: with performance systems specialized for language 118; reification of bodies in 92–3 inference 121, 180; as interest-relative 196n inflection: as special property of human language 12; variations in richness 120 inflectional features, role in computation 10 inflectional systems: basically the same 120; language differences in 11–12 initial state x–xi, 4–5, 77–8, 123; and attained state 95; common to the species 4–5, 50, 53–4, 119; determines the computational system of language 27; as a fixed biologically-determined function that maps evidence 53–4; genetically determined 27, 53–4, 118; incorporates general principles of language structure 60; incorporates principles of referential dependence 50; integrated conceptual scheme 62; with parameters fixed 123; plus course of experience 4–5, 7–8; and postulated identity of all languages 122; richness of the 35–6; as shared structure 30, 33–4, 50; as Universal Grammar (UG) 73, 81, 101; see also I-language innate component, identifying the 172 innate endowment: and environmental factors 166; and impoverished input 121–2; role in understanding the world 90–1 innate semantic representations, theory of (TISR), Putnam’s critique of 184–9 innate structure of the organism, theory of and the mapping M 60–1 innateness, of knowledge of language x–xi, xiii, xv, 2,
3–4, 126 “innateness hypothesis”, Putnam on Chomsky’s 65, 66–7, 100–1, 187–9 “innatism” see “innateness hypothesis” inner states, ideas about 164–6, 168–9 input–output systems, of the language faculty 117–18 instinct 91 institutional role 180 intelligence 6, 122, 182; accessibility to human ix, 91; and language use 147; mechanisms of general 185; scope and limits of 107; see also Artificial Intelligence intelligibility, in scientific discourse 151–2 intention 62, 91, 125, 137, 180; referential 130–1; see also conceptual–intentional systems “intentional laws” 166 intentional terminology 113–15 intentionality: Brentano on 22; naturalistic inquiry and 45, 132 interests 125, 128, 137 interface: between language faculty and other systems of the mind 123; legibility conditions at the 10–12; levels 10, 28, 39, 173–5; location of the 174; phonetic and semantic representations at the 10–12, 160, 173–4; properties 124–6; weakest assumptions about relations 10, 128–9 interface condition, requires erasure of uninterpretable features 14–15 internal processes, correlation with distal properties 162 internal relational structure 22 internalism vii, xiv, xv, 15, 125; critique of 162; defined 134; form of syntax 129 internalism–externalism issues 148–63 internalist approach 33–4, 38–45, 134–63, 164–94; legitimacy of inquiries that go beyond 156; and other domains of psychology 158–9; to differing beliefs 193; to language-world relations 15–16 internalist linguistic theory (T) 142–3, 146 internalist semantics 34, 38–9, 45 interpretation, language and xiii–xiv, 46–74 interpretations, assignment of 160–1 “interpreter”, Davidson on the 29, 56, 67–70, 102 intuitions 44, 70, 84, 119, 130, 135, 138, 161, 197n intuitions: limits of xiv–xv; as subject of linguistic study 171–2; and technical terms 148–9 intuitive categories, meaninglessness for science 161–3 intuitive judgements: about statements 40–2; as data to be studied as evidence 171–2; different 64; forced with ordinary
expectations withdrawn 172 invented forms 181 invented system, designed to violate principles of language 121 Jacob, François 139 Jacob, Margaret 108, 110 Jakobson, Roman 140 James, Henry 47, 90 Japanese: anaphora in 140; evidence from about referential dependence 53–4, 58, 102; importance for study of English xv, 53–4, 58, 102; right-headed 93 Jerne, Niels Kaj 65 Jespersen, Otto 73 K, as constant knowledge of language 51 K-ability 51 Kant, Immanuel 112, 182; method of transcendental argument 165 Kayne, Richard 123, 131 Kekulé von Stradonitz, August 111 Kenny, Anthony 50, 197n knowing-how 51–2 knowledge: distinguished from ability 50–2, 97–8; nature of 170; nature of tacit xiii knowledge of language vii, ix, xiv, 50–2; and cognition x, 73; defined 73; in English usage 170; as the internal representation of generative procedure in the brain 50–2; as learned ability 50; partial 48–9, 99–100, 146; uniform among languages 126; see also innateness Kripke, Saul 37, 141–2; Naming and Necessity 41 Kripke’s puzzle 191 La Mettrie, J.O.
de 84, 113, 167 labels, assigning to concepts 61–2, 65 Lange, Friedrich 167 language: as a biological object vii; as a community property 99–100; elementary properties 6; as the finite means for infinite use (Humboldt) 6, 73; as a generative procedure assigning structural descriptions 50–2; internalist perspective on 134–63; and interpretation 46–74; as a natural object xiv, 106–33; naturalism and dualism in the study of 75–105; in naturalistic inquiry 77–9; no useful general sense in which to characterize 48–9; as a notion of structure that guides the speaker in forming free expressions 73; notions of in ethnoscience 90–1; as a portable interpreting machine 29, 68, 202n; as a process of generation 73; as property of organized matter 115; as a social fact 197n; specific properties of human 16; study of 3–18; terms for something like 119; use of term 106, 130–1 – in different speech communities 157–8 – views on the concept of 73 Language Acquisition Device (LAD) 81, 86, 92–3; as a physical not psychological mechanism 93–4 language change, the study of 6 language faculty see faculty of language language speaking, and human being 20–4 language use see use of language language-external systems 175, 179 language–thought relations 135–6 language–world relations: at the phonetic interface 175; internalist approach 15–16, 129–30; truth of 188–9 languages: apparent variability of 122; as cultural artifacts 157; diversity of 7; head-first or head-last xi; no such things as (Davidson) 136; in part unusable 124, 161 Lavoisier, Antoine 110 learnability of languages xiv, 124 learning: as acquiring rules that map LI into some other system of mind 176; “by forgetting” 118; generalized mechanisms 66, 101; incremental 30; selective process 65 left–right orientation 93 legibility conditions xii, 9–11; and the displacement property 13–15; impose three-way division among features 11–12 legitimacy, questions of 183–94 Leibniz, Gottfried Wilhelm 82, 108 Leonardo da Vinci 163 levels of 
analysis (Marr) 118, 159 Lewis, David 57 Lewis, G.N. 111, 112 Lewontin, Richard 161, 195n lexical items 10, 175–83; acquired on a single exposure 120, 185; attribution of semantic structure to 61–2; constituted by properties approach 120, 170, 179; different approaches to study of 36, 175–83; dissociation of either sound or meaning 175, 176–7; may be decomposed and reconstructed in the course of computation of SEM 175; relational approach to 179–83 “lexical semantics” 174 lexical structure 181; generative factors of (Moravcsik) 182–3, 204n lexicon: defined 10; mental 32; and properties of computation 123, 170; subject to a complex degree of conscious choice 170–1; things selected and individuated by properties of 137 LF see Logical Form lingua mentis, representations generated by I-language map into 185–9 linguistic, use of term 106, 134 linguistics: explanatory insight for vii; and science-forming faculty (SFF) 101; scientific status of xiv, 112; subject matter of 1–2, 139–40 linguosemantics 165 Llinás, Rodolfo 128 Locke, John 1, 167, 182–3 locomotion 147 locust–cricket example (Baker) 153–4 Logical Form (LF) xi–xii, 124–5, 129–30; instructions at the interface 128–9; origins of 28 “m-events” (events mentalistically described) 89–90 McGinn, Colin 145, 201n machine: ability to think debate 44–5, 114, 147; man and 3, 17, 84, 132 machine intelligence 114 malapropisms 70–3 mapping, and neural interaction 116 marked options 125 Marr, David 23, 118, 158–9, 161, 195n, 202n material: and abstract factors, simultaneity in meaning 16; or physical 91–2, 143 materialism 109–10, 144, 167; eliminative 26, 85, 87, 88, 90, 91, 92–3, 104, 117, 138, 144; and its critics 85–93; Nagel on 87–8 matter: altered concept of 113, 133; dark 85; thought and action as properties of organized 84, 86 meaning: analogies with sound 15–16, 175–9; and beliefs 137; disagreements about study of 15–16; “in the head” or externally determined 148–51; inquiry into meaning of 2, 173; internal
conditions on 36; relevance of mental/brain configurations to 19–20, 24–38; as semantic features of an expression 125; and sound xi–xii, 9–10; theory of, and internalism–externalism debate 147–63; truths of and truths of fact 62–4 meaning holism xiv, 61, 66–7, 152, 186–7, 195n mechanical philosophy 83–4, 86, 104, 108, 110, 144, 163, 167 mechanics, laws of 82 mechanisms 17–18, 56 Mehler, J. 118 mental: all phenomena potentially conscious 86–7; “anomalism of the” (Davidson) 88–9; bridge laws relating to physical 89–90; characterized as access to consciousness 93–8; location within the physical 103; as the neurophysical at a higher level 104; phenomena described in terms of the physical 109; and physical 113; and physical reality 166–8; replacement by physical 138; to define in neurological terms 103; use of term xiv, 75–6, 106, 134 mental construct vii mental event tokens, and physical event tokens 89 mental properties: approaches to 147; and nervous system 167 mental representations: internalist study of 125; specifications of 165; see also C–R theories mental states, attribution of 91, 160–1, 169 Mentalese 176–8, 185–9 Merge operation 13 messages, decoding 185–9 metaphorical use of terms 114, 131, 159, 161 metaphysical, extracting from definitions 75–7 metaphysical dualism 108, 112, 163 metaphysical naturalism 79, 81–2, 85, 144 metaphysics vi, 112 methodological dualism 76, 77, 93, 112, 135, 140–1, 163 methodological naturalism 76, 77–8, 79, 81, 91, 135, 143 Mill, John Stuart 187 mind: architecture of the 14, 121, 135, 174; Cartesian theory of 83–4; as a computational state of the brain 128; as consciousness 86–7; as “Cryptographer” 185–9; explanatory theories of in study of language 77; history of the philosophy of 109; as mental aspects of the world 134; naturalism and dualism in the study of 75–105; naturalistic inquiry into 103; reflection on the nature of the 165; as res cogitans 83; study of in biological terms 6; theory of (TM), scientific
status of 85–6; unraveling the anatomy of the 173, 183; use of term 75–6, 106, 130–1 mind–body problem vi, vii–viii, xiv, 84, 86–91, 88–9; as how consciousness relates to neural structures 144–5; lacks concept of matter or body or the physical 110, 199n; Nagel on 86–8; no intelligible 103, 112, 138; as a unification problem 108–9 mind/brain interaction 1–2, 9–11 mind/brain systems: integration of states of language faculty with 173–5; internalist study of 164–5 Minimalist Program x, xi, xv, 9–15 misperception 159–60 misuse of language, notion of 49, 70–3, 200n “MIT mentalism”, Putnam’s critique of 184–9 models: computer 105, 116, 157; constructing to learn 114 modifications, nonadaptive 163 modularity: of mental architecture 121; use of term 117–18 Moravcsik, Julius 128, 182–3, 204n motion: inherent in matter 167; studies using tachistoscopic presentations 159 motivation 162 motor systems 17–18 Move operation 13 movement xii, 13, 14–15 multilinguality 169 mutations 96–7 mysteries ix, 83, 107, 133 Nagel, Thomas 86–8, 90, 95–6; Language Acquisition Device (LAD) 92–4; on mind–body problem 86–8, 109, 115; on naturalistic theory of language 143 names: have no meaning 24, 42, 173, 181; Hobbes on 182 naming, as a kind of world-making 21, 127, 181 national languages, as codifications of usages 100 natural kinds xv, 19, 20–2, 89, 105, 137, 204n natural language: apparent imperfections of xii, 9–15, 123–4; properties of terms of 126–7; sometimes unparseable 108; and use of technical terms 130–2 natural object 117, 119; language as a 106–33 natural sciences vii, 135; defining 81–5, 92; as “first philosophy” 112; and knowledge of language 51; and notions of belief and desire 146; and psychic continuity of human beings 139; Quine’s definition 144; standard methods of 52–6 natural selection: replaced God 110; unselected functions in 163 “natural-language semantics” 174, 175 naturalism vii, xiii, xiv, xv, 109; Baldwin on 79–80; in the study of language and mind xiv, 75–105; use of
term 76–7, 135; varieties of 79–85; see also epistemic naturalism; metaphysical naturalism; methodological naturalism naturalistic approach 1–2, 103, 106; compared with an internalist approach 134, 156 naturalistic inquiry: and commonsense perspectives 37–45, 85; defined 115, 134; detailed 117–33; divergence from natural language 23–4; and intentionality 45, 132; language in 77–9; as “Markovian” 196n; nature of 76–9, 82–5; as psychological not philosophical 140; scope of 19–24, 28–9, 90, 97; symbolic systems of 153 “naturalistic thesis”, Quine on 92–3, 144 nature, belief as unknowable 110 negation 124 nervous system 103–4, 116; and mental properties 167 neural net theories 103–4, 107 neural structures, relation of consciousness to 144–5 neurophysiology 25–6, 103, 104, 116 Newton, Isaac viii, 80, 83–4, 86, 93, 141, 163, 167; anti-materialism 1, 82, 84, 108–10, 144, 199n; on gravitation 108–9, 166 norms 49, 72, 148, 157, 171–2; violation of 98 numbers 121 object constancy 94, 97, 135 objectivity premise 159 objects: and agency 21–2; discontinuous 127; nameable 136–7; problems posed by artifacts compared with natural 105 observation, of linguistic aligned with non-linguistic behavior 46, 52 “observational adequacy” 198n “occult qualities” 83–4 ontology 184 optimality conditions xii, 10–11, 123, 125 ordinary English usage, Pateman’s description 169 ordinary language: accounts of mental and physical events 89–90; philosophy 46, 203n; use and terminology 141–2, 169 organic unity, and personal identity 182 organism: analogy 4, 17–18, 59–60; constraints on computing a cognitive function 162–3; dedicated to the use and interpretation of language 168; internal states of an 134; “solving problems” 159, 161 organization, “from within” 182 “p-events” (events physicalistically described) 89–90 “p-predicament” (Bromberger) 82 parameters see Principles and Parameters approach “parser” 69–70, 200n parsing 107–8, 124 Pateman, T.
169, 197n Pauling, Linus 106, 111 Peirce, Charles Sanders 80, 83 perception 2, 124–6; and the computational system 120, 180; as a dream modulated by sensory input 128; empirical theories of 161–2; language-related differences in 118; veridical 23; see also articulatory–perceptual systems perceptual content 23, 196n perceptual organization, reduction to 183–4, 185 perfectness of language xii, xvi, 9–15, 123–4 performance: competence and ix; and computation theories 124 performance systems 45, 117, 118; fallibility of xiv, 124; and I-language 27–32, 143; I-languages embedded in 34–6; internal representations accessed by 160; specialized for language 118; use of expressions generated by I-language 124–6; see also articulatory–perceptual systems; conceptual–intentional systems perspectives 40, 88, 150, 151–6, 180; conflicting for words 126; linguistic agent’s 137; range of 36–7; see also point of view PF see Phonetic Form philosophical explanation 142, 147; science and 140–1 philosophy vi, 46–74; causality and core problems of 145; naturalization of 144; and science 81–2, 87, 94, 140–1 philosophy of language xiii, 16–17, 46, 61; relations between expressions and things 129–30 PHON(E) 173, 175, 177, 180, 203n phonetic aspects, abundance of variety 185 phonetic features 12, 15–16, 44, 125; accessed by articulatory–perceptual systems 123, 180 Phonetic Form (PF) xi–xii, 28, 124–5, 129 phonetic level 11, 173 phonetic realization, different of inflectional systems 11 phonetic relations 179 phonetic representations 9, 10, 174, 185–9 phonetic value 129, 177 phonetics 174 phonological features 170, 192 phonological levels, in terms of intention 203n phonological units 43–4, 151–2 phonology 43–4, 147 phrase boundaries: and perceptual displacement of clicks 25, 55, 58, 140, 201n; and referential dependence in Japanese 53–4, 58 phrase-structure rules 10, 13, 53–4, 58 physical: anomalism of the 138; mechanical concept of 167; and mental reality 166–8 physicalism 117, 144
physics xv, 82, 84, 87–8, 112; development to permit of unification 166–7 Platonism 80 “Plato’s problem” 61 Poincaré, Jules 110 point of view 40, 164, 182; nameable objects and 136–7; and status of things 126–8; see also perspectives Popkin, Richard 57, 76–7 Port Royal Grammar 4 power and status issues 156 pragmatic competence, limited, and language faculty 146 pragmatics 132 pragmatism 46–7 Priestley, Joseph 84, 112–13, 115, 116, 167 priming effects 140 “primitive theory” 90 principles 138–9, 184, 192; fixed and innate 122; and underlying structures 168–9 Principles and Parameters approach x, xi, 11; explained 8–9, 121–3; see also Minimalist Program “prior theory” 67–70 problems ix, 83, 107, 115 production 2 projection principle 10 pronominalization, “backwards” 196n pronouns 181; anaphoric properties of 39; dependency of reference 126 proper names, no logical (Strawson) 24, 181 properties: partial account of language 184; of sensation or perception and thought 113 propositional attitudes, attribution of 192–3 Proust, Marcel 90 psycholinguistic experiment 171 psychological evidence 139–40 psychological generalizations 165–6, 168–9 “psychological hypotheses” 140–1 psychological mechanisms 117–18 psychology: internalist 143; invented technical term 153; and software problems 105 psychology vi, vii, 1, 80, 136, 138, 154, 160, 181, 202n psychosemantics 165 “public language” 30, 32–3, 37, 38, 40, 127, 131–2, 136, 148, 155–8, 187–8 purposes 136–7 Pustejovsky, James 128 Putnam, Hilary xiii, 19, 41, 152, 156–7; on alleged facts 136; on Bohr 43; Chomsky’s critique of 19–45; critique of “MIT mentalism” 184–9; division of linguistic labor 71, 187–8; on impossibility of explanatory models for human beings 19–20; on intentionality 45; on languages and meanings as cultural realities 157–8; rejection of the “innateness hypothesis” 65, 66–7; “The Meaning of Meaning” 41–2; Twin-Earth thought experiment 40–1, 148–9, 155; on water 127–8 quantifiers 11, 124 quantum theory 111
Quine, Willard xiii, 46, 57, 61, 101, 141; coordinate structure constraint 55–6; displacement of clicks study 55, 58, 140; distinction between “fitting” and “guiding” 94–5; epistemology naturalized 46–7, 80, 81; on extensional equivalence 132; on grammaticality 63; indeterminacy of translation 132, 140, 147; “naturalistic thesis” 92–3, 144; no fact of the matter 58, 59; radical translation paradigm 52–5, 101–2; “revision can strike anywhere” 66–7, 188 R (“refer”) relation 38–40; and R-like relation 41–2 rational inquiry, idealization to selected domains 49–50 reduction viii, xiv, 82, 87, 106, 144–5 reference 2, 148; as an invented technical notion 148–50, 152–3; causal theory of 41; choices about fixing of 67; cross-cultural similarities 171; fixation of 42, 44, 128; notions of independent 137; in philosophy of language 16–17; problem of relation 37–42; the “proto-science” of 171; in the sciences 152; semantics and 130–2; as a social phenomenon relying on experts 188; social-cooperation plus contribution of the environment theory of specification of 41–2; specification of 41–2; technical notion of 202n; transparence of relation 39–40; as a triadic relation 149–50; two aspects of the study of 171; use of term 36, 130, 188; usefulness of concept 38–45, 181 referential dependence 47, 50, 126, 180–1, 196n referential properties, debate on 24–5 referential use of language 180–1 reflection: evaluation by 166; operations of the mind which precede 170 regulative principle 46, 52 Reid, Thomas 80, 182, 196n, 203n reification 92–3, 94, 201n relatives 181 representations: “informational” with intentional content 195n; as postulated mental entities xiii, 159–60; two levels of phonetic and logical xi–xii, 173 rhyme, relations of 174 Richards, Theodore 111 rigidity principle 94 Romaine, Suzanne 156 Rorty, Richard 46–7, 52, 61, 63 Royal Society 110 rule following 48–9, 98–9; in terms of community norms 31, 142 rule system: attribution of a specific internalized 57–61;
problem of finding general properties of a 7–8 rules: and behavior 94–5; and conditions of accessibility to consciousness xv, 99, 184; status of linguistic xiv, 98–9, 123; unconscious 184, 204n Russell, Bertrand 187 sameness 40–2; and referential dependence 126 Sapir, Edward 140 Sapir–Whorf hypothesis 136 Saussure, Ferdinand de 27, 120 Schweber, Silvan 145 science: boundary of self-justifying 112; and categories of intuition 162–3; history of xiv, 43, 109–12; origins of modern 83–5, 109; and philosophy 81–2, 87, 94; unification vi, viii–ix, x, xiv, 111, 145, 166, 168; unification goal 82, 106–7, 112; unification problem 79, 84, 85, 91, 103–4, 108, 116 science fiction, and theories about the world 152 “science of human nature” (Hume) 164, 165, 166, 169, 173, 183, 190 science-forming faculty (SFF) ix, 22, 33, 34, 82–3, 121, 133; and common-sense belief 43; and the linguist 101; property of constructing Fregean systems 131 sciences: “hard” 139; language of ordinary life and language of the 186 scientific discourse, intelligibility in 151–2 scientific inquiry see naturalistic inquiry scientific revolution 6, 110 scientism 153 SDs see structural descriptions Searle, John 94, 95, 113, 115, 141, 184, 203n, 204n; “Connection Principle” (CP) 96–8; “radical thesis” on consciousness 86–7 second-language learners xii segments, postulated 43 semantic connections 47, 61–5, 67, 137, 179 semantic features 12, 15–16, 125, 170, 173, 182–3, 192 semantic interpretation: approaches to 15–16; process of 14–15; and syntax in the technical sense 174 semantic level 11, 173 semantic properties 104, 137; innate and universal 185 semantic relations 179 semantic representations 9, 10, 170, 185–9; and relations of FL with cognitive system 174–5 semantic resources, gap between and thoughts expressed 135 semantic values 129–30, 178, 204n semantics: event 24–6; referential vii, 132, 174 SEM(E) 173, 175, 180 “sense”, of fixed reference in natural language 42 sensorimotor system 9, 10; inactivation 
of 14–15; as languagespecific in part 174; use of information made available by I-language 174–5, 180 sensory deficit, and language faculty 121–2 Shaftesbury, Anthony Ashley Cooper, 3rd Earl of 182 shared language/meanings thesis 29–32, 100, 148, 156–8 sign language of the deaf 121–2 signs 78, 182 similarity relation 40–2, 43–4, 152 “simplification” 56, 198n simulation, machine 114 Smith, Barry 142 Smith, Neil vi–xvi, 121 Soames, Scott 132 Index social co-operation, in specification of reference 41–2 social practice 32, 49, 50, 72; and different languages 48–9 sociolinguistics 156, 200n sociology of language 169 sound: analogies with meaning 15–16, 175–9; inquiry into meaning of 173; location by the auditory cortex 158; and meaning xi–xii, 9–10, 11, 168, 170; as phonetic features of an expression 125; the study of systems 6 space–time continuity, of things 127 species property 2, 3 speech acts 78 Spelke, Elizabeth 195n standard languages, partially invented 157–8 state L 78–9, 119, 170–1 Stich, Stephen 103, 149, 171, 196n stimulus, poverty of 56, 65, 126, 171 “strange worlds” scenario 172 Strawson, Peter 24, 181 structural descriptions (SDs), generation of 26, 27, 39–40, 199n structural linguistics, mentalistic approach to 5–6, 122 structural phonology 43–4, 151–2 structural representation 39 structure: degree of shared 152; and explanatory adequacy 7–8 structure dependence 121, 184 substances, special mental design for 127–8 “superlanguage” 189 switch settings, for particular languages xi, 8, 13 symbolic objects, properties and arrangements of 174 symbolic systems 12, 131 syntactic relations 63 syntax xii, xv, 132; “autonomy of ” thesis 203n; internalist form of 129; R–D relations as 39–40; and structure dependence 121; use of term 174 229 T-sentences, theory of 204n technical terms 40, 65, 148–9; invention of 188; with no counterpart in ordinary language 130–2; and truth or falsity 130–1; variation in translation of 188 temporal order, no parametric variation in 
123 terminology: animistic and intentional 135–6; and ordinary language use 130–2, 141–2, 171 terms: forensic 182; languages lacking certain 135 theories: concepts arise from 66; “passing” 29, 30, 67–70, 202n; “short” 66, 200n theory, and explanatory adequacy xi–xii, 7–8 “Theory–Theory” 103 things: changes in 192; defining 136–7; in some kind of mental model 129; space–time continuity of 127; status of nameable 21, 127; in the world 129 thinking: Locke on faculty of 1, 167; ways of 15–16 thought: and action as properties of organized matter 84, 86; are contents externally determined 153–4; gap between semantic resources and expressed 135; individuation of 165; “language of ” (Fodor) 19; as a property of the nervous system/brain 113, 115, 116; relation to things in the world 149–50 thought experiments 153–4; which strip away background beliefs 172 TISR see innate semantic representations, theory of traditional grammar 13, 122, 123, 174 trajectory 94 transcendental argument, Kant’s method 165 230 Index transformational rules x, 12–13 translation: indeterminacy of xiii, 132, 140, 147; radical (Quine) 52–5, 101–2, 198n; rational reconstruction of practice 148 truth theories 130, 156 Turing, Alan 44–5, 114, 148 Turing test 114 Twin-Earth thought experiments xv, 40–1, 148–9, 155, 160–1, 172, 189–90 Ullman, Shimon 159 understanding 203n; by people, not parts of people 113–14; generative faculty of human 16–18; limits of human 156; of meaning without relevant experience 128; quest for theoretical 19, 77, 115, 134 unification problem see science uninterpretable features xii, 12–15 Universal Grammar (UG) 98–9, 103; and child’s intuitive understanding of concepts 62; theory of the initial state as 73, 81, 101 unmarked options 125 usage: change in and language change 32, 44–5; “correct” 157;, see also misuse of language and distinction of knowledge of language from ability 51 use, regular of objects 136–7, 180 use of language: alleged social factors in 32;creativity of 16–18, 
145; explaining xiii, 19–45; and intelligence 147; and interpretation of meaning 15–16; and linguistic states 2; restrictions at PF or LF levels 35; similarities among species, not found 184 variables 42 variation among languages: and left– right orientation 93; as limited to certain options in the lexicon 79, 120, 123; and properties of inflectional systems 11–12 variation in language, as instructions by computational system to articulation and perception 120 Vaucanson, Jacques de 114 visual perception, Marr’s theory of 158–9, 161 visual system xiv, 17–18, 118–19, 147; and C–R theories 28–9 visualizability 167 Weinreich, Max 31 well-formedness category 78 will 109, 127 Wittgenstein, Ludwig 44–5, 46, 51–2, 98, 127, 132, 203n, Ludwig, later 51–2 words: can change meaning and still be the same 175; offer conflicting perspectives 126; as phonetic (or orthographic) units 175; relations with things in the world vii, 148–51; rich innate contribution to construction of semantic properties 179 world: external and internal set of reference frames 128–9; features of the real 148; how language engages the 164, 180; “material” 84–5; ways of looking at the 181; as the world of ideas 182 Wright, Crispin 143 X-bar theory 10 Yamada, Jeni 146 Yolton, John 113, 182, 203n zeugma 181

pages: 444 words: 111,837

Einstein's Fridge: How the Difference Between Hot and Cold Explains the Universe
by Paul Sen
Published 16 Mar 2021

In it he presented a series of arguments in favor of the idea that machines would one day be able to think as well as or even better than humans. In this paper he introduced “the imitation game,” the idea that if a computer provides answers that are indistinguishable from those that a human might provide to a given series of questions, the computer should for all intents and purposes be treated as human. Now known as the Turing Test, this idea of the imitation game has become embedded in popular culture due to a scene in the 1982 film Blade Runner where a detective asks the person opposite him a series of questions and, based on his answers, evaluates whether he is a human or an android. The paper in Mind demonstrates Turing’s long-term interest in the following question: If “dumb” electrical circuits in a computer could perform mathematical tasks previously only carried out by human minds, was it possible that similar “dumb” processes ultimately underpinned all the ways those minds worked?
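The imitation game the excerpt describes is, at bottom, a blind evaluation protocol: answers appear under neutral labels, the pairing is shuffled, and the judge must guess from the transcript alone. A minimal sketch of that structure in Python (the toy respondents and the naive judge are invented for illustration; they are not from the book):

```python
import random

def imitation_game(judge, respondents, questions, rng=random):
    """Run one blind session: answers appear under the neutral labels
    X and Y, and the judge must guess which label hides the human."""
    names = list(respondents)          # e.g. ["human", "machine"]
    rng.shuffle(names)                 # hide who sits behind which label
    labels = dict(zip(["X", "Y"], names))
    transcript = {lbl: [(q, respondents[name](q)) for q in questions]
                  for lbl, name in labels.items()}
    return labels[judge(transcript)]   # the party the judge picked as human

# Toy respondents (hypothetical): a "human" who deflects gracefully and
# a "machine" that fails in an obviously mechanical way.
respondents = {
    "human":   lambda q: "Count me out on this one; I never could write poetry.",
    "machine": lambda q: "ERROR: query not understood.",
}

def naive_judge(transcript):
    # Guess that the label with fewer mechanical-looking answers is the human.
    return min(transcript,
               key=lambda lbl: sum("ERROR" in a for _, a in transcript[lbl]))

picked = imitation_game(
    naive_judge, respondents,
    ["Please write me a sonnet on the subject of the Forth Bridge."])
print(picked)  # "human": so crude a machine is always unmasked
```

The shuffle is the essential step: it guarantees the judge can rely only on the answers, never on which seat a party occupies.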

See also specific laws Gibbs’s study of the consequences of, 96, 104, 105, 106, 125 impact on world of, ix, 241–42 Joule and, 106, 187 link between information and, 167–68, 183 Maxwell’s thought experiment (demon) on, 187–90, 192 pattern formation in morphogenesis and, 210 phenomenalism debate and, 125 Szilard’s thought experiment (demon) on, 190, 191–92 Thomson’s paper on age of the earth based in, 71–72, 73 thermometers Joule’s temperature measurement using, 29–30 mercury-based, 63 Thomson’s freezing water experiment using, 39 thermostats, 207 third law of thermodynamics, 257 Thompson, Benjamin, 10 Thompson, Marie-Anne, 10 Thomson, David, 90 Thomson, James, 35, 37, 38–39 Thomson, William, Baron Kelvin, xi, 33–40, 59–61 absolute temperature scale invention of, 63–66, 88–89 background and education of, 33–34, 59 Carnot’s research and, 31, 33, 35, 36–37, 38–39, 40, 58, 59, 73 Clausius’s work drawing on research of, xi, 52, 58, 70 critiques of Darwin’s theory of evolution by, 71–72, 73 dissipation of heat and irreversibility concept of, 59–61 flow of heat from hot to cold producing work and, 39–40, 106 Gibbs’s understanding of work of, 106 Glasgow social conditions affecting, 59 heat death of the universe theory of, 60–61, 127, 183 Helmholtz on, 61 ice experiment on temperature and pressure for freezing water and, 39–40 introduction of term thermodynamic by, 37 James Thomson’s water freezing experiment and, 37–38, 38–39 Joule’s temperature research and, 30–31, 36, 37, 39–40, 58 Kelvin scale and kelvin measurement named after, 66 Maxwell’s demon and, 189–90 physics laboratory set up by, 35–36 reaction to Clausius’s paper by, 58–59 religious beliefs reflected in paper of, 59, 60–61 second law of thermodynamics and, 188 Tait’s article on thermodynamics and, 187 thermodynamics’ relevance and, 58–59, 106, 187 time’s arrow concept and, 60, 103 work as Regnault’s assistant on thermal properties of steam, 34–35 Thomson, William Sr., 34 time’s arrow concept Boltzmann’s 
theories and, 103, 127–28 creation moment of universe and, 127–28 Thomson’s theories and, 60, 103 Transactions of the Connecticut Academy of Arts and Science, 112–13, 118 Transactions of the Royal Society, The, 25 transistors, 183–85 heat generation by, 184–85, 195, 196 invention of, 183–85 Landauer and IBM’s research on, 193–95 miniaturization of, 184, 193–94 Tudor, Frederic, 110 Turing, Alan, xi, 199–214 background and education of, 200–201 developmental biology contributions by, 200–201, 217 encryption code breaking by, 199–200, 201, 202–4 Entscheidungsproblem solution of, 201 family’s reaction to suicide of, 212–13 Jewish refugee children work of, 202 Manchester University work on computers by, 204 mental state of, 213–14 morphogenesis paper of, 205–11, 212, 215–16, 217 paper on computers and “imitation game” by, 204–5 pattern formation theory of, 207–10, 214, 215, 217 results of homosexual relationship of, 211–12 scientific community’s acceptance of paper of, 210–11 Shannon’s meetings with, 173–74, 197, 202 Universal Machine concept of, 201, 204 Turing, Dermot, 213 Turing, John, 200, 212, 213 Turing, Sara, 200, 201, 203, 213 Turing Test, 204 Tyndall, John, 242–44 uniformitarianism, 70, 71 Universal Machine, 201, 204 universe anthropic principle on human life in, 128–29 Boltzmann brain theory and, 130 Boltzmann’s idea of a single moment of creation for, 126, 127–30 event horizon around, 239 expansion of, 238–39 grand unified theory of, 236 origins of.

pages: 405 words: 105,395

Empire of the Sum: The Rise and Reign of the Pocket Calculator
by Keith Houston
Published 22 Aug 2023

At Bletchley Park, a stately home that housed Britain’s government codebreakers, he masterminded the cracking of the German “Enigma” encryption scheme, a feat that may have shortened the Second World War by up to two years. Before the war, Turing had published a conceptual blueprint for all programmable electronic computers, later to be dubbed the “universal Turing machine.” After the war, he devised the “imitation game,” or Turing test, that anticipated the arrival of artificial intelligence. Finally, Turing is remembered for the manner of his death. He was convicted of gross indecency in 1952 for a relationship with another man and was offered estrogen injections, a form of “chemical castration,” to avoid prison. He endured the mood swings and depression caused by the drug only to die two years later of cyanide poisoning.

F., 11 Rockwell, 197, 199, 235 Roman hand abacus, 41–43, 41 Roman numerals, 19, 47–48 ROM chips, 203–4, 205–6, 218, 219, 237 Rome, ancient, 19, 32, 33, 41–43, 41 Rosen, Ben, 267 rotary calculators, 126 See also Friden STW-10 calculator RPN (reverse Polish notation), 224n Salamis tablet, 30–32, 31 Sanyo, 197, 200 Sasaki, Tadashi, 198–99, 204 Savile, Henry, 65–66 Sayre, Rod, 207 Schickard, Wilhelm, 80–84, 82, 87, 94, 101 Schmandt-Besserat, Denise, 11, 27 Science and Civilisation in China (Needham), 36–37 scientific calculators algorithms for, 218, 219–22, 221 Casio fx-190, 238 HP-65, 232 HP 9100A, 173, 175–76, 213–15, 213, 217 SR-50, 246–47 scientific calculators (continued) TI-30 classroom calculator, 246 See also HP-35 calculator Scientific Revolution, 48, 50 sectors, 66–67, 67 Séguier, Pierre, 86 Seiko, 229, 240–41 semiconductors, 167 sexagesimal systems, 11, 28–29 SG-12 Soccer Game (calculator), 238–39 Shakespeare, William, 44–45 Shannon, Claude, 135–36, 140 Sharp Electronics, 197, 198, 200 Busicom and, 204–5 Compet CS-10A calculator, 160, 172n, 186 EL-8 calculator, 195 ELSI Mate EL-8048 calculator, 236 microchips and, 198 Micro Compet (QT-8D and QT-8B) calculators, 192, 199, 201 shift-and-add algorithm, 220–22, 221 shifting, 218 Shima, Masatoshi, 203, 205, 206, 207, 218, 251, 274 Shirriff, Ken, 205 Shockley, William, 167, 181, 197–98 Shù Shù Jì Yí (Notes on Traditions of Arithmetic Methods) (Xu Yue), 34–35, 36–37 significant figures, 125n simultaneous equations, 36 Sinclair, Clive, 232 Sinclair Executive calculator, 232 sines, 58 size-value systems, 23 slide rule atomic bomb and, 75–76 circular, 71–72, 71, 75 cursor in, 73, 74 instructions for, 70 limitations of, 77–78, 106, 137 Oughtred’s invention of, 70–71 power of, 49, 73–74 scientific calculators and, 224–25 space travel and, 76–77, 77, 78 variations of, 72–73, 74–75 smartphones, 255, 276, 276 Smith, David Eugene, 26–27 software, 139–40 Software Arts, 262, 269, 270 See also VisiCalc solar-powered 
calculators, 246, 257 solenoids, 130–31 Son, Masayoshi, 199 Sony Sobax calculator, 192 soroban (Japanese abacus), 39–40 Sottsass, Ettore, 164n space travel mechanical calculators and, 122–23, 122, 125–26 microchips and, 184–85 Programma 101 calculator and, 174–75 scientific calculators and, 224 slide rule and, 76–77, 77, 78 Sparkes, John, 159 spreadsheets BBL alternative to, 260–61n history of, 259 See also VisiCalc square roots, 219 SR-50 scientific calculator, 246–47 stepped drum, 97–98, 100, 123 Stevin, Simon, 50 Stibitz, George, 134–35, 155 STW-10 calculator, 123–26, 124, 127, 145, 186 suàn (counting rods), 35–36, 35 Suàn Fă Tŏng Zong (Systematic Treatise on Arithmetic) (Chéng Dà-Wèi), 38, 38 suàn pán (Chinese abacus), 37–39, 38 Sumlock Company, 145–46, 147, 154, 156 ANITA calculator, 145, 157–60, 157, 167, 186 Suydam, Marilyn, 244, 245, 247 Sylvester II (Pope), 44, 46 symbolic equations, 50 synthesizer-calculators, 238 tally sticks, 5–7, 7 Tanba, Tadashi, 203, 205, 206 Tchou, Mario, 165, 166, 176 Teal, Gordon, 181 TEAL Photon calculator, 257 technology skepticism, 242–43, 244, 248, 251 Tektronix 31/10 graphing calculator, 248 telegraph, 131–33, 133, 134 telephone, 134, 152 teletype, 135 ten-key adders, 105 tesserae, 14–16, 15 Texas Instruments Cal Tech calculator prototype, 186–90, 189, 193, 194, 214 Canon and, 194–96 classroom calculators, 245, 246–47, 250–54, 252, 255 graphing calculators, 251–54, 252, 256 Japan and, 191, 192–93 LEDs and, 189 microchip development and, 178, 180–85, 184, 187–88, 197 micromodules and, 178–79 Mostek competition, 201 Pocketronic calculator and, 195 Spirit of ’76, 236 SR-50 scientific calculator, 246–47 TI-1260 shoppers’ special calculator, 237 TI-2500 Datamath calculator, 210, 246 transistor development and, 181, 186–87, 186 VSTOL/REST calculator, 237 Thomas de Colmar, Charles Xavier, 96–98, 99, 100–104, 111–12, 126, 173, 243 TI-30 classroom calculator, 246 TI-81 graphing calculator, 251–52, 252, 253, 256 TI-1205 classroom 
calculator, 246, 247 TI-1255 classroom calculator, 246 TI-1260 shoppers’ special calculator, 237 TI-1766 classroom calculator, 246 TI-2500 Datamath calculator, 210, 246 Timeulator calculator watch, 234 tokens, 21–23 transistors, 160, 167–68, 181, 186 trigger tubes, 158 trigonometry, 50, 58 triodes, 151–54, 151 Triumph-Adler, 236 Turing, Alan, 114, 146, 147, 219 Turing test (imitation game), 114 2001: A Space Odyssey, 229 typewriters, 164 Underwood company, 162, 165, 166, 175 Universal Exposition (Paris, 1855), 103 universal machine, 219 Unix operating system, 277–78 Urquhart, Thomas, 54 vacuum fluorescent displays (VFDs), 200 vacuum tubes, 147, 150–54, 158 van Ceulen, Ludolph, 50 Van Tassel, James, 188 Viète, François, 50 Vietnam War, 175 vigesimal systems, 9–10, 12–13 VisiCalc, 259–68, 267 Apple II and, 262–64, 266, 267–68, 274 copies of, 270 fall of, 270–71 IBM PC and, 269 marketing of, 264–66 power of, 274 profits from, 269–70 vision for, 259–62 VL-1 calculator-synthesizer, 238 Vlacq, Adrian, 65 Volder, Jack E., 222 Voltaire, 98 von Lieben, Robert, 150–51, 152n VSTOL/REST calculator, 237 Waits, Bert, 245–46, 248, 249 Wang, An, 220 Watson, Thomas J., 138 Whittle, David W., 174–75 Wilson, Elizabeth Webb, 119 “Wiz-A-Tron,” 246 Womersley, J.

The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal
by M. Mitchell Waldrop
Published 14 Apr 2001

This in itself was no real restriction, he argued, since a digital computer was universal and could simulate any other machine, including, presumably, the human mind. His answer to the second question, however, was vintage Turing: idiosyncratic, startling, and yet utterly logical. "Thinking," he declared, could be defined via his now-famous Turing test, a kind of party-game affair in which a human and a computer are hidden from view and answer queries posed by an interrogator who is trying to determine which is which. Thus: Q: Please write me a sonnet on the subject of the Forth Bridge. A: Count me out on this one.

Turing argued that if the interrogator could not tell, no matter how many questions he or she asked, then one had to admit that the machine was really thinking. After all, he noted, at that point the interrogator would have precisely as much evidence for the computer's thinking ability as for the human's. Viewed in retrospect, Turing's test for machine intelligence has to rank as one of the most provocative assertions in all of modern science. To this day, people are still talking about it, writing commentaries on it, and voicing outraged objections to it (most of which he anticipated in his original paper, by the way). Of course, like so much of Turing's work, the 1950 paper wasn't widely read at the time, and it had essentially no impact on the artificial-intelligence research that was just beginning in the United States.

Licklider and his analog circuit models of the brain. Intelligent behavior resided in the hardware, went the cybernetic line. In fact, says McCarthy, the first person to make a reasonably explicit case for the software approach was Alan Turing, in the 1950 paper that introduced his Turing test. And even there, says McCarthy, it was not very prominent; the first time he read that paper, the software idea didn't even sink in. But it was sinking in now, and bringing with it a new resolve to launch yet another frontal assault on machine intelligence. "I had this idea that if only we could avoid all these distractions and devote some time to it," says McCarthy, "if we could just get everyone who was interested in the subject together, then we might make some real progress."

Robot Futures
by Illah Reza Nourbakhsh
Published 1 Mar 2013

They simply demonstrated that the right arm can reach out and press a pleasure or pain button equally easily, no matter the sleeping subject’s drug-induced torpor. There was so much excitement back then. News articles, new achievements every week. Riding a bicycle, typing on a computer, playing classical guitar, and then the big challenge: the ten-minute Turing Test. Watch Dave in a room for ten minutes. Is Dave in control, or are the nanos in control? But of course that was easy compared to real conversation—so much tongue control, vocal cord work that it would take two decades to master. The nano control threw her for a loop; it really bothered her deeply, and everyone—her friends, her professors—was running headlong into nano studies.

pages: 541 words: 109,698

Mining the Social Web: Finding Needles in the Social Haystack
by Matthew A. Russell
Published 15 Jan 2011

Look no further than a sentence containing a homograph[51] such as “fish” or “bear” as a case in point; either one could be a noun or a verb. NLP is inherently complex and difficult to do even reasonably well, and completely nailing it for a large set of commonly spoken languages may very well be the problem of the century. After all, a complete mastery of NLP is practically synonymous with acing the Turing Test, and to the most careful observer, a computer program that achieves this demonstrates an uncanny amount of human-like intelligence. Whereas structured or semi-structured sources are essentially collections of records with some presupposed meaning given to each field that can immediately be analyzed, there are more subtle considerations to be handled with natural language data for even the seemingly simplest of tasks.
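The homograph problem is easy to make concrete: the same surface token admits more than one part of speech, so no word-by-word lookup can resolve it; only context can. A toy sketch of that ambiguity (the tiny lexicon and the single contextual rule are invented for illustration, nothing like the statistical taggers real NLP toolkits use):

```python
# Each word maps to the set of parts of speech it can fill.
LEXICON = {
    "fish": {"noun", "verb"},   # "the fish" vs. "we fish"
    "bear": {"noun", "verb"},   # "the bear" vs. "we bear"
    "the":  {"det"},
    "we":   {"pron"},
}

def possible_tags(token):
    return LEXICON.get(token.lower(), {"unknown"})

def disambiguate(prev_token, token):
    """One crude contextual rule: after a determiner, read a homograph as
    a noun; after a pronoun, as a verb. Real taggers learn thousands of
    such regularities from annotated corpora."""
    tags = possible_tags(token)
    if len(tags) == 1:
        return next(iter(tags))
    if "det" in possible_tags(prev_token):
        return "noun"
    if "pron" in possible_tags(prev_token):
        return "verb"
    return "ambiguous"

print(disambiguate("the", "fish"))  # noun
print(disambiguate("we", "fish"))   # verb
```

Even this one rule shows why NLP cannot be done token by token: the answer for "fish" depends entirely on the word before it.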

tokenization, couchdb-lucene: Full-Text Indexing and More, Data Hacking with NLTK, Before You Go Off and Try to Build a Search Engine…, A Typical NLP Pipeline with NLTK, Sentence Detection in Blogs with NLTK, Visualizing Wall Data As a (Rotating) Tag Cloud definition and example of, A Typical NLP Pipeline with NLTK Facebook Wall data for tag cloud visualization, Visualizing Wall Data As a (Rotating) Tag Cloud mapper that tokenizes documents, couchdb-lucene: Full-Text Indexing and More NLTK tokenizers, Sentence Detection in Blogs with NLTK using split method, Data Hacking with NLTK, Before You Go Off and Try to Build a Search Engine… TreebankWord Tokenizer, Sentence Detection in Blogs with NLTK trends, Twitter search, Tinkering with Twitter’s API TrigramAssociationMeasures class, Common Similarity Metrics for Clustering triples, Entity-Centric Analysis: A Deeper Understanding of the Data, Man Cannot Live on Facts Alone true negatives (TN), Quality of Analytics true positives (TP), Quality of Analytics Turing Test, Syntax and Semantics tutorials, Installing Python Development Tools, Visualizing Mail “Events” with SIMILE Timeline, k-means clustering Getting Started with Timeline, Visualizing Mail “Events” with SIMILE Timeline official Python tutorial, Installing Python Development Tools Tutorial on Clustering Algorithms, k-means clustering tweets, Collecting and Manipulating Twitter Data, What are people talking about right now?

pages: 492 words: 118,882

The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory
by Kariappa Bheemaiah
Published 26 Feb 2017

Recent advancements in Natural Language Processing (NLP) and Automatic Speech Recognition (ASR), coupled with crowdsourced data inputs and machine learning techniques, now allow AIs to not just understand groups of words but also submit a corresponding natural response to a grouping of words. That’s essentially the base definition of a conversation, except this conversation is with a “bot.” Does this mean that we’ll soon have technology that can pass the Turing test? Maybe not yet, but Chatbots seem to be making progress towards that objective. The most advanced Chatbot today is Xiaoice (pronounced Shao-ice), developed by Microsoft, which can respond with human-like answers, questions, and “thoughts.” If a user sends a picture of a broken ankle, Xiaoice will reply with a question asking about how much pain the injury caused, for example (Slater-Robins, 2016).
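The "grouping of words in, corresponding response out" loop that defines a conversation here can be sketched, at its most primitive, as a keyword-matching bot in the ELIZA tradition. This is a deliberately crude illustration of the base mechanism only; the rules below, including the broken-ankle response echoing the Xiaoice example, are hypothetical and bear no resemblance to the machine-learned models behind modern chatbots:

```python
import re

# Rules pair a pattern over the incoming word grouping with a canned,
# conversation-sustaining response.
RULES = [
    (re.compile(r"\b(broke|broken|hurt|injur)", re.I),
     "That sounds painful. How much does it hurt?"),
    (re.compile(r"\b(sad|unhappy|down)\b", re.I),
     "I'm sorry to hear that. What do you think is making you feel this way?"),
    (re.compile(r"\?$"),
     "That's a good question. What do you think?"),
]
FALLBACK = "Tell me more."

def reply(message):
    # First matching rule wins; otherwise fall back to a neutral prompt.
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("I broke my ankle yesterday"))
# That sounds painful. How much does it hurt?
```

The gap between this and a bot like Xiaoice is exactly the gap the passage describes: learned models replace hand-written patterns, and responses are generated rather than canned.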

Intermediate Macroeconomics: New Keynesian Model . University of Notre Dame. Slater-Robins, M. (2016, February 5). Microsoft is carrying out a massive social experiment in China —and almost no one outside the country knows about it . Retrieved from Business Insider: http://uk.businessinsider.com/microsoft-xiaoice-turing-test-in-china-2016-2 Slayton, J. (2014, November 14). The Angelist Way. Retrieved from Slideshare: http://www.slideshare.net/abstartups/dia01-02-keynotejoshuaslaytonangellistdoing-the-wrong-things-the-right-way Stavins, S. S. (2015). The 2013 Survey of Consumer Payment Choice: Summary Results . Federal Reserve Bank of Boston.

pages: 352 words: 120,202

Tools for Thought: The History and Future of Mind-Expanding Technology
by Howard Rheingold
Published 14 May 2000

The very first sentence still sounds as direct and provocative as Turing undoubtedly intended it to be: "I propose to consider the question 'Can machines think?' " In typical Turing style, he began his consideration of deep AI issues by describing -- a game! He called this one "The Imitation Game," but history knows it as the "Turing Test." Let us begin, he wrote, by putting aside the question of machine intelligence and consider a game played by three people -- a man, a woman, and an interrogator of either gender, who is located in a room apart from the other two. The object of the game is to ask questions of the people in the other room, and to eventually identify which one is the man and which is the woman -- on the basis of the answers alone.

Statistics about how often experts turn out to be right are the ultimate criteria for evaluating expertise -- whether the expert is a person who has studied for years, or a computer program that was literally born yesterday. The methodology for conducting such an evaluation was suggested in the 1950s, by Alan Turing. The "Turing test" bypasses abstract arguments about artificial intelligence by asking people to determine whether or not the system they are communicating with via teletype is a machine or a person. If most people can't distinguish a computer from another human, strictly by the way the other party responds to questions, then the other party is deemed to be intelligent.
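The statistical criterion Rheingold describes can be made explicit: pool many judges' verdicts and ask whether correct identification beats chance. A sketch under assumed parameters (the 5% margin and the trial counts are illustrative, not from the book):

```python
def passes_turing_test(verdicts, chance=0.5, margin=0.05):
    """verdicts: list of booleans, True when a judge correctly identified
    the machine. The machine 'passes' if judges do no better than
    guessing, i.e. their accuracy stays within `margin` of chance."""
    accuracy = sum(verdicts) / len(verdicts)
    return accuracy <= chance + margin

# 100 sessions in which judges were right only 52 times: within noise of chance.
print(passes_turing_test([True] * 52 + [False] * 48))   # True

# Judges right 80 times out of 100: the machine is readily unmasked.
print(passes_turing_test([True] * 80 + [False] * 20))   # False
```

A serious evaluation would add a significance test rather than a fixed margin, but the shape of the criterion is the same: pass/fail is a property of aggregate judge performance, not of any single conversation.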

pages: 415 words: 114,840

A Mind at Play: How Claude Shannon Invented the Information Age
by Jimmy Soni and Rob Goodman
Published 17 Jul 2017

The only thing surgeons can do to help you basically is to cut something out of you. They don’t cut it out and put something better in, or a new part in. In fact, when it came to human superiority over machines, “thinking is sort of the last thing to be putting up a fight.” While Shannon did not expect a computer to pass the famous, and famously open-ended, Turing Test—a machine indistinguishably mimicking a human—within his lifetime, in 1984 he did propose a set of more discrete goals for artificial intelligence. Computer scientists might, by 2001, hope to have created a chess-playing program that was crowned world champion, a poetry program that had a piece accepted by the New Yorker, a mathematical program that proved the elusive Riemann hypothesis, and, “most important,” a stock-picking program that outperformed the prime rate by 50 percent.

…Turing, Alan, xiii, 42–43, 99, 150; cryptography and, 103–6; CS’s friendship with, 104, 106–9; death of, 109; Turing Machines, 103, 106; Turing Test, 209; “Turk, The” (hoax), 210–11, 212…

pages: 385 words: 112,842

Arriving Today: From Factory to Front Door -- Why Everything Has Changed About How and What We Buy
by Christopher Mims
Published 13 Sep 2021

Of course, the idea that we’re anywhere close to that sort of intelligence in machines has become a shibboleth of its own, one denoting a species of since-discredited optimism about both how quickly our computers would evolve and how much we thought we knew about how intelligence works in the first place. (A good general rule when listening to any expert: their skepticism is often on the mark, their optimism frequently way off.) When we talk about the capacities of computers and artificial intelligence, the measure of their abilities is most often something like the Turing test, which asks whether a computer can convince humans chatting with it that it is in fact human. But I would argue that Moravec’s test, which I imagine as his paradox reframed as a test of robots’ abilities, would be a better measure of the capabilities of our synthetic offspring. Humans, after all, are surprisingly easy to fool in conversation.

…Tuohy, Ryan, 263–66, 269; Turing test, 179; turnover, 111, 113, 204, 210, 214, 216, 237, 245, 280…

Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics
by John Derbyshire
Published 14 Apr 2003

Andrew has also computed the first 100 zeros to 1,000 decimal places each. The first zero (I mean, of course, its imaginary part) begins 14.13472514173469379045725198356247027078425711569924 31756855674601499634298092567649490103931715610127 79202971548797436766142691469882254582505363239447 13778041338123720597054962195586586020055556672583 601077370020541098266150754278051744259130625448… V. There are stories behind Table 16-1. That A.M. Turing, for example, is the very same Alan Turing who worked in mathematical logic, developing the idea of the Turing Test (a way of deciding whether a computer or its program is intelligent), and of the Turing machine (a very general, theoretical type of computer, a thought experiment used to tackle certain problems in mathematical logic). There is a Turing Prize for achievement in computer science, awarded annually since 1966 by the Association for Computing Machinery, equivalent to a Fields Medal in mathematics, or to a Nobel Prize in other sciences.

…Turing, Alan, 258, 261–262, 357, 377, 391; pl. 5; Turing machine, 261, 391; Turing Prize, 261; Turing Test, 261…

Text Analytics With Python: A Practical Real-World Approach to Gaining Actionable Insights From Your Data
by Dipanjan Sarkar
Published 1 Dec 2016

The ability to communicate information, complex thoughts, and emotions with such little effort is staggering once you think about trying to replicate that ability in machines. Of course, we are advancing by leaps and bounds with regard to cognitive computing and artificial intelligence (AI), but we are not there yet. Passing the Turing Test is perhaps not enough; can a machine truly replicate a human in all aspects? The ability to extract useful information and actionable insights from heaps of unstructured and raw textual data is in great demand today with regard to applications in NLP and text analytics. In my journey so far, I have struggled with various problems, faced many challenges, and learned various lessons over time.

Machine translation performed by Google Translate

Over time, machine translation systems are getting better, providing translations in real time as you speak or write into the application.

Speech Recognition Systems

This is perhaps the most difficult application for NLP. Perhaps the most difficult test of intelligence in artificial intelligence systems is the Turing Test. This test is defined as a test of intelligence for a computer. A question is posed to a computer and a human, and the test is passed if it is impossible to say which of the answers given was given by the human. Over time, a lot of progress has been made in this area by using techniques like speech synthesis, analysis, syntactic parsing, and contextual reasoning.

pages: 588 words: 131,025

The Patient Will See You Now: The Future of Medicine Is in Your Hands
by Eric Topol
Published 6 Jan 2015

As Abraham Verghese nicely put it in his 2014 Stanford medical school commencement address, “You can heal even when you cannot cure by that simple act of being at the bedside—your presence.” To that end, it’s hard to improve upon the words of the sixteenth-century physician Paracelsus (his full name was Philippus Aureolus Theophrastus Bombastus von Hohenheim!), “This is my vow: to love the sick, each and all of them, more than if my own body were at stake.” There will never be algorithms, supercomputers, avatars, or robots to pull that off. The Turing test for medicine won’t be passed, and Kurzweil’s “singularity” will remain a plurality.

A New Wisdom of the Body

While some people connect “wisdom of the body” with the notion that infants can innately self-select their diet for proper nutrition or that food cravings during pregnancy are for critically needed nutrients, the term goes back to Walter Cannon’s The Wisdom of the Body, published in 1932. Cannon, an eminent Harvard physiologist and medical researcher, developed the concept of homeostasis—that our body tightly regulates itself, with steady-state levels of blood glucose, electrolytes, pH, body temperature, and many other components.

Chitty, “Why Clinicians Are Natural Bayesians: Is There a Bayesian Doctor in the House?,” British Medical Journal 330 (2005): 1390. 72. D. Hernandez, “Artificial Intelligence Is Now Telling Doctors How to Treat You,” Wired, June 2, 2014, http://www.wired.com/2014/06/ai-healthcare/. 73. R. M. French, “Dusting Off the Turing Test,” Science 336 (2012): 164–165. 74. G. Poste, “Bring on the Biomarkers,” Nature 469 (2011): 156–157. 75. A. B. Jensen et al., “Temporal Disease Trajectories Condensed from Population-Wide Registry Data Covering 6.2 Million Patients,” Nature Communications, June 24, 2014, http://www.readbyqxmd.com/read/24959948/temporal-disease-trajectories-condensed-from-population-wide-registry-data-covering-6-2-million-patients. 76.

pages: 573 words: 142,376

Whole Earth: The Many Lives of Stewart Brand
by John Markoff
Published 22 Mar 2022

Despite its shortcomings, the project was launched after Jeff Bezos became its first backer. Mitch Kapor and Ray Kurzweil (a high-profile inventor who gained increasing recognition for his belief in the inevitability of the singularity) placed the first bet over the question of when a computer program would successfully pass the Turing test, an idea first proposed by the English mathematician Alan Turing to determine whether a computer could be programmed to exhibit such humanlike intelligence that an observer would be unable to distinguish its answers from those of an actual person. Several other projects, including efforts to catalog all living species and all languages, were launched at around the same time, with varying degrees of success.

…Tso, Hola, 112; Turing test, 335; Turner, Fred, 348…

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Panels of experts could interview the cyber-conscious being to determine its sentience as compared to a flesh human – these types of interviews, when conducted in blinded fashion as to the forms of each interviewee, are called Turing Tests in honor of the mathematician who first suggested them in the 1940s, Alan Turing (1950: 442). The prospect of being the first to pass such Turing Tests is motivating many computer science teams (Christian 2011: 16). They are doing their utmost to build into their software the full range of human feelings, including feelings of angst and dread. Hence, the unstoppable human motivation to invent something as amazing as a cyber-conscious mind will result in the creation of countless partially successful efforts that would be unethical if accomplished in flesh.

A model of your brain that described the behavior of every synapse and nerve impulse, and did a reasonably accurate job at that level, would seem to capture everything that is essential to being “you.” Yet how can we tell? How will we judge the “accuracy” of our computational model? How can we say what is “significant” and what is “insignificant”? We might adopt a variation of the Turing test: if an external tester can’t tell the difference, then there is no difference. But is the opinion of an external tester enough? How about your opinion? If you “feel” a difference, wouldn’t this mean that the model was a “mere copy” and not really you? Well, we could ask: “Hi! We’ve uploaded your brain into an Intel Pentadecium, how are you feeling?”

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

…Turing, Alan, 2, 257; Turing test, 2…

pages: 365 words: 56,751

Cryptoeconomics: Fundamental Principles of Bitcoin
by Eric Voskuil , James Chiang and Amir Taaki
Published 28 Feb 2020

It has preferences that it expresses by motivating the body over which it has control (owns). This body is its property, a good. When its body is fully depreciated (dead), the spirit ceases to be an actor. It is not necessary to contemplate disembodied spirits, as no action is implied. Catallactics is not concerned with legal, theological, or ethical concepts of humanity. The Turing Test [567] is a sufficient criterion for the definition of humanity. The catallactic distinction is in the formation of preferences, independent of any other actor. A person in this sense is a decision-maker, as distinct from a rule-follower. A machine is a good that expresses the preferences of a person.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

But he was also an early believer that the human brain was in large part a digital computing machine, and therefore that computers could be made to have intelligence—to think. In 1950, he published a paper titled “Computing Machinery and Intelligence” where he proposed a test called the imitation game, now commonly referred to as the Turing test. In the test, a human evaluator would have a conversation with two others, one being a machine and one a human, and the test would be passed when the human evaluator could not distinguish between the human and machine—in short, when humans can’t distinguish artificial from real intelligence. Around the same time that Turing was publishing “Computing Machinery and Intelligence,” another eminent thinker named Claude Shannon (1916–2001) was breaking barriers that enabled many of the advances in computers and artificial intelligence that we now take for granted.

pages: 548 words: 147,919

How Everything Became War and the Military Became Everything: Tales From the Pentagon
by Rosa Brooks
Published 8 Aug 2016

Many of the alarmist scenarios involving autonomous weapons systems make the opposite assumption, however, envisioning intelligent, autonomous robots that decide to override the code that created them, and turn upon us all. But when the robots go rogue—when lust for blood, money, or power begins to guide their actions—they’ll have ceased to be robots in any meaningful sense. They’ll finally be able to pass the Turing Test. For all intents and purposes, they will have become humans—and it’s humans we’ve had reason to fear, all along. Nonlethal Weapons If we fought a war with weapons that did no permanent physical harm to our enemies, would they still be weapons, and would it still be a war? The advent of cyberwar forces us to ask this question, but similar questions also arise when we consider advances in “nonlethal weapons.”

…Turing Test, 139; Turkey, 26; Turse, Nick, 147–48…

pages: 589 words: 147,053

The Age of Em: Work, Love and Life When Robots Rule the Earth
by Robin Hanson
Published 31 Mar 2016

We’ve seen similar booms of excitement and anxiety regarding rapid automation progress every few decades for centuries, and we are seeing another such boom today (Mokyr et al. 2015). Since the 1950s, a few people have gone out of their way to publish forecasts on the duration of time it would take AI developers to achieve human level abilities. (Our focus here is on AI that does human jobs well, not on passing a “Turing test.”) While the earliest forecasts tended to have shorter durations, soon the median forecasted duration became roughly constant at about 30 years. Obviously, the first 30 years of such forecasts were quite wrong. However, researchers who don’t go out of their way to publish predictions, but are instead asked for forecasts in a survey, tend to give durations roughly 10 years longer than researchers who do make public predictions (Armstrong and Sotala 2012; Grace 2014).

Ems may prefer not to be given such a bot to interact with, in part because it might suggest their low status. As a result, during interactions ems may try to act in complex and subtle ways that bots could not effectively mimic, continually running their own bots that try to mimic themselves and their associates to detect fakes. That is, ems might always feel they are part of a Turing test. Such habits could raise the costs of interacting for distrustful ems, and raise the gains from trust. Information about whether one is interacting with a bot might be obtained directly via direct brain access, or perhaps indirectly by requiring that a high price be paid to place what appears to be a full em in a particular role.

What We Cannot Know: Explorations at the Edge of Knowledge
by Marcus Du Sautoy
Published 18 May 2016

The mathematician Alan Turing was one of the first to question whether machines like my smartphone could ever think intelligently. He thought a good test of intelligence was to ask, if you were communicating with a person and a computer, whether you could distinguish which was the computer? It was this test, now known as the Turing test, that I was putting Cleverbot through at the beginning of this Edge. Since I can assess the intelligence of my fellow humans only by my interaction with them, if a computer can pass itself off as human, shouldn’t I call it intelligent? But isn’t there a difference between a machine following instructions and my brain’s conscious involvement in an activity?

My stomach might start communicating with me, but how can I ever know if it experiences red or falls in love like I do? I might scan it, probe it with electrodes, discover that the wiring and firing are a match for a cat brain that has as many neurons, but is that as far as we can go? The question of distinguishing the zombies from the conscious could remain one of the unanswerable questions of science. The Turing test that I put my smartphone through at the beginning of our journey into the mind hints at the challenge ahead. Was it the zombie chatbot who wanted to become a poet, or did it want to become rich? Is a chatbot clever enough to make a joke about Descartes’ ‘I think, therefore I am’? Will it eventually start dating?

pages: 505 words: 161,581

The Founders: The Story of Paypal and the Entrepreneurs Who Shaped Silicon Valley
by Jimmy Soni
Published 22 Feb 2022

Turing’s answer was to subject computers to “an imitation game” in which a computer and a human are locked in separate rooms and tasked with responding to questions put to them by someone in a third room. If the questioner couldn’t distinguish the machine’s answers from the human’s, then the computer passed the Turing test. Driven by more utilitarian motives, PayPal’s engineers joined the fray a few decades after Turing. “What is something that a computer couldn’t do—but is brain-dead easy for a human?” Levchin queried his assembled team of engineers. Engineer David Gausebeck thought back to his college research on computers’ ability to decipher images.
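Turing's setup as described in this passage (a questioner in one room, a human and a machine in the other two) can be sketched as a blind trial. The harness below is an illustrative toy, not anything from Turing's paper; `judge`, `human`, and `machine` are placeholder callables invented for the example:

```python
import random

def imitation_game(judge, human, machine, questions, rng=None):
    """Blind trial in the spirit of Turing's imitation game: the judge sees
    answers from rooms 'A' and 'B' without knowing which room holds the
    machine, then names the room it believes is the machine. Returns True
    if the machine fooled the judge (i.e., "passed")."""
    rng = rng or random.Random()
    rooms = {"A": human, "B": machine}
    if rng.random() < 0.5:                      # randomize room assignment
        rooms = {"A": machine, "B": human}
    # Collect each room's answers to the same list of questions.
    transcript = {room: [respond(q) for q in questions]
                  for room, respond in rooms.items()}
    guess = judge(questions, transcript)        # judge returns "A" or "B"
    machine_room = "A" if rooms["A"] is machine else "B"
    return guess != machine_room                # wrong guess => machine passes
```

Because the room assignment is randomized, even a judge with a fixed strategy is right only when chance puts the machine in the accused room, which is the point of blinding the trial.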

“It turns out the original version held up fine for years,” Gausebeck remembered. “I guess the people who were motivated to try to defeat it weren’t the same people who had the skills to do that. It’s a very different set of skills than interacting with a web page.” The Gausebeck-Levchin test became the first commercial application of a Completely Automated Public Turing Test to Tell Computers and Humans Apart—or CAPTCHA. Today, CAPTCHA tests are common on the internet—to be online is to be subjected to a search for a specific image—a fire hydrant or bicycle or boat—from a lineup. But at the time, PayPal was the first company to force users to prove their humanity in this fashion.

pages: 547 words: 173,909

Deep Utopia: Life and Meaning in a Solved World
by Nick Bostrom
Published 26 Mar 2024

This kind of blockage could extend more widely than merely preventing the creation of unconscious beings that are physically identical to normally conscious human beings. It might also be metaphysically impossible to create beings that are sufficiently generally intelligent or that are able to pass sufficiently rigorous forms of the Turing test without thereby also making them have conscious experience. (You’ve all heard about the Turing test, right? Good.) If these things are metaphysically (or nomologically) blocked, then there would be certain conceptions of utopia that could not be realized. For example, it might have been convenient to have been able to create entities that are indistinguishable from ordinary humans yet are not conscious, since that would, arguably, have enabled us to sidestep certain moral complications.

pages: 247 words: 71,698

Avogadro Corp
by William Hertling
Published 9 Apr 2014

I suppose that I, like him, assumed that there would be a more intentional, deliberate action that would spawn an A.I.” He paused, and then continued, smiling a bit. “Gentlemen, you may indeed have put the entire company at risk. But let me first, very briefly, congratulate you on creating the first successful, self-directed, goal oriented, artificial intelligence that can apparently pass a Turing test by successfully masquerading as a human. If not for the fact that the company, and perhaps the entire world, is at risk, I’d suggest a toast would be in order.” Sean looked around to see where his parents had sat, and then continued. “But since we are facing some serious challenges, let me go say goodbye to my parents, and then we can figure out our next step.”

pages: 229 words: 67,599

The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age
by Paul J. Nahin
Published 27 Oct 2012

6 NOTES AND REFERENCES 1. The reference to Turing is almost certainly due to Shannon having read Turing’s famous paper “Computing Machinery and Intelligence,” Mind, October 1950, pp. 433–460. It was in this paper that Turing put forth what was to become famous in computer science as the Turing test, an experimental procedure to unemotionally decide if a machine possessed artificial intelligence. For Turing’s comparison of ideas to neutrons, see in particular, p. 454. 2. MIT electrical engineering professor Marvin Minsky refers to this issue in his beautiful book Computation: Finite and Infinite Machines, Prentice Hall, 1967, p. 128.

Cartesian Linguistics
by Noam Chomsky
Published 1 Jan 1966

…
theory of mind, 50, 94, 104; see also science of mind
transformational generative grammar, see grammar
trigger/triggering, 10, 12, 25, 27–29, 31, 33, 38, 96
Turing test, 13–14; see also computer
universal grammar (UG), see grammar
use of language, 10, 15, 18, 45, 94, 132 (n.89), 133 (n.94), 133 (n.100)
…

On Nature and Language
by Noam Chomsky
Published 16 Apr 2007

It is worth bearing in mind that these conclusions are correct, as far as we know. 66 Language and the brain In these terms, Cartesian scientists developed experimental procedures to determine whether some other creature has a mind like ours – elaborate versions of what has been revived as the Turing test in the past half century, though without some crucial fallacies that have attended this revival, disregarding Turing’s explicit warnings, an interesting topic that I will put aside.6 In the same terms, Descartes could formulate a relatively clear mind–body problem: having established two principles of nature, the mechanical and mental principles, we can ask how they interact, a major problem for seventeenth-century science.

pages: 242 words: 73,728

Give People Money
by Annie Lowrey
Published 10 Jul 2018

“This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals,” the Facebook researchers noted. The AI also started writing its own responses to bids, moving past the formulaic ones its engineers had given it. The AI got so good so fast that it began passing a kind of Turing test. “Most people did not realize they were talking to a bot rather than another person—showing that the bots had learned to hold fluent conversations in English,” the Facebook researchers wrote in a blog post. The performance of the best bot negotiation agent matched the performance of a human negotiator.

pages: 269 words: 79,285

Silk Road
by Eileen Ormsby
Published 1 Nov 2014

In one three-week period, users of the site found themselves unable to log into the site more often than not, and when they could, the site ran so slowly as to be almost unusable. Reports flooded in of timeouts, missing CAPTCHAs (the visual tests that require a user to retype distorted letters to prove they are human and not a computer or bot; the word is an acronym of Completely Automated Public Turing test to tell Computers and Humans Apart) and other difficulties. Dread Pirate Roberts posted an update titled ‘Explosive Growth’: Just want to keep everyone in the loop. We are in uncharted territory in terms of the number of users accessing Silk Road. Most of the time we’ve been able to keep up with the demand, but we ARE behind the curve right now.

pages: 685 words: 203,949

The Organized Mind: Thinking Straight in the Age of Information Overload
by Daniel J. Levitin
Published 18 Aug 2014

These are the distorted words that are often displayed on websites. Their purpose is to prevent computers, or “bots,” from gaining access to secure websites, because such problems are difficult to solve for computers and usually not too difficult for humans. (CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. reCAPTCHAs are so-named for recycling—because they recycle human processing power.) reCAPTCHAs act as sentries against automated programs that attempt to infiltrate websites to steal e-mail addresses and passwords, or just to exploit weaknesses (for example, computer programs that might buy large numbers of concert tickets and then attempt to sell them at inflated prices).
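The recycling trick that gives reCAPTCHA its name (pairing a control word with a known answer against a word OCR failed on, and letting users who pass the control word vote on the unknown one) can be modeled in a toy sketch. The class, method names, and vote threshold below are illustrative assumptions, not Google's implementation:

```python
from collections import Counter, defaultdict

class ReCaptchaPool:
    """Toy model of reCAPTCHA's recycling idea: each challenge pairs a
    control word (answer known) with a word OCR could not read. Users who
    pass the control word vote on the unknown word; enough agreement turns
    the unknown word into a new control word with a trusted answer."""

    def __init__(self, control_words, unknown_words, votes_needed=3):
        self.controls = dict(control_words)   # word id -> known answer
        self.unknowns = list(unknown_words)   # word ids with no answer yet
        self.votes = defaultdict(Counter)     # word id -> answer tallies
        self.votes_needed = votes_needed

    def grade(self, control_id, control_answer, unknown_id, unknown_answer):
        """Return True if the user passed; record their unknown-word vote."""
        if control_answer.lower() != self.controls[control_id].lower():
            return False                      # failed the control word
        tally = self.votes[unknown_id]
        tally[unknown_answer.lower()] += 1
        answer, count = tally.most_common(1)[0]
        if count >= self.votes_needed and unknown_id in self.unknowns:
            # Consensus reached: the human-transcribed word is now trusted.
            self.unknowns.remove(unknown_id)
            self.controls[unknown_id] = answer
        return True
```

Only the control word decides pass/fail; the unknown word simply harvests the "human processing power" the passage describes, which is why a wrong transcription of the unknown word still lets the user through.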

…
color perception, 30–31, 162
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), 118
complexity, 120–35, 209, 220–32, 315
concentration, 41, 293–94; see also attention
conformity, 157–59
…

pages: 743 words: 201,651

Free Speech: Ten Principles for a Connected World
by Timothy Garton Ash
Published 23 May 2016

Already in the 1960s, the computer scientist Joseph Weizenbaum developed a computer programme named Eliza, after Eliza Doolittle in George Bernard Shaw’s Pygmalion—better known as Julie Andrews in ‘My Fair Lady’.31 Eliza was capable of having rudimentary conversations with people, of a vacuously sympathetic kind (‘I am sorry to hear you are depressed’). More recently, a chatbot called Eugene Goostman was alleged by its developers to have passed the Turing test—can you tell if you are talking to a human or machine?—although that claim was soon disputed.32 Many Chinese have apparently found comfort in talking to a purportedly female Microsoft chatbot called Xiaoice.33 Leading scientists have argued that artificial intelligence may be upon us sooner than we think and that we should address the issue seriously.34 Since, however, there is still a little while to go until that singular moment, this book is concerned only with human speech—not that attributed to other animals or to machines.

The minimal definition of literacy was established in 1958. Note that the UN estimate includes some 775 million adults and 122 million illiterate youth 29. Shteyngart 2010 30. Kurzweil 2005 31. Weizenbaum 1984 and Carr 2010, 201–8 32. Ian Sample and Alex Hern, ‘Scientists Dispute Whether Computer ‘Eugene Goostman’ Passed Turing Test’, The Guardian, 9 June 2014, http://perma.cc/9YMC-LJW7 33. John Markoff et al., ‘For Sympathetic Ear, More Chinese Turn to Smartphone Program’, New York Times, 31 July 2015, http://www.nytimes.com/2015/08/04/science/for-sympathetic-ear-more-chinese-turn-to-smartphone-program.html 34. see Future of Life Institute, ‘Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter’, http://perma.cc/ZD2A-DP7E, and Martin Rees, ‘Cheer Up, the Post-Human Era Is Dawning’, Financial Times, 10 July 2015, http://www.ft.com/cms/s/0/4fe10870-20c2-11e5-ab0f-6bb9974f25d0.html#axzz3qv6zRoSp 35. this is essentially also the conclusion of Wu 2013 36. on the often neglected subject of touch, see Linden 2015 37.

pages: 1,380 words: 190,710

Building Secure and Reliable Systems: Best Practices for Designing, Implementing, and Maintaining Systems
by Heather Adkins , Betsy Beyer , Paul Blankinship , Ana Oprea , Piotr Lewandowski and Adam Stubblefield
Published 29 Mar 2020

Protecting your systems from criminal actors When designing systems to be resilient against criminal actors, keep in mind that these actors tend to gravitate toward the easiest way to meet their goals with the least up-front cost and effort. If you can make your system resilient enough, they may shift their focus to another victim. Therefore, consider which systems they might target, and how to make their attacks expensive. The evolution of Completely Automated Public Turing test (CAPTCHA) systems is a good example of how to increase the cost of attacks over time. CAPTCHAs are used to determine whether a human or an automated bot is interacting with a website—for example, during a login. Bots are often a sign of malicious activity, so being able to determine if the user is human can be an important signal.
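One common way CAPTCHAs raise an attacker's cost, in the spirit of this passage, is to trigger them only once a client starts to look automated, for example after repeated failed logins in a short window. The sketch below is a minimal illustration of that gating logic; the class name and thresholds are invented for the example, not taken from the book:

```python
import time
from collections import defaultdict, deque

class ChallengeGate:
    """Risk-based gating sketch: once a client accumulates too many failed
    attempts inside a sliding time window, further requests must solve a
    CAPTCHA before being processed. Legitimate users rarely hit the
    threshold; credential-stuffing bots hit it almost immediately."""

    def __init__(self, max_failures=3, window=60.0):
        self.max_failures = max_failures
        self.window = window                 # seconds
        self.failures = defaultdict(deque)   # client id -> failure timestamps

    def requires_captcha(self, client, now=None):
        now = time.time() if now is None else now
        q = self.failures[client]
        while q and now - q[0] > self.window:  # drop failures outside the window
            q.popleft()
        return len(q) >= self.max_failures

    def record_failure(self, client, now=None):
        self.failures[client].append(time.time() if now is None else now)
```

Because the window slides, the gate also recovers on its own: a client that stops failing (or a bot that moves on to an easier victim) eventually drops back below the threshold.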

Sample disaster risk assessment matrix Theme Risk Probability of occurrence within a year Impact to organization if risk occurs Ranking Names of systems impacted by risk Almost never: 0.0 Unlikely: 0.2 Somewhat unlikely : 0.4 Likely: 0.6 Highly likely: 0.8 Inevitable :1.0 Negligible: 0.0 Minimal: 0.2 Moderate: 0.5 Severe : 0.8 Critical: 1.0 Probability x impact Environmental Earthquake Flood Fire Hurricane Infrastructure reliability Power outage Loss of internet connectivity Authentication system down High system latency/ infrastructure slowdown Security System compromise Insider theft of intellectual property DDos/DoS attack Misuse of system resources—e.g., cryptocurrency mining Vandalism/ website defacement Phishing attack Software security bug Hardware security bug Emerging serious vulnerability, e.g., Meltdown/Spectre, Heartbleed Index A AbsInt, Abstract Interpretation absolute time, limiting dependence on, Limit Your Dependencies on External Notions of Time-Limit Your Dependencies on External Notions of Time abstract interpretation, Abstract Interpretation access control lists (ACLs)advanced authorization controls, Using Advanced Authorization Controls data isolation and, Data isolation graceful degradation and, Graceful Degradation key isolation and, Isolation of confidentiality safe proxies and, Safe Proxies in Production Environments access controlsdesigning for recovery, Access Controls Google's corporate network and, Access Controls understandability and, Access control access denials, Diagnosing Access Denials accidental errors, recovery from, Accidental Errors accountability, risk taking and, Culture of Yes ACLs (see access control lists) active entities, defined, Identities activists, as attackers, Activists AddressSanitizer (ASan), Dynamic Program Analysis administrative APIs, Small Functional APIs, Choosing an auditor advanced authorization controls, Using Advanced Authorization Controls, Advanced Controls-Proxiesbusiness justifications, Business 
Justifications
  multi-party authorization, Multi-Party Authorization (MPA)
  three-factor authorization (3FA), Three-Factor Authorization (3FA)-Three-Factor Authorization (3FA)
advanced mitigation strategies, Advanced Mitigation Strategies-Post-Deployment Verification, Securing Against the Threat Model, Revisited
  binary provenance, Binary Provenance-What to put in binary provenance
  code signing, What to put in binary provenance
  deployment choke points, Deployment Choke Points
  post-deployment verification, Post-Deployment Verification
  provenance-based deployment policies, Provenance-Based Deployment Policies-Implementing policy decisions
  verifiable builds, Verifiable Builds-Unauthenticated inputs
adversarial testing, Assessment
adversaries, understanding, Understanding Adversaries-Conclusion
  attacker methods, Attacker Methods-Tactics, Techniques, and Procedures
  attacker motivations, Attacker Motivations
  attacker profiles, Attacker Profiles
  hobbyists, Hobbyists
  risk assessment considerations, Risk Assessment Considerations-Risk Assessment Considerations
  vulnerability researchers, Vulnerability Researchers
AFL (American Fuzzy Lop), How Fuzz Engines Work
Agile development, Initial Velocity Versus Sustained Velocity
alternative components
  pitfalls, Common pitfalls
  types, Component Types-Low-dependency components
ALTS (Application Layer Transport Security), Authentication and transport security
American Fuzzy Lop (AFL), How Fuzz Engines Work
amplification attacks, Attacker’s Strategy
Android, Use memory-safe languages
Android Keystore, Example: Secure cryptographic APIs and the Tink crypto framework
Android security team, Example: Embedding Security at Google
anonymization, Take Privacy into Consideration
Anonymous (hacktivist group), Activists
antivirus software, Host agents
anycast, Defendable Architecture
APIs (application programming interfaces)
  defined, Small Functional APIs
  least-privilege-based design, Small Functional APIs-Small Functional APIs
  secure cryptographic APIs and Tink crypto framework, Example: Secure cryptographic APIs and the Tink crypto framework-Example: Secure cryptographic APIs and the Tink crypto framework
  third-party insider threats, Third-party insiders
  usability and understandability, Considering API Usability-Example: Secure cryptographic APIs and the Tink crypto framework
App Engine (see Google App Engine)
App Security Improvement (ASI), Abstract Interpretation
application frameworks
  defined, Using Application Frameworks for Service-Wide Requirements
  for service-wide requirements, Using Application Frameworks for Service-Wide Requirements-Using Application Frameworks for Service-Wide Requirements
Application Layer Transport Security (ALTS), Authentication and transport security
application logs, Application logs
application-level data recovery, Data Sanitization
artifact, defined, Concepts and Terminology
artificial intelligence
  cyber attacks and, Automation and Artificial Intelligence
  protecting systems from automated attacks, Protecting your systems from automated attacks
ASan (AddressSanitizer), Dynamic Program Analysis
ASI (App Security Improvement), Abstract Interpretation
AST pattern matching, Automated Code Inspection Tools-Automated Code Inspection Tools
ATT&CK framework, Tactics, Techniques, and Procedures
attack surface
  binary provenance and, What to put in binary provenance
  redundancy and, Reliability Versus Security: Design Considerations
attacker methods, Attacker Methods-Tactics, Techniques, and Procedures
  categorizing of tactics, techniques, and procedures, Tactics, Techniques, and Procedures
  Cyber Kill Chain framework for studying, Cyber Kill Chains™
  DoS attacks, Attacker’s Strategy
  threat intelligence framework for studying, Threat Intelligence
attacker profiles, Attacker Profiles
  activists/hacktivists, Activists
  criminal actors, Criminal Actors-Protecting your systems from criminal actors
  governments, Governments and Law Enforcement-Protecting your systems from nation-state actors
  hobbyists, Hobbyists
  insiders, Insiders-Designing for insider risk
  vulnerability researchers, Vulnerability Researchers
attackers (see adversaries, understanding)
auditing
  automated systems, Auditing Automated Systems
  choosing an auditor, Choosing an auditor
  collecting good audit logs, Collecting good audit logs
  least privilege and, Auditing-Choosing an auditor
authentication
  credential/secret rotation, Credential and Secret Rotation-Credential and Secret Rotation
  least-privilege policy framework for, A Policy Framework for Authentication and Authorization Decisions-Avoiding Potential Pitfalls
  second-factor authentication using FIDO security keys, Example: Strong second-factor authentication using FIDO security keys-Example: Strong second-factor authentication using FIDO security keys
  understandability and, Authentication and transport security
authentication protocol, defined, Identities
authorization
  advanced controls, Using Advanced Authorization Controls
  auditing to detect incorrect usage, Auditing-Choosing an auditor
  avoiding potential pitfalls, Avoiding Potential Pitfalls
  business justifications, Business Justifications
  investing in a widely used authorization framework, Investing in a Widely Used Authorization Framework
  least-privilege policy framework for, A Policy Framework for Authentication and Authorization Decisions-Avoiding Potential Pitfalls
  multi-party, Multi-Party Authorization (MPA)
  temporary access, Temporary Access
  three-factor authorization (3FA), Three-Factor Authorization (3FA)-Three-Factor Authorization (3FA)
authorized_keys file, Handling emergencies directly
automated attacks, Automation and Artificial Intelligence
automated code inspection tools, Automated Code Inspection Tools-Automated Code Inspection Tools
automated response mechanisms, Automated response-Automated response
  failing safe versus failing secure, Failing safe versus failing secure
  human involvement in, A foothold for humans
automated testing, Testing
automation
  code deployment, Rely on Automation
  cyber attacks and, Automation and Artificial Intelligence
  response mechanism deployment, Automated response
awareness campaigns, Culture of Awareness
awareness, culture of, Culture of Awareness-Culture of Awareness
AWS Key Management Service, Example: Secure cryptographic APIs and the Tink crypto framework

B
batteries-included frameworks, Using Application Frameworks for Service-Wide Requirements
BeyondCorp architecture, Isolating Assets (Quarantine)
  Device Inventory Service tools, Cloud logs
  location-based trust and, Limitations of location-based trust
  zero trust networking model, Zero Trust Networking
Bigtable, Improve observability
binary provenance, Binary Provenance-What to put in binary provenance, Data Sanitization
BIOS, Device firmware
blameless postmortems, Building a Culture of Security and Reliability, Culture of Inevitably
blast radius, controlling, Controlling the Blast Radius-Time Separation
  failure domains, Failure Domains-Low-dependency components
  location separation, Location Separation-Isolation of confidentiality
  role separation, Role Separation
  time separation, Time Separation
Blue Teams, Special Teams: Blue and Red Teams-Special Teams: Blue and Red Teams, Special Teams: Blue and Red Teams
breakglass mechanism
  code deployment and, Include a Deployment Breakglass
  graceful failure and, Graceful Failure and Breakglass Mechanisms
  least-privilege-based design, Breakglass, Graceful Failure and Breakglass Mechanisms
  as safety net, Make Safety Nets the Norm
budget, for logging, Budget for Logging
bug bounties (Vulnerability Reward Programs), Vulnerability Researchers, Background and Team Evolution, External Researchers-External Researchers
bugs, compromises versus, Compromises Versus Bugs
builds
  defined, Concepts and Terminology
  verifiable (see verifiable builds)

C
C++, How Fuzz Engines Work
  for publicly trusted CA, Programming Language Choice
  sanitizing code, C++: Valgrind or Google Sanitizers
CA/Browser Forum Baseline Requirements, Background on Publicly Trusted Certificate Authorities
California Department of Forestry and Fire Protection, Handovers, Culture of Sustainability
canaries, Reduce Fear with Risk-Reduction Mechanisms
Cantrill, Bryan, Distinguish horses from zebras
CAPTCHA (Completely Automated Public Turing test) systems, Protecting your systems from criminal actors, A DoS Mitigation System
CAs (see certificate authorities)
CASBs (cloud access security brokers), Cloud logs
casting, implicit, Use Strong Types
Cellcom, Criminal Actors
certificate authorities (CAs)
  background on publicly trusted CAs, Background on Publicly Trusted Certificate Authorities
  build or buy decision, The Build or Buy Decision
  complexity versus understandability, Complexity Versus Understandability
  data validation, Data Validation
  design, implementation, maintenance considerations, Design, Implementation, and Maintenance Considerations-Data Validation
  at Google, Case Study: Designing, Implementing, and Maintaining a Publicly Trusted CA-Conclusion
  Google's business need for, Why Did We Need a Publicly Trusted CA?

pages: 789 words: 207,744

The Patterning Instinct: A Cultural History of Humanity's Search for Meaning
by Jeremy Lent
Published 22 May 2017

This begins to approach the number of neurons in the human brain, at approximately a hundred billion. Could this massive network, called the “internet of things,” ever develop its own intelligence?47 If machine intelligence became self-aware, what would that look like? For decades, the de facto standard for determining whether a machine is truly intelligent has been the Turing test: A person in a separate room engages in a written conversation with two entities. One entity is human, the other a computer programmed to imitate a human. If the tester cannot tell the computer apart from the human, the computer passes the test. To date, no computer has passed. Suppose, however, a computer network becomes intelligent through its own self-organized process, absent any direct human programming.
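The two-room protocol described above can be sketched as a toy program. This is a purely illustrative sketch, not anything from the book: the judge converses with two respondents hidden behind the anonymous labels A and B and must guess which label hides the machine. All function names and canned replies below are invented.

```python
import random

def imitation_game(questions, human, machine, judge, seed=0):
    """Toy Turing test: `human` and `machine` map a question to a reply.
    The judge sees only the labels 'A' and 'B', never the identities."""
    rng = random.Random(seed)
    parties = [("human", human), ("machine", machine)]
    rng.shuffle(parties)                    # hide which party got which label
    labels = dict(zip("AB", parties))       # 'A'/'B' -> (identity, reply_fn)
    transcript = []
    for question in questions:
        for label, (_, reply_fn) in labels.items():
            transcript.append((label, question, reply_fn(question)))
    guess = judge(transcript)               # judge returns 'A' or 'B'
    truth = next(l for l, (who, _) in labels.items() if who == "machine")
    return guess == truth, transcript

# Invented demo players: the machine answers instantly and literally,
# the human hedges; the judge flags literal answers as machine-like.
human_replies = {"What is 7 x 8?": "Hmm, fifty-six, I think?",
                 "How was your day?": "Long! Too many meetings."}
machine_replies = {"What is 7 x 8?": "56"}

def judge(transcript):
    for label, _question, answer in transcript:
        if answer.isdigit() or answer == "I do not understand.":
            return label

caught, transcript = imitation_game(
    ["What is 7 x 8?", "How was your day?"],
    human=human_replies.get,
    machine=lambda q: machine_replies.get(q, "I do not understand."),
    judge=judge,
)
```

Here the machine is unmasked by the texture of its answers; a program passes only when no such judging strategy works reliably.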

Mark's Cathedral, 73–74
Stockholm Environment Institute, 407
Stoicism, 159, 360
Sufis, 248, 320–21
Summers, Larry, 387
Supreme Court (United States), 383, 529
Supreme Ultimate, 255–56, 260–61, 262, 361, 362
symbolic thought
  abstraction resulting from, 147
  and agriculture, rise of, 109–10
  external symbolic storage, 78–80
  human uniqueness and, 21, 31–32, 47–52, 147, 189
  implications of, 80–81
  language and, 58–61, 64, 203–204, 409
  Neanderthals and, 71–72
  religion and, 76
  revolution in, 68–73, 78
  See also prefrontal cortex
Symmachus, 244
synaptic pruning, 77–78, 201–202, 469–70
systems thinking, 35, 357–73
  “moonlight tradition” as source of, 359–63
  Neo-Confucianism, compared with, 35, 258–60, 263, 268, 271–72, 358, 362–63, 364, 441
  new paradigm, as a, 370–73, 441
  phenomenology and, 264, 363
  quantum mechanics and, 363–64
  reductionism, contrasted with, 14, 285, 354–55, 357–58, 365, 368–73
  See also complex systems; self-organization
Tainter, Joseph, 412–14, 415, 416–17, 431–32, 536
Tao, 179–80, 203, 208, 210
  Confucianism and, 191, 193–96
  Neo-Confucianism and, 260–61, 264, 272, 372
  and Taoism, 186–88, 189–90, 195–96
  translation of dharma, 253
Taoism, 179–80, 183, 186–91, 247, 249, 250, 302, 490
  Buddhism and, 249, 252–54
  Confucianism, contrasted with, 191, 193–94, 195–96
  Neo-Confucianism and, 254
  prefrontal cortex functions and, 189–91
  technology, distrust of, 328
Tao Te Ching (Lao Tzu), 179–80, 182, 186–91, 195–96
  nature, view of, 289–90
  and Neo-Confucian thought, 254–55
Taylor, Frederick, 378–79
te, 187–88, 195
technological innovation, 35–36, 382, 405–406
  “cornucopians’” view of, 416–18
  modern cognition, impact on, 376–77
  potential of, 28, 35–36, 401, 408
  potential to avert civilizational collapse, 35–36, 416–18
  Singularity and, 421–28
  “Techno Split” arising from, 35–36, 432–33, 435, 441
Tegmark, Max, 352
teleology, 75–76, 424
Ten Commandments, 216
Tenochtitlán, 307
teotl, 113
Tertullian, 234, 245, 338
Thales, 146
theory of mind, 43, 72, 74, 76
Thoth, 116
thymos, 153
Ti, 119
Timaeus (Plato), 340–41
Toledo, Spain, 341–42
Tomasello, Michael, 45, 48–49
Tonegawa, Susumu, 207
“tragedy of the commons,” 397–98, 541
transcendence
  in Buddhist thought, 253–54
  Chinese thought, lacking in, 180, 196, 205–206, 210, 252, 262–63, 266–67
  in Christian thought, 231, 360, 401
  in Egyptian thought, 123–25
  in Greek thought, 177, 196, 205, 210, 222–23
  in Indian thought, 162, 176–77, 196, 210
  in Indo-European thought, 162
  as metaphor, root, 204
  in monotheistic thought, 262, 360, 401
  in Old Testament, 221–22
  in Platonic thought, 224–26
  in Singularity vision, 424–26
  in Western thought, 181, 282, 440–41
  See also dualism
transcendent pantheism
  in Egyptian thought, 123–25, 162
  in Indian thought, 173–77
  See also pantheism
transhumanism, 420–21
Treaty of Saragossa, 273
“Truth,” concept of absolute, 208–209
  in Christianity, 244, 337, 339–40
  in Christian Rationalism, 342–47, 349–51
  in Greek thought, 332, 336–37
  in Islam, 246, 322–23
  in mathematics, 351–55
  in scientific cognition, 332–33, 349–55, 361
  in Western tradition, 16–17
  in Zoroastrianism, 139–40, 221
Turing test, 423
Tutankhamen (boy pharaoh), 122
2001: A Space Odyssey (film), 39, 43, 405
United Nations, 240, 395, 399, 408, 437, 440
  Declaration of Human Rights, 432–33
United States. See America
Universal Grammar theory, 60, 199, 465–66
Upanishads, 166
  abstraction in, 167
  interiority, shift to, 164–65
  reincarnation and, 163–64
  senses, renunciation of, 168–69, 176–77
  transcendent pantheism of, 173–77, 261
  yoga in, 172–73
Upper Paleolithic revolution, 67–72, 467
  language and, 58–59, 63–66
  metaphoric threshold and, 65–66
  shamanism and, 89–90, 472
  symbolic thought and, 73, 78–79
Ur (ancient Mesopotamian city), 127
Utemorrah, Daisy, 85, 174
vacuum domicilium, 312
Varela, Francisco, 368
Vedic cosmology.

pages: 267 words: 82,580

The Dark Net
by Jamie Bartlett
Published 20 Aug 2014

Silk Road 2.0 also offers the widest variety of products from the largest number of vendors: 13,000 listings, compared to the second largest, Agora Market, which has 7,400. Positive endorsements, a wide range of products, excellent security. I need no more persuading. Vendors and Products Signing up to Silk Road 2.0 is extremely simple. Username. Password. Complete the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), and you’re in. ‘Welcome Back!’ reads the landing page. The forums were right – I am immediately overwhelmed by choice. There are around 870 vendors to choose from, selling more drugs than I’d ever thought possible. Under ecstasy alone, I find listed: 4-emc, 4-mec, 5-apb, 5-it, 6-apb, butylone, mda, mdai, mdma, methylone, mpa, pentedrone, pills.

Toast
by Stross, Charles
Published 1 Jan 2002

At the next table a person with make-up and long hair who's wearing a dress -- Manfred doesn't want to speculate about the gender of these crazy mixed-up Euros -- is reminiscing about wiring the fleshpots of Tehran for cybersex. Two collegiate-looking dudes are arguing intensely in German: the translation stream in his glasses tells him they're arguing over whether the Turing Test is a Jim Crow law that violates European corpus juris standards on human rights. The beer arrives and Bob slides the wrong one across to Manfred: “here, try this. You'll like it.” “Okay.” It's some kind of smoked doppelbock, chock-full of yummy superoxides: just inhaling over it makes Manfred feel like there's a fire alarm in his nose screaming danger, Will Robinson!

pages: 791 words: 85,159

Social Life of Information
by John Seely Brown and Paul Duguid
Published 2 Feb 2000

The spread of the radio was more impressive yet. 44. See Campbell-Kelly and Aspray, 1996. 45. Wellman (1988) provides one of the few worthwhile studies of the effects of information technologies on social communities and networks. Chapter 2: Agents and Angels 1. Distinguishing a computer from a human is the essence of the famous Turing test, developed by mathematician Alan Turing (1963). He argued that if you couldn't tell the difference then you could say the machine was intelligent. Shallow Red is not quite there yet. Indeed, the continuation of the exchange suggests that Shallow Red is still rather shallow (though pleasantly honest): What are VSRs?

pages: 304 words: 82,395

Big Data: A Revolution That Will Transform How We Live, Work, and Think
by Viktor Mayer-Schonberger and Kenneth Cukier
Published 5 Mar 2013

He came up with the idea of presenting squiggly, hard-to-read letters during the sign-up process. People would be able to decipher them and type in the correct text in a few seconds, but computers would be stumped. Yahoo implemented his method and reduced its scourge of spambots overnight. Von Ahn called his creation Captcha (for Completely Automated Public Turing Test to Tell Computers and Humans Apart). Five years later, millions of Captchas were being typed each day. Captcha brought von Ahn considerable fame and a job teaching computer science at Carnegie Mellon University after he earned his PhD. It was also a factor in his receiving, at 27, one of the MacArthur Foundation’s prestigious “genius” awards of half a million dollars.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

In 1968, the pioneering computer scientist Marvin Minsky described AI as “the science of making machines do things that would require intelligence if done by men.” Patrick Winston, another AI pioneer, defined it as “computations that make it possible to perceive, reason, and act.” The 1950 version of the Turing test—called the “imitation game” in the original discussion—focused on a hypothetical computer program that humans couldn’t distinguish from an actual human. I need to differentiate between specialized—sometimes called “narrow”—AI and general AI. General AI is what you see in the movies, the AI that can sense, think, and act in a very general and human way.

pages: 366 words: 94,209

Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity
by Douglas Rushkoff
Published 1 Mar 2016

Other companies have opted to become what are known as “flexible purpose” corporations, which allows them to emphasize pretty much any priority over profits—it doesn’t even have to be explicitly beneficial to society at large.74 Flexible purpose corporations also enjoy looser reporting standards than do benefit corporations.75 Vicarious, a tech startup based in the Bay Area, is the sort of business for which the flex corp structure works well. Vicarious operates in the field of artificial intelligence and deep learning; its most celebrated project to date is an attempt to crack CAPTCHAs (those annoying tests of whether a user is human) using AI. Vicarious claims to have succeeded, and its first Turing test demonstrations appear to back up its claim.76 How would such a technology be deployed or monetized? Vicarious doesn’t need to worry about that just yet. As a flexible purpose corporation, Vicarious can work with the long-term, big picture, experimental approach required to innovate in a still-emerging field such as AI.

pages: 330 words: 91,805

Peers Inc: How People and Platforms Are Inventing the Collaborative Economy and Reinventing Capitalism
by Robin Chase
Published 14 May 2015

The solution that they popularized is one that we all know well: those annoying little boxes of warped and scrambled numbers and letters that appear on our computer screen, requiring us to transcribe them before we can do certain things—send an email, make a comment, or sign up for something. That little test is one way of proving your humanness. In 2000, von Ahn’s team coined the term CAPTCHA (for “completely automated public Turing test to tell computers and humans apart”) for this tool, and soon the tool was being widely used. Von Ahn would tell you that by 2005, “approximately 200 million CAPTCHAs [were] typed every day around the world.” He could have rested on his laurels with that remarkable adoption of his innovation. But, being an engineer, von Ahn made some additional calculations.
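The arithmetic behind von Ahn's realization is easy to reproduce. Assuming roughly ten seconds per CAPTCHA (a commonly cited average; the exact figure is an assumption, not stated in this excerpt), 200 million solves a day add up to over half a million human-hours daily:

```python
captchas_per_day = 200_000_000
seconds_per_captcha = 10    # assumption: commonly cited average solve time
human_hours_per_day = captchas_per_day * seconds_per_captcha / 3600
# over 555,000 human-hours of typing, every single day
```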

pages: 292 words: 94,660

The Loop: How Technology Is Creating a World Without Choices and How to Fight Back
by Jacob Ward
Published 25 Jan 2022

“If the method proves beneficial,” Colby wrote in a 1966 paper, “then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.”5 In the 1970s, Colby expanded on the ELIZA concept by building PARRY, a software simulation of a paranoid patient (used to train student therapists) that was indistinguishable from a human patient to most psychiatrists—it was the first piece of software to pass the Turing Test, which evaluates whether a person can tell the difference between a robot and a human in structured conversation. By the 1980s, Colby had sold a natural-language psychotherapy program called Overcoming Depression to the Department of Veterans Affairs, which distributed it to patients who then used it without direct supervision from a therapist.
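ELIZA's core mechanism, keyword patterns plus pronoun reflection, fits in a few lines. The sketch below is in that spirit only; the rules are invented for illustration and are not Weizenbaum's or Colby's actual scripts.

```python
import re

# Reflect first-person fragments back at the speaker ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

# Ordered rules: the first matching pattern wins; the last catches everything.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

So respond("I feel sad about my job") comes back as "Why do you feel sad about your job?": enough surface fluency to sustain a session, with no model of meaning underneath.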

pages: 326 words: 91,559

Everything for Everyone: The Radical Tradition That Is Shaping the Next Economy
by Nathan Schneider
Published 10 Sep 2018

On technological unemployment, see a summary in James Surowiecki, “Robopocalypse Not,” Wired (September 2017); on employment and inequality, see (among many other studies) Michael Förster and Horacio Levy, United States: Tackling High Inequalities, Creating Opportunities for All (OECD, 2014); on workplace surveillance, see Esther Kaplan, “The Spy Who Fired Me,” Harper’s (March 2015); on human computerization, see Brett M. Frischmann, “Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings,” Cardozo Legal Studies Research Paper no. 441 (2014). 22. Community Purchasing Alliance, 2016 Annual Report (February 2017). I delivered the keynote address at that meeting and was compensated for doing so. 23. E.g., Richard D.

pages: 317 words: 98,745

Black Code: Inside the Battle for Cyberspace
by Ronald J. Deibert
Published 13 May 2013

A major hurdle Koobface had to overcome was the precautions Facebook had in place to prevent fake “friends” using their trusted network. Each new Facebook account requires a real person to fill out a “CAPTCHA” – clusters of wavy, sometimes illegible letters and numbers. (CAPTCHA is an acronym for Completely Automated Public Turing Test to Tell Computers and Humans Apart.) As a standard security precaution, Facebook requires a human being to visually identify the CAPTCHAS and reproduce them in a field in order to create a new account. To get around the CAPTCHA problem, Koobface engaged in what the cyber crime expert Marc Goodman calls “crime-sourcing,” the outsourcing of all or part of a criminal act to a crowd of witting and unwitting individuals.

pages: 285 words: 86,858

How to Spend a Trillion Dollars
by Rowan Hooper
Published 15 Jan 2020

Thetis marvels that Hephaestus even has a staff of automata, ‘fashioned of gold in the image of maidens’; builds Talos, a giant bronze war-droid that guards Crete from invaders; and is commissioned to create Pandora, the beautiful android sent by Zeus to punish humans for the theft of fire. Homer and the people who dreamed up Hephaestus saw into a future we’re still only just reaching, with autonomous vehicles, and an artificial intelligence capable of passing the Turing test. You can see where I’m going with this. We’re going grand. Let’s do in reality what Hephaestus did in mythology. Let’s create a new life form. Dreams of automation have been with us for a long time, perhaps for as long as we’ve been making tools. Dreams of creating life probably for almost as long.

pages: 317 words: 101,074

The Road Ahead
by Bill Gates , Nathan Myhrvold and Peter Rinearson
Published 15 Nov 1995

Although I believe that eventually there will be programs that will recreate some elements of human intelligence, it is very unlikely to happen in my lifetime. For decades computer scientists studying artificial intelligence have been trying to develop a computer with human understanding and common sense. Alan Turing in 1950 suggested what has come to be called the Turing Test: If you were able to carry on a conversation with a computer and another human, both hidden from your view, and were uncertain about which was which, you would have a truly intelligent machine. Every prediction about major advances in artificial intelligence has proved to be overly optimistic.

pages: 323 words: 95,939

Present Shock: When Everything Happens Now
by Douglas Rushkoff
Published 21 Mar 2013

The antithesis of the Law of Diminishing Returns, the Law of Accelerating Returns holds that technology will overtake humanity and nature, no matter what. In his numerous books, talks, and television appearances, Kurzweil remains unswerving in his conviction that humanity was just a temporary step in technology’s inevitable development. It’s not all bad. According to Kurzweil, by 2029 artificial intelligences will pass the Turing test and be able to fool us into thinking they are real people. By the 2030s, virtual-reality simulations will be “as real and compelling as ‘real’ reality, and we’ll be doing it from within the nervous system. So the nanobots in your brain—which will get to your brain through the bloodstream, noninvasively and without surgery—will shut down the signals coming from your real senses and replace them with senses that your brain will be receiving from the virtual environment.”2 Just be sure to read the fine print in the iTunes agreement before clicking “I agree” and hope that the terms don’t change while you’re in there.

pages: 311 words: 94,732

The Rapture of the Nerds
by Cory Doctorow and Charles Stross
Published 3 Sep 2012

Whether or not sim-Huw is really Huw, whether or not uploading is a kind of death, whether or not posthumanity is immortal or just kidding itself, the single, inviolable fact remains: Human simspace is no more tasteful than the architectural train wreck that the Galactic Authority has erected. The people who live in it have all the aesthetic sense of a senile jackdaw. Huw is prepared to accept—for the sake of argument, mind—that uploading leaves your soul intact, but she is never going to give one nanometer on the question of whether uploading leaves your taste intact. If the Turing test measured an AI’s capacity to conduct itself with a sense of real style, all of simspace would be revealed for a machine-sham. Give humanity a truly unlimited field, and it would fill it with Happy Meal toys and holographic, sport-star, collectible trading card game art. There’s a whole gang of dirtside refuseniks who make this their primary objection to transcendence.

pages: 340 words: 101,675

A New History of the Future in 100 Objects: A Fiction
by Adrian Hon
Published 5 Oct 2020

If our former criminal’s behavior is indistinguishable from someone whom we believe to be a good person, then how can we say that they are not fit to receive freedom? Can there be a fairer test? —Reverend Michael Zhang, 2034. Most of us know Turing’s name these days, usually through his eponymous test to measure the ability of machines to exhibit intelligent behavior equivalent to—or these days, well beyond—humans. The Turing Test was originally inspired by imitation games in which an interrogator tries to discover which of two people is genuinely, for example, a woman, a politician, or a scientist, and which is merely pretending to be a woman, a politician, or a scientist. In the twentieth and early twenty-first centuries, imitation games were mostly thought experiments, but they eventually found use in the training of interactional experts—people who mimic expertise in a field by talking and interacting with real experts.

pages: 398 words: 108,889

The Paypal Wars: Battles With Ebay, the Media, the Mafia, and the Rest of Planet Earth
by Eric M. Jackson
Published 15 Jan 2004

The person opening the account was asked to read the image and type the letters into a nearby text box. While the human eye could easily interpret the random string of letters contained in the image, the slight distortion caused by the background prevented even the most sophisticated computer from doing the same. Max would later refer to it as a “reverse Turing test,” a way to discern a human being opening an account from a computer. Using an automated script to churn out hundreds of fraudulent PayPal accounts linked to stolen credit cards was now effectively impossible. The Gausebeck-Levchin test, as this addition to the sign-up process became known around the office, proved successful in combating fraud without slowing down sign-ups.
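The sign-up flow described can be sketched as follows. This is a hypothetical illustration of the general pattern, not PayPal's implementation: the image-rendering and distortion step is omitted, and the class and method names are invented.

```python
import secrets

class SignupGate:
    """Server-side half of a CAPTCHA-style check: issue a one-time
    challenge token, then create the account only if the user's
    transcription matches the text behind the (distorted) image."""

    def __init__(self):
        self._pending = {}                       # token -> expected answer

    def issue_challenge(self, challenge_text):
        token = secrets.token_hex(8)
        self._pending[token] = challenge_text    # render challenge_text as an image here
        return token

    def verify(self, token, typed):
        expected = self._pending.pop(token, None)   # single use: replays fail
        return expected is not None and typed.strip().upper() == expected.upper()
```

Single-use tokens matter as much as the visual distortion: even a bot that solves one image cannot replay the answer to churn out hundreds of accounts.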

pages: 430 words: 107,765

The Quantum Magician
by Derek Künsken
Published 1 Oct 2018

Intellect was an adaptive evolutionary structure, allowing humanity not only to sense the world in space, but to predict future events through time. Games of chance tested that predictive machine—so much so that games of controlled chance discriminated consciousness from unconsciousness far better than Turing. Belisarius had never trusted the Turing test. It depended on emulating consciousness enough to deceive a conscious being. But conscious beings were very deceivable, so Turing skewed to false positives. Belisarius had played against computers and even AIs like Saint Matthew. Sooner or later, a good player would detect the rules laid down by the programmers, and Belisarius was a very good player.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

One April Fools’ Day—a sacred occasion in the early years of Google—a website appeared on the company’s private network offering a list of “Jeff Dean Facts,” a riff on the “Chuck Norris Facts” that bounced around the Internet in ironic praise of the ’80s action movie star: Jeff Dean once failed a Turing test when he correctly identified the 203rd Fibonacci number in less than a second. Jeff Dean compiles and runs his code before submitting, but only to check for compiler and CPU bugs. Jeff Dean’s PIN is the last 4 digits of pi. The speed of light in a vacuum used to be about 35 mph. Then Jeff Dean spent a weekend optimizing physics.
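For what it's worth, the joke's benchmark is cheap even for mortals: the fast-doubling identities F(2k) = F(k) * (2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 yield the 203rd Fibonacci number in a handful of arithmetic steps. A quick sketch:

```python
def fib(n):
    """Fibonacci via fast doubling: O(log n) arithmetic operations."""
    def doubling(k):
        if k == 0:
            return (0, 1)               # (F(0), F(1))
        a, b = doubling(k // 2)         # (F(m), F(m+1)) with m = k // 2
        c = a * (2 * b - a)             # F(2m)
        d = a * a + b * b               # F(2m + 1)
        return (d, c + d) if k % 2 else (c, d)
    return doubling(n)[0]
```

fib(203) returns instantly, and the same routine stays fast for n in the millions thanks to the logarithmic recursion depth.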

The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot
by Yolande Strengers and Jenny Kennedy
Published 14 Apr 2020

One even has a synthetic heartbeat. They can’t do the dishes (although some claim to be able to turn on the dishwasher, as we noted above), cook dinner, or walk—yet. Sexbots have a long way to go before they cross the eerie uncanny valley (indeed creators like McMullen are deliberately trying to avoid it) or pass the Turing test—that is, before they convince us that they are the same as actual humans.45 In their current form, they resemble less a real-life person and more a customizable product, complete with detachable body parts. The Juicy Bits “What kind of vagina should I choose?” This question is listed on the FAQ web page for the Lumidoll: a sex doll that offers a choice of built-in or removable parts.46 Each option, of course, has its pros and cons.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

“Why deep learning will never”: Gwern Branwen, “GPT-3 Creative Fiction,” gwern.net, June 19, 2020, https://www.gwern.net/GPT-3#why-deep-learning-will-never-truly-x; Kelsey Piper, “GPT-3, Explained: This New Language AI Is Uncanny, Funny—and a Big Deal,” Vox, August 13, 2020, https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language. trust in technology companies is declining: Carroll Doherty and Jocelyn Kiley, “Americans Have Become Much Less Positive About Tech Companies’ Impact on the U.S.,” Pew Research Center, July 29, 2019, https://www.pewresearch.org/fact-tank/2019/07/29/americans-have-become-much-less-positive-about-tech-companies-impact-on-the-u-s/; Ina Fried, “40% of Americans Believe Artificial Intelligence Needs More Regulation,” Axios, https://www.axios.com/big-tech-industry-global-trust-9b7c6c3c-98f1-4e80-8275-cf52446b1515.html.

pages: 480 words: 123,979

Dawn of the New Everything: Encounters With Reality and Virtual Reality
by Jaron Lanier
Published 21 Nov 2017

When people get scared, they get narrow-minded.” “Devils aren’t real, but computers are real.” “What if AI is just a fantasy we see in the bits we set? What if it’s a way of avoiding human responsibility?” “This has been argued for decades. When people can’t tell AI from people, then AI will be real. You know, the Turing Test.” From a bearded hacker who has been too entangled with noodles to speak until now, “He’s got an answer for that.” “Yeah, I do. You think that people are these fixed quantities waiting for AI to catch up and then surpass us. But what if people are dynamic, maybe even more dynamic than computers?

pages: 566 words: 122,184

Code: The Hidden Language of Computer Hardware and Software
by Charles Petzold
Published 28 Sep 1999

The first, published in 1937, pioneered the concept of "computability," which is an analysis of what computers can and can't do. He conceived of an abstract model of a computer that's now known as the Turing Machine. The second famous paper Turing wrote was on the subject of artificial intelligence. He introduced a test for machine intelligence that's now known as the Turing Test. At the Moore School of Electrical Engineering (University of Pennsylvania), J. Presper Eckert (1919–1995) and John Mauchly (1907–1980) designed the ENIAC (Electronic Numerical Integrator and Computer). It used 18,000 vacuum tubes and was completed in late 1945. In sheer tonnage (about 30), the ENIAC was the largest computer that was ever (and probably will ever be) made.
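The abstract model mentioned above is small enough to simulate directly. Below is a minimal sketch of a single-tape machine; the rule-table format and the bit-flipping example are invented for illustration, not Turing's own notation.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a single-tape Turing machine.
    `rules` maps (state, read_symbol) -> (next_state, write_symbol, move),
    with move -1 (left), 0, or +1 (right); the machine stops in state 'halt'."""
    cells = dict(enumerate(tape))        # sparse tape, blank everywhere else
    head = 0
    for _ in range(max_steps):           # guard against non-halting machines
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: sweep right, flipping every bit, halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

Everything a physical computer does reduces, in principle, to tables like flip_bits; that is the sense in which the model pins down what "computability" means.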

pages: 570 words: 115,722

The Tangled Web: A Guide to Securing Modern Web Applications
by Michal Zalewski
Published 26 Nov 2011

A fake seven-segment display can be used to read back link styling when the displayed number is entered into the browser in an attempt to solve a CAPTCHA. The user will see 5, 6, 9, or 8, depending on prior browsing history. * * * [58] CAPTCHA (sometimes expanded as Completely Automated Public Turing test to tell Computers and Humans Apart) is a term for a security challenge that is believed to be difficult to solve using computer algorithms but that should be easy for a human being. It is usually implemented by showing an image of several randomly selected, heavily distorted characters and asking the user to type them back.
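The footnote above defines the CAPTCHA mechanism; stripped of the image-distortion step, the server-side half is just challenge generation plus a forgiving, constant-time comparison. A minimal Python sketch (all function names here are illustrative, not from the book):

```python
import hmac
import random
import string

def make_challenge(length=5):
    """Generate a random challenge string; a real deployment would
    render this as a heavily distorted image, not plain text."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))

def verify(expected, answer):
    """Compare the user's reply case-insensitively, in constant time."""
    return hmac.compare_digest(expected.upper(), answer.strip().upper())

# The server stores the challenge in the session, shows the distorted
# image, then checks the user's typed reply against it:
challenge = make_challenge()
assert verify(challenge, challenge.lower())
```

The case-insensitive comparison matters in practice because distorted glyphs make letter case hard for humans to judge.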

pages: 381 words: 120,361

Sunfall
by Jim Al-Khalili
Published 17 Apr 2019

True machine consciousness, he’d said – she recalled this was the first time she’d heard the term ‘the singularity’ – would not be achieved for many decades. Since that hiking trip just seven years ago the line between artificial and human intelligence had become increasingly blurred. Passing the Turing test had not meant that computers were now sentient, but it had highlighted instead that what most people thought of as consciousness was no longer so clear-cut. Sure, AIs now had very crude emotional states, but these had mostly been programmed in rather than learned. At best, the most powerful Minds were more like benign yet extreme psychopaths (those scoring close to the maximum of 40 on the Hare psychopathy checklist), in that they lacked the ability to feel basic emotions such as compassion, or to empathize with the emotional states of humans.

pages: 436 words: 127,642

When Einstein Walked With Gödel: Excursions to the Edge of Thought
by Jim Holt
Published 14 May 2018

Turing conjectured that, initially at least, computers might be suited to purely symbolic tasks, those presupposing no “contact with the outside world,” like mathematics, cryptanalysis, and chess playing (for which he himself worked out the first programs on paper). But he imagined a day when a machine could simulate human mental abilities so well as to raise the question of whether it was actually capable of thought. In a paper published in the philosophy journal Mind, he proposed the now classic “Turing test”: a computer could be said to be intelligent if it could fool an interrogator—perhaps in the course of a dialogue conducted via Teletype—into thinking it was a human being. Turing argued that the only way to know that other people are conscious is by comparing their behavior with one’s own and that there is no reason to treat machines any differently.

pages: 1,302 words: 289,469

The Web Application Hacker's Handbook: Finding and Exploiting Security Flaws
by Dafydd Stuttard and Marcus Pinto
Published 30 Sep 2007

And if strong requirements are in place for password quality, it is far less likely that the attacker will choose a password for testing that even a single user of the application has chosen. In addition to these controls, an application can specifically protect itself against this kind of attack through the use of CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) challenges on every page that may be a target for brute-force attacks (see Figure 6-9). If effective, this measure can prevent any automated submission of data to any application page, thereby ensuring that any password-guessing attack must be carried out manually.
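The passage describes gating brute-force guessing behind a CAPTCHA. One common way to wire that in, sketched here in Python with invented names and an assumed threshold of five failures (the book does not prescribe an implementation):

```python
from collections import defaultdict

FAILURE_LIMIT = 5  # assumed policy threshold; real values vary

# username -> consecutive failed login attempts
failures = defaultdict(int)

def login(username, password, check_credentials):
    """Return 'ok', 'denied', or 'captcha_required'.
    `check_credentials` is a stand-in for the real password check."""
    if failures[username] >= FAILURE_LIMIT:
        # Demand a CAPTCHA before accepting further guesses,
        # forcing the attacker to proceed manually.
        return "captcha_required"
    if check_credentials(username, password):
        failures[username] = 0  # reset on success
        return "ok"
    failures[username] += 1
    return "denied"
```

After five consecutive wrong guesses, the sixth attempt is gated behind a CAPTCHA, which is what makes automated password guessing expensive.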

Figure 14-16: Burp's session handling tracer, which lets you monitor and debug your session handling rules. Having configured and tested the rules and macros that you need to work with the application you are targeting, you can continue your manual and automated testing in the normal way, just as if the obstacles to testing did not exist. CAPTCHA Controls CAPTCHA controls are designed to prevent certain application functions from being used in an automated way. They are most commonly employed in functions for registering e-mail accounts and posting blog comments to try to reduce spam. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. These tests normally take the form of a puzzle containing a distorted-looking word, which the user must read and enter into a field on the form being submitted. Puzzles may also involve recognition of particular animals and plants, orientation of images, and so on.

pages: 515 words: 126,820

Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World
by Don Tapscott and Alex Tapscott
Published 9 May 2016

This scenario was originally explained by Don Tapscott in “The Transparent Burger,” Wired, March 2004; http://archive.wired.com/wired/archive/12.03/start.html?pg=2%3ftw=wn_tophead_7. 38. Interview with Yochai Benkler, August 26, 2015. 39. Called “the wiki workplace” in Wikinomics. 40. CAPTCHA stands for “Completely Automated Public Turing Test to Tell Computers and Humans Apart.” 41. Interview with Joe Lubin, July 13, 2015. 42. Ibid. Chapter 6: The Ledger of Things: Animating the Physical World 1. Not their real names. This story is based on discussions with individuals familiar with the situation. 2. Primavera De Filippi, “It’s Time to Take Mesh Networks Seriously (and Not Just for the Reasons You Think),” Wired, January 2, 2014. 3.

pages: 416 words: 129,308

The One Device: The Secret History of the iPhone
by Brian Merchant
Published 19 Jun 2017

Primed by hundreds of years of fantasy and possibility, around the mid-twentieth century, once sufficient computing power was available, the scientific work investigating actual artificial intelligence began. With the resonant opening line “I propose to consider the question, ‘Can machines think?’” in his 1950 paper “Computing Machinery and Intelligence,” Alan Turing framed much of the debate to come. That work discusses his famous Imitation Game, now colloquially known as the Turing Test, which describes criteria for judging whether a machine may be considered sufficiently “intelligent.” Claude Shannon, the communication theorist, published his seminal work on information theory, introducing the concept of the bit as well as a language through which humans might speak to computers.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

However, due to the inevitable difference in experience, the system will not always be able to use a natural language as a native speaker does. Even so, its proficiency in that language should be sufficient for many practical purposes. Being able to use any natural language is not a necessary condition for being intelligent. Since the aim of NARS is not to accurately duplicate human behaviors so as to pass the Turing Test [5], natural language processing is optional for the system. 3.3 Education NARS processes tasks using available knowledge, though the system is not designed with a ready-made knowledge base as a necessary part. Instead, all the knowledge, in principle, should come from the system’s experience. In other words, NARS as designed is like a baby that has great potential, but little instinct.

pages: 425 words: 131,000

Dark Eden
by Chris Beckett
Published 1 Jan 2012

DARK EDEN CHRIS BECKETT is a university lecturer living in Cambridge. He has written over 20 short stories, many of them originally published in Interzone and Asimov’s. In 2009 he won the Edge Hill Short Story competition for his collection of stories, The Turing Test. ALSO BY Chris Beckett THE HOLY MACHINE ‘Beckett examines the interface between human and machine, rationalism and religious impulse with the sparse prose and acute social commentary of a latter-day Orwell’ GUARDIAN ‘Incredible’ INTERZONE ‘Beckett can stand shoulder to shoulder with Orwell and Burgess. A triumph’ ASIMOV’S SCIENCE FICTION Published in eBook and hardback in Great Britain in 2012 by Corvus, an imprint of Atlantic Books Ltd.

pages: 489 words: 148,885

Accelerando
by Stross, Charles
Published 22 Jan 2005

At the next table, a person with makeup and long hair who's wearing a dress – Manfred doesn't want to speculate about the gender of these crazy mixed-up Euros – is reminiscing about wiring the fleshpots of Tehran for cybersex. Two collegiate-looking dudes are arguing intensely in German: The translation stream in his glasses tell him they're arguing over whether the Turing Test is a Jim Crow law that violates European corpus juris standards on human rights. The beer arrives, and Bob slides the wrong one across to Manfred: "Here, try this. You'll like it." "Okay." It's some kind of smoked doppelbock, chock-full of yummy superoxides: Just inhaling over it makes Manfred feel like there's a fire alarm in his nose screaming danger, Will Robinson!

pages: 527 words: 147,690

Terms of Service: Social Media and the Price of Constant Connection
by Jacob Silverman
Published 17 Mar 2015

On many customer service lines, we already use our voices to navigate menus, and some telemarketing operations have advanced this practice, using robots to give a sales pitch before transferring the customer to a human sales associate. In recent years, apps that mimic your Twitter or Facebook posts, often in vaguely accurate but also amusingly bizarre ways, have become an Internet phenomenon. It’s the Turing test as entertainment. Soon, one might choose a Google bot that promises verisimilitude or one of these more ham-fisted creations that would entertain you and your friends with a funhouse-mirror version of your online persona. In the eyes of the platform owner, the difference is likely to be immaterial: ads are still being shown, data will be created.

pages: 571 words: 162,958

Rewired: The Post-Cyberpunk Anthology
by James Patrick Kelly and John Kessel
Published 30 Sep 2007

At the next table a person with make-up and long hair who’s wearing a dress — Manfred doesn’t want to speculate about the gender of these crazy mixed-up Euros — is reminiscing about wiring the fleshpots of Tehran for cybersex. Two collegiate-looking dudes are arguing intensely in German: the translation stream in his glasses tell him they’re arguing over whether the Turing Test is a Jim Crow law that violates European corpus juris standards on human rights. The beer arrives and Bob slides the wrong one across to Manfred: “here, try this. You’ll like it.” “Okay.” It’s some kind of smoked doppelbock, chock-full of yummy super-oxides: just inhaling over it makes Manfred feel like there’s a fire alarm in his nose screaming danger, Will Robinson!

pages: 764 words: 188,807

The Prefect
by Alastair Reynolds
Published 2 Jan 2007

Her function was to evaluate the creative potential of these new minds, with the goal of creating a generation of gamma-level intelligences with the ability to solve problems by intuitive breakthrough, rather than step-by-step analysis. In essence, they wanted to create gamma-levels that were not only capable of passing the standard Turing tests, but which had the potential for intuitive thinking." Dreyfus touched a finger to his upper lip. "Valery tried to coax these machines into making art. To one degree or another, she usually got something out of them. But it was more like children daubing paint with their fingers than true creative expression.

pages: 741 words: 179,454

Extreme Money: Masters of the Universe and the Cult of Risk
by Satyajit Das
Published 14 Oct 2011

Buffett argued that large fees mean that hedge funds have to earn substantially greater returns than the S&P 500 index to match let alone beat its performance.32 The Buffett bet paralleled a $20,000 wager between Lotus founder Mitchell Kapor and futurist Ray Kurzweil that by 2029 no computer or machine intelligence will pass the Turing Test, where a computer successfully impersonates a human being. In 2010, Stanley Druckenmiller, who had been one of the traders at Soros’ Quantum Fund that broke the pound, announced that he was closing his fund Duquesne Capital Management. Druckenmiller confessed that increased volatility following the global financial crisis made it difficult to make money.

pages: 607 words: 185,228

Antarctica
by Kim Stanley Robinson
Published 6 Jul 1997

He looked to the side as he told X about this, almost as if embarrassed, although otherwise he showed no sign of any emotion at all; on the contrary he exhibited what X had come to think of as the pure beaker style, consisting of a Spocklike objectivity and deadened affect so severe that it was an open question whether he would have been able to pass a Turing test. So: writing down numbers. "Fine," X said. It had to beat picking nails off the floor. And at first it did. Forbes wandered away from the other beakers, and X followed, and they got right to work. But it was a windy day, the katabatic wind falling off the polar ice cap and whistling down the dry valleys, making all outdoor work miserable indeed, especially if you were just sitting on the ground writing figures in a notebook.

pages: 1,331 words: 183,137

Programming Rust: Fast, Safe Systems Development
by Jim Blandy and Jason Orendorff
Published 21 Nov 2017

Fortunately, this question is easier for programmers than it is for linguists. We usually say that two programs are equivalent if they will always have the same visible behavior when executed: they make the same system calls, interact with foreign libraries in equivalent ways, and so on. It’s a bit like a Turing test for programs: if you can’t tell whether you’re interacting with the original or the translation, then they’re equivalent. Now consider the following code:

let i = 10;
very_trustworthy(&i);
println!("{}", i * 100);

Even knowing nothing about the definition of very_trustworthy, we can see that it receives only a shared reference to i, so the call cannot change i’s value.

pages: 1,201 words: 233,519

Coders at Work
by Peter Seibel
Published 22 Jun 2009

That actually helped spread Weizenbaum's idea beyond its boundaries. It was written, at first, in the PDP-1 Lisp. But they were building a Lisp on the PDP-6 at that point—or maybe the PDP-10. But it was the Lisp that had spread across the ARPANET. So Doctor went along with it, it turns out. I got a little glimmer of fame because Danny Bobrow wrote up “A Turing Test Passed”. That was one of the first times I actually got some notice for my stupid hacking: I had left Doctor up. And one of the execs at BBN came into the PDP-1 computer room and thought that Danny Bobrow was dialed into that and thought he was talking to Danny. For us folk that had played with ELIZA, we all recognized the responses and we didn't think about how humanlike they were.
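For readers who have never seen Doctor/ELIZA in action, the keyword-triggered pattern matching it relied on can be suggested by a toy sketch. This is an illustrative simplification, not Weizenbaum's actual script, which had far richer rules and also reflected pronouns back at the user:

```python
import re
import random

# A toy subset of ELIZA-style rules: a pattern and some response
# templates that reuse the captured text.
RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["Tell me more about feeling {0}."]),
    (r"\b(mother|father|family)\b", ["Tell me about your {0}."]),
]
DEFAULT = ["Please go on.", "How does that make you feel?"]

def respond(line):
    """Return the first matching rule's response, ELIZA-style."""
    for pattern, templates in RULES:
        m = re.search(pattern, line, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am worried about my work"))
```

As the anecdote shows, even responses this mechanical were enough to fool an unsuspecting executive at a Teletype.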

pages: 993 words: 318,161

Fall; Or, Dodge in Hell
by Neal Stephenson
Published 3 Jun 2019

But he was already feeling a mild sense of unease, wondering whether he could even remember the Unix command line incantation for anything as simple as copying a file. Systems nowadays didn’t even have files in the old sense. They had abstractions that were so complicated they could almost pass the Turing test on their own, but still with a few old file-like characteristics for backward compatibility. To cover that unease, he blustered. Not a Corvallis behavior, but he had become part crow. “Why do we need to call a physical meeting for that?” “It’s an important file. Both here and . . . where you are going next.”

pages: 1,280 words: 384,105

The Best of Best New SF
by Gardner R. Dozois
Published 1 Jan 2005

At the next table a person with make-up and long hair who’s wearing a dress – Manfred doesn’t want to speculate about the gender of these crazy mixed-up Euros – is reminiscing about wiring the fleshpots of Tehran for cybersex. Two collegiate-looking dudes are arguing intensely in German: the translation stream in his glasses tell him they’re arguing over whether the Turing Test is a Jim Crow law that violates European corpus juris standards on human rights. The beer arrives and Bob slides the wrong one across to Manfred: “here, try this. You’ll like it.” “Okay.” It’s some kind of smoked doppelbock, chock-full of yummy superoxides: just inhaling over it makes Manfred feel like there’s a fire alarm in his nose screaming danger, Will Robinson!

pages: 1,263 words: 371,402

The Year's Best Science Fiction: Twenty-Sixth Annual Collection
by Gardner Dozois
Published 23 Jun 2009

He flicked between statistical summaries, technical overviews of linguistic structure, and snippets from the millions of conversations the software had logged. Food, weather, sex, death. As human dialogue the translations would have seemed utterly banal, but in context they were riveting. These were not chatterbots blindly following Markov chains, designed to impress the judges in a Turing test. The Phites were discussing matters by which they genuinely lived and died. When Daniel brought up a page of conversational topics in alphabetical order, his eyes were caught by the single entry under the letter G. Grief. He tapped the link, and spent a few minutes reading through samples, illustrating the appearance of the concept following the death of a child, a parent, a friend.
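The "chatterbots blindly following Markov chains" that the narrator dismisses are text generators that pick each word only from the words observed to follow the previous one in a training corpus, with no model of meaning at all. A minimal word-level sketch (illustrative only):

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, n=8, rng=random.Random(1)):
    """Walk the chain: each next word is sampled from the successors
    of the current word, stopping if a word has none."""
    out = [start]
    for _ in range(n):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
print(babble(train(corpus), "the"))
```

The output is locally plausible but globally aimless, which is exactly the contrast the passage draws against the Phites' conversations.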

Engineering Security
by Peter Gutmann

If you don’t apply measures like this you make yourself vulnerable to a variety of presentation attacks in which an attacker redirects user input elsewhere to perform various malicious actions. Consider a case where a web page asks the user to type in some scrambled letters, a standard CAPTCHA/reverse Turing test used to prevent automated misuse of the page by bots. The letters that the user is asked to type are “xyz”. When the user types the ‘x’, the web page tries to install a malicious ActiveX control. Just as they type the ‘y’, the browser pops up a warning dialog asking the user whether they want to run the ActiveX control, with a Yes/No button to click.