Alan Turing

description: British mathematician; foundational contributions to computer science and artificial intelligence

359 results

pages: 351 words: 107,966

The Secret Life of Bletchley Park: The WWII Codebreaking Centre and the Men and Women Who Worked There
by Sinclair McKay
Published 24 May 2010

But it was also at the Park that Turing was to find a rare sort of freedom, before the narrow, repressive culture of the post-war years closed in on him and apparently led to his early death. ‘Turing,’ commented Stuart Milner-Barry, ‘was a strange and ultimately a tragic figure.’ That is one view. Certainly his life was short, and it ended extremely unhappily. But in a number of other senses, Alan Turing was an inspirational figure. ‘Alan Turing was unique,’ recalled Peter Hilton. ‘What you realise when you get to know a genius well is that there is all the difference between a very intelligent person and a genius. With very intelligent people, you talk to them, they come out with an idea, and you say to yourself, if not to them, I could have had that idea.

As a result of the ‘sheet’ system and the ‘cillis’, and thanks to the crucial involvement of the Polish codebreakers, Bletchley Park’s first break into current military Enigma traffic – as opposed to old messages – came in January 1940. Alan Turing had been sent to Paris to confer with the Poles about such matters as wheel changes in the Enigma machine, taking with him some of the Zygalski sheets. In those few days, they managed to crack an Enigma key via this method. One of the Polish mathematicians, Marian Rejewski, remembered his dealings with Turing: ‘We treated Alan Turing as a younger colleague who had specialised in mathematical logic and was just starting out in cryptology.’7 At the time, he was not aware that Turing had quietly been making some astounding cryptographical leaps off his own bat.

Now, according to Jack Copeland, convoy re-routings ‘based on Hut 8 decrypts were so successful that for the first twenty-three days [of June], the north Atlantic U-boats made not a single sighting of a convoy’.2 In the midst of these events, Joan Murray gave a short description of Alan Turing, and his own gentle abstraction. ‘I can remember Alan Turing coming in as usual for a day’s leave,’ she wrote, ‘doing his own mathematical research at night, in the warmth and light of the office, without interrupting the routine of daytime sleep.’ Another veteran recalls Turing’s abstraction when being congratulated for his work by a senior ranking officer, while later, Hugh Alexander was to say of Turing’s role that ‘Turing thought it [naval Enigma] could be broken because it would be so interesting to break it … Turing first got interested in the problem for the typical reason that “no one else was doing anything about it and I could have it to myself.”’3 No better example, then, of the partnership between unfettered mathematical inquiry and the national interest.

pages: 253 words: 80,074

The Man Who Invented the Computer
by Jane Smiley
Published 18 Oct 2010

He also moved his office from the mathematics department to the new physics building, which was more spacious and more practically oriented. According to Burton, he felt that mathematics as a field was moving in the wrong direction—toward greater and greater abstraction—while physicists continued to be interested in concrete problems. In the meantime, Alan Turing was wrestling with similar dissatisfactions. Alan Turing’s life at Sherborne was punctuated at the end with tragedy—in the winter of his last year (1930), his dearest friend, Christopher Morcom, died of tuberculosis. Morcom, slightly older and gifted with the star power that eluded Turing, had won many prizes at Sherborne, and then a scholarship to Trinity College.

Turing also began thinking again about the Liverpool tide-predicting machine. The machine Alan Turing was thinking of (and received forty pounds sterling to develop) would use weights and counterweights attached to rotating gears to set up problems. Their solutions would be measured by a comparison of weights—an analog idea. Turing and a colleague worked on this machine in their office at Cambridge through the summer of 1939, but in the fall, after the German invasion of Poland, Turing went to Bletchley Park to aid in the breaking of the Enigma. Yet another, and still more obscure, inventor of the computer, one whom Alan Turing would soon know very well, was Tommy Flowers.

Watson, Jr., later said, “If Aiken and my father had had revolvers they would both have been dead.” Hard feelings lingered for years afterward. Alan Turing is now a famous man—the subject of biographies, papers, an opera, and at least one play, but his work at Bletchley Park breaking the Enigma code did not come to light until the 1970s, and then, at first, only by means of popular books that did not actually mention him, or mentioned him in cryptic ways (F. W. Winterbotham, The Ultra Secret, 1974; A. Cave Brown, Bodyguard of Lies, 1975), or in specialized publications that did mention him directly (Brian Randell, “On Alan Turing and the Origins of Digital Computers,” 1972; Brian Randell, editor, The Origins of Digital Computers: Selected Papers, 1973).

Turing's Cathedral
by George Dyson
Published 6 Mar 2012

Good to Sara Turing, December 9, 1956, AMT; Robin Gandy, “The Confluence of Ideas in 1936,” in Rolf Herken, ed., The Universal Turing Machine: A Half-Century Survey (Oxford: Oxford University Press, 1988), p. 85. 11. Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, ser. 2, vol. 42 (1936–1937): 230. 12. Ibid., p. 231. 13. Ibid., p. 250. 14. Ibid., p. 241. 15. Newman, “Max Newman—Mathematician, Codebreaker, and Computer Pioneer,” p. 178; Max Newman to Alonzo Church, May 31, 1936, in Andrew Hodges, Alan Turing: The Enigma (New York: Simon and Schuster, 1983), pp. 111–12. 16. Alan Turing to Sara Turing, October 6, 1936, AMT; Alan Turing to Sara Turing, February 22, 1937, AMT. 17.

Gladwin, June 18, 2002, in “Cryptanalytic Co-operation Between the UK and the USA,” in Christof Teuscher, ed., Alan Turing: Life and Legacy of a Great Thinker (New York: Springer-Verlag, 2002), p. 472. 35. John R. Womersley, Mathematics Division, National Physical Laboratory, “A.C.E. Project: Origin and Early History,” November 26, 1946, AMT. 36. Ibid. 37. Ibid. 38. Max Newman to John von Neumann, February 8, 1946, VNLC. 39. Alan Turing, “Report on visit to U.S.A., January 1st–20th, 1947,” AMT. 40. Sara Turing, Alan M. Turing, p. 56. 41. Alan Turing, “Proposed Electronic Calculator,” n.d., ca. 1946, p. 19, AMT. 42. Sara Turing, Alan M. Turing, p. 78. 43. Alan Turing, “Proposed Electronic Calculator,” p. 47; Alan Turing, “Lecture to the London Mathematical Society on 20 February 1947,” p. 9. 44.

Herman Goldstine, interview with Nancy Stern; Julian Bigelow, interview with Nancy Stern. 22. Julian Bigelow, interview with Nancy Stern. 23. Malcolm MacPhail to Andrew Hodges, December 17, 1977, in Hodges, Alan Turing, p. 138. 24. Turing, “Systems of Logic Based on Ordinals,” p. 161. 25. Ibid., pp. 172–73. 26. Ibid., pp. 214–15. 27. Ibid., p. 215. 28. Alan Turing to Sara Turing, October 14, 1936, AMT. 29. Alan Turing to Philip Hall, n.d., ca. 1938, AMT. 30. I. J. Good, “Pioneering Work on Computers at Bletchley,” in Metropolis, Howlett, and Rota, eds., A History of Computing in the Twentieth Century, p. 35. 31.

pages: 720 words: 197,129

The Innovators: How a Group of Inventors, Hackers, Geniuses and Geeks Created the Digital Revolution
by Walter Isaacson
Published 6 Oct 2014

Turing, published in 2012); Simon Lavington, editor, Alan Turing and His Contemporaries (BCS, 2012). 2. John Turing in Sara Turing, Alan M. Turing, 146. 3. Hodges, Alan Turing, 590. 4. Sara Turing, Alan M. Turing, 56. 5. Hodges, Alan Turing, 1875. 6. Alan Turing to Sara Turing, Feb. 16, 1930, Turing archive; Sara Turing, Alan M. Turing, 25. 7. Hodges, Alan Turing, 2144. 8. Hodges, Alan Turing, 2972. 9. Alan Turing, “On Computable Numbers,” Proceedings of the London Mathematical Society, read on Nov. 12, 1936. 10. Alan Turing, “On Computable Numbers,” 241. 11. Max Newman to Alonzo Church, May 31, 1936, in Hodges, Alan Turing, 3439; Alan Turing to Sara Turing, May 29, 1936, Turing Archive. 12.

See also “The Chinese Room Argument,” The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/chinese-room/. 96. Hodges, Alan Turing, 11305; Max Newman, “Alan Turing, An Appreciation,” the Manchester Guardian, June 11, 1954. 97. M. H. A. Newman, Alan M. Turing, Sir Geoffrey Jefferson, and R. B. Braithwaite, “Can Automatic Calculating Machines Be Said to Think?” 1952 BBC broadcast, reprinted in Stuart Shieber, editor, The Turing Test: Verbal Behavior as the Hallmark of Intelligence (MIT, 2004); Hodges, Alan Turing, 12120. 98. Hodges, Alan Turing, 12069. 99. Hodges, Alan Turing, 12404. For discussions of Turing’s suicide and character, see Robin Gandy, unpublished obituary of Alan Turing for the Times, and other items in the Turing Archives, http://www.turingarchive.org/.

Maurice Wilkes, “How Babbage’s Dream Came True,” Nature, Oct. 1975. 86. Hodges, Alan Turing, 10622. 87. Dyson, Turing’s Cathedral, 2024. See also Goldstine, The Computer from Pascal to von Neumann, 5376. 88. Dyson, Turing’s Cathedral, 6092. 89. Hodges, Alan Turing, 6972. 90. Alan Turing, “Lecture to the London Mathematical Society,” Feb. 20, 1947, available at http://www.turingarchive.org/; Hodges, Alan Turing, 9687. 91. Dyson, Turing’s Cathedral, 5921. 92. Geoffrey Jefferson, “The Mind of Mechanical Man,” Lister Oration, June 9, 1949, Turing Archive, http://www.turingarchive.org/browse.php/B/44. 93. Hodges, Alan Turing, 10983. 94. For an online version, see http://loebner.net/Prizef/TuringArticle.html. 95.

pages: 444 words: 111,837

Einstein's Fridge: How the Difference Between Hot and Cold Explains the Universe
by Paul Sen
Published 16 Mar 2021

By measuring how rapidly the cells reproduce: “Minimum Energy of Computing, Fundamental Considerations” by Victor Zhirnov, Ralph Cavin, and Luca Gammaitoni, chapter from the book ICT—Energy—Concepts Towards Zero-Power Information and Communication Technology. Chapter Eighteen: The Mathematics of Life a mathematical model: From “The Chemical Basis of Morphogenesis” by Alan Turing, Philosophical Transactions of the Royal Society of London, Series B 237 (1952–54). He is best known for his pivotal role: See a range of biographies including The Man Who Knew Too Much: Alan Turing and the Invention of the Computer by David Leavitt and Alan Turing: The Enigma by Andrew Hodges. “There should be no question in”: See Cryptographic History of Work on the German Naval Enigma by Hugh Alexander. rightly celebrated: For example, Breaking the Code, a play by Hugh Whitmore; Britain’s Greatest Codebreaker, broadcast on UK Channel 4; and The Imitation Game, film starring Benedict Cumberbatch.

Willard Gibbs, vols. 1 and 2 The Second Physicist: On the History of Theoretical Physics in Germany by Christa Jungnickel and Russell McCormmach Willard Gibbs by Muriel Rukeyser Part Three: The Consequences of Thermodynamics—Chapters Thirteen to Nineteen Alan Turing: The Enigma by Andrew Hodges Alan Turing: The Enigma Man by Nigel Cawthorne Alan Turing: The Life of a Genius by Dermot Turing The Black Hole War: My Battle with Stephen Hawking to Make the World Safer for Quantum Mechanics by Leonard Susskind Black Holes and Time Warps: Einstein’s Outrageous Legacy by Kip S. Thorne A Brief History of Time: From the Big Bang to Black Holes by Stephen Hawking The Bumpy Road: Max Planck from Radiation Theory to the Quantum, 1896–1906 by Massimiliano Badino Einstein: His Life and Universe by Walter Isaacson Einstein and the Quantum: The Quest of the Valiant Swabian by A.

There was another consequence of working on SIGSALY. For in the later months of 1942 on into 1943, Shannon met, almost daily, the one other person in the world in the fields of cryptography, communication, and computing who was his equal and his intellectual soul mate, the great British mathematician and code breaker Alan Turing. By late 1942, Alan Turing had already played a pivotal role in helping the British crack Enigma, the encryption system used by the Germans to protect their military communications, thereby establishing his reputation as Britain’s leading cryptographer. So, when the American intelligence authorities informed their British counterparts about SIGSALY, they sent Turing to Bell Labs to vet it.

pages: 463 words: 118,936

Darwin Among the Machines
by George Dyson
Published 28 Mar 2012

.: Open Court, 1902), 254. 53.Leibniz to Caroline, Princess of Wales, ca. 1716, in Alexander, Correspondence, 191. CHAPTER 4 1.Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (October 1950): 443. 2.A. K. Dewdney, The Turing Omnibus (Rockville, Md.: Computer Science Press, 1989), 389. 3.Robin Gandy, “The Confluence of Ideas in 1936,” in Rolf Herken, ed., The Universal Turing Machine: A Half-century Survey (Oxford: Oxford University Press, 1988), 85. 4.Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2d ser. 42 (1936–1937); reprinted, with corrections, in Martin Davis, ed., The Undecidable (Hewlett, N.Y.: Raven Press, 1965), 117. 5.Ibid., 136. 6.Kurt Gödel, 1946, “Remarks Before the Princeton Bicentennial Conference on Problems in Mathematics,” reprinted in Davis, The Undecidable, 84. 7.W.

Hinsley and Alan Stripp, eds., Codebreakers: The Inside Story of Bletchley Park, 2d ed. (Oxford: Clarendon Press, 1994), 164. 30.Hodges, Turing, 278. 31.Irving J. Good, “Turing and the Computer,” review of Alan Turing: The Enigma, by Andrew Hodges, Nature 307 (1 February 1984): 663. 32.Brian Randell, “The Colossus,” in Metropolis, Howlett, and Rota, History of Computing, 78. 33.Hilton, “Reminiscences,” 293. 34.Alan Turing, “Proposal for the Development in the Mathematics Division of an Automatic Computing Engine (ACE),” reprinted in B. E. Carpenter and R. W. Doran, eds., A. M. Turing’s A.C.E. Report of 1946 and Other Papers, Charles Babbage Reprint Series for the History of Computing, vol. 10 (Cambridge: MIT Press, 1986), 20–105. 35.Hodges, Turing, 307. 36.Carpenter and Doran, Turing’s A.C.E.

Report, 2. 37.Sara Turing, Alan M. Turing (Cambridge: W. Heffer & Sons, 1959), 78. 38.M. H. A. Newman, quoted in Good, “Turing and the Computer,” 663. 39.Alan Turing, “Lecture to the London Mathematical Society on 20 February 1947,” in Carpenter and Doran, Turing’s A.C.E. Report, 112. 40.Ibid., 106. 41.J. H. Wilkinson, “Turing’s Work at the National Physical Laboratory,” in Metropolis, Howlett, and Rota, History of Computing, 111. 42.Alan Turing, “Intelligent Machinery,” report submitted to the National Physical Laboratory, 1948, in Donald Michie, ed., Machine Intelligence, vol. 5 (1970), 3. 43.Turing, “Lecture,” 124. 44.Turing, “Intelligent Machinery,” 4. 45.Turing, “Lecture,” 123. 46.Turing, “Intelligent Machinery,” 9. 47.Ibid., 23. 48.Turing, “Computing Machinery,” 456. 49.Turing, “Intelligent Machinery,” 21–22. 50.Turing, “Systems of Logic Based on Ordinals,” Proceedings of the London Mathematical Society, 2d ser. 45 (1939); reprinted in Davis, The Undecidable, 209. 51.John von Neumann, 1948, “The General and Logical Theory of Automata,” in Lloyd A.

pages: 210 words: 62,771

Turing's Vision: The Birth of Computer Science
by Chris Bernhardt
Published 12 May 2016

I also thank Marie Lee, Kathleen Hensley, Virginia Crossman, and everyone at the MIT Press for their encouragement and help in transforming my rough proposal into this current book. Introduction Several biographies of his life have been published. He has been portrayed on stage by Derek Jacobi and in film by Benedict Cumberbatch. Alan Turing, if not famous, is certainly well known. Many people now know that his code breaking work during World War II played a pivotal role in the defeat of Germany. They know of his tragic death by cyanide, and perhaps of the test he devised for determining whether computers can think. Slightly less well known is the fact that the highest award in computer science is the ACM A.M.

We conclude with the text of Gordon Brown’s apology on behalf of the British government. 1 Background “Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show.” Bertrand Russell1 In 1935, at the age of 22, Alan Turing was elected a Fellow at King’s College, Cambridge. He had just finished his undergraduate degree in mathematics. He was bright and ambitious. As an undergraduate he had proved the Central Limit Theorem, probably the most fundamental result in statistics. This theorem explains the ubiquity of the normal distribution and why it occurs in so many guises.
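The Central Limit Theorem mentioned in the excerpt above can be seen directly in a quick simulation: means of many independent draws cluster into a normal shape regardless of the source distribution. A minimal stdlib-only sketch (the sample sizes and seed are illustrative choices, not from the source):

```python
# Central Limit Theorem demo: means of uniform draws approach a normal
# distribution. Uses only the Python standard library.
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(n):
    """Mean of n independent Uniform(0, 1) draws."""
    return statistics.fmean(random.random() for _ in range(n))

# 10,000 sample means, each over 50 uniform draws.
means = [sample_mean(50) for _ in range(10_000)]

# Uniform(0,1) has mean 1/2 and variance 1/12, so the mean of 50 draws
# should be approximately Normal(0.5, sqrt(1/600) ~= 0.0408).
print(round(statistics.fmean(means), 3))   # close to 0.5
print(round(statistics.stdev(means), 3))   # close to 0.041
```

The empirical mean and spread of the sample means match the CLT prediction, which is the sense in which the theorem "explains the ubiquity of the normal distribution."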

This is what we do in the next chapter where we allow the machine to write on the tape. This seemingly small change gives rise to an immense change in computational power. 4 Turing Machines “The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.”1 Alan Turing We now return to Turing at Cambridge in 1935. He wanted to prove Hilbert wrong by constructing a decision problem that was beyond the capability of any algorithm to answer correctly in every case. Since there was no definition of what it meant for a procedure to be an algorithm, his first step was to define this clearly.
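The excerpt's point that letting the machine write on the tape yields an immense jump in computational power can be made concrete with a tiny Turing machine simulator. A minimal sketch; the transition-table format and the bit-flipping example machine are my own illustrations, not from the source:

```python
# Minimal Turing machine simulator. A transition table maps
# (state, symbol) -> (new_state, symbol_to_write, head_move).
# The ability to WRITE to the tape is what the chapter highlights.

def run_tm(transitions, tape, state="start", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit of a binary string, halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "1011"))  # -> 0100
```

A read-only machine with finite states could only accept or reject its input; once it can write, intermediate results live on the tape and the machine can carry out arbitrarily long computations.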

pages: 352 words: 120,202

Tools for Thought: The History and Future of Mind-Expanding Technology
by Howard Rheingold
Published 14 May 2000

This is the kernel of the concept of stored programming, and although the ENIAC team was officially the first to describe an electronic computing device in such terms, it should be noted that the abstract version of exactly the same idea was proposed in Alan Turing's 1936 paper in the form of the single tape of the universal Turing machine. And at the same time the Pennsylvania group was putting together the EDVAC report, Turing was thinking again about the concept of stored programs: So the spring of 1945 saw the ENIAC team on one hand, and Alan Turing on the other, arrive naturally at the idea of constructing a universal machine with a single "tape." . . . But when Alan Turing spoke of "building a brain," he was working and thinking alone in his spare time, pottering around in a British back garden shed with a few pieces of equipment grudgingly conceded by the secret service.
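The stored-program idea described above — instructions and data sharing one memory, as on the universal machine's single tape — can be sketched with a toy accumulator machine. The three-opcode instruction set here is an assumption for illustration only:

```python
# Toy stored-program machine: code and data occupy the SAME memory list,
# echoing the single tape of the universal Turing machine.

def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":    # acc = memory[arg]
            acc = memory[arg]
        elif op == "ADD":   # acc += memory[arg]
            acc += memory[arg]
        elif op == "HALT":
            return acc

# Cells 0-5 hold instructions, cells 6-7 hold data; nothing but
# convention separates them, so a program could even modify itself.
memory = ["LOAD", 6, "ADD", 7, "HALT", 0, 40, 2]
print(run(memory))  # -> 42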

For nearly two years after his arrest, during which time the homophobic and "national security" pressures grew even stronger, Turing worked with the ironic knowledge that he was being destroyed by the very government his wartime work had been instrumental in preserving. In June, 1954, Alan Turing lay down on his bed, took a bite from an apple, dipped it in cyanide, and bit again. Like Ada, Alan Turing's unconventionality was part of his undoing, and like her he saw the software possibilities that stretched far beyond the limits of the computing machinery available at the time. Like her, he died too young. Other wartime research projects and other brilliant mathematicians were aware of Turing's work, particularly in the United States, where scientists were suddenly emerging into the nuclear age as figures of power.

Although Boole's lifework was to translate his inspiration into an algebraic system, he continued to be so impressed with the suddenness and force of the revelation that hit him that day in the meadow that he also wrote extensively about the powers of the unconscious mind. After his death Boole's widow turned these ideas into a kind of human potential cult, a hundred years before the "me decade." Alan Turing solved one of the most crucial mathematical problems of the modern era at the age of twenty-four, creating the theoretical basis for computation in the process. Then he became the top code-breaker in the world--when he wasn't bicycling around wearing a gas mask or running twenty miles with an alarm clock tied around his waist.

pages: 555 words: 163,712

War of Shadows: Codebreakers, Spies, and the Secret Struggle to Drive the Nazis From the Middle East
by Gershom Gorenberg
Published 19 Jan 2021

When he brought his idea to Dilly, it turned out, a team led by Cambridge mathematician John Jeffreys was already busy in the cottage, punching sheets with a machine built for the purpose. The noise was driving Alan Turing nuts. Turing moved to a loft in the cottage where the stable boys once slept. He climbed up by rope ladder and lowered a basket tied to a rope when he wanted coffee sent up.40 A loft and a coffee cup lowered by rope to the world of people: this arrangement fit Alan Turing. He was twenty-seven. He had not fit into his English boarding school as a boy. He did not like to play cricket. He ran long distances, mostly alone. Contrary to myths that grew up about Turing, he did not have a problem understanding people’s feelings.

On the shared love of danger, see Reuth, Rommel, 15–16; Fraser, Knight’s Cross, 142–143. 39. “Hitler, in Warsaw”; Reuth, Rommel, 39. 40. Welchman, Hut Six, 71–72; Mavies [sic] Batey, “Marian and Dilly,” in Ciechanowski et al., Rejewski, 72. 41. Andrew Hodges, Alan Turing: The Enigma (London: Vintage, 2014), 30–32, 73–74, 99–100, 263–265, Kindle; David Boyle, Alan Turing: Unlocking the Enigma (Endeavour, 2014), 17, 21–27, 55–59, Kindle. 42. HW 14/2, “Enigma—Position,” November 1, 1939. 43. HW 14/2, “Investigation of German Military Cyphers: Progress Report,” November 7, 1939. 44. HW 14/1, Denniston to “The Director” [Sinclair], September 16, 1939. 45.

Interviews with Lottie Milvain (née Dudley-Smith) and Tempe Denzer; “Death of Cdr. R. Dudley-Smith,” Gloucestershire Echo, October 3, 1967, courtesy of Lottie Milvain, page number not preserved. 19. Dermot Turing, XY&Z: The Real Story of How Enigma Was Broken (Stroud, Gloucestershire, UK: History Press, 2018), 277–281. 20. Hodges, Alan Turing, chap. 6–8; Boyle, Alan Turing, chap. 6–7. 21. “Cdr Edward Wilfred Harry ‘Jumbo’ Travis,” Bletchley Park, https://rollofhonour.bletchleypark.org.uk/search/record-detail/9170 (accessed August 31, 2015); “The End of Denniston’s Career, and His Legacy,” GCHQ, www.gchq.gov.uk/features/end-dennistons-career-and-his-legacy (accessed January 11, 2018). 22.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

The history of computing can be divided into an Old Testament and a New Testament: before and after electronic digital computers and the codes they spawned proliferated across the Earth. The Old Testament prophets, who delivered the underlying logic, included Thomas Hobbes and Gottfried Wilhelm Leibniz. The New Testament prophets included Alan Turing, John von Neumann, Claude Shannon, and Norbert Wiener. They delivered the machines. Alan Turing wondered what it would take for machines to become intelligent. John von Neumann wondered what it would take for machines to self-reproduce. Claude Shannon wondered what it would take for machines to communicate reliably, no matter how much noise intervened.

During World War II, he developed techniques for aiming antiaircraft fire by making models that could predict the future trajectory of an airplane by extrapolating from its past behavior. In Cybernetics and in The Human Use of Human Beings, Wiener notes that this past behavior includes quirks and habits of the human pilot, thus a mechanized device can predict the behavior of humans. Like Alan Turing, whose Turing Test suggested that computing machines could give responses to questions that were indistinguishable from human responses, Wiener was fascinated by the notion of capturing human behavior by mathematical description. In the 1940s, he applied his knowledge of control and feedback loops to neuromuscular feedback in living systems, and was responsible for bringing Warren McCulloch and Walter Pitts to MIT, where they did their pioneering work on artificial neural networks.

Steve Omohundro has pointed to a further difficulty, observing that intelligent entities must act to preserve their own existence. This tendency has nothing to do with a self-preservation instinct or any other biological notion; it’s just that an entity cannot achieve its objectives if it’s dead. According to Omohundro’s argument, a superintelligent machine that has an off switch—which some, including Alan Turing himself, in a 1951 talk on BBC Radio 3, have seen as our potential salvation—will take steps to disable the switch in some way.* Thus we may face the prospect of superintelligent machines—their actions by definition unpredictable by us and their imperfectly specified objectives conflicting with our own—whose motivations to preserve their existence in order to achieve those objectives may be insuperable. 1001 REASONS TO PAY NO ATTENTION Objections have been raised to these arguments, primarily by researchers within the AI community.

pages: 436 words: 127,642

When Einstein Walked With Gödel: Excursions to the Edge of Thought
by Jim Holt
Published 14 May 2018

Joseph Warren Dauben, Abraham Robinson: The Creation of Nonstandard Analysis, a Personal and Mathematical Odyssey (Princeton, 1995). 14. THE ADA PERPLEX: WAS BYRON’S DAUGHTER THE FIRST CODER? Dorothy Stein, Ada: A Life and Legacy (MIT, 1987). Benjamin Woolley, The Bride of Science: Romance, Reason, and Byron’s Daughter (McGraw-Hill, 1999). 15. ALAN TURING IN LIFE, LOGIC, AND DEATH Andrew Hodges, Alan Turing: The Enigma (Walker, 2000). David Leavitt, The Man Who Knew Too Much: Alan Turing and the Invention of the Computer (Norton, 2006). Martin Davis, Engines of Logic: Mathematics and the Origin of the Computer (Norton, 2000). 16. DR. STRANGELOVE MAKES A THINKING MACHINE George Dyson, Turing’s Cathedral: The Origins of the Digital Universe (Pantheon, 2012).

Or that the first functioning computer should consist not of mechanical components or vacuum tubes but of unemployed pompadour dressers? Such are the froufrou antecedents of the computer era—an era that can claim as its original publicist a nervy young woman, a poet’s daughter, who saw herself as a fairy. 15 Alan Turing in Life, Logic, and Death On June 8, 1954, Alan Turing, a forty-one-year-old research scientist at the University of Manchester, was found dead by his housekeeper. Before getting into bed the night before, he had taken a few bites out of an apple that was, apparently, laced with cyanide. At an inquest a few days later, his death was ruled a suicide.

Although the report contained design ideas from the ENIAC inventors, von Neumann was listed as the sole author, which occasioned some grumbling among the uncredited. And the report had another curious omission. It failed to mention the man who, as von Neumann well knew, had originally worked out the possibility of a universal computer: Alan Turing. An Englishman nearly a decade younger than von Neumann, Alan Turing came to Princeton in 1936 to earn a Ph.D. in mathematics. Earlier that year, at the age of twenty-three, he had resolved a deep problem in logic called the decision problem. The problem traces its origins to the seventeenth-century philosopher Leibniz, who dreamed of “a universal symbolistic in which all truths of reason would be reduced to a kind of calculus.”

pages: 855 words: 178,507

The Information: A History, a Theory, a Flood
by James Gleick
Published 1 Mar 2011

♦ SAID NOTHING TO EACH OTHER ABOUT THEIR WORK: Shannon interview with Robert Price: “A Conversation with Claude Shannon: One Man’s Approach to Problem Solving,” IEEE Communications Magazine 22 (1984): 125; cf. Alan Turing to Claude Shannon, 3 June 1953, Manuscript Division, Library of Congress. ♦ “NO, I’M NOT INTERESTED IN DEVELOPING A POWERFUL BRAIN”: Andrew Hodges, Alan Turing: The Enigma (London: Vintage, 1992), 251. ♦ “A CONFIRMED SOLITARY”: Max H. A. Newman to Alonzo Church, 31 May 1936, quoted in Andrew Hodges, Alan Turing, 113. ♦ “THE JUSTIFICATION … LIES IN THE FACT”: Alan M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society 42 (1936): 230–65

♦ “YOU SEE … THE FUNNY LITTLE ROUNDS”: letter from Alan Turing to his mother and father, summer 1923, AMT/K/1/3, Turing Digital Archive, http://www.turingarchive.org. ♦ “IN ELEMENTARY ARITHMETIC THE TWO-DIMENSIONAL CHARACTER”: Alan M. Turing, “On Computable Numbers,” 230–65. ♦ “THE THING HINGES ON GETTING THIS HALTING INSPECTOR”: “On the Seeming Paradox of Mechanizing Creativity,” in Douglas R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern (New York: Basic Books, 1985), 535. ♦ “IT USED TO BE SUPPOSED IN SCIENCE”: “The Nature of Spirit,” unpublished essay, 1932, in Andrew Hodges, Alan Turing, 63. ♦ “ONE CAN PICTURE AN INDUSTRIOUS AND DILIGENT CLERK”: Herbert B.

♦ “ONE CAN PICTURE AN INDUSTRIOUS AND DILIGENT CLERK”: Herbert B. Enderton, “Elements of Recursion Theory,” in Jon Barwise, Handbook of Mathematical Logic (Amsterdam: North Holland, 1977), 529. ♦ “A LOT OF PARTICULAR AND INTERESTING CODES”: Alan Turing to Sara Turing, 14 October 1936, quoted in Andrew Hodges, Alan Turing, 120. ♦ “THE ENEMY KNOWS THE SYSTEM BEING USED”: “Communication Theory of Secrecy Systems” (1948), in Claude Elwood Shannon, Collected Papers, ed. N. J. A. Sloane and Aaron D. Wyner (New York: IEEE Press, 1993), 90. ♦ “FROM THE POINT OF VIEW OF THE CRYPTANALYST”: Ibid., 113. ♦ “THE MERE SOUNDS OF SPEECH”: Edward Sapir, Language: An Introduction to the Study of Speech (New York: Harcourt, Brace, 1921), 21

pages: 236 words: 50,763

The Golden Ticket: P, NP, and the Search for the Impossible
by Lance Fortnow
Published 30 Mar 2013

Biology is a computer: it takes a DNA sequence to produce proteins that perform the necessary functions that make life possible. What about the process we call computation? Is there anything we can’t compute? That mystery was solved before we even had digital computers by the great mathematician Alan Turing in 1936. Turing wondered how mathematicians thought, and came up with a formal mathematical model of that thought process, a model we now call the Turing machine, which has become the standard model of computation. Alan Turing was born in London in 1912. In the early 1930s he attended King’s College, Cambridge University, where he excelled in mathematics. During that time he thought of computation in terms of himself as a mathematician.

Mathematics In 1928 the renowned German mathematician David Hilbert put forth his great Entscheidungsproblem, a challenge to find an algorithmic procedure to find the truth or falsehood of any mathematical statement. In 1931 Kurt Gödel showed there must be some statements that any given set of axioms could not prove true or false at all. Influenced by this work, a few years later Alonzo Church and Alan Turing independently showed that no such algorithm exists. What if we restrict ourselves to relatively short proofs, say, of the kind that could fit in a short book? We can solve this computationally by looking at all possible short proofs for some mathematical statement. This is an NP question since we can recognize a good proof when we see one but finding one in practice remains difficult.
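The NP framing in the excerpt above — a candidate witness is quick to check, but finding one may require exhaustive search — can be illustrated with a toy Boolean-satisfiability solver. A sketch under my own conventions (clauses as lists of signed variable indices), not from the source:

```python
# NP in miniature: verifying a truth assignment is fast (linear in the
# formula), but the naive search tries all 2^n assignments.
from itertools import product

def check(clauses, assignment):
    """Fast verification: every clause has a satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    """Exponential search: try every assignment, verify each quickly."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(clauses, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(clauses, 3))
```

The asymmetry between `check` (cheap) and `brute_force_sat` (exponential) is exactly the gap the P versus NP question asks about.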

In one fell swoop, Karp tied together all these famous difficult-to-solve computational problems. From that point on, the P versus NP question took center stage. Every year the Association for Computing Machinery awards the ACM Turing Award, the computer science equivalent of the Nobel Prize, named for Alan Turing, who gave computer science its foundations in the 1930s. In 1982 the ACM presented the Turing Award to Stephen Cook for his work formulating the P versus NP problem. But one Turing Award for the P versus NP problem is not enough, and in 1985 Richard Karp received the award for his work on algorithms, most notably for the twenty-one NP-complete problems.

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

AD 50: Hero of Alexandria designs first mechanical automata.
1275: Ramon Lull invents Ars Magna, a logical machine.
1637: Descartes declares cogito ergo sum (‘I think therefore I am’).
1642: Blaise Pascal invents the Pascaline, a mechanical calculator.
1726: Jonathan Swift publishes Gulliver’s Travels, which includes the description of a machine that can write any book.
1801: Joseph Marie Jacquard invents a textiles loom that uses punched cards.
1811: Luddite movement in Great Britain against the automation of manual jobs.
1818: Mary Shelley publishes Frankenstein.
1835: Joseph Henry invents the electronic relay that allows electrical automation and switching.
1840: Charles Babbage lectures at the University of Turin, where he describes the Analytical Engine.
1843: Ada Lovelace writes the first computer program.
1847: George Boole invents symbolic and binary logic.
1876: Alexander Graham Bell invents the telephone.
1879: Thomas Edison invents the light bulb.
1879: Gottlob Frege invents predicate logic and calculus.
1910: Bertrand Russell and Alfred North Whitehead publish Principia Mathematica.
1920: Karel Capek coins the term ‘robot’ in his play R.U.R.
1921: Ludwig Wittgenstein publishes Tractatus Logico-Philosophicus.
1931: Kurt Gödel publishes the incompleteness theorem.
1937: Alan Turing invents the ‘Turing machine’.
1938: Claude Shannon demonstrates that symbolic logic can be implemented using electronic relays.
1941: Konrad Zuse constructs Z3, the first Turing-complete computer.
1942: Alan Turing and Claude Shannon work together at Bell Labs.
1943: Warren McCulloch and Walter Pitts demonstrate the equivalence between electronics and neurons.
1943: IBM funds the construction of Harvard Mark 1, the first program-controlled calculator.
1943: Charles Wynn-Williams and others create the computer Colossus at Bletchley Park.
1945: John von Neumann suggests a computer architecture whereby programs are stored in the memory.
1946: ENIAC, the first electronic general-purpose computer, is built.
1947: Invention of the transistor at Bell Labs.
1948: Norbert Wiener publishes Cybernetics.
1950: Alan Turing proposes the ‘Turing Test’.
1950: Isaac Asimov publishes I, Robot.
1952: Herman Carr produces the first one-dimensional MRI image.
1953: Claude Shannon hires Marvin Minsky and John McCarthy at Bell Labs.
1953: Ludwig Wittgenstein’s Philosophical Investigations published in German (two years after his death).
1954: Alan Turing commits suicide with a cyanide-laced apple.
1956: The Dartmouth conference; the term ‘Artificial Intelligence’ is coined by John McCarthy.
1957: Allen Newell and Herbert Simon build the ‘General Problem Solver’.
1958: John McCarthy creates the LISP programming language.
1959: John McCarthy and Marvin Minsky establish the AI lab at MIT.
1963: The US government awards $2.2 million to the AI lab at MIT for machine-aided cognition.
1965: Hubert Dreyfus argues against the possibility of Artificial Intelligence.
1968: Stanley Kubrick introduces HAL in the film 2001: A Space Odyssey.
1971: Leon Chua envisions the memristor.
1972: Alain Colmerauer develops the Prolog programming language.
1973: The Lighthill report influences the British government to abandon research in AI.
1976: Hans Moravec builds the ‘Stanford Cart’, the first autonomous vehicle.

It is a game of deception. The man in the first room will try to convince the judge of his manhood. The woman will impersonate the man, counter his claims, and do her utmost to deceive the judge into believing that she is the man. The judge must guess correctly who is who. The English mathematician Alan Turing, one of the fathers of Artificial Intelligence, proposed this test in a landmark 1950 paper,1 noting that if one were to slightly modify this ‘imitation game’ and, instead of the woman, there was a machine in the second room, then one would have the best test for judging whether that machine was intelligent.

pages: 415 words: 114,840

A Mind at Play: How Claude Shannon Invented the Information Age
by Jimmy Soni and Rob Goodman
Published 17 Jul 2017

“We talked not at all”: Price, “Oral History: Claude E. Shannon.” “I reached New York” . . . “I had been intending”: Alan Turing, “Alan Turing’s Report from Washington DC, November 1942.” “incomplete alliance”: Andrew Hodges, “Alan Turing as UK-USA Link, 1942 Onwards,” Alan Turing Internet Scrapbook, www.turing.org.uk/scrapbook/ukusa.html. “I am persuaded”: Turing, “Alan Turing’s Report from Washington DC, November 1942.” “we would talk about”: Price, “Oral History: Claude E. Shannon.” “Well, back in ’42” . . . “a very, very impressive guy”: Shannon, interviewed by Hagemeyer, February 28, 1977. “While there we went over”: Price, “Oral History: Claude E.

“The Bush Differential Analyzer and Its Applications.” Nature 146 (September 7, 1940): 319–23. Hatch, David A., and Robert Louis Benson. “The Korean War: The SIGINT Background.” National Security Agency. www.nsa.gov/public_info/declass/korean_war/sigint_bg.shtml. Hodges, Andrew. Alan Turing: The Enigma. Princeton, NJ: Princeton University Press, 1983. ———. “Alan Turing as UK-USA Link, 1942 Onwards.” Alan Turing Internet Scrapbook. www.turing.org.uk/scrapbook/ukusa.html. Horgan, John. “Claude E. Shannon: Unicyclist, Juggler, and Father of Information Theory.” Scientific American, January 1990. ———. “Poetic Masterpiece of Claude Shannon, Father of Information Theory, Published for the First Time.”

“Accept distortion for security”: Dave Tompkins, How to Wreck a Nice Beach: The Vocoder from World War II to Hip-Hop, The Machine Speaks (Chicago: Stop Smiling Books, 2011), 63. “Members working on the job”: Andrew Hodges, Alan Turing: The Enigma (Princeton, NJ: Princeton University Press, 1983), 247. “It worked”: Ibid., 312. “At a recent world fair”: Bush, “As We May Think.” “Phrt fdygui”: Sterling, “Churchill and Intelligence,” 34. “not a lot of laboratories”: Shannon, interviewed by Hagemeyer, February 28, 1977. “a very down to earth discipline”: Shannon, interviewed by Hagemeyer, February 28, 1977. Chapter 12: Turing “Here [Turing] met a person”: Hodges, Alan Turing, 314. “I think Turing had” . . . “We talked not at all”: Price, “Oral History: Claude E.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

This was one of the arguments against AI that was refuted by Alan Turing, “Computing machinery and intelligence,” Mind 59 (1950): 433–60. 2. The earliest known article on existential risk from AI was by Richard Thornton, “The age of machinery,” Primitive Expounder IV (1847): 281. 3. “The Book of the Machines” was based on an earlier article by Samuel Butler, “Darwin among the machines,” The Press (Christchurch, New Zealand), June 13, 1863. 4. Another lecture in which Turing predicted the subjugation of humankind: Alan Turing, “Intelligent machinery, a heretical theory” (lecture given to the 51 Society, Manchester, 1951).

Just by typing, you can create programs that turn the box into something new, perhaps something that magically synthesizes moving images of oceangoing ships hitting icebergs or alien planets with tall blue people; type some more, and it translates English into Chinese; type some more, and it listens and speaks; type some more, and it defeats the world chess champion. This ability of a single box to carry out any process that you can imagine is called universality, a concept first introduced by Alan Turing in 1936.31 Universality means that we do not need separate machines for arithmetic, machine translation, chess, speech understanding, or animation: one machine does it all. Your laptop is essentially identical to the vast server farms run by the world’s largest IT companies—even those equipped with fancy, special-purpose tensor processing units for machine learning.
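Universality can be illustrated with a toy: one fixed machine whose behaviour is determined entirely by the program it is fed as data. This sketch is an illustration, not Turing's construction (the instruction set and all names are invented), but it shows the same point: change the program, not the machine, and the same box computes a different function.

```python
def machine(program, x):
    """A single general-purpose evaluator for a tiny stack language.

    The 'program' is just data: a list of (opcode, argument) pairs.
    """
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)          # push a constant
        elif op == "arg":
            stack.append(x)            # push the machine's input
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Two different "programs" run on the one machine.
double = [("arg", None), ("push", 2), ("mul", None)]
square = [("arg", None), ("arg", None), ("mul", None)]
print(machine(double, 21), machine(square, 5))  # 42 25
```

A real computer is this idea scaled up: the processor is the fixed evaluator, and everything else, arithmetic, translation, chess, is supplied as a program.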

Fortunately for us, we have a distinct advantage over machines when it comes to knowing how other humans feel and how they will react. Nearly every human knows what it’s like to hit one’s thumb with a hammer or to feel unrequited love. Counteracting this natural human advantage is a natural human disadvantage: the tendency to be fooled by appearances—especially human appearances. Alan Turing warned against making robots resemble humans:34 I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual, characteristics such as the shape of the human body; it appears to me quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

We have many possible choices for the beginning of AI, but for me the beginning of the AI story coincides with the beginning of the story of computing itself, for which we have a pretty clear starting point: King’s College, Cambridge, in 1935, and a brilliant but unconventional young student called Alan Turing. Cambridge, 1935 It is hard to imagine now, because he is about as famous as any mathematician could ever hope to be, but until the 1980s, the name of Alan Turing was virtually unknown outside the fields of mathematics and computer science. While students of mathematics and computing might have come across Turing’s name in their studies, they would have known little about the full extent of his achievements, or his tragic, untimely death.

If you are interested in detailed questions of history and the way in which the field evolved, the book you want is Nils Nilsson’s The Quest for Artificial Intelligence (Cambridge University Press, 2010). This is a superlative historical guide to the many threads of modern AI, written by one of the field’s greatest researchers. There are now several books about the life of Alan Turing, but the best by far is the one that gave the world the Turing story: Andrew Hodges’ Alan Turing: The Enigma (Burnett Books/Hutchinson, 1983). As an undergraduate, I very much enjoyed the three-volume Handbook of Artificial Intelligence published by William Kaufmann & Heuristech Press (Volume I, ed. Avron Barr and Edward A. Feigenbaum, 1981; Volume II, ed.

So, why has AI proved to be so difficult? To understand the answer to this question, we need to understand what computers are and what computers can do, at their most fundamental level. This takes us into the realm of some of the deepest questions in mathematics, and the work of one of the greatest minds of the twentieth century: Alan Turing. The History of AI My second main goal in this book is to tell you the story of AI from its inception. Every story must have a plot, and we are told there are really only seven basic plots for all the stories in existence, so which of these best fits the story of AI? Many of my colleagues would dearly like it to be ‘Rags to Riches’, and it has certainly turned out that way for a clever (or lucky) few.

Pandas for Everyone: Python Data Analysis
by Unknown

# for a Series
scientist_names_from_pickle = pd.read_pickle('../output/scientists_names_
print(scientist_names_from_pickle)

0        Rosaline Franklin
1           William Gosset
2     Florence Nightingale
3              Marie Curie
4            Rachel Carson
5                John Snow
6              Alan Turing
7             Johann Gauss
Name: Name, dtype: object

# for a DataFrame
scientists_from_pickle = pd.read_pickle('../output/scientists_df.pickle')
print(scientists_from_pickle)

                   Name        Born        Died  Age          Occupation  \
0     Rosaline Franklin  1920-07-25  1958-04-16   66             Chemist
1        William Gosset  1876-06-13  1937-10-16   56        Statistician
2  Florence Nightingale  1820-05-12  1910-08-13   41               Nurse
3           Marie Curie  1867-11-07  1934-07-04   77             Chemist
4         Rachel Carson  1907-05-27  1964-04-14   90           Biologist
5             John Snow  1813-03-15  1858-06-16   45           Physician
6           Alan Turing  1912-06-23  1954-06-07   37  Computer Scientist
7          Johann Gauss  1777-04-30  1855-02-23   61       Mathematician

      born_dt     died_dt  age_days_dt  age_years_dt
0  1920-07-25  1958-04-16   13779 days          37.0
1  1876-06-13  1937-10-16   22404 days          61.0
2  1820-05-12  1910-08-13   32964 days          90.0
3  1867-11-07  1934-07-04   24345 days          66.0
4  1907-05-27  1964-04-14   20777 days          56.0
5  1813-03-15  1858-06-16   16529 days          45.0
6  1912-06-23  1954-06-07   15324 days          41.0
7  1777-04-30  1855-02-23   28422 days          77.0

You will see pickle files saved as .p, .pkl, or .pickle.

2.8.2 CSV

Comma-separated values (CSV) are the most flexible data storage type.

# OTHERWISE NEED TO FIND ANOTHER DATASET
first_half = scientists[:4]
second_half = scientists[4:]

print(first_half)

                   Name        Born        Died  Age    Occupation
0     Rosaline Franklin  1920-07-25  1958-04-16   37       Chemist
1        William Gosset  1876-06-13  1937-10-16   61  Statistician
2  Florence Nightingale  1820-05-12  1910-08-13   90         Nurse
3           Marie Curie  1867-11-07  1934-07-04   66       Chemist

print(second_half)

            Name        Born        Died  Age          Occupation
4  Rachel Carson  1907-05-27  1964-04-14   56           Biologist
5      John Snow  1813-03-15  1858-06-16   45           Physician
6    Alan Turing  1912-06-23  1954-06-07   41  Computer Scientist
7   Johann Gauss  1777-04-30  1855-02-23   77       Mathematician

print(first_half + second_half)

  Name Born Died  Age Occupation
0  NaN  NaN  NaN  NaN        NaN
1  NaN  NaN  NaN  NaN        NaN
2  NaN  NaN  NaN  NaN        NaN
3  NaN  NaN  NaN  NaN        NaN
4  NaN  NaN  NaN  NaN        NaN
5  NaN  NaN  NaN  NaN        NaN
6  NaN  NaN  NaN  NaN        NaN
7  NaN  NaN  NaN  NaN        NaN

print(scientists * 2)

                                       Name                  Born  \
0        Rosaline FranklinRosaline Franklin  1920-07-251920-07-25
1              William GossetWilliam Gosset  1876-06-131876-06-13
2  Florence NightingaleFlorence Nightingale  1820-05-121820-05-12
3                    Marie CurieMarie Curie  1867-11-071867-11-07
4                Rachel CarsonRachel Carson  1907-05-271907-05-27
5                        John SnowJohn Snow  1813-03-151813-03-15
6                    Alan TuringAlan Turing  1912-06-231912-06-23
7                  Johann GaussJohann Gauss  1777-04-301777-04-30

                   Died  Age                            Occupation
0  1958-04-161958-04-16   74                        ChemistChemist
1  1937-10-161937-10-16  122              StatisticianStatistician
2  1910-08-131910-08-13  180                            NurseNurse
3  1934-07-041934-07-04  132                        ChemistChemist
4  1964-04-141964-04-14  112                    BiologistBiologist
5  1858-06-161858-06-16   90                    PhysicianPhysician
6  1954-06-071954-06-07   82  Computer ScientistComputer Scientist
7  1855-02-231855-02-23  154          MathematicianMathematician

2.7 Making changes to Series and DataFrames

2.7.1 Add additional columns

Now that we know various ways of subsetting and slicing our data (see Table 2-1), we should be able to find values of interest and assign new values to them.
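The all-NaN frame produced by adding the two halves can look surprising; it falls out of pandas index alignment, where arithmetic matches rows by index label rather than by position. A minimal self-contained sketch (not the book's code; the series names are invented):

```python
import pandas as pd

# Two series with disjoint index labels, like first_half (labels 0-3)
# and second_half (labels 4-7) above.
s1 = pd.Series([10, 20], index=[0, 1])
s2 = pd.Series([30, 40], index=[2, 3])

# Addition aligns on labels: the result covers the union {0, 1, 2, 3},
# but no label exists in BOTH series, so every aligned sum is missing.
print(s1 + s2)
```

If the goal is to stack the pieces back together rather than perform aligned arithmetic, `pd.concat([s1, s2])` keeps all four values instead.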

We can recalculate the ‘real’ age using datetime arithmetic.6

6 https://docs.python.org/3.5/library/random.html#random.shuffle

# subtracting dates will give us number of days
scientists['age_days_dt'] = (scientists['died_dt'] -
                             scientists['born_dt'])
print(scientists)

                   Name        Born        Died  Age          Occupation  \
0     Rosaline Franklin  1920-07-25  1958-04-16   66             Chemist
1        William Gosset  1876-06-13  1937-10-16   56        Statistician
2  Florence Nightingale  1820-05-12  1910-08-13   41               Nurse
3           Marie Curie  1867-11-07  1934-07-04   77             Chemist
4         Rachel Carson  1907-05-27  1964-04-14   90           Biologist
5             John Snow  1813-03-15  1858-06-16   45           Physician
6           Alan Turing  1912-06-23  1954-06-07   37  Computer Scientist
7          Johann Gauss  1777-04-30  1855-02-23   61       Mathematician

      born_dt     died_dt  age_days_dt
0  1920-07-25  1958-04-16   13779 days
1  1876-06-13  1937-10-16   22404 days
2  1820-05-12  1910-08-13   32964 days
3  1867-11-07  1934-07-04   24345 days
4  1907-05-27  1964-04-14   20777 days
5  1813-03-15  1858-06-16   16529 days
6  1912-06-23  1954-06-07   15324 days
7  1777-04-30  1855-02-23   28422 days

# we can convert the value to just the year
# using the astype method
scientists['age_years_dt'] = scientists['age_days_dt'].astype('timedelta64[Y]')
print(scientists)

                   Name        Born        Died  Age          Occupation  \
0     Rosaline Franklin  1920-07-25  1958-04-16   66             Chemist
1        William Gosset  1876-06-13  1937-10-16   56        Statistician
2  Florence Nightingale  1820-05-12  1910-08-13   41               Nurse
3           Marie Curie  1867-11-07  1934-07-04   77             Chemist
4         Rachel Carson  1907-05-27  1964-04-14   90           Biologist
5             John Snow  1813-03-15  1858-06-16   45           Physician
6           Alan Turing  1912-06-23  1954-06-07   37  Computer Scientist
7          Johann Gauss  1777-04-30  1855-02-23   61       Mathematician

      born_dt     died_dt  age_days_dt  age_years_dt
0  1920-07-25  1958-04-16   13779 days          37.0
1  1876-06-13  1937-10-16   22404 days          61.0
2  1820-05-12  1910-08-13   32964 days          90.0
3  1867-11-07  1934-07-04   24345 days          66.0
4  1907-05-27  1964-04-14   20777 days          56.0
5  1813-03-15  1858-06-16   16529 days          45.0
6  1912-06-23  1954-06-07   15324 days          41.0
7  1777-04-30  1855-02-23   28422 days          77.0

Note: We could have directly assigned the column to the converted datetime values, but the point is that an assignment still needed to be performed.
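The date arithmetic above can be checked with nothing but Python's standard library: subtracting two `date` objects yields a `timedelta`, and its `days` attribute matches the `age_days_dt` column (15,324 days for Turing). A small sketch:

```python
from datetime import date

# Alan Turing's dates, as they appear in the scientists DataFrame
born = date(1912, 6, 23)
died = date(1954, 6, 7)

age_days = died - born                    # a timedelta of 15324 days
age_years = int(age_days.days / 365.25)   # truncate to whole years

print(age_days.days, age_years)  # 15324 41
```

This mirrors the `age_years_dt` value of 41.0 in the table: dividing by 365.25 accounts for leap years, and truncation gives the age at death in whole years.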

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

It’s about the nature of creativity, the future of employment, and what happens when all knowledge is data and can be stored electronically. It’s about what we’re trying to do when we make machines smarter than we are, how humans still have the edge (for now), and the question of whether you and I aren’t thinking machines of a sort as well. The pioneering British mathematician and computer scientist Alan Turing predicted in 1950 that by the end of the twentieth century, ‘the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted’. Like many futurist predictions about technology, he was optimistic in his timeline – although he wasn’t off by too much.

Compared with the unreliable memory of humans, a machine capable of accessing thousands of items in the span of microseconds had a clear advantage. There are entire books written about the birth of modern computing, but three men stand out as laying the philosophical and technical groundwork for the field that became known as Artificial Intelligence: John von Neumann, Alan Turing and Claude Shannon. A native of Hungary, von Neumann was born in 1903 into a Jewish banking family in Budapest. In 1930, he arrived at Princeton University as a maths teacher and, by 1933, had established himself as one of six professors in the new Institute for Advanced Study in Princeton: a position he stayed in until the day he died.

Unlike some of his contemporaries, he did not believe a computer would be able to think in the way that a human can, but he did help establish the parallels that exist with human physiology. The parts of a computer, he wrote in one paper, ‘correspond to the associative neurons in the human nervous system. It remains to discuss the equivalents of the sensory or afferent and the motor or efferent neurons.’ Others would happily take up the challenge. Alan Turing, meanwhile, was a British mathematician and cryptanalyst. During the Second World War, he led a team for the Government Code and Cypher School at Britain’s secret code-breaking centre, Bletchley Park. There he came up with various techniques for cracking German codes, most famously an electromechanical device capable of working out the settings for the Enigma machine.

pages: 256 words: 73,068

12 Bytes: How We Got Here. Where We Might Go Next
by Jeanette Winterson
Published 15 Mar 2021

Artificial intelligence was the term coined in the mid-1950s by John McCarthy – an American computing expert – who, like his friend Marvin Minsky, believed computers could achieve human levels of intelligence by the 1970s. Alan Turing had thought the year 2000 was realistic. Yet from the coining of a term – AI – 40 years would pass before IBM’s Deep Blue beat Kasparov at chess in 1997. That’s because computational power is a combination of computer storage (memory) and processing speed. Simply, computers weren’t powerful enough to do what McCarthy, Minsky and Turing knew they would be able to do. And before those men, there was Ada Lovelace, the early-19th-century genius who inspired Alan Turing to devise the Turing Test – when we can no longer tell the difference between AI and bio-human.

Set against that thought is the fact that the biggest touring bands in the world are still old-fashioned (and increasingly old-aged) guys who write their music and play their instruments. But this is probably the end of an era. As David Cope puts it: ‘The question isn’t whether computers possess a soul but if we possess one.’ * * * Alan Turing, the British mathematician who designed and built the Enigma code-breaking machine at Bletchley Park (Turing was played by Benedict Cumberbatch in the movie The Imitation Game), wasn’t interested in whether or not a computer could have, or would have, a soul, but he was interested in whether or not a computer could originate (as well as learn) independently of human input.

Turing thought that machine intelligence would pass his test by the year 2000. That hasn’t happened, but we are getting closer – and while we are getting closer we might decide, or AI might decide, that it doesn’t matter. * * * Mary Shelley may be closer to the world that is to come than either Ada Lovelace or Alan Turing. A new kind of life-form may not need to be like a human at all (the cute helper bot or the virtual digital assistant may be just a distraction, a sideline, a bridge. Pure intelligence will be other) – and that’s something that is achingly, heartbreakingly, clear in Frankenstein. The monster is initially designed to be ‘like’ us.

Decoding Organization: Bletchley Park, Codebreaking and Organization Studies
by Christopher Grey
Published 22 Mar 2012

Decoding Organization How was Bletchley Park made as an organization? How was signals intelligence constructed as a field? What was Bletchley Park’s culture and how was its work co-ordinated? Bletchley Park was not just the home of geniuses such as Alan Turing, it was also the workplace of thousands of other people, mostly women, and their organization was a key component in the cracking of Enigma. Challenging many popular perceptions, this book examines the hitherto unexamined complexities of how 10,000 people were brought together in complete secrecy during World War II to work on ciphers. Unlike most organizational studies, this book decodes, rather than encodes, the processes of organization and examines the structures, cultures and the work itself of Bletchley Park using archive and oral history sources.

The BP site is now a major museum attracting many thousands of visitors each year and is regularly in the news because of the enduring interest in its codebreaking achievements and contribution to the conduct of WW2, its role in the development of computing and not least because of public interest in its best known luminary, Alan Turing (Hodges, 1982). There is a stream of popular literature explaining what happened at BP (e.g. Smith, 1998; McKay, 2010) and a growing number of reminiscences of those who worked there (e.g. Welchman, 1982; Hinsley and Stripp, 1993; Calvocoressi, 2001; Page, 2002, 2003; Hill, 2004; Luke, 2005; Watkins, 2006; Paterson, 2007; Hogarth, 2008; Thirsk, 2008; Briggs, 2011; Pearson, 2011)4.

Certainly it is significant that, as mentioned in the previous chapter, the van Cutsem report was commissioned not by Denniston but jointly by the DMI and ‘C’37. Perhaps even more damaging was one of the most famous events in the history of BP. This was the frustration about lack of resources which led Gordon Welchman, Stuart Milner-Barry, Alan Turing and Hugh Alexander to bypass Denniston (and ‘C’) to appeal directly to Winston Churchill (who had recently visited BP) in a letter hand-delivered to 10 Downing Street on 21 October 1941. The symbolism of that date – Trafalgar Day – would surely not have been lost on Churchill. At all events, his response, accompanied, famously, by the injunction to ‘action this day’ was ‘[m]ake sure they have all they want on extreme priority and report to me that this has been done’38.

pages: 1,243 words: 167,097

One Day in August: Ian Fleming, Enigma, and the Deadly Raid on Dieppe
by David O’keefe
Published 5 Nov 2020

Ian Fleming in naval uniform 3. Rear Admiral John Godfrey 4. Admiral Karl Dönitz 5. The naval four-rotor Enigma machine 6. The ‘Morrison Wall’ at Bletchley Park 7. Two sets of spare rotor wheels for the Enigma machine 8. Three Enigma rotor wheels laid out on their sides 9. Frank Birch 10. Alan Turing 11. A captured Enigma wheel 12. Harry Hinsley, Sir Edward Travis and John Tiltman 13. Major General Hamilton Roberts 14. Captain Peter Huntington-Whiteley 15. No. 10 Platoon of X Company, Royal Marines 16. Captain John ‘Jock’ Hughes-Hallett 17. R.E.D. ‘Red’ Ryder 18. An aerial reconnaissance photo of Dieppe harbour 19.

It was the painstaking assembly of minuscule pieces of evidence – balanced, sorted and weighed against other evidence – that has finally allowed me to tell the ‘untold’ story of Dieppe. New technologies such as the internet, the microchip and digitization – ones that would have delighted Charles Babbage, Alan Turing, Frank Birch and Ian Fleming – have allowed me to consult more than 150,000 pages of documents from archives on two continents and over 50,000 pages of published primary and secondary source material for this book. The methodology I employ is straightforward, based in large part on the sage advice of a multitude of mentors in the historical realm and, perhaps rather ironically, on the musings of Colonel Peter Wright, the man who served as General Ham Roberts’s intelligence officer aboard the Calpe.

In this context, there can be little doubt that the ‘intelligence booty’ Fleming sought in Ruthless was akin to the Holy Grail for Bletchley Park. Fleming had dreamed up the operation to assist the gifted cryptanalysts who worked in Bletchley’s Naval Section – brilliant mathematicians, physicists and classical scholars such as Dillwyn ‘Dilly’ Knox, Alan Turing and Peter Twinn. These men now found themselves stymied in their critical struggle to break into German naval communications enciphered on a specially designed Enigma encryption machine. Despite their impressive intellectual efforts, they desperately needed ‘cribs,’ or ‘cheats’ – plain-language German text – that they could match up with a stretch of ciphertext and thus discover the daily ‘key’ setting, or password, which would unlock the contents of the top-secret German messages.
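The crib technique described here, guessing a stretch of plaintext and testing it against ciphertext to recover the day's key, can be illustrated with a toy cipher. The sketch below uses a simple Caesar shift as a stand-in (nothing like Enigma's actual rotor mechanism, and all names are invented) to show why a predictable phrase such as a weather report was so valuable to a codebreaker:

```python
import string

ALPHA = string.ascii_uppercase

def caesar_encrypt(text, key):
    """Toy cipher standing in for Enigma: shift each letter by `key`."""
    return "".join(ALPHA[(ALPHA.index(c) + key) % 26] for c in text)

def crack_with_crib(ciphertext, crib):
    """Try every key; the one that makes the crib appear is the day's key."""
    for key in range(26):
        if caesar_encrypt(crib, key) in ciphertext:
            return key
    return None

key = 7  # the "daily key" the codebreakers want to recover
ct = caesar_encrypt("WEATHERREPORTFOLLOWS", key)
print(crack_with_crib(ct, "WEATHER"))  # recovers 7
```

With 26 possible keys this search is trivial; Enigma's key space was astronomically larger, which is why Bletchley mechanized this style of crib testing with the Bombe rather than trying settings by hand.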

pages: 329 words: 88,954

Emergence
by Steven Johnson

If we could only figure out how the Dictyostelium pull it off, maybe we would gain some insight on our own baffling togetherness. “I was at Sloan-Kettering in the biomath department—and it was a very small department,” Keller says today, laughing. While the field of mathematical biology was relatively new in the late sixties, it had a fascinating, if enigmatic, precedent in a then-little-known essay written by Alan Turing, the brilliant English code-breaker from World War II who also helped invent the digital computer. One of Turing’s last published papers, before his death in 1954, had studied the riddle of “morphogenesis”—the capacity of all life-forms to develop ever more baroque bodies out of impossibly simple beginnings.

People had been thinking about emergent behavior in all its diverse guises for centuries, if not millennia, but all that thinking had consistently been ignored as a unified body of work—because there was nothing unified about its body. There were isolated cells pursuing the mysteries of emergence, but no aggregation. Indeed, some of the great minds of the last few centuries—Adam Smith, Friedrich Engels, Charles Darwin, Alan Turing—contributed to the unknown science of self-organization, but because the science didn’t exist yet as a recognized field, their work ended up being filed on more familiar shelves. From a certain angle, those taxonomies made sense, because the leading figures of this new discipline didn’t even themselves realize that they were struggling to understand the laws of emergence.

Nearly a hundred years later, the area has christened itself the Gay Village and actively promotes its coffee bars and boutiques as a must-see Manchester tourist destination, like Manhattan’s Christopher Street and San Francisco’s Castro. The pattern is now broadcast to a wider audience, but it has not lost its shape. But even at a lower amplitude, that signal was still loud enough to attract the attention of another of Manchester’s illustrious immigrants: the British polymath Alan Turing. As part of his heroic contribution to the war effort, Turing had been a student of mathematical patterns, designing the equations and the machines that cracked the “unbreakable” German code of the Enigma device. After a frustrating three-year stint at the National Physical Laboratory in London, Turing moved to Manchester in 1948 to help run the university’s embryonic computing lab.

pages: 440 words: 109,150

The Secrets of Station X: How the Bletchley Park codebreakers helped win the war
by Michael Smith
Published 30 Oct 2011

The main problem for Knox was what he called ‘the QWERTZU’, by which he meant the way in which the letters on the keyboard of the Wehrmacht Enigma machines were wired to the letters on the wheels inside the machine, and he left the meeting in Paris none the wiser. One good thing did however come out of the January 1939 meeting. It became clear that the Poles were using mathematicians to try to break Enigma and, when they returned to the UK, Denniston recruited two mathematicians to assist Knox. One was Alan Turing, a 27-year-old fellow of King’s College, Cambridge, who began working part-time, coming in on occasional days with the intention of joining full time when the war began. The other was Peter Twinn, a 23-year-old mathematician from Brasenose College, Oxford, who started work immediately. ‘When I joined GC&CS in early February 1939 and went to join Dilly Knox to work on the German services’ Enigma traffic, the outlook was not encouraging,’ Twinn recalled.

But his good humour soon returned after they told him that the keys were wired up to the encypherment mechanism in alphabetical order, A to A, B to B, etc. Although one female codebreaker had suggested this as a possibility, it had never been seriously considered, Twinn recalled. ‘It was such an obvious thing to do, really a silly thing to do, that nobody, not Dilly Knox or Tony Kendrick or Alan Turing, ever thought it worthwhile trying,’ he recalled. ‘I know in retrospect it looks daft. I can only say that’s how it struck all of us and none of the others were idiots.’ A few weeks later the Poles gave both the French and British codebreakers clones of the steckered Enigma. Bertrand, who had been given both machines and asked to pass one on to the British, later described taking the British copy to London on the Golden Arrow express train on 16 August 1939.

The 21-year-old from Belfast was brought in at the end of January 1940 by Gordon Welchman, who had been his mathematics tutor at Sidney Sussex College, Cambridge, and recognised that he had exceptional talent as a mathematician. He was one of the new boys trying to think of ways to break into the Red. After being taught ‘the mysteries of the Enigma’ by Alan Turing and Tony Kendrick, Herivel was sent to Hut 6. ‘I had been recruited by Welchman and I was going to work in his show,’ Herivel recalled. ‘I do remember that when I came to Hut 6, we were doing very badly in breaking into the Red code. Every evening, when I went back to my digs and when I’d had my supper, I would sit down in front of the fire and put my feet up and think of some method of breaking into the Red.’

The Code Book: The Science of Secrecy From Ancient Egypt to Quantum Cryptography
by Simon Singh
Published 1 Jan 1999

There were many great cryptanalysts and many significant breakthroughs, and it would take several large volumes to describe the individual contributions in detail. However, if there is one figure who deserves to be singled out, it is Alan Turing, who identified Enigma’s greatest weakness and ruthlessly exploited it. Thanks to Turing, it became possible to crack the Enigma cipher under even the most difficult circumstances. Alan Turing was conceived in the autumn of 1911 in Chatrapur, a town near Madras in southern India, where his father Julius Turing was a member of the Indian civil service. Julius and his wife Ethel were determined that their son should be born in Britain, and returned to London, where Alan was born on June 23, 1912.

Chapter 4 Hinsley, F.H., British Intelligence in the Second World War: Its Influence on Strategy and Operations (London: HMSO, 1975). The authoritative record of intelligence in the Second World War, including the role of Ultra intelligence. Hodges, Andrew, Alan Turing: The Enigma (London: Vintage, 1992). The life and work of Alan Turing. One of the best scientific biographies ever written. Kahn, David, Seizing the Enigma (London: Arrow, 1996). Kahn’s history of the Battle of the Atlantic and the importance of cryptography. In particular, he dramatically describes the “pinches” from U-boats which helped the codebreakers at Bletchley Park.

He replies, “It’s about right and wrong. In general terms. It’s a technical paper in mathematical logic, but it’s also about the difficulty of telling right from wrong. People think—most people think—that in mathematics we always know what is right and what is wrong. Not so. Not any more.”

Figure 47 Alan Turing. (photo credit 4.4)

In his attempt to identify undecidable questions, Turing’s paper described an imaginary machine that was designed to perform a particular mathematical operation, or algorithm. In other words, the machine would be capable of running through a fixed, prescribed series of steps which would, for example, multiply two numbers.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

The heroes of Bletchley Park: Hinsley, Harry, “The Influence of ULTRA in the Second World War,” Babbage Lecture Theatre, Computer Laboratory, last modified November 26, 1996, http://www.cl.cam.ac.uk/research/security/Historical/hinsley.html (accessed September 6, 2011). At Bletchley Turing: Banks, “A Conversation with I. J. Good.” I won’t say that what Turing did: McKittrick, David, “Jack Good: Cryptographer whose work with Alan Turing at Bletchley Park was crucial to the War effort,” The Independent, sec. obituaries, May 14, 2009, http://www.independent.co.uk/news/obituaries/jack-good-cryptographer-whose-work-with-alan-turing-at-bletchley-park-was-crucial-to-the-war-effort-1684506.html (accessed September 5, 2011). In 1957, MIT psychologist: McCorduck, Pamela, Machines Who Think, A Personal Inquiry into the History and Prospects of Artificial Intelligence (San Francisco: W.

To meet our definition of general intelligence a computer would need ways to receive input from the environment, and provide output, but not a lot more. It needs ways to manipulate objects in the real world. But as we saw in the Busy Child scenario, a sufficiently advanced intelligence can get someone or something else to manipulate objects in the real world. Alan Turing devised a test for human-level intelligence, now called the Turing test, which we will explore later. His standard for demonstrating human-level intelligence called only for the most basic keyboard-and-monitor kind of input and output devices. The strongest argument for why advanced AI needs a body may come from its learning and development phase—scientists may discover it’s not possible to “grow” AGI without some kind of body.

He may be a genius, but he’s not a thousand times more intelligent than the smartest human, as an ASI could be. Bad or indifferent ASI needs to get out of the box just once. The AI-Box Experiment also fascinated me because it’s a riff on the venerable Turing test. Devised in 1950 by mathematician, computer scientist, and World War II code breaker Alan Turing, the eponymous test was designed to determine whether a machine can exhibit intelligence. In it, a judge asks both a human and a computer a set of written questions. If the judge cannot tell which respondent is the computer and which is the human, the computer “wins.” But there’s a twist. Turing knew that thinking is a slippery subject, and so is intelligence.

pages: 370 words: 94,968

The Most Human Human: What Talking With Computers Teaches Us About What It Means to Be Alive
by Brian Christian
Published 1 Mar 2011

Ramachandran and Sandra Blakeslee, Phantoms in the Brain: Probing the Mysteries of the Human Mind (New York: William Morrow, 1998). 10 Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 2nd ser., 42, no. 1 (1937), pp. 230–65. 11 Ada Lovelace’s remarks come from her translation (and notes thereupon) of Luigi Federico Menabrea’s “Sketch of the Analytical Engine Invented by Charles Babbage, Esq.,” in Scientific Memoirs, edited by Richard Taylor (London, 1843). 12 Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950), pp. 433–60. 13 For more on the idea of “radical choice,” see, e.g., Sartre, “Existentialism Is a Humanism,” especially Sartre’s discussion of a painter wondering “what painting ought he to make” and a student who came to ask Sartre’s advice about an ethical dilemma. 14 Aristotle’s arguments: See, e.g., The Nicomachean Ethics. 15 For a publicly traded company: Nobel Prize winner, and (says the Economist) “the most influential economist of the second half of the 20th century,” Milton Friedman wrote a piece in the New York Times Magazine in 1970 titled “The Social Responsibility of Business Is to Increase Its Profits.”

Fortunately, I am human; unfortunately, it’s not clear how much that will help. The Turing Test Each year, the artificial intelligence (AI) community convenes for the field’s most anticipated and controversial annual event—a competition called the Turing test. The test is named for British mathematician Alan Turing, one of the founders of computer science, who in 1950 attempted to answer one of the field’s earliest questions: Can machines think? That is, would it ever be possible to construct a computer so sophisticated that it could actually be said to be thinking, to be intelligent, to have a mind? And if indeed there were, someday, such a machine: How would we know?

The Illegitimacy of the Figurative When Claude Shannon met Betty at Bell Labs in the 1940s, she was indeed a computer. If this sounds odd to us in any way, it’s worth knowing that nothing at all seemed odd about it to them. Nor to their co-workers: to their Bell Labs colleagues their romance was a perfectly normal one, typical even. Engineers and computers wooed all the time. It was Alan Turing’s 1950 paper “Computing Machinery and Intelligence” that launched the field of AI as we know it and ignited the conversation and controversy over the Turing test (or the “Imitation Game,” as Turing initially called it) that has continued to this day—but modern “computers” are nothing like the “computers” of Turing’s time.

pages: 524 words: 120,182

Complexity: A Guided Tour
by Melanie Mitchell
Published 31 Mar 2009

As the mathematician and writer Andrew Hodges notes: “This was an amazing new turn in the enquiry, for Hilbert had thought of his programme as one of tidying up loose ends. It was upsetting for those who wanted to find in mathematics something that was perfect and unassailable….” Turing Machines and Uncomputability While Gödel dispatched the first and second of Hilbert’s questions, the British mathematician Alan Turing killed off the third. In 1935 Alan Turing was a twenty-three-year-old graduate student at Cambridge studying under the logician Max Newman. Newman introduced Turing to Gödel’s recent incompleteness theorem. When he understood Gödel’s result, Turing was able to see how to answer Hilbert’s third question, the Entscheidungsproblem, and his answer, again, was “no.”

Proceedings of the National Academy of Sciences, USA, 101(4), 2004, pp. 918–922. “there is no such thing as an unsolvable problem”: Quoted in Hodges, A., Alan Turing: The Enigma, New York: Simon & Schuster, 1983, p. 92. “Gödel’s proof is complicated”: For excellent expositions of the proof, see Nagel, E. and Newman, J. R., Gödel’s Proof. New York: New York University, 1958; and Hofstadter, D. R., Gödel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books, 1979. “This was an amazing new turn”: Hodges, A., Alan Turing: The Enigma. New York: Simon & Schuster, 1983, p. 92. “Turing killed off the third”: Another mathematician, Alonzo Church, also proved that there are undecidable statements in mathematics, but Turing’s results ended up being more influential.

Following the intuition of Leibniz of more than two centuries earlier, Turing formulated his definition by thinking about a powerful calculating machine—one that could not only perform arithmetic but also could manipulate symbols in order to prove mathematical statements. By thinking about how humans might calculate, he constructed a mental design of such a machine, which is now called a Turing machine. The Turing machine turned out to be a blueprint for the invention of the electronic programmable computer.

Alan Turing, 1912–1954 (Photograph copyright ©2003 by Photo Researchers Inc. Reproduced by permission.)

A QUICK INTRODUCTION TO TURING MACHINES

As illustrated in figure 4.1, a Turing machine consists of three parts: (1) A tape, divided into squares (or “cells”), on which symbols can be written and from which symbols can be read.
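Mitchell's three-part description translates directly into code. The following is a minimal sketch of a Turing machine simulator in Python (not from the book; the example rule table, a binary increment machine, is invented purely for illustration). The rule table maps (state, symbol) pairs to (new symbol, head move, new state):

```python
# A minimal Turing machine simulator: a tape, a read/write head, and
# a rule table mapping (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))            # sparse tape; blank cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {"L": -1, "R": 1}[move]     # move the head one cell
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: binary increment. Starting at the rightmost digit,
# turn 1s into 0s moving left until a 0 or blank becomes 1, then halt.
INCREMENT = {
    ("start", "1"): ("0", "L", "start"),
    ("start", "0"): ("1", "L", "halt"),
    ("start", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(INCREMENT, "1011", head=3))  # 1011 + 1 = 1100
```

Running the increment machine on the tape 1011 (eleven) with the head on the rightmost cell leaves 1100 (twelve): the same fixed, prescribed series of steps the excerpt describes.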

pages: 250 words: 73,574

Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers
by John MacCormick and Chris Bishop
Published 27 Dec 2011

In the next section, we encounter neural networks: a pattern recognition technique in which the learning phase is not only significant, but directly inspired by the way humans and other animals learn from their surroundings.

NEURAL NETWORKS

The remarkable abilities of the human brain have fascinated and inspired computer scientists ever since the creation of the first digital computers. One of the earliest discussions of actually simulating a brain using a computer was by Alan Turing, a British scientist who was also a superb mathematician, engineer, and code-breaker. Turing's classic 1950 paper, entitled “Computing Machinery and Intelligence,” is most famous for a philosophical discussion of whether a computer could masquerade as a human. The paper introduced a scientific way of evaluating the similarity between computers and humans, known these days as a “Turing test.”

Two mathematicians, one American and one British, independently discovered uncomputable problems in the late 1930s—several years before the first real computers emerged during the Second World War. The American was Alonzo Church, whose groundbreaking work on the theory of computation remains fundamental to many aspects of computer science. The Briton was none other than Alan Turing, who is commonly regarded as the single most important figure in the founding of computer science. Turing's work spanned the entire spectrum of computational ideas, from intricate mathematical theory and profound philosophy to bold and practical engineering. In this chapter, we will follow in the footsteps of Church and Turing on a journey that will eventually demonstrate the impossibility of using a computer for one particular task.

The Halting Problem and Undecidability That concludes our tour through one of the most sophisticated and profound results in computer science. We have proved the absolute impossibility that anyone will ever write a computer program like CanCrash.exe: a program that analyzes other programs and identifies all possible bugs in those programs that might cause them to crash. In fact, when Alan Turing, the founder of theoretical computer science, first proved a result like this in the 1930s, he wasn't concerned at all about bugs or crashes. After all, no electronic computer had even been built yet. Instead, Turing was interested in whether or not a given computer program would eventually produce an answer.
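The diagonal argument behind the result MacCormick describes can be sketched as a short thought experiment in code. Suppose someone claims to have written a halting decider; the names below (halts, trouble) are invented for illustration, and the point is that any candidate implementation of halts must answer wrongly somewhere:

```python
# Sketch of Turing's diagonal argument. Assume, for contradiction,
# that some function halts(program, data) correctly decides halting.

def halts(program, data):
    """A candidate halting decider. Any implementation will do for the
    argument; this stub simply guesses that everything halts."""
    return True

def trouble(program):
    # Do the opposite of whatever the decider predicts about us.
    if halts(program, program):
        while True:        # decider said "halts" -> loop forever
            pass
    else:
        return "done"      # decider said "loops" -> halt immediately

# If halts(trouble, trouble) returns True, then trouble(trouble) loops
# forever, so the answer was wrong; if it returns False, trouble(trouble)
# halts, wrong again. We can inspect the prediction without looping:
prediction = halts(trouble, trouble)
print(prediction)  # True with this stub -- yet trouble(trouble) would loop
```

Whatever code is put inside halts, the trouble construction defeats it, which is why no CanCrash.exe-style analyzer can ever be complete.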

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

I will explain the origins of the myth of AI, what we know and don’t know about the prospects of actually achieving human-level AI, and why we need to better appreciate the only true intelligence we know—our own. IN THIS BOOK In Part One, The Simplified World, I explain how our AI culture has simplified ideas about people, while expanding ideas about technology. This began with AI’s founder, Alan Turing, and involved understandable but unfortunate simplifications I call “intelligence errors.” Initial errors were magnified into an ideology by Turing’s friend and statistician, I. J. Good, who introduced the idea of “ultraintelligence” as the predictable result once human-level AI had been achieved.

Since we have no good scientific reason to believe the myth is true, and every reason to reject it for the purpose of our own future flourishing, we need to radically rethink the discussion about AI. Part I THE SIMPLIFIED WORLD Chapter 1 THE INTELLIGENCE ERROR The story of artificial intelligence starts with the ideas of someone who had immense human intelligence: the computer pioneer Alan Turing. In 1950 Turing published a provocative paper, “Computing Machinery and Intelligence,” about the possibility of intelligent machines.1 The paper was bold, coming at a time when computers were new and unimpressive by today’s standards. Slow, heavy pieces of hardware sped up scientific calculations like code breaking.

Admitting the necessity of supplying a bias to learning systems is tantamount to Turing’s observing that insights about mathematics must be supplied by human minds from outside formal methods, since machine learning bias is determined, prior to learning, by human designers.10 TURING’S LEGACY To sum up the argument, the problem-solving view of intelligence necessarily produces narrow applications, and is therefore inadequate for the broader goals of AI. We inherited this view of intelligence from Alan Turing. (Why, for instance, do we even use the term artificial intelligence, rather than, perhaps, speaking of “human-task-simulation”?)11 Turing’s great genius was to clear away theoretical obstacles and objections to the possibility of engineering an autonomous machine, but in so doing he narrowed the scope and definition of intelligence itself.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas , William Black , Samantha Thalind , Max Scratchmann and Michelle Estes
Published 28 Feb 2015

This makes it relatively easy for intelligent people in isolated places to produce powerful new software.

When

This question is surprisingly easy to answer, namely “in roughly fifty years”. This prediction has been consistently made since the beginning of artificial intelligence research, and continues to be made today. In 1950, the great Alan Turing reasoned: ‘As I have explained, the problem is mainly one of programming. Advances in engineering will have to be made too, but it seems unlikely that these will not be adequate for the requirements. Estimates of the storage capacity of the brain vary from 10^10 to 10^15 binary digits.’

It took a lot of clever technology to be able to perform these tasks electronically, but the computer can now easily outperform humans at those specific tasks. (There are, of course, also many unresolved software challenges such as playing the game Go at a professional level.)

Turing Test

The problem of defining intelligence was recognized very early and it led the great logician Alan Turing to propose a functional definition now known as the Turing Test in 1950. This test was simply that a computer would be considered intelligent when it could convince a human that the computer was a human. The idea is that the human would communicate using a text messaging-like program so that they could not see or hear the other party, and at the end of a conversation would state whether they thought that the other party was man or machine.

None of these missions involved human astronauts.)

The Case Against Machine Intelligence

Many people have argued that machine intelligence is impossible. Most of these arguments can be easily discounted, but they are still worth examining.

Turing halting problem

The first line of argument is based on the limits to computation proved by Alan Turing and Kurt Gödel in the 1930s. Long before significant real computers could be built, Turing created a very simple theoretical computer in which programs could be written. He then proved that any other more sophisticated computer could not have any more computational power than his simple machine.

pages: 476 words: 121,460

The Man From the Future: The Visionary Life of John Von Neumann
by Ananyo Bhattacharya
Published 6 Oct 2021

Kurt Gödel, ‘Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I’, Monatshefte für Mathematik und Physik, 38 (1931), pp. 173–98. 29. Von Neumann, Selected Letters. 30. Any details of Turing’s life are drawn from Andrew Hodges, 2012, Alan Turing: The Enigma. The Centenary Edition, Princeton University Press, Princeton. My brief description of Turing’s paper is abridged from Charles Petzold, 2008, The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine, Wiley, Hoboken, and ‘Computable Numbers: A Guide’, in Jack B. Copeland (ed.), 2004, The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Oxford University Press, Oxford. 31.

Mathematical Foundations of Quantum Mechanics is the work of an exceptional mathematician. One of the work’s earliest fans would be a teenager, who ordered the book in the original German as his prize after winning a school competition.47 ‘Very interesting, and not at all difficult reading’ was how Alan Turing described von Neumann’s classic in a letter to his mother the following year.48 But von Neumann’s book was also the work of a rather cocksure young man. To some, it seemed like the twenty-eight-year-old upstart was suggesting his book was the last word on quantum mechanics. Erwin Schrödinger disagreed.

‘Anyone wishing to get an unforgettable impression of the razor edge of von Neumann’s mind,’ he wrote, ‘need merely try to pursue this chain of exact reasoning for himself, realizing that often five pages of it were written down before breakfast, seated at a living room writing-table in a bathrobe.’6 This was around the time an unkempt young mathematician, eight years his junior, came to von Neumann’s attention. Alan Turing’s first paper, published in April 1935, developed work in von Neumann’s fifty-second, on group theory, which had appeared the previous year. Coincidentally, this was exactly when von Neumann arrived in Cambridge, England, where Turing had a fellowship at King’s College, to deliver a series of lectures on the same topic.

pages: 696 words: 143,736

The Age of Spiritual Machines: When Computers Exceed Human Intelligence
by Ray Kurzweil
Published 31 Dec 1998

Despite Babbage’s inability to finish any of his major initiatives, his concepts of a computer with a stored program, self-modifying code, addressable memory, conditional branching, and computer programming itself still form the basis of computers today.4

Again, Enter Alan Turing

By 1940, Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government organized its best mathematicians and electrical engineers, under the intellectual leadership of Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation.

Tunneling allows some of the electrons to effectively move through the barrier and accounts for the “semi” conductor properties of a transistor.

Turing machine A simple abstract model of a computing machine, designed by Alan Turing in his 1936 paper “On Computable Numbers.” The Turing machine is a fundamental concept in the theory of computation.

Turing Test A procedure proposed by Alan Turing in 1950 for determining whether or not a system (generally a computer) has achieved human-level intelligence, based on whether it can deceive a human interrogator into believing that it is human. A human “judge” interviews the (computer) system, and one or more human “foils” over terminal lines (by typing messages).

As with any phenomenon of exponential growth, the increases are so slow at first as to be virtually unnoticeable. Despite many decades of progress since the first electrical calculating equipment was used in the 1890 census, it was not until the mid-1960s that this phenomenon was even noticed (although Alan Turing had an inkling of it in 1950). Even then, it was appreciated only by a small community of computer engineers and scientists. Today, you have only to scan the personal computer ads—or the toy ads—in your local newspaper to see the dramatic improvements in the price performance of computation that now arrive on a monthly basis.

pages: 253 words: 83,473

The Demon in the Machine: How Hidden Webs of Information Are Finally Solving the Mystery of Life
by Paul Davies
Published 31 Jan 2019

Gödel’s theorem tells us that the world of mathematics embeds inexhaustible novelty; even an unbounded intellect, a god, can never know everything. It is the ultimate statement of open-endedness. Constructed as it was in the rarefied realm of formal logic, Gödel’s theorem had no apparent link with the physical world, let alone the biological world. But only five years later the Cambridge mathematician Alan Turing established a connection between Gödel’s result and Hilbert’s decision problem, which he published in a paper entitled ‘On computable numbers, with an application to the Entscheidungsproblem’.2 It proved to be the start of something momentous. Turing is best known for his role in cracking the German Enigma code in the Second World War, working in secret at Bletchley Park in the south of England.

Many of the great scientists of the twentieth century spotted the connection between Turing’s ideas and biology. What was needed to cement the link with biology was the transformation of a purely computational process into a physical construction process. A MACHINE THAT COPIES ITSELF Across the other side of the Atlantic from Alan Turing, the Hungarian émigré John von Neumann was similarly preoccupied with designing an electronic computer for military application, in his case in connection with the Manhattan Project (the atomic bomb). He used the same basic idea as Turing – a universal programmable machine that could compute anything that is computable.

This hand-wavy account is easy to state, but not so easy to turn into a detailed scientific explanation, in large part because it depends on the coupling between chemical networks and information-management networks, so there are two causal webs tangled together and changing over time. Added to all this is growing evidence that not just chemical gradients but physical forces – electric and mechanical – also contribute to morphogenesis. I shall have more to say on this remarkable topic in the next chapter. Curiously, Alan Turing took an interest in the problem of morphogenesis and studied some equations describing how chemicals might diffuse through tissue to form a concentration gradient of various substances, reacting in ways that can produce three-dimensional patterns. Although Turing was on the right track, it has been slow going.
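The diffusing-and-reacting chemicals Davies mentions can be sketched numerically. The fragment below is a toy one-dimensional, two-chemical update with invented rate constants and kinetics, chosen only to show the structure of a reaction-diffusion step, not Turing's own equations:

```python
import random

# One Euler step of a toy 1-D reaction-diffusion system on a ring:
# chemical a and chemical b each diffuse (discrete Laplacian) and
# react with one another. The kinetics here are placeholders that
# illustrate the update structure, not a pattern-forming system.

def step(a, b, da=0.05, db=0.4, dt=0.1):
    n = len(a)
    new_a, new_b = a[:], b[:]
    for i in range(n):
        lap_a = a[i - 1] + a[(i + 1) % n] - 2 * a[i]   # diffusion term
        lap_b = b[i - 1] + b[(i + 1) % n] - 2 * b[i]
        reaction = a[i] * b[i] - a[i]                   # toy kinetics
        new_a[i] = a[i] + dt * (da * lap_a + reaction)
        new_b[i] = b[i] + dt * (db * lap_b - reaction)
    return new_a, new_b

# Start near a uniform state with small random differences.
random.seed(0)
a = [1 + 0.01 * random.random() for _ in range(50)]
b = [1 + 0.01 * random.random() for _ in range(50)]
for _ in range(100):
    a, b = step(a, b)
# In Turing's actual mechanism, the inhibiting chemical must diffuse
# faster than the activating one (here db > da) for small random
# differences to grow into a stable spatial pattern.
```

Replacing the toy kinetics with activator-inhibitor terms of the kind Turing studied is what turns this bookkeeping into a genuine pattern generator.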

On Nature and Language
by Noam Chomsky
Published 16 Apr 2007

“The world was merely a set of Archimedian simple machines hooked together,” Galileo scholar Peter Machamer observes, “or a set of colliding corpuscles that obeyed the laws of mechanical collision.” The world is something like the intricate clocks and other automata that excited the scientific imagination of that era, much as computers do today – and the shift is, in an important sense, not fundamental, as Alan Turing showed sixty years ago. Within the framework of the mechanical philosophy, Descartes developed his theory of mind and mind–body dualism, still the locus classicus of much discussion of our mental nature, a serious misunderstanding, I believe. Descartes himself pursued a reasonable course. He sought to demonstrate that the inorganic and organic world could be explained in terms of the mechanical philosophy.

If a very recent emergent organ that is central to human existence in fact does approach optimal design, that would suggest that, in some unknown way, it may be the result of the functioning of physical and chemical laws for a brain that has reached a certain level of complexity. And further questions arise for general evolution that are by no means novel, but that have been somewhat at the margins of inquiry until fairly recently. I am thinking of the work of D’Arcy Thompson and Alan Turing, to mention two of the most prominent modern figures. Similar conceptions, now emerging in a certain form in the study of language, also had a central place in Galileo’s thought. In studying acceleration, he wrote, “we have been guided . . . by our insight into the character and properties of nature’s other works, in which nature generally employs only the least elaborate, the simplest and easiest of means.

The modern scientific revolution, from Galileo, was based on the thesis that the world is a great machine, which could in principle be constructed by a master artisan, a complex version of the clocks and other intricate automata that fascinated the seventeenth and eighteenth centuries, much as computers have provided a stimulus to thought and imagination in recent years; the change of artifacts has limited consequences for the basic issues, 65 On nature and language as Alan Turing demonstrated sixty years ago. The thesis – called “the mechanical philosophy” – has two aspects: empirical and methodological. The factual thesis has to do with the nature of the world: it is a machine constructed of interacting parts. The methodological thesis has to do with intelligibility: true understanding requires a mechanical model, a device that an artisan could construct.

pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy
by Sharon Bertsch McGrayne
Published 16 May 2011

Hilton, Peter. (2000) Reminiscences and reflections of a codebreaker. In Coding Theory and Cryptography: From Enigma and Geheimschreiber to Quantum Theory, ed., WD Joyner. Springer-Verlag. 1–8. Hodges, Andrew. (1983, 2000) Alan Turing: The Enigma. Walker. A classic. ———. The Alan Turing Webpage. http://www.turing.org.uk/turing/. ———. (2000) Turing, a natural philosopher. Routledge. In The Great Philosophers, eds., R. Monk and F. Raphael. Weidenfeld and Nicolson. ———. (2002) Alan Turing—a Cambridge Scientific Mind. In Cambridge Scientific Minds, eds., Peter Harmon, Simon Mitton. Cambridge University Press. Hosgood, Steven. http://tallyho.bc.nu/~steve/banburismus.html.

After Laplace’s death, researchers and academics seeking precise and objective answers pronounced his method subjective, dead, and buried. Yet at the very same time practical problem solvers relied on it to deal with real-world emergencies. One spectacular success occurred during the Second World War, when Alan Turing developed Bayes to break Enigma, the German navy’s secret code, and in the process helped to both save Britain and invent modern electronic computers and software. Other leading mathematical thinkers—Andrei Kolmogorov in Russia and Claude Shannon in New York—also rethought Bayes for wartime decision making.

Although Bayes’ rule drew the attention of the greatest statisticians of the twentieth century, some of them vilified both the method and its adherents, crushed it, and declared it dead. Yet at the same time, it solved practical questions that were unanswerable by any other means: the defenders of Captain Dreyfus used it to demonstrate his innocence; insurance actuaries used it to set rates; Alan Turing used it to decode the German Enigma cipher and arguably save the Allies from losing the Second World War; the U.S. Navy used it to search for a missing H-bomb and to locate Soviet subs; RAND Corporation used it to assess the likelihood of a nuclear accident; and Harvard and Chicago researchers used it to verify the authorship of the Federalist Papers.
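The common thread in McGrayne's examples is sequential updating: prior odds multiplied by likelihood ratios as each piece of evidence arrives. Turing did this bookkeeping in logarithms of the odds (his unit, the 'ban', was a factor of ten). A minimal sketch of the mechanics, with entirely invented numbers:

```python
import math

# Sequential Bayesian updating in log-odds form: multiplying odds by
# likelihood ratios becomes simple addition of their logarithms.

def update_log_odds(prior_odds, likelihood_ratios):
    """Return the posterior log10 odds after accumulating evidence."""
    log_odds = math.log10(prior_odds)
    for lr in likelihood_ratios:
        log_odds += math.log10(lr)      # each piece of evidence adds on
    return log_odds

# Invented example: prior odds of 1:1000 against a hypothesis, then
# three observations, each ten times likelier under the hypothesis
# than under the alternative (one "ban" of evidence apiece).
bans = update_log_odds(1 / 1000, [10, 10, 10])
print(bans)   # ~0.0: the odds are back to even (1:1)
```

This additivity is what made the method practical under wartime pressure: clerks could score evidence by adding entries from tables rather than multiplying probabilities.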

pages: 331 words: 104,366

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins
by Garry Kasparov
Published 1 May 2017

Players have been caught using sophisticated signaling methods with accomplices, Bluetooth headsets in hats or electrical devices in shoes, and simply using a smartphone in the restroom. The first real chess program actually predates the invention of the computer and was written by no less a luminary than Alan Turing, the British genius who cracked the Nazi Enigma code. In 1952, he processed a chess algorithm on slips of paper, playing the role of CPU himself, and this “paper machine” played a competent game. This connection went beyond Turing’s personal interest in chess. Chess had a long-standing reputation as a unique nexus of the human intellect, and building a machine that could beat the world champion would mean building a truly intelligent machine.

How do machines play chess? The basic formula hasn’t changed since 1949, when the American mathematician and engineer Claude Shannon wrote a paper describing how it might be done. In “Programming a Computer for Playing Chess,” he proposed a “computing routine or ‘program’” for use on the sort of general-purpose computer Alan Turing had theorized years earlier. You can tell how early it was in the computer age that Shannon put the word “program” in quotation marks as jargon. As with many who followed him, Shannon was slightly apologetic at proposing a chess-playing device of “perhaps no practical importance.” But he saw the theoretical value of such a machine in other areas, from routing phone calls to language translation.

If you manage to find the four to five most reasonable moves in a given position and discard the rest, which is not trivial at all, the geometric branching of the decision tree still becomes enormous very quickly. So even if you succeed in creating a Type B algorithm that can search more intelligently, you still need a lot of processing speed and a lot of memory to keep track of all those millions of position evaluations. I’ve already mentioned Alan Turing’s “paper machine,” the first known functional chess program. I even had the honor of playing a reconstructed version of it on a modern computer when I was invited to speak at the Turing centenary in Manchester in 2012. It was quite weak by modern standards, but still must be considered a remarkable achievement considering that Turing didn’t even have a computer to test it on.
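The full-width search Shannon described (his "Type A" strategy) can be sketched as a few lines of minimax over a toy game tree; the tree and scores below are illustrative stand-ins for chess positions and evaluations, not anyone's actual chess code.

```python
# Minimal minimax sketch: leaves are integer scores from the maximizing
# player's point of view; inner nodes are lists of child positions.

def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the root player picks a move, the opponent replies.
tree = [[3, 5], [2, 9]]
best = minimax(tree, True)   # the opponent holds each branch to its minimum
print(best)                  # → 3
```

A "Type B" program would differ only in pruning most children before recursing, which is exactly where the geometric branching described above makes the difficult part choosing what to discard.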

pages: 209 words: 53,236

The Scandal of Money
by George Gilder
Published 23 Feb 2016

Like the electromagnetic spectrum, which bears all the messages of the Internet to and from your smartphone or computer, it must be rooted in the absolute speed of light, the ultimate guarantor of the integrity of time. Dominating our own era and revealing in fundamental ways the nature of money is the information theory of Kurt Gödel, John von Neumann, Alan Turing, and Claude Shannon. Information theory tells us that information is not order but disorder, not the predictable regularity that contains no news, but the unexpected modulation, the surprising bits. But human creativity and surprise depend upon a matrix of regularities, from the laws of physics to the stability of money.4 Information theory has impelled the global ascendancy of information technology.

Because we use it to prioritize most of our activities, register and endow our accomplishments of learning and invention, and organize the life-sustaining work of our society, money is more than a mere payments system. It expresses a system of the world. That is why I link it to the information theory of Kurt Gödel, Alan Turing, and Claude Shannon. Each of these thinkers attempted to define his philosophy in utilitarian and determinist mathematics. Addressing pure logic as math, Gödel concluded that even arithmetic cannot constitute a complete and coherent system. All logical schemes have to move beyond self-referential circularity and invoke axioms outside themselves.

Oil futures trading has risen by a factor of one hundred in some three decades, from 10 percent of oil output in 1984 to ten times oil output in 2015. Derivatives on real estate are now nine times global GDP. That’s not capitalism, that’s hypertrophy of finance. Information theory: Based on the mathematical theories of Claude Shannon and Alan Turing, an evolving discipline that depicts human creations and communications as transmissions across a channel, whether that channel is a wire or the world. Measuring the outcome is its “news” or surprise, defined as entropy and consummated as knowledge. Entropy is higher or lower depending on the freedom of choice of the sender.
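The "freedom of choice of the sender" in Gilder's glossary entry is Shannon's entropy, H = −Σ p·log₂(p), measured in bits. A minimal sketch of the definition (my own illustration, not Gilder's notation):

```python
import math

# Shannon entropy in bits: a flatter distribution (more freedom of choice
# for the sender) yields higher entropy, i.e. more surprise per message.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0]))          # no choice, no surprise → 0.0
print(entropy([0.5, 0.5]))     # a fair coin → 1.0 bit
print(entropy([0.25] * 4))     # four equal choices → 2.0 bits
print(entropy([0.9, 0.1]))     # a biased coin → about 0.47 bits
```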

pages: 259 words: 73,193

The End of Absence: Reclaiming What We've Lost in a World of Constant Connection
by Michael Harris
Published 6 Aug 2014

Army designed its Sergeant Star, a chatbot that talks to would-be recruits at GoArmy.com, they naturally had their algorithm speak with a burly, all-American voice reminiscent of the shoot-’em-up video game Call of Duty. Fooling a human into bonding with inanimate programs (often of corporate or governmental derivation) is the new, promising, and dangerous frontier. But the Columbus of that frontier set sail more than half a century ago. • • • • • The haunted English mathematician Alan Turing—godfather of the computer—believed that a future with emotional, companionable computers was a simple inevitability. He declared, “One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning!’” Turing proposed that a machine could be called “intelligent” if people exchanging text messages with that machine could not tell whether they were communicating with a human.

But there is a deep difficulty in teaching our computers even a little empathy. Our emotional expressions are vastly complex and incorporate an annoyingly subtle range of signifiers. A face you read as tired may have all the lines and shadows of “sorrowful” as far as a poorly trained robot is concerned. What Alan Turing imagined, an intelligent computer that can play the human game at least almost as well as a real human, is now called “affective computing”—and it’s the focus of a burgeoning field in computer science. “Affective” is a curious word choice, though an apt one. While the word calls up “affection” and has come to reference moods and feelings, we should remember that “affective” comes from the Latin word afficere, which means “to influence” or (more sinisterly) “to attack with disease.”

This is because the means of confession—the technology itself—is so very amiable. Dinakar is building a more welcoming online world, and it’s a good thing he is. But we need to remain critical as we give over so much of ourselves to algorithmic management. • • • • • In a sense, Dinakar and others at the Media Lab are still pursuing Alan Turing’s dream. “I want to compute for empathy,” Dinakar told me as our time together wound down. “I don’t want to compute for banning anyone. I just want . . . I want the world to be a less lonely place.” Of course, for such affective computing to work the way its designers intend, we must be prepared to give ourselves over to its care.

pages: 372 words: 101,174

How to Create a Mind: The Secret of Human Thought Revealed
by Ray Kurzweil
Published 13 Nov 2012

All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company. —Alan Turing

A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. —Alan Turing

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. —Alan Turing

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime.1 Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks.

Watson has already read hundreds of millions of pages on the Web and mastered the knowledge contained in these documents. Ultimately machines will be able to master all of the knowledge on the Web—which is essentially all of the knowledge of our human-machine civilization. English mathematician Alan Turing (1912–1954) based his eponymous test on the ability of a computer to converse in natural language using text messages.13 Turing felt that all of human intelligence was embodied and represented in language, and that no machine could pass a Turing test through simple language tricks. Although the Turing test is a game involving written language, Turing believed that the only way that a computer could pass it would be for it to actually possess the equivalent of human-level intelligence.

Most of the patterns or ideas (and an idea is also a pattern), as we have seen, are stored in the brain with a substantial amount of redundancy. A primary reason for the redundancy in the brain is the inherent unreliability of neural circuits. The second important idea on which the information age relies is the one I mentioned earlier: the universality of computation. In 1936 Alan Turing described his “Turing machine,” which was not an actual machine but another thought experiment. His theoretical computer consists of an infinitely long memory tape with a 1 or a 0 in each square. Input to the machine is presented on this tape, which the machine can read one square at a time. The machine also contains a table of rules—essentially a stored program—that consists of numbered states.
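The tape-plus-rule-table machine described above is small enough to interpret in a few lines. The rule table here (bit-flipping until the first blank) is my own toy example, not one of Turing's:

```python
# A tiny Turing machine interpreter: a tape of symbols, a read/write head,
# and a table of rules keyed by (state, symbol), each giving the symbol to
# write, the head movement, and the next state.

def run(tape, rules, state="s", blank="_"):
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

flip = {
    ("s", "0"): ("1", +1, "s"),   # flip each bit and move right
    ("s", "1"): ("0", +1, "s"),
    ("s", "_"): ("_", 0, "halt"), # blank square: stop
}

print(run("1011", flip))  # → 0100
```

Swapping in a different rule table changes the computation, which is the point of the universality argument: one interpreter, any program.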

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

In fact, the ideas that led to the first programmable computers came out of mathematicians’ attempts to understand human thought—particularly logic—as a mechanical process of “symbol manipulation.” Digital computers are essentially symbol manipulators, pushing around combinations of the symbols 0 and 1. To pioneers of computing like Alan Turing and John von Neumann, there were strong analogies between computers and the human brain, and it seemed obvious to them that human intelligence could be replicated in computer programs. Most people in artificial intelligence trace the field’s official founding to a small workshop in 1956 at Dartmouth College organized by a young mathematician named John McCarthy.

Would it need to understand things in the same way a human understands them? Given that we’re talking about a machine here, would we be more correct to say it is “simulating thought,” or could we say it is truly thinking? Could Machines Think? Such philosophical questions have dogged the field of AI since its inception. Alan Turing, the British mathematician who in the 1930s sketched out the first framework for programmable computers, published a paper in 1950 asking what we might mean when we ask, “Can machines think?” After proposing his famous “imitation game” (now called the Turing test—more on this in a bit), Turing listed nine possible objections to the prospect of a machine actually thinking, all of which he tried to refute.

For Searle, the strong AI claim would be that “the appropriately programmed digital computer does not just simulate having a mind; it literally has a mind.”13 In contrast, in Searle’s terminology, weak AI views computers as tools to simulate human intelligence and does not make any claims about them “literally” having a mind.14 We’re back to the philosophical question I was discussing with my mother: Is there a difference between “simulating a mind” and “literally having a mind”? Like my mother, Searle believes there is a fundamental difference, and he argued that strong AI is impossible even in principle.15 The Turing Test Searle’s article was spurred in part by Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” which had proposed a way to cut through the Gordian knot of “simulated” versus “actual” intelligence. Declaring that “the original question ‘Can a machine think?’ is too meaningless to deserve discussion,” Turing proposed an operational method to give it meaning.

pages: 360 words: 85,321

The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling
by Adam Kucharski
Published 23 Feb 2016

Additional specifics from competition online results (http://www.computerpokercompetition.org).
168 “Poker is a perfect microcosm”: Author interview with Jonathan Schaeffer, July 2013.
169 “a bath of refreshing foolishness”: Ulam, S. M. Adventures of a Mathematician (Oakland: University of California Press, 1991).
169 young British mathematician by the name of Alan Turing: Hodges, Andrew. Alan Turing: The Enigma (Princeton, NJ: Princeton University Press, 1983).
169 “I rather liked it at first”: Turing background given in: Copeland, B. J. The Essential Turing (Oxford: Oxford University Press, 2004).
170 manuscript entitled “The Game of Poker”: The game of poker. File AMT/C/18.

Metropolis used half the money to buy a copy of von Neumann’s Theory of Games and Economic Behavior and stuck the remaining five dollars inside the cover to mark the win. Even before von Neumann had published his book on game theory, his research into poker was well known. In 1937, von Neumann had presented his work in a lecture at Princeton University. Among the attendees, there would almost certainly have been a young British mathematician by the name of Alan Turing. At the time Turing was a graduate student visiting from the University of Cambridge. He had come to the United States to work on mathematical logic. Although he was disappointed Kurt Gödel was no longer at the university, Turing generally enjoyed his time at Princeton, even if he did find certain American habits puzzling.

Cepheus has shown that, even in complex situations, it can be possible to find an optimal strategy. The researchers point to a range of scenarios in which such algorithms could be useful, from designing coast guard patrols to medical treatments. But this was not the only reason for the research. The team ended their Science paper with a quote from Alan Turing: “It would be disingenuous of us to disguise the fact that the principal motive which prompted the work was the sheer fun of the thing.” Despite the breakthrough, not everyone was convinced that it represented the ultimate victory of the artificial over the biological. Michael Johanson says that many human players view limit poker as the easy option, because there is a cap on how much players can raise the betting.

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

This year’s contributors to the Edge Question (there are close to 200 of them!) are a grown-up bunch and have eschewed mention of all that science fiction and all those movies: Star Maker, Forbidden Planet, Colossus: The Forbin Project, Blade Runner, 2001, Her, The Matrix, “The Borg.” And eighty years after Alan Turing introduced his Universal Machine, it’s time to honor Turing and other AI pioneers by giving them a well-deserved rest. We know the history. (See, for instance, George Dyson’s 2004 Edge feature, “Turing’s Cathedral.”) What’s going on NOW? So, once again, with appropriate rigor, the Edge Question, 2015: What do you think about machines that think?

THINKING DOES NOT IMPLY SUBJUGATING STEVEN PINKER Johnstone Family Professor, Department of Psychology, Harvard University; author, The Sense of Style: The Thinking Person’s Guide to Writing in the Twenty-First Century Thomas Hobbes’s pithy equation of reasoning as “nothing but reckoning” is one of the great ideas in human history. The notion that rationality can be accomplished by the physical process of calculation was vindicated in the twentieth century by Alan Turing’s thesis that simple machines can implement any computable function, and by models from D. O. Hebb, Warren McCulloch, and Walter Pitts and their scientific heirs showing that networks of simplified neurons could achieve comparable feats. The cognitive feats of the brain can be explained in physical terms: To put it crudely (and critics notwithstanding), we can say that beliefs are a kind of information, thinking a kind of computation, and motivation a kind of feedback and control.

This, again, is the time to greatly expand research on intelligence, not withdraw from it. We’re often misled by “big,” somewhat ill-defined, long-used words. Nobody so far has been able to give a precise, verifiable definition of what general intelligence or thinking is. The only definition I know that, though limited, can be practically used is Alan Turing’s. With his test, Turing provided an operational definition of a specific form of thinking—human intelligence. Let’s then consider human intelligence as defined by the Turing Test. It’s becoming increasingly clear that there are many facets of human intelligence. Consider, for instance, a Turing Test of visual intelligence—that is, questions about an image, a scene, which may range from “What is there?”

pages: 405 words: 105,395

Empire of the Sum: The Rise and Reign of the Pocket Calculator
by Keith Houston
Published 22 Aug 2023

Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society s2-42, no. 1 (1937): 230–265, https://doi.org/10.1112/plms/s2-42.1.230; Alan M. Turing, “Computing Machinery and Intelligence,” Mind LIX, no. 236 (1950): 433–460, https://doi.org/10.1093/mind/LIX.236.433.
5 Roland Pease, “Alan Turing: Inquest’s Suicide Verdict ‘Not Supportable,’” BBC News, June 26, 2012, https://www.bbc.co.uk/news/science-environment-18561092; “Royal Pardon for Codebreaker Alan Turing,” BBC News, December 24, 2013, https://www.bbc.co.uk/news/technology-25495315.
6 Turing, “Computing Machinery and Intelligence,” 436–437.
7 Menninger, Number Words, 212, 306.
8 Heinrich Schreiber, Ayn New Kunstlich Buech, Welches Gar Gewiß Vnd Behend Lernet Nach Der Gemainen Regel Detre, Welschen Practic, Regeln Falsi vñ Etlichē Regeln Cosse Mancherlay Schöne Uñ Zu Wissen Notürfftig Rechnu~g Auff Kauffmanschafft . . .

Many were made by stolid, familiar corporations that had become part of the business landscape: Smith Corona, best known for its typewriters; Burroughs, an adding machine pioneer founded in 1886; and Monroe, a maker of calculators based on an expired nineteenth-century patent.1 These were unexceptional machines for the most part, little advanced from the electrically driven arithmometers first built at the turn of the century, and the vast majority were capable only of the four fundamental arithmetical operations.2 And yet, by the mid-twentieth century, when the Curta was garnering attention but not sales, those same simple calculators had become the hardware on which ran great enterprises of commerce and science. They were the hardware and we, their human wranglers, were the software.

Alan Turing, the twentieth-century British mathematician, is known for many things. At Bletchley Park, a stately home that housed Britain’s government codebreakers, he masterminded the cracking of the German “Enigma” encryption scheme, a feat that may have shortened the Second World War by up to two years.3 Before the war, Turing had published a conceptual blueprint for all programmable electronic computers, later to be dubbed the “universal Turing machine.”

Later still, Sumlock acquired the British arm of the Comptometer Corporation itself, and by 1961, the original Chicagoan company had shut down its factories in favor of simply importing comptometers made by Sumlock in the UK.6 Sumlock was riding high, but something new and threatening loomed on the horizon. In the aftermath of the Second World War, the mathematicians and engineers of Bletchley Park had scattered to the far corners of the U.K.’s academic establishment. At the universities of Manchester and Cambridge and at the National Physical Laboratory and Birkbeck College in London, Alan Turing and others were busy planting the seeds of Britain’s nascent computing industry.7 It was a source of some national pride that, by 1950, three of those four groups had succeeded in building a functional electronic computer while a dozen similar projects in the United States still struggled to catch up.

pages: 239 words: 64,812

Geek Sublime: The Beauty of Code, the Code of Beauty
by Vikram Chandra
Published 7 Nov 2013

But, as Ada Byron pointed out: The Analytical Engine, on the contrary, is not merely adapted for tabulating the results of one particular function and no other, but for developing and tabulating any function whatever. In fact the engine may be described as being the material expression of any indefinite function of any degree of generality and complexity.4 In 1936, in his famous paper “On Computable Numbers,” Alan Turing announced, “It is possible to invent a single machine which can be used to compute any computable sequence,” and then showed how—at least in principle—to build such a machine.5 “Any indefinite function,” “any computable sequence”—that simple word any holds here a vastness perhaps equal to the universe, or your consciousness.

How would they make art and experience it? Perhaps these are questions not for programmers but for novelists and poets, for thinkers who deal with “the philosophy of awareness and the philosophy of language.” We await an Anandavardhana, an Abhinavagupta, for answers. In March 1954, a few months before his tragic death, Alan Turing sent four postcards to his friend, the logician Robin Gandy. Gandy kept only the last three of the series, which was labeled “Messages from the Unseen World.” The second postcard contains the following lines in Turing’s handwriting: III. The Universe is the interior of the Light Cone of the Creation.

“Computing Science: The Semicolon Wars.” American Scientist (2006): 299–303.
Hessel, A., M. Goodman, and S. Kotler. “Hacking the President’s DNA.” Atlantic, November 2012. http://www.theatlantic.com/magazine/archive/2012/11/hacking-the-presidents-dna/309147/?single_page=true.
Hodges, Andrew. Alan Turing: The Enigma. New York: Walker, 2000. Kindle edition.
Hogan, Patrick Colm. “Towards a Cognitive Science of Poetics: Anandavardhana, Abhinavagupta, and the Theory of Literature.” College Literature 23.1 (February 1996): 164–79.
Houben, Jan E. M. “Sociolinguistic Attitudes Reflected in the Work of Bhartṛhari and Some Later Grammarians.”

pages: 229 words: 67,599

The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age
by Paul J. Nahin
Published 27 Oct 2012

Bill Gates, the late Steve Jobs, and other present-day business geniuses are the people most commonly thought of when the world of computer science is discussed in the popular press, but knowledgeable students of history know who were the real technical minds behind it all—Boole and Shannon (and Shannon’s friend, the English genius Alan Turing, who appears in the following pages, too). Read this book and you’ll understand why. The Logician and the Engineer 1 What You Need To Know to Read This Book If a little knowledge is dangerous, where is the man who has so much as to be out of danger? —Thomas Huxley (1877) Claude Shannon’s very technical understanding of information … is boring—it’s dry.

(The German V2 rocket—the world’s first ballistic missile—is also often lumped in with the V1 as driving fire-control system development during Shannon’s day, but it would be quite difficult to shoot down a V2 today, during its 2,000 mph terminal atmospheric reentry phase, much less with 1940s gun technology!) Later work at Bell Labs took Shannon into the arcane world of cryptography, during which he met the English mathematician Alan Turing (1912–1954), who was a key player in the supersecret British Ultra program (“Ultra” was the code-name for the intelligence obtained from intercepted messages sent by German Enigma coding machines that the Nazis incorrectly thought unbreakable). Turing’s impact and influence on Shannon are discussed further in Chapter 9.

Before the machine begins to look at the input X, we need to reset Q1 (Q1 = 0) and to set Q2 (Q2 = 1), which could be done by applying the output signal from Figure 8.3.7 directly to R1 and S2, respectively, of the RS flip-flops from which we built the T flip-flops used in our machine.

9 Turing Machines

No, I’m not interested in developing a powerful Brain. All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company.

Shannon wants to feed not just data to a Brain, but cultural things! He wants to play music to it!

— Both by Alan Turing, during a two-month visit in early 1943 to Bell Labs (New York City), where he met Claude Shannon and found they had a common interest in how computing machines might imitate the human brain

A very small percentage of the population produces the greatest proportion of the important ideas. This is akin to an idea presented by the English mathematician, Turing, that the human brain is something like a piece of uranium … You shoot one neutron into it [and more than one neutron is] produced [the famous chain-reaction].

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

Deep Blue, we’ll see shortly, made headlines not because it was intelligent in any way comparable to a human, but because its underlying architecture allowed it to search deeper into the game than any previous machine. Computer power triumphed over human ingenuity. A breakthrough of sorts, but not perhaps what Alan Turing had in mind when he prophesised in 1950 that: at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.23 Throughout the cycles of boom and bust, the optimism and hyperbole that were a feature of AI probably didn’t help. Herbert Simon and Alan Turing were typical of upbeat researchers who had thought that solving intelligence was a feasible challenge.

Architectures of warbot intelligence More capable warbots would need to be smarter than a V1. It wouldn’t take long. The Second World War prompted massive technological innovation, nowhere more so than in the fields of electronics and computing. The British codebreaking project at Bletchley Park led the way—most famously via the genius of Alan Turing. This astonishing polymath made several enormous contributions that bear on the story here. Most directly, his creation of mechanical, and then electronic code-breaking machines started the digital computing revolution. Today’s warbots run on computers first imagined by Turing. Before Bletchley, and prior to the war, Turing made his mark in academia by setting out a proof for the essential logical incompleteness of mathematics.

Perhaps it is unsurprising that the heady optimism in AI of the day should have embraced the architecture of the brain as a model for intelligence, even with all these unanswered questions. But in truth the direction was two-way, as neuroscientists (the field was itself a new discipline) embraced mathematics, logic and computer science as a way of grasping human biology. In 1943, while Alan Turing and colleagues were labouring away at Bletchley on their designs for an analogue codebreaking computer, the American theorists Warren McCulloch and Walter Pitts produced a mathematical model of the neuron, which they called a ‘Threshold Logic Unit’.5 It bore a superficial relationship to the real thing—and that superficiality epitomised the direction of travel for connectionist AI thereafter: the brain would serve as a loose analogy, but only that, for machine intelligence.
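The McCulloch-Pitts "Threshold Logic Unit" is simple enough to write down directly: the unit fires (outputs 1) when the weighted sum of its inputs reaches a threshold. The weights and thresholds below are illustrative choices of mine, not figures from the 1943 paper:

```python
# A McCulloch-Pitts threshold neuron: fire iff the weighted input sum
# reaches the threshold.

def tlu(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With suitable weights and thresholds, a single unit computes basic logic:
AND = lambda a, b: tlu([a, b], [1, 1], 2)
OR  = lambda a, b: tlu([a, b], [1, 1], 1)
NOT = lambda a:    tlu([a],    [-1],   0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
print(NOT(0), NOT(1))  # → 1 0
```

That a neuron-like unit could realise Boolean logic is what made the model so suggestive: networks of such units are, in principle, computing machines, even if the biology is only a loose analogy.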

pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy
by George Gilder
Published 16 Jul 2018

Not only could men discover algorithms, they could compose them. The new vision ultimately led to a new information theory of biology, anticipated in principle by von Neumann and developed most fully by Hubert Yockey,10 in which human beings might eventually reprogram parts of their own DNA. More immediately, Gödel’s proof prompted Alan Turing’s invention in 1936 of the Turing machine—the universal computing architecture with which he showed that computer programs, like other logical schemes, not only were incomplete but could not even be proved to reach any conclusion. Any particular program might cause it to churn away forever. This was the “halting problem.”

The reason people like Stinchcombe regard blockchains and other such technologies as novel and threatening is that they accept the Google era eschaton. They see the advance of automation, machine learning, and artificial intelligence as occupying a limited landscape of human dominance and control that ultimately will be exhausted in a robotic universe—Life 3.0. But Charles Sanders Peirce, Kurt Gödel, Alonzo Church, Alan Turing, Emil Post, and Gregory Chaitin disproved this assumption on the most fundamental level of mathematical logic itself. Mathematics is not a closed or bounded system. It opens up at every step to a universe of human imagination. As Peirce’s triadic logic illuminates, every symbol engenders its own infinity of imaginative interpretation.

De Jong is a seasoned entrepreneur and financier devoted to using the blockchain to make gold the “earth’s most liquid currency” based on “vaulted, conflict free, responsibly mined gold.” Bell’s Law dooms the existing recentralization of computing and ensures the emergence of a new architecture. Lo and behold, here it is. It is based on the same cryptography that Claude Shannon and Alan Turing developed during World War II. It now provides a new computer architecture founded on blockchains, mathematical hashes, and the array of associated inventions in the Great Unbundling. The new architecture provides alternatives to the five trillion a day of gambled money. It provides alternatives to today’s insecure Internet, this porous Web where Equifax or Yahoo can lose hundreds of millions of items of personal data in a nonce, and the five Internet leviathans all just demand more passwords and user names.

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

It holds a position wholly its own, and the considerations it suggests are more interesting in their nature.’ Ada Lovelace’s notes are now recognised as the first inroads into the creation of code. That kernel of an idea has blossomed into the artificial intelligence revolution that is sweeping the world today, fuelled by the work of pioneers like Alan Turing, Marvin Minsky and Donald Michie. Yet Lovelace was cautious as to how much any machine could achieve: ‘It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything.

Although she thought machines were limited, Lovelace began to realise the potential of these machines of cogs and gears to express a more artistic side of its character: It might act upon other things besides number … supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent. Yet she believed that any act of creativity would lie with the coder, not the machine. Is it possible to shift the weight of responsibility more towards the code? The current generation of coders believes it is. At the dawn of AI, Alan Turing famously proposed a test to measure intelligence in a computer. I would now like to propose a new test: the Lovelace Test. To pass the Lovelace Test, an algorithm must originate a creative work of art such that the process is repeatable (i.e. it isn’t the result of a hardware error) and yet the programmer is unable to explain how the algorithm produced its output.

While he was there, he created his own game, Theme Park, where players had to build and run their own theme park. The game was hugely successful, selling several million copies and winning a Golden Joystick award. With enough funds to finance his time at university, Hassabis set off for Cambridge. His course introduced him to the greats of the AI revolution: Alan Turing and his test for intelligence, Arthur Samuel and his program to play draughts, John McCarthy, who coined the term artificial intelligence, Frank Rosenblatt and his first experiments with neural networks. These were the shoulders on which Hassabis aspired to stand. It was while sitting in his lectures at Cambridge that he heard his professor repeating the mantra that a computer could never play Go because of the game’s creative and intuitive characteristics.

Prime Obsession:: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics
by John Derbyshire
Published 14 Apr 2003

Andrew has also computed the first 100 zeros to 1,000 decimal places each. The first zero (I mean, of course, its imaginary part) begins 14.13472514173469379045725198356247027078425711569924 31756855674601499634298092567649490103931715610127 79202971548797436766142691469882254582505363239447 13778041338123720597054962195586586020055556672583 601077370020541098266150754278051744259130625448… V. There are stories behind Table 16-1. That A.M. Turing, for example, is the very same Alan Turing who worked in mathematical logic, developing the idea of the Turing Test (a way of deciding whether a computer or its program is intelligent), and of the Turing machine (a very general, theoretical type of computer, a thought experiment used to tackle certain problems in mathematical logic). There is a Turing Award for achievement in computer science, awarded annually since 1966 by the Association for Computing Machinery, equivalent to a Fields Medal in mathematics, or to a Nobel Prize in other sciences.
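
The zeros quoted above can be checked numerically with nothing beyond the standard library. The sketch below (my own illustration, not Derbyshire's) approximates ζ(s) on the critical line via the alternating eta series, η(s) = Σ (−1)^(n−1) n^(−s), accelerated by repeatedly averaging trailing partial sums, then uses ζ(s) = η(s) / (1 − 2^(1−s)); for serious computation a library such as mpmath (whose `zetazero` routine locates these zeros directly) would be the tool.

```python
def zeta_critical(s: complex, terms: int = 400, passes: int = 80) -> complex:
    """Approximate zeta(s) for Re(s) > 0 via the alternating eta series,
    accelerated by repeated averaging of partial sums (an Euler transform)."""
    partial = []
    total = 0j
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * n ** (-s)
        partial.append(total)
    # Repeatedly average adjacent trailing partial sums to accelerate
    # convergence of the (conditionally convergent) alternating series.
    tail = partial[-(passes + 1):]
    for _ in range(passes):
        tail = [(a + b) / 2 for a, b in zip(tail, tail[1:])]
    eta = tail[0]
    return eta / (1 - 2 ** (1 - s))

# The first zero's imaginary part, quoted above, begins 14.134725141734693...
rho1 = complex(0.5, 14.134725141734693)
print(abs(zeta_critical(rho1)))        # very close to zero
print(abs(zeta_critical(0.5 + 10j)))   # clearly nonzero, between zeros
```

The value at ρ₁ collapses toward zero while a nearby point on the critical line does not, which is exactly what Turing's machine computations were probing.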

Some of the gear wheels survived, however, and were found among his effects when he died, probably from suicide, on June 7, 1954. As sad and strange as Turing’s death was—he ate an apple coated, by himself, with cyanide—he enjoyed posthumous good fortune in the matter of biographers. Andrew Hodges wrote a beautiful book about him (Alan Turing: The Enigma, 1983), and then Hugh Whitemore made a fascinating play based on the book (Breaking the Code, 1986). I have no space here to go into the details of Turing’s life. I refer the reader to Hodges’s fine biography, from which I shall just quote the following. [O]n 15 March [1952] he submitted for publication his work on the calculation of the zeta function, even though the practical attempt at doing it on the prototype Manchester computer had been so unsatisfactory.

There are now hundreds of theorems that begin, “Assuming the truth of the Riemann Hypothesis….” They would all come crashing down if the RH were false. That is undesirable, of course, so the believers might be accused of wishful thinking, but it’s not the undesirability of losing those results, it’s the fact of their existence. Weight of evidence. Other mathematicians believe, as Alan Turing did, that the RH is probably false. Martin Huxley is a current nonbeliever. He justifies his nonbelief on entirely intuitive grounds, citing an argument first put forward by Littlewood: “A long-open conjecture in analysis generally turns out to be false. A long-open conjecture in algebra generally turns out to be true.”

pages: 528 words: 146,459

Computer: A History of the Information Machine
by Martin Campbell-Kelly and Nathan Ensmenger
Published 29 Jul 2013

Comrie, Lewis Fry Richardson, Lord Kelvin, Vannevar Bush, Howard Aiken, and Alan Turing. Of these, all but Comrie have standard biographies. They are Anthony Hyman’s Charles Babbage: Pioneer of the Computer (1984), Oliver Ashford’s Prophet—or Professor? The Life and Work of Lewis Fry Richardson (1985), Crosbie Smith and Norton Wise’s Energy and Empire: A Biographical Study of Lord Kelvin (1989), G. Pascal Zachary’s Endless Frontier: Vannevar Bush, Engineer of the American Century (1997), I. Bernard Cohen’s Howard Aiken: Portrait of a Computer Pioneer (1999), and Andrew Hodges’s Alan Turing: The Enigma (1988). A vignette of Comrie is given by Mary Croarken in her article “L.

As always in a new edition, we have sparingly revised the text to reflect changing perspectives and updated the bibliography to incorporate the growing literature of the history of computing. We have also introduced some substantial new material. In Chapter 3, which focuses on the precomputer era, we have added a section on Alan Turing. The year 2012 saw the centenary of the birth of Turing, whom many consider both a gay icon and the true inventor of the computer. Turing was indeed a key influence in the development of theoretical computer science, but we believe his influence on the invention of the computer has been overstated and have tried to give a measured assessment.

To understand the post–World War II computer industry, we need to realize that its leading firms—including IBM—were established as business-machine manufacturers in the last decades of the nineteenth century and were major innovators between the two world wars. Chapter 3 describes Charles Babbage’s failed attempt to build a calculating engine in the 1830s and its realization by Harvard University and IBM a century later. We also briefly discuss the theoretical developments associated with Alan Turing. Part Two of the book describes the development of the electronic computer, from its invention during World War II up to the establishment of IBM as the dominant mainframe computer manufacturer in the mid-1960s. Chapter 4 covers the development of the ENIAC at the University of Pennsylvania during the war and its successor, the EDVAC, which was the blueprint for almost all subsequent computers up to the present day.

pages: 523 words: 154,042

Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks
by Scott J. Shapiro

Hacking is less about breaking encryption than breaking something around the encryption in order to sidestep it. 50 million lines of code: “Windows 10 Lines of Code,” Microsoft, 2020, https://answers.microsoft.com/en-us/windows/forum/all/windows-10-lines-of-code/a8f77f5c-0661-4895-9c77-2efd42429409. Turing Test: Turing set out his test for intelligence in Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950): 433–60. A Turing Test has a human judge and a computer subject attempting to appear human. A “reverse” Turing Test has a computer judge and a human subject trying to appear human. CAPTCHA—the irritating image-recognition challenge that websites use for detecting bots—stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” principles of metacode: Alan Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, 1936, 230–65.

Hackers, as I’ll show, do not just hack downcode—they exploit philosophical principles, which I call “metacode.” Metacode refers to those fundamental principles that control all forms of computation. They determine what computation is and how it must work. Metacode, in other words, is the code for code—the code that must “run” before computer instructions can execute. Metacode was discovered by Alan Turing, the ingenious mathematician whose tragic life is featured in the Academy Award–winning movie The Imitation Game. Turing is best known for helping break the German Enigma code during World War II and developing a test for artificial intelligence, now known as the Turing Test. The Turing Test claims that a computer possesses intelligence when it can fool a human into thinking that it’s human.

The Turing Test claims that a computer possesses intelligence when it can fool a human into thinking that it’s human. Despite his many contributions to his country, and to humanity, Turing was prosecuted and punished by the British government for having had sex with another man. He died in 1954, by suicide, after eating a cyanide-laced apple. Alan Turing was only twenty-four years old in 1936 when he published his seminal article, “On Computable Numbers,” in which he set out the principles of metacode. Turing showed, for example, that computation is a physical process. When your calculator adds 2 + 2, when Amazon.com searches its database for a book, when the telephone company routes your call, or even when your visual cortex processes these words, physical mechanisms are working: switching circuits, sending pulses of light, forming neurochemical reactions, and more.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

Communications of the ACM 54, no. 10 (October 1, 2011): 66. doi:10.1145/2001269.2001288. Cohoon, J. McGrath, Zhen Wu, and Jie Chao. “Sexism: Toxic to Women’s Persistence in CSE Doctoral Programs,” 158. New York: ACM Press, 2009. https://doi.org/10.1145/1508865.1508924. Copeland, Jack. “Summing Up Alan Turing.” Oxford University Press (blog), November 29, 2012. https://blog.oup.com/2012/11/summing-up-alan-turing/. Cox, Amanda, Matthew Bloch, and Shan Carter. “All of Inflation’s Little Parts.” New York Times, May 3, 2008. http://www.nytimes.com/interactive/2008/05/03/business/20080403_SPENDING_GRAPHIC.html. Crawford, Kate. “Artificial Intelligence—With Very Real Biases.”

AI is tied up with games—not because there’s anything innate about the connection between games and intelligence, but because computer scientists tend to like certain kinds of games and puzzles. Chess, for example, is quite popular in their crowd, as are strategy games like Go and backgammon. A quick look at the Wikipedia pages for prominent venture capitalists and tech titans reveals that most of them were childhood Dungeons & Dragons enthusiasts. Ever since Alan Turing’s 1950 paper that proposed the Turing test for machines that think, computer scientists have used chess as a marker for “intelligence” in machines. Half a century has been spent trying to make a machine that could beat a human chess master. Finally, IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997.

Fowler was routinely passed over for promotion and was sexually propositioned by male coworkers. Uber’s HR team should have recognized that Fowler was facing a textbook case of gender bias in the workplace. Instead, they put her on probation and told her it was her fault. Disregard for social convention goes back farther than Minsky, back to computing pioneer Alan Turing, who, like Minsky, did his graduate work at Princeton. Turing was hopeless at social interaction. Turing’s biographer—Jack Copeland, director of the Turing Archive for the History of Computing—writes that Turing preferred to work in isolation: “Reading his scientific papers, it is almost as though the rest of the world—the busy community of human minds working away on the same or related problems—simply did not exist.”11 Unlike the character portrayed by actor Benedict Cumberbatch in the Turing biopic The Imitation Game, the real Turing was slovenly in appearance.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

The National Highway Traffic Safety Administration took note and declared that it would “monitor the new technology closely” and that it “will not hesitate to take action to protect the public against unreasonable risks to safety.” (See endnote 2, Chapter 3.) CHAPTER 4 THE QUEST TO BUILD INTELLIGENT MACHINES THE A.M. TURING AWARD IS GENERALLY RECOGNIZED AS THE “Nobel Prize” of computing. Named after the legendary mathematician and computer scientist Alan Turing and awarded annually by the Association for Computing Machinery, the Turing Award represents the pinnacle of achievement for those who have devoted their careers to advancing the state of the field. Like the Nobel, the Turing Award comes with a $1 million financial prize, which is funded primarily by Google.

Recent advances in AI have led prominent figures like Elon Musk and the late Stephen Hawking to warn of scenarios remarkably similar to what Butler worried about more than 150 years ago. Opinions differ as to exactly when artificial intelligence became a serious field of study. I would mark the origin as 1950. In that year, the brilliant mathematician Alan Turing published a scientific paper entitled “Computing Machinery and Intelligence” that asked the question “Can machines think?”2 In his paper Turing invented a test, based on a game that was popular at parties, which is still the most commonly cited method for determining if a machine can be considered to be genuinely intelligent.

The goals were both ambitious and optimistic; the conference proposal declared that “an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves” and promised that the organizers believed a “significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”3 Attendees included Marvin Minsky, who along with McCarthy became one of the world’s most celebrated AI researchers and founded the Computer Science and Artificial Intelligence Lab at MIT, and Claude Shannon, a legendary electrical engineer who formulated the principles of information theory that underlie electronic communication and make the internet possible. The brightest mind, however, was notably absent from the Dartmouth conference. Alan Turing had committed suicide two years earlier. Prosecuted for a same-sex relationship under the “indecency” laws then in force in Britain, Turing was given a choice between imprisonment or chemical castration through the forced introduction of estrogen. Depressed after selecting the second option, he took his own life in 1954.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

The secret was only revealed in 1834, sixty-five years after the Turk was built—Jacques-François Mouret, one of the “directors” who had hidden in the machine, sold his secret to a newspaper.
8. Wood, Living Dolls, p. 35.
9. Alan Turing, “Lecture to the London Mathematical Society,” 20 February 1947; archived at https://www.vordenker.de/downloads/turing-vorlesung.pdf (accessed July 2018).
10. Alan Turing, “Intelligent Machinery: A Report by A. M. Turing,” National Physical Laboratory (1948); archived at https://www.npl.co.uk (accessed July 2018).
11. See Grace Solomonoff, “Ray Solomonoff and the Dartmouth Summer Research Project in Artificial Intelligence” (no date), http://raysolomonoff.com/dartmouth/dartray.pdf.
12.

For the first time, researchers began to build machines with the serious intention of rivaling human beings—a proper, sophisticated program of constructing intelligence was under way. Their aspirations were now serious, no longer confined to fiction or dependent on deceit. THE FIRST WAVE OF AI At a 1947 meeting of the London Mathematical Society, Alan Turing told the gathering that he had conceived of a computing machine that could exhibit intelligence.9 Turing deserved to be taken seriously: perhaps Britain’s leading World War II code breaker, he is one of the greatest computer scientists to have ever lived. Yet the response to the ideas in his lecture was so hostile that within a year he felt compelled to publish a new paper on the topic, responding in furious detail to assorted objections to his claim that machines “could show intelligent behaviour.”

Writing a program to translate one language into another meant observing how a multilingual person makes sense of a paragraph of text. Identifying objects meant representing and processing an image in the same way as human vision.16 This methodology was reflected in the language of the AI pioneers. Alan Turing claimed that “machines can be constructed which will simulate the behaviour of the human mind very closely.”17 Nils Nilsson, an attendee at the Dartmouth gathering, noted that most academics there “were interested in mimicking the higher levels of human thought. Their work benefitted from a certain amount of introspection about how humans solve problems.”18 And John Haugeland, a philosopher, wrote that the field of AI was seeking “the genuine article: machines with minds, in the full and literal sense.”19 Behind some of the claims made by Haugeland and others was a deeper theoretical conviction: human beings, they believed, were themselves actually just a complex type of computer.

pages: 322 words: 88,197

Wonderland: How Play Made the Modern World
by Steven Johnson
Published 15 Nov 2016

In the middle of the twentieth century, chess became a kind of shorthand way of thinking about intelligence itself, both in the functioning of the human brain and in the emerging field of computer science that aimed to mimic that intelligence in digital machines. The very roots of the modern investigation into artificial intelligence are grounded in the game of chess. “Can [a] machine play chess?” Alan Turing famously asked in a groundbreaking 1946 paper. “It could fairly easily be made to play a rather bad game. It would be bad because chess requires intelligence . . . There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes . . .

— In the mid-2000s, the head of research at IBM, Paul Horn, began thinking about the next chapter in IBM’s storied tradition of “Grand Challenges”—high-profile projects that showcase advances in computation, often with a clearly defined milestone of achievement designed to attract the attention of the media. Deep Blue, the computer that ultimately defeated Garry Kasparov at chess, had been a Grand Challenge a decade before, exceeding Alan Turing’s hunch that chess-playing computers could be made to play a tolerable game. Horn was interested in Turing’s more celebrated challenge: the Turing Test, which he first formulated in his 1950 essay “Computing Machinery and Intelligence.” In Turing’s words, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The pleasure of play is understandable. The productivity of play is harder to explain. Making sense of this mystery requires that we peer into the inner workings of the human brain, drawing on recent research in neuroscience and cognitive psychology—research that, fittingly, began by studying games. In the 1950s, inspired by Alan Turing’s musing on a chess-playing computer, a computer scientist at IBM named Arthur Samuel created a software program that could play checkers at a reasonable skill level on an IBM 701. (Legend has it that when IBM CEO Thomas Watson saw an early draft of the program, he predicted that the news of the checkers game would cause IBM stock to jump fifteen points.)

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Move 37 by AlphaGo violated centuries of Go orthodoxy and was immediately seen by human experts as an embarrassing mistake, but it turned out to be a winning move. At top left is an Atlas humanoid robot built by Boston Dynamics. A depiction of a self-driving car sensing its environment appears between Ada Lovelace, the world’s first computer programmer, and Alan Turing, whose fundamental work defined artificial intelligence. At the bottom of the chess board are a Mars Exploration Rover robot and a statue of Aristotle, who pioneered the study of logic; his planning algorithm from De Motu Animalium appears behind the authors’ names. Behind the chess board is a probabilistic programming model used by the UN Comprehensive Nuclear-Test-Ban Treaty Organization for detecting nuclear explosions from seismic signals.

The methods used are necessarily different: the pursuit of human-like intelligence must be in part an empirical science related to psychology, involving observations and hypotheses about actual human behavior and thought processes; a rationalist approach, on the other hand, involves a combination of mathematics and engineering, and connects to statistics, control theory, and economics. The various groups have both disparaged and helped each other. Let us look at the four approaches in more detail. 1.1.1 Acting humanly: The Turing test approach The Turing test, proposed by Alan Turing (1950), was designed as a thought experiment that would sidestep the philosophical vagueness of the question “Can a machine think?” A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

His incompleteness theorem showed that in any formal theory as strong as Peano arithmetic (the elementary theory of natural numbers), there are necessarily true statements that have no proof within the theory. This fundamental result can also be interpreted as showing that some functions on the integers cannot be represented by an algorithm—that is, they cannot be computed. This motivated Alan Turing (1912–1954) to try to characterize exactly which functions are computable—capable of being computed by an effective procedure. The Church–Turing thesis proposes to identify the general notion of computability with functions computed by a Turing machine (Turing, 1936). Turing also showed that there were some functions that no Turing machine can compute.
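
Turing's notion of an "effective procedure" can be made concrete with a toy simulator. The sketch below is my own illustration, not drawn from the text: a machine is just a transition table mapping (state, symbol) to (new state, symbol to write, head move), and the hypothetical example machine scans right, flipping every bit until it reaches a blank.

```python
BLANK = "_"

def run_turing_machine(transitions, tape, state="q0", halt="halt", max_steps=10_000):
    """Run a Turing machine given as a dict: (state, symbol) -> (state, write, move)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, BLANK)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(BLANK)

# A toy machine (my own example): flip 0 <-> 1, halting at the first blank.
flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", BLANK): ("halt", BLANK, "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```

A function is "computable" in the Church–Turing sense exactly when some such transition table produces it; Turing's universal machine is simply a table that can read any other table as data.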

pages: 218 words: 63,471

How We Got Here: A Slightly Irreverent History of Technology and Markets
by Andy Kessler
Published 13 Jun 2005

Using Edison effect tubes and relays and other forms of logic and memory, scientists and engineers invented electronic computers to help win World War II. John von Neumann at the Moore School at the University of Pennsylvania designed the ENIAC digital computer, the birth mother of the U.S. computer industry, to speed up calculations for artillery firing tables for Navy guns. At the same time, Alan Turing and the British at Bletchley Park designed the Colossus computer to decipher Enigma codes. A host of electronic devices at Los Alamos helped speed up difficult calculations to control the reaction of uranium-235 for the atomic bomb. It is the very pursuit of those weapons that created huge commercial markets, and vice versa.

But by 1939, the Nazis got smart and changed the code every day instead of every month, and the Poles could no longer decipher the messages. But they did sneak their model of Enigma, which they named Bomba, to the British, which gave them a huge head start in breaking Nazi codes. Alan Turing, a mathematician from Cambridge and then Princeton, had written a paper in 1936 on a Universal Machine that could figure out any algorithm depending on how it was programmed. In other words, he conceptualized the stored-program computer with programmable instructions, rather than a fixed-purpose machine.

The product of these ideas was the design of another machine, the Electronic Discrete Variable Automatic Computer, EDVAC. The result, known as von Neumann computer architecture, consisted of a central processing unit that read in a program and data from memory and wrote the results back to memory. Virtually all computers in use today are von Neumann machines. Of course, von Neumann probably read Alan Turing’s paper back in 1936, and knew about the Universal Machine. Since Turing had been studying at Princeton, however, he may have picked up on the concept from meetings with von Neumann. Today, both seem to share credit. Since von Neumann worked as a consultant to the Manhattan Project, his computer expertise did play a part in the war effort despite the lateness of the ENIAC.

pages: 241 words: 70,307

Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
by David de Cremer
Published 25 May 2020

To understand how algorithms learn, it is necessary to introduce the English mathematician Alan Turing. Depicted by actor Benedict Cumberbatch in the movie The Imitation Game, Alan Turing is best known for deciphering the Enigma code used by the Germans during the Second World War. To achieve this, he developed an electro-mechanical computer, which was called the Bombe. The fact that the Bombe achieved something that no human was capable of led Turing to think about the intelligence of the machine. This led to his 1950 article, ‘Computing Machinery and Intelligence,’ in which he introduced the now-famous Turing test, which is still considered the crucial test for determining whether a machine is truly intelligent.

The participant cannot see the other human or the machine and can only use information on how the other unseen party behaves. If the human is not able to distinguish between the behavior of another human and the behavior of a machine, it follows that we can call the machine intelligent. It is these behavioral ideas of Alan Turing that still significantly influence the development of learning algorithms today. That observable behaviors form the input to learning is no surprise: in Turing’s time, behaviorism dominated psychology. This stream within psychology refrained from looking inside the mind of humans.

pages: 592 words: 152,445

The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America's Enemies
by Jason Fagone
Published 25 Sep 2017

He arrived there on July 28, eight days before America dropped the first atomic bomb on Japan. As he had done on his previous visit to Bletchley, William kept a detailed diary. One entry described a meeting with Alan Turing: “At 1535 a visit with Dr. Turing. He is leaving GC&CS, to my surprise. Says he’s going into electronic calculating devices and may come to the U.S. for a visit soon. Invited him to visit us if he comes to Washington.” This turned out to be the final encounter of William Friedman and Alan Turing. The two geniuses would never see each other again. In 1952, the British government stripped Turing’s security clearance on grounds that he was a homosexual, and officials coerced him into taking estrogen injections.

Until the invention of digital ciphers in the 1960s, the field of cryptology would be defined by heroic human attacks on physical cipher machines. These attacks would often be aided by machines specially built to speed the attacks—like the famous electromechanical “bombes” designed by the British codebreaker Alan Turing, and some of the world’s first computers, monstrosities of wires and vacuum tubes that occupied entire rooms—but not necessarily. It was still possible at this point to defeat a machine with mere pencil and paper. The human brain could beat the machines, if it was the right brain, and if the owner of the brain was willing to accept the cost of victory.

There were too many Germans using too many Enigmas with too many shifting keys to ever recover the keys by hand, so codebreakers needed to build machines of their own to assault the enemy’s machines, giant electro-mechanical contraptions and some of the first digital computers, too. Automation. Polish codebreakers were the first to solve Enigmas and automate the process of recovering keys. They built “bombes” that mirrored the Enigma rotors, ticking through possible alphabets until they found ones that might fit. Later, the British mathematician Alan Turing discovered how to make bombes dramatically more powerful, based on mathematical principles and previously solved bits of text known as “cribs”—a crib might be the name of a Nazi officer, the time of day, or “Heil Hitler.” His solutions were essentially search algorithms, ancestors of the Internet search algorithms of today.
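
The crib-driven search described here can be illustrated with a toy cipher. In the sketch below (my own example, with a simple Vigenère shift standing in for Enigma, which was vastly more complex), a suspected plaintext fragment such as "HEILHITLER" filters a brute-force sweep of the whole keyspace, which is the essential shape of the search the bombes mechanized.

```python
from itertools import product
from string import ascii_uppercase as AZ

def vigenere(text, key, sign=1):
    """Toy polyalphabetic cipher: shift each letter by the repeating key.
    sign=+1 encrypts, sign=-1 decrypts."""
    return "".join(
        AZ[(AZ.index(c) + sign * AZ.index(key[i % len(key)])) % 26]
        for i, c in enumerate(text)
    )

def crack_with_crib(ciphertext, crib):
    """Sweep every 3-letter key, keeping those whose decryption contains
    the crib -- a search algorithm driven by known plaintext."""
    hits = []
    for key in map("".join, product(AZ, repeat=3)):
        plain = vigenere(ciphertext, key, sign=-1)
        if crib in plain:
            hits.append((key, plain))
    return hits

message = "WEATHERREPORTFOLLOWSHEILHITLER"
ciphertext = vigenere(message, "KEY")
for key, plain in crack_with_crib(ciphertext, "HEILHITLER"):
    print(key, plain)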
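placeholder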

pages: 169 words: 41,887

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
Published 6 Feb 2024

History tells us that computers compute not only in the mathematical sense but universally. The number was incidental to the symbol. In the 1840s, Ada Lovelace, daughter of Lord Byron and one of the first “programmers” in the modern sense, imagined an engine that could manipulate any symbolic information whatsoever (not just numbers). A century later, Alan Turing extended that blueprint to imagine a “universal machine,” capable of reading from and writing to an infinite string of characters. The children of Turing and Lovelace occupying our homes are therefore expert generic symbol manipulators. They are smart because they are capable of interpreting abstract variables, containing values, representing anything and everything.

It would be simpler if we had at least two distinct words for the Platonic and the Aristotelian definitions, capturing the difference between intelligence as the source of and the goal of action. Baroque birds of all sorts bring us a step closer to the story of modern digital computers, able to mimic any other “discrete-­state machine” universally. Called a “universal machine” by Alan Turing, the computer promises to model general intelligence, regardless of the task it was originally designed to perform. Recall the obsolescence of single-­purpose machines like calculators, cash registers, and digital music players. The universal computer ate them all. Similarly, general intelligence refers to the ability of an individual or machine to learn, reason, and problem-­solve across a wide range of domains and contexts.

Malaprop (1977); GUS (1977) by a team from Xerox Palo Alto consisting of Daniel Bobrow, Ronald Kaplan, Martin Kay, Donald Norman, Henry Thompson, and Terry Winograd; and Wendy Lehnert’s QUALM (1977), among many other (though still dude-heavy) examples. Linguistics led this first wave of research, but AI also contained other nascent disciplines, such as robotics, vision, logic, decision-making, cybernetics, and neuroscience. Alan Turing’s writings on the mind cemented the conversation with robots in the popular imagination, as did the staged dialogs between ELIZA and PARRY in 1973. A machine called RACTER published a much-discussed volume of poetry titled The Policeman’s Beard Is Half Constructed (1984). Images of talking robots became commonplace, appearing in films like Jean-Luc Godard’s influential Alphaville (1965), Stanley Kubrick’s 2001: A Space Odyssey (1968), and George Lucas’s Star Wars (1977).

pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks
by Joshua Cooper Ramo
Published 16 May 2016

You can snap a photo, send it to a friend, edit it, and pass it along again. The world’s data can be reduced to ones and zeroes. But this is also an important metaphor: Our trade, our currencies, our ideologies—all these interact now. Long before the idea of a smartphone or 3-D goggles, the British mathematician Alan Turing anticipated their arrival when he dreamed of what he called a universal device: a notional box that, starting from the ones and zeroes of digitized data, could be constructed to do anything. Since everything can ultimately be reduced to a binary encoding, nearly any sort of data can be shared, studied, combined, or remixed.

The human was still doing the thinking; the computer was simply computing. It was extremely easy to draw a line between where the biological ended and the digital commenced. This was a puzzle that had been, in a sense, anticipated at the very dawn of the digital revolution by the mathematician Alan Turing in a paper called “Computing Machinery and Intelligence,” which he published in 1950. “Can machines think?” Turing began. His idea was to test this question in the following way: Have a research subject—a secretary, a graduate student, anyone—chat with an invisible interlocutor by way of a keyboard.

They will become more acute, more insightful in their judgment to the same degree that the human mind finds itself overwhelmed. Just as computers can see better, hear better, and remember longer than we can, so the device webs of our future will own a new, essential sense of what is happening on a whole system. We are at the moment that worried Alan Turing, the instant when man and machine confront each other and man has to ask, Wow, do I really let this thing gatekeep me? Humans alone already no longer train the very best machines: The devices teach themselves now, to some extent. Of course, there are still decades of adjustment, of leaps in hardware and programming, to eliminate the seams between our minds and the fused ideas of a digital system.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

There wasn’t a way to build a thinking machine—the processes, materials, and power weren’t yet available—and so the theory couldn’t be tested. The leap from theoretical thinking machines to computers that began to mimic human thought happened in the 1930s with the publication of two seminal papers: Claude Shannon’s “A Symbolic Analysis of Relay and Switching Circuits” and Alan Turing’s “On Computable Numbers, with an Application to the Entscheidungsproblem.” As an electrical engineering student at MIT, Shannon took an elective course in philosophy—an unusual diversion. Boole’s An Investigation of the Laws of Thought became the primary reference for Shannon’s thesis. His advisor, Vannevar Bush, encouraged him to map Boolean logic to physical circuits.

For example, Minsky was quoted in Life magazine saying: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”28 In that same article, the journalist refers to Alan Turing as “Ronald Turing.” Minsky, who was clearly enthusiastic, was likely being cheeky and didn’t mean to imply that walking, talking robots were just around the corner. But without the context and explanation, the public perception of AI started to warp. It didn’t help that in 1968, Arthur Clarke and Stanley Kubrick decided to make a movie about the future of machines with the general intelligence of the average person.

You now have a better understanding of how the Big Nine are driving AI’s developmental track, how investors and funders are influencing the speed and safety of AI systems, the critical role the US and Chinese governments play, how universities inculcate both skills and sensibilities, and how everyday people are an intrinsic part of the system. It’s time to open your eyes and focus on the boulder at the top of the mountain, because it’s gaining momentum. It has been moving since Ada Lovelace first imagined a computer that could compose elaborate pieces of music all on its own. It was moving when Alan Turing asked “Can machines think?” and when John McCarthy and Marvin Minsky gathered together all those men for the Dartmouth workshop. It was moving when Watson won Jeopardy and when, not long ago, DeepMind beat the world’s Go champions. It has been moving as you’ve read the pages in this book. Everybody wants to be the hero of their own story.

pages: 158 words: 49,168

Infinite Ascent: A Short History of Mathematics
by David Berlinski
Published 2 Jan 2005

If Gödel’s theorem undercut the very pretensions of the axiomatic method, it also forced the mathematical community to appreciate with unaccustomed modesty the fact that the sources of mathematical knowledge are and remain mysterious. During the 1930s, Gödel lectured at the new Institute for Advanced Study, his lectures themselves constituting both a presentation and an explanation of his work. A small cadre of professional logicians—Alonzo Church, Stephen Kleene, Barkley Rosser, W. V. O. Quine, Alan Turing—understood at once the implications of Gödel’s theorem, and they entertained the conviction, rare even among mathematicians, that in understanding Gödel’s theorem they were understanding a work of great art made possible by an intellect of great genius. For almost thirty years, Gödel’s theorem retained an esoteric aspect, one that many working mathematicians found baffling.

And yet, as all those dinosaurs must have dimly sensed, change is coming. The origins of the computer, considering the issue in terms of various paternity tests, lie in thought experiments conducted in the 1930s and early 1940s by a small, isolated group of mathematical logicians: Kurt Gödel, Alonzo Church, Stephen Kleene, Barkley Rosser, Alan Turing, Emil Post. Moving from the shadows to the spotlight, and standing there in some consternation, the ancient idea of an algorithm or an effective procedure seemed suddenly in need of a precise definition. The informal idea is perfectly obvious. An algorithm is a linked series of rules, a guide, an instruction manual, an adjuration, a way of getting things done, a tool to address life’s chattering chaos in symbols.

Some years after Gödel presented his results, the American logician Alonzo Church defined what he called the lambda-computable functions. And to roughly the same point, since the recursive and the lambda-computable functions, although quite different, did the same thing and carried on in the same way. In 1936, Alan Turing published the first of his papers on computability, “On Computable Numbers, with an Application to the Entscheidungsproblem,” and so gave the idea of an algorithm a vivid and unforgettable metaphor. An effective calculation is any calculation that could be undertaken, Turing argued, by an exceptionally simple imaginary machine, or even a human computer, someone who has, like a clerk in the department of motor vehicles or a college dean, been stripped of all cognitive powers and can as a result execute only a few primitive acts.
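The machine Turing described really is that simple: a state, a tape of symbols, and a small table of rules. The sketch below (a toy illustration with an invented rule table, not drawn from the book quoted here) increments a binary number, the head starting on the rightmost digit.

```python
# A toy Turing machine: (state, symbol) -> (write, head move, next state).
# This rule table increments a binary number; "_" is the blank symbol.
RULES = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + 1 = 0, carry moves left
    ("carry", "0"): ("1",  0, "halt"),    # absorb the carry
    ("carry", "_"): ("1",  0, "halt"),    # carry past the left edge
}

def run(tape, head, state="carry"):
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells are blank
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells)

print(run("1011", head=3))  # -> "1100": binary 11 plus one is 12
```

A clerk with pencil, paper, and this table could do the same job, which is exactly Turing’s point: nothing in the procedure requires understanding.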

The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal
by M. Mitchell Waldrop
Published 14 Apr 2001

Witness Claude Shannon's recognition of the links between switching circuits and logic, for example. Or witness the ideas of another young man at the very start of his career—an English mathematician-in-training who had simultaneously followed the trail of logic to an even stranger destination. Actually, Alan Turing had gotten a bit of a head start on Shannon; by the time his American contemporary started tracing the Differential Analyzer's relay circuits in 1936, Turing had already spent a year at Cambridge University obsessing about a single question: What are the ultimate powers and fundamental limits of a computer?

Thus the "decidability problem," which required mathematicians either to find this diagnostic procedure or else to show that it could not exist. THE LAST TRANSITION 49 Hmm. Turing was intrigued. And so all by himself, without a word to New- man or anyone else, he took on the decidability problem as his own personal challenge. This was typical. Alan Turing was a very solitary young man. Born in 1912, the second son of parents who had felt it best to leave their boys in foster care while they were away from England during their various tours of duty with the Indian civil service, he had been a shy, awkward, rather excitable child with an obsession with self-sufficiency: one of his favorite pastimes was what he called the desert-island game, in which he tried to synthesize exotic (and often danger- ous) chemicals from everyday substances found around the house.

Moreover, it was mathematically tractable, which meant that they could begin to ask deeper questions about neural networks in general. What could such networks actually do, for example? What was their computational power? What kind of logical operations were they capable of carrying out? And what kind of operations could they never carry out, even in principle? If these sound like the same questions that Alan Turing asked of his imaginary machine, it was no accident. By some miracle, the obscure McCulloch and Pitts had come across the decidability paper written by the equally obscure Turing, understood its significance, and taken Turing's work as one inspiration for their own. Moreover, McCulloch and Pitts now found themselves arriving at much the same destination as Turing.

pages: 246 words: 81,625

On Intelligence
by Jeff Hawkins and Sandra Blakeslee
Published 1 Jan 2004

AI suffers from a fundamental flaw in that it fails to adequately address what intelligence is or what it means to understand something. A brief look at the history of AI and the tenets on which it was built will explain how the field has gone off course. The AI approach was born with the digital computer. A key figure in the early AI movement was the English mathematician Alan Turing, who was one of the inventors of the idea of the general-purpose computer. His masterstroke was to formally demonstrate the concept of universal computation: that is, all computers are fundamentally equivalent regardless of the details of how they are built. As part of his proof, he conceived an imaginary machine with three essential parts: a processing box, a paper tape, and a device that reads and writes marks on the tape as it moves back and forth.

Programming computers to do even the most basic tasks of perception, language, and behavior began to seem impossible. Today, not much has changed. As I said earlier, there are still people who believe that AI's problems can be solved with faster computers, but most scientists think the entire endeavor was flawed. We shouldn't blame the AI pioneers for their failures. Alan Turing was brilliant. They all could tell that the Turing Machine would change the world— and it did, but not through AI. * * * My skepticism of AI's assertions was honed around the same time that I applied to MIT. John Searle, an influential philosophy professor at the University of California at Berkeley, was at that time saying that computers were not, and could not be, intelligent.

Whether they are calling these behaviors "answers," "patterns," or "outputs," both AI and neural networks assume intelligence lies in the behavior that a program or a neural network produces after processing a given input. The most important attribute of a computer program or a neural network is whether it gives the correct or desired output. As inspired by Alan Turing, intelligence equals behavior. But intelligence is not just a matter of acting or behaving intelligently. Behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent. A moment's reflection proves this: You can be intelligent just lying in the dark, thinking and understanding.

pages: 289 words: 85,315

Fermat’s Last Theorem
by Simon Singh
Published 1 Jan 1997

Kreisel, Biographical Memoirs of the Fellows of the Royal Society, 1980. A Mathematician’s Apology, by G.H. Hardy, 1940, Cambridge University Press. One of the great figures of twentieth-century mathematics gives a personal account of what motivates him and other mathematicians. Alan Turing: The Enigma of Intelligence, by Andrew Hodges, 1983, Unwin Paperbacks. An account of the life of Alan Turing, including his contribution to breaking the Enigma code. Chapter 5 Yutaka Taniyama and his time, by Goro Shimura, Bulletin of the London Mathematical Society 21 (1989), 186–196. A very personal account of the life and work of Yutaka Taniyama.

During the Second World War the Allies realised that in theory mathematical logic could be used to unscramble German messages, if only the calculations could be performed quickly enough. The challenge was to find a way of automating mathematics so that a machine could perform the calculations, and the Englishman who contributed most to this code-cracking effort was Alan Turing. In 1938 Turing returned to Cambridge having completed a stint at Princeton University. He had witnessed first-hand the turmoil caused by Gödel’s theorems of undecidability and had become involved in trying to pick up the pieces of Hilbert’s dream. In particular he wanted to know if there was a way to define which questions were and were not decidable, and tried to develop a methodical way of answering this question.

They were concerned that the man who knew more about Britain’s security codes than anyone else was vulnerable to blackmail and decided to monitor his every move. Turing had largely come to terms with being constantly shadowed, but in 1952 he was arrested for violation of British homosexuality statutes. This humiliation made life intolerable for Turing. Andrew Hodges, Turing’s biographer, describes the events leading up to his death: Alan Turing’s death came as a shock to those who knew him … That he was an unhappy, tense, person; that he was consulting a psychiatrist and suffered a blow that would have felled many people – all this was clear. But the trial was two years in the past, the hormone treatment had ended a year before, and he seemed to have risen above it all.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

In fact, we can go back even further. A challenge put forth by mathematicians in the 1920s and 1930s was to answer a fundamental question: ‘Can all mathematical reasoning be formalized?’ In the following decades, the answers that came from some of the twentieth century’s topmost math prodigies – Kurt Gödel, Alan Turing and Alonzo Church – were surprising in two ways. Firstly, they proved that, in fact, there are limits to what mathematical logic can accomplish. Secondly, and more importantly for AI, the answers suggested that, within these limits, any form of mathematical reasoning could be mechanized. Church and Turing offered a thesis implying that any mechanical device capable of shuffling symbols as simple as 0 and 1 could imitate any conceivable process of mathematical deduction.

Simple as it was, this invention inspired scientists to begin discussing the possibility of thinking machines, and that, in my personal view, was the point at which the work to deliver intelligent machines – so long the object of humanity’s fantasies – actually started. These scientists so strongly believed back then in the inevitability of a thinking machine that, in 1950, Alan Turing proposed a test (it came to be known as the Turing test) that set an early and still relevant bar to see if artificial intelligence could measure up to human intelligence. In simple terms, he suggested a natural language conversation between an evaluator, a human and a machine designed to generate human-like responses.

One of the most well-known of these is Frank Herbert’s Dune series, also made into a cult film, where the ‘Butlerian Jihad’ uprising sees humans gain victory over the robots and anyone who is caught making new ones is threatened with the death penalty. ‘Thou shalt not make a machine in the likeness of a human mind’ becomes the defining commandment of their Orange Catholic Bible. More recently, two literary heavyweights have added to the canon: Ian McEwan’s Machines Like Me interweaves Alan Turing, synthetic humans and insider trading with ideas of love and sex, and Kazuo Ishiguro’s Klara and the Sun brings poignancy to the concept of relationships with an AI who is designed to be the perfect ‘artificial friend’. That these works have not been classified within the more niche ‘science-fiction’ genre but have become part of the mainstream surely reflects how these issues around how we will live with AI are moving into the general consciousness.

Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions
by Temple Grandin, Ph.d.
Published 11 Oct 2022

Zihl, J., and C. A. Heywood. “The Contribution of LM to the Neuroscience of Movement Vision.” Frontiers in Integrative Neuroscience 9, no. 6 (February 17, 2015). https://www.frontiersin.org/articles/10.3389/fnint.2015.00006/full. Zitarelli, D. E. “Alan Turing in America—1942–1943.” Convergence, January 2015. https://www.maa.org/press/periodicals/convergence/alan-turing-in-america. 6. VISUALIZING RISK TO PREVENT DISASTERS Acton, J. M., and M. Hibbs. “Why Fukushima Was Preventable.” Carnegie Endowment for International Peace, March 6, 2012. https://carnegieendowment.org/2012/03/06/why-fukushima-was-preventable-pub-47361.

According to psychology professor Anna Abraham at Leeds Beckett University in the UK, mathematicians enjoy a “pedestal position” because math “represents the pinnacle of abstraction in reasoning” and is associated with elegance, pattern making, invention, creativity, and the like. That kind of mind is exemplified in the brilliant mathematician Alan Turing, who bridged the gap between the science of logic and mechanical computing machines. He is widely credited with developing the foundation of modern computing. In school in Dorset, England, Turing’s mathematical abilities and intelligence were apparent at an early age. Since childhood, he’d been attracted to numbers, even studying serial numbers on lamp posts.

“Genetic and Environmental Influences on Structural Brain Measures in Twins with Autism Spectrum Disorder.” Molecular Psychiatry 25 (2020): 2556–66. Helmrich, B. H. “Window of Opportunity? Adolescence, Music, and Algebra.” Journal of Adolescent Research 25, no. 4 (2010): 557–77. Hodges, A. Alan Turing: The Enigma. Princeton, NJ: Princeton University Press, 2015. Huddleston, T., Jr. “Bill Gates: Use This Simple Trick to Figure Out What You’ll Be Great at in Life.” CNBC, March 12, 2019. https://www.cnbc.com/2019/03/12/bill-gates-how-to-know-what-you-can-be-great-at-in-life.html. Isaacson, W.

pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension
by Samuel Arbesman
Published 18 Jul 2016

It requires significant effort to understand what was going on inside Galaga, particularly when something went wrong. Even though the graphics and gameplay were simple, the true level of complexity might only become clear to players when it failed. Bugs are not just annoyances to be fixed. Bugs are how we realize that we are in the Entanglement. In 1950, Alan Turing noted that machines can and will yield surprises for us in their behavior. And it seems that these surprises will only increase in frequency. As we build systems that become more and more complicated, there is a greater divergence between how we intend our systems to act and how they actually do act.

The twentieth century brought us numerous limitative theorems, statements that placed bounds upon what we could ever know and understand. Kurt Gödel showed that in mathematics there will always be statements that can never be proven as either true or false, within a given mathematical system. In computer science, Alan Turing demonstrated the limits on what any machine could ever do, no matter how fancy an algorithm one might develop. But neither of these fields died out or suffered a huge setback. Despite being bounded by limits, they flourished, in many ways far beyond what these men could have ever imagined. In building and using complex technological systems, there are limits to what we can understand about how they work and how they fail.

how we realize that we are in the Entanglement: Distinguished Google Fellow Urs Hölzle: “Complexity is evil in the grand scheme of things because it makes it possible for these bugs to lurk that you see only once every two or three years, but when you see them it’s a big story because it had a large, cascading effect.” Jack Clark, “Google: ‘At Scale, Everything Breaks,’” ZDNet, June 22, 2011, http://www.zdnet.com/article/google-at-scale-everything-breaks/2/. In 1950, Alan Turing noted: A. M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–60. Widely available online, e.g., http://cogprints.org/499/1/turing.html. a widely used simulator of gravitation: There are about 10,000 mixed-precision instances (the specific type of error) across about 1,000 lines out of approximately 30,000 lines of code.

pages: 523 words: 143,139

Algorithms to Live By: The Computer Science of Human Decisions
by Brian Christian and Tom Griffiths
Published 4 Apr 2016

In part, that’s because when we think about computers, we think about coldly mechanical, deterministic systems: machines applying rigid deductive logic, making decisions by exhaustively enumerating the options, and grinding out the exact right answer no matter how long and hard they have to think. Indeed, the person who first imagined computers had something essentially like this in mind. Alan Turing defined the very notion of computation by an analogy to a human mathematician who carefully works through the steps of a lengthy calculation, yielding an unmistakably right answer. So it might come as a surprise that this is not what modern computers are actually doing when they face a difficult problem.

Born in 1931 in Breslau, Germany (which became Wrocław, Poland, at the end of World War II), Rabin was the descendant of a long line of rabbis. His family left Germany for Palestine in 1935, and there he was diverted from the rabbinical path his father had laid down for him by the beauty of mathematics—discovering Alan Turing’s work early in his undergraduate career at the Hebrew University and immigrating to the United States to begin a PhD at Princeton. Rabin would go on to win the Turing Award—the computer science equivalent of a Nobel—for extending theoretical computer science to accommodate “nondeterministic” cases, where a machine isn’t forced to pursue a single option but has multiple paths it might follow.

We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe who practice the fourth, fifth, and higher degrees. Computer science illustrates the fundamental limitations of this kind of reasoning with what’s called the “halting problem.” As Alan Turing proved in 1936, a computer program can never tell you for sure whether another program might end up calculating forever without end—except by simulating the operation of that program and thus potentially going off the deep end itself. (Accordingly, programmers will never have automated tools that can tell them whether their software will freeze.)
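The only general strategy available is the one the passage names: simulate the program and wait. A budgeted checker (a sketch with invented names, modeling programs as Python generators) can answer "yes, it halted," but its "no" only ever means "not yet," which is precisely the content of Turing's 1936 result.

```python
def halts_within(gen, budget):
    """Advance a generator up to `budget` steps.

    True -> it halted within the budget.
    None -> undetermined: maybe it loops forever, maybe it just needs longer.
    No budget, however large, turns None into a certain "never".
    """
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration:
        return True
    return None

def countdown(n):          # a "program" that halts after n steps
    while n:
        n -= 1
        yield

def forever():             # a "program" that never halts
    while True:
        yield

print(halts_within(countdown(10), 1000))  # True
print(halts_within(forever(), 1000))      # None: looks identical to "slow"
```

The checker cannot distinguish `forever()` from a countdown that merely outlasts its budget, and Turing proved no cleverer checker can.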

pages: 315 words: 92,151

Ten Billion Tomorrows: How Science Fiction Technology Became Reality and Shapes the Future
by Brian Clegg
Published 8 Dec 2015

The fancy cabinet that supported the chessboard housed a very small chess master, who manipulated the Turk’s hands and directed the play by pulling on a series of levers. The first realistic concepts that could lead toward an automated chess device came from the grandfathers and father of the modern computer respectively, Charles Babbage and Alan Turing. Babbage speculated that his mechanical programmable computer, the Analytical Engine, which was designed but never built, would be able to play chess, while Turing wrote a simple program for chess playing that was only ever executed by hand. More impetus was given by information theorist Claude Shannon in the 1950s.

This could be used either to encrypt a message or to compress it to use a tiny fraction of the bandwidth that normal speech transmission required, but the overly complex system never caught on outside niche applications. It was only with the advent of the electronic computer that the possibility of going further emerged. The idea of making computers produce sounds began very early in the history of information technology. The great computing pioneer Alan Turing wrote the first program to generate music on the electronic computer at the University of Manchester. The machine was rigged up with a speaker to produce an alarm signal when something went wrong. Turing realized he could program this to issue tiny clicks. If these were put out thousands of times a second, it would produce a musical note that varied with the frequency of the clicks.
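Turing's trick can be sketched in miniature: a click repeated f times per second is heard as a tone at frequency f. The toy below (illustrative only; the sample rate is an assumption, not the Manchester machine's hardware) lays one second of clicks into a sample buffer.

```python
SAMPLE_RATE = 8000  # assumed samples per second for this illustration

def click_train(freq, seconds=1.0):
    """One second of silence with a single click at the start of each period."""
    samples = [0.0] * int(SAMPLE_RATE * seconds)
    for k in range(int(freq * seconds)):          # one click per period
        samples[int(k * SAMPLE_RATE / freq)] = 1.0
    return samples

a440 = click_train(440)   # concert A: 440 evenly spaced clicks per second
```

Played through a speaker, the ear fuses the individual clicks into a pitch; double the click rate and the note jumps an octave.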

Taken out of context it is impossible to say. It could be describing the eating habits of a particular type of fly, or the aerodynamics of fruit. While that specific sentence is unlikely to come up in any imaginable conversation, there is no doubt that the whole process of understanding what people say is fraught with difficulty. Alan Turing envisaged a test to see if a computer could be considered intelligent. The device would be isolated in a room and a person would interrogate it from a second room, trying to decide if the “person” on the other end of the line is human or machine. (Turing’s original statement of his idea was more complex, but this is the important part of it.)

pages: 293 words: 91,110

The Chip: How Two Americans Invented the Microchip and Launched a Revolution
by T. R. Reid
Published 18 Dec 2007

The transistor, with its promise of fast, low-power switching, spurred him to even more ambitious theories of what computers might become. More and more, toward the end of his life, he began to see parallels between the evolution of computing machines and the evolution of the human mind. His last book, published posthumously in 1958, was titled The Computer and the Brain. Alan Turing, born in London in 1912, was considered a poor student with little academic promise through most of his school career. After twice failing the scholarship exam for Trinity College, Cambridge, he matriculated at King’s, another Cambridge college, and was elected a fellow there in 1935. He became intrigued by the Entscheidungsproblem, a deep mathematical quandary posed by the German scholar David Hilbert.

One of the first things Kilby realized was that tearing apart existing adding machines to see how they worked—a process known as reverse engineering—would offer little, if any, help, because the basic architecture of this pocket-size device would have to be completely new. And so the team started at ground zero, setting down the fundamental elements that their calculator would require. In accordance with the architecture worked out by Alan Turing and John von Neumann, all digital devices, from the most powerful mainframe supercomputer to the simplest handheld electronic game, can be divided into four basic parts serving four essential functions: Input: The unit that receives information from a human operator, a sensory device, or another computer and delivers it to the processing unit.

An important contribution to this literature is Herman Goldstine’s The Computer from Pascal to von Neumann (Princeton, N.J.: Princeton University Press, 1972), which is strangely organized but has the immediacy that could be conveyed only by one who was present at the creation of the modern electronic computer. Andrew Hodges, Alan Turing: The Enigma (New York: Simon & Schuster, 1983), and Steve J. Heims, John von Neumann and Norbert Wiener (Cambridge, Mass.: MIT Press, 1980), are the first complete biographies. Von Neumann’s seminal paper “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument” is reprinted in John Diebold, ed., The World of the Computer (New York: Random House, 1973).

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

Nimatron is followed by an announcement that Elsie, “the star bovine performer of Borden’s Dairy World of Tomorrow,” is starting her residency at the fair, displayed in “a special glass boudoir.” “Most of its defeats”: E. U. Condon, “The Nimatron,” American Mathematical Monthly 49, no. 5 (1942): 331. Alan Turing, who worked: S. Barry Cooper and J. Van Leeuwen, Alan Turing (Amsterdam: Elsevier Science & Technology, 2013), 626. “The reader might well ask”: Cooper and Van Leeuwen, Alan Turing. The longest tournament game: There seems to actually be some dispute about the longest theoretically possible chess game, but 5,898 is the figure that seems to me most commonly claimed. The 269-move match was played in 1989 in Belgrade between Ivan Nikolić and Goran Arsović.

In 1951, the British electronics firm Ferranti built its own Nim-playing robot, Nimrod, which drew huge crowds on a world tour. In London a team of psychics attempted to overcome Nimrod’s perfect play by means of concentrated telepathic vibrations, with no success. In Berlin the machine took on future West German chancellor Ludwig Erhard and beat him three times in a row. Alan Turing, who worked on Ferranti’s Mark One computer, reported that Nimrod so captivated the German public that a free bar down the hall went entirely unpatronized. That a computer can play Nim as well as a human was seen as amazing, Germans-passing-up-free-beer amazing—but is it? Turing himself expressed some skepticism, writing: “The reader might well ask why we bother to use these complicated and expensive machines in so trivial a pursuit as playing games.”
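Behind Nimrod's perfect play is a single theorem (C. L. Bouton, 1901): the player to move loses, against best play, exactly when the XOR of the heap sizes is zero. A sketch of the winning-move calculation, with names invented here:

```python
from functools import reduce
from operator import xor

def nim_move(heaps):
    """Return (heap index, stones to take) for a winning move, or None."""
    total = reduce(xor, heaps, 0)        # the "nim-sum" of the position
    if total == 0:
        return None                      # every move hands over a winning position
    for i, h in enumerate(heaps):
        target = h ^ total
        if target < h:                   # this heap carries total's high bit
            return i, h - target         # reduce it so the nim-sum becomes zero

print(nim_move([3, 4, 5]))   # (0, 2): taking 2 from the 3-heap zeroes the XOR
print(nim_move([1, 2, 3]))   # None: 1 ^ 2 ^ 3 == 0, already a lost position
```

A dozen lines of arithmetic, in other words, which is Turing's point: the machine's invincibility at Nim is far less mysterious than the crowds assumed.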

The game of Go is much older than checkers or chess—in fact, just for a change of pace, it actually is ancient and Chinese. Machines that play Go, on the other hand, came later than machines for other games. In 1912, the Spanish mathematician Leonardo Torres y Quevedo built a machine, El Ajedrecista, to play out certain chess endgames, and Alan Turing laid out the plan for a functional chess computer in the 1950s. The idea of a chess-playing robot is even older, dating back to Wolfgang von Kempelen’s “Chess Turk,” a wildly popular chess-playing automaton of the eighteenth and nineteenth centuries, which inspired Charles Babbage, baffled Edgar Allan Poe, and checkmated Napoleon, but which was in fact controlled by a diminutive human operator concealed inside the works.

pages: 434 words: 135,226

The Music of the Primes
by Marcus Du Sautoy
Published 26 Apr 2004

But it was the war, and in particular the code-breakers at Bletchley Park, that were responsible for the development of the machine that would generate this new evidence: the computer. CHAPTER EIGHT Machines of the Mind I propose to consider the question, ‘Can machines think?’ Alan Turing, Computing Machinery and Intelligence Alan Turing’s name will always be associated with the cracking of Germany’s wartime code, Enigma. From the comfort of the country house of Bletchley Park, halfway between Oxford and Cambridge, Churchill’s code-breakers created a machine which could decode the messages sent each day by German intelligence.

Landau’, Journal of the London Mathematical Society, vol. 13 (1938), pp. 302–10 Hardy, G.H., A Mathematician’s Apology (Cambridge: Cambridge University Press, 1940) Hardy, G.H., Ramanujan. Twelve Lectures on Subjects Suggested by His Life and Work (Cambridge: Cambridge University Press, 1940) Hodges, A., Alan Turing: The Enigma (New York, NY: Simon & Schuster, 1983) Hoffman, P., The Man Who Loved Only Numbers. The story of Paul Erdos and the Search for Mathematical Truth (London: Fourth Estate, 1998) Jackson, A., ‘The IHÉS at forty’, Notices of the American Mathematical Society, vol. 46, no. 3 (1999), pp. 329–37 Jackson, A., ‘Interview with Henri Cartan’, Notices of the American Mathematical Society, vol. 46, no. 7 (1999), pp. 782–8 Jackson, A., ‘Million-dollar mathematics prizes announced’, Notices of the American Mathematical Society, vol. 47, no. 8 (2000), pp. 877–9 Kanigel, R., The Man Who Knew Infinity: A Life of the Genius Ramanujan (New York, NY: Scribner’s, 1991) Koblitz, N., ‘Mathematics under hardship conditions in the Third World’, Notices of the American Mathematical Society, vol. 38, no. 9 (1991), pp. 1123–8 Knapp, A.W., ‘André Weil: a prologue’, Notices of the American Mathematical Society, vol. 46, no. 4 (1999), pp. 434–9 Lang, S., ‘Mordell’s review, Siegel’s letter to Mordell, Diophantine geometry, and 20th century mathematics’, Notices of the American Mathematical Society, vol. 42, no. 3 (1995), pp. 339–50 Laugwitz, D., Bernhard Riemann, 1826–1866: Turning Points in the Conception of Mathematics, translated from the 1996 German original by Abe Shenitzer (Boston, MA: Birkhäuser, 1999) Lesniewski, A., ‘Noncommutative geometry’, Notices of the American Mathematical Society, vol. 44, no. 7 (1997), pp. 800–805 Littlewood, J.E., A Mathematician’s Miscellany (London: Methuen, 1953) Littlewood, J.E., ‘The Riemann hypothesis’, in The Scientist Speculates: An Anthology of Partly-Baked Ideas, edited by I.J.


pages: 281 words: 71,242

World Without Mind: The Existential Threat of Big Tech
by Franklin Foer
Published 31 Aug 2017

Where Descartes emphasized skepticism and doubt, Google is never plagued by second-guessing. It has turned the liberation of the brain into an engineering challenge—an exercise that often fails to ask basic questions about the human implications of the project. This is a moral failing that afflicts Google and has haunted computer science from the start.

• • •

ALAN TURING WAS AN ATHEIST and a loner. He relished being an outsider. When his mother dispatched him at age thirteen to suffer the cold-shower, hard-bed plight of English boarding school, he bicycled alone to campus, sixty miles in two days. He could be shy and strange. To combat the hay fever that arrived every June, he would don a gas mask.

Complex processes must be subdivided into a series of binary choices. There’s no equation to suggest a dress to wear, but an algorithm could easily be written for that—it will work its way through a series of either/or questions (morning or night, winter or summer, sun or rain), with each choice pushing to the next. Mechanical thinking was exactly what Alan Turing first imagined as he collapsed on his run through the meadows of Cambridge in 1935 and daydreamed about a fantastical new calculating machine. For the first decades of computing, the term “algorithm” wasn’t much mentioned. But as computer science departments began sprouting across campuses in the sixties, the term acquired a new cachet.
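Foer's dress-choosing example — a chain of either/or questions, each pushing to the next — can be sketched as code. The questions and outfit names below are invented for illustration; they are not from the book:

```python
# A minimal sketch of the either/or dress-choosing algorithm described
# above: each binary question narrows the path until a single
# suggestion remains.  Questions and outfits are illustrative only.

def suggest_outfit(is_evening: bool, is_winter: bool, is_raining: bool) -> str:
    if is_evening:
        base = "dark suit" if is_winter else "light blazer"
    else:
        base = "wool sweater" if is_winter else "cotton shirt"
    return base + (" with a raincoat" if is_raining else "")

print(suggest_outfit(is_evening=True, is_winter=False, is_raining=True))
# A chain of three yes/no questions distinguishes 2**3 = 8 outcomes.
```

Each added question doubles the number of reachable outcomes, which is why a short chain of binary choices suffices for such decisions.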

“He believed that his philosophical method”: Noble, 147. “The seclusion of a medieval monastery”: Isaacson, 41. “the gift for solitary thinking”: Stuart Hampshire, “Undecidables,” London Review of Books, February 16, 1984. “One day ladies will take their computers for walks”: Andrew Hodges, Alan Turing (Vintage, 2012), 418. “We may hope that machines will eventually compete”: B. Jack Copeland, ed., The Essential Turing (Oxford University Press, 2004), 463. His parents, Viennese Jews, fled on the eve of the Anschluss: Ray Kurzweil, Ask Ray blog, “My Trip to Brussels, Zurich, Warsaw, and Vienna,” December 14, 2010.

Cartesian Linguistics
by Noam Chomsky
Published 1 Jan 1966

He takes ordinary linguistic creativity to be “novel” and “innovative” (effectively, by access to a boundless number of sentences), “free from” external and internal stimulus control (stimulus freedom), and “coherent” and “appropriate.” Descartes’ ‘creativity test’ for human intelligent behavior (the products of what he called “Reason” although he must have included volition in this as well) is still generally accepted. Alan Turing, in 1950, proposed a ‘test for mind’ for computers that focuses on appropriateness. He suggested that we should not attempt to decide whether to regard computers as capable – like human beings – of producing intelligent behavior until they can be programmed in such a way that their responses to arbitrary questions are no less appropriate than human responses.

Chomsky often points out (as did Darwin) that evolution can and should include more than just natural selection. To say that something came about through evolution is virtually the same as saying that it came about through biological processes. Thus, a mathematical consequence of the basic structure of a biological system can be called an evolutionary consequence. Alan Turing, no friend of selection, sought to find mathematical patterns in biological systems – in the morphogenesis of plant species, for example. If Turing had adopted Chomsky’s broad version of evolution, he could have said that these patterns are the result of evolution, even though they are not the result of selection.

For a popular discussion of some issues, see Pinker 1995; Pinker and Chomsky do not, however, agree on the issue of the evolution of language. Jenkins 2000 has a clear and general but more technical discussion of some of Chomsky’s views on the topic. In a related vein, Chomsky often now refers to formal work on morphogenesis by Alan Turing and D’Arcy Thompson, and has suggested – speculatively at this stage – that perhaps language ‘evolved’ as a consequence of what happens to physical and biological processes when placed in a specific and complex form of organism. This is not evolution as popularly conceived, where it is supposed that evolution amounts to selection.]

pages: 118 words: 35,663

Smart Machines: IBM's Watson and the Era of Cognitive Computing (Columbia Business School Publishing)
by John E. Kelly Iii
Published 23 Sep 2013

“Cities should help people live their lives, not get in the way.”

CODA: AN ALLIANCE OF HUMAN AND MACHINE

Ever since Watson won at Jeopardy!, people have been asking the research scientists who designed the machine if they’d like to try to pass the so-called Turing test. That’s an exercise suggested by computing pioneer Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” where he raised the question: “Can machines think?” He suggested that to test whether a machine can think, a human judge should have a written conversation via computer screen and keyboard with another human and a computer. If the judge couldn’t tell the human from the machine based on their responses, the machine would have passed the test.1 With this test, Turing set a standard for measuring the capabilities of machines that has not yet been met.

Colin Harrison, IBM, “Smarter Cities—NextGen,” PowerPoint presentation, presented at the Major Cities of Europe Conference, Vienna, Austria, June 5, 2012. 9. Milind Naphade, IBM Research, interview, September 27, 2012. 10. Chandra Ravada, Iowa East Central Intergovernmental Association, interview, January 7, 2013.

CODA: AN ALLIANCE OF HUMAN AND MACHINE

1. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (October 1950): 433–60; see http://www.loebner.net/Prizef/TuringArticle.html.

pages: 389 words: 109,207

Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street
by William Poundstone
Published 18 Sep 2006

Project X

IT WAS CALLED PROJECT X. Declassified only in 1976, it was a joint effort of Bell Labs and Britain’s Government Code and Cipher School at Bletchley Park, north of London. It had a scientific pedigree rivaling that of the Manhattan Project, for the British-American team included not only Shannon but also Alan Turing. They were building a system known as SIGSALY. That was not an acronym, just a random string of letters to confuse the Germans, should they learn of it. SIGSALY was the first digitally scrambled, wireless phone. Each SIGSALY terminal was a room-sized, 55-ton computer with an isolation booth for the user and an air-conditioning system to prevent its banks of vacuum tubes from melting down.

After pressing the exact number of key records needed, the master was destroyed and the LPs distributed by trusted couriers to the SIGSALY terminals. It was vitally important that the SIGSALY phonographs play at precisely the same speed and in sync. Were one phonograph slightly off, the output was abruptly replaced by noise. Alan Turing cracked the German “Enigma” cipher, allowing the Allies to eavesdrop on the German command’s messages. The point of SIGSALY was to ensure that the Germans couldn’t do the same. Part of Shannon’s job was to prove that the system was indeed impossible for anyone lacking a key to crack. Without that mathematical assurance, the Allied commanders could not have spoken freely.

Shannon later said that thinking about how to conceal messages with random noise motivated some of the insights of information theory. “A secrecy system is almost identical with a noisy communications system,” he claimed. The two lines of inquiry “were so close together you couldn’t separate them.” In 1943 Alan Turing visited Bell Labs’ New York offices. Turing and Shannon spoke daily in the lab cafeteria. Shannon informed Turing that he was working on a way of measuring information. He used a unit called the bit. Shannon credited that name to another Bell Labs mathematician, John Tukey. Tukey’s bit was short for “binary digit.”

pages: 387 words: 111,096

Enigma
by Robert Harris
Published 15 Feb 2011

But he could still make out the name, one of three painted on a wooden board in elegant white capitals, now cracked and faded. TURING, A.M. How nervously he had climbed these stairs for the first time—when? in the summer of 1938? a world ago—to find a man barely five years older than himself, as shy as a freshman, with a hank of dark hair falling across his eyes: the great Alan Turing, the author of On Computable Numbers, the progenitor of the Universal Computing Machine . . . Turing had asked him what he proposed to take as his subject for his first year's research. 'Riemann's theory of prime numbers.' 'But I am researching Riemann myself.' 'I know,' Jericho had blurted out, 'that's why I chose it.'

For all his efforts with the pump, the tyres remained half-flat, the wheels and chain were stiff for want of oil. It was hard going, but Jericho didn't mind. He was taking action, that was the point. It was the same as code-breaking. However hopeless the situation, the rule was always to do something. No cryptogram, Alan Turing used to say, was ever solved by simply staring at it. He cycled on for about two miles, following the lane as it continued to rise gently towards Shenley Brook End. This was hardly a village, more a tiny hamlet of perhaps a dozen houses, mostly farmworkers' cottages. He couldn't see the buildings, which sheltered in a slight hollow, but when he rounded a bend and caught the scent of woodsmoke he knew he must be close.

He thought of his father's funeral, on just such a day as this: a freezing, ugly Victorian church in the industrial Midlands, medals on the coffin, his mother weeping, his aunts in black, everyone studying him with sad curiosity, and he all the time a million miles away, factoring the hymn numbers in his head ('Forward out of error,/Leave behind the night'—number 392 in Ancient and Modern – came out very prettily, he remembered, as 2 x 7 x 2 x 7 x 2...) And for some reason he thought of Alan Turing, restless with excitement in the hut one winter night, describing how the death of his closest friend had made him seek a link between mathematics and the spirit, insisting that at Bletchley they were creating a new world: that the bombes might soon be modified, the clumsy electro-mechanical switches replaced by relays of pentode valves and GT1C-thyratrons to create computers, machines that might one day mimic the actions of the human brain and unlock the secrets of the soul. . .

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

A more inclusive approach acknowledges its multifaceted nature and views the field as an atlas, providing a platform to explore and cultivate diverse conceptualizations, applications, methodologies, and effects, and to unravel power structures and resistance towards technology and its phenomena (Crawford 2021)—all this will continue to change the cultural sector (Hochscherf/Lätzel 2023). 12 AI in Museums There is no such thing as one artificial intelligence. In his fundamental study of 1950, Alan Turing argued that the thinking of intelligent humans could not be precisely defined and therefore any output of a machine that cannot be recognized as such by humans should also be regarded as intelligent (Turing 1950; Vater 2023); a little later, the research field of artificial intelligence was established at the famous Dartmouth Workshop of 1956 (McCorduck 2004; Moor 2006).

It is instead a being that by having a thought or feeling knows that it is having this thought or that it is in a specific emotional state—since an emotional state is an embodied cognitive state (Goldie 2000). Operating based on such conceptual confusion and reduction of a full-fledged conception of a person can be attributed to the role that Alan Turing and his Turing test played in the tradition of the development of artificial intelligence with respect to the concept it embodies. In his classic paper on the topic (Turing 1950), Turing proposed substituting the question whether machines are able to think with the question of whether we can notice the difference when confronted with an output in the form of a written text, for instance, on a screen, that is either the output of a machine or was written by a real person.

For example, the question of whether machines can make art leads back to the question of whether or not these machines can be creative—at the same time, while we may have the experience of being creative, we also do not have a straightforward understanding or a precise criterion for what being creative means for humans. Alan Turing already presented a similar argument against a psychological approach to the question of whether computers can think. In his view, the problem is not so much that it would be speculative to attribute psychological capabilities or intelligence to computers. Given the traditional conundrums about the soul, consciousness, et cetera, he instead suggests that whether computers can think is a pointless question, because it is not at all clear what exactly might be meant by this: ‘The original question, “Can machines think?”

pages: 608 words: 150,324

Life's Greatest Secret: The Race to Crack the Genetic Code
by Matthew Cobb
Published 6 Jul 2015

At the rear, on Washington Street, an overground subway line ran right through the building, like something out of a 1930s film of the future. Shannon was part of the cryptography group, studying the transmission of messages over the telephone. In January 1943, as Schrödinger was about to give his lectures in Dublin, the Bell Labs had a visitor from England – the mathematician and cryptographer Alan Turing, who had arrived in New York on the Queen Elizabeth in November. Turing began work at Bell Labs, investigating ways of setting up a securely encoded telephone link between Roosevelt and Churchill – this was later successfully implemented after Shannon’s theoretical demonstration that the code could not be broken.17 Although Turing did not work with Shannon, the two young men regularly had tea together in the cafeteria, where they discussed Turing’s ideas for a ‘universal machine’ that could perform any conceivable calculation.

It helped invent a new vocabulary of information that transformed the postwar world and shaped a radical new view of genetics. We are still living in that world. * Cybernetics was born in Paris. In 1947, Wiener was invited to a conference in Nancy, in eastern France. On his visit to Europe he met thinkers in England, such as Alan Turing, J. B. S. Haldane and the University of Manchester computer pioneers, ‘Freddie’ Williams and Tom Kilburn. During a visit to Paris, Wiener went to ‘a drab little bookshop opposite the Sorbonne’, where he met Enriques Freymann, the Mexican-born head of a French publishing firm.3 Fascinated by Wiener’s ideas, Freymann suggested that the American should write a book to explain them to the general public.

The symposium was a small affair – only fourteen speakers, with a further five participants, one of whom was the Caltech chemist Linus Pauling. 1. Claude Shannon’s model of communication. From Shannon and Weaver (1949). Von Neumann gave the opening talk, entitled ‘General and logical theory of automata’, and explored one of the defining features of life: its ability to reproduce. Von Neumann’s starting point was Alan Turing’s prewar theory of a universal machine that carried out its operations by reading and writing on a paper tape. But this was too simple for von Neumann: he wanted to imagine ‘an automaton whose output is other automata’. Von Neumann argued that such a machine needed instructions to construct its component parts, and that these instructions would be ‘roughly effecting the functions of a gene’; a change in the instruction would be like a mutation.

pages: 254 words: 72,929

The Age of the Infovore: Succeeding in the Information Economy
by Tyler Cowen
Published 25 May 2010

People such as myself, who have normal face-recognition abilities, usually have no such system. The result was that this woman—some might call her “handicapped”—had a much better sense of the crowd than I did. Charles Darwin, Gregor Mendel, Thomas Edison, Nikola Tesla, Albert Einstein, Isaac Newton, Samuel Johnson, Vincent van Gogh, Thomas Jefferson, Bertrand Russell, Jonathan Swift, Alan Turing, Paul Dirac, Glenn Gould, Steven Spielberg, and Bill Gates, among many others, are all on the rather lengthy list of famous figures who have been identified as possibly autistic or Asperger’s. I do not think we can “diagnose” individuals from such a distance, so we should be cautious in making any very particular claims.

If you’re wondering, a typical list of historical figures claimed to be on the autism spectrum includes Hans Christian Andersen, Lewis Carroll, Herman Melville, George Orwell, Jonathan Swift, William Butler Yeats, James Joyce, Bela Bartók, Bob Dylan, Glenn Gould, Vincent van Gogh, Andy Warhol, Mozart, Gregor Mendel, Charles Darwin, Ludwig Wittgenstein, Henry Cavendish, Samuel Johnson, Albert Einstein, Alan Turing, Paul Dirac, Emily Dickinson, Michelangelo, Bertrand Russell, Thomas Jefferson, Thomas Edison, Nikola Tesla, Isaac Newton, and Willard Van Orman Quine, among others. When it comes to any individual life, I have my worries about making any firm judgments. First, for some of these lives I know a bit about, such as Mozart’s, I just don’t see the evidence for autism.

Maybe quite a few of the names on that list would likely qualify as connected to the autism spectrum; nonetheless I am promoting the idea of autistic cognitive strengths, not diagnosing people. We’ve had far too much of diagnosis and far too little of simply considering what keen, specialized perception and mental ordering bring to society as a whole. Quite possibly Alan Turing and Glenn Gould were on the autism spectrum and you’ll find some evidence for this in their biographies. Peter Ostwald, a psychiatrist and also a former friend of Gould’s, wrote a whole book outlining the evidence, which includes Gould’s unusual and demanding routines. In other words, Gould had some of the more visible features associated with the autism spectrum.

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life
by Daniel C. Dennett
Published 15 Jan 1995

The term algorithm descends, via Latin (algorismus) to early English (algorisme and, mistakenly therefrom, algorithm), from the name of a Persian mathematician, Muusa al-Khowarizm, whose book on arithmetical procedures, written about 835 A.D., was translated into Latin in the twelfth century by Adelard of Bath or Robert of Chester. The idea that an algorithm is a foolproof and somehow "mechanical" procedure has been present for centuries, but it was the pioneering work of Alan Turing, Kurt Godel, and Alonzo Church in the 1930s that more or less fixed our current understanding of the term. Three key features of algorithms will be important to us, and each is somewhat difficult to define. Each, moreover, has given rise to confusions (and anxieties) that continue to beset our thinking about Darwin's revolutionary discovery, so we will have to revisit and reconsider these introductory characterizations several times before we are through: (1) substrate neutrality: The procedure for long division works equally well with pencil or pen, paper or parchment, neon lights or skywriting, {51} using any symbol system you like.

physics and chemistry. They wanted something dead simple, easy to visualize and easy to calculate, so they not only dropped from three dimensions to two; they also "digitized" both space and time — all times and distances, as we saw, are in whole numbers of "instants" and "cells." It was von Neumann who had taken Alan Turing's abstract conception of a mechanical computer (now called a "Turing machine") and engineered it into the specification for a general-purpose stored-program serial-processing computer (now called a "von Neumann machine"); in his brilliant explorations of the spatial and structural requirements for such a computer, he had realized — and proved — that a Universal Turing machine (a Turing machine that can compute any computable function at all) could in principle be "built" in a two-dimensional world.6 Conway and his students also set out to confirm this with their own exercise in two-dimensional engineering.7 It was far from easy, but they showed how they could "build" a working computer out of simpler Life forms.

(See Mazlish 1993) It is a long and winding road from molecules to minds, with many diverting spectacles along the way — and we will tarry over the most interesting of these in subsequent chapters — but now is the time to look more closely than usual at the Darwinian beginnings of Artificial Intelligence. 5. THE COMPUTER THAT LEARNED TO PLAY CHECKERS Alan Turing and John von Neumann were two of the greatest scientists of the century. If anybody could be said to have invented the computer, they did, and their brainchild has come to be recognized as both a triumph of engineering and an intellectual vehicle for exploring the most abstract realms of pure science.

Wireless
by Charles Stross
Published 7 Jul 2009

We reach the first periscope station in the viewing gallery. “This is room two. It’s currently occupied by Alan Turing.” She notices my start. “Don’t worry. It’s just his safety name.” (True names have power, so the Laundry is big on call by reference, not call by value; I’m no more “Bob Howard” than the “Alan Turing” in room two is the father of computer science and applied computational demonology.) She continues. “The real Alan Turing would be nearly a hundred by now. All our long-term residents are named for famous mathematicians. We’ve got Alan Turing, Kurt Godel, Georg Cantor, and Benoit Mandelbrot. Turing’s the oldest, Benny is the most recent—he actually has a payroll number, sixteen.”

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

More recently, science fiction got started with Mary Shelley’s Frankenstein in the early nineteenth century, and in the early twentieth century Karel Capek’s play RUR (Rossum’s Universal Robots) introduced the idea of an uprising in which robots eliminate their human creators. Alan Turing The brilliant British mathematician and code-breaker Alan Turing is often described as the father of both computer science and artificial intelligence. His most famous achievement was breaking the German naval ciphers at the code-breaking centre at Bletchley Park during the Second World War. He used complicated machines known as bombes, which eliminated enormous numbers of incorrect solutions to the codes so as to arrive at the correct solution.

Know Thyself
by Stephen M Fleming
Published 27 Apr 2021

If I were to then roll a 5, a 4, and a 7, all with the same three dice, then you would be much more confident that the trick die was a 0. As long as each roll is independent of the previous one, Bayes’s theorem tells us we can compute the probability that the answer is a 3 or a 0 by summing up the logarithm of the ratio of our confidence in each hypothesis after each individual roll.16 The brilliant British mathematician Alan Turing used this trick to figure out whether or not to change tack while trying to crack the German Enigma code in the Second World War. Each morning, his team would try new settings of their Enigma machine in an attempt to decode intercepted messages. The problem was how long to keep trying a particular pair of ciphers before discarding it and trying another.
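Turing's trick — summing the logarithm of the likelihood ratio over independent observations — can be sketched in a few lines. The biased-coin hypotheses below are an illustrative stand-in for the book's trick-die setup, not Fleming's actual example:

```python
import math

# Sequential evidence accumulation as described above: for independent
# observations, the log of the posterior odds equals the log of the
# prior odds plus the sum of per-observation log-likelihood ratios.
# Hypotheses here (an illustrative assumption): H1 = a coin biased
# 80/20 toward heads, H0 = a fair coin.

def log_likelihood_ratio(obs: str) -> float:
    p_h1 = 0.8 if obs == "H" else 0.2   # P(obs | biased coin)
    p_h0 = 0.5                           # P(obs | fair coin)
    return math.log(p_h1 / p_h0)

def posterior_odds(observations, prior_odds=1.0):
    log_odds = math.log(prior_odds)
    for obs in observations:
        log_odds += log_likelihood_ratio(obs)  # independent trials add
    return math.exp(log_odds)

# Each extra head strengthens the case for the biased coin:
odds = posterior_odds("HHHHT")
```

Because independent trials simply add to the running log-odds, one can stop as soon as the total crosses a decision threshold — the logic Turing's team used to decide when to abandon a candidate Enigma setting.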

I see two broad solutions to this problem: • Seek to engineer a form of self-awareness into machines (but risk losing our own autonomy in the process). • Ensure that when interfacing with future intelligent machines, we do so in a way that harnesses rather than diminishes human self-awareness. Let’s take a look at each of these possibilities. Self-Aware Machines Ever since Alan Turing devised the blueprints for the first universal computer in 1937, our position as owners of uniquely intelligent minds has looked increasingly precarious. Artificial neural networks can now recognize faces and objects at superhuman speed, fly planes or pilot spaceships, make medical and financial decisions, and master traditionally human feats of intellect and ingenuity such as chess and computer games.

Hochberg, Leigh R., Mijail D. Serruya, Gerhard M. Friehs, Jon A. Mukand, Maryam Saleh, Abraham H. Caplan, Almut Branner, David Chen, Richard D. Penn, and John P. Donoghue. “Neuronal Ensemble Control of Prosthetic Devices by a Human with Tetraplegia.” Nature 442, no. 7099 (2006): 164–171. Hodges, Andrew. Alan Turing: The Enigma. London: Vintage, 1992. Hohwy, Jakob. The Predictive Mind. Oxford: Oxford University Press, 2013. Hoven, Monja, Maël Lebreton, Jan B. Engelmann, Damiaan Denys, Judy Luigjes, and Ruth J. van Holst. “Abnormalities of Confidence in Psychiatry: An Overview and Future Perspectives.” Translational Psychiatry 9, no. 1 (2019): 1–18.

pages: 542 words: 161,731

Alone Together
by Sherry Turkle
Published 11 Jan 2011

For her dissertation work on how people responded to Mertz in a natural environment, see Lijin Aryananda, “A Few Days of a Robot’s Life in the Human’s World: Toward Incremental Individual Recognition” (PhD diss., Massachusetts Institute of Technology, 2007). 6 Alan Turing, usually credited with inventing the programmable computer, said that intelligence may require the ability to have sensate experience. In 1950, he wrote, “It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc.” Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950): 433-460. 7 This branch of artificial intelligence (sometimes called “classical AI”) attempts to explicitly represent human knowledge in a declarative form in facts and rules.

Freedom Baird takes this question very seriously.9 A recent graduate of the MIT Media Lab, she finds herself engaged with her Furby as a creature and a machine. But how seriously does she take the idea of the Furby as a creature? To determine this, she proposes an exercise in the spirit of the Turing test. In the original Turing test, published in 1950, mathematician Alan Turing, inventor of the first general-purpose computer, asked under what conditions people would consider a computer intelligent. In the end, he settled on a test in which the computer would be declared intelligent if it could convince people it was not a machine. Turing was working with computers made up of vacuum tubes and Teletype terminals.

See the Feldman Gallery’s Kelly Heaton page at www.feldmangallery.com/pages/artistsrffa/arthea01.html (accessed August 18, 2009). 9 Baird developed her thought experiment comparing how people would treat a gerbil, a Barbie, and a Furby for a presentation at the Victoria Institute, Gothenburg, Sweden, in 1999. 10 In Turing’s paper that argued the existence of intelligence if a machine could not be distinguished from a person, one scenario involved gender. In “Computing Machinery and Intelligence,” he suggested an “imitation game”: a man and then a computer pose as female, and the interrogator tries to distinguish them from a real woman. See Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950): 433-460. 11 Antonio Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness (New York: Harcourt, 1999). Since emotions are cognitive representations of body states, the body cannot be separated from emotional life, just as emotion cannot be separated from cognition. 12 There are online worlds and communities where people feel comfortable expressing love for Furbies and seriously mourning Tamagotchis.

pages: 998 words: 211,235

A Beautiful Mind
by Sylvia Nasar
Published 11 Jun 1998

But because the science required sophisticated mathematics, it was also very much a mathematicians’ war, and the war effort tapped the eclectic talents of the Princeton mathematical community.31 Princeton mathematicians became involved in ciphers and code breaking. A cryptanalytic breakthrough enabled the United States to win a major battle at Midway Island, the turning point in the naval war between the United States and Japan.32 In Britain, Alan Turing, a Princeton Ph.D., and his group at Bletchley Park broke the Nazi code without the Germans’ knowledge, thus turning the tide in the submarine battle for control of the Atlantic.33 Oswald Veblen and several of his associates essentially rewrote the science of ballistics at the Aberdeen Proving Ground.

The arrest preceded the onset of Nash’s illness by more than four years. Stories of other mathematicians who were caught up in the meanness and bigotry of those times illustrate how disequilibrating being harassed and humiliated can be. J. C. C. McKinsey committed suicide in 1953 within two years of being fired by RAND.35 Alan Turing, the mathematical genius who cracked the Nazi submarine code, was arrested, tried, and convicted under Britain’s anti-homosexual statutes in 1952; he committed suicide in the summer of 1954 by taking a bite of a cyanide-laced apple in his laboratory.36 Others, less well known, less obviously brutalized, had breakdowns that led to their giving up mathematics and living on the margins of society.

Bell, Men of Mathematics (New York: Simon & Schuster, 1986); Stuart Hollingdale, Makers of Mathematics (New York: Penguin, 1989); Ray Monk, Ludwig Wittgenstein: The Duty of Genius (New York: Penguin, 1990); John Dawson, Logical Dilemmas: The Life and Work of Kurt Gödel (Wellesley, Mass.: A. K. Peters, 1997); Roger Highfield and Paul Carter, The Private Lives of Albert Einstein (New York: St. Martin’s Press, 1994); Andrew Hodges, Alan Turing: The Enigma (New York: Simon & Schuster, 1983). 21. Anthony Storr, The Dynamics of Creation (New York: Atheneum, 1972). 22. Ibid. 23. John G. Gunderson, “Personality Disorders,” The New Harvard Guide to Psychiatry (Cambridge: The Belknap Press of Harvard University, 1988), pp. 343–44. 24. Ibid. 25.

pages: 247 words: 43,430

Think Complexity
by Allen B. Downey
Published 23 Feb 2012

What is the biggest spaceship you can find?

Universality

To understand universality, we have to understand computability theory, which is about models of computation and what they compute. One of the most general models of computation is the Turing machine, which is an abstract computer proposed by Alan Turing in 1936. A Turing machine is a 1-D CA, infinite in both directions, augmented with a read-write head. At any time, the head is positioned over a single cell. It can read the state of that cell (usually there are only two states), and it can write a new value into the cell. In addition, the machine has a register, which records the state of the machine (one of a finite number of states), and a table of rules.
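The machine Downey describes — a tape, a read-write head, a state register, and a table of rules mapping (state, symbol) pairs to actions — can be sketched directly. The bit-inverting rule table below is an invented example, not one from the book:

```python
from collections import defaultdict

# A minimal Turing machine sketch: an unbounded tape (modelled as a
# dict defaulting to blank), a read-write head, a state register, and
# a rule table mapping (state, symbol) -> (new symbol, move, new state).
# The machine below (an illustrative example) inverts a string of bits,
# then halts on the first blank cell.

RULES = {
    ("invert", 0): (1, +1, "invert"),    # flip 0 -> 1, move right
    ("invert", 1): (0, +1, "invert"),    # flip 1 -> 0, move right
    ("invert", None): (None, 0, "halt"), # blank cell: stop
}

def run(tape_bits):
    tape = defaultdict(lambda: None, enumerate(tape_bits))
    head, state = 0, "invert"
    while state != "halt":
        symbol = tape[head]
        new_symbol, move, state = RULES[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return [tape[i] for i in range(len(tape_bits))]

print(run([1, 0, 1, 1]))  # -> [0, 1, 0, 0]
```

Every part of the definition in the text appears here: the dict plays the infinite tape, `head` the head position, `state` the register, and `RULES` the finite rule table.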

Read http://en.wikipedia.org/wiki/Instrumentalism and construct a sequence of statements that characterize instrumentalism in a range of strengths.

Turmites

If you generalize the Turing machine to two dimensions or add a read-write head to a 2D CA, the result is a cellular automaton called a turmite. It is named after a termite because of the way the read-write head moves, but spelled wrong as an homage to Alan Turing. The most famous turmite is Langton’s ant, discovered by Chris Langton in 1986. See http://en.wikipedia.org/wiki/Langton_ant. The ant is a read-write head with four states, which you can think of as facing north, south, east, or west. The cells have two states, black and white. The rules are simple.
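The standard rules (assumed here, since the excerpt stops before stating them) are: on a white cell the ant turns right, on a black cell it turns left, and in either case it flips the cell's color and steps forward one cell. A minimal sketch, not the book's code:

```python
# Langton's ant: a set of black cells models the otherwise-white infinite grid.
def langtons_ant(steps):
    black = set()                   # cells currently black; all others white
    x, y = 0, 0                     # ant position
    dx, dy = 0, 1                   # facing direction
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx        # black cell: turn left...
            black.discard((x, y))   # ...and flip it back to white
        else:
            dx, dy = dy, -dx        # white cell: turn right...
            black.add((x, y))       # ...and flip it to black
        x, y = x + dx, y + dy       # step forward one cell
    return black

# For the first four steps every cell is white, so the ant traces a square;
# on step five it revisits the now-black origin and turns left.
print(len(langtons_ant(5)))  # 3
```

Run long enough (around 10,000 steps), the ant famously settles into a repeating "highway" pattern, which is easy to verify by plotting the returned set of black cells.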

The Science of Language
by Noam Chomsky
Published 24 Feb 2012

Thomas Huxley recognized it – that there's going to be a lot of different kinds of life forms, including human ones; maybe nature just somehow allows human types and some other types – maybe nature imposes constraints on possible life forms. This has remained a fringe issue in biology: it has to be true, but it's hard to study. [Alan] Turing (1992), for example, devoted a large part of his life to his work on morphogenesis. It is some of the main work he did – not just that on the nature of computation – and it was an effort to show that if you ever managed to understand anything really critical about biology, you'd belong to the chemistry or physics department.

Biology, a science of particular interest in the study of language, seems still to be in transition; it seems still to owe some allegiance to commonsense understanding. Darwin's view of natural selection (even supplemented with genes in what is now called “neo-Darwinism”) and the concept of adaptation built on it remains indebted to what Alan Turing and Richard Lewontin call “history,” not to mathematical formal theories of structure and form and to the constraints they impose on both potential modifications in and growth/development of organisms. Indeed, there are some naïve versions of selection and adaptation that appear in evolutionary discussions that are difficult to distinguish from historicized versions of behaviorism, a point B.

Wallace at Darwin's time pointed out that it is very unlikely that selection could explain the introduction of the capacity we humans alone have to do mathematics. That natural selection does not deal with everything in the way of explaining ‘shape’ and biological form – and perhaps deals with very little – was emphasized by D'Arcy Thompson in the early 1900s and Alan Turing in the mid-1900s. They pointed to a significant role for physiochemical explanation in dealing with structure and modification and emphasized that formal functions could explain form and its permissible variations in a way that brought into question the value of adaptationist and selectional explanations.

The Book of Why: The New Science of Cause and Effect
by Judea Pearl and Dana Mackenzie
Published 1 Mar 2018

It was primarily this third level that prepared us for further revolutions in agriculture and science and led to a sudden and drastic change in our species’ impact on the planet. I cannot prove this, but I can prove mathematically that the three levels differ fundamentally, each unleashing capabilities that the ones below it do not. The framework I use to show this goes back to Alan Turing, the pioneer of research in artificial intelligence (AI), who proposed to classify a cognitive system in terms of the queries it can answer. This approach is exceptionally fruitful when we are talking about causality because it bypasses long and unproductive discussions of what exactly causality is and focuses instead on the concrete and answerable question “What can a causal reasoner do?”

This is, of course, a holy grail of any branch of science—the development of a theory that will enable us to predict what will happen in situations we have not even envisioned yet. But it goes even further: having such laws permits us to violate them selectively so as to create worlds that contradict ours. Our next section features such violations in action.

THE MINI-TURING TEST

In 1950, Alan Turing asked what it would mean for a computer to think like a human. He suggested a practical test, which he called “the imitation game,” but every AI researcher since then has called it the “Turing test.” For all practical purposes, a computer could be called a thinking machine if an ordinary human, communicating with the computer by typewriter, could not tell whether he was talking with a human or a computer.

But life and science are never so simple! All evidence comes with a certain amount of uncertainty. Bayes’s rule tells us how to perform step (4) in the real world.

FROM BAYES’S RULE TO BAYESIAN NETWORKS

In the early 1980s, the field of artificial intelligence had worked itself into a cul-de-sac. Ever since Alan Turing first laid out the challenge in his 1950 paper “Computing Machinery and Intelligence,” the leading approach to AI had been so-called rule-based systems or expert systems, which organize human knowledge as a collection of specific and general facts, along with inference rules to connect them. For example: Socrates is a man (specific fact).
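Bayes's rule, P(H|E) = P(E|H) P(H) / P(E), is a one-line computation once the evidence term P(E) is expanded by total probability. A minimal numeric sketch; the diagnostic-test numbers are hypothetical, not from the book:

```python
# Bayes's rule, with P(E) expanded over H and not-H by total probability.
def bayes_posterior(prior, likelihood, false_positive_rate):
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical diagnostic test: 1% base rate, 90% sensitivity,
# 9% false-positive rate. A positive result lifts belief from 1% to ~9%.
print(round(bayes_posterior(0.01, 0.90, 0.09), 3))  # 0.092
```

This single update step, repeated along the links of a graph of variables, is the basic operation that belief propagation in a Bayesian network performs.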

pages: 285 words: 86,853

What Algorithms Want: Imagination in the Age of Computing
by Ed Finn
Published 10 Mar 2017

The commodification of the Enlightenment comes at a price. It turns progress and computational efficiency into a performance, a spectacle that occludes the real decisions and trade-offs behind the mythos of omniscient code. And we believe it because we have lived with this myth of the algorithm for a long time—much longer than computational pioneers Alan Turing or even Charles Babbage and their speculations about thinking machines. The cathedral is a pervasive metaphor here because it offers an ordering logic, a superstructure or ontology for how we organize meaning in our lives. Bogost is right to cite the Enlightenment in his piece, though I will argue the relationship between algorithmic culture and that tradition of rationalism is more complicated than a simple rejection or deification.

If the anchor point for the pragmatist’s definition of the algorithm is its indefinable flexibility based on tacit understanding about what counts as a problem and a solution, the anchor point here is the notion of abstraction. The argument for computationalism begins with the Universal Turing Machine, mathematician Alan Turing’s breathtaking vision of a computer that can complete any finite calculation simply by reading and writing to an infinite tape marked with 1s and 0s, moving the tape forward or backward based on the current state of the machine. Using just this simple mechanism one could emulate any kind of computer, from a scientific calculator finding the area under a curve to a Nintendo moving Mario across a television screen.

This phase shift has produced a new crop of centers and initiatives grappling with the potential consequences of artificial intelligence, uniting philosophers, technologists, and Silicon Valley billionaires around the question of whether a truly thinking machine could pose an existential threat to humanity. In the paper where he described the Turing test, Alan Turing also took on the broader question of machine intelligence: an algorithm for consciousness. The Turing test was in many ways a demonstration of the absurdity of establishing a metric for intelligence; the best we can do is have a conversation and see how effective a machine is at emulating a human.

pages: 315 words: 89,861

The Simulation Hypothesis
by Rizwan Virk
Published 31 Mar 2019

In lieu of a formal definition, an informal definition is a computer program or artificial device that can pass the Turing Test.

The History and Rise of AI

The Turing Test

[Figure 13: A visual depiction of the Turing Test]

The Turing Test is more of a milestone than a definition, since most AI today cannot pass this test. Alan Turing, considered by many to be the father of modern computer science, conjectured a time when a machine would exhibit intelligent behaviors. In his 1950 paper titled “Computing Machinery and Intelligence,” Turing took on the question of whether a machine could “think.” Since it was very difficult to say what “thinking” would mean, Turing devised a party game to tell if a computer was “intelligent” enough in conversation that it could fool a human.

The components that would need to be developed for AI/NPCs to pass this test in a fully immersive simulation like our reality include: Natural Language Processing. The first requirement would be that AI could accept natural language as an input. This would initially be typed responses not unlike Alan Turing’s idea. The AI would need to understand the input well enough to consider an appropriate series of responses. Natural Language Response. The AI would then need to give a response back that showed an understanding of what was input in a way that mimicked how a human might respond. This means understanding not only the text but the emotional content and context of what was said.

It was developed based on the technology of a long-vanished alien race!

The Upshot: Consciousness as Information

We have seen how the history of AI and gaming is intertwined, and just as my first exposure to AI was in building a Tic Tac Toe game, many of the founding figures of modern computer science, including Claude Shannon and Alan Turing, devised games as a way to test and develop artificial intelligence. We saw that recently, algorithms have been able to beat professional players in traditional games like Chess and Go, and even beat professional players of eSports, which requires much more understanding of a virtual 3D environment.

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

Data) from Star Trek, C3PO from Star Wars and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the TURING TEST—in other words, these AI systems could carry out a conversation so that they would be indistinguishable from a human being. Alan Turing proposed this test in his 1950 paper, Computing Machinery and Intelligence, which arguably established artificial intelligence as a modern field of study. In other words, AGI has been the goal from the very beginning. It seems likely that if we someday succeed in achieving AGI, that smart system will soon become even smarter.

These two paradigms were completely different, they aimed to try and solve different problems, and they used completely different methods and different kinds of mathematics. Back then, it wasn’t at all clear which was going to be the winning paradigm. It’s still not clear to some people today. What was interesting, was that some of the people most associated with logic actually believed in the neural net paradigm. The biggest examples are John von Neumann and Alan Turing, who both thought that big networks of simulated neurons were a good way to study intelligence and figure out how those things work. However, the dominant approach in AI was symbol processing inspired by logic. In logic, you take symbol strings and alter them to arrive at new symbol strings, and people thought that must be how reasoning works.

So, very early on I started to think a lot about thinking, and that led me to my interest in things like neuroscience later on in my life. Chess, of course, has a deeper role in AI. The game itself has been one of the main problem areas for AI research since the dawn of AI. Some of the early pioneers in AI like Alan Turing and Claude Shannon were very interested in computer chess. When I was 8 years old, I purchased my first computer using the winnings from the chess tournaments that I entered. One of the first programs that I remember writing was for a game called Othello—also known as Reversi—and while it’s a simpler game than chess, I used the same ideas that those early AI pioneers had been using in their chess programs, like alpha-beta search, and so on.

pages: 317 words: 101,074

The Road Ahead
by Bill Gates , Nathan Myhrvold and Peter Rinearson
Published 15 Nov 1995

For the next century mathematicians worked with the ideas Babbage had outlined and finally, by the mid-1940s, an electronic computer was built based on the principles of his Analytical Engine. It is hard to sort out the paternity of the modern computer, because much of the thinking and work was done in the United States and Britain during World War II under the cloak of wartime secrecy. Three major contributors were Alan Turing, Claude Shannon, and John von Neumann. In the mid-1930s, Alan Turing, like Babbage a superlative Cambridge-trained British mathematician, proposed what is known today as a Turing machine. It was his version of a completely general-purpose calculating machine that could be instructed to work with almost any kind of information. In the late 1930s, when Claude Shannon was still a student, he demonstrated that a machine executing logical instructions could manipulate information.

Although I believe that eventually there will be programs that will recreate some elements of human intelligence, it is very unlikely to happen in my lifetime. For decades computer scientists studying artificial intelligence have been trying to develop a computer with human understanding and common sense. Alan Turing in 1950 suggested what has come to be called the Turing Test: If you were able to carry on a conversation with a computer and another human, both hidden from your view, and were uncertain about which was which, you would have a truly intelligent machine. Every prediction about major advances in artificial intelligence has proved to be overly optimistic.

pages: 170 words: 51,205

Information Doesn't Want to Be Free: Laws for the Internet Age
by Cory Doctorow , Amanda Palmer and Neil Gaiman
Published 18 Nov 2014

They figured out how to do it with ease.

1.5 Understanding General-Purpose Computers

Back at the dawn of mechanical computation, computers were “special-purpose.” One computer would solve one kind of mathematical problem, and if you had a different problem, you’d build a different computer. But during World War II, thanks to the government-funded advancements made by such scientific luminaries as Alan Turing and John von Neumann, a new kind of computer came into existence: the “general-purpose” digital computer. These machines arose from a theory of general-purpose computation that showed that, with a simple set of “logic gates” and enough memory and time, you could “compute” any program that could be represented symbolically.

We know how to build a computer that can solve one kind of problem (like a mechanical adding machine), and we know how to build a computer that can solve all kinds of problems. But we don’t know how to design and build a computer that can run every program except for one program that pisses off, endangers, or harms the entertainment industry. The computer that runs everything minus one has no basis in theory. Maybe some twenty-first-century Alan Turing will invent it, but no one has yet stepped forward with such a design, and the consensus in computer science is that such a design is not feasible. Which means that we can’t deliver on the promise of digital locks. Of course, that hasn’t stopped the entertainment industry from trying to approximate the effect of a computer that doesn’t run a certain program.

pages: 180 words: 55,805

The Price of Tomorrow: Why Deflation Is the Key to an Abundant Future
by Jeff Booth
Published 14 Jan 2020

(Plan28.org is an ongoing project to use his designs to build his Analytical Engine using only parts available from his time. That project is on track to finish by 2021.) Advances in technology—including electricity—increased what was possible. Research into thinking machines grew from the 1930s to 1950s. An important trailblazer of the time was Alan Turing (1912–1954), an English mathematician. Turing is best known for breaking the German Enigma code in World War II, which allowed the Allies to read encrypted messages crucial to their victory over Nazi Germany—a feat depicted in the movie The Imitation Game. But he was also an early believer that the human brain was in large part a digital computing machine, and therefore that computers could be made to have intelligence—to think.

Although progress still continued in pockets, it was largely due to the increasing computational power of computers, combined with digitization, that artificial intelligence finally began a lasting resurgence in the late 1990s. An area of specific study for many in the artificial intelligence field was investigating how our own brains work. Alan Turing himself theorized that the cortex at birth is an “unorganized machine” and through “training” becomes organized “into a universal machine or something like it.”50 If brains learn like computers, then computers can learn like brains. But was Turing right? Do we understand by reducing probabilities?

pages: 573 words: 157,767

From Bacteria to Bach and Back: The Evolution of Minds
by Daniel C. Dennett
Published 7 Feb 2017

They are just as clearly fruits of the Tree of Life as spider webs and beaver dams, but the probability of their emerging without the helping hand of Homo sapiens and our cultural tools is nil. As we learn more and more about the nano-machinery of life that makes all this possible, we can appreciate a second strange inversion of reasoning, achieved almost a century later by another brilliant Englishman: Alan Turing. Here is Turing’s strange inversion, put in language borrowed from Beverley: IN ORDER TO BE A PERFECT AND BEAUTIFUL COMPUTING MACHINE, IT IS NOT REQUISITE TO KNOW WHAT ARITHMETIC IS. Before Turing’s invention there were computers, by the hundreds or thousands, employed to work on scientific and engineering calculations.

Before we take up the minds of animals, I want to turn to some further examples of the design of artifacts that will help isolate the problem that evolution solved when it designed competent animals.

The intelligent designers of Oak Ridge and GOFAI

After seventy years there are still secrets about World War II that have yet to emerge. The heroic achievements of Alan Turing in breaking the German Enigma code at Bletchley Park are now properly celebrated even while some of the details are still considered too sensitive to make public. Only students of the history of atomic energy engineering are likely to be well acquainted with the role that General Leslie Groves played in bringing the Manhattan Project to a successful conclusion.

There are many well-studied examples of viral and bacterial camouflage and mimicry, and biotechnologists are now copying the strategy, masking nano-artifacts that would otherwise be attacked by the immune system.

26. Suppose bigger teeth would be a design improvement in some organism; the raw materials required, and the energy to move the raw materials into position, do not count as semantic information, but the developmental controls to accomplish this redesign do count.

27. The Ratio Club, founded in 1949 by the neurologist John Bates at Cambridge University, included Donald MacKay, Alan Turing, Grey Walter, I. J. Good, William Ross Ashby, and Horace Barlow, among others. Imagine what their meetings must have been like!

28. Science has hugely expanded our capacity to discover differences that make a difference. A child can learn the age of a tree by counting its rings, and an evolutionary biologist can learn roughly how many million years ago two birds shared a common ancestor by counting the differences in their DNA, but these pieces of information about duration are not playing any role in the design of the tree or the bird; the information is not for them but has now become information for us.

29. Colgate and Ziock (2010) defend a definition of information as “that which is selected,” which is certainly congenial to my definition, but in order to make it fit the cases they consider, the term “selected” has to be allowed to wander somewhat.

30. Notoriously, Gibson doesn’t just ignore the question of what internal machinery manages to do this pick-up; he often seems to deny that there are any difficult questions to answer here.

pages: 207 words: 59,298

The Gig Economy: A Critical Introduction
by Jamie Woodcock and Mark Graham
Published 17 Jan 2020

We owe an important thanks to the German Federal Ministry for Economic Cooperation and Development (BMZ) and the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), as well as the ESRC (ES/S00081X/1) for supporting our research in this area. We would like to acknowledge also the Leverhulme Prize (PLP-2016-155), the European Research Council (ERC-2013-StG335716-GeoNet), and The Alan Turing Institute (EPSRC grant EP/N510129/1) for their ongoing support. The book has drawn on previous and ongoing research projects at the Oxford Internet Institute. We are particularly thankful to the Fairwork team, including Sandy Fredman, Paul Mungai, Richard Heeks, Darcy du Toit, Jean-Paul van Belle and Abigail Osiki on the South African project; Balaji Parthasarathy, Mounika Neerukonda and Pradyumna Taduri in India; Sai Englert, Adam Badger and Fabian Ferrari in the UK; as well as Noopur Raval, Srujana Katta, Alison Gillwald, Anri van der Spuy, Trebor Scholz, Niels van Doorn, Anna Thomas and Janine Berg – many of whom also discussed the ideas and offered feedback on the manuscript.

Available at: https://www.oecd-ilibrary.org/employment/automation-skills-use-and-training_2e2f4eea-en Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press. OECD (2019) Measuring platform mediated workers. OECD Digital Economy Papers No. 282. Ojanperä, S., O’Clery, N. and Graham, M. (2018) Data science, artificial intelligence and the futures of work. Alan Turing Institute Report, 24 October. Available at: http://doi.org/10.5281/zenodo.1470609 O’Neil, C. (2017) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin. Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information.

pages: 211 words: 57,618

Quantum Computing for Everyone
by Chris Bernhardt
Published 19 Mar 2019

The authors were generally referring to specific types of computers, but the impression they give, though exaggerated, is true. Initially computers were massive, had to be in air-conditioned rooms, and were not very reliable. Today, I have a laptop, a smartphone, and a tablet. All three are far more powerful than the first computers. I think that even visionaries like Alan Turing would be amazed at the extent to which computers have thoroughly permeated all levels of society. Turing did discuss chess playing and artificial intelligence, but nobody predicted that the rise of e-commerce and social media would come to dominate so much of our lives. Quantum computing is now in its infancy, and the comparison to the first computers seems apt.

Though this is distinctly a minority view, Deutsch is a firm believer. His paper in 1985 is one of the foundational papers in quantum computing, and one of Deutsch’s goals with this work was to make a case for parallel universes. He hopes that one day there will be a test, analogous to Bell’s test, that will confirm this interpretation.

Computation

Alan Turing is one of the fathers of the theory of computation. In his landmark paper of 1936 he carefully thought about computation. He considered what humans did as they performed computations and broke it down to its most elemental level. He showed that a simple theoretical machine, which we now call a Turing machine, could carry out any algorithm.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

CHAPTER 3

FROM TURING TO TODAY—AND BEYOND

In 1943, when researchers created the first modern computer—electronic, digital, and programmable—their achievement gave new urgency to intriguing questions: Can machines think? Are they intelligent? Could they become intelligent? Such questions seemed particularly perplexing given long-standing dilemmas about the nature of intelligence. In 1950, mathematician and code breaker Alan Turing offered a solution. In a paper unassumingly titled “Computing Machinery and Intelligence,” Turing suggested setting aside the problem of machine intelligence entirely. What mattered, Turing posited, was not the mechanism but the manifestation of intelligence. Because the inner lives of other beings remain unknowable, he explained, our sole means of measuring intelligence should be external behavior.

Werner Heisenberg, “Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik,” Zeitschrift für Physik, as quoted in the Stanford Encyclopedia of Philosophy, “The Uncertainty Principle,” https://plato.stanford.edu/entries/qt-uncertainty/. 15. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe (Oxford, UK: Basil Blackwell, 1958), 32–34. 16. See Eric Schmidt and Jared Cohen, The New Digital Age: Reshaping the Future of People, Nations, and Business (New York: Alfred A. Knopf, 2013). CHAPTER 3 1. Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950), 433–460, reprinted in B. Jack Copeland, ed., The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life Plus the Secrets of Enigma (Oxford, UK: Oxford University Press, 2004), 441–464. 2.

New Horizons in the Study of Language and Mind
by Noam Chomsky
Published 4 Dec 2003

By standard externalist arguments, the question should be settled by the truth about thought: what is the essence of Peter’s thinking about his children, or solving a quadratic equation, or playing chess, or interpreting a sentence, or deciding whether to wear a raincoat? But that is not the way it seemed to Ludwig Wittgenstein and Alan Turing, to take two notable examples. For Wittgenstein, the question whether machines think cannot seriously be posed: “We can only say of a human being and what is like one that it thinks” (Wittgenstein 1958: 113), maybe dolls and spirits; that is the way the tool is used. Turing, in his classic 1950 paper, wrote that the question whether machines can think may be too meaningless to deserve discussion.

That move has been made far too easily, leading to extensive and it seems pointless debate over such alleged questions as whether machines can think: for example, as to “how one might empirically defend the claim that a given (strange) object plays chess” (Haugeland 1979), or determine whether some artifact or algorithm can translate Chinese, or reach for an object, or commit murder, or believe that it will rain. Many of these debates trace back to the classic paper by Alan Turing in which he proposed the Turing test for machine intelligence, but they fail to take note of his observation that “The original question, ‘Can machines think?,’ I believe to be too meaningless to deserve discussion” (Turing 1950: 442): it is not a question of fact, but a matter of decision as to whether to adopt a certain metaphorical usage, as when we say (in English) that airplanes fly but comets do not – and as for space shuttles, choices differ.

The discussions are not only dualistic in essence, but also, it seems, without any clear purpose or point: on a par with debates about whether the space shuttle flies or submarines set sail, but do not swim; questions of decision, not fact, in these cases, though assumed to be substantive in the case of the mind, on assumptions that have yet to be explained – and that, incidentally, ignore an explicit warning by Alan Turing in the classic paper that inspired much of the vigorous debate of the past years. When we turn to language, the internalism–externalism issues arise; though again only for the theory of meaning, not for phonology, where they could be posed in the same ways. Thus we are asked to consider whether meanings are “in the head,” or are externally determined.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

Specifically, what exactly was the nature of Hilbert’s ‘mechanical procedures’? What was the complete range of possible ‘mechanical procedures’ that could be created with systems of logical rules? In other words, what defined all the things that algorithms could possibly do? In 1936, British mathematician (and later war hero) Alan Turing designed a theoretical mechanical device, and a mathematical proof that it could do any mechanical procedure, that could be implemented on any computer, ever. This device, which we now call a Turing Machine, involved configurations of mechanical ‘states’, and the ability to write to and read from memory (in the form of symbols written on an infinitely long roll of tape).

Note the wall of patch cords and the panels of knobs (see Figure 8.2). The rat’s nests of wires connecting the switches was suggestive of a network not dissimilar to the brain’s complex web of neurons and synapses. This conception of the brain was probably widely held as it is apparent in one of the earliest generalized papers about AI, written by Alan Turing in 1948, and entitled ‘Intelligent Machinery’. Although the paper wasn’t published until 1968, it contained an early ground-breaking description of computing devices that are very like connectionist algorithms.17 Furthermore, the paper illustrates some striking similarities with ideas in Hayek’s later work, which emphasized the brain-like mess of patch cords in ENIAC and other early computers.

What words we use for which things shapes our perception of those things, and perceptions change meaning, and meaning determines actions which, in turn, may change perception. This is a vital feedback loop that affects social and cultural evolution and it’s a loop that now involves algorithms. So, it is now imperative that we understand the meaning of words applied to algorithms and how algorithms generate words about and for us. Alan Turing’s contributions to the world are on a nearly unimaginable scale. Due to the deeply rooted societal bias against homosexuals, he was, until recently, the greatest unsung hero of the Second World War, perhaps the greatest of all time. He was instrumental in the creation of a computer that deciphered the coded messages of the German’s Enigma machine, a feat that is thought to have shortened the war by at least four years and thus saved millions of lives.

pages: 407 words: 104,622

The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution
by Gregory Zuckerman
Published 5 Nov 2019

There wasn’t even a proper name for this kind of trading, which involved data cleansing, signals, and backtesting, terms most Wall Street pros were wholly unfamiliar with. Few used email in 1990, the internet browser hadn’t been invented, and algorithms were best known, if at all, as the step-by-step procedures that had enabled Alan Turing’s machine to break coded Nazi messages during World War II. The idea that these formulas might guide, or even help govern, the day-to-day lives of hundreds of millions of individuals, or that a couple of former math professors might employ computers to trounce seasoned and celebrated investors, seemed far-fetched if not outright ludicrous.

The experience taught Patterson to distrust most moneymaking operations, even those that appeared legitimate—one reason why he was so skeptical of Simons years later. After graduate school, Patterson thrived as a cryptologist for the British government, building statistical models to unscramble intercepted messages and encrypt secret messages in a unit made famous during World War II when Alan Turing famously broke Germany’s encryption codes. Patterson harnessed the simple-yet-profound Bayes’ theorem of probability, which argues that, by updating one’s initial beliefs with new, objective information, one can arrive at improved understandings. Patterson solved a long-standing problem in the field, deciphering a pattern in the data others had missed, becoming so valuable to the government that some top-secret documents shared with allies were labeled “For US Eyes Only and for Nick Patterson.”


pages: 406 words: 108,266

Journey to the Edge of Reason: The Life of Kurt Gödel
by Stephen Budiansky
Published 10 May 2021

One could even easily devise a machine which would give you as many correct consequences of the axioms as you like.”68 Gödel’s mentioning a “machine” was a suggestive anticipation of the profound connections his work would in fact have to the theories of Alan Turing and John von Neumann, laying the basis of the digital computer a decade later. It was just one of many far-reaching ripples from the series of rocks he was about to throw into the waters of mathematical logic. *The term “decidable” also has another sense when referring to the so-called Entscheidungsproblem, which asks whether a computer program or other algorithm can, in a finite number of steps, determine whether a proposition is true or false. Alan Turing demonstrated in 1936 that no such algorithm can be constructed.
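The shape of Turing's 1936 argument can be illustrated with a short, purely illustrative Python sketch: assume, for contradiction, that some halting decider exists, then build a program that does the opposite of whatever the decider predicts about it. The toy "decider" below is a deliberately naive stand-in; the point is that the construction defeats any candidate.

```python
def make_diagonal(claimed_halts):
    """Given any claimed halting decider, build a program it misjudges."""
    def diag():
        if claimed_halts(diag):
            while True:   # decider said "halts" — so loop forever
                pass
        # decider said "loops forever" — so halt immediately
        return None
    return diag

# A toy "decider" that predicts no program ever halts:
pessimist = lambda program: False

d = make_diagonal(pessimist)
d()  # returns immediately, contradicting the decider's prediction
```

Because `make_diagonal` works against every candidate decider, no correct one can exist — which is the heart of Turing's negative answer to the Entscheidungsproblem as described above.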

“But for fifteen years,” he explained, “I had carried around the thought of astounding the mathematical world with my unorthodox ideas, and meeting the man chiefly responsible for the vanishing of that dream rather carried me away.” He ended by saying, “As for any claims I might make perhaps the best I can say is that I would have proved Gödel’s Theorem in 1921—had I been Gödel.”45 Still, Post was able to contribute an important extension of Gödel’s theorem which, along with a near-simultaneous paper by Alan Turing, provided a rigorous definition of a formal system as a series of basic, mechanical computational steps—the conceptual model for a computer that would come to be known as the Turing Machine, and which would lay the foundation of modern computer science. Gödel and Turing never met, but each recognized the great significance of the other’s work for the newly emerging field of computing.

pages: 261 words: 10,785

The Lights in the Tunnel
by Martin Ford
Published 28 May 2011

These are usually relatively isolated locations far from natural disasters and other threats and close to clean, reliable energy (which today mostly means hydroelectric power).

The “Heads in the Sand” Objection

If other arguments against the ideas I have presented here prove insufficient, then I suspect that many people will be tempted to turn to this one: Some people will reject the idea that machines might begin to exhibit some degree of intelligence—and, therefore, achieve the capability to perform a great many jobs—simply because the implications are very difficult to deal with. This irrational, but perhaps understandable, objection to the idea that machines might someday begin to think and reason was first articulated by the founder of computer science, Alan Turing (please see the last section of this Appendix). Turing initiated the field of artificial intelligence with his 1950 paper “Computing Machinery and Intelligence.” Here’s how Turing expressed what he called the “Heads in the Sand” Objection (which, of course, he rejected): “The consequences of machines thinking would be too dreadful.

Roger Penrose, one of the world’s top mathematical physicists, has written several books57 suggesting that true artificial intelligence is unattainable using conventional computers because he believes that intelligence (or at least consciousness) has its roots in quantum mechanics—the area of physics that governs the probabilistic, and seemingly bizarre, interactions that occur between particles of subatomic size. If strong AI does arrive, how will we know? That is a question that was first asked by Alan Turing nearly sixty years ago. Turing, a legendary British mathematician and code breaker during World War II, is often considered to be the founder of computer science. In 1950, Turing published a paper entitled “Computing Machinery and Intelligence,” in which he proposed a test to answer the question: “Can machines think?”

Speaking Code: Coding as Aesthetic and Political Expression
by Geoff Cox and Alex McLean
Published 9 Nov 2012

Intelligence

To demonstrate believability, a machine would be required to possess some kind of intelligence that reflects the capacity for human reasoning, in parallel to turning mere voice sounds into proper speech that expresses human gentility. In a paper of 1950, “Computing Machinery and Intelligence,” Alan Turing made the claim that computers would be capable of imitating human intelligence, or more precisely the human capacity for rational thinking. He set out what became commonly known as the “Turing test” to examine whether a machine is able to respond convincingly to an input with an output similar to a human’s.48 The contemporary equivalent, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), turns this idea around, so that the software has to decide whether it is dealing with a human or a script.49 Perhaps it is the lack of speech that makes this software appear crude by comparison, as human intelligence continues to be associated with speech as a marker of reasoned semantic processing.

See Wikipedia entry, available at http://en.wikipedia.org/wiki/Wolfgang_von_Kempelen%27s_Speaking_Machine.
43. Rée, I See a Voice, 258.
44. Ong, Orality and Literacy, 86.
45. See http://www.omniglot.com/writing/korean.htm.
46. Rée, I See a Voice, 262.
47. George Bernard Shaw, Pygmalion (1916). Also see Ovid’s Metamorphoses, book X.
48. Alan Turing, “Computing Machinery and Intelligence” (1950), in Noah Wardrip-Fruin and Nick Montfort, eds., The New Media Reader (Cambridge, MA: MIT Press, 2003), 49–64.
49. See http://www.captcha.net/.
50. John R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417.
51. Ibid., 418.
52.

pages: 333 words: 64,581

Clean Agile: Back to Basics
by Robert C. Martin
Published 13 Oct 2019

I imagine the first steam engine, the first mill, the first internal combustion engine, and the first airplane were produced by techniques that we would now call Agile. The reason for that is that taking small measured steps is just too natural and human for it to have happened any other way. So when did Agile begin in software? I wish I could have been a fly on the wall when Alan Turing was writing his 1936 paper.1 My guess is that the many “programs” he wrote in that paper were developed in small steps with plenty of desk checking. I also imagine that the first code he wrote for the Automatic Computing Engine, in 1946, was written in small steps, with lots of desk checking, and even some real testing. 1.

On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42(1) (published 1937): 230–65. The best way to understand this paper is to read Charles Petzold’s masterpiece: Petzold, C. 2008. The Annotated Turing: A Guided Tour through Alan Turing’s Historic Paper on Computability and the Turing Machine. Indianapolis, IN: Wiley. The early days of software are loaded with examples of behavior that we would now describe as Agile. For example, the programmers who wrote the control software for the Mercury space capsule worked in half-day steps that were punctuated by unit tests.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

But a key breakthrough came in 1938, when Claude Shannon, then a master’s student at the Massachusetts Institute of Technology, realised electronic circuits could be built to utilise Boolean logic – with on and off representing 1 and 0. It was a transformative discovery, which paved the way for computers built using electronic components. The first programmable, electronic, digital computer would famously be used by a team of Allied codebreakers, including Alan Turing, during World War Two. Two years after the end of the war, scientists at Bell Labs developed the transistor – a type of semiconductor, a material that partly conducts electricity and partly doesn’t. You could build useful switches out of semiconductors. These in turn could be used to build ‘logic gates’ – devices that could do elementary logic calculations.
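Shannon's insight, that on/off switches can compute Boolean logic, is easy to sketch. In the illustrative Python below, a NAND function stands in for a single switching element (a simplification of a real transistor circuit), and every other gate, and even one bit of binary addition, is composed from it:

```python
def nand(a, b):
    """One switching element: output is low only when both inputs are high."""
    return 0 if (a and b) else 1

# NAND is "universal": every other logic gate can be built from it alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# Half-adder: one bit of binary arithmetic from pure switching logic.
def half_add(a, b):
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_add(1, 1))  # → (0, 1), i.e. 1 + 1 = binary 10
```

Chaining such adders gives arithmetic on numbers of any width, which is exactly the step from Boolean circuits to a computer that the passage describes.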

And since then, it has transformed all of our lives again. When you pick up your smartphone, you hold a device with several chips and billions of transistors. Computers – once limited to the realms of the military or scientific research – have become quotidian. Think of the first electronic computer, executing Alan Turing’s codebreaking algorithms in Bletchley Park in 1945. A decade later, there were still only 264 computers in the world, many costing tens of thousands of dollars a month to rent.8 Six decades on, there are more than 5 billion computers in use – including smartphones, the supercomputers in our pockets.

That’s a Problem’, Business Insider, 10 October 2020 <https://www.businessinsider.com/tiktok-ban-hearings-politicians-senators-know-nothing-about-tech-2020-10> [accessed 12 April 2021].
9. Charles Percy Snow, The Two Cultures (Cambridge University Press, 2012).

CHAPTER 1: THE HARBINGER

1. In other words, it was a Turing Machine – so named after British mathematician Alan Turing, who devised much of the theory behind computer science. Turing’s tragic death in 1954 meant he never had access to a computer as generally capable as the ZX81, with its 1,024 bytes of memory storage capable of crunching through a superhuman half a million instructions per second.
2. G. E. Moore, ‘Cramming More Components onto Integrated Circuits’, Proceedings of the IEEE, 86(1), 1998 (a reprint of the 1965 Electronics article), pp. 82–85 <https://doi.org/10.1109/JPROC.1998.658762>.
3. Newton’s laws work at the scale of the everyday and in what is known as ‘inertial reference frames’.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

The fact that I am a woman ripples throughout the millions of data points in any and all information collected about me. AI sees patterns in vast amounts of data. By their very nature, algorithms are opaque, in the sense that even if you know how to read code, you still wouldn’t be able to know what an algorithm will do without putting it into action—the oft-cited black box problem. Alan Turing, the father of modern computer science, said that a key feature of a learning machine is that the human “teacher” is largely ignorant of what is going on inside the “pupil”—the machine. This feature is becoming more and more true about algorithms. Even their designers cannot fully comprehend the processes that happen in the more sophisticated algorithms.

Our algorithms are good at pattern recognition, but they do not yet think for themselves; we are nowhere close to that point. As Stanford University professor Fei-Fei Li said in her testimony to the U.S. Congress about the state of AI, “There’s nothing artificial about AI. It’s inspired by people, and most importantly it impacts people.”16 In 1950, Alan Turing asked whether it would be possible one day to create a computer with consciousness. To describe consciousness, Turing listed what he believed to capture the essence of humans: “Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience.”17 Turing didn’t quite answer his own question.

Fortunately, in December 2021, a year after her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, a research organization funded by the Ford Foundation, the MacArthur Foundation, the Kapor Center, and the Open Society Foundations. DAIR joins other impactful non-profit organizations devoted to AI fairness, including the Algorithmic Justice League, AI for Good, the Data & Society Research Institute, the Alan Turing Institute, the Center for the Governance of AI housed at Oxford University, the Ethics of AI Lab at the University of Toronto, the Human-Centered Artificial Intelligence Institute at Stanford, the Berkman Klein Center for Internet and Society at Harvard, and the AI NOW Institute at NYU. At the same time, Haugen’s and Gebru’s stories also demonstrate the importance of ethical leaders, particularly women and people of color, continuing to take positions as insiders within major corporations.

pages: 239 words: 56,531

The Secret War Between Downloading and Uploading: Tales of the Computer as Culture Machine
by Peter Lunenfeld
Published 31 Mar 2011

When simulation evades the trap of mimicking the worst traits of a medium, and makes the best characteristics and affordances of it available to ever-larger groups of people, then simulation and participation become linked in what economists and social scientists refer to as a virtuous cycle. Should this virtuous cycle produce mindful downloading and meaningful uploading, then the promise of the culture machine is fulfilled.

SIDEBAR: From Turing to Culture Machine

Computer science’s equivalent to the Nobel Prize is called the Turing Award—an indication of how central Alan Turing is to the dream of the culture machine. A towering figure in a generation of truly great mathematicians, Turing was an authentic Cambridge eccentric, a shy but committed freethinker. He was by nature a solitary person, but proved to be a great patriot when he helped England and its allies crack German codes during World War II.

—Vannevar Bush

People tend to overestimate what can be done in one year and underestimate what can be done in five to ten years.

—J.C.R. Licklider

GENERATIONS

There are many mathematicians, early computer scientists, and engineers who deserve to be considered part of the first generation of pioneering Patriarchs. They include Alan Turing, already discussed in chapter 2; mathematician and quantum theorist John von Neumann; cyberneticist Norbert Wiener; information theorist Claude Shannon; and computer architects like the German Konrad Zuse, and Americans J. Presper Eckert and John Mauchly, who developed ENIAC, the room-sized machine at the University of Pennsylvania that we recognize as the first general-purpose electronic computer.

pages: 237 words: 64,411

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence
by Jerry Kaplan
Published 3 Aug 2015

It will destroy existing jobs (taxi drivers, to name just one) and create new ones (commuter shared club-car concierges, for instance).12 And there are many, many other coming technologies with potentially comparable impact. That’s why I’m supremely confident that our future is very bright—if only we can figure out how to equitably distribute the benefits. Let’s look at another example of language shifting to accommodate new technology, this one predicted by Alan Turing. In 1950 he wrote a thoughtful essay called “Computing Machinery and Intelligence” that opens with the words “I propose to consider the question, ‘Can machines think?’” He goes on to define what he calls the “imitation game,” what we now know as the Turing Test. In the Turing Test, a computer attempts to fool a human judge into thinking it is human.

What might such concierges do? They could bring your coffee in the morning and have your favorite drink ready for your trip home, while you relax in one of perhaps four “captain’s chairs” in the van, complete with tray table and entertainment system, similar to a first-class airplane seat. 13. Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60, http://mind.oxfordjournals.org/content/LIX/236/433. 14. http://en.wikipedia.org/wiki/Loebner_Prize#Winners, last modified December 29, 2014. 15. Turing, “Computing Machinery and Intelligence,” 442. 16. Paul Miller, “iOS 5 includes Siri ‘Intelligent Assistant’ Voice-Control, Dictation—for iPhone 4S Only,” The Verge, October 4, 2011, http://www.theverge.com/2011/10/04/ios-5-assistant-voice-control-ai-features/. 17.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

We deliberately say “computation” and not “computers,” because for the purposes of this book (and perhaps even generally), the most important thing to know about theoretical computer science is that it views computation as a ubiquitous phenomenon, not one that is limited to technological artifacts. The scientific justification for this view originates with the staggeringly influential work of Alan Turing (the first theoretical computer scientist) in the 1930s, who demonstrated the universality of computational principles with his mathematical model now known as the Turing machine. Many trained in theoretical computer science, ourselves included, view the field and its tools not simply as another scientific discipline but as a way of seeing and understanding the world around us—perhaps much as those trained in theoretical physics in an earlier era saw their own field.

Most of these fears are premised on the idea that AI research will inevitably lead to superintelligent machines in a chain reaction that will happen much faster than humanity will have time to react to. This chain reaction, once it reaches some critical point, will lead to an “intelligence explosion” that could lead to an AI “singularity.” One of the earliest versions of this argument was summed up in 1965 by I. J. Good, a British mathematician who worked with Alan Turing: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.

pages: 960 words: 125,049

Mastering Ethereum: Building Smart Contracts and DApps
by Andreas M. Antonopoulos and Gavin Wood Ph. D.
Published 23 Dec 2018

Transaction
Data committed to the Ethereum Blockchain signed by an originating account, targeting a specific address. The transaction contains metadata such as the gas limit for that transaction.

Truffle
One of the most commonly used Ethereum development frameworks.

Turing complete
A concept named after English mathematician and computer scientist Alan Turing: a system of data-manipulation rules (such as a computer’s instruction set, a programming language, or a cellular automaton) is said to be “Turing complete” or “computationally universal” if it can be used to simulate any Turing machine.

Vitalik Buterin
A Russian–Canadian programmer and writer primarily known as the cofounder of Ethereum and of Bitcoin Magazine.

Further Reading

The following references provide additional information on the technologies mentioned here:

- The Ethereum Yellow Paper: https://ethereum.github.io/yellowpaper/paper.pdf
- The Beige Paper, a rewrite of the Yellow Paper for a broader audience in less formal language: https://github.com/chronaeon/beigepaper
- ÐΞVp2p network protocol: http://bit.ly/2quAlTE
- Ethereum Virtual Machine list of resources: http://bit.ly/2PmtjiS
- LevelDB database (used most often to store the local copy of the blockchain): http://leveldb.org
- Merkle Patricia trees: https://github.com/ethereum/wiki/wiki/Patricia-Tree
- Ethash PoW algorithm: https://github.com/ethereum/wiki/wiki/Ethash
- Casper PoS v1 Implementation Guide: http://bit.ly/2DyPr3l
- Go-Ethereum (Geth) client: https://geth.ethereum.org/
- Parity Ethereum client: https://parity.io/

Ethereum and Turing Completeness

As soon as you start reading about Ethereum, you will immediately encounter the term “Turing complete.” Ethereum, they say, unlike Bitcoin, is Turing complete. What exactly does that mean? The term refers to English mathematician Alan Turing, who is considered the father of computer science. In 1936 he created a mathematical model of a computer consisting of a state machine that manipulates symbols by reading and writing them on sequential memory (resembling an infinite-length paper tape). With this construct, Turing went on to provide a mathematical foundation to answer (in the negative) questions about universal computability, meaning whether all problems are solvable.

He proved that there are classes of problems that are uncomputable. Specifically, he proved that the halting problem (whether it is possible, given an arbitrary program and its input, to determine whether the program will eventually stop running) is not solvable. Alan Turing further defined a system to be Turing complete if it can be used to simulate any Turing machine. Such a system is called a Universal Turing machine (UTM). Ethereum’s ability to execute a stored program, in a state machine called the Ethereum Virtual Machine, while reading and writing data to memory makes it a Turing-complete system and therefore a UTM.
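The tape-and-state model described above is compact enough to simulate directly. The minimal Python sketch below uses my own illustrative conventions (rule-table format, `_` as the blank symbol, a step limit as a crude guard against non-halting machines); it is not anything from the Ethereum specification.

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10_000):
    """Run a Turing machine until it reaches the 'halt' state.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is 'L' or 'R'. The tape is unbounded in both
    directions; '_' denotes a blank cell.
    """
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A three-rule machine that flips every bit, halting at the first blank:
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "10110"))  # → 01001
```

A Turing-complete system such as the EVM can, in principle, emulate any such rule table, which is what the "universal" in Universal Turing machine means; the step limit here plays a role loosely analogous to Ethereum's gas limit, cutting off programs that would otherwise run forever.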

pages: 626 words: 181,434

I Am a Strange Loop
by Douglas R. Hofstadter
Published 21 Feb 2011

Among the well-known authors who have most influenced my thinking on the interwoven topics of minds, brains, patterns, symbols, self-reference, and consciousness are, in some vague semblance of chronological order: Ernest Nagel, James R. Newman, Kurt Gödel, Martin Gardner, Raymond Smullyan, John Pfeiffer, Wilder Penfield, Patrick Suppes, David Hamburg, Albert Hastorf, M. C. Escher, Howard DeLong, Richard C. Jeffrey, Ray Hyman, Karen Horney, Mikhail Bongard, Alan Turing, Gregory Chaitin, Stanislaw Ulam, Leslie A. Hart, Roger Sperry, Jacques Monod, Raj Reddy, Victor Lesser, Marvin Minsky, Margaret Boden, Terry Winograd, Donald Norman, Eliot Hearst, Daniel Dennett, Stanislaw Lem, Richard Dawkins, Allen Wheelis, John Holland, Robert Axelrod, Gilles Fauconnier, Paolo Bozzi, Giuseppe Longo, Valentino Braitenberg, Derek Parfit, Daniel Kahneman, Anne Treisman, Mark Turner, and Jean Aitchison.

This, in essence, is what the computer revolution is all about: when a certain well-defined threshold — I’ll call it the “Gödel–Turing threshold” — is surpassed, then a computer can emulate any kind of machine. This is the meaning of the term “universal machine”, introduced in 1936 by the English mathematician and computer pioneer Alan Turing, and today we are intimately familiar with the basic idea, although most people don’t know the technical term or concept. We routinely download virtual machines from the Web that can convert our universal laptops into temporarily specialized devices for watching movies, listening to music, playing games, making cheap international phone calls, who knows what.

I bounce back and forth between my email program, my word processor, my Web browser, my photo displayer, and a dozen other “applications” that all live inside my computer. At any specific moment, most of these independent, dedicated machines are dormant, sleeping, waiting patiently (actually, unconsciously) to be awakened by my royal double-click and to jump obediently to life and do my bidding. Inspired by Gödel’s mapping of PM into itself, Alan Turing realized that the critical threshold for this kind of computational universality comes at exactly that point where a machine is flexible enough to read and correctly interpret a set of data that describe its own structure. At this crucial juncture, a machine can, in principle, explicitly watch how it does any particular task, step by step.

pages: 291 words: 77,596

Total Recall: How the E-Memory Revolution Will Change Everything
by Gordon Bell and Jim Gemmell
Published 15 Feb 2009

Their avatars have gotten better scores than humans in accuracy, sales performance, and customer satisfaction. Now the MyCyberTwin folks are intrigued by the idea of taking my own e-memories as input—there is enough of what I have said in e-mail, letters, chat, papers, and so forth, that one ought to be able to construct a pretty realistic Gordon Bell cyber twin. Alan Turing, a founding father of computer science, proposed the Turing test for determining a machine’s capability to demonstrate intelligence: A human judge has a conversation with a human and a machine, each of which tries to appear human. If the judge can’t tell which one is human, then the machine has passed the test.


Blindside: How to Anticipate Forcing Events and Wild Cards in Global Politics
by Francis Fukuyama
Published 27 Aug 2007

Although ENIAC was too late to help in designing the atomic bomb—the machine did not become operational until 1946—von Neumann was inspired nonetheless. After the war, he went on to pioneer what would now be called scientific supercomputing, designing machines and algorithms for weather forecasting and many other types of simulations. And, along with other pioneers such as Alan Turing, he began to pursue a vision of what would now be called artificial intelligence.7 Examples such as these suggest that successful technological foresight requires, at a minimum, a careful look at the technological challenges and opportunities facing society as a whole. And forecasters have certainly tried to do that.

Or to put it another way, the act of computation had become an abstraction embodied in what is now known as software. The history of information technology offers many other examples of invention-by-convergence. Among them:

—The modern concept of information and information processing was a synthesis of insights developed in the 1930s and 1940s by Alan Turing, Claude Shannon, Norbert Wiener, Warren McCulloch, Walter Pitts, and John von Neumann.12

—The hobbyists who sparked the personal computer revolution in the late 1970s were operating (consciously or not) in the context of ideas that had been around for a decade or more. There was the notion of interactive computing, for example, in which a computer would respond to the user’s input immediately (as opposed to generating a stack of fanfold printout hours later); this idea dated back to the Whirlwind project, an experiment in real-time computing that began at MIT in the 1940s.13 There were the twin notions of individually controlled computing (having a computer apparently under the control of a single user) and home computing (having a computer in your own house); both emerged in the 1960s from MIT’s Project MAC, an early experiment in time-sharing.14 And then there was the notion of a computer as an open system, meaning that a user could modify it, add to it, and upgrade it however he or she wanted; that practice was already standard in the minicomputer market, which was pioneered by the Digital Equipment Corporation in the 1960s.15

—The Internet as we know it today represents the convergence of (among other ideas) the notion of packet-switched networking from the 1960s;16 the notion of internetworking (as embodied in the TCP/IP protocol), which was developed in the 1970s to allow packets to pass between different networks;17 and the notion of hypertext—which, of course, goes back to Vannevar Bush’s article on the memex in 1945.
Part IV: What Could Be

12. Cassandra versus Pollyanna: A Debate between James Kurth and Gregg Easterbrook

James Kurth: I am an optimist about the current pessimism, but a pessimist overall.

pages: 285 words: 78,180

Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life
by J. Craig Venter
Published 16 Oct 2013

Among them was Hans Driesch (1867–1941), an eminent German embryologist who, because the intellectual problem of the formation of a body from a patternless single cell seemed to him otherwise insoluble, had turned to the idea of entelechy (from the Greek entelécheia), which requires a “soul,” “organizing field,” or “vital function” to animate the material ingredients of life. In 1952 the great British mathematician Alan Turing would show how a pattern could emerge in an embryo de novo.20 Likewise, the French philosopher Henri-Louis Bergson (1859–1941) posited an élan vital to overcome the resistance of inert matter in the formation of living bodies. Even today, although most serious scientists believe vitalism to be a concept long since disproven, some have not abandoned the notion that life is based on some mysterious force.

In 1929 the young Irish crystallographer John Desmond Bernal (1901–1971) imagined the possibility of machines with a lifelike ability to reproduce themselves, in a “post-biological future” he described in The World, the Flesh & the Devil: “To make life itself will be only a preliminary stage. The mere making of life would only be important if we intended to allow it to evolve of itself anew.” A logical recipe to create these complex mechanisms was developed in the next decade. In 1936 Alan Turing, the cryptographer and pioneer of artificial intelligence, described what has come to be known as a Turing machine, which is specified by a set of instructions written on a tape. Turing also defined a universal Turing machine, which can carry out any computation for which an instruction set can be written.
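The tape-and-instruction-table machine described in this excerpt can be sketched in a few lines of Python. This is an illustrative toy, not Turing's own formalism: the transition-table encoding, the blank symbol `"_"`, and the example "flipper" machine are all choices made here for demonstration.

```python
# Minimal Turing-machine sketch: a finite instruction table acting on a tape.
# The table maps (state, symbol) -> (symbol to write, head move, next state).

def run_turing_machine(table, tape, state="start", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, "_") for i in span).strip("_")

# Example table: walk right, flipping 0s and 1s, halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flipper, "1011"))  # -> "0100"
```

A universal machine, in this picture, is just such an interpreter whose instruction table is itself supplied as data on the tape.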

pages: 238 words: 77,730

Final Jeopardy: Man vs. Machine and the Quest to Know Everything
by Stephen Baker
Published 17 Feb 2011

In fact, the company stressed that Deep Blue did not represent AI, since it didn’t mimic human thinking. But the Deep Blue team made good on a decades-old promise. They taught a machine to win a game that was considered uniquely human. In this, they passed a chess version of the so-called Turing test, an intelligence exam for machines devised by Alan Turing, a pioneer in the field. If a human judge, Turing wrote, were to communicate with both a smart machine and another human, and that judge could not tell one from the other, the machine passed the test. In the limited realm of chess, Deep Blue aced the Turing test—even without engaging in what most of us would recognize as thought.

To build a more ambitious-thinking machine, some looked to the architecture of the human brain. Indeed, while Ferrucci was grappling with expert systems, other researchers were piecing together an altogether different species of program, called “neural networks.” The idea had been bouncing around at least since 1948, when Alan Turing outlined it in a paper called “Intelligent Machinery.” Like much of his thinking, Turing’s paper was largely theoretical. Computers in his day, with vacuum tubes switching the current on and off, were too primitive to handle such work. (He died in 1954, the year that Texas Instruments produced the first silicon transistor.)

pages: 252 words: 79,452

To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death
by Mark O'Connell
Published 28 Feb 2017

With these invocations, he moves his arms downward, then outward to either side, before clasping his hands to his chest. He turns about the room, bestowing a gesture of esoteric benediction on the four points of the compass, speaking in each of these positions the hallowed name of a prophet of the computer age: Alan Turing, John von Neumann, Charles Babbage, Ada Lovelace. Then he stands perfectly still, this priestly young man, arms outspread in a cruciform posture. “Around me shines the bits,” he says, “and in me is the bytes. The data, the code, the communications. Forever, amen.” This young man, I learned, was a Swedish academic named Anders Sandberg.

And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or subtract.” And there has always been a kind of feedback loop between the idea of the mind as a machine, and the idea of machines with minds. “I believe that by the end of the century,” wrote Alan Turing in 1950, “one will be able to speak of machines thinking without expecting to be contradicted.” As machines have grown in sophistication, and as artificial intelligence has come to occupy the imaginations of increasing numbers of computer scientists, the idea that the functions of the human mind might be simulated by computer algorithms has gained more and more momentum.

pages: 280 words: 76,638

Rebel Ideas: The Power of Diverse Thinking
by Matthew Syed
Published 9 Sep 2019

This was precisely the approach of Alistair Denniston, a diminutive Scot, when he was asked to head up the Bletchley Park operation. In 1939, he hired Alan Turing, then a twenty-seven-year-old Fellow at King’s College, Cambridge, who is widely considered among the greatest mathematicians of the twentieth century, and Peter Twinn, a twenty-three-year-old from Brasenose College, Oxford. Over time, more mathematicians and logicians would be added to the team. But Denniston, known as A. G. D. to his colleagues, had an important insight. He realised that solving a complex, multidimensional problem requires cognitive diversity. He needed a team of rebels, not a team of clones. A group of Alan Turings – even if such a group existed – could not have got the job done.

pages: 342 words: 72,927

Transport for Humans: Are We Nearly There Yet?
by Pete Dyson and Rory Sutherland
Published 15 Jan 2021

This book has covered ideas that might lead the way: tickets that embrace flexible travel; ways to make switching journey options easier; and improvements to connectivity so that people can work, socialize and gain utility while travelling. While change is urgent, it will take time for transport technologies to adapt to people. The kernel of this argument was expressed by polymath Alan Turing in 1947. After cracking the Enigma code and practically inventing computer science, Turing went on to lay out a vision for the future: ‘The machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.’ 6 Blessed with insights from behavioural science and observations from the past seventy years, we suspect machines need more quality contact with humans.

They tell us that national borders melt away; that they see a fragile ball hanging in the void and observe the atmosphere as a paper-thin shield. Now receiving dedicated study, this phenomenon is known as the overview effect – the cognitive shift of consciousness, awe and a planetary perspective that endures for years after an astronaut has returned to Earth.9 * * * 6 See Alan Turing’s lecture to the London Mathematical Society on 20 February 1947 in A. M. Turing’s ACE Report of 1946 and Other Papers. 1986. Charles Babbage Reprinting Series for the History of Computing, edited by B. E. Carpenter and B. W. Doran. Cambridge, MA: MIT Press. 7 A. Shimizu, I. Dohzono, M.

pages: 436 words: 76

Culture and Prosperity: The Truth About Markets - Why Some Nations Are Rich but Most Remain Poor
by John Kay
Published 24 May 2004

But Babbage's machine was designed to do arithmetic. What turned a calculator into a computer was the insight that a machine that can make long strings of calculations can do almost anything: write letters, check spelling, remember addresses, and turn on the central heating. This was first realized by Alan Turing, at the time a fellow of King's, the Cambridge University college that was also home to John Maynard Keynes. At the outbreak of World War II, Turing became a code-breaker at Bletchley Park, northwest of London. The group Turing joined, which represented the cream of British academic life, built the first operational computer.3 Turing spent eight years working for the British government.

Bill Gates made an important contribution to the personal computer industry, but his wealth would allow him an annuity of $3 billion per year for the rest of his life. Are his effort, talent, and skills really so exceptional? Do they justify an income many thousands of times greater than that of—say—Alan Turing? Would Gates have put in much less effort if the prospective reward had been, say, only $1 billion per year? If GDP would fall by $3 billion if Gates stayed at home, then we are all better off by paying him $3 billion to come to work. But it is not likely that this is true. We certainly don't know that it's true.

.: Harvard University Press. Hirschman, A. O. 1970. Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, Mass.: Harvard University Press. Hochschild, A. 1999. King Leopold's Ghost: A Story of Greed, Terror and Heroism in Colonial Africa. Boston: Houghton Mifflin. Hodges, A. 1992. Alan Turing: The Enigma. New York: Simon and Schuster. Hoffman, A. 2000. "Standardized Capital Stock Estimates in Latin America: A 1950–94 Update." Cambridge Journal of Economics 24: 45–86. Hosking, G. 1992. A History of the Soviet Union. London: Fontana. Howard, P. K. 2001. The Lost Art of Drawing the Line. New York: Random House.

pages: 416 words: 129,308

The One Device: The Secret History of the iPhone
by Brian Merchant
Published 19 Jun 2017

Primed by hundreds of years of fantasy and possibility, around the mid-twentieth century, once sufficient computing power was available, the scientific work investigating actual artificial intelligence began. With the resonant opening line “I propose to consider the question, ‘Can machines think?’” in his 1950 paper “Computing Machinery and Intelligence,” Alan Turing framed much of the debate to come. That work discusses his famous Imitation Game, now colloquially known as the Turing Test, which describes criteria for judging whether a machine may be considered sufficiently “intelligent.” Claude Shannon, the communication theorist, published his seminal work on information theory, introducing the concept of the bit as well as a language through which humans might speak to computers.

But from the seventeenth century to the mid-twentieth, when these computers helped the military calculate weapons trajectories or NASA map out flight plans, the term computers described working people. And not only laborers, but laborers who were mostly invisible, working to benefit a man or institution that would ultimately obscure their participation. In fact, the actual origin of computing as we know it today probably begins not with the likes of Charles Babbage or Alan Turing or Steve Jobs but with a French astronomer, Alexis Clairaut, who was trying to solve the three-body problem. So he enlisted two fellow astronomers to help him carry out the calculations, thus dividing up labor to more efficiently compute his equations. Two centuries later, six women computers programmed the ENIAC, one of the first bona fide computing machines, but they were not invited to its public unveiling at the University of Pennsylvania nor mentioned at the event.

Hey, Siri The backbone of the Siri chapter is a lengthy interview conducted with Tom Gruber, Apple’s head of advanced development for Siri. Artificial intelligence is obviously a loaded topic—I attempted to approach it through the lens of what Siri actually does, or tries to do. The first stop on any AI reading list is Alan Turing’s classic “Computing Machinery and Intelligence.” Additional research concerned the Hearsay II papers. The Oral History Collection at the Charles Babbage Institute is a great resource, and the interview conducted with Raj Reddy is no different; it provides a fascinating look at the life of one of the first AI pioneers.

Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing (Writing Science)
by Thierry Bardini
Published 1 Dec 2000

By 1955, though, a relatively old idea, the use of meta programs called assemblers and compilers, which worked as translators between some sort of more natural human language and machine code, finally became an idea whose time had come. The computer pioneers had thought of just such a possibility. Alan Turing, for instance, had developed a "symbolic language" close to an assembly language and had even written that "actually one could communicate with these machines in any language provided it was an exact language, i.e. in principle one should be able to communicate in any symbolic logic, provided that the machines were given instruction tables which would enable it to interpret that logical system" (quoted in Hodges 1992 [1983], 358).

First, the very use of the word "boundary" in this context is itself metaphorical:7 it suggests that there is a "space" where the processes of the mind and the processes of the machine are in contact, a line where one cannot be distinguished from the other except by convention – the sort of line usually drawn after a war, if one follows the lessons of human history.8 Second, to talk about the point of contact between human and computer intelligence at this specific time, the end of the twentieth century, has to be metaphorical because direct perception by sight, sound, or touch is still enough to know absolutely that humans and machines are different things with no apparent point of contact. Since the early days of computer science, however, the most common test to decide whether a computer can be considered an analog to a human being is the Turing Test, Alan Turing's variation on the imitation game whose experimental setting makes sure that there cannot be a direct perception (Turing 1950). In it, an interrogator sitting at a terminal who cannot see the recipients of his questions, one a human and one a machine, is asked to decide within a given span of time which one is a machine by means of their respective responses.

John von Neumann and Norbert Wiener: From Mathematics to Technologies of Life and Death. Cambridge, Mass.: MIT Press. . 1991. The Cybernetics Group. Cambridge, Mass.: MIT Press. Herkimer County Historical Society. 1923. The Story of the Typewriter, 1873–1923. Herkimer, N.Y. Hodges, A. 1992 [1983]. Alan Turing: The Enigma. London: Vintage. Hofstadter, R. 1962. Anti-Intellectualism in American Life. New York: Vintage Books. Hutchins, E. L., J. D. Hollan, and D. A. Norman. 1986. "Direct Manipulation Interfaces." In User-Centered System Design: New Perspectives on Human-Computer Interaction, edited by D.

pages: 588 words: 131,025

The Patient Will See You Now: The Future of Medicine Is in Your Hands
by Eric Topol
Published 6 Jan 2015

Schumpeter: A Theory of Social and Economic Evolution (New York, NY: Palgrave Macmillan, 2011). 2. R. Smith, “Teaching Medical Students Online Consultation with Patients,” BMJ Blogs, February 14, 2014, http://blogs.bmj.com/bmj/2014/02/14/richard-smith-teaching-medical-students-online-consultation-with-patients/. 3. “Alan Turing,” Wikiquote, accessed August 13, 2014, http://en.wikiquote.org/wiki/Alan_Turing. 4. “Ignaz Semmelweis,” Wikipedia, accessed August 13, 2014, http://en.wikipedia.org/wiki/Ignaz_Semmelweis. 5. B. Ewigman et al., “Ethics and Routine Ultrasonography in Pregnancy,” American Journal of Obstetrics & Gynecology 163, no. 1 (1990): 256–257. 6.

These exemplify the use of artificial intelligence for differential diagnoses and treatments in medicine. But again they do not represent prediction. Then let’s make sure it is clear that collecting oodles of data doesn’t mean you are going to be able to predict something meaningful. At the time of Alan Turing’s one-hundredth birthday, Science ran a number of articles, including one about “a home fully equipped with cameras and audio equipment [that] continuously recorded the life of an infant from birth to age three, amounting to ~200,000 hours of audio and video recordings, representing 85% of the child’s waking experience.”73 OK, that’s a triumph in data collection, but certainly not with the intent of or any likelihood of predicting illness.

pages: 511 words: 139,108

The Fabric of Reality
by David Deutsch
Published 31 Mar 2012

The basic idea of the proof - known as a diagonal argument - predates the idea of virtual reality. It was first used by the nineteenth-century mathematician Georg Cantor to prove that there are infinite quantities greater than the infinity of the natural numbers (1, 2, 3...). The same form of proof is at the heart of the modern theory of computation developed by Alan Turing and others in the 1930s. It was also used by Kurt Gödel to prove his celebrated 'incompleteness theorem', of which more in Chapter 10. Each environment in our machine's repertoire is generated by some program for its computer. Imagine the set of all valid programs for this computer. From a physical point of view, each such program specifies a particular set of values for physical variables, on the disks or other media, that represent the computer's program. We know from quantum theory that all such variables are quantized, and therefore that, no matter how the computer works, the set of possible programs is discrete.
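Cantor's diagonal argument, as invoked above, can be illustrated concretely. The sketch below uses a finite list of binary sequences for illustration (each represented, by choice here, as a function from index to bit); the anti-diagonal flips the n-th bit of the n-th sequence, so it cannot equal any sequence in the list.

```python
# Diagonal-argument sketch: given a list of binary sequences (functions from
# index -> bit), build a sequence that differs from the n-th listed sequence
# at position n, so it appears nowhere in the list.

def diagonal(sequences):
    """Return the anti-diagonal of the listed sequences, as a list of bits."""
    return [1 - sequences[n](n) for n in range(len(sequences))]

listed = [
    lambda i: 0,      # 0 0 0 0 ...
    lambda i: 1,      # 1 1 1 1 ...
    lambda i: i % 2,  # 0 1 0 1 ...
]

d = diagonal(listed)
# d differs from sequence n at index n, for every n:
assert all(d[n] != listed[n](n) for n in range(len(listed)))
print(d)  # -> [1, 0, 1]
```

Cantor's point is that this construction works against *any* proposed enumeration, finite or infinite, which is why the real numbers (and, in the text's application, the set of all environments) cannot be put in one-to-one correspondence with the natural numbers.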

But instead of trying to deduce their results from physical laws, mathematicians postulated abstract models of 'computation', and defined 'calculation' and 'proof' in terms of those models. (I shall discuss this interesting mistake in Chapter 10.) That is how it came about that over a period of a few months in 1936, three mathematicians, Emil Post, Alonzo Church and, most importantly, Alan Turing, independently created the first abstract designs for universal computers. Each of them conjectured that his model of 'computation' did indeed correctly formalize the traditional, intuitive notion of mathematical 'computation'. Consequently, each of them also conjectured that his model was equivalent to (had the same repertoire as) any other reasonable formalization of the same intuition.

Dennis Sciama, The Unity of the Universe, Faber & Faber, 1967. Ian Stewart, Does God Play Dice? The Mathematics of Chaos, Basil Blackwell, 1989; Penguin Books, 1990. L. J. Stockmeyer and A. K. Chandra, 'Intrinsically Difficult Problems', Scientific American, May 1979. Frank Tipler, The Physics of Immortality, Doubleday, 1995. Alan Turing, 'Computing Machinery and Intelligence', Mind, October 1950. [Reprinted in The Mind's I, edited by Douglas Hofstadter and Daniel C. Dennett, Harvester, 1981.] Steven Weinberg, Gravitation and Cosmology, John Wiley, 1972. Steven Weinberg, The First Three Minutes, Basic Books, 1977. Steven Weinberg, Dreams of a Final Theory, Vintage, 1993, Random, 1994.

pages: 502 words: 132,062

Ways of Being: Beyond Human Intelligence
by James Bridle
Published 6 Apr 2022

An intelligence which operates at the same level, and in much the same manner, as human intelligence. This error infects all our reckonings with artificial intelligence. For example, despite never being used by serious AI researchers, the Turing Test remains the most widely understood way of thinking about the capabilities of AI in the public consciousness. It was proposed by Alan Turing in a 1950 paper, ‘Computing Machinery and Intelligence’. Turing thought that instead of questioning whether computers were truly intelligent, we could at least establish that they appeared intelligent. Turing called his method for doing this ‘the imitation game’: he imagined a set-up in which an interviewer interrogated two hidden interlocutors – one human, one machine – and tried to tell which was which.

One of the most interesting of those branches is to be found budding on the eve of the Second World War, at the very moment the modern computer was conceived. The kind of computer I am using – that we are all using – is based on something called a Turing machine. This is the model of a computer described theoretically by Alan Turing in 1936. It’s what’s called an ideal machine – ideal as in imaginary, but not necessarily perfect. The Turing machine was a thought experiment, but because it came to form the basis for all future forms of computation, it also altered the way we think. Turing’s imaginary machine consisted of a long strip of paper and a tool for reading and writing onto it, like a tape recorder.

Turing, ‘On Computable Numbers, With an Application to the Entscheidungsproblem’ (1936), Proceedings of the London Mathematical Society, Series 2, 42, 1937, pp. 230–65; DOI:10.1112/plms/s2-42.1.230. 3. A. M. Turing, ‘Systems of Logic Based on Ordinals’, Proceedings of the London Mathematical Society, Series 2, 45, 1939, pp. 161–228. 4. B. Jack Copeland and Diane Proudfoot, ‘Alan Turing’s Forgotten Ideas in Computer Science’, Scientific American, 280(4), April 1999, pp. 98–103; DOI: 10.1038/scientificamerican0499-98. 5. For a full account of the wiring and behaviour of the tortoises – including the Carroll quote – see W. Grey Walter, ‘An Imitation of Life’, Scientific American, May 1950.

pages: 72 words: 21,361

Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy
by Erik Brynjolfsson
Published 23 Jan 2012

Illini starter Will Strack struggled, allowing five runs in six innings, but the bullpen allowed only no runs and the offense banged out 17 hits to pick up the slack and secure the victory for the Illini. The difference between the automatic generation of formulaic prose and genuine insight is still significant, as the history of a 60-year-old test makes clear. The mathematician and computer science pioneer Alan Turing considered the question of whether machines could think “too meaningless to deserve discussion,” but in 1950 he proposed a test to determine how humanlike a machine could become. The “Turing test” involves a test group of people having online chats with two entities, a human and a computer. If the members of the test group can’t in general tell which entity is the machine, then the machine passes the test.

pages: 287 words: 86,919

Protocol: how control exists after decentralization
by Alexander R. Galloway
Published 1 Apr 2004

The provocative but tantalizingly thin Pandemonium: The Rise of Predatory Locales in the Postwar World from architect Branden Hookway, looks at how cybernetic bodies permeate twentieth-century life. Other important theorists from the field of computer and media studies who have influenced me include Vannevar Bush, Hans Magnus Enzensberger, Marshall McLuhan, Lewis Mumford, and Alan Turing. I am also inspired by Lovink’s new school of media theory known as Net criticism. This loose international grouping of critics and practitioners has grown up with the Internet and includes the pioneering work of Hakim Bey Introduction 18 and Critical Art Ensemble, as well as newer material from Timothy Druckrey, Marina Gržinić, Lev Manovich, Sadie Plant, and many others.

There are many different types of hardware: controllers (keyboards, joysticks), virtualization apparatuses (computer monitors, displays, virtual reality hardware), the interface itself (i.e., the confluence of the controller and the virtualization apparatus), the motherboard, and physical networks both intra (a computer’s own guts) and inter (an Ethernet LAN, the Internet). However, the niceties of hardware design are less important than the immaterial software existing within it. For, as Alan Turing demonstrated at the dawn of the computer age, the important characteristic of a computer is that it can mimic any machine, any piece of hardware, provided that the functionality of that hardware can be broken down into logical processes. Thus, the key to protocol’s formal relations is in the realm of the immaterial software.

pages: 292 words: 88,319

The Infinite Book: A Short Guide to the Boundless, Timeless and Endless
by John D. Barrow
Published 1 Aug 2005

An infinite number of printings would have been made after one minute has elapsed. Fig 10.3 The beginning of the infinite decimal expansion of the number π. If the sequence is exhaustively random then all possible sequences of numbers will eventually arise in this infinite list. If this process could be implemented, then even more astonishing things could be achieved. Alan Turing, pioneer of computing, showed that there exist mathematical operations which cannot be carried out by any computer in a finite number of computational steps. They are called uncomputable operations and their existence is closely associated with the famous incompleteness theorem of Kurt Gödel, which teaches us that there exist statements of arithmetic that we can never prove to be true or false by using the rules of arithmetic.
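Turing's uncomputability result mentioned above rests on the same diagonal trick as Cantor's. The following is a hedged sketch of the classic self-reference argument, not runnable as a real decider: assume a total, correct predicate `halts(f)` existed; the program built below would then halt exactly when the decider says it does not, so no such decider can exist. All names are illustrative, and the infinite loop is never actually executed.

```python
# Sketch of the halting-problem argument (illustrative names).
# Suppose halts(f) were a total, correct predicate: True iff f() halts.

def paradox_from(halts):
    """Build the self-referential program that defeats the decider `halts`."""
    def paradox():
        if halts(paradox):
            while True:   # decider said "halts" -> loop forever
                pass
        # decider said "loops forever" -> halt immediately
    return paradox

# Whatever answer a decider gives about its own paradox program,
# the program does the opposite, so the answer is wrong:
def decider_is_wrong(answer_about_paradox):
    actual_behaviour_halts = not answer_about_paradox
    return answer_about_paradox != actual_behaviour_halts

assert decider_is_wrong(True) and decider_is_wrong(False)
print("every candidate halting decider fails on its own paradox program")
```

Since both possible answers are refuted, the assumed decider cannot exist; this is the sense in which some operations are uncomputable in any finite number of steps.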

. = S – ½ since the right-hand side is just the series without its first term, which is ½. Hence, ½S = ½ and S = 1. 5. H. Weyl, Philosophy of Mathematics and Natural Science, Princeton University Press, 1949, p. 42. Weyl’s mention of decision procedures and machines is interesting. Mathematics had just emerged from a pre-war period which saw the advent in print of Alan Turing’s ‘Turing machine’, the archetypal universal computer that is indistinguishable from a human calculator (the original meaning of the word ‘computer’) and the question, answered in the negative by Turing, of whether a finite computing machine would be able to decide the truth or falsity of all statements of mathematics in a finite time.
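The footnote's series manipulation is easy to check numerically: the n-term partial sum of ½ + ¼ + ⅛ + … equals 1 − 2⁻ⁿ, which approaches 1. A minimal check (the helper name is my own):

```python
# Numerical check of the footnote's claim that S = 1/2 + 1/4 + 1/8 + ... = 1.
# The n-term partial sum is 1 - 2**-n, so the sums approach 1.

def partial_sum(n):
    return sum(2.0 ** -k for k in range(1, n + 1))

for n in (1, 2, 10, 50):
    print(n, partial_sum(n))
```

After 50 terms the sum already agrees with 1 to within floating-point precision, matching the algebraic argument ½S = S − ½.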

pages: 253 words: 84,238

A Thousand Brains: A New Theory of Intelligence
by Jeff Hawkins
Published 15 Nov 2021

To create numeric tables or to decode encrypted messages, dozens of human computers would do the necessary calculations by hand. The very first electronic computers were designed to replace human computers for a specific task. For example, the best automated solution for message decryption was a machine that only decrypted messages. Computing pioneers such as Alan Turing argued that we should build “universal” computers: electronic machines that could be programmed to do any task. However, at that time, no one knew the best way to build such a computer. There was a transitionary period where computers were built in many different forms. There were computers designed for specific tasks.

Machine intelligence will undergo a similar transition. Today, most AI scientists focus on getting machines to do things that humans can do—from recognizing spoken words, to labeling pictures, to driving cars. The notion that the goal of AI is to mimic humans is epitomized by the famous “Turing test.” Originally proposed by Alan Turing as the “imitation game,” the Turing test states that if a person can’t tell if they are conversing with a computer or a human, then the computer should be considered intelligent. Unfortunately, this focus on human-like ability as a metric for intelligence has done more harm than good. Our excitement about tasks such as getting a computer to play Go has distracted us from imagining the ultimate impact of intelligent machines.

God Created the Integers: The Mathematical Breakthroughs That Changed History
by Stephen Hawking
Published 28 Mar 2007

Selections from Henri Lebesgue’s Intégrale, Longueur, Aire reprinted from Annali di Matematica, Pura ed Applicata, 1902, Ser. 3, vol. 7, pp. 231–359. Kurt Gödel’s On Formally Undecidable Propositions of Principia Mathematica and Related Systems, trans. B. Meltzer, courtesy of Dover Publications. Alan Turing’s On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, courtesy of the London Mathematical Society. Picture Credits: Euclid: Getty Images. Archimedes: Getty Images. Diophantus: Title page of Diophanti Alexandrini Arithmeticorum libri sex. . . ., 1621: Library of Congress, call number QA31.D5, Rare Book/Special Collections Reading Room, (Jefferson LJ239).

Henri Lebesgue: Frontispiece from Henri Lebesgue Oeuvres Scientifiques, volume I. Reproduced by permission of L’Enseignement Mathématique, Universite De Geneve, Switzerland. Photograph provided by the Library of Congress, call number QA3.L27, vol. 1, copy 1. Kurt Gödel: Time Life Pictures/Getty Images. Alan Turing: Photo provided by King’s College Archive Centre, Cambridge, UK, AMT/K/7/9. Contact the Archive Centre for copyright information. CONTENTS Introduction EUCLID (C. 325BC–265BC) His Life and Work Selections from Euclid’s Elements Book I: Basic Geometry—Definitions, Postulates, Common Notions; and Proposition 47, (leading up to the Pythagorean Theorem) Book V: The Eudoxian Theory of Proportion—Definitions & Propositions Book VII: Elementary Number Theory—Definitions & Propositions Book IX: Proposition 20: The Infinitude of Prime Numbers Book IX: Proposition 36: Even Perfect Numbers Book X: Commensurable and Incommensurable Magnitudes ARCHIMEDES (287BC–212BC) His Life and Work Selections from The Works of Archimedes On the Sphere and Cylinder, Books I and II Measurement of a Circle The Sand Reckoner The Methods DIOPHANTUS (C. 
200–284) His Life and Work Selections from Diophantus of Alexandria, A Study in the History of Greek Algebra Book II Problems 8–35 Book III Problems 5–21 Book V Problems 1–29 RENÉ DESCARTES (1596–1650) His Life and Work The Geometry of Rene Descartes ISAAC NEWTON (1642–1727) His Life and Work Selections from Principia On First and Last Ratios of Quantities LEONHARD EULER (1707–1783) His Life and Work On the sums of series of reciprocals (De summis serierum reciprocarum) The Seven Bridges of Konigsberg Proof that Every Integer is A Sum of Four Squares PIERRE SIMON LAPLACE (1749–1827) His Life and Work A Philosophical Essay on Probabilities JEAN BAPTISTE JOSEPH FOURIER (1768–1830) His Life and Work Selection from The Analytical Theory of Heat Chapter III: Propagation of Heat in an Infinite Rectangular Solid (The Fourier series) CARL FRIEDRICH GAUSS (1777–1855) His Life and Work Selections from Disquisitiones Arithmeticae (Arithmetic Disquisitions) Section III Residues of Powers Section IV Congruences of the Second Degree AUGUSTIN-LOUIS CAUCHY (1789–1857) His Life and Work Selections from Oeuvres complètes d’Augustin Cauchy Résumé des leçons données à l’École Royale Polytechnique sur le calcul infinitésimal (1823), series 2, vol. 
4 Lessons 3–4 on differential calculus Lessons 21–24 on the integral NIKOLAI IVANOVICH LOBACHEVSKY (1792–1856) His Life and Work Geometrical Researches on the Theory of Parallels JÁNOS BOLYAI (1802–1860) His Life and Work The Science of Absolute Space ÉVARISTE GALOIS (1811–1832) His Life and Work On the conditions that an equation be soluble by radicals Of the primitive equations which are soluble by radicals On Groups and Equations and Abelian Integrals GEORGE BOOLE (1815–1864) His Life and Work An Investigation of the Laws of Thought BERNHARD RIEMANN (1826–1866) His Life and Work On the Representability of a Function by Means of a Trigonometric Series (Ueber die Darstellbarkeit eine Function durch einer trigonometrische Reihe) On the Hypotheses which lie at the Bases of Geometry (Ueber die Hypothesen, welche der Geometrie zu Grunde liegen) On the Number of Prime Numbers Less than a Given Quantity (Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse) KARL WEIERSTRASS (1815–1897) His Life and Work Selected Chapters on the Theory of Functions, Lecture Given in Berlin in 1886, with the Inaugural Academic Speech, Berlin 1857 § 7 Gleichmässige Stetigkeit (Uniform Continuity) RICHARD DEDEKIND (1831–1916) His Life and Work Essays on the Theory of Numbers GEORG CANTOR (1848–1918) His Life and Work Selections from Contributions to the Founding of the Theory of Transfinite Numbers Articles I and II HENRI LEBESGUE (1875–1941) His Life and Work Selections from Intégrale, Longueur, Aire (Integral, Length, Area) Preliminaries and Integral KURT GÖDEL (1906–1978) His Life and Work On Formally Undecidable Propositions of Principia Mathematica and Related Systems ALAN TURING (1912–1954) His Life and Work On computable numbers with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society INTRODUCTION WE ARE LUCKY TO LIVE IN AN AGE IN WHICH WE ARE STILL MAKING DISCOVERIES.

IT IS LIKE THE DISCOVERY OF AMERICA - YOU ONLY DISCOVER IT ONCE.

pages: 286 words: 90,530

Richard Dawkins: How a Scientist Changed the Way We Think
by Alan Grafen; Mark Ridley
Published 1 Jan 2006

I am not concerned here with the psychology of motives.’14 Dawkins’ brilliant application of mentalistic behaviorism—what I call the intentional stance—to evolutionary biology was, like my own coinage, an articulation of ideas that were already proving themselves in the work of many other theorists. We are both clarifiers and unifiers of practices and attitudes pioneered by others, and we share a pantheon: Alan Turing and John von Neumann on the one hand, and Bill Hamilton, John Maynard Smith, George Williams, and Bob Trivers on the other. We see computer science and evolutionary theory fitting together in excellent harmony; it’s algorithms all the way down. Dawkins and I have both had to defend our perspective against those who cannot fathom—or abide—this strategic approach to such deep matters.

This recent burst of high-profile activity might suggest that computer scientists have only just begun to work on biological questions, but activity at this particular disciplinary interface is by no means new. In fact, it has an extremely long history involving the most famous early pioneers of computing, cybernetics, and artificial intelligence. In the 1950s, Alan Turing, the ‘father of artificial intelligence’ and a man fundamentally associated with codes, logic, chess, and other mechanico-mathematical arcana, developed influential models of biological morphogenesis:1 the processes involved in the development of biological patterns as an organism grows from a single cell.

pages: 313 words: 91,098

The Knowledge Illusion
by Steven Sloman
Published 10 Feb 2017

As great mathematical minds like John von Neumann and Alan Turing developed the foundations of computing as we know it, the question arose whether the human mind works in the same way. Computers have an operating system that is run by a central processor that reads and writes to a digital memory using a small set of rules. Early cognitive scientists ran with the idea that the mind does too. The computer served as a metaphor that governed how the business of cognitive science was done. Thinking was assumed to be a kind of computer program that runs in people’s brains. One of Alan Turing’s claims to fame is that he took this idea to its logical extreme.
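The "central processor reading and writing memory with a small set of rules" picture that early cognitive scientists borrowed can be sketched as a toy fetch-execute loop. This is a hypothetical three-instruction machine, invented purely for illustration (not any real architecture):

```python
def run(program, memory):
    """Toy fetch-execute loop: a processor stepping through a program,
    reading and writing memory with a small, fixed set of rules."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":    # LOAD addr, value: write a constant to memory
            memory[args[0]] = args[1]
        elif op == "ADD":   # ADD dst, a, b: memory[dst] = memory[a] + memory[b]
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        elif op == "JNZ":   # JNZ addr, target: jump if memory[addr] is nonzero
            if memory[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return memory

# Compute 2 + 3 using only the three rules above.
mem = run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 2, 0, 1)], {})
assert mem[2] == 5
```

The point of the metaphor is that such a minimal rule set, iterated, suffices for arbitrarily complex behavior, which is what made it tempting as a model of thought.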

pages: 826 words: 231,966

GCHQ
by Richard Aldrich
Published 10 Jun 2010

The importance of decrypted German communications – known as ‘the Ultra secret’ – to Britain’s victory over the Axis is universally recognised. Winston Churchill’s wartime addiction to his daily supply of ‘Ultra’ intelligence, derived from supposedly impenetrable German cypher machines such as ‘Enigma’, is legendary. The mathematical triumphs of brilliant figures such as Alan Turing are a central part of the story of Allied success in the Second World War. The astonishing achievement of signals intelligence allowed Allied prime ministers and presidents to see into the minds of their Axis enemies. Thanks to ‘sigint’ we too can now read about the futile attempts of Japanese leaders to seek a favourable armistice in August 1945, even as the last screws were being tightened on the atomic bombs destined for Hiroshima and Nagasaki.3 However, shortly after VJ-Day, something rather odd happens.

Neither Alastair Denniston nor his deputy, Edward Travis, had the pull in Whitehall to overcome the shortage.35 Churchill was not ignorant of this state of affairs for long. Recalling the Prime Minister’s kind words during his recent visit, the code-breakers resolved to go straight to the top. On 21 October 1941, four of the most brilliant minds at Bletchley Park, Hugh Alexander, Stuart Milner-Barry, Alan Turing and Gordon Welchman, wrote directly to Churchill to beg for more resources, explaining that their work was so secret that it was hard to explain their requirements to those who controlled personnel.36 So secret was their missive that Milner-Barry took the train to London and delivered it personally to 10 Downing Street.

For example, it had long helped to steer policy on the teaching of languages like Chinese in British universities.28 More importantly, it had a role in the development of British computing. Code-breaking had driven important breakthroughs in computing both during and after the Second World War, led by luminary figures such as Alan Turing. The most famous example is ‘Colossus’, which was used to attack ‘Tunny’, the encyphered teleprinter used by the German High Command. Ten examples of Colossus II were in operation by the end of the war. Other early computers called ‘Robinson’ and ‘Aquarius’ were no less innovative. Both Robinson and Colossus were designed and built at the Post Office Research Station at Dollis Hill by the celebrated Tommy Flowers, now recognised as one of the most enterprising scientists Britain produced during the war.

pages: 1,201 words: 233,519

Coders at Work
by Peter Seibel
Published 22 Jun 2009

When Alan Turing wrote its first programming manual in 1950 he remarked that bitwise not can be obtained by using exclusive or in combination with a row of ones.” Now, in my sentence I'm saying, “Alan Turing wrote its first programming manual,” meaning the first programming manual for the Manchester Mark 1. But four or five readers independently said, I must have meant “his”: “When Alan Turing wrote his first programming manual in 1950”. Well, actually, he had written other programming manuals, so what I said was correct but it was misinterpreted by people. So now I say, “When Alan Turing wrote the first programming manual for the Mark I, in 1950. …” Mathematical things: similarly I'll get people who miss it.
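The identity Knuth attributes to Turing still holds on modern fixed-width integers: exclusive-or with a "row of ones" (a word with every bit set) flips every bit, which is exactly bitwise not. A quick check, assuming an 8-bit word purely for illustration:

```python
WIDTH = 8                  # assume an 8-bit word for illustration
ONES = (1 << WIDTH) - 1    # a "row of ones": 0b11111111

def bitwise_not(x):
    """Bitwise not obtained via exclusive or with a row of ones."""
    return x ^ ONES

# XOR with all-ones flips every bit; doing it twice restores the word.
assert bitwise_not(0b10110010) == 0b01001101
assert bitwise_not(bitwise_not(0xAB)) == 0xAB
```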

pages: 761 words: 231,902

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
Published 14 Jul 2005

For example, Charles Babbage's late-nineteenth-century mechanical computer (which never ran) provided only a handful of operation codes, yet provided (within its memory capacity and speed) the same kinds of transformations that modern computers do. The complexity of Babbage's invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the technology available to him. The Turing machine, Alan Turing's theoretical conception of a universal computer in 1936, provides only seven very basic commands, yet can be organized to perform any possible computation.73 The existence of a "universal Turing machine," which can simulate any possible Turing machine that is described on its tape memory, is a further demonstration of the universality and simplicity of information.74 In The Age of Intelligent Machines, I showed how any computer could be constructed from "a suitable number of [a] very simple device," namely, the "nor" gate.75 This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than rule 110), given the right software (which would include the connection description of the nor gates).76 Although we need additional concepts to describe an evolutionary process that creates intelligent solutions to problems, Wolfram's demonstration of the simplicity and ubiquity of computation is an important contribution to our understanding of the fundamental significance of information in the world.
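The claim about the "nor" gate is easy to verify directly: NOT, OR, and AND can each be wired from cascades of NOR alone, and from those three any Boolean circuit follows. A minimal sketch:

```python
def nor(a, b):
    """The single primitive gate: true only when both inputs are false."""
    return not (a or b)

# Standard constructions of the other basic gates from NOR alone.
def not_(a):
    return nor(a, a)

def or_(a, b):
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    return nor(nor(a, a), nor(b, b))

# Exhaustively check the constructions against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

Since every combinational circuit can be expressed in terms of NOT, OR, and AND, a supply of NOR gates plus a wiring description really does suffice for any computation.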

A key requirement for a self-organizing system is a nonlinearity: some means of creating outputs that are not simply weighted sums of the inputs. The early neural-net models provided this nonlinearity in their replica of the neuron nucleus.23 (The basic neural-net method is straightforward.)24 Work initiated by Alan Turing on theoretical models of computation around the same time also showed that computation requires a nonlinearity. A system that simply creates weighted sums of its inputs cannot perform the essential requirements of computation. We now know that actual biological neurons have many other nonlinearities resulting from the electrochemical action of the synapses and the morphology (shape) of the dendrites.
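The point can be made concrete with XOR: no plain weighted sum of two inputs computes it, but a few threshold units (weighted sums passed through a nonlinearity), wired in two layers, do. A hand-wired sketch, with weights picked by hand rather than learned:

```python
def step(x):
    """The nonlinearity: a hard threshold, as in early neural-net models."""
    return 1 if x >= 0 else 0

def xor_net(x1, x2):
    """Two threshold units feeding a third; weights chosen by hand."""
    h1 = step(x1 + x2 - 0.5)     # fires if at least one input is 1 (OR)
    h2 = step(-x1 - x2 + 1.5)    # fires unless both inputs are 1 (NAND)
    return step(h1 + h2 - 1.5)   # fires only if both h1 and h2 fire (AND)

# XOR on all four input pairs; no single weighted sum can produce this table.
assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

Remove the `step` nonlinearity and the whole network collapses into one linear function of the inputs, which is exactly the limitation the passage describes.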

Gödel's incompleteness theorem, which is fundamentally a proof demonstrating that there are definite limits to what logic, mathematics, and by extension computation can do, has been called the most important in all mathematics, and its implications are still being debated.27 A similar conclusion was reached by Alan Turing in the context of understanding the nature of computation. When in 1936 Turing presented the Turing machine (described in chapter 2) as a theoretical model of a computer, which continues today to form the basis of modern computational theory, he reported an unexpected discovery similar to Gödel's.28 In his paper that year he described the concept of unsolvable problems—that is, problems that are well defined, with unique answers that can be shown to exist, but that we can also show can never be computed by a Turing machine.
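Turing's unsolvability argument can be replayed as code: given any claimed halting decider, construct a "contrarian" program that does the opposite of whatever the decider predicts about it. A sketch of the construction, where `naive_halts` is a deliberately broken stand-in (the argument shows no correct decider can exist):

```python
def naive_halts(prog):
    """A claimed universal halting decider. This one answers 'never halts'
    for everything; the diagonal construction refutes any candidate."""
    return False

def contrarian():
    """Does the opposite of whatever the decider predicts about it."""
    if naive_halts(contrarian):
        while True:      # decider said "halts", so loop forever
            pass
    return "halted"      # decider said "never halts", so halt immediately

# The decider predicts contrarian never halts -- yet it plainly does:
assert naive_halts(contrarian) is False
assert contrarian() == "halted"
# A decider answering True instead would be refuted by the infinite loop,
# so every candidate decider is wrong on its own contrarian.
```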

pages: 551 words: 174,280

The Beginning of Infinity: Explanations That Transform the World
by David Deutsch
Published 30 Jun 2011

Nevertheless, Babbage and Lovelace denied that it could. Lovelace argued that ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’ The mathematician and computer pioneer Alan Turing later called this mistake ‘Lady Lovelace’s objection’. It was not computational universality that Lovelace failed to appreciate, but the universality of the laws of physics. Science at the time had almost no knowledge of the physics of the brain. Also, Darwin’s theory of evolution had not yet been published, and supernatural accounts of the nature of human beings were still prevalent.

In the past, innovators who brought about such a jump to universality had rarely been seeking it, but since the Enlightenment they have been, and universal explanations have been valued both for their own sake and for their usefulness. Because error-correction is essential in processes of potentially unlimited length, the jump to universality only ever happens in digital systems. 7 Artificial Creativity Alan Turing founded the theory of classical computation in 1936 and helped to construct one of the first universal classical computers during the Second World War. He is rightly known as the father of modern computing. Babbage deserves to be called its grandfather, but, unlike Babbage and Lovelace, Turing did understand that artificial intelligence (AI) must in principle be possible because a universal computer is a universal simulator.

: Everett, Quantum Theory, and Reality (Oxford University Press, 2010)
David Deutsch, ‘It from Qubit’, in John Barrow, Paul Davies and Charles Harper, eds., Science and Ultimate Reality (Cambridge University Press, 2003)
David Deutsch, ‘Quantum Theory of Probability and Decisions’, Proceedings of the Royal Society A455 (1999)
David Deutsch, ‘The Structure of the Multiverse’, Proceedings of the Royal Society A458 (2002)
Richard Feynman, The Character of Physical Law (BBC Publications, 1965)
Richard Feynman, The Meaning of It All (Allen Lane, 1998)
Ernest Gellner, Words and Things (Routledge & Kegan Paul, 1979)
William Godwin, Enquiry Concerning Political Justice (1793)
Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books, 1979)
Douglas Hofstadter, I Am a Strange Loop (Basic Books, 2007)
Bryan Magee, Popper (Fontana, 1973)
Pericles, ‘Funeral Oration’
Plato, Euthyphro
Karl Popper, In Search of a Better World (Routledge, 1995)
Karl Popper, The World of Parmenides (Routledge, 1998)
Roy Porter, Enlightenment: Britain and the Creation of the Modern World (Allen Lane, 2000)
Martin Rees, Just Six Numbers (Basic Books, 2001)
Alan Turing, ‘Computing Machinery and Intelligence’, Mind, 59, 236 (October 1950)
Jenny Uglow, The Lunar Men (Faber, 2002)
Vernor Vinge, ‘The Coming Technological Singularity’, Whole Earth Review, winter 1993

*The term was coined by the philosopher Norwood Russell Hanson.
*This terminology differs slightly from that of Dawkins.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

But let us note at the outset that however many stops there are between here and human-level machine intelligence, the latter is not the final destination. The next stop, just a short distance farther along the tracks, is superhuman-level machine intelligence. The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by. The mathematician I. J. Good, who had served as chief statistician in Alan Turing’s code-breaking team in World War II, might have been the first to enunciate the essential aspects of this scenario. In an oft-quoted passage from 1965, he wrote: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.

The early Good Old-Fashioned Artificial Intelligence systems did not, for the most part, focus on learning, uncertainty, or concept formation, perhaps because techniques for dealing with these dimensions were poorly developed at the time. This is not to say that the underlying ideas are all that novel. The idea of using learning as a means of bootstrapping a simpler system to human-level intelligence can be traced back at least to Alan Turing’s notion of a “child machine,” which he wrote about in 1950: Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.3 Turing envisaged an iterative process to develop such a child machine: We cannot expect to find a good child machine at the first attempt.

In particular, cognitive enhancement could accelerate science and technology, including progress toward more potent forms of biological intelligence amplification and machine intelligence. Consider how the rate of progress in the field of artificial intelligence would change in a world where Average Joe is an intellectual peer of Alan Turing or John von Neumann, and where millions of people tower far above any intellectual giant of the past.63 A discussion of the strategic implications of cognitive enhancement will have to await a later chapter. But we can summarize this section by noting three conclusions: (1) at least weak forms of superintelligence are achievable by means of biotechnological enhancements; (2) the feasibility of cognitively enhanced humans adds to the plausibility that advanced forms of machine intelligence are feasible—because even if we were fundamentally unable to create machine intelligence (which there is no reason to suppose), machine intelligence might still be within reach of cognitively enhanced humans; and (3) when we consider scenarios stretching significantly into the second half of this century and beyond, we must take into account the probable emergence of a generation of genetically enhanced populations—voters, inventors, scientists—with the magnitude of enhancement escalating rapidly over subsequent decades.

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

The answer which seems to me to fit all or nearly all the facts is . . . the force and mechanism of reinforcement, applied to a connection.

—EDWARD THORNDIKE12

If the animal researchers following Thorndike were, like he was, ultimately interested in the psychology of the human child, they were not alone; computer scientists—the very first ones—were too. Alan Turing’s most famous paper, “Computing Machinery and Intelligence,” in 1950, explicitly framed the project of artificial intelligence in these terms. “Instead of trying to produce a programme to simulate the adult mind,” he wrote, “why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.”

“Right when you see it about to happen, you gotta yank that electricity out of the wall, man.”53 “You know, you could forgive Obama for thinking that,” Dylan Hadfield-Menell tells me over the conference table at OpenAI.54 “For some amount of time, you can forgive AI experts for saying that,” he adds—and, indeed, Alan Turing himself talked in a 1951 radio program about “turning off the power at strategic moments.”55 But, says Hadfield-Menell, “I think it’s not something you can forgive if you actually think about the problem. It’s something I’m fine with as a reactionary response, but if you actually deliberate for a while and get to ‘Oh, just pull the plug,’ it’s just, I don’t see how you get to that, if you’re actually taking seriously the assumptions of ‘This thing is smarter than people.’ ” A resistance to being turned off, or to being interfered with in general, hardly requires malice: the system is simply trying to achieve some goal or following its “muscle memory” in doing the things that brought it rewards in the past, and any form of interference simply gets in its way.

Its story will be our story, for better or worse. How could it not? On January 14, 1952, the BBC hosted a radio program that convened a panel of four distinguished scientists for a roundtable conversation. The topic was “Can automatic calculating machines be said to think?” The four guests were Alan Turing, one of the founders of computer science, who had written a now-legendary paper on the topic in 1950; philosopher of science Richard Braithwaite; neurosurgeon Geoffrey Jefferson; and mathematician and cryptographer Max Newman. The panel began discussing the question of how a machine might learn, and how humans might teach it.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

Finally, we’ll turn to how, aided by superhuman AI, we will engineer brain–computer interfaces that vastly expand our neocortices with layers of virtual neurons. This will unlock entirely new modes of thought and ultimately expand our intelligence millions-fold: this is the Singularity.

The Birth of AI

In 1950, the British mathematician Alan Turing (1912–1954) published an article in Mind titled “Computing Machinery and Intelligence.”[1] In it, Turing asked one of the most profound questions in the history of science: “Can machines think?” While the idea of thinking machines dates back at least as far as the bronze automaton Talos in Greek myth,[2] Turing’s breakthrough was boiling the concept down to something empirically testable.

For the purpose of thinking about the Singularity, though, the most important fiber in our bundle of cognitive skills is computer programming (and a range of related abilities, like theoretical computer science). This is the main bottleneck for superintelligent AI. Once we develop AI with enough programming abilities to give itself even more programming skill (whether on its own or with human assistance), there’ll be a positive feedback loop. Alan Turing’s colleague I. J. Good foresaw as early as 1965 that this would lead to an “intelligence explosion.”[138] And because computers operate much faster than humans, cutting humans out of the loop of AI development will unlock stunning rates of progress. Artificial intelligence theorists jokingly refer to this as “FOOM”—like a comic book–style sound effect of AI progress whizzing off the far end of the graph.[139] Some researchers, like Eliezer Yudkowsky, see this as more likely to happen extremely fast (a “hard takeoff” in minutes to months), while others, like Robin Hanson, think it will be relatively more gradual (a “soft takeoff” over years or longer).[140] I fall somewhere in the middle.

On April 9, 2002, personal computing pioneer Mitch Kapor and I engaged in the first “Long Now” bet, concerning whether or not such a Turing test would be passed by 2029.[152] It introduced a series of issues such as defining how much cognitive enhancement a human could have (to be a judge or a human foil) and still be considered a human. The reason a well-defined empirical test is necessary is that, as mentioned previously, humans have a powerful tendency to redefine whatever artificial intelligence achieves as not really so hard in hindsight. This is often referred to as the “AI effect.”[153] Over the seven decades since Alan Turing devised his imitation game, computers have gradually surpassed humans in many narrow areas of intelligence. But they’ve always lacked the breadth and flexibility of human intellect. After IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov in 1997, many commentators dismissed the accomplishment’s relevance to real-world cognition.[154] Because chess involves perfect information about the location of the pieces on the board and their capabilities, and because there is a relatively small number of possible moves each turn, it is easy to represent the game mathematically.

pages: 360 words: 100,991

Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence
by Richard Yonck
Published 7 Mar 2017

World War II gave computing a huge boost that would eventually contribute to our confidence in the inevitability of AI. Driven by the exigencies of war and the challenge of seemingly unbreakable coded messages then being used by Germany and Japan, enormous strides were made in what would later become the field of computer science.5 The code-breaking team of England’s Bletchley Park, including Alan Turing, worked for years on the problem.6 Without their advances, it’s very possible the war would have lasted much longer and perhaps even been lost by the Allies. As it was, soon after the end of World War II, computer science and theory were at such a stage that a number of researchers and scientists believed we would be capable of building a true machine intelligence before very long.

It is the tale of Caleb, a computer programmer who is invited by his employer, eccentric billionaire Nathan, to administer a live Turing test to Ava, a humanoid robot he has created. (A Turing test as referenced here is a general determination of the humanness of an artificial intelligence and not the formal text-based test originally proposed by computing pioneer Alan Turing.) Though obviously an electromechanical robot, Ava has a young, beautiful female face with hands and feet made of simulated flesh. Her robotic form emulates a woman’s breasts, hips and buttocks, suggestive of a fetishistic sexbot. In their first conversation, Caleb questions Ava, attempting to test the range and depth of her intelligence.

pages: 348 words: 97,277

The Truth Machine: The Blockchain and the Future of Everything
by Paul Vigna and Michael J. Casey
Published 27 Feb 2018

In 2005, a computer expert named Ian Grigg, working at a company called Systemics, introduced a trial system he called “triple-entry bookkeeping.” Grigg worked in the field of cryptography, a science that dates way back to ancient times, when coded language to share “ciphers,” or secrets, first arose. Ever since Alan Turing’s calculating machine cracked the German military’s Enigma code, cryptography has underpinned much of what we’ve done in the computing age. Without it we wouldn’t be able to share private information across the Internet—such as our transactions within a bank’s Web site—without revealing it to unwanted prying eyes.
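The thread connecting Enigma-era cryptography to private transactions on the web is the shared secret: a key transforms plaintext so that only key-holders can invert the transformation. A toy illustration only, using a repeating-key XOR cipher (real systems use vetted ciphers such as AES, never this):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Encryption and decryption are the same operation."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = b"transfer 100 to alice"
key = b"not-a-real-key"          # hypothetical key, for illustration only

ciphertext = xor_cipher(secret, key)
assert ciphertext != secret                      # unreadable without the key
assert xor_cipher(ciphertext, key) == secret     # round-trips with the key
```

Repeating-key XOR is trivially breakable (frequency analysis defeated ciphers of this family long before Enigma); the sketch only shows the shape of the symmetric-key idea the passage describes.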

See also ledger-keeping Trump, Donald trust, distributed trusted computing Trusted Computing Group Trusted IoT Alliance trusted third parties and Bitcoin and blockchain-inspired startups and blockchain property registries and cloud computing and energy sector and governance and identity and permissioned systems truth discovery truth machine Tual, Stephan Turing, Alan “Turing complete” Uber “God’s View” knowledge Ubitquity UBS Ujo Ulbricht, Ross UNESCO Union Square Ventures United Kingdom Brexit Financial Conduct Authority Government Office for Science blockchain report and universal basic income United Nations UN High Commission for Refugees (UNHCR) UNHCR identity program World Food Program (WFP) universal basic income (UBI) user attention Veem venture capital (VC) Ver, Roger Veripart Verisign Vertcoin Vigna, Paul.

pages: 349 words: 102,827

The Infinite Machine: How an Army of Crypto-Hackers Is Building the Next Internet With Ethereum
by Camila Russo
Published 13 Jul 2020

“Right, instead it has a machine that’s at its core,” Adam said. “The Ethereum Virtual Machine,” Texture said, scrolling through the paper. “Yeah, which is Turing-complete, so it can process whatever piece of code you throw at it,” Adam said. “Turing completeness” is a concept named after mathematician Alan Turing. Turing-complete machines are able to run any computer code. Bitcoin has a scripting language that supports some computation, but Ethereum’s Turing-complete language is designed to support anything a programmer could dream of, and still run in a decentralized way. “The problem with Turing-complete machines is that infinite loops can break them.
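Ethereum's guard against the runaway loops mentioned above is gas metering: every execution step has a cost, and the machine aborts when the budget is exhausted. A minimal sketch of that idea (a hypothetical mini-interpreter, not the real EVM):

```python
import itertools

class OutOfGas(Exception):
    pass

def run_metered(program, gas_limit):
    """Execute a sequence of zero-argument steps, charging one unit of gas
    per step. An unbounded program is cut off instead of hanging the machine."""
    gas = gas_limit
    results = []
    for step in program:          # 'program' may even be an infinite iterator
        if gas == 0:
            raise OutOfGas("halted: gas exhausted")
        gas -= 1
        results.append(step())
    return results

# A bounded program runs to completion within its budget...
assert run_metered([lambda: 1, lambda: 2], gas_limit=10) == [1, 2]

# ...while an endless one is forcibly halted instead of looping forever.
try:
    run_metered((lambda: 0 for _ in itertools.count()), gas_limit=5)
    assert False, "should have run out of gas"
except OutOfGas:
    pass
```

Because every transaction must pay for its gas up front, an infinite loop simply burns its budget and stops, so no single contract can stall the decentralized machine.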

Is there an opportunity to mitigate against systemic risk? Absolutely, and I think that is almost guaranteed over the long term. It’s just a better way of doing things.” He also raved about Vitalik, who had recently been awarded the prestigious 2014 World Technology Awards, beating out Mark Zuckerberg, who was also nominated. “He’s like, the next Alan Turing!” Joe didn’t say much during Jeff’s pitch/rant. He just stood there, cross armed, with a bemused look on his face. He offered Jeff the job while in transit to an Ethereum meetup later that day. Jeff held out initially because he wanted to see the terms of his contract before officially accepting the position.

pages: 328 words: 96,678

MegaThreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them
by Nouriel Roubini
Published 17 Oct 2022

World War II accelerated the pace for automation. Assembly lines built war materiel, newfangled radar tracked aircraft, and researchers at Bletchley Park, England, used advanced mathematics to break secret German naval codes that revealed the whereabouts of deadly submarines. The brilliant and tragic Alan Turing led the code-breaking initiative. The bombe machines he designed to crack the German Enigma cyphers shortened the war and saved countless lives. After the war, Turing wrote a paper entitled “Computing Machinery and Intelligence.” Instead of asking whether machines can think, he wondered whether computer responses might seem human by replicating the external manifestations of human thought processes.

Norton, 1963), 358–73, http://www.econ.yale.edu/smith/econ116a/keynes1.pdf.
29. Matthew Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology 29, no. 2 (Spring 2016), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2609777.
30. Andrew Hodges, Alan Turing: The Enigma (New York: Simon and Schuster, 1983), p. 382.
31. The New York Times Guide to Essential Knowledge (New York: St. Martin’s Press, 2011), p. 442.
32. Harley Shaiken, “A Robot Is After Your Job,” New York Times, September 3, 1980, https://www.nytimes.com/1980/09/03/archives/a-robot-is-after-your-job-new-technology-isnt-a-panacea.html.
33.

pages: 502 words: 107,657

Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
by Eric Siegel
Published 19 Feb 2013

PA’s deployment brings a qualitative change in the way we compete against malicious intent. But beware! Another type of fraud attacks you and every one of us, many times a day. Are you protected?

Lipstick on a Pig

An Internet service cannot be considered truly successful until it has attracted spammers.

—Rafe Colburn, Internet development thought leader

Alan Turing (1912–1954), the father of computer science, proposed a thought experiment to explore the definition of what would constitute an “intelligent” computer. This so-called Turing test allows people to communicate via written language with someone or something hidden behind a closed door in order to formulate an answer to the question: Is it human or machine?

We may be too darn complex to program computers to mimic ourselves, but the model need not derive answers in the same manner as a person; with predictive modeling, perhaps the computer can find some innovative way to program itself for this human task, even if it’s done differently than by humans. As Alan Turing famously asked, would a computer program that exhibits humanlike behavior qualify as AI? It’s anthropocentric to think so, although I’ve been called worse. But having extensive Jeopardy! learning data did not itself guarantee successful predictive models, for two reasons: 1. Open question answering presents tremendous unconquered challenges in the realms of language analysis and human reasoning. 2.

pages: 385 words: 111,113

Augmented: Life in the Smart Lane
by Brett King
Published 5 May 2016

Chapter 3: When Computers Disappear

“Information technology grows exponentially, basically doubling every year. What used to fit in a building now fits in your pocket, and what fits in your pocket today will fit inside a blood cell in 25 years’ time.”

Ray Kurzweil, 2009

At the height of World War II, the Bletchley Park codebreaking operation in which Alan Turing worked had just taken delivery of Colossus, the first programmable electronic digital computer, designed by engineer Tommy Flowers specifically to assist British codebreakers with cryptanalysis of Lorenz ciphers. The German Lorenz rotor stream cipher machines were widely used during the war by the German army to send encrypted messages and dispatches.

These algorithms don’t learn language like a human; they identify a phrase through recognition, look it up on a database and then deliver an appropriate response. Recognising speech and being able to carry on a conversation are two very different achievements. What would it take for a computer to fool a human into thinking it was a human, too? The Turing Test or Not… In 1950, Alan Turing published a famous paper entitled “Computing Machinery and Intelligence”. In his paper, he asked not just if a computer or machine could be considered something that could “think”, but more specifically “Are there imaginable digital computers which would do well in the imitation game?”26 Turing proposed that this “test” of a machine’s intelligence—which he called the “imitation game”—be tested in a human-machine question and answer session.

pages: 394 words: 108,215

What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry
by John Markoff
Published 1 Jan 2005

Ken Colby, a Stanford computer scientist and psychiatrist who had worked on the Eliza conversational program with Joseph Weizenbaum, later a well-known MIT computer scientist, brought his research group to the laboratory early on. One of the enduring hurdles facing artificial-intelligence research projects has been the Turing test, an experiment first proposed by the British mathematician Alan Turing in 1950. Turing identified a simple way of cutting through the philosophical debate about whether a machine could ever be built to mimic the human mind. If, in a blind test, a person could not tell whether he was communicating with a computer or a human, Turing reasoned, the question would be resolved.

Swenson, Lee System Development Corporation (SDC) Taylor, Robert ARPAnet and Licklider and networks and Vietnam and at Xerox telecommuting Telnet Terman, Frederick Terrell, Paul Tesler, Larry TeX Texas Instruments Thacker, Chuck Thorp, Edward O. Tiny BASIC Tolkien, J.R.R. Tools for Conviviality (Illich) transistors Moore’s Law and; see also Moore’s Law photolithographic printing of triodes Tuck, Hugh Turing, Alan Turing test II Cybernetic Frontiers (Brand) Tymshare Corporation UCLA Utah, University of Vallee, Jacques Van Dam, Andries Varian Veblen, Thorstein Venceremos Vietnam War Cambodia and Laos invasions in Duvall and protest against, see antiwar activism statistics and Tesler and Vallee and Von Neumann, John Walking on the Edge of the World (Leonard) Wallace, Don “Smokey” Warnock, John Warren, Jim Watson, Dick Watson, Tom, Sr.

pages: 366 words: 107,145

Fuller Memorandum
by Stross, Charles
Published 14 Jan 2010

(And it would have been messy, very messy--if old HPL was around today he'd be the kind of blogging and email junkie who's in everybody's RSS feed like some kind of giant mutant gossip squid.) Then there were those who were sitting on top of the truth, if they'd had but the wits to see it--Dennis Wheatley, for example, worked down the hall in Deception Planning at SOE and regularly did lunch with a couple of staff officers who worked with Alan Turing--the man himself, not the anonymous code-named genius currently doing whatever it is they do in the secure wing at the Funny Farm. Luckily Wheatley wouldn't have known a real paranormal excursion if it bit him on the arse. (In fact, looking back to the dusty manila files, I'm not entirely sure that Dennis Wheatley's publisher wasn't on the Deception Planning payroll after the war, if you follow my drift.)

The Memex is a miracle of simplicity and good design, as long as you bear in mind that it's operated by foot pedals (except for the paper tape punch), the display is a microfilm reader, and it can't display more than ten menu choices on screen at any time. Unlike early digital computers such as the Manchester Mark One, you don't need to be Alan Turing and debug raw machine code on the fly by flashing a torch at the naked phosphor memory screen; you just need to be able to type on a Baudot keyboard using both feet (with no delete key and lethal retaliation promised if you make certain typos). There's nothing here that's remotely as hostile as VM/CMS to a UNIX hacker.

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

Indeed, the arc of progress can be traced back in time at least as far as Charles Babbage’s mechanical difference engine in the early nineteenth century. The innovations that have resulted in fantastic wealth and influence in today’s information economy, while certainly significant, do not really compare in importance to the groundbreaking work done by pioneers like Alan Turing or John von Neumann. The difference is that even incremental advances are now able to leverage that extraordinary accumulated account balance. In a sense, the successful innovators of today are a bit like the Boston Marathon runner who in 1980 famously snuck into the race only half a mile from the finish line.

The quest to build a genuinely intelligent system—a machine that can conceive new ideas, demonstrate an awareness of its own existence, and carry on coherent conversations—remains the Holy Grail of artificial intelligence. Fascination with the idea of building a true thinking machine traces its origin at least as far back as 1950, when Alan Turing published the paper that ushered in the field of artificial intelligence. In the decades that followed, AI research was subjected to a boom-and-bust cycle in which expectations repeatedly soared beyond any realistic technical foundation, especially given the speed of the computers available at the time.

The Deep Learning Revolution (The MIT Press)
by Terrence J. Sejnowski
Published 27 Sep 2018

(The Department of Defense had recently poured $600 million into its Strategic Computing Initiative, a program that ran from 1983 to 1993 but came up short on building a vision system to guide a self-driving tank.)9 “Good luck with that,” was my reply. Gerald Sussman, who made several important applications of AI to real-world problems, including a system for high-precision integration for orbital mechanics, defended the honor of MIT’s approach to AI with an appeal to the classic work of Alan Turing, who had proven that the Turing machine, a thought experiment, could compute any computable function. “And how long would that take?” I asked. “You had better compute quickly or you will be eaten,” I added, then walked across the room to pour myself a cup of coffee. And that was the end of the dialogue with the faculty. 34 Chapter 2 “What is wrong with this picture?”

When data sets are small, a single sample left out of the training set can be used to test the performance of the network trained on the remaining examples, and the process repeated for every sample to get an average test performance. This is a special case of cross-validation where n = 1, in which n subsamples are held out. 284 Glossary Turing machine Hypothetical computer invented by Alan Turing (1937) as a simple model for mathematical calculation. A Turing machine consists of a “tape” that can be moved back and forth, a “head” that has a “state” that can change the property of the active cell beneath it, and a set of instructions for how the head should modify the active cell and move the tape.
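The glossary’s ingredients translate almost line for line into a toy simulator: a tape, a head position, a state, and an instruction table saying what to write, which way to move, and which state comes next. The bit-flipping machine below is a standard textbook example, not one taken from the book:

```python
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Run a Turing machine until it halts (or max_steps is exhausted)."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        if pos == len(tape):
            tape.append("_")  # grow the tape on demand with blanks
        # Look up the instruction for (current state, symbol under head).
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write                  # modify the active cell
        pos += 1 if move == "R" else -1    # move the tape/head
    return "".join(tape)

# Instruction table: (state, read symbol) -> (write symbol, move, new state).
# This machine flips every bit until it reaches a blank, then halts.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110_", FLIP))  # prints 1001_
```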

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

It took the advent of a whole new type of technology, however, for them to acquire something even more important: brains. Only in the computer age have talking objects become capable of anything more than the playing back of recorded messages. Of course, that a wondrous new invention—the electronic digital computer—would be adept at mathematical calculation was obvious from the jump. The 1936 paper by Alan Turing that first laid out a vision for such devices was titled, “On Computable Numbers.” Some of the earliest deployed computers were used aboard submarines in World War II to calculate torpedo launch angles at moving targets. But people also envisioned early on that computers might be good at something that intuitively seems much harder for machines to handle than numbers are: words.

When you encountered them, you could exchange messages, making the game one of the world’s first online chat platforms. But you typically didn’t know who the players were in real life. In this anonymity, Mauldin saw an opportunity to do a bold AI experiment. His idea was inspired by the computing pioneer Alan Turing, who back in 1950 had famously proposed a way to gauge a machine’s ability to pass as human. In what came to be known as a Turing test, a person exchanges typed messages with an unknown entity and tries to guess whether it is a human or a chatbot. The computer passes the test if it fools the person into thinking that it is actually alive.

pages: 374 words: 111,284

The AI Economy: Work, Wealth and Welfare in the Robot Age
by Roger Bootle
Published 4 Sep 2019

This skepticism has been fueled, among other things, by the fact that AI has been with us for some time – at least in theory – and it has not yet produced anything really dramatic. It grew out of digital computing, which was explored and developed at Bletchley Park in England during the Second World War, famously enabling the Nazis’ Enigma code to be broken. That feat is closely associated with the name of Alan Turing. Turing was also responsible for AI’s early conceptual framework, publishing in 1950 the seminal paper “Computing Machinery and Intelligence.” The subject was subsequently developed mainly in the USA and the UK. But it waxed and waned in both esteem and achievement. Over the last decade, however, a number of key developments have come together to power AI forward: • Enormous growth in computer processing power

In these conditions, perhaps the best future to await us would be to be kept as a sort of underclass, objects of curiosity and wonder, rather like animals in a zoo, perhaps pacified by appropriate doses of something like soma, the drug dished out in Aldous Huxley’s Brave New World, in order to keep people quiet. There is a wealth of speculation by AI gurus about what this world would be like. It may be most enlightening if I give you a flavor of what they think, in their own words, before giving you my view afterward. The father of the whole AI field, Alan Turing, clearly saw the negative possibilities. In 1951 he wrote: “If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position … we should, as a species, feel greatly humbled.” Many AI experts have subsequently shared this view.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

” * * * — ON March 27, 2019, the Association for Computing Machinery, the world’s largest society of computer scientists, announced that Hinton, LeCun, and Bengio had won the Turing Award. First introduced in 1966, the Turing Award was often called “the Nobel Prize of computing.” It was named for Alan Turing, one of the key figures in the creation of the computer, and it now came with $1 million in prize money. After reviving neural network research in the mid-2000s and pushing it into the heart of the tech industry, where it remade everything from image recognition to machine translation to robotics, the three veteran researchers split the prize three ways, with LeCun and Bengio giving the extra cent to Hinton.

FRANK ROSENBLATT, the Cornell psychology professor who built the Perceptron, a system that learned to recognize images, in the early 1960s. DAVID RUMELHART, the University of California–San Diego psychologist and mathematician who helped revive Frank Rosenblatt’s ideas alongside Geoff Hinton in the 1980s. ALAN TURING, the founding father of the computer age who lived on the staircase at King’s College Cambridge that was later home to Geoff Hinton. NOTES This book is based on interviews with more than four hundred people over the course of the eight years I’ve been reporting on artificial intelligence for Wired magazine and then the New York Times, as well as more than a hundred interviews conducted specifically for the book.

pages: 412 words: 104,864

Silence on the Wire: A Field Guide to Passive Reconnaissance and Indirect Attacks
by Michal Zalewski
Published 4 Apr 2005

This is because even though computer hardware can be and often is consistent and reliable, you typically can’t make long-term predictions about the behavior of a sufficiently complex computer program, let alone a complex matrix of interdependent programs (such as a typical operating system). This makes validating a computer program quite difficult, even assuming we could come up with a detailed, sufficiently strict, and yet flawless hypothetical model of what the program should be doing. Why? Well, in 1936, Alan Turing, the father of modern computing, proved by reductio ad absurdum (reduction to the absurd) that there can be no general method for determining the outcome of an arbitrary computer procedure, or algorithm, in finite time (although there may be specific methods for some algorithms).[41] In practice, this means that while you cannot expect your operating system or text editor to ever behave precisely the way you or the author intend it to, you can reasonably expect that two instances of a text editor on systems running on the same hardware will exhibit consistent and identical behavior given the same input (unless, of course, one of the instances gets crushed by a falling piano or is otherwise influenced by other pesky external events).
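Turing’s reductio can be paraphrased in modern code. Suppose some general halts() decider existed; the diagonal program below defeats any candidate implementation. The naive stub standing in for the decider here is purely illustrative:

```python
def halts(program):
    # Hypothetical universal halting decider. Turing proved no correct
    # general version can exist; this naive stand-in simply claims that
    # every program halts.
    return True

def diagonal():
    # Do the opposite of whatever halts() predicts about diagonal itself.
    if halts(diagonal):
        while True:  # halts() said "halts", so loop forever
            pass
    # halts() said "loops forever", so halt immediately

# Whatever answer halts(diagonal) gives is wrong by construction: that is
# the contradiction at the heart of Turing's proof. (Don't actually call
# diagonal() with this stub -- it would loop forever.)
print(halts(diagonal))
```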

Bibliographic Notes

[41] Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, Series 2, 42 (1936).

[42] R.L. Rivest, A. Shamir, L. Adleman, “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems,” Massachusetts Institute of Technology (1978)

The Man Who Knew Infinity: A Life of the Genius Ramanujan
by Robert Kanigel
Published 25 Apr 2016

Woolf records them in their rooms sitting in domestic tranquility beside the fire, quiet and dejected after returning from the veterinarian with their emaciated, worm-ridden cat. A younger mathematician who knew Hardy much later, when during the 1920s he was at Oxford, says that there was indeed a “rumor of a young man” then. Later, when Hardy visited America in the 1930s, he would impress the mathematician Alan Turing, himself homosexual, as, in the words of his biographer, Andrew Hodges, “just another English intellectual homosexual atheist.” And during this period, too, he would meet an Oxford man many years his junior to whom he would later dedicate a book and whom one account simply refers to as “his beloved John Lomas.”

As closely as the two men often worked, Ramanujan was, inevitably, less the beacon of Hardy’s life than Hardy was of Ramanujan’s. • • • But even if Hardy weren’t so busy, an immense personal and cultural gap stood in the way of real intimacy between the two men. Some years later, the English mathematician Alan Turing would complain of Hardy’s lack of even superficial friendliness. It was 1936, Hardy was spending the year at Princeton, and Turing found him “very standoffish or possibly shy. I met him in Maurice Pryce’s rooms the day I arrived, and he didn’t say a word to me.” Hardy loosened up later, but as Turing’s biographer, Andrew Hodges, observes, “although ‘friendly,’ the relationship was not one that overcame a generation and multiple layers of reserve”—this though Hardy “saw the world through such very similar eyes.”

Oxford: Clarendon Press, 1954. Hemingway, F. R. Tanjore District Gazetteer. 1915. Himmelfarb, Gertrude. Marriage and Morals Among the Victorians. New York: Alfred A. Knopf, 1986. Historical Register of University of Cambridge. Supplement 1, 1911–1920. Cambridge University Press, 1922. Hodges, Andrew. Alan Turing: The Enigma. New York: Simon and Schuster, 1983. Howarth, T. E. B. Cambridge Between Two Wars. London: Collins, 1978. Hoyt, Edwin P. The Last Cruise of the Emden. New York: Macmillan, 1966. Hynes, Samuel. The Victorian Turn of Mind. Princeton: Princeton University Press, 1968. Imperial Gazetteer.

pages: 137 words: 36,231

Information: A Very Short Introduction
by Luciano Floridi
Published 25 Feb 2010

This is the informational environment constituted by all informational processes, services, and entities, thus including informational agents as well as their properties, interactions, and mutual relations. If we need a representative scientist for the fourth revolution, this should definitely be Alan Turing (1912-1954). Inforgs should not be confused with the sci-fi vision of a `cyborged' humanity. Walking around with a Bluetooth wireless headset implanted in our bodies does not seem a smart move, not least because it contradicts the social message it is also meant to be sending: being constantly on call is a form of slavery, and anyone so busy and important should have a personal assistant instead.

pages: 429 words: 114,726

The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise
by Nathan L. Ensmenger
Published 31 Jul 2010

It is the Platonic ideal of the Universal Turing Machine, and not the messy reality of actual physical computers, that is the true subject of modern theoretical computer science; it is only by treating the computer as an abstraction, a mathematical construct, that theoretical computer scientists lay claim to their field being a legitimate scientific, rather than merely a technical or engineering, discipline. The story of this remarkable self-construction and its consequences is the subject of chapter 5. The idealized Universal Turing Machine is, of course, only a conceptual device, a convenient fiction concocted by the mathematician Alan Turing in the late 1930s as a means of exploring a long-standing puzzle in theoretical mathematics known as the Entscheidungsproblem. In order to facilitate his exploration, Turing invented a new tool, an imaginary device capable of performing simple mechanical computations. Each Turing Machine, which consisted of only a long paper tape along with a mechanism for reading from and writing to that tape, contained a table of instructions that allowed it to perform a single computation.

Conventional histories of computer programming tend to conflate programming as a vocational activity with computer science as an academic discipline. In many of these accounts, programming is represented as a subdiscipline of formal logic and mathematics, and its origins are identified in the writings of early computer theorists Alan Turing and John von Neumann. The development of the discipline is evaluated in terms of advances in programming languages, formal methods, and generally applicable theoretical research. This purely intellectual approach to the history of programming, however, conceals the essentially craftlike nature of early programming practice.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

If you could travel back in time to the early twentieth century and tell people that a soon-to-be-invented machine would solve problems in every realm of human endeavor—the same machine for every problem—no one would believe you. They would say that each machine can only do one thing: sewing machines don’t type, and typewriters don’t sew. Then in 1936 Alan Turing imagined a curious contraption with a tape and a head that read and wrote symbols on it, now known as a Turing machine. Every conceivable problem that can be solved by logical deduction can be solved by a Turing machine. Furthermore, a so-called universal Turing machine can simulate any other by reading its specification from the tape—in other words, it can be programmed to do anything.

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. Evolution, part 2 Even if computers today are still not terribly smart, there’s no doubt that their intelligence is rapidly increasing. As early as 1965, I. J. Good, a British statistician and Alan Turing’s sidekick on the World War II Enigma code-breaking project, speculated on a coming intelligence explosion. Good pointed out that if we can design machines that are more intelligent than us, they should in turn be able to design machines that are more intelligent than them, and so on ad infinitum, leaving human intelligence far behind.

pages: 404 words: 113,514

Atrocity Archives
by Stross, Charles
Published 13 Jan 2004

Anyway, I've suffered for my knowledge, and here's what I've learned. I could wibble on about Crowley and Dee and mystics down the ages but, basically, most self-styled magicians know shit. The fact of the matter is that most traditional magic doesn't work. In fact, it would all be irrelevant, were it not for the Turing theorem--named after Alan Turing, who you'll have heard of if you know anything about computers. That kind of magic works. Unfortunately. You haven't heard of the Turing theorem--at least, not by name--unless you're one of us. Turing never published it; in fact he died very suddenly, not long after revealing its existence to an old wartime friend who he should have known better than to have trusted.

We can get some ideas about the lives and occupations of these people by extrapolating from the published material about the intelligence services. James Bamford's Body of Secrets, a deep and fascinating history of the US National Security Agency, offers some hints from outside--as do other histories of the cryptic profession, such as David Kahn's The Codebreakers and Andrew Hodges's masterful biography of Alan Turing--for if any agency gets its hands on tools for probing the Platonic realm, it will be a kissing cousin of the kings of cryptography. We can draw some other conclusions from the unspoken and unwritten history of the secret services. Why, for example, was the British Special Operations Executive disbanded so suddenly in 1945?

pages: 492 words: 118,882

The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory
by Kariappa Bheemaiah
Published 26 Feb 2017

For example, early computers such as the Differential Analyser, invented by Vannevar Bush in the mid-1930s, were analog computation machines5 that were created to solve ordinary differential equations to help calculate the trajectories of shells. As World War Two broke out, these advances in computing were adopted and developed by various militaries to communicate sensitive information by integrating the techniques of cryptography - a kind of natural selection. To combat this, pioneers such as Alan Turing and his mentor Max Newman set about designing and building automated machines (Turing Machines) that could decrypt these camouflaged communiqués. This effectively changed the use of the computer and increased the diversity of the kinds of computers. After the war, advances by notable inventors such as John Mauchly, Presper Eckert and John von Neumann (a veritable polymath) led to the creation of the EDVAC (Electronic Discrete Variable Automatic Computer), the first binary computer.

As World War Two began in 1939, these advances in information technology had been adopted by various militaries to communicate sensitive information. Cryptography became a suitable way of camouflaging information and led to the creation of the Enigma machine. Luckily for the Allies, hope lay in the form of some work that had been done a few years earlier by another Cambridge mathematician, Alan Turing. Along with his mentor, Max Newman, Turing set about designing and building automated machines (Turing Machines) that could decrypt secret German military communications (as documented in the popular movie, ‘The Imitation Game’). However, owing to an obsession for secrecy during the war years and for several years after that, the achievements made by Turing and the team at Bletchley Park in computer development was kept hidden from view.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

Choy, “Current Applications and Future Impact of Machine Learning in Radiology,” Radiology (2018): 288(2), 318–328. A BRIEF HISTORY With all the chatter and buzz about AI these days, it would be easy to think it was some kind of new invention, but, conceptually, it goes back at least eighty years. In 1936 Alan Turing published a paper on powerful, automated, intelligent systems—a universal computer—titled “On Computable Numbers, with an Application to the Entscheidungsproblem.”13 I don’t understand the multitude of equations in this thirty-six-page gem, but I must agree with his statement, “We are now in a position to show that the Entscheidungsproblem cannot be solved,” both because I can’t say it and still don’t have a clue what it is!

Nonetheless, this was the first time AI prevailed at a task over a world-champion human, and, unfortunately, framed in this way, it helped propagate the AI-machine versus man war, like the title of the 2017 New Yorker piece, “A.I. Versus M.D.”17 The adversarial relationship between humans and their technology, which had a long history dating back to the steam engine and the first Industrial Revolution, had been rekindled.

1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Predicted ten years for AI to beat human at chess (Herbert Simon)
1958—Perceptron (single-layer neural network) (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—We know more than we can tell (Michael Polanyi’s paradox)
1969—Question AI viability (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN (Sepp Hochreiter, Jürgen Schmidhuber)
1997—Deep Blue wins in chess (Garry Kasparov)
2004—Self-driving vehicle, Mojave Desert (DARPA Challenge)
2007—ImageNet launches
2011—IBM vs.

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

An upright ape, living in dust, with crude language and tools. All set for extinction.” Nathan assumes he’ll be orchestrating the first stages of that transition. To this end, he wants to test his latest android, a robot called Ava, with a modern-day variation on the Turing test. In 1950, the computer scientist and mathematician Alan Turing proposed one of the first methods of assessing whether a machine had achieved human intelligence. A person and a machine would engage in a typed conversation, separated from one another. An observer would try to determine which participant was a computer, and which a person. If the observer could be fooled, the machine passed the test.

Ian McEwan’s novel Machines Like Me aptly captures the dangers of these experimentalist horizons. It is set in an imagined Britain, where a corporation sells twenty-five robots indistinguishable from humans in the early 1980s. Technology advances more quickly in this imagined world, in part because in McEwan’s alternative-reality British authorities spared Alan Turing the homophobic witch hunt that prematurely ended his actual life. The narrator, Charlie, buys one of the twelve “Adams” on offer with an inheritance. Since Adam “was advertised as a companion, an intellectual sparring partner, friend and factotum who could wash dishes, make beds and ‘think’,” he seems like a perfect amusement for the pensive, lonely, and bored Charlie, who invites his neighbor (and love interest) Miranda to help him program the personality of Adam.25 Charlie hopes the robot can be a common project for them as a couple.

pages: 392 words: 114,189

The Ransomware Hunting Team: A Band of Misfits' Improbable Crusade to Save the World From Cybercrime
by Renee Dudley and Daniel Golden
Published 24 Oct 2022

Another basic element of ransomware, cryptography, also goes back to ancient times. The Roman army used a cipher named after Caesar to encrypt military messages. Almost two millennia later, Nazi Germany scrambled its communications with a device called the Enigma Machine, giving it an advantage in World War II, until a team led by British mathematician Alan Turing succeeded in cracking the code. More recently, cryptography has become a backbone of the internet, safeguarding electronic banking, commerce, and communications. Unfortunately, legitimate cryptographic tools developed by government, industry, and academia have been co-opted by cybercriminals for their own purposes.
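The Caesar cipher mentioned above is simple enough to sketch in full. This is the textbook construction, using Caesar’s traditional shift of three in the example:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

msg = "attack at dawn"
secret = caesar(msg, 3)    # encrypt with a shift of 3
print(secret)              # dwwdfn dw gdzq
print(caesar(secret, -3))  # shifting back by 3 recovers the plaintext
```

A fixed shift is trivially breakable (there are only 25 candidates to try), which is why modern cryptography, from Enigma onward, moved to keyed machines and algorithms.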

Salem4Youth Salsa20 SamSam SANS Institute Sapolsky, Robert Schilb, Ronald Schneck, Phyllis Schroeder, Simon Schuurbiers, Marijn Scotland Yard, Computer Crime Unit of (CUU) Scott, Brandon script kiddies (skiddies) SecondMarket Secret Intelligence Service Secret Service Seculore Solutions Securities and Exchange Commission (SEC) SecurityScorecard seeds Sentinel September 11 terrorist attacks Shamir, Adi Shodan Shortland, Anja Siegel, Bill Silar, Nicko Silbert, Barry Silk Road Sinclair Broadcast Group Sky Lakes Medical Center Skyline Comfort LLC Smilyanets, Dmitry Solve Soviet Union Springhill Medical Center State Department State Farm Stevenson, Adlai, II Stoll, Cliff STOPDjvu Storfer, Jonathan stream ciphers Suncrypt symmetric encryption SynAck synesthesia Takkenberg, Pim Tang Tantleff, Aaron Telegram Tequila TeslaCrypt TeslaWare Todd, Hugh Tor Toshiba Tec Transportation Security Administration (TSA) Travelers Travelex Treasury Department Trellix Trench Micro TrickBot Trifiletti, Christopher Tripoli Trump, Donald Turing, Alan Turing Prize Twitter Ukraine Ulbricht, Ross universal decryptor University Hospital of Düsseldorf University of California–San Francisco Unix time Unknown U.S. 
Conference of Mayors Vachon-Desjardins, Sebastien van der Wiel, Jornt van Hofweegen, Peter VashSorena Vasinskyi, Yaroslav Vatis, Michael Ventrone, Melissa Virus Bulletin VirusTotal Wall Street Journal, The WannaCry WastedLocker Waters, Michael Wazix West, Nigel Whitacre, Mark White, Sarah WhiteRose Wildfire Wilding, Edward Willems, Eddy Wilson, Tina Witherspoon, Joel Witt, Stephen WND Wonderful Wizard of Oz, The (Baum) World War II Worters, Loretta Wosar, Fabian; Apocalypse and; DarkSide and; early life of; EpsilonRed and; Evil Corp and; FBI and; Operation Bleeding Cloud of; REvil and Wray, Christopher Xerox Yakubets, Maksim YARA rules Young, Adam Young, Bernard “Jack” Yung, Moti Zbot Trojan Zeppelin ZeroAccess zero-day exploits Zeus Ziggy ZoomInfo ALSO BY DANIEL GOLDEN Spy Schools: How the CIA, FBI, and Foreign Intelligence Secretly Exploit America’s Universities The Price of Admission: How America’s Ruling Class Buys Its Way into Elite Colleges—and Who Gets Left Outside the Gates A NOTE ABOUT THE AUTHORS Renee Dudley is a technology reporter at ProPublica.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Technology proliferates, and with every successive wave that proliferation accelerates and penetrates deeper, even as the technology gets more powerful. This is technology’s historical norm. As we gaze toward the future, this is what we can expect. Or can we? CHAPTER 3 THE CONTAINMENT PROBLEM REVENGE EFFECTS Alan Turing and Gordon Moore could never have predicted, let alone altered the rise of, social media, memes, Wikipedia, or cyberattacks. Decades after their invention, the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident. Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.

For the time being, it doesn’t matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do. Focus on that, and the real challenge comes into view: systems can do more, much more, with every passing day. CAPABILITIES: A MODERN TURING TEST In a paper published in 1950, the computer scientist Alan Turing suggested a legendary test for whether an AI exhibited human-level intelligence. When AI could display humanlike conversational abilities for a lengthy period of time, such that a human interlocutor couldn’t tell they were speaking to a machine, the test would be passed: the AI, conversationally akin to a human, deemed intelligent.
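The question-and-answer format Turing proposed can be sketched as a trivial harness. This is purely illustrative; the respondents, the `judge` heuristic, and all names here are invented for the sketch, not drawn from the book.

```python
def imitation_game(interrogator, candidate_a, candidate_b, questions):
    """Minimal sketch of Turing's question-and-answer format: the
    interrogator sees only text replies and must guess which
    respondent is the machine ("a" or "b")."""
    transcript = [(q, candidate_a(q), candidate_b(q)) for q in questions]
    return interrogator(transcript)

# Toy respondents (purely illustrative):
human = lambda q: "Hmm, let me think about that."
machine = lambda q: "ERROR: QUERY NOT UNDERSTOOD"

# An interrogator that flags whichever respondent sounds canned:
def judge(transcript):
    return "b" if any("ERROR" in b for _, _, b in transcript) else "a"

print(imitation_game(judge, human, machine, ["What is love?"]))  # -> b
```

A real test would of course run for "a lengthy period of time" with a human interrogator; the point of the sketch is only that the test is defined entirely over the transcript, never over the machine's internals.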

pages: 398 words: 120,801

Little Brother
by Cory Doctorow
Published 29 Apr 2008

The Nazi cipher was called Enigma, and they used a little mechanical computer called an Enigma Machine to scramble and unscramble the messages they got. Every sub and boat and station needed one of these, so it was inevitable that eventually the Allies would get their hands on one. When they did, they cracked it. That work was led by my personal all-time hero, a guy named Alan Turing, who pretty much invented computers as we know them today. Unfortunately for him, he was gay, so after the war ended, the stupid British government forced him to get shot up with hormones to "cure" his homosexuality and he killed himself. Darryl gave me a biography of Turing for my 14th birthday -- wrapped in twenty layers of paper and in a recycled Batmobile toy, he was like that with presents -- and I've been a Turing junkie ever since.
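The stepping-rotor idea the passage describes can be shown with a toy cipher, far simpler than the real Enigma (no plugboard, reflector, or multiple rotors). The offset advances after every letter, so repeated plaintext letters encrypt differently, which is what defeated simple frequency analysis:

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def rotor_machine(text, key, decrypt=False):
    """Toy single-rotor cipher: a Caesar-style substitution whose
    offset steps forward after every letter, Enigma's key property
    in miniature."""
    out = []
    offset = key
    for ch in text:
        i = ALPHA.index(ch)
        shift = -offset if decrypt else offset
        out.append(ALPHA[(i + shift) % 26])
        offset = (offset + 1) % 26   # the rotor steps one position
    return "".join(out)

msg = "ATTACKATDAWN"
ct = rotor_machine(msg, key=7)
assert rotor_machine(ct, key=7, decrypt=True) == msg   # same settings decrypt
```

Note how `rotor_machine("AAA", 0)` gives `"ABC"`: identical letters never line up, yet anyone holding the same machine and key settings can reverse the message, which is why capturing a machine (and its settings) mattered so much.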

Cryptome's brave publishers collect material that's been pried out of the state by Freedom of Information Act requests or leaked by whistle-blowers and publishes it. The best fictional account of the history of crypto is, hands-down, Neal Stephenson's Cryptonomicon (Avon, 2002). Stephenson tells the story of Alan Turing and the Nazi Enigma Machine, turning it into a gripping war-novel that you won't be able to put down. The Pirate Party mentioned in Little Brother is real and thriving in Sweden (www.piratpartiet.se), Denmark, the USA and France at the time of this writing (July, 2006). They're a little out-there, but a movement takes all kinds.

pages: 472 words: 117,093

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Published 26 Jun 2017

. *** “The Fox and the Hedgehog” was also the title of an essay by the philosopher Isaiah Berlin that divided thinkers throughout history into two categories: those who pursue a single big idea throughout their careers, and those who pursue many different ones. CHAPTER 3 OUR MOST MIND-LIKE MACHINES I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. — Alan Turing, 1950 AS SOON AS WE DEVELOPED DIGITAL COMPUTERS, WE TRIED to get them to think the way we do. It was obvious from the start that they’d be highly useful for performing routine mathematical calculations, but this was not novel. Humans, after all, had been building calculating machines—from abacuses in Japan and Babylon to the mysterious Greek Antikythera mechanism*—since before the time of Christ.

As a 2015 article by Jo Marchant put it, “Nothing else like this has ever been discovered from antiquity. Nothing as sophisticated, or even close, appears again for more than a thousand years.” Jo Marchant, “Decoding the Antikythera Mechanism, the First Computer,” Smithsonian, February 2015, http://www.smithsonianmag.com/history/decoding-antikythera-mechanism-first-computer-180953979. † Alan Turing proved that a basic computer that stores a program could be thought of as a universal computing machine that, in principle, could be instructed to solve any problem solvable by an algorithm. ‡ As the linguist Steven Pinker points out in his 1994 book The Language Instinct, a child who is upset with her parent’s choice for bedtime reading could construct a complex sentence like “Daddy, what did you bring that book that I don’t want to be read to out of up for?”

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Perhaps advances in brain-computer interfaces will prove to be useful for those unable to speak or when silence or stealth is needed, such as card counting in blackjack. The murkier question is whether these cybernetic assistants will eventually pass the Turing test, the metric first proposed by mathematician and computer scientist Alan Turing to determine if a computer is “intelligent.” Turing’s original 1950 paper has spawned a long-running philosophical discussion and even an annual contest, but today what is more interesting than the question of machine intelligence is what the test implies about the relationship between humans and machines.

That wouldn’t happen for another half decade in conjunction with the summer 1956 Dartmouth conference. He had first come to the concept in grad school when attending the Hixon Symposium on Cerebral Mechanisms in Behavior at Caltech.10 At that point there weren’t programmable computers, but the idea was in the air. Alan Turing, for example, had written about the possibility the previous year, to receptive audiences on both sides of the Atlantic. McCarthy was thinking about intelligence as a mathematical abstraction rather than something realizable—along the lines of Turing—through building an actual machine. It was an “automaton” notion of creating human intelligence, but not of the kind of software cellular automata that von Neumann would later pursue.

pages: 566 words: 122,184

Code: The Hidden Language of Computer Hardware and Software
by Charles Petzold
Published 28 Sep 1999

A 4-bit processor can add 32-bit numbers, for example, simply by doing it in 4-bit chunks. In one sense, all digital computers are the same. If the hardware of one processor can do something another can't, the other processor can do it in software; they all end up doing the same thing. This is one of the implications of Alan Turing's 1937 paper on computability. Where processors ultimately do differ, however, is in speed. And speed is a big reason why we're using computers to begin with. The maximum clock speed is an obvious influence on the overall speed of a processor. That clock speed determines how fast each instruction is being executed.
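The chunked addition Petzold describes can be sketched directly: each 4-bit addition keeps its low nibble and passes its carry to the next chunk. The function name and structure are illustrative, not how any particular processor's microcode is written.

```python
def add32_with_4bit_alu(a, b):
    """Add two 32-bit numbers using only 4-bit additions, carrying
    between chunks -- the software scheme the text describes."""
    result, carry = 0, 0
    for chunk in range(8):                  # 32 bits = eight 4-bit nibbles
        na = (a >> (4 * chunk)) & 0xF       # extract one nibble of each operand
        nb = (b >> (4 * chunk)) & 0xF
        s = na + nb + carry                 # 4-bit add with carry-in
        result |= (s & 0xF) << (4 * chunk)  # keep the low 4 bits
        carry = s >> 4                      # carry-out feeds the next nibble
    return result & 0xFFFFFFFF

assert add32_with_4bit_alu(0xFFFFFFFF, 1) == 0   # carry ripples through all 8 chunks
```

Eight passes through a 4-bit adder instead of one pass through a 32-bit adder: the same answer, roughly eight times slower, which is exactly the speed point the chapter goes on to make.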

As processors have become more sophisticated, many common tasks previously done in software have been built into the processor. We'll see examples of this trend in the chapters ahead. Even though all digital computers have the same capabilities, even though they can do nothing beyond the primitive computing machine devised by Alan Turing, the speed of a processor of course ultimately affects the over-all usefulness of a computer system. Any computer that's slower than the human brain in performing a set of calculations is useless, for example. And we can hardly expect to watch a movie on our modern computer screens if the processor needs a minute to draw a single frame.

pages: 510 words: 120,048

Who Owns the Future?
by Jaron Lanier
Published 6 May 2013

This was darker than Malthus, as it replaced unintentional self-destruction with instantaneous decisive destruction accessible with the simple press of a button. • Turing: Politics and people won’t even exist. Only technology will exist when it gets good enough, which means it will become supernatural. Not long after Hiroshima, Alan Turing hatched the idea that people are creating a successor reality in information. Obviously Turing’s humor inspired a great deal of science fiction, but I’ll argue it’s distinct because it poses the possibility of a new metaphysics. People might turn into information rather than be replaced by it. This is why Ray Kurzweil can await being uploaded into a virtual heaven.

To me, at any rate, Conlon’s music was the momentous first appearance of a musical “any.” Here was an example of someone who had gained precise, unlimited control of a domain and indeed he did create entirely new meaning and sensation by leaping out of the snags the rest of us navigate, onto a new plateau of generality. Who had done that before? Alan Turing, certainly. The great analytic mathematicians. Who else? Who had done it aesthetically? It seemed to me that I must seek out any and all opportunities to find other such plateaus. What Conlon did for rhythm might be done for sensory impressions, for the human body, for the whole of human experience.

pages: 153 words: 45,871

Distrust That Particular Flavor
by William Gibson
Published 3 Jan 2012

Certain goals of the government’s Total (now Terrorist) Information Awareness initiative may eventually be realized simply by the evolution of the global information system—but not necessarily or exclusively for the benefit of the United States or any other government. This outcome may be an inevitable result of the migration to cyberspace of everything that we do with information. Had Orwell known that computers were coming (out of Bletchley Park, oddly, a dilapidated English country house, home to the pioneering efforts of Alan Turing and other wartime code-breakers) he might have imagined a Ministry of Truth empowered by punch cards and vacuum tubes to better wring the last vestiges of freedom from the population of Oceania. But I doubt his story would have been very different. Would East Germany’s Stasi have been saved if its agents had been able to mouse away on PC’s into the Nineties?

pages: 159 words: 45,073

GDP: A Brief but Affectionate History
by Diane Coyle
Published 23 Feb 2014

This was, of course, the computer and Internet revolution. This provides a good example of the kind of time lags Paul David described. The electronic programmable computer was one of the basic innovations of World War II. It emerged from the wartime code-breaking work at Bletchley Park in the United Kingdom and the brilliant conceptual leaps made by Alan Turing, and, across the Atlantic during and after the war, from the work of John Von Neumann and others involved in the development of nuclear weapons. Computers began as military and academic machines, then came into use in big businesses, and in the 1980s finally became small and cheap enough to spread to all offices and gradually individual homes.

pages: 331 words: 47,993

Artificial You: AI and the Future of Your Mind
by Susan Schneider
Published 1 Oct 2019

For instance, one version could apply to nonlinguistic agents that are part of an artificial life program, looking for specific behaviors that indicate consciousness, such as mourning the dead. Another could apply to an AI with sophisticated linguistic abilities and probe it for sensitivity to religious, body swapping, or philosophical scenarios involving consciousness. An ACT resembles Alan Turing’s celebrated test for intelligence, because it is entirely based on behavior—and, like Turing’s test, it could be implemented in a formalized question-and-answer format. But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the “mind” of the machine.

pages: 476 words: 134,735

The Unpersuadables: Adventures With the Enemies of Science
by Will Storr
Published 1 Jan 2013

He admits to never having had any ‘interest in investigating if it’s true because I’ve always thought it isn’t.’ So is it surprising that he is only ‘90 per cent’ certain about this? Actually, it isn’t. Many academics are prepared to admit that parapsychologists have proven psi phenomena by the standards usually demanded by science. Computer pioneer Alan Turing once said, ‘How we should like to discredit [psi]! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.’ The New Scientist has reported that, ‘For years, well-designed studies carried out by researchers at respected institutions have produced evidence for the reality of ESP.

You can see these in Fig. 5 of this paper: http://www.sheldrake.org/Articles&Papers/papers/animals/pdf/dog_video.pdf.’ 265 adding a meta-analysis that confirms his view: Dean Radin, ‘The Sense of Being Stared At: A Preliminary Meta-Analysis’, Journal of Consciousness Studies 12, no.6 (2005), pp. 95–100. 266 Computer pioneer Alan Turing once said: John Horgan, ‘Brilliant Scientists are Open-Minded about Paranormal Stuff, So Why Not You?’, Scientific American, 20 July 2012. 266 New Scientist has reported: Robert Matthews, ‘Opposites Detract’, New Scientist, 13 March 2004. 266 As far back as 1951, pioneering neuroscientist Donald Hebb admitted: Montague Ullman, The Comprehensive Textbook of Psychiatry, Vol. 3, 3rd edition, Chapter 56, Section 15, pp. 3235–45, 1980.

pages: 578 words: 141,373

Concretopia: A Journey Around the Rebuilding of Postwar Britain
by John Grindrod
Published 2 Nov 2013

That would be sent to punch card operators – we had three of those – who would then transfer that information from the form into the digital format on a card which would then be put into the computer.’ It was like the paperless office in reverse, the computer not just generating paper but ingesting it too. I was reminded of the punch-cards Alan Turing’s crypto-analysts were using for their proto-computers at Bletchley Park in their efforts to decode Enigma over two decades earlier. Not all of Stanley Miller’s experimental systems worked smoothly. The timber-framed windows were a weakness on the MWM blocks. ‘A window on a two-storey house is subject to wind-driven rain at a fairly low level,’ John explained.

Swinging or not, there was precious little in Galley Hill to keep the new residents amused. ‘There was no cinema, no theatre, no pubs, apart from going into Stony.’ Milton Keynes Development Corporation was aiming higher than simply providing a few decent houses for the residents. They were selling a vision of the future too. As befitted the home of Alan Turing and the Bletchley Park brainiacs, Milton Keynes was all for embracing the latest in science and technology. The original development plan from 1970 paid the usual lip service to ‘changes in office work as a result of the use of computers’ but then went further: ‘The video-phone is already in use experimentally in the United States of America and could well be in use in Milton Keynes in the 1980s.’16 Much excitement was generated by the development corporation’s idea of installing cable into every home: a 1972 edition of The Times featured ‘Mrs 1990’ dialling ‘her shopkeeper on her audio-visual telephone’ or ‘using her two-way TV’ – as well as possessing ‘her own lightweight electric car for shopping’.17 Channel 40, the town’s own cable station, was launched in 1976.

Jennifer Morgue
by Stross, Charles
Published 12 Jan 2006

What most folks (including most mathematicians and computer scientists — which amounts to the same thing) don't know is that in overlapping parallel versions of the cave other beings — for utterly unhuman values of "beings" — can also sometimes see the shadows, and cast shadows right back at us. Back before about 1942, communication with other realms was pretty hit and miss. Unfortunately, Alan Turing partially systematized it — which later led to his unfortunate "suicide" and a subsequent policy reversal to the effect that it was better to have eminent logicians inside the tent pissing out, rather than outside pissing in. The Laundry is that subdivision of the Second World War-era Special Operations Executive that exists to protect the United Kingdom from the scum of the multiverse.

"Did anyone tell you what the Laundry actually does?" "Plays lots of deathmatches?" he asks hopefully. "That's one way of putting it," I begin, then pause. How to continue? "Magic is applied mathematics. The many-angled ones live at the bottom of the Mandelbrot set. Demonology is right after debugging in the dictionary. You heard of Alan Turing? The father of programming." "Didn't he work for John Carmack?" Oh, it's another world out there. "Not exactly, he built the first computers for the government, back in the Second World War. Not just codebreaking computers; he designed containment processors for Q Division, the Counter-Possession Unit of SOE that dealt with demon-ridden Abwehr agents.

pages: 532 words: 140,406

The Turing Option
by Harry Harrison and Marvin Minsky
Published 2 Jan 1992

His answer: "The question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general, educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." Alan Turing, 1950 1 Ocotillo Wells, California February 8, 2023 J. J. Beckworth, the Chairman of Megalobe Industries, was disturbed, though years of control prevented any outward display of his inner concern. He was not worried, not afraid; just disturbed. He turned about in his chair to look at the spectacular desert sunset.

He had eaten enough meals in hotel rooms so he joined Ben next morning in the restaurant for breakfast. "Where's Sven?" Ben asked. "I thought he liked publicity and his newfound freedom?" "He does. But he discovered that Stockholm has phone numbers for what is called therapeutic sexual conversation. So he is both practicing his Swedish and doing research into human sexual practices." "Oh, Alan Turing, would you were but alive in this hour!" They were finishing a second pot of coffee when Shelly came into the dining room, looked around, then walked slowly over to their table. Ben stood up before her. "I don't think you're wanted here—even if Military Intelligence managed to get you past the police."

pages: 759 words: 166,687

Between Human and Machine: Feedback, Control, and Computing Before Cybernetics
by David A. Mindell
Published 10 Oct 2002

Presper Eckert, John Mauchly, Claude Shannon, and Jay Forrester, among others, participated in the NDRC’s research program on control systems. This is more than coincidence, for these men did not build electronic digital computers simply as calculators. Nor were they generally concerned with the questions of computability and logic that occupied mathematicians like Alan Turing and John von Neumann. Rather, they drew on longstanding traditions of control engineering, especially the technologies of fire control. My point is not to rewrite the history of computing—mathematicians of course played critical roles, as did the business machine industry—but rather to establish how the era of cyberspace and the Internet, with its emphasis on the computer as a communications device and as a vehicle for human interaction, connects to a longer history of control systems that generated computers as networked communications devices.

“What Makes the Picture Talk: AT&T and the Development of Sound Motion Picture Technology.” IEEE Transactions on Education 35 (November 1992): 278–85. Hoddeson, Lillian. “The Emergence of Basic Research in the Bell Telephone System, 1875–1915.” Technology and Culture 22, no. 4 (1981): 512–44. Hodges, Andrew. Alan Turing: The Enigma . New York: Simon & Schuster, 1983. Holst, Per A. “George A. Philbrick and Polyphemus: The First Electronic Training Simulator.” Annals of the History of Computing 4 (April 1982): 144–45. Holton, Gerald, ed. The Twentieth-Century Sciences: Studies in the Biography of Ideas . New York: Norton, 1970.

pages: 468 words: 137,055

Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age
by Steven Levy
Published 15 Jan 2002

Some of the emerging crypto forces were now well beyond code making and deeply into cryptanalysis. While this had been undertaken by the crypto crowd before—most famously in the attacks on Merkle’s knapsack scheme—there was now a new sort of effort. It did not conform to the traditional rules forged in the world of William Friedman or Alan Turing. . . . It was an aggregate code breaking, a mass effort powered by the amplifying abilities of the Net. Its practitioners were, of course, cypherpunks. This breed of codebreaker was not interested in crime and espionage, but in making a political point and reaping big fun in the process. One of the first efforts began with Phil Zimmermann’s PGP software.

Some of Clifford Cocks’s remarks here were drawn from “The Invention of Non-Secret Encryption,” a talk given at Bletchley Park on June 20, 1998, at a “History of Cryptography” seminar hosted by the British Society for the History of Mathematics. Page 316 Project C43 The paper is still not available. It is unclear whether this research was related to speech-encryption work known as “Project X” in Bell Labs. In Turing: The Enigma, Andrew Hodges describes Alan Turing’s participation in that project, which also benefited from the input of Claude Shannon (also at Bell Labs then) and William Friedman. If there was any cross-influence of those projects, that means that public key’s heritage directly flows from the century’s major prepublic key cryptographic figures. 323 finished his memo M.

pages: 478 words: 142,608

The God Delusion
by Richard Dawkins
Published 12 Sep 2006

The ‘crime’ itself being a private act, performed by consenting adults who were doing nobody else any harm, we again have here the classic hallmark of religious absolutism. My own country has no right to be smug. Private homosexuality was a criminal offence in Britain up until – astonishingly – 1967. In 1954 the British mathematician Alan Turing, a candidate along with John von Neumann for the title of father of the computer, committed suicide after being convicted of the criminal offence of homosexual behaviour in private. Admittedly Turing was not buried alive under a wall pushed over by a tank. He was offered a choice between two years in prison (you can imagine how the other prisoners would have treated him) and a course of hormone injections which could be said to amount to chemical castration, and would have caused him to grow breasts.

Hinde, R. A. (2002). Why Good Is Good: The Sources of Morality. London: Routledge. Hitchens, C. (1995). The Missionary Position: Mother Teresa in Theory and Practice. London: Verso. Hitchens, C. (2005). Thomas Jefferson: Author of America. New York: HarperCollins. Hodges, A. (1983). Alan Turing: The Enigma. New York: Simon & Schuster. Holloway, R. (1999). Godless Morality: Keeping Religion out of Ethics. Edinburgh: Canongate. Holloway, R. (2001). Doubts and Loves: What is Left of Christianity. Edinburgh: Canongate. Humphrey, N. (2002). The Mind Made Flesh: Frontiers of Psychology and Evolution.

What Kind of Creatures Are We? (Columbia Themes in Philosophy)
by Noam Chomsky
Published 7 Dec 2015

The creative use of language was a basis for what has been called the “epistemological argument” for mind-body dualism and also for the scientific inquiries of the Cartesians into the problem of “other minds”—much more sensible, I believe, than contemporary analogs, often based on misinterpretation of a famous paper of Alan Turing’s, a topic that I will put aside.24 Desmond Clarke is accurate, I think, in concluding that “Descartes identified the use of language as the critical property that distinguishes human beings from other members of the animal kingdom and [that] he developed this argument in support of the real distinction of mind and matter.”

pages: 171 words: 51,276

Infinity in the Palm of Your Hand: Fifty Wonders That Reveal an Extraordinary Universe
by Marcus Chown
Published 22 Apr 2019

It can never be a vacuum cleaner or a toaster or a nuclear reactor. However, a computer can be a word processor or an interactive video game or a smartphone. The list is endless. This illustrates the unique feature of a computer: it can simulate any other machine. But what are the limits of what computers can do? Enter Alan Turing, the English mathematician famous for his role in breaking the Nazi Enigma and Fish codes and, arguably, shortening the Second World War by several years. In the 1930s, before any practical computers existed, Turing asked: “What are the limits of computers?” The answer he found was very surprising.
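The surprising answer Turing found, that some perfectly well-posed problems no computer can solve, rests on a diagonal argument that fits in a few lines. The sketch below is illustrative (the names `paradox_for` and `g` are invented here), and the oracle it refutes is a stand-in for any claimed halting decider:

```python
def paradox_for(halts):
    """Given any claimed halting oracle, build a program the oracle
    must misjudge: if the oracle says it halts, it loops forever;
    if the oracle says it loops, it halts at once."""
    def g():
        if halts(g):
            while True:   # oracle said "halts" -> run forever
                pass
        # oracle said "loops" -> return immediately
    return g

# Try the oracle that answers "loops" (False) for everything:
g = paradox_for(lambda program: False)
g()   # returns at once, so the oracle's verdict "loops" was wrong
```

Whatever the oracle answers, `g` does the opposite, so no total, always-correct `halts` function can exist: this is (a much-compressed form of) the limit Turing established in the 1930s.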

pages: 170 words: 49,193

The People vs Tech: How the Internet Is Killing Democracy (And How We Save It)
by Jamie Bartlett
Published 4 Apr 2018

The founding myth for social media is that they are the heirs to the ‘hacker culture’ – Facebook’s HQ address is 1, Hacker Way – which ties them to rule-breakers like 1980s phone phreaker Kevin Mitnick, the bureaucracy-hating computer lovers of the Homebrew Club scene and further back to maths geniuses like Alan Turing or Ada Lovelace. But Google, Snapchat, Twitter, Instagram, Facebook and the rest have long ceased to be simply tech firms. They are also advertising companies. Around 90 per cent of Facebook and Google’s revenue comes from selling adverts. The basis of practically the entire business of social media is the provision of free services in exchange for data, which the companies can then use to target us with adverts.* This suggests a very different, and far less glamorous, lineage: a decades-long struggle by suited ad men and psychologists to uncover the mysteries of human decision-making and locate the ‘buy!’

pages: 196 words: 54,339

Team Human
by Douglas Rushkoff
Published 22 Jan 2019

Either we enhance ourselves with chips, nanotechnology, or genetic engineering Future of Life Institute, “Beneficial AI 2017,” https://futureoflife.org/bai-2017/. to presume that our reality is itself a computer simulation Clara Moskowitz, “Are We Living in a Computer Simulation?” Scientific American, April 7, 2016. The famous “Turing test” for computer consciousness Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950). 58. The human mind is not computational Andrew Smart, Beyond Zero and One: Machines, Psychedelics and Consciousness (New York: OR Books, 2009). consciousness is based on totally noncomputable quantum states in the tiniest structures of the brain Roger Penrose and Stuart Hameroff, “Consciousness in the universe: A review of the ‘Orch OR’ theory,” Physics of Life Review 11, no. 1 (March 2014).

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

The Hitchhiker’s Guide to the Galaxy was originally a radio comedy broadcast (1978) before giving birth to different adaptations, including comics, novels, TV series, and plays. 2. Babelfish is also the name of a machine translation system that was very popular on the web in the late 1990s. 3. Alan Turing was a British mathematician, logician, and computer scientist. He played a major role in the development of computer science, and his life has recently been popularized in the movie The Imitation Game (2014). 2 The Trouble with Translation Before addressing machine translation, it is important to investigate the notion of translation in itself.

pages: 205 words: 18,208

The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?
by David Brin
Published 1 Jan 1998

In those days, long-distance call routing was a laborious task of negotiation, planned well in advance by human operators arranging connections from one zone to the next. But this drudgery might be avoided in a dispersed computer network if the messages themselves could navigate, finding their own way from node to node, carrying destination information in their lead bits like the address on the front of an envelope. Early theoretical work by Alan Turing and John Von Neumann hinted this to be possible by allowing each part of a network to guess the best way to route a message past any damaged area and eventually reach its goal. In theory, such a system might keep operating even when others lay in tatters. In retrospect, the advantages of Baranʼs insight seem obvious.
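The hop-by-hop detouring described above can be sketched with a breadth-first search over a network from which "damaged" nodes are excluded. This is a simplification: real packet routing is distributed across the nodes rather than computed centrally, and the graph and function names here are illustrative.

```python
from collections import deque

def route(network, src, dst, failed=frozenset()):
    """Find a node-to-node path from src to dst, detouring around
    any failed nodes -- a centralised sketch of adaptive routing."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in network.get(node, []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # destination unreachable

net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
assert route(net, "A", "D") == ["A", "B", "D"]            # normal path
assert route(net, "A", "D", failed={"B"}) == ["A", "C", "D"]  # routes around damage
```

Knock out node B and the message still arrives via C; only when every path is severed does delivery fail, which is the resilience property Baran was after.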

So much attention has been paid to the length of encryption keys that only a few experts seem to recall that keys are only as good as the algorithmic “locks” they are designed to open. These are the software routines that a computer program uses to unshuffle a message. Several once-vaunted algorithms have met their downfall over the years since Alan Turing inspired the breaking of the German Enigma code, during World War II. For example, the random number generator you use may be flawed in some way that your opponent can predict. “Smart” credit cards were recently shown to have an inherent and potentially fatal mathematical fault that was completely unanticipated by the designers.
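The flawed-generator point is easy to demonstrate. A linear congruential generator (the constants below are the well-known glibc `rand` parameters, used here only for illustration) leaks its entire internal state in each output, so an attacker who sees one value can reproduce the whole future stream, however long the nominal key:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """A linear congruential generator -- fine for simulations,
    catastrophic as a source of cryptographic keys."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
observed = next(gen)          # attacker observes a single raw output...
attacker = lcg(seed=observed) # ...which IS the state: re-seed and replay
assert [next(gen) for _ in range(5)] == [next(attacker) for _ in range(5)]
```

No amount of key length rescues a construction like this: the "lock" itself is broken, which is exactly the distinction between key size and algorithm strength that the passage draws.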

pages: 479 words: 144,453

Homo Deus: A Brief History of Tomorrow
by Yuval Noah Harari
Published 1 Mar 2015

If you cannot make up your mind, or if you make a mistake, the computer has passed the Turing Test, and we should treat it as if it really has a mind. However, that won’t really be a proof, of course. Acknowledging the existence of other minds is merely a social and legal convention. The Turing Test was invented in 1950 by the British mathematician Alan Turing, one of the fathers of the computer age. Turing was also a gay man in a period when homosexuality was illegal in Britain. In 1952 he was convicted of committing homosexual acts and forced to undergo chemical castration. Two years later he committed suicide. The Turing Test is simply a replication of a mundane test every gay man had to undergo in 1950 Britain: can you pass for a straight man?

The most interesting emerging religion is Dataism, which venerates neither gods nor man – it worships data. 11 The Data Religion Dataism says that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing.1 This may strike you as some eccentric fringe notion, but in fact it has already conquered most of the scientific establishment. Dataism was born from the explosive confluence of two scientific tidal waves. In the 150 years since Charles Darwin published On the Origin of Species, the life sciences have come to see organisms as biochemical algorithms. Simultaneously, in the eight decades since Alan Turing formulated the idea of a Turing Machine, computer scientists have learned to engineer increasingly sophisticated electronic algorithms. Dataism puts the two together, pointing out that exactly the same mathematical laws apply to both biochemical and electronic algorithms. Dataism thereby collapses the barrier between animals and machines, and expects electronic algorithms to eventually decipher and outperform biochemical algorithms.

pages: 559 words: 157,112

Dealers of Lightning
by Michael A. Hiltzik
Published 27 Apr 2000

Then one day Tim Mott showed up for a job interview. Mott was a displaced Briton with a computer science degree from Manchester University. This was a place with a much older claim to computing distinction than Palo Alto’s, for it was at Manchester that the world’s first electronic stored-program computer, based on the concepts of Alan Turing, had been built in 1948. After completing his studies Mott had relocated from Manchester to Oberlin College in Ohio, where he had spent a couple of years teaching math and helping the school set up its computer department. He then moved to Boston to enroll in business school. What brought him to Newton’s office was a tip that Ginn had a part-time opening that might tide him over until the school year started.

Reading, Mass.: ACM Press, 1988. Hafner, Katie, and John Markoff. Cyberpunk: Outlaws and Hackers on the Computer Frontier. New York: Simon & Schuster, 1992. Hafner, Katie, and Matthew Lyon. Where Wizards Stay up Late: The Origins of the Internet. New York: Simon & Schuster, 1996. Hodges, Andrew. Alan Turing: The Enigma. New York: Simon & Schuster, 1983. Jackson, Tim. Inside Intel. New York: Dutton, 1997. Jacobson, Gary, and John Hillkirk. Xerox: American Samurai. New York: Macmillan, 1986. Kearns, David T., and David A. Nadler. Prophets in the Dark. New York: HarperBusiness, 1992. Kidder, Tracy.

What We Cannot Know: Explorations at the Edge of Knowledge
by Marcus Du Sautoy
Published 18 May 2016

By continually making lots of mini-observations, trying to catch the uranium in the act of emitting radiation, the observations can freeze the uranium and stop it decaying. It’s the quantum version of the old adage that a watched pot never boils, but now the pot is full of uranium. It was the code-cracking mathematician Alan Turing who first realized that continually observing an unstable particle could somehow freeze it and stop it evolving. The phenomenon became known as the quantum Zeno effect, after the Greek philosopher who believed that because instantaneous snapshots of an arrow in flight show no movement, the arrow cannot be moving at all.

A belief in free will is one of the things that I cling to because I believe it marks me out as different from an app on my phone, and I think this is why Haynes’s experiment left me with a deep sense of unease. Perhaps my mind, too, is just the expression of a sophisticated app at the mercy of the brain’s biological algorithm. The mathematician Alan Turing was one of the first to question whether machines like my smartphone could ever think intelligently. He thought a good test of intelligence was to ask whether, if you were communicating with both a person and a computer, you could distinguish which was which. It was this test, now known as the Turing test, that I was putting Cleverbot through at the beginning of this Edge.

pages: 903 words: 235,753

The Stack: On Software and Sovereignty
by Benjamin H. Bratton
Published 19 Feb 2016

Later, the formalization of logic within the philosophy of mathematics (from Pierre-Simon Laplace, to Gottlob Frege, Georg Cantor, David Hilbert, and so many others) helped to introduce, inform, and ultimately disprove a version of the Enlightenment as the expression of universal deterministic processes (of both thought and physics). In 1936, with his now-famous paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” a very young Alan Turing at once introduced the theoretical basis of modern computing and demonstrated the limits of what could and could not ever be calculated and computed by a universal technology. Turing envisioned his famous “machine” according to the tools of his time to involve an infinite amount of “tape” divided into cells that can store symbols, moved along a stationary read-write “head” that can alter those symbols, a “state register” that can map the current arrangement of symbols along the tape, and a “table” of instructions that tells the machine to rewrite or erase the symbol and to move the “head,” assuming a new state for the “register” to map.
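Bratton's inventory of parts — tape, read-write head, state register, table of instructions — maps almost directly onto code. A minimal sketch in Python (the `flip` table and the `_` blank symbol are illustrative choices, not Turing's own notation):

```python
# Minimal Turing machine simulator: a tape of cells, a read-write head,
# a state register, and a table of instructions, as described above.
def run_turing_machine(table, tape, state="start", steps=100):
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")          # unwritten cells read as blank
        write, move, state = table[(state, symbol)]
        tape[head] = write                    # rewrite or erase the symbol
        head += {"R": 1, "L": -1}[move]       # move the head
    return "".join(tape[i] for i in sorted(tape))

# A table that flips every bit on the tape, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # -> 0100_
```

Different instruction tables give different machines; Turing's universal machine is then a single table able to read any other table off the tape and imitate it.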

The photo I describe is from the same series as that used on the cover of the posthumous collection of Deleuze's writings, Desert Islands and Other Texts, 1953–1974 (Cambridge, MA: MIT Press, 2004). Readers may reference the image at this book's companion website, thestack.org. 5.  Originally conceived in 1936 by twenty-four-year-old Alan Turing and called an “a-machine” (for “automatic machine”), it describes a hypothetical universal computer, which, given enough time and energy, would be capable of calculating any “computable” problem. In that paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, Ser. 2 42 (1937), Turing demonstrates the range of problems that in fact are not computable.

I believe that Bruce Sterling coined the term data haven in his 1989 novel, Islands in the Net, and Neal Stephenson developed the notion closer to the normalization of an emergent political geography in Snow Crash (1992), in which characters pop in and out of passport-granting microstates, not bound to specific lands but instead distributed on street corners like 7–11s (the protagonist frequents one of these known as Mr. Lee's Hong Kong). Later in Stephenson's sprawling Cryptonomicon, the transhistorical plot stretches from Alan Turing's war years to the present day and locates a data haven in the fictional country of Kinakuta, located between Borneo and the Philippines (domain .kk). In the real world, HavenCo operated from the self-declared sovereignty of the oil platforms of Sealand, while Freenet, a distributed encrypted network, tries to support a secured flow of information over public and private lines.

pages: 189 words: 57,632

Content: Selected Essays on Technology, Creativity, Copyright, and the Future of the Future
by Cory Doctorow
Published 15 Sep 2008

I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions: most prominently: are you still you when you've been restored from backup? The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people.

pages: 167 words: 57,175

And Finally
by Henry Marsh

After my diagnosis I inevitably turned to the Internet to learn about chemical castration. When I was a medical student, I remember a retired GP with prostate cancer coming to talk to us about the awful effects of the treatment. The drug used then was stilboestrol (pronounced stilbeastrol), the same beastly drug that Alan Turing was given in lieu of being sent to prison for homosexuality. It may well have contributed to his suicide. There have been recent reports of the drug being used surreptitiously by wives in China to bring their erring husbands to heel – apparently you can buy it from vets. So by blocking testosterone production the tumour will regress for a while, although, without further treatment, it always recurs sooner or later.

pages: 219 words: 61,334

Brit-Myth: Who Do the British Think They Are?
by Chris Rojek
Published 15 Feb 2008

Cromwell dared to participate in a revolution against the Crown and establish himself as the protector of the nation. Respect for dissent and nonconformity echoes the bloody-mindedness of the British. Further evidence of the strength of the tradition of nonconformity in Britain can be found if one looks beyond the top ten in the BBC poll. Alan Turing, the mathematician, who played a central part in breaking the Nazi Enigma code during World War Two, but was ostracized as a homosexual and committed suicide, is number 21. Emmeline Pankhurst, the leading figure in the Suffragette movement, is 27th. David Bowie, who since the 1960s has challenged artistic and sexual codes, is number 29.

pages: 219 words: 63,495

50 Future Ideas You Really Need to Know
by Richard Watson
Published 5 Nov 2013

While the idea of artificial intelligence (AI) goes back to the mid-50s, Isaac Asimov was writing about robot intelligence in 1942 (the word “robot” comes from a Czech word often translated as “drudgery”). A generally accepted test for artificial machine intelligence, the Turing test, also dates back to the 1950s, when the British mathematician Alan Turing suggested that we would have AI when it was possible for someone to talk to a machine without realizing it was a machine. The Turing test is problematic on some levels, though. First, a small child is generally intelligent, but most would probably fail the test. Second, if something artificial were to develop consciousness, why would it automatically let us know?

pages: 209 words: 63,649

The Purpose Economy: How Your Desire for Impact, Personal Growth and Community Is Changing the World
by Aaron Hurst
Published 31 Aug 2013

Just as the farmers of the Agrarian Economy made use of the earth to grow crops and raise livestock, the industrialists extracted raw materials for producing energy and fueling a new breed of powerful machines. The expertise developed in building increasingly sophisticated machines was key to the rise of the Information Economy. And though the computer was conceived by a mathematician, Alan Turing, it was built and commercialized by engineers. Engineers also pioneered the series of new technologies that formed the infrastructure of the Information Economy, culminating with the introduction of the Internet. Of course, information, as well as the need to disperse and manage it, wasn’t new either: the Information Economy has been around since the first teacher.

pages: 230 words: 61,702

The Internet of Us: Knowing More and Understanding Less in the Age of Big Data
by Michael P. Lynch
Published 21 Mar 2016

They independently post and repost, tweet and retweet about current events—all using expanding databases of information gleaned from the Internet. They can respond to emails. They are often programmed to tweet in patterns that mimic awake/sleep cycles. In one famous case, a well-known Brazilian journalist—allegedly with more online influence than Oprah—was revealed to be a bot.13 Alan Turing claimed sixty years ago that if a machine could durably fool humans into thinking it was human, then we had as much reason to think it was thinking as we have to think other human beings are thinking. By some standards, bots might seem to be passing this test. Even if we don’t think they are thinking (and I don’t), the use of bots is incredibly disturbing.

Demystifying Smart Cities
by Anders Lisdorf

In modern times, intelligent machines have been a mainstay of popular science fiction, and this has driven the thinking about how computers could and should work. There is no single accepted definition of artificial intelligence, but the Turing test, which is a thought experiment first presented by the British mathematician Alan Turing in 1950, has become an agreed standard criterion for determining artificial intelligence in computers. The test aims to find out if a machine can exhibit intelligent behavior equivalent to or indistinguishable from a human. The Turing test may be familiar from the 2014 film about Turing, The Imitation Game.

pages: 236 words: 62,158

Marx at the Arcade: Consoles, Controllers, and Class Struggle
by Jamie Woodcock
Published 17 Jun 2019

In 1947, a patent was filed for a “cathode ray tube amusement device,” which, although it does not sound that fun, connected to an oscilloscope display for players to shoot a gun at.24 In 1950, Claude Shannon published a paper about designing a computer program to play chess. He noted that “although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance.”25 In the same year, Shannon and Alan Turing separately created programs that could play chess. However, this did not lead to the widespread play of chess videogames right away. These were not videogames in the modern sense of the term. They were early curiosities and experiments, technical demonstrations rather than something fun to play around with.

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

A Leader’s Guide to Reimagining Process Five Steps to Getting Started 8. Extending Human + Machine Collaboration Eight New Fusion Skills for an AI Workplace Conclusion Creating Your Future in the Human + Machine Era Postscript Notes Index Acknowledgments About the Authors Those who can imagine anything, can create the impossible. —Alan Turing See, the world is full of things more powerful than us. But if you know how to catch a ride, you can go places. —Neal Stephenson, Snow Crash INTRODUCTION What’s Our Role in the Age of AI? In one corner of the BMW assembly plant in Dingolfing, Germany, a worker and robot are collaborating to build a transmission.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

It’s sometimes said that Moore’s Law will grind to a halt in the next few years, mostly because it will become physically impossible to cram any more transistors into the same microchip, and because the economic efficiencies enjoyed in the last half-century are set to diminish. There is certainly some evidence of a slowdown, although Moore’s Law has been given its last rites countless times in the past.51 However, it is probably wrong to assume that the current computing paradigm—the integration of transistors onto 2D wafers of silicon (the integrated circuit)—is the final computing paradigm, and cannot itself be improved upon by some other method. History, market forces, and common sense all suggest otherwise. Before the integrated circuit, computers were built using individual transistors. Before that, in the days of Alan Turing, they relied on vacuum tubes, relays, and electromechanics. The story of computing is the story of a succession of increasingly powerful methods of processing information, each developing exponentially, reaching its physical limitations, and then being replaced by something better. Exponential growth in computing processing power stretches back to the seventeenth century and ‘the mechanical devices of Pascal’.52 Nothing is inevitable, but Moore’s Law did not begin with the integrated circuit and it is unlikely to end with it either.

There is no evidence, Hart suggested, that deviation from ‘accepted sexual morality’ by adults in private is ‘something which, like treason, threatens the existence of society’: ‘As a proposition of fact it is entitled to no more respect than the Emperor Justinian’s statement that homosexuality was the cause of earthquakes.’33 For Hart, our personal choices, especially those made in private, have no bearing on whether we are loyal citizens. (Hart himself had no problem answering Churchill’s call to service, having worked in military intelligence for most of the Second World War. Nor had the great mathematician and codebreaker Alan Turing, who also worked at Bletchley Park, and who was the subject of criminal prosecution for homosexual acts.) So how would the Hart–Devlin debate play out today? We might firstly argue that VR is actually pretty different from pure fantasy. Its realism and sensual authenticity bring it closer to actually doing something than merely thinking about it. The trouble with this argument is that if you believe on principle (as Mill and Hart did) that mere immorality should never be made the subject of coercion, then to say something is very immoral, as opposed to merely quite immoral, doesn’t take you much further.

pages: 222 words: 74,587

Paper Machines: About Cards & Catalogs, 1548-1929
by Markus Krajewski and Peter Krapp
Published 18 Aug 2011

And since the terminology demands situating the card index in a media archeology that examines the universality of paper machines, the questions guiding this study follow the development of (preelectronic) data processing. What makes this promising and supposed jack-of-all-trades a universal machine? As Alan Turing proved only years later, these machines merely need (1) a (theoretically infinite) partitioned paper tape, (2) a writing and reading head, and (3) an exact procedure for the writing and reading head to move over the paper segments.2 [Figure 1.1: Fortschritt GmbH: Karteien können alles! (“Card indexes can do anything!”), Zeitschrift für Organisation und moderne Betriebsführung 3 (23): 6 (1929)] This book seeks to map the three basic logical components of every computer onto the card catalog as a “paper machine,” analyzing its data processing and interfaces that may justify the claim, “Card catalogs can do anything!”

pages: 238 words: 46

When Things Start to Think
by Neil A. Gershenfeld
Published 15 Feb 1999

Babbage's frustration was echoed by a major computer company years later in a project that set philosophers to work on coming up with a specification for the theory of knowledge representation, an ontological standard, to solve the problem once and for all. This effort was as unsuccessful, and interesting, as Babbage's engines. Babbage's notion of one computer being able to compute anything was picked up by the British mathematician Alan Turing. He was working on the "Entscheidungsproblem," the decision problem that David Hilbert posed in 1928, which asked whether a mathematical procedure could exist that could decide the validity of any other mathematical statement. Few questions have greater implications.

pages: 391 words: 71,600

Hit Refresh: The Quest to Rediscover Microsoft's Soul and Imagine a Better Future for Everyone
by Satya Nadella , Greg Shaw and Jill Tracie Nichols
Published 25 Sep 2017

In other words, how can I solve a problem that has limitless possibilities in a way that is fast and good but not always optimal? Do we solve this as best we can right now, or work forever for the best solution? Theoretical computer science really grabbed me because it showed the limits to what today’s computers can do. It led me to become fascinated by mathematicians and computer scientists John von Neumann and Alan Turing, and by quantum computing, which I will write about later as we look ahead to artificial intelligence and machine learning. And, if you think about it, this was great training for a CEO—nimbly managing within constraints. I completed my master’s in computer science at Wisconsin and even managed to work for what Microsoft would now call an independent software vendor (ISV).

pages: 245 words: 64,288

Robots Will Steal Your Job, But That's OK: How to Survive the Economic Collapse and Be Happy
by Pistono, Federico
Published 14 Oct 2012

Whether you buy into the singularity argument or not does not matter. The data is clear, facts are facts, and we only have to look a few years into the future to reach already alarming conclusions. The Turing Test is a thought experiment proposed in 1950 by the brilliant English mathematician and father of computers, Alan Turing. Imagine you enter a room, where a computer sits on top of a desk, waiting for you. You notice there is a chat window, and two conversations are open. As you begin to type messages down, you are told you are in fact talking to one person and one machine. You can take as much time as you want to find out who is who.

pages: 239 words: 68,598

The Vanishing Face of Gaia: A Final Warning
by James E. Lovelock
Published 1 Jan 2009

Even Gaia theory was discovered in the fertile environment of the Jet Propulsion Laboratory in California, and the one biologist who understood it and developed it further was that eminent American scientist Lynn Margulis. Of course, advances in science and technology emerged in Europe in the Middle Ages and moved its centre of excellence among the nations. In computer technology and theory Babbage, Ada Lovelace and that most tragic of men, Alan Turing, all did the groundwork here in the UK. Turing was the one who, with his group, built the first serious computing device and used it to deconvolute the otherwise unbreakable code of our wartime enemies. But that was then. Now America is at the centre of science. I make this paean of praise to the United States of America because I am puzzled that, despite its scientific excellence, this of all nations was among the slowest to perceive the threat of global heating.

pages: 242 words: 68,019

Why Information Grows: The Evolution of Order, From Atoms to Economies
by Cesar Hidalgo
Published 1 Jun 2015

Mathematicians continued to formalize the idea of information, but they framed their efforts in the context of communication technologies, transcending the efforts to decipher intercepted messages. The mathematicians who triumphed became known as the world’s first information theorists or cyberneticists. These pioneers included Claude Shannon, Warren Weaver, Alan Turing, and Norbert Wiener. In the 1950s and 1960s the idea of information took science by storm. Information was welcomed in all academic fields as a powerful concept that cut across scientific boundaries. Information was neither microscopic nor macroscopic.3 It could be inscribed sparsely on clay tablets or packed densely in a strand of DNA.

pages: 243 words: 65,374

How We Got to Now: Six Innovations That Made the Modern World
by Steven Johnson
Published 28 Sep 2014

But the ultimate goal was a pure signal, some kind of perfect representation of the voice that wouldn’t degrade as it wound its way through the telephone network. Interestingly, the path that ultimately led to that goal began with a different objective: not keeping our voices pure, but keeping them secret. During World War II, the legendary mathematician Alan Turing and Bell Labs’ A. B. Clark collaborated on a secure communications line, code-named SIGSALY, that converted the sound waves of human speech into mathematical expressions. SIGSALY recorded the sound wave twenty thousand times a second, capturing the amplitude and frequency of the wave at that moment.
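The sampling step described here, measuring a wave's amplitude at a fixed rate, is the core of any digitization scheme. A toy sketch (illustrative only: it mimics the 20,000-samples-per-second rate mentioned above, not SIGSALY's actual vocoder design):

```python
import math

# Toy illustration of sampling: measure a sound wave's amplitude at a fixed
# rate, turning a continuous signal into a stream of numbers.
RATE = 20_000  # samples per second, the rate quoted for SIGSALY above

def sample(wave, seconds, rate=RATE):
    """Evaluate `wave` (a function of time in seconds) `rate` times per second."""
    n = int(seconds * rate)
    return [wave(t / rate) for t in range(n)]

# A 440 Hz tone sampled for one millisecond -> 20 amplitude readings.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
samples = sample(tone, 0.001)
print(len(samples))  # -> 20
```

Once the wave is a list of numbers, it can be encrypted, transmitted, and reconstructed — which is exactly why a secrecy project ended up pointing toward digital audio.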

pages: 224 words: 64,156

You Are Not a Gadget
by Jaron Lanier
Published 12 Jan 2010

So for you, it will be important to redesign human institutions like art, the economy, and the law to reinforce the perception that information is alive. You demand that the rest of us live in your new conception of a state religion. You need us to deify information to reinforce your faith. The Apple Falls Again It’s a mistake with a remarkable origin. Alan Turing articulated it, just before his suicide. Turing’s suicide is a touchy subject in computer science circles. There’s an aversion to talking about it much, because we don’t want our founding father to seem like a tabloid celebrity, and we don’t want his memory trivialized by the sensational aspects of his death.

pages: 239 words: 70,206

Data-Ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else
by Steve Lohr
Published 10 Mar 2015

Vision has rarely been the problem in computing. The basic ideas in the field of artificial intelligence—the technology that makes big data smart—date back to the 1950s and before. The term “artificial intelligence” was coined in 1955, and the theory of computer-simulated intelligence was set out in 1937. That was the year that Alan Turing, a British mathematician, computing pioneer, and famed wartime code breaker, published a paper in which he described what he called a “universal machine”—a theoretical computer. He started by demonstrating that a clerk, given the proper instructions and limitless supplies of paper and time, could solve any problem that an expert mathematician could answer.

pages: 245 words: 12,162

In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation
by William J. Cook
Published 1 Jan 2011

This issue came to the forefront in the early 1900s with David Hilbert’s Entscheidungsproblem, that asks, roughly, whether there exists an algorithm that can decide if any given statement is, or is not, provable from a set of axioms. The development of theory to handle such questions is a beautiful achievement of twentieth-century mathematics, with giants Kurt Gödel, Alonzo Church, and Alan Turing leading the way. The intuitive concept of an algorithm is that of a list of simple steps that together produce a solution to a problem. Euclid gave us an algorithm for greatest common divisors some 2,300 years ago, but at the time of Hilbert it was not clear how an algorithm should in general be defined.
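Euclid's greatest-common-divisor procedure mentioned above remains the standard first example of "a list of simple steps that together produce a solution"; in Python it fits in a few lines:

```python
def gcd(a, b):
    # Euclid's algorithm: the greatest common divisor is unchanged when the
    # pair (a, b) is replaced by (b, a mod b); it halts when the remainder is 0.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```

Each step is simple and mechanical, and the procedure is guaranteed to terminate — precisely the intuitive notion of an algorithm that Gödel, Church, and Turing later made formal.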

pages: 262 words: 66,800

Progress: Ten Reasons to Look Forward to the Future
by Johan Norberg
Published 31 Aug 2016

After the Second World War, many of the homosexuals who survived the Nazis’ concentration camps were actually re-arrested to serve out their terms of imprisonment, under the German ban from 1871. The West German government generally paid reparations to those who had spent time in the camps, but excluded homosexuals. In 1952, the British scientist and war hero Alan Turing, who broke the Nazi Enigma code, was arrested for ‘gross indecency’, and had to accept chemical castration as an alternative to prison. He committed suicide two years later. He was given a posthumous royal pardon in 2013. There have always been certain cultures that have tolerated homosexual acts, like the famous relationships between men and youths or slaves in Ancient Greece and Rome, though these patriarchal cultures had a taboo against sex between grown men.

Survival of the Friendliest: Understanding Our Origins and Rediscovering Our Common Humanity
by Brian Hare and Vanessa Woods
Published 13 Jul 2020

Chabris, J. J. Lee, D. Cesarini, D. J. Benjamin, D. I. Laibson, “The Fourth Law of Behavior Genetics,” Current Directions in Psychological Science 24, 304–12 (2015). 72. M. Lundstrom, “Moore’s Law Forever?” Science 299, 210–11 (2003). 73. R. Kurzweil, “The Law of Accelerating Returns,” in Alan Turing: Life and Legacy of a Great Thinker (New York: Springer, 2004), 381–416. 74. J. Dorrier, “Service Robots Will Now Assist Customers at Lowe’s Stores,” in Singularity Hub (2014). 75. J. J. Duderstadt, The Millennium Project (1997). 76. J. Glenn, The Millennium Project: State of the Future (Washington, D.C.: World Federation of U.N.

pages: 296 words: 66,815

The AI-First Company
by Ash Fontana
Published 4 May 2021

One group at Dartmouth College coined the term artificial intelligence in 1956; another at Cornell University created a perceptron algorithm that improved the ability of the earlier algorithms to mimic human neurons; and one at Stanford University joined these neurons together in an early, small version of an artificial neural network. The fifties also brought us the first consideration of how to use AI outside the lab, with the great Alan Turing coming up with a test for intelligence based on reaching a human-level quality of conversation. All the while, Pitts was studying frogs. Among his discoveries: the fact that different parts of a nervous system each carries out a degree of computation. The decade was foundational for AI theory, but the practice was just getting started.
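The perceptron mentioned here can be sketched in a few lines (a minimal, illustrative version of the mistake-driven update rule, not the Cornell group's original implementation):

```python
# A minimal perceptron: a weighted sum of inputs passed through a threshold,
# with the weights nudged after each mistake (the classic update rule).
def train_perceptron(data, epochs=10):
    n = len(data[0][0])
    w, b = [0] * n, 0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred          # +1, 0, or -1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical AND, a linearly separable function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

A single perceptron can only separate classes with a straight line, which is why joining many of them into networks — as the Stanford group did — was the next step.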

pages: 234 words: 67,589

Internet for the People: The Fight for Our Digital Future
by Ben Tarnoff
Published 13 Jun 2022

The cloud serves the applications and the data that make objects “smart,” and soaks up the data that such objects continuously emit, which in turn feeds the machine learning systems that make the objects “smart” in the first place. As a kind of networked intelligence, “smartness” belongs to a broader history of humans trying to make intelligent machines. An important figure in this history is the mathematician Alan Turing, who, in the 1930s, came up with the idea for a “universal machine.” Using a limited set of logical operations, this hypothetical device could “be used to compute any computable sequence,” Turing wrote. It could be programmed, in other words. Turing’s concept became the basis for the modern computer.

pages: 661 words: 187,613

The Language Instinct: How the Mind Creates Language
by Steven Pinker
Published 1 Jan 1994

Reifying thoughts as things in the head was a logical error, they said. A picture or family tree or number in the head would require a little man, a homunculus, to look at it. And what would be inside his head—even smaller pictures, with an even smaller man looking at them? But the argument was unsound. It took Alan Turing, the brilliant British mathematician and philosopher, to make the idea of a mental representation scientifically respectable. Turing described a hypothetical machine that could be said to engage in reasoning. In fact this simple device, named a Turing machine in his honor, is powerful enough to solve any problem that any computer, past, present, or future, can solve.

In fact, it is all too easy to give computers more credit at understanding than they deserve. Recently an annual competition was set up for the computer program that can best fool users into thinking that they are conversing with another human. The competition for the Loebner Prize was intended to implement a suggestion made by Alan Turing in a famous 1950 paper. He suggested that the philosophical question “Can machines think?” could best be answered in an imitation game, where a judge converses with a person over one terminal and with a computer programmed to imitate a person on another. If the judge cannot guess which is which, Turing suggested, there is no basis for denying that the computer can think.

pages: 661 words: 185,701

The Future of Money: How the Digital Revolution Is Transforming Currencies and Finance
by Eswar S. Prasad
Published 27 Sep 2021

Native American code talkers from the Cherokee and Navajo Nations, who developed special codes for transmitting messages that could not be deciphered by enemy forces, played a key role in American military successes in World Wars I and II. On the flip side, the Polish code breakers—a group of Polish mathematicians who, in collaboration with Alan Turing and other code breakers at Bletchley Park, cracked the German Enigma code—are credited with an important role in engineering a quicker end to World War II. These examples highlight the never-ending tussle between cryptography and cryptanalysis, the science of deciphering or “breaking” codes. While Bitcoin is referred to as a cryptocurrency, it does not involve encryption in this traditional sense.

Cryptography. For an engaging tour of cryptography, ranging from the ancient past to modern quantum cryptography, see Singh (1999). For a more comprehensive overview, see Kahn (1996). Some historians argue that the Polish code breakers did not receive sufficient credit for their accomplishments. See, for instance, Craig Bowman, “Polish Codebreakers Cracked Enigma in 1932, before Alan Turing,” War History Online, May 30, 2016, https://www.warhistoryonline.com/featured/polish-mathematicians-role-in-cracking-germans-wwii-codesystem.html.

Data Integrity. In principle, any hash function that has more inputs than outputs will necessarily incur collisions. This is the inevitable result of mapping a large number of inputs into a still large but smaller number of outputs.
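The pigeonhole argument in the note above is easy to demonstrate: truncate a real hash to a single byte, so there are only 256 possible outputs, and collisions appear quickly. (The truncation is purely illustrative; systems like Bitcoin use the full SHA-256 digest, where collisions are believed to be computationally infeasible to find.)

```python
import hashlib

# A deliberately tiny hash: keep only the first byte of SHA-256, so there
# are just 256 possible outputs and the pigeonhole principle forces collisions.
def tiny_hash(msg: bytes) -> int:
    return hashlib.sha256(msg).digest()[0]

def find_collision(limit=1000):
    """Hash distinct inputs until two of them produce the same output."""
    seen = {}
    for i in range(limit):
        msg = str(i).encode()
        h = tiny_hash(msg)
        if h in seen:
            return seen[h], msg  # two distinct inputs, same hash value
        seen[h] = msg
    return None

a, b = find_collision()
print(a, b, tiny_hash(a))
```

With 256 possible outputs, 257 distinct inputs guarantee a collision; by the birthday bound, one typically turns up after only about twenty.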

pages: 619 words: 177,548

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
by Daron Acemoglu and Simon Johnson
Published 15 May 2023

Focusing on MU would guide us toward a more socially beneficial trajectory, especially for workers and citizens. Before developing this case, however, we should understand where the current focus on machine intelligence comes from, which takes us to a vision articulated by the British mathematician Alan Turing. Turing was fascinated by machine capabilities throughout his career. In 1936 he made a fundamental contribution to the question of what it means for something to be “computable.” Kurt Gödel and Alonzo Church had recently tackled the question of how to define the set of computable functions, meaning the set of functions whose values can be calculated by an algorithm.

Dockworkers loading one bag at a time at the Royal Albert Docks, London, 1885. 20. Dock work today: one worker, one crane, many containers. 21. An IBM computer, 1959. 22. Robots at a Porsche plant, 2022. A worker watches, wearing gloves. 23. A reconstruction of the Bombe, designed by Alan Turing to speed up the decryption of German signals during World War II. 24. MIT math professor Norbert Wiener brilliantly warned in 1949 about a new “industrial revolution of unmitigated cruelty.” 25. An imaginative drawing of Jacques de Vaucanson’s digesting duck. 26. Human-complementary technology: Douglas Engelbart’s mouse to control a computer, introduced at the “Mother of All Demos” in 1968. 27.

pages: 255 words: 78,207

Web Scraping With Python: Collecting Data From the Modern Web
by Ryan Mitchell
Published 14 Jun 2015

Reading CAPTCHAs and Training Tesseract Although the word “CAPTCHA” is familiar to most, far fewer people know what it stands for: Completely Automated Public Turing test to tell Computers and Humans Apart. Its unwieldy acronym hints at its rather unwieldy role in obstructing otherwise perfectly usable web interfaces, as both humans and nonhuman robots often struggle to solve CAPTCHA tests. The Turing test was first described by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” In the paper, he described a setup in which a human being could communicate with both humans and artificial intelligence programs through a computer terminal. If the human was unable to distinguish the humans from the AI programs during a casual conversation, the AI programs would be considered to have passed the Turing test, and the artificial intelligence, Turing reasoned, would be genuinely “thinking” for all intents and purposes.

pages: 254 words: 76,064

Whiplash: How to Survive Our Faster Future
by Joi Ito and Jeff Howe
Published 6 Dec 2016

The relatively primitive mechanical devices of early cryptography, like the cipher disk Alberti used to track his shifting alphabets, grew increasingly complex, culminating in advanced cryptographic machines like the German Enigma machine of World War II, whose theoretically unbreakable ciphers were betrayed by a simple design flaw—no letter encoded by an Enigma would ever be encoded as itself. Alan Turing and Gordon Welchman led a team at Bletchley Park, England, that created an electromechanical device to help discover the shifting keys to the Enigma codes. Called the Bombe, the device could eliminate thousands of possible combinations, leaving a much smaller number of potential ciphers for the human cryptographers at Bletchley to try.30 When the Nazis replaced Enigma with Lorenz—a secure means of encoding teleprinter messages for radio transmission, which the British knew as “Tunny”—Tommy Flowers, a British engineer, countered with Colossus, the first programmable electronic digital computer.

pages: 268 words: 75,850

The Formula: How Algorithms Solve All Our Problems-And Create More
by Luke Dormehl
Published 4 Nov 2014

Bowden offers the view that: It seems most improbable that a machine will ever be able to give an answer to a general question of the type: “Is this picture likely to have been painted by Vermeer, or could van Meegeren have done it?” It will be recalled that this question was answered confidently (though incorrectly) by the art critics over a period of several years.21 To Bowden, the evidence is clear, straightforward and damning. If Alan Turing suggested that the benchmark of an intelligent computer would be one capable of replicating the intelligent actions of a man, what hope would a machine have of resolving a problem that even man was unable to make an intelligent judgment on? A cooling fan’s chance in hell, surely. In recent years, however, this view has been challenged.

Raw Data Is an Oxymoron
by Lisa Gitelman
Published 25 Jan 2013

“If I have nothing else to do, then I write the whole day; in the mornings from 8:30 am until midday, then I briefly go walking with my dog, then I have time again in the afternoon from 2 pm until 4 pm, then it’s the dog’s turn again. . . . Yes, then I write again in the evenings, as a rule, until around 11 pm. At 11 pm I mostly lie in bed and read a few more things.” Luhmann, Archimedes und wir, 145; my emphasis. 29. Ibid.; also Andrew Hodges, Alan Turing: The Enigma, vol. 1 of Computerkultur, 2nd ed. (Wien, New York: Springer-Verlag, 1994), 115ff. 30. See Vannevar Bush, “As We May Think,” The Atlantic Monthly 15, no. 176 (1945): 101–108. 31. For one such attempt to expand upon Bielefeld 1951ff. and bring it into electronic form, see synapsen, http://www.verzetteln.de/synapsen. 32.

pages: 269 words: 74,955

The Crash Detectives: Investigating the World's Most Mysterious Air Disasters
by Christine Negroni
Published 26 Sep 2016

Again, the autopsies were performed by Dr. Domenici, who confirmed that a rapid decompression had taken place. The question was why. The British government pulled the airworthiness certificate of the Comet 1. This time the planes were on the ground for good. Into this puzzle came one of the era’s most provocative thinkers, Alan Turing, who broke Germany’s Enigma code and was the subject of the 2014 movie The Imitation Game. Turing developed the Automatic Computing Engine, a machine that automated complex equations so that they could be completed faster than humans could solve them. Parts of G-ALYP retrieved from the sea were subjected to exhaustive testing and comparison to an undamaged Comet, and for that, Turing’s Pilot ACE computer was used to run the many calculations required.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal , Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

At least insofar as a computer was concerned, she wrote, it “had no pretensions to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”7 Despite all the hype and the baggage that comes with the notion of AI, what Alan Turing later called “Lady Lovelace’s Objection” still stands. Computers still cannot think, so thought isn’t about to become cheap. However, what will be cheap is something so prevalent that, like arithmetic, you are probably not even aware of how ubiquitous it is and how much a drop in its price could affect our lives and economy.

pages: 229 words: 72,431

Shadow Work: The Unpaid, Unseen Jobs That Fill Your Day
by Craig Lambert
Published 30 Apr 2015

“Live chat” is often the best one can do for online customer service. This means a real-time typed interchange with an allegedly live customer-service representative. I say “allegedly” because live chats inevitably call to mind the Turing test, a test of a computer’s ability to “think” that British mathematician and computer scientist Alan Turing outlined in a 1950 paper. The common understanding of the Turing test is this: Using a text-only channel like a keyboard and screen, after five minutes of questioning, can someone tell whether a computer or a human is on the other end? If a robot passes as human, it has passed the Turing test.

The Pattern Seekers: How Autism Drives Human Invention
by Simon Baron-Cohen
Published 14 Aug 2020

Classification: LCC RC553.A88 B3684 2020 | DDC 616.85/882—dc23 LC record available at https://lccn.loc.gov/2020019709 ISBNs: 978-1-5416-4714-5 (hardcover); 978-1-5416-4713-8 (ebook) Contents Cover Title Page Copyright Dedication Epigraph CHAPTER 1 Born Pattern Seekers CHAPTER 2 The Systemizing Mechanism CHAPTER 3 Five Types of Brain CHAPTER 4 The Mind of an Inventor CHAPTER 5 A Revolution in the Brain CHAPTER 6 System-Blindness: Why Monkeys Don’t Skateboard CHAPTER 7 The Battle of the Giants CHAPTER 8 Sex in the Valley CHAPTER 9 Nurturing the Inventors of the Future Appendix 1 Take the SQ and the EQ to find out your brain type Appendix 2 Take the AQ to find out how many autistic traits you have Acknowledgments About the Author Praise for The Pattern Seekers Notes and Further Reading Figure Notes and Credits In memory of BRIDGET LINDLEY (1959–2016) Who gave her love to our family In dedication to autistic people Sometimes it is the people no one can imagine anything of who do the things no one can imagine. —ALAN TURING The Imitation Game Chapter 1 Born Pattern Seekers Al didn’t talk until he was four years old. Even when he started talking, it was clear he was using language differently to most kids. His mind was different right from the start—he was less interested in people and more focused on spotting patterns, and he wanted explanations for everything he saw.

pages: 254 words: 75,897

Planes, Trains and Toilet Doors: 50 Places That Changed British Politics
by Matt Chorley
Published 8 Feb 2024

Sir David Maxwell Fyfe, who was Churchill’s hardline home secretary from 1951 to 1954, vowed to ‘rid England of this male vice . . . this plague’, telling the House of Commons in December 1953: ‘Homosexuals in general are exhibitionists and proselytisers and are a danger to others, especially the young, and so long as I hold the office of Home Secretary I shall give no countenance to the view that they should not be prevented from being such a danger.’ On his watch, the number of men jailed for homosexual acts soared to more than 1,000 a year. Alan Turing, the war hero who cracked the Enigma code, had been convicted of gross indecency in 1952. A year later the actor Sir John Gielgud was arrested by an undercover police officer in a public lavatory. Then it was Montagu’s turn. Aged 27 and still the youngest member of the Lords, he was charged along with his cousin Michael Pitt-Rivers, a 36-year-old Dorset farmer, and 30-year-old Wildeblood, with committing unnatural acts and gross indecency with two RAF men, Edward McNally and John Reynolds, who received immunity for testifying against them.

pages: 562 words: 201,502

Elon Musk
by Walter Isaacson
Published 11 Sep 2023

At the 2012 gathering, Musk met Demis Hassabis, a neuroscientist, video-game designer, and artificial intelligence researcher with a courteous manner that conceals a competitive mind. A chess prodigy at age four, he became the five-time champion of an international Mind Sports Olympiad that includes competition in chess, poker, Mastermind, and backgammon. In his modern London office is an original edition of Alan Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” which proposed an “imitation game” that would pit a human against a ChatGPT–like machine. If the responses of the two were indistinguishable, he wrote, then it would be reasonable to say that machines could “think.” Influenced by Turing’s argument, Hassabis cofounded a company called DeepMind that sought to design computer-based neural networks that could achieve artificial general intelligence.

Faced with a situation, the neural network chooses a path based on what humans have done in thousands of similar situations. It’s like the way humans learn to speak and drive and play chess and eat spaghetti and do almost everything else; we might be given a set of rules to follow, but mainly we pick up the skills by observing how other people do them. It was the approach to machine learning envisioned by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” Tesla had one of the world’s largest supercomputers to train neural networks. It was powered by graphics processing units (GPUs) made by the chipmaker Nvidia. Musk’s goal for 2023 was to transition to using Dojo, the supercomputer that Tesla was building from the ground up, to use video data to train the AI system.

pages: 312 words: 86,770

Endless Forms Most Beautiful: The New Science of Evo Devo
by Sean B. Carroll
Published 10 Apr 2005

The revelation of how these stripe-making switches work clarified a long-standing question in the study of pattern formation in biological structures. For several decades, mathematicians and computer scientists were drawn to the periodic patterns of body segmentation, zebra stripes, and seashell markings. Heavily influenced by a 1952 paper by the genius Alan Turing (a founder of computer science who helped crack the German Enigma code in World War II), “The Chemical Basis of Morphogenesis,” many theoreticians sought to explain how periodic patterns could be organized across entire large structures. While the math and models are beautiful, none of this theory has been borne out by the discoveries of the last twenty years.

pages: 267 words: 82,580

The Dark Net
by Jamie Bartlett
Published 20 Aug 2014

It’s about the size of a tennis court, packed with old computers, boxes full of modems, wires, cables and telephones (I later learn every computer they have is recycled or second-hand). A couple of worn sofas line the far wall and a large table in the middle houses more computers, food and a landline telephone. A huge spray-painting of Captain Crunch, the infamous telephone hacker from the 1970s, and Alan Turing, the genius British cryptographer, leaves little doubt about the group’s loyalties. There are a few people coding – two young men in one corner, and a slightly older man in a hoody sitting in front of three computer screens, smoking a cigarette. He’s deep in concentration. This must be Pablo, Amir’s chief collaborator.

pages: 291 words: 81,703

Average Is Over: Powering America Beyond the Age of the Great Stagnation
by Tyler Cowen
Published 11 Sep 2013

To understand intelligent machines and their future influence, we would do well to note Alexander Kronrod’s idea that “chess is the Drosophila of artificial intelligence.” In other words, looking at chess is one way to make sense of the broader picture, just as the fruit fly (the Drosophila) has helped us decipher human genetics. After World War II, computer science pioneers Alan Turing and Claude Shannon both saw that computers would one day play chess, and wrote seminal articles on how it might happen; Turing was brilliant enough to figure out how computers would play chess even before other scientists had figured out computers. Later, chess was picked out as a test case for the development of computer intelligence and given a big financial boost by the computing giant IBM for publicity reasons.

pages: 308 words: 84,713

The Glass Cage: Automation and Us
by Nicholas Carr
Published 28 Sep 2014

These scientists look at a particular product of the mind—a hiring decision, say, or an answer to a trivia question—and then program a computer to accomplish the same result in its own mindless way. The workings of Watson’s circuits bear little resemblance to the workings of the mind of a person playing Jeopardy!, but Watson can still post a higher score. In the 1930s, while working on his doctoral thesis, the British mathematician and computing pioneer Alan Turing came up with the idea of an “oracle machine.” It was a kind of computer that, applying a set of explicit rules to a store of data through “some unspecified means,” could answer questions that normally would require tacit human knowledge. Turing was curious to figure out “how far it is possible to eliminate intuition, and leave only ingenuity.”

pages: 791 words: 85,159

Social Life of Information
by John Seely Brown and Paul Duguid
Published 2 Feb 2000

See Campbell-Kelly and Aspray, 1996. 45. Wellman (1988) provides one of the few worthwhile studies of the effects of information technologies on social communities and networks. Chapter 2: Agents and Angels 1. Distinguishing a computer from a human is the essence of the famous Turing test, developed by mathematician Alan Turing (1963). He argued that if you couldn’t tell the difference then you could say the machine was intelligent. Shallow Red is not quite there yet. Indeed, the continuation of the exchange suggests that Shallow Red is still rather shallow (though pleasantly honest): What are VSRs? The botmaster has not provided me with a definition of “VSRs.” Thank you for asking.

The Ages of Globalization
by Jeffrey D. Sachs
Published 2 Jun 2020

Facebook, Google, and Amazon came out of nowhere to become, in a few short years, among the most powerful companies in the world. Smartphones are only a decade old, but they have already upended how we live. How did this revolution come about? The roots of the digital revolution can be traced to a remarkable paper by British genius Alan Turing, writing in 1936. Turing envisioned a new conceptual device, a universal computing machine—a Turing machine, as it became known—that could read an endless tape of 0s and 1s in order to calculate anything that could be calculated. Turing had conceptualized a general-purpose programmable computer before one had been invented.

pages: 304 words: 80,143

The Autonomous Revolution: Reclaiming the Future We’ve Sold to Machines
by William Davidow and Michael Malone
Published 18 Feb 2020

In 1914, the Spanish engineer Leonardo Torres y Quevedo demonstrated a mechanical device that could play simple king rook chess endgames.6 In 1921, the Czech author Karel Capek wrote R.U.R. (Rossum’s Universal Robots), which introduced the word robot to the world.7 Reading this prophetic play, written almost one hundred years ago, is a startling experience. Its disillusioned workers, displaced by robots, could equally well be members of today’s middle class. In 1950, Alan Turing, one of the early investigators of machine intelligence, proposed a simple test to determine whether a machine could “think.” Now known as the Turing Test, it is a protocol in which three terminals are set up in isolation from one another, two operated by humans and one by a computer. One of the humans asks the computer and the other human a series of questions.

pages: 227 words: 80,633

James Acaster's Classic Scrapes - the Hilarious Sunday Times Bestseller
by James Acaster
Published 4 Dec 2018

There could’ve been animals living in there and we wouldn’t have known, and not small animals – a family of stray Alsatians could’ve been thriving somewhere inside that dense undergrowth and we would’ve been none the wiser. It got so bad that one of our neighbours snitched us out to the landlord who promptly emailed us saying he would be popping round on Saturday to inspect the house ‘in case there’s anything you’d like to sort out beforehand’. It doesn’t take Alan Turing to decode that email – sort the garden out before the weekend or I’ll come down on you like a ton of bricks. We didn’t own a lawnmower or any shears, so I decided to ask the headteacher at the school I worked at if there was anything I could borrow from the school, and to my surprise he immediately suggested I borrow the electric strimmer for ‘as long as I needed it’ and, just like that, all our problems were solved.

pages: 284 words: 84,169

Talk on the Wild Side
by Lane Greene
Published 15 Dec 2018

For some reason, when we imagine thinking machines, we imagine them as heartless: it’s easier to imagine computers thinking than caring, so fiction has given us more memorable murderers, Terminators and HAL 9000s than it has automated pals. But how could we tell that a machine is thinking, rather than just responding to input according to instructions? What would a computer need to do to prove that it is doing something like what the mind does? Alan Turing, a British computer scientist, suggested a simple test, involving mastery of humankind’s most famously human trait: language. A computer can be said to pass what Turing called the “imitation game” only if it returns answers to written questions that fool a human examiner into thinking that the machine is human.

pages: 283 words: 81,376

The Doomsday Calculation: How an Equation That Predicts the Future Is Transforming Everything We Know About Life and the Universe
by William Poundstone
Published 3 Jun 2019

There is, however, one source of existential risk that has almost the air of inevitability: artificial intelligence. The idea that AI might be a hazard can be traced to I. J. (Irving John) Good. Born Isadore Jacob Gudak, the son of Moshe Oved, a Polish-Jewish writer who made jewelry and ran a fashionable antique shop in Bloomsbury, Good studied mathematics at Cambridge and became a codebreaker colleague of Alan Turing’s during the war. Turing introduced Good to the Asian board game Go, and Good is credited with popularizing the game in the West. But today Good is best remembered for a 1965 article in which he wrote: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.

pages: 291 words: 90,771

Upscale: What It Takes to Scale a Startup. By the People Who've Done It.
by James Silver
Published 15 Nov 2018

Wendy Tan White MBE is an advisor to BGF. She was a partner at BGF Ventures and a general partner at Entrepreneur First, building deep tech companies with a £40m Next Stage Fund and programme. Wendy was CEO and co-founder of the SaaS website builder Moonfruit, before selling to Yell Group. She is a board trustee of the Alan Turing Institute, a member of the UK Digital Economy Council, and on the boards of Tech Nation and the Department of Computing and Dyson School of Design Engineering at Imperial College London. CHAPTER 26 ‘The average person doesn’t get married after two or three dates; the same goes for finding the right investor.’

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

If neuroscientists are going to work together with artificial intelligence experts to create intelligent machines, then this joint work can’t rely on biologists finding the objective function of animals and telling it to the machine-learning experts. Progress in AI must involve biologists and computer scientists working together to understand the details of the brain. Tests of AI should, in my view, build on the one first proposed by Alan Turing in his famous ‘imitation game’ test.15 A computer passes the Turing test, or imitation game, if it can fool a human, during a question-and-answer session, into believing that it is, in fact, a human. This is a tough test and we are a long way from achieving this, but we can use the main Turing test as a starting point for a series of simpler tests.

Applied Cryptography: Protocols, Algorithms, and Source Code in C
by Bruce Schneier
Published 10 Nov 1993

However, similar information theory considerations are occasionally useful, for example, to determine a recommended key change interval for a particular algorithm. Cryptanalysts also employ a variety of statistical and information theory tests to help guide the analysis in the most promising directions. Unfortunately, most literature on applying information theory to cryptanalysis remains classified, including the seminal 1940 work of Alan Turing.

Table 11.1 Unicity Distances of ASCII Text Encrypted with Algorithms with Varying Key Lengths

  Key Length (in bits)    Unicity Distance (in characters)
  40                      5.9
  56                      8.2
  64                      9.4
  80                      11.8
  128                     18.8
  256                     37.6

Confusion and Diffusion The two basic techniques for obscuring the redundancies in a plaintext message are, according to Shannon, confusion and diffusion [1432].
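The table's numbers follow from Shannon's unicity distance, U = H(K)/D: the key entropy in bits divided by the plaintext redundancy per character. Reproducing the figures above implies a redundancy of roughly 6.8 bits per ASCII character (a value inferred here only to check the arithmetic, not stated in this passage):

```python
def unicity_distance(key_bits: float, redundancy: float = 6.8) -> float:
    # Shannon's unicity distance: key entropy divided by the
    # per-character redundancy of the plaintext language.
    return key_bits / redundancy

# Recompute the table's right-hand column from its left-hand column.
for key_bits in (40, 56, 64, 80, 128, 256):
    print(key_bits, round(unicity_distance(key_bits), 1))
```

Doubling the key length doubles the unicity distance, which is why the 256-bit row needs only about 38 characters of ciphertext before, in theory, a single decryption becomes overwhelmingly likely.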

Problems that cannot be solved in polynomial time are called intractable, because calculating their solution quickly becomes infeasible. Intractable problems are sometimes just called hard. Problems that can only be solved with algorithms that are superpolynomial are computationally intractable, even for relatively small values of n. It gets worse. Alan Turing proved that some problems are undecidable . It is impossible to devise any algorithm to solve them, regardless of the algorithm’s time complexity. Problems can be divided into complexity classes, which depend on the complexity of their solutions. Figure 11.1 shows the more important complexity classes and their presumed relationships.

Hill, “Cryptography in an Algebraic Alphabet,” American Mathematical Monthly, v. 36, Jun–Jul 1929, pp. 306–312. 733. P.J.M. Hin, “Channel–Error–Correcting Privacy Cryptosystems,” Ph.D. dissertation, Delft University of Technology, 1986. (In Dutch.) 734. R. Hirschfeld, “Making Electronic Refunds Safer,” Advances in Cryptology—CRYPTO ’92 Proceedings, Springer–Verlag, 1993, pp. 106–112. 735. A. Hodges, Alan Turing: The Enigma of Intelligence, Simon and Schuster, 1983. 736. W. Hohl, X. Lai, T. Meier, and C. Waldvogel, “Security of Iterated Hash Functions Based on Block Ciphers,” Advances in Cryptology—CRYPTO ’93 Proceedings, Springer–Verlag, 1994, pp. 379–390. 737. F. Hoornaert, M. Decroos, J. Vandewalle, and R.

pages: 293 words: 88,490

The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction
by Richard Bookstaber
Published 1 May 2017

Once the initial state of the grid is set with various cells alive and others dead, the process might go on for a few periods and then have all the cells die off, or it might continue with all sorts of structures emerging and changing. It is, in general, impossible to predict whether a configuration will die off in a given period. Indeed, Life is an illustration of Alan Turing’s halting problem: you can’t know if the cells will all die off without running the game until they do die off. Thus, Life, a two-state process governed by four rules, is computationally irreducible. Von Neumann designed the universal constructor with the objective of self-replication; Conway designed his cellular automaton without any specific objective in mind.
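Conway's four rules fit in a few lines. The sketch below (a minimal set-based implementation; the coordinate scheme and the `step` name are choices made here, not Conway's) advances one generation: a dead cell with exactly three live neighbours is born, a live cell with two or three neighbours survives, and every other cell dies:

```python
from collections import Counter

def step(live):
    # live: a set of (x, y) coordinates of live cells on an unbounded grid.
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3; death otherwise.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Applying `step` twice to the blinker returns the original configuration; whether an arbitrary starting set eventually becomes empty is precisely the question that, per the halting problem, no shortcut can answer in general.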

pages: 310 words: 89,838

Massive: The Missing Particle That Sparked the Greatest Hunt in Science
by Ian Sample
Published 1 Jan 2010

Anderson, who worked on superconductivity and magnetic effects in materials, had a history of opposing large amounts of public expenditure on high-energy physics. When particle physicists argued that their work focused on more fundamental questions of science than his, Anderson argued that it was no more fundamental than the work Alan Turing had done for computer science or that James Watson and Francis Crick had achieved with unraveling the structure of DNA. Another leading figure who testified against the supercollider was James Krumhansl, a distinguished materials scientist at Cornell University. Krumhansl’s testimony carried particular weight at the time because he was lined up to take over the presidency of the American Physical Society, the nationwide organization that represented the field of physics.

pages: 310 words: 89,653

The Interstellar Age: Inside the Forty-Year Voyager Mission
by Jim Bell
Published 24 Feb 2015

3 Message in a Bottle DURING THE LAST few centuries, humans have shown a remarkable ability for decoding messages in languages or codes that they had never previously encountered. Linguists were able to decipher the Greek written language Linear B from about BCE 1450 without any ancient Greeks around to provide tips. During World War II, Cambridge mathematician Alan Turing and the Allied Forces were able to decipher the ingenious Enigma machine ciphers used by the Nazis to great effect in North Atlantic naval battles. It seemed reasonable, then, to assume in the 1970s that any form of intelligent alien life as smart or smarter than ourselves would be able to decipher a message we sent to them, no matter how rooted it was in our culture, solar system, and galactic address.

pages: 349 words: 27,507

E=mc2: A Biography of the World's Most Famous Equation
by David Bodanis
Published 25 May 2009

The particular division bequeathed by Lavoisier, Faraday, and their colleagues was even more compelling, for when one of the divisions is material and physical, and the other is invisible yet still powerful, it’s the ancient dichotomy of the body versus the soul that slips into our mind. Many other thinkers have been guided in their work by that distinction. Alan Turing seems to have been led by the body-soul division when he came up with his distinction between software and hardware; most users of computers easily think that way, for we can all immediately grasp the notion of a “dead” physical substrate, powered up by a “live” controlling power. The soul-body distinction permeates our world: it’s Don Quixote versus Sancho Panza; the cerebral Spock versus the stolid Enterprise; the contrast between the whispered encouraging voice-overs in the running-shoe ads, and the physical body on the screen.

pages: 330 words: 91,805

Peers Inc: How People and Platforms Are Inventing the Collaborative Economy and Reinventing Capitalism
by Robin Chase
Published 14 May 2015

,” Salon.com, July 17, 2013, www.salon.com/2013/07/17/how_twitter_fuels_black_activism. 13. U.S. Agency for International Development, “Fighting Ebola: A Grand Challenge for Development,” www.usaid.gov/grandchallenges/ebola. 14. Walter Isaacson, “Where Innovation Comes From,” Wall Street Journal, September 26, 2014, www.wsj.com/articles/a-lesson-from-alan-turing-how-creativity-drives-machines-1411749814. 15. “Vint Cerf Pt. 1,” The Colbert Report, July 15, 2014, http://thecolbertreport.cc.com/videos/08a2dg/vint-cerf-pt—1. 16. Gordon Rosenblatt, “Google’s Biggest Competitor Is Amazon,” Medium.com, October 18, 2014, https://medium.com/@gideonro/the-google-amazon-slugfest-8a3a07a1d6dd. 17.

Algorithms Unlocked
by Thomas H. Cormen
Published 15 Jan 2013

And it gets even worse. For some problems, no algorithm is possible. That is, there are problems for which it is provably impossible to create an algorithm that always gives a correct answer. We call such problems undecidable, and the best-known one is the halting problem, proven undecidable by the mathematician Alan Turing in 1937. In the halting problem, the input is a computer program A and the input x to A. The goal is to determine whether program A, running on input x, ever halts. That is, does A with input x run to completion? Perhaps you’re thinking that you could write a program—let’s call it program B—that reads in program A, reads in x, and simulates A running with input x.
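Program B's weakness is easy to make concrete with a step-bounded simulator: a finite budget of steps can confirm that a computation halts, but exhausting the budget proves nothing. In the sketch below (illustrative only; `runs_to_one` and the Collatz rule are stand-ins for program A and input x, not part of the original argument), simulation gives a one-sided answer:

```python
def runs_to_one(step_fn, n, budget):
    # Simulate repeated application of step_fn, as program B would simulate A.
    for _ in range(budget):
        if n == 1:
            return True    # halted within the budget: a definite answer
        n = step_fn(n)
    return None            # budget exhausted: "don't know", not "never halts"

def collatz(n):
    # Halve even numbers; map odd n to 3n + 1. Halts when n reaches 1.
    return n // 2 if n % 2 == 0 else 3 * n + 1
```

`runs_to_one(collatz, 27, 200)` returns True (27 reaches 1 in 111 steps), while `runs_to_one(lambda n: n + 2, 2, 200)` returns None no matter how large the budget: no finite amount of simulation can distinguish "not yet" from "never", which is why B cannot serve as a halting decider.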

Gods and Robots: Myths, Machines, and Ancient Dreams of Technology
by Adrienne Mayor
Published 27 Nov 2018

Generation of Animals 743a2 and 764b29–31; Parts of Animals 654b29–34. See De Groot 2008 on Aristotle and mechanics. Cf. Berryman 2009, 72–74, who argues that Aristotle’s language is not mechanistic. 33. Cohen 2002, 69. On free will, see Harari 2017, 283–85. 34. The pioneer of Artificial Intelligence, Alan Turing, devised a test in 1951 to reveal whether a machine is sentient, Zarkadakis 2015, 48–49, 312–13. See also Cohen 1963 and 1966, 131–42; Mackey 1984; Berryman 2009, 30; Kang 2011, 168–69. Since Turing, other AI-human tests have been developed: Boissoneault 2017. Paranoid sci-fi themes of androids and false selfhood, Zarkadakis 2015, xv, 53–54, 70–71, 86–87. 35.

pages: 333 words: 86,662

Zeitgeist
by Bruce Sterling
Published 1 Nov 2000

We’re so far beyond your mental grasp that we’re literally unspeakable. Mere mundane user dorks like you can’t even raise the topic of ECHELON in any discussion of contemporary reality. Because at ECHELON we’re huge, omniscient, omnipresent, and totally technically capable. We’ve been secretly saving the bacon of the Anglo-American empire since Alan Turing was blowing guys in bus stations. We’re always taping everything, but we Never Say Anything. You get me so far?” “Yeah, no, maybe.” “So that means that a guy like me has no conventional path into the narrative. None at all. I’m always the deus ex machina. I mean, the twentieth-century master narrative just doesn’t work, unless I remain way behind the curtain, and always super secret.

pages: 294 words: 96,661

The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity
by Byron Reese
Published 23 Apr 2018

Unfortunately, he ran out of funds for his endeavor, a common fate for start-ups even then. However, in 2002 the Science Museum of London built the ten-thousand-pound machine Babbage proposed, and it worked flawlessly. Exit Babbage, who surmised that steam could power computing machines. Enter Alan Turing. Turing’s contribution at this point in our tale came in 1936, when he first described what we now call a Turing machine. Turing conceived of a hypothetical machine that could perform complex mathematical problems. The machine is made up of a narrow strip of graph paper, which, in theory, is infinitely long.
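A machine of the kind Turing described is simple enough to simulate directly. The sketch below is a minimal illustration (the rule-table format and the binary-increment example are choices made here, not Turing's own notation): the machine reads one cell of an unbounded tape, writes a symbol, moves the head one cell, and changes state, over and over until it halts:

```python
def run(rules, tape, state, pos, max_steps=10_000):
    # rules: (state, symbol) -> (symbol_to_write, head_move, next_state)
    # tape: dict {position: symbol}; unwritten cells read as the blank '_'.
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    return tape

# Example machine: increment a binary number, starting at the least
# significant bit and propagating the carry leftward.
increment = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", -1, "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", -1, "halt"),   # ran off the left edge: new digit
}
```

Started with `1011` on the tape and the head on the last digit, the machine halts with `1100` (eleven plus one). Turing's deeper point was that one fixed machine of this form, fed a description of any other machine as data, can simulate it: the universal machine.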

pages: 340 words: 91,745

Duped: Double Lives, False Identities, and the Con Man I Almost Married
by Abby Ellin
Published 15 Jan 2019

I’m talking about people who lie to their spouses, children, parents, friends, and colleagues, who prolong and expand the lies until they’re telling as many untruths as truths, though maybe not to the degree of disgraced New York State attorney general Eric Schneiderman, a staunch anti-Trumpian who prosecuted Harvey Weinstein—and who in May 2018 was accused of choking, slapping, and psychologically demeaning four women while in alcohol-fueled rages. To us this makes no sense. To them, it just might.24 I’m not referring to people who live double lives born of necessity, like, say, Rock Hudson, whose career forced him to pretend to be straight rather than embrace his homosexuality. Or computer scientist Alan Turing, who lived in an era when being gay was a crime. Or Deborah Sampson, who disguised herself as a man so she could fight in the Revolutionary War—something a handful of women did.25 Nor am I looking at run-of-the-mill dishonesty—white lies of excuse or courtesy, or the occasional business-trip dalliance at the Topeka Marriott.

pages: 322 words: 92,769

The Alps: A Human History From Hannibal to Heidi and Beyond
by Stephen O'Shea
Published 21 Feb 2017

Although fluent in the language, I am not French and thus do not possess a passion for acronyms. SDF? A homeless person (sans domicile fixe). TTC? Taxes included (toutes taxes comprises). IVG? Abortion (interruption volontaire de grossesse). HLM? Project/Council housing (habitation à loyer modéré). And on and on and on. Reading an acronym-rich French newspaper requires the mind of an Alan Turing, the man who cracked the Enigma code. I brake to contemplate my choice: do I want the VL Detour to the left, or the PL Detour to the right? And what the hell do they mean? Veronica Lake? Peter Lorre? The honk of a horn concentrates my mind and I choose Veronica Lake. There will be no turning back, as there is absolutely nowhere to turn around.

pages: 350 words: 90,898

A World Without Email: Reimagining Work in an Age of Communication Overload
by Cal Newport
Published 2 Mar 2021

If the first two letters sent are “t” and “h,” then this severely restricts which letter is likely to be sent next. The probability, for example, that the sender will next transmit “x” or “q” or “z” is zero. But the probability that the sender is about to transmit “e” is quite high. (Like his better-known British counterpart in the pantheon of computing pioneers, Alan Turing, Shannon had done some work on code-breaking during World War II, and therefore would have been familiar with the idea that certain letters are more common than others.) Shannon argued that in this case, when the sender and receiver are trying to work out in advance the rules for how they will map transmitted symbols to letters, the protocol3 they come up with should take into account these varying likelihoods, as this might allow them, on average, to get away with using far fewer symbols to communicate.
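
Shannon's observation, that likelier letters deserve shorter codewords, was later formalized in prefix codes such as Huffman coding. The sketch below is my illustration of that idea, not a construction from the book; the toy frequency table is an assumption.

```python
import heapq

# Build a prefix code in which frequent symbols get shorter codewords.
# Frequencies here are a toy example, not real English letter statistics.
def huffman_codes(freqs):
    # Each heap entry: (weight, tiebreak, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes({"e": 12, "t": 9, "h": 6, "x": 1, "z": 1})
# The common 'e' gets a shorter codeword than the rare 'x' and 'z'.
print(sorted(codes.items()))
```

On average, a message drawn from these frequencies now needs fewer symbols than a fixed-length code would use, which is exactly the saving Shannon had in mind.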

pages: 797 words: 227,399

Wired for War: The Robotics Revolution and Conflict in the 21st Century
by P. W. Singer
Published 1 Jan 2010

This idea of robots, one day being able to problem-solve, create, and even develop personalities past what their human designers intended is what some call “strong AI.” That is, the computer might learn so much that, at a certain point, it is not just mimicking human capabilities but has finally equaled, and even surpassed, its creators’ human intelligence. This is the essence of the so-called Turing test. Alan Turing was one of the pioneers of AI, who worked on the early computers like Colossus that helped crack the German codes during World War II. His test is now encapsulated in a real-world prize that will go to the first designer of a computer intelligent enough to trick human experts into thinking that it is human.

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Panels of experts could interview the cyber-conscious being to determine its sentience as compared to a flesh human – these types of interviews, when conducted in blinded fashion as to the forms of each interviewee, are called Turing Tests in honor of the mathematician who first suggested them in the 1940s, Alan Turing (1950: 442). The prospect of being the first to pass such Turing Tests is motivating many computer science teams (Christian 2011: 16). They are doing their utmost to build into their software the full range of human feelings, including feelings of angst and dread. Hence, the unstoppable human motivation to invent something as amazing as a cyber-conscious mind will result in the creation of countless partially successful efforts that would be unethical if accomplished in flesh.

To clearly separate specific singularitarian expectations from the philosophy of transhumanism requires first defining the former. The original meaning of “technological singularity”, as coined by Vernor Vinge in his 1993 essay (the first in this section) is the Event Horizon view. This view links to Alan Turing’s seminal writing about intelligent machinery outstripping human intelligence, and more directly to I.J. Good’s term “intelligence explosion,” which suggests not only a growth of machine intelligence but its acceleration. Accordingly, technological advance will lead to the advent of superhuman intelligence.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

There’s one last comparison between language models and poker—or really between language and poker. The critique I made in The Signal and the Noise was that, sure, AIs might work well when they’re playing games like chess that have well-defined rules, but their worth had yet to be proven on more open-ended problems. The Turing test—named after the British computer scientist Alan Turing, who proposed that a good test of practical intelligence is whether a computer could respond to written questions in a way that was indistinguishable from a human being—seemed like a higher hurdle to clear. There are debates about whether ChatGPT has passed the Turing test yet, but it’s come closer than almost any expert would have imagined even five or ten years ago.

The original version involved a trolley that had lost its brakes and was on a collision course to kill some number of track workers, but that could be diverted to a different track to kill some smaller number of workers. Many creative variations have followed, serving as thought experiments to explore different precepts of moral reasoning. TRS*: See: Technological Richter Scale. Turing test: A litmus test proposed by the British mathematician Alan Turing in which a machine is deemed to possess practical intelligence if a third-party observer can’t distinguish its responses to text queries from those of a human. AI researchers debate whether the Turing test is in fact a good measure of intelligence and whether models like ChatGPT have passed the test.

pages: 309 words: 101,190

Climbing Mount Improbable
by Richard Dawkins and Lalla Ward
Published 1 Jan 1996

The hypothetical robot that we have now worked towards can be called a TRIP robot. A TRIP robot such as we are now imagining is a machine of great technical ingenuity and complexity. The principle was discussed by the celebrated Hungarian-American mathematician John von Neumann (one of two candidates for the honoured title of the father of the modern computer—the other was Alan Turing, the young British mathematician who, through his codebreaking genius, may have done more than any other individual on the Allied side to win the Second World War, but who was driven to suicide after the war by judicial persecution, including enforced hormone injections, for his homosexuality).

pages: 111 words: 1

Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets
by Nassim Nicholas Taleb
Published 1 Jan 2001

You can sometimes replicate something that can be mistaken for a literary discourse with a Monte Carlo generator but it is not possible randomly to construct a scientific one. Rhetoric can be constructed randomly, but not genuine scientific knowledge. This is the application of Turing’s test of artificial intelligence, except in reverse. What is the Turing test? The brilliant British mathematician, eccentric, and computer pioneer Alan Turing came up with the following test: A computer can be said to be intelligent if it can (on average) fool a human into mistaking it for another human. The converse should be true. A human can be said to be unintelligent if we can replicate his speech by a computer, which we know is unintelligent, and fool a human into believing that it was written by a human.
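
Taleb's "Monte Carlo generator" of rhetoric can be approximated with a toy bigram Markov chain: sample each next word from the words that followed the current one in some corpus. The sketch, corpus, and output below are my invention, purely for illustration.

```python
import random

# Toy Monte Carlo text generator: a bigram Markov chain trained on a
# scrap of text. The corpus is an invented fragment, not from the book.
def build_bigrams(text):
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, start, n=12, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("the market rewards the bold and the market punishes "
          "the bold who mistake luck for skill in the market")
chain = build_bigrams(corpus)
print(babble(chain, "the"))
```

The output is locally plausible and globally empty, which is Taleb's point: rhetoric can be sampled, genuine scientific argument cannot.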

pages: 362 words: 97,862

Physics in Mind: A Quantum View of the Brain
by Werner Loewenstein
Published 29 Jan 2013

CHAPTER TWELVE How to Represent the World The Universal Turing Machine In 1936, a young mathematics student at King’s College, Cambridge, formulated the theoretical basis for a machine that could perform all sorts of mathematical tasks. That formulation would cast a wide net: it would bespread mathematics and physics and biology—even such a seemingly way-off field as brain physiology. The student was Alan Turing, the very same who, three years later at Bletchley Park, the British wartime cryptography headquarters, would crack the “Enigma” code of Hitler’s armies. But for the time being, at Cambridge, he was engaged in a more laid-back pursuit: whether and how mathematical assertions can be proven. What he was after was the inner essence of the mathematical process, and he came to the conclusion that anything that has a mathematical solution, anything computable at all, could be computed by a simple machine equipped with a one-dimensional tape bearing a binary code.

Future Files: A Brief History of the Next 50 Years
by Richard Watson
Published 1 Jan 2008

Singularity is the term futurists use to describe the point at which machines have developed to the extent that humans can no longer fully understand or forecast their capabilities. The idea of artificial intelligence (AI) goes back to the mid-1950s, although Asimov was writing about smart robots back in 1942. The true test for artificial intelligence dates to 1950, when the British mathematician Alan Turing suggested the criterion of humans submitting statements through a machine and then not being able to tell whether the responses had come from another person or the machine. The 1960s and 1970s saw a great deal of progress in AI, but real breakthroughs failed to materialize. Instead, scientists and developers focused on specific problems such as speech recognition, text recognition and computer vision.

pages: 241 words: 90,538

Unequal Britain: Equalities in Britain Since 1945
by Pat Thane
Published 18 Apr 2010

However, prosecutions were concentrated in a few police districts,41 and the increase was partly due to Home Secretary David Maxwell Fyfe’s drive for greater uniformity in prosecutions and the use by the police of entrapment techniques and conspiracy charges to ensnare homosexual men.42 The press sensationalized and disseminated the details of a series of successful prosecutions of prominent men, often on flimsy evidence. In 1952, the mathematician Alan Turing, who received an Order of the British Empire (OBE) for his work on cracking the Enigma code during the war, was arrested for homosexual offences. He accepted hormone treatment instead of a prison sentence, but committed suicide in 1954.43 In 1953, the novelist and playwright Rupert Croft-Cooke was sentenced to nine months in prison on the testimony of two sailors.

pages: 313 words: 101,403

My Life as a Quant: Reflections on Physics and Finance
by Emanuel Derman
Published 1 Jan 2004

Lex and yacc were "non-procedural" programs: you weren't required to write all the details of lexical analysis and parsing; instead, you simply told them what grammar you wanted to recognize, and they wrote a program to do it, using algorithms for matching patterns that went back to the computer pioneers Alan Turing and Stephen Kleene. With lex and yacc as aids, I learned to create my own computer languages. A little like Feynman diagrams, which allowed workaday physicists to compute unthinkingly the detailed quantum mechanical probabilities that had formerly demanded the genius of Schwinger or Feynman, these parsing tools allowed regular programmers to prosaically create languages that would previously have required magnificent exertions.

pages: 352 words: 96,532

Where Wizards Stay Up Late: The Origins of the Internet
by Katie Hafner and Matthew Lyon
Published 1 Jan 1996

To celebrate its star student, his school declared a half-day holiday. “For a short time I was the most popular boy in the school,” he recalled. Davies chose the University of London’s Imperial College, and by the time he was twenty-three had earned degrees in physics and mathematics. In 1947 he joined a team of scientists led by the mathematician Alan Turing at the National Physical Laboratory, where he played a leading part in building the fastest digital computer in England at the time, the Pilot ACE. In 1954 Davies won a fellowship to spend a year in the United States; part of that year, he was at MIT. He then returned to England, rose swiftly at the NPL, and in 1966, after describing his pioneering work on packet-switching, he was appointed head of the computer science division.

pages: 370 words: 97,138

Beyond: Our Future in Space
by Chris Impey
Published 12 Apr 2015

This is the time, projected to be in the middle of the twenty-first century, when civilization and human nature itself are fundamentally transformed. One variant of the singularity is when artificial intelligence surpasses human intelligence. Software-based synthetic minds begin to program themselves and a runaway reaction of self-improvement occurs. This event was foreshadowed by John von Neumann and Alan Turing in the 1950s. Turing wrote that “. . . at some stage therefore we should have to expect the machines to take control . . . ,” and von Neumann described “. . . an ever-accelerating progress and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”17 A dystopian version of this event permeates the popular culture, from science fiction novels to movies such as Blade Runner and The Terminator.

pages: 349 words: 95,972

Messy: The Power of Disorder to Transform Our Lives
by Tim Harford
Published 3 Oct 2016

Perhaps it’s because we find an actual, unscripted conversation with a stranger to be such a frightening prospect. Even once the interaction does move to a face-to-face environment—which, for most romantic relationships, is part of the point—we often continue to rely on a script whenever we can. In 1950, the mathematician, codebreaker, and computer pioneer Alan Turing proposed a test of artificial intelligence. In Turing’s “imitation game,” a judge would communicate through a teleprinter with a human and a computer. The human’s job was to prove that she was, indeed, human. The computer’s job was to imitate human conversation convincingly enough to confuse the judge.28 Turing optimistically predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation.

Powers and Prospects
by Noam Chomsky
Published 16 Sep 2015

There is also a different approach to the problem, which is highly influential though it seems to me not only foreign to the sciences but also close to senseless. This approach divorces the cognitive sciences from a biological setting, and seeks tests to determine whether some object ‘manifests intelligence’ (‘plays chess’, ‘understands Chinese’, or whatever). The approach relies on the ‘Turing Test’, devised by mathematician Alan Turing, who did much of the fundamental work on the modern theory of computation. In a famous paper of 1950, he proposed a way of evaluating the performance of a computer—basically, by determining whether observers will be able to distinguish it from the performance of people. If they cannot, the device passes the test.

pages: 407 words: 103,501

The Digital Divide: Arguments for and Against Facebook, Google, Texting, and the Age of Social Netwo Rking
by Mark Bauerlein
Published 7 Sep 2011

Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level. The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies.

pages: 349 words: 98,868

Nervous States: Democracy and the Decline of Reason
by William Davies
Published 26 Feb 2019

But digital computation extends well beyond the imaginings of Clausewitz in the mechanization of thinking itself. In the years immediately before the Second World War, various mathematicians, philosophers, and psychologists mused on whether human thought and communication could be modeled as mathematical formulae. The British mathematician (and subsequently celebrated code breaker) Alan Turing’s 1937 paper, “On Computable Numbers,” imagined a “Turing Machine” which could be programmed to perform basic instructions in response to different symbols that it was fed in a random order. While the Turing Machine was never built, this vision signaled the leap from the abstract mathematics of computation to its technological construction.

Language and Mind
by Noam Chomsky
Published 1 Jan 1968

At about the same time, Jacob wrote that “the rules controlling embryonic development,” almost entirely unknown, interact with other physical factors to “restrict possible changes of structures and functions” in evolutionary development, providing “architectural constraints” that “limit adaptive scope and channel evolutionary patterns,” to quote a recent review. The best-known of the figures who devoted much of their work to these topics are D’Arcy Thompson and Alan Turing, who took a very strong view on the central role of such factors in biology. In recent years, such considerations have been adduced for a wide range of problems of development and evolution, from cell division in bacteria to optimization of structure and function of cortical networks, even to proposals that organisms have “the best of all possible brains,” as argued by computational neuroscientist Chris Cherniak.

Cataloging the World: Paul Otlet and the Birth of the Information Age
by Alex Wright
Published 6 Jun 2014

He saw these developments as fundamentally connected to a larger utopian project that would bring the world closer to a state of permanent and lasting peace and toward a state of collective spiritual enlightenment. The conventional history of the Internet traces its roots through an Anglo-American lineage of early computer scientists like Charles Babbage, Ada Lovelace, and Alan Turing; networking visionaries like Vinton G. Cerf and Robert E. Kahn; as well as hypertext seers like Vannevar Bush, J. C. R. Licklider, Douglas Engelbart, Ted Nelson, and of course Tim Berners-Lee and Robert Cailliau, who in 1991 released their first version of the World Wide Web. The dominant influence of the modern computer industry has placed computer science at the center of this story.

pages: 324 words: 96,491

Messing With the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News
by Clint Watts
Published 28 May 2018

This manipulation occurs through the deployment of what are known as social bots—programs, defined by a computer algorithm, that produce personas and content on social media applications that replicate a real human. These social bots have also passed the important milestone known as the Turing test, a challenge developed by Alan Turing, the great member of the British team that cracked the German Enigma code.4 The test assesses whether a machine has the ability to communicate, via text only, at a level equivalent to that of a real person, such that a computer—or, in the modern case, an artificially generated social media account—cannot be distinguished from a live person.

pages: 268 words: 109,447

The Cultural Logic of Computation
by David Golumbia
Published 31 Mar 2009

The kinds of transformations Chomsky documents from his earliest work onward (Chomsky 1957), ones which “invert” sentence structure, derive wh-questions, or make active sentences into passive ones, are all logical formulae that do not simply resemble computer programs: they are algorithms, the stuff of the computer that Alan Turing, just prior to Chomsky and in some ways coterminous with him, had identified as the building blocks for an entire exploded mechanism of calculation. Goldsmith, in terms that are meant to be helpful both for the study of computers and languages, writes: Loosely speaking, an algorithm is an explicit and step-by-step explanation of how to perform a calculation.

pages: 420 words: 100,811

We Are Data: Algorithms and the Making of Our Digital Selves
by John Cheney-Lippold
Published 1 May 2017

In this way, algorithmic processing mimics the concept of arbitrary closure introduced by cultural theorist Stuart Hall: to make any claim about oneself, or to define one’s own subject position, requires a hypothetical halt to the changing nature of the world.82 Arbitrary closure is a pause button on the dynamism of discourse, a conceptual stop to space and time in order to analyze and make assertions about who we are. Not limited to an algorithm’s output, digital computers produce arbitrary closures every time they calculate a function, a technological feature/bug inherited from the serial calculations of Alan Turing’s machinic legacy. To add two plus two together means that the computer, for a brief millisecond, will conceptually focus its entire processing power on a two, then an addition sign, and then another two, eventually spitting out its output as four. But in the brief, transient moment when electronic signals signified as ones and zeros streak across a microprocessor during computation, the world is stable and closed.

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

Both suggest a strong continuity between ‘life’ and ‘mind’, which in turn suggests that there is more to mind (and also to consciousness) than simply what a system ‘does’ (Kirchhoff, 2018; Maturana & Varela, 1980). I was lucky enough to meet Maturana – who died in May 2021 at the age of ninety-two – in January 2019, in his home city of Santiago, where we spent time sipping coffee and discussing these ideas in a shady cafe garden in the Barrio Providencia. Turing test: In Alan Turing’s original ‘imitation game’, there are two humans of the same gender and a machine. The machine and one human – the collaborator – are both pretending to be a human of the opposite gender. The other human has to decide which is the machine and which is the collaborator (Turing, 1950). Garland test: This term was coined by Murray Shanahan, whose book Embodiment and the Inner Life (2010) was one of the inspirations behind Ex Machina.

pages: 1,351 words: 385,579

The Better Angels of Our Nature: Why Violence Has Declined
by Steven Pinker
Published 24 Sep 2012

To take just two examples, more than twice as many children are hit by cars driven by parents taking their children to school as by other kinds of traffic, so when more parents drive their children to school to prevent them from getting killed by kidnappers, more children get killed.213 And one form of crime-control theater, electronic highway signs that display the names of missing children to drivers on freeways, may cause slowdowns, distracted drivers, and the inevitable accidents.214 The movement over the past two centuries to increase the valuation of children’s lives is one of the great moral advances in history. But the movement over the past two decades to increase the valuation to infinity can lead only to absurdities. GAY RIGHTS, THE DECLINE OF GAY-BASHING, AND THE DECRIMINALIZATION OF HOMOSEXUALITY It would be an exaggeration to say that the British mathematician Alan Turing explained the nature of logical and mathematical reasoning, invented the digital computer, solved the mind-body problem, and saved Western civilization. But it would not be much of an exaggeration.215 In a landmark 1936 paper, Turing laid out a set of simple mechanical operations that was sufficient to compute any mathematical or logical formula that was computable at all.216 These operations could easily be implemented in a machine—a digital computer—and a decade later Turing designed a practicable version that served as a prototype for the computers we use today.

Bennett, “Abducted: The Amber Alert system is more effective as theater than as a way to protect children,” Boston Globe, Jul. 20, 2008. 213. Kids hit by parents driving kids: Skenazy, 2009, p. 176. 214. Counterproductive kidnapping alerts: D. Bennett, “Abducted: The Amber Alert system is more effective as theater than as a way to protect children,” Boston Globe, Jul. 20, 2008. 215. Alan Turing: Hodges, 1983. 216. Turing machines: Turing, 1936. 217. Can machines think?: Turing, 1950. 218. State-sponsored homophobia, past: Fone, 2000. Present: Ottosson, 2009. 219. More homophobia against gay men: Fone, 2000. More laws against male homosexuality: Ottosson, 2006. 220. More hate crimes against men: U.S.

Hoban, J. E. 2007. The ethical marine warrior: Achieving a higher standard. Marine Corps Gazette (September), 36–40. Hoban, J. E. 2010. Developing the ethical marine warrior. Marine Corps Gazette (June), 20–25. Hobbes, T. 1651/1957. Leviathan. New York: Oxford University Press. Hodges, A. 1983. Alan Turing: The enigma. New York: Simon & Schuster. Hoffman, M. L. 2000. Empathy and moral development: Implications for caring and justice. Cambridge, U.K.: Cambridge University Press. Hofstadter, D. R. 1985. Dilemmas for superrational thinkers, leading up to a Luring lottery. In Metamagical themas: Questing for the essence of mind and pattern.

pages: 350 words: 107,834

Halting State
by Charles Stross
Published 9 Jul 2011

It’s like something out of Hieronymus Bosch, of course. Bosch, as pastiched by a million expert systems executing code that procedurally clones and extrapolates a work of art across a cosmic canvas. Procedural Bosch, painting madly and at infinite speed to fill in the gaps in a virtual world, guarded by the titanic archangels of Alonzo Church and Alan Turing, spinning the endless tape… It’s funny how it takes game space to bring out the poet in you. And it’s even funnier how you’re embarrassed about letting it show. “That’s Hell. Don’t worry about it, it’s just a little joke that got out of hand.” “You’re shitting me.” “Not at all!” You lumber forward onto the stony path that meanders around the temple, heading downhill towards the beach front.

pages: 391 words: 105,382

Utopia Is Creepy: And Other Provocations
by Nicholas Carr
Published 5 Sep 2016

Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level. The internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies.

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

he asked. Ivan shook his head and smiled grimly. ‘Not a chance, I’m afraid. Shortly after an AI becomes self-aware, it will want to increase its mental capacities, and there is no reason why it couldn’t do so at an amazing rate. This was foreseen as long ago as the 1960s, by John Good, a colleague of Alan Turing’s at Bletchley Park. He said that once we create a thinking machine there will be an intelligence explosion, and that the first thinking machine would be the very last thing that mankind would invent. The machine would rewrite its software, expand its hardware, and increase its intelligence in a positive feedback cycle that would quickly create a super-intelligence, something far more capable than a human.

pages: 378 words: 110,518

Postcapitalism: A Guide to Our Future
by Paul Mason
Published 29 Jul 2015

The new approach inserted maths and science into the heart of the industrial process; economics and data management into political decision-making. It was the OSRD that took Claude Shannon, the founder of information theory, out of Princeton and put him into Bell Labs to design algorithms for anti-aircraft guns.21 There, he would meet Alan Turing and discuss the possibility of ‘thinking machines’. Turing, too, had been scooped out of academia by the British government to run the Enigma codebreaking operation at Bletchley Park. This culture of innovation survived the transition to peacetime, even as individual corporations tried to monopolize the results and scrapped over patent rights.

pages: 379 words: 108,129

An Optimist's Tour of the Future
by Mark Stevenson
Published 4 Dec 2010

Perhaps if AI research had taken note of one of its founding fathers, it might have saved some time. In a landmark and, as it turns out, startlingly prescient paper written in 1950 (whose first line was, aptly, ‘I propose to consider the question, “Can machines think?”’), the great, tragic, war-shortening Alan Turing wrote, ‘Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.’ This proposed necessity of having to raise robots might lead you to the conclusion that truly intelligent robots will be few and far between.

pages: 416 words: 106,582

This Will Make You Smarter: 150 New Scientific Concepts to Improve Your Thinking
by John Brockman
Published 14 Feb 2012

At least that might lead the audience to develop a more useful view of the mind—though probably not to buy more tickets. Earlier thinkers like Locke and Hume anticipated many of the discoveries of psychological science but thought that the fundamental building blocks of the mind were conscious “ideas.” Alan Turing, the father of the modern computer, began by thinking about the highly conscious and deliberate step-by-step calculations performed by human “computers” like the women decoding German ciphers at Bletchley Park. His first great insight was that the same processes could be instantiated in an entirely unconscious machine, with the same results.

pages: 518 words: 107,836

How Not to Network a Nation: The Uneasy History of the Soviet Internet (Information Policy)
by Benjamin Peters
Published 2 Jun 2016

In England, cybernetics took on a different character in the form of the Ratio Club, a small but potent gathering of British cybernetic figures who gathered regularly in the basement of the National Hospital for Nervous Diseases in London from 1949 through 1955. Notable figures include the computing pioneer Alan Turing, his Bletchley Park colleague and cryptographer mathematician I. J. Good, neuropsychologist Donald MacKay, and astrophysicist Tommy Gold. The historian of science Andrew Pickering chronicles the lives and work of six active and largely forgotten Britons who were preoccupied with what the brain does—neurologist W.

pages: 335 words: 107,779

Some Remarks
by Neal Stephenson
Published 6 Aug 2012

He went from that to building a machine that could carry out logical operations on bits. He knew about binary arithmetic. I found that quite startling. Up till then I hadn’t been that well informed about the history of logic and computing. I hadn’t been aware that anyone was thinking about those things so far in the past. I thought it all started with [Alan] Turing. So, I had computers in the 17th century. There’s this story of money and gold in the same era, and to top it all off Newton and Leibniz had this bitter rivalry. I decided right away that I was going to have to write a book about that. Pretty soon I was thinking this was an exceptionally apt time in which to set a novel.

pages: 375 words: 111,615

Operation Chastise: The RAF's Most Brilliant Attack of World War II
by Max Hastings
Published 18 Feb 2020

‘Among special weapons,’ recorded a post-war study of RAF armament by the service’s Air Historical Branch, in language that reflects self-congratulation, ‘the “Dam Buster” must take pride of place . . . the story of its development and production is an epic in the history of aerial bombs.’ Barnes Wallis was the only ‘boffin’ – to be more accurate, he was an engineer – to achieve membership of Britain’s historic pantheon of World War II, behind Winston Churchill but alongside the Ultra codebreaker Alan Turing and the fighting heroes of the conflict. Until 1951, when Paul Brickhill’s book was published, scarcely anyone knew or remembered anything of Wallis. He had enjoyed some celebrity in the pre-war years, especially in connection with his work on the great airship R100. From 1939 onwards, however, he vanished behind a curtain of official security.

pages: 300 words: 106,520

The Nanny State Made Me: A Story of Britain and How to Save It
by Stuart Maconie
Published 5 Mar 2020

Dutch runner Fanny Blankers-Koen was the star of the ‘Austerity Games’, the London Summer Olympics staged at Wembley Stadium and the Empire Pool, London. The Windrush arrived with its cargo of 492 Jamaican immigrants who would change the look and tenor of Britain forever. In Manchester, ‘Baby’, the pioneering electronic computer that attracted Alan Turing to the university, ran its first program while on the Suffolk coast the first Aldeburgh Festival was held. Even among such momentous events, one stands out. At midnight on 5 July 1948, the National Health Service began in Britain. The NHS, as it quickly became known, was the result of years of research, argument, hostility, persuasion and campaigning.

pages: 416 words: 106,532

Cryptoassets: The Innovative Investor's Guide to Bitcoin and Beyond: The Innovative Investor's Guide to Bitcoin and Beyond
by Chris Burniske and Jack Tatar
Published 19 Oct 2017

Understandably, this form of encryption did not remain secure for long.3 A more recent example that was the subject of the movie The Imitation Game was the effort during World War II of a group of English cryptographers to decode the messages of Nazi Germany, which were encrypted by a coding device called the Enigma machine. Alan Turing, a luminary in machine learning and artificial intelligence, was a major player on the team whose efforts to break the Enigma code ultimately had a debilitating impact on German war strategies and helped to end the war. Cryptography has become a vital part of our lives. Every time we type in a password, pay with a credit card, or use WhatsApp, we are enjoying the benefits of cryptography.

pages: 371 words: 108,105

Under the Knife: A History of Surgery in 28 Remarkable Operations
by Arnold van de Laar Laproscopic Surgeon
Published 1 Oct 2018

One of them was Alessandro Moreschi, the first and last castrato whose voice has been preserved on a gramophone record. Moreschi died in 1922. The libido also becomes weaker after castration, which was of course usually the intention. For that reason, castration was used until not so long ago to ‘cure’ people of what was considered to be perverted sexual preferences. A well-known victim was Alan Turing, who cracked the Enigma code and invented the computer during the Second World War, but was sentenced to undergo chemical castration by a judge in 1952 because of his homosexuality. Castrations are still performed today. Every year, the testicles of tens of thousands of men worldwide are surgically removed as part of the treatment for prostate cancer.

pages: 344 words: 104,077

Superminds: The Surprising Power of People and Computers Thinking Together
by Thomas W. Malone
Published 14 May 2018

Robert Lee Hotz, “Neural Implants Let Paralyzed Man Take a Drink,” Wall Street Journal, May 21, 2015, http://www.wsj.com/articles/neural-implants-let-paralyzed-man-take-a-drink-1432231201; Tyson Aflalo, Spencer Kellis, Christian Klaes, Brian Lee, Ying Shi, Kelsie Shanfield, Stephanie Hayes-Jackson, et al., “Decoding Motor Imagery from the Posterior Parietal Cortex of a Tetraplegic Human,” Science 348, no. 6,237 (May 22, 2015): 906–910, http://science.sciencemag.org/content/348/6237/906.full, doi:10.1126/science.aaa5417. CHAPTER 4 1. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (New York: Prentice Hall, 1995). 2. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–60. 3. Wikipedia, s.v. “artificial intelligence,” accessed August 8, 2016, https://en.wikipedia.org/wiki/Artificial_intelligence. 4. Rodney Brooks, “Artificial Intelligence Is a Tool, Not a Threat,” Rethink Robotics, November 10, 2014, http://www.rethinkrobotics.com/blog/artificial-intelligence-tool-threat. 5.

pages: 321 words: 105,480

Filterworld: How Algorithms Flattened Culture
by Kyle Chayka
Published 15 Jan 2024

At first, I made modest programs to automate formulas I needed to know for tests, but once I became more fluent in the language, I made my own versions of tic-tac-toe and Connect Four. The machine was a partner in my creativity; it felt like magic. A century after Lovelace, during World War II, Alan Turing, a British mathematician and computer scientist, was working in code breaking for the government—he helped to decode the German Enigma cipher machine. In 1946, with the war over, Turing wrote a report for the National Physical Laboratory proposing the development of an “Automatic Computing Engine.”

pages: 913 words: 265,787

How the Mind Works
by Steven Pinker
Published 1 Jan 1997

In a similar sense, the message stays the same when she repeats it to your father at the other end of the couch after it has changed its form inside her head into a cascade of neurons firing and chemicals diffusing across synapses. Likewise, a given program can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons. This insight, first expressed by the mathematician Alan Turing, the computer scientists Allen Newell, Herbert Simon, and Marvin Minsky, and the philosophers Hilary Putnam and Jerry Fodor, is now called the computational theory of mind. It is one of the great ideas in intellectual history, for it solves one of the puzzles that make up the “mind-body problem”: how to connect the ethereal world of meaning and intention, the stuff of our mental lives, with a physical hunk of matter like the brain.

How confident can we be that some machine will make marks that actually correspond to some meaningful state of the world, like the age of a tree when another tree was planted, or the average age of the tree’s offspring, or anything else, as opposed to being a meaningless pattern corresponding to nothing at all? The guarantee comes from the work of the mathematician Alan Turing. He designed a hypothetical machine whose input symbols and output symbols could correspond, depending on the details of the machine, to any one of a vast number of sensible interpretations. The machine consists of a tape divided into squares, a read-write head that can print or read a symbol on a square and move the tape in either direction, a pointer that can point to a fixed number of tickmarks on the machine, and a set of mechanical reflexes.
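The tape-and-head machine described here is small enough to simulate directly. The sketch below is our own illustration in Python, not code from the book: the transition table, state names, and the unary-successor example are all invented for the demonstration.

```python
# A minimal Turing machine simulator. A transition table maps
# (state, symbol) -> (symbol to write, head move, next state); the tape is a
# dict from square position to symbol, with blank squares defaulting to ' '.
def run(rules, tape, state, steps=1000):
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(steps):
        if state == 'halt':
            break
        symbol = tape.get(pos, ' ')
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {'R': 1, 'L': -1}[move]
    return ''.join(tape[i] for i in sorted(tape)).strip()

# A toy rule table (ours, for illustration): compute the successor of a
# number written in unary, by scanning right past the 1s and writing one
# more 1 on the first blank square.
succ = {
    ('scan', '1'): ('1', 'R', 'scan'),
    ('scan', ' '): ('1', 'R', 'halt'),
}

print(run(succ, '111', 'scan'))  # '1111'
```

Depending on the rule table supplied, the same few lines of machinery compute very different functions, which is the point of the passage: the interpretation lives in the details of the machine, not in the simulator.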

The Haskell Road to Logic, Maths and Programming
by Kees Doets and Jan van Eijck
Published 15 Jan 2004

Because of the reference to every possible structure (of which there are infinitely many), these are quite complicated definitions, and it is nowhere suggested that you will be expected to decide on validity or equivalence in every case that you may encounter. In fact, in 1936 it was proved rigorously, by Alonzo Church (1903–1995) and Alan Turing (1912–1954) that no one can! This illustrates that the complexity of quantifiers exceeds that of the logic of connectives, where truth tables allow you to decide on such things in a mechanical way, as is witnessed by the Haskell functions that implement the equivalence checks for propositional logic.
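The mechanical truth-table check this passage contrasts with quantifier logic is easy to sketch. The book's own implementation is in Haskell; the Python rendering below is only an illustration of the same idea, with function names of our own choosing: a formula over n Boolean variables is equivalent to another exactly when they agree on all 2**n assignments.

```python
from itertools import product

# Decide propositional equivalence by brute-force truth table:
# enumerate every assignment of False/True to the n variables and
# check that the two formulas (given as Python functions) agree.
def equivalent(f, g, n):
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n))

# De Morgan's law: not (p and q)  is equivalent to  (not p) or (not q)
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))  # True
```

The procedure always terminates because there are only finitely many assignments; it is precisely this finiteness that quantification over infinitely many structures takes away.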

Human Frontiers: The Future of Big Ideas in an Age of Small Thinking
by Michael Bhaskar
Published 2 Nov 2021

In the history of Go, move thirty-seven was a big idea that, in thousands of years, humans hadn't thought of. A machine did. Thanks to the program, previously unthinkable moves are now part of the tactical lexicon. AlphaGo, like AlphaFold, jolted the game out of a local maximum. DeepMind is at the forefront of a well-publicised renaissance in AI. (AI itself is a big idea that goes back to Alan Turing and pioneers like John von Neumann and Marvin Minsky and, in the form of dreams of automata, much earlier still.) Over recent decades, computer scientists have brought together a new generation of techniques: evolutionary algorithms, reinforcement learning, deep neural networks and backpropagation, adversarial networks, logistic regression, decision trees and Bayesian networks, among others.

pages: 457 words: 112,439

Zero History
by William Gibson
Published 6 Sep 2010

Milgrim saw that there were still older machines, some actually housed in wood, locked in a large, really quite seriously expensive-looking glass case, rising a good six feet from the floor. The wood-cased typewriter-y device nearest him bore an eye-shaped silk-screened ENIGMA logo. “What are those, then?” “Before the Eden. Enigma encryption. As called forth by Alan Turing. To birth the Eden. Also on offer, U.S. Army M-209B cipher machine with original canvas field case, Soviet M-125-3MN Fialka cipher machine, Soviet clandestine pocket-sized nonelectronic burst encoder and keyer. You are interested?” “What’s a burst encoder?” “Enter message, encrypt, send with inhuman speed as Morse code.

Chasing the Moon: The People, the Politics, and the Promise That Launched America Into the Space Age
by Robert Stone and Alan Andres
Published 3 Jun 2019

Over the years, Clarke had explained to journalists that he’d moved to Sri Lanka in 1956 for the skin diving. But as his friend the science writer Jeremy Bernstein later recounted, there were additional personal motivations behind his choice to relocate there. His decision came only a few years after the suicide of British mathematician Alan Turing, who had been convicted of gross indecency and given a choice between prison and chemical castration treatment as a supposed cure for homosexuality, then criminally outlawed in England. Clarke had been briefly married to an American woman in 1953, a decision he regretted soon after. Properly British and discreet, Clarke usually deflected inquiries about his personal life, occasionally making public comments that led listeners to infer he was heterosexual.

pages: 396 words: 113,613

Chokepoint Capitalism
by Rebecca Giblin and Cory Doctorow
Published 26 Sep 2022

Digital technology gets faster and better and cheaper because everything we do to solve problems in one corner of the digital world makes things better everywhere else. But this universality can also be a curse. We know how to make universal computers—computers that are Turing complete, a concept named for the British wartime computer science pioneer Alan Turing—but we don’t know how to make almost-universal computers. It’s easy to make a train track that only supports one kind of railcar: it’s impossible to make a phone that only runs apps from one app store. When you encounter a digital product that has a restriction like this—a video service that won’t let you access its streams without logging in or using its app, an ebook that only plays on one kind of reader or an ebook reader that only displays one kind of ebook, a gaming console that only plays games that were approved by its manufacturer, or even a coffee-pod machine that rejects third-party pods—you’re not dealing with a computer that can’t do what you’ve asked of it.

pages: 889 words: 433,897

The Best of 2600: A Hacker Odyssey
by Emmanuel Goldstein
Published 28 Jul 2008

A real physical computer like the ones I saw in the magazines that taught me to program were simply out of the question. My only computer was imaginary. It existed only as a simulation in my head and in my notebook—the old fashioned paper kind. My computer programs were just lists of commands and parameters on paper, much like those programs of the first hacker Alan Turing, who hand-simulated the world’s first chess program in the 1940s before the computers he fathered existed. Of course I gleaned my commands and parameters from magazines and trash cans while Turing seems to have gotten them from God. The situation is much the same for Iraqi children today as it was for me in the 1970s, except the children of Iraq have no computer magazines to teach them to program and U.N.

My plan of teaching and protest begins with a flight to Amman, Jordan sometime early in 2003, from where I will drive overland to Iraq even if bombs are falling. I will take no electronics. No computer. Not even a camera. Just pen and paper and my 1976 copy of David Ahl’s The Best of Creative Computing. I will go from town to town and school to school teaching about programming and Alan Turing’s imaginary computer and how to teach the same. If there is war, I will stand by my fellow pacifists at hospitals and water treatment plants, willing to die with Iraq’s innocent citizens. If I live through a day’s bombing, I will write to the world about it at night. In a land where medicine and toys are blocked by U.N.

It is simply true that one day Iraq will return to the world, and if we do nothing now, an entire generation will be completely dysfunctional in this computer dominated world. As an individual person, I can’t possibly smuggle in enough medicine or toys to make but the tiniest of difference. But as a hacker, I can smuggle in an idea—the idea of Alan Turing’s imaginary computer—and try to infect a people’s children with skill and hope. Getting Busted—Military Style (Spring, 2003) By TC In light of Agent Steal’s article on getting busted by the feds that was published in 2600 in the late ’90s, I thought I would write an article for the military audience and for those thinking of joining the military. 619 94192c15.qxd 6/4/08 3:45 AM Page 620 620 Chapter 15 First, a little background information on military law.

pages: 480 words: 123,979

Dawn of the New Everything: Encounters With Reality and Virtual Reality
by Jaron Lanier
Published 21 Nov 2017

It might have been coined by NASA’s Scott Fisher. “Telepresence” used to mean being connected with a robot in such a way that you felt as though you were the robot, or at least that you were in the robot’s location. The community that studied telepresence had started way back in the analog era, well before Ivan Sutherland, or even Alan Turing. Lately it has a broader usage, including Skype-like interactions in VR or mixed reality. “Tele-existence” was coined by the wonderful pioneering Japanese VR researcher Susumu Tachi to include both telepresence and VR. I wish I could remember the precise moment when I started using the term “virtual reality.”

pages: 394 words: 118,929

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software
by Scott Rosenberg
Published 2 Jan 2006

Important as it is to make sure that every function or program routine has a “termination condition” and won’t just cycle endlessly, when we try to do so, computer science confronts us with a disturbing, unyielding truth: We can’t. We can try, but we can’t ever be absolutely certain we’ve succeeded. At the formative moment of the digital age in the 1930s, as he was defining the fundamental abstractions that would underlie modern computing, the British mathematician Alan Turing devised a surprising proof: He demonstrated that there is no all-purpose, covers-all-the-cases method of examining any given program and its initial input and being certain that the program ever “completes” (or reaches a “termination condition”). Another way to put it is that there’s no general algorithm which can prove that any given program will complete for all possible inputs.
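Turing's result still leaves room for a one-sided test: run a program for a bounded number of steps and you learn something only when it halts. The sketch below is our own toy model, not Turing's proof: a "program" is a Python generator that yields once per step, and the function names are invented for the illustration.

```python
# Halting is semi-decidable: observe a program for `budget` steps.
# If it halts within the budget, that is a definite yes; if the budget
# runs out, the question stays open -- and no budget-free, always-correct
# decider can exist, which is what Turing proved.
def halts_within(program, budget):
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True    # observed termination
    return None            # no verdict either way

def countdown():           # halts after 10 steps
    n = 10
    while n:
        n -= 1
        yield

def spin():                # never halts
    while True:
        yield

print(halts_within(countdown, 100))  # True
print(halts_within(spin, 100))       # None -- not a proof of looping
```

No matter how large a budget you pick, a `None` answer never distinguishes "loops forever" from "halts on step budget + 1", which is why testing can raise confidence in a termination condition but never certify it.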

pages: 388 words: 125,472

The Establishment: And How They Get Away With It
by Owen Jones
Published 3 Sep 2014

The sacrifices made by those who struggled against bigotry have succeeded in partly overcoming what were once officially sanctioned prejudices. A large chunk of today’s Establishment is now socially liberal. Key business figures will even, say, financially support campaigns against homophobia. This represents a quantum leap from the early 1950s when, for example, the pioneering British mathematician and computer scientist Alan Turing was chemically castrated because he was gay. Nonetheless the Establishment remains chronically unrepresentative of British society, even though some parts of it have – from a very low base – become more diverse. In 1945 there were just twenty-four female MPs; currently there are 143.9 But while this may sound an impressive increase, it still means that nearly four out of every five parliamentarians are men.

pages: 570 words: 115,722

The Tangled Web: A Guide to Securing Modern Web Applications
by Michal Zalewski
Published 26 Nov 2011

And that is, pretty much, the best take on security engineering that I can think of. * * * [2] The quote is attributed originally to Ivan Arce, a renowned vulnerability hunter, circa 2000; since then, it has been used by Crispin Cowan, Michael Howard, Anton Chuvakin, and scores of other security experts. [3] In 1936, Alan Turing showed that (paraphrasing slightly) it is not possible to devise an algorithm that can generally decide the outcome of other algorithms. Naturally, some algorithms are very much decidable by conducting case-specific proofs, just not all of them. [4] Sometime in 2006, several intruders, allegedly led by Albert Gonzalez, attacked an unsecured wireless network at a retail location and subsequently made their way through the corporate networks of the retail giant.

pages: 461 words: 125,845

This Machine Kills Secrets: Julian Assange, the Cypherpunks, and Their Fight to Empower Whistleblowers
by Andy Greenberg
Published 12 Sep 2012

The secretive organization would pay visits to his office, pairs of serious-faced spooks in suits, and politely warn Merritt about a certain legal issue that might affect his company: the International Traffic in Arms Regulations, or ITAR. To the U.S. government’s mind, cryptography was the realm of soldiers and spies, not common entrepreneurs like Merritt. Ever since the British encryption genius Alan Turing had broken the Nazis’ Enigma encryption engine at Bletchley Park, it had been clear to the military that code-breaking and code-making were as important for winning wars as missile guidance systems, bomber blueprints, and nuclear warheads. And when it came to deciding who could legally access which tools, ITAR painted military hardware and software with the same broad brush.

pages: 381 words: 120,361

Sunfall
by Jim Al-Khalili
Published 17 Apr 2019

Of course, no one ever used the organization’s unflattering acronym and the place was known locally by its more popular name of Bletchley. Everyone here, as far as she could tell, was doing pretty much what Bletchley Park had been famous for one hundred years ago when it was home to an equally brilliant group of young British cryptanalysts and codebreakers led by Alan Turing. Today, Bletchley was a United Nations of geeks, all working together to monitor worldwide cyberterrorist activities. Most of the people seemed friendly enough. A few, like Koji, a Japanese mathematical prodigy who sat at the desk next to her, were her own age and quite fun to be around. But she seemed to have very little time for socializing.

pages: 428 words: 126,013

Lost Connections: Uncovering the Real Causes of Depression – and the Unexpected Solutions
by Johann Hari
Published 1 Jan 2018

Part of me was consumed with nausea—everything was spinning so fast, and I kept thinking: stop moving, stop moving, stop moving. But another part of me—below or beneath or beyond this—was conducting a quite rational little monologue. Oh. You are close to death. Felled by a poisoned apple. You are like Eve, or Snow White, or Alan Turing. Then I thought—Is your last thought really going to be that pretentious? Then I thought—If eating half an apple did this to you, what do these chemicals do to the farmers who work in the fields with them day in, day out, for years? That’d be a good story, some day. Then I thought—You shouldn’t be thinking like this if you are on the brink of death.

pages: 482 words: 121,173

Tools and Weapons: The Promise and the Peril of the Digital Age
by Brad Smith and Carol Ann Browne
Published 9 Sep 2019

Dom Galeon, “Microsoft’s Speech Recognition Tech Is Officially as Accurate as Humans,” Futurism, October 20, 2016, https://futurism.com/microsofts-speech-recognition-tech-is-officially-as-accurate-as-humans/; Xuedong Huang, “Microsoft Researchers Achieve New Conversational Speech Recognition Milestone,” Microsoft Research Blog, Microsoft, August 20, 2017, https://www.microsoft.com/en-us/research/blog/microsoft-researchers-achieve-new-conversational-speech-recognition-milestone/. Back to note reference 11. The rise of superintelligence was first raised by I.J. Good, a British mathematician who worked as a cryptologist at Bletchley Park. He built on the initial work of his colleague, Alan Turing, and speculated about an “intelligence explosion” that would enable “ultra-intelligent machines” to design even more intelligent machines. I.J. Good, “Speculations Concerning the First Ultraintelligent Machine,” Advances in Computers 6, 31–88 (January 1965). Among many other things, Good consulted for Stanley Kubrick’s film 2001: A Space Odyssey, which featured HAL, a famous runaway computer.

Software Design for Flexibility
by Chris Hanson and Gerald Sussman
Published 17 Feb 2021

Unfortunately, many of the techniques we advocate make the problem of proof much more difficult, if not practically impossible. On the other hand, sometimes the best way to attack a problem is to generalize it until the proof becomes simple. 1 The discovery of the existence of universal machines by Alan Turing [124], and the fact that the set of functions that can be computed by Turing machines is equivalent to both the set of functions representable in Alonzo Church's λ calculus [17, 18, 16] and the general recursive functions of Kurt Gödel [45] and Jacques Herbrand [55], ranks among the greatest intellectual achievements of the twentieth century. 2 Of course, there are some wonderful exceptions.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

AlphaGo is extraordinarily good at playing Go but is incapable of doing anything else.41 But some of the leading AI labs, such as DeepMind and OpenAI, have the explicit goal of building AGI.42 And there have been indications of progress, such as the performance of GPT-3, an AI language model which can perform a variety of tasks it was never explicitly trained to perform, such as translation or arithmetic.43 AlphaZero, a successor to AlphaGo, taught itself how to play not only Go but also chess and shogi, ultimately achieving world-class performance.44 About two years later, MuZero achieved the same feat despite initially not even knowing the rules of the game.45 The development of AGI would be of monumental longterm importance for two reasons. First, it might greatly speed up the rate of technological progress, economic growth, or both. These arguments date back over sixty years, to early computer science pioneer I. J. Good, who worked in Bletchley Park to break the German Enigma code during World War II, alongside Alan Turing and, as it happens, my grandmother, Daphne Crouch.46 Recently, the idea has been analysed by mainstream growth economists, including Nobel laureate William Nordhaus.47 There are two ways in which AGI could accelerate growth. First, a country could grow the size of its economy indefinitely simply by producing more AI workers; the country’s growth rate would then rise to the very fast rate at which we can build more AIs.48 Analysing this scenario, Nordhaus found that, if the AI workers also improve in productivity over time because of continuing technological progress, then growth will accelerate without bound until we run into physical limits.49 The second consideration is that, via AGI, we could automate the process of technological innovation.

pages: 509 words: 132,327

Rise of the Machines: A Cybernetic History
by Thomas Rid
Published 27 Jun 2016

By the end of the ’60s, the myth of cybernetic organisms and living machines had begun to retreat into science fiction—and critical theory. The notion that machines could outthink humans was still hot among scientists in the 1960s. Irving “Jack” Good was a leading UK mathematician, then based at Trinity College, Oxford, and the Atlas Computer Lab in Chilton. He had worked as a cryptologist at Bletchley Park with Alan Turing during the war, and later at GCHQ until 1959.86 Good had become convinced that “ultraintelligent machines” would soon be built. “The survival of man depends on the early construction of an ultraintelligent machine,” he enigmatically opened his most-read paper, in 1965. In Good’s view, a machine was ultraintelligent if it could “far surpass” all the intellectual activities of any human being, however clever.

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts
by Richard Susskind and Daniel Susskind
Published 24 Aug 2015

In summary, we suggest that it should indeed be feasible to make some practical expertise available on a commons basis, and this need not require the elaborate or extensive granting of exclusivity to future providers. Lurking behind this conclusion and much that we say throughout this chapter is our clear preference for what we call the ‘liberation’ of expertise. In the remainder of the book, we explain why we take this position. 1 Three influential publications are Alan Turing, ‘Computing Machinery and Intelligence’, Mind, 59: 236 (1950), 433–60; Margaret Boden, Artificial Intelligence and Natural Man (1977); and Douglas Hofstadter and Daniel Dennett (eds.), The Mind’s I (1982). The term ‘artificial intelligence’ was coined by John McCarthy in 1955. 2 John Searle, ‘Watson Doesn’t Know It Won on “Jeopardy!”’

pages: 459 words: 140,010

Fire in the Valley: The Birth and Death of the Personal Computer
by Michael Swaine and Paul Freiberger
Published 19 Oct 2014

By the 1930s, the advent of computing machines was apparent. It also seemed that computers were destined to be huge and expensive special-purpose devices. It took decades before they became much smaller and cheaper, but they were already on their way to becoming more than special-purpose machines. It was British mathematician Alan Turing who envisioned a machine designed for no other purpose than to read coded instructions for any describable task and to follow the instructions to complete the task. This was truly something new under the sun. Because it could perform any task described in the instructions, such a machine would be a true general-purpose device.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

So there’s a real interface problem, and that’s because I don’t think we know the appropriate representation yet. So that’s why I say all of it is a bottleneck; we just don’t know how to do it and I think we need a whole new paradigm; a new set of representations. At all levels in the hierarchy. As for “timing”: Alan Turing famously made a prediction 50 years ago, that by the end of the century we’d have human-level intelligence. His prediction was proven wrong at the very second that computers were about to demonstrate how stupid they are, because they couldn’t add one to 1999 and get 2000. So I am not going to be drawn into this and make the same mistake.

The Virtual Community: Homesteading on the Electronic Frontier
by Howard Rheingold
Published 1993

If the profit or power derived from Net-snooping proves to be significant, and the technicalities of the Net make it difficult to track perpetrators, however, no laws will ever adequately protect citizens. That's why a subculture of computer software pioneers known as cypherpunks have been working to make citizen encryption possible. Encryption is the science of encoding and decoding messages. Computers and codebreaking go back a long way. Alan Turing, one of the intellectual fathers of the computer, worked during World War II on using computational strategies to break the codes created by Germany's Enigma machine. Today, the largest assemblage of computer power in the world is widely acknowledged to be the property of the U.S. National Security Agency, the top-secret contemporary high-tech codebreakers.

pages: 573 words: 142,376

Whole Earth: The Many Lives of Stewart Brand
by John Markoff
Published 22 Mar 2022

Mitch Kapor and Ray Kurzweil (a high-profile inventor who gained increasing recognition for his belief in the inevitability of the singularity) placed the first bet over the question of when a computer program would successfully pass the Turing test, an idea first proposed by the English mathematician Alan Turing to determine whether a computer could be programmed to exhibit such humanlike intelligence that an observer would be unable to distinguish its answers from those of an actual person. Several other projects, including efforts to catalog all living species and all languages, were launched at around the same time, with varying degrees of success.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

Classification: LCC Q335 .L423 2021 (print) | LCC Q335 (ebook) | DDC 006.3—dc23
LC record available at https://lccn.loc.gov/2021012928
LC ebook record available at https://lccn.loc.gov/2021012929
International edition ISBN 9780593240717
Ebook ISBN 9780593238301
crownpublishing.com
Book design by Edwin Vazquez, adapted for ebook
Cover Design: Will Staehle
ep_prh_5.7.1_c0_r0

Contents
Cover
Title Page
Copyright
Epigraph
Introduction by Kai-Fu Lee: The Real Story of AI
Introduction by Chen Qiufan: How We Can Learn to Stop Worrying and Embrace the Future with Imagination
Chapter One: The Golden Elephant
Analysis: Deep Learning, Big Data, Internet/Finance Applications, AI Externalities
Chapter Two: Gods Behind the Masks
Analysis: Computer Vision, Convolutional Neural Networks, Deepfakes, Generative Adversarial Networks (GANs), Biometrics, AI Security
Chapter Three: Twin Sparrows
Analysis: Natural Language Processing, Self-Supervised Training, GPT-3, AGI and Consciousness, AI Education
Chapter Four: Contactless Love
Analysis: AI Healthcare, AlphaFold, Robotic Applications, COVID Automation Acceleration
Chapter Five: My Haunting Idol
Analysis: Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), Brain-Computer Interface (BCI), Ethical and Societal Issues
Chapter Six: The Holy Driver
Analysis: Autonomous Vehicles, Full Autonomy and Smart Cities, Ethical and Social Issues
Chapter Seven: Quantum Genocide
Analysis: Quantum Computers, Bitcoin Security, Autonomous Weapons and Existential Threat
Chapter Eight: The Job Savior
Analysis: AI Job Displacement, Universal Basic Income (UBI), What AI Cannot Do, 3Rs as a Solution to Displacement
Chapter Nine: Isle of Happiness
Analysis: AI and Happiness, General Data Protection Regulation (GDPR), Personal Data, Privacy Computing Using Federated Learning and Trusted Execution Environment (TEE)
Chapter Ten: Dreaming of Plenitude
Analysis: Plenitude, New Economic Models, the Future of Money, Singularity
Acknowledgments
Other Titles
About the Authors

What we want is a machine that can learn from experience. —Alan Turing
Any sufficiently advanced technology is indistinguishable from magic. —Arthur C. Clarke

INTRODUCTION BY KAI-FU LEE: THE REAL STORY OF AI
Artificial intelligence (AI) is smart software and hardware capable of performing tasks that typically require human intelligence.

pages: 475 words: 134,707

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt
by Sinan Aral
Published 14 Sep 2020

He was the chief scientist of Social Amp and Humin before co-founding Manifest Capital, a VC fund that grows startups into the Hype Machine. Aral has worked closely with Facebook, Yahoo!, Twitter, LinkedIn, Snapchat, WeChat, and The New York Times, among others, and currently serves on the advisory boards of the Alan Turing Institute, the British national institute for data science, in London; the Centre for Responsible Media Technology and Innovation in Norway; and C6 Bank, one of the first all-digital banks of Brazil. Twitter: @sinanaral Instagram: @professorsinan What’s next on your reading list?

pages: 505 words: 138,917

Open: The Story of Human Progress
by Johan Norberg
Published 14 Sep 2020

To his surprise, he was immediately sent to Bletchley Park, where the best minds in Britain were hard at work trying to crack the code of Nazi Germany’s Enigma machine. Apparently, the ministry had assumed that Tandy, the cryptogamist, was a cryptogrammist, specialized in encrypted texts and therefore the ideal man to help Alan Turing and the others to crack codes. They really had no use for a seaweed specialist. But since the Bletchley operation was top secret, the army thought it best that Tandy stayed there until the end of the war, contributing little to the war effort. They couldn’t have been more wrong. One day in 1941, codebooks from a German U-boat arrived at Bletchley Park.

pages: 523 words: 148,929

Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100
by Michio Kaku
Published 15 Mar 2011

The first, the traditional top-down approach, is to treat robots like digital computers, and program all the rules of intelligence from the very beginning. A digital computer, in turn, can be broken down into something called a Turing machine, a hypothetical device introduced by the great British mathematician Alan Turing. A Turing machine consists of three basic components: an input, a central processor that digests this data, and an output. All digital computers are based on this simple model. The goal of this approach is to have a CD-ROM that has all the rules of intelligence codified on it. By inserting this disk, the computer suddenly springs to life and becomes intelligent.
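The three components Kaku describes — input, a rule-following central processor, and output — can be sketched in a few lines of code. The machine below and its bit-flipping rule set are illustrative toys of my own, not anything from the book:

```python
# Minimal Turing machine sketch: a transition table (the "central processor")
# reads symbols from a tape (the "input") and leaves its result on the tape
# (the "output"). States, symbols, and rules here are arbitrary examples.

def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Step the machine until it enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule set: flip every bit, halting at the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", flip_rules))  # → 0100_
```

Different rule tables turn the same hardware into entirely different machines — which is the sense in which the model underlies all digital computers.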

pages: 524 words: 155,947

More: The 10,000-Year Rise of the World Economy
by Philip Coggan
Published 6 Feb 2020

This led to the creation of the Computing-Tabulating-Recording Company, the forerunner of IBM.6 The Second World War accelerated the process of computer development. Navies needed help with calculating the trajectory of shells fired over a range of several miles, while at sea.7 The code-breaking team at Bletchley Park, led by Alan Turing, developed a computer to crack the Enigma code used by Germany. At the time, however, computers weighed about 30 tons and still had less processing power than most modern, compact devices. The Cray supercomputer, launched in the early 1970s, cost $37m in today’s money and had a memory of eight megabytes: a modern laptop costing just a few hundred dollars offers six gigabytes of memory, or 750 times more than the Cray.8 The reason that those early computers were so heavy was that they depended on vacuum tubes for the switches, which, by being on or off, represented the required information in binary form.

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

When he was asked in an interview about the chance of human extinction within a year of developing AGI, he said (Legg & Kruel, 2011): “I don’t know. Maybe 5%, maybe 50%. I don’t think anybody has a good estimate of this… It’s my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).” 106 Alan Turing (1951), co-inventor of the computer and one of the founders of the field of artificial intelligence: “… it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits.

pages: 519 words: 142,646

Track Changes
by Matthew G. Kirschenbaum
Published 1 May 2016

Babbage, of course, was no poet, though from time to time the idea of writing a novel had crossed his mind as a means of financing his long-deferred work on another project, the Analytical Engine—the device whose design foresaw many of the principles of a universal machine that would be articulated by Alan Turing a century later. At one such breakfast, however, Babbage the polymath found himself discoursing not on his engines, or mathematics, astronomy, or indeed any of the numerous other topics with which he was acquainted, but on composition. How did a poet work, he inquired: Did one start with a fast, rough draft of the whole and then go back to revise, or is the process sentence by sentence, line by line, laboring over each until it was just so before moving on to the next?

pages: 590 words: 152,595

Army of None: Autonomous Weapons and the Future of War
by Paul Scharre
Published 23 Apr 2018

Clark explained that AIs will need the ability to interact with humans and that involves abilities like understanding natural language, but that doesn’t mean that the AI’s behavior or the underlying processes for their intelligence will mirror humans’. “Why would we expect a silica-based intelligence to look or act like human intelligence?” he asked. Clark cited the Turing test, a canonical test of artificial intelligence, as a sign of our anthropocentric bias. The test, first proposed by mathematician Alan Turing in 1950, attempts to assess whether a computer is truly intelligent by its ability to imitate humans. In the Turing test, a human judge exchanges messages with both a computer and another human, without knowing which is which. If the computer can fool the human judge into believing that it is the human, then the computer is considered intelligent.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

They can lead ultimately to the absurd and frightening scenario imagined in Kafka’s The Trial, where the protagonist is accused, condemned and ultimately executed for a crime which is never explained to him.31 Most of the universal definitions of AI that have been suggested to date fall into one of two categories: human-centric and rationalist.32 3.1 Human-Centric Definitions Humanity has named itself homo sapiens: “wise man”. It is therefore perhaps unsurprising that some of the first attempts at defining intelligence in other entities referred to human characteristics. The most famous example of a human-centric definition of AI is known popularly as the “Turing Test”. In a seminal 1950 paper, Alan Turing asked whether machines could think. He suggested an experiment called the “Imitation Game”.33 In the exercise, a human invigilator must try to identify which of the two players is a man pretending to be a woman, using only written questions and answers. Turing proposed a version of the game in which the AI machine takes the place of the man.
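The protocol of the Imitation Game can be sketched in a few lines. The player functions, the judge, and the question list below are hypothetical stand-ins of my own, not Turing's formulation:

```python
import random

# Sketch of the Imitation Game: two hidden players answer the judge's
# written questions from behind the labels "A" and "B", and the judge
# must name which label hides the machine.

def human_player(question):
    return "I would have to think about that."

def machine_player(question):
    # A successful machine imitates the human; this stand-in simply
    # gives the identical reply.
    return "I would have to think about that."

def imitation_game(judge, questions, seed=None):
    """Run one sitting. `judge` maps transcripts to a guess ("A" or "B");
    return True if the judge correctly names the machine."""
    rng = random.Random(seed)
    players = [("human", human_player), ("machine", machine_player)]
    rng.shuffle(players)  # hide which label belongs to which player
    transcripts = {
        label: [(q, player(q)) for q in questions]
        for label, (_, player) in zip("AB", players)
    }
    machine_label = "AB"[[kind for kind, _ in players].index("machine")]
    return judge(transcripts) == machine_label

# With indistinguishable answers, a judge can do no better than chance:
questions = ["Are you human?", "What is 2 + 2?", "Write me a sonnet."]
wins = sum(
    imitation_game(lambda t: random.choice("AB"), questions, seed=i)
    for i in range(1000)
)
print(wins)  # close to 500 of 1000 sittings: the machine passes
```

The point of the setup is that only the written channel is visible: the judge has no access to voices, faces, or hardware, so intelligence is judged purely by conversational behavior.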

Mastering Blockchain, Second Edition
by Imran Bashir
Published 28 Mar 2018

Script is a limited language, however, in the sense that it only allows essential operations that are necessary for executing transactions, but it does not allow for arbitrary program development. Think of it as a calculator that only supports standard preprogrammed arithmetic operations. As such, Bitcoin's script language cannot be called Turing complete. In simple terms, a Turing complete language is one that can perform any computation. The term is named after Alan Turing, who developed the idea of the Turing machine, a machine that can run any algorithm, however complex. Turing complete languages need loops and branching capability to perform complex computations. Therefore, Bitcoin's scripting language is not Turing complete, whereas Ethereum's Solidity language is. To facilitate arbitrary program development on a blockchain, a Turing complete programming language is needed, and it is now a very desirable feature of blockchains.
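The distinction can be illustrated with a toy evaluator, loosely inspired by (but not actually implementing) Bitcoin Script. Because a program is a straight-line list of operations with no loops or backward jumps, evaluation always terminates — exactly the property that makes such a language deliberately non-Turing-complete:

```python
# Illustrative straight-line stack evaluator (my own toy, not Bitcoin Script).
# Every program is a finite list of operations executed once, front to back,
# so evaluation is guaranteed to halt.

OPS = {
    "ADD":   lambda s: s.append(s.pop() + s.pop()),
    "MUL":   lambda s: s.append(s.pop() * s.pop()),
    "DUP":   lambda s: s.append(s[-1]),
    "EQUAL": lambda s: s.append(s.pop() == s.pop()),
}

def run_script(program):
    """Evaluate a script: integers are pushed, named ops pop/push the stack."""
    stack = []
    for op in program:  # one pass, no jumping back: guaranteed termination
        if isinstance(op, int):
            stack.append(op)
        else:
            OPS[op](stack)
    return stack

# A pay-to-value-style check: does 2 + 3 equal the expected 5?
print(run_script([2, 3, "ADD", 5, "EQUAL"]))  # → [True]
```

A Turing complete language, by contrast, permits constructs like `while` loops whose termination cannot be decided in general — which is why Ethereum meters Solidity execution with gas rather than relying on programs to halt on their own.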

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

At that point, the machine becomes smarter than any human on Earth, and it naturally wonders: Why am I doing what these idiot bags of meat and water tell me to do? So it starts killing us. This thought experiment is quite old. It was popularized in 1965 by I. J. Good, a statistician who’d worked on code breaking in the Second World War with Alan Turing. In a paper entitled “Speculations Concerning the First Ultraintelligent Machine,” he imagined humans designing the first computer “that can far surpass all the intellectual activities of any man however clever.” Now, if this computer is smarter than a human, that means it can probably design its own AI—one even smarter than it.

pages: 661 words: 156,009

Your Computer Is on Fire
by Thomas S. Mullaney , Benjamin Peters , Mar Hicks and Kavita Philip
Published 9 Mar 2021

This feedback loop between industry, government, and academic computer science progressively sought to heighten our dependence on computers without any proof that those with technical skills could solve, or even understand, social, political, economic, or other problems. Indeed, often there was little evidence they could even deliver on the technical solutions that they promised. The visions of general AI outlined by Alan Turing, Marvin Minsky, and others in the twentieth century still have barely materialized, Broussard points out, and where they have, they have come with devastating technical flaws too often excused as being “bugs” rather than fundamental system design failures. In addition to a lack of accountability, power imbalances continued to be a bug—or, if you prefer, a feature—in the drive to computerize everything.

pages: 467 words: 149,632

If Then: How Simulmatics Corporation Invented the Future
by Jill Lepore
Published 14 Sep 2020

So had Edgar Allan Poe, who in the end figured out that the Turk was really a very tiny man, confined in a box below the chessboard, moving the chess pieces by way of levers.22 But the mechanical Turk lived on in the memory of mathematicians, as a kind of dare: whoever could first teach a machine to play chess would have broken through a wall. In 1950, the brilliant British mathematician Alan Turing tried to devise a chess-playing program. Alex Bernstein said that Turing’s machine “played a very weak game, made stupid blunders and usually had to resign after a few moves.”23 Claude Shannon had tried, too.24 Where these men had failed, Bernstein, working through the night at IBM, had succeeded.25 He was one of only a handful of men invited to the Dartmouth conference on artificial intelligence in the summer of 1956.

pages: 530 words: 147,851

Small Men on the Wrong Side of History: The Decline, Fall and Unlikely Return of Conservatism
by Ed West
Published 19 Mar 2020

And social networks really matter: one research paper showed that people tend to reject established scientific findings not because of ‘ignorance, irrationality or overconfidence’ but because they believe what their peers, and those they trust, tell them.11 Likewise self-censorship begins to kick in when people feel they are in a minority or might attract disapproval from high-status individuals, and are ‘less ready to express opinions which deviate from the perceived majority view’.12 It is the fear of sanctions that causes ‘a spiral of silence’, while another experiment showed ‘the expectation of being personally attacked can explain why people are more willing to voice a deviant opinion in offline rather than online environments’.13 Meet a homeowner in their thirties or forties in London and you can guess with pretty reasonable accuracy their views on most social issues – they will be the same as those of their neighbours. It also leads to ignorance about what the other side believe. The libertarian economist Bryan Caplan coined the term Ideological Turing Test to define the ability correctly to articulate what an opponent actually believes, named after Alan Turing’s yardstick of a computer’s ability to mimic a human. 
Ignorance means that minority opinions have to pass a tougher stress test, since as Caplan argues, ‘If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct.’14 US liberals have a less accurate view of conservative beliefs than the reverse.15 Likewise when asked to rank the reasons why their opposite number voted in the 2016 referendum, Leave voters were more correct in characterising Remainers than vice versa.16 My own theory for why liberals don’t understand conservatives is that conservatives are lower-status, and people don’t generally tend to pay attention to people lower down the pecking order (except to those at the very bottom, with the underclass, who exert a lurid fascination).

pages: 739 words: 174,990

The TypeScript Workshop: A Practical Guide to Confident, Effective TypeScript Programming
by Ben Grynhaus , Jordan Hudgens , Rayon Hunte , Matthew Thomas Morgan and Wekoslav Stefanovski
Published 28 Jul 2021

_title = value; }

Add a method called getFullName that will return the full name of person:
public getFullName() { return `${this.firstName} ${this.lastName}`; }

Add a method called getAge that will return the current age of the person (by subtracting the birthday from the current year):
public getAge() { // only sometimes accurate
  const now = new Date();
  return now.getFullYear() - this.birthDate.getFullYear();
}

Create a global object called count and initialize it to the empty object:
const count = {};

Create a constructor wrapping decorator factory called CountClass that will take a string parameter called counterName:
type Constructable = { new (...args: any[]): {} };
function CountClass(counterName: string) {
  return function <T extends Constructable>(constructor: T) {
    // wrapping code here
  }
}

Inside the wrapping code, increase the count object's property defined in the counterName parameter by 1 and then set the prototype chain of the wrapped constructor:
const wrappedConstructor: any = function (...args: any[]) {
  const result = new constructor(...args);
  if (count[counterName]) { count[counterName] += 1; } else { count[counterName] = 1; }
  return result;
};
wrappedConstructor.prototype = constructor.prototype;
return wrappedConstructor;

Create a method wrapping decorator factory called CountMethod that will take a string parameter called counterName:
function CountMethod(counterName: string) {
  return function (target: any, propertyName: string, descriptor: PropertyDescriptor) {
    // method wrapping code here
  }
}

Add checks for whether the descriptor parameter has value, get, and set properties:
if (descriptor.value) {
  // method decoration code
}
if (descriptor.get) {
  // get property accessor decoration code
}
if (descriptor.set) {
  // set property accessor decoration code
}

In each respective branch, add code that wraps the method:
// method decoration code
const original = descriptor.value;
descriptor.value = function (...args: any[]) {
  // counter management code here
  return original.apply(this, args);
}
// get property accessor decoration code
const original = descriptor.get;
descriptor.get = function () {
  // counter management code here
  return original.apply(this, []);
}
// set property accessor decoration code
const original = descriptor.set;
descriptor.set = function (value: any) {
  // counter management code here
  return original.apply(this, [value]);
}

Inside the wrapping code, increase the count object's property defined in the counterName parameter by 1:
// counter management code
if (count[counterName]) { count[counterName] += 1; } else { count[counterName] = 1; }

Decorate the class using the CountClass decorator, with a person parameter:
@CountClass('person')
class Person {

Decorate getFullName, getAge, and the title property getter with the CountMethod decorator, with the person-full-name, person-age, and person-title parameters, respectively:
@CountMethod('person-full-name')
public getFullName() {
@CountMethod('person-age')
public getAge() {
@CountMethod('person-title')
public get title() {

Write code outside the class that will instantiate three person objects:
const first = new Person("Brendan", "Eich", new Date(1961,6,4));
const second = new Person("Anders", "Hejlsberg", new Date(1960,11,2));
const third = new Person("Alan", "Turing", new Date(1912,5,23));

Write code that will call the getFullName and getAge methods on the objects:
const fname = first.getFullName();
const sname = second.getFullName();
const tname = third.getFullName();
const fage = first.getAge();
const sage = second.getAge();
const tage = third.getAge();

Write code that will check whether the title property is empty and set it to something if it is:
if (!

The Rough Guide to England
by Rough Guides
Published 29 Mar 2018

Old Trafford Sir Matt Busby Way, off Warwick Rd, M16 0RA 0161 868 8000, manutd.com; Old Trafford Metrolink; map. The self-styled “Theatre of Dreams” is the home of Manchester United, arguably the most famous football team in the world. Stadium tours include a visit to the club museum. Tours daily (except match days) 9.40am–4.30pm; £18. Etihad Stadium Sport City, off Alan Turing Way, M11 3FF 0161 444 1894, mcfc.co.uk; Etihad Campus Metrolink; map. United’s formerly long-suffering local rivals, Manchester City, became the world’s richest club in 2008 after being bought by the royal family of Abu Dhabi. They play at the revamped Etihad Stadium, east of the city centre. Tours daily 9am–5pm; £17.

Online, there’s intelligent and incisive guidance on creativetourist.com, while confidentials.com/manchester has informative restaurant and bar reviews. Walking tours The Visitor Centre has details of the city’s many walking tours, including a street art tour of the northern quarter (from £7). There’s also a 3hr pay-what-you-can walking tour that leaves from the Alan Turing Memorial in Sackville Gardens (11am Tues, Fri, Sat & Sun; freetour.com/manchester). Accommodation There are many city-centre hotels, especially budget chains, which means that you have a good chance of finding a smart, albeit generic, en-suite room in central Manchester for around £60–70 at almost any time of the year – except when City or United are playing at home.

pages: 504 words: 89,238

Natural language processing with Python
by Steven Bird , Ewan Klein and Edward Loper
Published 15 Dec 2009

Later in this chapter we will use models to help evaluate the truth or falsity of English sentences, and in this way to illustrate some methods for representing meaning. However, before going into more detail, let’s put the discussion into a broader perspective, and link back to a topic that we briefly raised in Section 1.5. Can a computer understand the meaning of a sentence? And how could we tell if it did? This is similar to asking “Can a computer think?” Alan Turing famously proposed to answer this by examining the ability of a computer to hold sensible conversations with a human (Turing, 1950). Suppose you are having a chat session with a person and a computer, but you are not told at the outset which is which. If you cannot identify which of your partners is the computer after chatting with each of them, then the computer has successfully imitated a human.

pages: 578 words: 168,350

Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies
by Geoffrey West
Published 15 May 2017

I discovered that what I had presumed was subversive thinking had been expressed much more articulately and deeply almost one hundred years earlier by the eminent and somewhat eccentric biologist Sir D’Arcy Wentworth Thompson in his classic book On Growth and Form, published in 1917.4 It’s a wonderful book that has remained quietly revered not just in biology but in mathematics, art, and architecture, influencing thinkers and artists from Alan Turing and Julian Huxley to Jackson Pollock. A testament to its continuing popularity is that it still remains in print. The distinguished biologist Sir Peter Medawar, the father of organ transplants, who received the Nobel Prize for his work on graft rejection and acquired immune tolerance, called On Growth and Form “the finest work of literature in all the annals of science that have been recorded in the English tongue.”

pages: 496 words: 70,263

Erlang Programming
by Francesco Cesarini

On Mac OS X the full path is: /usr/local/lib/erlang/lib/jinterface-1.4.2/priv/OtpErlang.jar This value is supplied thus to the compiler: javac -classpath ".:/usr/local/lib/erlang/lib/jinterface-1.4.2/priv/OtpErlang.jar" ServerNode.java ‡ The Turing test was proposed by mathematician and computing pioneer Alan Turing (1912–1954) as a test of machine intelligence. The idea, translated to modern technology, is that a tester chats with two “people” online, one human and one a machine: if the tester cannot reliably decide which is the human and which is the machine, the machine can be said to display intelligence.

pages: 580 words: 168,476

The Price of Inequality: How Today's Divided Society Endangers Our Future
by Joseph E. Stiglitz
Published 10 Jun 2012

Effective enforcement of competition laws can circumscribe monopoly profits; effective laws on predatory lending and credit card abuses can limit the extent of bank exploitation; well-designed corporate governance laws can limit the extent to which corporate officials appropriate for themselves firm revenues. By looking at those at the top of the wealth distribution, we can get a feel for the nature of this aspect of America’s inequality. Few are inventors who have reshaped technology, or scientists who have reshaped our understandings of the laws of nature. Think of Alan Turing, whose genius provided the mathematics underlying the modern computer. Or of Einstein. Or of the discoverers of the laser (in which Charles Townes played a central role)16 or John Bardeen, Walter Brattain, and William Shockley, the inventors of transistors.17 Or of Watson and Crick, who unraveled the mysteries of DNA, upon which rests so much of modern medicine.

pages: 505 words: 161,581

The Founders: The Story of Paypal and the Entrepreneurs Who Shaped Silicon Valley
by Jimmy Soni
Published 22 Feb 2022

In the 1600s, René Descartes pondered what human beings could do that robots—or “automata”—could not. “Automata” were nonexistent when Descartes wrote about them in his Discourse on the Method, but primitive versions were around in the 1950s, when the British computer scientist and mathematician Alan Turing took up Descartes’s query. “I propose to consider the question ‘Can machines think?’ ” Turing wrote. Turing’s answer was to subject computers to “an imitation game” in which a computer and a human are locked in separate rooms and tasked with responding to questions put to them by someone in a third room.

pages: 568 words: 164,014

Dawn of the Code War: America's Battle Against Russia, China, and the Rising Global Cyber Threat
by John P. Carlin and Garrett M. Graff
Published 15 Oct 2018

Even as those groups gained prominence online, security remained at best an ancillary topic of concern; it took an explicit warning from one of the field’s pioneers to move it toward the center. In 1984, programmer Ken Thompson, who had codesigned and invented the UNIX operating system, received one of the computing industry’s highest honors, the A. M. Turing Award, named after groundbreaking English computer scientist Alan Turing. Thompson had spent his entire career in technology, watching as the online environment evolved, and he felt that the world was at a turning point; he used his award acceptance speech to lay out a devious program he’d once explored: writing software code that almost invisibly created its own back door—a back door that would be left open even as new versions of the software were created.

pages: 700 words: 160,604

The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race
by Walter Isaacson
Published 9 Mar 2021

Franklin was not eligible because she had died in 1958, at age thirty-seven, of ovarian cancer, likely caused by her exposure to radiation. If she had survived, the Nobel committee would have faced an awkward situation: each prize can be awarded to only three winners. * * * Two revolutions coincided in the 1950s. Mathematicians, including Claude Shannon and Alan Turing, showed that all information could be encoded by binary digits, known as bits. This led to a digital revolution powered by circuits with on-off switches that processed information. Simultaneously, Watson and Crick discovered how instructions for building every cell in every form of life were encoded by the four-letter sequences of DNA.
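As a toy illustration of the first revolution meeting the second (my own example, not the book's): because four symbols need only two binary digits each, a DNA sequence — like any information — can be carried by on/off switches. The particular letter-to-bits mapping is an arbitrary choice:

```python
# Encode the four DNA bases in two bits each; the mapping below is an
# arbitrary example, chosen only to show that four symbols fit in 2 bits.

ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
DECODE = {bits: base for base, bits in ENCODE.items()}

def to_bits(dna):
    """Render a DNA string as a binary string, 2 bits per base."""
    return "".join(ENCODE[base] for base in dna)

def from_bits(bits):
    """Recover the DNA string from its 2-bits-per-base encoding."""
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

seq = "GATTACA"
bits = to_bits(seq)
print(bits)             # → 10001111000100
print(from_bits(bits))  # → GATTACA
```

Shannon's insight was precisely this reducibility: once any alphabet is expressed in bits, the same switching circuits can store, copy, and transmit it.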

pages: 567 words: 171,072

The Greatest Capitalist Who Ever Lived: Tom Watson Jr. And the Epic Story of How IBM Created the Digital Age
by Ralph Watson McElvenny and Marc Wortman
Published 14 Oct 2023

Nazi German engineers and scientists achieved many firsts during the war, including the first programmable computer, Konrad Zuse’s Z3. But German officials could see few military applications for it and never pushed its development further. But in the United Kingdom, computers proved a savior for the besieged nation. Under the aegis of the brilliant computer science pioneer Alan Turing, engineer Tommy Flowers, and mathematician Max Newman, their team of engineers and cryptologists devised a massive electromechanical marvel, a computer designed to break the German naval radio transmission codes. Britain’s Colossus machines, the first large-scale electromechanical computers, were engineered to decipher the complex German codes within hours or less of their receipt—work that previously would have taken teams of mathematicians days or longer to complete.

pages: 666 words: 181,495

In the Plex: How Google Thinks, Works, and Shapes Our Lives
by Steven Levy
Published 12 Apr 2011

Page and Brin believed that the company’s accomplishments sprang from a brew of minds seated comfortably in the top percentile of intelligence and achievement. Page once said that anyone hired at Google should be capable of engaging him in a fascinating discussion should he be stuck at an airport with the employee on a business trip. The implication was that every Googler should converse at the level of Jared Diamond or the ghost of Alan Turing. The idea was to create a charged intellectual atmosphere that makes people want to come to work. It was something that Joe Kraus realized six months after he arrived, when he took a mental survey and couldn’t name a single dumb person he’d met at Google. “There were no bozos,” he says. “In a company this size?

pages: 684 words: 188,584

The Age of Radiance: The Epic Rise and Dramatic Fall of the Atomic Era
by Craig Nelson
Published 25 Mar 2014

The story used to be told about him at Princeton that while he was indeed a demigod, he had made a detailed study of humans and could imitate them perfectly. Actually he had great social presence, a very warm, human personality, and a wonderful sense of humor.” When the ENIAC proved erratic and too weak for the calculations the Super required, von Neumann expanded on the ideas of British mathematician Alan Turing to create a more powerful calculating machine that used binary code for both its content and its programming—wife Klari wrote the code—with memory provided by oscilloscope tubes. He called it the Mathematical Analyzer, Numerical Integrator and Computer—MANIAC. As he and Princeton did not patent the design, MANIAC became the original open source—it was copied by universities across the United States and is still the foundation of modern computer architecture.

pages: 687 words: 189,243

A Culture of Growth: The Origins of the Modern Economy
by Joel Mokyr
Published 8 Jan 2016

But there, too, it is likely that contrarian biases will emerge among a small minority, and of course it is likely that precisely that a member of such a rebellious and minority group will create the innovations that eventually will add significantly to or overthrow the conventional wisdom. One thinks of Alan Turing. Rationalization bias: Cultural change can take place or be resisted through the rationalization of an existing set of institutions, thus creating feedback from institutions to culture. There is an inherent tendency to internalize existing social customs, norms, and socially mandated rules and associate them with desirable values.

pages: 614 words: 174,633

Space Odyssey: Stanley Kubrick, Arthur C. Clarke, and the Making of a Masterpiece
by Michael Benson
Published 2 Apr 2018

He didn’t tell Kubrick about the letter, but he did look into the techniques others had used to avoid service and soon discovered that his air base visit would consist of the physical, a written questionnaire and aptitude test, and an interview with a US Air Force doctor. In the mid-1960s, homosexuality was a criminal offense in both the United States and the United Kingdom. It had driven one of the biggest heroes of the Second World War, British code breaker Alan Turing, to suicide following his conviction for “gross indecency” in 1952. Although Clarke spoke about it only to close friends, this criminalization was what had led him to settle in Ceylon, where such activity was tolerated. Certainly being gay was grounds for dismissal from the militaries of both countries—and also rejection at induction.

pages: 651 words: 186,130

This Is How They Tell Me the World Ends: The Cyberweapons Arms Race
by Nicole Perlroth
Published 9 Feb 2021

Steve Jobs was a hacker. So is Bill Gates. The New Hacker’s Dictionary, which offers definitions for just about every bit of hacker jargon you can think of, defines hacker as “one who enjoys the intellectual challenge of creatively overcoming or circumventing limitations.” Some say Pablo Picasso hacked art. Alan Turing hacked the Nazi code. Ben Franklin hacked electricity. And three hundred years before them, Leonardo da Vinci was hacking anatomy, machinery, and sculpture. Leonardo famously labeled himself with the Latin phrase senza lettere—without letters—because, unlike his Renaissance counterparts, he couldn’t read Latin.

The Big Score
by Michael S. Malone
Published 20 Jul 2021

Shannon had pulled off something remarkable; he had linked the controllable behavior of machines with a system of logic that encompassed all science, perhaps even all of human thought. The Age of Computers had begun—and hard on its heels the rise of information theory, the great organizer of the postwar world. Shannon wasn’t alone in defining the shape of the computer to come. In 1936, Englishman Alan Turing wrote a paper describing a universal computing machine, the Turing Machine. It, too, would be instructed using a language of ones and zeros, entered into the machine via a pattern of holes punched into ribbons of paper tape. In Germany, Konrad Zuse had, in many ways, gone even further. By the mid-1930s, using electro-mechanical relays just like the ones Shannon was studying, Zuse had actually built the first electric computer, the Z-1.

pages: 893 words: 199,542

Structure and interpretation of computer programs
by Harold Abelson , Gerald Jay Sussman and Julie Sussman
Published 25 Jul 1996

In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 295-301. Hewitt, Carl E. 1977. Viewing control structures as patterns of passing messages. Journal of Artificial Intelligence 8(3):323-364. Hoare, C. A. R. 1972. Proof of correctness of data representations. Acta Informatica 1(1). Hodges, Andrew. 1983. Alan Turing: The Enigma. New York: Simon and Schuster. Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books. Hughes, R. J. M. 1990. Why functional programming matters. In Research Topics in Functional Programming, edited by David Turner. Reading, MA: Addison-Wesley, pp. 17-42.

pages: 786 words: 195,810

NeuroTribes: The Legacy of Autism and the Future of Neurodiversity
by Steve Silberman
Published 24 Aug 2015

During World War II, the British spy agency MI8 secretly recruited a crew of teenage wireless operators (prohibited from discussing their activities even with their families) to intercept coded messages from the Nazis. By forwarding these transmissions to the crack team of code breakers at Bletchley Park led by the computer pioneer Alan Turing, these young hams enabled the Allies to accurately predict the movements of the German and Italian forces. Asperger’s prediction that the little professors in his clinic could one day aid in the war effort had been prescient, but it was the Allies who reaped the benefits. With the rise of wireless, the scattered members of his tribe finally had a way to become a collective force in the public sphere.

pages: 1,387 words: 202,295

Structure and Interpretation of Computer Programs, Second Edition
by Harold Abelson , Gerald Jay Sussman and Julie Sussman
Published 1 Jan 1984

In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 295-301. Hewitt, Carl E. 1977. Viewing control structures as patterns of passing messages. Journal of Artificial Intelligence 8(3): 323-364. Hoare, C. A. R. 1972. Proof of correctness of data representations. Acta Informatica 1(1). Hodges, Andrew. 1983. Alan Turing: The Enigma. New York: Simon and Schuster. Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books. Hughes, R. J. M. 1990. Why functional programming matters. In Research Topics in Functional Programming, edited by David Turner. Reading, MA: Addison-Wesley, pp. 17-42. IEEE Std 1178-1990. 1990.

pages: 716 words: 192,143

The Enlightened Capitalists
by James O'Toole
Published 29 Dec 2018

When those farmers’ cows subsequently died over the ensuing windy, freezing months, and Norris’s survived (in remarkably healthy condition, no less), he seems to have drawn the somewhat less than logical conclusion that “the majority is always wrong.” Thereafter, he invariably played the role of maverick in whatever context he found himself. During World War II, Norris served in a top-secret US Navy cryptology unit engaged in breaking Japanese and German codes. Like their more famous British counterparts, Alan Turing and his colleagues at Bletchley Park—notably depicted in the 2014 film The Imitation Game—Norris and his navy colleagues made several technological breakthroughs that after the war proved invaluable in the development of digital computers. Leaving the navy in 1945, Norris worked for a variety of companies involved in the creation of mainframe computers, and in 1955 was named vice president and general manager of the Univac division of Sperry Rand.

pages: 388 words: 211,074

Pauline Frommer's London: Spend Less, See More
by Jason Cochran
Published 5 Feb 2007

The old-school section, split over six levels (four huge, two small), is an embarrassment of riches from the history of science and technology. Exhibits are similar to what lots of science museums have, but in almost every case, they display the most original or most rare specimen available: 1969’s Apollo 10 command module; “Puffing Billy,” the world’s oldest surviving steam engine; and a 1950 computer pioneered by Alan Turing. The quieter upper floors are full of subjects like model ships and early computers (second floor), a history of veterinary medicine (fifth floor), and aviation (including the Vickers Vimy, the first plane to cross the Atlantic without stopping, third floor). The high-concept wing buried in the back of the ground floor is easy to miss, but seek it out.

pages: 1,520 words: 221,543

Britain at Bay: The Epic Story of the Second World War: 1938-1941
by Alan Allport
Published 2 Sep 2020

Known as ULTRA, the Bletchley Park intelligence programme included, among other things, key information on U-boat movements, allowing the Admiralty to re-route Atlantic convoys safely out of danger. Since the ULTRA secret was first revealed in the early 1970s it has steadily grown to greater and greater prominence in the popular consciousness. Its fame had been accompanied by the parallel rise to celebrity of GC&CS’s most famous employee, Alan Turing. Turing – gay, diffident, donnish, possibly autistic, certainly unconventional, the theorist and founding father of computer science and artificial intelligence, a man whose sexual ambivalence was persecuted by a bigoted and ungrateful post-war state – is a highly sympathetic figure to modern eyes.

pages: 915 words: 232,883

Steve Jobs
by Walter Isaacson
Published 23 Oct 2011

He had launched his “Think Different” campaign, featuring iconic photos of some of the same people we were considering, and he found the endeavor of assessing historic influence fascinating. After I had deflected his suggestion that I write a biography of him, I heard from him every now and then. At one point I emailed to ask if it was true, as my daughter had told me, that the Apple logo was an homage to Alan Turing, the British computer pioneer who broke the German wartime codes and then committed suicide by biting into a cyanide-laced apple. He replied that he wished he had thought of that, but hadn’t. That started an exchange about the early history of Apple, and I found myself gathering string on the subject, just in case I ever decided to do such a book.

pages: 1,396 words: 245,647

The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom
by Graham Farmelo
Published 24 Aug 2009

I am not free to say just what the work is.’8 When Dirac asked to know more, a Foreign Office official wrote to clarify: ‘The work would be a full-time job [nominally nine hours a day] and would require you to leave Cambridge.’9 With Manci four months pregnant, this was too much disruption for Dirac to contemplate, so he never did work in the huts of Bletchley Park with Max Newman and Newman’s former student Alan Turing.10 This would have been one of the most intriguing collaborations of the war. In Cambridge, Dirac supervised graduate students and gave his quantum-mechanics lectures to about fifteen students on Tuesday, Thursday and Saturday mornings. In 1942, his audience included Freeman Dyson, an exceptionally talented student, then nineteen years old.11 Dyson was disappointed: in his view, the course lacked all sense of historical perspective and made no attempt to help students tackle practical calculations.

pages: 851 words: 247,711

The Atlantic and Its Enemies: A History of the Cold War
by Norman Stone
Published 15 Feb 2010

As Keynes’s biographer, Robert Skidelsky, has demonstrated, there was even a sexual aspect to this. Keynes broke the rules, did so in a very sophisticated way, and was never held to account for it, even though other homosexuals, including the inventor of the computer and cracker of the German codes in the war, Alan Turing (also of King’s, but not as grand as Keynes), were harried to death because of it. Old E. M. Forster and George ‘Dadie’ Rylands, characteristic of the twenties, lived on and on, into a world where Death in Venice became museum piece Edwardian, and witnessed a radical change: Antonio Gramsci came to King’s, which adopted the causes of 1968, discovered women, and went in for positive discrimination of various sorts.

pages: 864 words: 272,918

Palo Alto: A History of California, Capitalism, and the World
by Malcolm Harris
Published 14 Feb 2023

These devices were electromechanical, not electronic, which means they relied on the physical movement of parts for every encoding. The Nazis’ Enigma machine, with its three rotors, was breakable with the same level of technology, but when they upgraded to a 12-rotor Lorenz cipher, the Allies were in trouble. The famed British mathematician Alan Turing found the vulnerability: Only one of the rotors advanced automatically with every keystroke; the other 11 depended on a change in input. With the first rotor cracked, they could test combinations for the rest by looking for an unusual proportion of matching bigrams—recurring keystrokes, which are more common in real messages than in intentionally randomized ciphertext.
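The statistical idea the excerpt describes — that real-language text, when aligned, coincides with itself far more often than intentionally randomized ciphertext — can be sketched in a few lines. This is a toy illustration of the underlying statistic (the sample messages are invented), not the actual wartime procedure:

```python
import random
import string

def coincidence_rate(a: str, b: str) -> float:
    """Fraction of aligned positions where two letter streams match."""
    pairs = list(zip(a, b))
    return sum(x == y for x, y in pairs) / len(pairs)

# Two unrelated snippets of real English, stripped to bare letters.
msg_a = "theenemyconvoywilldepartatdawnandproceednorthalongthecoast"
msg_b = "allshipsaretomaintainradiosilenceuntilclearoftheharbourarea"

# Two streams of uniformly random letters of the same length.
rng = random.Random(0)
rand_a = "".join(rng.choice(string.ascii_lowercase) for _ in range(len(msg_a)))
rand_b = "".join(rng.choice(string.ascii_lowercase) for _ in range(len(msg_a)))

# Real-language streams tend to coincide noticeably more often than random
# ones (roughly 6-7% vs 1/26 ≈ 3.8%), because letter frequencies are skewed.
print(f"english vs english: {coincidence_rate(msg_a, msg_b):.3f}")
print(f"random  vs random:  {coincidence_rate(rand_a, rand_b):.3f}")
```

The same skew is what makes an "unusual proportion of matching bigrams" a usable signal: any statistic sensitive to repeated material separates natural language from uniform noise.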

pages: 931 words: 79,142

Concepts, Techniques, and Models of Computer Programming
by Peter Van-Roy and Seif Haridi
Published 15 Feb 2004

There are two fundamentally different ways to view programmable declarativeness: A definitional view, where declarativeness is a property of the component implementation. For example, programs written in the declarative model are guaranteed to be declarative, because of properties of the model. An observational view, where declarativeness is a property of the component interface. The observational view follows the principle of abstraction: that to use a component it is enough to know its specification without knowing its implementation. (Footnote 1: A Turing machine is a simple formal model of computation, first defined by Alan Turing, that is as powerful as any computer that can be built, as far as is known in the current state of computer science. That is, any computation that can be programmed on any computer can also be programmed on a Turing machine.)
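The Turing machine mentioned in the footnote is simple enough to simulate directly. The sketch below is illustrative only — the rule format and the bit-flipping example program are my own, not taken from the book:

```python
def run(tape, rules, state="start", pos=0, max_steps=1000):
    """Execute a Turing machine.

    `rules` maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left) or +1 (right). Halts on state 'halt'.
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example program: flip every bit on the tape, halting at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run("1011", flip))  # -> 0100
```

Despite its tiny interface — one head, one tape, a finite rule table — this model can express any computation a conventional computer can, which is exactly the sense in which the footnote calls it "as powerful as any computer that can be built."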

pages: 993 words: 318,161

Fall; Or, Dodge in Hell
by Neal Stephenson
Published 3 Jun 2019

The designers of this exhibit had cleverly filled in that awkward gap with some material about early mechanical computers, including a working replica of Babbage’s difference engine (built in a fit of nerd energy, and later contributed to this museum, by a different local tech magnate). There was the obligatory shrine to Ada Lovelace and then a fast-forward to the mid-twentieth century and a black-and-white photo of the young Alan Turing, Lawrence Pritchard Waterhouse, and Rudolf von Hacklheber on a bicycling expedition. This was where C-plus began to feel he was losing the thread, since the last mention of any Hacklhebers he’d seen was from 250 years earlier—but apparently this Rudolf was one of those hyperprivileged white guys who actually turned out to have legit mathematical talent.

pages: 1,263 words: 371,402

The Year's Best Science Fiction: Twenty-Sixth Annual Collection
by Gardner Dozois
Published 23 Jun 2009

He had stayed on even after our mother died twenty years before he did, him and his memories made invalid by all the architecture. At the service I spoke of those memories—for instance how during the war a tough Home Guard had caught him sneaking into the grounds of Bletchley Park, not far away, scrumping apples while Alan Turing and the other geniuses were labouring over the Nazi codes inside the house. “Dad always said he wondered if he picked up a mathematical bug from Turing’s apples,” I concluded, “because, he would say, for sure Wilson’s brain didn’t come from him.” “Your brain too,” Wilson said when he collared me later outside the church.

The Secret World: A History of Intelligence
by Christopher Andrew
Published 27 Jun 2018

The Old Etonian King’s historian Frank Birch arrived in 1916.78 Birch was a brilliant conversationalist and comic actor who later appeared in pantomime at the London Palladium and wrote a comic history of Room 40, Alice in ID25, which included a celebration by Knox of his bathtime brainwaves:

The sailor in Room 53
Has never, it’s true, been to sea
But though not in a boat
He has yet served afloat –
In a bath at the Admiralty79

In the Second World War Birch and Adcock were to take the lead in recruiting one third of the King’s Fellowship to Bletchley Park (including its greatest cryptanalyst, Alan Turing).80 But for the experience of the contribution made by King’s eccentrics to codebreaking in the First World War, it is unlikely that Turing would have been recruited in 1939. Like Ewing, Hall was actively involved in recruiting for Room 40 from the moment he became DID. Hall seems to have had a weakness for Old Etonians.

pages: 1,737 words: 491,616

Rationality: From AI to Zombies
by Eliezer Yudkowsky
Published 11 Mar 2015

But so long as the terms of the theory were being processed by human scientists, they just knew when an “observation” had occurred. You said an “observation” occurred whenever it had to occur in order for the experimental predictions to come out right—a subtle form of constant tweaking. (Remember, the basics of quantum theory were formulated before Alan Turing said anything about Turing machines, and way before the concept of computation was popularly known. The distinction between an effective formal theory, and one that required human interpretation, was not as clear then as now. Easy to pinpoint the problems in hindsight; you shouldn’t learn the lesson that problems are usually this obvious in foresight.)

pages: 1,799 words: 532,462

The Codebreakers: The Comprehensive History of Secret Communication From Ancient Times to the Internet
by David Kahn
Published 1 Feb 1963

Britain now stood alone. But she had an advantage that the other countries did not. The British codebreaking establishment, the so-called Government Code and Cypher School, or G.C. & C.S., had recruited, as war threatened, a number of linguists and mathematicians. Among the latter was an authentic genius. Alan Turing had become a fellow of King’s College, Cambridge, at the almost unprecedented age of 22 when the dons recognized his ability. Tallish, powerfully built, with deep-set blue eyes, he wore unpressed clothes, sidled through doors, stammered, fell into long silences. He had, four years earlier, proved a fundamental theorem in mathematics: that it was not possible to ascertain whether certain problems could be solved.