trolley problem

description: a thought experiment in ethics used to explore the complexities of making moral decisions

49 results

pages: 202 words: 58,823

Willful: How We Choose What We Do
by Richard Robb
Published 12 Nov 2019

Society, in deciding whether to let the government build a dam, would have to make a for-itself decision about whether the benefit of the electricity exceeds its costs to an unwilling few. Two Moral Dilemmas The distinction between purposeful and for-itself decision-making can be demonstrated by looking at two enduring moral puzzles: the merchant’s choice posed by Cicero in 44 BCE and the trolley problem posed by Philippa Foot in 1967.3 The merchant’s choice belongs in the purposeful category, where options can be evaluated, ranked, and traded. Choices in the trolley problem, however, depend ultimately on impulse—attempts to calculate the trade-offs are swamped by an individual exercise of will. Action (or inaction) is for-itself. In Cicero’s story, Rhodes is suffering a famine when a merchant arrives at the port with a ship full of grain.

It’s possible that personal moral principles would dictate a particular response—a utilitarian might favor pushing, while a Kantian might not. For effective altruists, utilitarians, and Kantians, the moral considerations arising from the trolley problem fit with purposeful choice. As long as they don’t abandon their principles at the crucial moment, their actions should be predictable. For the rest of us, though, it might not be so easy. The trolley problem is carefully constructed so that there is no Pareto-efficient solution. Variations of the problem that deal with injury, where everyone can be made better off, are easy to solve. Say the man you pushed would break his arm to save five people from breaking their arms.

But if the fat man is a shot putter about to compete in the Olympics, don’t push, since the cost of breaking his arm likely exceeds the sum cost of breaking the arms of five random people. In the actual trolley problem, though, he can’t be compensated for blocking the trolley since he’ll be dead. I don’t think we can resort to a “veil of ignorance” solution, either. If I didn’t know ex ante whether I’d be the fat man or one of the five people on the tracks and I had a one-sixth chance of each, of course I would choose “push.” But that doesn’t help with the trolley problem. It is already resolved who will be the fat man and that’s the individual you’d have to kill. In the end, I probably wouldn’t push, although I can’t say for sure.
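Spelled out, the veil-of-ignorance argument the passage invokes is a simple expected-value comparison, using only the one-in-six odds the author himself stipulates (a worked restatement of his reasoning, not an addition to it):

```latex
\[
  \Pr(\text{you die} \mid \text{push}) = \frac{1}{6},
  \qquad
  \Pr(\text{you die} \mid \text{don't push}) = \frac{5}{6}.
\]
```

Behind the veil, pushing minimizes both your own risk and the expected number of deaths (one rather than five); the point of the passage is that this comparison loses its force once it is already settled who the fat man is.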

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

So, what would and what should an AI agent do when faced with a Trolley Problem, or something like it? Well, first of all, we should ask ourselves whether it is reasonable to expect more of an AI system than we would expect of a person in a situation like this. If the greatest philosophical thinkers in the world cannot definitively resolve the Trolley Problem, then is it reasonable of us to expect an AI system to do so? Secondly, I should point out that I have been driving cars for decades, and in all that time I have never faced a Trolley Problem. Nor has anyone else I know. Moreover, what I know about ethics, and the ethics of the Trolley Problem, is roughly what you read above: I wasn’t required to pass an ethics exam to get my driving licence.

We’ll begin our survey with one particular scenario which has attracted more attention in the ethical AI community than perhaps any other. The Trolley Problem is a thought experiment in the philosophy of ethics, originally introduced by British philosopher Philippa Foot in the late 1960s.7 Her aim in introducing the Trolley Problem was to disentangle some of the highly emotive issues surrounding the morality of abortion. There are many versions of Foot’s trolley problem, but the most common version goes something like this (see Figure 21): A trolley (i.e. a tram) has gone out of control, and is careering at high speed towards five people who are unable to move.

There is a lever by the track; if you pull the lever, then the trolley will be diverted down an alternative track, where there is just one person (who also cannot move). If you pull the lever, then this person would be killed, but the five others would be saved. Should you pull the lever or not? The Trolley Problem has risen rapidly in prominence recently because of the imminent arrival of driverless cars. Pundits were quick to point out that driverless cars might well find themselves in a situation like the Trolley Problem, and AI software would then be called upon to make an impossible choice. ‘Self-driving cars are already deciding who to kill’ ran one Internet headline in 2016.8 There was a flurry of anguished online debate, and several philosophers of my acquaintance were surprised and flattered to discover that there was suddenly an attentive audience for their opinions on what had hitherto been a rather obscure problem in the philosophy of ethics.

pages: 175 words: 54,755

Robot, Take the Wheel: The Road to Autonomous Cars and the Lost Art of Driving
by Jason Torchinsky
Published 6 May 2019

George, on the other hand, is actually and directly murdering the poor fat man to save the five people. Is this difference actually significant? Does anything about the trolley problem really matter? The truth is that, in reality, I don’t think the trolley problem is really a likely dilemma that autonomous cars will face. Sure, they may end up in situations where sacrifice of life is unavoidable, but the idea that the robotic vehicles will have access to all the information that makes up the trolley problem—the number of passengers in the vehicle, specifically—is by no means assured, and as such is not likely to be a factor in the cars’ decision making.

These are, of course, extremely important questions and concerns, but let’s be real here: we’re all sort of being hypocrites whenever we wring our hands over how we expect robotic vehicles to behave in morally or ethically difficult situations, where real lives are at stake. We’re hypocrites because humanity is basically a collection of all kinds of often miserable jackasses who wouldn’t know the best ethical solution to the trolley problem if it shoved its ethical and hypothetical tongue in their nostril, and just about all of those miserable jackasses have car keys. Oh, and in case you’re not familiar with it, I’ll explain the trolley problem soon. The recent interest in autonomous vehicles has made this fifty-two-year-old thought experiment surprisingly popular, so, don’t worry, before you fling this book to the ground in disgust, you’ll know what the hell everyone’s talking about.

Let’s look into this aspect first, and think about how future robotic cars will deal with a confusing world. This means we should probably address the trolley problem first, since almost every discussion of autonomous car ethics will bring this up, and I’ve put it off as long as I could. The trolley problem⁵³ was first “officially” stated by the British philosopher, ethicist, and hilarious-name-haver Philippa Foot in 1967. Foot’s original description of the trolley problem reads like this: Edward is the driver of a trolley, whose brakes have just failed. On the track ahead of him are five people; the banks are so steep that they will not be able to get off the track in time.

pages: 88 words: 26,706

Against the Web: A Cosmopolitan Answer to the New Right
by Michael Brooks
Published 23 Apr 2020

The consequentialist approach to the Trolley Problem is to make whatever decision—in this case, turning the trolley to the right—that will result in the fewest deaths. When they first hear Foot’s version of the Trolley Problem, the majority of people have a consequentialist reaction. (Or their eyes glaze over, as yours might be doing. Just give me a minute here. This is going to come up later.) The usual response is to argue that the morally “right” thing for Edward to do is to turn the trolley to the right, killing the one person to save the five. Nevertheless, Judith Jarvis Thomson amended the Trolley Problem in such a way that, when hearing her version, people have the opposite reaction.

George can shove the fat man onto the track in the path of the trolley, or he can refrain from doing this, letting the five die. When presented with this version of the Trolley Problem, most people refuse to sacrifice the fat man’s life to save the five people. In other words, though on the face of it the moral calculation in both Trolley Problems is the same—in both versions of the story, one person dies to save five—the different responses that people give demonstrate that in real life, people distinguish between actively participating in a killing and letting someone die. Whatever you think about the solutions to the Trolley Problems, you can see the point of the thought experiment. Two principles are being pitted against each other to test which one we think ‘outranks’ the other.

According to my friend Ben Burgis, the author of Give Them An Argument: Logic for the Left, a thought experiment generally refers to two things: first, an imaginary situation designed to test whether a certain definition of a concept captures what we really mean by it, and second, an imaginary situation in which we bring two moral principles into conflict in order to discover which one we care more about. The most famous thought experiment is the so-called “Trolley Problem,” which was originally formulated by the British philosopher Philippa Foot, though the version that most people are familiar with incorporates a change suggested by the American philosopher Judith Jarvis Thomson. Here’s Foot’s original example: Edward is the driver of a trolley whose brakes have just failed.

pages: 472 words: 80,835

Life as a Passenger: How Driverless Cars Will Change the World
by David Kerrigan
Published 18 Jun 2017

Trolleyology In philosophy circles, there’s an ethical question used to explore this phenomenon, known as the trolley problem. It asks: if you had to push one large person in front of a moving trolley to save a group of people on the tracks, would you? This abstract thought exercise has been widely applied in discussion about how we should design the programming for self-driving cars: what should the car choose to do in a trolley-style situation where not everybody can be saved but relative value choices need to be made? In an interesting public exploration of the trolley problem in the context of driverless cars, MIT have created a website[291] offering users the chance to choose their preferred outcome in a variety of scenarios.

The MIT reworking of the trolley problem replaces the trolley with a driverless car experiencing brake failure. The experiment depicts 13 variations of the “trolley problem”, asking users to decide who should perish, which involves agonising priority choices: more deaths against fewer, humans over animals, elderly compared to young, professionals against criminals, law abiding people over jaywalkers, and larger people against athletes. I strongly recommend you try it yourself: http://moralmachine.mit.edu/ and see how your choices compare with others who have completed the experiment.

They may make no choice - frozen into inaction by fear. So programming cars for the best possible outcome, even if unfavourable, adds a degree of certainty we don't currently have. The driverless-car trolley problem discussions portend many forthcoming debates about ethics in the time of Artificial Intelligence and how we will hold machines to different standards than we do humans. We don’t endlessly debate the trolley problem for human drivers, nor is it part of any driver test. Patrick Lin, a philosopher at California Polytechnic State University, San Luis Obispo, and a legal scholar at Stanford University, notes that “Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge.”[298] For all the debate about how to treat ethics in relation to driverless cars, it’s also noteworthy that today we commonly put our safety in the hands of a driver who may be forced to make a life or death decision every time we get into a taxi.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

The participant has a choice: act, and divert the trolley so that it hits the one person, or do nothing and allow the trolley to kill five.105 The most direct analogy to the Trolley Problem for AI is the programming of self-driving cars.106 For instance: if a child steps into the road, should an AI car hit that child, or steer into a barrier and thereby kill the passenger? What if it is a criminal who steps into the road?107 The parameters can be tweaked endlessly, but the basic choice is the same—which of two (or more) unpleasant or imperfect outcomes should be chosen? Aspects of the Trolley Problem are by no means unique to autonomous vehicles. For instance, whenever a passenger gets into a taxi, they delegate such decisions to the driver.

An autonomous weapon may have to decide whether to fire a weapon at an enemy when the enemy is surrounded by civilians, taking the risk of causing collateral damage in order to eliminate the target.112 A common objection to the Trolley Problem or its variants being applied to AI is to say that humans are very rarely faced with extreme situations where they must choose between, for example, killing five schoolchildren or one member of their family. However, this objection confuses the individual example with the underlying philosophical dilemma. Moral dilemmas do not arise only in life and death situations. To this extent, the Trolley Problem is misleading in that it could encourage people to think that AI’s moral choices are serious, but rarely arise.

However, these will be autonomous in the relevant sense so long as the software within the vehicles, which may come from a single central hub and be sent to the individual vehicles via the Internet, contains features which would qualify as AI within this book’s definition. See, for example, Joel Achenbach, “Driverless Cars Are Colliding with the Creepy Trolley Problem”, Washington Post, 29 December 2015, https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/?utm_term=.30f91abdad96, accessed 1 June 2018; Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles”, Cornell University Library Working Paper, 4 July 2016, https://arxiv.org/abs/1510.03346, accessed 1 June 2018. 107 The scenario involving a criminal pedestrian was posed by researchers at MIT, in their “Moral Machine” game, which is described by its designers as “A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.

pages: 223 words: 66,428

The Comforts of a Muddy Saturday
by Alexander McCall Smith
Published 22 Sep 2008

She drew in her breath and read quickly to the bottom; there was the signature, bold as brass: Christopher Dove. She read the letter again, more slowly this time. Dear Ms. Dalhousie, I enclose with this letter an article that I have recently completed and that I think is suitable for publication in the Review. You may be familiar, of course, with the famous Trolley Problem that Philippa Foot raised all those years ago in Virtues and Vices. I have recently given this matter considerable thought and feel that I have a new approach to propose. There are a number of other editors keen to take this piece (both here and in the United States), but I thought that I would give you first option.

Then she picked up the letter again and began to enumerate its various effronteries and, not to beat about the bush, lies. To begin with, there was Dove’s choice of the words you may be familiar with: that may have sounded innocuous, but was in reality a piece of naked condescension. Of course she would be familiar with the Trolley Problem, one of the most famous thought experiments of twentieth-century philosophy—and twenty-first-century philosophy, too, as the problem continued to rumble along, as everyone knew. Everyone professionally involved in philosophy, that is, and that included Isabel. To suggest that she may be familiar with it was to imply ignorance on her part; what Dove should have written was you will of course be familiar with.

He had gone to East Berlin, as had Dove, and had publicly complained about reactionaries, as he described them, who had questioned the visit on the grounds that meetings would be restricted to those with posts in the universities, Party men every one of them. Dove…She thought of his paper on the Trolley Problem; she felt a vague unease about that, and she felt that there would be more to come. But Brecht and the GDR, and even Dove and Lettuce, seemed far away. “Let’s leave Brecht out of it for a moment,” she said. “After Charlie has been fed, I thought we could go out to the Pentlands and just…just go for a walk.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

The trolley problem has recently reemerged as part of the media’s coverage of self-driving cars,20 and the question of how an autonomous vehicle should be programmed to deal with such problems has become a central talking point in discussions on AI ethics. Many AI ethics thinkers have pointed out that the trolley problem itself, in which the driver has only two horrible options, is a highly contrived scenario that no real-world driver will ever encounter. But the trolley problem has become a kind of symbol for asking about how we should program self-driving cars to make moral decisions on their own. In 2016, three researchers published results from surveys of several hundred people who were given trolley-problem-like scenarios that involved self-driving cars, and were asked for their views of the morality of different actions.

If you steer the trolley to the right, the trolley will kill the single worker. What is the moral thing to do? The trolley problem has been a staple of undergraduate ethics classes for the last century. Most people answer that it would be morally preferable for the driver to steer onto the spur, killing the single worker and saving the group of five. But philosophers have found that a different framing of essentially the same dilemma can lead people to the opposite answer.19 Human reasoning about moral dilemmas turns out to be very sensitive to the way in which the dilemmas are presented.

Clarke, 2001: A Space Odyssey (London: Hutchinson & Co, 1968). 17.  Ibid., 192. 18.  N. Wiener, “Some Moral and Technical Consequences of Automation,” Science 131, no. 3410 (1960): 1355–58. 19.  J. J. Thomson, “The Trolley Problem,” Yale Law Journal 94, no. 6 (1985): 1395–415. 20.  For example, see J. Achenbach, “Driverless Cars Are Colliding with the Creepy Trolley Problem,” Washington Post, December 29, 2015. 21.  J.-F. Bonnefon, A. Shariff, and I. Rahwan, “The Social Dilemma of Autonomous Vehicles,” Science 352, no. 6293 (2016): 1573–76. 22.  J. D. Greene, “Our Driverless Dilemma,” Science 352, no. 6293 (2016): 1514–15. 23.  

pages: 502 words: 132,062

Ways of Being: Beyond Human Intelligence
by James Bridle
Published 6 Apr 2022

This is the paradox at the heart of the Trolley problem, an ethical problem posed for self-driving cars and other autonomous systems such as an automated trolley (or tramcar, for non-Americans). The Trolley problem asks what an automated vehicle should do if there are two unavoidable paths for it to take: one towards a group of people and one towards an individual, for example. Whose life is worth more? The Trolley problem has even been turned into an online game, the Moral Machine, by researchers at MIT seeking to formulate rules for autonomous vehicles.21 The problem with the Trolley problem is that it was originally formulated for a human operator at the controls of a runaway tram car: the power of the problem resides in the unavoidable nature of its two outcomes.

These include, but are not limited to, the car-centric design of modern cities; the education or otherwise of pedestrians in road safety and much else; the fatally addictive design of the app they were playing with on their phone at the time; the financial incentives of automation; the assumptions of actuaries and insurance companies; and the legal processes which govern everything from the speed limit to the assignation of blame and recompense. In short, the most important factor in the Trolley problem is not the internal software of the vehicle, but the culture which surrounds the self-driving car, which has infinitely more impact on the outcomes of the crash than any split-second decision by the driver, human or otherwise. This is the real lesson of scenarios like the Trolley problem, the Basilisk and the paperclip machine: we cannot control every outcome, but we can work to change our culture. Technological processes like artificial intelligence won’t build a better world by themselves, just as they tell us nothing useful about general intelligence.

See, for example, Stuart Russell and Peter Norvig’s Artificial Intelligence: A Modern Approach, 3rd edn (Harlow: Pearson Education, 2016), the standard textbook on the subject, which cites Yudkowsky’s concerns about AI safety. 21. The Trolley problem was first given that name by the moral philosopher Judith Jarvis Thomson in ‘Killing, Letting Die, and the Trolley Problem’, The Monist, 59 (2), April 1976, pp. 204–17. Her conclusion, from a number of examples, was that ‘there are circumstances in which – even if it is true that killing is worse than letting die – one may choose to kill instead of letting die’.

pages: 197 words: 59,656

The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically
by Peter Singer
Published 1 Jan 2015

Bloom, “The Baby in the Well.” 12. Gleichgerrcht and Young, “Low Levels of Empathic Concern,” e60418. For an entertaining discussion of trolley problems, see David Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us about Right and Wrong (Princeton: Princeton University Press, 2013). 13. C. D. Navarrete, M. M. McDonald, M. L. Mott, and B. Asher, “Virtual Morality: Emotion and Action in a Simulated Three-Dimensional ‘Trolley Problem,’” Emotion 12 (2011): 364–70. I owe this reference to Gleichgerrcht and Young. 14. Bloom, “The Baby in the Well.” 15. Immanuel Kant, Critique of Practical Reason, trans.

You may feel a little worse, but it is unlikely that you feel anything like ten times worse.11 Effective altruists, as we have seen, need not be utilitarians, but they share a number of moral judgments with utilitarians. In particular, they agree with utilitarians that, other things being equal, we ought to do the most good we can. In a study of the role of emotion in moral decision making, subjects were presented with so-called trolley problem dilemmas in which, for example, a runaway trolley is heading for a tunnel in which there are five people, and it will kill them all unless you divert it down a sidetrack, in which case only one person will be killed. In a variant, the only way to stop the five being killed is to push a heavy stranger off a footbridge.

Empathic concern is, as we have seen, one aspect of emotional empathy. Other aspects of empathy, including personal distress and perspective taking, did not vary between those who made consistently utilitarian judgments and those who did not. Neither did demographic or cultural differences, including age, gender, education, and religiosity.12 Another trolley problem study used virtual reality technology to give people a more vivid sense of being in the situation in which they must decide whether to throw the switch to divert the trolley down the sidetrack, thereby killing one but saving five. In this study, experimenters measured the skin conductivity of their subjects while making these decisions.

pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms
by Hannah Fry
Published 17 Sep 2018

Michael Taylor, ‘Self-driving Mercedes-Benzes will prioritize occupant safety over pedestrians’, Car and Driver, 7 Oct. 2016, https://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/. 34. Jason Kottke, Mercedes’ Solution to the Trolley Problem, Kottke.org, 24 Oct. 2016, https://kottke.org/16/10/mercedes-solution-to-the-trolley-problem. 35. Jean-François Bonnefon, Azim Shariff and Iyad Rahwan (2016), ‘The social dilemma of autonomous vehicles’, Science, vol. 352, 24 June 2016, DOI 10.1126/science.aaf2654; https://arxiv.org/pdf/1510.03346.pdf. 36. All quotes from Paul Newman are from private conversation. 37.

Except, Hugo wasn’t being asked about any old crash. He was being tested on his response to a well-worn thought experiment dating back to the 1960s, involving a very particular kind of collision. The interviewer was asking him about a curious conundrum that forces a choice between two evils. It’s known as the trolley problem, after the runaway tram that was the subject of the original formulation. In the case of driverless cars, it goes something like this. Imagine, some years into the future, you’re a passenger in an autonomous vehicle, happily driving along a city street. Ahead of you a traffic light turns red, but a mechanical failure in your car means you’re unable to stop.

Because when the same study asked participants if they would actually buy a car which would murder them if the circumstances arose, they suddenly seemed reluctant to sacrifice themselves for the greater good. This is a conundrum that divides opinion – and not just in what people think the answer should be. As a thought experiment, it remains a firm favourite of technology reporters and other journalists, but all the driverless car experts I interviewed rolled their eyes as soon as the trolley problem was mentioned. Personally, I still have a soft spot for it. Its simplicity forces us to recognize something important about driverless cars, to challenge how we feel about an algorithm making a value judgement on our own, and others’, lives. At the heart of this new technology – as with almost all algorithms – are questions about power, expectation, control, and delegation of responsibility.

pages: 386 words: 113,709

Why We Drive: Toward a Philosophy of the Open Road
by Matthew B. Crawford
Published 8 Jun 2020

On one side, you have inputs consisting of empirical facts, on the other side you have outputs consisting of some new state of affairs in the world, and in the middle you have a person who applies principles. These principles need to be likewise precise, capable of clear articulation, and universally applicable. One appeal of the trolley problem, then, is that it lends itself to a kind of moral calculus that resembles the input-output logic of a computer. The most widely adopted moral operating system, if you will, is utilitarianism, the motto of which is “the greatest good for the greatest number.” Another appeal of the trolley problem is that one can ask people to imagine themselves in such a scenario, vary the specifications of the scenario, and see how they respond, thereby gathering social data.
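To make the input-output picture concrete, here is a minimal sketch of the naive utilitarian rule described above, written in Python. It is purely illustrative: the Option type, the expected_deaths figures, and the scenario itself are assumptions made up for the example, not anyone's actual vehicle software and not a method endorsed by the author.

```python
# A minimal sketch of "the greatest good for the greatest number" treated as an
# input-output rule: enumerate the available actions, attach an estimated harm
# to each, and return the action with the smallest estimated harm.
# Illustrative only; the numbers below are invented.

from dataclasses import dataclass
from typing import List


@dataclass
class Option:
    name: str
    expected_deaths: float  # estimated harm if this option is chosen


def utilitarian_choice(options: List[Option]) -> Option:
    """Return the option that minimizes expected deaths."""
    return min(options, key=lambda o: o.expected_deaths)


if __name__ == "__main__":
    # The classic two-track setup: stay the course (five die) or divert (one dies).
    scenario = [
        Option("stay on the main track", expected_deaths=5.0),
        Option("divert to the spur", expected_deaths=1.0),
    ]
    print("Utilitarian rule picks:", utilitarian_choice(scenario).name)
```

The sketch also makes plain what the framing leaves out: a real crash does not arrive as a tidy list of options with known casualty counts, which is one reason several of the practitioners quoted in these excerpts are dismissive of the scenario.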

Automation as Moral Reeducation What happens when an autonomous car cannot avoid colliding with another car, or with pedestrians, or a dog, and it must make a decision whom to hit? What sort of moral priorities shall the computers be programmed with? Anyone who took an undergraduate philosophy class in the last twenty years is likely to have encountered the “trolley problem,” a classic thought experiment that goes like this. Suppose a trolley is headed on a collision course with a group of pedestrians. But you, as an alert bystander, can pull a lever to switch the track to a different course. The problem is, there is an innocent on this new track as well. But only one.

In both its idealist and empirical versions, this is a way of thinking about ethics that has a long pedigree, and has been subject to critique (perhaps most witheringly by Nietzsche, in his treatment of “the English moralists”) for nearly as long, but has lately taken on a new life for reasons that should be obvious.1 It offers a certain intellectual tractability that makes it seem a good fit with machine logic.2 And sure enough, when talk turns to the ethical dilemmas posed by driverless cars, the industry and its adjuncts in academia and journalism quickly settle into the trolley problem, that reassuringly self-contained conundrum of utilitarian ethics, and proceed to debate the “death algorithm.” (Mercedes-Benz was the first automaker to come out and declare that its cars would be programmed to prioritize the lives of the car’s occupants.) This way of thinking about ethics seems to permit the transfer of a moral burden to a machine.

pages: 280 words: 85,091

The Wisdom of Psychopaths: What Saints, Spies, and Serial Killers Can Teach Us About Success
by Kevin Dutton
Published 15 Oct 2012

Schug, “The Neural Correlates of Moral Decision-Making in Psychopathy,” Molecular Psychiatry 14 (January 2009): 5–6, doi:10.1038/mp.2008.104. 10 Consider, for example, the following conundrum (case 1) … The Trolley Problem was first proposed in this form by Philippa Foot in “The Problem of Abortion and the Doctrine of the Double Effect,” in Virtues and Vices: And Other Essays in Moral Philosophy (Berkeley: University of California Press, 1978). 11 Now consider the following variation (case 2) … See Judith Jarvis Thomson, “Killing, Letting Die, and the Trolley Problem,” The Monist 59, no. 2 (1976): 204–17. 12 Daniel Bartels at Columbia University and David Pizarro at Cornell … See Daniel M.

That, of course, leaves 10 percent unaccounted for: a less morally hygienic minority who, when push quite literally comes to shove, have little or no compunction about holding another person’s life in the balance. But who is this unscrupulous minority? Who is this 10 percent? To find out, Bartels and Pizarro presented the trolley problem to more than two hundred students, and got them to indicate on a four-point scale how much they were in favor of shoving the fat guy over the side—how “utilitarian” they were. Then, alongside the trolleyological question, the students also responded to a series of personality items specifically designed to measure resting psychopathy levels.

These included statements such as “I like to see fistfights” and “The best way to handle people is to tell them what they want to hear” (agree/disagree on a scale of one to ten). Could the two constructs—psychopathy and utilitarianism—possibly be linked? Bartels and Pizarro wondered. The answer was a resounding yes. Their analysis revealed a significant correlation between a utilitarian approach to the trolley problem (push the fat guy off the bridge) and a predominantly psychopathic personality style. Which, as far as Robin Dunbar’s prediction goes, is pretty much on the money. But which, as far as the traditional take on utilitarianism goes, is somewhat problematic. In the grand scheme of things, Jeremy Bentham and John Stuart Mill, the two nineteenth-century British philosophers credited with formalizing the theory of utilitarianism, are generally thought of as good guys.

pages: 450 words: 144,939

Unthinkable: Trauma, Truth, and the Trials of American Democracy
by Jamie Raskin
Published 4 Jan 2022

Real morality cannot be just an exercise for the classroom; the classroom must help us discover and exercise morality out in the world. There is plainly no right answer to the trolley problem, no real “solution” to it. Yet Tommy had managed to solve it in his own way, by completely changing the terms of the question. Sometimes, when I let my thoughts run away with me these days, I wonder if Tommy’s even thinking about the trolley problem led him down a blind alley. Did he think, in his stressed frame of mind, that by taking one life, his own, he could somehow save ninety-nine other lives? Did he think he would redirect people’s attention to the necessity of human decency and kindness, or was it just a psychological compulsion he was acting on, his illness speaking?

—Sophocles Epigraph I realized, through it all, that in the midst of winter, there was, within me, an invincible summer. —ALBERT CAMUS Contents Cover Title Page Dedication Epigraph Preface Prologue: Democracy Winter Part I Chapter 1: Democracy Summer Chapter 2: A Sea of Troubles Chapter 3: The Trolley Problem Part II Chapter 4: “There Is a North” Chapter 5: Complete the Count Chapter 6: Midnight Meditations and Orwellian Preparations Chapter 7: “This Is About the Future of Democracy” Chapter 8: An All-American Defense of Democracy Chapter 9: Reverse Uno Chapter 10: Writing Trump Part III Chapter 11: Violence v.

As uncomfortable and intrusive as it may seem, it is essential to use the word suicide itself in order to demystify and deflate it, to strip it of its phony pretense to omnipotence and supernatural force. Suicide is not a “bad word,” as Tommy Raskin might have said, for there is no such thing as a bad word. It is just, in reality, a terrible thing and an irreversible detour from the road we all try to walk down together, the road of life. Chapter 3 The Trolley Problem It is only with the heart that one can see right; what is essential is invisible to the eye. —ANTOINE DE SAINT-EXUPÉRY, THE LITTLE PRINCE It was important to me for a long while—less so now—to reconstruct the final days to map out exactly how we let our guard down in the final week of 2020.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

It was like the time I went to Le Bernadin for lunch, then came home and realized the only thing we had for dinner was hot dogs. As a car, the Tesla is amazing. As an autonomous vehicle, I am skeptical. Part of the problem is that the machine ethics haven’t been finalized because they are very difficult to articulate. The ethical dilemma is generally led by the trolley problem, a philosophical exercise. Imagine you’re driving a trolley that’s hurtling down the tracks toward a crowd of people. You can divert it to a different track, but you will hit one person. Which do you choose: certain death for one, or for many? Philosophers have been hired by Google and Uber to work out the ethical issues and embed them in the software.

Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver? Do you trust the unknown programmers who are making these decisions on your behalf? In a self-driving car, death is a feature, not a bug. The trolley problem is a classic teaching example of computer ethics. Many engineers respond to this dilemma in an unsatisfying way. “If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo, Mercedes’s manager of driverless car safety, in an interview with Car and Driver.22 Computer scientists and engineers, following the precedent set by Minsky and previous generations, don’t tend to think through the precedent that they’re establishing or the implications of small design decisions.

There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules? Ito replied: “When we did the car trolley problem, we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car.” It should surprise no one that members of the public are both more ethical and more intelligent than the machines we are being encouraged to entrust our lives to.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

GO TO NOTE REFERENCE IN TEXT a well-groomed poodle: Andy Newman, “The Dog Was Running, So the Subway Was Not,” The New York Times, February 17, 2018, sec. New York, nytimes.com/2018/02/16/nyregion/dog-subway-tracks.html. GO TO NOTE REFERENCE IN TEXT a trolley problem: Judith Jarvis Thomson, “The Trolley Problem,” The Yale Law Journal 94, no. 6 (May 1985): 1395, doi.org/10.2307/796133. GO TO NOTE REFERENCE IN TEXT relatively dangerous occupation: Steven Markowitz et al., “The Health Impact of Urban Mass Transportation Work in New York City,” July 2005, nycosh.org/wp-content/uploads/2014/10/TWU_Report_Final-8-4-05.pdf.

Transit officials faced a difficult choice: they could shut down the F, blocking a vital link between New York’s two densest boroughs right as commuters were beginning to get off work—or they could potentially run over poor Dakota. They elected to close the F for more than an hour until Dakota was found. Dakota’s story was a real-world example of what philosophers call a trolley problem, a moral dilemma first posed by the philosopher Philippa Foot in 1967. The original version was this: You’re driving a trolley that is speeding along the tracks, when to your horror you discover that the brakes are out. If you continue onward, five transit workers on the track ahead will be killed.

Alternatively, she could have her grandparents babysit, which would mean less COVID exposure for the kids but more for the grandparents—who because of their advanced age were at much greater risk of developing a fatal case of COVID. Or the mother could quit her job and lock down the household, but that could mean bankruptcy or foreclosure if she couldn’t find other work. None of these choices were good—COVID was like an endless series of trolley problems. Adding to the difficulty is that we are often forced to compare unlike things. “It is not easy to compare ‘point five percent risk of serious illness’ to ‘joy.’ But in the end, this is what you will have to do. Take a deep breath, look carefully at your risks and benefits, and make a choice,” Oster wrote in May 2020, after many people were beginning to wonder what the endgame was following months of social distancing.

pages: 147 words: 39,910

The Great Mental Models: General Thinking Concepts
by Shane Parrish
Published 22 Nov 2019

This experiment was first proposed in modern form by Philippa Foot in her paper “The Problem of Abortion and the Doctrine of the Double Effect,”3 and further considered extensively by Judith Jarvis Thomson in “The Trolley Problem.”4 In both cases the value of the thought experiment is clear. The authors were able to explore situations that would be physically impossible to reproduce without causing serious harm, and in so doing significantly advanced certain questions of morality. Moreover, the trolley problem remains relevant to this day as technological advances often ask us to define when it is acceptable, and even desirable, to sacrifice one to save many (and lest you think this is always the case, Thomson conducts another great thought experiment considering a doctor killing one patient to save five through organ donation).

Retrieved from: https://plato.stanford.edu/entries/thought-experiment/ 2 Isaacson, Walter. Einstein: His Life and Universe. New York: Simon and Schuster, 2007. 3 Foot, Philippa. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review, No. 5 (1967). 4 Thomson, Judith Jarvis. “The Trolley Problem.” Yale Law Journal, Vol. 94, No. 6 (May, 1985). 5 Rawls, John. A Theory of Justice, revised edition. Cambridge: Harvard University Press, 2005. Second-Order Thinking 1 Keller, Evelyn Fox. A Feeling for the Organism: The Life and Work of Barbara McClintock. New York: W.H. Freeman and Company, 1983. 2 Atwood, Margaret.

Driverless: Intelligent Cars and the Road Ahead
by Hod Lipson and Melba Kurman
Published 22 Sep 2016

While raised in a new context, this ethical choice question is actually an old chestnut, a variant of the well-known Trolley Problem4 that students in philosophy classes have discussed for decades. The Trolley Problem, conceived by Philippa Foot in 1967, describes the ethical conundrum of “a driver of a runaway tram [who] can steer only from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.” Most people will do the simple utilitarian calculation that five lives are worth more than one, and consider this a no-brainer. But the Trolley Problem case then continues to get more complicated with other morbid choices that lead, eventually, to paradoxical dilemmas. The Trolley Problem is not unique to driverless cars.

The Trolley Problem is not unique to driverless cars. Recently, in downtown Ithaca in upstate New York, we witnessed a tragic demonstration of the Trolley Problem. One sunny Friday afternoon while driving down the steep hill that leads into Ithaca’s bustling downtown, a truck driver became aware that his brakes had given out. He was forced to make the painful decision about which way to aim his deadly, out of control two-ton truck. The driver elected to steer his truck away from a group of construction workers and instead, aimed his truck into a nearby café, accidentally killing Amanda Bush, 27, a young mother spending that summer afternoon earning extra money as a bartender.

See Simultaneous Localization and Mapping Software companies versus car companies, 46–55, 63 State space, 76, 165 Sun, Jian, 225 SuperVision, 224–227. See also Deep learning Sutskever, Ilya, 224 Taxi drivers, 260 Template-based perception, 91, 229, 230. See also Shakey the robot Templeton, Brad, 142–146 Thrun, Sebastian, 152, 168 Traffic congestion, 25–28 Traffic prediction software Trolley problem. See Ethics Truckers, 259–263 Uber, 68, 260 Unemployment. See Jobs U.S. Department of Transportation (USDOT), 128–132 V2I. See V2X V2V. See V2X V2X Drawbacks of, 136–140 Overview of, 129, 130, 136 Vehicular lifespan, 28, 29 Werbos, Paul, 210, 213 “Who to kill.” See Ethics Wiesel, Torsten, 229 World’s Fair (New York, 1939), 107–110 World’s Fair (New York, 1964), 121 XOR problem, 208 Yosinski, Jason, 232 Zero Principle, 255–258.

Autonomous Driving: How the Driverless Revolution Will Change the World
by Andreas Herrmann , Walter Brenner and Rupert Stadler
Published 25 Mar 2018

A car that prioritises the safety of its occupants above all other considerations is socially just as unacceptable as a vehicle that sacrifices its passengers to save other road users’ lives. Is the decision over life and death to be left to a random generator, or is the ultimate authority a matter for the driver or occupants? TROLLEY PROBLEM A central aspect of the debate about ethical principles for autonomous driving is the trolley problem, which is based on a philosophical thought experiment [38, 77]. Should a runaway trolley that threatens to run over five people be deliberately diverted along a side track so that only one innocent person is killed? This is based on the question of whether, in a situation of danger, one death may be sacrificed in order to save several.

Key Takeaways: As soon as autonomous vehicles are on the roads, situations will occur in which they have to decide on life and death. Depending on the selected manoeuvre in a dangerous situation, more or fewer and differing persons will be killed or injured. This ethical reflection must be pre-programmed in the cars and any decisions cannot be made by an individual programmer. The debate centres on the trolley problem. This is based on the question of whether, in a dangerous situation, the death of a smaller number of people should be accepted in order to save the lives of a larger number. An economic or utilitarian approach consists of comparing human lives with each other and possibly sacrificing an individual for the sake of a group.

This offsetting of human lives not only violates many people’s moral intuition, it also contravenes the principle of human dignity. This conviction goes back to the philosopher Kant, and is a fixed element of many national legal systems. As there is no quick and easy answer to the trolley problem, a social discourse is the only way forward. Society is compelled to reflect upon ethical principles and to begin a far-reaching debate. PART 7: IMPACT ON VEHICLES. CHAPTER 26: THE VEHICLE AS AN ECOSYSTEM. The information and communication technologies that make self-driving cars possible are fundamentally changing the nature of what a vehicle is.

pages: 389 words: 119,487

21 Lessons for the 21st Century
by Yuval Noah Harari
Published 29 Aug 2018

Based on its lightning calculations, the algorithm driving the car concludes that the only way to avoid hitting the two kids is to swerve into the opposite lane, and risk colliding with an oncoming truck. The algorithm calculates that in such a case there is a 70 per cent chance that the owner of the car – who is fast asleep in the back seat – would be killed. What should the algorithm do?16 Philosophers have been arguing about such ‘trolley problems’ for millennia (they are called ‘trolley problems’ because the textbook examples in modern philosophical debates refer to a runaway trolley car racing down a railway track, rather than to a self-driving car).17 Up till now, these arguments have had embarrassingly little impact on actual behaviour, because in times of crisis humans all too often forget about their philosophical views and follow their emotions and gut instincts instead.

However, there might be some new openings for philosophers, because their skills – hitherto devoid of much market value – will suddenly be in very high demand. So if you want to study something that will guarantee a good job in the future, maybe philosophy is not such a bad gamble. Of course, philosophers seldom agree on the right course of action. Few ‘trolley problems’ have been solved to the satisfaction of all philosophers, and consequentialist thinkers such as John Stuart Mill (who judge actions by consequences) hold quite different opinions to deontologists such as Immanuel Kant (who judge actions by absolute rules). Would Tesla have to actually take a stance on such knotty matters in order to produce a car?

, Forbes, 24 November 2010; Cecilia Mazanec, ‘Will Algorithms Erode Our Decision-Making Skills?’, NPR, 8 February 2017. 16 Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, ‘The Social Dilemma of Autonomous Vehicles’, Science 352:6293 (2016), 1573–6. 17 Christopher W. Bauman et al., ‘Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology’, Social and Personality Psychology Compass 8:9 (2014), 536–54. 18 John M. Darley and Daniel C. Batson, ‘“From Jerusalem to Jericho”: A Study of Situational and Dispositional Variables in Helping Behavior’, Journal of Personality and Social Psychology 27:1 (1973), 100–8. 19 Kristofer D.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

For example, the computer might infer that the person who would escape death if the trolley is left alone is a convicted terrorist recidivist loaded up with doomsday pathogens, or a saintly POTUS—or part of a much more elaborate chain of events in detailed alternative realities. If one of these problem descriptions seems paradoxical or illogical, it may be that the authors of the Trolley Problem have adjusted the weights on each side of the balance such that hesitant indecision is inevitable. Alternatively, one can use misdirection to rig the system, such that the error modes are not at the level of attention. For example, in the Trolley Problem, the real ethical decision was made years earlier when pedestrians were given access to the rails—or even before that, when we voted to spend more on entertainment than on public safety.

Faith and ethics are widespread in our species and can be studied using scientific methods, including but not limited to fMRI, psychoactive drugs, questionnaires, etc. Very practically, we have to address the ethical rules that should be built in, learned, or probabilistically chosen for increasingly intelligent and diverse machines. We have a whole series of Trolley Problems. At what number of people in line for death should the computer decide to shift a moving trolley to one person? Ultimately this might be a deep-learning problem—one in which huge databases of facts and contingencies can be taken into account, some seemingly far from the ethics at hand.

See singularity Tegmark, Max, 76–87 AI safety research, 81 Asilomar AI Principles, 2017, 81, 84 background and overview of work of, 76–77 competence of superintelligent AGI, 85 consciousness as cosmic awakening, 78–79 general expectation AGI achievable within next century, 79 goal alignment for AGI, 85–86 goals for a future society that includes AGI, 84–86 outlook, 86–87 rush to make humans obsolescent, reasons behind, 82–84 safety engineering, 86 societal impact of AI, debate over, 79–82 Terminator, The (film), 242 three laws of artificial intelligence, 39–40 Three Laws of Robotics, Asimov’s, 250 threshold theorem, 164 too-soon-to-worry argument against AI risk, 26–27, 81 Toulmin, Stephen, 18–19 transhumans, rights of, 252–53 Treister, Suzanne, 214–15 Trolley Problem, 244 trust networks, building, 200–201 Tsai, Wen Ying, 258, 260–61 Turing, Alan, 5, 25, 35, 43, 60, 103, 168, 180 AI-risk message, 93 Turing Machine, 57, 271 Turing Test, 5, 46–47, 276–77 Tversky, Amos, 130–31, 250 2001: A Space Odyssey (film), 183 Tyka, Mike, 212 Understanding Media (McLuhan), 208 understanding of computer results, loss of, 189 universal basic income, 188 Universal Turing Machine, 57 unsupervised learning, 225 value alignment (putting right purpose into machines) Dragan on, 137–38, 141–42 Griffiths on, 128–33 Pinker on, 110–11 Tegmark on, 85–86 Wiener on, 23–24 Versu, 217 Veruggio, Gianmarco, 243 visualization programs, 211–13 von Foerster, Heinz, xxi, 209–10, 215 Vonnegut, Kurt, 250 von Neumann, John, xx, 8, 35, 60, 103, 168, 271 digital computer architecture of, 58 second law of AI and, 39 self-replicating cellular automaton, development of, 57–58 use of symbols for computing, 164–65 Watson, 49, 246 Watson, James, 58 Watson, John, 225 Watt, James, 3, 257 Watts, Alan, xxi Weaver, Warren, xviii, 102–3, 155 Weizenbaum, Joe, 45, 48–50, 105, 248 Wexler, Rebecca, 238 Whitehead, Alfred North, 275 Whole Earth Catalog, xvii “Why the Future Doesn’t Need Us” (Joy), 92 Wiener, Norbert, xvi, xviii–xx, xxv, xxvi, 35, 90, 96, 103, 112, 127, 163, 168, 256 on automation, in manufacturing, 4, 154 on broader applications of cybernetics, 4 Brooks on, 56–57, 59–60 control via feedback, 3 deep-learning and, 9 Dennett on, 43–45 failure to predict computer revolution, 4–5 on feedback loops, 5–6, 103, 153–54 Hillis on, 178–80 on information, 5–6, 153–59, 179 Kaiser on Wiener’s definition of information, 153–59 Lloyd on, 3–7, 9, 11–12 Pinker on, 103–5, 112 on power of ideas, 112 predictions/warnings of, xviii–xix, xxvi, 4–5, 11–12, 22–23, 35, 44–45, 93, 104, 172 Russell on, 22–23 on social risk, 97 society, cybernetics impact on, 103–4 what Wiener got wrong, 6–7 Wilczek, Frank, 64–75 astonishing corollary (natural intelligence as special case of AI), 67–70 astonishing hypothesis of Crick, 66–67 background and overview of work of, 64–65 consciousness, creativity and evil as possible features of AI, 66–68 emergence, 68–69 human brain’s advantage over AI, 72–74 information-processing technology capacities that exceed human capabilities, 70–72 intelligence, future of, 70–75 Wilkins, John, 275 wireheading problem, 29–30 With a Rhythmic Instinction to Be Able to Travel Beyond Existing Forces of Life (Parreno), 263–64 Wolfram, Stephen, 266–84 on AI takeover scenario, 277–78 background and overview of work of, 266–67 computational knowledge system, creating, 271–77 computational thinking, teaching, 278–79 early approaches to AI, 270–71 on future where coding ability is ubiquitous, 279–81 goals and purposes, of humans, 268–70 image 
identification system, 273–74 on knowledge-based programming, 278–81 purposefulness, identifying, 281–84 Young, J.

pages: 483 words: 144,957

At the Existentialist Café: Freedom, Being, and Apricot Cocktails With Jean-Paul Sartre, Simone De Beauvoir, Albert Camus, Martin Heidegger, Maurice Merleau-Ponty and Others
by Sarah Bakewell
Published 1 Mar 2016

So: should he do the right thing by his mother, with clear benefits to her alone, or should he take a chance on joining the fight and doing right by many? Philosophers still get into tangles trying to answer ethical conundrums of this kind. Sartre’s puzzle has something in common with a famous thought experiment, the ‘trolley problem’. This proposes that you see a runaway train or trolley hurtling along a track to which, a little way ahead, five people are tied. If you do nothing, the five people will die — but you notice a lever which you might throw to divert the train to a sidetrack. If you do this, however, it will kill one person, who is tied to that part of the track and who would be safe if not for your action.

(In a variant, the ‘fat man’ problem, you can only derail the train by throwing a hefty individual off a nearby bridge onto the track. This time you must physically lay hands on the person you are going to kill, which makes it a more visceral and difficult dilemma.) Sartre’s student’s decision could be seen as a ‘trolley problem’ type of decision, but made even more complicated by the fact that he could not be sure either that his going to England would actually help anyone, nor that leaving his mother would seriously harm her. Sartre was not concerned with reasoning his way through an ethical calculus in the traditional way of philosophers, however — let alone ‘trolleyologists’, as they have become known.

Ivan Karamazov asks his brother Alyosha to imagine that he has the power to create a world in which people will enjoy perfect peace and happiness for the rest of history. But to achieve this, he says, you must torture to death one small creature now — say, that baby there. This is an early and extreme variety of the ‘trolley problem’, in which one person must be sacrificed in order (it’s hoped) to save many. So, would you do it, asks Ivan? Alyosha’s answer is a clear no. Nothing can justify torturing a baby, in his view, and that is all there is to be said. No weighing of benefits changes this; some things cannot be measured or traded.

pages: 255 words: 79,514

How Many Friends Does One Person Need? Dunbar’s Number and Other Evolutionary Quirks
by Robin Dunbar and Robin Ian MacDonald Dunbar
Published 2 Nov 2010

They asked subjects to make judgements about morally dubious behaviour, but some did so while rather closer than they might have wished to a smelly toilet or a messy desk, and others did so in a more salubrious environment. The first group gave much harsher judgements than the second, suggesting that their judgements were affected by their emotional state. One of the classic dilemmas used in studies of morality is known as the ‘trolley problem’. It goes like this. Imagine you are the driver of a railway trolley approaching a set of points. You realise that your route takes you down a line where five men are working on the railway unaware of your approach. But there is a switch you can pull that would throw the points and send you off down the other line where just one man is working.

The important role of intentions was borne out by a study of stroke patients, which showed that people with damage to the brain’s frontal lobe will usually opt for the rational utilitarian option and throw their companion off the bridge. The frontal lobes provide one area in the brain where we evaluate intentional behaviour. The importance of intentionality has recently been confirmed by Marc Hauser from Harvard and Rebecca Saxe from MIT: they found that, when subjects are processing moral dilemmas like the trolley problem, the areas in the brain that are especially involved in evaluating intentionality (such as the right temporal-parietal junction just behind your right ear) are particularly active. Our appreciation of intentions is crucially wrapped up with our ability to empathise with others. The final piece in the jigsaw has now been added by Ming Hsu and colleagues at the California Institute of Technology in Pasadena.

P., 223 temperature rises, 156–7 testes, size, 253 testosterone, 247 tetrachromatic women, 17–18 theology, 287–8 Thomas, Dylan, 22 Thornhill, Randy, 102 titis, 259 tits, 193, 260 Tomasello, Mike, 194 tools, 131, 137, 192 touching, 61–3 toumaï (Sahelanthropus tchadensis), 133–5 traders, 54–6 tree-climbing, 134 Treherne, John, 217 tribal groupings, 25–6 ‘trolley problem’, 269–70 trust, 63–6 tsunami, Indian Ocean, 145, 156 turtles, 98 Tusi, Nasir al-Din, 119 Tyrannosaurus rex, 120, 121 ultraviolet radiation (UVR), 89–91 Upper Palaeolithic Revolution, 137 vasopressin, 262–5 Venus figures, 137 vervet monkeys, 195–6 village sizes, 27 visual processing, 181, 272–3 vitamin: B, 90, 92; D, 87, 90–2 Vivaldi, Antonio, 71 Voland, Eckart, 42, 227, 237 voting patterns, 165–9 Vugt, Mark van, 68 walking upright, see bipedalism Walum, Hasse, 262 war chiefs, 250–1 waulking songs, 78, 155 Waynforth, David, 231, 236 wealth: advertising, 233, 236, 241; differentials, 227–8, 230, 240; inherited, 221; IQ and, 207 Whiten, Andy, 29, 179 Wilberforce, ‘Soapy Sam’, 117 Wilson, Edward O., 5 Wilson, Margo, 259 Wilson, Sandra, 95–6 Winston, Robert, 217 women: attractiveness, 233–5; colour vision, 17–20; conversations, 75, 79–80; extra-pair mating, 258–9; female–female bonding, 16, 79–80; Lonely Hearts adverts, 228–32; marriage, 227–8; skin colour, 91; social skills, 16–17 Young, Thomas, 183 Younger Dryas Event, 156–7 Zulus, 90

pages: 250 words: 79,360

Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It
by Erica Thompson
Published 6 Dec 2022

Even if you are a theologian or a string theorist (perhaps they are not so very different), that still holds true. But if the aim of models is to inform better decisions, then there is an unavoidable question of defining what we mean by a better decision, and this is not trivial even for seemingly quite trivial questions. The well-known ‘trolley problem’ is one philosophical attempt to grapple with this problem: there are three people tied to one branch of a railway line and one person tied to another branch, in such a way that the passage of a train would lead to their certain deaths. A train is coming and you are at the points, which are set so that the train will kill the three people.

People who make models are primarily well-educated, middle-class individuals, often trained in a certain way that values what they perceive as scientific detachment and therefore seeks to suppress value judgements and make them less visible. Their choices reflect the social norms of the modelling environment. The target of the trolley problem memes mentioned above is the incommensurability of value judgements. What if the three people on one rail are terrible criminals and the singleton on the other is a highly respected and productive member of society? What if we are in fact trading off biodiversity for economic gain, air quality outside a school for lower commuting times or the quality of a personal relationship for higher productivity at work?

Sometimes, where a dollar value is just too crude, we find alternative commensurable units such as Quality-Adjusted Life Years which perform the same operation of reducing trade-offs to quantitative comparison. Mathematical models don’t need to do this: we can always choose to keep incommensurables separate. That the contrived trolley problem is discussed at all is a bizarre and even somewhat morbid symptom of an obsession with quantifying, comparing and judging. But, again, the social norms of the modelling environment do prioritise comprehensiveness, generalisability and universality. All of these speak in favour of slicing, dicing and weighting the multiple outputs of a model, or of many models, in order to be able to present them on the same chart.

pages: 413 words: 119,587

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
by John Markoff
Published 24 Aug 2015

Congress could create a liability exemption for self-driving vehicles, as it has done for childhood vaccines. Insurance companies could impose a no-fault regime when only autonomous vehicles are involved in accidents. Another aspect of the liability issue is what has been described as a version of the “trolley problem,” which is generally stated thus: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks that has only one person on it, but that person will be killed.

A technology known as V2X that continuously transmits the location of nearby vehicles to each other is now being tested globally. In the future, even schoolchildren will be carrying sensors to alert cars to their presence and reduce the chance of an accident. It’s puzzling, then, that the philosophers generally don’t explore the trolley problem from the point of view of the greater good, but rather as an artifact of individual choice. Certainly it would be an individual tragedy if the technology fails—and of course it will fail. Systems that improve the overall safety of transportation seem vital, even if they aren’t perfect. The more interesting philosophical conundrum is over the economic, social, and even cultural consequences of taking humans out of the loop in driving.

Some people have been so bamboozled by the word ‘machine’ that they don’t realize what can be done and what cannot be done with machines—and what can be left, and what cannot be left to the human beings.”19 Only now, six and a half decades after Wiener wrote Cybernetics in 1948, is the machine autonomy question becoming more than hypothetical. The Pentagon has begun to struggle with the consequences of a new generation of “brilliant” weapons,20 while philosophers grapple with the “trolley problem” in trying to assign moral responsibility for self-driving cars. Over the next decade the consequences of creating autonomous machines will appear more frequently as manufacturing, logistics, transportation, education, health care, and communications are increasingly directed and controlled by learning algorithms rather than humans.

The Science of Language
by Noam Chomsky
Published 24 Feb 2012

And there's been no progress. These are just questions that are too hard. There is by now some study – like John Mikhail's – some empirical study of elements of human moral nature. Contemporary ethical philosophy has given interesting examples, the kind that Judith Thompson talks about, and Gil Harman and others – the trolley problem and others. There are situations in which we just have an intuition about what the right answer is – and it's a very strange one. For example, sometimes it leads everybody to prefer an outcome that will kill more people when they have a choice of killing one person; and the results are pretty systematic.

NC: There is now for the first time some serious research into it. A lot of it grew out of John Mikhail's dissertation; now Marc Hauser is doing work, Elizabeth Spelke, and others. And they're finding some quite interesting things. There are these kinds of paradoxical situations that have been worked on by ethical philosophers for some time – trolley problems, for example – conditions under which you have a choice to make. A typical case is a doctor in a hospital who has five patients who each have different diseased organs, and they're all going to die. And a healthy person comes in and you could kill him and take the appropriate organs and transplant them and save five patients.

S. 51, 53 Hale, Kenneth 17, 62 Halle, Morris 21 Hamilton, William D. 104 Harman, Gilbert 100 Harris, Zellig 38, 80, 81, 86 Hauser, Marc 100, 109, 286evolution of communication 12, 58 faculty of language 60, 170, 172, 268, 269 hearing 48 Helmholtz, Hermann von 73, 97 Herbert of Cherbury 181 Higginbotham, Jim 129, 130 Hirsh-Pasek, Kathy 196 homunculus 37, 290 Hornstein, Norbert 29, 183, 265 human behavior 138–151, 286 human evolution 2, 13, 71developmental constraints on 41 ‘great leap forward' 13, 70, 77 human nature 95–102, 108–112 and biological capacities 95 Chomsky on 95–102 determined and uniform 95, 99 distinctiveness of 176–179 enlightenment conception of 142 and evolution 103–107 ‘great leap forward' 179 moral agency 101 plasticity of 121 humanitarian intervention 121, 122, 287 humans, genetic variation 13 Hume, David 26, 90, 99, 106, 179color problem 247–248, 286 theory of moral nature 63, 99, 109 Huxley, Thomas 23 I-beliefs 153–156 definition of 156 I-concepts 153–156 definition of 155 I-language 81, 153–156, 164, 239, 258, 266intensional specification of 167 imagination 70, 161 inclusiveness 62, 281 induction 88, 90, 95 inference 73, 165, 221 information 208, 213, 218, 228, 229, 254pragmatic 30 semantic 29, 260 innateness 39–45, 60, 89, 91, 255, 267, 284 innatism 123 innovation 71, 74, 95, 177, 178, 185, 282technological 145 insects, study of 147 instinct 96, 143, 178, 181, 247, 248, 287 instrumentalism 211 intention (see also nativism) 163 internalism 6, 228, 248, 262–263, 269, 287and concepts 188, 190, 209, 255–257, 260, 272 intuitions 125, 126 island sentences 50 Jackendoff, Ray 170, 172 Jacob, François 24, 53, 60, 243 Joos, Martin 145 justice 120 Kahneman, Daniel 140 Kant, Immanuel 90 Kauffman, Stuart 21, 22, 266 Kayne, Richard 55, 84, 241 Keller, Helen 45 Kissinger, Henry 101, 107, 113, 287 Klein, Ralph 111 knowledge 70, 193See also information Kripke, Saul 126 Kropotkin, Peter 103, 111 languageand agency 124–128 as an animal instinct 178 and arithmetical capacities 16 and biology 21–30, 80, 235, 284 biophysical explanations of 208 and brain morphology 46 capacity for 70, 164 characteristic uses of 11–12 cognitive benefits of 2 competence and use 63 and complex thought 1 complexity of 52, 146 compositional character of 37 computational theory of 174, 272 and concepts 71, 198 conceptual resources of 212 displacement property 16 distinctive features 22 domination 232–238 expectations for 54 externalization of 52, 78, 79, 153, 222, 278 flexibility 95, 162, 197, 210, 224, 227 formal languages 16, 17, 289 formal theory of 21–30 functions of 11–20, 164, 165 generative capacity 49 head-first 240 hierarchical structure 232–238 I-language 153–156, 164, 239, 258, 266 interface conditions 25 internal 37 internal, individual and intensional 37, 154, 167 internal use of 52, 69, 124, 153, 160, 197, 262–263, 272–274 a ‘knowledge' system 187, 193 localization of 46, 59, 69–74 and mathematics 181 modularity of 59 movement property 16, 85, 108, 264–265 as a natural object 2, 7 nominalizing languages 155 open texture of 273 and other cognitive systems 271 phonetic features 42 phonological features 42, 57 precursors of 43, 77 properties of 22, 37, 60, 62 public language 153, 288 purposes of 224 and reason 181 result of historical events 84 rules of 165, 221, 223, 224, 225, 283, 284 and science 124–128 sounds available in 282 structural features of 42 structure of 236, 277–278 study of 36, 76, 79, 154See also linguistics theories of 164, 193, 239, 243, 285 unboundedness 177, 262 uniqueness to 
humans 150 variation in the use of 164, 239–242 language faculty 74, 172, 177, 243, 260, 261, 270adicity requirements of 198, 199 perfection of 50 language of thought 27, 71, 189, 190, 220, 230, 269 Lasnik, Howard 85 learning 95, 180, 200, 226, 281, 282empiricism and 173, 179 learning a language 187, 225, 226 Lenneberg, Eric 21, 43, 47, 59 Lepore, E. 195 Lewis, David 153, 165, 220, 222, 223, 224 Lewontin, Richard 58, 157, 170, 172, 173, 175, 231 lexical items 62categories of 234 origin of 46 liberalism 98 linguistic communities 222 linguistic development 39See also development linguistic practices 221, 223 linguistic principles 237, 276 linguistics 19, 36, 82, 145and biology 150 first factor considerations 45, 96, 148 and natural science 38 and politics 152 procedural theories in 149 second factor considerations 148, 277 structural 80 theories of 87, 265 third factor considerations:separate entry Locke, John 26, 125, 267personal identity 31, 271 secondary qualities 256 logic, formal 251 Logical Structure of Linguistic Theory 84–85 Lohndal, Terje 57 Lorenz, Konrad 21 Marx, Karl 122 mathematics 127, 165, 214, 215, 266capacity for 15, 136 formal functions in 166–169 and language 181 semantics for 251, 252 Mayr, Ernst 174 meaning 29, 98, 199, 206, 250, 252, 270, 273computational theory of 213 construction of a science of 226–230 externalist science of 209–220 methodology for a theory of 226, 227 study of 261 theories of 221 theory of 212, 214, 217, 226 Mehler, Jacques 55 Merge 16, 77, 91, 181, 236, 243, 263, 279–280 centrality of 41, 60, 62, 176, 245 consequences of 17 and edge properties 17, 41 Merge, external 17, 166, 201, 238, 263 Merge, internal 16, 25, 29, 85, 201, 238, 264 mutation giving rise to 43, 52 origin of 14, 15 Pair Merge 201, 264 and psychic identity 28 uniqueness to humans 25, 200, 205 metaphor 195 metaphysics 125, 157 Mikhail, John 63, 99, 100, 109, 129, 286 Mill, John Stuart 121, 122, 287 Miller, George 81 mindas a causal mechanism 138 computational sciences of 247 computational theory of 280 philosophy of 186, 255 place of language in 69–74 representational theory of 162, 188 science of 138–151, 212, 288 theory of 14 Minimalist Program 24, 83, 84, 233, 235–236, 237, 245, 246, 264and adaptationism 172 aim of 42, 199 simplicity and 80, 243, 285 modes of presentation (MOPs) 187, 190, 217, 219, 275roles of 218 morality 99, 100, 109, 287character of 110 conflicting systems 114 generation of action or judgment 110 moral truisms 101, 102 theories of 110, 135 trolley problems 109 and universalization 113–117 Moravcsik, Julius 164 morphemes 81, 149 morphology 52, 54, 195distributed 27 and syntax 200 Morris, Charles 250 Move 108 mutations 14, 43, 170, 171survival of 51, 53 mysterianism 97 Nagel, Thomas 98 Narita, Hiroki 57 nativism 187, 217, 283 natural numbers 204 natural sciences 18, 38 natural selection 58, 76, 104, 143, 157 Navajo language 277 neural networks 225 neurophysiology 74 Newton, Isaac 66, 67, 72, 88, 127, 134alchemy 67 nominalism 87, 91 non-violence 114 Norman Conquest 84 objective existence 169 optimism 118–123, 288 parameters 39–45, 54, 239–242, 277, 282, 283and acquisition of language 241 choice of 45, 83 developmental constraints in 243 functional categories 240 head-final 55, 240 headedness macroparameter 241, 276 linearization parameter 55 macroparameters 55 microparameters 55, 84, 241 polysynthesis 55 and simplicity 80 Peck, James 288 Peirce, Charles Sanders 96, 132, 184, 250abduction 168, 183, 246, 248 truth 133, 136 perfection 50–58, 172, 175, 263–264, 279 
person, concept of 125, 126, 271, 284‘forensic' notion of 125 persuasion 114, 116 Pesetsky, David 30 Petitto, Laura-Ann 48, 78 phenomenalism 211 philosophers 129–131, 282, 283contribution of 129 contribution to science 129 philosophy 181accounts of visual sensations 255–257 of language 35, 273 of mind 186, 255 problems in 286 and psychology 140 phonemes 81 phonetic/phonological interfaces 161, 194, 253, 278 phonology 28, 40, 52, 54, 57, 109, 208 physicalism 187 physics 19, 65, 106, 144and chemistry 65 folk physics 72 theoretical 18, 65, 73, 100 Piattelli-Palmarini, Massimo 140, 246, 279 Pietroski, Paulconcepts 47, 199, 200, 209 semantics 198, 211, 223, 229, 254 Pinker, Steven 166, 170, 172, 176 Pirahã language 30 Plato 115 Plato's Problem 23, 195, 236, 244, 246, 266 Poincaré, Henri 65 politics 116, 119, 145, 146, 152 poverty of the stimulus observations 5, 23, 40, 177, 200, 227, 233, 262 power 120 pragmatic information 30 pragmatics 36, 130, 250–254, 289definition of 250 and reference 253 principles and parameters approach to linguistic theory 24, 53, 235, 236, 240, 245, 276language acquisition 60, 82, 83, 149 and simplicity 246 progress 118, 145, 183 projection problem 83, 89 prosody 37 psychic continuity 26, 205, 207, 271 psychology 219of belief and desire 138, 141 comparative 21 evolutionary 103–107, 111 folk psychology 72, 141 and philosophy 140 rationalistic 255 scientific 140 psychology, comparative 21 public intellectuals 122 Pustejovsky, James 164, 195 Putnam, Hilary 95, 126, 138 Quine, W.

pages: 338 words: 100,477

Split-Second Persuasion: The Ancient Art and New Science of Changing Minds
by Kevin Dutton
Published 3 Feb 2011

Discover (April 2004). http://discovermagazine.com/2004/apr/whose-life-would-you-save (accessed January 9th, 2007). 5 Consider, for example … The Trolley Problem was first proposed in this form by Philippa Foot in ‘The Problem of Abortion and the Doctrine of the Double Effect’. In Virtues and vices and other essays in moral philosophy (Berkeley, CA: University of California Press, 1978). 6 Now consider the following … Thomson, Judith J. ‘Killing, Letting Die, and the Trolley Problem.’ The Monist 59 (1976): 204–17. Want to take things a stage further? How about this? A brilliant transplant surgeon has five patients.

A healthy young traveller, just passing through, comes in to the doctor’s surgery for a routine checkup. While performing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that were the young man to disappear, no-one would suspect the doctor … (See Thomson, Judith J. ‘The Trolley Problem.’ Yale Law Journal 94 (1985): 1395–1415.) 7 Harvard psychologist Joshua Greene … Greene, Joshua D., Sommerville, R. Brian, Nystrom, Leigh E., Darley, John M. and Cohen, Jonathan D., ‘An fMRI Investigation of Emotional Engagement in Moral Judgement.’ Science 293 (2001): 2105–2108. For a more general account of the neuroscience of morality see Greene, Joshua D. and Haidt, Jonathan, ‘How (and Where) Does Moral Judgement Work?’

pages: 198 words: 59,351

The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning
by Justin E. H. Smith
Published 22 Mar 2022

In this short book we will range widely in topic and time, permitting ourselves to linger far from some of the questions that internet users and tech analysts today consider most pressing: the outsized power of the tech monopolies; the racism built into AI applications in security, social media, and credit-rating algorithms; the variations on the trolley problem to which self-driving vehicles give rise; the epidemic of disinformation and the corollary crisis of epistemic authority in our culture; internet mobs and the culture wars; and so on, ad nauseam. For the most part, this aloofness is intentional. This book does describe itself as a “philosophy” of the internet and, while there will be much disagreement about what that might mean, most of us can at least agree that a philosophy of something, whatever else it may be, has the right to zoom out from that thing and to consider it in relation to its precedents, or in relation to other things alongside which it exists in a totality.

See Cantwell Smith, Brian sociobiology, 71 Source, The (computer network), 8 Spotify, 47–49, 164 Srinivasan, Balaji, 29 Stanley, Manfred, 6–7 Stendhal (Marie-Henri Beyle), 35 telecommunication: among humans, 59, 83–84, 124; among plants and animals, 56–59, 73–74, 83–84 teledildonics, 164 TikTok, 50 Tinder, 21 Tormé, Mel, 47 trolley problem, 13 Trump, Donald, 44, 49 Tupi (language), 108 Turing test, 30 Turing Tumble (toy), 110–11 Twitter, 32, 53–55, 122, 155, 164 Tyson, Neil DeGrasse, 90 Uber, 45 Vaucanson, Jacques de, 98, 119, 128–30 video games, 41, 43–45, 122 virality. See viruses viruses, 141–43 Vischer, Friedrich Theodor, 26 Vischer, Robert, 25–26 Vosterloch, Captain, 78 Wales, Jimmy, 156 Walton, Izaak, 40 Walzer, Michael, 10 Warhol, Andy, 31 Watson, James D., 70 weaving, 66, 127–39 White, Leslie, 80 Wiener, Norbert, 6, 60, 116–18, 142 Wikipedia, 154–58, 168, 170 Williams, James, 30, 37–38 Wilson, E.

pages: 233 words: 69,745

The Reluctant Carer: Dispatches From the Edge of Life
by The Reluctant Carer
Published 22 Jun 2022

Mum could get by without him, up to a point; the reverse is impossible. The longer this lasts, the worse it will be and perhaps the worse I will become. The more of us there are, the faster we sink. Our life is a leaking lifeboat. Or so it seems to me. In psychology and ethics this is known as a Trolley Problem, after a model in which one might change the direction of a runaway tram to spare one group of people, but in doing so kill another. There is no ‘right’ answer, but as Wikipedia explains, ‘Under some interpretations of moral obligation, simply being present in this situation and being able to influence its outcome constitutes an obligation to participate.’

If you want to see how much you love someone, try and fix their computer. ‘Password?’ ‘Don’t know.’ ‘User ID?’ ‘Don’t know.’ I would consider taking my own life and the lives of others before dealing with the online ‘help’ desk of his email provider again, bearing so little basic information. No Trolley Problems there. If anything gets logged out of now that will be the end of his online adventures. * Into this stable instability land the groceries. A spin-off of the Amazon issue, except this is about things we need. If I am not there to monitor the delivery, one or both of two things will happen. Either my mother will struggle to unpack it and give my dad a hard time about all the things he has or hasn’t ordered and the expense of all this, or my sister will arrive and unpack it herself along with a diatribe against the whole household, which she will save and relay to me another time.

pages: 1,261 words: 294,715

Behave: The Biology of Humans at Our Best and Worst
by Robert M. Sapolsky
Published 1 May 2017

The Frontal Cortex and Its Relationship with the Limbic System
We now have a sense of what different subdivisions of the PFC do and how cognition and emotion interact neurobiologically. This leads us to consider how the frontal cortex and limbic system interact. In landmark studies Joshua Greene of Harvard and Princeton’s Cohen showed how the “emotional” and “cognitive” parts of the brain can somewhat dissociate.66 They used philosophy’s famous “runaway trolley” problem, where a trolley is bearing down on five people and you must decide if it’s okay to kill one person to save the five. Framing of the problem is key. In one version you pull a lever, diverting the trolley onto a side track. This saves the five, but the trolley kills someone who happened to be on this other track; 70 to 90 percent of people say they would do this.

More interesting than squabbling about the relative importance of reasoning and intuition are two related questions: What circumstances bias toward emphasizing one over the other? Can the differing emphases produce different decisions? As we’ve seen, then–graduate student Josh Greene and colleagues helped jump-start “neuroethics” by exploring these questions using the poster child of “Do the ends justify the means?” philosophizing, namely the runaway trolley problem. A trolley’s brake has failed, and it is hurtling down the tracks and will hit and kill five people. Is it okay to do something that saves the five but kills someone else in the process? People have pondered this since Aristotle took his first trolley ride;* Greene et al. added neuroscience.

Alabama, 171, 589 mimicry, 390 empathic, 102, 522–24 mirror neurons and, see mirror neurons minimal group paradigm, 389–91 Minsky, Marvin, 603, 605 mirror neurons and supposed functions, 166n, 180n, 536–41 autism and, 539–40 empathy and, 540–41 social interactions and, 538–39 Mischel, Walter, 186–87 Mitchell, David, 657 M’Naghten, Daniel, 586–87, 598 Mogil, Jeffrey, 133, 524, 544 mole rats, 120, 352 Moniz, Egas, 9 Money, John, 215 monkeys, 4, 35, 36, 47, 48, 50–51, 55, 67, 68, 70, 71, 73–74, 82, 104, 109–10, 123, 148, 172, 221, 429, 535, 557 baboons, 17, 123, 131–32, 162, 172, 191–92, 196, 207, 295, 303, 337, 338, 429, 648–52, 648, 650 “Garbage Dump” troop of, 648–50, 649 hierarchies and, 426–27, 427, 428, 436–39, 442, 455 deception in, 513 “executive,” stress in, 436 Harlow’s experiments with, 189–90, 190, 192 kinship understanding in, 337–38 langurs and competitive infanticide, 334–35 moral judgments in, 484–85, 487 sex differences in behaviors of, 213–14, 214 social rank and, 433, 434 tamarins, 110, 213, 355, 357 monoamine oxidase-A (MAO-A), 251–55, 257, 264, 605 monogamy, 339, 366 morality and moral decisions, 478–520 in animals, 484–87 applying science of, 504–20 automaticity and, 50 in children, 181–85 reasoning in, 182–83 competition and, 495–500 consequentialism and, 504–7, 520 context in, 488–503 cultural, 275, 493–503 framing, 491–92 language, 491 proximity, 491 special circumstances, 492–93 cooperation and, 495–500, 508–9 cultural differences and, 275 deontology and, 504, 505, 520 disgust and, 398, 454, 561–65 doing the harder thing when it’s the correct thing to do, 45, 47–48, 50, 51, 55, 56, 63, 64, 74, 75, 92, 130, 134, 513, 515, 614 dumbfounding in, 483 honesty and duplicity and, 512–20 in infants, 483–84 internal motives and external actions in, 493 intuition in, 478, 479, 481–83, 507–8 “me vs. us” and “us vs. them” in, 508–12 obedience and, 471, 473 see also obedience and conformity political orientation and, 449–50 punishment and, see punishment reasoning in, 169, 478–81, 487–88, 507–8, 542 in adolescents, 167–69 in children, 182–83 in infants, 483–84 runaway trolley problem (killing one person to save five) and, 55, 56, 58–59, 117, 482, 488–91, 505–7 self-driving cars and, 612n saving person vs. dog, 368, 371 and sins of commission vs. omission, 490 and tragedy of the commons vs. tragedy of commonsense morality, 508–11, 533 universals of, 494–95 utilitarianism and, 505–7 virtue ethics and, 504, 520 Moral Life of Children, The (Coles), 181n Moral Origins: The Evolution of Virtue, Altruism, and Shame (Boehm), 323 Moral Politics: How Liberals and Conservatives Think (Lakoff), 558 Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (Greene), 508–9 Mormons, 367 Morozov, Pavlik, 368–69, 487 Morse, Stephen, 598–600 Moscone, George, 92n Mother Teresa, 535 motivation, “you must be so smart” vs.

pages: 345 words: 75,660

Prediction Machines: The Simple Economics of Artificial Intelligence
by Ajay Agrawal, Joshua Gans and Avi Goldfarb
Published 16 Apr 2018

Even if we’re all comfortable with complete machine autonomy, the law might not allow it. Isaac Asimov anticipated the regulatory issue by opting for hard coding robots with three laws, cleverly designed to remove the possibility that robots harm any human.8 Similarly, modern philosophers often pose ethical dilemmas that seem abstract. Consider the trolley problem: Imagine yourself standing at a switch that allows you to shift a trolley from one track to another. You notice five people in the trolley’s path. You could switch it to another track, but along that path is one person. You have no other options and no time to think. What do you do? That question confounds many people, and often they just want to avoid thinking about the conundrum altogether.

See autonomous vehicles sensors, 15, 44–45, 105 Shevchenko, Alex, 96 signal vs. noise, in data, 48 Simon, Herbert, 107 simulations, 187–188 skills, loss of, 192–193 smartphones, 129–130, 155 Smith, Adam, 54, 65 The Snows of Kilimanjaro (Hemingway), 25–26 society, 3, 19, 209–224 control by big companies and, 215–217 country advantages and, 217–221 inequality and, 212–214 job loss and, 210–212 Solow, Robert, 123 Space Shuttle Challenger disaster, 143 sports, 117 camera automation and, 114–115 sabermetrics in, 56, 161–162 spreadsheets, 141–142, 163, 164 Standard & Poor’s, 36–37 statistics and statistical thinking, 13, 32–37 economic thinking vs., 49–50 human weaknesses in, 54–58 stereotypes, 19 Stern, Scott, 169–170, 218–219 Stigler, George, 105 strategy, 2, 18–19 AI-first, 179–180 AI’s impact on, 153–166 boundary shifting in, 157–158 business transformation and, 167–178 capital and, 170–171 cheap AI and, 15–17 data and, 174–176 economics of, 165 hybrid corn adoption and, 158–160 judgment and, 161–162 labor and, 171–174 learning, 179–194 organizational structure and, 161–162 value capture and, 162–165 strokes, predicting, 44–46, 47–49 Sullenberger, Chesley “Sully,” 184 supervised learning, 183 Sweeney, Latanya, 195, 196 Tadelis, Steve, 199 Taleb, Nassim Nicholas, 60–61 The Taming of Chance (Hacking), 40 Tanner, Adam, 195 task analysis, 74–75, 125–131 AI canvas and, 134–139 job redesign and, 142–145 Tay chatbot, 204–205 technical support, 90–91 Tencent Holdings, 164, 217, 218 Tesla, 8 Autopilot legal terms, 116 navigation apps and, 89 training data at, 186–187 upgrades at, 188 Tesla Motor Club, 111–112 Thinking, Fast and Slow (Kahneman), 209–210 Tinder, 189 tolerance for error, 184–186 tools, AI, 18 AI canvas and, 134–138 for deconstructing work flows, 123–131 impact of on work flows, 126–129 job redesign and, 141–151 usefulness of, 158–160 topological data analysis, 13 trade-offs, 3, 4 in AI-first strategy, 181–182 with data, 174–176 between data amounts and costs, 44 between risks and benefits, 205 satisficing and, 107–109 simulations and, 187–188 strategy and, 156 training data for, 43, 45–47 data risks, 202–204 in decision making, 74–76, 134–138 by humans, 96–97 in-house and on-the-job, 185 in medical imaging, 147 in modeling skills, 101 translation, language, 25–27, 107–108 trolley problem, 116 truck drivers, 149–150 Tucker, Catherine, 196 Tunstall-Pedoe, William, 2 Turing, Alan, 13 Turing test, 39 Tversky, Amos, 55 Twitter, Tay chatbot on, 204–205 Uber, 88–89, 164–165, 190 uncertainty, 3, 103–110 airline industry and weather, 168–169, 170 airport lounges and, 105–106 business boundaries and, 168–170 contracts in dealing with, 170–171 in e-commerce delivery times, 157–158 reducing, strategy and, 156–157 strategy and, 165 unknown knowns, 59, 61–65, 99 unknown unknowns, 59, 60–61 US Bureau of Labor Statistics, 171 US Census Bureau, 14 US Department of Defense, 14, 116 US Department of Transportation, 112, 185 Validere, 3 value, capturing, 162–165 variables, 45 omitted, 62 Varian, Hal, 43 variance, 34–36 fulfillment industry and, 144–145 taming complexity and, 103–110 Vicarious, 223 video games, 183 Vinge, Vernor, 221 VisiCalc, 141–142, 163, 164 Wald, Abraham, 101 Wanamaker, John, 174–175 warehouses, robots in, 105 Watson, 146 Waymo, 95 Waze, 89–90, 106, 191 WeChat, 164 Wells Fargo, 173 Windows 95, 9–10 The Wizard of Oz, 24 work flows AI tools’ impact on, 126–129 decision making and, 133–140 deconstructing, 123–131 iPhone keyboard design and, 129–130 job redesign and, 142–145 task analysis, 
125–131 World War II bombing raids, 100–102 X.ai, 97 Xu Heyi, 164 Yahoo, 216 Y Combinator, 210 Yeomans, Mike, 117 YouTube, 176 ZipRecruiter, 93–94, 100 About the Authors AJAY AGRAWAL is professor of strategic management and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management and the founder of the Creative Destruction Lab.

Psychopathy: An Introduction to Biological Findings and Their Implications
by Andrea L. Glenn and Adrian Raine
Published 7 Mar 2014

Finally, studies have also found that damage to the ventromedial PFC alters moral judgment. One popular way for examining moral judgment has been to present individuals with a series of hypothetical moral dilemmas and ask them to make judgments (Greene et al. 2001). One of the most famous of these dilemmas is the trolley problem: A runaway trolley is heading down the tracks toward five workmen who will be killed if the trolley proceeds on its present course. You are on a footbridge over the tracks, in between the approaching trolley and the five workmen. Next to you on this footbridge is a stranger who happens to be very large.

Notably, there is significant overlap between the brain regions implicated in psychopathy and the regions important in emotional responding during moral decision making (for a review, see Raine and Yang 2006). In a study conducted in our laboratory, we presented participants with a series of moral dilemmas that had been compiled in a previous study examining the neural correlates of moral judgment (Greene et al. 2001). The trolley problem, presented in Chapter 4, is one of these dilemmas. Another example is the crying baby scenario: Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside you hear the voices of soldiers who have come to search the house for valuables.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

This is really not too much to ask for the AI systems of the future, given that present-day Facebook systems are already maintaining more than two billion individual profiles. A related misunderstanding is that the goal is to equip machines with “ethics” or “moral values” that will enable them to resolve moral dilemmas. Often, people bring up the so-called trolley problems,12 where one has to choose whether to kill one person in order to save others, because of their supposed relevance to self-driving cars. The whole point of moral dilemmas, however, is that they are dilemmas: there are good arguments on both sides. The survival of the human race is not a moral dilemma.

Also Alan Fern et al., “A decision-theoretic model of assistance,” Journal of Artificial Intelligence Research 50 (2014): 71–104. 11. A critique of beneficial AI based on a misinterpretation of a journalist’s brief interview with the author in a magazine article: Adam Elkus, “How to be good: Why you can’t teach human values to artificial intelligence,” Slate, April 20, 2016. 12. The origin of trolley problems: Frank Sharp, “A study of the influence of custom on the moral judgment,” Bulletin of the University of Wisconsin 236 (1908). 13. The “anti-natalist” movement believes it is morally wrong for humans to reproduce because to live is to suffer and because humans’ impact on the Earth is profoundly negative.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

For example, when confronted with a choice about whether to swerve into a bicycle lane to protect the car’s driver or to harm parents bicycling with their children, what should the autonomous system that pilots the car be programmed to do? Consider a hypothetical dilemma introduced by the English philosopher Philippa Foot in the late 1960s, the “Trolley Problem,” that has now become a real problem for engineers. In the context of autonomous cars, the problem asks whether a vehicle should be programmed to endanger or sacrifice the life of its sole passenger by running off the road in order to avoid potentially hitting five pedestrians crossing the road.

See also Y Combinator start-up mindset, xxi “Statement on the Purpose of a Corporation” (Business Roundtable), 181 Stiglitz, Joseph, 254 stock options, 26–28 substantive fairness, 92–93 success disasters, 20–21 Sundar Pichai, 64–65 Sundararajan, Arun, 49 Sunflower Movement, Taiwan, 242 supervised data, 85–86 Supreme Court of the United States, 199, 201 surveillance capitalism, 115, 121–22 surveillance society, 151 surveillance technologies, 21, 112, 113–14, 125–26 Swartz, Aaron, xxi–xxvi, 44 Sweeney, Latanya, 130 Swift, Taylor, 111–12 systemic problems in a democracy, 239–43 Taiwan, 242–43, 261–62 Tang, Audrey, 242–43 Taylor, Frederick, and Taylorism, 14 technological innovation overview, 240 balancing the competing values created by, 240–43, 258 Clipper Chip technology, 115–16 deceleration in, 52 democratic resolution of rival values, xxxiii–xxxiv externalities created by, xxvii failure to examine potential societal harm, xxi and governance, 52–53 insider argument for a reflective stance, 254 instant wealth as a priority, xxv–xxvi maximizing benefits while minimizing harms, xiii–xiv, 65 See also algorithmic decision-making; governance; innovation technological unemployment, 174–76 technologists enablers of, xxviii funding for OpenAI’, 234 governing us vs. governing them, xxviii–xxix, 68–69, 257–63 lack of diversity, 17, 41 legislative ignorance of, 66–68 libertarian tendencies, 25, 52, 67 new masters of the universe, 22–23 optimizing facial recognition software, 17 small group of humans make choices for all of us, 11, 25–26 transforming education to create civic-minded technologists, 251 See also optimization mindset technology, 21, 53–59, 169, 174, 237–39 Telecommunications Act (1996), 60, 61, 62 telegraph, 56–57 telephone system, 60 Terman, Frederick, 28–29 terrorist attack, San Bernardino, California, 72 Theranos, xxx Thiel, Peter, 28, 38, 42, 52 Thrun, Sebastian, 154 Time magazine, 30 transparency of algorithmic decisions, 105, 107–9 and control, 134 Facebook Oversight Board, 215–16 requiring internet platforms to disclose information on credibility of sources, 225–26 “Traveling Salesperson Problem” (TSP), 12–13 Triangle Waist Company fire in 1911, 53–55 Trolley Problem, the, 155 truck drivers and trucking industry, 175 Trump, Donald J., xi, 187–88, 215 Tuskegee experiment, xxxi Twitter as digital civic square, 21 leaders surprised by ways the platform could do harm, 254 Trump’s access denied after January 6, 2021, xi–xii, 187–88 See also big tech platforms ultimatum game, 91 unicorns, 37–38, 39, 43 United Kingdom, 165, 218, 254, 260–62 United Nations Development Programme (UNDP), 173 United States Postal Service, 3–4 universal basic income (UBI), 182–84, 185 University College London Jeremy Bentham display, 120–21, 124 unsupervised data, 85 US Air Force Academy, 103 US Capitol assault (Jan. 6, 2021), xi-xii, xxvi, 115, 187, 209, 215, 223 US Census Bureau, 41 US Department of Justice (DOJ), 257 US Federation of Worker Cooperatives, 180 US security forces and message encryption, 128–29 USA PATRIOT Act, 116 user engagement in online platforms, 40 user-centric privacy, 149–50 utilitarianism, 9, 121, 168, 245 Vacca, James, 104–5 values overview, xvii, xxix balancing the competing values created by innovation, 240–43, 258 expressing ourselves in support of each other, 178 free expression, democracy, individual dignity at risk online, 190–91 freedom as, 172–73 goals assessment for evaluating efficiency vs. 
values, 15–16 replacing governance by big tech with process of deciding, xxix resolving trade-offs between rival values, xxxi–xxxiii, 45 at risk from new, unregulated innovations, 56 of tech leaders as expert rulers, 67–68 See also dignity, fairness, free speech, privacy, safety, security Varian, Hal, 174 venture capital, inequality in distribution of, 41 venture capitalists (VCs), 25–49 ecosystem of, 31–33 funding Soylent, 8 funds as investment vehicles for their LPs, 38–39 hackers and, 28, 52, 68 high value exits, 40–41 increasing numbers of, 39 narrow view of success as white, male, nerd, 41 optimizing from multiple starting points, 43–45 and scalability of businesses, xxviii and Silicon Valley, 17, 26–28 at Stanford showcasing their new companies, 42–45 unicorns, search for, 37–38, 39, 43 Vestager, Margrethe, 252–53, 255 virtual reality, the experience machine, 167–69 Waal, Frans de, 92 Wales, Jimmy, 195 Walker, Darren, 180 Wall Street Journal, 42–43 Warren, Elizabeth, 181, 256 washing machines and laundry, 157–58 watch time metric, 34 Watchdog.net, xxiii Weapons of Math Destruction (O’Neil), 98 Weinberg, Gabriel, 135–36 Weinstein, Jeremy, xv–xvi, 72 Weld, William, 130 Western Union, 57 Westin, Alan, 137–38 WhatsApp, 127–28 Wheeler, Tom, 63, 76 Whitt, Richard, 149 “Why Software Is Eating the World” (Wall Street Journal), 42–43 Wikipedia, 195–96 Wikipedia conference, xxiii–xxiv Wilde, Oscar, 63 winner-take-all, disruption vs. democracy, 51–76 overview, 51–53 democracy and regulation of technology, 68–73 democracy as a guardrail, 73–76 government’s complicity in absence of regulation, 59–63 innovation vs. regulation, 53–59 and Plato’s philosopher kings, 63–68 Wisconsin’s COMPAS system, 88, 98 Wong, Nicole, 40, 254 worker cooperatives, 180 workers’ compensation benefit, 55 workplace safety, 53–54, 55 World Economic Forum 1996, Davos, Switzerland, 25 World Health Organization, 154 World Wide Web, 29, 30.

pages: 698 words: 198,203

The Stuff of Thought: Language as a Window Into Human Nature
by Steven Pinker
Published 10 Sep 2007

The semantic distinction between after and from points to a causal distinction between succession and impingement, which in turn animates a moral distinction between tragedy and evil. Another force-dynamic distinction, the one between causing and letting, deeply penetrates our moral reasoning. The difference is exposed in the trolley problem, a famous thought experiment devised by the philosopher Philippa Foot that has long been a talking point among moral philosophers. 141 A trolley is hurtling out of control and is bearing down on five railroad workers who don’t see it coming. You are standing by a switch and can divert the trolley onto another track, though it will then kill a single worker who hasn’t noticed the danger either.

Berkeley: University of California Press. Tetlock, P. E., Kristel, O. V., Elson, B., Green, M. C., & Lerner, J. 2000. The psychology of the unthinkable: Taboo tradeoffs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–870. Thomson, J. J. 1985. The trolley problem. Yale Law Journal, 94, 1395–1415. Tomasello, M. 2003. Constructing a language: A usage-based theory of language acquisition. Cambridge, Mass.: Harvard University Press. Tooby, J., & Cosmides, L. 1992. Psychological foundations of culture. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture.

Singer, Isaac Bashevis skin color, words for skirt lengths slang Sloman, Steven Smith, Anna Nicole Smith, Susan Snedeker, Jesse sniglets snow, Eskimo words for social contract social relationships Authority Ranking Communal Sharing Exchange in human nature mismatches of type of politeness as signal of switching types words and society: duality in social life individual decision in trends and moral responsibility variation and language type words and see also social relationships Soja, Nancy solidarity sound: meaning’s relation to symbolism working memory and space as cognitive framework in Authority Ranking body terms for place and direction causal discontinuities as aligning with spatial terms in conceptual semantics dimensionality of economizing of spatial terms either-or distinctions in engineering finite versus infinite ideal vocabulary for imagination constrained by imprecision of language of Kant on Linguistic Determinism and linguistic variation regarding as medium metaphorical use in other domains in metaphors for time in physics polysemy of terms for precision in expressing shape versus location words similarity between time and time contrasted with spam Spanish language Spaulding, Bailey species, concepts of specious present Spelke, Elizabeth spell, preserving the Spellman, Barbara Sperber, Dan Splash Starr, Kenneth states: aspect category (Aktionsart) as involuntary in language of thought state-change effect state-space statistical thinking status Steve Stewart, Potter straw man Stroop effect structural parallelism Subbiah, Ilavenil Subbiah, Saroja substance in Authority Ranking in conceptual semantics in engineering applications Kant on in locative construction subtexts suffixes superlatives Sutcliffe, Stuart swearing (taboo language): aloud in aphasia basal ganglia and cathartic desensitization of different ways of emotional charge of feminism and historical changes in persecution of pros and cons of profanity’s meaning religious semantics of sexual swearing “on” and “by,” syntax of truncated profanities as universal see also taboo words syllepsis sympathetic politeness syntax of taboo expressions see also adjectives; causative construction; dative construction; double-object (ditransitive) dative; grammar; intransitive verbs; locative construction; nouns; prepositions; transitive verbs; verbs taboos: food in human nature in prenuptial agreements relationship type mismatches leading to in swearing by see also taboo words taboo words: abusive use of acceptability of in aphasia brain and common denominator of in coprolalia count and mass nouns among descriptive use of domains dysphemistic emphasis euphemisms for for excretion and bodily effluvia historical changes idiomatic use of negative emotion triggered by paradox of identifying without using seven you can’t say on television substituting for one another as word magic see also swearing (taboo language) tact Talmy, Len Tamil language Tannen, Deborah television, seven words you can’t say on telic events Tenner, Edward tense: and aspect basic discrete nature of in engineering in English as “location” in time and “thinking for speaking” effect time as embedded in Tetlock, Philip thing “thinking for speaking” effect third commandment third-person pronoun, gender-neutral Thomas, Clarence Thomas, Dylan Thurber, James threats: rational ignorance of veiled time in Authority Ranking in conceptual semantics and consciousness counting events in as cultural universal as cyclical in engineering finite versus infinite as cognitive 
framework goals and language of intuitive versus scientific conceptions of “landscape” versus “procession” metaphor as medium model underlying language moving time metaphor as one-dimensional in physics precision in expressing representation of space contrasted with spatial metaphors for time-orientation metaphor see also aspect; tense Tipping Point, The (Gladwell) titles tits Tlingit language To Have and Have Not toilet token bow Tootsie topology Tourette syndrome Tower of Hanoi problem T pronouns transitive verbs absence of polite verb for sex and causative construction meaning and moral judgments and used intransitively trolley problem Truman, Bess truncation truth and tense see also reality; relativism Truth, Sojourner Tucker, Sophie Turkish language tumor problem Turner, Mark Tversky, Amos Twain, Mark Twin Earth thought experiment Tzeltal language Tzotzil language United Nations Resolution Universal Grammar.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

(Think, for example, of ‘the traitorous coffee maker’ sold by Keurig that refused to brew coffee from non-Keurig brand beans.)44 Each individual limitation induced by these technologies may constitute only a small exertion of force, but the cumulative effect will be that we’re subject to a good deal of power flowing from whoever controls those technologies. The implications for freedom are discussed in the next Part. Take the famous example of the ‘trolley problem’.45 You are motoring down the freeway in a self-driving car, and a small child steps into the path of your vehicle. If you had control of the car, you would swerve to avoid the child. You know that this would cause a collision with the truck in the adjacent lane, probably killing both you and the trucker—but to preserve the life of the child, that’s a sacrifice you are willing to make. Your car, however, has different ideas.

In return for these affordances, however, we’ll necessarily sacrifice other freedoms. The freedom (occasionally) to drive over the speed limit. The freedom (occasionally) to make an illegal manoeuvre or park on a double yellow line. The freedom to make a journey with no record of it. Perhaps even the freedom to make moral choices, like (in the case of the trolley problem described in chapter six) whether to kill the child or the trucker. Again, I don’t seek to suggest that this isn’t a deal worth striking. But I do suggest that we see it for what it is: a trade-off in which our precious liberties are part of the bargain. From the perspective of freedom, there are four important differences between the power wielded by the state and that wielded by tech firms.

pages: 225 words: 70,180

Humankind: Solidarity With Non-Human People
by Timothy Morton
Published 14 Oct 2017

Utilitarian holism, the holism of populations, is explosive—the whole is especially different (better or worse) than the part. There is no such thing as society! Or, specific people don’t matter! Utilitarian holism sets up a zero-sum game between the actually existing lifeform and the population. One consequence is the trolley problem: it is better to kill one person tied to the tracks by diverting the trolley than it is to kill hundreds of people on the trolley who will go off a cliff if we don’t divert the trolley. There’s the left-wing variant: talk of wholes is necessarily violent (racist, sexist, homophobic, transphobic and so on) because what exists are highly differentiated beings that are radically incommensurable.

pages: 1,351 words: 385,579

The Better Angels of Our Nature: Why Violence Has Declined
by Steven Pinker
Published 24 Sep 2012

(It also may be a response to whatever external threat would have caused a fellow animal to issue an alarm call.)247 The participants in Stanley Milgram’s famous experiment, who obeyed instructions to deliver shocks to a bogus fellow participant, were visibly distraught as they heard the shrieks of pain they were inflicting.248 Even in moral philosophers’ hypothetical scenarios like the Trolley Problem, survey-takers recoil from the thought of throwing the fat man in front of the trolley, though they know it would save five innocent lives.249 Testimony on the commission of hands-on violence in the real world is consistent with the results of laboratory studies. As we saw, humans don’t readily consummate mano a mano fisticuffs, and soldiers on the battlefield may be petrified about pulling the trigger.250 The historian Christopher Browning’s interviews with Nazi reservists who were ordered to shoot Jews at close range showed that their initial reaction was a physical revulsion to what they were doing.251 The reservists did not recollect the trauma of their first murders in the morally colored ways we might expect—neither with guilt at what they were doing, nor with retroactive excuses to mitigate their culpability.

Today, for example, people might be dumbfounded when asked whether we should burn heretics, keep slaves, whip children, or break criminals on the wheel, yet those very debates took place several centuries ago. We even saw a neuroanatomical basis for the give-and-take between intuition and reasoning in Joshua Greene’s studies of trolley problems in the brain scanner: each of these moral faculties has distinct neurobiological hubs.215 When Hume famously wrote that “reason is, and ought to be, only the slave of the passions,” he was not advising people to shoot from the hip, blow their stack, or fall head over heels for Mr. Wrong.216 He was basically making the logical point that reason, by itself, is just a means of getting from one true proposition to the next and does not care about the value of those propositions.

Response to torture warrants: Dershowitz, 2004b; Levinson, 2004a. 245. Taboo against torture is useful: Levinson, 2004a; Posner, 2004. 246. Aversiveness of conspecifics in pain: de Waal, 1996; Preston & de Waal, 2002. 247. Reasons for aversiveness of pain displays: Hauser, 2000, pp. 219–23. 248. Anxiety while hurting others: Milgram, 1974. 249. Trolley Problem: Greene & Haidt, 2002; Greene et al., 2001. 250. Aversion to direct violence: Collins, 2008. 251. Ordinary Germans: Browning, 1992. 252. Nausea not soul-searching: Baumeister, 1997, p. 211. 253. Distinguishing fiction from reality: Sperber, 2000. 254. Blunted emotions in psychopathy: Blair, 2004; Hare, 1993; Raine et al., 2000. 255.

pages: 249 words: 77,342

The Behavioral Investor
by Daniel Crosby
Published 15 Feb 2018

Our proclivity to conflate the known with the advisable is so pronounced that we actually perceive stocks with pronounceable tickers (e.g., MOO) to be less risky than those with hard to pronounce tickers (e.g., NTT). So, rather than trying to scour your local mall for the next big investment idea, put in place a plan that diversifies across geographies and asset classes, both familiar and foreign.
Don’t know what you own
The trolley problem is a formulation used in many philosophy and ethics courses. A slight modification of the general form of the problem is as follows: There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them.

pages: 383 words: 92,837

Incognito: The Secret Lives of the Brain
by David Eagleman
Published 29 May 2011

Something about interacting with the person up close stops most people from pushing the man to his death. Why? Because that sort of personal interaction activates the emotional networks. It changes the problem from an abstract, impersonal math problem into a personal, emotional decision. When people consider the trolley problem, here’s what brain imaging reveals: In the footbridge scenario, areas involved in motor planning and emotion become active. In contrast, in the track-switch scenario, only lateral areas involved in rational thinking become active. People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr.

pages: 356 words: 106,161

The Glass Half-Empty: Debunking the Myth of Progress in the Twenty-First Century
by Rodrigo Aguilera
Published 10 Mar 2020

People are said to be irrational when they behave in ways that would appear inconsistent with logical or factual decision-making, relying instead on intuitive or emotional impulses, or other forms of motivated reasoning. Put in the simplest dichotomy: Facts good. Feelings bad. Reason, however, is a far more complex epistemological process, of which rationality is just one of many components. Take, for example, the famous “trolley problem” in philosophy, which in its original conception goes something like this: Imagine that you are on a street and you see a trolley which has lost control of its brakes. It is hurtling down at such speed that it will inevitably kill five pedestrians who are crossing the tracks. However, in front of you is a lever which will divert the trolley into a second set of tracks where only one pedestrian is crossing, but who will also be killed.

pages: 412 words: 115,266

The Moral Landscape: How Science Can Determine Human Values
by Sam Harris
Published 5 Oct 2010

Theory-based Bayesian models of inductive reasoning. In A. Feeney & E. Heit (Eds.), Inductive reasoning: Experimental, developmental, and computational approaches (pp. 167–204). Cambridge, UK: Cambridge University Press. Teresi, D. (1990). The lone ranger of quantum mechanics. New York Times. Thompson, J. J. (1976). Letting die, and the trolley problem. The Monist, 59 (2), 204–217. Tiihonen, J., Rossi, R., Laakso, M. P., Hodgins, S., Testa, C., Perez, J., et al. (2008). Brain anatomy of persistent violent offenders: More rather than less. Psychiatry Res, 163 (3), 201–212. Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

There are two fundamental levels of the ethics of AI: machine ethics, which refers to the AI systems per se, and the wider domain, not specific to the algorithms. The prototypical example of machine ethics involves how driverless cars handle the dilemma of choosing between evils in the case of an impending accident, when no matter how it responds, people are going to die. It’s the modern-day version of the trolley problem introduced more than fifty years ago. Jean-Francois Bonnefon and colleagues examined the driverless car dilemma in depth using simulations and input from more than 1,900 people.59 In each of the three scenarios (Figure 5.1), there is no good choice; it’s just a matter of who and how many people are killed, whether the car’s passenger, a pedestrian, or several of them.

pages: 533 words: 125,495

Rationality: What It Is, Why It Seems Scarce, Why It Matters
by Steven Pinker
Published 14 Oct 2021

See distributions, statistical statistical significance overview, 221–22 alternative hypothesis, 223 as Bayesian likelihood, 224–25, 352n21 critical value, placement of, 223 definition, 224 null hypothesis, 222–23 scientists’ misunderstanding of, 224–26 statistical power and, 223 Type I & II errors, 223–24, 225 stereotypes base-rate neglect and, 155–56 in the conjunction fallacy (Linda problem), 156 failures of critical thinking, 19–20, 27 family resemblance categories and, 99–100, 105, 155 illusory correlation and, 251–52 vs. propositional reasoning, 108–9 random sampling vs., 168 representativeness heuristic, 27, 155–56 Stone, Oliver, JFK, 303 Stoppard, Tom, Jumpers, 44–45, 66 straw man, 88, 291 Styron, William, Sophie’s Choice, 184 subjective reality, claims for, 39 subjectivist interpretation of probability, 115, 116, 151, 194–96 sucker’s payoff, 239, 242, 244 suicide, 156 Suits, Bernard, 346n28 sunk cost fallacy, 237–38, 320, 323 Sunstein, Cass, 56 Superfund sites, 191 superstitions the cluster illusion and, 147 and coincidences, prevalence of, 143–44, 287 confirmation bias and, 14, 142 openness to evidence and, 311 prevalence of, 285–86, 354–55n8 syllogisms, 12, 81 synchronicity, 144, 305 System 1 & 2 defined, 10 equality and System 2, 108–9 the Monty Hall dilemma and, 20 rational choice and, 187 reflective and unreflective thinking, 8–10, 311 visual illusions and, 30 taboos and communal outrages, 123–25 definition, 62 forbidden base rates, 62, 163–66 heretical counterfactuals, 64–65 taboo on discussing taboo, 166 taboo tradeoffs, 62–64, 184, 350n15 victim narratives, 124 taboo tradeoffs, 62–64, 184, 350n15 talent and practice, 272–73, 277–78, 278 Talking Heads, 35 Talleyrand, Charles-Maurice de, 337 tautologies, 80 See also begging the question; circular explanations taxicab problem, 155, 168, 170, 171 television, 216, 238–39, 267–68, 303, 305 temporal discounting, 47–56, 320 temporal logic, 84 temporal stability, 258 tendentious presuppositions, 89 terrorism availability bias and, 122 Bayesian reasoning and prediction of, 162–63 man carrying own bomb joke, 127–28, 138 media coverage and, 126 paradoxical tactics and, 60 profiling and, 156–57 torture of terrorists, 218 Tetlock, Philip, 62–65, 162–66 Texas sharpshooter fallacy, 142–46, 160, 321 Thaler, Richard, 56 theocracies, 43 theoretical reason, 37 #thedress, 32 threats, and paradoxical tactics, 58, 60 Three Cards in a Hat, 138 The Threepenny Opera (Brecht), 121 time, San people and, 3 See also goals—time-frame conflicts Tit for Tat strategy, 241–42, 243–44 Tooby, J., 169 Toplak, M. E., 356–57n67 trade and investment, international, 327 Tragedy of the Carbon Commons, 242–44, 328 Tragedy of the Commons, 242, 243–44, 315 Tragedy of the Rationality Commons, 298, 315–17 Trivers, Robert, 241 trolley problem, 97 Trump, Donald, 6, 60, 82–83, 88, 92, 126, 130–31, 145, 245, 283–84, 284, 285, 288, 303, 306, 310, 312–13, 313 truth tables, 76–78 tu quoque (what-aboutery), 89 Turkmenistan, 245–47, 251 Tversky, Amos, 7, 25–29, 119, 131, 146, 154–55, 156, 186–87, 190–95, 196, 254, 342n15, 349–50nn6,27 Twain, Mark, 201 Twitter, 313, 316, 321–23 uncertainty, distinguished from risk, 177 United Nations, 327 unit homogeneity, 258 universal basic income (UBI), 85–87 universal realism, 300–301 universities academic freedom in, 41 benefits of college education, 264 college admissions, 262, 263, 266–67, 294 sexual misconduct policies, 218 suppression of opinions in, 43, 313–14 viewpoint diversity, lack of, 313–14 See also academia; education unreflective thinking, 8–10, 311 See also System 1 & 2 urban legends, 287, 306, 308 Uscinski, Joseph, 287 US Constitution, 75, 333 US Department of Education, 218 USSR, 60, 89, 122 vaccines, 284, 325.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

NON-TECHNOLOGY ISSUES THAT MAY IMPEDE L5 In order to make autonomous vehicles pervasive, a number of challenges will need to be overcome, including ethics, liability issues, and sensationalism. This is to be expected because there are millions of lives at stake, not to mention many industries and hundreds of millions of jobs. There will be circumstances that force AVs to make agonizing ethical decisions. Perhaps the most famous ethical dilemma is “the trolley problem,” which boils down to a scenario in which a decision would need to be made between taking action and killing person A, or taking no action and killing persons B and C. If you think the answer is obvious, what if person A is a child? What if person A is your child? What if the car belongs to you, and person A is your child?

Lifespan: Why We Age—and Why We Don't Have To
by David A. Sinclair and Matthew D. Laplante
Published 9 Sep 2019

The chairperson of the Joint Chiefs of Staff tells you that six US Air Force F-22 Raptor fighters are tracking the plane as it circles over the Pacific Ocean. The pilots have it locked in; their missiles are ready. The plane is running out of gas. The fate of the passengers, and the entire United States, rests upon your orders. What do you do? This, of course, is a “trolley problem,” an ethical thought experiment, of the type popularized by the philosopher Philippa Foot, that pits our moral duty not to inflict harm on others against our social responsibility to save a greater number of lives. It’s also, however, a handy metaphor, because the highly contagious disease the passengers are carrying is, as you doubtless have noticed, nothing more than a faster-acting version of aging.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

Similarly, although some philosophers have spent entire careers trying to carefully formulate deontological systems, new cases and consequences occasionally come to light that necessitate revisions. For example, deontological moral philosophy has in recent years been reinvigorated through the discovery of a fertile new class of philosophical thought experiments, “trolley problems,” which reveal many subtle interactions among our intuitions about the moral significance of the acts/omissions distinction, the distinction between intended and unintended consequences, and other such matters; see, e.g., Kamm (2007). 26. Armstrong (2010). 27. As a rule of thumb, if one plans to use multiple safety mechanisms to contain an AI, it may be wise to work on each one as if it were intended to be the sole safety mechanism and as if it were therefore required to be individually sufficient.

pages: 743 words: 201,651

Free Speech: Ten Principles for a Connected World
by Timothy Garton Ash
Published 23 May 2016

In a study conducted with 1,800 undecided voters in India’s 2014 parliamentary election, he claimed to have shifted votes by an average of 12.5 percent to particular candidates simply by improving their placings in search results found by the individual voter.52 An extreme example of algorithmic choice could be provided by Google’s computer-driven car. An old chestnut for students of ethics is the ‘trolley problem’: you control the railway points and have to decide whether the trolley will turn left and run over one person or turn right and kill five. Now suppose this automated Google car, steered by computer, faces a similar choice. It cannot stop in time. It has to run over either that grey-haired old woman on the left or that funky young man on the right.

pages: 669 words: 210,153

Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers
by Timothy Ferriss
Published 6 Dec 2016

I don’t always get a chance to do it, but I find that it clears the head in a very useful way.” more in Audio Listen to episode #87 of The Tim Ferriss Show (fourhourworkweek.com/87) for Sam’s thoughts on the following: What books would you recommend everyone read? (6:55) A thought experiment worth experiencing: The Trolley Problem (55:25) * * * Caroline Paul Caroline Paul (TW: @carowriter, carolinepaul.com) is the author of four published books. Her latest is the New York Times bestseller The Gutsy Girl: Escapades for Your Life of Epic Adventure. Once a young scaredy-cat, Caroline decided that fear got in the way of the life she wanted.