deepfake


description: synthetic media in which a person's likeness is altered using artificial intelligence

45 results

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

According to Sensity AI, one of the few research firms tracking deepfakes, in 2019, roughly 95 per cent of online deepfake videos were non-consensual pornography, almost all of which featured women.2 The study’s author, Henry Ajder, told me that deepfakes had become so ubiquitous in the years since his study that writing a report like that now would be a near-impossible task. However, he said that indications from more recent research continue to show that the majority of deepfake targets are still women, who are hypersexualized by the technology. Today, several years after the term deepfake was introduced, there is still little recourse for victims.

CHAPTER 2: YOUR BODY

1. Meredith Somers, ‘Deepfakes, Explained’, MIT Sloan Management Review, July 21, 2020, https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained#:~:text=The%20term%20%E2%80%9Cdeepfake%E2%80%9D%20was%20first,open%20source%20face%2Dswapping%20technology.
2. Karen Hao, ‘Deepfake Porn Is Ruining Women’s Lives. Now the Law May Finally Ban It’, MIT Technology Review, February 21, 2021, https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/.
3. James Vincent, ‘Facebook’s Problems Moderating Deepfakes Will Only Get Worse in 2020’, The Verge, January 15, 2020, https://www.theverge.com/2020/1/15/21067220/deepfake-moderation-apps-tools-2020-facebook-reddit-social-media.
4. Tiffany Hsu, ‘As Deepfakes Flourish, Countries Struggle With Response’, The New York Times Magazine, January 22, 2023, https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html.
5. Helen Mort, ‘This Is Wild’, in Extra Teeth – Issue Four, ed. Katie Goh (Edinburgh: Extra Teeth, 2021), https://www.extrateeth.co.uk/shop/issuefour.
6. Samantha Cole, ‘Creator of DeepNude, App That Undresses Photos of Women, Takes It Offline’, Vice News, June 29, 2019, https://www.vice.com/en/article/qv7agw/deepnude-app-that-undresses-photos-of-women-takes-it-offline.
7. Matt Burgess, ‘The Biggest Deepfake Abuse Site Is Growing in Disturbing Ways’, Wired, December 15, 2021, https://www.wired.co.uk/article/deepfake-nude-abuse.
8. Ibid.
9. Matt Burgess, ‘A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women’, Wired, October 28, 2020, https://www.wired.co.uk/article/telegram-deepfakes-deepnude-ai.
10. Ibid.
11. Rachel Metz, ‘She Thought a Dark Moment in Her Past Was Forgotten. Then She Scanned Her Face Online’, CNN Business, May 24, 2022, https://edition.cnn.com/2022/05/24/tech/cher-scarlett-facial-recognition-trauma/index.html.
12. Carrie Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls (Little, Brown and Company, 2019).
13. Margaret Talbot, ‘The Attorney Fighting Revenge Porn’, The New Yorker, November 27, 2016, https://www.newyorker.com/magazine/2016/12/05/the-attorney-fighting-revenge-porn.
14. ‘Section 230’, EFF, n.d., https://www.eff.org/issues/cda230.
15. Haleluya Hadero, ‘Deepfake Porn Could Be a Growing Problem Amid AI Race’, Associated Press News, April 16, 2023, https://apnews.com/article/deepfake-porn-celebrities-dalle-stable-diffusion-midjourney-ai-e7935e9922cda82fbcfb1e1a88d9443a.
16. Ibid.
17. Molly Williams, ‘Sheffield Writer Launches Campaign over “Deepfake Porn” after Finding Own Face Used in Violent Sexual Images’, The Star News, July 21, 2021, https://www.thestar.co.uk/news/politics/sheffield-writer-launches-campaign-over-deepfake-porn-after-finding-own-face-used-in-violent-sexual-images-3295029.
18. ‘Facts and Figures: Women’s Leadership and Political Participation’, The United Nations Entity for Gender Equality and the Empowerment of Women, March 7, 2023, https://www.unwomen.org/en/what-we-do/leadership-and-political-participation/facts-and-figures.
19. Jeffery Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women’, Reuters, October 11, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
20. Mary Ann Sieghart, The Authority Gap: Why Women Are Still Taken Less Seriously Than Men, and What We Can Do about It (Transworld, 2021).
21. Steven Feldstein, ‘How Artificial Intelligence Systems Could Threaten Democracy’, Carnegie Endowment for International Peace, April 24, 2019, https://carnegieendowment.org/2019/04/24/how-artificial-intelligence-systems-could-threaten-democracy-pub-78984.
22. ‘Deepfakes, Synthetic Media and Generative AI’, WITNESS, 2018, https://www.gen-ai.witness.org/.
23. Yinka Bokinni, ‘Inside the Metaverse’ (United Kingdom: Channel 4, April 25, 2022).
24. Yinka Bokinni, ‘A Barrage of Assault, Racism and Rape Jokes: My Nightmare Trip into the Metaverse’, Guardian, April 25, 2022, https://www.theguardian.com/tv-and-radio/2022/apr/25/a-barrage-of-assault-racism-and-jokes-my-nightmare-trip-into-the-metaverse.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

A growing network of tech companies, media outlets, and AI researchers is working to combat deepfakes, but even the best deepfake detectors still have a long way to go. In 2019, Facebook partnered with Amazon, Microsoft, and the Partnership on AI to create a Deepfake Detection Challenge to improve the state of the art in deepfake detectors. Facebook created a dataset of over 100,000 new videos (using paid actors) for AI researchers to use to train deepfake detectors. (Google has also created datasets of synthetic audio and video, using paid actors, to help researchers train detection models.) The Deepfake Detection Challenge drew over 2,000 participants who submitted more than 35,000 trained models as detectors, a major spur to improving deepfake detection.
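At its core, a challenge like this frames detection as supervised binary classification over labeled real and fake examples. The sketch below is a toy illustration of that framing only: the 2-D "artifact scores" are made-up features standing in for what an actual detector would learn from pixels, and the logistic-regression model is far simpler than the deep networks challenge entrants submitted.

```python
import numpy as np

# Toy sketch: deepfake detection as binary classification.
# The features are hypothetical "artifact scores" (e.g. a blending-boundary
# score and a temporal-flicker score), not real measurements.
rng = np.random.default_rng(0)

real = rng.normal(loc=[0.2, 0.3], scale=0.15, size=(500, 2))  # label 0
fake = rng.normal(loc=[0.7, 0.8], scale=0.15, size=(500, 2))  # label 1
X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic-regression detector trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic clusters like these, almost any classifier scores well; the hard part the challenge targeted is that real deepfake artifacts are subtle and shift as generation methods improve.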

Slate, February 22, 2019, https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html; James Vincent, “OpenAI has Published the Text-Generating AI It Said Was Too Dangerous to Share,” The Verge, November 7, 2019, https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters.
120 pre-briefing the press “got us some concerns that we were hyping it”: Jack Clark, interview by author, March 3, 2020.
120 more careful about the phrasing around potential dangers: “Better Language Models.”
121 realistic-looking fake videos: Samantha Cole, “We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now,” Vice, January 24, 2018, https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley.
121 swap the faces of celebrities: Samantha Cole, “AI-Assisted Fake Porn Is Here and We’re All Fucked,” Vice, December 11, 2017, https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn.
121 14,000 deepfake porn videos online: Henry Adjer et al., The State of Deepfakes: Landscape, Threats, and Impact (DeepTrace Labs, September 2019), 1, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
121 The videos didn’t only harm the celebrities: Cole, “AI-Assisted Fake Porn Is Here.”
121 revenge porn attacks: Kirsti Melville, “The Insidious Rise of Deepfake Porn Videos—and One Woman Who Won’t Be Silenced,” abc.net.au, August 29, 2019, https://www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774.
121 “Deepfake technology is being weaponized against women”: Adjer et al., The State of Deepfakes, 6.
121 AI assistant called Duplex: Jeff Grubb’s Game Mess, “Google Duplex: A.I.

Facebook, April 30, 2022, https://www.facebook.com/kpszsu/posts/363834939117794.
129 deepfake of Ukrainian president Volodymyr Zelensky: Operational Reports, “This morning, polite hackers hacked into several Ukrainian sites and posted there a deepfake with Zelensky calling for laying down arms.,” Telegram (public post), March 16, 2022, https://t.me/opersvodki/1788; Mikael Thalen, “A deepfake of Ukrainian President Volodymyr Zelensky calling on his soldier to lay down their weapons was reportedly uploaded to a hacked Ukrainian news site,” Twitter, March 16, 2022, https://twitter.com/MikaelThalen/status/1504123674516885507; Samantha Cole, “Hacked News Channel and Deepfake of Zelenskyy Surrendering Is Causing Chaos Online,” VICE News, March 16, 2022, https://www.vice.com/en/article/93bmda/hacked-news-channel-and-deepfake-of-zelenskyy-surrendering-is-causing-chaos-online; Tom Simonite, “A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be,” Wired, March 17, 2022, https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/; Digital Forensic Research Lab, “Russian War Report: Hacked news program and deepfake video spread false Zelenskyy claims,” New Atlanticist (blog) on Atlantic Council, March 16, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-hacked-news-program-and-deepfake-video-spread-false-zelenskyy-claims/#deepfake; Nathaniel Gleicher, “1/ Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.,” Twitter (thread), March 16, 2022, https://twitter.com/ngleicher/status/1504186935291506693.
130 the “liar’s dividend”: Bobby Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107, no. 1753 (2019), https://doi.org/10.15779/Z38RV0D15J.
130 “post-truth” information landscape: “Word of the Year 2016,” OxfordLanguages, 2016, https://languages.oup.com/word-of-the-year/2016/.
130 “the internet is a vast wormhole of darkness”: Drew Harwell, “Fake-Porn Videos Are Being Weaponized to Harass and Humiliate Women: ‘Everybody Is a Potential Target,’” Washington Post, December 30, 2018, https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/.
130 Sensity (formerly Deeptrace): Giorgio Patrini, LinkedIn profile, https://nl.linkedin.com/in/giorgiopatrini.
130 “visual threat intelligence company”: Sensity (website), 2021, https://sensity.ai/about/.
130 Steve Buscemi’s face swapped onto Jennifer Lawrence’s body: The Curious Ape, “Jennifer Lawrence as STEVE BUSCEMI at The Golden Globes DEEPFAKE,” YouTube, February 6, 2019, https://www.youtube.com/watch?

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

By 2041, fully photo-realistic 3D models should be possible, as we will see in “Twin Sparrows” and “My Haunting Idol.” Peele’s deepfake was forged for fun and food for thought, while in the story here, Chi recruits Amaka to forge a deepfake with malice. In addition to spreading rumors, deepfakes could also lead to blackmail, harassment, defamation, and election manipulation. How would you make a deepfake? How would an AI tool detect deepfakes? And as the deepfake and anti-deepfake software are pitted against each other, which will win? To answer these questions, we need to understand the mechanism that generates deepfakes—the GAN.

GENERATIVE ADVERSARIAL NETWORK (GAN)

Deepfakes are built on a technology called generative adversarial networks (GANs).
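The adversarial mechanism can be shown in miniature. The sketch below is a toy, not a deepfake system: a two-parameter generator learns to mimic samples from a 1-D Gaussian while a logistic discriminator learns to tell real samples from generated ones, each improving against the other. Real deepfake GANs run the same loop with deep convolutional networks over images; all constants here are illustrative.

```python
import numpy as np

# Toy 1-D GAN: generator x_fake = a*z + b, discriminator D(x) = sigmoid(w*x + c).
# The target "real" distribution is N(4, 0.5).
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_c = np.mean(p_real - 1) + np.mean(p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    p_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - p_fake) * w      # d(-log D(fake)) / d(fake)
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

print(f"generated mean ~ {b:.2f}, spread ~ {abs(a):.2f}")
```

By the end of training the generator's output mean has drifted from 0 toward the real data's mean of 4, because any gap between the two distributions gives the discriminator something to exploit, and the discriminator's gradient in turn tells the generator how to close that gap.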

Websites and apps are required by law to install anti-deepfake software (just like anti-virus software today) to protect users from fake videos. But the tug-of-war between the deepfake makers and the deepfake detectors has become an arms race—the side that has more computation wins. While the story is set in 2041, the situation described above is likely to impact the developed world earlier because it can afford the cost of the expensive computers, software, and AI experts needed to create and detect deepfakes and other AI manipulations. Also, legislation will likely be implemented in developed countries first. This story is set in a developing country, where the externalities of deepfakes will likely occur later. So, how does AI learn to see—both through cameras and prerecorded videos?

What are the applications? And how does an AI deepfake maker work? Can humans or AI detect deepfakes? Will social networks be filled with fake videos? How can deepfakes be stopped? What other security holes might AI present? Is there anything good about the technology behind deepfakes?

WHAT IS COMPUTER VISION?

In “The Golden Elephant,” we witnessed the potential prowess of deep learning in big-data applications, like the Internet and finance. You’re probably not surprised that AI beat humans on big-data-crunching applications.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

In Peele’s public service video intended to make the public aware of the looming threat from deepfakes, Obama says things like “President Trump is a total and complete dipshit.”3 In this instance, the voice is Peele’s imitation of Obama, and the technique used alters an existing video by manipulating President Obama’s lips so they synchronize with Peele’s speech. Eventually, we will likely see videos like this in which the voice is also a deepfake fabrication. An especially common deepfake technique enables the digital transfer of one person’s face to a real video of another person. According to the startup company Sensity (formerly Deeptrace), which offers tools for detecting deepfakes, there were at least 15,000 deepfake fabrications posted online in 2019, and this represented an eighty-four percent increase over the prior year.4 Of these, a full ninety-six percent involve pornographic images or videos in which the face of a celebrity—nearly always a woman—is transplanted onto the body of a pornographic actor.5 While celebrities like Taylor Swift and Scarlett Johansson have been the primary targets, this kind of digital abuse could eventually be targeted against virtually anyone, especially as the technology advances and the tools for making deepfakes become more available and easier to use.

As the quality of deepfakes relentlessly advances, the potential for fabricated audio or video media to be genuinely disruptive looms as a seemingly inevitable threat. As the fictional anecdote at the beginning of this chapter illustrates, a sufficiently credible deepfake could quite literally shift the arc of history—and the means to create such fabrications might soon be in the hands of political operatives, foreign governments or even mischievous teenagers.

James Vincent, “Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news,” The Verge, April 17, 2018, www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordanpeele-buzzfeed.
4. Sensity, “The state of deepfakes 2019: Landscape, threats, and impact,” September 2019, sensity.ai/reports/.
5. Ian Sample, “What are deepfakes—and how can you spot them?,” The Guardian, January 13, 2020, www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
6. Lex Fridman, “Ian Goodfellow: Generative Adversarial Networks (GANs),” Artificial Intelligence Podcast, episode 19, April 18, 2019, lexfridman.com/ian-goodfellow/.

pages: 521 words: 118,183

The Wires of War: Technology and the Global Struggle for Power
by Jacob Helberg
Published 11 Oct 2021

Because of the COVID-19 pandemic, our conversation took place by Zoom, and Daniel offered a timely illustration. “Today, I took it for granted that the voice I hear over Zoom is your voice, and that the face I see over Zoom is your face,” he said. “Now there’s nifty prototypes that people are using to do deepfakes live.”31 A society disrupted by deepfakes, he suggested, was not far off. Using deep learning, deepfakes mimic visual and speech patterns to create eerily realistic images, audio, and video. The believability of synthetic content has progressed along with advances in neural networks. As recently as 2015, algorithms trying to generate the original face of a man produced results that looked only somewhat more realistic than a painting produced by a talented ten-year-old.

The video was fake—a satirical warning from the comedian Jordan Peele and BuzzFeed about the danger of “synthetic content,” more commonly known as deepfakes. This type of synthetic media will render obsolete the old axiom that “seeing is believing”—with potentially devastating ramifications for the fabric of our democracy and the outcome of the Gray War. In mid-2020, I asked Daniel Gross, a partner at the start-up accelerator Y Combinator and in 2011 one of Forbes’s “30 Under 30” tech pioneers, where tech trends could be leading us. He quickly zeroed in on the rise of deepfakes. “A lot of discussion is around synthetic generation of content,” Daniel told me.

Before the article was debunked, Asif responded with a tweet threatening an attack of his own: “Israel forgets Pakistan is a Nuclear State too.” How much more believable would those stories have been had they included synthetic video footage of an injured Obama? Or a deepfake of that Israeli official “warning” Pakistan of nuclear annihilation? Deepfakes of celebrities in the nude and in compromising positions are routinely created.37 Who in a position of power might be blackmailed by manipulated content? Synthetic media will also make it harder to prove that those spreading this propaganda are trolls and not your next-door neighbor.

pages: 277 words: 70,506

We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News
by Eliot Higgins
Published 2 Mar 2021

THE PERILS AND OPPORTUNITIES OF ARTIFICIAL INTELLIGENCE You never forget your first glimpse of a ‘deepfake’. Partly, you are awed by the power of technology – that video footage can so convincingly be falsified. Partly, you are filled with dread at what this tool will wreak. As a public warning in 2018, the comedian Jordan Peele put out a deepfake showing Barack Obama in a video address, calling Trump ‘a total and complete dipshit’.27 The former Democratic president had said no such thing, of course. Information chaos online was already frightening enough. Now, it looked as though deepfakes were about to demolish legitimate discourse, and perhaps undermine the verification techniques that serve as our best defence.

Fearful of misuse, OpenAI decided not to release the research.31, 32 While deepfakes are a threat, we can inform ourselves, prepare and respond. To become paranoid about deepfakes would itself have disastrous consequences, leading people to judge all documentation cynically. What quicker way to discredit a conclusive open-source investigation than to claim that nothing is to be believed? I am certain this tactic will soon become routine in disinformation campaigns. I already see tweets dismissing videos from Syria, saying, But how do you know this isn’t a deepfake? The uninformed give this technology powers beyond its current capabilities.

__twitter_impression=true
https://openai.com/blog/better-language-models/
32 www.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes
33 lab.witness.org/projects/synthetic-media-and-deep-fakes/
34 www.youtube.com/watch?time_continue=1&v=Qh_6cHw50l0
35 amp.axios.com/deepfake-authentication-privacy-5fa05902-41eb-40a7-8850-5450bcad0475.html?__twitter_impression=true
36 open.nytimes.com/introducing-the-news-provenance-project-723dbaf07c44?gi=5f9c26d709a7 www.newsprovenanceproject.com/FAQs
37 ai.facebook.com/blog/deepfake-detection-challenge/
38 syrianarchive.org/en/tech-advocacy
39 syrianarchive.org/en
40 amp.theguardian.com/world/2019/aug/18/new-video-evidence-of-russian-tanks-in-ukraine-european-court-human-rights?

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

The race is tough: while detection methods are improving, so is the deepfake technology. In 2020, Facebook held a competition for artificial intelligence that can detect deepfakes. The winning algorithm detected deepfakes only 65 percent of the time. Some scholars, including law professor Danielle Citron, a leading voice in the field of sexual privacy, are skeptical that technology alone can battle deepfakes. Citron explains that to be effective, detection software would have to keep pace with innovations in deepfake technology, dooming those wanting to protect against deepfakes to a cat-and-mouse game. Citron warns that the experience of fighting malware, spam, and viruses shows the difficulty in such a race.

A team of computer scientists at the University at Buffalo developed an algorithm that detects deepfake videos by analyzing the light reflections in the eyes. This method was reportedly 94 percent effective at catching deepfakes, and the researchers created a “DeepFake-o-meter,” an online resource to help people test to see if the video they’ve viewed is real or (deep)fake. Other methods to identify deepfakes include detecting lack of or inconsistencies in detail or resolution inconsistencies around eyes, teeth, and facial contours. For example, mouths created by deepfake videos often have misshapen or excess teeth. Like with other areas of technology that harm and help, the race to do good often feels like a game of whack-a-mole.
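The resolution-inconsistency cue mentioned above can be sketched as a sharpness comparison between the face region and the rest of the frame: a synthetic face blended into real footage is often blurrier (or sharper) than its surroundings. The sketch below is a toy heuristic only: the Laplacian-variance sharpness score, the fixed threshold, and the synthetic noise "frame" are all illustrative assumptions, not the Buffalo team's eye-reflection method or any production detector.

```python
import numpy as np

def laplacian_sharpness(patch):
    """Variance of a 4-neighbour Laplacian: a crude local sharpness score."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def inconsistent_resolution(img, face_box, ratio=3.0):
    """Flag a frame when the face region and the whole frame differ in
    sharpness by more than `ratio` in either direction (toy threshold)."""
    top, left, bottom, right = face_box
    face_score = laplacian_sharpness(img[top:bottom, left:right])
    frame_score = laplacian_sharpness(img)
    hi, lo = max(face_score, frame_score), min(face_score, frame_score)
    return lo == 0 or hi / lo > ratio

# Synthetic demo: a noisy "frame", plus a copy whose face region has been
# crudely smoothed, mimicking a lower-resolution face blended into the shot.
rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, size=(128, 128))
blended = frame.copy()
face = blended[40:90, 40:90]
blended[40:90, 40:90] = (face + np.roll(face, 1, 0) + np.roll(face, 1, 1)) / 3

print(inconsistent_resolution(frame, (40, 40, 90, 90)))    # unaltered frame
print(inconsistent_resolution(blended, (40, 40, 90, 90)))  # smoothed face
```

A fixed sharpness ratio like this is exactly the kind of hand-built cue that generators quickly learn to evade, which is why the passage's whack-a-mole framing fits: each published artifact becomes the next thing the fakes stop exhibiting.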

Democratized creativity is a good thing—people around the world are creating humorous memes and videos, social and political commentary, and creative art. But the insidious use of deepfakes is extremely concerning. Like Gal Gadot, Taylor Swift and Scarlett Johansson have had deepfakes of them posted online, but celebrities aren’t the only victims. Deepfakes have also been used politically to shame women for not aligning with their gender roles or the values of their communities. The story of Indian journalist Rana Ayyub is telling. Ayyub exposed Hindu nationalist politics as corrupt, and thereafter she became the victim of a deepfake porn video, which in turn led to Ayyub receiving rape and death threats.27 Unsurprisingly, 90 percent of the victims of revenge porn are women.

Spies, Lies, and Algorithms: The History and Future of American Intelligence
by Amy B. Zegart
Published 6 Nov 2021

Because these algorithms are designed to learn by competing, deepfake countermeasures are unlikely to work for long. “We are outgunned,” said Hany Farid, a computer science professor at the University of California at Berkeley.127 Deepfake code is open and spreading fast. In the past few years, anonymous GitHub user “torzdf” and Reddit user “deepfakeapp” have vastly simplified the code and interface required to generate deepfakes, creating programs called “faceswap” and “FakeApp,” which are easy enough for a high school student with no coding background to use. The two other key ingredients for making deepfakes—computing power and large libraries of training data—are also becoming widely available.128 The impact of deepfakes could be profound, and policymakers know it.

Kaylee Fagan, “A Viral Video That Appeared to Show Obama Calling Trump a ‘Dips—’ Shows a Disturbing New Trend Called ‘Deepfakes,’” Business Insider, April 17, 2018, https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4; Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman, “Synthesizing Obama: Learning Lip Sync from Audio,” ACM Transactions on Graphics 36, no. 4 (July 2017), http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf; Drew Harwell, “Top AI Researchers Race to Detect ‘Deepfake’ Videos: ‘We Are Outgunned,’” Washington Post, June 12, 2019, https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/.
126.

In October 2019, Facebook publicly acknowledged its discovery of foreign influence campaigns waged by Iran and China on its platform.86 As the COVID-19 pandemic spread in 2020, China orchestrated an aggressive global social media campaign spreading false information along with it—including that the United States created COVID-19 as a bioweapon.87 Advances in artificial intelligence have given rise to deepfakes, digitally manipulated audio, photographs, and videos that are highly realistic and difficult to authenticate. Deepfake application tools are now widely available online and so simple to use that high school students with no coding background can create convincing forgeries. In May 2019, anonymous users doctored a video to make House Speaker Nancy Pelosi appear drunk, which went viral on Facebook. When the social media giant refused to take it down, two artists and a small technology startup created a deepfake of Mark Zuckerberg and posted it on Instagram.

pages: 444 words: 117,770

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma
by Mustafa Suleyman
Published 4 Sep 2023

Both looked and sounded: First reported in Nilesh Cristopher, “We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign,” Vice, Feb. 18, 2020, www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp.
In another widely publicized incident: Melissa Goldin, “Video of Biden Singing ‘Baby Shark’ Is a Deepfake,” Associated Press, Oct. 19, 2022, apnews.com/article/fact-check-biden-baby-shark-deepfake-412016518873; “Doctored Nancy Pelosi Video Highlights Threat of ‘Deepfake’ Tech,” CBS News, May 25, 2019, www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25.

As we saw in chapter 4, large language models now show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real. Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more everyday people will be imitated as the required training data falls to just a handful of examples. It’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of their clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition.

All the documents seemed: Catherine Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” Wall Street Journal, Aug. 30, 2019, www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.
It’s not the president charging: Which is a real deepfake. See Kelly Jones, “Viral Video of Biden Saying He’s Reinstating the Draft Is a Deepfake,” Verify, March 1, 2023, www.verifythis.com/article/news/verify/national-verify/viral-video-of-biden-saying-hes-reinstating-the-draft-is-a-deepfake/536-d721f8cb-d26a-4873-b2a8-91dd91288365.
His radicalizing messages were: Josh Meyer, “Anwar al-Awlaki: The Radical Cleric Inspiring Terror from Beyond the Grave,” NBC News, Sept. 21, 2016, www.nbcnews.com/news/us-news/anwar-al-awlaki-radical-cleric-inspiring-terror-beyond-grave-n651296; Alex Hern, “‘YouTube Islamist’ Anwar al-Awlaki Videos Removed in Extremism Clampdown,” Guardian, Nov. 13, 2017, www.theguardian.com/technology/2017/nov/13/youtube-islamist-anwar-al-awlaki-videos-removed-google-extremism-clampdown.

pages: 306 words: 82,909

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back
by Bruce Schneier
Published 7 Feb 2023

TRUST AND AUTHORITY
191 “Only amateurs attack machines”: Bruce Schneier (15 Oct 2000), “Semantic attacks: The third wave of network attacks,” Crypto-Gram, https://www.schneier.com/crypto-gram/archives/2000/1015.html#1.
191 One victim lost $24 million: Joeri Cant (22 Oct 2019), “Victim of $24 million SIM swap case writes open letter to FCC chairman,” Cointelegraph, https://cointelegraph.com/news/victim-of-24-million-sim-swap-case-writes-open-letter-to-fcc-chairman.
192 the 2020 Twitter hackers: Twitter (18 Jul 2020; updated 30 Jul 2020), “An update on our security incident,” Twitter blog, https://blog.twitter.com/en_us/topics/company/2020/an-update-on-our-security-incident.
192 the CEO of an unnamed UK energy company: Nick Statt (5 Sep 2019), “Thieves are now using AI deepfakes to trick companies into sending them money,” Verge, https://www.theverge.com/2019/9/5/20851248/deepfakes-ai-fake-audio-phone-calls-thieves-trick-companies-stealing-money.
192 one scam artist has used a silicone mask: Hugh Schofield (20 Jun 2019), “The fake French minister in a silicone mask who stole millions,” BBC News, https://www.bbc.com/news/world-europe-48510027.
193 a video of Gabon’s long-missing president: Drew Harwell (12 Jun 2019), “Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned,’” Washington Post, https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned.
193 BuzzFeed found 140 fake news websites: Craig Silverman and Lawrence Alexander (3 Nov 2016), “How teens in the Balkans are duping Trump supporters with fake news,” BuzzFeed, https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo. 47.

Technology is making this kind of chicanery easier. Criminals are now using deep-fake technology to commit social engineering attacks. In 2019, the CEO of a UK energy company was tricked into wiring €220,000 to an account because he thought the chief executive of his parent company was telling him to do so in a phone call and then in an email. That hack only used fake audio, but video is next. Already one scam artist has used a silicone mask to record videos and trick people into wiring him millions of dollars. This kind of fraud can have geopolitical effects, too. Researchers have produced deep-fake videos of politicians saying things they didn’t say and doing things they didn’t do.

HACKING COMPUTERIZED FINANCIAL EXCHANGES
83 the rise of computerization: Atlantic Re:think (21 Apr 2015), “The day social media schooled Wall Street,” Atlantic, https://www.theatlantic.com/sponsored/etrade-social-stocks/the-day-social-media-schooled-wall-street/327. Jon Bateman (8 Jul 2020), “Deepfakes and synthetic media in the financial system: Assessing threat scenarios,” Carnegie Endowment, https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237.
20. LUXURY REAL ESTATE
87 160 UK properties: Matteo de Simone et al. (Mar 2015), “Corruption on your doorstep: How corrupt capital is used to buy property in the U.K.,” Transparency International, https://www.transparency.org.uk/sites/default/files/pdf/publications/2016CorruptionOnYourDoorstepWeb.pdf.
87 owned by shell corporations: Louise Story and Stephanie Saul (7 Feb 2015), “Stream of foreign wealth flows to elite New York real estate,” New York Times, https://www.nytimes.com/2015/02/08/nyregion/stream-of-foreign-wealth-flows-to-time-warner-condos.html.
88 geographic targeting orders: Michael T.

pages: 501 words: 114,888

The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives
by Peter H. Diamandis and Steven Kotler
Published 28 Jan 2020

Deeper Fakes 2018, a YouTube video: David Mack, “This PSA About Fake News from Barack Obama Is Not What It Appears,” BuzzFeed News, April 17, 2018. See: https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed. the dangers of deepfakes: Technology, “The Real Danger of DeepFake Videos Is That We May Question Everything,” NewScientist, August 29, 2018. See: https://www.newscientist.com/article/mg23931933-200-the-real-danger-of-deepfake-videos-is-that-we-may-question-everything/. See also: Oscar Schwartz, “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die,” Guardian, November 12, 2018, https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth.

On the left, Obama continues talking. On the right, we see actor-director-comedian Jordan Peele actually speaking the words being put into the former President’s mouth. The video is a deepfake, an AI-driven, human image synthesis technique that takes existing images and videos—say, Obama speaking—and maps them onto source images and video—such as Jordan Peele imitating President Obama insulting President Trump. Peele created the video to illustrate the dangers of deepfakes. He felt the need to make it because it’s really just one of thousands. Political chicanery, revenge porn, celebrity revenge porn—they’ve all been tried and tried again.

At the same time, we humans might be losing that skill. Deepfakes are an obvious example. What began as a disturbing trend in politics and pornography has spread to other forms of entertainment. It’s an entirely new kind of collaborative, active media. In 2018, researchers at the University of California, Berkeley, developed an AI motion transfer technique that superimposes the bodies of professional dancers onto the bodies of amateurs, lending their fluid movements to your normal cha-cha-cha. This means anyone can become Fred Astaire, Ginger Rogers, or Missy Elliott. It’s a full-body deepfake, with a key difference: democratization.

pages: 475 words: 134,707

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt
by Sinan Aral
Published 14 Sep 2020

This characterization may seem dramatic, but there is no doubt that technological innovation in the fabrication of falsity is advancing at a breakneck pace. The development of “deepfakes” is generating exceedingly convincing synthetic audio and video that is even more likely to fool us than textual fake news. Deepfake technology uses deep learning, a form of machine learning based on multilayered neural networks, to create hyperrealistic fake video and audio. If seeing is believing, then the next generation of falsity threatens to convince us more than any fake media we have seen so far. In 2018 movie director (and expert impersonator) Jordan Peele teamed up with BuzzFeed to create a deepfake video of Barack Obama calling Donald Trump a “complete and total dipshit.”

It was convincing but obviously fake. Peele added a tongue-in-cheek nod to the obvious falsity of his deepfake when he made Obama say, “Now, I would never say these things…at least not in a public address.” But what happens when the videos are not made to be obviously fake, but instead made to convincingly deceive? Deepfake technology is based on a specific type of deep learning called generative adversarial networks, or GANs, which was first developed by Ian Goodfellow while he was a graduate student at the University of Montreal. One night while drinking beer with fellow graduate students at a local watering hole, Goodfellow was confronted with a machine-learning problem that had confounded his friends: training a computer to create photos by itself.
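Aral's description of Goodfellow's idea can be sketched in a few lines. The toy below is an illustration under simplifying assumptions, not the image-scale networks behind deepfakes: the "generator" is a single learnable shift b applied to noise, the "discriminator" a logistic scorer d(x) = sigmoid(w·x + c), and the two are updated in alternation just as in a full GAN.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a Gaussian with mean 4. The generator turns
# noise z into fakes g(z) = z + b and must learn the shift b; the
# discriminator d(x) = sigmoid(w*x + c) tries to score reals high and
# fakes low. The two are trained against each other, as in a real GAN.
w, c = 0.1, 0.0   # discriminator parameters
b = 0.0           # generator parameter (learnable shift)
lr, batch = 0.05, 16

for step in range(2000):
    reals = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fakes = [random.gauss(0.0, 1.0) + b for _ in range(batch)]

    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    gw = sum((1 - sigmoid(w * xr + c)) * xr - sigmoid(w * xf + c) * xf
             for xr, xf in zip(reals, fakes)) / batch
    gc = sum((1 - sigmoid(w * xr + c)) - sigmoid(w * xf + c)
             for xr, xf in zip(reals, fakes)) / batch
    # A small weight decay keeps the two-player dynamics from oscillating.
    w += lr * (gw - 0.5 * w)
    c += lr * (gc - 0.5 * c)

    # Generator step: ascend log d(fake), i.e. try to fool the discriminator.
    gb = sum((1 - sigmoid(w * xf + c)) * w for xf in fakes) / batch
    b += lr * gb

print(round(b, 1))  # the learned shift drifts toward the real mean of 4
```

At face-and-video scale the same tug-of-war plays out between deep convolutional networks, but the objective shown here is the GAN objective: the generator climbs log d(fake) while the discriminator climbs log d(real) + log(1 − d(fake)).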

Or by something as simple as invented news reports about Iranian or North Korean military plans for preemptive strikes on any number of targets….It might end up causing a war, or just as consequentially, impeding a national response to a genuine threat.” Deepfaked audio is already being used to defraud companies of millions of dollars. In the summer of 2019, Symantec CTO Hugh Thompson revealed that his company had seen deepfaked audio attacks against several of its clients. The attackers first trained a GAN on hours of public audio recordings of a CEO’s voice, while giving news interviews, delivering public speeches, speaking during earnings calls, or testifying before Congress.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

In 2021, the CWRU team published a paper showing that they can identify which of the participants in the study (four students) actually made the painting with greater than 95 per cent accuracy (Ji/McMaster/Schwab et al. 2018). In conclusion, I believe that education is a powerful safeguard against fake news. Deepfakes necessitate the development of critical thinking skills in an era when seeing is no longer believing. This is where museum professionals have an important role to play in helping to raise awareness about fake news. 5 https://ai.facebook.com/blog/heres-how-were-using-ai-to-help-detect-misinformation/. 6 https://sensity.ai/deepfakes-detection/.

The field of digital museum and heritage studies—by which I mean the totality of interdisciplinary studies that address the role of digital technologies in museums and heritage—has made great strides in recent years (for instance, Giaccardi 2012; Drotner/Dziekan/Parry et al. 2018; Giannini/Bowen 2019; Lewi/Smith/vom Lehn et al. 2019; Arvanitis/Zuanni 2021; Geismar 2021; Stylianou-Lambert/Heraclidou/Bounia 2022), and the number of studies on the role of AI for museums is growing.1 Oonagh Murphy, Elena Villaespesa, and Ariana French have surveyed the range of AI technologies in museums (French/Villaespesa 2019; Murphy/Villaespesa 2020). Luciana Bordoni et al. (2016) and Marco Fiorucci et al. (2020) have discussed possible uses of AI in the field of heritage studies. Others have conducted studies on topics such as museum chatbots and deepfakes (Gaia/Boiano/Borda 2019; Kidd/Rees 2022), the use of AI in digital archives and its ethical implications (Ciecko 2020; Villaespesa/Murphy 2021; Foka/Attemark/Wahlberg 2022), the changing working conditions in museums resulting from AI (Fang 2019), and concrete AI projects and their implementation (for example, Machidon/Tavčar/Gams 2020). 1 This list was generated mostly by my student assistant Julia Molin, with funding provided by the Humboldt-Universität zu Berlin.

Available at: https://journals.sub.uni-hamburg.de/hjk/article/view/1955. Hopkins, Julian (2019). Monetising the Dividual Self: The Emergence of the Lifestyle Blog and Influencers in Malaysia. New York, Berghahn Books. https://doi.org/10.2307/j.ctv12pnrw6. Kidd, Jenny/Rees, Arran J. (2022). A Museum of Deepfakes? Potentials and Pitfalls for Deep Learning Technologies. In: Theopisti Stylianou-Lambert/Alexandra Bounia/Antigone Heraclidou (Eds.). Emerging Technologies and Museums. New York/Oxford, Berghahn Books, 218–32. https://doi.org/10.1515/9781800733756-012. Kim, Eun-sung/Yun, Gi Woong/Oh, Yoehan (2022).

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

be a period of adjustment: Ibid. “There’s a lot of other areas where AI”: Ibid. someone calling themselves “Deepfakes”: Samantha Cole, “AI-Assisted Fake Porn Is Here and We’re All Fucked,” Motherboard, December 11, 2017, https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn. Services like Pornhub and Reddit: Samantha Cole, “Twitter Is the Latest Platform to Ban AI-Generated Porn: Deepfakes Are in Violation of Twitter’s Terms of Use,” Motherboard, February 6, 2018, https://www.vice.com/en_us/article/ywqgab/twitter-bans-deepfakes; Arjun Kharpal, “Reddit, Pornhub Ban Videos that Use AI to Superimpose a Person’s Face,” CNBC, February 8, 2018, https://www.cnbc.com/2018/02/08/reddit-pornhub-ban-deepfake-porn-videos.html.

This period of adjustment began almost immediately, as someone calling themselves “Deepfakes” started splicing celebrity faces into porn videos and posting them to the Internet. After this anonymous prankster distributed a software app that did the trick, these videos turned up, en masse, across discussion boards and social networks and video sites like YouTube. One used the face of Michelle Obama. Several pulled the trick with Nicolas Cage. Services like Pornhub and Reddit and Twitter soon banned the practice, but not before the idea spilled into the mainstream media. “Deepfake” entered the lexicon, a name for any video doctored with artificial intelligence and spread online.

It also included a technology developed at DeepMind called WaveNet that could generate realistic sounds, even help duplicate someone’s voice, like Donald Trump’s or Nancy Pelosi’s. This was evolving into a game of AI versus AI. As another election approached, Schroepfer launched a contest that urged researchers from across industry and academia to build AI systems that could identify deepfakes, fake images generated by other AI systems. The question was: Which side would win? For researchers like Ian Goodfellow, the answer was obvious. The misinformation would win. GANs, after all, were designed as a way of building a creator that could fool any detector. It would win even before the game was played.
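A minimal illustration of why Goodfellow expected the detector to lose (a hypothetical one-dimensional sketch, not any system from Schroepfer's contest): as a "generator" closes the gap between its fake distribution and the real one, even the best simple detector's accuracy slides toward a coin flip.

```python
import random

random.seed(1)

def detector_accuracy(real, fake):
    """Accuracy of a threshold detector placed midway between the class means."""
    mr = sum(real) / len(real)
    mf = sum(fake) / len(fake)
    t = (mr + mf) / 2
    # Call a sample "real" when it falls on the real mean's side of the cut.
    correct = sum((x > t) == (mr > mf) for x in real)
    correct += sum((x > t) != (mr > mf) for x in fake)
    return correct / (len(real) + len(fake))

n = 2000
real = [random.gauss(0.0, 1.0) for _ in range(n)]

# A better "generator" moves the fake distribution closer to the real one:
# the gap between the two means shrinks from 3.0 down to 0.0.
accs = []
for gap in [3.0, 1.5, 0.5, 0.0]:
    fake = [random.gauss(gap, 1.0) for _ in range(n)]
    accs.append(detector_accuracy(real, fake))

print([round(a, 2) for a in accs])  # detection decays toward ~0.5 as the gap shrinks
```

When the fake distribution exactly matches the real one (gap 0), no detector of any kind can beat chance, which is the equilibrium a GAN's generator is trained toward.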

pages: 284 words: 75,744

Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond
by Tamara Kneese
Published 14 Aug 2023

Many people would be horrified by the idea of “uncanny valley” versions of their dead loved ones or themselves, which could become sources of tension among family members and other relations of a dead person. One person’s beloved chatbot is another person’s nightmare.9 Deepfakes of the dead conjure further ethical issues. Beyond highly publicized examples like Kanye West’s resurrecting Kim Kardashian’s dead father as a birthday gift, there are companies like the Israeli MyHeritage, which uses a technology called Deep Nostalgia to help ordinary people create deepfake animated GIFs of their dead relatives through family photographs.10 You can add moving, suggestive eyebrows to an image of your great-grandmother or Albert Einstein.11 What are the implications of adding artificial signs of life to victims of racism, war, or genocide?

How-to articles in the New York Times and Wall Street Journal blandly instruct users on how to secure their online accounts and pass them on to their next of kin through digital estate–planning startups.14 Episodes of the sci-fi series Black Mirror speculate on what happens to digital accounts after people die, asking whether people can live on through their digital ephemera.15 When it comes to digital death, the reality is often stranger than fiction. Deepfake versions of dead celebrities and political figures can appear “live” at public events, and startups are building programs that turn dead individuals into chatbots, using their stored social media histories to predict and generate new content long into the future.16 What do such life-and-death developments reveal about the contemporary social world?

Thanks to Google Home, Nest, and Amazon Echo and their invisible, feminized virtual assistants, a person can leave behind a cluster of smart objects: a self-contained universe of efficiency. Sometimes smart devices have the capacity to become otherworldly. In June 2022, Amazon advertised its new Alexa speakers by claiming the devices could manifest deepfaked voices of dead relatives, making it so a dead grandmother could read a story to her grandchild.5 Such technologies have emotional and ethical repercussions. Do smart objects have afterlives? Or, rather, what happens when the Internet of Things breaks down? What fantasies about transcendence and immortality exist in boring appliances like the Roomba?

pages: 418 words: 102,597

Being You: A New Science of Consciousness
by Anil Seth
Published 29 Aug 2021

theories about why the uncanny valley exists: Mori et al. (2012). ‘deepfake’ technologies: To ‘deepfake’ is to generate a realistic but fake video, usually of a human face, using machine learning to combine a source and a target video. In a widely disseminated example from 2017, the deepfake method was used to create convincing videos of Barack Obama saying things that he did not say (https://www.youtube.com/watch?v=cQ54GDm1eL0). A series of TikTok videos deepfaking Tom Cruise, released in 2021, raises the bar substantially (https://www.theverge.com/22303756/tiktok-tom-cruise-impersonator-deepfake). vast uncontrolled global experiment: The AI researcher Stuart Russell eloquently describes the threats posed by current and near-future AI, as well as ways to redesign AI systems to avoid them, in his book Human Compatible (2019).

Recent advances in machine learning using ‘generative adversarial neural networks’ – GANNs for short – can generate photorealistic faces of people who never actually existed (see opposite).* These images are created by cleverly mixing features from large databases of actual faces, employing techniques similar to those we used in our hallucination machine (described in chapter 6). When combined with ‘deepfake’ technologies, which can animate these faces to make them say anything, and when what they say is powered by increasingly sophisticated speech recognition and language production software, such as GPT-3, we are all of a sudden living in a world populated by virtual people who are effectively indistinguishable from virtual representations of real people.

.† There are legitimate worries about delegating decision-making capability to artificial systems, the inner workings of which may be susceptible to all kinds of bias and caprice, and which may remain opaque – not only to those affected, but also to those who designed them. At the extreme end of the spectrum, what horror could be unleashed if an AI system were put in charge of nuclear weapons, or of the internet backbone? There are also ethical concerns about the psychological and behavioural consequences of AI and machine learning. Privacy invasion by deepfakes, behaviour modification by predictive algorithms, and belief distortion in the filter bubbles and echo chambers of social media are just a few of the many forces that pull at the fabric of our societies. By unleashing these forces we are willingly ceding our identities and autonomy to faceless data-corporations in a vast uncontrolled global experiment.

pages: 401 words: 112,589

Flowers of Fire: The Inside Story of South Korea's Feminist Movement and What It Means for Women's Rights Worldwide
by Hawon Jung
Published 21 Mar 2023

Meanwhile, some users have taken cyber sexual abuse of women to a whole new level with fakeporn, which turns ordinary images of women into porn through various technical tools as basic as Photoshop or as advanced as artificial intelligence (AI). These images and videos are also known as cheap fakes, shallow fakes, or deepfake porn, depending on the sophistication of technology involved. Deepfakes, using a form of AI called deep learning, have been a source of growing alarm over their potential use in political disinformation, after fake videos of politicians—including of Nancy Pelosi drunkenly stammering during a speech—went viral in recent years. But despite all the attention paid to disinformation in politics, most deepfakes have nothing to do with politics—96 percent of such videos circulating online are porn, and all of them target women, a 2019 study showed.53 A quarter of such porn features K-pop stars, although celebrities are not the only victims, and most abusers don’t require advanced skills or technology.

These sex crimes encompass secretly filming others without consent, or “spycam porn,” which also includes downblouse and upskirt photography—the practice of taking nonconsensual images looking down a person’s top or under a person’s skirt, respectively; sharing sexually explicit imagery without the subject’s consent, dubbed “nonconsensual porn,”7 “revenge porn,” or “cyber rape”8; digitally altering others’ images to create fake pornographic photos or videos, also known as “deepfake porn”9; and blackmailing others into providing intimate images, sexual favors, or money by threatening to distribute their private photos, called “sextortion.”10 Such abuse is hardly unique to South Korea; it is a concern worldwide. In the United States, a nationwide survey found that nearly 13 percent of three thousand participants had been threatened by or fallen victim to nonconsensual porn.11 With the sexually charged abuse disproportionately affecting young women, 33 percent of American women under thirty-five said in 2020 that they’d been sexually harassed online, far higher than 11 percent of their male counterparts.12 And in 2022, the Biden administration set up a task force focused on preventing online abuse13 after promising to study “rampant online sexual harassment … including revenge porn, deepfakes,” and their potential link with real-life violence against women, mass shootings, or extremism.14 Europe also saw an alarming rise in online posts of intimate images of women, usually by current or former partners who were “stuck at home in front of a screen” during the COVID-19 lockdowns in 2020.15 Around the same period, Bangladeshi police launched an all-women unit to tackle a rise in online harassment, as women account for most of the nearly 6,100 cases of digital abuse, including nonconsensual porn, reported in the country.16 The scenes in Seoul’s subways may strike some as overly paranoid, but data says otherwise.


Forward: Notes on the Future of Our Democracy
by Andrew Yang
Published 15 Nov 2021

Social media will become all the more powerful in establishing alternate versions of reality as artificial-intelligence-enabled “deepfake” videos become more ubiquitous. If you think there is a lot of disinformation now, what’s coming will be even worse. Synthetic media—audio or video recordings that are altered or doctored by technology—is becoming more and more convincing as technology puts the editing capacities of a Hollywood studio into the hands of individual actors. In 2018, Jordan Peele and BuzzFeed released a video of Barack Obama uttering obscenities that has been viewed millions of times to illustrate deepfake technology in action. Victor Riparbelli, the CEO of a start-up that is generating synthetic media for commercial use, believes that synthetic video may account for up to 90 percent of all video content by 2025.

In 2020, Oxford University researchers found evidence that the governments of seventy different countries practiced online disinformation, often via social media. Imagine a world where we are truly unable to differentiate fact from fiction, where you can show video recordings of anyone saying or doing something and they may simply deny that it happened. Nina Schick, the author of Deepfakes: The Coming Infocalypse, suggests that this could be the end of representative democracy. You could think of the above as alarmist. But what I saw out on the campaign trail across the country between 2018 and 2021 reinforced my sense that we are living through a crisis in journalism and information born of both technology and market-based incentives.

It’s harder to feel outraged after seeing two people sit together and have a nuanced conversation that’s more similar to how human beings interact in real life. SOCIAL MEDIA Social media seems to be the most intractable problem of all. These platforms actually have a few interrelated problems: market incentives that maximize engagement and addiction, misinformation and deepfakes, and data-enabled targeted advertising. There is a big rule when it comes to social media platforms you might have heard of, thanks to Donald Trump: section 230 of the Communications Decency Act (CDA). Section 230 says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

pages: 244 words: 81,334

Picnic Comma Lightning: In Search of a New Reality
by Laurence Scott
Published 11 Jul 2018

New concern for our sense of reality is emerging in the form of ‘deepfakes’ – digitally manipulated videos in which the face of one person is put on the body of someone else. Deepfake technology has come into public view because it has been used to create hybrid porn clips that seem to star Hollywood’s female leads. This ‘face-swapping’ software, which uses AI machine learning, has been turned into an easily accessible app. This type of image might be a credible masquerade of presence, but it is certainly no certificate of it. While deepfakes may become popular as a gross way of externalising private fantasies, they may have more fundamental consequences.

Deepfakes’ potential for framing people, for putting words in their mouths using vocal-manipulation software, is clear. As an article in The New York Times points out, ‘Fake video and audio may become so convincing that it can’t be distinguished from real recordings, rendering audio and video evidence inadmissible in court.’ And so that old treachery of images persists here in new guises: just at the moment when it seems that we can all use video footage to make our claim on what undeniably occurred, the very reliability of such imagery is being compromised.

Herzog et al.’s study on ‘time slices’ suggests that the brain allows such a long lag between our first experience and awareness of a certain stimulus because it ‘wants to give you the best, clearest information it can, and this demands a substantial amount of time’. As well as the obvious moral objections to deepfakes, we should be appalled by all purposely deceitful imagery, out of respect for this constant, hidden micro-industry of our minds. In Herzog’s model, during its first, unconscious ‘processing stage’, the brain discerns the features of objects, such as colour and shape, assembling as accurate a profile of reality as it can, before presenting it to us as a conscious perception.

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

This is one of the more underutilized tools on the Web for fact-checking. If you are suspicious of a Twitter or Facebook account, check to see if the profile photo comes from a stock photo website. Be aware of deepfakes and other synthetic media. A random stranger on the Internet could be anybody, anywhere. But while we’ve learned to distrust user names by themselves, we’re still susceptible to people’s pictures. In the past, a person’s photo was pretty good proof that they existed. No longer. So-called deepfake technology makes it possible to generate photorealistic images of people who don’t exist. For now, one can still spot them with a bit of practice. Learn how at our website, http://whichfaceisreal.com.
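The stock-photo check above is, under the hood, a fingerprint comparison. Below is a minimal sketch of one classic image fingerprint, the 64-bit "average hash" (a simplified stand-in for whatever a real reverse-image-search service actually runs): shrink the image to an 8×8 grid and record which cells are brighter than the mean. Near-duplicates then differ in only a few bits.

```python
def average_hash(pixels, size=8):
    """64-bit fingerprint of a grayscale image given as a 2D list of 0-255 values."""
    h, w = len(pixels), len(pixels[0])
    # Downscale to a size x size grid by averaging rectangular blocks.
    cells = []
    for r in range(size):
        for c in range(size):
            rows = range(r * h // size, (r + 1) * h // size)
            cols = range(c * w // size, (c + 1) * w // size)
            block = [pixels[i][j] for i in rows for j in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    # Bit i is 1 when cell i is brighter than the overall mean.
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Synthetic demo: a gradient "photo", a re-brightened copy, and an unrelated image.
photo = [[2 * (i + j) for j in range(64)] for i in range(64)]
brighter = [[min(255, p + 30) for p in row] for row in photo]
other = [[255 if (i // 8 + j // 8) % 2 else 0 for j in range(64)] for i in range(64)]

d_same = hamming(average_hash(photo), average_hash(brighter))
d_diff = hamming(average_hash(photo), average_hash(other))
print(d_same, d_diff)  # small distance for the near-duplicate, large for the unrelated image
```

Production systems use sturdier fingerprints (pHash, dHash, learned embeddings), but the idea is the same: compare small fingerprints rather than full images.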

Similar machine learning algorithms are able to “voiceshop,” generating fake audio and video that are nearly indistinguishable from the real thing. By synthesizing audio from previous recordings and grafting expressions and facial movements from a person acting as model onto the visage of a target, these so-called deepfake videos can make it look like anyone is doing or saying anything. Director and comedian Jordan Peele created a public service announcement about fake news using this technology. Peele’s video depicts Barack Obama addressing the American people about fake news, misinformation, and the need for trusted news sources.

And we adjusted to a Photoshop world in which pictures do lie. How? In a word, we triangulate. We no longer trust a single message, a single image, a single claim. We look for independent witnesses who can confirm testimony. We seek multiple images from multiple vantage points. Society will adjust similarly to a world of deepfakes and whatever reality-bending technologies follow. There are three basic approaches for protecting ourselves against misinformation and disinformation online. The first is technology. Tech companies might be able to use machine learning to detect online misinformation and disinformation. While this is a hot area for research and development, we are not optimistic.

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

At the time of writing, for example, there is a lot of concern about DeepFakes.26 These are pictures or videos which have been altered by a neural network to include people who were not present in the original. A notorious example occurred in 2019, when a video of US House Speaker Nancy Pelosi was altered to make it appear that she had a speaking impairment, or perhaps was under the influence of drugs or alcohol.27 DeepFakes have been used to alter pornographic videos too, inserting ‘actors’ into the video who did not in fact participate.28 At present, the quality of DeepFake videos is poor, but it is getting better, and, soon, we won’t be able to tell whether a photo or video is real or a DeepFake.

At that point, the principle that photos or videos provide a reliable record of events will no longer be viable. If we each inhabit our own, AI-powered digital universe, there is a real danger that societies, built on common values and principles, will start to fracture. Fake news on social media is just the beginning. Fake AI We saw in Chapter 4 how software agents like Siri, Alexa and Cortana emerged in the first decade of this century as direct descendants of research on agents in the 1990s. Shortly after Siri emerged, a number of stories appeared in the popular press reporting some undocumented features of the system.

[Book index (flattened): deep learning 168, 184–90, 208; DeepBlue 163–4; DeepFakes 297–8; DeepMind 167–8, 190–200, 220–1, 327–8; fake AI 298–301; fake news 293–8; fake pictures of people 214.]
Papert) 180–1, 210 personal healthcare management 217–20 perverse instantiation 260–1 Phaedrus 315 physical stance 319–20 Plato 315 police 277–80 Pratt, Vaughan 117–19 preference relations 151 preferences 150–2, 154 privacy 219 problem solving and planning 55–6, 66–77, 128 programming 21–2 programming languages 144 PROLOG 112–14, 363–4 PROMETHEUS 224–5 protein folding 214 Proust, Marcel 205–8 Q qualia 306–7 QuickSort 26 R R1/XCON 98–9 radiology 215, 221 railway networks 259 RAND Corporation 51 rational decision making 150–5 reasoning 55–6, 121–3, 128–30, 137, 315–16, 323–4, 328 regulation of AI 243 reinforcement learning 172–3, 193, 195, 262 representation harm 288 responsibility 257–8 rewards 172–3, 196 robots – as autonomous weapons 284–5 – Baye’s theorem 157 – beliefs 108–10 – fake 299–300 – indistinguishability 38 – intentional stance 326–7 – SHAKEY 63–6 – Sophia 299–300 – Three Laws of Robotics 244–6 – trivial tasks 61 – vacuum cleaning 132–6 Rosenblatt, Frank 174–81 rules 91–2, 104, 359–62 Russia 261 Rutherford, Ernest (1st Baron Rutherford of Nelson) 242 S Sally-Anne tests 328–9, 330 Samuel, Arthur 75–7 SAT solvers 164–5 Saudi Arabia 299–300 scripts 100–2 search 26, 68–77, 164, 199 search trees 70–1 Searle, John 311–14 self-awareness 41, 305 see also consciousness semantic nets 102 sensors 54 SHAKEY the robot 63–6 SHRDLU 56–63 Simon, Herb 52–3, 86 the Singularity 239–43 The Singularity is Near (Kurzweil) 239 Siri 149, 298 Smith, Matt 201–4 smoking 173 social brain 317–19 see also brains social media 293–6 social reasoning 323, 324–5 social welfare 249 software agents 143–9 software bugs 258 Sophia 299–300 sorting 26 spoken word translation 27 STANLEY 226 STRIPS 65 strong AI 36–8, 41, 309–14 subsumption architecture 132–6 subsumption hierarchy 134 sun 304 supervised learning 169 syllogisms 105, 106 symbolic AI 42–3, 44, 181 synapses 174 Szilard, Leo 242 T tablet computers 146 team-building problem 78–81, 83 Terminator narrative of AI 237–9 Tesla 228–9 text 
recognition 169–71 Theory of Mind (ToM) 330 Three Laws of Robotics 244–6 TIMIT 292 ToM (Theory of Mind) 330 ToMnet 330 TouringMachines 139–41 Towers of Hanoi 67–72 training data 169–72, 288–9, 292 translation 204–8 transparency 258 travelling salesman problem 82–3 Trolley Problem 246–53 Trump, Donald 294 Turing, Alan 14–15, 17–19, 20, 24–6, 77–8 Turing Machines 18–19, 21 Turing test 29–38 U Uber 168, 230 uncertainty 97–8, 155–8 undecidable problems 19, 78 understanding 201–4, 312–14 unemployment 264–77 unintended consequences 263 universal basic income 272–3 Universal Turing Machines 18, 19 Upanishads 315 Urban Challenge 2007 226–7 utilitarianism 249 utilities 151–4 utopians 271 V vacuum cleaning robots 132–6 values and norms 260 video games 192–6, 327–8 virtue ethics 250 Von Neumann and Morgenstern model 150–5 Von Neumann architecture 20 W warfare 285–6 WARPLAN 113 Waymo 231, 232–3 weak AI 36–8 weapons 281–7 wearable technology 217–20 web search 148–9 Weizenbaum, Joseph 32–4 Winograd schemas 39–40 working memory 92 X XOR (exclusive OR) 180 Z Z3 computer 19–20

The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot
by Yolande Strengers and Jenny Kennedy
Published 14 Apr 2020

,” Knowledge Base, RealDoll, accessed December 3, 2019, https://www.realdoll.com/knowledgebase/can-i-have-a-doll-made-of-a-celebrity-model-or-my-ex-girlfriend/. 84. Deepfakes use a machine learning technique to superimpose “fake” images and videos over source images and videos to generate new and increasingly realistic content. Deepfakes are being used to create fake and nonconsensual celebrity pornography videos or revenge porn (the distribution, or threat of distribution, of sexually explicit images and videos without the permission of the person/people depicted). Asher Flynn, “Image-Based Abuse: The Disturbing Phenomenon of the ‘Deepfake,’” Lens, March 12, 2019, https://lens.monash.edu/@politics-society/2019/03/12/1373665/image-based-abuse-deep-fake. 85.
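The footnote above describes deepfake generation as superimposing "fake" images over source images. At its crudest, the final compositing step is just region blending; a real deepfake learns the superimposed patch with a neural network, but this stdlib-only Python toy (frame size, patch, coordinates, and blend weight are all invented for illustration) shows the superimposition idea:

```python
# Toy illustration of the compositing step behind a face swap: alpha-blend a
# synthetic "face" patch into a target frame. A real deepfake would generate
# the patch with a neural network; here both images are plain nested lists
# of grayscale values.
FRAME, PATCH, ALPHA = 64, 16, 0.8

target = [[0.0] * FRAME for _ in range(FRAME)]   # dark target frame
face = [[1.0] * PATCH for _ in range(PATCH)]     # bright fake "face" patch

y0, x0 = 24, 24  # where the patch is superimposed on the frame
for dy in range(PATCH):
    for dx in range(PATCH):
        target[y0 + dy][x0 + dx] = (
            ALPHA * face[dy][dx] + (1 - ALPHA) * target[y0 + dy][x0 + dx]
        )
```

Pixels inside the patch region become a weighted mix of fake and original content; everything outside is untouched, which is why crude swaps are often detectable at the blend boundary.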

RealDoll refuses to make exact replicas of women without their consent, but can “use photographs of a person of your choice to select a face structure as similar as possible from our line of 16 standard female faces.” It has apparently “done this with good success in the past.”83 It’s a move that raises many of the same issues of consent now preoccupying the attention of criminologists and ethicists in relation to “deepfakes,” and leading to concerns about the risk of image-based sexual abuse (which we return to in chapter 7).84

Figure 5.9 Mark 1 doll by Ricky Ma made to look like Scarlett Johansson. Source: Bobby Yip/Reuters

In addition, sexbots raise their own unique roboethical issues regarding consent and rape.

See Social robots Computer nerds, 44 Computer science code of ethics, 172 gender imbalances in, 9–11, 54, 62, 163, 212–214 Comuzi (design studio), 219 Consent affirmative (enthusiastic), 140–141, 198, 222–224 and rape issues with sexbots, 114, 134–141, 142, 223–224 Consumer Electronics Show, 30, 58–59, 64, 112, 123–124, 145–146 Consumer Technology Association, 123–124 Consumption, 82, 84, 90–91, 97, 149–151, 186 Control by users of smart wives, 39–40, 193–194 Copenhagen Pride, 172 Cortana (Halo), 146, 148 Cortana (Microsoft), 11, 83, 154, 182 Counterproductive (Gregg), 33 Cowan, Ruth Schwartz, 42 Cox, Tracey, 124 Crabb, Annabel, 1, 2, 6–7, 215 Crawford, Kate, 11, 97–98, 99, 103, 107, 189, 220 Crypton Future Media, 126 Cultural genitals, 62 Custer’s Revenge (video game), 135 Cuteness in social robots, 68–70 Cybersecurity risks, 187–189, 192, 193–194, 196, 197–198, 221 Cyborgs, female, 148, 152 Danaher, John, 113, 116 Darling, Kate, 56, 71–73, 173 Data (personal), 187–190, 193, 194, 196, 197, 224 Data centers, 101 Davecat (iDollator), 126 Davidoff, Scott, 39–40 Davis, Allison, 112, 119 Deepfakes, 134, 267n84 Delusions of Gender (Fine), 56 Dementia, 9, 52, 53, 74, 75 Demon Seed (film), 198–199 Design fiction, 220–221 Designing a Feminist Alexa (workshops), 170 Design of technologies and troubling gender, 203, 210 Despicable Me (film series), 57–58 Destructive image of sci-fi robots, 67 Devlin, Kate, 110, 111, 114–115, 116, 130, 136, 141–142 DiCarlo, Lora Haddock, 123 Digital chivalry, 44 Digital content controlled by men, 178 Digital feudalism, 85 Digital housekeeping, 43, 44–47, 178, 216 Digital skills and gender, 9–11, 178–179 Digital skills and racial diversity, 11 Digital tethering of women to men, 178 Digital voice assistants.

pages: 320 words: 95,629

Decoding the World: A Roadmap for the Questioner
by Po Bronson
Published 14 Jul 2020

Google, Amazon, Facebook all have different data centers and wall them off.” “So, you know deepfakes of course. Basically they fool a human into thinking it’s a real image or video. Well, that’s what hacking is. Except not to fool the human eye, rather to fool a computer that’s guarding itself. For every computer, there are authorized users and unauthorized users. The same systems we use to generate deepfakes is how it tricks a computer to trust it—to believe it’s an authorized user. It just gets better and better until it’s in.” “So AIs start hacking into each other? Pretending to be authorized human users?” “Yes. And it is the nature of deepfakes that the fakes are always a tiny bit ahead of the fake detectors.”
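The arms race this passage describes, with the fakes staying "a tiny bit ahead" of the fake detectors, can be sketched as an adversarial loop. In this stdlib-only Python toy (every number and name is invented for illustration, not drawn from the book), the "generator" is a Gaussian that can only shift its mean, and the "detector" is a threshold that re-fits to each batch of fakes:

```python
import random
import statistics

# Adversarial toy: a "detector" keeps re-fitting a threshold to separate real
# samples from fakes, and a "generator" keeps shifting its output to slip past
# the refreshed threshold, echoing the fake-vs-detector arms race.
random.seed(0)
REAL_MEAN = 4.0   # distribution of "real" content
gen_mean = 0.0    # the generator starts far from realistic

for step in range(200):
    real = [random.gauss(REAL_MEAN, 1.0) for _ in range(256)]
    fake = [random.gauss(gen_mean, 1.0) for _ in range(256)]
    # Detector adapts first: split the batches at the midpoint of their means.
    threshold = (statistics.fmean(real) + statistics.fmean(fake)) / 2.0
    # Generator adapts next: nudge its mean until half its fakes pass as real,
    # i.e. until the detector is reduced to guessing.
    fooled = sum(f > threshold for f in fake) / len(fake)
    gen_mean += 0.5 - fooled
```

At equilibrium `gen_mean` sits near `REAL_MEAN`: the detector's best threshold leaves it at chance, which is the sense in which each round of fakes stays one step ahead of the current detector.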

It will be about what we think about when we think about China. In the fall of 2019, huge American institutions like the NBA, Activision Blizzard, Tiffany, and Vans all self-censored themselves to appease the Chinese government—and were savagely attacked for it back home. We live at a time when China could easily create a deepfake with Houston Rockets star James Harden saying whatever they wanted him to say—but such trickery wasn’t needed, because the real Harden was going to say it anyway. This was a taste of things to come. Americans—and Europeans—will be asking themselves, “Do I want to buy something made in China? What if I don’t like their position on civil rights?”

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

It then uses these signals as immediate feedback on the success or failure of its attempt to influence each individual; in this way, it quickly learns to become more effective in its work. This is how content selection algorithms on social media have had their insidious effect on political opinions. Another recent change is that the combination of AI, computer graphics, and speech synthesis is making it possible to generate deepfakes—realistic video and audio content of just about anyone saying or doing just about anything. The technology will require little more than a verbal description of the desired event, making it usable by more or less anyone in the world. Cell phone video of Senator X accepting a bribe from cocaine dealer Y at shady establishment Z?

See intelligent agent agent program, 48 “AI Researchers on AI Risk” (Alexander), 153 Alciné, Jacky, 60 Alexander, Scott, 146, 153, 169–70 algorithms, 33–34 Bayesian networks and, 275–77 Bayesian updating, 283, 284 bias and, 128–30 chess-playing, 62–63 coding of, 34 completeness theorem and, 51–52 computer hardware and, 34–35 content selection, 8–9, 105 deep learning, 58–59, 288–93 dynamic programming, 54–55 examples of common, 33–34 exponential complexity of problems and, 38–39 halting problem and, 37–38 lookahead search, 47, 49–50, 260–61 propositional logic and, 268–70 reinforcement learning, 55–57, 105 subroutines within, 34 supervised learning, 58–59, 285–93 Alibaba, 250 AlphaGo, 6, 46–48, 49–50, 55, 91, 92, 206–7, 209–10, 261, 265, 285 AlphaZero, 47, 48 altruism, 24, 227–29 altruistic AI, 173–75 Amazon, 106, 119, 250 Echo, 64–65 “Picking Challenge” to accelerate robot development, 73–74 Analytical Engine, 40 ants, 25 Aoun, Joseph, 123 Apple HomePod, 64–65 “Architecture of Complexity, The” (Simon), 265 Aristotle, 20–21, 39–40, 50, 52, 53, 114, 245 Armstrong, Stuart, 221 Arnauld, Antoine, 21–22 Arrow, Kenneth, 223 artificial intelligence (AI), 1–12 agent (See intelligent agent) agent programs, 48–59 beneficial, principles for (See beneficial AI) benefits to humans of, 98–102 as biggest event in human history, 1–4 conceptual breakthroughs required for (See conceptual breakthroughs required for superintelligent AI) decision making on global scale, capability for, 75–76 deep learning and, 6 domestic robots and, 73–74 general-purpose, 46–48, 100, 136 global scale, capability to sense and make decisions on, 74–76 goals and, 41–42, 48–53, 136–42, 165–69 governance of, 249–53 health advances and, 101 history of, 4–6, 40–42 human preferences and (See human preferences) imagining what superintelligent machines could do, 93–96 intelligence, defining, 39–61 intelligent personal assistants and, 67–71 limits of superintelligence, 96–98 living standard increases and, 98–100 
logic and, 39–40 media and public perception of advances in, 62–64 misuses of (See misuses of AI) mobile phones and, 64–65 multiplier effect of, 99 objectives and, 11–12, 43, 48–61, 136–42, 165–69 overly intelligent AI, 132–44 pace of scientific progress in creating, 6–9 predicting arrival of superintelligent AI, 76–78 reading capabilities and, 74–75 risk posed by (See risk posed by AI) scale and, 94–96 scaling up sensory inputs and capacity for action, 94–95 self-driving cars and, 65–67, 181–82, 247 sensing on global scale, capability to, 75 smart homes and, 71–72 softbots and, 64 speech recognition capabilities and, 74–75 standard model of, 9–11, 13, 48–61, 247 Turing test and, 40–41 tutoring by, 100–101 virtual reality authoring by, 101 World Wide Web and, 64 “Artificial Intelligence and Life in 2030” (One Hundred Year Study on Artificial Intelligence), 149, 150 Asimov, Isaac, 141 assistance games, 192–203 learning preferences exactly in long run, 200–202 off-switch game, 196–200 paperclip game, 194–96 prohibitions and, 202–3 uncertainty about human objectives, 200–202 Association for the Advancement of Artificial Intelligence (AAAI), 250 assumption failure, 186–87 Atkinson, Robert, 158 Atlas humanoid robot, 73 autonomous weapons systems (LAWS), 110–13 autonomy loss problem, 255–56 Autor, David, 116 Avengers: Infinity War (film), 224 “avoid putting in human goals” argument, 165–69 axiomatic basis for utility theory, 23–24 axioms, 185 Babbage, Charles, 40, 132–33 backgammon, 55 Baidu, 250 Baldwin, James, 18 Baldwin effect, 18–20 Banks, Iain, 164 bank tellers, 117–18 Bayes, Thomas, 54 Bayesian logic, 54 Bayesian networks, 54, 275–77 Bayesian rationality, 54 Bayesian updating, 283, 284 Bayes theorem, 54 behavior, learning preferences from, 190–92 behavior modification, 104–7 belief state, 282–83 beneficial AI, 171–210, 247–49 caution regarding development of, reasons for, 179 data available for learning about human preferences, 180–81 economic incentives for, 
179–80 evil behavior and, 179 learning to predict human preferences, 176–77 moral dilemmas and, 178 objective of AI is to maximize realization of human preferences, 173–75 principles for, 172–79 proofs for (See proofs for beneficial AI) uncertainty as to what human preferences are, 175–76 values, defining, 177–78 Bentham, Jeremy, 24, 219 Berg, Paul, 182 Berkeley Robot for the Elimination of Tedious Tasks (BRETT), 73 Bernoulli, Daniel, 22–23 “Bill Gates Fears AI, but AI Researchers Know Better” (Popular Science), 152 blackmail, 104–5 blinking reflex, 57 blockchain, 161 board games, 45 Boole, George, 268 Boolean (propositional) logic, 51, 268–70 bootstrapping process, 81–82 Boston Dynamics, 73 Bostrom, Nick, 102, 144, 145, 150, 166, 167, 183, 253 brains, 16, 17–18 reward system and, 17–18 Summit machine, compared, 34 BRETT (Berkeley Robot for the Elimination of Tedious Tasks), 73 Brin, Sergey, 81 Brooks, Rodney, 168 Brynjolfsson, Erik, 117 Budapest Convention on Cybercrime, 253–54 Butler, Samuel, 133–34, 159 “can’t we just . . .” responses to risks posed by AI, 160–69 “. . . avoid putting in human goals,” 165–69 “. . . merge with machines,” 163–65 “. . . put it in a box,” 161–63 “. . . switch it off,” 160–61 “. . . 
work in human-machine teams,” 163 Cardano, Gerolamo, 21 caring professions, 122 Chace, Calum, 113 changes in human preferences over time, 240–45 Changing Places (Lodge), 121 checkers program, 55, 261 chess programs, 62–63 Chollet, François, 293 chunking, 295 circuits, 291–92 CNN, 108 CODE (Collaborative Operations in Denied Environments), 112 combinatorial complexity, 258 common operational picture, 69 compensation effects, 114–17 completeness theorem (Gödel’s), 51–52 complexity of problems, 38–39 Comprehensive Nuclear-Test-Ban Treaty (CTBT) seismic monitoring, 279–80 computer programming, 119 computers, 32–61 algorithms and (See algorithms) complexity of problems and, 38–39 halting problem and, 37–38 hardware, 34–35 intelligent (See artificial intelligence) limits of computation, 36–39 software limitations, 37 special-purpose devices, building, 35–36 universality and, 32 computer science, 33 “Computing Machinery and Intelligence” (Turing), 40–41, 149 conceptual breakthroughs required for superintelligent AI, 78–93 actions, discovering, 87–90 cumulative learning of concepts and theories, 82–87 language/common sense problem, 79–82 mental activity, managing, 90–92 consciousness, 16–17 consequentialism, 217–19 content selection algorithms, 8–9, 105 content shortcomings, of intelligent personal assistants, 67–68 control theory, 10, 44–45, 54, 176 convolutional neural networks, 47 cost function to evaluate solutions, and goals, 48 Credibility Coalition, 109 CRISPR-Cas9, 156 cumulative learning of concepts and theories, 82–87 cybersecurity, 186–87 Daily Telegraph, 77 decision making on global scale, 75–76 decoherence, 36 Deep Blue, 62, 261 deep convolutional network, 288–90 deep dreaming images, 291 deepfakes, 105–6 deep learning, 6, 58–59, 86–87, 288–93 DeepMind, 90 AlphaGo, 6, 46–48, 49–50, 55, 91, 92, 206–7, 209–10, 261, 265, 285 AlphaZero, 47, 48 DQN system, 55–56 deflection arguments, 154–59 “research can’t be controlled” arguments, 154–56 silence regarding risks of 
AI, 158–59 tribalism, 150, 159–60 whataboutery, 156–57 Delilah (blackmail bot), 105 denial of risk posed by AI, 146–54 “it’s complicated” argument, 147–48 “it’s impossible” argument, 149–50 “it’s too soon to worry about it” argument, 150–52 Luddism accusation and, 153–54 “we’re the experts” argument, 152–54 deontological ethics, 217 dexterity problem, robots, 73–74 Dickinson, Michael, 190 Dickmanns, Ernst, 65 DigitalGlobe, 75 domestic robots, 73–74 dopamine, 17, 205–6 Dota 2, 56 DQN system, 55–56 Dune (Herbert), 135 dynamic programming algorithms, 54–55 E. coli, 14–15 eBay, 106 ECHO (first smart home), 71 “Economic Possibilities for Our Grandchildren” (Keynes), 113–14, 120–21 The Economic Singularity: Artificial Intelligence and the Death of Capitalism (Chace), 113 Economist, The, 145 Edgeworth, Francis, 238 Eisenhower, Dwight, 249 electrical action potentials, 15 Eliza (first chatbot), 67 Elmo (shogi program), 47 Elster, Jon, 242 Elysium (film), 127 emergency braking, 57 enfeeblement of humans problem, 254–55 envy, 229–31 Epicurus, 219 equilibrium solutions, 30–31, 195–96 Erewhon (Butler), 133–34, 159 Etzioni, Oren, 152, 157 eugenics movement, 155–56 expected value rule, 22–23 experience, learning from, 285–95 experiencing self, and preferences, 238–40 explanation-based learning, 294–95 Facebook, 108, 250 Fact, Fiction and Forecast (Goodman), 85 fact-checking, 108–9, 110 factcheck.org, 108 fear of death (as an instrumental goal), 140–42 feature engineering, 84–85 Fermat, Pierre de, 185 Fermat’s Last Theorem, 185 Ferranti Mark I, 34 Fifth Generation project, 271 firewalling AI systems, 161–63 first-order logic, 51, 270–72 probabilistic languages and, 277–80 propositional logic distinguished, 270 Ford, Martin, 113 Forster, E.

(tv show), 80 Jevons, William Stanley, 222 JiaJia (robot), 125 jian ai, 219 Kahneman, Daniel, 238–40 Kasparov, Garry, 62, 90, 261 Ke Jie, 6 Kelly, Kevin, 97, 148 Kenny, David, 153, 163 Keynes, John Maynard, 113–14, 120–21, 122 King Midas problem, 136–40 Kitkit School (software system), 70 knowledge, 79–82, 267–72 knowledge-based systems, 50–51 Krugman, Paul, 117 Kurzweil, Ray, 163–64 language/common sense problem, 79–82 Laplace, Pierre-Simon, 54 Laser-Interferometer Gravitational-Wave Observatory (LIGO), 82–84 learning, 15 behavior, learning preferences from, 190–92 bootstrapping process, 81–82 culture and, 19 cumulative learning of concepts and theories, 82–87 data-driven view of, 82–83 deep learning, 6, 58–59, 84, 86–87, 288–93 as evolutionary accelerator, 18–20 from experience, 285–93 explanation-based learning, 294–95 feature engineering and, 84–85 inverse reinforcement learning, 191–93 reinforcement learning, 17, 47, 55–57, 105, 190–91 supervised learning, 58–59, 285–93 from thinking, 293–95 LeCun, Yann, 47, 165 legal profession, 119 lethal autonomous weapons systems (LAWS), 110–13 Life 3.0 (Tegmark), 114, 138 LIGO (Laser-Interferometer Gravitational-Wave Observatory), 82–84 living standard increases, and AI, 98–100 Lloyd, Seth, 37 Lloyd, William, 31 Llull, Ramon, 40 Lodge, David, 1 logic, 39–40, 50–51, 267–72 Bayesian, 54 defined, 267 first-order, 51–52, 270–72 formal language requirement, 267 ignorance and, 52–53 programming, development of, 271 propositional (Boolean), 51, 268–70 lookahead search, 47, 49–50, 260–61 loophole principle, 202–3, 216 Lovelace, Ada, 40, 132–33 loyal AI, 215–17 Luddism accusation, 153–54 machines, 33 “Machine Stops, The” (Forster), 254–55 machine translation, 6 McAfee, Andrew, 117 McCarthy, John, 4–5, 50, 51, 52, 53, 65, 77 malice, 228–29 malware, 253 map navigation, 257–58 mathematical proofs for beneficial AI, 185–90 mathematics, 33 matrices, 33 Matrix, The (film), 222, 235 MavHome project, 71 mechanical calculator, 40 mental 
security, 107–10 “merge with machines” argument, 163–65 metareasoning, 262 Methods of Ethics, The (Sidgwick), 224–25 Microsoft, 250 TrueSkill system, 279 Mill, John Stuart, 217–18, 219 Minsky, Marvin, 4–5, 76, 153 misuses of AI, 103–31, 253–54 behavior modification, 104–7 blackmail, 104–5 deepfakes, 105–6 governmental reward and punishment systems, 106–7 intelligence agencies and, 104 interpersonal services, takeover of, 124–31 lethal autonomous weapons systems (LAWS), 110–13 mental security and, 107–10 work, elimination of, 113–24 mobile phones, 64–65 monotonicity and, 24 Moore, G.

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

By combining these techniques, AI can thus already imitate a specific person’s writing style, replicate their voice, or even realistically graft their face into a whole video. As mentioned in the previous chapter, Google’s experimental Duplex technology uses AI that can react believably in unscripted phone conversations—so successfully that when it was first tested in 2018, real humans it called had no idea they were speaking to a computer.[79] “Deepfake” videos can be used to create harmful political propaganda, or to imagine what movies would look like with different actors in iconic roles.[80] For example, a YouTube channel called Ctrl Shift Face has a viral clip showing what Javier Bardem’s character in No Country for Old Men would look like if played by Arnold Schwarzenegger, Willem Dafoe, or Leonardo DiCaprio.[81] These technologies are still in their infancy.

Assistant Calls Local Businesses to Make Appointments,” Jeff Grubb’s Game Mess, YouTube video, May 8, 2018, https://www.youtube.com/watch?v=D5VN56jQMWM. BACK TO NOTE REFERENCE 79 Monkeypaw Productions and BuzzFeed, “You Won’t Believe What Obama Says in This Video,” BuzzFeedVideo, YouTube video, April 17, 2018, https://www.youtube.com/watch?v=cQ54GDm1eL0; “Could Deepfakes Weaken Democracy?,” Economist, YouTube video, October 22, 2019, https://www.youtube.com/watch?v=_m2dRDQEC1A; Kristin Houser, “This ‘RoboTrump’ AI Mimics the President’s Writing Style,” Futurism, October 23, 2019, https://futurism.com/robotrump-ai-text-generator-trump. BACK TO NOTE REFERENCE 80 “No Country for Old Actors,” Ctrl Shift Face, YouTube video, November 13, 2019, https://www.youtube.com/watch?

See brain-computer interface computer programming, 50, 60, 124, 160 “Computing Machinery and Intelligence” (Turing), 12 computronium, 8, 68 Concorde, 113 connectionism, 14, 18, 26–29, 40–41, 54 connectomic emulations, 104 consciousness, 1, 7, 75–82 AI, 65 of animals, 75, 76, 77–78, 80 causal explanation of, 80–82 origins of, 78 overview of, 75–79 replicants and, 103 subjective, 62, 76–80, 81, 93, 94 use of drugs to alter, 109 use of term, 76 “You 2” conscious, 90–94, 102, 103 zombies, qualia, and hard problem of, 79–82 conspiracy theories, 227–28, 273 consumer surplus, 212, 213 contact lenses, 222 containerization, 204 contextual memory, 55 Conway’s Game of Life, 83–86 Coolidge, Calvin, 232 cooperation, 153 Core 2 Duo E6300, 166, 308–9 Cornell University, 27 corn production, 180 cosmetics, 109 COVID-19 pandemic, 135, 271–73 AI and medicine, 227, 237–38, 240, 278 biotechnology risks, 271–73 income and poverty, 139, 143, 144, 200 labor force, 146, 147, 215, 216 misinformation about, 227, 273 social safety spending, 223, 224 teleworking, 146, 172 COVID-19 vaccines, 227, 237–38, 240, 273 Craigslist, 218 creativity, 38, 48, 221 Cretaceous-Paleogene extinction event, 34 crime, 148–54, 233 actual US rates, 119 homicide in US, 151 homicide rates in Western Europe since 1300, 149 pollution and, 150–51, 233 public perception of, 118, 118–20, 152, 233 racial disparities and policing, 150 violent crime in US, 151 CRISPR, 241 crop densities, 180 crossword puzzles, 64, 326n cruise missiles, 270 cryptocurrencies, 217–18 cryptogenic strokes, 262 Ctrl Shift Face, 100 cultured meat, 169–70, 171 Curtiss, Susan, 88–89 cyberattacks, 193 cybersecurity, 228 Cycorp, 17 D Dafoe, Willem, 100 Dalí, Salvador, 49 DALL-E, 49–50, 221 DALL-E 2, 209 Dartmouth College workshop on AI, 12–13, 14 Darwin, Charles, 38–39, 48 data collection and analysis, 58–59 Data General Nova, 165, 301 data mining, 102 dating apps, 232 Dawkins, Richard, 334n decentralized manufacturing, 173, 185–87 DEC PDP-1, 15, 165, 
300 DEC PDP-4, 165, 300–301 DEC PDP-8, 165, 301 Deep Blue. See IBM deepfake videos, 100 deep learning, 40–54, 99–100 Moore’s law, 40–41 transformers, 46–47 DeepMind. See Alphabet deep neural nets, 43–44, 154, 196, 369n deep reinforcement learning, 41–43 deer mice, 32 Defense Advanced Research Projects Agency (DARPA), 71, 195, 280 deflation, 167, 169, 214 degenerative diseases, 134–35, 192, 239–40 DELFI, 243 democracy, 122, 159–63, 194 spread since 1800, 163 dendrites, 93 deserts, solar power in, 174–75 deskilling, 208 determinism, 82–83, 86–88, 331–32n Deutschland (ship), 113 diabetes, 134, 192, 259 Diamandis, Peter, 112 Diamond Age, The (Stephenson), 250–51 diamondoids, 250–51, 252, 254–55, 258 diarrheal disease, 177 DiCaprio, Leonardo, 100 digestion, 71, 259 digital economy, 218–19, 254 dinosaurs, 34 Diplomacy (game), 42 Discovery Channel, 220 disease treatment and prevention, 4, 133–36 combining AI with biotechnology, 235–45 DNA, 8, 102, 186, 238, 242, 261–62, 271–72 junk, 242 nanotechnology, 251, 261–62 origami, 251 repair, 71 sequencing, 2, 135, 189, 261 dogs, 77 Domain Name System (DNS), 132 domestic appliances, 122, 128 down quarks, 97 Drexler, K.

Doppelganger: A Trip Into the Mirror World
by Naomi Klein
Published 11 Sep 2023

“People take on these digital selves”: American Dharma. “I want Dave in Accounting to be Ajax”: Jennifer Senior, “American Rasputin,” The Atlantic, June 6, 2022. “Lee Seong-yoon”: Timothy W. Martin and Dasl Yoon, “These Campaigns Hope ‘Deepfake’ Candidates Help Get Out the Vote,” Wall Street Journal, March 8, 2022. “a bit creepy”: Martin and Yoon, “These Campaigns Hope ‘Deepfake’ Candidates Help Get Out the Vote.” “These digital doppelgangers”: Mark Sutherland, “ABBA’s ‘Voyage’ CGI Extravaganza Is Everything It’s Cracked Up to Be, and More: ‘Concert’ Review,” Variety, May 27, 2022. “to take the sting out”: Anjana Ahuja, “‘Grief Tech’ Avatars Aim to Take the Sting Out of Death,” Financial Times, December 20, 2022.

If Mark Zuckerberg’s plans for the “Metaverse” proceed as he hopes, with all of us represented by personalized animated avatars to our banks and our friends, this is only going to get more confusing. It already is. In March 2022, South Korea elected Yoon Suk-yeol as its new president. The conservative politician campaigned, in part, by seeding the internet with a deepfake version of himself, known as AI Yoon. This version, created by his younger campaign team, was funnier and more charming than the real Yoon. The Wall Street Journal reported that for some voters the fake politician—whose fakeness was not hidden—felt more authentic and appealing than the real one: “Lee Seong-yoon, a 23-year-old college student, first thought AI Yoon was real after viewing a video online.

pages: 309 words: 79,414

Going Dark: The Secret Social Lives of Extremists
by Julia Ebner
Published 20 Feb 2020

Available at https://archive.org/stream/TheCallForAGlobalIslamicResistance-EnglishTranslationOfSomeKeyPartsAbuMusabAsSuri/TheCallForAGlobalIslamicResistanceSomeKeyParts_djvu.txt. 2 Alex Hern, ‘New AI fake text generator may be too dangerous to release, say creators’, Guardian, 14 February 2019. Available at https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction. 3 Paige Leskin, ‘The AI tech behind scary-real celebrity “deepfakes” is being used to create completely fictitious faces, cats, and Airbnb listings’, Business Insider, 21 February 2019. Available at https://www.businessinsider.de/deepfake-tech-create-fictitious-faces-cats-airbnb-listings-2019-2?r=US&IR=T. 4 Lizzie Plaugic, ‘Watch a man manipulate George Bush’s face in real time’, Verge, 21 March 2016. Available at https://www.theverge.com/2016/3/21/11275462/facial-transfer-donald-trump-george-bush-video. 5 Hern, ‘New AI fake text generator may be too dangerous to release, say creators’. 6 Friedrich Nietzsche, Beyond Good and Evil: Prelude to a Philosophy of the Future (Mineola; New York: Dover Publications, unabridged edn, 1997). 7 Amy Chua, Political Tribes: Group Instinct and the Fate of Nations (London: Bloomsbury, 2018). 8 Ibid., p. 164. 9 David Goodhart, The Road to Somewhere: The Populist Revolt and the Future of Politics (London: Hurst, 2017). 10 Hamza Shaban, ‘Google for the first time outspent every other company to influence Washington in 2017’, Washington Post, 23 January 2018.

pages: 294 words: 81,850

Drunk on All Your Strange New Words
by Eddie Robson
Published 27 Jun 2022

He’s one of those people—she runs into them quite often—who asks questions about her job with a tone of bemusement and rather you than me. “People say they don’t really need translators, the aliens,” he says. “Who says that?” “Apparently it’s all a big grooming ring.” “What?” “Yeah,” Jank says with a leer. He used to send Lydia nasty messages back when she lived around here, until Gil told him to stop it. He’d deepfake vids of them fucking, stuff like that, the most basic sex trolling you could imagine. He’s not changed much. “You must’ve seen them using translators on the feeds though.” Jank shrugs. “Yeah. So?” “So … that’s obviously bollocks, isn’t it.” “Maybe not everyone who went to your school got a posh job like yours though.

Fitz comes around the other side and opens it for her, then guides her out, ensuring she doesn’t step into traffic, not that there is any—the timestamp shows 12:52 A.M. and the street is quiet. This seems to be the last footage of Fitz alive. “That doesn’t tell us much,” says Rollo. “It doesn’t tell us anything,” says Alinn. None of it jogs Lydia’s memory. It could all be a deepfake for all she knows. She’s rarely drawn this much of a blank before. She pushed herself too hard at that conference, and so soon after the festival debacle: Why didn’t she learn? Why didn’t she listen to Fitz? Maybe if she had, he wouldn’t be dead. Rollo sits back and folds his arms. “In the absence of other data, the only other person in that house—” “That’s conjecture,” says Alinn.

pages: 340 words: 101,675

A New History of the Future in 100 Objects: A Fiction
by Adrian Hon
Published 5 Oct 2020

Scenes from the march taken from a thousand cameras were spread over the mesh and across the country within minutes, spawning dozens of new protests across the country. More were killed, and even more came forward to fight. Having anticipated protests for decades, the monarchy acted quickly, putting into place a contingency plan prepared years earlier. Deepfake videos and forged emails were released on government websites that “proved” the protests had been instigated by Shiite spies and provocateurs from Iran. A disinformation campaign about “pernicious Western involvement” followed soon after. Most people distrusted the official accounts enough to continue the fight, but those allied to the royal family understood that their interests lay in the status quo and so did nothing.

The searches for the thylacine in the twentieth century were slow-going, requiring human eyes and human thought. It’s unusual, therefore, to see searches today also limited by our easily-tired bodies and minds. Wouldn’t it be faster to rely on drones and satellites and field DNA sequencers? It would, if you could rely on them. But if you suspected your tools were prone to deepfakes, to the accidental or intentional warping of data, to hacked DNA sequencers, then perhaps not. During the early days of the Indigenous Thylacine Discovery Group, their sensors registered dozens of old thylacines every week, and yet every time the leads were followed, they evaporated into digital smoke.

pages: 307 words: 101,998

IRL: Finding Realness, Meaning, and Belonging in Our Digital Lives
by Chris Stedman
Published 19 Oct 2020

, but rather in terms of what we think makes us real, what makes us human. The catfishes (people who assume an online identity different from who they are offline), the filters (software that alters the shades or colors of a photo to enhance it—or, increasingly, goes much further in enabling people to dramatically manipulate their physical appearance), the deepfakes (videos created using artificial intelligence that look deceptively real), the curated digital selves we create. All these things fundamentally uproot what we’ve thought constitutes realness and show how incomplete that understanding was. This uprooting presents us with an opportunity to imagine another way forward.

This feels monumentally ­daunting—it would be much simpler to identify a problem in our technology and patch it—which is why I understand the draw of easy answers, of polemicists who can say “this is good” or “this is bad” when it comes to our digital lives. Still, in a time when we hear the word “fake” a lot—fake news, deepfakes, fake followers—we’re not wrong to be skeptical. It’s unwise to put your complete trust in the internet. I was reminded of this when, not long after that terrible summer, someone reached out to me through social media and said my work had caught his interest. We began chatting, and while there wasn’t a great deal of substance to our conversations, I found him attractive.

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

Not just governments, but also firms, can play a constructive role here. OpenAI’s reluctance in 2019 to release a speech-generating model offers one case in point. AI-driven text generation may not seem like much of a weapon. But once it is combined with automated creation of social media profiles (complete with deepfaked AVIs), bot speech is a perfect tool for authoritarian regimes to use to disrupt organic opinion formation online. Moreover, militaries can apply the technology to interfere with other nations’ elections. Perhaps it is best not distributed at all. “TECH WON’T BUILD IT” AND INTERNAL RESISTANCE TO THE COMMERCIALIZATION OF MILITARY AI At some of the largest American AI firms, a growing number of software engineers are refusing to build killer robots—or even precursors of their development.

Mejias, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Redwood City: Stanford University Press, 2019). 93. Rahel Jaeggi, Alienation (trans. Frederick Neuhouser and Alan E. Smith) (New York: Columbia University Press, 2014), 1. 94. Britt Paris and Joan Donovan, Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence (New York: Data & Society, 2019). 5. MACHINES JUDGING HUMANS 1. Mike Butcher, “The Robot-Recruiter Is Coming—VCV’s AI Will Read Your Face in a Job Interview,” TechCrunch, April 23, 2019, https://techcrunch.com/2019/04/23/the-robot-recruiter-is-coming-vcvs-ai-will-read-your-face-in-a-job-interview/. 2.

pages: 412 words: 116,685

The Metaverse: And How It Will Revolutionize Everything
by Matthew Ball
Published 18 Jul 2022

For example, users may need to give other users explicit levels of permission to interact in given spaces (e.g., for motion capture, the ability to interact via haptics, etc.), and platforms will also automatically block certain capabilities (“no-touch zones”). However, novel forms of harassment will doubtlessly emerge. We are right to be terrified by what “revenge porn” might look like in the Metaverse, powered by high-fidelity avatars, deepfakes, synthetic voice construction, motion capture, and other emergent virtual and physical technologies. The question of data rights and usage is more abstract, but just as fraught. There is not only the issue of private corporations and governments accessing personal data but also more fundamental issues, such as whether users understand what they’re sharing.

See decentralized apps (“dapps”) Dassault Systèmes, 118 data rights, 17, 290, 292 data security, 17, 290 dating apps, 19, 203, 215, 255, 272 Daydream VR platform, 142 DC Comics, 139 Death Stranding, 117 Decentraland, 115 decentralization, 58–59 in blockchains, 210, 235 of compute resources, 100–102, 223–24 downsides of, 208, 235, 291 Metaverse in tension with centralization, 283–85 “progressive decentralization,” 214 decentralized apps (“dapps”), 210–11, 214–16, 222–23, 283 decentralized autonomous organizations (DAOs), 225–29, 230, 300 deepfakes, 292 Denmark, xiv Diamond Age, The, 255n Dick, Philip K., 5 “digital divide,” 294 digital payment networks, 61, 171–72, 177 digital twins, 30–31, 48, 118, 157, 255, 267–68, 280, 282 Discord, 62, 134–35, 179, 228, 229 Disney hypothetical use of IPs in blockchains, 233 Industrial Light & Magic (ILM) special effects division, 118–19, 136, 258–59 Marvel Comics, 30, 139, 248, 259, 263 Pixar, 29–30, 36–37, 82, 89–90, 118, 136 use of Unity, 118–19 disruption, xiv–xv common patterns of, 205 confusion as necessary to, 23–28 generational changes in computing and networking, 61–62 Metaverse as opportunity for, xiv–xv, 213–14, 294 recursive innovation and, 27–28, 274, 295 stopping disruptive technologies, 193–99 see also blockchains; decentralization Dixon, Chris, 233 dotcom crash, 9, 24, 27, 128, 309 “Downtown Drop” branded experience, 264 Dropcam, 158 “dumbphones,” 240, 294 eBay, 212, 301 Echo Frames, 143 Edge browser, 286 “edge computers” or “edge servers,” 161 Edison, Thomas, 240, 242 education, 30, 34, 190, 250–54, 291, 300 EGS.

pages: 909 words: 130,170

Work: A History of How We Spend Our Time
by James Suzman
Published 2 Sep 2020

In 2019, an austere black column, the IBM Debater, which had been practising sharpening its tongue arguing in private with IBM employees for several years, put in a losing but persuasive and ‘surprisingly charming’ performance arguing in favour of pre-school subsidies against a one-time grand finalist from the World Debating Championships. More than this, with technology to generate deep-fake videos now accessible to everybody with an Internet connection and machines getting ever better at interpreting human language and making creative use of it, there is a palpable sense that no one’s job is entirely safe. It was thus no surprise when in 2018 Unilever announced it was farming out part of its recruitment functions to an automated AI system, saving the company 70,000 person-hours of work per year.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

In fact, one study found that during the 2016 election, people sixty-five and older were seven times more likely to share internet-based misinformation than younger people. And while debunking internet misinformation is already hard, it’s going to get even harder in the coming years, with the rise of algorithmically generated text, realistic conversational AI, and synthetic video (“deepfakes”) produced with the help of machine learning. There is no perfect digital discernment solution, but researchers have made some headway. In a 2018 report for the nonprofit organization Data & Society, Monica Bulger and Patrick Davison write that while media literacy programs have some limitations, certain types of interventions can be effective.

pages: 259 words: 84,261

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World
by Mo Gawdat
Published 29 Sep 2021

Don’t ever click on content recommended to you, search for what you actually need and don’t click on ads. Don’t approve of FinTech AI that uses machine intelligence to trade or aid the wealth concentration of a few. Don’t share about these on your LinkedIn page. Don’t celebrate them. Stop using deepfakes – videos in which a person’s face or body has been digitally altered so that they appear to be someone else. Resist the urge to use photo editors to change your own look. Never like or share content that you know is fake. Disapprove publicly of any kind of excessive surveillance and the use of AI for any form of discrimination, whether that’s loan approval or CV scanning.

pages: 317 words: 87,048

Other Pandemic: How QAnon Contaminated the World
by James Ball
Published 19 Jul 2023

During her livestream, Prim referenced a notorious rumoured video supposedly showing Hillary Clinton and her close aide Huma Abedin ritually murdering a child, wearing its face as a mask and drinking its blood (the original blood libel once again).17 The video does not exist and has never existed – it is not a deepfake or some other bit of trickery. Instead, it is something that QAnon followers believe was found when FBI agents raided the home of Anthony Weiner (who was then Abedin’s husband) in 2020. They insist this video was found by FBI agents and saved under the codename ‘Frazzledrip’. No amount of fact-checking or official denials has persuaded conspirators that it doesn’t exist.18 Alongside this, Prim had posted a semi-coherent message as she drove: ‘Hillary Clinton and her assistant, Joe Biden and Tony Podesta need to be taken out in the name of Babylon [generally a reference akin to invoking Christ]!

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

Some theorists have suggested that ‘war’ is the wrong lens through which to view cyber security, since many hostile actors aren’t interested in, or capable of generating violent effects from their activities. That’s true for example of those looking to extort, like hackers locking computers until a ransom is paid, or those engaged in cyber espionage to steal valuable information. Some cyber attackers are motivated by malice, like makers of revenge porn, using deep-fakes; others by mischief, or the pure thrill of the game—the hacking equivalent of Mallory on climbing Everest—because it is there. But plenty of hacking is motivated by national security.12 And while it’s a bit of a stretch to say that you can kill someone directly by hacking their code, that’s also an overly restrictive definition of what constitutes a weapon.

pages: 396 words: 96,049

Upgrade
by Blake Crouch
Published 6 Jul 2022

After they shut the glass door, I said, “Why are you asking me about my mother?” “Because she’s alive.” “Fuck you.” He took out his phone and placed it on the table. “One year ago, she broke into my house and sent me a video of her standing in my kitchen, holding a wineglass.” I pressed play. If the video was a deepfake, it had been masterfully done. Miriam’s hair had turned silver, she’d made numerous cosmetic changes (probably to elude facial-recognition AI), and her face was gaunt and lined with more wrinkles than the last time I’d seen her. But it was unquestionably my mother. I would’ve known those eyes—dark and frighteningly intense—anywhere.

pages: 362 words: 97,288

Ghost Road: Beyond the Driverless Car
by Anthony M. Townsend
Published 15 Jun 2020

But your first draft always has rough patches, and even the gentlest of critics will quickly find them. And so you return to your desk for rewrites, having taken your first real step developing the acute sense of delayed gratification you’ll hone as an author. Now the pressure is on, because while we live in a world of Wikipedia revisions, deepfakes, and redacted tweets—you the author still get but one shot to nail your text perfectly. Nothing could be more different from the way programmers create code. Their vocation is intensely collaborative, endlessly iterative, and immediately gratifying. Software either works or it doesn’t, and the computer lets you know right away.

Reset
by Ronald J. Deibert
Published 14 Aug 2020

Retrieved from https://www.politico.eu/article/fake-news-regulation-misinformation-europe-us-elections-midterms-bavaria/ Malicious actors are now using altered images and videos: Burgess, M. (2018, January 27). The law is nowhere near ready for the rise of AI-generated fake porn. Retrieved from https://www.wired.co.uk/article/deepfake-app-ai-porn-fake-reddit; Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753. Retrieved from https://ssrn.com/abstract=3213954 In spite of the deletions, fact-checking, and monitoring systems they produce, social media will remain easy to exploit: Stewart, L.

pages: 388 words: 111,099

Democracy for Sale: Dark Money and Dirty Politics
by Peter Geoghegan
Published 2 Jan 2020

As Peter Pomerantsev notes, “In an age in which all the old ideologies have vanished and there is no competition over coherent political ideas, the aim becomes to lasso together disparate groups around a new notion of the people, an amorphous but powerful emotion that each can interpret in their own way, and then seal it by conjuring up phantom enemies who threaten to undermine it.”82 The technological revolution in politics is more likely to speed up than slow down. A few weeks before the 2019 British general election, a video showing Boris Johnson and Jeremy Corbyn endorsing each other for prime minister spread online. It was obviously a fake – created in an attempt to demonstrate the potential for ‘deepfake’ videos to undermine democracy – but it was real enough to show what could soon be available to unscrupulous and well-funded operators. Britain faces “a perfect storm” of digital disruption and weak rules, Louise Edwards told me as we sat in a nondescript meeting room in the Electoral Commission’s offices.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

OpenAI was created in 2015 as a nonprofit organization funded by wealthy technologists, including Elon Musk, Peter Thiel, Sam Altman, and Reid Hoffman, who were concerned with charting a path toward safe artificial general intelligence. With a social rather than profit-making mission, the team worried that the powerful tool it created could easily be put to illicit or even nefarious use producing fake text analogous to deep-fake images and videos. Middle school students could ask it to write short essays, leading to widespread and undetectable cheating. At the extreme, propagandists could use it to create automated fountains of disinformation, delivered through fake websites and social media accounts. But what seemed a sober precaution was considered by some in the AI world either as running afoul of research norms and rank hypocrisy given the “open” part of OpenAI or as a cheap publicity stunt designed to call attention to the organization.

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Generative adversarial networks (GANs) can create novel photorealistic images, fooling most people most of the time. One kind of image is the deepfake—an image or video that looks like a particular person, but is generated from a model. For example, when Carrie Fisher was 60, a generated replica of her 19-year-old face was superimposed on another actor’s body for the making of Rogue One. The movie industry creates ever-better deepfakes for artistic purposes, and researchers work on countermeasures for detecting deepfakes, to mitigate the destructive effects of fake news. Generated images can also be used to maintain privacy. For example, there are image data sets in radiological practices that would be useful for researchers, but can’t be published because of patient confidentiality.

E., 268, 1108 Dean, S., 1060, 1104 Dean, T., 401, 473, 516, 586, 587, 984, 985, 1070, 1092 Dearden, R., 587, 871, 1088, 1092 Deb, S., 160, 1117 Debevec, R, 1030, 1092 de Borda, J-C., 630 Debreu, G., 533, 1092 debugging, 291 DEC (Digital Equipment Corporation), 41, 310 Dechter, A., 189, 1092 Dechter, R., 125, 188–191, 474, 475, 478, 1091, 1092, 1100, 1105, 1109, 1112 decision rational, 403, 518 robust, 543 sequential, 537, 552 DECISION-LIST-LEARNING, 693 decision analysis, 548 decision boundary, 700 decision list, 692 decision maker, 548 decision network, 472, 518, 534, 534–537, 547 dynamic, 560, 585 evaluation of, 536 decision node, 535 decision stump, 718 decision theory, 28, 43, 405, 547 decision tree, 675, 733 expressiveness, 675 pruning, 681 declarative, 269 declarative bias, 757 declarativism, 228, 265 decoder (in autoencoders), 829 decoding, 918 decoding (in MT), 918 greedy, 918 decomposability (of lotteries), 521 DECOMPOSE, 382 decomposition, 374 DeCoste, D., 736, 1092 Dedekind, R., 296, 1092 deduction theorem, 240 deductive database, 310, 328, 329 Deep Blue, 222 Deep Blue (chess program), viii, 48, 222 deepfake, 1022 DEEPHOL (theorem prover), 327 deep learning, 44, 716, 801–839 for NLP, 907–931 for robotics, 965–975 for vision, 1001–1025 DEEPMATH (theorem prover), 330 DeepMind, 49, 225, 830, 835, 867, 873, 1059 deep Q-network (DQN), 835, 867, 873 deep reinforcement learning, 857, 986 Deep Space One, 373, 402 DeepStack (poker program), 224, 612 DEEP THOUGHT (chess program), 222 Deerwester, S.

pages: 504 words: 129,087

The Ones We've Been Waiting For: How a New Generation of Leaders Will Transform America
by Charlotte Alter
Published 18 Feb 2020

“The average age of a House Democrat has skyrocketed to 65,” she tweeted in March, “It’s time to hand over the keys.” When Mark Zuckerberg had to explain Facebook’s business model to aging senators, she tweeted that millennials would be better equipped to handle issues like privacy, election security, and deepfakes. “It’s a HUGE problem that our leadership isn’t digitally competent,” she wrote. “How can they prep us for the future?” The diversity of Queens and the Bronx offered Alexandria an opening, and her timing was exquisite. This race would be all about the primary, since a Democratic victory was assured in this solid blue district.

pages: 558 words: 175,965

When the Heavens Went on Sale: The Misfits and Geniuses Racing to Put Space Within Reach
by Ashlee Vance
Published 8 May 2023

According to Polyakov, Markusic was an ungrateful piece of shit who had milked money out of one set of investors and then out of Max and still couldn’t launch a rocket after all those years. Markusic cared for no one other than Markusic, Polyakov said. Near the end of our chat, Polyakov told me not to wrong him again. He joked that people on his team were deepfake experts and that videos of me doing all kinds of things could turn up on the internet. It was a typical Max moment when I was pretty sure he would not do something like that but not totally sure. The truth is that I loved it when Polyakov went on his tirades. Part of me also relished the idea that maybe I knew only a small fraction of what his life was really like.