deepfake

description: synthetic media in which a person's likeness is altered using artificial intelligence

generative artificial intelligence

62 results

AI 2041: Ten Visions for Our Future

by Kai-Fu Lee and Qiufan Chen  · 13 Sep 2021

Chapter One: The Golden Elephant Analysis: Deep Learning, Big Data, Internet/Finance Applications, AI Externalities Chapter Two: Gods Behind the Masks Analysis: Computer Vision, Convolutional Neural Networks, Deepfakes, Generative Adversarial Networks (GANs), Biometrics, AI Security Chapter Three: Twin Sparrows Analysis: Natural Language Processing, Self-Supervised Training, GPT-3, AGI and Consciousness, AI Education

examples rather than quotidian incremental advances: autonomous vehicles killing pedestrians, technology companies using AI to influence elections, and people using AI to disseminate misinformation and deepfakes. Relying on “thought leaders” ought to be the best option, but unfortunately most who claim the title are experts in business, physics, or politics, not

BECOME LIGHT WITH TIME. —AFRICAN PROVERB NOTE FROM KAI-FU: This story revolves around a Nigerian video producer who is recruited to make an undetectable deepfake with dangerous consequences. A major branch of AI, computer vision teaches computers to “see,” and recent breakthroughs allow AI to do so like never before

explore that question in my commentary, as I describe recent and impending breakthroughs in computer vision, biometrics, and AI security, three AI technology areas enabling deepfakes and many other applications. AS THE LIGHT-RAIL train inched into Yaba station, Amaka pushed a button next to the door of his carriage. Even

. After a few moments, he grinned. The facial scan he had undergone back at the reception desk had provided the data to make this instantaneous deepfake. “The face might be mine, but not the neck,” said Amaka as he pulled down his hood, exposing a long pink scar that cut diagonally

anti-fake detector, however, the app might automatically detect anomalies in the video, marking them with red translucent square warnings. In the early days of deepfake technology, factors like Internet speed and exaggerated expressions could easily cause glitches, resulting in blurred images or out-of-sync lip movements. Even if

only 0.05 seconds, the human brain, after millions of years of evolution, could sense something was amiss. By 2041, however, DeepMask—the successor of deepfake—had achieved a degree of image verisimilitude and synchronization that could fool the human eye. Anti-fake detectors had become a part of the standard

public figures: politicians, government officials, celebrities, athletes, and scholars. Such prominent people had large Internet trails—which made them particularly ripe to be targets of deepfakes. The VIP detector was intended to prevent those “supernodes” in cyberspace from becoming the victims of fraud, and the consequential devastating damage to social order

discover, in amazement, that the faces behind FAKA were the cultural gods and goddesses in the New Afrika Shrine. ANALYSIS COMPUTER VISION, CONVOLUTIONAL NEURAL NETWORKS, DEEPFAKES, GENERATIVE ADVERSARIAL NETWORKS (GANs), BIOMETRICS, AI SECURITY “Gods Behind the Masks” tells a tale of visual deception. When AI can see, recognize, understand, and synthesize

can no longer rely on their naked eyes to tell real videos from fake ones. Websites and apps are required by law to install anti-deepfake software (just like anti-virus software today) to protect users from fake videos. But the tug-of-war between the

deepfake makers and the deepfake detectors has become an arms race—the side that has more computation wins. While the story is set in 2041, the situation described above is

to impact the developed world earlier because it can afford the cost of the expensive computers, software, and AI experts needed to create and detect deepfakes and other AI manipulations. Also, legislation will likely be implemented in developed countries first. This story is set in a developing country, where the externalities

of deepfakes will likely occur later. So, how does AI learn to see—both through cameras and prerecorded videos? What are the applications? And how does an

AI deepfake maker work? Can humans or AI detect deepfakes? Will social networks be filled with fake videos? How can deepfakes be stopped? What other security holes might AI present? Is there anything good about the technology

behind deepfakes? WHAT IS COMPUTER VISION? In “The Golden Elephant,” we witnessed the potential prowess of deep learning in big-data applications, like the Internet and finance.

dance move in an Xbox game. Scene understanding—understands a full scene, including subtle relationships, like a hungry dog looking at a bone. In the deepfake-making tools used by Amaka in the story, all of the above steps were implicitly included. For example, in order for Amaka to edit the

smart image search (that can find images from keywords or other images) and, of course, making deepfakes (replacing occurrences of one face with another in a video) In “Gods Behind the Masks,” we saw a deepfake-making tool that is essentially an automatic video-editing tool that replaces one person with another

, from face, fingers, hand, and voice to body language, gait, and facial expression. More on deepfakes below. CONVOLUTIONAL NEURAL NETWORKS (CNNs) FOR COMPUTER VISION Making deep learning work on a standard neural network turned out to be a challenge, because an
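The convolution operation at the core of a CNN can be illustrated in a few lines. The sketch below is not from the book: the image, the Sobel-style kernel, and the `conv2d` helper are invented for illustration, showing how a single convolutional filter slides over an image and responds to a vertical edge.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with one image patch, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5x5 image: dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge kernel responds strongly where intensity changes left to right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
print(response.shape)  # (3, 3)
```

A real CNN stacks many such learned filters, with nonlinearities and pooling between layers, but each filter does essentially this sliding-window sum.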

. Also around this time, fast computers and large storage were becoming affordable. The confluence of these elements catalyzed the maturation and proliferation of computer vision. DEEPFAKES “President Trump is a total and complete dipsh*t,” said President Obama, or a person who looked and sounded a lot like Obama. This video

went viral in late 2018, but it was a deepfake (a fake video made by deep learning) created by Jordan Peele and BuzzFeed. AI took Peele’s recorded speech and morphed Peele’s voice into

-syncing as well as matching facial expressions. The purpose of Peele’s 2018 video was to warn people that deepfakes were coming, which was exactly what happened. That same year a number of deepfake celebrity porn videos were uploaded to the Internet, leading to angry denouncements and eventually a new law against

it. But new manifestations of deepfakes kept appearing all the time. An app in China emerged in 2019 that could take your selfie and make you the main character of a

Avatarify became number one in the Apple App Store. Avatarify brings any photo to life, making a person in the photo sing or laugh. Suddenly, deepfakes were mainstream, and anybody could make a fake (though amateurish and detectable) video. This means our future is one where everything digital can be forged

uses tools much more advanced than Peele’s to make a sophisticated high-fidelity video that is undetectable as fake by humans and ordinary anti-deepfake detection software. He first used a text-to-speech tool that could convert any text to audio that sounded just like Repo speaking. Then that

software detectors. By 2041, fully photo-realistic 3D models should be possible, as we will see in “Twin Sparrows” and “My Haunting Idol.” Peele’s deepfake was forged for fun and food for thought, while in the story here, Chi recruits Amaka to forge a

deepfake with malice. In addition to spreading rumors, deepfakes could also lead to blackmail, harassment, defamation, and election manipulation. How would you make a deepfake? How would an AI tool detect deepfakes? And as the deepfake and anti-deepfake software are pitted against each other, which will

win? To answer these questions, we need to understand the mechanism that generates deepfakes—GAN. GENERATIVE ADVERSARIAL NETWORK (GAN) Deepfakes are built on a technology called generative adversarial networks (GANs). As the name suggests, a GAN is a pair of “adversarial” deep learning neural networks
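The adversarial pairing described here can be sketched in miniature. The toy below is only meant to show the forger-vs-detector training loop, nothing close to a deepfake-scale GAN: the "real" data is a one-dimensional Gaussian, both networks are single linear units with hand-derived gradients, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Forger (generator): maps noise z ~ N(0,1) to a*z + b.
# "Real" data it must imitate is drawn from N(4, 1).
a, b = 1.0, 0.0
# Detector (discriminator): logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Detector update: ascend log d(real) + log(1 - d(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Forger update: descend -log d(fake), i.e. learn to fool the detector.
    df = sigmoid(w * fake + c)
    dx = -(1 - df) * w          # gradient of the forger's loss w.r.t. fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

z = rng.normal(0.0, 1.0, 1000)
print(round(float(np.mean(a * z + b)), 2))  # forger's mean drifts toward 4
```

Because each side's update is aimed squarely at the other's current weakness, improving the detector hands the forger a sharper training signal, which is the arms-race dynamic the book describes.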

video, speech, and many types of content, including the infamous Obama video mentioned earlier. Can GAN-generated deepfakes be detected? Due to their relatively rudimentary nature and the limits of modern computer power, most deepfakes today are detectable by algorithms, and even sometimes by the human eye. Facebook and Google have both

launched challenge competitions for the development of deepfake detection programs. Effective deepfake detectors can be deployed today, but there is a computational cost, which can be a problem if your website has millions of uploads a

“upgrade” the forger network. Let’s say you trained a GAN forger network, and someone came up with a new detective algorithm for detecting your deepfake. You can just retrain your GAN’s forger network with the goal of fooling that detective algorithm. The result is an arms race to see

, this GAN was trained on a lot of data that was available for a celebrity like Repo. As a result, it could deceive all ordinary deepfake detectors. Imagine a jewelry store that had bulletproof windows capable of blocking all ordinary ammunition. If a criminal arrived with a rocket-propelled grenade, however

, the bulletproof window would no longer be adequate to block the criminal. It’s all about the computer power. By 2041, anti-deepfake software will be similar to anti-virus software. Government websites, news sites, and other sites where good information is paramount have no tolerance for any

fake content, and will install high-quality deepfake detectors designed to identify high-resolution deepfakes created by large GAN networks trained on powerful computers. Websites with too many images and videos (such as Facebook and YouTube) will

have trouble affording the cost of scanning all uploaded content with the highest-quality deepfake detectors, so they may use lower-quality detectors for all media content, and when a particular video or image starts to trend up exponentially, it

to be trained on the most powerful computer with the most data, in order to avoid detection by the highest-quality anti-deepfake detectors. So, is 100-percent detection of deepfakes hopeless? In the very long term, 100-percent detection may be possible with a totally different approach—to authenticate every photo

has never been altered), at the time of capture. Then any photo loaded to a website must show its blockchain authentication. This process will eliminate deepfakes. However, this “upgrade” will not arrive by 2041, as it requires all devices to use it (like all AV receivers use Dolby Digital today), and
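The ledger machinery is beyond a short sketch, but the capture-time authentication step might look roughly like this. Everything below is a placeholder: the symmetric HMAC key, the function names, and the byte strings are invented for illustration, and a real scheme would use per-device public-key signatures anchored to a blockchain or similar tamper-evident log rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative stand-in for a key burned into camera hardware.
DEVICE_KEY = b"per-device secret; a real scheme would use public-key signatures"

def sign_at_capture(photo_bytes: bytes) -> str:
    """Camera firmware signs the photo's hash the moment it is captured."""
    digest = hashlib.sha256(photo_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_on_upload(photo_bytes: bytes, signature: str) -> bool:
    """A website recomputes the signature; editing any pixel changes the hash."""
    return hmac.compare_digest(sign_at_capture(photo_bytes), signature)

photo = b"\x89PNG...raw sensor data..."
tag = sign_at_capture(photo)
print(verify_on_upload(photo, tag))                # True
print(verify_on_upload(photo + b"tampered", tag))  # False
```

The point of the design is that verification requires no deepfake detector at all: any post-capture alteration, however photorealistic, simply fails the signature check.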

. Until we have this longer-term solution based on blockchain or equivalent technology, we hope there will be continuously improved technologies and tools for detecting deepfakes. Since that is unlikely to be perfect, there will also need to be laws that make the penalty for making malicious

deepfakes very high, in order to deter potential perpetrators. For example, California passed a law in 2019 against using deepfakes for porn, and for manipulating videos of political candidates near an election. Finally, we may need

a new world (until the blockchain solution works) where online content should always be questioned, no matter how real it looks. In addition to making deepfakes, GANs can be used for constructive tasks, such as to age or de-age photos, colorize black-and-white movies and photos, make animated paintings

Mona Lisa), enhance resolution, detect glaucoma, predict climate change effects, and even discover new drugs. We must not think of GAN only in regard to deepfake, as its positive applications will surely outnumber its negative applications, just as with most new breakthrough technologies. HUMAN VERIFICATION USING BIOMETRICS Biometrics is the field

, viruses for PCs, identity theft for credit cards, and spam for email. As AI goes mainstream, it, too, will suffer from attacks on its vulnerabilities. Deepfakes are but one of many such vulnerabilities. Another vulnerability that can be exploited is AI’s decision boundaries, which can be estimated and used to

this chapter, usher us into the age of plenitude. At the same time, AI will bring about myriad challenges and perils: AI biases, security risks, deepfakes, privacy infringements, autonomous weapons, and job displacements. These problems were not inflicted by AI, but by humans who use AI maliciously or carelessly. In the

The Wires of War: Technology and the Global Struggle for Power

by Jacob Helberg  · 11 Oct 2021  · 521pp  · 118,183 words

generate lists of Chinese citizens they deem likely to protest or sympathize with dissenters. They select a few of these individuals and release high-resolution “deepfake” images of them in humiliating positions—some abusing drugs, others in brothels. The regime immediately cuts the “social credit scores” for the would-be activists

it. The video was fake—a satirical warning from the comedian Jordan Peele and BuzzFeed about the danger of “synthetic content,” more commonly known as deepfakes. This type of synthetic media will render obsolete the old axiom that “seeing is believing”—with potentially devastating ramifications for the fabric of our democracy

2011 one of Forbes’s “30 Under 30” tech pioneers, where tech trends could be leading us. He quickly zeroed in on the rise of deepfakes. “A lot of discussion is around synthetic generation of content,” Daniel told me. “Music, movies, faces.” Because of the COVID-19 pandemic, our conversation took

over Zoom is your face,” he said. “Now there’s nifty prototypes that people are using to do deepfakes live.”31 A society disrupted by deepfakes, he suggested, was not far off. Using deep learning, deepfakes mimic visual and speech patterns to create eerily realistic images, audio, and video. The believability of synthetic

AI makes the whole process easier.”32 And in the coming years, that will make the front-end battle a whole lot harder. For starters, deepfakes will accelerate the scourge of false news. Until now, we’ve tended to consider video and audio content to be fairly solid evidence that an

much more believable would those stories have been had they included synthetic video footage of an injured Obama? Or a deepfake of that Israeli official “warning” Pakistan of nuclear annihilation? Deepfakes of celebrities in the nude and in compromising positions are routinely created.37 Who in a position of power might be

. Astute social media users have outed trolls posing as Israeli supermodel Bar Refaeli, for instance.38 But what happens when the trolls’ profile pictures are deepfakes, and those reverse image searches come up empty? We’re already living out this scenario—thispersondoesnotexist.com, an online image generator, is a prime example

now, you can buy these fake personas for as little as $2.99.41 Moreover, the legal scholars Robert Chesney and Danielle Citron warn that deepfakes will create a “liar’s dividend.”42 As the public becomes more aware of the disruptive potential of synthetic media, it will offer cover for

unflattering coverage as fake news. How much more will that ring true when they can dismiss a report—of abusing drugs or taking bribes—as deepfake news? Over time, fake news will become cheaper to make. It will become more prevalent—and more potent. Our civic and democratic processes will become

more vulnerable than ever before. And as disturbing as a world awash in deepfakes would be, that’s just the beginning of what the front-end future holds in store. The Language of Deception For millennia, language has been

cancer might see nothing but article after article reinforcing that idea. An aunt who implicitly trusts what Barack Obama says might be presented with a deepfake calling a Republican president’s election illegitimate and urging supporters to protest. Ultimately, artificial intelligence could come perilously close to supplanting free will. A few

busy targeting each other. That’s a recipe for failure, and it needs to change—fast. Put another way, as “Obama” warns in that BuzzFeed deepfake: “How we move forward in the Age of Information is going to be the difference between whether we survive or whether we become some kind

detect if accounts are posting in unnatural increments or using suspicious syntax. To combat manipulated images and video, social media platforms should likewise invest in deepfake detection. Qualcomm has begun incorporating sophisticated photo and video verification tools into its smartphone chips, making it easier to determine if content originating on its

resources to build scalable systems and new technologies to automatically detect and prevent activity ranging from wire fraud and money laundering to the dissemination of deepfakes and forgeries. Moreover, what’s big domestically isn’t necessarily big internationally. In the United States, Amazon is unavoidable; in China, it’s negligible, with

with sixteen followers? Does this grainy video really show ballots being stolen? That photo seems too good to be true—what if it’s a deepfake? In short, we, too, must learn to discern. The IREX initiative in Ukraine is a case in point. IREX spent more than a year and

. Trolls and bots intentionally try to amplify disinformation as much as possible. “Trending” doesn’t necessarily mean truthful. Don’t believe your own eyes. With deepfakes and their low-budget cousins, “cheapfakes,” even images and video can be doctored. If something seems too good to be true, it probably is. Don

Have No Idea a Telegram Network Is Sharing Fake Nude Images of Them,” BuzzFeed News, October 20, 2020, https://www.buzzfeednews.com/article/janelytvynenko/telegram-deepfake-nude-women-images-bot. 38 Yonah Jeremy Bob, “How is Bar Refaeli connected to a plot to discredit Robert Mueller?,” Jerusalem Post, November 1, 2018

.S. presidential election,” Science 363 (January 2019): 374–378, https://science.sciencemag.org/content/363/6425/374.full. 110 Clint Watts and Tim Hwang, “Opinion: Deepfakes are coming for American democracy. Here’s how we can prepare,” Washington Post, September 10, 2020, https://www.washingtonpost.com/opinions/2020/09/10

/deepfakes-are-coming-american-democracy-heres-how-we-can-prepare/. 111 “Analyzing the First Years of the Ticket or Click It Mobilizations,” National Highway Traffic Safety

Spies, Lies, and Algorithms: The History and Future of American Intelligence

by Amy B. Zegart  · 6 Nov 2021

technology for theft, espionage, information warfare, and more. Cyber threats are hacking both machines and minds. This is only the beginning: artificial intelligence is creating deepfake video, audio, and photographs so real, their inauthenticity may be impossible to detect. No set of threats has changed so fast and demanded so much

inconceivable. Now it’s unavoidable. The code is open, available, and so simple, even a high schooler with no background in computer science can make deepfakes. And we haven’t even talked about information warfare or synthetic biology. The point is, good intelligence oversight requires much more technical knowledge than it

the United States created COVID-19 as a bioweapon.87 Advances in artificial intelligence have given rise to deepfakes, digitally manipulated audio, photographs, and videos that are highly realistic and difficult to authenticate. Deepfake application tools are now widely available online and so simple to use that high school students with no

drunk, which went viral on Facebook. When the social media giant refused to take it down, two artists and a small technology startup created a deepfake of Mark Zuckerberg and posted it on Instagram. In August, the Wall Street Journal reported the first known use of

deepfake audio to impersonate a voice in a cyber heist. Believing he was talking to his boss, an energy executive transferred $243,000. The voice turned

sources. It does not take much to realize the manipulative potential these technologies hold for nuclear-related issues. In a world of cheap satellite imagery, deepfakes, and the weaponization of social media, foreign governments, their proxies, and third party organizations and individuals will all be able to inject convincing, false information

public domain at speed and scale. If their goal is to confuse rather than convince, a little deception can go a long way. Imagine a deepfake video depicting a foreign leader secretly discussing a clandestine nuclear program with his inner circle. Although the leader issues vehement denials, doubt lingers—because seeing

that was relatively easy to spot. Deception is going to get much worse, thanks to advances in artificial intelligence fueling the development of deepfake digital impersonation technology. Already, deepfake technology has created remarkably lifelike photographs of non-existent celebrities,123 audios so real they duped an employee into letting criminals steal hundreds

of thousands of dollars,124 and videos of leaders saying things they never uttered.125 Deepfakes are growing more convincing and nearly impossible to detect, thanks to a breakthrough AI technique invented by Google engineer Ian Goodfellow in 2014.126 Called

image of something while the other learns to decide whether the image is real or fake. Because these algorithms are designed to learn by competing, deepfake countermeasures are unlikely to work for long. “We are outgunned,” said Hany Farid, a computer science professor at the University of California at Berkeley.127

Deepfake code is open and spreading fast. In the past few years, anonymous GitHub user “torzdf” and Reddit user “deepfakeapp” have vastly simplified the code and

interface required to generate deepfakes, creating programs called “faceswap” and “FakeApp,” which are easy enough for a high school student with no coding background to use. The two other key

ingredients for making deepfakes—computing power and large libraries of training data—are also becoming widely available.128 The impact of deepfakes could be profound, and policymakers know it. In 2019, deepfakes were a leading point of discussion at Congress’s Worldwide Threat Hearings

, and the House Intelligence Committee held a separate hearing dedicated entirely to deepfakes for the first time.129 “One does not need any great imagination to envision … nightmarish scenarios that would leave the government, the media, and the

and what is fake,” said House Intelligence Committee chairman Adam Schiff (D-CA). Schiff proceeded to describe three possible nightmares: A malign actor creates a deepfake video of a political candidate accepting a bribe to influence an election. A hacker claims to have stolen audio of a conversation between two world

. Kaylee Fagan, “A Viral Video That Appeared to Show Obama Calling Trump a ‘Dips—’ Shows a Disturbing New Trend Called ‘Deepfakes,’ ” Business Insider, April 17, 2018, https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4; Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman, “Synthesizing Obama: Learning Lip Sync from

on Graphics 36, no. 4 (July 2017), http://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf; Drew Harwell, “Top AI Researchers Race to Detect ‘Deepfake’ Videos: ‘We Are Outgunned,’ ” Washington Post, June 12, 2019, https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect

-deepfake-videos-we-are-outgunned/. 126. Martin Giles, “The GANfather: The man who’s given machines the gift of imagination,” MIT Technology Review, February 21, 2018,

.senate.gov/hearings/open-hearing-worldwide-threats; U.S. House Permanent Select Committee on Intelligence, Hearing: National Security Challenges of Artificial Intelligence, Manipulated Media and Deepfakes, 116th Cong., 1st sess., June 13, 2019. 130. Adam Schiff, remarks, House Permanent Select Committee on Intelligence, Hearing: National Security Challenges of Artificial Intelligence, Manipulated

Media and Deepfakes, 116th Cong., 1st sess., June 13, 2019. 131. Sanger, Perfect Weapon; Kim Zetter, Countdown to Zero Day: Stuxnet and the Launch of the World’s

, Conn.: Yale University Press. Schiff, Adam. 2019. Remarks, House Permanent Select Committee on Intelligence. Hearing on National Security Challenges of Artificial Intelligence, Manipulated Media, and Deepfakes. 116th Cong., 1st sess., June 13. Schlesinger, Arthur M. Jr. and Roger Bruns. 1975. Congress Investigates: A Documented History, 1792–1794. Vol. 1. New York

. Rpt. 114-891. U.S. House of Representatives Permanent Select Committee on Intelligence. 2019. Hearing on National Security Challenges of Artificial Intelligence, Manipulated Media and Deepfakes. 116th Cong., 1st sess., June 13. U.S. Library of Congress Congressional Research Service, Michael R. DeVine. 2019. “Covert Action and Clandestine Activities of the

The Future Is Faster Than You Think: How Converging Technologies Are Transforming Business, Industries, and Our Lives

by Peter H. Diamandis and Steven Kotler  · 28 Jan 2020  · 501pp  · 114,888 words

the right, we see actor-director-comedian Jordan Peele actually speaking the words being put into the former President’s mouth. The video is a deepfake, an AI-driven, human image synthesis technique that takes existing images and videos—say, Obama speaking—and maps them onto source images and video—such

as Jordan Peele imitating President Obama insulting President Trump. Peele created the video to illustrate the dangers of deepfakes. He felt the need to make it because it’s really just one of thousands. Political chicanery, revenge porn, celebrity revenge porn—they’ve all

concerned about “fake news,” which has the power to destroy reputations, set off civil unrest, and even shape global politics. There are also legal ramifications. Deepfakes make it easier for those “caught on tape” to claim it wasn’t them—because you never know—and that’s exactly the problem. Yet

entire films, and—by inputting enough storytelling—know how to distinguish diamonds from dung. At the same time, we humans might be losing that skill. Deepfakes are an obvious example. What began as a disturbing trend in politics and pornography has spread to other forms of entertainment. It’s an entirely

your normal cha-cha-cha. This means anyone can become Fred Astaire, Ginger Rogers, or Missy Elliott. It’s a full-body deepfake, with a key difference: democratization. Deepfake version 1.0 used AI-driven frame-by-frame image transfers that required multiple sensors and cameras. To pull off these dance fakes

? How long before we get new movies with old stars? Our hunch, not very long. And then there’s the real fakes side of the deepfakes discussion: the use of computers to create alternative versions of ourselves. We already have AI-driven personal assistants: Siri, Echo, and Cortana. Imagine, you just

News, April 17, 2018. See: https://www.buzzfeednews.com/article/davidmack/obama-fake-news-jordan-peele-psa-video-buzzfeed. the dangers of deepfakes: Technology, “The Real Danger of DeepFake Videos Is That We May Question Everything,” NewScientist, August 29, 2018. See: https://www.newscientist.com/article/mg23931933-200-the-real-danger-of

-deepfake-videos-is-that-we-may-question-everything/. See also: Oscar Swartz, “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die,”

Code Dependent: Living in the Shadow of AI

by Madhumita Murgia  · 20 Mar 2024  · 336pp  · 91,806 words

are some ideas.’ It wasn’t clear to Helen how her photos had been manipulated. She later learned these types of images were known as ‘deepfakes’, realistic pictures and videos generated using artificial intelligence technologies. ‘They were all scenarios that made sex look like it might be non-consensual,’ she

had commented: ‘this is wild.’ It’s the image Helen keeps returning to, the one she can’t unsee. The Rise of the Deepfake The term ‘deepfake’ is relatively recent, coined in 2017 by a Reddit user referring to a newly developed AI technique for creating hyperreal fake images and videos

. The results were completely falsified ‘pornography’, featuring women who had no knowledge or control over the creation of the images or their distribution. The ‘deepfake’ creator was an unnamed software programmer with an interest in artificial intelligence, working on what he described as a ‘research project’. He didn’t see

illegal in most parts of the world, and neither was posting and distributing them, so there was no one to police use of the software. Deepfakes are generated using ‘deep’ learning: a subset of artificial intelligence where algorithms perform tasks, such as image generation, by learning patterns in millions of

mapping individual image pixels, and then recognize higher-order structures, like the shape of a specific face or figure. One of the algorithms that create deepfakes is the Generative Adversarial Network, or GAN. GANs work in pairs; one algorithm trains on images of the faces you want to replicate, and generates

can no longer spot a fake. The technique can doctor faces or entire bodies into realistic-looking photos and videos – like the eerily lifelike deepfake Tom Cruise videos that went viral on TikTok in 2021. GANs can generate high-quality images and videos exponentially faster and more cheaply than professional
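The adversarial pairing described here can be sketched in miniature. The following is an illustrative toy only (a one-dimensional GAN in plain NumPy with invented data and learning rates, not the systems the book describes): a linear generator learns to mimic samples from a target distribution while a logistic discriminator tries to tell real from fake.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator maps noise z to samples
# with a linear map; the discriminator scores how "real" a sample looks.
w_g, b_g = 1.0, 0.0        # generator parameters
w_d, b_d = 0.0, 0.0        # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(size=32)
    fake = w_g * z + b_g

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    g_real = sigmoid(w_d * real + b_d) - 1.0   # cross-entropy gradient, real batch
    g_fake = sigmoid(w_d * fake + b_d)         # cross-entropy gradient, fake batch
    w_d -= lr * (g_real @ real + g_fake @ fake) / 32
    b_d -= lr * (g_real.sum() + g_fake.sum()) / 32

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    g_gen = (sigmoid(w_d * fake + b_d) - 1.0) * w_d
    w_g -= lr * (g_gen @ z) / 32
    b_g -= lr * g_gen.sum() / 32

# Generated samples should have drifted toward the real distribution.
samples = w_g * rng.normal(size=1000) + b_g
```

Production deepfake GANs replace both linear maps with deep convolutional networks trained on face images, but the alternating update — detector punishes fakes, generator adapts — is the same idea.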

alternative in the entertainment industry. Film studios like Disney’s Industrial Light and Magic, and VFX companies like Framestore are already exploring the use of deepfake algorithms to create hyper-real CGI content and synthetic versions of celebrities, alive and dead, for advertisements and films. GANs are not the only

tool available to make deepfakes, as new AI techniques have grown in sophistication. In the past two years, a new technology known as the transformer has spurred on advances

South Korea, Australia and individual US states like California, New York and Virginia – have recently criminalized the non-consensual distribution of all intimate imagery, including deepfakes, but the creation of AI-generated images still falls outside the purview of most international laws. The only other option for victims is to request

platforms to take down the content, using copyright claims. Social media platforms, and even some pornography sites, promise to remove deepfakes and non-consensual imagery from their platforms under their terms of service, but in reality, they often don’t.3,4 Either they aren’t

how people can request takedowns of illegal material and to action these requests, but this doesn’t work if the material in question – such as deepfakes – isn’t illegal in the first place. AI is hardly the first digital technology adapted to harass and abuse marginalized groups on the internet.

Like so many simpler image-based technologies before it, from hidden webcams to Photoshop and social media, deepfakes have been co-opted by armies of perverts and cowards to invade that most intimate of spaces, our bodies. What makes AI image-manipulation so

consent. The technology scales up the mass-scale production of harassment and abuse. I did a quick Google search and discovered dozens of websites offering deepfake pornography, assuring me that all the videos were fully fake. Hundreds of videos abused the likenesses of real women, depicting them in realistic performances

featured, she was shocked to find an entire section devoted to those with no public persona, a market for female bodies, engendered by technology. Deepfakes aren’t unintended consequences of AI. They are tools knowingly twisted by individuals trying to cause harm. But their effects are magnified and entrenched by

a few pieces that have tried to recollect the intensity of the experience. In an essay titled ‘This Is Wild’, Helen juxtaposed snippets of her deepfake encounter with an account of a transformative glacier expedition she undertook near Kulusuk, in south-eastern Greenland. In it, she drew parallels between the

responsibilities, needing him desperately. To begin to heal, the first step she made was a conscious choice to stop wondering who was responsible for the deepfakes. Then, to repair her relationship with her body, she began adding to her collection of tattoos. ‘I’ve never had great body confidence, and

: choose a woman, any woman, strip her down, transmute her, and put her on display. Rinse and repeat. A business model has mushroomed around deepfake pornography. Websites often share their deep-nude code with ‘partners’, or knock-off websites that pay them in return for offering the photo-stripping service

when making ‘deepnudes’ became as straightforward as sending someone an instant message. A bot on the messaging app Telegram popped up, offering a simple deepfaking service to users. You could text the bot a photo of your clothed victim on Telegram, and it would text back a realistic naked image

you could click to have these intimate media removed and erased permanently. And even now, trying to get non-consensual images and videos – let alone deepfakes – removed from the internet permanently is futile because of the lack of legal protections, and the decentralized nature of the internet. Once downloaded onto

digital ghouls that roam the nether regions of the web, popping up when least expected. In most parts of the world, creating and distributing deepfakes of intimate and sexual images are not illegal, so there is no obvious path to justice. The issue has been taken on by legal academics

were not in Australia, because they are unafraid of being caught or prosecuted under the local laws that criminalize the sharing of intimate images, including deepfakes, without consent – something that remains legal in other parts of the world, including parts of the United States and the European Union. ‘They’re

Noelle who are courageous enough – and feel safe enough – to speak about their experiences are fighting for legal recourse, demanding that their governments recognize deepfake imagery is on the rise, and that creating and distributing pornographic AI-generated content without consent is a criminal act. Their fight is to show

is part of the reason they share their stories widely. In recent years, as more women share stories of their own experiences of being deepfaked, and AI image-creation software becomes better than ever, there has been a rise in lobbying by digital and women’s rights campaigners for governments

of Helen and others like Sensity AI’s Henry Ajder, alongside dozens of other campaigners, the UK has recently criminalized the non-consensual distribution of deepfake intimate images in its new Online Safety Bill, a draft law that has been in the works since 2019. The bill, despite widespread criticism,

are currently only scattered national and state-level laws globally that pertain to image abuse, and even fewer that criminalize the creation and distribution of deepfakes. Even the laws that exist aren’t being strictly enforced, according to survivors that Clare has spoken to, making it extremely hard for women,

don’t harm humanity. Against this backdrop of complexity, global political leaders, who are overwhelmingly male,18 simply haven’t prioritized the rising prevalence of deepfakes, Clare said. Perhaps, it’s because the practice is disproportionately hurting women and those identifying as female, she suggested. In other words, these victims

facial recognition AI, which systematically misidentifies women and people of colour more frequently than males and Caucasians. As I began to track the onset of deepfake pornography, I found that this technology had spread far afield of the West and that similar algorithms were being used against women in Egypt,

brought together journalists, policy advocates and human rights defenders from across these regions.22 Their goal has been to discuss the threats and opportunities that deepfakes and other generative AI technologies could bring to human rights work, including identifying the most pressing concerns and recommendations for future policy work in

American regions was the use of synthetic media to ‘poison the well’ or break trust in their work. They were worried about the use of deepfakes and other AI technologies to launch targeted, gender-based attacks and create false narratives about individuals fighting against those in power. ‘This is already

Noelle is a lawyer and researcher at the University of Western Australia, where she is studying technology and policy, particularly focused on immersive technologies like deepfakes and the metaverse, and their impact on individuals. Over the years, her closest friends have fielded dozens of phone calls from Noelle, as she

improvements in precision. Then, a new type of AI known as deep learning emerged – the same discipline that allowed miscreants to generate sexually deviant deepfakes of Helen Mort and Noelle Martin, and the model that underpins ChatGPT. The cutting-edge technology was helped along by an embarrassment of data riches

systems are a challenge to find too. Individuals harmed by algorithmic outputs can be reluctant to recount their experiences. Helen Mort, the poet who discovered deepfake pornography of herself, Diana Sardjoe, the mother of young men scored by a criminal prediction algorithm, and Uber driver Alexandru Iftimie were some of

, actors Tom Hanks and Robin Wright will be digitally de-aged using generative AI software. Currently, though – as was the case with the pornographic deepfakes of Noelle Martin and Helen Mort – there are no regulations that govern generative AI technology and therefore legal action is largely non-existent. Artists Holly

Own Face Used in Violent Sexual Images’, The Star News, July 21, 2021, https://www.thestar.co.uk/news/politics/sheffield-writer-launches-campaign-over-deepfake-porn-after-finding-own-face-used-in-violent-sexual-images-3295029. 18 ‘Facts and Figures: Women’s Leadership and Political Participation’, The United Nations

Democracy’, Carnegie Endowment for International Peace, April 24, 2019, https://carnegieendowment.org/2019/04/24/how-artificial-intelligence-systems-could-threaten-democracy-pub-78984. 22 ‘Deepfakes, Synthetic Media and Generative AI’, WITNESS, 2018, https://www.gen-ai.witness.org/. 23 Yinka Bokinni, ‘Inside the Metaverse’ (United Kingdom: Channel 4, April

ref1 Rome Call ref1 algorithms agency and see agency algor-ethics and ref1, ref2 autonomy and ref1 bias and ref1, ref2 data ‘colonialism’ and ref1 deepfakes and ref1, ref2, ref3 diagnostic ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9 discriminatory and unethical ref1 exam grades and ref1 facial recognition

(film) ref1, ref2 artificial intelligence (AI) algorithms see algorithms alignment/ethics see alignment, AI data annotation/data-labelling see data annotation/data-labelling deepfakes and see deepfakes facial recognition and see facial recognition future of and see China generative AI see generative AI gig workers and see gig workers growth of

Web’ ref1 ‘Story of Your Life’ ref1 The Lifecycle of Software Objects ref1 Chien-Shiung Wu ref1 China ref1 Covid-19 and ref1, ref2, ref3 deepfakes in ref1 dissidents in ref1, ref2 facial recognition in ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8 gig workers in ref1, ref2, ref3,

ref7, ref8, ref9, ref10 Dazzle Club ref1 De Moeder Is De Sleutel (The Mother Is the Key) ref1 deathworlds ref1, ref2 Decentraland ref1 ‘#DeclineNow’ ref1 deepfakes ref1, ref2, ref3, ref4, ref5 Can You Forgive Me app ref1 data colonialism and ref1 DeepNude ref1 DreamTime ref1 GANs and ref1 Goldberg and fighting

, draft ref1 exam grades ref1 Experian ref1 exports, AI Surveillance ref1, ref2 Eyeota ref1 fabricated sentences ref1 Face++ ref1 Facebook content moderators and ref1, ref2 deepfakes and ref1, ref2, ref3, ref4, ref5 gig workers and ref1 metaverse and ref1, ref2 ‘move fast and break things’ motto ref1 NDAs and ref1,

, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 AI alignment and ref1, ref2, ref3 ChatGPT see ChatGPT creativity and ref1, ref2, ref3, ref4 deepfakes and ref1, ref2, ref3 GPT (Generative Pre-trained Transformer) ref1, ref2, ref3, ref4 job losses and ref1 ‘The Machine Stops’ and ref1 Georgetown University ref1

ref1 golem (inanimate humanoid) ref1 Gonzalez, Wendy ref1 Google ref1 advertising and ref1 AI alignment and ref1 AI diagnostics and ref1, ref2, ref3 Chrome ref1 deepfakes and ref1, ref2, ref3, ref4 DeepMind ref1, ref2, ref3, ref4 driverless cars and ref1 Imagen AI models ref1 Maps ref1, ref2, ref3 Reverse Image

Light and Magic ref1 Information Commissioner’s Office ref1 Instacart ref1, ref2 Instagram ref1, ref2 Clearview AI and ref1 content moderators ref1, ref2, ref3, ref4 deepfakes and ref1, ref2, ref3 Integrated Joint Operations Platform (IJOP) ref1, ref2 iPhone ref1 IRA ref1 Iradi, Carina ref1 Iranian coup (1953) ref1 Islam ref1,

ref13, ref14 Ministry of Early Childhood, Argentina ref1 Mort, Helen ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9 A Line Above the Sky ref1 ‘Deepfake: A Pornographic Ekphrastic’ ref1 ‘This Is Wild’ ref1, ref2 Mosul, Iraq ref1 Motaung, Daniel ref1, ref2, ref3, ref4, ref5 Mothers (short film) ref1 ‘move

ref1 ProKid ref1, ref2, ref3 Right To Be Forgotten and ref1 Polosukhin, Illia ref1, ref2 Pontifical Academy of Sciences ref1 Pontifical Gregorian University ref1 pornography deepfake see deepfakes revenge porn ref1, ref2, ref3, ref4, ref5 Portal De La Memoria ref1 Posada, Julian ref1 pregnancy, teenage ref1 abandonment of AI pilot in

ref1, ref2 Rappi ref1 recruitment systems ref1 Red Caps ref1 red-teamers ref1 Reddit ref1, ref2, ref3 regulation ref1 benefits fraud ref1 content moderators ref1 deepfakes ref1, ref2, ref3, ref4, ref5, ref6, ref7, ref8, ref9, ref10 exam grades ref1 facial recognition ref1, ref2, ref3, ref4 Foxglove and see Foxglove hateful

Smith, Brad ref1, ref2, ref3, ref4, ref5 Snow, Olivia ref1 social media ref1, ref2, ref3, ref4 content moderators ref1, ref2, ref3, ref4, ref5, ref6 deepfakes and ref1, ref2, ref3, ref4, ref5, ref6 electoral manipulation and ref1, ref2 SoftBank ref1 South Wales Police ref1 Sri Lanka: Easter Day bombings (2019) ref1

Four Battlegrounds

by Paul Scharre  · 18 Jan 2023

nuclear war. China is building a techno-dystopian surveillance state to monitor and repress its citizens through facial recognition and an Orwellian “social credit” system. Deepfake video and audio continue to improve, and long-term trends are likely to lead to fakes that are indistinguishable from reality, undermining truth. And AI

megacorporations that control the content for billions of people, human rights activists who’ve uncovered abuses using AI technology, researchers who are working to build deepfake detectors, and scientists who are trying to build the next generation of safer, more robust AI systems. The diversity of voices in democratic societies debating

singer, songwriter, and performer,” and part of a trend in AI-generated synthetic media. AI is being used to generate not only fake text and deepfake videos, but also a slew of different forms of synthetic media including music, voice, and more. Similar to how machine learning systems can be trained

video or audio clip of a politician doing or saying something that might sway an election, or worse, authorize a bad policy or military attack. Deepfakes did not feature heavily in the 2020 U.S. presidential election, but security researchers worry it won’t be long before they’re used in

enable more sophisticated fake audio and video, making it harder for people to know what is real and what is fake. The geopolitical risks from deepfakes extend beyond manipulating elections. Fake audio or video could be used to manufacture a political crisis, undermine relations between allies, or inflame geopolitical tensions.

partner nation in instigating the conflict could cause hesitation among would-be defenders, buying time for an aggressor to seize territory. In March 2022, a deepfake of Ukrainian president Volodymyr Zelensky surfaced online in which he appeared to call on Ukrainians to lay down their arms against Russian invaders. The video

theories, and disinformation isn’t due to AI-generated media, but could be accelerated by it. Actor Scarlett Johansson, who has been a victim of deepfake porn, remarked that “the internet is a vast wormhole of darkness that eats itself.” As AI-generated media becomes more sophisticated our senses alone won

generated fakes are good enough to fool the average person at a quick glance, but don’t stand up to detailed scrutiny by forensic experts. Deepfake videos often feature unnatural eye movement or awkward facial expressions or body movements. For AI-generated still images of faces, glitches in the hair, eyes

for $30” and “custom voice cloning for $10 per 50 words generated.” Many fakes are currently of poor quality, however. To make a truly convincing deepfake, at least at present, takes a significant amount of effort. In 2020, a U.S. nonprofit dedicated to fighting government corruption released

of publicly available data on which to train machine learning systems continues to grow, synthetic media will become increasingly accessible and of higher quality. Automated deepfake detectors, like that from Sensity as well as other firms such as Microsoft, Quantum Integrity, Sentinel, and others, will become increasingly important

learning classifiers, by using large datasets of real and fake videos and training a neural network to learn the difference. The nascent arms race between deepfakes and detectors is just one example of the contest between AI and counter-AI systems that will unfold in the years ahead. As AI becomes
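The classifier approach described here — train on labelled real and fake examples, learn the difference — can be illustrated with a hedged sketch. The two "artifact scores" below are hypothetical stand-in features (not any vendor's actual method), and the data is synthetic so the sketch runs standalone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical artifact scores for 200 real and 200 fake clips. A real
# detector would extract features (or learn them) from video frames;
# here two synthetic clusters stand in for them.
real_scores = rng.normal(loc=[0.2, 0.1], scale=0.1, size=(200, 2))
fake_scores = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(200, 2))
X = np.vstack([real_scores, fake_scores])
y = np.concatenate([np.zeros(200), np.ones(200)])  # label 1 = fake

# Train a logistic-regression classifier by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    grad = p - y                              # cross-entropy loss gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float((pred == y).mean())
```

Real detectors swap the linear model for a deep network and the toy scores for learned video features, but the supervised real-vs-fake framing is the same.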

more widely used, techniques for detecting, fooling, manipulating, and defeating AI systems will become essential tools for geostrategic competitors. Giorgio Patrini at Sensity compared deepfake detectors to antivirus software. He pointed out that it was standard behavior now for individuals and organizations to scan software to ensure it doesn’t

can also imagine bad actors . . . will weaponize them.” A growing network of tech companies, media outlets, and AI researchers are working to combat deepfakes, but even the best deepfake detectors still have a long way to go. In 2019, Facebook partnered with Amazon, Microsoft, and the Partnership on AI to create a

Deepfake Detection Challenge to improve the state of the art in deepfake detectors. Facebook created a dataset of over 100,000 new videos (using paid actors) for AI researchers to use to

the time—better than a coin flip, but not nearly reliable enough to be useful. Given more data, computing power, and training, deepfake detectors will improve, but so will deepfake quality. In the long run, detectors are fighting an uphill battle. The adversarial method GANs use to generate synthetic media will train

there is a clue for neural networks to understand that something is off.” Giorgio pointed out, however, that “there is no silver bullet against deepfakes.” He described deepfake detection as a “cyber security problem” and said, “It’s very important that you do not base all your defenses on one particular technical

element.” Drawing a parallel to the antivirus industry, he said that deepfake detection needed to draw upon “a portfolio of techniques” and be constantly updated against an evolving threat. Giorgio was very cognizant of the risk that

as relying on metadata embedded in the videos. Though more advanced fakes will eventually be able to bypass detection methods, Giorgio predicted that even after deepfakes were good enough to fool humans, “There will be a long gap where we can still trust automated instruments” to catch most fakes. Just

as deepfake detectors must rely on multiple methods, so too must society build a portfolio approach to responding to the problem of AI-generated fake media. Increased

for realistic-looking AI-generated fakes is part of the solution. In fact, in just the past few years public awareness about deepfakes has rapidly eclipsed the threat itself. The biggest role deepfakes played in the 2020 U.S. election was in public service announcement videos warning viewers about the dangers of

social media that the video was a fake. A week later, military officers launched a coup to oust the president. The coup failed and multiple deepfake experts have said the video is likely real. But questions about the video’s authenticity may have contributed to rumors and disinformation about whether the

a world of increasingly realistic fake video, I turned to the world of synthetic audio, which is far more advanced than video today. While convincing deepfake videos are still difficult to produce, AI-generated audio is more mature and has already been used to commit fraud. 17 TRANSFORMATION Modulate is a

a while but eventually synthetic media will become good enough to fool the detectors as well. Mike Pappas pointed out that in the contest between deepfakes vs. detectors, “It’s going to be an arms race until it’s not, and at that point defense will just have lost.” Once

label their synthetic media. One of the first things programmers did with the publicly released version of Stable Diffusion was strip off the watermarking. As deepfake technology becomes more accessible, malicious actors will be able to train their own machine learning systems to create fake media with misleading watermarks. Other technical

to “clean” the data of the markings or create their own datasets from scratch, raising the bar for creating detection-free fakes. The hype surrounding deepfakes may have outpaced their reality today, but long-term trends in data and computing power will enable more powerful and accessible machine learning tools to
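Why stripping a mark can be so easy is visible even in a deliberately simple scheme. The sketch below assumes a toy least-significant-bit tag (not Stable Diffusion's actual watermark) and shows that one line of re-quantization destroys it:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "image"

# Embed a known random bit pattern in each pixel's least significant bit.
tag = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
tag[0, 0] = 1            # ensure at least one set bit for the demo
marked = (image & 0xFE) | tag

def carries_tag(img):
    """A detector that knows the tag checks the LSB plane against it."""
    return bool(np.array_equal(img & 1, tag))

detected_before = carries_tag(marked)

# Zeroing the LSB plane — one line, and any lossy re-encode such as JPEG
# would have the same effect — removes the mark entirely.
stripped = marked & 0xFE
detected_after = carries_tag(stripped)
```

Robust watermarks spread the signal across many pixels and frequencies to survive such edits, but the cat-and-mouse dynamic the text describes remains.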

. Regulators in the United States, Europe, and China have scrambled to adapt. Chinese regulators have moved fastest, issuing regulations in 2019 and 2022 that govern deepfakes and other forms of synthetic media. Democratic nations have, understandably, moved more deliberately. The 2021 proposed European AI Act includes a requirement to label AI

-generated media. In the United States, federal lawmakers directed several government studies of deepfakes starting in 2019, while some states have begun issuing their own regulations. Social media companies are adapting to both rapidly evolving technology and societal expectations

trustworthy information ecosystem will depend critically, however, on who controls the information environment . . . and the algorithms they use. 18 BOT WARS AI-generated media like deepfakes are entering an information ecosystem that is already supercharged with bots and algorithms warring for control of what information people see. Democratic governments are largely

pdf. 121 The videos didn’t only harm the celebrities: Cole, “AI-Assisted Fake Porn Is Here.” 121 revenge porn attacks: Kirsti Melville, “The Insidious Rise of Deepfake Porn Videos—and One Woman Who Won’t Be Silenced,” abc.net.au, August 29, 2019, https://www.abc.net.au/news/2019-08-30

2019, https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402. 128 2020 U.S. presidential election: Gary Grossman, “Deepfakes May Not Have Upended the 2020 U.S. Election, but Their Day Is Coming,” VentureBeat, November 1, 2020, https://venturebeat.com/2020/11/01

), https://www.congress.gov/116/meeting/house/109620/witnesses/HHRG-116-IG00-Wstate-CitronD-20190613.pdf; National Security Challenges of Artificial Intelligence, Manipulated Media, and ‘Deepfakes,’ Hearing of the House Intelligence Committee, 116th Cong. (June 13, 2019), https://www.congress.gov/event/116th-congress/house-event/109620. 128 Manipulated media: Nadine Ajaka

com/kpszsu/posts/363834939117794. 129 deepfake of Ukrainian president Volodymyr Zelensky: Operational Reports, “This morning, polite hackers hacked into several Ukrainian sites and posted there a deepfake with Zelensky calling for laying down arms.,” Telegram (public post), March 16, 2022, https://t.me/opersvodki/1788; Mikael Thalen, “A

was reportedly uploaded to a hacked Ukrainian news site,” Twitter, March 16, 2022, https://twitter.com/MikaelThalen/status/1504123674516885507; Samantha Cole, “Hacked News Channel and Deepfake of Zelenskyy Surrendering Is Causing Chaos Online,” VICE News, March 16, 2022, https://www.vice.com/en/article/93bmda/hacked-news-channel-and-deepfake-of-zelenskyy-surrendering-is-causing-chaos-online; Tom Simonite, “A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be,” Wired, March 17, 2022, https://www.wired.com/story/zelensky

March 16, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-hacked-news-program-and-deepfake-video-spread-false-zelenskyy-claims/#deepfake; Nathaniel Gleicher, “1/ Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.,” Twitter (thread), March 16, 2022

://sensity.ai/about/. 130 Steve Buscemi’s face swapped onto Jennifer Lawrence’s body: The Curious Ape, “Jennifer Lawrence as STEVE BUSCEMI at The Golden Globes DEEPFAKE,” YouTube, February 6, 2019, https://www.youtube.com/watch?v=m8-kQUE1QYE. 131 “lip-sync” an audio clip: K. R. Prajwal et al., A

Horvitz, “New Steps to Combat Disinformation,” Microsoft on the Issues (blog), September 1, 2020, https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/. 132 Quantum Integrity: Quantum+Integrity (website), n.d., https://quantumintegrity.ch/. 132 Sentinel: Sentinel (website), 2021, https://thesentinel.ai/. 132 “in the future . . . the

Challenge Results: An Open Initiative to Advance AI.” 133 “due to the way lenses are made”: Patrini, interview. 134 public service announcement videos: Ehr for Congress, “#DeepFake,” YouTube, October 1, 2020, https://www.youtube.com/watch?v=Y6HKo-IAltA. 134 “the fear that sometimes the media is communicating”: Patrini, interview. 134 coup to

Horvitz, “New Steps to Combat Disinformation,” Microsoft on the Issues (blog), September 1, 2020, https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/. 138 Serelay: Serelay (website), 2021, https://www.serelay.com/. 138 Truepic: Truepic (website), 2021, https://truepic.com/. 138 Project Origin: “Trusted News Initiative (TNI) Steps

Union legislative document), 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206. 140 several government studies of deepfakes: Scott Briscoe, “U.S. Laws Address Deepfakes,” Security Management, January 12, 2021, https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2021/january/U-S-Laws

453, 454 AlphaPilot drone racing, 229–30, 250 AlphaStar, 180, 221, 269, 271, 441 AlphaZero, 267, 269–71, 284 Amazon, 32, 36, 215–16, 224 Deepfake Detection Challenge, 132 and facial recognition, 22–23 and Google-Maven controversy, 62, 66 and government regulation, 111 revenue, 297 AMD (company), 28 American Civil

of Normandy, 46 dead hand, 289–90 Dead Hand, 447; See also Perimeter deception in warfare, 45 Deep Blue, 275 deepfake detection, 127, 132–33, 137–38 Deepfake Detection Challenge, 132–33 deepfake videos, 121, 130–32 deep learning, 2, 19, 31, 210, 236 Deep Learning Analytics, 209–13, 233 DeepMind, 23,

92 digital devices, 18 DigitalGlobe, 204 Digital Silk Road, 110 DiResta, Renée, 139 disaster relief, 201, 204 disinformation, 117–26 AI text generation, 117–21 deepfake videos, 121 GPT-2 release, 123–24 Russian, 122 voice bots, 121–22 distributional shift, 233, 426 DIU, See Defense Innovation Unit (DIU) DNA database

54 F-35 stealth fighter jet, 254–55 Faber, Isaac, 193–94, 203 Face++, 88 Facebook account removal, 142 algorithms, 144–46 content moderation, 149 Deepfake Detection Challenge, 132 manipulated media policies of, 140 number of users, 22 and Trusted News Initiative, 139 face swapping, 121, 130–31 facial recognition attacks

143, 296 metrics, 320 Mexico, 107 Michel, Arthur Holland, 54 Micron, 182 Microsoft, 294 China presence, 159 and computer vision, 57 and cyberattacks, 246–47 deepfake detection, 132, 138–39 and Department of Defense, 36, 62, 66, 215–17, 224–25 digital watermarks, 138 and facial recognition, 23, 111 financial backing

257 Sweden, 108, 158, 187 Switch-C, 294 Synopsys, 162 synthetic aperture radar, 210 synthetic media, 127–34, 138–39 criminal use, 128–29 deepfake detectors, 132–33 deepfake videos, 130–32 geopolitical risks, 129–30 watermarks, digital, 138–39 Syria, 58 system integration, 91 tactics and strategies, 270 Taiwan, 27, 71

We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News

by Eliot Higgins  · 2 Mar 2021  · 277pp  · 70,506 words

no reply. A few years late, this is an answer. THE PERILS AND OPPORTUNITIES OF ARTIFICIAL INTELLIGENCE You never forget your first glimpse of a ‘deepfake’. Partly, you are awed by the power of technology – that video footage can so convincingly be falsified. Partly, you are filled with dread at what

this tool will wreak. As a public warning in 2018, the comedian Jordan Peele put out a deepfake showing Barack Obama in a video address, calling Trump ‘a total and complete dipshit’.27 The former Democratic president had said no such thing, of

course. Information chaos online was already frightening enough. Now, it looked as though deepfakes were about to demolish legitimate discourse, and perhaps undermine the verification techniques that serve as our best defence. Another alarm sounded in 2019, when video

distribution demanded obedient publications. Today, manipulations of far greater technical complexity can be done on a smartphone, while distribution is effortless and global. The term ‘deepfake’ encompasses more than video. Artificial intelligence, AI, can conjure photographic portraits of people who never existed. The website Which Face Is Real demonstrates this, placing

a computer-generated portrait beside a real photo.29 They are frighteningly hard to distinguish. Audio deepfakes have already been put to malicious use, with scammers using speech samples of a CEO to replicate his voice digitally, with which they ordered a

theories and dilute meaningful public discussion. Fearful of misuse, OpenAI decided not to release the research.31, 32 While deepfakes are a threat, we can inform ourselves, prepare and respond. To become paranoid about deepfakes would itself have disastrous consequences, leading people to judge all documentation cynically. What quicker way to discredit a

will soon become routine in disinformation campaigns. I already see tweets dismissing videos from Syria, saying, But how do you know this isn’t a deepfake? The uninformed give this technology powers beyond its current capabilities. A leading expert on synthetic media and its possible misuse, Sam Gregory of the human

. Video archives could become targets, too, allowing bad actors to meddle with the historical record. Third, we must think about where this technology is heading. Deepfakes are improving. The AI technology that underlies them is a topic of vast and competitive research, and software companies are commercialising what had been academic

of the response is media literacy. The difficulty is keeping up to date. This technology moves so quickly that the flaws that give away a deepfake tend to be resolved soon after they are recognised. Another part of the response, Gregory argues, is closer coordination between major news sources and open

transaction, which prevents tampering.36 Facebook also encouraged researchers to work on the problem, launching a $10 million Deepfake Detection Challenge, whose goal was to find how best to identify synthetic media.37 Deepfakes do not trouble me too much for now. A photo or a video is not a free-floating
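The tamper-evident provenance idea mentioned here can be sketched as a simple hash chain: each ledger entry binds the hash of the media bytes to the hash of the previous entry, so altering either the clip or the history breaks every later link. The `record`/`verify` names and the ledger layout below are illustrative assumptions, not the News Provenance Project's actual design:

```python
import hashlib
import json

def record(ledger, media_bytes, source):
    # Chain each entry to the previous one via its "link" hash.
    prev = ledger[-1]["link"] if ledger else "0" * 64
    entry = {
        "source": source,
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "prev": prev,
    }
    entry["link"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger, media_bytes, index):
    # The clip is authentic only if its bytes still hash to the stored value.
    return hashlib.sha256(media_bytes).hexdigest() == ledger[index]["media_hash"]

ledger = []
clip = b"original broadcast footage"
record(ledger, clip, source="newsroom-a")
print(verify(ledger, clip, 0))                 # True: bytes unchanged
print(verify(ledger, b"doctored footage", 0))  # False: hash mismatch
```

The same check scales from a single newsroom ledger to a shared one; the point is that detection shifts from inspecting pixels to comparing hashes.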

at the alleged location at the supposed time? If anything about a video seems dubious, you dig deep. The most likely outcome for a viral deepfake video is that we would end up exposing it, to the discredit of whoever created the clip. If people become sceptical (rather than cynical) about

BBC investigation won a Peabody Award. Elsewhere in the news world, verification teams and disinformation squads are increasingly common. The Wall Street Journal established a ‘deepfake’ committee of staffers from across the newsroom to help reporters judge whether content is authentic.43 Reuters trained staff to spot fake content while employing

-fiction?__twitter_impression=true
31 https://openai.com/blog/better-language-models/
32 www.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes
33 lab.witness.org/projects/synthetic-media-and-deep-fakes/
34 www.youtube.com/watch?time_continue=1&v=Qh_6cHw50l0
35 amp.axios.com/deepfake-authentication-privacy-5fa05902-41eb-40a7-8850-5450bcad0475.html?__twitter_impression=true
36 open.nytimes.com/introducing-the-news-provenance-project-723dbaf07c44?gi=5f9c26d709a7 www.newsprovenanceproject.com/FAQs
37 ai.facebook.com/blog/deepfake-detection-challenge/
38 syrianarchive.org/en/tech-advocacy
39 syrianarchive.org/en
40 amp.theguardian.com/world/2019/aug/18/new-video-evidence-of-russian-atrocity-finding-the-soldiers-who-killed-this-woman
43 digiday.com/media/the-wall-street-journal-has-21-people-detecting-deepfakes/amp/?__twitter_impression=true
44 digiday.com/media/reuters-created-a-deepfake-video-to-train-its-journalists-against-fake-news/
45 www.bellingcat.com/resources/articles/2015/12/08/women-in

, here Daraa here Dawes, Kevin here Dawson, Ryan here de Kock, Peter here De Wereld Draait Door here ‘death flights’ here Deep State here Deepfake Detection Challenge here ‘deepfakes’ here, here Democratic National Convention here Denmark here Detroit street gangs here ‘digilantism’ here DigitalGlobe here, here Discord here disinformation here, here, here

Rule of the Robots: How Artificial Intelligence Will Transform Everything

by Martin Ford  · 13 Sep 2021  · 288pp  · 86,995 words

to the data it was trained on. Generative deep learning systems are the technology behind so-called deepfakes—media fabrications that can be very difficult, or perhaps impossible, to distinguish from the real thing. Deepfakes are a critical risk factor associated with artificial intelligence, and we will discuss their implications in Chapter

hires a panel of experts to independently review the audio file. After a day of intense analysis, they declare that the recording is likely a “deepfake”—audio generated by machine learning algorithms that have been extensively trained on examples of the candidate speaking. There have been warnings about

deepfakes for years, but so far they have been rudimentary and easy to identify as fabrications. This example is different; it is clear that the state

, in July 2019, the cybersecurity firm Symantec revealed that three unnamed corporations had already been bilked out of millions of dollars by criminals using audio deepfakes.1 In all three cases, the criminals did this by using an AI-generated audio clip of the company CEO’s voice to fabricate a

produce truly high-quality audio, the criminals in these cases intentionally inserted background noise (such as traffic) to mask the imperfections. However, the quality of deepfakes is certain to get dramatically better in the coming years, and eventually, things will likely reach a point where truth is virtually indistinguishable from fiction

. Malicious deployment of deepfakes, which can be used to generate not only audio but also photographs, video and even coherent text, is just one of the important risks we

of the other major concerns that are likely to arise as AI becomes ever more powerful. WHAT IS REAL, AND WHAT IS ILLUSION? DEEPFAKES AND THREATS TO SECURITY Deepfakes are often powered by an innovation in deep learning known as a “generative adversarial network,” or GAN. GANs deploy two competing neural networks
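The two-network setup described here can be shown in miniature. In the sketch below, a two-parameter "generator" (an affine map of noise) learns to mimic a one-dimensional Gaussian against a logistic-regression "discriminator"; finite-difference gradients stand in for backpropagation, and every name and hyperparameter is an illustrative assumption rather than any real GAN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-9

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def generate(theta, z):
    # Generator: affine map of noise, theta = [scale, shift].
    return theta[0] * z + theta[1]

def discriminate(phi, x):
    # Discriminator: logistic regression, phi = [weight, bias].
    return sigmoid(phi[0] * x + phi[1])

def disc_loss(phi, real, fake):
    # Binary cross-entropy: push D(real) toward 1 and D(fake) toward 0.
    return -(np.log(discriminate(phi, real) + EPS).mean()
             + np.log(1.0 - discriminate(phi, fake) + EPS).mean())

def gen_loss(theta, phi, z):
    # Non-saturating loss: the generator tries to make D call its output real.
    return -np.log(discriminate(phi, generate(theta, z)) + EPS).mean()

def fd_grad(f, p, h=1e-4):
    # Central finite differences keep the sketch free of backprop machinery.
    g = np.zeros_like(p)
    for i in range(len(p)):
        up, dn = p.copy(), p.copy()
        up[i] += h
        dn[i] -= h
        g[i] = (f(up) - f(dn)) / (2.0 * h)
    return g

theta = np.array([1.0, 0.0])  # generator starts out producing N(0, 1)
phi = np.array([0.0, 0.0])
for _ in range(2000):
    real = rng.normal(4.0, 1.25, size=64)  # "real" data: samples from N(4, 1.25)
    z = rng.normal(size=64)
    fake = generate(theta, z)
    # Alternate the two updates: D learns to separate real from fake,
    # then G exploits D's current decision boundary.
    phi -= 0.05 * fd_grad(lambda p: disc_loss(p, real, fake), phi)
    theta -= 0.05 * fd_grad(lambda t: gen_loss(t, phi, z), theta)

# The generated mean drifts toward the real mean of 4 as training proceeds.
print(round(float(generate(theta, rng.normal(size=5000)).mean()), 2))
```

The adversarial dynamic is visible even at this scale: as the generator's output distribution approaches the real one, the discriminator's weight collapses toward zero because there is nothing left to separate.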

illness struck. However, the potential for malicious use of the technology is inescapable and, evidence already suggests, for many tech-savvy individuals, irresistible. Widely available deepfake videos created with humorous or educational intent demonstrate what is possible. You can find numerous fake videos featuring high-profile individuals like Mark Zuckerberg saying

Barack Obama’s voice, in collaboration with BuzzFeed. In Peele’s public service video intended to make the public aware of the looming threat from deepfakes, Obama says things like “President Trump is a total and complete dipshit.”3 In this instance, the voice is Peele’s imitation of Obama, and

lips so they synchronize with Peele’s speech. Eventually, we will likely see videos like this in which the voice is also a deepfake fabrication. An especially common deepfake technique enables the digital transfer of one person’s face to a real video of another person. According to the startup company Sensity

(formerly Deeptrace), which offers tools for detecting deepfakes, there were at least 15,000 deepfake fabrications posted online in 2019, and this represented an eighty-four percent increase over the prior year.4 Of these, a full

of digital abuse could eventually be targeted against virtually anyone, especially as the technology advances and the tools for making deepfakes become more available and easier to use. As the quality of deepfakes relentlessly advances, the potential for fabricated audio or video media to be genuinely disruptive looms as a seemingly inevitable

threat. As the fictional anecdote at the beginning of this chapter illustrates, a sufficiently credible deepfake could quite literally shift the arc of history—and the means to create such fabrications might soon be in the hands of political operatives, foreign

of viral videos, social media shaming and “cancel culture,” virtually anyone could be targeted and possibly have both their career and life destroyed by a deepfake. Because of its history of racial injustice, the United States may be especially vulnerable to orchestrated social and political disruption. We’ve seen how viral

. A video of a corporate CEO making a false statement, or perhaps engaging in erratic behavior, would likely cause the company’s stock to plunge. Deepfakes will also throw a wrench into the legal system. Fabricated media could be entered as evidence, and judges and juries may eventually live in a

eyes is really true. To be sure, there are smart people working on solutions. Sensity, for example, markets software that it claims can detect most deepfakes. However, as the technology advances, there will inevitably be an arms race—not unlike the one between those who create new computer viruses and the

buildings to jewelry and expensive trinkets.7 Still, Goodfellow thinks that ultimately there’s probably not going to be a foolproof technological solution to the deepfake problem. Instead, we will have to somehow learn to navigate within a new and unprecedented reality where what we see and what we hear can

always potentially be an illusion. While deepfakes are intended to deceive human beings, a related problem involves the malicious fabrication of data intended to trick or gain control of machine learning algorithms

training data in the hope that the neural network will be able to identify attacks if they occur once the system is deployed. As with deepfakes, however, there is likely destined to be a perpetual arms race in which attackers will always have an advantage. As Goodfellow points out, “no one

/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed. 4. Sensity, “The state of deepfakes 2019: Landscape, threats, and impact,” September 2019, sensity.ai/reports/. 5. Ian Sample, “What are deepfakes—and how can you spot them?,” The Guardian, January 13, 2020, www.theguardian.com/technology/2020/jan

/13/what-are-deepfakes-and-how-can-you-spot-them. 6. Lex Fridman, “Ian Goodfellow: Generative Adversarial Networks (GANs),” Artificial Intelligence Podcast, episode 19, April 18, 2019, lexfridman.com/

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future

by Orly Lobel  · 17 Oct 2022  · 370pp  · 112,809 words

agreement and gender violence. Imagine integrating the best of such projects in the commercial immersive experiences that are already saturating our markets. Revenge Porn and Deepfakes In 2017, Gal Gadot, the star of the Wonder Woman movie franchise, was horrified to discover that a video of her supposedly starring in a

person’s body using a machine learning algorithm. “Seeing isn’t believing anymore,” wrote the Wall Street Journal in an exposé about revenge porn and deepfakes—technology that uses machine learning to create an illusion on video.25 Revenge porn utilizes non-consensual pornographic image sharing. This phenomenon encompasses highly destructive

uses of deepfake technologies, making it seem like people are photographed or filmed in situations they were never involved in. According to the Data & Society Research Institute, one

in twenty-five Americans has been the victim of revenge porn.26 And deepfakes are getting better. In the Gadot video, there were obvious flaws that exposed the fact that the video wasn’t real: her mouth and eyes

videos, social and political commentary, and creative art. But the insidious use of deepfakes is extremely concerning. Like Gal Gadot, Taylor Swift and Scarlett Johansson have had deepfakes of them posted online, but celebrities aren’t the only victims. Deepfakes have also been used politically to shame women for not aligning with their

communities. The story of Indian journalist Rana Ayyub is telling. Ayyub exposed Hindu nationalist politics as corrupt, and thereafter she became the victim of a deepfake porn video, which in turn led to Ayyub receiving rape and death threats.27 Unsurprisingly, 90 percent of the victims of revenge porn are women

. How can we encourage creativity in new technology while curbing harmful uses? Can we counter deepfakes deployed to take revenge on exes, to objectify women, to demean, harass, and extort? The most effective way to combat

deepfake porn will be a combination of technology and policy. AI experts envision developing technology to detect fake videos; teams of academics and private companies are

Berkeley researcher designed an algorithm that can flag videos with gestures that do not appear human.28 This detection method was once very successful, catching deepfake videos with an accuracy of over 90 percent, but the race is a difficult one: after the research was made public

, deepfake algorithms adapted and incorporated blinking into their code. In a similar tell, in a human face, eyes (literally) reflect whatever the subject is looking at,

and that reflection is symmetrical in both eyes. Deepfake videos fail to accurately or consistently generate videos with symmetrical reflections in the corneas. A team of computer scientists at the University at Buffalo developed

an algorithm that detects deepfake videos by analyzing the light reflections in the eyes. This method was reportedly 94 percent effective at catching deepfakes, and the researchers created a “DeepFake-o-meter,” an online resource to help people test to see if the

video they’ve viewed is real or (deep)fake. Other methods to identify deepfakes include detecting lack of or inconsistencies in detail or resolution
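The corneal-reflection check described above can be reduced to a toy heuristic: threshold each eye patch for bright specular highlights, mirror one eye so matching highlights line up, and score agreement between the two highlight masks. The function names, the 95th-percentile threshold, and the intersection-over-union score below are illustrative assumptions, not the Buffalo team's published method:

```python
import numpy as np

def highlight_mask(eye, quantile=0.95):
    # Specular highlights sit in the brightest few percent of pixel values.
    thresh = np.quantile(eye, quantile)
    return eye >= thresh

def reflection_similarity(left_eye, right_eye):
    # Mirror the right eye so a shared light source produces overlapping
    # highlights, then compare the masks with intersection-over-union.
    a = highlight_mask(left_eye)
    b = highlight_mask(np.fliplr(right_eye))
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Toy demo: on a real face, both corneas reflect the same light source...
real_left = np.zeros((8, 8))
real_left[2:4, 2:4] = 1.0
real_right = np.fliplr(real_left)   # mirrored highlight, as optics dictates
# ...while a synthesised face may place the highlights inconsistently.
fake_right = np.zeros((8, 8))
fake_right[5:7, 5:7] = 1.0

print(reflection_similarity(real_left, real_right))  # 1.0: consistent
print(reflection_similarity(real_left, fake_right))  # 0.0: inconsistent
```

A real detector would first need face and iris localisation and would score a continuum rather than a binary, but the underlying physical tell, symmetric corneal reflections, is exactly what this sketch compares.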

inconsistencies around eyes, teeth, and facial contours. For example, mouths created by deepfake videos often have misshapen or excess teeth. Like with other areas of technology that harm and help, the race to do good often feels like

a game of whack-a-mole. The race is tough: while detection methods are improving, so is the deepfake technology. In 2020, Facebook held a competition for artificial intelligence that can detect deepfakes. The winning algorithm detected deepfakes only 65 percent of the time. Some scholars, including law professor Danielle Citron, a leading voice

in the field of sexual privacy, are skeptical that technology alone can battle deepfakes. Citron explains that to be effective, detection software

would have to keep pace with innovations in deepfake technology, dooming those wanting to protect against deepfakes to a cat-and-mouse game. Citron warns that the experience of fighting

, “Putting Yourself in the Skin of a Black Avatar Reduces Implicit Racial Bias,” Consciousness and Cognition 22, no. 3 (September 2013): 779. 25. Hilke Schellman, “Deepfake Videos Are Getting Real and That’s a Problem,” Wall Street Journal, October 15, 2018, https://www.wsj.com/articles

/deepfake-videos-are-ruining-lives-is-democracy-next-1539595787. 26. Amanda Lenhart, Michele Ybarra, and Myeshia Price-Feeney, Nonconsensual Image Sharing: One in 25 Americans Has

The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma

by Mustafa Suleyman  · 4 Sep 2023  · 444pp  · 117,770 words

blunt forms of physical assault. Information and communication together is its own escalating vector of risk, another emerging fragility amplifier requiring attention. Welcome to the deepfake era. THE MISINFORMATION MACHINE In the 2020 local elections in India, the Bharatiya Janata Party Delhi president, Manoj Tiwari, was filmed making a campaign speech

he goes on the attack, accusing the head of a rival party of having “cheated us.” But the version in the local dialect was a deepfake, a new kind of AI-enabled synthetic media. Produced by a political communications firm, it exposed the candidate to new, hard-to-reach constituencies. Lacking

awareness of the discourse around fake media, many assumed it was real. The company behind the deepfake argued it was a “positive” use of the technology, but to any sober observer this incident heralded a perilous new age in political communication. In

when anyone has the power to create and broadcast material with incredible levels of realism? These examples occurred before the means to generate near-perfect deepfakes—whether text, images, video, or audio—became as easy as writing a query into Google. As we saw in chapter 4, large language models now

show astounding results at generating synthetic media. A world of deepfakes indistinguishable from conventional media is here. These fakes will be so good our rational minds will find it hard to accept they aren’t real

. Deepfakes are spreading fast. If you want to watch a convincing fake of Tom Cruise preparing to wrestle an alligator, well, you can. More and more

’s already happening. A bank in Hong Kong transferred millions of dollars to fraudsters in 2021, after one of their clients was impersonated by a deepfake. Sounding identical to the real client, the fraudsters phoned the bank manager and explained how the company needed to move money for an acquisition. All

. Polls nose-dive. Swing states suddenly shift toward the opponent, who, against all expectations, wins. A new administration takes charge. But the video is a deepfake, one so sophisticated it evades even the best fake-detecting neural networks. The threat here lies not so much with extreme cases as in subtle

by a U.S. drone strike, before any of these events. His radicalizing messages were, though, still available on YouTube until 2017. Suppose that using deepfakes new videos of al-Awlaki could be “unearthed,” each commanding further targeted attacks with precision-honed rhetoric. Not everyone would buy it, but those who

for “reopening America” were bots. This was a targeted “propaganda machine,” most likely Russian, designed to intensify the worst public health crisis in a century. Deepfakes automate these information assaults. Until now effective disinformation campaigns have been labor-intensive. While bots and fakes aren’t difficult to make, most are of

create another unpredictable stressor, another splintering crack in the system. Yet stressors might also be less discrete events, less a robot attack, lab leak, or deepfake video, and more a slow and diffuse process undermining foundations. Consider that throughout history, tools and technologies have been designed to help us do more

this argument. GO TO NOTE REFERENCE IN TEXT Both looked and sounded First reported in Nilesh Cristopher, “We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign,” Vice, Feb. 18, 2020, www.vice.com/​en/​article/​jgedjb/​the-first-use-of

-deepfakes-in-indian-election-by-bjp. GO TO NOTE REFERENCE IN TEXT In another widely publicized incident Melissa Goldin, “Video of Biden Singing ‘Baby Shark’ Is a Deepfake,” Associated Press, Oct. 19, 2022, apnews.com/​article/​fact-check-biden-baby

-shark-deepfake-412016518873; “Doctored Nancy Pelosi Video Highlights Threat of ‘Deepfake’ Tech,” CBS News, May 25, 2019, www.cbsnews.com/​news/​doctored-nancy-pelosi-video

-highlights-threat-of-deepfake-tech-2019-05-25. GO TO NOTE REFERENCE IN TEXT If you want

-case-11567157402. GO TO NOTE REFERENCE IN TEXT It’s not the president charging Which is a real deepfake. See Kelly Jones, “Viral Video of Biden Saying He’s Reinstating the Draft Is a Deepfake,” Verify, March 1, 2023, www.verifythis.com/​article/​news/​verify/​national-verify/​viral-video-of-biden-saying

-hes-reinstating-the-draft-is-a-deepfake/​536-d721f8cb-d26a-4873-b2a8-91dd91288365. GO TO NOTE REFERENCE IN TEXT His radicalizing messages were Josh Meyer, “Anwar al-Awlaki: The Radical Cleric Inspiring

-videos-removed-google-extremism-clampdown. GO TO NOTE REFERENCE IN TEXT Soon these videos will be fully Eric Horvitz, “On the Horizon: Interactive and Compositional Deepfakes,” ICMI ’22: Proceedings of the 2022 International Conference on Multimodal Interaction, arxiv.org/​abs/​2209.01714. GO TO NOTE REFERENCE IN TEXT According to Facebook

TEXT In the words of a Brookings Institution William A. Galston, “Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics,” Brookings, Jan. 8, 2020, www.brookings.edu/​research/​is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics. GO TO NOTE REFERENCE IN TEXT First discovered in China Figure

cults, 212–13 culture, 267–70 cyberattacks, 160–63, 166–67 D Daimler, Gottlieb, 24 data, proliferation of, 33 decentralization, 198–99 Deep Blue, 53 deepfakes, 169–71, 172 deep learning autonomy and, 113 computer vision and, 58–60 limitations of, 73 potential of, 61–62 protein structure and, 89–90

, 27, 157 See also large language models LanzaTech, 87 large language models (LLMs), 62–65 bias in, 69–70, 239–40 capabilities of, 64–65 deepfakes and, 170 efficiency of, 68 open source and, 69 scale of, 65–66 synthetic biology and, 91 laser weapons, 263 law enforcement, 97–98 Lebanon

Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond

by Tamara Kneese  · 14 Aug 2023  · 284pp  · 75,744 words

Flowers of Fire: The Inside Story of South Korea's Feminist Movement and What It Means for Women's Rights Worldwide

by Hawon Jung  · 21 Mar 2023  · 401pp  · 112,589 words

Forward: Notes on the Future of Our Democracy

by Andrew Yang  · 15 Nov 2021

AI in Museums: Reflections, Perspectives and Applications

by Sonja Thiel and Johannes C. Bernhardt  · 31 Dec 2023  · 321pp  · 113,564 words

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt

by Sinan Aral  · 14 Sep 2020  · 475pp  · 134,707 words

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World

by Cade Metz  · 15 Mar 2021  · 414pp  · 109,622 words

A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back

by Bruce Schneier  · 7 Feb 2023  · 306pp  · 82,909 words

Decoding the World: A Roadmap for the Questioner

by Po Bronson  · 14 Jul 2020  · 320pp  · 95,629 words

This Is for Everyone: The Captivating Memoir From the Inventor of the World Wide Web

by Tim Berners-Lee  · 8 Sep 2025  · 347pp  · 100,038 words

Calling Bullshit: The Art of Scepticism in a Data-Driven World

by Jevin D. West and Carl T. Bergstrom  · 3 Aug 2020

The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot

by Yolande Strengers and Jenny Kennedy  · 14 Apr 2020

Nexus: A Brief History of Information Networks From the Stone Age to AI

by Yuval Noah Harari  · 9 Sep 2024  · 566pp  · 169,013 words

Picnic Comma Lightning: In Search of a New Reality

by Laurence Scott  · 11 Jul 2018  · 244pp  · 81,334 words

Being You: A New Science of Consciousness

by Anil Seth  · 29 Aug 2021  · 418pp  · 102,597 words

Superbloom: How Technologies of Connection Tear Us Apart

by Nicholas Carr  · 28 Jan 2025  · 231pp  · 85,135 words

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

by Eliezer Yudkowsky and Nate Soares  · 15 Sep 2025  · 215pp  · 64,699 words

Human Compatible: Artificial Intelligence and the Problem of Control

by Stuart Russell  · 7 Oct 2019  · 416pp  · 112,268 words

The Road to Conscious Machines

by Michael Wooldridge  · 2 Nov 2018  · 346pp  · 97,890 words

The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future

by Keach Hagey  · 19 May 2025  · 439pp  · 125,379 words

Character Limit: How Elon Musk Destroyed Twitter

by Kate Conger and Ryan Mac  · 17 Sep 2024

Artificial Intelligence: A Modern Approach

by Stuart Russell and Peter Norvig  · 14 Jul 2019  · 2,466pp  · 668,761 words

The Singularity Is Nearer: When We Merge with AI

by Ray Kurzweil  · 25 Jun 2024

Doppelganger: A Trip Into the Mirror World

by Naomi Klein  · 11 Sep 2023

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

by Karen Hao  · 19 May 2025  · 660pp  · 179,531 words

These Strange New Minds: How AI Learned to Talk and What It Means

by Christopher Summerfield  · 11 Mar 2025  · 412pp  · 122,298 words

Going Dark: The Secret Social Lives of Extremists

by Julia Ebner  · 20 Feb 2020  · 309pp  · 79,414 words

The Quiet Damage: QAnon and the Destruction of the American Family

by Jesselyn Cook  · 22 Jul 2024  · 321pp  · 95,778 words

What If We Get It Right?: Visions of Climate Futures

by Ayana Elizabeth Johnson  · 17 Sep 2024  · 588pp  · 160,825 words

Amateurs!: How We Built Internet Culture and Why It Matters

by Joanna Walsh  · 22 Sep 2025  · 255pp  · 80,203 words

New Laws of Robotics: Defending Human Expertise in the Age of AI

by Frank Pasquale  · 14 May 2020  · 1,172pp  · 114,305 words

A New History of the Future in 100 Objects: A Fiction

by Adrian Hon  · 5 Oct 2020  · 340pp  · 101,675 words

IRL: Finding Realness, Meaning, and Belonging in Our Digital Lives

by Chris Stedman  · 19 Oct 2020  · 307pp  · 101,998 words

The Metaverse: And How It Will Revolutionize Everything

by Matthew Ball  · 18 Jul 2022  · 412pp  · 116,685 words

Drunk on All Your Strange New Words

by Eddie Robson  · 27 Jun 2022  · 294pp  · 81,850 words

Collaborative Society

by Dariusz Jemielniak and Aleksandra Przegalinska  · 18 Feb 2020  · 187pp  · 50,083 words

Enshittification: Why Everything Suddenly Got Worse and What to Do About It

by Cory Doctorow  · 6 Oct 2025  · 313pp  · 94,415 words

Blank Space: A Cultural History of the Twenty-First Century

by W. David Marx  · 18 Nov 2025  · 642pp  · 142,332 words

Against the Machine: On the Unmaking of Humanity

by Paul Kingsnorth  · 23 Sep 2025  · 388pp  · 110,920 words

Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist

by Liz Pelly  · 7 Jan 2025  · 293pp  · 104,461 words

The Ones We've Been Waiting For: How a New Generation of Leaders Will Transform America

by Charlotte Alter  · 18 Feb 2020  · 504pp  · 129,087 words

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World

by Mo Gawdat  · 29 Sep 2021  · 259pp  · 84,261 words

When the Heavens Went on Sale: The Misfits and Geniuses Racing to Put Space Within Reach

by Ashlee Vance  · 8 May 2023  · 558pp  · 175,965 words

Other Pandemic: How QAnon Contaminated the World

by James Ball  · 19 Jul 2023  · 317pp  · 87,048 words

Democracy for Sale: Dark Money and Dirty Politics

by Peter Geoghegan  · 2 Jan 2020  · 388pp  · 111,099 words

Upgrade

by Blake Crouch  · 6 Jul 2022  · 396pp  · 96,049 words

The Long History of the Future: Why Tomorrow's Technology Still Isn't Here

by Nicole Kobie  · 3 Jul 2024  · 348pp  · 119,358 words

Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

by Raj M. Shah and Christopher Kirchhoff  · 8 Jul 2024  · 272pp  · 103,638 words

Ghost Road: Beyond the Driverless Car

by Anthony M. Townsend  · 15 Jun 2020  · 362pp  · 97,288 words

Futureproof: 9 Rules for Humans in the Age of Automation

by Kevin Roose  · 9 Mar 2021  · 208pp  · 57,602 words

Reset

by Ronald J. Deibert  · 14 Aug 2020

Algospeak: How Social Media Is Transforming the Future of Language

by Adam Aleksic  · 15 Jul 2025  · 278pp  · 71,701 words

Co-Intelligence: Living and Working With AI

by Ethan Mollick  · 2 Apr 2024  · 189pp  · 58,076 words