content moderation


85 results

pages: 390 words: 109,519

Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media
by Tarleton Gillespie
Published 25 Jun 2018

Personal interview, member of content policy team, Facebook. In May 2017, the Guardian published a trove of documents it dubbed “The Facebook Files.” These documents instructed, over many PowerPoint slides and in unsettling detail, exactly what content moderators working on Facebook’s behalf should remove, approve, or escalate to Facebook for further review. The documents offer a bizarre and disheartening glimpse into a process that Facebook and other social media platforms generally keep under wraps, a mundane look at what actually doing the work of content moderation requires. As a whole, the documents are a bit difficult to stomach. Unlike the clean principles articulated in Facebook’s community standards, they are a messy and disturbing hodgepodge of parameters, decision trees, and rules of thumb for how to implement those standards in the face of real content.

(In the original, the images on the first page are outlined in red, those on the second in green, to emphasize which are to be rejected and which are to be kept.) But the most important revelation of these leaked documents is no single detail within them—it is the fact that they had to be leaked (along with Facebook’s content moderation guidelines leaked to S/Z in 2016, or Facebook’s content moderation guidelines leaked to Gawker in 2012).3 These documents were not meant ever to be seen by the public. They instruct and manage the thousands of “crowdworkers” responsible for the first line of human review of Facebook pages, posts, and photos that have been either flagged by users or identified algorithmically as possibly violating the site’s guidelines.

Other sites, modeled after blogging tools and searchable archives, subscribed to an “information wants to be free” ethos that was shared by designers and participants alike.14
[Photo caption: Facebook User Operations team members at the main campus in Menlo Park, California, May 2012. Photo by Robyn Beck, in the AFP collection. © Robyn Beck/Getty Images. Used with permission.]
In fact, in the early days of a platform, it was not unusual for there to be no one in an official position to handle content moderation. Often content moderation at a platform was handled either by user support or community relations teams, generally more focused on offering users technical assistance; as a part of the legal team’s operations, responding to harassment or illegal activity while also maintaining compliance with technical standards and privacy obligations; or as a side task of the team tasked with removing spam.

pages: 898 words: 236,779

Digital Empires: The Global Battle to Regulate Technology
by Anu Bradford
Published 25 Sep 2023

The video was posted to awaken the international community to the horrors of the unfolding war, with the hope of galvanizing a global condemnation of the repressive Syrian regime. But as these examples reveal, drawing lines between permissible and impermissible speech in ways that are socially acceptable is exceedingly difficult. Yet, despite the delicate nature of content moderation, many government regulators have largely abdicated these types of decisions to the platforms themselves. In addition to the flawed outcomes from content moderation, the methods used in content moderation can be disconcerting as well. For example, alongside their reliance on algorithms, all platforms use human moderators that deploy so-called community guidelines to decide what content stays up and what is removed.

But as revealed by the German newspaper Sueddeutsche Zeitung in a 2018 story,10 there is a massive human toll borne by these moderators who work on the frontlines “cleaning” the internet. In return for meager pay and few employment protections, content moderators are exposed to a constant stream of graphic violence and cruelty. The story reported that a single Facebook moderator in Germany, for instance, was expected to handle 1,300 reports a day.11 A 2014 article published in Wired documented the work of Facebook’s content moderators in the Philippines, who clean the platform of illegal content while being paid $1–4 per hour for their work. These moderators are exposed hour after hour to the worst possible content posted to these internet platforms.

One Google moderator estimated having to filter through 15,000 images a day, including images of child pornography, beheadings, and animal abuse.12 In 2020, Meta settled a lawsuit brought by over 10,000 of its content moderators, agreeing to pay $52 million in mental health compensation.13 Content moderators pay an enormous psychological price in helping to keep the platforms safer and more civil for users around the world, but their plight also lays bare the distance between the highly compensated and powerful tech executives in Silicon Valley and the behind-the-scenes labor employed to scour the internet for harmful content.

System Error: Where Big Tech Went Wrong and How We Can Reboot
by Rob Reich , Mehran Sahami and Jeremy M. Weinstein
Published 6 Sep 2021

The number of human content moderators needed to tackle the volume of information on these platforms is enormous. In 2017, YouTube CEO Susan Wojcicki announced that the company would hire 10,000 content moderators in the coming year. The following year, Mark Zuckerberg wrote that “the team responsible for enforcing these [community standards] policies is made up of around 30,000 people. . . . they review more than two million pieces of content every day.” Indeed, the big tech platforms should be credited for taking an aggressive stance toward content moderation and hiring armies of content moderators. But being a content moderator comes with a price.

Selena Scola, a onetime content moderator at Facebook, sued the company in 2018, claiming that the stress of the job had caused her to develop post-traumatic stress disorder (PTSD). Her suit reported that her job involved viewing “distressing videos and photographs of rapes, suicides, beheadings and other killings.” The suit was joined by several other former content moderators who had had similar experiences, claiming that “Facebook had failed to provide them with a safe workspace.” In May 2020, Facebook agreed to settle the case with its former and current content moderators to the tune of $52 million.

The inability of AI alone to solve the problem was on full display at the outset of the COVID-19 pandemic. YouTube, Twitter, and Facebook wanted to limit the number of workers, including content moderators, coming into the office. Concerns for user privacy posed obstacles to accessing data from home computers and networks. Company guidelines often require that content moderators do the job only in secure locations in corporate offices. Twitter acknowledged the impact this greater reliance on AI for content moderation would have on its platform, writing in a March 2020 post that it was “increasing our use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content.”

pages: 661 words: 156,009

Your Computer Is on Fire
by Thomas S. Mullaney , Benjamin Peters , Mar Hicks and Kavita Philip
Published 9 Mar 2021

I use the term “elite content reviewers” to denote the class (and often racial) difference between “commercial content moderators” (in the sense used by Sarah T. Roberts) and the reviewers I study on the other end, at police agencies, NCMEC, and Facebook and Microsoft’s child safety teams. Sarah T. Roberts’s important work on commercial content moderators has documented the intense precarity and psychological burden of CCM as outsourced, contractual labor. See Sarah T. Roberts, “Digital Detritus: ‘Error’ and the Logic of Opacity in Social Media Content Moderation,” First Monday 23, no. 3 (2018), http://dx.doi.org/10.5210/fm.v23i3.8283. 19.

See Roberts, “Your AI Is a Human,” this volume, on initial stages of content moderation flagging. 20. Adrian Chen, “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed,” Wired (October 2014), https://www.wired.com/2014/10/content-moderation/. 21. Cf. Roberts, “Your AI Is a Human,” this volume; Sarah T. Roberts, “Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media’s Waste,” Wi: Journal of Mobile Media 10, no. 1 (2016): 1–18, http://wi.mobilities.ca/digitalrefuse/. 22. Michael C. Seto et al., “Production and Active Trading of Child Sexual Exploitation Images Depicting Identified Victims.

What I discovered, through subsequent research of the state of computing (including various aspects of AI, such as machine vision and algorithmic intervention), the state of the social media industry, and the state of its outsourced, globalized low-wage and low-status commercial content moderation (CCM) workforce10 was that, indeed, computers did not do that work of evaluation, adjudication, or gatekeeping of online UGC—at least hardly by themselves. In fact, it has been largely down to humans to undertake this critical role of brand protection and legal compliance on the part of social media firms, often as contractors or other kinds of third-party employees (think “lean workforce” here again). While I have taken up the issue of what constitutes this commercial, industrialized practice of online content moderation as a central point of my research agenda over the past decade and in many contexts,11 what remained a possibly equally fascinating and powerful open question around commercial content moderation was why I had jumped to the conclusion in the first place that AI was behind any content evaluation that may have been going on.

pages: 234 words: 67,589

Internet for the People: The Fight for Our Digital Future
by Ben Tarnoff
Published 13 Jun 2022

It’s true that these damages can be softened somewhat, and that larger firms can soften them more easily than smaller ones. But here too the market sets limits: Facebook can only spend so much on content moderation before its shareholders revolt. More importantly, its addiction to engagement, and the symbiosis with the Right that this addiction has engendered, is the very basis of its business model—which creates the problems that content moderation is supposed to address. The comparison that comes to mind is the tragicomedy of coal companies embracing carbon capture: it would be easier to simply stop burning coal. Online malls are not inequality machines purely on account of their effects, however.

Another way to think of these interventions is as investments in care. The scholar Lindsay Bartkowski argues that content moderation is best understood as a form of care work and, like other forms of care work, is systematically under-valued. Certain kinds of caring go entirely unpaid—mothers bearing and raising children, for instance—while others are performed for menial pay, as in the case of the home health aides and nursing home staff who care for the elderly. Similarly, content moderation is typically a low-wage job, despite being as essential to the reproduction of our online social worlds as other kinds of care are to the reproduction of our offline social worlds.

Google is especially reliant on contingent labor; the company has more temps, vendors, and contractors (TVCs) than full-time employees. 131, They also perform … Gray and Suri, Ghost Work. 132, This shadow workforce is just … A 2016 report found that average earnings for direct employees in Silicon Valley was $113,300, while white-collar contract workers made $53,200 and blue-collar contract workers made $19,900. The same study found that contract workers receive few benefits: 31 percent of blue-collar contract workers had no health insurance at all. See Silicon Valley Rising, “Tech’s Invisible Workforce,” March 2016. Traumatizing working conditions for content moderators: Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (New Haven, CT: Yale University Press, 2019). 132, Predatory inclusion, argues … Predatory inclusion: Tressie McMillan Cottom, “Where Platform Capitalism and Racial Capitalism Meet: The Sociology of Race and Racism in the Digital Society,” Sociology of Race and Ethnicity 6, no. 4 (2020): 441–49.

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

Together, they have potentially global implications for the employment conditions of a hidden army of tens of thousands of workers employed to perform outsourced digital work for large technology companies. The content-moderation work was distinct from the labelling tasks that Ian, Benja and their colleagues had been doing. Sama’s chief executive, Wendy Gonzalez, told me she believed content moderation was ‘important work’, but ‘quite, quite challenging’, adding that that type of work had only ever been 2 per cent of Sama’s business.8 Sama was, at its heart, a data-labelling outfit, she said. In early 2023, as it faced multiple lawsuits, Sama exited the content-moderation business and this entire office was shut down. The closure of the Meta content hub was sparked by Daniel Motaung, a twenty-seven-year-old employee of Sama who had worked in this very building.

She observed, first-hand, that the most lucrative parts of Silicon Valley products – AI recommendation engines, such as Instagram and TikTok’s main feeds or X’s For You tab that grab our attention – are often built on the shoulders of the most vulnerable, including poor youth, women and migrant labourers whose right to live and work in a country is dependent on their job. Without the labour of outsourced content moderators, these feeds would be simply unusable, too poisonous for our society to consume as greedily as we do. It wasn’t just the nature of the content itself that was a problem – it was their working conditions, this reimagining of a human worker as an automaton, simultaneously training AI to do their own jobs. Content moderators were expected to process hundreds of pieces of content every day, irrespective of the nature of their toxicity. They had to watch details in the videos to determine intent and context and zoom into unpleasant imagery of wounds or body parts, to classify their nature.

Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt Publishing, 2019). 5 Sama, ‘Sama by the Numbers’, February 11, 2022, https://www.sama.com/blog/building-an-ethical-supply-chain/. 6 Sama, ‘Environmental & Social Impact Report’, June 14, 2022, https://etjdg74ic5h.exactdn.com/wp-content/uploads/2023/07/Impact-Report-2023-2.pdf. 7 Ayenat Mersie, ‘Court Rules Meta Can Be Sued in Kenya over Alleged Unlawful Redundancies’, Reuters, April 20, 2023, https://www.reuters.com/technology/court-rules-meta-can-be-sued-kenya-over-alleged-unlawful-redundancies-2023-04-20/. 8 David Pilling and Madhumita Murgia, ‘“You Can’t Unsee It”: The Content Moderators Taking on Facebook’, Financial Times, May 18, 2023, https://www.ft.com/content/afeb56f2-9ba5-4103-890d-91291aea4caa. 9 Billy Perrigo, ‘Inside Facebook’s African Sweatshop’, Time, February 17, 2022, https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/. 10 Milagros Miceli and Julian Posada, ‘The Data-Production Dispositif’, CSCW 2022. Forthcoming in the Proceedings of the ACM on Human-Computer Interaction, May 24, 2022, 1–37. 11 Dave Lee, ‘Why Big Tech Pays Poor Kenyans to Teach Self-Driving Cars’, BBC News, November 3, 2018, https://www.bbc.co.uk/news/technology-46055595.

pages: 346 words: 97,330

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass
by Mary L. Gray and Siddharth Suri
Published 6 May 2019

The Brothers Reuther and the Story of the UAW. Boston: Houghton Mifflin, 1976. Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, forthcoming. ———. “Digital Detritus: ‘Error’ and the Logic of Opacity in Social Media Content Moderation.” First Monday 23, no. 3 (March 1, 2018). http://firstmonday.org/ojs/index.php/fm/article/view/8283. ———. “Social Media’s Silent Filter.” The Atlantic, March 8, 2017. https://www.theatlantic.com/technology/archive/2017/03/commercial-content-moderation/518796/. Roediger, David R. The Wages of Whiteness: Race and the Making of the American Working Class.

(For example, if a person wants to find an expensive wedding gift, would they enter “fine china” or “fancy dinnerware”?) Kala’s children also help when she picks up tasks identifying “adult content,” a common job that information studies scholar Sarah T. Roberts refers to as “commercial content moderation.”25 This kind of content moderation requires someone like Kala in the loop precisely because words, as plain as they may seem, can mean many different things depending on who is reading and writing them. Artificial intelligence can learn and model some human deliberation like that between Kala and her sons, but it must be constantly updated to account for new slang or unexpected word use.

It’s time for social media companies that benefit greatly from this volunteer work to pitch in to the commons and pay for the diligence of co-moderators willing to take on the trolls. This could look like, following Amara’s example, the threading together of flash team members invested in content moderation as community work and, as the need arises, infusing those communities with the software upgrades, equipment, and pay to keep the commons of content moderation well nourished. There is no easy, free alternative, unless everyone decides to delete their social media accounts. FIX 5: RÉSUMÉ 2.0 AND PORTABLE REPUTATION SYSTEMS Since requesters can seamlessly enter and exit the market, independent workers are often at a disadvantage when it comes to getting a rating or recommendation after they finish a task.

pages: 81 words: 24,626

The Internet of Garbage
by Sarah Jeong
Published 14 Jul 2015

ON MODERN-DAY SOCIAL MEDIA CONTENT MODERATION I will acknowledge that there is a very good reason why the debate focuses on content over behavior. It’s because most social media platforms in this era focus on content over behavior. Abuse reports are often examined in isolation. In an article for WIRED in 2014, Adrian Chen wrote about the day-to-day job of a social media content moderator in the Philippines, blasting through each report so quickly that Chen, looking over the moderator’s shoulder, barely had time to register what the photo was of. Present-day content moderation, often the realm of U.S.

All rights reserved. Cover Design: Uyen Cao. Edited by Jennifer Eum and Annabel Lau.
CONTENTS
I. THE INTERNET IS GARBAGE: Introduction; A Theory of Garbage
II. ON HARASSMENT: Harassment on the News; Is Harassment Gendered?; On Doxing; A Taxonomy of Harassment; On Modern-Day Social Media Content Moderation
III. LESSONS FROM COPYRIGHT LAW: The Intersection of Copyright and Harassment; How the DMCA Taught Us All the Wrong Lessons; Turning Hate Crimes into Copyright Crimes
IV. A DIFFERENT KIND OF FREE SPEECH: Stuck on Free Speech; The Marketplace of Ideas; Social Media Is a Monopoly; Agoras and Responsibilities
V.

But most importantly, even if the focus on creating bright-line rules specific to harassment-as-content never shifts, looking at harassing behavior as the real harm is helpful. Bright-line rules should be crafted to best address the behavior, even if the rule itself applies to content. Beyond Deletion The odd thing about the new era of major social media content moderation is that it focuses almost exclusively on deletion and banning (respectively, the removal of content and the removal of users). Moderation isn’t just a matter of deleting and banning, although those are certainly options. Here are a range of options for post hoc content management, some of which are informed by James Grimmelmann’s article, “The Virtues of Moderation,” which outlines a useful taxonomy for online communities and moderation: • Deletion Self-explanatory

Likewar: The Weaponization of Social Media
by Peter Warren Singer and Emerson T. Brooking
Published 15 Mar 2018

There Were More Than You Think,” Now I Know, December 10, 2012, http://nowiknow.com/remember-all-those-aol-cds-there-were-more-than-you-think/. 245 special screen names: Lisa Margonelli, “Inside AOL’s ‘Cyber-Sweatshop,’” Wired, October 1, 1999, https://www.wired.com/1999/10/volunteers/. 245 three-month training process: Jim Hu, “Former AOL Volunteers File Labor Suit,” CNET, January 2, 2002, https://www.cnet.com/news/former-aol-volunteers-file-labor-suit/. 245 minimum of four hours: Ibid. 245 14,000 volunteers: Margonelli, “Inside AOL’s ‘Cyber-Sweatshop.’” 245 “cyber-sweatshop”: Ibid. 245 $15 million: Lauren Kirchner, “AOL Settled with Unpaid ‘Volunteers’ for $15 Million,” Columbia Journalism Review, February 20, 2011, http://archives.cjr.org/the_news_frontier/aol_settled_with_unpaid_volunt.php. 245 a thousand graphic images: Olivia Solon, “Underpaid and Overburdened: The Life of a Facebook Moderator,” The Guardian, May 25, 2017, https://www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content. 246 a million pieces of content: Buni and Chemaly, “The Secret Rules of the Internet.” 246 a 74-year-old grandfather: Olivia Solon, “Facebook Killing Video Puts Moderation Policies Under the Microscope, Again,” The Guardian, April 17, 2017, https://www.theguardian.com/technology/2017/apr/17/facebook-live-murder-crime-policy. 246 an estimated 150,000 workers: Benjamin Powers, “The Human Cost of Monitoring the Internet,” Rolling Stone, September 9, 2017, https://www.rollingstone.com/culture/features/the-human-cost-of-monitoring-the-internet-w496279. 246 India and the Philippines: Adrian Chen, “The Laborers Who Keep Dick Pics and Beheadings out of Your Facebook Feed,” Wired, October 23, 2014, https://www.wired.com/2014/10/content-moderation/. 246 bright young college graduates: Sarah T. Roberts, “Behind the Screen: The People and Politics of Commercial Content Moderation” (presentation at re:publica 2016, Berlin, May 2, 2016), transcript available at Open Transcripts, http://opentranscripts.org/transcript/politics-of-commercial-content-moderation/. 247 reduced libido: Brad Stone, “Policing the Web’s Lurid Precincts,” New York Times, July 28, 2010, http://www.nytimes.com/2010/07/19/technology/19screen.html. 247 regular psychological counseling: Abby Ohlheiser, “The Work of Monitoring Violence Online Can Cause Real Trauma.

The rise and fall of AOL’s digital serfs foreshadowed how all big internet companies would come to handle content moderation. If the internet of the mid-1990s had been too vast for paid employees to patrol, it was a mission impossible for the internet of the 2010s and beyond. Especially when social media startups were taking off, it was entirely plausible that there might have been more languages spoken on a platform than total employees at the company. But as companies begrudgingly accepted more and more content moderation responsibility, the job still needed to get done. Their solution was to split the chore into two parts.

The first part was crowdsourced to users (not just volunteers but everyone), who were invited to flag content they didn’t like and prompted to explain why. The second part was outsourced to full-time content moderators, usually contractors based overseas, who could wade through as many as a thousand graphic images and videos each day. Beyond establishing ever-evolving guidelines and reviewing the most difficult cases in-house, the companies were able to keep their direct involvement in content moderation to a minimum. It was a tidy system tacked onto a clever business model. In essence, social media companies relied on their users to produce content; they sold advertising on that content and relied on other users to see that content in order to turn a profit.
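
Read as a workflow, the two-part system described above is essentially a flag queue feeding a human review pass, with only the hardest cases escalated in-house. The Python sketch below is purely illustrative; the threshold, reason labels, and function names are assumptions made for the example, not any platform’s actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    flags: list = field(default_factory=list)  # reasons supplied by users
    status: str = "live"                       # live | removed | escalated

# Hypothetical values; real platforms tune guidelines and thresholds constantly.
FLAG_THRESHOLD = 3
ESCALATE_REASONS = {"terrorism", "newsworthy violence", "self-harm"}

def flag(post: Post, reason: str) -> None:
    """Part one: crowdsourced flagging by ordinary users, with a stated reason."""
    post.flags.append(reason)

def review(post: Post) -> str:
    """Part two: an outsourced moderator works the queue against the guidelines."""
    if len(post.flags) < FLAG_THRESHOLD:
        return post.status                     # not enough signal to act yet
    if ESCALATE_REASONS.intersection(post.flags):
        post.status = "escalated"              # hardest calls go in-house
    else:
        post.status = "removed"                # routine guideline violation
    return post.status

# Example: three flags push the post into review; one reason forces escalation.
p = Post("example-1")
for reason in ["harassment", "harassment", "newsworthy violence"]:
    flag(p, reason)
print(review(p))                               # -> escalated
```

The division of labour is the point of the sketch: users supply the signal, contractors apply the rules, and the platform itself handles only the residue.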

pages: 562 words: 201,502

Elon Musk
by Walter Isaacson
Published 11 Sep 2023

Maybe he would tell them, either out of calculation or his compulsion to be brutally honest, what he really thought: that they were wrong to kick off Trump, that their content moderation policies crossed the line into unjustifiable censorship, that the staff had been infected by the woke-mind virus, that people should show up to work in person, and that the company was way overstaffed. The ensuing explosion might not scuttle the deal, but it could shake up the chessboard. Musk didn’t do that. Instead, he was rather conciliatory on these hot-button issues. Leslie Berland, the chief marketing officer of Twitter, began with the issue of content moderation. Instead of simply invoking his mantra about the goodness of free speech, Musk went deeper and made a distinction between what people should be allowed to post and what Twitter should cause to be amplified and spread.

“He seemed like someone who isn’t afraid of offending people,” Musk says, which for him is a more unalloyed compliment than it would be for most people. He invited Taibbi to spend time at Twitter headquarters rummaging through the old files, emails, and Slack messages of the company’s employees who had wrestled with content-moderating issues. Thus was launched what became known as “the Twitter Files,” which could and should have been a healthy linen-airing and transparency exercise, suitable for judicious reflection about media bias and the complexities of content moderation, except that it got caught in the vortex that these days sends people scurrying into their tribal bunkers on talk shows and social media. Musk helped stoke the reaction with excited arm-waving as he heralded the forthcoming Twitter threads with popcorn and fireworks emojis.

He also explained why he wanted to “open the aperture” of what speech was permissible on Twitter and avoid permanent bans of people, even those with fringe ideas. On talk radio and cable TV, there were separate information sources for progressives and conservatives. By pushing away right-wingers, the content moderators at Twitter, more than 90 percent of whom, he believed, were progressive Democrats, might be creating a similar Balkanization of social media. “We want to prevent a world in which people split off into their own echo chambers on social media, like going to Parler or Truth Social,” he said. “We want to have one place where people with different viewpoints can interact.

pages: 372 words: 100,947

An Ugly Truth: Inside Facebook's Battle for Domination
by Sheera Frenkel and Cecilia Kang
Published 12 Jul 2021

A few months after the riots and the internet shutdown, he discovered why the platform might be so slow to respond to problems on the ground, or even to read the comments under posts. A member of Facebook’s PR team had solicited advice from the group on a journalist’s query regarding how Facebook handled content moderation for a country like Myanmar, in which more than one hundred languages are spoken. Many in the group had been asking the same thing, but had never been given answers either on how many content moderators Facebook had or on how many languages they spoke. Whatever Schissler and the others had assumed, they were stunned when a Facebook employee in the chat typed, “There is one Burmese-speaking rep on community operations,” and named the manager of the team.

One of the members of the Kenosha Guard who had administrative privileges to the page removed it.8 At the next day’s Q&A, Zuckerberg said there had been “an operational mistake” involving content moderators who hadn’t been trained to refer these types of posts to specialized teams. He added that upon the second review, those responsible for dangerous organizations realized that the page violated their policies, and “we took it down.” Zuckerberg’s explanation didn’t sit well with employees, who believed he was scapegoating content moderators, the lowest-paid contract workers on Facebook’s totem pole. And when an engineer posted on a Tribe board that Zuckerberg wasn’t being honest about who had actually removed the page, many felt they had been misled.

The First Amendment was meant to protect society. And ad targeting that prioritized clicks and salacious content and data mining of users was antithetical to the ideals of a healthy society. The dangers present in Facebook’s algorithms were “being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online,” in the words of Renée DiResta, a disinformation researcher at Stanford’s Internet Observatory. “There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.”10 It was a complicated issue, but to some, at least, the solution was simple.

pages: 706 words: 202,591

Facebook: The Inside Story
by Steven Levy
Published 25 Feb 2020

had only been viewed live: VP and Deputy General Counsel of Facebook Chris Sonderby posted “Update on New Zealand,” Facebook Newsroom, March 18, 2019. came from academics: There have been several deep studies of content moderators and policy by academics, notably Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019); Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, 2018); and Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review, April 10, 2018.

Since 2012, when Facebook started centers in Manila and India, it began using outsourcing companies to hire and employ the workers. They can’t attend all-hands meetings and they don’t get Facebook swag. Facebook is not the only company to use content moderators: Google, Twitter, and even dating apps like Tinder have the need to monitor what happens on their platforms. But Facebook uses the most. While a global workforce of content moderators was slowly building to tens of thousands, it was at first a largely silent phenomenon. The first awareness came from academics. Sarah T. Roberts, then a graduate student, assumed, like most people, that artificial intelligence did the job, until a computer-science classmate explained how primitive AI was back then.

Meanwhile, it’s left to the 15,000 or so content moderators to actually determine what stuff crosses the line, forty seconds at a time. In Phoenix I asked the moderators I was interviewing whether they felt that artificial intelligence could ever do their jobs. The room burst out in laughter. * * * • • • THE MOST DIFFICULT calls that Facebook has to make are the ones where following the rule book creates an outcome that seems just plain wrong. For some of these, moderators “elevate” the situation to full-time employees, and sometimes off-site to the people who sit in the Content Moderation meetings. The toughest ones are sometimes elevated to Everest, to the worktables of Sandberg and Zuckerberg.

Reset
by Ronald J. Deibert
Published 14 Aug 2020

It was a “bug,” said Guy Rosen: Associated Press. (2020, March 17). Facebook bug wrongly deleted authentic coronavirus news. Retrieved from https://www.ctvnews.ca/health/coronavirus/facebook-bug-wrongly-deleted-authentic-coronavirus-news-1.4857517; Human social media content moderators have extremely stressful jobs, given the volume of potentially offensive and harmful posts: Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press; Kaye, D. A. (2019). Speech police: The global struggle to govern the internet. Columbia Global Reports; Jeong, S. (2015). The internet of garbage. Forbes Media..

Retrieved from https://www.theverge.com/2020/3/16/21182726/coronavirus-covid-19-facebook-google-twitter-youtube-joint-effort-misinformation-fraud “Platforms should be forced to earn the kudos they are getting”: Douek, E. (2020, March 25). COVID-19 and social media content moderation. Retrieved from https://www.lawfareblog.com/covid-19-and-social-media-content-moderation Mandatory or poorly constructed measures could be perverted as an instrument of authoritarian control: Lim, G., & Donovan, J. (2020, April 3). Republicans want Twitter to ban Chinese Communist Party accounts. That’s a dangerous idea. Retrieved from https://slate.com/technology/2020/04/republicans-want-twitter-to-ban-chinese-communist-party-accounts-thats-dangerous.html; Lim, G. (2020).

At first, Facebook announced that it would remove posts containing misinformation about the virus, while YouTube, Twitter, and Reddit claimed that inaccurate information about health was not a violation of their terms-of-service policies, leaving the algorithms to surface content that grabbed attention, stoked fear, and fuelled conspiracy theorizing. After sending human content moderators into self-isolation, Facebook turned to artificial intelligence tools instead, with unfortunate results. The machine-based moderation system swept through, but mistakenly removed links to even official government-related health information.92 Gradually, each of the platforms introduced measures to point users to credible health information, flag unverified information, and remove disinformation — but in an uncoordinated fashion and with questionable results.93 The platforms’ intrinsic bias towards speed and volume of posts made the measures inherently ineffective, allowing swarms of false information to circulate unimpeded.

pages: 290 words: 73,000

Algorithms of Oppression: How Search Engines Reinforce Racism
by Safiya Umoja Noble
Published 8 Jan 2018

Commercial platforms such as Facebook and YouTube go to great lengths to monitor uploaded user content by hiring web content screeners, who at their own peril screen illicit content that can potentially harm the public.78 The expectation of such filtering suggests that such sites vet content on the Internet on the basis of some objective criteria that indicate that some content is in fact quite harmful to the public. New research conducted by Sarah T. Roberts in the Department of Information Studies at UCLA shows the ways that, in fact, commercial content moderation (CCM, a term she coined) is a very active part of determining what is allowed to surface on Google, Yahoo!, and other commercial text, video, image, and audio engines.79 Her work on video content moderation elucidates the ways that commercial digital media platforms currently outsource or in-source image and video content filtering to comply with their terms of use agreements. What is alarming about Roberts’s work is that it reveals the processes by which content is already being screened and assessed according to a continuum of values that largely reflect U.S.

P. Tarcher/Putnam. Ritzer, G., and Jurgenson, N. (2010). Production, Consumption, Prosumption. Journal of Consumer Culture, 10(1), 13–36. doi:10.1177/1469540509354673. Roberts, S. T. (2012). Behind the Screen: Commercial Content Moderation (CCM). The Illusion of Volition (blog). Retrieved from www.illusionofvolition.com. Roberts, S. T. (2016). Commercial Content Moderation: Digital Laborers’ Dirty Work. In S. U. Noble and B. Tynes (Eds.), The Intersectional Internet, 147–160. New York: Peter Lang. Robertson, T. (2016, March 20). Digitization: Just Because You Can, Doesn’t Mean You Should. Tara Robertson’s blog.

-based social norms, and these norms reflect a number of racist and stereotypical ideas that make screening racism and sexism and the abuse of humans in racialized ways “in” and perfectly acceptable, while other ideas such as the abuse of animals (which is also unacceptable) are “out” and screened or blocked from view. She details an interview with one of the commercial content moderators (CCMs) this way: We have very, very specific itemized internal policies . . . the internal policies are not made public because then it becomes very easy to skirt them to essentially the point of breaking them. So yeah, we had very specific internal policies that we were constantly, we would meet once a week with SecPol to discuss, there was one, blackface is not technically considered hate speech by default.

pages: 642 words: 141,888

Like, Comment, Subscribe: Inside YouTube's Chaotic Rise to World Domination
by Mark Bergen
Published 5 Sep 2022

And, being YouTube, it didn’t necessarily want to dictate who was a news outlet and who wasn’t. The Arab Spring had shown that anyone could be. But the ISIS influx demanded action. In Paris, where Google had a plush office in the city’s center, nearly everyone there was suddenly deployed as content moderators for the week even if it wasn’t their job. They set up a large spreadsheet to track every re-upload of the Foley clip and similar horrors. One person on YouTube’s business team tasked with watching these ISIS videos recalled the shock of noticing how cinematic the propaganda felt. Maniacally, ISIS uploaders spliced actual news footage into clips, making them much harder for YouTube to find.

The Facebook whistleblower revealed that Instagram ignored internal research on the damage its app had on the mental health of teenage girls, spurring waves of criticism of Facebook. Afterward, multiple people who worked at YouTube said their company either didn’t share this type of research widely or simply didn’t conduct it. “YouTube is really opaque,” said Evelyn Douek, a Stanford Law assistant professor who studies content moderation. “It’s much more fun for me to lob stones from the outside. This stuff is hard,” she added. “That doesn’t mean that they don’t have responsibility.” Privately, people at YouTube, like their peers at Facebook, complained they were being scapegoated for the collapse in democratic norms brought about by cable news, inequality, and God knows what else.

GO TO NOTE REFERENCE IN TEXT In the fall of 2021: The vaccine ban included two caveats: scientific discussions and “personal testimonials” about vaccines were still permitted. GO TO NOTE REFERENCE IN TEXT authored an op-ed: Susan Wojcicki, “Free Speech and Corporate Responsibility Can Coexist Online,” The Wall Street Journal, August 1, 2021, https://www.wsj.com/articles/free-speech-youtube-section-230-censorship-content-moderation-susan-wojcicki-social-media-11627845973. GO TO NOTE REFERENCE IN TEXT he blogged: Neal Mohan, “Perspective: Tackling Misinformation on YouTube,” YouTube Official Blog, August 25, 2021, https://blog.youtube/inside-youtube/tackling-misinfo/. GO TO NOTE REFERENCE IN TEXT sat down with Wojcicki: hankschannel, “YouTube, Pandemics, Creators, and Power.”

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

Experts in law, ethics, culture, and technology must all be part of this conversation.64 Many RtbF cases involve difficult judgment calls: exactly what human workers exist to evaluate and adjudicate. Content moderators already eliminate pornographic, violent, or disturbing images, albeit at low wages and in often inhumane conditions.65 Given the extraordinary profitability of dominant tech firms, they can well afford to treat those front-line workers better. One first step is to treat content moderation as a professional occupation, with its own independent standards of integrity and workplace protections. Search-result pages are not a pristine reflection of some preexisting digital reality, as victims of misfortune, bad judgment, or angry mobs know all too well.

As Mark Andrejevic’s compelling work demonstrates, automated media “poses profound challenges to the civic disposition required for self-government.”6 These concerns go well beyond the classic “filter bubble” problem, to the habits of mind required to make any improvement in new media actually meaningful in the public sphere. The new laws of robotics also address the larger political economy of media. The first new law of robotics commends policies that maintain journalists’ professional status (seeing AI as a tool to help them rather than replace them).7 Content moderators need better training, pay, and working conditions if they are to reliably direct AI in new social media contexts. One way to fund the older profession of journalism and the newer ones created in the wake of media automation is to reduce the pressure on publications to compete in a digitized arms race for attention online (a concern of the third law, proscribing wasteful arms races).

This argument ignores the reality of continual algorithmic and manual adjustment of news feeds at firms like Facebook.28 Any enduring solution to the problem will require cooperation between journalists and coders. BIG TECH’S ABSENTEE OWNERSHIP PROBLEM After every major scandal, big tech firms apologize and promise to do better. Sometimes they even invest in more content moderation or ban the worst sources of misinformation and harassment. Yet every step toward safety and responsibility on platforms is in danger of being reversed thanks to a combination of concern fatigue, negligence, and the profit potential in arresting, graphic content. We have seen the cycle with Google and anti-Semitism.

pages: 1,136 words: 73,489

Working in Public: The Making and Maintenance of Open Source Software
by Nadia Eghbal
Published 3 Aug 2020

In the absence of money, ideally it’s work that maintainers are motivated to do—or, at least, the additional effort required is less than, or equal to, the benefits they receive. Once again, content moderation is an example of how not all costs can be reduced through automation, nor off-loaded onto users. Even after employing the tactics discussed above, companies like Facebook still hire teams of paid moderators to sort through disputed content. In Sarah T. Roberts’s Behind the Screen: Content Moderation in the Shadows of Social Media, she quotes Max Breen, a pseudonymous content moderator for an unnamed tech company, who notes that this kind of work is extremely non-fungible, even among human moderators: “I don’t think [outsourcing] works because . . . the culture is so dramatically different when you are trying to apply a policy that is based on [Western] culture.”300 Breen continues, “They tried outsourcing to India before I was there, and it was such a disaster that they just canceled it and said we have to do this all in-house.”

It’s the equivalent of setting up filters for your inbox so that unnecessary emails won’t vie for your attention in the first place. It’s not just open source developers but software companies more generally that make heavy use of automation, particularly in the realm of product support and content moderation. In a 2019 report, Twitter estimates that 38% of offensive tweets are now preemptively flagged through automation, instead of through the company relying solely on manual reporting.275 Jack Dorsey, Twitter’s founder and CEO, explains that automation is at least partly necessary to meet rising demand: “We want to be flexible on this, because we want to make sure that we’re, No. 1, building algorithms instead of just hiring massive amounts of people, because we need to make sure that this is scalable.”276 In open source, developers use automation to reduce costs, whether it’s bots, tests, scheduled tasks, linters and style guides, canned responses, or issue templates, all of which help manage the contribution process.
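
The preemptive flagging described in the Twitter report can be pictured as an automated scoring pass that feeds the same review queue as manual reports. The sketch below is a toy illustration: the word-list scorer stands in for a trained model, and the names and threshold are assumptions made for the example rather than anything Twitter has documented.

```python
# Toy illustration: automation flags some content before anyone reports it,
# while manual reports remain a second path into the same review queue.
review_queue = []

def toxicity_score(text: str) -> float:
    """Placeholder scorer; real systems use learned models, not word lists."""
    flagged_words = {"idiot", "trash", "hate"}
    words = text.lower().split()
    return sum(word in flagged_words for word in words) / max(len(words), 1)

def ingest(tweet_id: str, text: str, manual_reports: int = 0) -> None:
    AUTO_THRESHOLD = 0.2                                  # assumed cutoff
    if toxicity_score(text) >= AUTO_THRESHOLD:
        review_queue.append((tweet_id, "auto-flagged"))   # preemptive path
    elif manual_reports > 0:
        review_queue.append((tweet_id, "user-reported"))  # manual path

ingest("t1", "you are trash and an idiot")        # flagged with zero reports
ingest("t2", "lovely weather today", manual_reports=2)
print(review_queue)  # [('t1', 'auto-flagged'), ('t2', 'user-reported')]
```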

As discussed in Chapter 1, platforms play a critical role in absorbing costs for creators, in categories such as distribution, hosting, or security. By reducing or removing these costs, platforms enable creators to focus on their ideas. Sometimes, creators run into conflicts with the platforms they depend on, because it’s not always clear whose job it is to handle which tasks. Content moderation, for example, is a hotly contested battle on social platforms like Facebook and Twitter. Platform policies can affect creators’ ability to produce what they want, whether it’s Instagram’s policies on nudity, YouTube’s policies on content deemed unacceptable to advertisers, or Twitter’s policies on hate speech.

pages: 527 words: 147,690

Terms of Service: Social Media and the Price of Constant Connection
by Jacob Silverman
Published 17 Mar 2015

But the data gleaned from these human workers will be used to inform the future generation of automated systems that will replace human workers or shunt them to even smaller bits of work. Nano-work will replace micro-work, until humans are engineered out of the system entirely, capturing all value generated as profits for the platform owners and the few programmers kept on as overseers. Similar conditions describe the lives of content moderators for Facebook, Google, and other services. Drafted through online labor markets, these workers, most of whom live in the developing world, go through some preliminary training before they begin their work as moderators. Then, they sit and parse through feeds of images, triaging according to the company’s policies—blocking porn immediately, but perhaps flagging a photo of a fight for further review.

By outsourcing most of this work to distributed networks of cheaply paid laborers, Facebook is able to keep its margins higher, while likely developing automated systems that will one day make these moderators superfluous. In the meantime, these laborers remain essential, even as they operate in the platform’s shadows. And if social networking is defined by the machine state—that desensitized fugue we end up in when we keep scrolling through the feed, like a glassy-eyed gambler at the slots—then content moderation is the machine state at its most dehumanizing. There the feed becomes something malevolent. Contra the soothing humanitarian rhetoric of Facebook, here everything awful about human instinct is highlighted. While we enjoy the pleasures of connection, these workers are undergoing experiences that often leave them depressed, traumatized, and angry.

Add RFID chips to all packaged food and grocery products and you can track their movement through supply chains and stores without human assistance. Perhaps companies can partner with stores to help utilize their surveillance systems to monitor the placement of goods. Firm up sentiment analysis, trending-topic algorithms, and optical-character-recognition scanning so that humans aren’t forced to do such drudgery. To save content moderators from their on-the-job stress, we have to put them out of work again. Just as Facebook or Pinterest retains control of your data, online labor markets keep workers wedded to the platform. You can’t take your profile elsewhere, unless two labor markets form a partnership or decide to create an open protocol that other markets can take advantage of.

pages: 260 words: 67,823

Always Day One: How the Tech Titans Plan to Stay on Top Forever
by Alex Kantrowitz
Published 6 Apr 2020

“Amazon Raises Minimum Wage to $15 for All US Employees.” CNBC. CNBC, October 2, 2018. https://www.cnbc.com/2018/10/02/amazon-raises-minimum-wage-to-15-for-all-us-employees.html. $28,000 per year: Gross, Terry. “For Facebook Content Moderators, Traumatizing Material Is a Job Hazard.” NPR. NPR, July 1, 2019. https://www.npr.org/2019/07/01/737498507/for-facebook-content-moderators-traumatizing-material-is-a-job-hazard. San Bernardino, California: Nagourney, Adam, Ian Lovett, and Richard Pérez-Peña. “San Bernardino Shooting Kills at Least 14; Two Suspects Are Dead.” New York Times. New York Times, December 2, 2015. https://www.nytimes.com/2015/12/03/us/san-bernardino-shooting.html.

A few days before the 2018 US midterms—the first big test of Facebook’s ability to stand up to further election manipulation—I met with James Mitchell, Rosa Birch, and Carl Lavin, three people at Facebook who’ve seen the integration of these new “inputs” on the ground level. Mitchell heads Facebook’s risk and response team, which works to find vulnerabilities in its content-moderation systems. Birch is a program manager in its strategic response team, which coordinates Facebook’s response to crises across divisions. And Lavin is a former editor at the New York Times, Forbes, and CNN, who works on the company’s investigative operations team, a group formed entirely to think about the bad things people could do with Facebook’s products.

Facebook Newsroom, May 2, 2018. https://newsroom.fb.com/news/2018/05/removing-content-using-ai/. some of its moderators work in miserable conditions: Newton, Casey. “The Secret Lives of Facebook Moderators in America.” Verge. Vox, February 25, 2019. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona. a large-scale Kremlin-sponsored misinformation campaign: Stamos, Alex. “An Update on Information Operations on Facebook.” Facebook Newsroom, September 6, 2017. https://newsroom.fb.com/news/2017/09/information-operations-update/. Cambridge Analytica, a data analytics firm, illicitly used: Rosenberg, Matthew, Nicholas Confessore, and Carole Cadwalladr.

pages: 506 words: 133,134

The Lonely Century: How Isolation Imperils Our Future
by Noreena Hertz
Published 13 May 2020

See Newton, ‘The Trauma Floor: The secret lives of Facebook moderators in America’, The Verge, 25 February 2019, https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona; ‘Facebook firm Cognizant quits,’ BBC News, 31 October 2019, https://www.bbc.co.uk/news/technology-50247540; Isaac Chotiner, ‘The Underworld of Online Content’, New Yorker, 5 July 2019, https://www.newyorker.com/news/q-and-a/the-underworld-of-online-content-moderation; Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019). 91 Sebastian Deri, Shai Davidai and Thomas Gilovich, ‘Home alone: why people believe others’ social lives are richer than their own’, Journal of Personality and Social Psychology 113, no. 6 (December 2017), 858–77. 92 ‘Childline: More children seeking help for loneliness’, BBC News, 3 July 2018, https://www.bbc.co.uk/news/uk-44692344. 93 J.

But what this suggests is that as well as investing considerably more in technological fixes to the problem – using the engineering skills which are of course in abundance at these companies – the platforms also need to deploy a much larger number of human moderators to assist in this task. In doing so they must recognise that content moderating is a challenging job, both intellectually and emotionally, and that training moderators well, paying them decently and providing them with sufficient emotional support is a necessity. At present not enough is being done. If even 10% of the energy Big Tech devotes to corporate growth and expansion was dedicated to finding more ingenious solutions to content moderation, the world would be a lot further ahead in addressing online poison, polarisation, alienation and disconnection.

A bullying post, for example, can be surprisingly hard to identify, given how quickly vernacular changes and how humour can be used as a sword. ‘Paula is so cool!’ might sound like a positive post, but if Paula is an overweight, geeky girl with no friends it could actually be a form of bullying. Ascertaining what qualifies as offensive via an algorithm is almost impossible, which is why effective reporting systems and human content moderators are so necessary. This isn’t to say that there are no technological fixes when it comes to online civility. Social media platforms could adjust their algorithms to reward kindness over anger or to ensure ‘that open-minded, positive posts rise more quickly’, as Professor Jamil Zaki suggests.118 At the least they could tweak their algorithms so that rage and anger did not rise so fast to the top.
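
The tweak Hertz and Zaki describe amounts to changing the weights in a ranking score. The snippet below is a hypothetical illustration with invented feature names and weights; it only shows how the same two posts reorder once anger is penalized and kindness rewarded.

```python
# Invented features and weights, purely to illustrate re-weighting a feed:
# identical posts change order when anger is penalized and kindness boosted.
posts = [
    {"id": "a", "engagement": 9.0, "anger": 0.8, "kindness": 0.1},
    {"id": "b", "engagement": 5.0, "anger": 0.1, "kindness": 0.9},
]

def score(post, anger_weight, kindness_weight):
    return (post["engagement"]
            + anger_weight * post["anger"] * 10
            + kindness_weight * post["kindness"] * 10)

rage_first = sorted(posts, key=lambda p: score(p, 1.0, 0.0), reverse=True)
kind_first = sorted(posts, key=lambda p: score(p, -0.5, 1.0), reverse=True)

print([p["id"] for p in rage_first])  # ['a', 'b']  anger amplified
print([p["id"] for p in kind_first])  # ['b', 'a']  kindness rewarded
```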

The Smartphone Society
by Nicole Aschoff

Today’s misrepresentations and blank spots are our blindness to key elements of the digital frontier. The contours of the big-data landscape are blurry and incomplete. We can’t see the algorithms, software, hardware, supply chains, energy, infrastructure, and people—the “sociotechnical layers”—that underlie the making of this new frontier. The mineworkers, assemblers and dissemblers, and content moderators along with so many others who make our phone worlds possible exist outside of the average user’s frame. Other lacunae are the ecological externalities smartphones produce. Our inability to comprehend the ecological dimensions of the world we create is not new, but our hand machines seem to increase the size of these blank spaces rather than diminish them.

Yet the low-paid service workers, 60 percent of whom are Black or Latino, who make these wonderlands go are expected to fade into the background. Nothing demonstrates who is valued and who isn’t better than the pay disparity between full-time, permanent employees and “blue-collar,” contract tech workers—workers who don’t receive the pay or benefits enjoyed by full-time employees despite filling critical roles ranging from content moderators and customer support reps to janitors and shuttle bus drivers. In Santa Clara County blue-collar tech workers make an average of $19,900 a year whereas permanent, full-time employees of the same companies make $113,000.8 Blue-collar tech workers have had enough. Facebook cafeteria workers voted to unionize in March 2018.

Tech companies are sitting atop mountains of cash thanks to mass quantities of unpaid and underpaid work, a technological infrastructure that was developed with taxpayer money, and access to cheap credit for development and expansion, courtesy of low-interest rates engineered by the Federal Reserve. The resources to provide a decent livelihood for all tech workers, whether they’re app drivers or content moderators, are available. Opportunities for organizing workers in tech and related industries are also there. When the assembly line was popularized in the early twentieth century observers thought the deskilled work and high-pace environment of factories would never support empowered workers.24 They couldn’t have been more wrong.

pages: 116 words: 31,356

Platform Capitalism
by Nick Srnicek
Published 22 Dec 2016

Given the broader context just outlined, we can see that they are simply extending earlier trends into new areas. Whereas outsourcing once primarily took place in manufacturing, administration, and hospitality, today it is extending to a range of new jobs: cabs, haircuts, stylists, cleaning, plumbing, painting, moving, content moderation, and so on. It is even pushing into white-collar jobs – copy-editing, programming and management, for instance. And, in terms of the labour market, lean platforms have turned what was once non-tradable services into tradable services, effectively expanding the labour supply to a near-global level.

The shift towards lean production and ‘just in time’ supply chains has been an ongoing process since the 1970s, and digital platforms continue it in heightened form today. The same goes for the trend towards outsourcing. Even companies that are not normally associated with outsourcing are still involved. For instance, content moderation for Google and Facebook is typically done in the Philippines, where an estimated 100,000 workers search through the content on social media and in cloud storage.90 And Amazon has a notoriously low-paid workforce of warehouse workers who are subject to incredibly comprehensive systems of surveillance and control.

‘Relative Constancy of Advertising Spending: A Cross-National Examination of Advertising Expenditures and Their Determinants’. International Communication Gazette, 67 (4): 339–57. Chen, Adrian. 2014. ‘The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed’. Wired, 23 October. http://www.wired.com/2014/10/content-moderation (accessed 4 June 2016). Clark, Jack. 2016. ‘Google Taps Machine Learning to Lure Companies to Its Cloud’. Bloomberg Technology. 23 March. http://www.bloomberg.com/news/articles/2016-03-23/google-taps-machinelearning-to-lure-companies-to-its-cloud (accessed 4 June 2016). Clark, Meagan, and Angelo Young. 2013.

pages: 207 words: 59,298

The Gig Economy: A Critical Introduction
by Jamie Woodcock and Mark Graham
Published 17 Jan 2020

Microwork involves the completion of short tasks on a platform interface that tend to be completed quickly, with the worker receiving a piece rate, minus the platform’s cut. In a recent ILO study of this kind of work, Berg et al. (2018: xv) found that workers were engaged in a diverse range of tasks, ‘including image identification, transcription and annotation; content moderation; data collection and processing; audio and video transcription; and translation.’ These tasks were used by clients to ‘post bulk tasks’, which are split up into small fragments for individual workers to complete.
[Figure 3(a): The availability of cloudwork. Source: https://geonet.oii.ox.ac.uk/blog/mapping-the-availability-of-online-labour-in-2019/]
[Figure 3(b): The location of cloudworkers on the five largest English-language platforms. Source: https://geonet.oii.ox.ac.uk/blog/mapping-the-availability-of-online-labour-in-2019/]
Amazon’s Mechanical Turk – the world’s most well-known microwork platform – refers to these tasks as ‘artificial artificial intelligence’.

Echoing the Mechanical Turk example, Alison Darcy has called this the ‘Wizard of Oz design technique’, referring to the way the work is hidden behind an interface, like the man hiding behind the projection of the eponymous wizard in the story.21 This kind of work is also becoming an increasingly important part of content production on platforms like Facebook and YouTube. While much attention has been paid to the ‘produsage’ (Bruns, 2008) of users who both use and produce content, often the labour that this relies upon is obscured. A growing number of workers are now engaged in ‘commercial content moderation (CCM)’ to ensure that users can only upload and view content that is deemed acceptable. As Sarah Roberts (2016), who coined the term, explains, the ‘interventions of CCM workers on behalf of the platforms for which they labor directly contradict myths of the Internet as a site for free, unmediated expression’.

Oakland, CA: University of California Press. Raw, L. (2009) Striking a Light: The Bryant and May Matchwomen and their Place in History. London: Continuum Books. Richey, L.A. and Ponte, S. (2011) Brand Aid: Shopping Well to Save the World. Minneapolis, MN: University of Minnesota Press. Roberts, S.T. (2016) Commercial content moderation: Digital laborers’ dirty work. In S.U. Noble and B. Tynes (eds.), The Intersectional Internet: Race, Sex, Class and Culture Online. New York: Peter Lang. Rosenblat, A. (2018) Uberland: How Algorithms are Rewriting the Rules of Work. Oakland, CA: University of California Press. Rosenblat, A. and Stark, L. (2016) Algorithmic labor and information asymmetries: A case study of Uber’s drivers.

pages: 279 words: 85,453

Breaking Twitter: Elon Musk and the Most Controversial Corporate Takeover in History
by Ben Mezrich
Published 6 Nov 2023

Jack’s conflict with Elliott and the Twitter board hadn’t revolved around the financial future of the company, which might have been Elliott’s main focus, but around the philosophical direction in which Jack believed the company was heading. As a manager, Jack had been notoriously hands-off when it came to issues like content moderation and the banning of problematic accounts; he’d always been a free speech advocate, and especially during the tumult of the recent elections, he’d become uncomfortable with the lengths Twitter had gone to try to police its platform. Parag couldn’t be sure when Jack had first reached out to Elon Musk regarding his concerns about Twitter, but at least as far back as a year ago, Jack had been pushing the Twitter board to reach out to the entrepreneur.

Reputable brands didn’t want their ads running next to hate speech, and time and again, social media had proven that without constant moderation, online conversation was always infiltrated, and eventually overrun, by bad actors. The dark corners of the internet were real and many, and more often than not, inhabited by teenagers. In his work, Yoel had always been careful to emphasize that content moderation shouldn’t be governed by “dictatorial edict.” As he would later explain, in an interview with tech journalist Kara Swisher, what was important wasn’t “the decisions you make, but how you make those decisions.” The rules couldn’t be arbitrary; they needed to be well communicated and transparent.

Robin would be meeting with Antonio Gracias tomorrow afternoon to give him an overview of their ad business, and she seemed confident that Elon would understand the importance of the relationships they’d built with Madison Avenue, how it was central to keeping Twitter healthy. In support of that notion, Elon also seemed to recognize Yoel and his work on the safety and moderation front as integral to the platform; he’d even tweeted that he intended to launch “a content moderation council, with widely diverse viewpoints to decide on moderation and account reinstatements.” Although Jessica had only texted with Yoel briefly as she’d left the #TrickOrTweet party on her way back to her office, she could tell that Yoel had decided to give Elon a chance. Maybe he felt that it was his duty to try to protect the site through the transition, or perhaps he believed that Elon, as a successful businessman, would understand that no major advertisers would be comfortable spending ad dollars only to have their campaigns appear in a swamp of hate speech and adult content.

pages: 314 words: 81,529

Badvertising
by Andrew Simms

It works: around a quarter of the global population, just over 2 billion people, go on Facebook every day.65 Unfortunately, that content tends to be highly emotive, provocative and tends towards the hateful, extreme and often violent. A testament to this is the legion of poorly paid content moderators who suffer post-traumatic stress as a result of their work,66 efforts which are nevertheless incapable of holding back the tide of content showing humanity at its worst that does get on to the site. The ruthlessness behind the operation of a platform optimised to sell advertising can be seen in the case of a group of Facebook content moderators based in Kenya, who were sacked after trying to organise a union to improve their conditions.67 Worse still, more positive and peaceful content tends, conversely, to be actively demoted as it is less likely to put advertising in front of users.

Vox (2018) The Facebook and Cambridge Analytica scandal explained. 2 May. www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram 7 65. Statista (2023) Number of daily active Facebook users worldwide as of 1st quarter 2023. www.statista.com/statistics/346167/facebook-global-dau/ 66. The Washington Post (2020) Facebook content moderator details trauma that prompted fight for $52 million PTSD settlement. 13 May. 67. Time Magazine (2023) Facebook content moderators sue meta over layoffs in Kenya. 20 March. 68. Federal Trade Commission (2021) Press release. Washington, DC. 15 December. www.ftc.gov/news-events/news/press-releases/2021/12/advertising-platform-openx-will-pay-2-million-collecting-personal-information-children-violation 69.


pages: 412 words: 115,048

Dangerous Ideas: A Brief History of Censorship in the West, From the Ancients to Fake News
by Eric Berkowitz
Published 3 May 2021

A 2018 Reuters investigation found that despite Facebook’s commitment to combat hate speech against the Rohingya minority in Myanmar, hundreds of such items remained, some of which called them dogs, maggots, or rapists, and suggested they be exterminated.66 When far-right American extremists used Facebook to exploit the coronavirus pandemic to promote a race war, Facebook was unable to prevent such pages from proliferating.67 This is not surprising, given that as of January 2020, Facebook’s third-party fact checkers were reviewing a paltry two hundred to three hundred articles in the US per month out of millions posted each day.68 Its other content moderators, who number in the thousands worldwide, are often poorly trained to enforce Facebook’s porous and opaque standards, which result in much hateful content remaining on the platform. (One group of content moderators, distressed over the amount of hate speech Facebook allows, bitterly referred to themselves as “algorithm facilitators.”)69 And even if Facebook could take down all the false and harmful posts, which is unlikely, its commitment to silencing such material runs against both the platform’s internal design and the attitude of its majority shareholder and CEO.

While proposals are bubbling up to eliminate or modify the law’s liability shield, a divided Congress and the president must still agree on how, which is uncertain in the short term. For the present, American Internet censorship will continue to result less from legal constraints than from the individual platforms’ patchwork of evolving, conflicting, and inconsistently enforced content moderation policies. Section 230 was spurred in part by a crooked financial firm, Stratton Oakmont (made famous by its founder’s memoir, The Wolf of Wall Street, and Martin Scorsese’s eponymous film adaptation). In 1995, Stratton sued a now-forgotten Internet service provider (ISP), Prodigy, for libel for anonymous messages posted on one of its boards that accused Stratton of bad behavior.

Turkish authorities arrested at least eight journalists for “spreading misinformation” about the virus, while a Cambodian teen was arrested for her fearful social media posts about the virus in her area, and a Thai man was threatened with five years in prison for complaining online about inadequate preventive measures at Bangkok’s airport. And as Facebook sent many of its content moderators home for their safety and relied more on artificial intelligence to filter content, articles by reputable outlets such as The Atlantic and the Times of Israel were mistakenly taken down. By far the worst censor of online speech is China, both in the scale, sophistication, and brutality of its suppression and in its expanding reach around the world.

pages: 499 words: 144,278

Coders: The Making of a New Tribe and the Remaking of the World
by Clive Thompson
Published 26 Mar 2019

“We should be extremely careful before rushing to embrace an internet that is moderated by a few private companies by default, one where the platforms that control so much public discourse routinely remove posts and deactivate accounts because of objections to the content,” as he wrote in the Washington Post. “Once systems like content moderation become the norm, those in power inevitably exploit them.” If massive scale is the problem, would smaller scale be a solution? This is one answer some observers argue: If big tech firms have too much power, break them up—classic trustbusting. After all, Facebook has no real competitor; Zuckerberg was unable to describe one, when asked by US congressional leaders during his public grilling.

“negative mood and body dissatisfaction”: Zoe Brown and Marika Tiggemann, “Attractive Celebrity and Peer Images on Instagram: Effect on Women’s Mood and Body Image,” Body Image 19 (December 2016): 37–43. like #thygap or #thynspo: Lily Herman, “Pro–Eating Disorder Content Continues to Spread Online, Researchers Say,” Allure, October 17, 2017, accessed August 18, 2018, https://www.allure.com/story/bonespiration-thinspiration-instagram-hashtag; Stevie Chancellor et al., “#thyghgapp: Instagram Content Moderation and Lexical Variation in Pro–Eating Disorder Communities,” ACM Conference on Computer-Supported Cooperative Work and Social Computing, February 27–March 2, 2016, accessed August 18, 2018, http://www.munmund.net/pubs/cscw16_thyghgapp.pdf; Emily Reynolds, “Instagram’s Pro-anorexia Ban Made the Problem Worse,” Wired UK, March 14, 2016, accessed August 18, 2018, https://www.wired.co.uk/article/instagram-pro-anorexia-search-terms.

How Twitter Turned Toxic,” Fast Company, April 4, 2018, accessed August 21, 2018, www.fastcompany.com/40547818/did-we-create-this-monster-how-twitter-turned-toxic. 10,000 to scour YouTube videos: April Glaser, “Want a Terrible Job? Facebook and Google May Be Hiring,” Slate, January 18, 2018, accessed August 21, 2018, https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html. that’s genuinely threatening: Alexis C. Madrigal, “The Basic Grossness of Humans,” The Atlantic, December 15, 2017, accessed August 21, 2018, https://www.theatlantic.com/technology/archive/2017/12/the-basic-grossness-of-humans/548330. whitewash their reputation: Joseph Cox, “Leaked Documents Show Facebook’s Post-Charlottesville Reckoning with American Nazis,” Motherboard, May 25, 2018, accessed August 21, 2018, https://motherboard.vice.com/en_us/article/mbkbbq/facebook-charlottesville-leaked-documents-american-nazis.

pages: 284 words: 75,744

Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond
by Tamara Kneese
Published 14 Aug 2023

The attributes that make digital technologies appear ghostly, disembodied, and otherworldly are belied by their material, industrial, and sometimes boring infrastructural qualities.34 Digital remains depend on the global supply chains and human exploitation underlying the success of corporate platforms, including the extraction of minerals necessary for the production of hardware, factories in cities like Shenzhen, and call center workers and content moderators in cities like Manila and Mumbai.35 Digital remains also rely on vast server farms that dot rural landscapes, sapping up endless amounts of energy, and on physical fiber optic cables under the ocean floor.36 Despite the material nature of computer technologies and their practical uses in commerce, offices, schools, and the military, there has long been a sense that they can foster communication with the dead.37 Although Marxist definitions of labor and the valorization of affect are helpful when considering how capitalism extracts profit from digital interactions, data are not just tied to political economic processes but rely on notions of legacy, kinship, posterity, and sentiment.38 “Data” originally referred to things given, perhaps as a gift.

Caring for a dead body and material possessions often intersects with caring for digital remains; mourners may sift through and pack away physical photographs, clothes, and records while also sorting out plans for devices and old files. Tracking down all of a dead person’s account information is an arduous task, especially in a period of acute grief. Globally dispersed content moderators and IT professionals are part of the network of laborers who keep the digital dead alive, just as much as surviving kin or friends who faithfully maintain the pages of their dead, paying for domain names or removing spam messages as needed. Platform Temporality Along with materiality, temporality is also central to this book.

In contrast, on its main memorialization webpage, Facebook told its users to expect delays in addressing memorialization requests because of pandemic-related labor shortages. These examples provide glimpses of the human labor required to care for the dead on massive platforms, from designers to content moderators.2 By February 2022, nearly one million Americans had died with little to no public acknowledgment on a national scale. What does it mean to get back to normal in the face of mass death?3 The pandemic is a business opportunity for death entrepreneurs, but it has also catalyzed new organizing movements through and around platforms.

pages: 475 words: 134,707

The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health--And How We Must Adapt
by Sinan Aral
Published 14 Sep 2020

In the early days of the Internet, the courts gave CompuServe, a community-based communication platform, immunity from prosecution precisely because it did not moderate the content on its platform. At the same time, the courts found its competitor Prodigy liable for the content-moderation decisions it was making. This created a perverse incentive for platforms to avoid moderating content to escape the liability that accompanied content moderation decisions. Recognizing the need to incentivize platforms to moderate content, Section 230 provided platforms the protections they needed to make tough content-moderation decisions without the fear of civil prosecution. When we understand this history, it becomes clear how Section 230 helps maintain free speech and the quality of our communication ecosystem.

Strategic Whiplash That’s why Facebook seems schizophrenic right now. It’s frantically seeking a solution that consumers can believe in. In 2018 and 2019 it floated several conflicting ideas about how it was going to chart a course to smoother waters. First it was going to stay the course and tweak the platform, using AI and content moderators to root out harmful content, improve data portability, and pay attention to consumers’ privacy. But its public pronouncements fell on deaf ears. So Facebook COO Sheryl Sandberg floated the idea of abandoning the advertising model altogether and charging a monthly fee for Facebook services instead.

The killer posted a seventy-four-page manifesto filled with racist rants and links to the forthcoming livestream on social media, and subsequently posted the video to his Facebook page. Several thousand people saw the original live feed, and 1.5 million others tried to share it on Facebook in the hours after the attack. Facebook blocked 1.2 million of these uploads, but 300,000 slipped through its content moderation. In the fall of 2019, both Elizabeth Warren and Joe Biden derided Facebook for allowing Donald Trump to promote known falsehoods about Biden in political ads. Facebook refused to take down the ads because they “did not violate Facebook policy.” The Christchurch video and false political advertising posted during the 2020 U.S. election highlight a critical dilemma in dealing with the Hype Machine.

pages: 239 words: 80,319

Lurking: How a Person Became a User
by Joanne McNeil
Published 25 Feb 2020

Tumblr and Instagram, like Yahoo and other services before it, have issued policies to ban content that glorifies self-harm, only to see these users misspell common hashtags or otherwise evade bans through in-community secret language or signaling. More execrable content calls for specialized content moderators, labor that platforms usually contract out. It is grinding and traumatizing work. The contractors’ task is to clean the platforms of snuff images, child pornography, and other things I’d rather not think about. No one should have to do this job, even if it is paid (which it is, but not particularly well). The content moderators and the PTSD they suffer is evidence enough that it is impossible for these platforms to ever clean themselves up. Accountability on the internet, for the internet, and for users collectively depends on the tension between privacy for a few and openness to newcomers.

This clash between users is revenue: generating “impressions,” the term of art for the number of views of a particular piece of content. If wide use is a company’s goal, harassment is not necessarily in opposition to that goal. Abuse, hate reads, coordinated harassment, and yes, outrage all lead to online rubbernecking—monetized clash. Advertising is sold by quantity of eyeballs, and interfering with the flow of content—moderating, mediating what is shared—comes at the expense of click-throughs. Impersonation is one of the cruelest strains of online harassment. A friend of mine noticed an obscene example when she participated in the #NotYourAsianSidekick campaign, which was a follow-up to #SolidarityIsForWhiteWomen.

For an even earlier account of this practice, check out Rita Ferrandino’s essay “Terms of Service” (The Village Voice, March 20, 2001), about her experiences in Albuquerque, New Mexico, where she worked as a moderator with AOL’s “Community Action Team” in 1997. I also referred to Sarah T. Roberts’s book Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019, 73–133) for background on this practice. The quote from Tim Berners-Lee comes from an interview with Katrina Brooker (“‘I Was Devastated’: The Man Who Created the World Wide Web Has Some Regrets,” The Hive, Vanity Fair, July 9, 2018).

pages: 356 words: 116,083

For Profit: A History of Corporations
by William Magnuson
Published 8 Nov 2022

Could people trust the company to do the right thing with their intimate information? And what precisely would the “right” thing be? The problem was devilishly difficult to solve. To get a sense of the problem, one need only look at how Facebook handled “offensive” content posted to the site. Content moderation proved a constant thorn in the side at Facebook’s headquarters. In its first years, Facebook had a small team of content moderators, and most of them received little more than a half hour of training. Rather than following a list of rules created at the outset about what constituted objectionable content, the team was forced to develop them as it muddled along from crisis to crisis.

The lactivists were not satisfied with this answer, and protests continued in future years, leading Facebook to regularly clarify its breastfeeding photo policy. The difficulty of drawing ever more nuanced boundaries eventually led the company, in 2012, to set up a moderation center in the Philippines, where an army of content moderators screened the hundreds of millions of photos being uploaded to the site every day.32 But content moderation was just the tip of the iceberg. The more general issue the company faced was how to decide who had access to a user’s information. An important turning point in this debate came in 2007, when Facebook began allowing developers to build services directly into the Facebook website.

One of its rules of thumb was “three strikes and you’re out,” whereby users would be banned from the site if they posted objectionable content three times. Another was the “thong rule,” meaning that if you could see a thong in a picture, it was too risqué and had to come down. But the way that content moderators handled complaints left much to be desired. Initially, when the team received a complaint about a photo, a moderator would log into the complainer’s account using the complainer’s own username and password to look at the photo and determine whether it was objectionable. If it was, the moderator would then turn to the poster’s account, using the poster’s username and password to log in and remove the photo.

pages: 208 words: 57,602

Futureproof: 9 Rules for Humans in the Age of Automation
by Kevin Roose
Published 9 Mar 2021

They were called “editors,” “producers,” and “reporters,” and there were tens of thousands of them, most earning a decent middle-class living. Today, a huge number of those jobs have disappeared, and in their place sits the automation-age job title of “content moderator.” Like the editors and producers of old, content moderators spend their days making sure that information being broadcast to the masses through Facebook, YouTube, Twitter, and other platforms is suitable for public consumption. They usually aren’t employed by the platforms themselves, but rather contracted out through temp agencies and consulting firms.

In these jobs, most of the work is directed and overseen by machines, and humans act as the gap-fillers, doing only the things the machines can’t yet do on their own. Prominent examples of machine-managed jobs include gig work for companies like Uber, Lyft, and Postmates, along with the work performed by Amazon warehouse workers, Facebook and Twitter content moderators, and other people whose jobs consist mainly of carrying out instructions given to them by a machine. Machine-managed jobs are less about collaborating with AI systems, and more about serving them. An Uber driver is not “collaborating” with Uber’s ride-matching algorithm, any more than a military cadet is “collaborating” with the drill sergeant who gives her marching orders.

pages: 382 words: 105,819

Zucked: Waking Up to the Facebook Catastrophe
by Roger McNamee
Published 1 Jan 2019

Unfortunately, anonymity, the ability to form Groups in private, and the hands-off attitude of platforms have altered the normal balance of free speech, often giving an advantage to extreme voices over reasonable ones. In the absence of limits imposed by the platform, hate speech, for example, can become contagious. The fact that there are no viable alternatives to Facebook and Google in their respective markets places a special burden on those platforms with respect to content moderation. They have an obligation to address the unique free-speech challenges posed by their scale and monopoly position. It is a hard problem to solve, made harder by continuing efforts to deflect responsibility. The platforms have also muddied the waters by frequently using free-speech arguments as a defense against attacks on their business practices.

Facebook gave Motherboard access to the moderation team, which Motherboard supplemented with leaked documents, possibly the first to come from active Facebook employees. According to the story’s authors, Jason Koebler and Joseph Cox, “Zuckerberg has said that he wants Facebook to be one global community, a radical ideal given the vast diversity of communities and cultural mores around the globe. Facebook believes highly-nuanced content moderation can resolve this tension, but it’s an unfathomably complex logistical problem that has no obvious solution, that fundamentally threatens Facebook’s business, and that has largely shifted the role of free speech arbitration from governments to a private platform.” Facebook told Motherboard that its AI tools detect almost all of the spam it removes from the site, along with 99.5 percent of terrorist-related content removals, 98.5 percent of fake account removals, 96 percent of nudity and sexual content removals, 86 percent of graphic violence removals, and 38 percent of hate speech removals.


pages: 918 words: 257,605

The Age of Surveillance Capitalism
by Shoshana Zuboff
Published 15 Jan 2019

The norm is that information corruption is not catalogued as problematic unless it poses an existential threat to supply operations—Bosworth’s imperative of connection—either because it might trigger user disengagement or because it might attract regulatory scrutiny. This means that any efforts toward “content moderation” are best understood as defensive measures, not as acts of public responsibility. So far, the greatest challenge to radical indifference has come from Facebook and Google’s overreaching ambitions to supplant professional journalism on the internet. Both corporations inserted themselves between publishers and their populations, subjecting journalistic “content” to the same categories of equivalence that dominate surveillance capitalism’s other landscapes.

.… Voices that were lurking in the shadows are now at the center of the public discourse.”33 The guiding principles of radical indifference are reflected in the operations of Facebook’s hidden low-wage labor force charged with limiting the perversion of the first text. Nowhere is surveillance capitalism’s outsized influence over the division of learning in society more concretely displayed than in this outcast function of “content moderation,” and nowhere is the nexus of economic imperatives and the division of learning more vividly exposed than in the daily banalities of these rationalized work flows where the world’s horrors and hate are assigned to life or death at a pace and volume that leave just moments to point thumbs up or down.

As one account notes, “Facebook and Pinterest, along with Twitter, Reddit, and Google, all declined to provide copies of their past or current internal moderation policy guidelines.”34 Among the few reports that have managed to assess Facebook’s operations, the theme is consistent. This secret workforce—some estimates reckon at least 100,000 “content moderators,” and others calculate the number to be much higher—operates at a distance from the corporation’s core functions, applying a combination of human judgment and machine learning tools.35 Sometimes referred to as “janitors,” they review queues of content that users have flagged as problematic.

pages: 491 words: 77,650

Humans as a Service: The Promise and Perils of Work in the Gig Economy
by Jeremias Prassl
Published 7 May 2018

Alyson Shontell, ‘My nightmare experience as a TaskRabbit drone’, Business Insider (7 December 2011), http://www.businessinsider.com/confessions-of-a-task-rabbit-2011-12?IR=T, archived at https://perma.cc/7EYK-86QR 32. Ibid. 33. Crowdflower, ‘Crowdsourced content moderation’, https://success.crowdflower.com/hc/en-us/article_attachments/201062449/CrowdFlower_Skout_Case_Study.pdf, archived at https://perma.cc/4MY4-AAFX 34. Adrian Chen, ‘The labourers who keep dick pics and beheadings out of your Facebook feed’, Wired (23 October 2014), http://www.wired.com/2014/10/content-moderation/, archived at https://perma.cc/4CJG-UDMT 35. Andrew Callaway, ‘Apploitation in a city of instaserfs: how the “sharing economy” has turned San Francisco into a dystopia for the working class’, The Magazine (1 January 2016), http://www.policyalternatives.ca/publications/monitor/apploitation-city-instaserfs, archived at https://perma.cc/AP9Z-TZ5J 36.

In determining the quality of uploaded content, websites including Facebook and YouTube increasingly rely on crowdworkers to determine whether content is inappropriate or offensive. Platforms such as CrowdFlower proudly advertise their ability to ‘cut costs without sacrificing quality’ in real-time content moderation: ‘[E]ach image submitted to the site is assessed by three reviewers whose judgments are automatically crosschecked to determine the best response.’33 That doesn’t sound too difficult—until you realize that the images in question will often include extreme pornography, ‘brutal street fights, animal torture, suicide bombings, decapitations, and horrific traffic accidents’.34 How can we reconcile these conditions with many a worker’s public account of her happy experiences, including those we encountered in earlier chapters?

pages: 484 words: 114,613

No Filter: The Inside Story of Instagram
by Sarah Frier
Published 13 Apr 2020

Zollman, the Instagrammer who had worked on the earliest community moderation tools and had become so familiar with the threats to its users, was sure she wouldn’t be able to find and solve as many problems as Facebook’s vast army of contractors could. To better serve the millions of people joining Instagram, she worked on transitioning content moderation, so that whenever people clicked to report something awful they saw on Instagram, it would just be funneled into the same system of people who were cleaning up Facebook. Facebook had low-wage outside contractors quickly clicking through posts containing or related to nudity, violence, abuse, identity theft, and more to determine whether anything violated the rules and needed to be taken down.

While Facebook was a place where content went viral, the dangerous communities on Instagram were harder to find, discoverable only if you knew the right hashtag. Instagram wouldn’t be able to catch all the worst posts on the platform simply by adopting the same policing tactics as Facebook. Because Instagram had shifted its content moderation to Facebook after the acquisition, Systrom was disconnected from how specific issues were handled, except when it came to the company’s most high-profile users. It wasn’t like the early days, when they’d had actual employees going through all of the most upsetting content on the app. For the past few years, the Instagram full-time employees had been focused on shaping the community through promoting good behavior, and not paying as much attention to stopping the bad.

On Friday, March 17, 2018, the New York Times: Matthew Rosenberg, Nicholas Confessore, and Carole Cadwalladr, “How Trump Consultants Exploited the Facebook Data of Millions,” New York Times, March 17, 2018, https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html; and Carole Cadwalladr and Emma Graham-Harrison, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach,” The Observer, March 17, 2018, https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election. The average Facebook employee made: Casey Newton, “The Trauma Floor,” The Verge, February 25, 2019, https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona; and Munsif Vengatil and Paresh Dave, “Facebook Contractor Hikes Pay for Indian Content Reviewers,” Reuters, August 19, 2019, https://www.reuters.com/article/us-facebook-reviewers-wages/facebook-contractor-hikes-pay-for-indian-content-reviewers-idUSKCN1V91FK.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

Absent public regulation, private platforms for the most part have been given the lead to figure out how to moderate content in ways that protect speech, well-being, fairness, and equality. I serve as a consultant to major platforms seeking to tread this rough, largely uncharted territory ethically and responsibly. Content moderation is increasingly integrating human and automated processes, but human biases and cultural norms can still creep in along the way.

What is the responsibility of online platforms? What is the role of law enforcement? How do we best balance between equality and speech, safety and privacy, online? These are hard questions. I encounter the depth of these challenges in very concrete ways when I consult with online platforms on their content moderation policies. And the absence of government regulation has left too much of these critical issues in the hands of the private market. But given the normative challenges, the underlying promise must not be lost: the more accurate the technology, the more effectively it can aid us to draw such tough policy lines in an informed and consistent way.

For example, the New York Times recently reported how overseas content moderators tasked with tagging photos for an AI system that would automatically remove explicit material had classified all images of same-sex couples as “indecent.”12 Legal scholar Ari Waldman has documented many similar examples—YouTube’s AI flagging gay or queer images and not their heterosexual equivalent, Instagram flagging topless images of plus-size Black women with their arms covering their breasts but not of similarly posing thin white women, TikTok banning hashtags like #gay, #transgender, and #Iamagay/lesbian in some jurisdictions.13 AI scholar and activist Kate Crawford notes, “Every classification system in machine learning contains a worldview. Every single one.”14 We need to be mindful of the impact of these biases as systems are built. Context matters in content moderation, too. As Rory Kozoll, Tinder’s head of trust and safety products, put it, “One person’s flirtation can very easily become another person’s offense.”15 Yet AI is still in its infancy in terms of processing context. For example, Facebook has tagged and removed photos that show breastfeeding for being sexually explicit.

Four Battlegrounds
by Paul Scharre
Published 18 Jan 2023

TikTok’s algorithm plays a more dominant role in curating content, ByteDance has been far less transparent than U.S. companies about their content moderation rules, and the company is subject to the Chinese Communist Party’s direction. The combination of these factors poses serious risks of TikTok being used in the future for censorship or propaganda on behalf of the CCP. TikTok has struggled to rehabilitate its public image, but even improved content moderation sidesteps the crux of the issue. Any Chinese company is ultimately subject to the Chinese Communist Party’s direction, through both legal and extralegal means.

To date, companies have responded in an ad hoc fashion, often adjusting their algorithms only in the face of public pressure. This is, at best, a stopgap measure. Even if responsible companies adopt reasonable guidelines, without uniform rules, other less scrupulous companies will crop up in their place. As Twitter and Facebook increased their content moderation to crack down on disinformation, far-right users flocked to alternatives like Parler and Gab. Fragmenting the social media ecosystem so that users fall into even more extreme bubbles is hardly an effective long-term solution. Regulation is needed to establish uniform guidelines and create an environment that preserves free expression while guarding against manipulative behavior.

Technology can be used to repress human freedoms or bolster them, to elevate a message or suppress it, to strengthen one group or weaken another. How technology is used reflects the values, whether conscious or unconscious, of its creators and users. Democratic societies find themselves in the midst of often messy debates about how AI technology should be used. The use of AI for facial recognition, surveillance, content moderation, synthetic media, and military applications are just some of the ways in which AI may have profound effects on human well-being, and these uses are rightly contested and debated in democracies. There are also debates about the use of AI in authoritarian regimes, but those debates happen under a radically different political system in which criticism of the state is censored and punished.

pages: 592 words: 125,186

The Science of Hate: How Prejudice Becomes Hate and What We Can Do to Stop It
by Matthew Williams
Published 23 Mar 2021

Between them, the tech giants employ tens of thousands of people to moderate suspect content. This low-paid work involves viewing hundreds of posts a day that feature child abuse, beheadings, suicides and acts of hate. Unsurprisingly, staff turnover is significant. In May 2020, Facebook was reported as having agreed to pay $52 million to current and former content moderators in a class-action lawsuit, with each of the 11,250 moderators receiving at least $1,000; more if they were diagnosed with a mental illness stemming from their job. The case was brought against Facebook for not providing a safe working environment for moderators, who complain of suffering from PTSD-like symptoms from continuously viewing disturbing content.

Their anti-establishment approach, facilitated by the democratising nature of the internet, was replicated by Nigel Farage when he set up his Brexit Party, which went on to dominate the 2019 European Parliament elections in the UK, taking the largest vote share. Further to the right, the leaders of Britain First and the ex-leader of the English Defence League (EDL) also tapped into right-wing filter bubbles to spread their divisive narrative. For years social media content moderators protected some far-right pages from deletion due to their high number of followers and profit-generating capacity. Before it was banned, the far-right ‘Tommy Robinson’ Facebook page had over a million followers, and held the same protections as media and government pages, despite having violated the platform’s policy on hate speech nine times, while typically only five violations were tolerated by Facebook’s content review process.8 The page was eventually removed in February 2019, a year after Twitter removed the account of Stephen Yaxley-Lennon (alias Tommy Robinson) from its platform.

Between them, they had amassed nearly 12 million Facebook followers. Those that fell foul of the platform’s hate speech rules remained online, some for years. Why? Because the platform had not adequately invested in understanding the use of their services in developing countries, and as a result only had a handful of content moderators who spoke Burmese.‡‡ Facebook in Myanmar was left unpoliced, making it an effective tool for accelerating ethnic conflict – it was weaponised. It took a Reuters investigation to convince Facebook to delete posts and accounts. The United Nations came to the conclusion that Facebook had a ‘determining role’ in stirring up hate against Rohingya Muslims in Myanmar in the 2016–17 genocide.38 Eventually Facebook acknowledged its role and apologised, admitting to being too slow to address the hate speech posted on its platform in the region.

pages: 311 words: 90,172

Nothing but Net: 10 Timeless Stock-Picking Lessons From One of Wall Street’s Top Tech Analysts
by Mark Mahaney
Published 9 Nov 2021

And they highlight the controversial nature of Facebook because they highlight the controversial nature of humanity. If you establish an open mic night on the town commons—or an open cam night—you bet you’re going to hear and see disturbing things, regardless of how many resources the company has put into content moderation. And the company has put a lot of resources into content moderation—tens of millions of dollars and tens of thousands of employees. Which has served to create even more controversy around Facebook. Who makes the decisions over which content should be moderated? What about all the borderline political propaganda that is part and parcel of competitive, at times divisive, political campaigns?

S&P, 279t, 282t as platform company, 241–242 playing quarters at, 57–58, 57t and pricing power flywheel, 192–194, 193f, 196–197 revenue, 82, 106–108, 153, 153t rise of, 5 sell-offs of, 48–52 share price, 51f underestimations of, 2 Amazon Kindle, 50, 115–116 Amazon Prime, 115, 174, 180–183, 192, 205 Amazon Subscribe & Save, 115 Amazon Web Services (AWS): launch of, 50 product innovation, 114–119, 179 revenue, 117f “Amazon.Bomb,” 10, 177–178 “Amazon’s Antitrust Paradox,” 308 American Technology Research, 156, 181 Android, ubiquity of, 63 Antitrust regulations, 308–310 AOL, 5, 7–8, 82 Apple: growth of, 305 management teams of, 203–204, 204t, 220t market cap of, 7 product innovation at, 114 revenue, 108, 153, 153t share price, 48–49 and Snap, 63 APRN (Blue Apron), 18–23, 19f, 33 Athar, Sohaib, 134 Auletta, Ken, 149 AWS (see Amazon Web Services [AWS]) Ayers, Charlie, 145 BABA (see Alibaba [BABA]) Bad stocks, 15–33, 293 Blue Apron, 18–23 and fundamentals of stock-picking, 15–18 Groupon, 27–32 Zulily, 23–27 Bain Capital, 24 Barron’s magazine, 9–10, 90, 91, 177–178 Barton, Rich, 188, 191, 209 Bezos, Jeff: acknowledgments of mistakes by, 219, 221 and Amazon Prime, 181, 193 and Amazon Web Services, 115, 116 and Burning Man, 220t as company founder, 204t competition for, 8 innovation by, 10, 205–206 shareholder letters by, 214–216, 216t–217t “Big baggers,” 1–2 “The Big Long,” 5–7 The Big Short (Lewis), 5 Black swan events, 18 Blockbuster, 10 Blue Apron (APRN), 18–23, 19f, 33 Blue Nile (NILE), 23–24 Booking.com (BKNG): acquisition of, 80 as big bagger, 1 during Covid-19 pandemic, 303 fundamentals at, 261t market cap of, 7 net income of, 5, 6t as tech stock, 3 Bow Street, 24 Boyd, Jeff, 91, 210 Brin, Sergey, 145, 146, 204t, 209, 219, 220t Broadcast.com, 87 Brown, Josh, 226 Buffett, Warren, 214 Burning Man, 219, 220t “Burning Up,” 177 Buyer, Lise, 145–146 Calico, 208 Cambridge Analytica, 37 Candor, of management teams, 221 Carpenter, Matt, 311 Case, Steve, 7 Cavens, Darrell, 23–24 Chernobyl (miniseries), 131 Chewy (CHWY): as competitor, 310 and Covid-19 pandemic, 3, 4, 17 during Covid-19 pandemic, 303 fundamentals at, 261t as long-term investment, 66–70 market cap of, 247t share price, 70f as tech stock, 3 CIA, AWS used by, 118 Cisco Systems, 6 Cloud computing, 117–118 Cloudflare, 17 CNBC, 68 Cohen, Ryan, 68, 302 Cohort performance, 250 Companies: judging high-quality, 292 live-from-home and work-from-home, 303 with minimal earnings, 235–242 with no earnings, 242–254, 243t with robust earnings, 232–235 user-generated content, 248–249 Competitive moats, 169–170 Comps, 105–107 Consumerism, as research, 140 ConsumerVoice.org, 21 Content moderation, on Facebook, 37 “A Conventional Look at an Unconventional IPO,” 156 Cooper, Bradley, 135 Covid-19 pandemic, 302–305 Amazon during, 3 Chewy during, 69 DoorDash during, 161–162 eBay during, 85 effect of, on markets, 17–18, 226, 246 and forecasting, 255 Google during, 46 and Internet sector, 3–4 Netflix during, 45 and revenue, 105–107 Stitch Fix during, 124–125, 127 stock dislocations due to, 276 Covid-19 pandemic tech stocks during, 131 Uber during, 159, 161 Cramer, Jim, 157 Criteo (CRTO), 288–289, 289f Cruz, Ted, 37 Customer focus: of Amazon, 180 of DoorDash, 185–186 long-term investments in companies with, 187 of management teams, 214–216, 216t–217t and pricing power flywheel, 194 of product innovation, 295–296 Customer value propositions, 173–199, 297 Amazon vs. eBay, 174–183 DoorDash vs.

How to Stand Up to a Dictator
by Maria Ressa
Published 19 Oct 2022

Every decision became about making a profit and protecting Facebook’s interests. In 2011, Sheryl hired Joel Kaplan, a former Harvard classmate, to lobby and court the conservatives and the American Right. By 2014, he was Facebook’s vice president of global public policy, running government relations and lobbying efforts in Washington, DC, along with determining its content moderation policy around the world. Other companies, including Google and Twitter, keep public policy and lobbying efforts separate from the teams that create and implement content rules. Several employees who resigned from Facebook demanded that those teams be separated, but to this day, that hasn’t been done.

And in 2020, I started to think that Facebook was the bad guy. That year, Carole also asked me to join her brainchild, what we later called the Real Facebook Oversight Board.6 Mark Zuckerberg had recently announced the creation of Facebook’s “Supreme Court,” an oversight board7 designed to take content moderation to an independent court-style setup. That board addressed the wrong issue: content, which had never really been the problem. The first problem was the company’s distribution model: an oversight board on content could never match the speed of the dissemination of information online. The Real Facebook Oversight Board consisted of experts demanding that Facebook change the policies that were destroying our world.

To build that world, we must: Bring an end to the surveillance-for-profit business model; end tech discrimination and treat people everywhere equally; and rebuild independent journalism as the antidote to tyranny. We call on all rights-respecting democratic governments to: 1. Require tech companies to carry out independent human rights impact assessments that must be made public as well as demand transparency on all aspects of their business—from content moderation to algorithm impacts to data processing to integrity policies. 2. Protect citizens’ right to privacy with robust data protection laws. 3. Publicly condemn abuses against the free press and journalists globally and commit funding and assistance to independent media and journalists under attack.

pages: 390 words: 96,624

Consent of the Networked: The Worldwide Struggle for Internet Freedom
by Rebecca MacKinnon
Published 31 Jan 2012

Also see his most recent book, Program or Be Programmed: Ten Commands for a Digital Age (New York: OR Books, 2010). 233 “The invention of a tool doesn’t create change”: Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations (New York: Penguin Press, 2008), 105. 233 “cute-cat theory of digital activism”: Ethan Zuckerman, “The Cute Cat Theory Talk at ETech,” My Heart’s in Accra blog, March 8, 2008, www.ethanzuckerman.com/blog/2008/03/08/the-cute-cat-theory-talk-at-etech. 234 in 2007 WITNESS launched its own Video Hub: http://hub.witness.org; Yvette Alberdingk Thijm, “Update on the Hub and WITNESS’ New Online Strategy,” August 18, 2010, http://blog.witness.org/2010/08/update-on-the-hub-and-witness-new-online-strategy; Ethan Zuckerman, “Public Spaces, Private Infrastructure—Open Video Conference,” My Heart’s in Accra blog, October 1, 2010, www.ethanzuckerman.com/blog/2010/10/01/public-spaces-private-infrastructure-open-video-conference. 234 “Protecting Yourself, Your Subjects and Your Human Rights Videos on YouTube”: http://youtube-global.blogspot.com/2010/06/protecting-yourself-your-subjects-and.html. 234 2010 Global Voices Citizen Media Summit: Sami Ben Gharbia, “GV Summit 2010 Videos: A Discussion of Content Moderation,” Global Voices Advocacy, May 7, 2010, http://advocacy.globalvoicesonline.org/2010/05/07/gv-summit-2010-videos-a-discussion-of-content-moderation; and Rebecca MacKinnon, “Human Rights Implications of Content Moderation and Account Suspension by Companies,” RConversation blog, May 14, 2010, http://rconversation.blogs.com/rconversation/2010/05/human-rights-implications.html; 235 “Digital Maoism”: Jaron Lanier, “Digital Maoism: The Hazards of the New Online Collectivism,” Edge: The Third Culture, May 30, 2006, www.edge.org/3rd_culture/lanier06/lanier06_index.html.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

For example, Facebook (like many other social networks) has developed increasingly specific community standards for the removal of objectionable content and accounts, listing dozens of categories of prohibited content as of late 2020. Because the platform has billions of active monthly users and billions of daily views,3 the sheer scale of content monitoring at Facebook is beyond the capabilities of human moderators alone. Despite Facebook reportedly having tens of thousands of people working on content moderation—with the objective of removing offensive content before users see it—the scale is simply such that it cannot be accomplished without AI. Such monitoring needs at Facebook and other companies have driven extensive research and development in an effort to automate text and image analysis by creating increasingly sophisticated machine learning, natural language processing, and computer vision techniques.
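To make the mechanism concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of text classifier such automated-moderation pipelines rest on. The training posts, labels, and routing threshold are invented for illustration; real systems combine far larger models with image and behavioural signals, and this is not Facebook's actual pipeline.

```python
# Toy sketch of an automated text-moderation classifier.
# Illustrative only: training data, labels and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labelled posts (1 = violates policy, 0 = acceptable).
train_posts = [
    "I will hurt you and your family",         # violating
    "people like you deserve to be attacked",  # violating
    "get out of our country or else",          # violating
    "lovely weather in the park today",        # acceptable
    "congratulations on the new job!",         # acceptable
    "does anyone have a good soup recipe?",    # acceptable
]
train_labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

def review(post: str, threshold: float = 0.8) -> str:
    """Route a post: auto-remove, send to a human moderator, or leave up."""
    p = model.predict_proba([post])[0][1]  # estimated probability of a violation
    if p >= threshold:
        return "remove"
    if p >= 0.5:
        return "human review"
    return "leave up"

for post in ["you deserve to be attacked", "soup recipe please"]:
    print(post, "->", review(post))
```

The point of the sketch is the routing, not the model: at platform scale only the uncertain middle band can realistically reach human moderators.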

Unless such AI is arrested early in its deployment, manually identifying and disabling all its content through individual investigations and decisions would prove deeply challenging for even the most sophisticated governments and network platform operators. For such a vast and arduous task, they would have to turn—as they already do—to content-moderation AI algorithms. But who creates and monitors these and how? When a free society relies on AI-enabled network platforms that generate, transmit, and filter content across national and regional borders, and when those platforms proceed in a manner that inadvertently promotes hate and division, that society faces a novel threat that should prompt it to consider novel approaches to policing its information environment.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

Almost as striking, over 70 percent of those white people were male. When the company trained its system on this data, it might do a decent job of recognizing white people, Raji thought, but it would fail miserably with people of color, and probably women, too. The problem was endemic. Matt Zeiler and Clarifai were also building what was called a “content moderation system,” a tool that could automatically identify and remove pornography from the sea of images people posted to online social networks. The company trained this system on two sets of data: thousands of obscene photos pulled from online porn sites, and thousands of G-rated images purchased from stock photo services.

See also intelligence ability to remove flagged content, 253 AI winter, 34–35, 288 AlphaGo competition as a milestone event, 176–78, 198 artificial general intelligence (AGI), 100, 109–10, 143, 289–90, 295, 299–300, 309–10 the black-box problem, 184–85 British government funding of, 34–35 China’s plan to become the world leader in AI by 2030, 224–25 content moderation system, 231 Dartmouth Summer Research Conference (1956), 21–22 early predictions about, 288 Elon Musk’s concerns about, 152–55, 156, 158–60, 244, 245, 289 ethical AI team at Google, 237–38 “Fake News Challenge,” 256–57 Future of Life Institute, 157–60, 244, 291–92 games as the starting point for, 111–12 GANs (generative adversarial networks), 205–06, 259–60 government investment in, 224–25 importance of human judgment, 257–58 as an industry buzzword, 140–41 major contributors, 141–42, 307–08, 321–26 possibilities regarding, 9–11 practical applications of, 113–14 pushback against the media hype regarding, 270–71 robots using dreaming to generate pictures and spoken words, 200 Rosewood Hotel meeting, 160–63 the Singularity Summit, 107–09, 325–26 superintelligence, 105–06, 153, 156–60, 311 symbolic AI, 25–26 timeline of events, 317–20 tribes, distinct philosophical, 192 unpredictability of, 10 use of AI technology by bad actors, 243 AT&T, 52–53 Australian Centre for Robotic Vision, 278 autonomous weapons, 240, 242, 244, 308 backpropagation ability to handle “exclusive-or” questions, 38–39 criticism of, 38 family tree identification, 42 Geoff Hinton’s work with, 41 Baidu auction for acquiring DNNresearch, 5–9, 11, 218–19 as competition for Facebook and Google, 132, 140 interest in neural networks and deep learning, 4–5, 9, 218–20 key players, 324 PaddlePaddle, 225 translation research, 149–50 Ballmer, Steve, 192–93 Baumgartner, Felix, 133–34 Bay Area Vision Meeting, 124–25 Bell Labs, 47, 52–53 Bengio, Yoshua, 57, 162, 198–200, 205–06, 238, 284, 305–08 BERT universal language model, 273–74 bias Black in AI, 233 of deep learning technology, 10–11 facial recognition systems, 231–32, 234–36, 308 of training data, 231–32 Billionaires’ Dinner, 154 Black in AI, 233 Bloomberg Businessweek, 132 Bloomberg News, 130, 237 the Boltzmann Machine, 28–30, 39–40, 41 Bostrom, Nick, 153, 155 Bourdev, Lubomir, 121, 124–26 Boyton, Ed, 287–88 the brain innate machinery, 269–70 interface between computers and, 291–92 mapping and simulating, 288 the neocortex’s biological algorithm, 82 understanding how the brain works, 31–32 using artificial intelligence to understand, 22–23 Breakout (game), 111–12, 113–14 Breakthrough Prize, 288 Brin, Sergey building a machine to win at Go, 170–71 and DeepMind, 301 at Google, 216 Project Maven meetings, 241 Brockett, Chris, 54–56 Brockman, Greg, 160–64 Bronx Science, 17, 21 Buolamwini, Joy, 234–38 Buxton, Bill, 190–91 Cambridge Analytica, 251–52 Canadian Institute for Advanced Research, 307 capture the flag, training a machine to play, 295–96 Carnegie Mellon University, 40–41, 43, 137, 195, 208 the Cat Paper, 88, 91 Chauffeur project, 137–38, 142 Chen, Peter, 283 China ability to develop self-driving vehicles, 226–27 development of deep learning research within, 220, 222 Google’s presence in, 215–17, 220–26 government censorship, 215–17 plan to become the world leader in AI by 2030, 224–25, 226–27 promotion of TensorFlow within, 220–22, 225 use of facial recognition technology, 308 Wuzhen AlphaGo match, 214–17, 223–24 Clarifai, 230–32, 235, 239–40, 249–50, 325 Clarke, Edmund, 195 cloud computing, 221–22, 245, 298 
computer chips. See microchips connectionism, 25, 33–34, 60–61 content moderation system, 231 Cornell Aeronautical Laboratory, 17, 26 Corrado, Greg, 87–88, 183 Courage, Jane, 190–91 Covariant, 283–85, 310 CPUs (central processing units), 90–91 credibility deepfakes, 211, 260 images as evidence, 211 Crick, Francis, 35–36 Cyc project, 26, 29, 288 Dahl, George, 63–64, 72–74, 77, 181–83, 271 Dalle Molle, Angelo, 59 The Dam Busters (film), 89–90 DARPA funding for AI research, 44–45 data Cambridge Analytica data breach, 251–52 importance of, 226–27 privacy concerns, 248 using more data to solve a problem, 279 data centers, 76–77, 136, 138–39, 146–50, 299–300 Dean, Jeff and Andrew Ng, 84, 86–88 in China, 217 data center alternative, 146–50 Distillation project, 150 at Google, 85–88, 90–91, 129, 135–36, 180, 193, 279 Google Brain project to explore medical applications of deep learning, 183 meeting with DeepMind, 113, 115 neural networks experience, 87 at the NIPS Conference, 5 Turing Award introduction, 306 upbringing and education, 85 deep belief networks, 63, 68 deepfakes, 211, 260 deep learning building a machine to win at Go, 167–78, 269 disputes regarding who the major contributors were, 141–42 GANs (generative adversarial networks), 205–06, 259–60 and Google searches, 83–84 Institute of Deep Learning, 219 key players, 197–200 limitations of, 267, 268–71 as math and not AI, 142 medical applications, 179–80 Microsoft’s opposition to, 192–93, 197 and neural speech recognition, 67–68 origins of, 4, 64–65 proving the success of, 91–98 researcher bias, 10–11 for self-driving vehicles, 137–38, 197–98 teaching machines teamwork and collaboration, 296 DeepMind AI improvements at, 152 AI safety team, 157–58 AlphaGo, 169–78, 198, 214–17, 223–24 auction for acquiring DNNresearch, 5–7, 100 building systems that played games, 143, 155, 169–78, 295–98 data from the Royal Free NHS Trust, 188 DeepMind Health, 187–88 differences among the founders of, 186–87, 300–01 Facebook’s interest in, 121–24 Google’s acquisition of, 100, 112–16, 300–01 key players, 322–23 lack of revenue, 114–15 and OpenAI, 165–66, 289 petition against Google’s involvement with Project Maven, 248 power consumption improvements in Google’s data centers, 139 salary expenses, 132 tension between Google Brain and, 185–86 venture capital investments in, 110 WaveNet, 260 Defense Innovation Board, 242 Deng, Li and Geoff Hinton, 69–73, 218–19 role at Microsoft, 11, 66–67, 191–92 speech recognition research, 66–68, 193–94, 218–19 Department of Defense.

pages: 321 words: 105,480

Filterworld: How Algorithms Flattened Culture
by Kyle Chayka
Published 15 Jan 2024

But algorithms can misinterpret language. Gade showed me a case in which a model was assigning the word gay a very negative connotation, meaning that content that included it wasn’t getting prioritized. That could be a complete mistake if the word is meant positively—or perhaps it should be interpreted as neutral. If automated content moderation or recommender systems misinterpret a word or behavior, “You could potentially silence an entire demographic,” Gade said. That’s why being able to see what’s happening around a given algorithmic decision is so important. Twitter has provided an object lesson in the damage of nontransparency.
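A toy illustration of the failure mode Gade describes: if a lexicon-based scorer carries a negative weight for a neutral identity word, any prioritisation layer built on top of it will quietly filter out that community's posts. The lexicon and weights below are invented for this sketch, not drawn from any real system.

```python
# Illustrative only: a naive lexicon-based scorer with one mislabelled word.
FLAWED_LEXICON = {
    "awful": -0.9,
    "hate": -0.8,
    "gay": -0.8,   # the bug: a neutral identity term given a negative weight
    "great": 0.8,
    "love": 0.9,
}

def score(post: str) -> float:
    """Sum the lexicon weights of the words in a post."""
    return sum(FLAWED_LEXICON.get(w, 0.0) for w in post.lower().split())

def prioritised(posts):
    """A crude recommender: keep only posts that score at least zero."""
    return [p for p in posts if score(p) >= 0.0]

posts = [
    "happy gay pride picnic today",   # harmless, but silently filtered out
    "what a great day",
    "i hate this awful traffic",
]
print(prioritised(posts))
```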

The problem with Section 230 is that ultimately, and strangely, the law makes it so that no one is currently responsible for the effects of algorithmic recommendations. The tech companies themselves aren’t liable, the systems are only regulated internally, and users are left to fend for themselves, short of basic content moderation. If algorithmic feeds mistreat us or contribute to an abusive or exploitative online environment, we, as users and citizens, have little recourse. One of our only possibilities is switching to another platform, and yet that, too, has already been limited by the problem of monopolization. Our relationship to algorithmic feeds feels like a trap: we can neither influence them nor escape them.

(Such content would still be filtered, in a way, by having a singular filter imposed on it.) TikTok’s wide range of niche recommendations, for example, would be impossible. One could imagine a TikTok feed made up of only clips that would fit in the anodyne TV show America’s Funniest Home Videos. It would still be amusing, but hardly addictive or manipulative. Content moderation would have to be much stricter. Changing the balance to emphasize linear, opt-in content over automated recommendations might be a good thing, if it can limit the possible exposure of harmful material online. But this would be a much more sanitized, and by necessity slower, Internet. The problem comes down to determining which kinds of content should be able to travel so quickly and frictionlessly across Filterworld, and which should be slowed down or stopped entirely.

pages: 282 words: 63,385

Attention Factory: The Story of TikTok and China's ByteDance
by Matthew Brennan
Published 9 Oct 2020

Again, more metrics will be evaluated, with the top-performing videos passing on to the next level, where they gain exposure to an even larger audience. As the video moves up to higher tiers, it will gain exposure to potentially millions of users. The process isn’t entirely run by an algorithm. At the higher levels, a person on the content moderation team will watch the video and follow a set of strict guidelines to confirm that it does not violate the platform’s terms of service or have any copyright issues. There are cases of videos reaching up to a million views only to suddenly be taken down once they hit the human review process. On a platform like TikTok, with such massive amounts of attentive users, there was no shortage of bad actors trying to find loopholes and shortcuts to game the system.
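As a rough sketch of that tiered roll-out, the loop below promotes a video to progressively larger audience pools whenever its engagement clears a bar, with a human-review gate at the higher tiers. The tier sizes, thresholds, and review cut-off are invented for illustration and are not ByteDance's actual parameters.

```python
import random

# Illustrative parameters only - not TikTok's real tiers or thresholds.
TIER_SIZES = [300, 3_000, 30_000, 300_000, 3_000_000]
PROMOTE_IF_ENGAGEMENT_ABOVE = 0.10   # likes + shares + completions per view
HUMAN_REVIEW_FROM_TIER = 3           # larger pools get a manual policy check

def passes_human_review(video_id: str) -> bool:
    """Stand-in for a moderator checking the video against the guidelines."""
    return True  # assume the video is compliant in this sketch

def distribute(video_id: str) -> int:
    views = 0
    for tier, audience in enumerate(TIER_SIZES):
        if tier >= HUMAN_REVIEW_FROM_TIER and not passes_human_review(video_id):
            break  # taken down even after reaching a large audience
        views += audience
        engagement = random.uniform(0.0, 0.3)  # stand-in for measured metrics
        if engagement < PROMOTE_IF_ENGAGEMENT_ABOVE:
            break  # stops here; never shown to the bigger pools
    return views

print(distribute("video-123"), "total impressions")
```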

Standardized elements – universal across all markets
Branding: the TikTok name, logo, and distinctive visual identity
UX, UI: the core features and design, product logic
Technology: recommendation, search, classification, facial recognition
Localized elements – tailored to specific geographies and languages
Content: the pool of recommended videos
Operations: marketing, promotion, and growth
Ancillary services could also be localized once the user base reached scale. These included:
Commercialization: ad sales, business development
Others: government relations, legal and content moderation
Central to this system was the concept of regionalized content pools based on geography, culture, and language.239 The core TikTok experience was the “For You” feed, which was localized for each market. Japanese users would not be recommended content from Indonesian accounts and vice-versa.
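One way to picture the split is as configuration: standardized elements live in a single global object, while each market carries its own content pool that the localized "For You" feed draws from. The structure below is a hypothetical simplification of that idea, not ByteDance's actual architecture.

```python
from dataclasses import dataclass, field

# Hypothetical simplification of the global/local split described above.
GLOBAL_CONFIG = {
    "branding": "TikTok",
    "core_features": ["For You feed", "duets", "effects"],
    "technology": ["recommendation", "search", "classification"],
}

@dataclass
class RegionalPool:
    """Per-market content pool plus locally run operations."""
    region: str
    videos: list = field(default_factory=list)
    operations: list = field(default_factory=lambda: ["marketing", "moderation"])

POOLS = {
    "JP": RegionalPool("JP", videos=["jp_dance_01", "jp_cook_07"]),
    "ID": RegionalPool("ID", videos=["id_comedy_12", "id_music_03"]),
}

def for_you(user_region: str, n: int = 2) -> list:
    """Recommend only from the viewer's regional pool."""
    return POOLS[user_region].videos[:n]

print(for_you("JP"))  # Japanese users never see the Indonesian pool
```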

pages: 237 words: 74,109

Uncanny Valley: A Memoir
by Anna Wiener
Published 14 Jan 2020

We ran emails claiming to be from the Russian government through translation software and passed them to Legal with spinning question-mark emojis. We sifted through reports of harassment, revenge porn, child porn, and terrorist content. We pinged our more technical coworkers to examine malware and purportedly malicious scripts. We became reluctant content moderators, and realized we needed content policies. My teammates were thoughtful and clever, opinionated but fair. Speaking for a platform, however, was nearly impossible, and none of us were particularly well qualified to do it. We wanted to tread lightly: core participants in the open-source software community were sensitive to corporate oversight, and we didn’t want to undercut anyone’s techno-utopianism by becoming an overreaching arm of the company-state.

Systems thinkers, for whom the system was computational, and did not extend into the realm of the social. The software was transactional, fast, scalable, diffuse. Crowdfunding requests for insulin spread as quickly and efficiently as anti-vaccination propaganda. Abuses were considered edge cases, on the margin—flaws that could be corrected by spam filters, or content moderators, or self-regulation by unpaid community members. No one wanted to admit that abuses were structurally inevitable: indicators that the systems—optimized for stickiness and amplification, endless engagement—were not only healthy, but working exactly as designed. * * * In the spring, a far-right publication ran a blog post about the VP of Social Impact, zeroing in on her critique of diversity-in-tech initiatives that tended to disproportionately benefit white women.

Doppelganger: A Trip Into the Mirror World
by Naomi Klein
Published 11 Sep 2023

Or that cities like Dubai and Doha are built and maintained by armies of migrants living and working in conditions so abject that when they are killed on the job, their employers face no consequences. Or that warehouse workers in New Jersey have to fight one of the three richest men on the planet to get breaks long enough to make it to the toilet. Or that content moderators in Manila must stare at beheadings and child rapes all day to keep our social media feeds “clean.” Or that all of our frenetic consumption and energy use fuels wildfires in the swanky suburbs of Los Angeles and Sonoma that are battled by prison inmates who are paid just dollars a day for this perilous work, even as migrants from Central American nations battered by their own climate disasters pick avocados and strawberries in the toxic air—and if they fall ill or protest for fairer conditions are instantly sent home without pay, discarded like bruised fruit.

Business Insider Butler, Judith Callison, William calm Cam Cambridge Analytica Campbell, Naomi Canada; British Columbia; Canada Day; residential schools for Indigenous people in Canada Council for the Arts Canadian Anti-Hate Network cancel culture Capgras delusion capitalism; conspiracies and; disaster; and Jewish interest in Marxism; Nazi anti-Semitism and; neoliberal; progressive-cloaked; surveillance; woke Carlile, Brandi Carlson, Tucker Carlyle Group Catholicism Cave, Nick cell phones; manufacture of censorship Center for Countering Digital Hate Centers for Disease Control (CDC) Césaire, Aimé CGI changeling myths Chaplin, Charlie Chauvin, Derek Cheney, Dick childbirth children; achievements and; disabilities in, see disabilities; Nazi views on; parents and; in Red Vienna Children’s Health Defense Chile China, Chinese Communist Party (CCP); Covid and; social credit system of Chomsky, Noam Christ Christians; Catholic; evangelical; Jews and; residential schools and Church, Frank Church Committee CIA City & the City, The (Miéville) civilization civil rights movement climate change; climate justice; Gates and; lockdown threat and; self and Clinton, Bill Clinton, Hillary Clinton Global Initiative clouds clout CNN Coates, Ta-Nehisi Cohen, Michelle Colier, Nancy collective organizing college admissions colonialism; Israel as; movements against; Nazi Germany and; place names and; see also Indigenous people Columbus, Christopher Comăneci, Nadia Commentary communism; Jews and; see also China, Chinese Communist Party communities concentration camps Conrad, Joseph Conservative Party conspiracies, real conspiracy theories; about Covid; about Great Reset; about Jews; QAnon conspirituality Constitution, U.S. consumers content moderators copyrights and trademarks Corporate Self university course Counterlife, The (Roth) Covid pandemic; as bioweapon; China and; conspiracy theories about; as culling the herd; Disinformation Dozen and; far right and far-out alliance and; “Five Freedoms” and; Gates and; gyms during; health and wellness cultures and; individualism and; lockdowns in; long Covid; masks in; origin of the virus; as portal for change; profiteering from; race and class disparities and; relief programs during; religious people and; risk factors and; schools and; shock doctrine in; social media and; tech companies and; tests for the virus; Trump and; workers and Covid vaccines; adverse reactions to; Freedom Convoy and; mandates, passports, and apps; nanoparticles in; Nazi Germany analogies and; patents on and profits from; racial oppression analogies and; reproductive health and; restaurants and; shedding claims about; “slavery forever” video and; vaccine-autism myth and Crackdown Croatia Culture and Imperialism (Said) Cuomo, Andrew currency customization DailyClout Daily Command Brief Dark Matters (Browne) Darwin, Charles Darwish, Mahmoud data Davis, Angela Davis, Mike death de Bres, Helena Debt Collective Deception (Roth) DeepBrain AI Inc.

Seventeen sex Seymour, Richard “Shadow, The” (Andersen) Shadow Lands Shakespeare, William Sheffer, Edith shock, shock doctrine; Covid and Shock Doctrine, The (Klein) shootings, mass Siebenkās (Jean Paul) Silberman, Steve Silicon Valley; see also tech companies Simmons, Russell Simpson, Leanne Betasamosake Sinclair, Murray Singh, Lilly Sister Outsider (Lorde) 60 Minutes Slate slavery “slavery forever” video Slobodian, Quinn smallpox Smalls, Chris smart cities Smith, Adam Smith, Zadie Snowden, Edward social credit systems Social Democrats socialism; Jewish attraction to social media; avatars on; content moderators and; Covid and; digital doubles on; enclosure process and; Facebook; hacking of accounts on; influencers on; Instagram; Twitter, see Twitter; vaccine-autism myth on; YouTube, see YouTube social networks Social Security societies Son, John Sonalker, Anuja Soros, George souls South Africa South Korea Soviet Union Spain Spanish Inquisition Spartacus League Spectator, The speechlessness spyware Stalin, Joseph Starbucks State Department Status Update (Marwick) Steer Tech Stepford Wives, The sterilization, forced Stevenson, Robert Louis Stone, I.

pages: 304 words: 80,143

The Autonomous Revolution: Reclaiming the Future We’ve Sold to Machines
by William Davidow and Michael Malone
Published 18 Feb 2020

Recently, there have been a number of calls—and actions by certain social networks, such as Twitter and Facebook—to control the use of bots that spread fake news, block groups that spread hate messages, and set up independent groups to certify the reliability of news sources. In 2018, Facebook announced that it would hire 10,000 new security and content moderation employees.35 But one criticism of these efforts is that the philosophical and political biases of the content moderators will stifle free speech. As Juvenal famously warned, “Quis custodiet ipsos custodes?” Who will guard the guardians? In the past, the best cure for hate speech, lies, and hyper-partisanship has been free speech. A free press exposes people to objective data—and also invites them, via balanced news coverage and a diverse selection of opinion pieces—to consider other people’s points of view.

pages: 314 words: 88,524

American Marxism
by Mark R. Levin
Published 12 Jul 2021

The NTIA’s report also issues a sharp warning: “We caution that efforts to control or monitor online speech, even for the worthy goal of reducing crime, present serious First Amendment concerns and run counter to our nation’s dedication to free expression….”70 The NTIA strongly admonishes Big Tech against its tyrannical practices: “[T]ech leaders have recognized that relying on human teams alone to review content will not be enough and that artificial intelligence will have to play a significant role. That said, there are, of course, significant policy and practical limitations to reliance on automated content moderation. Interestingly, much of this technology is being developed from approaches pioneered by the Chinese Communist Party to stifle political discussion and dissent. The report goes on: “Given that all the major social media platforms have rules against hate speech and, in fact, employ sophisticated algorithmic artificial intelligence (AI) approaches to enforce these often vague and contradictory rules in a manner also used by tyrannous regimes, it is appropriate to ask what they gain from it.

The law also creates a liability shield for the platforms to ‘restrict access to or availability of material that the provider or user considers to be… objectionable, whether or not such material is constitutionally protected.’ ”58 She adds: “A handful of Big Tech companies are now controlling the flow of most information in a free society, and they are doing so aided and abetted by government policy. That these are merely private companies exercising their First Amendment rights is a reductive framing which ignores that they do so in a manner that is privileged—they are immune to liabilities to which other First Amendment actors like newspapers are subject—and also that these content moderation decisions occur at an extraordinary and unparalleled scale.”59 Thus, when Republicans next control Congress and the presidency, they must be aggressively pressured to withdraw Section 230 immunity from Big Tech, which President Trump attempted to do but was thwarted by his own party. Moreover, Facebook billionaire Mark Zuckerberg’s interference with and attempted manipulation of elections, including the presidential election in 2020 with hundreds of millions in targeted contributions, as well as Google’s manipulation of algorithms, must be investigated and outlawed both at the federal and state level.60 You can contact friendly state legislators and file complaints against corporations that make what are effectively in-kind contributions with various federal and state agencies and, again, show up at their shareholder meetings and be heard.

pages: 446 words: 109,157

The Constitution of Knowledge: A Defense of Truth
by Jonathan Rauch
Published 21 Jun 2021

Per Baybars Örsek, the director of the International Fact-Checking Network, podcast interview by Jen Patja Howell for Lawfare, March 26, 2020. 40. For a roundup of steps taken by social media companies against misinformation during the pandemic, see Evelyn Douek, “COVID-19 and Social Media Content Moderation,” Lawfare, March 25, 2020. Chapter 6 1. Adrian Chen, “The Agency,” New York Times Magazine, June 2, 2015. 2. Andy Szal, “Report: Russian ‘Internet Trolls’ behind Louisiana Chemical Explosion Hoax,” Manufacturing.net, June 3, 2015. 3. Translated by Paul Shorey, in The Collected Dialogues of Plato, Edith Hamilton and Huntington Cairns, eds.

See also persuasion criticism vs. coercive conformity, 217–20 Crockett, Molly, 128 cults, 88 Current Biology, 74 The Daily Stormer, 158–59 Dartmouth Review, 160 Debs, Eugene V., 253 Decety, Jean, 74 demarcation problem, 97 democracy: development of, 62; truthfulness in, 8–9, 112; U.S. Constitution and, 81–82; virtuous public and, 112. See also liberalism demoralization and demobilization, 162, 166–69, 217–18, 243, 247, 249 deplatforming, 39, 200, 205, 217–20, 236, 240, 255–59 design of digital media, 17–18, 139, 146–49 digital media: cancel culture and, 210; content moderating efforts of, 143–47; designing for truth in, 17–18, 139, 146–49; “disinfonomics” of, 135–38; disinformation campaigns and, 17, 161–66, 168, 184; institutionalization of, 18, 119, 143, 149–54, 239–40; misinformation through, 124–26, 133–35; outrage addiction and, 126–31; private realities of, 131–33, 169; Wikipedia as model for, 138–44, 146–48 DiResta, Renée, 161, 168 disagreement, 131–32, 133.

pages: 334 words: 104,382

Brotopia: Breaking Up the Boys' Club of Silicon Valley
by Emily Chang
Published 6 Feb 2018

While Twitter seems often reluctant to act on behalf of users who have been abused, this is one prominent case in which Twitter was remarkably responsive to censoring someone who was trying to out an abuser. These seemingly inexplicable decisions might be explained in part by a closer look at how offensive content is handled once it is reported. The social networks, including Twitter, outsource most content moderation to contractors around the world. While there’s hope that technology, with the help of artificial intelligence, might be able to enforce rules more consistently in the future, for now, the task is up to humans. The contractors faced with the difficult job of filtering and flagging disturbing content on these networks generally don’t last long and must constantly be retrained, yet wield an inordinate amount of power when it comes to deciding what stays up and what comes down.

says it works harder: Deepa Seetharaman, “Twitter Takes More Proactive Approach to Finding Trolls,” Wall Street Journal, March 1, 2017, https://www.wsj.com/articles/twitter-takes-more-proactive-approach-to-finding-trolls-1488394514. 66 million users: Brad Stone and Miguel Helft, “Facebook Hires Google Executive as No. 2,” New York Times, March 4, 2008, http://www.nytimes.com/2008/03/04/technology/04cnd-facebook.html. In 2017, it added: Olivia Solon, “Facebook Is Hiring Moderators. But Is the Job Too Gruesome to Handle?,” Guardian, May 4, 2017, https://www.theguardian.com/technology/2017/may/04/facebook-content-moderators-ptsd-psychological-dangers. In an interview with Axios: Mike Allen, “Exclusive Interview with Facebook’s Sheryl Sandberg,” Axios, Oct. 12, 2017, https://www.axios.com/exclusive-interview-facebook-sheryl-sandberg-2495538841.html. “a better place for everyone”: Pao, Reset, 166. When one of the site’s: Mike Isaac, “Details Emerge About Victoria Taylor’s Dismissal at Reddit,” New York Times, July 13, 2015, https://bits.blogs.nytimes.com/2015/07/13/details-emerge-about-victoria-taylors-dismissal-at-reddit.

pages: 344 words: 104,522

Woke, Inc: Inside Corporate America's Social Justice Scam
by Vivek Ramaswamy
Published 16 Aug 2021

Railway Labor Executives’ Assn, 489 U.S. 602, 109 S. Ct. 1402 (1989). casetext.com/case/skinner-v-railway-labor-executives-assn?. 40. Romm, Tony. “The Technology 202: Lawmakers Plan to Ratchet Up Pressure on Tech Companies’ Content Moderation Practices.” The Washington Post, 17 July 2020, www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2019/04/09/the-technology-202-lawmakers-plan-to-ratchet-up-pressure-on-tech-companies-content-moderation-practices/5cabee50a7a0a475985bd372/. 41. Blumenthal, Richard. “No Private Company Is Obligated to Provide a Megaphone for Malicious Campaigns to Incite Violence. It Took Blood & Glass in the Halls of Congress—& a Change in the Political Winds—for the Most Powerful Tech Companies to Recognize, at the Last Possible Moment, the Threat of Trump.

pages: 447 words: 111,991

Exponential: How Accelerating Technology Is Leaving Us Behind and What to Do About It
by Azeem Azhar
Published 6 Sep 2021

Your typical driver will make $19.73 per hour before expenses, or $30,390 a year if they drive 40 hours a week. There is a similar dynamic at work in Facebook. Half of all employees at Facebook, from engineers to marketers, accountants to salespeople, make $240,000 a year or more.79 Facebook’s content moderators, who are not employed by the firm but rather contracted via temping agencies, are paid on average $28,000 per annum. Ordinary people who create the contacts, content and conversation on Facebook are, of course, paid nothing. And this is not just true of digital platforms. Zymergen, a breakthrough company working on the intersection of biology and artificial intelligence, has a similar bifurcation of its workers.

Today’s complex exponential platforms are opaque: their inner workings are hidden and their decisions are largely cloaked in corporate pabulum. Greater transparency would allow us to observe how decisions by tech elites are made, and identify what effect they have on us. Consider what this might mean for content moderation. Silencing the prime minister of Norway over war reportage but allowing the American president to rant on for four years (before abruptly silencing him too) reveals just how opaque these rules have become. Imagine if, instead, we had clear and explicit rules on what forms of speech are deemed acceptable by social platforms, and what forms aren’t.

Super Thinking: The Big Book of Mental Models
by Gabriel Weinberg and Lauren McCann
Published 17 Jun 2019

You should consider heuristics because they can be a shortcut to a solution for the problem in front of you, even if they may not work as well in other situations. If the problem persists, however, and you keep adding more heuristic rules, this type of solution can become unwieldy. That’s what has happened to Facebook with content moderation. The company started out with a simple set of heuristic rules (e.g., no nudity), and gradually added more and more rules (e.g., nudity in certain circumstances, such as breastfeeding, is okay), until as of April 2018 it had amassed twenty-seven pages of heuristics. Algorithms, step-by-step processes, are another approach.
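A short sketch of why such a rule-set grows unwieldy: each new edge case becomes another predicate bolted onto the list, and later rules exist mainly to carve exceptions out of earlier ones. The rules below are invented stand-ins for the kind of policies described, not Facebook's actual guidelines.

```python
# Illustrative only: heuristic moderation rules accreting exceptions over time.
RULES = []  # each rule is (name, predicate); the last matching rule wins

def add_rule(name, predicate):
    RULES.append((name, predicate))

# Version 1: one simple heuristic.
add_rule("remove nudity", lambda post: post["nudity"])

# Version 2: an exception carved out of rule 1.
add_rule("allow breastfeeding", lambda post: post["nudity"] and post["breastfeeding"])

# Versions 3, 4, ...: more rules pile up until the rulebook runs to dozens of pages.
add_rule("remove hate speech", lambda post: post["hate_speech"])

def moderate(post) -> str:
    verdict = "keep"
    for name, predicate in RULES:
        if predicate(post):
            verdict = "keep" if name.startswith("allow") else "remove"
    return verdict

post = {"nudity": True, "breastfeeding": True, "hate_speech": False}
print(moderate(post))  # "keep": the later exception overrides the earlier rule
```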

You don’t care how you got the best seats, you just want the best seats! You can think of each algorithm as a box where inputs go in and outputs come out, but outside it is painted black so you can’t tell what is going on inside. Common examples of black box algorithms include recommendation systems on Netflix or Amazon, matching on online dating sites, and content moderation on social media. Physical tools can also be black boxes. Two sayings, “The skill is built into the tool” and “The craftsmanship is the workbench itself,” suggest that the more sophisticated tools get, the fewer skills are required to operate them. Repairing or programming them is another story, though!

pages: 521 words: 118,183

The Wires of War: Technology and the Global Struggle for Power
by Jacob Helberg
Published 11 Oct 2021

Others continued to doubt whether disinformation and foreign interference were much of a problem. There were substantive concerns, of course. In the wake of Twitter banning President Trump’s account, calls for restraint on content moderation have grown louder. Prominent tech figures such as David Sacks have cautioned against “decisions to permanently ban or de-platform individuals and/or businesses with no ability to appeal.”66 As someone that has spent years working on content moderation issues, I am still confident and hopeful that conduct-based approaches can be effective ways of addressing platform abuse while simultaneously upholding foundational free speech principles.

pages: 439 words: 131,081

The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World
by Max Fisher
Published 5 Sep 2022

Maybe tightened liability laws could realign their incentives. But the window quickly closed. The social media giants were invested too deeply in status quo financial and ideological models for such radical change. Mostly, they built on the methods they knew best: automated technology and at-scale content moderation. Twitter, for instance, dialed up its “friction,” adding prompts and interstitials (“Want to read the article first?”) to slow users from compulsively sharing posts. It was a meaningful change, but far short of Jack Dorsey’s suggestions of rethinking the platform’s underlying mechanics, which remained in place.
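In code terms, "friction" is just an extra conditional placed between the user's impulse and the share action. The sketch below shows the idea behind the "Want to read the article first?" interstitial; the flow is a hypothetical simplification, not Twitter's implementation.

```python
# Hypothetical simplification of a pre-share friction prompt.
def share(post, user_has_opened_link: bool, confirm) -> bool:
    """Return True if the post is actually reshared."""
    if post.get("has_link") and not user_has_opened_link:
        # Interstitial: interrupt the compulsive one-tap reshare.
        if not confirm("Want to read the article first?"):
            return False
    publish(post)
    return True

def publish(post):
    print("reshared:", post["id"])

# Simulate a user who dismisses the prompt without sharing.
shared = share({"id": "p1", "has_link": True},
               user_has_opened_link=False,
               confirm=lambda prompt: False)
print("shared" if shared else "share cancelled by the interstitial")
```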

That’s Really a You Problem,” Nellie Bowles, New York Times, October 6, 2019. 77 Harris called the campaign: Pardes, 2018. 78 “The CEOs, inside”: “Where Silicon Valley Is Going to Get in Touch with Its Soul,” Nellie Bowles, New York Times, December 4, 2017. 79 an American moderator filed a lawsuit: “Ex-Content Moderator Sues Facebook, Saying Violent Images Caused Her PTSD,” Sandra E. Garcia, New York Times, September 25, 2018. 80 In 2020, Facebook settled: “Facebook Will Pay $52 Million in Settlement with Moderators Who Developed PTSD on the Job,” Casey Newton, The Verge, May 12, 2020. Chapter 11: Dictatorship of the Like 1 a far-right lawmaker edited footage: “É horrível ser difamado pelo Bolsonaro,” Débora Lopes, Vice Portuguese, May 11, 2013. 2 millions of acres: “With Amazon on Fire, Environmental Officials in Open Revolt against Bolsonaro,” Ernesto Londoño and Letícia Casado, New York Times, August 28, 2019. 3 “I wouldn’t rape you”: “A Look at Offensive Comments by Brazil Candidate Bolsonaro,” Stan Lehman, Associated Press, September 29, 2018. 4 festered on the fringes of Brazil’s: “URSAL, Illuminati, and Brazil’s YouTube Subculture,” Luiza Bandeira, Digital Forensic Research Lab, August 30, 2018. 5 Dozens more conspiracies: “Fast and False in Brazil,” Luiza Bandeira, Digital Forensic Research Lab, September 19, 2018. 6 second-largest market; “Pesquisa Video Viewers: como os brasileiros estão consumindo vídeos em 2018,” Maria Helena Marinho, Google Marketing Materials, September 2018. 7 Almeida had some ideas: Almeida and his team provided us with several separate reports documenting their methodology and findings, along with the underlying raw data, in a series of interviews conducted throughout early 2019.

pages: 574 words: 148,233

Sandy Hook: An American Tragedy and the Battle for Truth
by Elizabeth Williamson
Published 8 Mar 2022

But here were the social platforms, scurrying to take down porn while trotting out the First Amendment to explain why they didn’t remove abusive content. Why? Because despite what they say, the platforms are all about pleasing their advertisers, most of whom don’t want their ads adjacent to sexually explicit content. Lenny wondered whether the content moderators were actually human beings. In Facebook’s case, they were poorly paid contractors, traumatized by the livestreamed beatings, suicides, and beheadings they watched every day. No discernible standards governed what stayed up and what came down. “The platforms were concerned with growth and income,” Lenny said.

See also specific individuals, including Jones, Alex Conspiracy Theorists Anonymous Facebook group, 167–69, 171 Cooper, Anderson: Maguire’s “archival footage” of, 225; and the McDonnell family, 128; and Parker’s statement, 98; on Tracy, 111–13, 134; Veronique’s interview with, 69–70, 118–21, 157, 161, 196, 221, 302, 310, 311, 351, 352 copyright-violations tactic, 195–96, 197–98, 199–200, 224, 239, 277, 290, 297–99, 304–6, 401, 421 Corgan, Billy, 267 coronavirus pandemic, 115, 250, 252, 415–16, 419–20, 424, 425, 436, 443 Costolo, Dick, 208 Coughlin, Charles E., 74 Cox, Sally, 100 Craigie, Philip “Professor Doom,” 219, 224 Cruson, Daniel, Jr., 274–75 Cruz, Nikolas, 308–9 Cruz, Ted, 262–63, 383–87, 389, 434 D D’Amato, Amanda, 156 Danbury Hospital, 26–27 Daniels, Kit, 309 defamation, 310, 323, 334, 336 Denying the Holocaust (Lipstadt), 191 Dew, Rob, 177–80, 198, 201, 268, 270, 277–78, 291, 367–68, 372, 413, 429 Dievert, Lisa, 61–62 DiFonzo, Nicholas, 151, 153 Dorsey, Jack, 165, 377, 378 Drudge Report, 113, 210, 279 Dykes, Aaron, 121 E Eastman, John, 433 El Paso, mass shooting in (2019), 440 Enoch, Mark, 347, 350–52, 368 F Facebook: bad actors on, 207–8; and Cambridge Analytica, 340–41, 343; Conspiracy Theorists Anonymous Facebook group, 167–69, 171; content moderators of, 194, 345; and coronavirus pandemic, 417, 419–20; disinformation on, 419–20; failure to protect the vulnerable, 163–65, 341, 344; and fake news, 241, 343, 344; and First Amendment protections, 204, 206, 209–10; and games countering misinformation, 424; and Infowars, 342, 343, 345, 375; and Jones, 166, 342, 343, 345, 370, 375; as news source, 258; and Pizzagate, 243, 250; and Pozners’ open letter to Zuckerberg, 343–45; Pozner’s trusted status with, 421; and QAnon, 250–51, 252; Sandy Hook Hoax Facebook group, 148–50, 153–60, 163–64, 166–69, 170, 226, 328, 392, 440; Sandy Hook Truth Facebook group, 167, 172; and Section 230 protections, 207; Swisher’s criticisms of, 165–66; traffic on, 422 Fairfield State Hospital, 12 fake news, 91–92, 241, 321, 336 families of Sandy Hook victims: and Clinton, 257; Congressional testimony of, 288–89; death threats received by, 106–7, 169–71, 218–20, 221, 236, 302, 343; doxxing of, 106, 107–8, 198, 199, 200–201, 219, 277, 303, 304, 351, 443; and Facebook platform, 166; grieving of, 55; harrassment of, 103, 170, 177, 183, 184, 231–34, 236–38, 239, 331, 343–44, 382, 388; and immediate aftermath of massacre, 14–19; and Jones’s deposition, 414; Jones’s Father’s Day message to, 283–84, 368–69; and Kelly’s interview with Jones, 280–86; lawsuit against Halbig, 410; lawsuit against Remington Arms Company, 281, 328, 333; lawsuits against Jones, 430, 431; and Malloy, 20–22, 65; and media coverage, 33–38; money donated to, 50–54; notified of deaths of children, 20–22, 33–34, 65; and public response to massacre, 44–45; state troopers assigned to, 29; and suicide of Richman, 380–82; Van Ness’s service to, 54–57.

pages: 145 words: 40,897

Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps
by Gabe Zichermann and Christopher Cunningham
Published 14 Aug 2011

What important business problem do you want to solve? A successful rewards program supports your ability to achieve core business objectives. Some common examples of appropriate objectives for a rewards program include:
Increasing ad revenue
Increasing sponsorship revenue
Reducing content-creation costs
Reducing content-moderation costs
Skumo’s objectives
Skumo’s main objective is to increase ad and sponsorship revenue by providing useful content that grows membership. As a business, Skumo cannot afford to hire full-time writers and editors, so it uses technology to drive a creation and moderation process that elevates useful content.

pages: 237 words: 67,154

Ours to Hack and to Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet
by Trebor Scholz and Nathan Schneider
Published 14 Aug 2017

Artists like Burak Arikan, Alex Rivera, Stephanie Rothenberg, and Dmytri Kleiner played pioneering roles in alerting the public to these issues. Later, debates became more concerned with “crowd fleecing,” the exploitation of thousands of invisible workers in crowdsourcing systems like Amazon Mechanical Turk or content moderation farms in the Philippines. Over the past few years, the search for concrete alternatives for a better future of work has become more dynamic. The theory of platform cooperativism has two main tenets: communal ownership and democratic governance. It is bringing together 135 years of worker self-management, the roughly 170 years of the cooperative movement, and commons-based peer production with the compensated digital economy.

pages: 420 words: 61,808

Flask Web Development: Developing Web Applications With Python
by Miguel Grinberg
Published 12 May 2014

Chapter 9. User Roles Not all users of web applications are created equal. In most applications, a small percentage of users are trusted with extra powers to help keep the application running smoothly. Administrators are the best example, but in many cases middle-level power users such as content moderators exist as well. There are several ways to implement roles in an application. The appropriate method largely depends on how many roles need to be supported and how elaborate they are. For example, a simple application may need just two roles, one for regular users and one for administrators.
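As a minimal, framework-free sketch of that idea (two ordinary roles plus a middle-level content-moderator role), the snippet below uses a bitmask-permission pattern common in Flask applications; it is a simplified stand-in rather than the book's exact implementation.

```python
from functools import wraps

# Permission bits (simplified stand-ins, not the book's exact constants).
WRITE, MODERATE, ADMIN = 0x01, 0x02, 0x04

ROLES = {
    "User":      WRITE,
    "Moderator": WRITE | MODERATE,   # middle-level power user: content moderator
    "Admin":     WRITE | MODERATE | ADMIN,
}

class User:
    def __init__(self, name, role):
        self.name, self.permissions = name, ROLES[role]

    def can(self, permission):
        return self.permissions & permission == permission

def require(permission):
    """Decorator guarding a view-like function with a permission check."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if not user.can(permission):
                raise PermissionError(f"{user.name} lacks permission {permission:#x}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require(MODERATE)
def remove_comment(user, comment_id):
    return f"{user.name} removed comment {comment_id}"

print(remove_comment(User("maya", "Moderator"), 42))   # allowed
# remove_comment(User("sam", "User"), 43)  # would raise PermissionError
```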

pages: 295 words: 66,912

Walled Culture: How Big Content Uses Technology and the Law to Lock Down Culture and Keep Creators Poor
by Glyn Moody
Published 26 Sep 2022

In practice, copyright owners will force the entire Internet industry to adopt technology preferred by copyright owners—including mandatory filtering technology—and make the Internet services pay for it.”550 As Goldman notes, the SMART Copyright Act would essentially place the US Copyright Office in charge of mandating new technologies to control what people can do online—for example, by requiring upload filters: “[The] SMART Copyright Act would give the Copyright Office a truly extraordinary power—the ability to force thousands of businesses to adopt, at their expense, technology they don’t want and may not need, and the mandated technologies could reshape how the Internet works. That’s an enormous amount of power to put into the hands of any government agency. It’s especially puzzling to give that enormous power to the Copyright Office given its relatively narrow focus. The Copyright Office is not expert at Internet technology, content moderation, or the inherent tradeoffs in publication processes. If Congress really thinks DTMs are worth pursuing, that’s a massively consequential decision for the Internet. It’s an important enough decision that Congress should solicit a multi-stakeholder study conducted by an entity with broader expertise than just copyrights, and Congress should vet and approve the recommendations itself through regular order rather than letting an administrative agency make such important decisions without further supervision from Congress.”551 Alongside those clear knock-on effects of the EU Copyright Directive, there are some more subtle aspects that reflect deeper problems caused by attempts to impose outdated copyright approaches on the digital world.

pages: 277 words: 70,506

We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News
by Eliot Higgins
Published 2 Mar 2021

In the summer of 2017, YouTube introduced an algorithm to flag videos that violated its standards, and hundreds of thousands of Syria videos vanished,30 wiping out reams of potential evidence. Meantime, Facebook sought partner organisations like ours to provide credibility and quality to its content moderation, preferably for free. Bellingcat was not big enough for such an endeavour, but Facebook pursued its goal elsewhere, persuading a few US organisations, including the Associated Press, Snopes and ABC News, to help debunk false claims. Later, Facebook expanded to more partners in several dozen languages.

pages: 243 words: 76,686

How to Do Nothing
by Jenny Odell
Published 8 Apr 2019

Today, though, “‘we are all capitalists’…and therefore, we all have to take risks…The essential idea is that we should all consider life as an economic venture, as a race where there are winners and losers.”14 The way that Berardi describes labor will sound as familiar to anyone concerned with their personal brand as it will to any Uber driver, content moderator, hard-up freelancer, aspiring YouTube star, or adjunct professor who drives to three campuses in one week: In the global digital network, labor is transformed into small parcels of nervous energy picked up by the recombining machine…The workers are deprived of every individual consistency. Strictly speaking, the workers no longer exist.

pages: 252 words: 78,780

Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us
by Dan Lyons
Published 22 Oct 2018

Another approach is to create for-profit companies that tackle social missions, like mitigating poverty, that once were within the purview of nonprofits and philanthropies. An example of the latter is Samasource, a San Francisco company that outsources tasks for companies like Google to workers in impoverished countries. Workers need only a laptop and a bit of training, and can do things like content moderation, scanning websites for objectionable photos. Samasource was founded in 2008 and claims to have lifted sixty thousand people out of poverty. An early pioneer in social enterprise was Gregory Dees, a professor whose academic career involved stints at Yale School of Management, Harvard Business School, Stanford Graduate School of Business, and Duke University’s Fuqua School of Business.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

“Udacity’s Sebastian Thrun, Godfather of Free Online Education, Changes Course.” Fast Company, November 14, 2013. https://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb. Chen, Adrian. “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” Wired, October 23, 2014. https://www.wired.com/2014/10/content-moderation/. Christian, Andrew, and Randolph Cabell. Initial Investigation into the Psychoacoustic Properties of Small Unmanned Aerial System Noise. Hampton, VA: NASA Langley Research Center, American Institute of Aeronautics and Astronautics, 2017. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170005870.pdf.

Off the Edge: Flat Earthers, Conspiracy Culture, and Why People Will Believe Anything
by Kelly Weill
Published 22 Feb 2022

The activist, Daryle Lamont Jenkins, told me he suspected that far-right trolls had flagged the video and that YouTube had automatically deleted it based on their complaints. “My guess is one of our ‘fans’ might have reported it knowing YouTube won’t do a real investigation and just take it down just because they saw the swastika” in the video, Jenkins told me after the purge in 2019. In March 2020, after Facebook turned over some of its content moderation to an artificial intelligence system, it accidentally flagged factual content about COVID-19 as spam while the virus raged. And these are acts of probably accidental suppression, carried out by corporate entities that only really take action when bullied into it by activists. Codified into law or placed under government oversight, the fight against “disinformation” could also ricochet back onto dissidents, this time with real legal repercussions.

pages: 328 words: 84,682

The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power
by Michael A. Cusumano , Annabelle Gawer and David B. Yoffie
Published 6 May 2019

It has even been suggested that the more outrageous or shocking the content, the more traffic it actually drives.27 But the backlash against the apparent neutrality, and what many have judged as callous behavior by Facebook and others, has made the “neutral” stance of social media platforms an increasingly untenable position. In effect, by not only policing but selecting content, Facebook backed further away from its identity as a neutral platform and took a step closer to acting as a publisher. Facebook already employed some 15,000 content moderators in early 2018, and Zuckerberg promised the U.S. Congress that number would grow to 20,000 by the year’s end.28 The Cambridge Analytica scandal further exacerbated Facebook’s legitimacy and trust problems. Cambridge Analytica collected its data by 2014, when Facebook’s rules permitted apps to collect private information from users of the app as well as their Facebook friends.

pages: 277 words: 81,718

Vassal State
by Angus Hanton
Published 25 Mar 2024

But the star-struck reporting on business celebrities can easily overlook the essential part of the story, as happened with Musk’s purchase of Twitter/X late in 2022. The BBC reported extensively on whether Musk paid too much, on the will-he-won’t-he takeover process and on whether he was employing enough content moderators.37 The real story, at least in the UK, should be why the ‘town square’, which sits at the heart of British political life, is owned and controlled – and reportedly manipulated – from California. Peter Thiel, founder of both PayPal and Palantir, has set up the Founders Network, where entrepreneurs offer each other support.38 Thiel says: ‘If you’re a founder starting a company you always want to aim for monopoly and you always want to avoid competition.’

pages: 326 words: 91,559

Everything for Everyone: The Radical Tradition That Is Shaping the Next Economy
by Nathan Schneider
Published 10 Sep 2018

The best-selling futurist handbook of the same period, John Naisbitt’s Megatrends, likewise promised that “the computer will smash the pyramid,” and with its networks “we can restructure our institutions horizontally.”16 What we’ve gotten instead are apps from online monopolies accountable to their almighty stock tickers. The companies we allow to manage our relationships expect that we pay with our personal data. The internet’s so-called sharing economy requires its permanently part-time delivery drivers and content moderators to relinquish rights that used to be part of the social contracts workers could expect. Yet a real sharing economy has been at work all along. During the 2016 International Summit of Cooperatives in Quebec, I attended a dinner at the Château Frontenac, a palatial hotel that casts its glow across the old city.

pages: 371 words: 93,570

Broad Band: The Untold Story of the Women Who Made the Internet
by Claire L. Evans
Published 6 Mar 2018

“animateurs” culled from: Howard Rheingold, The Virtual Community: Homesteading on the Electronic Frontier (Cambridge, MA: MIT Press, 2000), 235. “Hosts are the people”: Rheingold, The Virtual Community, 26. “front page of the Internet”: Adrian Chen, “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed,” Wired, October 23, 2014, www.wired.com/2014/10/content-moderation. “cyberaffirmative action”: Horn, Cyberville, 96. “I heard women talking”: Mittelmark, interview with the author, July 21, 2016. “A PLANE JUST CRASHED”: Horn, “Echo,” 245. “The hottest topic”: Horn, Cyberville, 53. “Someone in the twenty-second century”: Horn, interview with the author, May 26, 2016.

pages: 362 words: 87,462

Laziness Does Not Exist
by Devon Price
Published 5 Jan 2021

“Welcome to the Information Age: 174 Newspapers a Day,” Telegraph, https://www.telegraph.co.uk/news/science/science-news/8316534/Welcome-to-the-information-age-174-newspapers-a-day.html. 8. And that’s to say nothing of how traumatic working as a social media moderator can be; see Casey Newton, “The Trauma Floor: The Secret Lives of Facebook Moderators in America,” Verge, February 25, 2019, https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona. 9. “APA Stress in America Survey: US at ‘Lowest Point We Can Remember’; Future of Nation Most Commonly Reported Source of Stress,” American Psychological Association, November 1, 2017, https://www.apa.org/news/press/releases/2017/11/lowest-point. 10.

Forward: Notes on the Future of Our Democracy
by Andrew Yang
Published 15 Nov 2021

Again, the camera just silently recorded these exchanges with no interpretation or commentary. Afterward, the network cut to another recorded event. This night stuck with me because the treatment was in such stark contrast to the vast majority of my broadcast media appearances, almost all of which included some kind of content moderation or back-and-forth with a journalist. How many people were watching that replay on C-SPAN? Probably not many; you have likely flipped past C-SPAN myriad times while looking for something else to watch. Its ratings are something of a guess, because C-SPAN’s viewership is not measured by Nielsen and it doesn’t sell advertising.

pages: 344 words: 104,077

Superminds: The Surprising Power of People and Computers Thinking Together
by Thomas W. Malone
Published 14 May 2018

In some cases, the people who request these tasks also specify particular qualifications that workers must have before doing the tasks (such as getting correct answers on a series of qualifying tasks), so some of the workers (called Turkers) become hyperspecialists in particular kinds of microtasks. Companies like Facebook also routinely use Turkers and similar workers to fill in the gaps when their AI algorithms don’t always know enough to sensibly select trending topics and do other kinds of content moderation.3 Of course, Topcoder and Mechanical Turk are only early examples of what’s possible. What if it were super fast and easy to find someone who could fix the graphics in your PowerPoint slide? What if a law firm could instantly find someone online who is one of the world’s experts on the rules of evidence in Texas murder trials and get the answer to a specific question about this topic in minutes?

pages: 521 words: 110,286

Them and Us: How Immigrants and Locals Can Thrive Together
by Philippe Legrain
Published 14 Oct 2020

For instance, AI may make it easier and faster to collect and process data about a business’s logistic operations, making managers more productive, not replacing them. At the same time, AI will help create new products and services, and hence new jobs for those who provide them. The explosive growth of social media has created new jobs for digital marketers and content moderators. Some even earn a good living playing computer games as a spectator sport. The higher productivity – and thus the increased incomes – that AI generates will also raise demand for services that are not readily automatable, such as nursing, personal trainers, creative professionals, consultants and other advisory functions, and much else besides.

pages: 458 words: 116,832

The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism
by Nick Couldry and Ulises A. Mejias
Published 19 Aug 2019

Raw materials for the electronic infrastructure that supports the social quantification sector still come from Africa, Asia, and Latin America, with 36 percent of the Earth’s tin and 15 percent of its silver going into electronics manufacturing.28 Massive energy usage translates into pollution29 that, along with the dumping of toxic waste from the electronics industry, continues to impact poor communities disproportionately (by 2007, 80 percent of electronic waste was exported to the developing world).30 And much of the labor necessary for the social quantification sector is still located in places such as Asia, where it is abundant and cheap. In China, manufacturer Foxconn, responsible for about half of the world’s electronics production, employs a massive workforce of one million laborers who are managed under military-style conditions.31 Meanwhile, much of the distressing work of content moderation for platforms (weeding out violent and pornographic images) is done in places such as the Philippines.32 How the Social Quantification Sector Works If data in the cloud is stored in data centers and not in personal computers, this brings up the very important question of who owns it. Whereas earlier models of the internet allowed for distributed ownership of resources that could be used collectively, the cloud centralizes ownership and establishes very specific parameters for how data is shared.

pages: 412 words: 116,685

The Metaverse: And How It Will Revolutionize Everything
by Matthew Ball
Published 18 Jul 2022

These problems have mostly grown with time. Though they are delivered, facilitated, or exacerbated by technology, the challenges we face in the mobile era are human and societal problems at their core. As more people, time, and spending go online, more of our problems go online, too. Facebook has tens of thousands of content moderators; if hiring more moderators would solve for harassment, misinformation, and other ills on the platform, no one would be more motivated to do so than Mark Zuckerberg. And yet the tech world, including hundreds of millions if not billions of everyday users—think of all the individual creators in Roblox, for example—are pressing on to the “next internet.”

pages: 533 words: 125,495

Rationality: What It Is, Why It Seems Scarce, Why It Matters
by Steven Pinker
Published 14 Oct 2021

Conversely, some things may come true that are predicted by people who are not the smartest in the world but are experts in the relevant subject, in this case, the history of automation. Some of those experts predict that for every job lost to automation, a new one will materialize that we cannot anticipate: the unemployed forklift operators will retrain as tattoo removal technicians and video game costume designers and social media content moderators and pet psychiatrists. In that case the argument would fail—a third of Americans will not necessarily lose their jobs, and a UBI would be premature, averting a nonexistent crisis. The point of this exercise is not to criticize Yang, who was admirably explicit in his platform, nor to suggest that we diagram a logic chart for every argument we consider, which would be unbearably tedious.

AI 2041: Ten Visions for Our Future
by Kai-Fu Lee and Qiufan Chen
Published 13 Sep 2021

Some examples: smart editing of photos and videos (tools like Photoshop use computer vision extensively to find facial borders, remove red eyes, and beautify selfies); medical image analysis (to determine if there are malignant tumors in a lung CT); content moderation (detection of pornographic and violent content in social media); related advertising selection based on the content of a given video; smart image search (that can find images from keywords or other images); and, of course, making deepfakes (replacing occurrences of one face with another in a video). In “Gods Behind the Masks,” we saw a deepfake-making tool that is essentially an automatic video-editing tool that replaces one person with another, from face, fingers, hand, and voice to body language, gait, and facial expression.

pages: 444 words: 130,646

Twitter and Tear Gas: The Power and Fragility of Networked Protest
by Zeynep Tufekci
Published 14 May 2017

Adrian Chen, “Inside Facebook’s Outsourced Anti-porn and Gore Brigade, Where ‘Camel Toes’ Are More Offensive than ‘Crushed Heads,’” Gawker, February 16, 2012, http://gawker.com/5885714/inside-facebooks-outsourced-anti-porn-and-gore-brigade-where-camel-toes-are-more-offensive-than-crushed-heads. 28. Adrian Chen, “The Laborers Who Keep Dick Pics and Beheadings out of Your Facebook Feed,” WIRED, October 23, 2014, https://www.wired.com/2014/10/content-moderation/; Catherine Buni and Soraya Chemaly, “The Secret Rules of the Internet,” Verge, April 13, 2016, http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech. 29. Bill Moyers, “Transcript, September 12, 2008,” Bill Moyers Journal, http://www.pbs.org/moyers/journal/09122008/transcript_anti.html. 30.

pages: 502 words: 132,062

Ways of Being: Beyond Human Intelligence
by James Bridle
Published 6 Apr 2022

When we see the damage wrought to our societies and our democracies by the opacity and centralization of new technologies – the spread of demagoguery, fundamentalism and hatred, and the rise of inequality – it is precisely that opacity and centralization we must attend to and redress, through education and decentralization. When we see the human oppression inherent in our technological systems – the slave labourers in the coltan mines, the traumatized content moderators of social media platforms, the underpaid and sickening workers in Amazon warehouses and the gig economy – it is to the conditions of labour and our own patterns of consumption that we must turn. And when we see the damage wrought on our environment by extraction and abstraction – the rare earths and minerals that make up our devices and the invisible gases produced by data processing – we must fundamentally change the way we design, create, build and operate our world.

Beautiful Data: The Stories Behind Elegant Data Solutions
by Toby Segaran and Jeff Hammerbacher
Published 1 Jul 2009

What we do in this chapter is called “exploratory data analysis” (EDA)—as opposed to the ploddingly careful hypothesis testing that is usually taught in statistical methodology courses. Exploratory data analysis was strongly advocated by statistician John Tukey in his 1977 book of the same name (Addison-Wesley). Our startup, Dolores Labs, specializes in crowdsourcing: collecting human task data from large masses of people to solve practical problems in content moderation, information extraction, web search relevance, and other domains. We collect, look at, and automatically analyze lots of human judgment data. You can see follow-ups to this chapter, and analyses of other subjects such as sex, colors, and ethics, at our blog: http://blog.doloreslabs.com.

We Are the Nerds: The Birth and Tumultuous Life of Reddit, the Internet's Culture Laboratory
by Christine Lagorio-Chafkin
Published 1 Oct 2018

These were called “self” posts, and this was the kind of post that u/BadgerMatt had made. The second transformative development had been the proliferation of subreddits, which any user could create and manage themselves. Reddit’s little sections were curated by an army of thousands of volunteer moderators from around the world, who watched over their troves of fascinating content. Moderators, in exchange for upholding Reddit’s content policy (no spamming, no doxing, and the like), were allowed to run their own little corners of Reddit, with extra rules of their choosing. Subreddits were wide-ranging: r/LifeProTips, where users would post often-funny life hacks, and r/todayilearned, which was filled with fascinating “aha” moments or strange historical coincidences.

pages: 524 words: 154,652

Blood in the Machine: The Origins of the Rebellion Against Big Tech
by Brian Merchant
Published 25 Sep 2023

Footnote 1 The likes of Facebook and Twitter are more likely to anger the upper-class commentariat than those facing the prospect of economic desperation at the hands of profit-generating technology. (Facebook has hurt and degraded some industries, like journalism, but it is not chiefly resented for killing jobs or for its poor working conditions. Its third-party content moderators, who complain of mistreatment and bear the psychological damage of wading through grotesque material daily for low wages, however, have a litany of deeply justified grievances.) CHRISTIAN SMALLS June 2022 Tired and elated, Chris Smalls stepped outside the JFK8 Amazon facility and uncorked the champagne.

pages: 541 words: 173,676

Generations: The Real Differences Between Gen Z, Millennials, Gen X, Boomers, and Silents—and What They Mean for America's Future
by Jean M. Twenge
Published 25 Apr 2023

Led by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), the hearings featured the testimony of Frances Haugen (b. 1984), a former data engineer at Facebook. After leaking a trove of documents to the Wall Street Journal, Haugen testified that Facebook regularly placed profits over safety. The company, she says, relaxed content moderation around misinformation after the 2020 election was over, likely contributing to the January 6, 2021, attempt to take over the Capitol. Facebook profits from anger and negativity, she noted, because negative emotions keep people on the site for longer, which earns the company more money via more ads being viewed and more users’ personal data being gathered and sold.