Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by
Karen Hao
Published 19 May 2025
“We, the undersigned”: Google Walkout for Real Change, “Standing with Dr. Timnit Gebru—#ISupportTimnit #BelieveBlackWomen,” Medium, December 3, 2020, https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382.
A few hours later, I: Karen Hao, “We Read the Paper That Forced Timnit Gebru out of Google. Here’s What It Says,” MIT Technology Review, December 4, 2020, technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru.
On December 9, as protests: Ina Fried, “Scoop: Google CEO Pledges to Investigate Exit of Top AI Ethicist,” Axios, December 9, 2020, axios.com/2020/12/09/sundar-pichai-memo-timnit-gebru-exit.
…
On December 16, representatives: Karen Hao, “Congress Wants Answers from Google About Timnit Gebru’s Firing,” MIT Technology Review, December 17, 2020, technologyreview.com/2020/12/17/1014994/congress-wants-answers-from-google-about-timnit-gebrus-firing.
For more than a year, the protests: Ina Fried, “Google Fires Another AI Ethics Leader,” Axios, February 19, 2021, axios.com/2021/02/19/google-fires-another-ai-ethics-leader.
…
For more on the wide-reaching impacts of “Gender Shades” and “Actionable Auditing,” see: “Celebrating 5 Years of Gender Shades,” Algorithmic Justice League, accessed on January 15, 2025, gs.ajl.org/.
Black in AI sparked: Karen Hao, “Inside the Fight to Reclaim AI from Big Tech’s Control,” MIT Technology Review, June 14, 2021, technologyreview.com/2021/06/14/1026148/ai-big-tech-timnit-gebru-paper-ethics.
had approached Gebru: Author interview with Timnit Gebru, August 2023.
In 2017, a Facebook: Alex Hern, “Facebook Translates ‘Good Morning’ into ‘Attack Them,’ Leading to Arrest,” The Guardian, October 24, 2017, theguardian.com/technology/2017/oct/24/facebook-palestine-israel-translates-good-morning-attack-them-arrest.
The Shame Machine: Who Profits in the New Age of Humiliation
by
Cathy O'Neil
Published 15 Mar 2022
Then She Was Fired for It,” The Washington Post, December 23, 2020, https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/.
“On the Dangers of Stochastic Parrots”: Karen Hao, “We Read the Paper That Forced Timnit Gebru out of Google. Here’s What It Says,” MIT Technology Review, December 4, 2020, https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
she denounced the company for censoring her: Casey Newton, “The Withering Email That Got an Ethical AI Researcher Fired at Google,” Platformer, December 3, 2020, https://www.platformer.news/p/the-withering-email-that-got-an-ethical.
…
“research integrity and academic freedom”: “Standing with Dr. Timnit Gebru—#ISupportTimnit #BelieveBlackWomen,” Google Walkout for Real Change, Medium, December 3, 2020, https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382.
“I accept the responsibility of working to restore your trust”: Ina Fried, “Scoop: Google CEO Pledges to Investigate Exit of Top AI Ethicist,” Axios, December 9, 2020, https://www.axios.com/sundar-pichai-memo-timnit-gebru-exit-18b0efb0-5bc3-41e6-ac28-2956732ed78b.html.
one of her top collaborators, Margaret Mitchell, was let go: Tom Simonite, “A Second AI Researcher Says She Was Fired by Google,” Wired, February 19, 2021, https://www.wired.com/story/second-ai-researcher-says-fired-google/.
…
she had published a groundbreaking 2017 study: Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” ed. Sorelle A. Friedler and Christo Wilson, Proceedings of Machine Learning Research 81 (2018): 1–15, https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
led Amazon and Microsoft to stop selling the software to law enforcement: Nitasha Tiku, “Google Hired Timnit Gebru to Be an Outspoken Critic of Unethical AI. Then She Was Fired for It,” The Washington Post, December 23, 2020, https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/.
Supremacy: AI, ChatGPT, and the Race That Will Change the World
by
Parmy Olson
“Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s.” New York Times, May 22, 2023.
Harris, Josh. “‘There Was All Sorts of Toxic Behaviour’: Timnit Gebru on Her Sacking by Google, AI’s Dangers and Big Tech’s Biases.” The Guardian, May 22, 2023.
Horwitz, Jeff. “The Facebook Files.” Wall Street Journal, October 1, 2021.
Payton, L’Oreal Thompson. “Americans Check Their Phones 144 Times a Day. Here’s How to Cut Back.” Fortune, July 19, 2023.
Simonite, Tom. “What Really Happened When Google Ousted Timnit Gebru.” Wired, June 8, 2021.
“The Social Atrocity: Meta and the Right to Remedy for the Rohingya.” Amnesty International report, September 29, 2022.
…
And while they bring greater wealth to the shareholders of those companies, including pension funds, they have also centralized power in such a way that the privacy, identity, public discourse, and increasingly the job prospects of billions of people are beholden to a handful of large firms, run by a handful of unfathomably wealthy people. It is little wonder that for those working inside a tech giant who see something wrong, sounding the alarm can seem as futile as trying to turn the Titanic around just moments before hitting the iceberg. Still, that didn’t stop an AI scientist named Timnit Gebru from trying. In December 2015, at the NeurIPS conference where Sam Altman and Elon Musk announced they were creating AI “for the benefit of humanity,” Gebru looked around at the thousands of other attendees and shuddered. Almost no one there looked like her. Gebru was in her early thirties and Black, and she’d had anything but a conventional upbringing with the support system that many of her peers had enjoyed.
…
Facebook’s Cambridge Analytica scandal made people realize they were being used to sell ads. Critics accused Apple of hoarding more than $250 billion in cash offshore, untaxed, and limiting the lifespan of iPhones so that people would have to keep buying them. And behind the scenes at Google, researchers Timnit Gebru and Margaret Mitchell were starting to sound a warning about how language models could amplify prejudice. Tech giants had amassed enormous wealth, and as they crushed their competitors and violated people’s privacy, the public grew more skeptical of their promises to make the world a better place.
Amateurs!: How We Built Internet Culture and Why It Matters
by
Joanna Walsh
Published 22 Sep 2025
@pharmapsychotic, ‘CLIP Interrogator’, github.com, huggingface.co/spaces/fffiloni/CLIP-Interrogator-2.
9. Jameson, Postmodernism, p. 30.
10. Benjamin, ‘The Work of Art in the Age of Mechanical Reproduction’, p. 226.
11. Hélène Cixous, Coming to Writing (Harvard University Press, 1991), p. 104.
12. Will Douglas Heaven, ‘This Avocado Armchair Could Be the Future of AI’, MIT Technology Review, 5 January 2021.
13. sillysaurusx at Hacker News, 28 September 2023.
14. Melissa Heikkilä, ‘This New Data Poisoning Tool Lets Artists Fight Back Against Generative AI’, technologyreview.com, 23 October 2023.
15. Elaine Velie, ‘New Tool Helps Artists Protect Their Work from AI Scraping’, Hyperallergic, 30 October 2023.
16. Marco Donnarumma, ‘AI Art Is Soft Propaganda for the Global North’, Hyperallergic, 24 October 2022.
17. Andy Baio, ‘Exploring 12 Million of the 2.3 Billion Images Used To Train Stable Diffusion’s Image Generator’, waxy.org, 30 August 2022.
18. Hito Steyerl, ‘Mean Images’, New Left Review, nos. 140/141, March–June 2023.
19. Ibid.
20. Ben Zimmer, ‘Tasty Cupertinos’, languagelog.ldc, 11 April 2012.
21. Kyle Wiggers, ‘3 Big Problems with Datasets in AI and Machine Learning’, venturebeat.com, 17 December 2021.
22. Cecilia D’Anastasio, ‘Meet Neuro-sama, the AI Twitch Streamer Who Plays Minecraft, Sings Karaoke, Loves Art’, Bloomberg, 16 June 2023.
23. Ibid.
24. ‘Google AI Ethics Co-Lead Timnit Gebru Says She Was Fired Over an Email’, Venture Beat.
25. Rachel Gordon, ‘Large Language Models Are Biased. Can Logic Help Save them?’, MIT News, 3 March 2023.
26. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman et al., ‘On the Opportunities and Risks of Foundation Models’, arXiv, 12 July 2022.
27. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610–23, see p. 615.
28. Bommasani et al., ‘On the Opportunities and Risks of Foundation Models’.
29. Maurice Merleau-Ponty, ‘Eye and Mind’, in The Primacy of Perception: And Other Essays on Phenomenological Psychology (Northwestern University Press, 1964), p. 189.
30. Lyotard, ‘The Sublime and the Avant-Garde’, p. 255.
31. Jameson, Postmodernism, p. 46.
32. Merleau-Ponty, ‘Eye and Mind’, p. 165.
33. Ibid., p. 162.
34. Yuriko Saito, Everyday Aesthetics (Oxford University Press, 2008), p. 24.
35. Geoff Dyer, ‘Diary: Why Can’t I See You?’
…
Hilariously, the task of editing Tay’s AI-generated pronouncements turned into a huge labour of mechanical turking. Who’s in charge here? Neuro-sama became one male developer’s ‘full-time job’.23 ‘Most language technology is in fact built first and foremost to serve the needs of those who already have the most privilege in society,’ wrote AI research scientist Timnit Gebru, sacked from her post as co-leader of Google’s Ethical Artificial Intelligence Team in 2020. One of Google’s complaints against Gebru was her open contribution to Google Brain Women and Allies’ listserv, criticising Google’s lack of progress in hiring women.24 (A 2016 AI Now Institute report found that just 10 per cent of AI researchers at Google were women, and the Global Gender Gap Report 2018 by the World Economic Forum showed that only 22 per cent of AI professionals globally are female).
…
Fredric Jameson warns of a pre-AI postmodern ‘image addiction which, by transforming the past into visual mirages, stereotypes or texts, effectively abolishes any practical sense of the future and of the collective project, thereby abandoning the thinking of future change to fantasies of sheer catastrophe’.31 Timnit Gebru sees these fantasies recur with disturbing political consequences she identifies using the acronym she developed with Émile P. Torres: TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism – all alt-right ideologies predicting catastrophe or promising utopias via the embrace of technological singularity.
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity
by
Adam Becker
Published 14 Jun 2025
A Directors’ Conversation with Oren Etzioni,” Stanford University Human-Centered Artificial Intelligence, October 1, 2020, https://hai.stanford.edu/news/gpt-3-intelligent-directors-conversation-oren-etzioni.
105 Wilfred Chan, “Researcher Meredith Whittaker Says AI’s Biggest Risk Isn’t ‘Consciousness’—It’s the Corporations That Control Them,” Fast Company, May 5, 2023, www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them.
106 Will Douglas Heaven, “How Existential Risk Became the Biggest Meme in AI,” MIT Technology Review, June 19, 2023, www.technologyreview.com/2023/06/19/1075140/how-existential-risk-became-biggest-meme-in-ai/.
107 Ibid.
108 Chan, “Researcher Meredith Whittaker.”
109 Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; Julia Angwin et al., “What Algorithmic Injustice Looks Like in Real Life,” ProPublica, May 25, 2016, www.propublica.org/article/what-algorithmic-injustice-looks-like-in-real-life.
110 Ibid.
111 “Diversity in High Tech,” US Equal Employment Opportunity Commission, May 18, 2016, www.eeoc.gov/special-report/diversity-high-tech; Ashton Jackson, “Black Employees Make Up Just 7.4% of the Tech Workforce—These Nonprofits Are Working to Change That,” CNBC, February 24, 2022, www.cnbc.com/2022/02/24/jobs-for-the-future-report-highlights-need-for-tech-opportunities-for-black-talent.html.
112 Khari Johnson, “AI Ethics Pioneer’s Exit from Google Involved Research into Risks and Inequality in Large Language Models,” VentureBeat, December 3, 2020, https://venturebeat.com/2020/12/03/ai-ethics-pioneers-exit-from-google-involved-research-into-risks-and-inequality-in-large-language-models/; Karen Hao, “We Read the Paper That Forced Timnit Gebru Out of Google. Here’s What It Says,” MIT Technology Review, December 4, 2020, www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.
113 Krystal Hu, “ChatGPT Sets Record for Fastest-Growing User Base,” Reuters, February 2, 2023, www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.
114 Hao, “We Read the Paper.”
115 Johnson, “AI Ethics Pioneer’s Exit”; Hao, “We Read the Paper.”
116 Megan Rose Dickey, “Google Fires Top AI Ethics Researcher Margaret Mitchell,” TechCrunch, February 19, 2021, https://techcrunch.com/2021/02/19/google-fires-top-ai-ethics-researcher-margaret-mitchell/; Nico Grant, Dina Bass, and Josh Eidelson, “Google Fires Researcher Meg Mitchell, Escalating AI Saga,” Bloomberg, February 19, 2021, www.bloomberg.com/news/articles/2021-02-19/google-fires-researcher-meg-mitchell-escalating-ai-saga.
117 Kyle Wiggers, “Google Trained a Trillion-Parameter AI Language Model,” VentureBeat, January 12, 2021, https://venturebeat.com/2021/01/12/google-trained-a-trillion-parameter-ai-language-model/; William Fedus, Barret Zoph, and Noam Shazeer, “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity,” arXiv:2101.03961 (2021), https://doi.org/10.48550/arXiv.2101.03961.
118 Thaddeus L.
…
They left the company along with five others to found an AI start-up more focused on safety, called Anthropic. They made many hires from within the EA community and received half a billion dollars early on from Bankman-Fried’s FTX; Anthropic has since become one of the largest AI companies, with billions in funding from Google and Amazon.76 There is even federal funding for AI alignment grants, as Timnit Gebru, AI scientist and founder of the Distributed AI Research Institute, tells me. “So now, even if you don’t want to work in the companies [working on building and aligning an AGI], whatever money you’re going to get for your research is going to be influenced by that too. So it’s everywhere. You can’t get away from it.”77 Yet not all experts agree with Russell, Hinton, and the founders of Anthropic.
…
Less than 36 percent of all tech workers are women, only 7.4 percent are Black, and only 1.7 percent are Black women.111 When your entire professional life is filled with people who look like you, and when people who look a different way and have a different set of life experiences are entirely excluded, it’s easy to forget about their perspectives. That implicit bias on the part of the developers, reflected in training-set selection and in the algorithm designs themselves, exacerbates algorithmic bias. Timnit Gebru is part of that small fraction of the tech industry composed of Black women—and the even smaller fraction who have PhDs in AI. She did pioneering work on how AI-powered facial recognition systems were less accurate when dealing with Black faces and how that could lead to further erosion of privacy and reinforce existing biases in law enforcement.
Code Dependent: Living in the Shadow of AI
by
Madhumita Murgia
Published 20 Mar 2024
It dawned on me that the framework linking the algorithmic encounters I had gathered, spanning seemingly disconnected people, times and places, was actually predictable, and had been conceptualized by a small but growing community of academics around the world. Some names proposing the early roots of these ideas I recognized – Timnit Gebru, Joy Buolamwini, Kate Crawford, Cathy O’Neil, Meredith Whittaker, Virginia Eubanks5 and Safiya Umoja Noble.6 They were all, I noted, women, and their areas of expertise were in studying the disproportionate harms of AI experienced by marginalized communities. As I read their work and followed the trail of academic papers they cited, I discovered a wider pool of authors who were less well known to the mainstream.
…
It is deployed at the famous Gordon’s Wine Bar in London, scanning for known troublemakers.5 It’s even been used to identify dead Russian soldiers in Ukraine.6 The question of whether it was ready for primetime use has taken on an urgency, as it impacts the lives of billions around the world. Facial Recognition’s Race Problem Karl knew that the technology was not ready for widespread rollout in this way. Indeed, in 2018, Joy Buolamwini, Timnit Gebru and Deborah Raji – three black female researchers at Microsoft – had published a study, alongside collaborators, comparing the accuracy of face recognition systems built by IBM, Face++ and Microsoft.7 They found the error rates for light-skinned men hovered at less than 1 per cent, while that figure touched 35 per cent for darker-skinned women.
…
People I spoke to talked about using it to write complaints to their local council, to draft speeches they had been dreading, and to analyse proposals and ideas, looking for gaps in reasoning or logic. On the other hand were those who felt it was all spiralling out of control too quickly, without any caution, oversight or governance. These included some of the respected computer scientists I had come across in my readings on data colonialism, researchers such as Timnit Gebru, Emily Bender and Deborah Raji.13 They were worried people were missing the real, human harms enacted by these AI systems, in the pursuit of some foolhardy dream of creating a super-intelligent machine. Others like Stuart Russell and Geoffrey Hinton worried that AI was advancing too quickly, without enough knowledge or careful thought about how to design advanced systems that also protect human safety in the long term.
The Long History of the Future: Why Tomorrow's Technology Still Isn't Here
by
Nicole Kobie
Published 3 Jul 2024
It’s a cute fantasy, but as a journalist who’s been made redundant for budgetary reasons more than once, it’s always safe to assume that any money saved by technologies won’t be invested in better journalism but used to prop up profits (or cut losses) instead. But this is how AI could impact me. If you want real criticism of LLMs and the great AI panic of the early 2020s, there’s one name you should know: Timnit Gebru. When Hinton quit Google, he was awarded interviews in all the best publications, with glowing headlines dubbing him the ‘godfather of AI’. But Gebru left Google years before, in 2020 – also over ethical concerns.19 Gebru and five co-authors were set to publish a paper that examined the downsides of large language models – it raised concerns about environmental impact and bias, nothing too controversial, really – but managers at Google asked her to remove her name and those of her colleagues, leaving Emily Bender, director of the University of Washington’s Computational Linguistics Laboratory, as the only author.
…
But because Herzberg was homeless, that’s not how she was using her bike, and instead she was pushing it slowly across the middle of the road with bags of belongings dangling from the handlebars. The car’s AI system stumbled when faced with behaviour spurred by poverty. Would it help to have more diversity in the companies developing future tech? Unquestionably. AI researchers such as Timnit Gebru and Joy Buolamwini, whom we met in an earlier chapter, use their time to watch for such flaws, aware of the impact as it reflects their lived experiences. They continue to do so, and thankfully, their warnings are heard – though perhaps less well heeded than the tech bro CEOs making headlines.
…
She didn’t like the future of the city as planned by Google – privatised, tech solutionism with automated transport – so she stopped it. Something similar is happening in San Francisco with driverless cars, with activists taking on Google’s Waymo and GM’s Cruise with nothing more than a bit of free time and traffic cones. And Timnit Gebru and Margaret Mitchell may have lost their jobs at Google, but they are winning plenty of headlines and, perhaps, the argument regarding AI. People can push back against billionaires’ bad ideas. Activism alone won’t save us. We need regulators who understand how technology works to see its flaws, minimise harms and not be fooled by Silicon Valley lobbying or marketing – and voters literate enough to hold them accountable if they sell out.
The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future
by
Keach Hagey
Published 19 May 2025
It Has Learned to Code (and Blog and Argue),” The New York Times, November 24, 2020.
7. Paul Graham, “Do Things That Don’t Scale,” PaulGraham.com, July 2013.
8. Annie Altman, “How I Started Escorting,” Medium, March 27, 2024.
9. Weil, “Oppenheimer of Our Age.”
10. Annie Altman, “How I Started Escorting.”
11. Sam Altman, “Please Fund More Science,” Sam Altman blog, March 30, 2020.
12. Greg Brockman, Mira Murati, Peter Welinder, OpenAI, “OpenAI API,” OpenAI blog, June 11, 2020.
13. Tom Simonite, “OpenAI’s Text Generator Is Going Commercial,” Wired, June 11, 2020.
14. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Margaret Mitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.
15. Emily Bobrow, “Timnit Gebru Is Calling Attention to the Pitfall of AI,” The Wall Street Journal, February 24, 2023.
16. Sam Altman @sama, “I am a stochastic parrot and so r u,” Twitter, December 4, 2022.
…
Meanwhile, researchers in the broader field were waking up to LLMs’ immense power, promise, and pitfalls. As the Anthropic crew was heading for the door at OpenAI in late 2020, a similar fight over safety had broken out at Google over the publication of a controversial paper by lead researchers Emily Bender and Timnit Gebru called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”14 The title’s menacing image of a plumed colossus combines the talking birds’ famous knack for imitation and an uncommon word—“stochastic,” derived from the Greek stokhastikos, which is related to English’s “conjecture.”
…
But Google would not let the team release it to the public, citing safety concerns. So they kept working on it, with the help of Noam Shazeer, one of the researchers on Google Brain’s transformer paper. They renamed the model LaMDA, for Language Model for Dialogue Applications. It was a lightning rod for controversy. In 2020, Timnit Gebru, the well-known AI ethics researcher, said she was fired for refusing to retract the “Stochastic Parrots” paper that raised questions about the risks of large language models like LaMDA. Google claimed she wasn’t fired, and that the paper did not meet its bar for publication. Then in 2022, Google fired AI researcher Blake Lemoine after he argued that LaMDA was sentient.
Enshittification: Why Everything Suddenly Got Worse and What to Do About It
by
Cory Doctorow
Published 6 Oct 2025
When a Google manager ordered one of the company’s most distinguished AI scientists to suppress a research paper that had already been accepted into a highly selective scientific conference, it revealed another important difference between tenure and a job at Google: Google is a massive multinational corporation. At gigantic corporations, mouthing off to your boss gets you in trouble. It can even get you fired. Google proceeded to fire Timnit Gebru. After years of ever-more-muscular displays of worker power by Googlers, the firing of Timnit Gebru—at the peak of the COVID-19 lockdowns, in December 2020—marked a shift in the attitude of Google bosses to their cherished employees. The Googlers I know describe 2021 as a year of tightening discipline and managerial impatience with the workforce’s demands.
…
Deep-pocketed competitors sprang up, and new, AI-powered chatbots built on large language models (LLMs) started to steal focus away from Google. The company responded with a full-court press to make its own LLMs, leveraging its vast resources to build some of the largest models ever conceived of. This move attracted the attention of some of Google’s top scientists, including Timnit Gebru, a distinguished AI researcher whose career had involved stints at Apple, Microsoft, and Stanford before she came to Google to work on AI ethics. In 2021, Gebru and several outside peers wrote a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” that was accepted for the Association for Computing Machinery’s highly selective Conference on Fairness, Accountability, and Transparency.
Futureproof: 9 Rules for Humans in the Age of Automation
by
Kevin Roose
Published 9 Mar 2021
The organization uses automated software to help low-income Americans file for Chapter 7 bankruptcy, a process that allows them to shed burdensome debt obligations and get a fresh financial start. So far, the service has helped families clear more than $120 million in debt. Or Joy Buolamwini and Timnit Gebru, two AI researchers who studied three leading facial-recognition algorithms, and found that all three were substantially less accurate when trying to classify darker-skinned faces than lighter-skinned faces. The study led several major tech firms to reexamine their AI for evidence of bias, and pledge to use more racially diverse data sets to train their machine learning models.
…
underappreciated heroes like Katherine Johnson, Dorothy Vaughan, and Mary Jackson Margot Lee Shetterly, Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race (New York: William Morrow, 2016).
These are people like Jazmyn Latimer Vanessa Taylor, “This Founder Is Using Technology to Clear Criminal Records,” Afrotech, February 22, 2019.
Or Rohan Pavuluri Kevin Roose, “The 2018 Good Tech Awards,” New York Times, December 21, 2018.
Or Joy Buolamwini and Timnit Gebru Kevin Roose, “The 2019 Good Tech Awards,” New York Times, December 30, 2019.
Or Sasha Costanza-Chock Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (Boston: MIT Press, 2020).
a term coined by the evolutionary biologist Stuart Kauffman Stuart Kauffman, The Origins of Order: Self-Organization and Selection in Evolution (New York: Oxford University Press, 1993).
Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by
Cade Metz
Published 15 Mar 2021
The people choosing the training data—Matt Zeiler and the engineers he hired at Clarifai—were mostly white men. And because they were mostly white men, they didn’t realize their data was biased. Google’s gorilla tag should have been a wake-up call for the industry. It was not. It took other women of color to take this fundamental problem public. Timnit Gebru, who was studying artificial intelligence at Stanford University under Fei-Fei Li, was the Ethiopia-born daughter of an Eritrean couple who had immigrated to the U.S. At NIPS, as she entered the main hall for the first lecture and looked out over the hundreds of people seated in the audience, row after row of faces, she was struck by the fact that while some were East Asian and a few were Indian and a few more were women, the vast majority were white men.
…
The researcher who was part of the early deep learning efforts inside Microsoft Research, Mitchell had grabbed the community’s attention when she gave an interview to Bloomberg News saying that artificial intelligence suffered from a “sea of dudes” problem, estimating she had worked with hundreds of men over the past five years and about ten women. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.” Mitchell and Timnit Gebru, who joined her at Google, were part of a growing effort to lay down firm ethical frameworks for AI technologies, looking at bias, surveillance, and the rise of automated weapons. Another Googler, Meredith Whittaker, a product manager in the company’s cloud computing group, helped launch a research organization at NYU.
…
JEFF DEAN, the early Google employee who became the company’s most famous and revered engineer before founding Google Brain, its central artificial intelligence lab, in 2011.
ALAN EUSTACE, the executive and engineer who oversaw Google’s rush into deep learning before leaving the company to set a world skydiving record.
TIMNIT GEBRU, the former Stanford researcher who joined the Google ethics team.
JOHN “J.G.” GIANNANDREA, the head of AI at Google who defected to Apple.
IAN GOODFELLOW, the inventor of GANs, a technology that could generate fake (and remarkably realistic) images on its own, who worked at both Google and OpenAI before moving to Apple.
System Error: Where Big Tech Went Wrong and How We Can Reboot
by
Rob Reich
,
Mehran Sahami
and
Jeremy M. Weinstein
Published 6 Sep 2021
Instead, the images could be used to pinpoint the moment of the crime and track the people and vehicles at the scene backward and forward in time, using aerial data and street cameras to identify and locate potential suspects who were in the area. Of course, facial recognition technology can suffer from some of the same sorts of algorithmic biases that we saw in chapter 4. Joy Buolamwini, the founder of the Algorithmic Justice League and a researcher at MIT, and her coauthor, Timnit Gebru, a cofounder of Black in AI and a leading researcher in ethics and AI, documented significant gender and racial discrepancies in the performance of facial recognition systems from Microsoft, IBM, and the Chinese platform Face++. Their work shows that such systems perform worse on females and people with darker skin, with errors compounded for dark-skinned females.
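The kind of audit described here can be illustrated with a small sketch. Assuming only a list of labeled predictions annotated by skin type and gender (the records and numbers below are invented, not taken from the Buolamwini and Gebru study), the disparity shows up as a separate error rate for each intersectional subgroup rather than a single headline accuracy figure.

```python
# Illustrative sketch (not the authors' code): per-subgroup error rates
# for a gender classifier, in the spirit of the "Gender Shades" audit.
from collections import defaultdict

# Hypothetical records: (skin_type, gender, predicted_gender, true_gender)
records = [
    ("lighter", "male", "male", "male"),
    ("lighter", "female", "female", "female"),
    ("darker", "female", "male", "female"),   # a misclassification
    ("darker", "male", "male", "male"),
    ("darker", "female", "female", "female"),
]

def error_rates_by_group(rows):
    """Return the error rate for each (skin_type, gender) subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for skin, gender, predicted, actual in rows:
        key = (skin, gender)
        totals[key] += 1
        if predicted != actual:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

if __name__ == "__main__":
    for group, rate in sorted(error_rates_by_group(records).items()):
        print(f"{group[0]:>7} {group[1]:>6}: {rate:.1%} error")
```

The point of the breakdown is simply that an overall accuracy number can look excellent while one subgroup's row in this table does not.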
…
A number of groups, such as the European Union’s High-Level Expert Group on Artificial Intelligence, have called for responsible publication guidelines that would identify when researchers should limit the release of new AI models. In the absence of such guidelines, we’ll continue to see controversy. As in the case of CRISPR, progress will require that the field’s most prominent scientists take a leading role. Yet Google’s firing in 2020 of Timnit Gebru, a leading AI ethics researcher, raises questions about the willingness of tech companies to accept the ethical critiques of those within their own ranks. Equally important is the development of norms that sanction rule breakers. One ethically dubious strain of AI research, for example, involves the deployment of facial recognition tools to make predictions about various forms of human identity or behavior such as homosexuality or criminal tendencies.
…
Baltimore is a case in point: Caroline Haskins, “Why Some Baltimore Residents Are Lobbying to Bring Back Aerial Surveillance,” The Outline, August 30, 2018, https://theoutline.com/post/6070/why-some-baltimore-residents-are-lobbying-to-bring-back-aerial-surveillance.
Joy Buolamwini: Gender Shades (website), http://gendershades.org/. See also Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15, http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
billionaire John Catsimatidis: Kashmir Hill, “Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich,” New York Times, March 5, 2020, https://www.nytimes.com/2020/03/05/technology/clearview-investors.html.
The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by
Orly Lobel
Published 17 Oct 2022
Facebook responded by calling for more public regulation of digital content, rebranding itself as Meta, and, along with other technology kings, racing to shift us all into the metaverse—an embodied immersive experience of our digital lives. Policymakers are racing (although a racing legislature is something of an oxymoron) to respond and tighten oversight of digital spheres. History is a wild ride that does not halt for anyone. And things sometimes get worse before they get better. A year before Haugen’s revelations, Dr. Timnit Gebru—a rising star in AI research with an extraordinary path from Ethiopia to Eritrea to political asylum in the United States, to three degrees at Stanford, to Apple, to Microsoft, and then to Google—was ousted from Google over a dispute with company executives about publishing an article on the potential risks and harms of large language models.
…
I’ve advocated that as a matter of policy, we should make existing data sets easier for all to access for the purposes of research and monitoring, and that governments should initiate and fund the creation of fuller data sets as well as more experimentation with digital technology that promotes equality and other socially valuable goals.8 Competition law and antitrust policies too must be revamped and refocused to better address the forces that amplify market concentration in the digital world, including the proprietary nature of data and the network effects of large-scale multisided online platforms that impede new entry into dominant markets. Beyond raw data, researchers and non-profit organizations should also receive more access to advances in AI itself and computational resources. Standardization of what algorithms are doing will help audits. A group of researchers, including AI ethics leaders Dr. Timnit Gebru, whom we met earlier, and Dr. Margaret Mitchell, have proposed model cards for reporting the use of AI: short documents that will accompany algorithms to disclose how the model performs across demographic groups.9 In 2021, the National Institute of Standards and Technology released a proposal calling on the tech community to develop voluntary, consensus-based standards for detecting AI bias, including examining, detecting, and monitoring for biases during all stages of an AI life cycle—planning and conceptualizing the system, designing it, and putting it to use.
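As a rough, hypothetical sketch of what such a model card might hold (the field names loosely follow the "Model Cards for Model Reporting" proposal cited in note 9, but the model, fields, and numbers here are invented for illustration), a minimal machine-readable version could look like this:

```python
# Hypothetical model card, sketched as a plain Python dictionary.
# Field names loosely echo the "Model Cards for Model Reporting" proposal;
# the model and all numbers are invented.
model_card = {
    "model_details": {
        "name": "example-gender-classifier",  # hypothetical model
        "version": "0.1",
        "type": "image classification",
    },
    "intended_use": "Research benchmarking only; not for identifying individuals.",
    "factors": ["skin type", "perceived gender", "lighting"],
    "metrics": ["error rate per subgroup"],
    "evaluation_data": "Balanced benchmark with documented subgroup counts.",
    # Disaggregated results are the heart of the idea: one number per group,
    # not a single headline accuracy figure.
    "disaggregated_results": {
        ("lighter", "male"): {"error_rate": 0.008},
        ("darker", "female"): {"error_rate": 0.21},
    },
    "caveats": "Results depend on the benchmark's labeling scheme and coverage.",
}

for group, result in model_card["disaggregated_results"].items():
    print(group, result["error_rate"])
```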
…
Cas. 1120 (D. Mass. 1813). 8. Orly Lobel, “Biopolitical Opportunities: Datafication and Governance,” Notre Dame Law Review Reflection 96, no 4 (2021): 181–193. 9. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru, “Model Cards for Model Reporting,” in Proceedings of the Conference on Fairness, Accountability, and Transparency (New York: Association for Computing Machinery, 2019), 220–229, https://dl.acm.org/doi/abs/10.1145/3287560.3287596. 10. Jon Kleinberg et al., “Discrimination in the Age of Algorithms,” Journal of Legal Analysis 10 (2018): 113–174; Talia B.
Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond
by
Tamara Kneese
Published 14 Aug 2023
Daub, “The Undertakers of Silicon Valley.” 71. Benjamin, Race After Technology; Chun, Discriminating Data; Eubanks, Automating Inequality; Hicks, Programmed Inequality; Noble, Algorithms of Oppression. 72. Dylan Mulvin describes how Lena Forsén’s image has been continuously circulated without her permission in Proxies. Joy Buolamwini and Timnit Gebru describe how racial discrimination is reinforced by supposedly neutral AI. Buolamwini and Gebru, “Gender Shades.” 73. Van Doorn, “Platform Labor”; Nieborg and Poell, “The Platformization of Cultural Production.” 74. Sharma, In the Meantime. 75. Wajcman, Pressed for Time; Crary, 24/7; Williams and Srnicek, Inventing the Future. 76.
…
“The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms.” Information, Communication, and Society 20, no. 1 (2017): 30–44. Bunz, Mercedes. “Facebook Asks Users to Reconnect with the Dead.” Guardian, October 27, 2009. www.theguardian.com/media/pda/2009/oct/27/facebook-dead-reconnect-memorialise. Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research, Conference on Fairness, Accountability, and Transparency 81 (2018): 1–15. Burrell, Jenna. “The Field Site as a Network: A Strategy for Locating Ethnographic Research.”
Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It
by
Erica Thompson
Published 6 Dec 2022
But it’s difficult, because most of the researchers are WEIRD and educated at elite schools that emphasise a dominant paradigm of science where a model’s accuracy is the highest virtue. Questions of power, bias and implications for marginalised communities do not arise naturally because they do not personally affect the majority of these researchers. Linguist Emily Bender, AI researcher Timnit Gebru and colleagues have written about ‘the dangers of stochastic parrots’, referring to language models that can emulate English text in a variety of styles. Such models can now write poetry, answer questions, compose articles and hold conversations. They do this by scraping a huge archive of text produced by humans – basically most of the content of the internet plus a lot of books, probably with obviously offensive words removed – and creating statistical models that link one word with the probability of the next word given a context.
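The phrase "the probability of the next word given a context" can be made concrete with a toy sketch. Real language models use neural networks over enormous corpora, subword tokens, and long contexts rather than a table of bigram counts, but the statistical move is the same; the corpus below is invented.

```python
# Toy next-word model: count which word follows which, then turn the
# counts into probabilities. Real LLMs learn this with neural networks
# over vastly larger contexts, but the underlying idea is the same.
from collections import Counter, defaultdict

corpus = "the parrot repeats the phrase and the parrot repeats the sound".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def next_word_probabilities(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))     # {'parrot': 0.5, 'phrase': 0.25, 'sound': 0.25}
print(next_word_probabilities("parrot"))  # {'repeats': 1.0}
```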
…
Chapter 1: Locating Model Land King, Mervyn, and John Kay, Radical Uncertainty: Decision-Making for an Unknowable Future, Bridge Street Press, 2020 Chapter 2: Thinking Inside the Box #inmice, https://twitter.com/justsaysinmice https://www.climateprediction.net Held, Isaac, ‘The Gap Between Simulation and Understanding in Climate Modeling’, Bulletin of the American Meteorological Society, 86(11), 2005, pp. 1609–14 Mayer, Jurgen, Khaled Khairy and Jonathon Howard, ‘Drawing an Elephant with Four Complex Parameters’, American Journal of Physics, 78, 2010 Morgan, Mary, The World in the Model, Cambridge University Press, 2012 Page, Scott, The Model Thinker: What You Need to Know to Make Data Work for You, Basic Books, 2019 Parker, Wendy, ‘Model Evaluation: An Adequacy-for-Purpose View’, Philosophy of Science, 87(3), 2020 Pilkey, Orrin, and Linda Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future, Columbia University Press, 2007 Stainforth, David, Myles Allen, Edward Tredger, and Leonard Smith, ‘Confidence, Uncertainty and Decision-Support Relevance in Climate Predictions’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857), 2007 Stoppard, Tom, Arcadia, Faber & Faber, 1993 Chapter 3: Models as Metaphors Adichie, Chimamanda Ngozi, ‘The Danger of a Single Story’, TED talk (video and transcript), 2009 Bender, Emily, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021 Bolukbasi, Tolga, Kai-Wei Chang, James Y. Zou, et al., ‘Man Is to Computer Programmer as Woman Is to Homemaker?
Artificial Whiteness
by
Yarden Katz
Haraway, “Ecce Homo, Ain’t (Ar’n’t) I A Woman, and Inappropriated Others: The Human in a Post-Humanist Landscape,” in The Haraway Reader (New York: Routledge, 2004), 58–59. For readings of Sojourner Truth that try to imagine her voice and Afro-Dutch accent, see the Sojourner Truth Project, http://thesojournertruthproject.com. 57. Joy Buolamwini, “AI, Ain’t I A Woman?,” 2018, https://www.youtube.com/watch?v=QxuyfWoVV98. 58. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Proceedings of Machine Learning Research, Conference on Fairness, Accountability and Transparency 81 (2018): 1–15. 59. The poem also attributes great powers to AI. The very title “AI, Ain’t I A Woman?”
…
Derek Partridge and Yorick Wilks. Cambridge: Cambridge University Press, 1990. Buolamwini, Joy. “AI, Ain’t I A Woman?,” June 28, 2018. https://www.youtube.com/watch?v=QxuyfWoVV98. ________. “Artificial Intelligence Has a Problem with Gender and Racial Bias. Here’s How to Solve It.” Time, February 2019. Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of Machine Learning Research, Conference on Fairness, Accountability and Transparency 81 (2018): 1–15. Butler, Patrick. “UN Accuses Blackstone Group of Contributing to Global Housing Crisis.”
Applied Artificial Intelligence: A Handbook for Business Leaders
by
Mariya Yao
,
Adelyn Zhou
and
Marlene Jia
Published 1 Jun 2018
Using artificial intelligence to improve early breast cancer detection. MIT News. Retrieved from http://news.mit.edu/2017/artificial-intelligence-early-breast-cancer-detection-1017 4. The Challenges of Artificial Intelligence “The future is already here—it’s just not evenly distributed.” —William Gibson When Timnit Gebru attended a prestigious AI research conference in 2016, she counted six black people in the audience out of an estimated 8,500 attendees. There was only one black woman: herself. As a PhD from Stanford University who has published a number of notable papers in the field of artificial intelligence, Gebru finds the lack of diversity in the industry to be extremely alarming.(29) Data and technology are human inventions, ideally designed to reflect and advance human values.
Why Machines Learn: The Elegant Math Behind Modern AI
by
Anil Ananthaswamy
Published 15 Jul 2024
The math and algorithms described in this book give us ways of understanding the sources of such bias. One obvious way that bias creeps into machine learning is through the use of incomplete data (say, inadequate representation of faces of minorities in a database of images of people of some country—a point eloquently made in a 2018 paper titled “Gender Shades,” by Joy Buolamwini of MIT and Timnit Gebru, then with Microsoft Research). ML algorithms assume that the data on which they have been trained are drawn from some underlying distribution and that the unseen data on which they make predictions are also drawn from the same distribution. If an ML system encounters real-world data that falls afoul of this assumption, all bets are off as to the predictions.
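A minimal sketch of that assumption failing, under invented numbers rather than anything from the book: a threshold classifier tuned on data drawn from one distribution loses accuracy when the test data drift to a different one.

```python
# Sketch of distributional shift: a model tuned on one data distribution
# is evaluated on data drawn from a shifted one, and accuracy drops.
import random

random.seed(0)

def sample(mean_negative, mean_positive, n):
    """Draw n labeled points: half from each class, Gaussian features."""
    data = [(random.gauss(mean_negative, 1.0), 0) for _ in range(n // 2)]
    data += [(random.gauss(mean_positive, 1.0), 1) for _ in range(n // 2)]
    return data

def accuracy(data, threshold):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

train = sample(0.0, 2.0, 2000)        # training distribution
threshold = 1.0                       # midpoint chosen for the training data

same_dist = sample(0.0, 2.0, 2000)    # test data from the same distribution
shifted = sample(1.5, 3.5, 2000)      # the world has drifted

print("same distribution:", round(accuracy(same_dist, threshold), 3))
print("shifted distribution:", round(accuracy(shifted, threshold), 3))
```

With the same fixed threshold, the shifted test set is scored noticeably worse, which is the "all bets are off" situation in miniature.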
…
a paper in Science: Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science 366, No. 6464 (October 25, 2019): 447–53.
“Gender Shades”: Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
following interaction with OpenAI’s GPT-4: Adam Tauman Kalai, “How to Use Self-Play for Language Models to Improve at Solving Programming Puzzles,” Workshop on Large Language Models and Transformers, Simons Institute for the Theory of Computing, August 15, 2023, https://tinyurl.com/56sct6n8.
Democracy's Data: The Hidden Stories in the U.S. Census and How to Read Them
by
Dan Bouk
Published 22 Aug 2022
This book has made clear how foolish, even dangerous, a cry like that is. We need more investigations into data histories, not fewer, and more people willing and able to read the stories behind the numbers. Otherwise, data will be more apt to train algorithms and models to perpetuate biases or inequalities, as the computer scientist Timnit Gebru and her colleagues point out while explaining why there should always be “datasheets for datasets.”9 Otherwise, data that should be personal and protected will instead be more readily extracted and exploited—as the historian of science and medicine Joanna Radin warns in an exemplary study of a data set used widely to “train” algorithms to “see” patterns in Big Data, a data set built from medical records originally intended to fight diabetes among Akimel O’oodham people living on a reservation in Arizona.10 When data sets represent people, those who want to use that data should have to think long and hard about how those “datafied” persons can continue to be faithfully and justly represented.
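As a hypothetical illustration of the "datasheets for datasets" idea (the section names paraphrase themes from the Gebru et al. paper cited in note 9, and the example answers are invented), a datasheet can be thought of as a checklist that a dataset's documentation either answers or leaves blank:

```python
# Hypothetical datasheet sketch: section -> question, plus a check for
# which questions a given dataset's documentation leaves unanswered.
# Sections paraphrase themes from "Datasheets for Datasets"; answers are invented.
datasheet_questions = {
    "motivation": "For what purpose was the dataset created, and by whom?",
    "composition": "Who or what do the records represent? Which groups are over- or under-represented?",
    "collection": "How was the data gathered, and did the people represented consent?",
    "preprocessing": "What was cleaned, filtered, or relabeled before release?",
    "uses": "What uses are appropriate, and which are explicitly out of scope?",
    "distribution": "Who may access the data, and under what terms?",
    "maintenance": "Who maintains it, and how are errors reported and corrected?",
}

def missing_sections(answers):
    """Return the datasheet sections that the documentation does not answer."""
    return [section for section in datasheet_questions if section not in answers]

example_docs = {"motivation": "historical census research", "uses": "aggregate analysis only"}
print(missing_sections(example_docs))  # everything the documentation still owes its readers
```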
…
Minutes of Census Advisory Committee, March 29–30, 1940, in Folder Advisory Committee Meeting March 29 and 30, 1940, Box 76, Entry 148, RG 29, NARA, D.C. 8. Bureau of the Census, Report of the Seventeenth Decennial Census of the United States. Census of Population: 1950 (Washington, D.C.: Government Printing Office, 1952) I: xii–xiii. 9. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford, “Datasheets for Datasets,” March 19, 2020, arXiv:1803.09010v7 [cs.DB]. 10. Joanna Radin, “‘Digital Natives’: How Medical and Indigenous Histories Matter for Big Data,” Osiris (2017): 43–64.
Robot Rules: Regulating Artificial Intelligence
by
Jacob Turner
Published 29 Oct 2018
Sampling bias or skewed data can arise from the manner in which data is collected: landline telephone polls carried out in the daytime sample a disproportionate number of people who are elderly, unemployed or stay-at-home carers, because these groups are more likely to be at home and willing to take calls at the relevant time. Skewed data sets may arise because data of one type are more readily available, or because those inputting the data sets are not trying hard enough to find diverse sources. Joy Buolamwini and Timnit Gebru of MIT performed an experiment which demonstrated that three leading pieces of picture recognition software68 were significantly less accurate at identifying dark-skinned females than they were at matching pictures of light-skinned males.69 Though the input data sets used by the picture recognition software were not made available to the researchers, Buolamwini and Gebru surmised that the disparity arose from training on data sets of light-skinned males (which probably reflected the gender and ethnicity of the programmers).
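The daytime landline example can be simulated in a few lines, with entirely invented numbers: a subgroup that is easy to reach gets heavily over-sampled, and the poll's estimate drifts far from the population's true rate.

```python
# Sketch of sampling bias: a daytime landline poll over-samples people who
# are likely to be at home, so its estimate of an opinion drifts away from
# the true population average. All numbers are invented.
import random

random.seed(1)

# Each person: (probability of answering a daytime landline call, holds_opinion)
population = (
    [(0.70, True) for _ in range(20_000)] +   # reachable at home during the day; holds the opinion
    [(0.05, False) for _ in range(80_000)]    # at work during the day; does not
)

true_rate = sum(holds for _, holds in population) / len(population)

poll_sample = [holds for reach_prob, holds in population if random.random() < reach_prob]
polled_rate = sum(poll_sample) / len(poll_sample)

print(f"true rate of the opinion: {true_rate:.1%}")       # 20.0%
print(f"daytime landline poll says: {polled_rate:.1%}")   # roughly 78%
```

The same mechanism applies to a training set: if one group is far easier to collect, the model sees a distorted picture of the world it will be asked to judge.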
…
Available at Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR” (6 October 2017), Harvard Journal of Law & Technology, Forthcoming, https://ssrn.com/abstract=3063289 or http://dx.doi.org/10.2139/ssrn.3063289, accessed 1 June 2018. 60Entry on Elizabeth I, The Oxford Dictionary of Quotations (Oxford: Oxford University Press, 2001), 297. 61A person’s mental state in terms of knowledge or intent may well be important, but it rarely has legal consequences unless it is accompanied by some form of culpable action or omission: people are not usually penalised for “having bad thoughts”. 62Ben Dickson, “Why It’s So Hard to Create Unbiased Artificial Intelligence”, Tech Crunch, 7 November 2016, https://techcrunch.com/2016/11/07/why-its-so-hard-to-create-unbiased-artificial-intelligence/, accessed 1 June 2018. 63Sam Levin, “A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin”, The Guardian, https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people, accessed 1 June 2018. 64Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias ”, ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 1 June 2018. 65Marvin Minsky, The Emotion Machine (London: Simon & Schuster, 2015), 113. 66See, for example, the Entry on Bias in the Cambridge Dictionary: “… the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment”, Cambridge Dictionary, https://dictionary.cambridge.org/dictionary/english/bias, accessed 1 June 2018. 67Nora Gherbi, “Artificial Intelligence and the Age of Empathy”, Conscious Magazine, http://consciousmagazine.co/artificial-intelligence-age-empathy/, accessed 1 June 2018. 68The programs tested were those of IBM , Microsoft and Face ++. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” (Conference on Fairness, Accountability, and Transparency , February 2018), http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf, accessed 1 June 2018. 69Ibid. 70“Mitigating Bias in AI Models”, IBM Website, https://www.ibm.com/blogs/research/2018/02/mitigating-bias-ai-models/, accessed 1 June 2018.
Your Computer Is on Fire
by
Thomas S. Mullaney
,
Benjamin Peters
,
Mar Hicks
and
Kavita Philip
Published 9 Mar 2021
Yet there is alarming concern about these technologies’ lack of reliability, and the consequences of erroneously flagging people as criminals or as suspects of crimes they have not committed. The deployment of these AI technologies has been brought to greater scrutiny by scholars like computer scientists Joy Buolamwini and Timnit Gebru, whose research on facial-recognition software’s misidentification of women and people of color found that commercial technologies—from IBM to Microsoft or Face++—failed to recognize the faces of women of color, and have a statistically significant error rate in the recognition of brown skin tones.22 Buolamwini even raised her concerns about Amazon’s Rekognition and its inaccuracy in detecting women of color to CEO Jeff Bezos.23 Drones and other robotic devices are all embedded with these politics that are anything but neutral and objective.
…
“An Open Letter to Microsoft: Drop your $19.4 million ICE tech contract,” petition, accessed July 29, 2018, https://actionnetwork.org/petitions/an-open-letter-to-microsoft-drop-your-194-million-ice-tech-contract. 21. See Kate Conger, “Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program,” Gizmodo (June 1, 2018), https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-mave-1826488620. 22. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research 81 (2018): 77–91. 23. See Ali Breland, “MIT Researcher Warned Amazon of Bias in Facial Recognition Software,” The Hill (July 26, 2018), http://thehill.com/policy/technology/399085-mit-researcher-finds-bias-against-women-minorities-in-amazons-facial. 24.
The Alignment Problem: Machine Learning and Human Values
by
Brian Christian
Published 5 Oct 2020
In addition, many ethnicities have very minor representation or none at all.”45 In recent years, greater attention has been paid to the makeup of these training sets, though much remains to be done. In 2015, the United States Office of the Director of National Intelligence and the Intelligence Advanced Research Projects Activity released a face image dataset called IJB-A, boasting, they claimed, “wider geographic variation of subjects.”46 With Microsoft’s Timnit Gebru, Buolamwini did an analysis of the IJB-A and found that it was more than 75% male, and almost 80% light-skinned. Just 4.4% of the dataset were dark-skinned females.47 Eventually it became clear to Buolamwini that the “somebody else [who] will solve this problem” was—of course—her. She started a broad investigation into the current state of face-detection systems, which became her MIT thesis.
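A composition tally of this kind is straightforward to reproduce on any face dataset that carries demographic annotations; the sketch below uses invented metadata rather than IJB-A itself.

```python
# Sketch of a dataset composition audit: given per-image annotations,
# report what share of the data each intersectional group makes up.
# The metadata below is invented for illustration.
from collections import Counter

annotations = [
    {"skin": "lighter", "gender": "male"},
    {"skin": "lighter", "gender": "male"},
    {"skin": "lighter", "gender": "female"},
    {"skin": "darker", "gender": "male"},
    {"skin": "darker", "gender": "female"},
]

def composition(rows):
    counts = Counter((row["skin"], row["gender"]) for row in rows)
    total = len(rows)
    return {group: count / total for group, count in counts.items()}

for group, share in sorted(composition(annotations).items()):
    print(f"{group[0]:>7} {group[1]:>6}: {share:.1%} of images")
```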
…
Adventures in NI (blog), September 19, 2019. https://joanna-bryson.blogspot.com/2019/09/six-kinds-of-explanation-for-ai-one-is.html. Buchsbaum, Daphna, Alison Gopnik, Thomas L. Griffiths, and Patrick Shafto. “Children’s Imitation of Causal Action Sequences Is Influenced by Statistical and Pedagogical Evidence.” Cognition 120, no. 3 (2011): 331–40. Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91, 2018. Burda, Yuri, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A. Efros. “Large-Scale Study of Curiosity-Driven Learning.”
Nexus: A Brief History of Information Networks From the Stone Age to AI
by
Yuval Noah Harari
Published 9 Sep 2024
Emily Washburn, “What to Know About Effective Altruism—Championed by Musk, Bankman-Fried, and Silicon Valley Giants,” Forbes, March 8, 2023, www.forbes.com/sites/emilywashburn/2023/03/08/what-to-know-about-effective-altruism-championed-by-musk-bankman-fried-and-silicon-valley-giants/; Alana Semuels, “How Silicon Valley Has Disrupted Philanthropy,” Atlantic, July 25, 2018, www.theatlantic.com/technology/archive/2018/07/how-silicon-valley-has-disrupted-philanthropy/565997/; Timnit Gebru, “Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety,’ ” Wired, Nov. 30, 2022, www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/; Gideon Lewis-Kraus, “The Reluctant Prophet of Effective Altruism,” New Yorker, Aug. 8, 2022, www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism. 39.
…
Artificial Intelligence as a New Era in Medicine,” Journal of Personalized Medicine 11 (2021), article 32; Mohsen Soori, Behrooz Arezoo, and Roza Dastres, “Artificial Intelligence, Machine Learning, and Deep Learning in Advanced Robotics: A Review,” Cognitive Robotics 3 (2023): 54–70. 61. Christian, Alignment Problem, 31; D’Ignazio and Klein, Data Feminism, 29–30. 62. Christian, Alignment Problem, 32; Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Proceedings of the 1st Conference on Fairness, Accountability, and Transparency, PMLR 81 (2018): 77–91. 63. Lee, “Learning from Tay’s Introduction.” 64. D’Ignazio and Klein, Data Feminism, 28; Jeffrey Dastin, “Insight—Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, Oct. 11, 2018, www.reuters.com/article/idUSKCN1MK0AG/. 65.
Four Battlegrounds
by
Paul Scharre
Published 18 Jan 2023
(“Trump Discusses China, ‘Political Fairness’ with Google CEO,” Reuters, March 27, 2019, https://www.reuters.com/article/us-usa-trump-google/trump-discusses-china-political-fairness-with-google-ceo-idUSKCN1R82CB.) 63“AI was largely hype”: Liz O’Sullivan, interview by author, February 12, 2020. 64Machine learning systems in particular can fail: Ram Shankar Siva Kumar et al., “Failure Modes in Machine Learning,” Microsoft Docs, November 11, 2019, https://docs.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning; Dario Amodei et al., Concrete Problems in AI Safety (arXiv.org, July 25, 2016), https://arxiv.org/pdf/1606.06565.pdf. 64perform poorly on people of a different gender, race, or ethnicity: Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018), 1–15, https://dam-prod.media.mit.edu/x/2018/02/06/Gender%20Shades%20Intersectional%20 Accuracy%20Disparities.pdf. 64Google Photos image recognition algorithm: Tom Simonite, “When It Comes to Gorillas, Google Photos Remains Blind,” Wired, January 11, 2018, https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/; Alistair Barr, “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms,” Wall Street Journal, July 1, 2015, https://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/. 64insufficient representation of darker faces: Barr, “Google Mistakenly Tags Black People as ‘Gorillas.’” 64distributional shift in the data: Rohan Taori, Measuring Robustness to Natural Distribution Shifts in Image Classification (arXiv.org, September 14, 2020), https://arxiv.org/pdf/2007.00644.pdf. 64problems are common in image classification systems: Maggie Zhang, “Google Photos Tags Two African-Americans as Gorillas Through Facial Recognition Software,” Forbes, July 1, 2015, https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/#60111f6713d8. 65several fatal accidents: Rob Stumpf, “Tesla on Autopilot Crashes into Parked California Police Cruiser,” The Drive, May 30, 2018, https://www.thedrive.com/news/21172/tesla-on-autopilot-crashes-into-parked-california-police-cruiser; Rob Stumpf, “Autopilot Blamed for Tesla’s Crash Into Overturned Truck,” The Drive, June 1, 2020, https://www.thedrive.com/news/33789/autopilot-blamed-for-teslas-crash-into-overturned-truck; James Gilboy, “Officials Find Cause of Tesla Autopilot Crash Into Fire Truck: Report,” The Drive, May 17, 2018, https://www.thedrive.com/news/20912/cause-of-tesla-autopilot-crash-into-fire-truck-cause-determined-report; Phil McCausland, “Self-Driving Uber Car That Hit and Killed Woman Did Not Recognize That Pedestrians Jaywalk,” NBC News, November 9, 2019, https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281; National Transportation Safety Board, “Collision Between a Sport Utility Vehicle Operating With Partial Driving Automation and a Crash Attenuator” (presented at public meeting, February 25, 2020), https://www.ntsb.gov/news/events/Documents/2020-HWY18FH011-BMG-abstract.pdf; Aaron Brown, “Tesla Autopilot Crash Victim Joshua Brown Was an Electric Car Buff and a Navy SEAL,” The Drive, July 1, 2016, https://www.thedrive.com/news/4249/tesla-autopilot-crash-victim-joshua-brown-was-an-electric-car-buff-and-a-navy-seal. 
65 drone footage from a different region: Marcus Weisgerber, “The Pentagon’s New Artificial Intelligence Is Already Hunting Terrorists,” Defense One, December 21, 2017, https://www.defenseone.com/technology/2017/12/pentagons-new-artificial-intelligence-already-hunting-terrorists/144742/.
65 Tesla has come under fire: Andrew J.
…
Brown, “Label Bias, Label Shift: Fair Machine Learning with Unreliable Labels” (paper presented at 34th Conference on Neural Information Processing Systems, Vancouver, Canada, December 2020), https://dynamicdecisions.github.io/assets/pdfs/29.pdf.
233 trained on one set of data: Lack of robustness to distributional shift is a consistent problem with facial recognition systems across gender and racial groups. Joy Adowaa Buolamwini, “Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers” (master’s thesis, MIT, September 2017), https://dam-prod.media.mit.edu/x/2018/02/05/buolamwini-ms-17_WtMjoGY.pdf; Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018), 1–15, https://dam-prod.media.mit.edu/x/2018/02/06/Gender%20Shades%20Intersectional%20Accuracy%20Disparities.pdf.
233 Neural networks don’t know: Thulasidasan et al., An Effective Baseline for Robustness to Distributional Shift.
233 deep learning systems in the real world: Chuan Guo et al., “On Calibration of Modern Neural Networks,” Proceedings of the 34th International Conference on Machine Learning 70 (August 2017), 1321–1330, https://arxiv.org/pdf/1706.04599.pdf; Dan Hendrycks et al., Scaling Out-of-Distribution Detection for Real-World Settings (arXiv.org, December 7, 2020), https://arxiv.org/pdf/1911.11132.pdf; “Overview,” ICML 2021 Workshop on Uncertainty & Robustness in Deep Learning, July 23, 2021, https://sites.google.com/view/udlworkshop2021/home.
233 shallow “world model”: David Ha and Jürgen Schmidhuber, World Models (arXiv.org, May 9, 2018), https://arxiv.org/pdf/1803.10122.pdf.
233 “mistaking performance for competence”: Rodney A.
The Authoritarian Moment: How the Left Weaponized America's Institutions Against Dissent
by
Ben Shapiro
Published 26 Jul 2021
The goal is to remake the constituency of companies themselves, so that the authoritarians can completely remake the algorithms in their own image. When Turing Award winner and Facebook chief AI scientist Yann LeCun pointed out that machine learning systems are racially biased only if their inputs are biased, and suggested that inputs could be corrected to present an opposite racial bias, the authoritarian woke critics attacked: Timnit Gebru, technical co-lead of the Ethical Artificial Intelligence Team at Google, accused LeCun of “marginalization” and called for solving “social and structural problems.” The answer, said Gebru, was to hire members of marginalized groups, not to change the data set used by machine learning.49
CROWDSOURCING THE REVOLUTION
For most Americans, the true dangers of social media don’t even lie in the censorship of news itself: the largest danger lies in the roving mobs social media represent.
The Loop: How Technology Is Creating a World Without Choices and How to Fight Back
by
Jacob Ward
Published 25 Jan 2022
We instead need to account for past patterns of discrimination, the kind of horrific systemic abuses Jesus Hernandez has spent decades measuring, and in fact put our finger on the scale to compensate for it where we can. There is vital and rapid work being done on bias in AI. The researchers Joy Buolamwini and Timnit Gebru published a seminal paper in February 2018 revealing that the top three commercial facial-recognition systems misidentified white, male faces only 0.8 percent of the time, while the same systems misidentified dark-skinned women more than 20 percent of the time. And the stakes, they pointed out, are high: “While face recognition software by itself should not be trained to determine the fate of an individual in the criminal justice system, it is very likely that such software is used to identify suspects.”5 Inspired in part by that work, a study by the federal agency charged with establishing technical benchmarks on new technology, the National Institute of Standards and Technology (NIST), found that across 189 facial-recognition algorithms from 99 developers around the world, Asian and African American faces were misidentified far more often than white faces.
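The figures quoted from the Buolamwini and Gebru audit are per-subgroup error rates: the fraction of faces in each intersectional group (skin type crossed with gender) that a classifier labels incorrectly. A minimal sketch of that computation follows, in Python; the subgroup counts are entirely hypothetical and are not drawn from the Gender Shades benchmark itself.

```python
# Illustrative sketch only: the counts below are hypothetical placeholders,
# not the Gender Shades data.
# Error rate per intersectional subgroup = misclassified faces / faces in subgroup.

counts = {
    # subgroup: (misclassified, total) -- hypothetical numbers
    "lighter-skinned men":   (1, 120),
    "lighter-skinned women": (8, 110),
    "darker-skinned men":    (12, 100),
    "darker-skinned women":  (31, 90),
}

for group, (errors, total) in counts.items():
    print(f"{group}: {errors / total:.1%} error rate ({errors}/{total})")
```

The published audit reports rates of this kind separately for each of the three commercial classifiers it evaluated, which is how gaps on the order of 0.8 percent versus more than 20 percent are expressed.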
Power, for All: How It Really Works and Why It's Everyone's Business
by
Julie Battilana
and
Tiziana Casciaro
Published 30 Aug 2021
Dawes, “The Robust Beauty of Improper Linear Models in Decision-Making,” American Psychologist 34, no. 7 (1979): 571–82.
23 Batya Friedman and Helen Nissenbaum, “Bias in Computer Systems,” ACM Transactions on Information Systems 14, no. 3 (1996): 330–47; also discussed in Agrawal, Gans, and Goldfarb, Prediction Machines, and in Marco Iansiti and Karim Lakhani, Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World (Boston: Harvard Business Review Press, 2020).
24 Tom Simonite, “The Best Algorithms Still Struggle to Recognize Black Faces,” Wired, Condé Nast, July 22, 2019, https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/. For more on algorithmic recognition bias see Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Conference on Fairness, Accountability and Transparency, PMLR (2018): 77–91; Yui Man Lui et al., “A Meta-Analysis of Face Recognition Covariates,” in 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems (2009): 1–8.
25 Joy Buolamwini, “How I’m Fighting Bias in Algorithms,” TEDxBeaconStreet, November 2016, https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms.
26 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: Picador, 2019); Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, MA: Polity, 2019).
27 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Westminster, UK: Penguin Books, 2017); Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
28 Cathy O’Neil, “The Era of Blind Faith in Big Data Must End,” TED, April 2017, https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end.
29 Emily Chang, Brotopia: Breaking Up the Boys’ Club of Silicon Valley (New York: Portfolio/Penguin, 2019).
30 In the eighteenth century, English philosopher Jeremy Bentham designed an influential prison system, the “panopticon.”
Digital Empires: The Global Battle to Regulate Technology
by
Anu Bradford
Published 25 Sep 2023
For example, Twitter reportedly deactivated only 11 percent of over 3,500 total accounts spreading pro-government propaganda worldwide.161 US platforms are also criticized for their inability to effectively moderate content in foreign languages. Documents leaked by Frances Haugen reveal Facebook’s inability to curtail inflammatory hate speech in Ethiopia, where the platform was used to call for killings and mass internment of the country’s ethnic Tigrayans as part of the ongoing civil war.162 Timnit Gebru, a data scientist who used to lead Google’s ethical AI team and who is fluent in the Amharic language used in Facebook posts in Ethiopia, described the content circulating as “the most terrifying I’ve ever seen anywhere,” likening it to the language used in the context of the earlier Rwanda genocide.
…
(Feb. 26, 2019), https://egyptindependent.com/egypt-huawei-sign-mou-for-cloud-computing-ai-networks/.
29. James Barton, Telecom Egypt Secures $200M of Chinese Financing, Developing Telecoms (May 30, 2018), https://developingtelecoms.com/business/operator-news/7841-telecom-egypt-secures-200m-of-chinese-financing.html.
30. Hikvision Enhances Suez Governorate’s Bus Fleet Operation, Hikvision, https://www.hikvision.com/en/newsroom/success-stories/traffic/hikvision-enhances-suez-governorate-s-bus-fleet-operation/.
31. Atha et al., supra note 19, at 68–70.
32. Id. at 70.
33. Id.
34. Id. at 75 (quoting a Nairobi police force official speaking in a video posted on (yet subsequently removed from) Huawei’s website).
35. Chris Burt, Zimbabwe to Use Hikvision Facial Recognition Technology for Border Control, Biometric Update (June 14, 2018), https://www.biometricupdate.com/201806/zimbabwe-to-use-hikvision-facial-recognition-technology-for-border-control.
36. Problem Masau, Chinese Tech Revolution Comes to Zimbabwe, Herald (Oct. 9, 2019, 00:10), https://www.herald.co.zw/chinese-tech-revolution-comes-to-zim/.
37. See Amy Hawkins, Beijing’s Big Brother Tech Needs African Faces, Foreign Pol’y (July 28, 2018), https://foreignpolicy.com/2018/07/24/beijings-big-brother-tech-needs-african-faces/.
38. Lynsey Chutel, China Is Exporting Facial Recognition Software to Africa, Expanding Its Vast Database, Quartz Africa (July 20, 2022), https://qz.com/africa/1287675/china-is-exporting-facial-recognition-to-africa-ensuring-ai-dominance-through-diversity/.
39. Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 Proc. Mach. Learning Rsch. 77 (2018), http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
40. Hawkins, supra note 37.
41. Id.
42. Ashnah Kalemera, Tanzania Issues Regressive Online Content Regulations, CIPESA (Apr. 12, 2018), https://cipesa.org/2018/04/tanzania-enacts-regressive-online-content-regulations/.
43. Jisuanji Xinxi Wangluo Guoji Lianwang Anquan Baohu Guanli Banfa (计算机信息网络国际联网安全保护管理办法) [Measures for Security Protection Administration of International Interconnection of Computer-Based Information Networks] (promulgated by St.
Searches: Selfhood in the Digital Age
by
Vauhini Vara
Published 8 Apr 2025
I started hearing from others who, having lost loved ones themselves, marveled at how the piece captured grief. It was better received, by far, than anything else I’d ever written. I thought I should feel proud, and to an extent I did. But I felt unsettled, too. Five months before the publication of “Ghosts,” the researchers Emily M. Bender and Timnit Gebru had written, with colleagues, a paper called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In it, they made a convincing case that the methods used to train AI language models, in addition to requiring huge amounts of energy, could lead them to produce biased, even racist or misogynistic, language.
AI in Museums: Reflections, Perspectives and Applications
by
Sonja Thiel
and
Johannes C. Bernhardt
Published 31 Dec 2023
The capabilities of AI trained on such data thus require demystification; to this end, existing AI projects in museums should critically document how these algorithms are used: the labour, datasets, and industrial technologies involved, and how the project assesses the impact of these factors on its methodology. These reflections should be prominent in the project descriptions, oriented toward the model cards proposed by Margaret Mitchell, Timnit Gebru, and others (Mitchell/Wu/Zaldivar 2019). What social aspects can we, however, observe when we shift our gaze from image recognition technology or machine learning to the human actors involved? To reflect critically on what we are doing when we use AI in art history and museums, it is essential to make the underlying work visible.
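A model card, in the sense of Mitchell, Gebru, and colleagues, is simply structured documentation shipped alongside a model. The sketch below is a much-abbreviated, hypothetical example for an imagined museum tagging project, expressed in Python for illustration; the section names follow the published proposal, while every value is a placeholder rather than a description of any real system.

```python
# Skeletal model card following the section headings of Mitchell et al.,
# "Model Cards for Model Reporting" (2019). All values are illustrative
# placeholders for a hypothetical museum collection-tagging model.

model_card = {
    "Model Details": "Image classifier used for first-pass tagging of a digitized collection",
    "Intended Use": "Curatorial assistance; not for unreviewed public-facing metadata",
    "Factors": "Performance may vary with object type, photographic style, and era of the source imagery",
    "Metrics": "Top-1 tag accuracy, reported separately per collection department",
    "Evaluation Data": "Held-out, manually verified sample of the collection",
    "Training Data": "Licensed image datasets plus in-house digitization; annotation labour documented",
    "Ethical Considerations": "Historical catalogue metadata may encode biased or outdated terminology",
    "Caveats and Recommendations": "All machine-generated tags are reviewed by staff before publication",
}

for section, note in model_card.items():
    print(f"{section}: {note}")
```

Keeping such a card alongside the project description makes the labour, data provenance, and known limitations visible in the way the passage above recommends.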
New Laws of Robotics: Defending Human Expertise in the Age of AI
by
Frank Pasquale
Published 14 May 2020
Kevin Arthur, “Hartzog and Selinger: Ban Facial Recognition,” Question Technology (blog), August 4, 2018, https://www.questiontechnology.org/2018/08/04/hartzog-and-selinger-ban-facial-recognition/; Woodrow Hartzog and Evan Selinger, “Surveillance As Loss of Obscurity,” Washington and Lee Law Review 72 (2015): 1343–1388. 17. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15, http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf. 18. Richard Feloni, “An MIT Researcher Who Analyzed Facial Recognition Software Found Eliminating Bias in AI Is a Matter of Priorities,” Business Insider, January 23, 2019, https://www.businessinsider.com/biases-ethics-facial-recognition-ai-mit-joy-buolamwini-2019-1. 19.
Ways of Being: Beyond Human Intelligence
by
James Bridle
Published 6 Apr 2022
The appointment to the council of Kay Coles James, the president of the deeply conservative think tank the Heritage Foundation, drew complaints from Google employees and outsiders over her ‘anti-trans, anti-LGBTQ and anti-immigrant’ statements, and other members of the board tendered their resignations. Unwilling to confront the real issues it had provoked, Google shut down the advisory council less than a fortnight after launching it.22 In December 2020, the issue flared up once more when Google fired one of the leaders of its own Ethical Artificial Intelligence team, Timnit Gebru, after she refused to withdraw an academic paper she had authored which criticized deep biases within Google’s own machine-learning systems, highlighting issues of opacity, environmental and financial cost, and the systems’ potential for deception and misuse. Despite the support of her team, and the resignation of several other employees, Google refused to officially release her original paper.23 By referring to these pressing concerns with new technology as ‘ethical issues’, the companies who address them are made to look and feel good about discussing them, while limiting that discussion to an internal, specialized debate about abstract values and the design of technology itself.
Architects of Intelligence
by
Martin Ford
Published 16 Nov 2018
The oversampling there actually helps those populations, because we can make better predictions about them, whereas if you then have an undersampled population, because they’re paying in cash and there is little available data, the algorithm could be less accurate for those populations, and as a result, more conservative in choosing to lend, which essentially biases the ultimate decisions. We have this issue too in facial recognition systems, as has been demonstrated in the work of Timnit Gebru, Joy Buolamwini, and others. It may not be the biases that any human being has in developing the algorithms, but the way in which we’ve collected the data that the algorithms are trained on that introduces bias.
MARTIN FORD: What about other kinds of risks associated with AI? One issue that’s gotten a lot of attention lately is the possibility of existential risk from superintelligence.
On the Edge: The Art of Risking Everything
by
Nate Silver
Published 12 Aug 2024
” *22 Another good example of an effective hedonist is Perkins, whose book Die with Zero is about EV maximizing your life in a data-driven way. For instance, Perkins recommends prioritizing experiences over buying stuff or departing life with a large inheritance.
*23 Though Alexander himself had twins right as I was completing this draft.
*24 Some critics of EA like Émile Torres and Timnit Gebru use the term “TESCREAL” to describe this, for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. No, there won’t be a pop quiz.
8 Miscalculation
Act 5: Lower Manhattan, October–November 2023
Sam Bankman-Fried, at least by his own account,[*1] wasn’t much of a fan of poker or other forms of capital-G Gambling.