Human + Machine: Reimagining Work in the Age of AI

by Paul R. Daugherty and H. James Wilson  · 15 Jan 2018  · 523pp  · 61,179 words

never before imaginable. As the authors point out, we must invest in training millions of people for the jobs of tomorrow and establish guardrails to ensure that as AI evolves, the benefits accrue to all of humanity. Human + Machine is a roadmap to the future—read it if you’re serious about

some of the following basic tools and techniques to help foster trust and a bit more rationality. Install Guardrails One approach is to build guardrails into an AI-based process. These give managers or leadership control over outcomes that might be unintended. One example is Microsoft’s chatbot named Tay. In 2016

know the boundaries as well. In an organization, usually the sustainer asks about the boundaries, limitations, and unintended consequences of AI and then develops the guardrails to keep the system on track. Guardrails, therefore, bolster worker confidence in AI. Use Human Checkpoints Ninety-two percent of automation technologists don’t fully trust robots
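The two techniques named in the excerpts above — installing guardrails and adding human checkpoints — can be sketched in a few lines of code. This is a minimal illustrative sketch only, not taken from the book; every name in it (`REFUND_LIMIT`, `model_suggest`, the confidence threshold) is a hypothetical assumption:

```python
# Hypothetical sketch: wrapping an AI-driven decision with (1) a hard
# guardrail the model cannot exceed and (2) a human checkpoint for
# low-confidence outputs. All names and thresholds are illustrative.

from dataclasses import dataclass

REFUND_LIMIT = 500.00  # guardrail: hard cap on what the AI may approve


@dataclass
class Decision:
    amount: float
    confidence: float


def model_suggest(request_amount: float) -> Decision:
    # Stand-in for an AI model's suggestion (assumed behavior).
    return Decision(amount=request_amount * 0.9, confidence=0.72)


def guarded_decision(request_amount: float) -> str:
    d = model_suggest(request_amount)
    if d.amount > REFUND_LIMIT:
        # "Install guardrails": block outcomes outside the set boundary.
        return "blocked: exceeds guardrail limit"
    if d.confidence < 0.8:
        # "Use human checkpoints": route uncertain cases to a person.
        return "escalated: human checkpoint required"
    return f"approved: {d.amount:.2f}"
```

The design point is the ordering: the guardrail is checked before the confidence gate, so even a highly confident model cannot act outside the boundary managers have set.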

how to build trust in the algorithms deployed. That takes leadership—managers who promote responsible AI by fostering a culture of trust toward AI through the implementation of guardrails, the minimization of moral crumple zones, and other actions that address the legal, ethical, and moral issues that can arise when these types

Four Battlegrounds

by Paul Scharre  · 18 Jan 2023

China would starve the U.S. AI community of valuable talent. At the same time, the U.S. government will need to erect guardrails around U.S.-China AI ties to ensure that U.S. researchers are not, directly or indirectly, contributing to human rights abuses or aiding the Chinese military. And

are still in flux and will continue to change as U.S. officials recalibrate the U.S.-China tech relationship. Even once these guardrails are in place, for AI researchers who continue working in China, doing so ethically will be a major challenge. Under Chinese Communist Party rule, “AI for good” means

. Lieutenant General Jack Shanahan of the U.S. DoD expressed a similar interest in exploring the possibility of “some limits, some guidelines, some guardrails” about military applications of AI. Shanahan told me he thought it was vitally important that the United States, Russia, and China engage in discussions on the role of

The Great Wave: The Era of Radical Disruption and the Rise of the Outsider

by Michiko Kakutani  · 20 Feb 2024  · 262pp  · 69,328 words

, Google, and other Silicon Valley companies to capitalize on AI will mean that many of these systems will be released without adequate guardrails and without a full understanding of AI’s terrifying and still emerging abilities. In the spring of 2023, Geoffrey Hinton—the computer scientist often called “the godfather of AI

Reset

by Ronald J. Deibert  · 14 Aug 2020

in the UAE, local police in India, and a think tank in Saudi Arabia. BuzzFeed’s analysis revealed a callous disregard at Clearview AI for legal or other guardrails against misuse. According to BuzzFeed, “Clearview has taken a flood-the-zone approach to seeking out new clients, providing access not just to

Co-Intelligence: Living and Working With AI

by Ethan Mollick  · 2 Apr 2024  · 189pp  · 58,076 words

controversy to its creators, who are generally liberal, Western capitalists. But RLHF is not just about addressing bias. It also places guardrails on the AI to prevent malicious actions. Remember, the AI has no particular sense of morality; RLHF constrains its ability to behave in what its creators would consider immoral ways. After

associated with it, it tries to satisfy my request. Once we have started along this path, it becomes easier to follow up without triggering the AI guardrails—I was able to ask it, as a pirate, to give me more specifics about the process as needed. It may be impossible to avoid

, as you read this, it is likely that national defense organizations in a dozen countries are spinning up their own LLMs, ones without guardrails. While most publicly available AI image and video generation tools have some safeguards in place, a sufficiently advanced system without restrictions can produce highly realistic fabricated content on

The Singularity Is Nearer: When We Merge with AI

by Ray Kurzweil  · 25 Jun 2024

/3571730. 162. Jonathan Cohen, “Right on Track: NVIDIA Open-Source Software Helps Developers Add Guardrails to AI Chatbots,” NVIDIA, April 25, 2023, https://blogs.nvidia.com/blog/2023/04/25/ai-chatbot-guardrails-nemo. 163. Turing, “Computing Machinery and Intelligence.” 164. Turing

Elon Musk

by Walter Isaacson  · 11 Sep 2023  · 562pp  · 201,502 words

harming humanity. Think of the computer Hal that runs amok and battles its human creators in 2001: A Space Odyssey. What guardrails and kill switches can we humans put on AI systems so that they remain aligned with our interests, and who among us should get to determine what those interests are

Zucked: Waking Up to the Facebook Catastrophe

by Roger McNamee  · 1 Jan 2019  · 382pp  · 105,819 words

: our jobs, our routine preferences, and the choice of ideas we believe in. I believe the government should insist on guardrails for AI development, licensing for AI applications, and transparency and auditing of AI-based systems. I would like to see the equivalent of an FDA for tech to ensure that large scale projects

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity

by Adam Becker  · 14 Jun 2025  · 381pp  · 119,533 words

safety at Microsoft. “I was mainly alarmed that it was released, to be honest.” Ord claims his fears about insufficient guardrails on superintelligent AGI are shared by many experts in AI research. “It turns out that the average ML researcher thinks that there’s something like a 5 percent chance of” superintelligent

This Is for Everyone: The Captivating Memoir From the Inventor of the World Wide Web

by Tim Berners-Lee  · 8 Sep 2025  · 347pp  · 100,038 words

co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi. This shift from Safety to Action has meant less coordinated work on AI guardrails. And with the competitive momentum behind the various LLM projects across the world, it is a challenge to balance different commercial interests. That is precisely