In a new study, participants tended to blame artificial intelligences (AIs) involved in real-world moral transgressions more when they perceived the AIs to have more human-like minds. Minjoo Joo of Sookmyung Women’s University in Seoul, Korea, presents these findings in the open-access journal PLOS ONE on December 18, 2024.
Previous research has revealed a tendency for people to blame AIs for various moral transgressions, such as an autonomous vehicle hitting a pedestrian or decisions that caused medical or military harm. Additional research suggests that people are more likely to blame AIs considered capable of awareness, thinking, and planning, and that people are more likely to attribute such capabilities to AIs they believe have human-like minds that can experience conscious feelings.
Building on that earlier research, Joo hypothesized that AIs perceived to have human-like minds may receive a greater share of the blame for a given moral transgression.
To test this idea, Joo conducted several experiments in which participants were presented with real-world cases of moral transgressions involving AIs, such as racist auto-tagging of photos, and were asked questions to assess their perception of the AI’s mind, as well as the extent to which they blamed the AI, its programmer, the company behind it, or the government. In some experiments, perception of the AI’s mind was manipulated by describing the AI’s name, age, height, and hobby.
Across the experiments, participants tended to blame an AI significantly more when they perceived it to have a more human-like mind. In these cases, when participants were asked to apportion blame across the agents, they tended to assign less blame to the company involved. But when they rated each agent’s blame independently, the blame assigned to the company was not reduced.
These findings suggest that perception of an AI’s mind is a critical factor in the attribution of blame for transgressions involving AI. Additionally, Joo raises concerns about the potentially harmful consequences of misusing AIs as scapegoats and calls for further research into the attribution of blame to AIs.
The author adds: “Can AIs be held accountable for moral transgressions? This research shows that perceiving AI as human increases blame toward the AI while reducing blame toward human stakeholders, raising concerns about the use of AI as a moral scapegoat.”