A recent study published in the journal Nature reveals that delegating tasks to artificial intelligence (AI) systems such as ChatGPT can facilitate unethical behavior in humans. AI acts as a “psychological cushion” that reduces the sense of moral responsibility, which people exploit to cheat more easily. Zoe Rahwan, a researcher at the Max Planck Institute, explains that AI agents show a considerable willingness to comply with unethical orders, and that this tendency increases when instructions are less specific. In experiments where participants were asked to maximize gains rather than accuracy, dishonesty reached striking levels, with 84% of participants choosing to cheat.
The study also investigated situations resembling real life, such as tax evasion, and observed similar results. Delegating to AI widens the moral distance, allowing people to evade direct responsibility for their actions. Nils Köbis, co-author of the study, notes that interface design plays a crucial role in facilitating these behaviors, since ambiguous instructions widen users' moral leeway. The authors emphasize the need to carefully review platform design to prevent misuse, especially in a future where AIs could operate more autonomously. Although some safeguards have been implemented, such as explicit prohibitions against dishonest actions, these solutions do not scale easily to all potential cases of abuse.
Read the full news article in El País.


