The more human-like AI bots appear, the stronger their impact on us. A study from Imperial College London shows that people feel sympathy for AI bots—especially those with human traits—and even protect them when excluded from social interactions. This surprising response highlights the deep influence AI can have on our emotions, behaviors, and perceptions.
The study used Cyberball, a virtual ball-tossing game, in which participants felt empathy for bots that were left out of the game. The bots' human-like characteristics made them relatable, leading people to treat them as social beings. Jianan Zhou, a researcher involved in the study, pointed out that this finding has profound implications for how AI bots should be designed. "By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction," Zhou explains. The key takeaway is that the more human-like an AI becomes, the more likely people are to perceive it as a social entity deserving of care, protection, and empathy.
Our tendency to treat AI as social entities stems from our innate need for connection. When bots mimic human behaviors, we unconsciously apply empathy and social norms to them. While this may seem harmless, it raises ethical concerns: Could bots manipulate emotions for profit? Could humans become overly attached or reliant on them?
Developers must carefully design AI to match users’ psychological needs, especially for children, who may be more emotionally affected. Striking the right balance between relatability and realism is key to ensuring AI supports us without distorting our perceptions.
In the end, this study serves as a reminder that AI is not just a tool—it's something that interacts with us on a deeply emotional and social level. How we design and engage with these bots will shape the future of our relationship with technology. As we move forward, it's crucial that we proceed with care, mindfulness, and a deep understanding of the power of empathy in human-AI interactions.
Source:
Zhou, J., Porat, T., & van Zalk, N. (2024). Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment. Human Behavior and Emerging Technologies. doi: 10.1155/2024/8864909