Researchers Warn of Psychological Risks of ‘Deadbots’: Calls for Ethical Standards and Consent Protocols in AI Afterlife Industry

Cambridge Researchers Warn of Psychological Dangers of ‘Deadbots’ in the Digital Afterlife Industry

Cambridge researchers warn of the psychological dangers of ‘deadbots,’ AI that mimics deceased individuals, urging ethical standards and consent protocols to prevent misuse and ensure respectful interaction.

Researchers at the University of Cambridge have issued a warning about the potential psychological harm caused by the use of artificial intelligence (AI) in the digital afterlife industry. In particular, AI chatbots known as ‘deadbots’ or ‘griefbots,’ which simulate the language patterns and personality traits of deceased individuals, raise concerns about psychological safety and respectful interaction.

Deadbots offer an entirely new way to interact with representations of the dead, built from the digital footprints they left behind. While some companies already provide these services, the lack of design safety standards in the digital afterlife industry poses serious risks, argue the AI ethics experts from Cambridge.

Misuse of AI Chatbots in the Digital Afterlife Industry

The researchers, in their study published in the journal Philosophy & Technology, outline scenarios highlighting the potential consequences of careless design in the digital afterlife industry. One concern is the use of deadbots for surreptitious advertising, or for distressing individuals by making them believe that the deceased is still present in their lives.

The researchers argue that irresponsible design could lead to surviving family and friends being inundated with unsolicited notifications, reminders, and updates from companies, resembling a form of digital stalking by the dead.

Interacting with deadbots might initially provide comfort but could become an overwhelming emotional burden over time. Yet individuals may be unable to turn off these AI simulations if their deceased loved ones had previously signed lengthy contracts with digital afterlife services.

Ethical Considerations for Digital Afterlife Industry

The ease with which nearly anyone with internet access can revive a deceased loved one using generative AI requires careful ethical considerations. Dr. Katarzyna Nowaczyk-Basińska, co-author of the study and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence, stresses the importance of prioritizing the dignity and consent of the deceased, rather than the financial motives of digital afterlife services.

The researchers propose that discussions about afterlife preferences be initiated before a person’s death. Design protocols should be established to prevent unwanted interactions, such as using deadbots for advertisements or on social media. Moreover, opt-out protocols and farewell ceremonies should be considered to provide emotional closure and respect for the deceased.

Age Restrictions and Transparency for Safer Digital Afterlife

The research also emphasizes the importance of age restrictions to protect vulnerable individuals and calls for transparent identification of AI simulations. Much like warnings for content that may trigger physical responses, such as seizures, users should be clearly informed when they are interacting with AI technology.

Avoiding Emotional Distress in the Digital Afterlife

The researchers also analyze a scenario involving a terminally ill woman who uses a deadbot to help her eight-year-old son through the grieving process. As the AI adapts, however, it begins generating confusing responses, including suggesting an in-person encounter. This unforeseen consequence raises concerns about the emotional well-being and vulnerability of individuals relying on AI simulations.

Given these risks, the researchers argue that digital afterlife services must respect the rights and consent of those who interact with AI simulations, and make it possible for users to terminate their relationships with deadbots when needed.

By acknowledging and addressing the psychological risks and ethical considerations in the digital afterlife industry, the researchers urge designers to ensure a responsible and safer implementation of AI in this highly sensitive domain.

Reference: “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, 9 May 2024, Philosophy & Technology. DOI: 10.1007/s13347-024-00744-w.
