10/23/24 · Health

Deadbots and regulation: an ethical and legal matter that demands discussion

UOC researchers are studying the ethical implications of these technologies, which some companies are already developing

New European regulations on artificial intelligence prohibit the use of chatbots that can manipulate people

Deadbots are based on so-called 'continuing bonds' between the bereaved and the deceased, a term frequently used in the psychology of grief. (Image: Adobe Stock)

The new European legal framework on artificial intelligence, the EU AI Act, came into force on August 1, 2024, with the aim of preventing rights violations through the use of this technology. The legislation classifies AI systems according to the level of risk they may pose to individuals and society, and prohibits those that pose an "unacceptable risk", such as technologies that manipulate people or exploit their vulnerabilities.

One technology that could fall into this category is deadbots, which some companies are already developing and plan to market in the near future. These are chatbots based on the digital identity of a deceased person (WhatsApp messages, social media, emails, etc.) that can hold conversations with the deceased person's family and friends, emulating their personality. Although it may sound like science fiction, it is not, and services of this type are closer than we might imagine.

Belén Jiménez, who holds a PhD in Psychology and is a member of the Faculty of Psychology and Educational Sciences and a researcher in the IN3 CareNet group at the Universitat Oberta de Catalunya (UOC), is a specialist in the technological mediation of grief. Part of her research focuses on deadbots, an area in which she has published several studies.

“Certain precautions must be taken when using deadbots and it is essential to regulate their use, since the profit motive of the companies that market them may not be aligned with the potential therapeutic use”

A complex debate without clear answers

"Although deadbots have not yet been marketed, we need to reflect on the bioethical aspects of this technology. Their use may soon become normal, as has happened with other applications that may initially have surprised us, but which are now widely used, such as dating apps. More and more companies are emerging in what is is known as digital afterlife industry, and they are improving the technology," Jiménez explained. She believes it is essential to "study how deadbots mediate grief and can transform it. It is a field in which there are hardly any scientific studies and there are no clear answers, since their use and effects depend on various factors, including how these technologies are designed."

Among other things, the new European legislation stipulates that chatbots must inform users that they are communicating with a computer program and not with a person. Although the Act classifies this technology as "limited risk", in sensitive contexts such as health, which would be the case with deadbots, the implications of these programs must be carefully analysed.

Research carried out by Belén Jiménez, who is also a member of the CERPOP research group at the University of Toulouse, has shown that the bereaved display ambivalent attitudes to this new technology: the desire to maintain emotional ties with their loved ones is combined with an uneasiness that comes from interacting with a program based on the deceased person's digital identity.

Deadbots are based on so-called "continuing bonds" between the bereaved and the deceased, a term frequently used in the psychology of grief. The UOC researcher said that "these technologies take advantage of people's need to establish emotional bonds". Indeed, they could be seen as an advanced, technological version of having an imaginary conversation with our loved one in front of their grave or preserving their memory through photographs and videos. "This need to maintain bonds doesn't necessarily have to be pathological," Jiménez explained, "and it is normal for many people. However, certain precautions must be taken when using deadbots and it is essential to regulate their use, since the profit motive of the companies that market them may not be aligned with the potential therapeutic use of this technology."

In the absence of studies, Jiménez pointed out that the psychological effects of these technologies will depend on the users themselves, on the relationship they had with the deceased and the relationship they establish with the chatbot. "One of the dangers is that it could lead to negative effects, such as the creation of a relationship of dependency, and even suffering caused by a second loss, if the deadbot disappears – for example, due to technical problems."

 

Regulating the digital afterlife industry

Our desire for immortality, together with technological progress, is stimulating the digital afterlife industry, a sector that exploits the digital presence of deceased people to perpetuate their memory and even extend their digital activity. This has many ethical and social implications. Companies pursue commercial and economic ends that may conflict with the potential therapeutic objectives of these tools. Strategies such as having deadbots send notifications or perform other actions to keep the bereaved "hooked" may be ethically questionable, according to Jiménez.

"We are dealing with a new technological development based on artificial intelligence, involving great risks, and it must be regulated to anticipate its possible negative effects, while we must also take its ethical dimension into account," said the researcher. "The new European regulations focus on promoting the transparency of these technologies, which is essential in such sensitive areas as grief. In addition, companies that develop these services must comply with rigorous standards and invest in auditing, transparency and documentation programmes," she explained. The AI Act provides for fines of up to €30 million or 6% of a corporation's turnover if it fails to comply with the law.

In the absence of specific regulations for deadbots, Jiménez proposed that any such rules "should particularly ensure respect and dignity for the deceased person, as well as promoting the psychological well-being of the user, especially if they are grieving."

 

This research supports Sustainable Development Goal (SDG) 3, Good Health and Well-being.

Reference articles:

Jiménez-Alonso, B., & Brescó de Luna, I. (2024). AI and grief: a prospective study on the ethical and psychological implications of deathbots .. In S. Caballé, J. Casas-Roma, & J. Conesa (Eds.), Ethics in online AI-based systems (pp. 175-191). Academic Press. doi: https://doi.org/10.1016/B978-0-443-18851-0.00011-1

Jiménez-Alonso, B., & Brescó de Luna, I. (2022). Mediación tecnológica en el duelo: un análisis de los griefbots desde la psicología cultural. Pensamiento Psicológico, 20. https://doi.org/10.11144/Javerianacali.PPSI20.mdpc

 

UOC R&I

The UOC's research and innovation (R&I) is helping overcome the pressing challenges faced by global societies in the 21st century by studying interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health.

Over 500 researchers and more than 50 research groups work in the UOC's seven faculties, its eLearning Research programme and its two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The university also develops online learning innovations at its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer via the Hubbik platform.

Open knowledge and the goals of the United Nations 2030 Agenda for Sustainable Development serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu.
