10/4/24 · Health

"Bioethics must, above all, be interdisciplinary"

The purpose of medical AI must be people's well-being, says Camps

Professor of Philosophy Camps will be a speaker at a UOC conference on ethical challenges in digital health
Professor of philosophy Victòria Camps will speak at the conference on the ethical challenges in digital health organized by the UOC (photo courtesy of Victòria Camps)

Victòria Camps, a leading philosopher in bioethics

Professor of Philosophy, Morals and Politics Victòria Camps i Cervera will speak at the Ethical Challenges for Digital Health conference at the Universitat Oberta de Catalunya (UOC). The conference, organized by the eHealth Center, will take place on 16 October at the Interdisciplinary R&I Hub on the Poblenou campus. 

Camps received her PhD from the Universitat Autònoma de Barcelona in 1975, and has been a professor of Philosophy since 1972 and a full professor of Ethics since 1986. From 1990 to 1993 she was Vice Rector of the university. She has headed the Víctor Grífols i Lucas Foundation in Barcelona (1998) and the bioethics committees of Catalonia and Spain, as well as serving as a permanent advisor to the Spanish Council of State.

“Questions are the alerts that force us to check whether the use of new technologies is correct”

Which uses of artificial intelligence (AI) raise the most ethical and philosophical concerns?

All those that could affect people's privacy, could be used in breach of a fundamental right, such as non-discrimination, or could lead to a lack of equity. There is also increasing concern about the rise of misinformation and the growth of fake news, a phenomenon facilitated by the abusive and unscrupulous use of artificial intelligence.

In the foreword to the book En què pensen els robots? (What Do Robots Think About?) by Júlia Martín and Pau Valls, you address some of the ethical and philosophical challenges posed by these technologies.

In the foreword I limit myself to explaining why it is important to take the ethical dimension into account when analysing the development of artificial intelligence as it is taking place now and how we can expect it to evolve in the future. It is important, however, to avoid catastrophic thinking and be realistic, since artificial intelligence has so far been limited to doing what people program it to do. It has no awareness or feelings, so it cannot be very creative or cope with unforeseen expressions of emotion. As with all innovations, the danger lies in its use without limits, which can harm people instead of serving them. This is where philosophy can provide ethical criteria to distinguish good use from bad.

What ethical challenges arise from empowering patients through mobile health applications?

I am not a health expert, but, as a patient, I think that artificial intelligence, rather than empowering patients, can more efficiently cater to their needs in situations of vulnerability. It can streamline medical consultations, facilitate the mobility of disabled patients and provide the elderly with online tools to make them feel safer and better attended. At least those should be the goals.

How can we ensure the confidentiality of health data in a context in which AI and algorithms are increasingly being used for medical decision-making?

I think this aspect of the use of artificial intelligence is the one that can be most effectively regulated. Encrypting personal data to facilitate the digitization of medical records, for example, is easy and possible. Data protection policy is very advanced, but there are contradictions. When someone asks us for our personal data, they assure us from the outset that they will be protected and ask us to consent to their use. But when a digital platform records the television series I like, for example, it uses algorithms to create a profile of what I am like in order to offer me what may suit my tastes; it uses my data in its own interest and does not ask my permission to do so.

What role should bioethics play in the development and implementation of new health technologies?

Everything we have commented on so far has been considered from the perspective of bioethics. Bioethics must address the concerns that arise as artificial intelligence develops – questions that are not only scientific – and it must do so within the framework of the principles of bioethics, the declaration of human rights and other declarations of principle that set limits. Answering ethical questions is not easy because principles must be interpreted and can clash with each other: it's not always easy to balance the patient's autonomy with the desire to help them. That's why it's very important for decisions to be based on deliberation and on sharing opinions from different disciplines. Bioethics must, above all, be interdisciplinary.

What ethical solutions are there for these technological challenges?

The ethical perspective consists more of asking questions and comparing points of view than finding specific and definitive answers. The questions serve as alerts to make us check whether the use being made of new technologies is appropriate. It’s very important to continuously assess the situation.

What can you tell us about the regulations currently in place?

All regulations must have an ethical basis. Perhaps the most significant one so far is the European Declaration of Digital Rights and Principles from 2022. It is a regulatory framework designed to guide more specific regulations in the member states of the European Union. In the book by Júlia Martín and Pau Valls for which I wrote the foreword, you can find a more complete list of regulations than I can give now.

What challenges remain to be addressed?

All those that arise. It's impossible to imagine where artificial intelligence will take us, but I would say that the priority should be to monitor and remedy problems caused by dangerous or pointless applications of artificial intelligence, always following Kant's categorical imperative that humanity must be treated as an end in itself and never merely as a means to an end.

How can medical AI be improved?

I would repeat what I've already said: by always having people's well-being as its purpose and trying to correct or avoid everything that deviates from this objective.

What role do doctors play?

Doctors are the main agents using the tools provided by artificial intelligence in their field. One important caveat is that we must assess whether these tools do more to help the healthcare professional or the patient. This is a question that is rarely asked.

In which countries is there most ethical mistrust of the development of health technologies?

Generally, those that do not have consolidated democracies and, therefore, lack the essential institutions that guarantee people's rights.

UOC R&I

The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century by studying interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health.

Over 500 researchers and more than 50 research groups work in the UOC's seven faculties, its eLearning Research programme and its two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The university also develops online learning innovations at its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer via the Hubbik platform.

Open knowledge and the goals of the United Nations 2030 Agenda for Sustainable Development serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu.
