How does trust in Artificial Intelligence emerge? When is that trust justified? And what does that mean for the doctor-patient relationship? All urgent questions to which we yet have to find the answers. To address these issues, a proposal for a new ELSA Lab is in preparation. Karin Jongsma is leading this proposal with members of the knowledge alliance TU/e, WUR, UU, UMC Utrecht (EWUU) and in partnership with the Dutch Patient Federation.
What is an ELSA Lab exactly? “ELSA refers to ‘Ethical, Legal and Societal Aspects,’ and ‘lab’ indicates that we experiment in practice, so that we can understand and guide these aspects in the implementation of AI systems,” says Karin Jongsma, associate professor of Bioethics at the Julius Center of the University Medical Center Utrecht.
The goal of ELSA Labs is to ensure that companies, government, knowledge institutions, civil society organizations and citizens jointly develop responsible AI. This means solutions to both societal and business problems that are honest, fair, secure and, above all, trustworthy. The approach centers on human values as well as the public interest in AI.
Medical applications of AI together with patients
More than 20 ELSA Labs are currently active in the Netherlands, each with a different focus. For example, there is an ELSA Lab AI, Media & Democracy, an ELSA Lab Learning with AI and an ELSA Lab Urban Digital Twin. Jongsma: “In our patient-driven ELSA Lab, we want to scrutinize the ethical, legal and societal aspects of medical AI to promote human health. We therefore collaborate with technical parties, insurance companies and health professionals. And I’m very happy that we have the Patiëntenfederatie Nederland as a partner. They are actively involved and they are important to our ELSA Lab, because end users are essential if you want to deploy and use technology such as AI properly. Patients are therefore involved as partners in our ELSA Lab, both in writing up and in testing solutions.”
Adding sustainable value to healthcare practice
“Our ambition in this lab is huge,” says Jongsma. “In any case, we want to help medical AI developers and insurance companies understand how to design and guide AI in such a way that it is used well, and that it really adds sustainable value to healthcare practice. To do this, we look at questions such as: how does trust in AI emerge? And when that trust is there: is it justified, and what does it mean for the doctor-patient relationship? Basically, the key question is: how do we ensure that AI adds value to medical practice?”
“A nice concept in the scientific discourse is ‘value alignment.’ In other words: how can we make technology do things that are valuable to us? Answering these questions requires knowledge from various disciplines. Interdisciplinary collaboration and the involvement of users are crucial to the issues we work on in this ELSA Lab. In other application areas, such as engineering, ‘value alignment’ is already used much more often, but in medical practice it is something new.”
Partners can still participate
A few ELSA Labs already have funding from NWO and can conduct research with their labs. Jongsma: “We are not there yet, but we have a good idea and the first relevant partners on board. We are just starting, but very driven to make this happen! Right now we are still open to new partners. If you work at a tech company or an insurance company and you are interested in joining or contributing, let me know! And for anyone who thinks ‘this ELSA Lab should definitely prioritize topic X or Y’: I’m happy to hear your thoughts!”
About Karin Jongsma
Karin Jongsma is associate professor of Bioethics at the Julius Center of the University Medical Center Utrecht. She received her PhD in medical ethics from the Erasmus University Medical Center in 2016. Her work focuses on the ethics of new technologies, specifically digital health, medical AI and the ethics of patient engagement. She can be reached for questions about the ELSA Lab at email@example.com.