Author: Matthew Fenech & Nika Strukelj, Future Advocacy
AI is coming to healthcare and medical research. How do these technologies challenge long-standing principles of medical ethics? What new challenges will healthcare practitioners, medical researchers, patients, and regulators need to face?
In their latest report, published in conjunction with the Wellcome Trust, Future Advocacy have identified 10 key ethical, social, and political challenges that urgently need to be addressed to ensure that the opportunities of artificial intelligence in health can be maximised, while the risks are minimised.
In this blog, two of the report’s authors, Matthew Fenech, AI Research & Advocacy Consultant and Nika Strukelj, Research, Advocacy & Communications Coordinator, discuss these challenges and the importance of a multidisciplinary approach to tackling them.
At Future Advocacy, we focus on policy-making and advocacy to maximise the opportunities and minimise the risks of artificial intelligence (AI). We therefore jumped at the opportunity to embark on our latest project, where we were asked by the Wellcome Trust to explore the ethical, social, and political challenges of the use of AI in health and medical research. We conducted an in-depth literature review and sought the expertise of more than 70 interviewees from around the world - clinicians, computer scientists, regulators, ethicists, patients and members of the public.
The process began with exploring the current and potential applications of AI in healthcare. Our approach was to arrange the use cases into 5 broad categories: process optimisation, preclinical research, clinical pathways, patient-facing applications and population-level applications. As we did so, something quickly became very clear: we would need to disentangle real-life use cases from those built on speculation - in other words, to separate hype and hope from what is actually happening. This was a need echoed by many of our expert interviewees, who emphasised that the objective of these technologies is to address real-world challenges, which ultimately means improving patient care. For this blog, we have selected an example from each of the 5 categories, to give a flavour of the studies and applications being considered today.
Using AI to optimise hospital processes may go a long way towards deploying resources, both physical and human, more strategically. The Hong Kong Health Authority, for example, is using an AI-based tool to produce monthly or weekly staff rosters that satisfy a set of constraints, such as staff availability, staff preferences, allowed working hours, ward operational requirements and hospital regulations.
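Rostering of this kind is, at its core, a constraint-satisfaction problem. The toy sketch below illustrates the idea of searching for an assignment that respects availability and workload limits; all names and shifts are invented, and the shift cap is a stand-in for working-hour rules (the Hong Kong system's actual constraints and solver are not detailed here).

```python
# Toy roster: assign one nurse per shift, honouring availability and
# a cap on shifts per nurse. All names and numbers are illustrative.
SHIFTS = ["Mon-AM", "Mon-PM", "Tue-AM", "Tue-PM"]
AVAILABILITY = {
    "Asha":  {"Mon-AM", "Tue-AM", "Tue-PM"},
    "Ben":   {"Mon-PM", "Tue-PM"},
    "Chloe": {"Mon-AM", "Mon-PM", "Tue-AM"},
}
MAX_SHIFTS = 2  # stand-in for 'allowed working hours'

def make_roster(shifts, availability, max_shifts):
    """Backtracking search for a shift assignment satisfying all
    hard constraints; returns None if no feasible roster exists."""
    roster = {}

    def backtrack(i):
        if i == len(shifts):
            return True
        shift = shifts[i]
        for nurse, free in availability.items():
            already = sum(1 for n in roster.values() if n == nurse)
            if shift in free and already < max_shifts:
                roster[shift] = nurse
                if backtrack(i + 1):
                    return True
                del roster[shift]
        return False

    return roster if backtrack(0) else None
```

A production tool would also handle soft constraints such as staff preferences, scoring candidate rosters rather than accepting the first feasible one.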
In preclinical research, AI is being used to provide insights that may be able to reduce the time needed to develop a drug and get it to market, which can currently take up to 15 years. AtomNet uses deep learning on 3D models of molecules to predict the likelihood of two molecules interacting, and can screen 1 million compounds per day. Its predictions of how likely a compound is to interact with a disease-relevant target are then used by researchers to narrow down the options. The company has identified potential drugs for Ebola and multiple sclerosis using this screening process; one has already been licensed to a UK pharmaceutical company.
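The screening step can be pictured as a simple rank-and-filter loop. This is purely illustrative: the `predict_interaction` function below is a deterministic stand-in for a trained model (AtomNet's internals are not reproduced here), and all compound and target names are invented.

```python
def predict_interaction(compound: str, target: str) -> float:
    """Stand-in scorer returning a value in [0, 1). A real tool would
    run a trained deep-learning model over 3D molecular structures."""
    return (sum(map(ord, compound + target)) % 100) / 100.0

def screen(compounds, target, top_k=3):
    """Rank compounds by predicted interaction with the target and
    keep only the most promising candidates for follow-up."""
    ranked = sorted(compounds,
                    key=lambda c: predict_interaction(c, target),
                    reverse=True)
    return ranked[:top_k]
```

The point of the shape is the economics: a cheap in-silico scorer filters a huge library so that only a shortlist reaches expensive wet-lab testing.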
An example of a study involving AI in clinical pathways is provided by Moorfields Eye Hospital. Researchers and doctors at this hospital are collaborating with DeepMind Health on a deep learning tool for analysing optical coherence tomography (OCT) images, to flag up those that are more likely to show significant pathology, acting as a useful triage system. This is particularly desirable in an environment where large volumes of these images are being produced, which reflects the greater availability and reduced cost of this imaging technology.
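As a sketch of what such a triage layer does, the snippet below orders scans by a model's predicted probability of significant pathology and splits urgent from routine review. The scores and threshold are invented; a real deployment would take them from the trained classifier and from clinical policy.

```python
def triage(scans, urgent_threshold=0.5):
    """Sort scans so the most worrying are reviewed first, and split
    them into urgent and routine queues by a probability threshold."""
    ordered = sorted(scans, key=lambda s: s["p_pathology"], reverse=True)
    urgent = [s["id"] for s in ordered if s["p_pathology"] >= urgent_threshold]
    routine = [s["id"] for s in ordered if s["p_pathology"] < urgent_threshold]
    return urgent, routine

# Invented example scores, as if produced by a trained OCT classifier.
scans = [
    {"id": "OCT-17", "p_pathology": 0.08},
    {"id": "OCT-23", "p_pathology": 0.91},
    {"id": "OCT-31", "p_pathology": 0.55},
]
```

Note that triage of this kind reorders human attention rather than replacing it: every scan is still reviewed, but the likely-pathological ones are seen sooner.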
A patient-facing application of AI is Alder Hey Children’s Hospital’s AI-enabled chatbot ‘Oli’, whose objective is to ease the inevitable anxiety brought on by a hospital visit, whether by guiding parents to the parking lot or advising on what to expect during procedures or their visit. It uses natural language processing to classify intents and respond appropriately, and can recognise a specific entity from a previous question. For example, if an initial question “What is a blood test?” is followed up by “Will it hurt?”, the chatbot will respond with the appropriate answer about whether a blood test will hurt.
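Conceptually, the remembered-entity behaviour is a small piece of dialogue state tracking. In the sketch below, the keyword rules, entities, and canned answers are all invented for illustration; Oli's actual NLP pipeline is considerably more sophisticated.

```python
# Toy dialogue handler: classify each utterance's intent with keyword
# rules, and remember the last entity mentioned so a follow-up such as
# "Will it hurt?" resolves to it. All rules and answers are invented.
ENTITIES = {"blood test", "x-ray"}
ANSWERS = {
    ("blood test", "explain"): "A blood test takes a small sample from your arm.",
    ("blood test", "pain"): "It can sting for a moment, but it's over quickly.",
}

class Chatbot:
    def __init__(self):
        self.last_entity = None  # carried across turns

    def reply(self, utterance):
        text = utterance.lower()
        # Fall back to the previously mentioned entity if none appears.
        entity = next((e for e in ENTITIES if e in text), self.last_entity)
        self.last_entity = entity
        intent = "pain" if "hurt" in text else "explain"
        return ANSWERS.get((entity, intent), "Let me find someone who can help.")
```

The fallback to `self.last_entity` is what lets the second question inherit its subject from the first.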
At a population level, AI tools are being developed to use non-traditional clinical data sources such as mobile phone activity to forecast the progression of epidemics. In 2016, Malaysia became the first country in the world to use an app to predict dengue outbreaks. The app analyses parameters including geography, weather and symptoms of dengue cases to predict hotspots, where preventative actions such as the elimination of mosquito larvae are then performed.
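A much-simplified version of hotspot prediction might combine per-district signals into a weighted risk score and flag districts above a threshold. The features, weights, and threshold below are entirely invented; they stand in for whatever model the Malaysian app actually uses.

```python
# Invented feature weights: case history, weather, and breeding sites.
WEIGHTS = {"recent_cases": 0.5, "rainfall_mm": 0.3, "standing_water_sites": 0.2}

def risk_score(district):
    """Weighted sum of a district's risk signals."""
    return sum(WEIGHTS[k] * district[k] for k in WEIGHTS)

def hotspots(districts, threshold=10.0):
    """Names of districts whose risk score crosses the (invented) threshold."""
    return [d["name"] for d in districts if risk_score(d) >= threshold]
```

Flagged districts would then be targeted with preventative action, such as eliminating mosquito larvae, before cases spike.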
This detailed mapping exercise enabled us to define a range of ethical, social, and political challenges thrown up by these applications, which we then grouped into ten categories. One of the themes identified that is quite specific to healthcare is the importance of and need for patient and public engagement. This was something we strove to address in the process of writing our report, by conducting interviews with patients and members of the public, as well as a roundtable discussion. A fascinating outcome was the idea that people who identify as ‘patients’ may differ from other members of the public in terms of their needs and concerns around the use of AI in health, including data sharing. There is a strong desire for more education around how data sharing can benefit people - individuals, their relatives, and society as a whole. Patients who have joined the 100,000 Genomes Project, for example, are willing to share their data even if they will not personally benefit from the results of the study. However, a common worry among our participants was the possibility of insurance companies using medical data to deny them or their children insurance, or to raise premiums.
‘Humanising’ AI applications may be one way of increasing public awareness and fostering public engagement. One route is for individuals participating in studies involving AI tools to be named, so that they can tell their stories and others can relate to their experiences. Another may be to modify the language used to discuss issues around data and artificial intelligence in healthcare. Should the term ‘data’ itself be replaced with alternatives such as ‘personal health information’, which may be more easily understood and have fewer negative connotations? The need for education brings with it the need to decide on the educator. The NHS may be the obvious choice, but previous failed initiatives, such as care.data, highlight the need for much clearer and more effective communication with the public and practitioners alike. There is all the more need for clarity when dealing with the complexity of algorithms, artificial intelligence, regulation and medicine all at once.
Another key consideration is determining how these technologies should be regulated. Our experts disagreed over the adequacy of existing frameworks to ensure patient safety. Across the pond, the FDA is currently creating a regulatory framework for software that aids healthcare providers in diagnosing and treating conditions. Whether new frameworks need to be developed, or existing ones adapted, it is undeniable that the pace of development of these algorithms is much faster than bodies regulating drugs and medical devices are used to, so regulatory processes need to be agile to account for this speed. Our interviews also highlighted disagreement over the acceptability of using ‘dynamic’ algorithms in the healthcare context. These use new data as it is presented to them to improve their ability to reach their preset goal (such as making a prediction). While some argued that it is easier to regulate ‘fixed’ algorithms, others pointed to the comparison with human doctors, who learn all the time, and the need instead for regulation that focuses on the outputs and outcomes of these algorithms.
We are thrilled to have written this report and are very pleased with the response so far. This is only a first step towards maximising the benefits and minimising the risks of AI in healthcare, but there is a definite appetite for in-depth research into the ethical, political, and social challenges we have suggested. The Wellcome Trust has launched its new AI-themed Seed Awards, and is calling for researchers from any discipline in the humanities or social sciences to apply for the £1 million in funding on offer. We are eagerly awaiting the development of practical solutions to these challenges, which will be the products of multidisciplinary research drawing on the expertise of those who develop AI tools, those who will use and be impacted by these tools, and those who have knowledge and experience of addressing other major ethical, social and political challenges in health.