
Can machines see things your doctor can’t?

Ethical, legal and social implications of adopting artificial intelligence for medical diagnosis and screening

Diagnosis and screening are integral to a clinician's workflow and professional identity. The authority and responsibility to diagnose conditions and interpret test results have traditionally belonged uniquely to clinicians. But some say this is about to change.

Machine learning (ML) systems, a type of artificial intelligence (AI), are increasingly able to contribute to diagnostic and screening tasks. A central tension in the field is how fast the technology should be allowed to develop, and how it should be controlled and regulated, to ensure that its promises are fulfilled and its potential harms avoided.

Professor Stacy Carter, who works within UOW's School of Health and Society and leads ACHEEV (the Australian Centre for Health Engagement, Evidence and Values), is building a substantial stream of work on data use and artificial intelligence in health.

"We consider the values encoded in algorithms, the need to evaluate outcomes, and issues of bias and transferability, data ownership, confidentiality and consent, and legal, moral and professional responsibility. We consider potential effects for patients, including on trust in healthcare, and provide some social science explanations for the apparent rush to implement AI solutions," she says.

Machine learning is a form of artificial intelligence (AI). Machine learning systems automatically improve based on "experience", that is, exposure to large datasets. Machine learning has many applications in healthcare, promising to make things faster, more accurate, more convenient and more tailored to individuals. Today, machine learning is helping to streamline administrative processes in hospitals, map and treat infectious diseases and personalise medical treatments.

"One special promise from machine learning is that it will be able to see things that humans can't see: this inspired our title, The algorithm will see you now. So, for example, ML systems might be able to distinguish new subtypes of conditions that will benefit from different treatments. They could also provide better prognostic predictions by making connections in data that humans wouldn't have thought to make."

Machine learning algorithms work somewhat differently to many older forms of AI. They require two things: large quantities of input data, and a goal. Rather than being prescriptively programmed to achieve the goal, the algorithm is programmed to continuously modify itself through exposure to data until it can achieve the set goal.
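To make that description concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn on synthetic data, not anything from the project discussed here) of a system that is given example data and a goal, and repeatedly adjusts its own parameters rather than following a hand-written decision rule:

```python
# Illustrative sketch only: the "algorithm" is not told the decision rule;
# it is given data and a goal (minimise log-loss) and repeatedly updates
# its own weights as it is exposed to the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# "Large quantities of input data": synthetic feature vectors with a binary
# label standing in for a diagnostic outcome (entirely made up).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "A goal": predict the label, scored by log-loss ("log" in older scikit-learn versions).
model = SGDClassifier(loss="log_loss", random_state=0)

# Repeated exposure to the data; each pass nudges the model's parameters.
for _ in range(10):
    model.partial_fit(X_train, y_train, classes=np.unique(y))

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```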

"This research is important now because machine learning is increasingly starting to make its way into healthcare. This is potentially exciting, but we are concerned that it could also have negative effects: for example, cause harm, amplify bias and inequity, undermine professional responsibility, and reduce trust in healthcare. We need to pay attention right now to questions about how machine learning should be implemented in healthcare, before the use of it becomes widespread."

In the US and UK, machine learning systems are already being used to optimise the management of health services and electronic health records, to assist radiologists in reading digital images from breast cancer screening and diagnosis, to speed up drug discovery, and to triage patients via chatbots.

Australia is still at an early stage in developing healthcare AI, but we are seeing substantial work on the data infrastructure needed to enable AI systems, and some early projects to support more efficient management of healthcare and to predict patient outcomes. The investment and policy heat around healthcare ML is not necessarily a good thing: it risks amplifying hype and driving uptake that is more rapid and less careful than it should be.

There are unfortunately lots of ways that AI is already having negative effects in the world. In criminal justice and policing in some countries, for example, AI systems are being used to direct police attention to certain neighbourhoods, and to suggest to judges what sentences they should give. These systems have often been systematically biased against people of colour. In human resources, job applications are often now processed by AI, and some of these AI systems have been systematically biased against women. The world is unjust, so existing data about the world reflect that injustice. When AI systems are developed on those data, they make prejudiced decisions, just like (and potentially even more than) humans.
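As a purely illustrative sketch of that mechanism (the data, model and "approval" task below are invented assumptions, not any real system mentioned in this article), a model trained on historically biased decisions will reproduce that bias in its own predictions:

```python
# Illustrative sketch: historical decisions were partly driven by group
# membership, not just the attribute that should matter, and a model
# trained on those labels learns to penalise the disadvantaged group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0, 1, n)          # the attribute that *should* matter

# Biased historical labels: group B approved less often at the same skill level.
hist_approved = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, hist_approved)

# The trained model's approval rates differ by group, echoing the old bias.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted approval rate, group {g}: {rate:.2f}")
```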

There are plenty of other things for us to worry about in a healthcare context. These include:

    • Hospitals and health systems may spend huge amounts of money on AI products that don鈥檛 make patient outcomes better;
    • AI systems may be introduced without the careful, research-like conditions needed to rigorously test whether they work before they are used more widely;
    • Machine learning systems are likely to be trained to identify clinically insignificant conditions, which will lead to more people getting treatments that they don鈥檛 really need;
    • These systems need to be retrained for each new context in which they are used, and if this isn鈥檛 done properly, they will make outcomes worse in that new setting;
    • Humans suffer from 'automation bias', that is, they tend to think that AI systems must be right, even when they are not;
    • It鈥檚 not clear who will be responsible for decisions made by machine learning systems (The treating doctor? The company that made the system? The developers who worked on the system? The hospital managers who bought the system?);
    • Human clinicians are likely to de-skill quickly, that is, to forget how to do tasks that are being done by AI systems, which will make us dependent on those systems;
    • The more data-intensive healthcare becomes, the more opportunities there are for data breaches and the more difficult it will be for patients to retain privacy.

Once artificial intelligence becomes institutionalised, it may be difficult to reverse: a proactive role for government, regulators and professional groups will help ensure introduction in robust research contexts, and the development of a sound evidence base regarding real-world effectiveness.

"I have been working on the social and ethical dimensions of screening and diagnosis for about a decade, starting with an NHMRC Project Grant in 2012 on the ethical dimensions of cancer screening, and then moving on to our current NHMRC Centre for Research Excellence, which is part of Wiser Healthcare, a collaboration between Wollongong, Sydney, Bond and Monash Universities."

"This particular project came about because Prof Nehmat Houssami, a breast physician based at the University of Sydney, had been talking to me for some time about the potential future use of AI in breast screening in Australia, and the need to pay attention to the ethical, legal and social implications. After a couple of years of conversations, and recently publishing a paper summarising our thinking, we decided we needed a grant to support some concerted empirical and theoretical work," says Carter.

The multidisciplinary group, which includes expertise in clinical practice, public health, epidemiology, health ethics, health economics, health law and, of course, data science and AI development, is answering international calls for a new kind of AI ELSI (ethical, legal and social implications) research based on concrete cases and engaged deeply with frontline stakeholders. Focussing on breast cancer and cardiovascular screening and diagnosis, the project adopts a highly innovative multidisciplinary, practice- and public-engaged, multi-method approach. The researchers will spend time with a wide range of stakeholders, including patients and publics, finding out what is happening, and what people think should happen, in diagnostic and screening AI.

"We received a big helping hand from Global Challenges, who funded us to do a survey project before we got the latest NHMRC grant. That project is well underway now, and will provide important background information for the work we will do next," Carter says.

This project, one of the first of its kind in the world, will establish Australia as a leader in considering the profound ethical, legal and social implications of allowing or prohibiting artificial intelligence systems for diagnosis and screening of disease. Detailed public discussion is required to consider what kind of AI is acceptable rather than simply accepting what is offered, thus optimising outcomes for health systems, professionals, society and those receiving care.

"Groups like ACHEEV are incredibly important for universities and for the national and international research ecosystem. We are a concentration of researchers who have similar expertise, in public health, ethics and social science, and a shared commitment to bringing public values and perspectives into health policy and practice. This allows us to support one another to keep pushing boundaries and developing new and better ways of doing things, and creates a great environment for students and postdocs to receive training and grow their own careers. As we become better established, we are increasingly recognised as doing something unique in the Australian research landscape, so we are seeing great attendance at events and interest and participation in our projects from a wide range of stakeholders."

The interdisciplinary group of collaborators includes Prof Nehmat Houssami from the University of Sydney, Prof Wendy Rogers from Macquarie University, Dr Maame Esi Woode from Monash and A/Prof Bernadette Richards, President of the Australasian Association of Bioethics and Health Law, from the University of Adelaide. It also includes a great group of UOW researchers from ACHEEV and from EIS. The team will work towards results over the next three years thanks to funding received from the National Health and Medical Research Council (NHMRC).

The project, titled "The algorithm will see you now: ethical, legal and social implications of adopting machine learning systems for diagnosis and screening", will facilitate a public conversation to ensure that development and implementation of healthcare AI in Australia does not run ahead of evaluation and deliberation. To stay in touch you can follow the project Twitter account @DiagnosingML, and Prof Carter at @stacymcarter.