AI doctor justifies its diagnoses
German researchers are currently involved in an interdisciplinary and multi-institutional project aimed at making automated diagnoses from a software program more transparent.
The so-called ‘Transparent Medical Expert Companion’ comprises two prototypes: one model uses video material to recognise pain in patients who cannot communicate their discomfort themselves and to explain its classification, while the other produces verifiable colon cancer diagnoses on the basis of microscopy imaging data.
“Machine learning processes help make diagnoses,” said Dr Ute Schmid, Professor of Cognitive Systems at the University of Bamberg. “But if their decisions are not comprehensible to doctors and patients, they have to be taken with a grain of salt and might even have to be ignored in sensitive fields like medicine.”
In order to enable the software to both recognise an illness and justify its decisions, the research team has combined various computer science methods. With the aid of deep neural networks, or ‘deep learning’, it is possible to classify enormous volumes of imaging material. However, these processes do not provide information on how decisions are reached.
Additional processes are therefore employed to look within the deep neural network and make crucial traits comprehensible to humans. They highlight things like conspicuous areas in the intestinal tissue or use text to explain why a particular section of the tissue structure was classified as abnormal under the microscope.
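One widely used technique of this kind is occlusion analysis: masking parts of the image and measuring how much the model's output changes, which reveals the regions that drove the decision. The sketch below is purely illustrative and is not the project's code; the `toy_classifier` is a hypothetical stand-in for a trained network.

```python
# Illustrative sketch (not the project's code): occlusion-based saliency,
# one common way to highlight the image regions behind a classifier's
# decision. A trained network would replace the toy scoring function.
import numpy as np

def toy_classifier(image):
    # Hypothetical stand-in: responds to bright pixels in the upper-left
    # quadrant, as a real model might respond to conspicuous tissue.
    h, w = image.shape
    return float(image[: h // 2, : w // 2].mean())

def occlusion_saliency(image, classify, patch=2):
    """Score each patch by how much masking it lowers the model output."""
    base = classify(image)
    saliency = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.0  # occlude this patch
            saliency[y:y + patch, x:x + patch] = base - classify(masked)
    return saliency

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0  # a bright "suspicious" region in the upper left
sal = occlusion_saliency(img, toy_classifier)
print(sal.round(3))  # non-zero only around the region that matters
```

The resulting map can be rendered as the colouration mentioned above: patches whose removal changes the verdict most are the ones the system highlights for the doctor.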
The Bamberg researchers’ principal task is to program those components which coherently explain the deep neural network’s decisions. In particular, the researchers utilise what’s known as inductive logic programming. Their goal is a system that, for example, not only reports that a person is experiencing pain, but also displays on a monitor the reasoning behind this assessment. A text presents the rationale: the patient’s eyebrows are lowered, the cheeks are raised and the eyelids are pressed together. An image indicates the relevant parts of the face with colouration and arrows. The system would also estimate the degree of certainty of its diagnosis.
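The appeal of inductive logic programming is that what it learns is a human-readable rule over symbolic features, which makes the rationale and the certainty estimate straightforward to verbalise. The following sketch uses hypothetical feature names and a deliberately crude certainty score; it illustrates the idea, not the project's actual system.

```python
# Illustrative sketch (hypothetical rule and names, not the project's
# system): a rule of the kind inductive logic programming might learn,
# roughly "pain if brows lowered AND cheeks raised AND lids tightened",
# paired with the text it would display as a rationale.
RULE = {
    "brows_lowered": "the patient's eyebrows are lowered",
    "cheeks_raised": "the cheeks are raised",
    "lids_tightened": "the eyelids are pressed together",
}

def explain_pain(observed):
    """Return (verdict, rationale, certainty) for one observation."""
    matched = [f for f in RULE if observed.get(f)]
    certainty = len(matched) / len(RULE)  # crude stand-in for a real score
    verdict = certainty == 1.0            # rule fires only if all cues hold
    rationale = "; ".join(RULE[f] for f in matched) or "no pain cues found"
    return verdict, rationale, certainty

verdict, rationale, certainty = explain_pain(
    {"brows_lowered": True, "cheeks_raised": True, "lids_tightened": True}
)
print(verdict, "-", rationale, f"(certainty {certainty:.0%})")
```

Because the rule body is symbolic, a doctor's correction (say, removing or adding a cue) maps directly onto editing the rule, which is how expert feedback can flow back into the system.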
“The attending doctors decide whether or not they agree with the assessment,” said University of Bamberg research assistant Bettina Finzel. “They can influence the algorithms by making amendments and corrections in the system. In this way, the software continues to learn and incorporate the experts’ invaluable knowledge.”
Ultimately, responsibility lies with the person who is being assisted — not replaced — by the transparent companion. Furthermore, transparent companions can be used to help train doctors in the future.
Various research groups are involved in the development of the Transparent Medical Expert Companion. The Fraunhofer Institute for Integrated Circuits IIS in Erlangen and the Fraunhofer Heinrich Hertz Institute HHI in Berlin are using deep learning processes to create a software program. In individual use cases, the expertise of specific researchers from the Institute of Pathology at the University of Erlangen and from the University of Bamberg is also required.
“This research project calls for knowledge from various fields,” said project coordinator Dr Thomas Wittenberg of the Fraunhofer IIS. “Thanks to the interdisciplinary cooperation, it’s possible for us to develop companions for different medical experts that meet important criteria like transparency and explicability while providing sound diagnostic results.”