Propelling the digital lab in the revolution of AI

Agilent Technologies Australia Pty Ltd

Thursday, 14 October, 2021



Artificial intelligence (AI) and machine learning (ML) technologies are now at the forefront of the future-ready laboratory and have been shown to optimise laboratory productivity like never before. But like any revolution that takes off this quickly, AI requires scrutiny from every angle if it is going to serve every facet of society.

Based on a recent podcast, this Q&A article — featuring three industry experts — explores the role of AI and ML applications in the digital lab, and takes a close look at the key lessons of AI and ML as we enter a fully digitalised world.

A deep dive into the governance and ethical principles of AI and ML

Allison Gardner, PhD, Program Director, Data Science Degree Apprenticeship, Keele University, and co-founder of Women Leading in AI, explains the governance and ethical principles that need to underpin AI and ML.

Q: On the frontier of health services, and in general, how thoroughly is AI already a part of our lives?

AG: AI is much more pervasive in our lives than people think. It is used in many sectors, from management systems in hospitals to flight-path management in the aviation industry, and it even drives the recommendation algorithms behind Netflix and social media. This is a response to big data and how we can use the information embedded in it to classify, predict and improve the efficiency of a system. My concern is that people think of it as a bit of a cure-all, a system that will only make life easier and augment the human experience, but it is not as perfect as people think.

Q: Can you tell us more about your work with Women Leading in AI?

AG: I saw a significant gap between technologists, policymakers and lawyers in addressing the problems we have been seeing with AI systems, particularly with regard to algorithmic bias and the discrimination that can result from it. For instance, these algorithms can misclassify black women at much greater rates than white men, meaning these women in high-risk situations could forgo necessary health care and benefits.
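
To make ‘misclassifying at much greater rates’ concrete, a disparity audit compares error rates across demographic groups. Below is a minimal sketch in Python; the groups, rates and data are entirely synthetic and invented only to illustrate how such an audit is computed. A real audit would use actual model outputs and protected-attribute labels.

```python
# Illustrative fairness audit: compare the rate at which high-risk cases are
# missed in each demographic group. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = np.array(["A"] * 500 + ["B"] * 500)   # two demographic groups
truth = rng.integers(0, 2, size=1000)         # 1 = genuinely high-risk case
pred = truth.copy()

# Simulate a model that misses high-risk cases in group B far more often.
miss_a = (group == "A") & (truth == 1) & (rng.random(1000) < 0.05)
miss_b = (group == "B") & (truth == 1) & (rng.random(1000) < 0.30)
pred[miss_a | miss_b] = 0

for g in ("A", "B"):
    high_risk = (group == g) & (truth == 1)
    fnr = (pred[high_risk] == 0).mean()  # share of high-risk cases missed
    print(f"group {g}: false negative rate {fnr:.1%}")
```

A large gap between the two false negative rates is exactly the kind of disparity Gardner describes: the model fails one group's high-risk cases far more often than the other's.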

Q: What was the reaction of the development community when you spoke to them about this?

AG: I noticed that there was a lot of deflection by technologists, who insisted that AI is ‘a black box’ that cannot be managed ethically. In response, I had a little bit of a tantrum, because one of the key reasons biased algorithmic systems get deployed is the lack of diversity in the teams developing these products. Diverse development teams can identify the obvious mistakes that have been made and highlight when the data is not diverse. With this in mind, I spoke with others who felt the same, and we decided to bridge this gap by bringing leading thinkers in AI together with leading thinkers in policy and government, so we can fully understand these systems and actually start developing them in an ethically aligned way.

Q: What would be the best advice for people developing AI, to avoid falling into the bias trap?

AG: Ensuring diverse input and the engagement of all stakeholders in the design of new systems is integral to using this technology in an unbiased way. Reaching out to impacted and diverse stakeholders so they can have a meaningful involvement in the design of the process is crucial.

For high-risk processes, there needs to be a point where, if the application has not been signed off by an independent auditor or an independent internal reviewer outside the system confirming its suitability, then the system should not be deployed. I also advocate for a citizen-focused trust mark, not dissimilar to food labelling, fair trade marks, nutrition labelling or recycling labels, which informs the person on the receiving end: “An AI system has been involved in this process; go and see this further information.”

Ultimately, we can educate people on these issues, but the technology is developing so fast that we cannot educate people quickly enough, so we can only inform and empower them to be AI-aware.

Transforming scientific research labs with advances in AI, data science and human–computer interaction

Paul Bonnington, PhD, Professor and Director of Monash eResearch Centre, Monash University, discusses the transformation of scientific research labs with advances in AI, data science and human–computer interaction.

Q: Could you tell us more about how you define eResearch as a concept?

PB: eResearch is best thought of as digital research. All aspects of the research process are undergoing a transformation and it is being applied to all domains — from the humanities, arts and social sciences through to STEM disciplines such as engineering and medicine. These domains have all been fundamentally changed by digital technologies that are making their way into research, which is why the Monash eResearch Centre was established in the mid-2000s to help the university navigate this transformation, given that one of its core business areas is research.

Q: As AI is a big part of your work, what’s your take on how it’s best applied to medical research, or indeed generally?

PB: I believe that the way to apply artificial intelligence is always to make sure that the human is involved. We can see patterns and data that a computer is not going to necessarily be able to find unless we tell it to look for those patterns.

Personally, I find the most exciting application of AI is in computer vision and supported decision-making, because it opens up the potential for ordinary people to apply decision-making AI tools that have been trained to think like experts in the field and, more importantly, to do so from almost anywhere. This kind of AI is described as deep learning — essentially training computer models by feeding the computer lots and lots of data that has been annotated by experts. After a while, the computer model begins to ‘think’ like those experts.
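
To make that training loop concrete, here is a minimal sketch in Python with PyTorch of supervised deep learning on expert-annotated data. The images, labels and network are placeholder stand-ins, not any particular medical or laboratory system.

```python
# Minimal sketch of "deep learning" as Bonnington describes it: feed the
# model lots of expert-annotated examples until its predictions converge
# on the experts' judgments. Data and architecture are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 200 greyscale 64x64 images, each annotated by experts
# with one of three classes (e.g., "normal", "borderline", "abnormal").
images = torch.randn(200, 1, 64, 64)
expert_labels = torch.randint(0, 3, (200,))
loader = DataLoader(TensorDataset(images, expert_labels),
                    batch_size=32, shuffle=True)

# A small convolutional network; production systems use far deeper models.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 3),   # three expert-defined classes
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass nudges the model's outputs closer to the expert annotations.
for epoch in range(5):
    for batch_images, batch_labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimiser.step()
```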

Q: You and your colleague Kimball Marriott were recently awarded the Agilent Thought Leader Award for your work on the interface between AI and lab instrumentation. Can you tell us a little about that?

PB: Receiving the Agilent Thought Leader Award has enabled me to shift the focus in my own research. I started to see that there were applications of AI which were going to fundamentally change how people use scientific instruments. So I became much more interested in the use of deep learning capabilities and the use of computer vision to help solve problems.

Our team has been looking at the sample introduction area of an instrument, which consists of tubes, spray chambers and nebulisers. A few challenges can arise in this area, and we've been collaborating with Agilent on a project that uses computer vision to spot these potential obstacles before the operator does. In doing so, we will be able to warn operators that the instrument might require attention, for example because a component needs reattaching or a blocked nebuliser needs clearing.
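
A hypothetical sketch of the inference side of such a system follows: a trained vision model classifies a camera frame of the sample introduction area and raises a warning when attention is needed. The class names, confidence threshold and model file are invented for illustration and do not reflect the actual Agilent collaboration.

```python
# Hypothetical monitoring loop: classify camera frames of the sample
# introduction area and warn the operator before a fault disrupts a run.
# Class names, threshold and model path are invented placeholders.
import torch

CLASSES = ["ok", "component_detached", "nebuliser_blocked"]
WARN_THRESHOLD = 0.8  # only alert when the model is reasonably confident

model = torch.load("sample_intro_monitor.pt")  # e.g., trained as sketched above
model.eval()

def check_frame(frame: torch.Tensor) -> None:
    """frame: a (1, 1, 64, 64) tensor captured from the instrument camera."""
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1).squeeze(0)
    confidence, idx = probs.max(dim=0)
    label = CLASSES[idx.item()]
    if label != "ok" and confidence.item() >= WARN_THRESHOLD:
        print(f"Warning: {label} detected (confidence {confidence.item():.0%}); "
              f"the instrument may need attention.")
```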

Q: The potential benefits to efficiency and decision-making seem clear, but what are the challenges you face in this frontier of research?

PB: A big challenge of our own work is that the people who benefit from our capabilities, techniques and infrastructure are generating more and more data. As a consequence, it is difficult to keep up with the growth we are experiencing in the generation of new data. We therefore need to know whether anything we generate will be useful at all, which is an equally complex problem, because to a human most raw data looks like noise, yet there might be a hidden gem in there somewhere. This is where AI can also help — it can provide algorithms and models to pre-screen the data and give a good indication of whether anything useful is likely to be found in it.
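
One simple way to implement that kind of pre-screen is an anomaly detector fitted on data known to be uninteresting background, which then flags new files whose feature profiles deviate from it. The following scikit-learn sketch uses synthetic traces and invented features purely to illustrate the pattern, not a production screening pipeline.

```python
# Illustrative pre-screen: flag runs that look different from known background
# noise so humans can prioritise files likely to hold a "hidden gem".
# All traces and features here are synthetic stand-ins for instrument output.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def summarise(traces: np.ndarray) -> np.ndarray:
    """Reduce each raw trace to a handful of screening features."""
    return np.column_stack([
        traces.mean(axis=1),                   # baseline level
        traces.std(axis=1),                    # overall variability
        traces.max(axis=1),                    # strongest peak
        np.abs(np.diff(traces)).mean(axis=1),  # high-frequency content
    ])

# Fit the detector on runs known to contain only background noise.
background = rng.normal(0.0, 1.0, size=(500, 1024))
screen = IsolationForest(random_state=0).fit(summarise(background))

# Score new runs; one hides a small peak a human might dismiss as noise.
new_runs = rng.normal(0.0, 1.0, size=(3, 1024))
new_runs[1, 500:520] += 6.0  # injected signal
for i, score in enumerate(screen.decision_function(summarise(new_runs))):
    verdict = "worth a closer look" if score < 0 else "probably just noise"
    print(f"run {i}: score {score:+.3f} -> {verdict}")
```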

The digital lab: understanding the applications and tools for improving the future of science

John Sadler, Vice President and General Manager, Software and Informatics Division, Agilent Technologies, speaks about the digital lab and, more specifically, the applications and tools for improving the future of science.

Q: Could you explain your take on the benefit of digital innovation in the lab of the future?

JS: I like to think of it as the digital lab that we've needed for a long time now, as opposed to the lab ‘of the future’. Our customers, in general, need to do more with less every year. The customer value behind the digital lab is really about improving lab productivity: reducing labour intensity for analysts, eliminating sample transcription and sample handling errors, and improving the scalability and IT-friendliness of the data systems that run the lab.

Q: How has the COVID-19 pandemic changed the way in which day-to-day lab operations may function?

JS: The pandemic has driven quite a radical change in the level of acceptance of remote deployment, maintenance and support, and in the desire to be able to operate, conduct workflow review and run other lab operations remotely. Vendors like us, who develop lab systems, have been put in the hot seat to make sure we do our part to provide secure systems that are still usable and that support remote work. There has also been a growing recognition that, with appropriately structured data, we could use machine learning to take labour intensity out of lab operations and, in particular, to make analysts more productive. Overall, we have seen lots of opportunity for labour savings in day-to-day lab operations.

Q: And what are the advantages for labs who invest in these steps towards more automated or remote workflows?

JS: I think adopting more digitally advanced applications and tools typically benefits labs in a few different dimensions. The first is the reduction of rework and labour, together with improved quality of output. The second is the ability to access your data in a way that allows you to do more than just generate a report and sign off on it. Electronic records make it possible to start enabling secondary insights, which ultimately improve the quality of lab operations.
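
As a simple illustration of those secondary insights, consider run records kept as structured electronic data rather than signed-off PDF reports: once the records are queryable, a few lines of analysis can surface trends such as instrument drift. The table and column names below are hypothetical.

```python
# Hypothetical secondary analysis over electronic run records: structured
# data reveals trends that individual signed-off reports would hide.
# The records and column names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "instrument": ["LC-01", "LC-02", "LC-01", "LC-02", "LC-01", "LC-02"],
    "run_date": pd.to_datetime(["2021-09-01", "2021-09-01", "2021-09-08",
                                "2021-09-08", "2021-09-15", "2021-09-15"]),
    "qc_recovery_pct": [99.1, 99.0, 98.4, 96.2, 97.5, 93.8],
})

# Per-instrument change in QC recovery: a steady decline flags an instrument
# for maintenance before results drift out of specification.
trend = (records.sort_values("run_date")
                .groupby("instrument")["qc_recovery_pct"]
                .agg(["first", "last"]))
trend["change"] = trend["last"] - trend["first"]
print(trend[trend["change"] < -2.0])  # instruments drifting noticeably
```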

Conclusion

Advances in AI and machine learning have set us on a fast-moving and exhilarating journey towards the digital lab, one which we have the means to control and direct towards a better society. While every exciting innovation has a flipside, particularly in technology, the wider challenges around bias in ML, which can lead to the misclassification of subjects, are becoming better understood and can therefore be addressed ethically as the AI revolution unfolds.

Top image credit: ©stock.adobe.com/au/kras99

