'Crappifier' degrades high-res images for deep learning
Deep learning is a type of artificial intelligence (AI) in which computer algorithms learn and improve by studying examples. It is a potential tool for scientists seeking to glean more detail from low-resolution microscopy images — but it is often difficult to gather enough baseline data to train the computers in the first place. Now, a team led by the Salk Institute for Biological Studies has found a way to make the technology more accessible: by taking high-resolution images and artificially degrading them.
The team’s tool, dubbed a ‘crappifier’ and described in the journal Nature Methods, could make it significantly easier for scientists to capture detailed images of cells and cellular structures that have previously been hard to observe because they require low-light conditions — such as mitochondria, which can divide when stressed by the lasers used to illuminate them. It could also help democratise microscopy, allowing scientists to capture high-resolution images even without access to powerful microscopes.
To use deep learning to improve microscope images — either by improving the resolution (sharpness) or reducing background noise — the system would need to be shown many examples of both high- and low-resolution images. That’s a problem, because capturing perfectly identical microscopy images in two separate exposures can be difficult and expensive. It’s especially challenging when imaging living cells that might be moving around during the process.
The crappifier takes high-quality images and computationally degrades them so that they resemble the lowest-quality images the team would normally acquire. Salk researchers showed the high-resolution images and their degraded counterparts to the deep learning software, called Point-Scanning Super-Resolution (PSSR). After studying the degraded images, the system learned how to improve images that were naturally of poor quality — a significant breakthrough, as computer systems trained on artificially degraded data have in the past still struggled when presented with raw data from the real world.
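To make the idea concrete, the degradation step can be sketched in a few lines of code. The following is a minimal illustrative 'crappifier' in Python — it simply block-averages an image to lower its resolution and adds Gaussian noise to mimic a noisy, low-light acquisition. The function name, parameters, and the exact degradation recipe here are assumptions for illustration; the published PSSR crappifier uses its own degradation pipeline.

```python
import numpy as np

def crappify(img, scale=4, noise_sigma=0.05, rng=None):
    """Toy degradation: downsample a high-res image and add sensor-like noise.

    Illustrative only -- not the published PSSR crappifier.
    `img` is a 2-D float array with values in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    # Crop so the dimensions divide evenly by the downsampling factor.
    h, w = h - h % scale, w - w % scale
    # Block-average to reduce resolution by `scale` in each dimension.
    low = img[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Add Gaussian noise to simulate a low-light, noisy exposure.
    low = low + rng.normal(0.0, noise_sigma, size=low.shape)
    return np.clip(low, 0.0, 1.0)

# Each high-quality image is paired with its degraded counterpart,
# giving the training pairs that would otherwise require two exposures.
hi = np.random.default_rng(0).random((64, 64))
lo = crappify(hi, scale=4, noise_sigma=0.05, rng=np.random.default_rng(1))
```

The payoff is that a single high-resolution acquisition yields a perfectly registered training pair, sidestepping the need to image the same (possibly moving) sample twice.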
“Using our method, people can benefit from this powerful, deep learning technology without investing a lot of time or resources,” said Linjing Fang, an image analysis specialist at Salk’s Waitt Advanced Biophotonics Core Facility and lead author on the paper. “You can use pre-existing high-quality data, degrade it and train a model to improve the quality of a lower-resolution image.”
Waitt facility Director Uri Manor and his team showed that PSSR works with both electron microscopy and fluorescence live-cell imaging — two situations in which it can be difficult or impossible to obtain the duplicate high- and low-resolution images needed to train AI systems. While the study demonstrated the method on images of brain tissue, Manor hopes it could be applied to other systems of the body in the future.
He also hopes it could someday be used to make high-resolution microscopic imaging more widely accessible. Currently, the most powerful microscopes in the world can cost upwards of a million dollars, because of the precision engineering required to create high-resolution images.
“One of our visions for the future is to be able to start replacing some of those expensive components with deep learning,” Manor said. “So we could start making microscopes cheaper and more accessible.”