The hidden costs of NGS

Tecan Australia

By Dr Enrique Neumann*
Monday, 04 June, 2018



The $100 genome era is said to be upon us. But is it really?

Cost analyses of DNA sequencing suggest that this milestone is finally within reach, but in reality most next-generation sequencing (NGS) labs are still spending significantly more than that.

As NGS technologies continue to evolve, the costs associated with sequencing, analysis and data storage continue to fall, making this powerful technology ever more affordable and widely accessible. However, unless you are aware of some of the extra, less obvious expenses associated with NGS, your lab could be losing money unnecessarily without even realising it. While there is a lot of talk about having reached the $1000 genome mark, and even getting as low as $100 per genome, the supporting data (such as that published by the NHGRI¹) is often based on direct ‘production’ costs associated with sequencing and initial data processing. To be fair, this is openly disclosed and sensible for cost-of-goods estimates, but it is not so useful if you are looking to optimise the whole NGS process to make it truly practical and cost-efficient for your lab.

There have been some informative articles about the ‘real cost of NGS’, but for the most part these focus on everything from sequencing onwards, and gloss over the upstream steps of sample and library preparation. Unfortunately, that is a major oversight, because the process of NGS starts long before you press ‘go’ on your sequencer. The good news is that many of these expenses can be reduced and controlled easily, if you know where to look. Here are some things that could be costing you more than you think:

Sample tracking

In a high-volume NGS lab, it is easy to misplace or mix up samples, necessitating recollection or repeat testing. In some cases, replacing samples may not even be an option, or a mix-up may go undetected. Occasional errors like these may not seem significant at first, but over time, hunting down lost samples and re-running tests is a drain on resources and can add significant costs. In addition to the direct costs, there is the incalculable impact of client and patient dissatisfaction. Worse still are the consequences of a poor medical decision due to missing information or an erroneous result stemming from an undetected error. What price would you put on your lab’s reputation? Keeping such mistakes to a minimum takes reliable, integrated measures for end-to-end traceability at every step of NGS, from sample collection right through to reporting.
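The traceability idea can be made concrete with a minimal sketch of barcode-based sample tracking, where every tube scan appends to an audit trail and duplicate or unknown barcodes are rejected immediately. All class and identifier names here are hypothetical, for illustration only; a production system would sit in a LIMS, not a script.

```python
# Minimal sketch: barcode-based sample tracking with an audit trail.
# Hypothetical names; real labs would use a LIMS for this.

class SampleTracker:
    def __init__(self):
        self.history = {}  # barcode -> list of (step, operator) events

    def register(self, barcode):
        # Reject duplicate registrations up front, before any work is done
        if barcode in self.history:
            raise ValueError(f"Duplicate barcode: {barcode}")
        self.history[barcode] = [("registered", None)]

    def log_step(self, barcode, step, operator):
        # An unknown barcode at any step signals a mix-up immediately
        if barcode not in self.history:
            raise KeyError(f"Unknown barcode scanned: {barcode}")
        self.history[barcode].append((step, operator))

    def audit_trail(self, barcode):
        # Full end-to-end history for one sample
        return self.history[barcode]

tracker = SampleTracker()
tracker.register("S-0001")
tracker.log_step("S-0001", "extraction", "tech_A")
tracker.log_step("S-0001", "library_prep", "tech_B")
print(tracker.audit_trail("S-0001"))
```

The design point is that errors surface at scan time, when they are cheap to fix, rather than at reporting time, when they are not.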

Nucleic acid extraction

The ‘massively parallel’ high-throughput nature of NGS has inevitably highlighted upstream bottlenecks in sample preparation, particularly nucleic acid extraction. Conventional methods for DNA and RNA isolation are generally low-throughput and very labour-intensive. In addition to decreasing productivity, manual extraction approaches create unnecessary opportunities for inaccuracy and human error, and that ultimately translates into wasted time, reagents and resources. Once extracted, nucleic acids need to be accurately quantified for subsequent titration. At this stage, you can lower the price per sample significantly by working with the smallest volumes possible, which is typically achieved by using robotics and higher density plates.
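The volume argument is simple arithmetic: reagent cost per sample scales roughly linearly with reaction volume, so miniaturising from a manual 96-well format to a robotic 384-well format cuts the reagent line of the budget proportionally. The prices and volumes below are hypothetical placeholders, chosen only to show the shape of the calculation.

```python
# Illustrative arithmetic only: reaction volume drives reagent cost per
# sample. Volumes and per-microlitre prices are hypothetical placeholders.

def reagent_cost_per_sample(reaction_volume_ul, reagent_cost_per_ul):
    # Reagent spend scales roughly linearly with reaction volume
    return reaction_volume_ul * reagent_cost_per_ul

full_volume = reagent_cost_per_sample(50.0, 0.40)   # manual, 96-well plate
miniaturised = reagent_cost_per_sample(5.0, 0.40)   # robotic, 384-well plate

print(f"Full volume:  ${full_volume:.2f} per sample")
print(f"Miniaturised: ${miniaturised:.2f} per sample")
print(f"Saving: {100 * (1 - miniaturised / full_volume):.0f}%")
```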

Library preparation

If you have large numbers of samples to process, library preparation can actually cost more than the sequencing itself. With the many steps involved in amplification, pooling, normalisation and so on, there are numerous opportunities for waste and error. It is critical that reagents, samples and controls are added in the right amounts, to the right wells, at the right time. Since the volumes involved are very small, on the order of microlitres or even lower, it is easy to accidentally skip a sample or add something twice. At best, such mistakes mean that the process needs to be repeated. At worst, errors such as index cross-contamination can go undetected and lead to erroneous results, with serious consequences, especially in the clinic.
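Some of these mistakes can be caught before any reagent is dispensed by sanity-checking the plate map. A minimal sketch, assuming each well assignment pairs a sample with an index barcode (all well, sample and index identifiers below are hypothetical):

```python
# Minimal sketch: plate-map sanity check before library prep.
# Flags samples loaded into more than one well and reused index barcodes.

def check_plate_map(plate_map):
    """plate_map: dict of well -> (sample_id, index_seq). Returns problems."""
    problems = []
    seen_samples, seen_indexes = {}, {}
    for well, (sample, index) in plate_map.items():
        if sample in seen_samples:
            problems.append(
                f"{sample} loaded twice ({seen_samples[sample]}, {well})")
        if index in seen_indexes:
            problems.append(
                f"Index {index} reused ({seen_indexes[index]}, {well})")
        seen_samples[sample] = well
        seen_indexes[index] = well
    return problems

plate = {
    "A1": ("S-0001", "ATCACG"),
    "A2": ("S-0002", "CGATGT"),
    "A3": ("S-0001", "TTAGGC"),  # same sample assigned to a second well
}
print(check_plate_map(plate))
```

Checks like this cost nothing to run and catch exactly the skip/duplicate errors the text describes, before they become a failed (or silently wrong) run.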

PCR for library preparation and quantitation

It is worth paying special attention to PCR workflows that are integral to library preparation and quantitation. Cross-contamination of samples and amplicons during PCR can be a major issue that is hard to detect. The nature of PCR is such that small errors are readily amplified and can end up skewing your library. This can be particularly problematic when the aim is to assess rare variants (eg, in examining tumour heterogeneity). Library prep for RNA sequencing is particularly vulnerable to PCR-induced distortions, and that adds unnecessary cost to what is already a relatively expensive process. When PCR is used for quantitation, everything is typically done in triplicate, multiplying the total number of pipetting steps and different samples you have to deal with, and increasing the chances of error. Seemingly small PCR errors are far from insignificant when it comes to cost control. It is difficult to estimate the frequency and the eventual cost of these sorts of mistakes, since many of them may go undetected. It is better to introduce measures to avoid such errors in the first place.
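A back-of-the-envelope calculation shows why triplicates matter: tripling the reactions triples the pipetting steps, and even a small independent per-step error rate compounds across a plate. The per-step error rate and steps-per-reaction figures below are hypothetical, chosen only to illustrate how quickly the numbers grow.

```python
# Back-of-the-envelope sketch: triplicate qPCR multiplies pipetting steps,
# and each step carries a small independent chance of error.
# The 0.1% per-step error rate is a hypothetical illustrative figure.

def total_pipetting_steps(samples, replicates=3, steps_per_reaction=2):
    # e.g. one template transfer + one master-mix addition per reaction
    return samples * replicates * steps_per_reaction

def prob_at_least_one_error(steps, per_step_error=0.001):
    # Complement of "every step succeeds", assuming independent errors
    return 1 - (1 - per_step_error) ** steps

steps = total_pipetting_steps(96)   # 96 samples quantified in triplicate
risk = prob_at_least_one_error(steps)
print(f"{steps} pipetting steps, ~{100 * risk:.0f}% chance of >=1 error")
```

Under these assumptions, a single 96-sample triplicate quantitation involves 576 pipetting steps, which is exactly the kind of repetitive workload where automation pays for itself.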

Understanding the real cost of NGS

This list is far from exhaustive, but hopefully the point is clear: it is definitely worth scrutinising the NGS process from start to finish, especially the steps upstream of sequencing, and ruthlessly eliminating unnecessary manual steps and sources of error. Investing in quality automation can go a long way towards addressing all of the above issues. At the same time, it can free up skilled staff for more important (and interesting) work. Of course, the solutions required will vary, depending on specific applications and methodologies.

References: 1. Wetterstrand KA. DNA Sequencing Costs: Data from the NHGRI Genome Sequencing Program (GSP). Available at: www.genome.gov/sequencingcostsdata. Accessed 16 March 2018.

*Dr Enrique Neumann is Product and Application Manager, Genomics, at Tecan, Switzerland. He studied Biology at the University of Santiago de Compostela, Spain. During his PhD at the University of Edinburgh, he focused on the molecular processes in plant cells. He joined Tecan in 2015 and focuses on the development and support of genomic applications for Tecan’s liquid handling platforms.

Image credit: ©stock.adobe.com/au/3dmentat


