How labs and vendors can work together to improve reproducibility

Vector Laboratories
By Pamela James, PhD*
Wednesday, 12 October, 2022


Reproducible experimental findings are fundamental to the quality and progress of scientific research. The ability to reproduce research emphasises the rigour and trustworthiness of the original data, building confidence in the veracity of an experimental observation.

Reproducibility also has important implications for success in research, encouraging publication and bolstering the reputations of researchers and their labs. Conducting reproducible research also enables scientists to repeat and modify experiments quickly and easily without doubting their validity, saving valuable time and money.

However, across scientific disciplines, many researchers are experiencing difficulty when trying to replicate previously observed experimental results. This ‘reproducibility crisis’ manifests in many forms: a single scientist struggling to reproduce results across their own experiments, members of the same lab attempting to confirm the results of other lab members (sometimes years later) and independent research groups failing to obtain results published by other groups. In a recent poll, nearly 80% of biological scientists reported failing to reproduce someone else’s results, citing “not replicated enough in the original lab” and “poor experimental design” as major contributors1. Other inherent features of scientific research, such as the discontinuation of certain reagents and equipment, significant gaps in published protocols and methodologies, and a competitive publishing culture that devalues negative results2, contribute to this issue. Nevertheless, actionable steps toward research reproducibility can be taken by providing sufficient education and validated research tools to new users, empowering them to gather reliable and trustworthy data.

Prioritising new user education

Aside from rare instances of malicious intent, inconsistent results are commonly rooted in honest mistakes caused by a lack of proper training and education for new users. Even someone who has spent years in the lab is still a new user of an unfamiliar method or application. Because the culture of scientific publishing rewards speed in producing results to establish ownership of groundbreaking findings, researchers are often pressured to cut corners in this first, crucial training and education step. Even experienced researchers may assume they can simply ‘figure it out’ when performing a new technique and never pursue adequate training.

Paradoxically, the emphasis on publishing only data that supports a research claim limits users’ access to literature on what methods or experiments don’t pan out as expected, extending the overall time to obtain positive results for groups unaware of the avenues that have yielded negative results for others. The combination of incentivised rushing and devalued negative publishing paves the way for users to make common mistakes when learning a new technique, particularly when that technique is new to the lab.

Often, the most significant mistakes are failing to run appropriate or sufficiently stringent controls and failing to optimise an assay. For example, techniques that detect proteins using primary and secondary antibodies, such as flow cytometry, immunohistochemistry and immunofluorescence microscopy, require fine-tuned optimisation to ensure the ideal reagent concentrations and conditions are used. Without proper training in assay design and optimisation, users may apply too low or too high a concentration of a key reagent such as a primary antibody, resulting in weak or absent signal or in off-target binding, respectively3. Unfortunately, if the same suboptimal concentrations are used repeatedly and give the same results, the data may appear reproducible in that user’s hands yet still be inaccurate. While peer review attempts to catch and correct these common errors by suggesting orthogonal experimental approaches, this alone is insufficient to prevent the publication of unreliable results. It is therefore important to identify additional sources of experimental education and expertise for new users to prevent these common, and often critical, technical mistakes.

Research product vendors can offer an underutilised reservoir of expertise. Most experiments require commercially available products and reagents, and these tools are developed by suppliers who often harbour an untapped breadth of knowledge on the scientific applications of their products. It’s valuable to choose product vendors that provide strong customer support and technical services. Working with an accessible and communicative vendor can bridge the education gap for a new user to learn the appropriate controls and optimisation steps required to ensure reproducible results. Vendors often have unique insights into the nuances of their products and established knowledge of other customers’ experiences, so seeking their guidance can foster success in the lab. Reshaping the connection with product vendors from transactional to relationship-based can significantly enhance new user education and, in turn, improve data reproducibility. Working with a vendor shouldn’t be a single transaction, like buying a pair of shoes, but rather a continuous relationship built on effective education and trust.

Selecting validated tools

While it may seem obvious, it’s important to use research products and tools that have been validated by the vendor/manufacturer for the specific application to reduce the potential for unreliable results. Similar reagents, such as target-specific detection reagents, can be used in many applications, but blocking reagents, diluents, concentrations and incubation times may vary greatly depending on the assay and sample type. The art of determining where to purchase validated tools is not a common lesson taught to scientific researchers. The key is to select vendors with experience and expertise in specific applications and who make the intended, validated use of their product readily available to the user.

Scientific expertise within a specific application space allows a vendor or manufacturer to anticipate shortcomings and pitfalls of a new product during development, well before it becomes available to researchers. If a company is attuned to how a product is likely to be used in inexperienced hands, it can incorporate mechanisms to prevent these mistakes. Robust product development processes incorporate many tools, one of which is ‘failure modes and effects analysis’ (FMEA). This analysis, typically employed to ensure safety and reproducibility in diagnostic tool development, can also be applied to research use only (RUO) product development. In terms of product performance, FMEA systematically considers how a product can fail through incorrect use. It also identifies steps in the manufacturing process that require tightly controlled parameters, allowing the manufacturer to eliminate or mitigate these points of potential failure. Ultimately, this assessment aims to limit the parameters and variables that the end user must optimise, thereby reducing the possibility of erroneous results and increasing the likelihood of reproducible data4. For example, if a biological reagent will not function optimally in an imperfect buffer, the manufacturer can remove this possibility by providing the ideal buffer along with the reagent. This requires the manufacturer to test the reagent in a variety of commonly used buffers to understand their impact on product performance. In areas that can’t be addressed by product design, a manufacturer can include accompanying education in instructions and user guides, bridging the gap between the user’s experience with a technique and the experiment’s requirements for success.
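To make the ranking step of an FMEA concrete, here is a minimal sketch using the risk priority number (RPN) scheme, a standard quantitative device in FMEA practice (not specifically described in this article): each failure mode is scored for severity, occurrence and detectability, and the product of the three scores determines which mode to mitigate first. The failure modes and scores below are purely illustrative assumptions.

```python
# Minimal FMEA ranking sketch, assuming the common RPN scheme:
# RPN = severity x occurrence x detection, each scored 1 (best) to 10 (worst).
# The failure modes and scores below are illustrative, not from any real analysis.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Reagent used in incompatible buffer", 8, 6, 7),
    ("Primary antibody over-concentrated", 6, 7, 5),
    ("Lot-to-lot variation in manufacturing", 9, 3, 4),
]

def rpn(severity, occurrence, detection):
    """Risk priority number: a higher value means the failure mode
    should be mitigated first."""
    return severity * occurrence * detection

# Rank failure modes so mitigation effort (e.g. supplying the ideal
# buffer alongside the reagent) targets the highest-risk mode first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

In this hypothetical table, the incompatible-buffer mode ranks highest, which is exactly the kind of failure a manufacturer can design out by bundling the correct buffer with the reagent.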

Each company and manufacturer differs in the level of investment given to this stage of product development. In general, the more stringent the FMEA, the higher the reliability and quality of the product. It’s essential to consider a supplier’s dedication to this critical step of product development to bolster the chance of producing reliable and reproducible results.

Setting yourself up for success

To foster a community of reliable research, it’s important to partner with suppliers that provide both continuous education and well-validated products. My experience in research has informed my priorities on the supply side: I have seen how a vendor’s upfront investment in product development (or the lack of it) shapes an end user’s success. A vendor that develops products with a team of in-house experts and strives to build extended relationships with researchers can support their scientific endeavours from experimental design to manuscript publication. By fostering communication, education and diligence, researchers and product vendors can work together to increase research reproducibility and ensure the reliability of future scientific discoveries.

*Pamela James is VP of Product at Vector Laboratories.

References
  1. Baker, M. 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016). https://doi.org/10.1038/533452a
  2. https://kosheeka.com/factors-affecting-research-reproducibility-in-biomedical-research/
  3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4985043/
  4. https://link.springer.com/article/10.1007/s00769-020-01441-9




  • All content Copyright © 2024 Westwick-Farrow Pty Ltd