
BioELNs – The Need for Adoption in Data-Driven R&D is Foundational


July 14, 2021 | ELN, Science and Technology, Scientific Informatics

AN INDUSTRY PERSPECTIVE

By 20/15 Visioneers, Leaders in Science and Technology

“Our understanding of biology is very limited, let’s not compound it with an inadequately captured data and process environment.” – John F. Conway

CONTENTS

Some History
What Makes a Great BioELN?
R&D Organizational Size Does and Does Not Matter
Stop the Insanity!
Implementation: A Phased Approach
The Next Generation BioELNs
Conclusion

SOME HISTORY

As we remember it, the first ELNs were designed to capture Intellectual Property (IP) and small-molecule chemistry. Biologists were on the team but were also designated to test these new molecules in various assays. They would then enter the data into a bespoke or early decision-support system like MDL’s ISIS/Base, which would marry up the small-molecule data with the new biological assay results and summary data refreshed daily. Many times the chemists used an ELN while the biologists were still working on paper or had a rudimentary paper-on-glass ELN. As we have mentioned many times, the market evolved and several companies produced bio-oriented ELNs. The issue, however, was that these ELNs suffered from a lack of biology-persona coverage, so a patchwork of diverse solutions grew up to cover the gamut of workflows.

This led to silos of non-use that ultimately damaged adoption of this foundational requirement: capturing the electronic scientific method, the bedrock for FAIR data and processes (Figure 1). Taking the time to document work properly is a critical step in R&D organizations, ensuring they get secondary and tertiary use out of all that data and information. The right level of process rigor and metadata contextualization is imperative to produce Model Quality Data, defined as data of sufficient breadth, resolution, and fidelity to confidently drive scientific understanding to higher levels of abstraction, informing burgeoning Artificial Intelligence (AI) and Machine Learning (ML) approaches capable of uncovering breakthrough insights. This is the golden path to more efficient innovation.
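To make “metadata contextualization” concrete, here is a minimal, hypothetical sketch (our illustration, not any vendor’s schema) of the difference between a bare assay readout and a record carrying enough context to be found, interrogated, and reused downstream:

```python
# Hypothetical example: a bare readout vs. a metadata-contextualized record.
# All field names, identifiers, and the instrument model are illustrative only.

bare_readout = 0.45  # a number with no context cannot be reused or modeled

contextualized_result = {
    "experiment_id": "EXP-2021-0042",                      # hypothetical ID
    "protocol": {"name": "Kinase inhibition assay", "version": "3.1"},
    "sample": {"id": "CMPD-00917", "batch": "B07", "concentration_nM": 10},
    "instrument": {"model": "EnVision 2105", "calibration_date": "2021-06-30"},
    "readout": {"value": 0.45, "units": "normalized signal"},
    "analyst": "j.doe",
    "recorded_at": "2021-07-01T14:32:00Z",
}
```

It is this kind of context, applied consistently, that turns raw output into data a secondary user, a tech-transfer partner, or an ML pipeline can actually trust.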

Figure 1

WHAT MAKES A GREAT BioELN?

It is now 2021 and several partner vendors have risen to the challenge, delivering platforms that provide higher levels of utility not only to the variety of functional personas on a team (including Molecular Biology, Assay Development, Screening Sciences, In Vitro/In Vivo/In Silico Analysis, Bioanalytical, and Bioprocess Research) but also to the six needed high-level workflows: Request, Sample, Test, Experiment, Analyze, and Report. Today’s small-molecule-supportive biology and large-molecule discovery and development processes are a hybrid of engineering and experimentation, so a well-constructed BioELN needs to handle both, along with any relevant chemistry, since the two worlds overlap more often than not. Of course, there are organizations running biology-only programs that do not have to deal with the chemistry aspects of a project. The critical point is that the BioELN may not be the repository of all related project content, but it must be a reliable element of truth, necessary and sufficient to reproduce or replicate a previous experiment. In a high-complexity R&D world dealing with multiple data types and enormous data volumes, deficiencies in data and process contextualization engender poor communication, leading to unreliable reproducibility and replication of work. Institutional knowledge and tech transfer suffer, and the deficient IT environment often ends up resembling the very thing you are trying to tackle: a disease.
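Purely as a thought experiment (not a prescription from any vendor), the six high-level workflows can be pictured as explicit stages that every piece of work passes through, which is what makes them traceable end to end:

```python
from enum import Enum, auto

class Workflow(Enum):
    """The six high-level BioELN workflows named above."""
    REQUEST = auto()
    SAMPLE = auto()
    TEST = auto()
    EXPERIMENT = auto()
    ANALYZE = auto()
    REPORT = auto()

# A simplified, linear hand-off for illustration; real programs branch and
# iterate, but each stage should still leave an electronic trace in the ELN.
PIPELINE = [Workflow.REQUEST, Workflow.SAMPLE, Workflow.TEST,
            Workflow.EXPERIMENT, Workflow.ANALYZE, Workflow.REPORT]

for stage in PIPELINE:
    print(f"Stage captured in ELN: {stage.name}")
```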

You may resign yourself to living with it, but it affects your quality of life and can eventually result in total project failure.

In a simplified baking analogy, Biopharma biology shares the same vagaries as baking bread (unless you are making Iranian Barbari bread, reputedly the most difficult bread in the world: www.thefreshloaf.com/node/62118/most-difficult-bread-world). We are sure you have experienced a cook who winged their recipes, or who later in life was not as rigorous with their preparations. Though this sometimes delivered great results, you never enjoyed a consistent product because the process was not captured properly. The same type of reproducibility and replication failure occurs far too often in industry and academia, and it is increasingly being recognized as a large and costly problem.

A GREAT BioELN NEEDS TO COVER THESE:

Biology personas and high-level workflows

1. Molecular Biology (engineering) – the equivalent of what medicinal and synthetic chemists have had for 20+ years (e.g., reactions, molecule searching, stoichiometry => recipes/techniques, planning, plasmid mapping, BLAST searching, alignment tools, and other genetic engineering tools to plan, visualize, and document DNA cloning, PCR, etc.)

2. Assay Development – planning, interactive plate/sample mapping, curve fitting (with algorithm configuration; see the sketch after this list), integration with R, and an adaptive, highly configurable process and data environment

3. Screening Sciences (in vitro/in vivo/in silico) – handling of multiple assay types, including high-throughput screening (HTS), high-content screening (HCS) and imaging, surface plasmon resonance (SPR), animal studies, and DMPK/PK-PD

4. Bioprocess – regulation, fermentation, scale-up, real-time monitoring, historian (continuous) data, and engineering

5. Bioanalytical – standards, planning, workflows, analysis, and reporting

Photos courtesy of Cristof Gaenzler

6. The intuitiveness and usability of the system

7. Low-code/no-code configuration

8. Integration with foundational tools like entity registration, large-molecule editors (e.g., HELM), visualization (e.g., Spotfire, R), and inventory
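As referenced in item 2 above, curve fitting with configurable algorithms is a bread-and-butter BioELN capability. The sketch below shows a four-parameter logistic (4PL) dose-response fit using SciPy; the data points, starting guesses, and parameter names are hypothetical, and a production BioELN would expose this kind of routine through a configurable interface or its R integration rather than ad hoc scripts.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, hill, ic50, bottom):
    """Four-parameter logistic model commonly used for dose-response curves."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Hypothetical normalized readouts at eight concentrations (nM).
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])
signal = np.array([0.98, 0.95, 0.90, 0.72, 0.45, 0.20, 0.08, 0.04])

# Initial guesses are part of the "algorithm configuration" a BioELN should expose.
popt, pcov = curve_fit(four_pl, conc, signal, p0=[1.0, 1.0, 10.0, 0.0])
top, hill, ic50, bottom = popt
print(f"Fitted IC50 ≈ {ic50:.1f} nM, Hill slope ≈ {hill:.2f}")
```

Capturing the fitted parameters, the model choice, and the raw points together in the ELN is what makes the result reproducible by the next scientist.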