
Building Sample/Compound/Bio-Specimen Management and Lab Operations in BioPharma in Months not Years

By John F. Conway and Kevin Rissolo, MBA & LSSGB



Over the past 30 years, 20/15 Visioneers have observed several trends in selecting and implementing systems/solutions for sample and bio-specimen management, from early pharmaceutical research through development to the support of clinical trials.  The right business rules, paired with properly configurable software, will provide the right solution for a complex biopharma sample and bio-specimen environment.  Organizations often misunderstand or underestimate this complexity.  First, there is the logistical complexity of sample pedigree, movement, and tracking; then there is the scientific complexity, which includes uniqueness, scientific metadata/contextualization, storage conditions, and sampling, to name a few.  In the following paragraphs, we will describe what we have seen, selected, helped develop, implemented, and supported.  Fig. 1



Figure 1 The Vast and Complex World of Samples, Compounds, and Biospecimens


To orient the reader, one must understand that five (5) high-level processes occur and must be properly managed in all R&D organizations.  They are Request, Sample, Experiment, Analysis, and Reporting.  These processes are critical to the flow and operation of these R&D organizations.  Some important opinions you might adopt as facts are: 1) everything starts with a request, and 2) sample management becomes the operational backbone of an industrialized R&D organization.  It is simple: if scientists and researchers cannot access compounds, samples, or specimens, they cannot perform their experiments or tests.


Today’s question: “Where are BioPharma companies and CROs going in their journey to manage their samples?”  Stated another way: “How can they get the most value and ROI (return on investment) out of their sample environments, personnel, investments in automation, and external partner vendors?”  The challenge is managing the entire delivery ecosystem and having the data at the right time, cost, and speed to drive decision-making in an ever-changing environment that constantly impacts their research.


This is a complex problem which, when solved well, can increase your company’s agility and delivery and drive value for your leaders, shareholders, patients, and the bottom line.  If done poorly or not at all, the company falls behind and loses its competitive edge.  As we mentioned above, your “backbone” will not be able to support the weight of your research needs, and you will be limited in your ability to manage these increasingly complex sample management environments.


In today’s world, it is no longer acceptable to take 10 to 14 years to deliver a medicine or therapy to market.  Companies need to operate faster, smarter, and better to provide the correct information in less time for less cost and get more value out of their research, ultimately delivering safer, more effective, and more personalized medicines and treatments to their patients.  This does not happen by accident.


Over the past 30 years, our team has helped companies navigate these challenges: first in managing and automating access to their small molecule libraries, then their transformed cell line libraries, then large molecule libraries and repositories, and lately, human clinical biospecimen libraries (translational/clinical).  Many companies undergo the same transformational journey, from local tracking lists and databases to global databases and global requesting tools.  Finally, global sample management becomes GxP-compliant and streamlined, allowing your personnel to work smarter, not harder, seamlessly between internal and external providers, independent of region, because regulatory, validation, and statutory compliance are configured into the solutions/systems.


You need speed, power, freedom (STAT/ad hoc/exceptions), and compliance in your systems to deliver the options that support both your early research investigations and your highly regulated, tightly controlled, SOP-driven late development work.


Be warned that yesterday’s legacy monolithic systems will not cut it in today’s fast-paced, multi-omics biopharma world.  You will need no-code to low-code to high-code flexibility and cloud-native scalability.  We are not just saying this; we have hands-on experience and evidence from a handful of RFP, POC, and pilot engagements.

Whether you realize it or not, where you are going is an operating paradigm in which every sample (as well as its associated data and metadata) is FAIR (Findable, Accessible, Interoperable, and Reusable) in a regulatory-compliant manner, with this process linked flexibly and seamlessly to your automated fulfillment and testing systems.  Companies are building these systems to meet two overarching goals:

1. to have their scientists deliver results in a more streamlined fashion (effectively delivering more - more data, more kinds of experiments, more customized and targeted experiments - with fewer employees) and

2. to allow them to deliver faster at the lowest possible cost.


The idea is simple: allow scientists to run any experiment they can imagine while still having a significant portion of the effort delivered through automation and perhaps High-Throughput Experimentation (HTE).  Still, the process of getting to this point is anything but simple.  There are many complexities that only seasoned subject matter experts and scientific informaticians can help you navigate: people who have worked in the labs, managed these enormous upheavals, been through the processes many times, and made a career out of delivering the flexibility and value you are looking to achieve.  You want to choose a team that knows your science, the systems, and the pitfalls of going down a particular path with a specific vendor/solution.  You also need a team that can envision this next-generation solution, as future landscapes are becoming more diverse and highly capable.  Prime examples are Cloud Labs, advanced CROs, on-site/off-site capabilities, and collaborators.


While the path every company follows on this journey is different, there are enough similarities between these processes that one truth holds: you can’t hire just anyone to do this kind of work and expect it to be successful.  We realize that the software industry, in general, may not have grasped what an opportunity this is; the companies that do this right will have the world knocking on their doors.  One reason is that multi-omics and microbiome research will require intense testing and the accompanying informatics to keep it organized and efficient.


There is no one-size-fits-all solution.  What works in one company may not work in another due to different cultures, bespoke processes, environments, and levels of automation.

As an approach, our process starts by focusing on your company’s culture, processes, data, and technology landscape.  Have you mapped your business processes and workflows?  Are they harmonized?  Optimized?  We work to understand your identity and your appetite for rigid versus flexible workflows.  For example, if your company has chosen a strict experimental design window, with very few delivery options to choose from, then we will work within those design constraints so that you can optimize the output to deliver those agreed formats in a streamlined manner.  If your company allows greater freedom for its scientists, we will work with you to give your scientists many more options while still delivering and running these through a standardized process.  What is essential is that the process allows for flexibility and freedom to operate, yet that design and delivery flexibility exists within an automated framework so that your scientists can bring the benefits of automation to many more kinds of investigations.


The more your scientists can automate, even one task at a time, the more they can focus on higher-value activities, like making sense of the data and making decisions on it.

For effective small molecule sample management (vendors exist today that do this very well), we believe that a company’s small molecule database, tied to a requesting front end, site-based storage, and liquid-handling automation, is the most effective way to handle requesting, feed request metadata into downstream ELNs and LIMS, and manage shipping manifests for outsourced work: an ideal way of working for many mid-to-large-sized biotech and pharma companies.  Providing tools that allow for flexible delivery in a standardized manner can fulfill many kinds of experiments without breaking the process, and it provides an effective way of feeding data to in-house teams and external partner vendors.

For effective Large Molecule and Cell Line Management, we recommend an inventory-searching front end and a sample-entity storage and characterization system, with the ability to request and receive samples in many standard formats and to check in extracted, derived, amplified, etc. substances with a clear lineage back to the source sample (i.e., extracted DNA linked back to the parent tissue).
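To make the lineage requirement concrete, here is a minimal sketch (field and sample names are illustrative, not from any particular inventory product) of how a derived substance can carry an unbroken link back to its source sample:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Sample:
    """A stored entity with an optional link to its parent sample."""
    sample_id: str
    material: str                        # e.g. "tissue", "extracted DNA"
    parent: Optional["Sample"] = None    # None for an original source sample


def lineage(sample: Sample) -> list[str]:
    """Walk parent links back to the source; returns IDs, source first."""
    chain = []
    node: Optional[Sample] = sample
    while node is not None:
        chain.append(node.sample_id)
        node = node.parent
    return list(reversed(chain))


# Example: DNA extracted from a parent tissue sample
tissue = Sample("T-001", "tissue")
dna = Sample("D-042", "extracted DNA", parent=tissue)
# lineage(dna) returns ["T-001", "D-042"]
```

The same parent link supports amplified or further-derived substances: each check-in simply points at the entity it came from, so the full chain is recoverable from any node.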


For Human R&D and Clinical Bio-Sample Management, we have been working on a massive program and can help pressure-test your strategy or build your plan.  We will cover all aspects of the strategy, including Request, Metadata Repository, Consent, Regionality and Regulation, Automation, Tracking, Accessioning, Chain of Custody, Lineage, Storage, and better practices.  As these are GxP processes, systems must be able to track samples from initial collection through to depletion, destruction, or repatriation (if necessary), with each change of circumstances captured in an automated manner.
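As a minimal sketch of the chain-of-custody idea (event names and IDs here are illustrative, not drawn from any specific GxP system), each change of circumstances can be captured as an immutable entry in an append-only log:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class CustodyEvent:
    """One immutable entry in a sample's chain of custody."""
    sample_id: str
    event: str        # e.g. "collected", "shipped", "accessioned", "depleted"
    actor: str        # person or system responsible for the change
    timestamp: datetime


class CustodyLog:
    """Append-only custody log: events can be added and read, never edited."""

    def __init__(self) -> None:
        self._events: list[CustodyEvent] = []

    def record(self, sample_id: str, event: str, actor: str) -> None:
        """Capture a change of circumstances with a UTC timestamp."""
        self._events.append(
            CustodyEvent(sample_id, event, actor, datetime.now(timezone.utc))
        )

    def history(self, sample_id: str) -> list[str]:
        """Return the ordered event names for one sample."""
        return [e.event for e in self._events if e.sample_id == sample_id]
```

A production system would persist these events and integrate with instruments and shipping systems so the capture is automated rather than manual; the invariant to preserve is the same, a complete, ordered, tamper-evident record from collection to depletion.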

While the main tasks for each of these workflows seem similar, there are some pronounced differences that your software vendors may need help understanding.  This is where we facilitate learning and provide real value.


What we do know is how to plan a Program, its work streams, and the needed capabilities to jumpstart and efficiently drive your JOURNEY to its different destinations for a pleasurable and highly effective trip.


Concepts include multi-omics testing and experimentation, consent, industry better practices, automation, properly architected integration, vendor selection, and highly effective implementation strategies.  The glue holding these together is the ability to properly drive stakeholder alignment, change management, and project management.

We would enjoy discussing our case studies in this space with you.  Here they are, anonymized and at a high level:

 

Major Biopharma Human Bio-Specimen Program

Major Biopharma HTS

Major Biopharma SES - Variable systems

Major Biopharma Small Molecule Compound Management

Major Biopharma Discovery DMPK / ADME

Major BioPharma Study Management & Screening Data Capture Systems

Major Biopharma Cell Line Management

Smaller Biopharma CMG HTS Sample Refresh

Smaller BioPharma Automated Antibody Design and Registration Management

Mid-Sized BioPharma Flexible Instrument Run lists  


