Bio Drug Development: Informatics Challenges & Recommendations
October 14, 2021
20/15 Visioneers, Leaders in Science and Technology, and BioIT Solutions
“Managing Complexity Should Never Be Taken for Granted” – John F. Conway

Some Background and Definitions
Your company has developed an investigational new drug. Congratulations! This was not an easy accomplishment.
Now you face a complex array of scientific, regulatory, and management challenges to bring your discovery to market. Biological products are difficult to produce, and the trials to confirm efficacy and safety are intricate and lengthy. You will be staffing up quickly, outfitting new laboratories, and negotiating partnerships with suppliers, contract research organizations, and clinical trial management companies. You will be required to compile extensive documentation to support regulatory filings.
All this work is data-intensive, as you will need complete, accurate, and auditable records of everything: all the components needed for your drug, manufacturing operations, test methods, assay results, reviews, approvals, lot release, storage conditions, trial design, subject consent, enrollment, and monitoring. All this work is people-intensive, requires expertise in data management, and the knowledge to link processes together into a cohesive operation.
The issue at hand: if you create an environment that is unFAIR and has low data integrity, you will end up working in an even more complex environment than anticipated. It will erode your culture and your ability to be efficient, and it will undoubtedly raise your costs to unpredictable levels. All of this can be prevented, as spelled out in the following paragraphs.

Research to Clinical-Scale Manufacturing with Process Overlap
In this industry perspective, we focus on the progression from research to clinical-scale manufacturing. However, and importantly, many of our observations and recommendations are applicable to clinical trial management, translational medicine, and managing relationships with suppliers and partners.
In the transition from research-scale production to clinical-scale, a strong emphasis on safety, consistency, reproducibility, and quality assurance takes precedence over the discovery of new products. Consequently, the way you capture, store, organize, and report information should adapt as well. Your process development, analytical testing, and quality functions should feed into a data environment that ensures you will have a comprehensive, accurate, searchable, and navigable resource for monitoring process performance, preparing reports, and assembling regulatory filings. Achieving high data integrity and FAIR (findable, accessible, interoperable, and reusable) data should be the goal!
Too often, scientific teams are tasked to build required capabilities without a shared vision or framework for capturing results, reporting status, and ensuring a clean transfer of materials and information from one operation to another. As a result, data “islands” emerge, and we see file systems overflowing with instrument files and spreadsheets. Indiscriminate use of cloud-based applications can contribute to the data fragmentation problem.
Preparing the CMC section for an IND filing becomes incredibly laborious when companies discover missing or inconsistent data late in the game. Can you find all the material records and test results? Can you perform lot-to-lot comparisons to determine the best manufacturing conditions? Are you repeating tests because the initial results are unclear or cannot be verified? These and similar issues will surely arise and need to be addressed. Experience has shown that late discovery can be especially painful and costly.

Proven Methodologies and Design Thinking
So, how do we build a system that results in a FAIR and secure data repository that will pull together information to manage laboratory operations and prepare an IND or BLA? Traditional IT approaches emphasize the preparation of detailed requirements specifications to guide the selection, construction, and roll-out of complex systems. In most cases, the requirements are captured in a static document after many rounds of discussion with stakeholders. There are good reasons to develop and work toward clear specifications.
However, our situation demands a more flexible approach, one in which the requirements are discovered as the organization builds capacity and its product development, manufacturing, and testing processes. In complex settings like drug development, we also need a way to verify that our understanding of the requirements results in a workable system. We need an approach and tool kit that adapts to a complex and evolving set of requirements, can model intricate data relationships, and readily incorporates data from multiple sources.
Well before detailed requirements can be specified, we can make critical decisions on the objectives for a data management system, establish a technical architecture, and agree on general principles for system operation. We know the standards by which drugs must be developed to meet the requirements for patient safety and regulatory approval. You may have some unique attributes to your drug development process, but the general steps for building clinical-scale manufacturing will probably follow well-understood industry practices. Figure 1 identifies the key characteristics for a system to capture and manage drug development data.

A key to building an integrated system is determining what links the various components together. Our systems do not just need to store experimental results. Importantly, we also need to understand the context for interpreting a specific result. Did yield improve in response to a change in the process? How does this lot compare with previous lots of the same type? Can we trace an unexpected result back to the source materials and processing steps to determine what happened? In all these cases, results in isolation are less powerful than seeing the data presented with relevant contextual information. To address this need for contextually relevant information retrieval, we recommend a material-first design as a guide to organize the manufacturing data management system into discrete phases.
First, we identify the types of materials involved in the manufacturing process (i.e., raw materials, intermediates, finished product) to form a material catalog. Then, we associate source materials that combine into intermediates and final goods in a Bill of Materials. The procedures and processes for acquiring materials and logging them in (accessioning) are identified and modeled.
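As a sketch of the first phase, the material catalog and bill-of-materials ideas can be modeled with a few simple data structures. This is an illustrative Python example, not BioIT's actual implementation; all class names, identifiers, and the example materials are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Material:
    """An entry in the material catalog: raw material, intermediate, or finished product."""
    material_id: str
    name: str
    category: str  # "raw", "intermediate", or "finished"

@dataclass
class BillOfMaterials:
    """Associates a produced material with the source materials it is built from."""
    product: Material
    components: list = field(default_factory=list)  # (Material, quantity) pairs

    def add_component(self, material, quantity):
        self.components.append((material, quantity))

# Build a tiny catalog and a bill of materials (example entries are invented)
antigen = Material("RM-001", "Purified antigen", "raw")
polymer = Material("RM-002", "Carrier polymer", "raw")
conjugate = Material("INT-001", "Antigen-polymer conjugate", "intermediate")

bom = BillOfMaterials(conjugate)
bom.add_component(antigen, 2.0)
bom.add_component(polymer, 1.0)
```

In a real system the catalog would live in a database and carry accessioning metadata (supplier, receipt procedure, storage requirements), but the core relationship, products composed from cataloged source materials, is the same.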
Second, processes for creating lots and tracking inventory are specified and captured. We ensure the lineage of every vial or flask of material is recorded to enable traceability up and down the manufacturing process.
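Lineage-based traceability amounts to walking a parent/child graph of lots in both directions. The sketch below shows one way to model it; the registry class and lot identifiers are illustrative assumptions, not part of any specific product:

```python
from collections import defaultdict

class LotRegistry:
    """Records which parent lots each lot was made from, so lineage
    can be traced upstream (to sources) and downstream (to products)."""

    def __init__(self):
        self.parents = {}                 # lot_id -> list of parent lot_ids
        self.children = defaultdict(list)  # lot_id -> list of child lot_ids

    def register(self, lot_id, parent_ids=()):
        self.parents[lot_id] = list(parent_ids)
        for p in parent_ids:
            self.children[p].append(lot_id)

    def trace_upstream(self, lot_id):
        """All ancestor lots (source materials) of a lot."""
        seen, stack = [], list(self.parents.get(lot_id, []))
        while stack:
            p = stack.pop()
            if p not in seen:
                seen.append(p)
                stack.extend(self.parents.get(p, []))
        return seen

    def trace_downstream(self, lot_id):
        """All descendant lots made (directly or indirectly) from this lot."""
        seen, stack = [], list(self.children.get(lot_id, []))
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.append(c)
                stack.extend(self.children.get(c, []))
        return seen

# Hypothetical lineage: two raw-material lots -> intermediate -> finished product
reg = LotRegistry()
reg.register("RAW-A-001")
reg.register("RAW-B-001")
reg.register("INT-001", ["RAW-A-001", "RAW-B-001"])
reg.register("FP-001", ["INT-001"])
```

With this structure, an unexpected result on lot FP-001 can be traced back to every source lot that contributed to it, and a suspect raw-material lot can be traced forward to every lot it touched.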
Finally, we introduce workflows to dispatch materials for testing using a Test Requisition concept. A requisition associates a specific sample with a test to be performed, thereby creating a backlog of materials awaiting testing. The backlog should be serviced via the system, where lab managers can view the backlog and select samples for testing. As testing is performed, results are captured by the system.
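The requisition-and-backlog workflow can be sketched as a small state machine: a requisition links a sample to a test, sits in a pending queue until a lab picks it up, and closes when a result is recorded. Class names and sample IDs here are illustrative, not from any actual system:

```python
from dataclasses import dataclass

@dataclass
class Requisition:
    """Associates a specific sample with a test to be performed."""
    sample_id: str
    test_method: str
    status: str = "pending"   # pending -> complete
    result: object = None

class TestBacklog:
    def __init__(self):
        self.requisitions = []

    def request(self, sample_id, test_method):
        """Create a requisition, adding the sample to the testing backlog."""
        req = Requisition(sample_id, test_method)
        self.requisitions.append(req)
        return req

    def pending(self):
        """The backlog view a lab manager services when selecting samples."""
        return [r for r in self.requisitions if r.status == "pending"]

    def record_result(self, req, result):
        """Capture a test result and close out the requisition."""
        req.result = result
        req.status = "complete"

# Hypothetical usage: two requisitions, one completed
backlog = TestBacklog()
r1 = backlog.request("VIAL-0042", "potency assay")
backlog.request("VIAL-0043", "sterility test")
backlog.record_result(r1, {"potency": 0.98})
```

A production system would add intermediate states (in progress, on hold), reviewer sign-off, and audit trails, but the backlog-driven flow is the essential pattern.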
This approach ensures that data collected on a test material carries its genealogy, storage conditions, and prior results along with it. Dashboards and queries can readily identify the progress of work moving through the various manufacturing steps, highlight exceptional conditions and acquire information needed for regulatory filings. The material-first design concept is shown in Figure 2, below.
With our material-first design approach, and an understanding of biotechnology product development, you can build a highly effective data management solution concurrently as you establish laboratories, refine processes, and implement partnerships.

Case Study: Affinivax
A recent example of how we (BioIT Solutions) have applied our “Material-First” design approach is the work we are performing with Affinivax (affinivax.com) in Cambridge, Massachusetts. Affinivax is a clinical-stage vaccine company developing products based on their proprietary Multiple Antigen Presenting System (MAPS) platform. Our initial project with Affinivax involved working with their stability lab to receive and test material from their manufacturing contractor. Each lot of material gets aliquoted into barcoded vials to be stored in different locations and storage conditions. Stability protocols dictate the timepoints, storage conditions, and test methods for assays to assess how well the product maintains its safety and effectiveness over time. Time-series analysis by lot and storage method provides the basis for the product’s shelf-life.
For the stability requirement, we built a material catalog where the critical product attributes are stored. A receipt process enables users to aliquot bulk product into vials, print barcode labels, and store the vials in various freezers and storage locations. The stability protocol gets converted into a virtual calendar, from which future test requests are generated when it is time for the next stability pull. Users receive email alerts when stability intervals have been reached, and a calendar view provides an overview of current and upcoming work. The system generates freezer pick lists and populates the backlog for each of the labs responsible for testing samples and reporting results. Testing labs service the backlog of requests and upload results as tests are completed. Users analyze the performance of lots by viewing critical quality attributes over time. Since lots from different products are being received continually, the tracking of material inventory, pull schedules, test requisitions, results capture, and lot data analysis quickly becomes overwhelming without an integrated system.
When Affinivax decided to build their own manufacturing capability for phase 1/2, it made sense to use the same platform and database technology for product manufacturing requirements, building on the foundations and relationships from the stability work. Now, we are constructing a comprehensive bill of materials, inventory, and test data management system for their new cGMP manufacturing facility.
The results have been impactful. According to Vandana Dole, Associate Director of Quality Assurance at Affinivax, “BioIT has been a game-changer for our Company. We started with the Inventory Management module to manage our stability program and quickly moved to management of QC data and for phase 1/2 manufacturing operations on BioIT. BioIT platforms allow us to quickly connect the dots between different operations.” By starting with a focused but meaningful effort like stability testing, Affinivax ensured the scope of the pilot project was manageable, learned valuable lessons about how to effectively manage their manufacturing data, and established a productive relationship with a key partner.

Selecting the Right Solution Provider
For a company building new facilities, launching biotechnology manufacturing capabilities, or expanding its product line, it is important to craft an effective and FAIR data management solution. Essential items to look for in a solution provider include:

Select a qualified partner – Work with a company or advisor who knows biotechnology product development, has experience delivering highly effective systems, and will adapt their approach to your processes and priorities. Find a group skilled at working with imprecise requirements that are expected to evolve as the project progresses.

Build the system(s) on an extensible platform – Relying entirely on a collection of point solutions and hand-crafted spreadsheets leads to undesirable data silos. Your partner should offer a suite of tools and pre-configured applications that allow them to quickly configure your data model and prototype your workflows. The platform should have integration tools to acquire data from instruments and exchange data with other software applications. The platform should be built with common, commercial-grade components (servers, database, web server, etc.) that are well established and supported.

Adopt an approach suited to scientific data management – Scientists use iterative refinement: running an experiment, evaluating the result, and adapting the next experiment to include what was learned. The system development methodology should follow the same pattern: prototype requirements in a working model, test the prototype, and use what is learned to influence the next iteration. Once a working model is established, it can become the basis for the formal requirements specifications and validation testing documentation expected for systems that meet FDA regulations.

Building an effective and FAIR informatics capability concurrently with your process development and manufacturing capacity is not easy. However, it is a worthwhile investment to ensure that valuable data is captured, organized, stored, and available when needed. The resulting data integrity that you achieve as an organization is invaluable. It drives better decision making, smoother collaborations and submissions, and allows you to focus on process optimization and harmonization. Your value chain converting raw materials into finished goods is greatly enhanced by a data management system that spans your entire supply chain.