

Updated: May 3, 2022

June 25, 2020

pseudoPLM is What’s Achievable Today in Life & Material Science R&D

By John F. Conway¹, Peter Rhodes, and Chris Waller²

¹ Dotmatics SME and SAM

² EPAM SME, Chief Scientist

In industry, product lifecycle management (PLM) is the process of managing the entire life cycle of a product from inception, with FAIR informatics and knowledge, through engineering design and manufacture, to service and disposal of manufactured products.[1][2] PLM integrates people, data, processes, and business systems and provides a product information backbone for companies and their extended enterprise.[3] It has been used in physical product industries for close to 30 years now, with varying degrees of success. The goal of PLM aligns directly with FAIR data and processes (Findable, Accessible, and Interoperable, which together lead to Reusable); the absence of FAIR is the crux of today's issues and is what prevents very large efficiency gains in R&D. In other industry verticals, such as pharmaceutical manufacturing, with its more reproducible processes and more highly regulated data usage, some of the FAIR principles were achievable even before the concept of FAIR was formalized. Importantly, governance concepts and efforts like Master Data Management (MDM), combined with limited data diversity, promoted higher data integrity.
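To make the FAIR idea concrete, the test for a single dataset can be reduced to a handful of machine-checkable questions. The sketch below is a minimal illustration, not any specific standard; all field names and the ontology term are hypothetical placeholders:

```python
from dataclasses import dataclass

# Hypothetical, minimal metadata record for one experimental dataset.
# Field names are illustrative, not drawn from any particular FAIR schema.
@dataclass
class DatasetRecord:
    persistent_id: str     # Findable: globally unique, resolvable identifier
    title: str
    access_url: str        # Accessible: retrievable via a standard protocol
    vocabulary_terms: dict # Interoperable: terms drawn from shared ontologies
    provenance: list       # Reusable: who/what/when produced the data
    license: str = ""      # Reusable: clear terms of use

def is_fair_ready(rec: DatasetRecord) -> bool:
    """Crude first-pass check: Findable + Accessible + Interoperable
    metadata is what ultimately makes the record Reusable."""
    return bool(rec.persistent_id and rec.access_url
                and rec.vocabulary_terms and rec.provenance)

record = DatasetRecord(
    persistent_id="doi:10.9999/example-assay-001",   # placeholder identifier
    title="Kinase inhibition assay, plate 42",
    access_url="https://data.example.org/assay-001", # placeholder URL
    vocabulary_terms={"assay_type": "BAO:0000190"},  # illustrative ontology term
    provenance=[{"instrument": "plate-reader-7", "date": "2020-06-25"}],
    license="CC-BY-4.0",
)
print(is_fair_ready(record))  # True: all four checks pass
```

The point of the sketch is that FAIR-readiness is a property of the metadata surrounding the data, not of the measurement itself, which is why governance efforts like MDM matter so much.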

Many times, ideas and concepts are conceived simultaneously by different people around the world, especially when they are exposed to the same data, situations, or concepts. The key to success is executing on the idea first! American Motors Corporation (AMC) is one example of an organization that executed on an idea: Product Lifecycle Management (PLM). How else would they have created the legendary Gremlin? In all seriousness, PLM helped drive the company toward excellence and made it attractive enough for Chrysler to purchase it and turn the Jeep franchise into what it is today (Fig. 1 and Fig. 2).

Figure 1

Figure 2

Product Lifecycle Management (PLM) has historically been associated with complex Engineer-to-Order domains. Engineers have been able to leverage the 3D model as the master product definition. They would then use these definitions to build associated processes such as upstream concept designs, product simulations (for physical prototype or test avoidance), manufacturing process definitions at both part and assembly level, and then in-service support. The use of PLM in this context is relatively mature and could yet be a great source of learning for the chemical and biological science-based industries. However, there is a significant difference in the underlying science: engineers typically build on deterministic modeling and simulation, whereas life and materials science remains unavoidably dependent on empirical data gathered from experimentation, whether high-throughput or captured at the bench by a lab scientist. Atomistic/molecular 3D modeling for chemistry and biology is a core part of drug and material discovery today, with increasing prevalence, and would form a core part of the overall knowledge base of PLM. It's a great start, but it does not scale elegantly to complex cellular behaviors in the body, owing to its level of estimation and its representation of the unknown or unseen.

Within the scientific industries, PLM has typically been repositioned to manage the product definition for the then-current production standard, including formulations, specifications, ingredients, recipes, process methods, packaging, labeling (tuned for each geography), site, plant, and machine. It is not typically used to capture all design iterations within the Discovery phase of New Product Introduction (NPI). This is largely due to the relatively low level of investment upstream in discovery science IT, probably a point of contention for some, but also to a lack of vendors with a big enough ambition to pull all the complex threads together; these two causes are mutually reinforcing. Another factor that separates these industries is the willingness to take a horizontal view: deliberately connecting inputs and outputs that cross functional boundaries to better understand the process improvements that can be made upstream to accelerate tasks and improve work-life quality for those operating downstream. In fact, the further upstream you can start, the more likely it is that what is produced for sale closely aligns with the original requirement. Tying customer requirements to the product throughout the NPI process has been a feature of Systems Engineering and Requirements Management in the aerospace and automotive industries for many years.

pseudoPLM was a hidden objective when John F. Conway, Mark Everding, and Dave Hartsough wrote the Virtual Biotech whitepaper, which morphed into and was marketed as the Research Life Science Cloud (RLSC) around the time Accenture acquired LabAnswer. At about the same time, we were corresponding with Chris Waller, who was exploring PLM software for R&D with several specialized providers. In the end, it didn't make it into the solution stack. We recently caught up with Chris about this, and here is what he had to say:

“I like the branding, pseudoPLM. It accurately describes where I believe we are with our appreciation of Product Lifecycle Management (PLM), as a discipline, applied to pharmaceutical research and development. Except for the manufacturing process, life sciences companies have had a very difficult time getting their collective heads around using PLM to drive (or at least assist) the processes surrounding the scientific process. Inherent in PLM is organizational maturity and a willingness to use modeling and simulation (aka data science and analytics) to support decision making. Numerous pharmaceutical company executives have encouraged business transformation through the promotion of data- and/or model-driven drug discovery. And, while we have numerous examples of the effective use of analytics within functional domains of organizations, we have not yet seen anything that approaches the scale or scope of change that can be imagined if we embraced PLM fully. I'm suggesting that a change as transformational as the move from the old-school drafting table and trial-and-error flight experiments to the complete in silico simulation of aircraft design, manufacture, and operation is possible within the life sciences industry. However, the rate limiter here is our limited knowledge of human biology as it relates to the pharmaceutical intervention of disease states, compared to our understanding of aerodynamics and flight. Coming back to the terminology of pseudoPLM: in the absence of a complete understanding and the ability to accurately model and simulate the biological systems dictating pharmaceutical agent activity, we need to start with the sub-systems and functional domains that we can model, and extend and iterate across the domains until we construct a suitable mechanistic understanding and substrate on which to implement full PLM solutions. In summary, we should stop worrying about what we don't know and focus on what we know.” (Chris Waller)

This was the conclusion Chris wrote after we discussed the topic and challenged each other's concepts, and we couldn't agree more!

“One of the major hang-ups with PLM in Discovery R&D versus an aerospace/jetliner manufacturer is that the plane designers can construct their blueprints and transfer the technology, and the plane will be built to spec. Of course, iterations will happen and be documented, but the end result can be physically seen and, in the end, performs mostly as expected. In drug discovery, we usually can't see what we are building; we can only characterize it to the best of our chemistry and physics abilities, and for biology we currently know orders of magnitude less.” (John F. Conway)

Another major factor for PLM in a Life & Materials Science Discovery context is that it needs to be inherently flexible: able to adapt from a batch size of one (perpetual prototyping), investigating a compound or method that has just been uncovered, through to a larger-scale, highly complex set of methods applied to a high-throughput robot, where the process is being fine-tuned and standardized, ready for QA approval to manufacture and beyond. The tech transfer just described needs the level of detail that PLM captures; without it, teams end up on wild goose chases whenever something deviates from the “norm.” Where transaction-based systems (e.g., SAP PLM and LIMS) have struggled in the past is that they are architected from the beginning to manage repeatable processes with built-in deviation handling or error trapping, rather than to provide digital tools that deliberately set out to support the exploration of new things. With the continued push for validated systems upstream from Manufacturing & Development, the highly flexible innovation space in Discovery is under threat. For the science-based industries, ready access to high-quality data is key to improving product (and process) definition and thereby gaining competitive advantage.

This brings us to the core challenges of PLM in chemical and biological science R&D. It has been a difficult journey for major aerospace, automotive, consumer packaged goods, and other product-focused companies, and 30 years on some will say PLM still hasn't quite hit the mark. One major reason is the complexity of balancing the four pillars (People/Culture, Data, Processes, and Technology), especially the people and culture part (Fig. 4).

Figure 4

If we now apply this to the drug/materials discovery world, we have a major problem, as outlined above: it isn't usually possible to work with tangible things. A pseudoPLM world must be a step forward from a world that isn't organized or FAIR from a process and data perspective (not to mention the progression from information to knowledge to wisdom). In fact, when companies aspire to embrace digital transformation, and this is evidenced in their annual reports, strategic initiatives, and the compensation plans of their key research leaders, they should be able to readily achieve a pseudoPLM state.

In our opinion, pseudoPLM is tied to an enterprise scientific request management capability. Everything starts with a request, from the very top to the very bottom. Once in place, request management, combined with the proper scientific informatics platform, computational and design capabilities, and a program knowledge base, will drive an organization to pseudoPLM. It doesn't have to cost you $50 million or $100 million, but it will take sustained commitment, sacrifice, and the adoption of a global scientific data and process strategy. The payoff is the largest efficiency gain your organization has ever seen, because you now have an in silico-first R&D capability that can start and execute smarter and faster, with greater confidence. You have Model Quality Data (MQD) and reproducible processes that are FAIR, and you can actually use AI and ML to drive better decisions. The most important part is that this is driven behind the scenes by proper scientific informatics that allows the amalgamation of data, information, and knowledge based on Therapeutic Areas (TAs), Programs, Projects, Studies, etc., from the original request to the final request at project closure, all with the relevant metadata and contextualization that drives reproducibility in R&D.
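To make the request-centric idea concrete, here is a minimal sketch (all names and fields are hypothetical, not from any particular platform) of how a scientific request might carry its contextual hierarchy, Therapeutic Area, Program, Project, Study, so that every downstream result stays linked to the originating request and can later be rolled up at any level:

```python
from dataclasses import dataclass, field
from collections import defaultdict

# Hypothetical contextual hierarchy: TA > Program > Project > Study.
@dataclass(frozen=True)
class Context:
    therapeutic_area: str
    program: str
    project: str
    study: str

# Hypothetical enterprise scientific request; everything starts with one.
@dataclass
class ScientificRequest:
    request_id: str
    context: Context
    description: str
    results: list = field(default_factory=list)  # data produced in response

def roll_up(requests, level="program"):
    """Group request IDs by any level of the context hierarchy,
    e.g. per program or per therapeutic area."""
    groups = defaultdict(list)
    for req in requests:
        groups[getattr(req.context, level)].append(req.request_id)
    return dict(groups)

ctx = Context("Oncology", "KRAS-G12C", "Lead Optimization", "Study-007")
r1 = ScientificRequest("REQ-001", ctx, "Synthesize analog series")
r2 = ScientificRequest("REQ-002", ctx, "Run potency assay")
print(roll_up([r1, r2]))  # {'KRAS-G12C': ['REQ-001', 'REQ-002']}
```

Because every result hangs off a request, and every request carries its full context, the roll-up described in the next paragraph becomes a simple grouping operation rather than a forensic exercise.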

You can roll up a large majority of your virtual and physical research and get a much better understanding of your legacy, provenance, and pedigree. No, it won't be perfect, because research is very dynamic in many companies, but we would caution that this can't be a reason to do nothing. Much can be done and achieved, and the return on investment will be more and faster discoveries and reduced time-to-market!

  1. Kurkin, Ondřej; Januška, Marlin (2010). "Product Life Cycle in Digital factory". Knowledge management and innovation: a business competitive edge perspective. Cairo: International Business Information Management Association (IBIMA): 1881–1886.

  2. "About PLM". CIMdata. Retrieved 25 February 2012.

  3. "What is PLM?". PLM Technology Guide. Archived from the original on 18 June 2013. Retrieved 25 February 2012.


