The Art of Master Data Management When Transitioning to a New System
- maurinabignotti
- 2 days ago
- 5 min read
by Steven Bates, Ph.D.

I’ve led several master data configurations for new LIMS deployments, a.k.a. paper-to-LIMS transitions, for which there’s a certain art. However, it’s just as common for an organization to already have an existing informatics system that it wants to replace with a new one. This kind of transition has its own particular challenges, but many of the lessons from configuring master data from scratch still apply.
There are a few general reasons why an organization may want to upgrade to a new system. Most commonly, it has a legacy system that may be ten or even twenty years old. Over that length of time, a LIMS or ELN becomes far less likely to keep up with the current needs of biopharma clients. It’s natural for a company at this point to switch to one of the latest generation of informatics systems, which have been designed from the ground up as easily configurable cloud-based platforms, and which can handle data according to ALCOA+ and FAIR data standards.
On the other hand, an organization may want to replace a more recently deployed laboratory informatics system because of rapidly changing company needs and user requirements, or dissatisfaction with the system’s capabilities or user experience. This is the other side of the coin with the newer generation of systems: they will inherently have been tested less in the real world, and the vendors will have less of a customer-service track record. There is a risk of disappointment after a newer system has been deployed, and if the first couple of years give a bad enough impression, the decision may be made that the least bad option is to backtrack as quickly as possible and switch again to a different system.
Other likely motivations include an acquisition that requires harmonization with the new parent company’s systems, or the software vendor making the decision themselves to sunset the product.
I’ve previously described master data management for a paper-to-LIMS project as a conceptual kind of sculpting. Initially, there’s no structure to how data is collected, hence there’s no master data per se: at best it’s implied. The first step of the sculpting needs to simply be creating an ontology that captures the users’ legacy data while conforming to the data model of the new system. This is more than half the battle, but following this there will need to be multiple iterations of ever finer adjustment until both of these broad requirements are simultaneously addressed to the extent reasonably possible.
For instance, one of the projects I worked on involved the client’s manufacturing QC data, from the distinct battery of tests used for each of their multiple products. The test selections had a lot of overlap and commonalities across the products, but just about any rule one could define had an exception. Then there were the added complications that different naming conventions were sometimes used for the same test, or for the same results field from the same test, depending on the product.
The goal was to represent these tests in the client’s new LIMS, which had its own framework that would determine the choices that needed to be made. In this particular system, tests were configurable entities, with results fields as their children. There were also optional child entities called test variations, and if those were configured, different subsets of the result fields could be assigned to different variations. Many of the unresolved questions that needed to be answered during the process concerned when it made sense to treat different instances of what appeared to be a single test on paper as different tests entirely, rather than variations of the same test. Consolidation into a minimal number of tests would have been preferable, but business rules would often override this consideration.
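The entity hierarchy described above can be sketched in code. This is a minimal illustration, not the actual LIMS configuration: the class and field names are hypothetical, and the "variation as a subset of result fields" rule is the one assumption carried over from the text.

```python
from dataclasses import dataclass, field

@dataclass
class ResultField:
    name: str
    unit: str = ""
    precision: int = 2  # decimal places reported

@dataclass
class Test:
    name: str
    result_fields: list[ResultField] = field(default_factory=list)
    # Optional variations: each maps a variation name to the subset
    # of result-field names that variation reports.
    variations: dict[str, list[str]] = field(default_factory=dict)

# A single "Appearance" test shared across products, with two
# variations reporting different subsets of its result fields,
# rather than two entirely separate tests.
appearance = Test(
    name="Appearance",
    result_fields=[
        ResultField("Color"),
        ResultField("Clarity"),
        ResultField("Particulates"),
    ],
    variations={
        "Product A": ["Color", "Clarity"],
        "Product B": ["Color", "Clarity", "Particulates"],
    },
)
```

In this framing, the consolidation question from the text becomes concrete: if two paper tests can be expressed as entries in one `variations` dict, they are variations of one test; if their result fields diverge too much, or business rules forbid sharing, they become separate `Test` entities.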
Of course, this only covered the first couple iterations of the data sculpting process: there still needed to be refinements of field units and precision, which also sometimes differed across test variations. Following this there was an entire second stage, of configuring specification entities which were used to apply the previously configured tests or test variations to each product as appropriate.
If a paper-to-LIMS transition is akin to sculpting amorphous clay into a desired form, LIMS-to-LIMS transitions are like breaking down an existing sculpture and reassembling it into a different form. As noted, the biggest initial hurdle of the former is explicitly defining the test and results ontologies in a coherent form to begin with. This has already been largely done in the case of the latter, and in some aspects the existing forms of the master data from the old LIMS will map cleanly to the new LIMS’s data model. The difficult part is identifying the parts of the data model that need to be broken down and modified, while maintaining as much of the core structure as is reasonable.
As a concrete example, consider again the paper-to-LIMS project example, and the possibility that the client becomes dissatisfied and wants to migrate to a new system. Rather than being primarily oriented around tests, the new system’s data model may start with test metrics as the fundamental entities, with the ability to configure different tests to measure the same result. For instance, if different methods are used by different products to measure sodium content, instead of configuring two different tests or variations, the tests would be configured as children of a single sodium content entity. On the other hand, even if tests are the primary entity, variations might be configured at the time of product specification entity configuration, instead of being configured beforehand as children of a test.
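For contrast with the test-centric model, here is a sketch of the metric-centric alternative described above, using the sodium example. Again, all names here are hypothetical illustrations, not any real vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    name: str
    method: str  # the analytical method this test uses

@dataclass
class Metric:
    # The fundamental entity: one measured quantity, with the
    # tests that can measure it configured as its children.
    name: str
    unit: str
    tests: list[Test] = field(default_factory=list)

# One sodium-content metric, measured by two different methods
# depending on the product, instead of two separate tests or
# variations in a test-centric model.
sodium = Metric(
    name="Sodium Content",
    unit="mg/mL",
    tests=[
        Test("Sodium by ICP-OES", method="ICP-OES"),
        Test("Sodium by Ion Chromatography", method="IC"),
    ],
)
```

The inversion is the point: what was a parent (the test) in the first system becomes a child of the quantity it measures in the second, which is exactly the kind of structural difference that forces refactoring rather than a one-to-one copy.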
Differences in the data model such as these will require refactoring the existing master data to conform. This is simplified in many ways by having an existing data model as the baseline for how faithfully real-world test parameters need to be represented. The particular challenge is re-examining the assumptions that were made in the original paper-to-LIMS transition. Considerations that required the master data to fit a particular shape in the first deployment may no longer be a factor in the second. And if refactoring is being done anyway, any new tests added since the first deployment, of which there may be many, should be taken into account as if they had been part of the set from the beginning, which may suggest further refactoring decisions.
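One way to picture the refactoring step: legacy test/result-field pairs, with their drifted naming conventions, get regrouped under canonical metrics in the new model. The data and alias map below are invented for illustration; in practice the alias map is the product of the domain review described above.

```python
from collections import defaultdict

# Hypothetical legacy master data: (test name, result field) pairs
# exported from the first LIMS, where naming conventions drifted
# across products.
legacy_tests = [
    ("Sodium (ICP)", "Na Content"),
    ("Sodium by IC", "Sodium"),
    ("Protein Conc.", "Concentration"),
]

# Assumed alias map built during review: each legacy result field
# is assigned to one canonical metric in the new system.
canonical_metric = {
    "Na Content": "Sodium Content",
    "Sodium": "Sodium Content",
    "Concentration": "Protein Concentration",
}

# Group legacy tests under the metric they measure, so that each
# metric becomes one entity with its measuring tests as children.
metrics = defaultdict(list)
for test_name, field_name in legacy_tests:
    metrics[canonical_metric[field_name]].append(test_name)
```

The mechanical grouping is trivial; the real work, as the text notes, is deciding the alias map and re-litigating assumptions the first deployment baked in.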
All in all, although transitioning to a new LIMS or ELN system can require an investment of time and resources, it tends to lead to future cost savings, and it doesn’t need to be as intimidating as it seems. The value of partners like 20/15 Visioneers is in providing experts who specialize in scientific data, including the contextual domain knowledge that will help inform any refactoring recommendations and ensure the process goes as smoothly as it can.



