    January 23, 2023

    Old lab software can't solve today's R&D problems

    Lab software helps your lab. But is it helping your experiments? In this article, we dive into the problems that existing lab software can’t solve, and why this might be hurting science.

    “A better ELN won’t solve your problems,” argues Jesse Johnson in Scaling Biotech. And he’s right. I have already written about how ELNs are a dead end. And while people do love to “talk trash about ELNs and LIMS,” Johnson is also right that both of these pieces of software serve an important purpose in the lab. But here’s the tricky part: these “important purposes” were understood and defined decades ago.

    So what of the needs and requirements that have emerged since then? There’s a fundamental barrier here: you can’t always fix new problems with old tools. And in today's R&D labs, there are plenty of problems that have yet to be solved:

    1. Experiments need massive effort to manage and see through to completion
    2. There’s no standardized model for sharing work in labs
    3. It’s difficult to automate most experiments
    4. Most experiments reveal only part of the picture and aren’t conclusive
    5. Experiments can be tough or even impossible to reproduce
    6. Methods and results from experimentation are difficult/impossible to build upon

    Can any of the above problems be solved by an ELN? By a LIMS? By better automation scheduling? No, they can’t. Let’s look at why.

    Old problems: better the devil you know?

    Today’s lab software landscape is split into roughly four different areas:

    • Electronic Lab Notebooks (ELNs)
    • Laboratory Information Management Systems (LIMS)
    • Lab data platforms
    • Automation software

    Each piece of software solves a specific problem, usually one identified decades ago. They are “point solutions” aimed at fixing individual problems found in labs. Some options on the market can do multiple things at once, but all their functionality fits into roughly the same categories.

    Let’s take a look at these different categories, the problems they solve for, and the current landscape of lab software as it exists today.

    Problems existing lab software doesn’t solve

    Electronic lab notebooks (ELNs) were originally a single place to record discoveries against a certain date to show when an invention was made, for the sake of patents. This has expanded to include the recording of protocols, observations, and other miscellaneous data about the lab, lab work, and other user-generated content. What they can’t do is help you automate those protocols, see their context against experimental data or sample metadata, or do any of this automatically when running experiments in a lab. They also don’t tend to have a clear, unified data model or ontology, so data from different experiments (and even within experiments) can still be fragmented. How do you know that the things recorded in your ELN are error-free, or a true record of what really happened?

    Laboratory information management systems (LIMS) were originally a way of keeping track of samples, but have become an umbrella term for lab informatics systems more generally. They don’t know what happens to your samples in the experiments themselves, and anything related to your experiments has to be self-reported in the LIMS rather than captured automatically in the course of experimentation in the lab. If it wasn’t recorded manually by someone in the LIMS, did it even happen?

    Lab data management systems (LDMS) are a way to bring all lab and experimental data together in a single place, either with the aim of analyzing it in the platform itself, or preparing it for analysis elsewhere (often by another team). But they don’t help you understand your data in the context of your experiments as they were designed, how they actually took place in the lab, or the samples those experiments used. The data is “naked” without potentially vital metadata (because automation doesn’t generate it). How do you know which protocol, method, or experiment helped produce your data, and how can you understand how effective it was?

    Lab workflow, lab automation, or even “orchestration” tools are a way to automate lab equipment, which usually means only liquid handlers or dispensers. But they don’t help you know whether your increased utilization or throughput is actually leading to useful data or results. How do you know your automation is generating more valuable scientific data than you had before? They also don’t do more than the individual automation tasks you set them to do. Once a run is done, it’s done.

    We can’t fix new problems with old solutions

    When new customers come to us, they already have some form of lab software covering everything above. But they’re usually on the hunt for something else, or an alternative version of what they already have, in the hope that it will help them move the dial.

    In short, they’re looking for “better” lab software. But better lab software only does a better job of solving the old problems, and can’t solve any of the others that remain. The fundamental problem left unsolved, underlying everything else, is that experiments live in a thousand places, held together only in the minds of scientists. The pieces themselves are scattered around the lab:

    • Experiment designs in notebooks and Word documents
    • Experiment calculations on paper and spreadsheets
    • Experiment automation instructions in scripting software
    • Experiment sample information in inventory systems
    • Experiment data and metadata in databases
    • Experiment notes, written down and forgotten.

    We’ll always need high-quality labs, and likely much of the functionality we already have in today’s commonplace lab software. But as these two things yield diminishing scientific returns, we need to shift our focus from the lab to the experiment. Experiments are what the lab is there for, and it’s powerful experiments that drive the insights we need.

    A digital model for experiments

    Where lab software helps solve point-solution problems, digital experiment software takes a holistic approach. A digital model of an experiment could be designed and planned with straightforward tools, worked on from anywhere in the world, and automated in any lab. It could be integrated with equipment to run powerful multifactorial workflows, then gather data and metadata for immediate analysis. It could capture and connect the context and intent of each experiment, helping others better understand them.

    It would also be a powerhouse for collaboration, acting as a common, enduring resource for everyone who needs to understand an experiment. At its core, a digital model of an experiment would be a powerful, unified blueprint. For scientists, technicians, engineers, data scientists, bioinformaticians, and leadership, it would be a shared way to run experiments that we used to think were impossible.
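    To make this more concrete, here is a rough sketch, in Python, of what such a unified experiment record might look like: design, samples, protocol, and results linked in one structure. The class and field names are illustrative assumptions, not Synthace’s actual data model or any vendor’s schema.

    from dataclasses import dataclass, field
    from typing import Any

    # Hypothetical sketch of a unified "digital experiment" record.
    # All names and fields here are illustrative assumptions.

    @dataclass
    class Sample:
        sample_id: str
        metadata: dict[str, Any] = field(default_factory=dict)

    @dataclass
    class ProtocolStep:
        instruction: str  # human-readable step
        automation_params: dict[str, Any] = field(default_factory=dict)  # machine-readable settings

    @dataclass
    class Experiment:
        name: str
        intent: str                           # why the experiment is being run
        design_factors: dict[str, list[Any]]  # e.g. {"temperature_C": [25, 30, 37]}
        samples: list[Sample]
        protocol: list[ProtocolStep]
        results: dict[str, list[dict[str, Any]]] = field(default_factory=dict)

        def link_result(self, sample_id: str, measurement: dict[str, Any]) -> None:
            # Attach data to the experiment so it keeps its design and sample context.
            self.results.setdefault(sample_id, []).append(measurement)

    Because design, samples, protocol, and data live in one structure, the context and intent of an experiment travel with its results, rather than having to be reassembled from notebooks, spreadsheets, and inventory systems after the fact.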

    If we want to solve new problems with our experiments, we can’t rely on our old tools. We need to forge new ones, and put them to use.

    Want to read more about digital experiment software? Click here.

    Markus Gershater, PhD

    Markus is a co-founder of Synthace and one of the UK’s leading visionaries for how we, as a society, can do better biology. Originally establishing Synthace as a synthetic biology company, he was struck with the conviction that so much potential progress is held back by tedious, one-dimensional, error-prone, manual...
