    February 20, 2023

    Bioscience experiments are too difficult

    Why are experiments difficult? Most biologists I know never ask this question because it’s their lived reality. But when we look at it properly, it’s clear that the process of creating, planning, running, and managing experiments is incredibly difficult.

    But do experiments need to be difficult to be effective? What if experiments were significantly easier? And what if they were both easier and more effective?

    It bears thinking about. First, let’s look at where all this difficulty comes from.

    Experiments are a mental model (and that's a problem)

    Experiments can take weeks to design and plan, then take meticulous patience and resilience to run in the lab. They’re hard to analyze afterward because all the raw data needs beating into some kind of meaningful shape. And then we need to take the sum of all this and write it up somehow, usually cramming it into a PowerPoint we can share with colleagues.

    Given all this work, it might make sense to share the load… but collaborating on an experiment can itself require substantial time and energy to describe all the details to potential collaborators. Those people are often in a neighboring discipline and—without meticulous communication—some things get lost in translation.

    There's also the painful truth that many biological experiments are simplistic and underpowered in their design. Single-factor experiments yield annoyingly inconclusive results, leading us into dead ends or—worse—wild goose chases that fizzle out.

    And that's not all. In the process of dealing with the challenges of running experiments, scientists generate a wide array of different materials:

    • Designs and hypotheses in (electronic) notebooks and Word documents
    • Calculations on paper and Excel spreadsheets
    • Hardware method files in hardware-specific software
    • Sample information in inventory systems or notebooks
    • Data and metadata in USB drives, laptops, and databases 
    • Notes, written down (and sometimes forgotten)
    • Write-ups, reports, and presentations in notebooks, Word, and PowerPoint

    Over the course of an experiment, who is it that has to know about all of these different materials, and how they relate to one another? Who knows the history and the context of every decision taken throughout the experiment, and why it was important? Who knows the peculiarities of each piece of equipment? Of each protocol and process?  

    In the end, there is only one place that holds a comprehensive model of everything that was done in an experiment and why it was done.

    The mind of the scientist.

    How "mental model" experiments are self-limiting

    “Yes, Markus,” you might be saying, “of course we keep everything inside our heads.” But that’s the problem. To illustrate, let’s think about the different moving parts of a typical experiment. For example, what happens when I decide I want to run 30 samples through instead of 10, and run them with 4 replicates instead of 2?

    Changes like these require a whole cascade of changes to the experiment, a chain of events I must personally micromanage:

    • The amount of reagent I have to make up has changed
    • The maps of my plates have changed
    • The number of plates (and hence the number of controls and standards I need to run) has changed
    • If I’m using automation, the whole thing will have to be re-scripted (by me... or more likely someone else who'll need some explanation)
    • My data analysis pipeline has to be completely remodeled

    With a couple of small changes, my workload expands in every direction. This is a strong disincentive to change. To put it another way: current experimentation resists improvement and prefers caution at the expense of progress.
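To make that cascade concrete, here’s a minimal sketch in Python. Every number in it is an illustrative assumption (96-well plates, 3 control wells per plate, 50 µL of master mix per well, a 10% overage), not a real protocol; the point is how many derived quantities move when just two design parameters change.

```python
import math

WELLS_PER_PLATE = 96            # assumed plate format
CONTROLS_PER_PLATE = 3          # assumed: controls repeated on every plate
MASTER_MIX_UL_PER_WELL = 50.0   # assumed per-well reagent volume
OVERAGE = 1.10                  # assumed 10% pipetting overage

def derived_quantities(samples: int, replicates: int) -> dict:
    """Everything downstream of the two design parameters."""
    sample_wells = samples * replicates
    usable_wells = WELLS_PER_PLATE - CONTROLS_PER_PLATE
    plates = math.ceil(sample_wells / usable_wells)
    total_wells = sample_wells + plates * CONTROLS_PER_PLATE
    master_mix_ml = total_wells * MASTER_MIX_UL_PER_WELL * OVERAGE / 1000
    return {"plates": plates, "wells": total_wells,
            "master_mix_ml": master_mix_ml}

print(derived_quantities(10, 2))  # 1 plate, 23 wells, ~1.3 mL of master mix
print(derived_quantities(30, 4))  # 2 plates, 126 wells, ~6.9 mL
```

Today, the scientist is the runtime that performs this recalculation by hand, scattered across notebooks, spreadsheets, and scripts. The code only shows how mechanical the work really is.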

    And it gets worse.

    What would happen to my team if, halfway through an experiment, I left because of illness or a new job? What would happen if my boss told me to make the same experiment work in another lab next door, or even on the other side of the world with another team? What if an experiment that worked three months ago fails to replicate today? What happens if we buy new automation equipment that we don’t yet know how to code for?

    And I’m not forgetting the power of the experiment itself, either. Compared to single-factor experimentation, Design of Experiments (DOE) is immensely powerful and perfectly suited to exploring biological complexity. But it’s still not the norm. Many teams still run experiments “one factor at a time” (OFAT), and is it any wonder? It’s hard enough managing the knock-on chain of events and immense workload from changing one factor, let alone many.
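For a sense of the gap between the two approaches, here’s a hedged sketch of a full factorial design, the simplest flavor of DOE, using made-up factors and levels. Varying the same three factors one at a time against a baseline would cover 7 conditions; the factorial covers all 27 combinations, and with them the interactions between factors that OFAT is structurally blind to.

```python
from itertools import product

# Illustrative factors and levels only, not a real assay design
factors = {
    "temperature_C": [25, 30, 37],
    "inducer_mM": [0.1, 0.5, 1.0],
    "media": ["LB", "TB", "M9"],
}

# Every combination of every level: 3 x 3 x 3 = 27 runs before replicates.
# An OFAT plan would run a baseline plus two alternatives per factor: 7 runs.
full_factorial = [dict(zip(factors, levels))
                  for levels in product(*factors.values())]

print(len(full_factorial))  # 27
print(full_factorial[0])    # {'temperature_C': 25, 'inducer_mM': 0.1, 'media': 'LB'}
```

Now imagine hand-managing the plate maps, reagent volumes, and automation scripts for 27 conditions instead of 7, and the inertia is easy to understand.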

    To be clear, the problem here is not the scientist. Far from it. The problem is how, in a world of “lab digitalization,” the experiment itself remains so stubbornly analog. If I change one or more factors in my experiment, why can’t my scripts, volumes, concentrations, and experiment design all update automatically in a way that everyone else understands without the need for an email or a meeting?

    Why are we, as scientists who theoretically have access to incredible computing power, shouldering the kinds of burdens that other industries were able to offload long ago?

    What if experiments had a "digital model"?

    The way our industry currently tries to deal with these problems is by cramming more features into existing software. There are roughly four different types of “lab software” on the market today:

    • Electronic Lab Notebooks (ELNs)
    • Laboratory Information Management Systems (LIMS)
    • Lab data management systems (LDMS)
    • Lab workflow/automation software

    This software was originally created to fix discrete, individual issues in the lab itself but is now subject to “feature creep.” Is a LIMS still only for managing samples? Not so much. But then again, what other option is there? We’ve written at length about how old lab software can’t solve new problems.

    It’s clear to me that we need a new approach.

    There is one: a fully digital model of our experiments, which would cut across much of the functionality provided by current lab software, but do it in a way that is directly connected to the experiment itself. 

    • Instead of remembering that the automation script needs changing, the script changes itself automatically, under the hood
    • Instead of wondering how to make the experiment work better on a new machine, I just select the new machine with a drop-down and simulate it
    • Instead of calculating new reagent volumes, it’s already done for me
    • Instead of worrying about data, the digital experiment aggregates and structures my data in the way I defined while planning
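What might that look like in practice? Here’s a toy sketch of the underlying idea, with hypothetical names throughout (this is not any vendor’s API): the scientist edits only the design parameters, and every downstream artifact, from the plate count to a liquid-handler worklist, is a view derived from one model.

```python
from dataclasses import dataclass
import math

@dataclass
class Experiment:
    """A toy digital experiment model: parameters in, derived views out."""
    samples: int
    replicates: int
    wells_per_plate: int = 96  # illustrative assumption

    @property
    def wells(self) -> int:
        return self.samples * self.replicates

    @property
    def plates(self) -> int:
        return math.ceil(self.wells / self.wells_per_plate)

    def worklist(self) -> list[str]:
        # Stand-in for regenerating an automation script: one transfer
        # per well, recomputed from scratch whenever the design changes.
        layout = [s for s in range(1, self.samples + 1)
                  for _ in range(self.replicates)]
        return [f"transfer sample {s} -> plate {i // self.wells_per_plate + 1}"
                for i, s in enumerate(layout)]

exp = Experiment(samples=10, replicates=2)
exp.samples, exp.replicates = 30, 4      # the only edit the scientist makes
print(exp.plates, len(exp.worklist()))   # 2 120: everything downstream updated
```

A real system would also need to model controls, reagents, metadata, and hardware quirks, but the design principle is the same: one source of truth, many derived views, and no chain of events to micromanage.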

    At its core, a digital model of an experiment would be a powerful, unified blueprint. For scientists, technicians, engineers, data scientists, bioinformaticians, and leadership, it would be a shared model and a shared way to run experiments that we used to think were impossible.

    For the individual scientist, it would mean less time spent working on logistical and administrative tasks, and more time working with the science itself. It would also mean access to more powerful experimentation, including automation-without-code and design of experiments methodologies. For leadership, it would mean faster, richer, and more reliable data delivered in a way that would greatly improve time to insight. Not only that, but higher quality IP, better protected from institutional knowledge loss, and generated at a much higher rate of productivity. 

    The benefits of a digital experiment model are clear. If we remain in a world of mostly “analog” experimentation, relying on mental models of our experiments instead of digital ones, then progress in the biosciences will continue to be needlessly arduous.

    Markus Gershater, PhD

    Markus is a co-founder of Synthace and one of the UK’s leading visionaries for how we, as a society, can do better biology. Originally establishing Synthace as a synthetic biology company, he was struck with the conviction that so much potential progress is held back by tedious, one-dimensional, error-prone, manual...
