    September 23, 2019

    New Analytics are Enabling Greater Insights Into Cell & Gene Bioprocesses

New analytical techniques are enabling novel insights into the development of cell and gene therapies. But what are the data implications of these techniques?

Last week, Synthace attended the Cell & Gene Therapy Bioprocessing & Commercialisation conference in Boston, MA. The conference provided a window into the state of the art, with the discussion seemingly moving on from the now widely appreciated ‘COGS’ crisis to the appraisal of innovative technologies within both R&D and manufacturing.

In this blog post, Synthace’s corporate strategy manager Dr Peter Crane details three analytics trends to watch, before highlighting some of the resulting informatics challenges facing the field.

    A window to the bioprocess: From discrete at-line to continuous in-line analytics

Process analytical technologies (PAT) such as in-line real-time sensing have the potential to enable proactive bioprocess decision-making as part of a Quality by Design (QbD) framework. QbD is an approach that uses techniques like Design of Experiments (DoE) to improve the robustness of the manufacturing process.
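For a flavor of what DoE involves in practice, here is a minimal Python sketch: it enumerates a two-level full-factorial design over three hypothetical bioprocess factors. The factor names and levels are illustrative, not drawn from any particular process.

```python
# Minimal sketch: a two-level full-factorial DoE design over three
# hypothetical bioprocess factors. Names and levels are illustrative.
from itertools import product

factors = {
    "temperature_C": [35.0, 37.0],
    "pH": [6.9, 7.2],
    "feed_rate_mL_h": [5.0, 10.0],
}

# Every combination of factor levels is one experimental run (2^3 = 8 runs)
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {run}")
```

Real DoE campaigns typically go further (fractional factorials, response-surface designs, replication), but the principle is the same: vary factors together, systematically, rather than one at a time.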

Many companies already implement some in-line measurements in their bioprocesses (e.g. temperature probes), but newer techniques are rapidly being developed that allow metabolites to be interrogated without the need for time-consuming at-line assays.

    One good example of this is Raman spectroscopy. In work published by the Cell & Gene Therapy Catapult in London, it was shown that raw Raman spectra could be processed, analyzed, and compared to LC-MS data sets, allowing the use of Raman as an in-line optical ‘soft sensor’ for key metabolites.
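To make the ‘soft sensor’ idea concrete, here is an illustrative Python sketch: a partial least squares (PLS) regression is calibrated to map Raman spectra onto reference metabolite concentrations, such as those measured off-line by LC-MS. PLS is a common chemometric choice for this, though not necessarily the exact method used in the Catapult work, and the data below are entirely synthetic.

```python
# Illustrative sketch of calibrating a Raman 'soft sensor': fit a PLS
# model mapping spectra to reference metabolite concentrations (e.g.
# from off-line LC-MS). Synthetic data; PLS is a common chemometric
# choice, not necessarily the method used in the cited study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 800

X = rng.normal(size=(n_samples, n_wavenumbers))   # synthetic Raman spectra
true_loadings = rng.normal(size=n_wavenumbers)
y = X @ true_loadings + rng.normal(scale=0.1, size=n_samples)  # e.g. glucose, g/L

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)
print(f"Held-out R^2: {pls.score(X_test, y_test):.3f}")
```

Once calibrated against the reference method, a model like this can predict metabolite concentrations from each new spectrum in real time, which is what turns the probe into a ‘soft sensor’.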

    These continuous in-line data sets are particularly valuable when integrated into an overarching bioprocessing data ecosystem that allows for sophisticated statistical analysis and modeling.

Aside from the hardware innovation itself, the key limiting step becomes access to high-quality, contextualized, and structured data from a range of different sources with which to build the statistical models needed to truly leverage the power of in-line process monitoring.

At Synthace, we’ve worked with the Cell & Gene Therapy Catapult (Figure 1) on integrating flexible physical and data automation, helping to connect in-line and at-line analytical data streams with automation of the assays themselves; in essence, tackling this data challenge at the source: the lab.

    From biochemical to analytical chemistry-based methods

Cell and gene therapies often have multiple mechanisms of action (MoAs). Demonstrating the potency of these MoAs is a key requirement for cGMP product characterization, and also for dosing into the patient. Robust ‘panels’ of predictive in-vitro potency assays developed early (perhaps even in R&D) would provide tremendous benefit to the field, but are difficult to achieve in practice.

Common examples of potency assays include: the cell cytotoxicity assay for T-cell therapies; flow cytometry assays for surrogate markers of clinical efficacy such as CD8+/CD4+; and IFN-γ ELISA for proinflammatory cytokine production.

Companies are often seeking to triangulate potency by correlating several of the above assays rather than relying on one method alone. Rapidly aggregating these data across multiple analytical steps naturally introduces data management challenges, which are compounded by next-generation techniques that generate larger volumes of higher-content data and, in autologous products, by variable donor materials.
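As a hypothetical illustration of such triangulation, the sketch below checks the agreement between three potency readouts across a handful of product lots. The assay columns, lot IDs, and values are all invented for illustration.

```python
# Hypothetical sketch: triangulating potency by checking agreement
# between orthogonal assay readouts across product lots. All assay
# names and values are invented for illustration.
import pandas as pd

lots = pd.DataFrame({
    "lot_id":           ["A1", "A2", "A3", "A4", "A5"],
    "cytotoxicity_pct": [72.0, 65.0, 80.0, 58.0, 75.0],
    "cd8_cd4_ratio":    [1.8, 1.5, 2.1, 1.2, 1.9],
    "ifn_gamma_pg_ml":  [950, 720, 1100, 600, 990],
})

# Pairwise rank correlation across the three potency readouts: strong
# agreement supports using them together as a potency 'panel'.
print(lots.drop(columns="lot_id").corr(method="spearman"))
```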

One method that is growing in popularity for the cytokine assay is mass spectrometry. Its advantages are sensitivity, small required sample volumes, and rich data. Its challenges lie in selectivity, the initial capital expenditure, and the operational difficulty of running and maintaining the equipment (often requiring a dedicated employee). As such, some vendors are developing intermediate solutions such as the Thermo MSIA™ pipette tips (target-coated affinity tips), which allow the sample preparation to be automated (on a liquid-handling robot) and also improve selectivity.

Mass spec data is also very high-content, often running into the gigabytes for LC-MS/MS, and analysis can be complex. As companies begin to adopt this method, we expect the data challenges outlined above to quickly become critical to the successful deployment of the technology.

    From aggregate to single cell measurements

Single-cell methods such as single-cell RNA sequencing (scRNA-seq) allow researchers to investigate gene expression through the lifecycle of the bioprocess. This is particularly useful in allogeneic cell therapy (e.g. iPSCs), where the aim during bioprocessing is often both to expand the cell population and to drive the cells towards a particular final cell type. Such work was presented at the conference, where BlueRock Therapeutics highlighted their combination of scRNA-seq with machine learning to investigate gene expression over the course of a bioprocess.
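For a sense of what this kind of analysis can look like (to be clear, this is not BlueRock’s actual pipeline), the sketch below clusters a synthetic cell-by-gene expression matrix to surface candidate subpopulations at a single bioprocess timepoint.

```python
# Illustrative sketch (not BlueRock's actual pipeline): unsupervised
# clustering of a synthetic cell-by-gene count matrix to identify
# candidate subpopulations at one bioprocess timepoint.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_cells, n_genes = 500, 2000
counts = rng.poisson(lam=1.0, size=(n_cells, n_genes)).astype(float)

# Standard preprocessing: library-size normalization, then log-transform
counts = counts / counts.sum(axis=1, keepdims=True) * 1e4
log_expr = np.log1p(counts)

# Reduce dimensionality, then cluster cells into candidate populations
embedding = PCA(n_components=20, random_state=1).fit_transform(log_expr)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(embedding)

for k in range(4):
    print(f"Cluster {k}: {np.sum(labels == k)} cells")
```

Repeating an analysis like this across timepoints is what lets a team watch subpopulations emerge, expand, or disappear as the bioprocess runs.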

Whilst extremely exciting for the allogeneic space, these methods again raise questions about how they can be applied within the latter stages of the manufacturing process. Perhaps their greatest use will be in enabling early bioprocessing/R&D teams to understand their processes at a granular level, identifying and isolating increasingly precisely defined cell populations, leading to a more robust and homogeneous product.

Whether these methods can be effectively integrated into a bioprocessing workflow remains to be seen, but they will certainly provide interesting ‘big yet small’ datasets for correlation with aggregate measures (and, in the case of scRNA-seq, with the ‘phenotype’).

    Data Management is fast becoming “mission critical”

Whilst each of these methods provides a new window into our bioprocesses, they will also increase the volume of data we are expected to handle: more data points (single-cell vs. population measurements), more time points (continuous vs. discrete data), or larger, higher-content files (e.g. mass spectrometry).

This all needs to be collected, structured, contextualized (e.g. to a donor) and then analyzed, a task that is quickly going to become prohibitive using current approaches (Figure 2).
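As a minimal sketch of what ‘contextualized’ can mean in practice (the field names below are illustrative, not an actual Synthace schema), each measurement should carry its provenance so that downstream models can join data across sources:

```python
# Minimal sketch of a contextualized measurement record: every data
# point carries its provenance (donor, run, instrument, timepoint) so
# downstream analysis can join across sources. Field names are
# illustrative, not an actual Synthace schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Measurement:
    donor_id: str        # links back to the donor material
    run_id: str          # bioprocess run / batch
    instrument: str      # e.g. "raman-01", "lcms-02"
    analyte: str         # e.g. "glucose"
    value: float
    unit: str
    timestamp: datetime

m = Measurement("D-0042", "RUN-17", "raman-01", "glucose",
                2.3, "g/L", datetime(2019, 9, 23, 14, 5))
print(m)
```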

At Synthace, we are already working with leading companies in the cell and gene therapy industry to develop flexible data and physical automation infrastructure that supports this new generation of analytical methods.

    This means no more data wrangling and instead deeper process insights, faster.
