    September 17, 2018

    Synthace Expert Interviews: Cell and Gene Therapies with Dr Damian Marshall (part 2)

    How does technology adoption differ between autologous and allogeneic therapies?

    This is a good question as the technologies used can be quite different. Autologous therapies are highly personalised medicines and their manufacturing process usually produces a single dose of a product. As a result, they often use small-volume technologies during manufacturing. For example, in the autologous T-cell immunotherapy space, a range of technologies is used: from simple culture bags that were adapted from systems used in the blood industry, through to automated platforms such as rocking-motion bioreactors and the CliniMACS Prodigy® system from Miltenyi. Conversely, allogeneic therapies are less personalised and are usually produced in large volumes, with a single manufacturing run producing hundreds or even thousands of doses of a product. As a result, technologies such as single-use stirred tank bioreactors, which are routinely used for large-scale manufacture of biopharmaceuticals, are often adopted. That said, the landscape is changing and we are seeing more cell-therapy-specific technologies being developed for large-scale manufacture. For example, the Vertical-Wheel™ bioreactor from PBS Biotech is designed to be a low-shear system that is scalable from 100 mL through to 500 L.

    When it comes to technology integration, irrespective of whether you are making an autologous or allogeneic therapy, it has to improve the manufacturing process and have a tangible benefit on the overall cost of goods (CoGs). If we are going to see cell therapies widely available on the NHS, then price is going to be important. This was emphasised by the recent decision by the UK National Institute for Health and Care Excellence (NICE), which turned down the use of the CAR-T immunotherapy product Yescarta on the NHS, a decision influenced by the ‘cost per patient’.

    With the autologous products, how is technology being employed to accommodate patient variability in the donor materials?

    Quantifying the variability in patient-specific starting material and defining how it impacts upon cell behaviour is a really big challenge, particularly as there are a number of factors to consider. For example, the age of the patient could affect the way cells proliferate during manufacture or could lead to cell exhaustion, which could reduce product efficacy. The transcriptomic and proteomic phenotype of the cells could influence their behaviour during manufacture, while their epigenetic profile could drive changes in gene expression. Add to this the fact that many autologous therapies are targeting relapsed or refractory patients who may have undergone different frontline treatments, and it is quite easy to see how these factors could influence the ability to manufacture products to a consistent quality.

     [Synthace] During development of new therapies, access to diseased patient material is limited, so a lot of process development work is done on healthy donor material. As a result, processes are often optimised using “representative” materials; in addition, clinical trial patient cohorts in this space are quite small.

    Does this present a challenge to implementing a machine learning-based approach where large standardised data sets are required for algorithm training?

    I think that's a really great question. It is possible to generate large data sets as part of a product characterisation strategy using, for example, “omics” technologies or real-time biosensors. These can be really useful during process optimisation, particularly if this is done using Design of Experiments principles. This data could be used for some aspects of machine learning especially if data from several sources can be combined to allow multivariate data modelling.
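
    To make the Design of Experiments idea concrete, the sketch below builds a small two-level full-factorial design for three hypothetical culture factors and fits a simple multivariate linear model of the main effects to the responses. The factor names and response values are illustrative assumptions, not data from any real process.

```python
# Minimal sketch: a two-level full-factorial Design of Experiments (DoE) for
# three hypothetical culture factors, followed by a simple multivariate linear
# fit of the main effects. Factor names and responses are made up for
# illustration only.
import itertools
import numpy as np

factors = ["glucose_gL", "pH_setpoint", "agitation_rpm"]   # hypothetical factors
levels = {"glucose_gL": (2.0, 6.0),                        # (low, high) settings
          "pH_setpoint": (6.9, 7.3),
          "agitation_rpm": (60.0, 120.0)}

# Coded design matrix: every combination of low (-1) and high (+1) levels.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=len(factors))))

# Decode the first run back to real settings, just to show the mapping.
first_run = {f: levels[f][0] if x < 0 else levels[f][1]
             for f, x in zip(factors, design[0])}
print("run 1 settings:", first_run)

# Hypothetical responses, e.g. viable cell density (1e6 cells/mL) per run.
response = np.array([1.2, 1.8, 1.1, 2.4, 1.5, 2.9, 1.4, 3.3])

# Fit response = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares.
X = np.hstack([np.ones((design.shape[0], 1)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

for name, effect in zip(["intercept"] + factors, coef):
    print(f"{name:>14}: {effect:+.3f}")
```

    A design like this is what makes the resulting data set amenable to multivariate modelling: every factor is varied systematically rather than one at a time, so the individual effects can be separated when the model is fitted.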

    During product manufacture things can be slightly different, as there is always a trade-off between how much data you can generate versus how much material you can actually spare for analysis. At the end of the day, when you're making a product to treat a patient, the patient always has to come first. The key is looking for techniques that can give you meaningful data from a minimal sample size, or techniques which don’t need to measure the cells directly. For example, metabolomics can be performed on the cell culture media to provide a detailed fingerprint of the biological function of the cells based on the way they are consuming nutrients and producing metabolic byproducts. Once you have the data it then comes down to how you use it, and this is where some AI techniques such as artificial neural networks and machine learning could be applied.
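
    As a minimal illustration of how such a metabolic fingerprint could feed a machine learning model, the sketch below trains a small neural network to relate spent-media metabolite profiles to a pass/fail quality label. All of the data here are synthetic placeholders generated by a made-up rule; a real application would need genuine, well-curated samples.

```python
# Minimal sketch: relating spent-media metabolite fingerprints to a product
# quality label with a small neural network. The data are synthetic
# placeholders; the labelling rule below simply stands in for a real
# potency/quality outcome.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [glucose consumed, lactate produced, glutamine consumed, ammonia produced]
n_samples = 200
fingerprints = rng.normal(size=(n_samples, 4))

# Synthetic stand-in rule: "pass" when glucose consumption exceeds lactate production.
labels = (fingerprints[:, 0] - fingerprints[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    fingerprints, labels, test_size=0.25, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```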

    There may be more opportunities for applying AI to support large-scale bioprocesses such as those used for viral vector manufacture. There was some interesting data recently presented by Amgen showing that a single biopharmaceutical manufacturing campaign produces over 500 million continuous data points. Given the overlap in approaches between viral vector manufacture and biopharmaceutical production, it may not be too long before we see greater integration of in-process analytics and the generation of much larger data sets. I think then we will also start to see an increase in the application of AI.

    What are CQAs and why are they crucially important to the CGT space?

    CQAs are Critical Quality Attributes. These are the physical, chemical or biological properties of the product that should be maintained within an appropriate range in order to ensure the quality of the final product. CQAs should be measurable, preferably using assays that give a quantitative readout and which are amenable to validation. Some CQAs form part of the product release tests, such as those for potency, purity and cell viability. Other CQAs may be measured as part of a product characterisation or in-process control strategy.
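
    In software terms, a CQA can be thought of as a named, measurable attribute with an acceptance range. The sketch below is a minimal illustration of checking batch results against such ranges; the attribute names, units and limits are assumptions for illustration, not a real release specification.

```python
# Minimal sketch: representing CQAs as measurable attributes with acceptance
# ranges and checking a batch against them. Names, units and limits are
# illustrative assumptions, not a real product specification.
from dataclasses import dataclass

@dataclass
class CQA:
    name: str
    unit: str
    low: float
    high: float

    def within_range(self, value: float) -> bool:
        # A CQA is satisfied when the measured value sits inside its range.
        return self.low <= value <= self.high

release_cqas = [
    CQA("cell_viability", "%", 80.0, 100.0),
    CQA("potency", "% of reference", 70.0, 130.0),
    CQA("endotoxin", "EU/mL", 0.0, 0.5),
]

batch_results = {"cell_viability": 92.5, "potency": 104.0, "endotoxin": 0.2}

for cqa in release_cqas:
    value = batch_results[cqa.name]
    verdict = "PASS" if cqa.within_range(value) else "FAIL"
    print(f"{cqa.name}: {value} {cqa.unit} -> {verdict}")
```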

    Identifying appropriate CQAs and understanding their impact on quality can be challenging, but with enough product knowledge CQAs can be defined using a systematic approach. For example, if the target product profile for a therapy has been established, it should be possible to identify the quality characteristics that are most likely to relate to product safety; this is known as the quality target product profile, or QTPP. Based on the QTPP it's then possible to identify the process parameters whose variability will impact most upon product quality. This, in turn, helps define the CQAs which need to be monitored in order to control process variability and ensure the quality of the product following manufacture. This process is likely to be dynamic, with further refinement of the CQAs as product knowledge increases following clinical evaluation.
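
    The systematic step from QTPP to CQAs is often handled as a risk assessment. The sketch below shows one simplified way this could be scored, ranking hypothetical process parameters by the product of their impact on quality and their likelihood of variability; the parameters, scores and threshold are all illustrative assumptions, not values from any real assessment.

```python
# Minimal sketch of the systematic step from QTPP to CQAs: rank process
# parameters by a simple risk score (impact on quality x likelihood of
# variability). Parameters, scores and the threshold are hypothetical.
parameters = {
    # name: (impact on product quality 1-5, likelihood of variability 1-5)
    "culture_duration": (5, 4),
    "feed_rate": (4, 3),
    "seeding_density": (3, 4),
    "incubator_humidity": (2, 1),
}

risk = {name: impact * likelihood
        for name, (impact, likelihood) in parameters.items()}

# Parameters scoring above the threshold get CQAs monitored around them.
THRESHOLD = 10
for name, score in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    action = "monitor via CQA" if score >= THRESHOLD else "routine control"
    print(f"{name:>20}: risk {score:2d} -> {action}")
```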

    How much crossover is there from biologics into this space in regard to Process Analytical Technology (PAT) to characterise CQAs?

    Process Analytical Technology, or PAT, is a very exciting area and one that I really think can have a big impact on cell and gene therapy manufacturing. PAT is a framework established by the FDA in 2004 for: “designing, analysing, and controlling manufacturing through timely measurements of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality”.

    The aim of PAT is to obtain better process control by identifying and managing sources of variability, reducing cost by optimising the use of raw materials and minimising product cycle times through the use of measurements that are:

    • In-line – measurements are made directly within the process stream
    • On-line – a sample is removed, analysed and returned to the process stream
    • At-line – a sample is removed and analysed close to the process stream

    PAT has been successfully applied to small-molecule manufacture for over a decade and is increasingly used to support biopharmaceutical production. However, there are many challenges to applying PAT to support the manufacture of cell therapy products, not least of which, as we have just discussed, is how to define suitable CQAs.

    Using PAT to control cell therapy processing also requires an understanding of the relationships between product variables (cell starting material, metabolic profile, apoptosis, etc.), raw material variables (nutrients, growth factors, extracellular matrix substrates) and process variables (medium perfusion or exchange rate, feeding regime, stirring speed, pH, dO2, etc.), to ensure that product quality is maximised and CQAs are maintained throughout the process. We also have the challenge of selecting appropriate technologies that can make robust measurements in a complex culture environment without interference from other components, as well as providing data in a timeframe sufficient to allow proactive decision making.
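
    As a simple illustration of the kind of proactive decision making PAT enables, the sketch below applies a feedback rule: an in-line lactate reading above a limit triggers an increase in the medium exchange rate. The sensor values, limit and adjustment rule are assumptions chosen for illustration, not a validated control strategy.

```python
# Minimal sketch of a PAT-style feedback rule: an in-line lactate measurement
# drives a proactive adjustment of the medium exchange rate. All values,
# limits and the rule itself are illustrative assumptions.
from enum import Enum

class Mode(Enum):
    IN_LINE = "measured directly in the process stream"
    ON_LINE = "sample diverted, analysed, returned"
    AT_LINE = "sample removed, analysed nearby"

LACTATE_HIGH_MM = 20.0   # hypothetical upper limit linked to cell quality
EXCHANGE_STEP = 0.1      # fractional increase in medium exchange per trigger

def adjust_exchange_rate(lactate_mm: float, current_rate: float) -> float:
    """Raise the medium exchange rate while the in-line lactate reading
    is above its limit; otherwise keep the current rate."""
    if lactate_mm > LACTATE_HIGH_MM:
        return min(current_rate + EXCHANGE_STEP, 1.0)  # cap at full exchange
    return current_rate

rate = 0.3
for reading in [14.0, 18.5, 21.2, 23.0, 19.1]:  # simulated in-line readings (mM)
    rate = adjust_exchange_rate(reading, rate)
    print(f"{Mode.IN_LINE.name} lactate {reading:4.1f} mM -> exchange rate {rate:.1f}")
```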

    Sample availability and access to the culture system must also be considered when developing a PAT strategy. Unlike large-scale biopharmaceutical production, where the majority of cell culture operations involve the use of stirred tank bioreactors, cell therapies are produced using a wide variety of culture systems. This can limit the ability to implement certain technologies or even restrict analysis to at-line measurement.

    Dr Damian Marshall

    Damian Marshall is the director of new and enabling technologies at the Cell and Gene Therapy Catapult and has almost 20 years of industrial experience gained working for SMEs and large companies. He is responsible for providing vision, expertise and leadership to a team of ~60 scientists working with a wide range of cell and gene therapy developers. Together they are addressing some of the fundamental challenges in the field, developing novel cell and gene therapy manufacturing processes and implementing technologies for advanced product characterisation.

    Tag(s): Applications
