Back in the early autumn of 2018, I was fortunate to attend both the ELRIG Drug Discovery 2018 conference and the Endpoints News ‘Pharma ROI’ summit. At both events, the theme of tangible outcomes from ‘digital transformation’ loomed large in the minds of attendees. The ELRIG conference focused on the early stages of the biopharmaceutical value chain (Hit Discovery/Hit-to-Lead), with the modalities discussed being predominantly small molecules. In comparison, the Endpoints summit was a high-level thematic event that touched upon the major forces impacting the productivity of the biopharmaceutical industry.
ELRIG: Towards a Lab of the Future for Small Molecule Drug Discovery?
At ELRIG, a variety of use cases were presented showing how digital technologies are transforming small molecule drug discovery. The general feeling within the synthetic chemistry community was that it is behind the curve on digital compared with colleagues in the biological sciences. As a chemist who moved to biology, I can attest that this is true to an extent, but there is perhaps more subtlety to the statement.
Within the design and build aspects of lab research, biology is very different from synthetic organic chemistry. Biology is intrinsically template-driven and often utilizes endogenous cellular machinery or optimized natural enzymes to construct complex biopolymers. This lends itself well to a ‘programming’ or engineering-biology approach, and as such numerous Bio-CAD programs have appeared, albeit often allied to reagent providers. Likewise, DNA foundries have emerged which, thanks to the robustness of many biological processes, allow for the high-throughput generation of libraries of biological constructs. In contrast, synthetic organic chemistry, the predominant technique used in small molecule drug discovery, is much more artisanal; attempts to move it to higher-throughput build cycles (e.g. through combinatorial chemistry) have often just restricted the chemical space that can be explored. That said, design tools in this space are gathering pace: the Merck KGaA acquisition of Chematica and work by BenevolentAI are two examples where machine learning, literature contextualization, and retrosynthesis are converging to augment the capabilities of small molecule synthetic chemists. Outside of screening, there are also more use cases for automation in modern biology, with only limited examples of scalable automation in synthetic chemistry (e.g. flow chemistry systems).
During the ELRIG conference, and in particular during the SiLA standards workshop, numerous examples were given of how small molecule researchers are starting to think about ‘closing the loop’: linking the Design, Build, Test, and Analyse stages into a continuous cycle. Garry Pairaudeau of AstraZeneca described the challenges and outcomes of creating an in-house closed-loop small molecule synthesis and testing platform. Some of the challenges they encountered were surprising, especially given the wider industry interest in integrating digital and physical tools. Notably, they found that more modern equipment models were actually harder to integrate. To get around this, AstraZeneca sourced "equipment from the basement", which they connected together in-house to create a closed-loop automation system for small molecule drug discovery/organic chemistry. The enterprise customer pull for vendor-to-vendor integration was mentioned in other talks too, and is a factor we see as a clear driver for the adoption of vendor-agnostic automation and integration platforms, such as the Synthace platform, across the biopharmaceutical industry.
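For readers less familiar with the idea, the Design-Build-Test-Analyse cycle can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the concept only; the candidate parameters, scoring function, and random-search strategy are invented for the example and do not represent AstraZeneca's platform or any real assay.

```python
import random

def design(history, n=8):
    """Design: propose candidate conditions (here, naive random search)."""
    return [{"temp": random.uniform(20, 40), "conc": random.uniform(0.1, 1.0)}
            for _ in range(n)]

def build_and_test(candidate):
    """Build + Test: stand-in for automated synthesis and assay.
    Returns a mock score that peaks at temp=30, conc=0.5."""
    return -abs(candidate["temp"] - 30) - abs(candidate["conc"] - 0.5)

def analyse(results):
    """Analyse: pick the best (conditions, score) pair seen so far."""
    return max(results, key=lambda r: r[1])

history = []
for cycle in range(5):          # each pass is one turn of the closed loop
    candidates = design(history)
    results = [(c, build_and_test(c)) for c in candidates]
    history.extend(results)
    best = analyse(history)

print("best conditions:", best[0])
```

In a real platform, `design` would be a model or DoE engine informed by `history`, and `build_and_test` would drive physical automation; the loop structure, however, is the same.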
Something else that struck home was that many in the wider tech community think of the ‘Smart Lab’ or ‘Lab of the Future’ as an all-encompassing panacea for R&D productivity woes but, in reality, it is often the little things, compounded and aggregated, that deliver real step changes in productivity. Good examples given during ELRIG included the introduction of the 1536-well plate, which increased throughput, and the ability to submit ‘dirty’, un-purified compounds to early-stage biochemical screening platforms. Many delegates from screening groups were particularly excited about the potential of acoustic liquid handling, such as the Labcyte Echo, as it can dramatically reduce the volumes needed and thus increase throughput.
This focus on the ‘real value blockers’ resonated with what we do at Synthace, where we focus on the actual productivity challenges in accelerating R&D, e.g. setting up a complex Design of Experiments. My only criticism of the ELRIG event would be the relentless focus on the very beginning of the biopharmaceutical value chain. Whilst there is huge value to be realised from further optimizing target identification, hit identification, and lead generation, there also needs to be a focus on fixing the traditionally slower (and more expensive) parts of the development process, such as CMC (Chemistry, Manufacturing & Controls). Again, this is an area where Synthace operates, albeit not yet in small molecules!
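To make the Design of Experiments point concrete, here is a minimal sketch of what enumerating a simple full-factorial design looks like. The factors and levels are entirely hypothetical and chosen for illustration; they are not drawn from any Synthace workflow.

```python
from itertools import product

# Hypothetical factors and levels for a small bioprocess DoE.
factors = {
    "temperature_C": [25, 30, 37],
    "pH": [6.8, 7.2],
    "inducer_mM": [0.1, 0.5, 1.0],
}

# Full-factorial design: every combination of levels (3 * 2 * 3 = 18 runs).
names = list(factors)
runs = [dict(zip(names, levels)) for levels in product(*factors.values())]

print(len(runs), "runs")   # 18 runs
print(runs[0])             # {'temperature_C': 25, 'pH': 6.8, 'inducer_mM': 0.1}
```

Even this toy case shows why tooling matters: translating 18 runs into accurate liquid-handling instructions by hand is tedious and error-prone, and real designs are far larger.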
The View from the Top: Endpoints ROI Summit
In contrast to ELRIG, the Endpoints summit in London focused far more on the industry-wide challenges of digital tools. Mene Pangalos of AstraZeneca highlighted how moving towards a company-wide, data-driven culture, as exemplified by the 5R framework, lifted the proportion of projects progressing from candidate selection to Phase III from 4% to 19% in six years. The 5R framework means having the right target, tissue, patient, safety, and commercial opportunity. One challenge that AstraZeneca faced was the need to break down barriers, often data siloes, across the organization. These siloes may have arisen from the use of internal and external parties (e.g. CDMOs), different IT systems in different areas, and the backward incompatibility of many IT systems. Other challenges that panelists highlighted were finding and retaining data science talent in the biopharmaceutical industry, the overhyping of AI/ML, and the need for increased usage of industry-wide standards. Looking to the future, one audience member commented that it would be nice to see ‘tech’ focussed more on the difficult parts of the biopharmaceutical R&D value chain, such as the use of real-world data in trials (potentially allowing Phase III to be skipped entirely), or in process development/CMC.
Despite the two events being very different, there is a growing consensus across the industry, at both the lab and executive levels, around the tangible benefits of digital technologies such as AI and automation. If I had to say where the next value inflection for the industry will be, I would suggest:
“By focussing on closing the loop in the difficult bits – process dev., tech transfer to manufacturing and in real-world data”.
It is interesting to note that in areas like cell and gene therapy, this is exactly where significant investment is currently being deployed. Without tackling these areas, we will simply move bottlenecks to different parts of the biopharmaceutical value chain and will not see the true industry-wide productivity gains that we need.
For those trying to appraise digital tools in what is becoming a noisy ecosystem, I would suggest evaluating whether a digital tool solves real lab research problems at the unit-operation level before aggregating these unit operations into end-to-end workflows.
Fortunately, at Synthace our platform was designed by our biologists to deliver significant unit value to researchers in the difficult-to-do parts of the biopharma value chain, such as bioprocessing. Our customers are already beginning to see enhanced value from integrating and aggregating our software across their organizations; it was nice to see the wider industry also beginning to recognize this trend at these two events.