Pradeep Ravindra on Real Human Data, Putting a Pin in Compliance, and Digital Twins
A conversation with Pradeep Ravindra
We are thrilled to have recently launched our new podcast ‘Data in Biotech’ with some of the leading thinkers in the field giving their take on industry challenges and trends. To accompany each podcast, this blog series will pick out the highlights of each episode and cover additional topics using the podcast as a springboard to continue the conversation.
The Interview
Guest Profile
Our first guest is Pradeep Ravindra, Associate Director of Data Analytics Manufacturing at Lyell Immunopharma, where he is responsible for managing the development of data solutions for CAR-T, TCR, and TIL therapies. Transitioning from military intelligence to life science analytics, his journey exemplifies the versatility of data science. After leaving the US military, he worked for a range of organizations, including Celgene and the Memorial Sloan Kettering Cancer Center, forging a career focused on using data to further oncological research. In his current role, he uses the geospatial visualization and data analysis skills honed in the military to pursue his passion in oncology and personalized medicine.
The Highlights
Our conversation with Pradeep covered a wide range of areas and is definitely worth a listen. However, we’ve picked out some of the highlights below:
Dealing with the complexity of personalization in CAR-T (3:21): The courage of patients volunteering for experimental treatments heightens the need to make the resulting product as good as possible to give the patient the best chance of success. However, working with real patient data makes it much more challenging to control variables. Therefore, balancing cell quality, dosage, and delivery timing, all while accounting for variable patient material and process outcomes, is a hugely complicated problem.
Solution first, compliance second in data platform development (15:30): Understanding problems like this becomes even more challenging as research companies feel limited by a need to constrain data exploration and data platform development activities to those explicitly approved on the manufacturing floor. Pradeep suggests that there is a better way of working that will accelerate innovation in biotech manufacturing. Rather than fixating from the outset on how to maintain compliance in manufacturing, “start over from scratch with a blank canvas and say, hey, let's focus on the solution, let’s focus on solving the problem first.” This involves getting the data out of your manufacturing environment and into a sandbox environment (i.e. the cloud). Once the data has been liberated from the compliance constraints of a manufacturing environment, solutions can be developed, tested, and improved. A great solution can always be made to work in a compliant way, but compliance shouldn’t stifle the innovation required to find that great solution in the first place.
A well-defined semantic layer (22:24): A successful data project, in an environment where knowledge of data structure and applicability to biological, manufacturing, and business outcomes is distributed across various domain experts, needs a semantic layer that is developed iteratively with the most critical business goals in mind. The semantic layer is a bridge between the way machines and instruments understand data and how different groups of people understand it. Pradeep discusses the need to define a semantic layer that understands what the business needs from the data and delivers the desired outcomes at an early stage with appropriate biological context. This involves serving as the knowledge broker between groups of domain experts and building the semantic layer to translate between those groups.
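To make the idea of a semantic layer concrete, here is a minimal sketch of the translation role it plays: renaming raw instrument fields into the terms domain experts actually use. All field names and labels below are hypothetical examples, not an actual manufacturing schema.

```python
# Hypothetical sketch of a semantic layer: a mapping from raw
# instrument field names to business-friendly terms with context.
SEMANTIC_MAP = {
    "bioreactor_temp_c": "Culture Temperature (°C)",
    "viable_cell_density": "Viable Cell Density (cells/mL)",
    "run_elapsed_hrs": "Expansion Time (hours)",
}

def translate_record(raw_record: dict) -> dict:
    """Rename raw instrument fields to the terms domain experts use.

    Unknown fields pass through unchanged so nothing is silently lost.
    """
    return {SEMANTIC_MAP.get(k, k): v for k, v in raw_record.items()}

raw = {"bioreactor_temp_c": 37.1, "viable_cell_density": 1.8e6}
print(translate_record(raw))
```

In practice this mapping would also carry units, validation rules, and lineage, but even a simple dictionary like this illustrates how one layer can serve as the broker between how instruments emit data and how scientists and business stakeholders talk about it.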
Becoming a domain expert as a data team (31:25): To be really successful in that knowledge broker and translational role in the data world, you need to truly understand the domain you are working in as well as your analytics craft. Having a deep understanding of what is useful to each stakeholder group takes data analytics tools from being a set of dashboards that need to be actively monitored to a system that automatically provides alerts when action is required. Domain expertise and the ability to converse fluently with your stakeholders helps make great analytics products.
Digital Twins (34:03): When asked which emerging technologies were particularly exciting for the industry, Pradeep again commented that the availability of data limits the development of medical technologies. However, Digital Twins, which involve creating simulations using data and then reproducing the findings in the wet lab, hold huge potential to reduce costs and expedite clinical trials. Whether regulators will be open to the idea remains to be seen, but it holds significant promise, particularly for researching rare diseases.
The conversation with Pradeep was a fascinating one. However, in a 40-minute podcast conversation, it is only possible to scratch the surface of the interesting perspectives our guests bring. So, here we look to cover a little more ground, and a great starting point is Pradeep’s recent blog on Supply Chain Complexity. He starts by looking at the case of Emily Whitehead, whose cancer was cured using CAR T-cell therapy, and then asks the question: how do we scale this treatment?
As Pradeep points out, the supply chain for delivering personalized medicine, like the treatment used for Emily, is exceptionally complex. Managing lead times for drug creation, overseeing transportation and cold chain management, and creating an audit trail to ensure each treatment is matched to the correct patient – all add layers of complexity. The only way to deliver this at scale is with the use of data analytics.
Here is where the topic of a well-defined semantic layer comes into its own. The business needs are very clear: real-time, accurate tracking of each product, and active monitoring of both samples and end products to ensure optimum results. Data can support this in several ways, but three of the key areas are time, accuracy, and automation.
Time: Particularly with CAR T-Cell therapy, time is a critical factor. Whether relating to the lead time of cell collection, the growth time of cells in a bioreactor, or the delivery time of the end product, timing is vital. Data can be used to monitor and attach timestamps to each stage of the process, giving a deeper understanding that allows problems to be identified quickly and allows patients to receive accurate delivery times.
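A minimal sketch of what stage-level timestamping might look like, assuming hypothetical stage names (the real process stages and systems would differ):

```python
from datetime import datetime, timedelta

# Hypothetical sketch: attach a timestamp to each process stage so
# delays can be spotted quickly and delivery estimates kept accurate.
stages: dict[str, datetime] = {}

def record_stage(name: str, when: datetime) -> None:
    """Record the time at which a process stage completed."""
    stages[name] = when

def stage_duration(start: str, end: str) -> timedelta:
    """Elapsed time between two recorded stages."""
    return stages[end] - stages[start]

record_stage("cell_collection", datetime(2024, 1, 3, 9, 0))
record_stage("bioreactor_start", datetime(2024, 1, 3, 15, 30))
record_stage("bioreactor_end", datetime(2024, 1, 10, 15, 30))

print(stage_duration("bioreactor_start", "bioreactor_end"))  # 7 days
```

With timestamps captured this way for every batch, a stage that runs longer than its historical norm can be flagged before it delays the patient's delivery date.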
Accuracy: Personalized medicine is unlike virtually any other discipline – any mix-up between patients is potentially catastrophic. Here, accumulated data points give teams end-to-end lifecycle visibility, and they can be anonymized and aggregated to ensure patient confidentiality in line with HIPAA requirements while preserving the audit trail that is an essential part of meeting regulatory compliance.
Automation: As we discussed in the main podcast, a great analytics tool doesn’t need to be actively monitored; it informs teams when action is needed. Manually monitoring dashboards for individualized drugs when they are being manufactured in high volumes is cost- and time-prohibitive; automation will be essential. For example, a system that alerts when a data point exceeds acceptable thresholds and prompts an action only when needed is the only way pharma companies can deliver personalized medicine at scale.
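The alerting pattern described above can be sketched in a few lines. The metrics and thresholds here are purely illustrative assumptions, not real manufacturing limits:

```python
from typing import Optional

# Hypothetical acceptable ranges for two process metrics.
THRESHOLDS = {
    "culture_temp_c": (36.5, 37.5),
    "ph": (6.8, 7.4),
}

def check_reading(metric: str, value: float) -> Optional[str]:
    """Return an alert message if a reading is out of range, else None.

    Returning None means no human attention is needed - the inverse of
    a dashboard that someone must watch continuously.
    """
    low, high = THRESHOLDS[metric]
    if not (low <= value <= high):
        return f"ALERT: {metric}={value} outside [{low}, {high}]"
    return None

print(check_reading("ph", 7.9))                 # out of range -> alert
print(check_reading("culture_temp_c", 37.0))    # in range -> None
```

The design choice is the point: the system stays silent while readings are normal and prompts an action only on an exception, which is what makes monitoring thousands of individualized batches tractable.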
The advancement of science is at the center of being able to make personalized medicine accessible. However, ultimately, data analytics will be an essential part of delivering and optimizing those advancements at scale.