What’s next for clinical data management?

The life sciences industry has seen its share of headwinds and tailwinds in recent years. Advances in technology and science heightened the industry’s intent to digitally transform and decentralise clinical trials. Clinical data burdens grew as rising data volume outpaced adoption of innovative data management processes. Now, consolidation and economic uncertainty introduce new speed bumps. We’re at an inflection point where clinical operations and data management professionals must grapple with the push and pull of these opposing factors.

Balancing trial complexity with talent scarcity, while delivering faster, more inclusive, patient-centric trials, is no easy task, but there are opportunities to deliver on our shared industry goals. At the core of this shifting environment is the recognition of clinical data as a critical asset and the drive to adopt modern data management approaches.

A recent report by eClinical Solutions explored how leaders are balancing these factors, delving into the strategies and tools enabling progress in the space. Key findings from industry executives included:

Automation is the industry’s top priority

While conversations around decentralised clinical trials (DCTs) and patient-centricity have grown in the past several years, leaders didn’t point to these focus areas even half as much as they did automation. Harnessing automation was ranked the top overarching industry priority by more than one-third (36%) of respondents. This signifies the continued pressure on cycle time reduction, as incorporating automation into clinical trial data management offers the potential to reduce human hours spent on routine tasks and free up time for higher-value work.

For Dele Babalola, senior director of clinical data management at Morphic Therapeutic, the automation focus comes as no surprise: “Automation and AI go hand in hand. They’re not the same, but there is overlap in that one will help the other, and much of the automation will be AI-driven.” In Babalola’s view, automation is a logical first step, and “is probably the biggest piece that will help facilitate the other areas such as DCTs, RBQM (Risk-Based Quality Management), and AI, and […] be the biggest priority for the next couple of years. It’s going to boost efficiency, and how we work will change. We’ll be able to focus our attention on more of the value-add activities. There is a lot of opportunity.”

Meanwhile, for Manny Lazaro, principal at First Tracks Life Sciences Consulting, LLC, the focus on automation is about the need for foundational work: “It’s not surprising automation won out as a top priority. Our industry likes to discuss trends, whether that be patient-centricity, DCT, or AI. But, in a way, it’s a bit like the concept cars or new car of the future that no one is really driving yet. We can agree about what the industry needs to work on, but it’s also important to optimise what needs improvement today and get that right. If we keep moving on to the latest and greatest thing, we will not have established a solid base of systems and processes from which to build, or gained enough traction for it to be impactful and adopted as ‘the standard.’ As an industry, we should ask ourselves what we can do to improve existing platforms and core efficiencies, and then move on to the next thing.”

There’s a substantial demand to unite disparate data

For some time now, clinical researchers have incorporated external data sources to expand datasets and capture the widest picture possible. This existing trend accelerated as the industry took advantage of new innovations in data collection modalities, especially during the pandemic, with 64% of respondents revealing that they now leverage six or more external data sources. Furthermore, more than one-third (36%) of leaders reported that at least half of their collected data was feeding directly into study endpoints. While this is encouraging, it also uncovers a gap – a considerable amount of collected data still does not contribute to endpoints.

Both Lazaro and Babalola emphasised that much of this is study phase dependent. As Babalola noted, “there is a tendency to want to keep a broad view of the data, and hopefully that should narrow as you move into phase three. But it comes at a cost – from data creation to data monitoring, it can spread resources too thin. And it affects quality, not only on the back-end, but also on the front-end, because now you have your sites doing so much more and their attention is somewhat divided.” Adds Lazaro: “We need to get better at this and challenge ourselves by asking, ‘Why are we collecting this?’ If there isn’t a scientific or clinical reason for collecting a certain data field or data domain, then we need to down-scope the data collection.”

The findings underscore the critical need for those working with clinical data to leverage risk-based approaches and technology to streamline review efforts and prioritise the most critical information. Both experts agree that having the right tools to leverage standards and automation and, consequently, advanced capabilities like AI, is essential for tackling data integration challenges while setting companies up to move forward with DCTs and new trial models. This can speed clinical data flows by streamlining data acquisition, mapping, and standardisation, as well as facilitate efficient data collaboration, review, and cleaning, and ultimately enable faster access to data by downstream teams.

Speed and quality are key EDC pain points

Electronic data capture (EDC) databases are part of the digital framework of clinical trials, and without an intentional, well-executed database build, a trial may struggle to meet desired outcomes. While industry experts pointed to several challenges surrounding EDC database build, speed (30%) and quality (30%) were found to be the largest pain points, indicating the necessity of stronger data management strategies. Tactics like quality checkpoints and automation to streamline routine tasks can help address these pain points.

“Database build has come a long way in the past decade,” says Katrina Rice, chief delivery officer of biometrics services at eClinical Solutions. “To advance further, we must leverage technical and therapeutic expertise to optimise the approach for each unique trial, while ensuring builds are flexible enough to evolve as necessary. With just under 60% of leaders sharing that protocol amendments lead to database updates, a high level of agility is crucial. Having the right technology and service solutions in place can address evolving protocols. Starting with the end in mind and ‘future-proofing’ at the point of database build, by anticipating changes, can allow for necessary modifications without compromising quality.”

Given that data is the currency of clinical trials and complexity continues to grow, leaders must deploy solid data management strategies going forward. As Babalola notes, “We need to realise that pharmaceutical and biotech companies are data companies. It makes sense to prioritise your key asset, the clinical data.”

Implemented well, modern data management approaches supported by technology will help teams transform and unite data into information they can readily access. Cycle time reduction can be realised by streamlining review and analysis activities and enabling flexible, risk-based approaches that adapt ahead of possible delays. Finally, investing in technical resources and a data foundation will lay the groundwork for the future state – from realising automation and AI-driven efficiency gains to supporting interoperability for decentralised, diverse, and patient-centric trials.
