A large-scale database collating data on statin effects from over 182,000 people may lead to a better understanding of any side-effects caused by these treatments.
Statin therapy has been proven to effectively lower cholesterol levels and reduce the risk of heart attacks and strokes. It is prescribed to millions of people worldwide, but in recent years some have raised concerns that statins may cause a range of adverse effects (particularly muscle pain or weakness) and increase the risk of certain health conditions, such as diabetes. This has resulted in widespread media coverage and some people declining or stopping statin therapies.
Most of these concerns about adverse effects of statins, however, have not been based on controlled studies where participants were randomly allocated to receive either a statin therapy or a control treatment. This makes it possible that the observed differences in the frequency of adverse effects could have been attributable to factors other than the statin treatment being investigated. The most reliable way to investigate this is to create and analyse a database containing individual participant data from many different large-scale randomised controlled trials (known as a ‘meta-analysis’).
Such a database did not exist, so researchers brought together all relevant data on adverse events from 28 large-scale statin trials into a single dataset containing information on almost 182,000 individual patients. This will allow reliable assessment of all statin effects across many different patient groups. The methods have been published today in Clinical Trials.
The new harmonised dataset was collated over six years by the Cholesterol Treatment Trialists’ (CTT) Collaboration, with the project being led by researchers at Oxford Population Health. The research team are now using the resource to quantify the frequency of muscle events such as pain or weakness, new diagnoses of diabetes, and all other potential effects. The first results are expected later this summer.
Importantly, the analysis was restricted to large-scale trials (at least 1,000 participants) with a median follow-up period of at least two years in which the treatment was ‘double-blind’, i.e. neither the trial participants nor those leading the trial knew who was receiving which treatment. In 19 of the trials, participants were randomly allocated to receive either statin treatment or a placebo, whilst in four trials the comparison was of more intensive versus less intensive statin therapy.
The CTT team collected trial protocols, statistical analysis plans, case report forms, and individual participant datasets from each trial. Since the studies used many different methods to capture and record information, the first challenge was to reorganise the datasets into standard formats, based on the Clinical Data Interchange Standards Consortium (CDISC) Study Data Tabulation Model (SDTM). The team then coded information related to any adverse events that occurred to trial participants into a standard terminology using a globally recognised medical dictionary.
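The harmonisation step described above can be illustrated with a minimal sketch. The trial names, field names, and dictionary entries below are hypothetical, invented for illustration; only the SDTM variable names (STUDYID, USUBJID, AETERM, AEDECOD, AESTDTC) come from the CDISC AE domain standard, and the real CTT pipeline will be far more involved:

```python
# Two hypothetical trials recording the same adverse event with
# different field names and term spellings.
trial_a_record = {"subj": "A-001", "event": "muscle ache", "start": "2003-04-12"}
trial_b_record = {"patient_id": "B-042", "ae_text": "MYALGIA", "onset_date": "2005-11-02"}

# A tiny stand-in for a standardised medical dictionary, mapping
# verbatim terms to a single preferred (coded) term.
DICTIONARY = {
    "muscle ache": "Myalgia",
    "myalgia": "Myalgia",
    "muscle pain": "Myalgia",
}

def to_sdtm_ae(record, field_map, study_id):
    """Reshape one trial-specific record into SDTM-style AE domain variables."""
    verbatim = record[field_map["term"]]
    return {
        "STUDYID": study_id,
        "USUBJID": record[field_map["subject"]],
        "AETERM": verbatim,                                      # term as reported
        "AEDECOD": DICTIONARY.get(verbatim.lower(), verbatim),   # dictionary-coded term
        "AESTDTC": record[field_map["start"]],                   # standardised start date
    }

# Each trial supplies its own mapping from local field names to the standard ones.
harmonised = [
    to_sdtm_ae(trial_a_record,
               {"subject": "subj", "term": "event", "start": "start"}, "TRIAL_A"),
    to_sdtm_ae(trial_b_record,
               {"subject": "patient_id", "term": "ae_text", "start": "onset_date"}, "TRIAL_B"),
]
```

After this reshaping, both records share one schema and one coded term (‘Myalgia’), so the same event can be counted consistently across trials, which is what makes a single analysable dataset possible.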
Dr Christina Reith, Senior Clinical Research Fellow at Oxford Population Health, who is leading this research effort, said: ‘By creating such a rich database, we are now in a unique position to perform analyses to reliably inform doctors, patients, and the public about the safety of statins.
‘This was a substantial undertaking, since harmonising all the data from the included trials presented a significant challenge, due to both the sheer number of datasets and differences between trials in the way data were collected and organised. As an analogy, it was equivalent to being tasked with sorting and organising around 845 filing cabinets of diverse information from the trials, labelled in different ways (or not labelled at all), equating to handling over 38 million records and categorising over 45,000 unique adverse event terms.
‘There has been a lack of published methods for harmonising heterogeneous data from numerous different controlled studies into a single analysable dataset, meaning that researchers have typically had to develop their own bespoke systems, which is time-consuming and challenging. Our approach, based on existing, widely recognised data standards, could offer a solution for research into many other major diseases.’