Modeling causes of death: an integrated approach using CODEm
Published in Population Health Metrics, January 2012
Global health policymakers, advocates, and planners need to know the current magnitude of health problems and trends in these problems in order to best help populations in need. The study “Modeling causes of death: an integrated approach using CODEm” proposes five general principles for cause of death model development, validation, and reporting and details an analytical tool – the Cause of Death Ensemble model (CODEm) – that explores a large number of possible models to estimate trends in causes of death.
Researchers at IHME and the University of Queensland School of Population Health found that CODEm produces better estimates of cause of death trends than previous methods.
Data on causes of death are critical for health decision-making: knowing whether a cause of death is increasing or decreasing in a population is necessary to judge whether current disease control efforts are working or are inadequate. However, for most countries such data are either unavailable or not comparable across countries or regions. While there have been many past efforts to model causes of death, there has been no accepted standard for good modeling practice. The authors propose five principles for cause of death model development, validation, and reporting, as follows:
- Identify all the available data: Most cause of death data are found through national sources or the World Health Organization, and these sources can be supplemented by subnational studies on select causes or age groups from published literature.
- Maximize the comparability and quality of the dataset: In order to ensure all data are comparable and of high quality, researchers need to map across various revisions of the International Classification of Diseases, reclassify deaths assigned “garbage codes” or causes of death that are not the true causes, and correct for the completeness of death registration in vital registration systems that do not capture all deaths.
- Develop a diverse set of plausible models: While good modeling practice should cast a wide net in terms of proposed models, those chosen need to respect known biological or behavioral relationships (e.g., models for lung cancer should consider tobacco consumption). Hundreds or thousands of individual models need to be tested, ranging from simple linear covariate models to sophisticated spatial-temporal models. Then, these individual models are combined to produce robust ensemble models.
- Assess the predictive validity of each plausible individual model and of ensemble models: When data are sparse or missing, out-of-sample predictive validity is the most robust measure of predictive performance. It is tested by refitting a model with some of the data withheld and then checking how well the model predicts the data that were removed.
- Choose the model or ensemble model with the best performance in the out-of-sample predictive validity tests: Choosing the best model requires balancing different performance attributes.
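The out-of-sample testing described in the fourth principle can be sketched in a few lines. The data and the simple linear model below are purely illustrative stand-ins, not the covariates or model families used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical country-year data: one covariate, one mortality outcome.
n = 200
x = rng.uniform(0, 1, n)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, n)

# Knock out roughly 20% of the observations, fit on the rest,
# then score the model on the held-out points it never saw.
holdout = rng.random(n) < 0.2
coef = np.polyfit(x[~holdout], y[~holdout], 1)
pred = np.polyval(coef, x[holdout])

# Out-of-sample root mean square error: lower is better.
rmse = np.sqrt(np.mean((pred - y[holdout]) ** 2))
```

Repeating this knockout over many random holdouts, and comparing models on the resulting errors, is what allows models to be ranked even when the full dataset is sparse.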
The authors use the example of maternal mortality to demonstrate CODEm’s performance compared with single component models. They ran a covariate selection algorithm over 1,984 possible models for mortality rates, or the number of deaths per unit of population (e.g., deaths per 100,000 people), and for cause-specific mortality fractions, or the proportion of deaths due to a particular cause in a population.
They found 98 rate models and 71 cause fraction models that produced plausible, relevant results for maternal mortality. Each of these was tested both as a simple mixed effects model, which uses only covariates and country- and region-specific factors to make predictions, and as a spatial-temporal model, which improves on the mixed effects model by taking into account correlation in cause of death data across time, geography, and age. Each model was then evaluated using out-of-sample predictive validity.
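A minimal illustration of the spatial-temporal idea, on made-up data: first fit a simple covariate-style model, then borrow strength across neighboring years by smoothing its residuals. The paper's actual weights also span geography and age, and its model specifications are richer; this sketch only shows the two-stage structure:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2011)

# Hypothetical log-rate series for one country: a linear trend plus
# a smooth wiggle that a plain covariate model would miss.
true = -5.0 + 0.02 * (years - 1990) + 0.3 * np.sin((years - 1990) / 3.0)
obs = true + rng.normal(0, 0.05, years.size)

# Stage 1: simple linear (covariate-style) fit.
coef = np.polyfit(years, obs, 1)
stage1 = np.polyval(coef, years)

# Stage 2: smooth the stage-1 residuals over time with a tricube-style
# kernel, so observations in nearby years inform each other.
resid = obs - stage1
lam = 5.0  # bandwidth in years (assumed for illustration)
dist = np.abs(years[:, None] - years[None, :]) / lam
w = np.maximum(0.0, 1.0 - dist ** 3) ** 3
smoothed = (w * resid[None, :]).sum(axis=1) / w.sum(axis=1)
stage2 = stage1 + smoothed

# The two-stage fit tracks the data more closely than stage 1 alone.
rmse1 = np.sqrt(np.mean((stage1 - obs) ** 2))
rmse2 = np.sqrt(np.mean((stage2 - obs) ** 2))
```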
The authors created ensemble models by combining the single component models with the best predictive validity, then tested the predictive validity of the ensembles themselves to choose the best one. For maternal mortality, an ensemble model outperformed all single component models on root mean square error, on the frequency of correctly predicting the direction of temporal trends, and on coverage of the 95% prediction interval.
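The three performance measures used to compare models can be computed on a held-out series as follows; the series, predictions, and interval width here are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical held-out series and one model's point predictions,
# with symmetric 95% prediction intervals of known half-width.
truth = np.cumsum(rng.normal(0.1, 1.0, 100))
pred = truth + rng.normal(0, 0.5, 100)
half_width = 1.96 * 0.5  # matches the assumed error standard deviation

# 1. Root mean square error of the point predictions.
rmse = np.sqrt(np.mean((pred - truth) ** 2))

# 2. Frequency of predicting the correct direction of year-to-year change.
trend_acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(truth)))

# 3. Coverage: share of held-out points inside the 95% interval
# (a well-calibrated model should land near 0.95).
coverage = np.mean(np.abs(pred - truth) <= half_width)
```

Balancing these attributes, rather than optimizing any single one, is what the fifth principle's "best performance" criterion refers to.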
CODEm was also compared to other models for several additional causes of death, including cardiovascular disease, chronic respiratory disease, cervical cancer, breast cancer, and lung cancer. For all of these, CODEm’s predictive validity metrics were as good as or better than those of the component models. In no case did CODEm perform significantly worse than the top component model, and in many instances it was a substantial improvement.
The authors detail how CODEm follows the five principles for cause of death model development described above and explores a large variety of possible models to estimate trends in causes of death. Potential models were identified using a covariate selection algorithm, which yielded many possible combinations of covariates. The covariate selection algorithm considered evidence linking covariates with cause-specific mortality. Selected covariates were then run through four model families, including mixed effects linear models and spatial-temporal models for cause fractions and cause-specific death rates. This procedure typically results in hundreds or thousands of models to test.
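The covariate selection step can be illustrated with a toy enumeration: generate every covariate subset, fit each candidate model, and keep only those whose coefficient signs agree with assumed epidemiological expectations. The covariates, effect directions, and data below are hypothetical, and the paper's actual algorithm is more elaborate:

```python
import itertools

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical covariates with assumed directions of effect on the
# log maternal mortality rate (+1 raises it, -1 lowers it).
expected_sign = {"fertility": +1, "education": -1, "gdp": -1}
names = list(expected_sign)

# Simulated data consistent with those assumed directions.
n = 300
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.4, n)

# Enumerate every non-empty covariate subset; keep the models whose
# fitted coefficients all carry the expected sign.
plausible = []
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        Xs = X[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        if all(np.sign(b) == expected_sign[names[j]] for j, b in zip(subset, beta)):
            plausible.append([names[j] for j in subset])
```

With many real covariates, this combinatorial enumeration is what produces the hundreds or thousands of candidate models that each model family is then fit to.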
All models for each cause of death were assessed using out-of-sample predictive validity and combined into an ensemble model – CODEm – with optimal out-of-sample predictive performance.
The authors note that they expect this research to resolve much of the existing debate around cause of death model building, because it tests a wide range of models and model families and assesses their performance objectively through out-of-sample predictive validity tests. Future debate will likely focus on the data processing step: how multiple data sources are combined and how misclassification of cause of death assignment is handled.
In many cases, cause of death information is most valuable when placed in the context of the entire cause of death composition of a population, and the relative burdens of different causes are often more influential for priority setting than the absolute size of any single cause. However, designing a model that predicts all causes of death simultaneously is difficult. For studies in which estimates for multiple causes are needed, such as the Global Burden of Diseases, Injuries, and Risk Factors 2010 Study, the authors recommend that the model with the best out-of-sample predictive validity for each cause be produced independently, and that the predicted estimates for each cause then be modified so they sum to the total all-cause mortality predictions.
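One simple way to make independently modeled causes sum to the all-cause total is proportional rescaling, sketched below. This is an illustrative assumption about how the modification could be done, not necessarily the exact correction used in the GBD study:

```python
import numpy as np

# Hypothetical independently estimated deaths for three causes in one
# country-year, plus the all-cause mortality envelope they must sum to.
cause_deaths = np.array([120.0, 300.0, 80.0])
all_cause_total = 450.0

# Proportional rescaling: each cause keeps its relative share, and the
# rescaled estimates sum exactly to the all-cause envelope.
scaled = cause_deaths * (all_cause_total / cause_deaths.sum())
```

This preserves the cause composition implied by the independent models while enforcing internal consistency with the all-cause predictions.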
Approaches like CODEm require substantial computing power, which is not available in many parts of the developing world. The authors therefore recommend dedicating internet-accessible servers to this processing, so that the tools can be made widely available.
Citation: Foreman KJ, Lozano R, Lopez AD, Murray CJL. Modeling causes of death: an integrated approach using CODEm. Population Health Metrics. 2012; 10:1.