SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Batura N, Pulkki-Brännström AM, Agrawal P, Bagra A, Haghparast-Bidgoli H, Bozzani F, Colbourn T, Greco G, Hossain T, Sinha R, Thapa B, Skordis-Worrall J. Glob. Health Action 2014; 7(1): 23257.

Affiliation

Health Economics and Systems Group, London School of Hygiene and Tropical Medicine, London, UK.

Copyright

(Copyright © 2014, Centre for Global Health Research (CGH) at Umeå University, Sweden, Publisher Co-Action Publishing)

DOI

10.3402/gha.v7.23257

PMID

28672629

Abstract

BACKGROUND: Current guidelines for the conduct of cost-effectiveness analysis (CEA) are mainly applicable to facility-based interventions in high-income settings. Differences in the unit of analysis and the high cost of data collection can make these guidelines challenging to follow within public health trials in low- and middle-income settings.

OBJECTIVE: This paper reflects on the challenges experienced within our own work and proposes solutions that may be useful to others attempting to collect, analyse, and compare cost data between public health research sites in low- and middle-income countries.

DESIGN: We describe the generally accepted methods (norms) for collecting and analysing cost data in a single-site trial from the provider perspective. We then describe our own experience of applying these methods within eight comparable cluster randomised controlled trials. We describe the strategies used to maximise adherence to the norms, highlight the ways in which we deviated from them, and reflect on the learning and limitations that resulted.

RESULTS: When the expenses incurred by a number of small research sites are used to estimate the cost-effectiveness of delivering an intervention at national scale, deciding which expenses constitute 'start-up' costs is a nontrivial decision, and one that may be resolved differently at different sites. Similarly, the decision to include or exclude research or monitoring and evaluation costs can have a significant impact on the findings. We separated out research costs and argued that monitoring and evaluation costs should be reported as part of the total trial cost. The human resource constraints that we experienced are also likely to be common to other trials. As we did not have an economist at each site, we collaborated with key personnel at each site, who were trained to use a standardised cost collection tool. This approach both accommodated our resource constraints and served as a knowledge-sharing and capacity-building process within the research teams.

CONCLUSIONS: Given the practical reality of conducting randomised controlled trials of public health interventions in low- and middle-income countries, it is not always possible to adhere to prescribed guidelines for the analysis of cost-effectiveness. Compromises are frequently required as researchers seek a pragmatic balance between rigour and feasibility. There is no single solution to this tension, but researchers are encouraged to be mindful of the limitations that accompany compromise, whilst being reassured that meaningful analyses can still be conducted with the resulting data.
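
Editor's note: the cost-inclusion decisions discussed in the results can be made concrete with a small worked sketch. The Python example below is not taken from the paper; the cost categories, amounts, number of beneficiaries, and the use of DALYs averted as the health-effect measure are all hypothetical, chosen only to illustrate how including or excluding start-up and monitoring and evaluation (M&E) costs shifts a per-beneficiary cost and a cost-per-effect ratio.

# Illustrative sketch only: hypothetical cost categories and amounts,
# not data from the trial sites described in the abstract.

site_costs = {
    "start_up": 40_000.0,                # one-off set-up expenses (e.g. training, equipment)
    "recurrent": 120_000.0,              # ongoing intervention delivery costs
    "research": 30_000.0,                # research-only costs, kept separate throughout
    "monitoring_evaluation": 15_000.0,   # M&E costs, reported with the total trial cost
}

beneficiaries = 8_000        # people reached by the intervention (hypothetical)
dalys_averted = 250.0        # health effect estimate (hypothetical)

def provider_cost(costs, include_start_up=True, include_me=False):
    """Total provider-perspective cost under a given inclusion rule."""
    total = costs["recurrent"]
    if include_start_up:
        total += costs["start_up"]
    if include_me:
        total += costs["monitoring_evaluation"]
    return total  # research costs are always excluded in this sketch

for start_up in (True, False):
    for me in (True, False):
        total = provider_cost(site_costs, start_up, me)
        per_beneficiary = total / beneficiaries
        ratio = total / dalys_averted  # cost per DALY averted (vs. doing nothing)
        print(f"start-up={start_up!s:5} M&E={me!s:5} "
              f"total={total:>9,.0f}  per beneficiary={per_beneficiary:6.2f}  "
              f"cost per DALY averted={ratio:7.2f}")

Even with these toy numbers, toggling the start-up and M&E rules moves the total provider cost from 120,000 to 175,000, which is the kind of sensitivity the abstract flags when cost categories are defined differently across sites.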


Language: en

Keywords

LMIC; cost data; cost-effectiveness analysis; multisite; randomised control trials
