Confidence intervals for cost-effectiveness ratios: A comparison of four methods. Polsky, D., Glick, H. A., Willke, R., Schulman, K. Health Economics, 1997; 6(3): 243–252.

Abstract

We evaluated four methods for computing confidence intervals for cost-effectiveness ratios developed from randomized controlled trials: the box method, the Taylor series method, the nonparametric bootstrap method and the Fieller theorem method. We performed a Monte Carlo experiment to compare these methods. We investigated the relative performance of each method and assessed whether it was affected by differing distributions of costs (normal and log normal), by differing distributions of effects (a 10% absolute difference in mortality, arising either from mortality rates of 25% versus 15% in the two groups or from rates of 55% versus 45%), or by differing levels of correlation between costs and effects (correlations of -0.50, -0.25, 0.0, 0.25 and 0.50). The principal criterion used to evaluate the performance of the methods was the probability of miscoverage; the symmetry of miscoverage was used as a secondary criterion. Overall probabilities of miscoverage for the nonparametric bootstrap method and the Fieller theorem method were more accurate than those for the other two methods. The Taylor series method produced confidence intervals that asymmetrically underestimated the upper limit of the interval. Confidence intervals for cost-effectiveness ratios estimated with the nonparametric bootstrap method and the Fieller theorem method were thus more dependably accurate than those estimated with the Taylor series or box methods. Routine reporting of these intervals will allow those who use cost-effectiveness ratios in clinical and policy judgments to better identify when an intervention is good value for its cost.
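The nonparametric bootstrap interval the abstract refers to can be illustrated with a short sketch. The following Python code is our own minimal illustration, not the authors' implementation: it resamples patients with replacement within each trial arm and takes percentiles of the resulting ratio estimates. The function name, the percentile rule, the replicate count and the simulated data are all illustrative assumptions.

```python
# A minimal sketch of the nonparametric bootstrap percentile interval for a
# cost-effectiveness ratio, assuming per-patient paired cost and effect data
# from a two-arm trial. Names, defaults and data are illustrative, not from
# the paper.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_icer_ci(cost_t, eff_t, cost_c, eff_c, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for (mean cost difference) / (mean effect difference)."""
    n_t, n_c = len(cost_t), len(cost_c)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        # Resample patients with replacement, independently within each arm,
        # keeping each patient's cost paired with their effect so that the
        # cost-effect correlation is preserved in every replicate.
        i_t = rng.integers(0, n_t, n_t)
        i_c = rng.integers(0, n_c, n_c)
        d_cost = cost_t[i_t].mean() - cost_c[i_c].mean()
        d_eff = eff_t[i_t].mean() - eff_c[i_c].mean()
        ratios[b] = d_cost / d_eff
    lo, hi = np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Simulated inputs loosely echoing the paper's design: log-normal costs and a
# 10% absolute mortality difference (here, survival rates of 85% versus 75%).
n = 200
cost_t = rng.lognormal(mean=9.0, sigma=0.5, size=n)
cost_c = rng.lognormal(mean=8.8, sigma=0.5, size=n)
eff_t = rng.binomial(1, 0.85, size=n)
eff_c = rng.binomial(1, 0.75, size=n)
print(bootstrap_icer_ci(cost_t, eff_t, cost_c, eff_c))
```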

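For comparison, the Fieller theorem interval and the symmetric Taylor series (delta method) interval can both be written in closed form from the mean differences and their variances and covariance. The sketch below assumes the standard textbook formulas for a ratio of approximately normal means; it is an illustration under those assumptions, not the authors' code, and it helps show why a symmetric interval can understate the upper limit of a skewed ratio.

```python
# Closed-form CI sketches for a ratio of mean differences, assuming approximate
# normality. Function names and the summary-statistic interface are illustrative.
import numpy as np
from scipy.stats import norm

def arm_stats(cost_t, eff_t, cost_c, eff_c):
    """Mean differences and their (co)variances; the two arms are independent."""
    n_t, n_c = len(cost_t), len(cost_c)
    d_cost = cost_t.mean() - cost_c.mean()
    d_eff = eff_t.mean() - eff_c.mean()
    var_dc = cost_t.var(ddof=1) / n_t + cost_c.var(ddof=1) / n_c
    var_de = eff_t.var(ddof=1) / n_t + eff_c.var(ddof=1) / n_c
    cov_cde = (np.cov(cost_t, eff_t, ddof=1)[0, 1] / n_t
               + np.cov(cost_c, eff_c, ddof=1)[0, 1] / n_c)
    return d_cost, d_eff, var_dc, var_de, cov_cde

def fieller_ci(d_cost, d_eff, var_dc, var_de, cov_cde, alpha=0.05):
    """Fieller theorem CI: the set of ratios rho satisfying
    (d_cost - rho*d_eff)^2 <= z^2 * Var(d_cost - rho*d_eff)."""
    z2 = norm.ppf(1 - alpha / 2) ** 2
    a = d_eff**2 - z2 * var_de
    b = -2 * (d_cost * d_eff - z2 * cov_cde)
    c = d_cost**2 - z2 * var_dc
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # effect difference not clearly nonzero: no finite interval
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    return lo, hi

def taylor_ci(d_cost, d_eff, var_dc, var_de, cov_cde, alpha=0.05):
    """Taylor series (delta method) CI: symmetric about the point estimate,
    which is why it can understate the upper limit of a right-skewed ratio."""
    r = d_cost / d_eff
    z = norm.ppf(1 - alpha / 2)
    se = abs(r) * np.sqrt(var_dc / d_cost**2 + var_de / d_eff**2
                          - 2 * cov_cde / (d_cost * d_eff))
    return r - z * se, r + z * se
```

Feeding the same simulated data through arm_stats lets the two intervals be compared directly; the Fieller limits need not be symmetric about the point estimate, whereas the Taylor series limits always are.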
DOI: 10.1002/(SICI)1099-1050(199705)6:3<243::AID-HEC269>3.0.CO;2-Z

Web of Science ID: A1997XH34300003

PubMed ID: 9226142