Guest Commentary: The Role of Outliers in Determining Hospital Quality

Robert Lieberthal
Instructor
Jefferson School of Population Health


One of the exciting new opportunities in healthcare quality and safety is the potential to use large datasets to help us identify high quality hospitals. However, this task is daunting, because we are still figuring out how to extract useful information from all that data. I am currently writing a grant on validating the PRIDIT method for aggregating Hospital Compare data.

Richard Derrig, one of the researchers responsible for developing the model, described PRIDIT as “...a relatively new and versatile technique for producing a rank ordered score for the intensity of a latent variable [hospital quality in this case], given a set of predictor variables that are related to the variable in a monotone way [hospital process measures in this case].” This work extends two of my papers: “Hospital quality—A PRIDIT approach” and “Grouping hospitals by quality—a PRIDIT approach with bootstrapping.”
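To make the method concrete, here is a minimal sketch of the general PRIDIT idea in Python. It is not the implementation from my papers: it assumes binary process measures coded 1 = measure met, and the hospitals and data below are hypothetical. Each measure is RIDIT-transformed so that meeting a rarely met measure counts for more, and the first principal component of the transformed matrix supplies the weights for the composite score.

```python
import numpy as np

def ridit_scores(X):
    """RIDIT-transform binary indicators (1 = process measure met).

    For a measure met by a proportion p of hospitals, a hospital scores
    (1 - p) if it met the measure and -p if it did not, so meeting a
    rarely met measure counts for more.
    """
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)                  # proportion meeting each measure
    return np.where(X == 1, 1 - p, -p)

def pridit(X):
    """Return (weights, scores): a composite score built from the first
    principal component of the RIDIT-transformed indicator matrix."""
    F = ridit_scores(X)
    eigvals, eigvecs = np.linalg.eigh(F.T @ F)
    w = eigvecs[:, -1]                  # first principal component weights
    if w.sum() < 0:                     # sign is arbitrary; orient so that
        w = -w                          # meeting measures raises the score
    return w, F @ w

# Toy example: 6 hospitals, 3 process measures (hypothetical data).
X = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])
weights, scores = pridit(X)
print(np.argsort(-scores))              # rank ordering, best hospital first
```

The weights come from the data rather than being fixed in advance, which is what makes the result a rank ordering of the latent quality variable. The following is an excerpt from my grant application: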

In prior work, we demonstrated how to create bootstrapped standard errors for the PRIDIT scores. We found that the size of the standard errors for the overall quality score varies with the level of quality in a U-shaped pattern: standard errors are small for average quality hospitals and larger for low quality and high quality hospitals. Our conjecture was that, because we classified most hospitals as being of average quality, our confidence in the quality of these hospitals is much higher than our confidence in the quality of “outlier” hospitals of high or low quality.
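The excerpt does not show the computation itself, so here is a simplified sketch of the resampling idea (again with hypothetical data, reusing the ridit_scores and pridit functions above, and not the exact procedure from the paper): resample hospitals with replacement, re-estimate the PRIDIT weights on each resample, rescore every hospital, and take the standard deviation of each hospital's score across replicates.

```python
import numpy as np

# Assumes ridit_scores() and pridit() from the sketch above are in scope.

def bootstrap_pridit_se(X, n_boot=1000, seed=0):
    """Bootstrap standard errors for PRIDIT quality scores.

    Each replicate resamples hospitals with replacement, re-estimates
    the PRIDIT weights on the resample, and rescores the full set of
    hospitals; the per-hospital standard deviation across replicates
    is the bootstrapped standard error.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    F = ridit_scores(X)                   # RIDIT scores for the full sample
    reps = np.empty((n_boot, n))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample hospitals
        w, _ = pridit(X[idx])             # re-estimate weights
        reps[b] = F @ w                   # rescore every hospital
    return reps.std(axis=0)

se = bootstrap_pridit_se(X)               # X from the toy example above
```

Plotting these standard errors against each hospital's percentile in the score distribution is what produces the U-shaped pattern.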

The U-shaped pattern comes from the following graph in my paper, “Grouping hospitals by quality—a PRIDIT approach with bootstrapping,” published in the 2008 Proceedings of the Joint Statistical Meetings. The x-axis shows the percentile of hospital quality, and the y-axis shows the standard deviation of quality scores for a hospital at that percentile of the quality distribution. Standard deviations for outliers are almost four times as high as for average hospitals:
[Figure: standard deviation of bootstrapped quality scores by percentile of hospital quality]

I think this result is instructive as I reflect on the debate over health reform, and as we plan future efforts at quality improvement. When President Obama lauded the Cleveland Clinic as a high quality, low cost hospital, one rejoinder was that the Cleveland Clinic is indeed special, drawing wealthy patients from around the world, and thus is not a model for most community hospitals (WSJ).

On the other hand, when Martin Luther King Jr.-Harbor Hospital in Los Angeles County was shut down by federal regulators, there was a ripple effect on nearby community hospitals that may have negatively affected population health (NY Times), underscoring the sad truth that sometimes the hospital a community has is better than none at all.

My takeaway is that, while more data is better, we will have to employ a mix of statistical and cultural techniques when evaluating hospital quality in the context of overall population health. The growing power of computers, the gigabytes of new data, and the flood of bright new researchers we are attracting to the study of healthcare quality and safety will help us answer many of the open questions in the academic literature.

However, I think that we should be skeptical about the real world meaning of our results, and focus on research settings most likely to lead to generalizable outcomes. I am hopeful that we can find ways to improve quality at all types of hospitals, but especially those serving the most vulnerable populations.
