Early evidence suggests that price transparency is beginning to increase market competition and drive convergence in healthcare prices. Below is a deep dive into the methodology used in our recent white paper, "Is Price Transparency Helping?", along with the limitations of our approach and avenues for future research. Join me as I walk you through exactly how we analyzed commercially negotiated rates at over 200 hospitals across the 10 largest U.S. metropolitan areas, focusing on 37 common healthcare services from December 2021 to June 2024.
Figure 1: Negotiated rates began converging between Dec 2021 and June 2024
Our white paper strives to measure the change in dispersion in a simple, unbiased, economically meaningful way. The simplest measures of dispersion, such as standard deviation or interquartile range using rate levels, are sensitive to inflation rates and inflation adjustments. When a constant percentage inflation rate (or inflation adjustment) is applied uniformly across the entire distribution of rates, larger rate values experience greater absolute increases. This affects the distribution, causing measures like standard deviation and interquartile range to change, even if there is no change in underlying rates.
Let’s consider a simple example: a set of 100 rates, $1 through $100, for which we calculate the interquartile range before and after applying a uniform 10% inflation rate, with no additional rate change:
While we applied the same percentage increase across all rates, larger nominal values experienced greater absolute increases. This introduced an inflation-driven distortion of the distribution and caused the interquartile range to increase by 10%, even with no real change in rates. In this way, inflation (and likewise, adjusting for inflation) introduces non-stationarity into our time series and biases simple measures of dispersion such as the standard deviation and interquartile range.
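This is easy to check numerically. The sketch below (using NumPy) reproduces the toy example: 100 rates of $1 through $100, inflated uniformly by 10%, with the interquartile range growing by exactly the inflation rate:

```python
import numpy as np

# The toy example: 100 rates, $1 through $100.
rates = np.arange(1, 101, dtype=float)

def iqr(x):
    """Interquartile range: 75th percentile minus 25th percentile."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

iqr_before = iqr(rates)         # 49.5
iqr_after = iqr(rates * 1.10)   # uniform 10% inflation, no real change

# The IQR grows by exactly the inflation rate.
print(iqr_after / iqr_before)   # ≈ 1.1
```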
Given the limitations of these simple measures of dispersion, we decided to examine trends in first differences (rate changes) rather than in rate levels. Borrowing from the interquartile range approach, we focused on quartiles, but instead of analyzing the rate levels, we measured the average percentage rate change within each quartile over time. This approach standardizes the analysis across different rate levels and counters the bias introduced by inflation and inflation adjustments on rate-level dispersion measures.
We categorized each contracted rate into one of three market segments, comparing the December 2021 value to other rates for the same service and metro:
- Top 25%: rates above the 75th percentile
- Middle 50%: rates above the 25th percentile and at or below the 75th percentile
- Bottom 25%: rates at or below the 25th percentile
For each rate, we calculated the annualized real rate change (ARRC, colloquially “rate change”) between December 2021 and June 2024. ARRC is calculated as follows:

ARRC = ( [Negotiated Rate (June 2024) / CPI (June 2024)] / [Negotiated Rate (Dec 2021) / CPI (Dec 2021)] )^(12/m) - 1
Where:
- Negotiated Rate (Dec 2021) is the negotiated rate in December 2021.
- Negotiated Rate (June 2024) is the negotiated rate in June 2024.
- CPI (Dec 2021) is the Hospital Services component of the Consumer Price Index (CPI) in December 2021.
- CPI (June 2024) is the Hospital Services component of the Consumer Price Index (CPI) in June 2024.
- m=30 is the number of months between December 2021 and June 2024.
We then calculated the average of the annualized real rate changes for each market segment:

ARRC_s = (1 / N_s) × Σ ARRC_i
Where:
- ARRC_s is the average ARRC for market segment s.
- N_s is the total number of individual rates in market segment s.
- ARRC_i is the annualized real rate change for each individual rate i in the market segment.
To test whether ARRC differed between the Top 25%, Middle 50%, and Bottom 25% market segments, we used the non-parametric Kruskal-Wallis test to compare distributions. The Kruskal-Wallis test showed that the Top, Middle, and Bottom market segments had significantly different ARRC distributions, with a p-value < .001. For post-hoc pairwise comparisons, we conducted Mann-Whitney U tests with a Bonferroni correction, which showed significant differences (p < .001) between all pairs. To ensure consistency with the Kruskal-Wallis framework, we also conducted post-hoc Dunn's tests with a Bonferroni correction, which confirmed significant differences (p < .001) between all pairs.
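As an illustrative sketch (not our actual data), the same testing pipeline can be run in SciPy on synthetic ARRC draws for the three segments. Dunn's test is omitted here because it lives in the separate scikit-posthocs package rather than SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic ARRC values for each market segment (toy normal draws,
# purely illustrative -- not the white paper's data).
top = rng.normal(-0.058, 0.02, 500)     # Top 25%
middle = rng.normal(-0.011, 0.02, 500)  # Middle 50%
bottom = rng.normal(0.029, 0.02, 500)   # Bottom 25%

# Omnibus Kruskal-Wallis test across the three segments.
stat, p = stats.kruskal(top, middle, bottom)

# Post-hoc pairwise Mann-Whitney U tests with a Bonferroni correction:
# with 3 comparisons, each p-value is multiplied by 3 (capped at 1).
pairs = {"top-middle": (top, middle),
         "top-bottom": (top, bottom),
         "middle-bottom": (middle, bottom)}
adjusted = {name: min(stats.mannwhitneyu(a, b).pvalue * 3, 1.0)
            for name, (a, b) in pairs.items()}

print(p, adjusted)
```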
In addition to the Kruskal-Wallis, Mann-Whitney, and Dunn's tests, we conducted an exploratory analysis using a linear mixed-effects model to more rigorously examine differences in rate changes across market segments while relaxing the independence assumption. We incorporated random effects for hospitals, regions (CBSAs), services, service categories, care settings, and payers, capturing the hierarchical structure of the data and allowing us to control for potential dependencies within it. The fixed effects isolated differences across the market segments ('Bottom 25%', 'Middle 50%', and 'Top 25%'), providing a clearer understanding of rate changes across these groups.
The mixed-effects model results were consistent with those obtained from prior analysis. Higher-priced hospitals ('Top 25%') showed statistically significant decreases in ARRC (estimate = -5.8%, p < .001), while lower-priced hospitals ('Bottom 25%') exhibited statistically significant increases (estimate = +2.9%, p < .001). Additionally, hospitals in the "Middle 50%" showed a statistically significant reduction in ARRC (estimate = -1.1%, p < .001). The estimated rate changes were similar both in magnitude and direction to ARRCs calculated in our primary analysis.
Given the additional complexity of the mixed-effects model and the similarity of results to the simple averages and distributional comparison tests, we chose to present these simpler analyses as a foundational starting point. They provide a clear and accessible interpretation of the data, while the mixed-effects analysis supports and reinforces our initial conclusions.
Table 1, Figure 2, and Figure 3
The more consistently and pervasively we observe price convergence, the greater confidence we gain that the convergence is a real finding ("real" in the colloquial, not the economic, sense). To examine this, we counted the number of markets where we observed price convergence. We defined a market as a single service within a single core-based statistical area (CBSA). For each market, we calculated ARRC_s for each market segment. Markets were counted as converging when ARRC_Top - ARRC_Bottom < 0. We excluded markets where there was not enough data to determine convergence (i.e., where we did not find rates in all three market segments).
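In code, the market-level bookkeeping looks roughly like this (the column names and values are purely illustrative, not our dataset):

```python
import pandas as pd

# Toy data: one row per rate, with its CBSA, service, market segment,
# and annualized real rate change. A market is a service within a CBSA.
df = pd.DataFrame({
    "cbsa": ["A", "A", "A", "A", "B", "B", "B"],
    "service": ["MRI", "MRI", "MRI", "MRI", "MRI", "MRI", "MRI"],
    "segment": ["Top 25%", "Middle 50%", "Bottom 25%", "Top 25%",
                "Top 25%", "Middle 50%", "Bottom 25%"],
    "arrc": [-0.06, -0.01, 0.03, -0.05, 0.02, 0.00, -0.01],
})

# Average ARRC per (market, segment), with segments spread into columns.
seg_means = df.groupby(["cbsa", "service", "segment"])["arrc"].mean().unstack()

# Keep only markets with data in all three segments, then flag convergence.
complete = seg_means.dropna()
converging = complete["Top 25%"] - complete["Bottom 25%"] < 0
print(converging)
```

Market A converges (its Top segment fell while its Bottom segment rose); market B does not.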
We found 217 (82.8%) out of 262 markets showed convergence. To test whether the proportion of converging markets was significantly different from chance, we performed a binomial test with the null hypothesis that the probability of observing convergence in a given market is 50% (i.e., random chance). The binomial test yielded a p-value of < .001, strongly rejecting the null hypothesis and suggesting that the observed rate of convergence was significantly greater than what would be expected by random chance.
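This binomial test can be reproduced from the reported counts with SciPy:

```python
from scipy.stats import binomtest

# 217 of 262 markets converged; null hypothesis: convergence in any
# given market is a coin flip (p = 0.5).
result = binomtest(217, n=262, p=0.5)
print(result.pvalue)  # far below .001
```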
For both converging and non-converging markets, we then calculated the average annualized real rate change for each market segment.
Table 2: Top-Driven vs Bottom-Driven Markets
We further categorized converging and non-converging markets based on the dominant source of price movement. We labeled markets as Top-Driven when |ARRC_Top - ARRC_Middle| > |ARRC_Middle - ARRC_Bottom|, and vice versa for Bottom-Driven markets. We performed a chi-squared test of independence, conservatively applying Yates's correction for continuity, to examine the relationship between market convergence and whether the market was Top-Driven or Bottom-Driven. The results indicated a statistically significant association between these variables, χ²(1) = 7.55, p = .0060.
Within both converging and non-converging markets, we also performed binomial tests to check whether the proportion of Top-Driven vs Bottom-Driven markets was significantly different from random chance (50%). The binomial tests yielded p-values of .041 in converging markets and .036 in non-converging markets, rejecting the null hypotheses and suggesting that the observed proportions of Top- and Bottom-Driven markets were significantly different from what would be expected by chance.
Figure 4: Variation in Price Convergence Across Services
We categorized each of the 37 services in our dataset into one of 10 service categories, outlined in the “Service Inclusion” section of our white paper. For each service category, we calculated ARRC_s within each market segment. We then calculated the absolute price convergence for each service category: Absolute Price Convergence = |ARRC_Top - ARRC_Bottom|.
Figure 5: Outpatient Services Show Greater Price Convergence
We categorized each of the 37 services by care setting, outlined in the “Service Inclusion” section of our white paper. For each care setting, we calculated ARRC_s within each market segment.
Table 3: More Prevalent Convergence in Outpatient Services
We calculated the number of different markets where we observed price convergence, comparing outpatient vs inpatient services. We defined a market as a single service within a single core-based statistical area (CBSA). For each market, we calculated ARRC_s for each market segment. Markets were counted as converging when ARRC_Top - ARRC_Bottom < 0. We excluded markets where we did not find rates in all three market segments.
We found that 187 (87.4%) out of 214 outpatient markets showed convergence, while 30 (62.5%) out of 48 inpatient markets showed convergence. We employed a chi-squared test of independence, again conservatively applying Yates’s correction for continuity, to examine the relationship between care setting and convergence. The results indicated a statistically significant association between these variables, χ²(1) = 15.36, p < .001.
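This chi-squared test can be reproduced from the counts above with SciPy:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = care setting, columns = converging / not.
table = np.array([
    [187, 27],   # outpatient: 187 of 214 markets converged
    [30, 18],    # inpatient: 30 of 48 markets converged
])

# correction=True applies Yates's continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(round(chi2, 2), p)  # chi2 ≈ 15.36, p < .001
```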
We also performed binomial tests for each care setting with the null hypothesis that the probability of observing convergence in a given market is 50% (i.e., random chance). The binomial test for outpatient markets yielded a p-value of < .001, strongly rejecting the null hypothesis and suggesting that the observed rate of convergence is significantly greater than would be expected by random chance.
In contrast, inpatient markets showed a binomial test p-value of .111. This result indicates that we do not have strong evidence to reject the null hypothesis. The observed rate of convergence in inpatient markets is not significantly different from 50%, suggesting that the convergence observed in these markets could be due to random chance rather than a systematic trend.
Limitations and Future Research
While this white paper provides valuable insights into the early market response to federal price transparency regulations, we want to acknowledge several limitations of our approach and highlight areas for future research.
Data Coverage and Generalizability of Findings
This white paper relies on data from a select subset of hospitals, payers, and healthcare services, which does not fully represent the diversity of U.S. healthcare markets. Consequently, the findings are specific to certain services and metropolitan areas, constraining their applicability to broader contexts, such as rural regions or less common healthcare services. In addition, inconsistency in data reporting and gaps in the availability of machine-readable files (MRFs) may affect the completeness of the dataset, especially for earlier data.
To enhance generalizability, future research should aim to expand the data to include a more diverse array of hospitals, payers, and service types while expanding on methods to address data gaps. This approach would better reflect the full spectrum of U.S. healthcare and enable a more robust and comprehensive understanding of price transparency impacts across different markets and service types.
Causality
This white paper does not establish causal relationships between price transparency regulations and observed price changes. The complexity of healthcare pricing makes it challenging to attribute observed changes solely to these regulations. While we observe clear price convergence trends, these dynamics may also reflect influences beyond regulatory effects. To identify causal links, future research should explore natural experiments or quasi-experimental designs. Such methods can help control for confounding factors and better isolate the impact of transparency regulations, offering a clearer understanding of how these policies specifically shape healthcare pricing and consumer costs.
Interdependencies Between Markets
While our primary analysis assumes independence between markets, we recognize that rate changes in one market may affect others. For example, markets within the same metro or under the same hospital system may exhibit interconnected rate adjustments. To address this, we conducted exploratory modeling that accounted for interdependencies and found results consistent with the results of our primary analysis. This supports the use of a simplified approach, which avoids added analytical complexity and makes it easier to interpret and communicate overall convergence trends. However, assuming independence may still oversimplify the complexity of market dynamics and understate the influence of geographic or institutional interdependencies on observed trends. Future research could further explore models that account for inter-market dependencies, offering a more nuanced understanding of the factors driving rate changes.
Measure of Dispersion
This white paper relies on a specific measure of dispersion that, while addressing many challenges in analyzing healthcare price dynamics, has its own limitations. The measure uses fixed market segments based on December 2021 price levels, which may not fully capture evolving market dynamics over time or nuanced changes in the shape of the price distribution. It uses rate changes rather than rate levels, which mitigates distortion introduced by inflation but limits our ability to determine when rates have fully converged. Our binary definition of convergence (the Bottom segment's rate change exceeding the Top segment's) is simple by design, but that simplicity is also a limitation: it does not consider whether observed market-level changes are statistically significant; instead, we test for significance across markets rather than within them. To address these limitations, future research could consider alternative measures of dispersion that provide a more nuanced analysis of price dispersion.
Temporal Dynamics
The current white paper focuses on a relatively short time frame (December 2021 to June 2024), which may not fully capture the long-term effects of price transparency regulations. Market responses to such policy changes may evolve over extended periods, as providers, payers, employers, and consumers adjust their behaviors in reaction to new information. The short observation window might not be representative of the long-term response, potentially mischaracterizing the true impact of transparency initiatives. Additionally, our white paper does not explore temporal variation. Future research can analyze healthcare markets over longer periods of time, studying temporal variations and trends.
Cost Reduction
Although we observed price convergence, this white paper does not conclusively determine whether this leads to overall cost reductions for patients. The relationship between price transparency and actual patient cost savings requires additional research. Future research should focus on linking price convergence with reductions in out-of-pocket expenses to evaluate the true financial benefits for consumers and broader economic impact.
Patient Impact
The white paper does not fully address the ultimate impact of price transparency on patient outcomes, including access to care, financial burden, and health outcomes. While price convergence suggests increased market efficiency, it remains uncertain how these changes translate into real-world benefits for patients. Further research is needed to explore the direct and indirect effects of price transparency on healthcare utilization and health outcomes.
While this white paper provides a foundational understanding of the early effects of price transparency regulations, these limitations highlight the need for further research to fully understand the long-term impact on healthcare markets and patient outcomes.