In my last posting I raised the possibility that the unusually high average Energy Star scores for buildings seen over the last eight years may reflect problems with the method the EPA uses to calculate Energy Star scores. Like the mythical children of Lake Wobegon, buildings using the Energy Star benchmarking program tend to be “above average.”
Because the Energy Star model for Medical Office buildings was based on 1999 CBECS data, it was possible to independently test the model’s predictions by applying it to the Medical Office buildings in the 2003 CBECS data. What I found was that the resulting Energy Star scores were not uniformly distributed across the 2003 building population. Instead the results were heavily biased towards higher scores, so much so that the mean score for all Medical Office buildings was 65, well above the assumed mean of 50. This provides convincing evidence that the Medical Office building model, and the Energy Star scores it produces, are not valid. The score may still be useful for tracking the relative performance of a particular building over time, but it cannot have its stated meaning as a percentile ranking of a building’s energy efficiency relative to the national population. In particular, quantitative energy savings cannot be inferred from the score. An Energy Star score of 62, for instance, would normally suggest above-average performance; for a Medical Office building it actually means the building uses more energy than the average Medical Office building. And a score of 75, the threshold for Energy Star certification, does not mean that the building is 30% more efficient than the national average for such buildings. In short, the score does not mean what the EPA has claimed it means.
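The premise of this test can be sketched in a few lines of Python. Nothing here is specific to the EPA’s regression model; it simply spells out what the percentile interpretation implies, using the observed mean of 65 reported above.

```python
# If Energy Star scores are genuine percentile ranks, then scores across
# the national building population should be roughly uniform on 1..100,
# and the population mean score should sit near 50.5.
uniform_mean = sum(range(1, 101)) / 100  # mean of a uniform 1-100 distribution
print(uniform_mean)  # 50.5

# Mean score actually found for Medical Office buildings in 2003 CBECS
observed_mean = 65
print(observed_mean - uniform_mean)  # 14.5-point gap from expectation
```

A gap of this size between the observed mean and the uniform expectation is the first red flag that the scores are not behaving as percentiles.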
It turns out that the Energy Star model for Dormitories/Residence Halls is also based on 1999 CBECS data. Hence the Dormitory/Residence Hall Energy Star model may also be tested by applying it to Dormitories in the 2003 CBECS survey and examining the distribution of the resulting Energy Star scores.
The results are shown in the Figure below.
The histogram has the same problems that were apparent in the Medical Office histogram, namely that the scores are biased toward the high end. The mean Dormitory Energy Star score here is 70. The graph shows that 35% of all Dormitories have Energy Star scores ranging from 91-100; for a uniform distribution only 10% of buildings would have such scores. The Figure also shows that only 9% of all Dormitories have Energy Star scores ranging from 1-30, whereas a uniform distribution would place 30% of buildings in this range. Clearly there is a problem with this Energy Star building model. Scores generated with it do not have their stated interpretation as a percentile ranking of energy efficiency; they are clearly inflated and simply not valid.
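The departure from uniformity described above can be quantified with a standard chi-square goodness-of-fit test. The sketch below uses hypothetical decile counts chosen to mimic the high-end skew in the Dormitory histogram; they are illustrative only, not the actual 2003 CBECS tallies.

```python
def chi_square_uniform(counts):
    """Chi-square statistic for observed decile counts against a
    uniform distribution over the ten 10-point score bands."""
    n = sum(counts)
    expected = n / len(counts)  # uniform: equal count in every band
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical counts for score bands 1-10, 11-20, ..., 91-100,
# skewed toward the high end as in the Dormitory histogram
counts = [2, 3, 4, 5, 6, 8, 10, 12, 15, 35]
stat = chi_square_uniform(counts)
print(round(stat, 1))  # 84.8

# With 9 degrees of freedom the 95% critical value is about 16.9,
# so a statistic this large decisively rejects uniformity.
```

The same calculation applied to the real CBECS counts (with appropriate survey weights) is the formal version of the eyeball test made on the histogram.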
It is now clear that two of the eleven Energy Star building models are invalid. Moreover, the problem I have identified (inflated scoring) for these two building types has been present since both scores were introduced in 2004. It is troubling to realize that these problems have gone undetected for nine years, particularly when it has long been known that the mean Energy Star score for all buildings whose data have been entered into Portfolio Manager is in the low 60s.
In future issues I will take a look at Energy Star scores for other building types and see whether those scores stand up to external scrutiny.