A few weeks ago NYC released Energy Benchmarking data for something like 15,000 buildings for 2013. 9500 of these buildings are classified as “Multifamily Housing” — the dominant property type for commercial buildings in NYC. While data from Multifamily Housing buildings were released by NYC last year, none included an ENERGY STAR building rating as the EPA had not yet developed a model for this type of building.
But a few months ago the EPA rolled out its ENERGY STAR building score for Multifamily Housing. So this latest benchmarking disclosure from NYC includes ENERGY STAR scores for 876 buildings of this type. (Apparently the vast majority of NYC’s multifamily buildings did not qualify to receive an ENERGY STAR score — probably because the appropriate parameters were not entered into Portfolio Manager.) Scores span the full range, from as low as 1 to as high as 100. But are these scores meaningful?
Earlier this year I published a paper summarizing my analysis of the science behind 10 of the EPA’s ENERGY STAR models for conventional building types, including Offices, K-12 Schools, Hotels, Supermarkets, Medical Offices, Residence Halls, Worship Facilities, Senior Care Facilities, Retail Stores, and Warehouses. What I found was that these scores were nothing more than placebos — numbers issued in a voluntary game invented by the EPA to encourage building managers to pursue energy efficient practices. The problem with all 10 of these models is that the data on which they are based are simply inadequate for characterizing the parameters that determine building energy consumption. As if this were not enough, the EPA compounded the problem by making additional mathematical errors in most of its models. The entire system is a “house of cards.” The EPA ignores this reality and uses these data to generate a score anyway. But the scores carry no scientific significance. ENERGY STAR certification plaques are as useful as “pet rocks.”
Nine of the 10 models I analyzed were based on public data obtained from the EIA’s Commercial Building Energy Consumption Survey (CBECS). Because these data were publicly available, these models could be replicated. The remaining model (Senior Care Facilities) was based on voluntary data gathered by a private trade organization — data that were not publicly available. I was able to obtain these data through a Freedom of Information Act (FOIA) request and, once obtained, confirmed that this model was also not based on good science.
Like the Senior Care Facility model, the EPA’s Multifamily Housing ENERGY STAR model is constructed on private data not open to public scrutiny. These data were gathered by Fannie Mae. It is my understanding that a public version of these data will become available in January 2015. Perhaps then I will be able to replicate the EPA’s model and check its veracity. Based on information the EPA has released regarding the Multifamily ENERGY STAR model I fully expect to find it has no more scientific content than any of the other building models I have investigated.
One of the problems encountered when building an ENERGY STAR score on data that are “volunteered” is that they are necessarily skewed. Put more simply, there is no reason to believe that data submitted voluntarily are representative of the larger building stock. ENERGY STAR scores are supposed to reflect a building’s energy efficiency percentile ranking as compared with similar buildings, nationally. When properly defined, these scores should be uniformly distributed across the national building stock. In other words, if you were to calculate ENERGY STAR scores for thousands of Multifamily Housing buildings across the nation, you would expect 10% of them to be in the top 10% (i.e., scores 91-100), 10% in the lowest 10% (i.e., scores 1-10), and so on. If this is not the case, then clearly the scores do not mean what we are told they mean.
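The uniformity claim follows directly from the definition of a percentile score. A minimal sketch with hypothetical data (the energy-use intensities here are randomly generated, not real buildings) shows that scores assigned by rank are uniform by construction:

```python
import random

# Hypothetical sample: 1000 buildings with made-up energy-use intensities (EUI).
random.seed(1)
n = 1000
eui = [random.gauss(100, 25) for _ in range(n)]

# Lower EUI = more efficient = higher score. Rank buildings from worst to best
# and map each rank onto a score from 1 to 100.
order = sorted(range(n), key=lambda i: eui[i], reverse=True)
score = [0] * n
for rank, i in enumerate(order):        # rank 0 = least efficient building
    score[i] = int(rank * 100 / n) + 1  # percentile score, 1..100

# Count how many buildings land in each decile of scores (1-10, 11-20, ..., 91-100).
decile_counts = [0] * 10
for s in score:
    decile_counts[(s - 1) // 10] += 1

print(decile_counts)  # each decile holds exactly n/10 = 100 buildings
```

A true percentile ranking cannot help but produce this flat histogram; any large, representative sample scored this way will show roughly 10% of buildings in every decile.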
With that in mind, it is interesting to look at the distribution of ENERGY STAR scores that were issued for the 900-or-so Multifamily Housing facilities in NYC’s 2013 benchmarking data. A histogram of these scores is shown below. The dashed line shows the expected result — a uniform distribution of ENERGY STAR scores. Instead we see that NYC has far more low and high scores than expected, and relatively fewer scores in the mid-range. 24% of NYC buildings have ENERGY STAR scores ranging from 91-100, more than twice the expected number. And 31% of its buildings have scores 1-10, more than 3X the expected number. Meanwhile only 12% have scores ranging from 41 to 90, a range where we would expect 50% of the buildings to fall.
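The mismatch between these percentages and a uniform distribution can be quantified with a standard chi-square goodness-of-fit test. The sketch below uses approximate per-bin counts implied by the percentages quoted above (n = 876; the exact counts are assumptions, and the 11-40 bin is inferred as the remainder):

```python
# Approximate observed counts derived from the reported percentages, versus the
# share of buildings a uniform (true percentile) distribution would place in each bin.
n = 876
bins = {
    "1-10":   (round(0.31 * n), 0.10),  # observed ~31%, expected 10%
    "11-40":  (round(0.33 * n), 0.30),  # remainder of the sample, expected 30%
    "41-90":  (round(0.12 * n), 0.50),  # observed ~12%, expected 50%
    "91-100": (round(0.24 * n), 0.10),  # observed ~24%, expected 10%
}

# Chi-square statistic: sum of (observed - expected)^2 / expected over all bins.
chi2 = 0.0
for label, (observed, p_expected) in bins.items():
    expected = p_expected * n
    chi2 += (observed - expected) ** 2 / expected

print(f"chi-square statistic: {chi2:.1f} (df = 3)")
```

The 99.9% critical value for 3 degrees of freedom is about 16.3; a statistic in the hundreds, as here, decisively rejects the hypothesis that these scores are draws from a uniform national percentile distribution.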
Of course it is possible that New York City just doesn’t have many “average” Multifamily Housing buildings. After all, this is a city of extremes — maybe it has lots of bad buildings and lots of great buildings but relatively few just so-so buildings. Maybe all the “so-so” buildings are found in the “fly-over states.”
I subscribe to the scientific principle known as Occam’s Razor. This principle says that when faced with several competing explanations for the same phenomenon, choose the simplest explanation rather than more complicated ones. The simplest explanation for the above histogram is that these ENERGY STAR scores do not, in fact, represent national percentile rankings at all. The EPA did not have a nationally representative sample of Multifamily Housing buildings on which to build its model, and its attempt to compensate for this failed. Until the EPA provides evidence to the contrary, this is the simplest explanation.
Interesting post. The “so-so buildings” must be in the fly-over states indeed!
After seeing the grade inflation for the college dorms, it’s surprising to see these low numbers for the multi-family buildings. Did the data come with basic info about square footage or units, as it did for the office buildings?
I don’t think I understand the question — please elaborate.