Why does the EPA publish false claims about its Medical Office ENERGY STAR model?

To say that someone “lied” is a strong claim.  It asserts not only that the statement is false but that the person making it knows it is false.

The EPA revised and updated its ENERGY STAR Technical Methodology document for Medical Office Buildings in November 2014.  That document makes the following claims:

  1. It describes filters used to extract 82 records from the 1999 CBECS.
  2. It claims that the model data contain no buildings less than 5,000 sf in size.
  3. Regarding the elimination of buildings under 5,000 sf, it writes, “Analytical filter – values determined to be statistical outliers.”
  4. It says the cumulative distribution for this model, from which ENERGY STAR scores are derived, is fit with a 2-parameter gamma distribution.

All of the above statements/descriptions are false.  The filters described by the EPA do not produce an 82-record dataset, and the dataset they do produce does not have the properties (min, max, and mean) described in Figure 2 of the EPA’s document.  Nor does a regression using the EPA’s variables on the dataset obtained with their stated filters produce the results listed in Figure 3 of the EPA’s document.  In short, this EPA document is a work of fiction.

I have published these facts previously in my August 2014 ACEEE paper entitled “ENERGY STAR Building Benchmarking Scores: Good Idea, Bad Science.”  Six months ago I sent copies of this paper to EPA staff responsible for the agency’s ENERGY STAR building program.

I have given the EPA the opportunity to supply facts supporting its claims by filing three Freedom of Information Act (FOIA) requests: the first (EPA-HQ-2013-00927) for the list of 1999 CBECS ID’s that correspond to their 82-building dataset, the second (EPA-HQ-2013-009668) for the alpha and beta parameters of the gamma distribution that fits their data, and the third (EPA-HQ-2013-010011) for documents justifying their exclusion of buildings under 5,000 sf from many models, including Medical Offices.  The EPA has closed the first two cases, indicating it could not find any documents with the requested information.  Seventeen months after filing the third request it remains open, and the EPA has provided no documents pertaining to the Medical Office model.  The EPA is publishing claims for which it has no supporting documents and that I have demonstrated are false.  The details of my analysis are posted on the web and were referenced in my ACEEE paper.

In November 2014 the EPA corrected errors in other Technical Methodology documents, yet it saw no need to correct or retract the Medical Office document.  Why is it so hard for the EPA to say they messed up?

It is common for scientists to correct mistakes by publishing “errata” or even withdrawing a previously published paper.  No doubt EPA staff once believed the document they published was correct.  But how is it possible the EPA remained unaware of the errors while it continued to publish and even revise this document for nearly a decade?  How can the EPA continue to publish such false information six months after it has been informed of the errors?

Is the EPA lying about its Medical Office building model?  I cannot say.  But it is clear that the EPA either has total disregard for the truth or it is incompetent.

If these folks worked for NBC they would have to join Brian Williams on unpaid leave for six months.  Apparently the federal government has a lower standard of competence and/or integrity.

District Department of the Environment premature in claiming energy savings

On January 28, 2015 the District of Columbia published the second year of energy benchmarking data collected from private buildings.  This year’s public disclosure applies to all commercial buildings 100,000 sf and larger, while last year’s covered buildings 150,000 sf and larger.  Data published are drawn from the EPA’s ENERGY STAR Portfolio Manager and include building details such as gsf and principal building activity, along with annual consumption for major fuels (electric, natural gas, steam), water, and calculated greenhouse gas emissions (associated with fuels).  Also published are annual site EUI (energy use intensity) and weather-normalized source EUI metrics, commonly used to assess building energy use.

The District Department of the Environment has analyzed these two years of data and concluded the following:

  • DC commercial buildings continue to be exceptionally efficient. The median reported ENERGY STAR® score for private commercial buildings in the District was 74 out of 100—well above the national median score of 50.
  • Buildings increased in efficiency from 2012 to 2013. Also, overall site energy use went up by 1.5% among buildings that reported 2012 and 2013 data. However, when accounting for weather impacts and fuel differences, the weather-normalized source energy use for the same set of buildings decreased by 3% in 2013.

These claims are simply unjustified.

In particular consider the second point — that 2013 source energy used by DC buildings is 3% lower than it was in 2012 — demonstrating improved energy efficiency.  This claim is based on weather-normalized source energy numbers produced by the EPA’s Portfolio Manager.  The problem is that the EPA lowered its site-to-source energy conversion factor for electricity from 3.34 to 3.14 in July 2013 — a 6% reduction.  Because of this simple change, any building with exactly the same energy purchases in 2013 as in 2012 will, according to Portfolio Manager, be using 4-6% less source energy in 2013 (depending on the amount of non-electric energy use).  In other words — the District finds its buildings used 3% less source energy in 2013 than in 2012 when, in fact, by doing nothing, all US buildings “saved” 4-6% in source energy over this same time frame.
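A minimal sketch of this arithmetic, for anyone who wants to check the range.  The electric factors (3.34 and 3.14) are from the text; the natural-gas factor of roughly 1.05 and the electric-fraction values are my illustrative assumptions:

```python
# Apparent source-energy "savings" caused solely by the EPA's change in its
# electric site-to-source conversion factor (3.34 -> 3.14 in July 2013).
# The gas factor (~1.05) and the electric fractions are assumptions for
# illustration; actual energy purchases are held constant across both years.

OLD_ELEC, NEW_ELEC = 3.34, 3.14
GAS = 1.05  # assumed site-to-source factor for non-electric (gas) energy

def apparent_reduction(elec_fraction):
    """Fractional drop in computed source energy for identical purchases."""
    old = OLD_ELEC * elec_fraction + GAS * (1 - elec_fraction)
    new = NEW_ELEC * elec_fraction + GAS * (1 - elec_fraction)
    return (old - new) / old

for f in (1.0, 0.75, 0.5):
    print(f"electric fraction {f:.2f}: {apparent_reduction(f):.1%} apparent savings")
# -> roughly 6.0%, 5.4%, 4.6%: the 4-6% range cited above, with no real change.
```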

It is said that “a rising tide lifts all boats.”  In this case the Washington DC boat did not rise quite as much as other boats.

More seriously, such small differences (1% – 3%) in average site or source energy are not resolvable within the statistical uncertainty of these numbers.  The standard deviations of the 2012 and 2013 mean site and source EUI for DC buildings are too large to rule out the possibility that such small changes are simply accidental, rather than reflective of any trend.  Scientists would know that.  Politicians would not — nor would they care, if it makes for a good sound bite.

Let me now address the other claim.  It may well be true that the median ENERGY STAR score for district buildings is 74.  I cannot confirm this – but I have no reason to doubt its veracity. But there are no data to support the assumption that the median ENERGY STAR score for all commercial buildings is 50.  All evidence suggests that the national median score is substantially higher — in the 60-70 range, depending on the building type.  My recent analysis shows that the science that underpins these ENERGY STAR scores is wanting.  ENERGY STAR scores have little or no quantitative value and certainly DO NOT indicate a building’s energy efficiency ranking with respect to its national peer group — despite the EPA’s claims to the contrary.

The claim that the median score for US buildings is 50 is similar to claiming that the median college course grade is a “C.”  Imagine your daughter comes home from college and says, “My GPA is 2.8 (C+), which is significantly higher than the (presumed) median grade of 2.0 (C).  You should be very proud of my performance.”  The problem is the actual median college grade is much closer to 3.3 (B+).  It’s called grade inflation.  It’s gone on for so many years that we all know the median grade is not a “C.”  Until recently ENERGY STAR scores were mostly secret — so the score inflation was not so apparent. But the publication of ENERGY STAR scores for large numbers of buildings as a result of laws such as those passed in Washington DC has removed the cloak — and the inflation is no longer hidden.

ENERGY STAR scores are no more than a “score” in a rating game whose ad hoc rules are set by the EPA in consultation with constituency groups.  The game seems to have motivational value, and there is nothing wrong with building owners voluntarily agreeing to play it.  But like fantasy football, it is not to be confused with the real game.

2013 NYC Benchmarking Raises Questions about EPA’s new Multifamily Housing Model

A few weeks ago NYC released energy benchmarking data for something like 15,000 buildings for 2013.  About 9,500 of these buildings are classified as “Multifamily Housing” — the dominant property type for commercial buildings in NYC. While data from Multifamily Housing buildings were released by NYC last year, none included an ENERGY STAR building rating, as the EPA had not yet developed a model for this type of building.

But a few months ago the EPA rolled out its ENERGY STAR building score for Multifamily Housing.  So this latest benchmarking disclosure from NYC includes ENERGY STAR scores for 876 buildings of this type.  (Apparently the vast majority of NYC’s multifamily buildings did not qualify to receive an ENERGY STAR score — probably because the appropriate parameters were not entered into Portfolio Manager.)  Scores span the full range, some being as low as 1 and others as high as 100.  But are these scores meaningful?

Earlier this year I published a paper summarizing my analysis of the science behind 10 of the EPA’s ENERGY STAR models for conventional building types: Offices, K-12 Schools, Hotels, Supermarkets, Medical Offices, Residence Halls, Worship Facilities, Senior Care Facilities, Retail Stores, and Warehouses.  What I found was that these scores were nothing more than placebos — numbers issued in a voluntary game invented by the EPA to encourage building managers to pursue energy efficient practices.  The problem with all 10 of these models is that the data on which they are based are simply inadequate for characterizing the parameters that determine building energy consumption.  As if this were not enough, the EPA compounded the problem by making additional mathematical errors in most of its models.  The entire system is a “house of cards.”  The EPA ignores this reality and uses these data to generate a score anyway.  But the scores carry no scientific significance.  ENERGY STAR certification plaques are as useful as “pet rocks.”

Most of the above 10 models I analyzed were based on public data obtained from the EIA’s Commercial Building Energy Consumption Survey (CBECS).  Because these data were publicly available these models could be replicated.  One of the models (Senior Care Facilities) was based on voluntary data gathered by a private trade organization — data that were not publicly available. I was able to obtain these data through a Freedom of Information Act (FOIA) request and, once obtained, confirmed that this model was also not based on good science.

Like the Senior Care Facility model, the EPA’s Multifamily Housing ENERGY STAR model is constructed on private data not open to public scrutiny.  These data were gathered by Fannie Mae.  It is my understanding that a public version of these data will become available in January 2015.  Perhaps then I will be able to replicate the EPA’s model and check its veracity.  Based on information the EPA has released regarding the Multifamily ENERGY STAR model I fully expect to find it has no more scientific content than any of the other building models I have investigated.

One of the problems encountered when building an ENERGY STAR score on data that are “volunteered” is that they are necessarily skewed.  Put more simply, there is no reason to believe that data submitted voluntarily are representative of the larger building stock.  ENERGY STAR scores are supposed to reflect a building’s energy efficiency percentile ranking as compared with similar buildings, nationally.  When properly defined, one expects these scores to be uniformly distributed in the national building stock.  In other words, if you were to calculate ENERGY STAR scores for thousands of Multifamily Housing buildings across the nation, you expect 10% of them to be in the top 10% (i.e., scores 91-100), 10% in the lowest 10% (i.e., scores 1-10), and so on.  If this is not the case then clearly the scores do not mean what we are told they mean.

Meanwhile, it is interesting to look at the distribution of ENERGY STAR scores that were issued for the 900-or-so Multifamily Housing facilities in NYC’s 2013 benchmarking data.  A histogram of these scores is shown below.  The dashed line shows the expected result — a uniform distribution of ENERGY STAR scores.  Instead we see that NYC has far more low and high scores than expected, and relatively fewer scores in the mid-range.  24% of NYC buildings have ENERGY STAR scores ranging from 91-100, more than twice the expected number.  And 31% of its buildings have scores 1-10, more than 3X the expected number.  Meanwhile only 12% have scores ranging from 41 to 90.  We expect 50% of the buildings to have scores in this range.

[Figure: histogram of 2013 ENERGY STAR scores for NYC Multifamily Housing buildings]
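For readers who want to make “far more than expected” quantitative, a chi-square test of the decile counts against a uniform distribution will do.  The counts below are illustrative, constructed to reproduce the shares quoted above (31% with scores 1-10, 24% with scores 91-100, 12% with scores 41-90) for the 876 scored buildings; the actual counts would come from the NYC benchmarking file:

```python
# Chi-square test of ENERGY STAR score deciles against the uniform
# distribution that true national percentile rankings would follow.
# Decile counts are illustrative, chosen to match the shares quoted in
# the text for the 876 scored NYC Multifamily Housing buildings.
from scipy.stats import chisquare

observed = [272, 120, 95, 74, 25, 20, 20, 20, 20, 210]  # scores 1-10, ..., 91-100
expected = [sum(observed) / 10] * 10  # uniform: 10% of buildings per decile

stat, p = chisquare(observed, expected)
print(f"chi-square = {stat:.0f}, p = {p:.2e}")
# A distribution this lopsided rejects uniformity at any sensible
# significance level; genuine percentile scores should not look like this.
```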

Of course it is possible that New York City just doesn’t have many “average” Multifamily Housing buildings.  After all, this is a city of extremes — maybe it has lots of bad buildings and lots of great buildings but relatively few just so-so buildings.  Maybe all the “so-so” buildings are found in the “fly-over states.”

I subscribe to the scientific principle known as Occam’s Razor.  This principle basically says that when faced with several competing explanations for the same phenomenon, choose the simplest explanation rather than more complicated ones.  The simplest explanation for the above histogram is that these ENERGY STAR scores do not, in fact, represent national percentile rankings at all.  The EPA did not have a nationally representative sample of Multifamily Housing buildings on which to build its model, and its attempt to compensate for this failed.  Until the EPA provides evidence to the contrary — this is the simplest explanation.


EPA’s ENERGY STAR building benchmarking scores have little validity

I have been spending this week at the American Council for an Energy Efficient Economy’s (ACEEE) Summer Study on Energy Efficiency in Buildings. Yesterday I presented a paper that summarizes my findings from an 18-month study of the science behind the EPA’s ENERGY STAR building rating systems.

The title of my paper, “ENERGY STAR building benchmarking scores: good idea, bad science,” speaks for itself.  I have replicated the EPA’s models for 10 of their 11 conventional building types: Residence Hall/Dormitory, Medical Office, Office, Retail Store, Supermarket/Grocery, Hotel, K-12 School, House of Worship, Warehouse, and Senior Care.  I have not yet analyzed the Hospital model — but I have no reason to believe the results will be different. (Data for this model were not available at the time I was investigating other models.  I have since obtained these data through a Freedom of Information Act request but have not yet performed the analysis.)

There are many problems with these models that cause the ENERGY STAR scores they produce to be both imprecise (i.e., having large random uncertainty in either direction) and inaccurate (i.e., wrong due to errors in the analysis).  The bottom line is that, for each of these models, the ENERGY STAR scores they produce are uncertain by about 35 points! That means there is no statistically significant difference between a score of 50 (the presumed mean for the US commercial building stock) and 75 (an ENERGY STAR certifiable building).  It also means that any claims made for energy savings based on these scores are simply unwarranted.  The results are summarized by the abstract of my paper, reproduced below.

Abstract

The EPA introduced its ENERGY STAR building rating system 15 years ago. In the intervening years it has not defended its methodology in the peer-reviewed literature nor has it granted access to ENERGY STAR data that would allow outsiders to scrutinize its results or claims. Until recently ENERGY STAR benchmarking remained a confidential and voluntary exercise practiced by relatively few.

In the last few years the US Green Building Council has adopted the building ENERGY STAR score for judging energy efficiency in connection with its popular green-building certification programs. Moreover, ten US cities have mandated ENERGY STAR benchmarking for commercial buildings and, in many cases, publicly disclose resulting ENERGY STAR scores. As a result of this newfound attention, the validity of ENERGY STAR scores and the methodology behind them has elevated relevance.

This paper summarizes the author’s 18-month investigation into the science that underpins ENERGY STAR scores for 10 of the 11 conventional building types. Results are based on information from EPA documents, communications with EPA staff and DOE building scientists, and the author’s extensive regression analysis.

For all models investigated ENERGY STAR scores are found to be uncertain by ±35 points. The oldest models are shown to be built on unreliable data and newer models (revised or introduced since 2007) are shown to contain serious flaws that lead to erroneous results. For one building type the author demonstrates that random numbers produce a building model with statistical significance exceeding those achieved by five of the EPA building models.

In subsequent posts I will elaborate on these various findings.
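In the meantime, here is a toy sketch of what a ±35-point uncertainty does to the 50-versus-75 comparison.  Treating ±35 as the half-width of each score’s uncertainty band is my simplification, made only for illustration:

```python
# If each ENERGY STAR score carries an uncertainty of about +/-35 points,
# the bands around a "median" building (score 50) and a "certifiable"
# building (score 75) overlap heavily, so the two cannot be distinguished.

def band(score, half_width=35):
    """Uncertainty band, clipped to the 1-100 score range."""
    return (max(1, score - half_width), min(100, score + half_width))

a, b = band(50), band(75)
print(f"score 50 -> {a}, score 75 -> {b}")
print(f"overlap: {(max(a[0], b[0]), min(a[1], b[1]))}")  # (40, 85): 45 shared points
```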

DC Benchmarking data show modest energy savings for LEED buildings

A few months ago Washington DC released its 2012 energy benchmarking data for private commercial buildings 150,000 sf and larger.  Credible energy and water consumption data for some 400 buildings were released, of which 246 were office buildings.  A recent article — stemming from the web site LEED Exposed — has claimed that these data show LEED buildings use more energy than non-LEED buildings.  Specifically it is claimed that LEED buildings have an average weather normalized source EUI of 205 kBtu/sf whereas non-LEED buildings have an average EUI of 199 kBtu/sf.   No details are provided to support this claim.

My students and I have cross-listed the DC benchmarking data with the USGBC LEED Project directory and identified 94 LEED-certified buildings in the 2012 DC benchmarking dataset — all but one classified as office buildings.  The unweighted mean weather-normalized source EUI for these 94 LEED-certified buildings is 202 kBtu/sf.  The unweighted mean weather-normalized source EUI for the remaining 305 buildings is 198 kBtu/sf.  No doubt this is the basis for the claim that LEED buildings use more energy than non-LEED.  However, the difference is not statistically significant.
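For those who want to see the mechanics behind “not statistically significant,” a two-sample (Welch) test sketch follows.  The means and group sizes are from the text; the standard deviations are hypothetical stand-ins, since only the means were published:

```python
# Welch's t-test for the difference between LEED and non-LEED mean
# weather-normalized source EUIs (202 vs. 198 kBtu/sf). The standard
# deviations below are assumptions inserted only to show the mechanics;
# the DC benchmarking file would supply the real spreads.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=202, std1=60, nobs1=94,   # LEED-certified buildings (std assumed)
    mean2=198, std2=60, nobs2=305,  # remaining buildings (std assumed)
    equal_var=False,                # Welch's version: unequal variances
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2f}")
# With any plausible spread, a 4 kBtu/sf gap on a ~200 kBtu/sf mean is
# nowhere near significant.
```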

Moreover, the non-LEED dataset, in addition to 154 office buildings, contains 64 (unrefrigerated) warehouses and 90 multifamily housing buildings — all of which use significantly less energy than do office buildings.  The comparison of these two average EUI is not useful — just a meaningless sound bite.

It should also be noted that the unweighted mean EUI for a collection of buildings is an inappropriate measure of their total energy consumption.  The appropriate measure of energy consumption is their gross energy intensity — their total source energy divided by the total gross square footage.  This issue has been discussed in several papers [2008 IEPEC; 2009 Energy & Buildings].
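The distinction is easy to compute.  A minimal sketch, with made-up buildings chosen to exaggerate the effect:

```python
# Gross energy intensity (total source energy / total gsf) vs. the
# unweighted mean EUI. The two diverge whenever large and small buildings
# have different EUIs. The buildings below are invented for illustration.

buildings = [
    # (gsf, source EUI in kBtu/sf)
    (1_000_000, 250),  # one large, energy-intensive office
    (50_000, 120),     # three small, lighter-use buildings
    (50_000, 110),
    (50_000, 100),
]

unweighted_mean = sum(eui for _, eui in buildings) / len(buildings)
gross = sum(gsf * eui for gsf, eui in buildings) / sum(gsf for gsf, _ in buildings)

print(f"unweighted mean EUI:    {unweighted_mean:.0f} kBtu/sf")  # 145
print(f"gross energy intensity: {gross:.0f} kBtu/sf")            # 232
```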

Note that an apples-to-apples comparison of energy consumed by one set of buildings to that consumed by another requires that the two sets contain the same kinds of buildings in similar proportions.  When possible this is best accomplished by sticking to one specific building type. Since office buildings are far and away the most common in both datasets it makes sense to make an office-to-office comparison — pun intended. 

93 of the LEED-certified buildings are offices.  But many of these buildings were not certified during the period for which data were collected.  Some were certified during 2012 and others were not certified until 2013 or 2014.  Only 46 of the office buildings were certified before Jan. 1, 2012 and are then expected to demonstrate energy and GHG emissions savings for 2012.

The 2012 gross weather-normalized source energy intensity for the 46 LEED certified office buildings is 191 kBtu/sf.  This is 16% lower than the gross weather-normalized source energy intensity for the 154 non-certified office buildings in the dataset, 229 kBtu/sf.  These modest savings are real and statistically significant, though much lower than the 30-40% savings routinely claimed by the USGBC.

Note that similar savings were not found in 2011 or 2012 NYC energy benchmarking data. Analysis of these data showed that LEED-certified office buildings in NYC used the same amount of primary energy and emitted no less greenhouse gas than did other large NYC office buildings.  So the 2012 results from Washington DC are significantly different.  It should be noted that NYC office buildings certified at the gold level were found to exhibit similar modest energy savings.  Perhaps this is a clue as to why Washington DC LEED buildings show energy savings.  More analysis is required.

For the last few years the USGBC has pointed to ENERGY STAR scores for LEED certified buildings as evidence of their energy efficiency.  While ENERGY STAR scores have two important characteristics — they use source rather than site energy and they are based on actual energy measurements — they simply do not have a sound scientific basis.  The science has never been vetted, and my own analysis shows these scores are little more than placebos to encourage energy efficiency.  They certainly do not have any quantitative value.

So to summarize, in 2012 LEED offices in Washington used 16% less source energy than did other office buildings in DC.  What this means and whether such savings justify the added costs of LEED are open questions.

USGBC Continues to “cherry pick” LEED energy data

At the 2007 GreenBuild Conference the USGBC released the results of their first major study of energy consumption by LEED-certified buildings.  There they presented conclusions from the now infamous study conducted by the New Buildings Institute (paid for by the USGBC and EPA) which, based on data “volunteered by willing building owners” for only 22% of the eligible buildings certified under LEED NC v.2, concluded that LEED-certified buildings, on average, were demonstrating the anticipated 25-30% savings in (site) energy.

NBI’s analysis and conclusions were subsequently discredited in the popular media by Henry Gifford and in the peer-reviewed literature by me [see IEPEC 2008 and Energy & Buildings 2009].  NBI’s analytical errors included:

  1. comparing the median of one energy distribution to the mean of another;
  2. comparing energy used by a medium-energy subset of LEED buildings with that used by all US commercial buildings (which included types of buildings removed from the LEED set);
  3. improperly calculating the mean (site) energy intensity for LEED buildings and comparing it with the gross mean energy intensity from CBECS;
  4. looking only at building energy used on site (i.e., site EUI) rather than at on- and off-site energy use (i.e., source EUI).

To NBI’s credit they made their summary data available to others for independent analysis with no “strings attached.”  In the end, even the data gathered by NBI — skewed towards the “better performing” LEED buildings by the method of collection — demonstrated, when properly analyzed, no source energy savings by LEED buildings.  LEED office buildings demonstrated site energy savings of 15-17% — about half that claimed by NBI, the difference being associated with NBI’s improper averaging method.  This site energy savings did not translate into a source energy savings because LEED buildings, on average, used relatively more electric energy, and the off-site losses associated with this increased electric use wiped out the on-site energy savings.

The lack of representative building energy data was addressed in LEED v.3 (2009) by instituting a requirement that all LEED certified buildings supply the USGBC with annual energy consumption data for five years following certification.  Never again would the USGBC have to publish conclusions based on data volunteered by 1 in 5 buildings.  Expectations were high.

But what has this produced?  The USGBC has learned from their experience with NBI — not to hand over such an important task to an outside organization because you can’t control the outcome.  NBI’s analysis was scientifically flawed — but it was transparent, and such transparency gave critics ammunition to reach different conclusions.  Nowadays the USGBC simply issues carefully packaged sound bites without supplying any details to support their conclusions.  There isn’t even a pretense of conducting scientifically valid analysis.

Consider the most recent claims made by the USGBC at the 2013 Greenbuild conference, summarized by Tristan Roberts in “LEED buildings above average in latest energy data release.”  Roberts asserts the following:

  1. The USGBC has received energy data from 1,861 certified buildings for the 12-month period July 2012 – June 2013;
  2. About 70% of these were certified through LEED-EBOM (existing buildings);
  3. 450 of these buildings reported their data through the EPA’s Portfolio Manager;
  4. The “building-weighted” (or un-weighted) average source EUI for these 450 buildings is 158 kBtu/sf;
  5. This average is 31% lower than the national median source EUI;
  6. 404 (of the 450) buildings above were eligible for (and received) ENERGY STAR scores;
  7. The average ENERGY STAR score for these 404 buildings was 85.

In addressing the above claims it is hard to know where to begin.  Let’s start with the fact that the USGBC only provides energy information for 450 (or 24%) of the 1,861 buildings for which it has gathered data.  Is this simply because it is easier to summarize data gathered by Portfolio Manager than data collected manually?  If so I willingly volunteer my services to go through the data from all 1,861 buildings so that we can get a full picture of LEED building energy performance — not just a snapshot of the 24% of buildings that “self-select” to benchmark through Portfolio Manager.  (The EPA has previously asserted that buildings that benchmark through Portfolio Manager tend to be skewed towards “better performing” buildings and are not a random snapshot of commercial buildings.)

Next, consider the “un-weighted” source EUI figure for the 450 buildings.  This is a useless metric.  All EUI reported by CBECS for sets of buildings are “gross energy intensities” equivalent to the gsf-weighted mean EUI (not the un-weighted or building-weighted mean EUI).  This was a major source of error in the 2008 NBI report — leading NBI to incorrectly calculate a 25-30% site energy savings rather than the actual 15-17% site energy savings achieved by that set of LEED buildings.

Consider the assertion that the 158 kBtu/sf source EUI figure is 31% lower than the median source EUI (presumably for all US commercial buildings).  To be correct this would require the median source EUI for all US commercial buildings to be 229 kBtu/sf.  This is rubbish.  The best way to obtain such a median EUI figure is from the 2003 CBECS data.  The Energy Information Administration (EIA) does not report source energy figures in any of its CBECS reports.  But the EIA does report site and primary electric energy used by buildings, and these may be combined to calculate source EUI for all 2003 CBECS sampled buildings.  This yields a median source EUI of 118 kBtu/sf for the estimated 4.9 million commercial buildings.  If you instead restrict the calculation to buildings with non-zero energy consumption, you find these estimated 4.6 million buildings have a median source EUI of 127 kBtu/sf — way below the 229 kBtu/sf figure asserted by the USGBC.  This USGBC claim is patently false.  Of course the USGBC may be referring to the median source EUI of some unspecified subset of U.S. buildings.  By choosing an arbitrary subset you can justify any claim.  And if you don’t specify the subset — well, the claim is nothing more than noise.
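Two bits of arithmetic back this up.  The implied median follows from the USGBC’s own numbers; the CBECS check is sketched with placeholder column names, since the real 2003 microdata use the EIA’s own variable codes and require its sample weights:

```python
import pandas as pd

# 1) Median implied by the USGBC claim that 158 kBtu/sf is "31% lower":
implied_median = 158 / (1 - 0.31)
print(f"implied national median source EUI: {implied_median:.0f} kBtu/sf")  # ~229

# 2) Sketch of the CBECS-based check. Column names are placeholders for
#    the EIA's actual variable codes; a faithful calculation would also
#    apply the CBECS sample weights, omitted here for brevity.
def median_source_eui(df: pd.DataFrame) -> float:
    # source energy = non-electric site energy + primary (source) electricity
    source_kbtu = (df["site_kbtu"] - df["site_elec_kbtu"]) + df["primary_elec_kbtu"]
    return (source_kbtu / df["gsf"]).median()
```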

What about the average ENERGY STAR score of 85?  Is this impressive?  The answer is no.  Even if you believed that ENERGY STAR scores were, themselves, meaningful, such an average would still mean nothing.  ENERGY STAR scores are supposed to represent percentile rankings in the U.S. building population.  Since there are 4.8 million buildings, by definition we would expect 10% of these (or 480,000) to rank in the top 10%, and we would expect another 480,000 to rank in the bottom 10%.  That means that if 1,861 buildings are chosen at random from the building population, we expect 10% of these to have ENERGY STAR scores from 91-100.  Similarly, we expect 30% of these (or 558) to have ENERGY STAR scores ranging from 71-100.  Guess what — the average ENERGY STAR score of these 558 buildings is expected to be 85.  Only those who are mathematically challenged should be impressed that the USGBC has found 404 buildings in its set of 1,861 with an average ENERGY STAR score of 85.  If you cherry-pick your data you can demonstrate any conclusion you like.
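The “85” arithmetic is easy to verify: for scores uniform on 1-100, the slice from 71 to 100 has a mean of (71 + 100)/2 = 85.5.  A quick simulation, using the sizes from the text:

```python
# If ENERGY STAR scores were true national percentile ranks they would be
# uniform on 1-100. Draw 1,861 such scores at random; the ones falling in
# 71-100 (about 30%, i.e. ~558 buildings) average ~85 automatically.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 101, size=1_861)  # uniform integers on 1..100
top = scores[scores >= 71]
print(f"{top.size} buildings scored 71-100, mean score {top.mean():.1f}")
# ~558 buildings with a mean near 85.5: exactly the USGBC's "impressive" result.
```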

And, of course, these 1,861 buildings are not chosen at random — they represent buildings whose owners have a demonstrated interest in energy efficiency apart from LEED.  I would guess that the vast majority of the 404 buildings were certified under the EBOM program and have used Portfolio Manager to benchmark their buildings long before they ever registered for LEED.  LEED certification is just another trophy to be added to their portfolio.  No doubt their ENERGY STAR scores in previous years were much higher than 50 already.  What was the value added by LEED?

I openly offer my services to analyze the USGBC energy data in an unbiased way to accurately assess the collective site and source energy savings by these LEED buildings.  How about it, Brendan Owens (VP of technical development for USGBC) — do you have enough confidence in your data to take the challenge?  Which is more important to you, protecting the LEED brand or scientific truth?

ENERGY STAR energy benchmarking is not ready for prime time

I recently had occasion to read an old paper by Janda and Brodsky describing the “first class” of ENERGY STAR certified office buildings.  This is one of only a handful of papers in the peer-reviewed literature regarding ENERGY STAR building scores.  Janda and Brodsky describe the brand name ENERGY STAR as

a set of voluntary partnerships between the U.S. government and product manufacturers, local utilities, home builders, retailers, and businesses.  These partnerships are designed to encourage energy efficiency in products, appliances, homes, offices, and other buildings.

This was the basis for the EPA’s building ENERGY STAR scoring system.  It was a “game” that building managers voluntarily agreed to play with rules (methodology for scoring buildings) set by the EPA in consultation with those playing the game.  There was no scientific vetting of the “rules of the game” — nor did there need to be — it was just a game designed to “encourage energy efficiency.”  No one was forced to play the game.  Data submitted to Portfolio Manager (the EPA’s web-based tool for calculating scores) and ENERGY STAR scores issued by the EPA were confidential — unless a building sought and received ENERGY STAR certification.  Participation was entirely voluntary.  Building managers disappointed with their ENERGY STAR scores could just walk away from the game — no harm, no foul.

But this has all changed.  In recent years 1) the EPA has published specific claims regarding energy savings associated with its ENERGY STAR benchmarking program (real savings not just fantasy football), 2) external organizations like the USGBC have adopted the ENERGY STAR score as their metric for energy efficiency in green building certification programs and are using these scores to make energy savings claims of their own, and 3) major U.S. cities have passed laws requiring commercial building owners to use Portfolio Manager to benchmark their buildings and, in many cases, the resulting ENERGY STAR scores are being made public.  With federal, state, and local governments requiring LEED certification for public buildings this is no longer a voluntary game — it is mandatory and real (testable) energy claims are being made based upon ENERGY STAR scores.  Now the science behind such claims actually matters — and this science has never been vetted.

It’s kinda like a small “mom and pop” operation that has been selling chicken soup using “grandma’s recipe” without obtaining a proper license or FDA approval.  Now imagine Walmart decides to market and sell the soup — the scrutiny changes.

As a voluntary game with no connection to reality it is OK that the EPA negotiates rules for its ENERGY STAR ratings with different constituents — like allowing Washington DC office buildings to ignore their “first floors” in seeking ENERGY STAR certification.  After all, who am I to interfere in the activities between consenting adults when these activities do not affect me?  But for ENERGY STAR — these days are gone.

In the next year we will learn much about the science that underpins the EPA’s ENERGY STAR benchmarking system — and the results are likely to be very disappointing.  This benchmarking system is not ready for prime time.

The EPA doesn’t know the basis for its own ENERGY STAR building model

The US Environmental Protection Agency (EPA) issues ENERGY STAR building ratings for 11 different kinds of commercial buildings.  The so-called Technical Methodology for each of these building ratings is described in documents posted on the EPA web site.  Presumably anyone can work through the details of these technical documents to duplicate the EPA’s methodology.

But this is not the case for one of the models — that for Medical Office buildings.  If you follow the instructions set forth in the EPA’s document for extracting the building records from the 1999 CBECS on which this model is based you do not obtain the list of 82 buildings the EPA claims are the basis for this model.  Instead you obtain a list of 71 buildings.  Furthermore, if you calculate the mean properties of this set of 71 buildings you do not obtain those published by the EPA for this building set.  And finally, if you perform the regression the EPA says it has applied to these buildings you obtain different results than those published by the EPA.  In short, it is clear that the EPA’s Technical Methodology document for Medical Offices does not correctly describe their model.

I have petitioned the EPA through the Freedom of Information Act to supply the list of CBECS 1999 building ID’s used in this model (EPA-HQ-2013-009270).  The EPA has responded that it does not have this list.  This means that the EPA has not only incorrectly described its own Medical Office model — it does not even know what the basis for this model is!  Its document describing the Technical Methodology for this model is fiction — just like the ENERGY STAR scores the EPA hands out for Medical Office buildings.

NYC Energy Benchmarking raises questions about LEED-certification

With growing concern over global climate change and the US Federal government frozen in political gridlock, a number of U.S. cities have decided to unilaterally take action to reduce their own greenhouse gas (GHG) emission.  Any serious effort to reduce GHG emission must involve the implementation of some kind of system to track energy consumption.  To this end these same cities have instituted Energy Benchmarking laws — laws that require building owners to annually submit energy consumption data (by fuel) to a designated agency that collects and processes these data.  The Institute for Market Transformation (IMT) has been instrumental in coordinating this effort.

The requirement is typically phased in over a couple of years — starting with municipal buildings, followed by large commercial buildings, smaller commercial buildings, and finally residential buildings.  New York City, Philadelphia, Washington DC, San Francisco, Austin, and Seattle were the first to pass such ordinances.  Minneapolis, Chicago, and Boston have all taken steps to follow suit.

Public disclosure of energy data is an important component of many (but not all) of these local ordinances.  New York City (NYC) is further along than other cities and last October released 2011 energy benchmarking data for commercial buildings that were 50,000 sf or larger — excluding condominiums.  Public benchmarking data were released for more than 4,000 large commercial buildings in NYC’s five boroughs.  NYC, like many of the other cities engaged in benchmarking, utilized the EPA’s ENERGY STAR Portfolio Manager for gathering and processing benchmarking data.  Data released included building address, building type, total gsf, site energy intensity, weather-normalized source energy intensity, water usage, and total GHG emission.

The NYC benchmarking data included data for more than 1,000 office buildings.  Some of these buildings are certified green buildings, and so would be expected to use less energy and have less GHG emission than other NYC office buildings.  These green buildings are not identified in the NYC benchmarking data, but many may be identified by searching other databases — such as the US Green Building Council’s LEED project database or the EPA’s list of ENERGY STAR certified buildings.

A few dozen LEED-certified office buildings have been identified in the 2011 NYC benchmarking database.  (The full peer-reviewed paper is to be published in Energy and Buildings.)  Of these, 21 were certified before 2011 under the new construction (NC), existing buildings operation and maintenance (EB:O&M), or core and shell (CS) LEED programs, which address whole-building energy use.  These 21 buildings constitute 21.6 million gsf.  Their 2011 source energy consumption and GHG emission have been compared with those for the other NYC office buildings, with rather surprising results.  The LEED-certified office buildings, collectively, are responsible for 3% more source energy consumption and GHG emission than other large NYC office buildings (adjusted for total gsf, of course).  The graph below compares source energy intensity histograms for the two building sets.

[Figure: source energy intensity histograms for the 21 LEED-certified office buildings vs. other large NYC office buildings]

The graph shows that the difference in the mean source energy intensities of the two building sets is not statistically meaningful.  In other words, the source energy consumption and greenhouse gas emission of these LEED-certified office buildings are no different from those of other NYC office buildings — no more and no less.

As of a few months ago there were something like 8,300 buildings certified under one of the LEED programs that claim to reduce whole-building energy use.  Measured energy consumption data have been published for 3% of these (now about 250).  While many of these LEED buildings surely save energy, many do not.  Collectively the evidence suggests that LEED certification does not produce any significant reduction in primary energy use or GHG emission.

Why then does the Federal Government — and other governments (including NYC) — require new government buildings to be LEED certified?  The Food and Drug Administration (FDA) would never certify a medical drug with so little scientific evidence offered — let alone require its use.  The standards here are inverted — apparently the Federal Government believes convincing scientific data must be offered to demonstrate that LEED-certified buildings do not save energy before it will change its policy.

Do Buildings that use Energy Star’s Portfolio Manager save energy?

The EPA regularly puts out press releases claiming the amount of energy that has been saved nationally by its Energy Star program.  In its October 2012 Data Trends publication entitled “Benchmarking and Energy Savings” the EPA writes the following:

Do buildings that consistently benchmark energy performance save energy? The answer is yes, based on the large number of buildings using the U.S. Environmental Protection Agency’s (EPA’s) ENERGY STAR Portfolio Manager to track and manage energy use.

After making this claim the EPA offers the following supporting evidence.

Over 35,000 buildings entered complete energy data in Portfolio Manager and received ENERGY STAR scores for 2008 through 2011, which represents three years of change from a 2008 baseline. These buildings realized savings every year, as measured by average weather-normalized energy use intensity and the ENERGY STAR score, which accounts for business activity. Their average annual savings is 2.4%, with a total savings of 7.0% and score increase of 6 points over the period of analysis.

What does this mean?  Does this mean that every one of the 35,000 buildings in question saw energy savings?  Impossible – over time some buildings saw their energy use go up and others saw it go down.  The statement clearly refers to an average result.  But what is being averaged?  The EPA is referring to the average (weather-normalized) source energy intensity (EUI) for these 35,000 buildings — saying that it has decreased by 7% over three years.  In addition it points out that the average Energy Star score for these buildings has increased by 6 points over three years.  The graphs below summarize these trends.

[Figure: EPA Data Trends graphs of average weather-normalized source EUI and average ENERGY STAR score, 2008-2011]

So here is the problem.  The average EUI for a set of N buildings has nothing to do with the total energy used by these buildings.  The average EUI could go down while the total energy use goes up, and vice versa.  Some buildings see their EUI go up – and these buildings use more energy – and some see their EUI go down – and these buildings use less energy.  But you cannot determine whether more or less energy is used in total without calculating the actual energy saved or lost by each building – and this requires that you know more than the energy intensity (EUI) — you must also factor in each building’s size or gsf.  This set of 35,000 buildings includes buildings that are 5,000 sf in size and others that are 2,000,000 sf in size – a factor of 400 larger.  The EPA calculates mean EUI by treating every building equally.  But each building does not contribute equally to the total energy – bigger buildings use more energy.  (The EPA has employed the methodology used by the New Buildings Institute in its now discredited 2008 study of LEED buildings.)

It may be that these 35,000 buildings, in total, save energy.  But we don’t know and the EPA has offered no evidence to show that they do.  Moreover, I have asked the EPA to supply this evidence and they refuse to do so.  It is an easy calculation – but they choose not to share the result.  You can bet they have performed this calculation – why do you suppose they don’t share the result?

Now turn to the increased average Energy Star score.  There is actually no connection whatsoever between the average Energy Star score for a set of buildings and their total energy use.  For a single building, its Energy Star score, combined with its measured EUI and gsf, allows you to calculate the energy it saved as compared with its predicted energy use.  Readers might be surprised to learn that a building’s Energy Star score can go up while its energy use rises as well.

But for a collection of buildings no such relationship exists.  If they are all one type of building (for instance, all dormitories) you can combine their individual scores with their individual gsf and their individual EUI to learn something about their total energy – but absent this additional information it is hopeless.  And if the buildings are from more than one building type there is absolutely no meaning to their average Energy Star Score.  Such statistics are intended only to impress the ignorant.

The EPA, therefore, has presented no evidence to support the claim that buildings that are regularly scored in Portfolio Manager collectively save energy.  Instead they have offered meaningless sound bites — claims that sound good but have no scientific relevance.

It is easy to see the problem by considering a simple case — two buildings – one a 100,000 sf office building and the other a 10,000 sf medical office.  Suppose in year 1 the office building has an EUI of 100 kBtu/sf and an Energy Star score of 60, while in year 2 it has an EUI of 120 kBtu/sf and an Energy Star score of 58.  Suppose that the medical office building in year 1 has an EUI and Energy Star score of 140 kBtu/sf and 50, respectively, and in year 2 an EUI of 100 kBtu/sf and an Energy Star score of 60.

In this simple example the “average EUI” for year 1 is 120 kBtu/sf and for year two is 110 kBtu/sf – by the EPA’s standards, an 8% energy savings.  But when you work out the numbers you find their combined energy use in year two actually rose by 14%.  Surely EPA officials understand the difference.
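The same numbers in code, for anyone who wants to check the EPA-style average against the actual totals:

```python
# Two-building example from the text: the average EUI falls by 8% while
# the buildings' combined energy use actually rises by about 14%.
office  = {"gsf": 100_000, "eui": {"year 1": 100, "year 2": 120}}
medical = {"gsf": 10_000,  "eui": {"year 1": 140, "year 2": 100}}
buildings = [office, medical]

for year in ("year 1", "year 2"):
    avg_eui = sum(b["eui"][year] for b in buildings) / len(buildings)
    total = sum(b["gsf"] * b["eui"][year] for b in buildings)
    print(f"{year}: average EUI {avg_eui:.0f} kBtu/sf, "
          f"total {total/1e6:.1f} million kBtu")
# year 1: average EUI 120 kBtu/sf, total 11.4 million kBtu
# year 2: average EUI 110 kBtu/sf, total 13.0 million kBtu  (up ~14%)
```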

To summarize, the EPA has claimed that the energy consumption of buildings that regularly use Portfolio Manager has gone down by 2.4% per year, but it has offered no evidence to support this – only evidence that the average EUI for these 35,000 buildings – a meaningless figure – has gone down.

The EPA should either withdraw their claim or provide the evidence to back it up.