PowerPoint slides for ACEEE talk posted

A video of the PowerPoint presentation with audio for my ACEEE talk on building ENERGY STAR scores is now available on the web.  The audio was recorded during a practice presentation.  The presentation is accompanied by a 16-page paper that may be downloaded from the ACEEE web site (see previous post).

EPA’s ENERGY STAR building benchmarking scores have little validity

I have been spending this week at the American Council for an Energy-Efficient Economy's (ACEEE) Summer Study on Energy Efficiency in Buildings. Yesterday I presented a paper that summarizes my findings from an 18-month study of the science behind the EPA's ENERGY STAR building rating systems.

The title of my paper, “ENERGY STAR building benchmarking scores: good idea, bad science,” speaks for itself.  I have replicated the EPA’s models for 10 of their 11 conventional building types: Residence Hall/Dormitory, Medical Office, Office, Retail Store, Supermarket/Grocery, Hotel, K-12 School, House of Worship, Warehouse, and Senior Care.  I have not yet analyzed the Hospital model — but I have no reason to believe the results will be different. (Data for this model were not available at the time I was investigating other models.  I have since obtained these data through a Freedom of Information Act request but have not yet performed the analysis.)

There are many problems with these models that cause the ENERGY STAR scores they produce to be both imprecise (i.e., having large random uncertainty in either direction) and inaccurate (i.e., wrong due to errors in the analysis).  The bottom line is that, for each of these models, the ENERGY STAR scores they produce are uncertain by about 35 points!  That means there is no statistically significant difference between a score of 50 (the presumed mean for the US commercial building stock) and 75 (an ENERGY STAR certifiable building).  It also means that any claims made for energy savings based on these scores are simply unwarranted.  The results are summarized by the abstract of my paper, reproduced below.
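
To put a ±35-point uncertainty in perspective, here is a toy simulation. The Gaussian noise and the 95%-interval interpretation are assumptions of this sketch, not the paper's actual error model:

```python
import numpy as np

# Toy illustration of the ~35-point score uncertainty. Gaussian noise
# and the 95%-interval reading of "+/-35" are assumptions of this sketch.
rng = np.random.default_rng(42)
sigma = 35 / 1.96            # treat +/-35 points as a 95% interval
n = 100_000
average_bldg = 50 + rng.normal(0, sigma, n)      # "true" score 50
certifiable_bldg = 75 + rng.normal(0, sigma, n)  # "true" score 75

# How often does the average building outscore the certifiable one?
flips = np.mean(average_bldg > certifiable_bldg)
print(f"ordering flips in {flips:.0%} of trials")   # roughly 16%
```

With noise of this size the ranking of a "50" building and a "75" building reverses by chance in roughly one trial in six, which is what "no statistically significant difference" means in practice.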

Abstract

The EPA introduced its ENERGY STAR building rating system 15 years ago. In the intervening years it has not defended its methodology in the peer-reviewed literature nor has it granted access to ENERGY STAR data that would allow outsiders to scrutinize its results or claims. Until recently ENERGY STAR benchmarking remained a confidential and voluntary exercise practiced by relatively few.

In the last few years the US Green Building Council has adopted the building ENERGY STAR score for judging energy efficiency in connection with its popular green-building certification programs. Moreover, ten US cities have mandated ENERGY STAR benchmarking for commercial buildings and, in many cases, publicly disclose the resulting ENERGY STAR scores. As a result of this newfound attention, the validity of ENERGY STAR scores and of the methodology behind them has taken on elevated relevance.

This paper summarizes the author’s 18-month investigation into the science that underpins ENERGY STAR scores for 10 of the 11 conventional building types. Results are based on information from EPA documents, communications with EPA staff and DOE building scientists, and the author’s extensive regression analysis.

For all models investigated, ENERGY STAR scores are found to be uncertain by ±35 points. The oldest models are shown to be built on unreliable data, and newer models (revised or introduced since 2007) are shown to contain serious flaws that lead to erroneous results. For one building type the author demonstrates that random numbers produce a building model with statistical significance exceeding that achieved by five of the EPA building models.

In subsequent posts I will elaborate on these various findings.
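
Meanwhile, the random-number result deserves a brief illustration. The sketch below is a generic demonstration of spurious statistical significance (screen many meaningless predictors against unrelated data and keep the best one); it is my own analogy, not a reproduction of the paper's analysis:

```python
import numpy as np
from scipy import stats

# Regress fake "EUI" data on many random candidate predictors and keep
# the one with the smallest p-value. Purely illustrative.
rng = np.random.default_rng(0)
n_buildings, n_candidates = 200, 50
eui = rng.normal(100, 30, n_buildings)   # invented energy intensities

best_p = min(
    stats.linregress(rng.normal(size=n_buildings), eui).pvalue
    for _ in range(n_candidates)
)

# The expected minimum of 50 uniform p-values is about 1/51 = 0.02,
# so the "best" random predictor usually looks significant at p < 0.05.
print(f"best p-value among {n_candidates} random predictors: {best_p:.4f}")
```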

When will the US Senate conduct hearings on “energy loss” programs?

Yesterday a Senate Committee grilled “Dr. Oz” about the promotion of weight-loss products on his show.  At issue are the unsupported claims made for these products and the false hopes of millions of viewers who are looking for quick ways to lose weight.  I would like to know when the Senate will grill proponents of green buildings in the same way.

Don’t get me wrong — I know that Americans need to lose weight, and there are very clear ways to do that with slow, determined change in behavior.  The same is true for improving building energy efficiency.  There are clear ways to cost-effectively improve buildings so that they use 10-20% less energy without any loss of performance.

But Americans want quick solutions — ways to lose 30 pounds in one month without pain or suffering.  And there is an entire industry out there selling products which promise to achieve these very results.  But there is no scientific evidence to support such claims, and most people who buy these products never reap the promised benefits.  The few who do achieve the desired weight loss do so because of regular exercise and reduced caloric intake — perhaps coincident with the use of some new product, but having no other connection to it.

America’s energy-guzzling buildings have much in common with its overweight population.  And a government-sponsored industry – not unlike the one promoted by Dr. Oz – has emerged around green buildings, zero energy buildings, and high-performance buildings — all promising great energy savings for those who adopt their strategies.  The US Green Building Council claims that its LEED-certified buildings are achieving 47% energy savings.  The EPA claims that its ENERGY STAR benchmarking program yields significant energy savings.  The New Buildings Institute promotes Zero Energy Buildings as the ultimate “weight loss program.”  The US Federal government pours millions of dollars into GSA, DOE, and EPA programs that promulgate these ideas.  My own state of Ohio has spent millions on LEED-certified schools without a single scientific study to demonstrate that these buildings actually save energy.  The list of organizations and claims goes on and on.

Yet the above claims are, at best, outrageous exaggerations.  The USGBC claim is made for a “cherry-picked” subset of its buildings and is based on ENERGY STAR scores, which have no scientific credibility [see earlier post].  The EPA claims are similarly based on ENERGY STAR scores and do not stand up to close inspection.  And close inspection of data gathered by NBI shows that, at most, about 10 commercial buildings in the country have demonstrated net-zero performance.  America is spending millions on these green- and high-performance building efforts with little data to demonstrate efficacy.

Don’t get me wrong — I am a strong advocate of cost-effective energy efficiency and energy conservation.  I am also a strong advocate of exercise and sensible nutrition.

DC Benchmarking data show modest energy savings for LEED buildings

A few months ago Washington DC released its 2012 energy benchmarking data for private commercial buildings 150,000 sf and larger.  Credible energy and water consumption data for some 400 buildings were released, of which 246 were office buildings.  A recent article — stemming from the web site LEED Exposed — claims that these data show LEED buildings use more energy than non-LEED buildings.  Specifically, it is claimed that LEED buildings have an average weather-normalized source EUI of 205 kBtu/sf whereas non-LEED buildings have an average EUI of 199 kBtu/sf.  No details are provided to support this claim.

My students and I have cross-listed the DC benchmarking data with the USGBC LEED Project Directory and identified 94 LEED-certified buildings in the 2012 DC benchmarking dataset — all but one classified as office buildings.  The unweighted mean weather-normalized source EUI for these 94 LEED-certified buildings is 202 kBtu/sf.  The unweighted mean weather-normalized source EUI for the remaining 305 buildings is 198 kBtu/sf.  No doubt this is the basis for the claim that LEED buildings use more energy than non-LEED.  The difference, however, is not statistically significant.
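
For intuition about why a 4 kBtu/sf gap is insignificant, here is a rough check. The 80 kBtu/sf within-group standard deviation is an assumed, illustrative figure, not a number quoted from the DC data:

```python
import math

# Rough significance check for the 202 vs. 198 kBtu/sf comparison.
mean_leed, n_leed = 202, 94        # LEED-certified buildings
mean_other, n_other = 198, 305     # remaining buildings
sigma = 80                         # kBtu/sf, ASSUMED within-group spread

se_diff = sigma * math.sqrt(1 / n_leed + 1 / n_other)
z = (mean_leed - mean_other) / se_diff
print(f"SE of difference = {se_diff:.1f} kBtu/sf, z = {z:.2f}")
# SE ~ 9.4 kBtu/sf and z ~ 0.4, far below the ~2 required for
# significance at the 95% level.
```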

Moreover, the non-LEED dataset, in addition to 154 office buildings, contains 64 (unrefrigerated) warehouses and 90 multifamily housing buildings — all of which use significantly less energy than do office buildings.  A comparison of these two average EUIs is not useful — just a meaningless sound bite.

It should also be noted that the unweighted mean EUI for a collection of buildings is an inappropriate measure of their total energy consumption.  The appropriate measure of energy consumption is their gross energy intensity — their total source energy divided by the total gross square footage.  This issue has been discussed in several papers [2008 IEPEC; 2009 Energy & Buildings].
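
The difference between the two measures is easy to see with made-up numbers (a minimal sketch; the buildings below are hypothetical):

```python
# Unweighted mean EUI vs. gross energy intensity for a toy portfolio.
# Each tuple is (annual source energy in kBtu, gross floor area in sf).
buildings = [(5_000_000, 20_000), (30_000_000, 200_000), (8_000_000, 50_000)]

unweighted_mean = sum(e / a for e, a in buildings) / len(buildings)
gross_intensity = sum(e for e, _ in buildings) / sum(a for _, a in buildings)

print(f"unweighted mean EUI:    {unweighted_mean:.0f} kBtu/sf")  # ~187
print(f"gross energy intensity: {gross_intensity:.0f} kBtu/sf")  # ~159
# The small, high-EUI building inflates the unweighted mean; only the
# gross intensity reflects the group's total energy use.
```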

Note that an apples-to-apples comparison of energy consumed by one set of buildings with that consumed by another requires that the two sets contain the same kinds of buildings in similar proportions.  When possible, this is best accomplished by sticking to one specific building type.  Since office buildings are far and away the most common type in both datasets, it makes sense to make an office-to-office comparison — pun intended.

93 of the LEED-certified buildings are offices.  But many of these buildings were not certified during the period for which data were collected.  Some were certified during 2012 and others not until 2013 or 2014.  Only 46 of the office buildings were certified before Jan. 1, 2012 and can therefore be expected to demonstrate energy and GHG emission savings for 2012.

The 2012 gross weather-normalized source energy intensity for the 46 LEED certified office buildings is 191 kBtu/sf.  This is 16% lower than the gross weather-normalized source energy intensity for the 154 non-certified office buildings in the dataset, 229 kBtu/sf.  These modest savings are real and statistically significant, though much lower than the 30-40% savings routinely claimed by the USGBC.

Note that similar savings were not found in the 2011 or 2012 NYC energy benchmarking data.  Analysis of those data showed that LEED-certified office buildings in NYC used the same amount of primary energy and emitted no less greenhouse gas than did other large NYC office buildings.  So the 2012 results from Washington DC are significantly different.  It should be noted, however, that NYC office buildings certified at the gold level were found to exhibit similar modest energy savings.  Perhaps this is a clue as to why Washington DC LEED buildings show energy savings.  More analysis is required.

For the last few years the USGBC has pointed to ENERGY STAR scores for LEED-certified buildings as evidence of their energy efficiency.  While ENERGY STAR scores have two important characteristics — they use source rather than site energy and they are based on actual energy measurements — they simply do not have a sound scientific basis.  The science has never been vetted, and my own analysis shows these scores are little more than placebos to encourage energy efficiency.  They certainly do not have any quantitative value.

So to summarize, in 2012 LEED offices in Washington used 16% less source energy than did other office buildings in DC.  What this means and whether such savings justify the added costs of LEED are open questions.

Another look at the analysis by Pollock and Rosiak

A few months ago I called attention to the Washington Examiner article by Richard Pollock and Luke Rosiak.  On first read it seemed to provide more evidence confirming a trend I have seen in several data sets — that LEED-certified buildings, on average, are not saving primary energy — in this particular case, as measured by ENERGY STAR scores.

But 24 hours later I pulled my original post.  I am not convinced that this cursory study is sufficiently rigorous to stand up to scrutiny.  There are two key reasons for my skepticism.  The first is that over the last year I have learned that the EPA’s ENERGY STAR building rating system is built on a “house of cards.”  It may encourage building energy efficiency, but it is not founded on good science, and there is little reason to believe that a higher ENERGY STAR score means a more energy-efficient building.

The second reason is more complicated.  Comparing the energy use of one group of buildings with another is actually difficult to do correctly.  Several of my papers have concluded that other researchers have gotten it wrong in the past.  It is certainly not an activity to be left to people who begin the study with a stake in the outcome — either those promoting LEED or those “dug in against it.”  And, despite the publicity surrounding Thomas Frank’s study in USA Today, it is an activity best left to building professionals and researchers — not reporters.

In this particular case it appears to me that many of the buildings identified by Pollock and Rosiak as “LEED buildings” are not actually LEED-certified at all — they are mostly LEED-registered projects.  Anyone can “register” a LEED project — registration requires little more than a statement of intent and a modest fee.  But only a minority of the projects that register actually see the process through to completion and become LEED-certified.  As critical as I have been of the USGBC’s past claims about energy savings, I cannot hold the USGBC responsible for the energy consumption of buildings that have never completed LEED certification.  Maybe the registered LEED buildings in this article will soon complete certification.  If and when that occurs, we should look at their subsequent performance.

Washington DC has just released benchmarking data for all its large commercial buildings.  One of my students is in the process of parsing this list to identify LEED and ENERGY STAR certified buildings.  The goal will then be to compare the performance of LEED certified buildings with other buildings.  Only then can we draw any conclusions.

USGBC Continues to “cherry pick” LEED energy data

At the 2007 Greenbuild conference the USGBC released the results of its first major study of energy consumption by LEED-certified buildings.  There it presented conclusions from the now-infamous study conducted by the New Buildings Institute (paid for by the USGBC and EPA).  Based on data “volunteered by willing building owners” for only 22% of the eligible buildings certified under LEED NC v.2, the study concluded that LEED-certified buildings, on average, were demonstrating the anticipated 25-30% savings in (site) energy.

NBI’s analysis and conclusions were subsequently discredited in the popular media by Henry Gifford and in the peer-reviewed literature by me [see IEPEC 2008 and Energy & Buildings 2009].  NBI’s analytical errors included:

  1. comparing the median of one energy distribution with the mean of another;
  2. comparing energy used by a medium-energy subset of LEED buildings with that used by all US commercial buildings (which included types of buildings removed from the LEED set);
  3. improperly calculating the mean (site) energy intensity for LEED buildings and comparing it with the gross mean energy intensity from CBECS;
  4. considering only building energy used on site (i.e., site EUI) rather than total on- and off-site energy use (i.e., source EUI).

To NBI’s credit, they made their summary data available to others for independent analysis with no “strings attached.”  In the end, even the data gathered by NBI, skewed toward the “better performing” LEED buildings by the method of collection, demonstrated no source energy savings by LEED buildings when properly analyzed.  LEED office buildings demonstrated site energy savings of 15-17%, about half that claimed by NBI, the difference being associated with NBI’s improper averaging method.  This site energy savings did not translate into a source energy savings because LEED buildings, on average, used relatively more electric energy, and the off-site losses associated with this increased electricity use wiped out the on-site savings.
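
The electricity effect is simple to quantify. In this sketch the site-to-source factors (roughly 3.34 for electricity and 1.05 for natural gas) are the approximate values used by Portfolio Manager, and the two buildings are invented for illustration:

```python
# How a site-energy saving can vanish in source terms when the fuel
# mix shifts toward electricity. Building numbers are invented.
ELEC_FACTOR, GAS_FACTOR = 3.34, 1.05   # approximate site-to-source factors

def source_eui(elec_site, gas_site):
    """Source EUI (kBtu/sf) from site electric and gas components."""
    return elec_site * ELEC_FACTOR + gas_site * GAS_FACTOR

# Conventional office: 40 elec + 60 gas = 100 kBtu/sf site EUI.
# "Efficient" office:  60 elec + 25 gas =  85 kBtu/sf site EUI (15% less).
print(f"conventional: {source_eui(40, 60):.0f} kBtu/sf source")  # 197
print(f"'efficient':  {source_eui(60, 25):.0f} kBtu/sf source")  # 227
# The 15% site saving becomes a source-energy increase once off-site
# generation and transmission losses are counted.
```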

The lack of representative building energy data was addressed in LEED v.3 (2009) by instituting a requirement that all LEED certified buildings supply the USGBC with annual energy consumption data for five years following certification.  Never again would the USGBC have to publish conclusions based on data volunteered by 1 in 5 buildings.  Expectations were high.

But what has this produced?  The USGBC has learned from their experience with NBI — not to hand over such an important task to an outside organization because you can’t control the outcome.  NBI’s analysis was scientifically flawed — but it was transparent, and such transparency gave critics ammunition to reach different conclusions.  Nowadays the USGBC simply issues carefully packaged sound bites without supplying any details to support their conclusions.  There isn’t even a pretense of conducting scientifically valid analysis.

Consider the most recent claims made by the USGBC at the 2013 Greenbuild conference, summarized by Tristan Roberts in “LEED buildings above average in latest energy data release.”  Roberts asserts the following:

  1. the USGBC has received energy data from 1,861 certified buildings for the 12-month period July 2012 – June 2013;
  2. about 70% of these were certified through LEED-EBOM (existing buildings);
  3. 450 of these buildings reported their data through the EPA’s Portfolio Manager;
  4. the “building-weighted” (or unweighted) average source EUI for these 450 buildings is 158 kBtu/sf;
  5. this average is 31% lower than the national median source EUI;
  6. 404 of the 450 buildings were eligible for (and received) ENERGY STAR scores;
  7. the average ENERGY STAR score for these 404 buildings was 85.

In addressing the above claims it is hard to know where to begin.  Let’s start with the fact that the USGBC provides energy information for only 450 (or 24%) of the 1,861 buildings for which it has gathered data.  Is this simply because it is easier to summarize data gathered by Portfolio Manager than data collected manually?  If so, I willingly volunteer my services to go through the data from all 1,861 buildings so that we can get a full picture of LEED building energy performance — not just a snapshot of the 24% of buildings that self-select to benchmark through Portfolio Manager.  (The EPA has previously asserted that buildings that benchmark through Portfolio Manager tend to be skewed toward “better performing” buildings and are not a random snapshot of commercial buildings.)

Next, consider the “unweighted” source EUI figure for the 450 buildings.  This is a useless metric.  All EUIs reported by CBECS for sets of buildings are “gross energy intensities,” equivalent to the gsf-weighted mean EUI (not the unweighted or building-weighted mean EUI).  This was a major source of error in the 2008 NBI report — leading NBI to incorrectly calculate a 25-30% site energy savings rather than the actual 15-17% site energy savings achieved by that set of LEED buildings.

Consider the assertion that the 158 kBtu/sf source EUI figure is 31% lower than the median source EUI (presumably for all US commercial buildings).  For this to be correct, the median source EUI for all US commercial buildings would have to be 229 kBtu/sf.  This is rubbish.  The best way to obtain such a median EUI figure is from the 2003 CBECS data.  The Energy Information Administration (EIA) does not report source energy figures in any of its CBECS reports.  But the EIA does report site and primary electric energy used by buildings, and these may be combined to calculate the source EUI for all 2003 CBECS sampled buildings.  This calculation yields a median source EUI of 118 kBtu/sf for the estimated 4.9 million commercial buildings.  If you instead restrict the calculation to buildings with non-zero energy consumption, you find these estimated 4.6 million buildings have a median source EUI of 127 kBtu/sf — way below the 229 kBtu/sf figure asserted by the USGBC.  This USGBC claim is patently false.  Of course the USGBC may be referring to the median source EUI of some unspecified subset of US buildings.  By choosing an arbitrary subset you can justify any claim.  And if you don’t specify the subset — well, the claim is nothing more than noise.
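
For anyone wishing to reproduce this, the computation is a weighted median over the CBECS microdata. In the sketch below the column names are hypothetical stand-ins for the actual CBECS fields, and the 3.34 electricity factor is an approximate primary-to-site ratio; neither comes from an official EIA specification:

```python
import pandas as pd

ELEC_SOURCE_FACTOR = 3.34  # approximate primary-to-site ratio for electricity

def weighted_median(values: pd.Series, weights: pd.Series) -> float:
    """Median of `values` under CBECS-style sampling weights."""
    df = pd.DataFrame({"v": values, "w": weights}).sort_values("v")
    cum = df["w"].cumsum()
    return df.loc[cum >= df["w"].sum() / 2, "v"].iloc[0]

def median_source_eui(cbecs: pd.DataFrame) -> float:
    # Hypothetical columns: total site energy, site electric energy,
    # floor area, and the weight scaling each record to the national stock.
    nonelec = cbecs["site_kbtu"] - cbecs["elec_site_kbtu"]
    source_kbtu = nonelec + ELEC_SOURCE_FACTOR * cbecs["elec_site_kbtu"]
    return weighted_median(source_kbtu / cbecs["sqft"], cbecs["weight"])
```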

What about the average ENERGY STAR score of 85?  Is this impressive?  The answer is no.  Even if you believed that ENERGY STAR scores were, themselves, meaningful, such an average would still mean nothing.  ENERGY STAR scores are supposed to represent percentile rankings in the US building population.  Since there are 4.8 million buildings, by definition we would expect 10% of these (or 480,000) to rank in the top decile and another 480,000 to rank in the bottom decile.  That means that if 1,861 buildings are chosen at random from the building population, we expect 10% of them to have ENERGY STAR scores from 91-100.  Similarly, we expect 30% of them (or 558) to have ENERGY STAR scores ranging from 71-100.  Guess what — the average ENERGY STAR score of these 558 buildings is expected to be 85.  Only those who are mathematically challenged should be impressed that the USGBC has found 404 buildings in its set of 1,861 with an average ENERGY STAR score of 85.  If you cherry-pick your data you can demonstrate any conclusion you like.
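
The expected value of 85 is just the midpoint of the 71-100 range:

```python
# If scores are true percentiles, a random sample's top 30% have scores
# spread uniformly over 71-100, whose average is the midpoint:
top_30_percent = range(71, 101)
print(sum(top_30_percent) / len(top_30_percent))   # 85.5
```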

And, of course, these 1,861 buildings are not chosen at random — they represent buildings whose owners have a demonstrated interest in energy efficiency apart from LEED.  I would guess that the vast majority of the 404 buildings were certified under the EBOM program and used Portfolio Manager to benchmark their buildings long before they ever registered for LEED.  LEED certification is just another trophy to be added to their portfolio.  No doubt their ENERGY STAR scores in previous years were already much higher than 50.  What was the value added by LEED?

I openly offer my services to analyze the USGBC energy data in an unbiased way to accurately assess the collective site and source energy savings of these LEED buildings.  How about it, Brendan Owens (VP of technical development for USGBC) — do you have enough confidence in your data to take the challenge?  Which is more important to you, protecting the LEED brand or scientific truth?

ENERGY STAR energy benchmarking is not ready for prime time

I recently had occasion to read an old paper by Janda and Brodsky describing the “first class” of ENERGY STAR certified office buildings.  This is one of only a handful of papers in the peer-reviewed literature regarding ENERGY STAR building scores.  Janda and Brodsky describe the brand name ENERGY STAR as

a set of voluntary partnerships between the U.S. government and product manufacturers, local utilities, home builders, retailers, and businesses.  These partnerships are designed to encourage energy efficiency in products, appliances, homes, offices, and other buildings.

This was the basis for the EPA’s building ENERGY STAR scoring system.  It was a “game” that building managers voluntarily agreed to play with rules (methodology for scoring buildings) set by the EPA in consultation with those playing the game.  There was no scientific vetting of the “rules of the game” — nor did there need to be — it was just a game designed to “encourage energy efficiency.”  No one was forced to play the game.  Data submitted to Portfolio Manager (the EPA’s web-based tool for calculating scores) and ENERGY STAR scores issued by the EPA were confidential — unless a building sought and received ENERGY STAR certification.  Participation was entirely voluntary.  Building managers disappointed with their ENERGY STAR scores could just walk away from the game — no harm, no foul.

But this has all changed.  In recent years 1) the EPA has published specific claims regarding energy savings associated with its ENERGY STAR benchmarking program (real savings, not just fantasy football), 2) external organizations like the USGBC have adopted the ENERGY STAR score as their metric for energy efficiency in green-building certification programs and are using these scores to make energy savings claims of their own, and 3) major US cities have passed laws requiring commercial building owners to use Portfolio Manager to benchmark their buildings and, in many cases, the resulting ENERGY STAR scores are being made public.  With federal, state, and local governments requiring LEED certification for public buildings, this is no longer a voluntary game — it is mandatory, and real (testable) energy claims are being made based upon ENERGY STAR scores.  Now the science behind such claims actually matters — and this science has never been vetted.

It’s kind of like a small “mom and pop” operation that has been selling chicken soup made from “grandma’s recipe” without obtaining a proper license or FDA approval.  Now imagine Walmart decides to market and sell the soup — the scrutiny changes.

As a voluntary game with no connection to reality it is OK that the EPA negotiates rules for its ENERGY STAR ratings with different constituents — like allowing Washington DC office buildings to ignore their “first floors” in seeking ENERGY STAR certification.  After all, who am I to interfere in the activities between consenting adults when these activities do not affect me?  But for ENERGY STAR — these days are gone.

In the next year we will learn much about the science that underpins the EPA’s ENERGY STAR benchmarking system — and the results are likely to be very disappointing.  This benchmarking system is not ready for prime time.