About The Pragmatic Steward

Professor of Physics at Oberlin College. I was originally trained as a condensed matter experimentalist. In the last 15 years my research has focused on photovoltaic devices, PV arrays, wind energy, energy efficiency, and energy use in buildings.

EPA makes a mockery of Freedom of Information Act

In my attempt to understand the EPA's methodology for calculating ENERGY STAR building benchmarking scores I have frequently requested specific information from the EPA.  Early on I found the EPA reluctant to share anything with me that had not already been publicly released by the agency.  Dissatisfied with this lack of transparency, I decided to formally request information from the EPA through the Freedom of Information Act (FOIA) process.  I filed my first FOIA request in March 2013.  I have since filed about 30 such requests.

The Freedom of Information Act requires that a Federal agency respond to such requests within 20 working days.  If the agency fails to comply you can file a lawsuit in Federal Court and be virtually guaranteed a summary judgment ordering the agency to release the requested documents.  Of course, the courts move at a snail's pace, so you cannot expect this process to produce documents, or even court action, anytime soon.

The EPA tracks statistics on its handling of FOIA requests.  It sorts requests into two tracks: a Simple Track and a Complex Track.  EPA policy is to make every attempt to respond to Simple Track requests within the statutory 20-day time frame.  Complex Track requests take longer because the documents must be located and processed for public release.  For instance, if you request all of Hillary Clinton's emails it will take time to locate them and to eliminate any portions that might be classified.

The EPA has also adopted a first-in, first-out policy for processing FOIA requests from a particular requester.  So, if I already have a complex FOIA request in the queue and I file a second complex FOIA request, it is the EPA's policy to complete processing of the first request before turning to the second.  The same policy applies to any requests in the Simple Track.  But it is EPA policy to treat the two tracks independently, meaning that if I have a pending FOIA request in the Complex Track and subsequently file a simple FOIA request, the EPA's policy is to work on the two requests in parallel.  That is, it will not hold up a simple FOIA request in order to complete a complex request that was filed earlier.

I have a complex FOIA request with the EPA that has been outstanding for nearly two years.  I have no expectation that the EPA will respond to this request unless I seek assistance from the courts.  They are simply intransigent.  This, combined with the EPA's first-in, first-out policy, means that the EPA will not process any other complex FOIA requests from me unless I get the courts involved.

On August 9, 2015 I filed a FOIA request asking the EPA to provide me with copies of 11 documents that summarize the development and revision of its Senior Care Facility ENERGY STAR building model.  I know these documents exist because I had earlier received an EPA document entitled "ENERGY STAR Senior Care Energy Performance Scale Development," which serves as a table of contents for the documents associated with the development of this model.  This request requires no searching, as the requested documents are specifically identified, readily available, and cannot possibly raise national security issues.  Yet the EPA placed this request in its Complex Track and provided no response to me for more than 20 days.

On September 14, 2015, having received no response, I filed what is called an "Administrative Appeal" asking the Office of General Counsel to intercede and force the agency to produce the requested documents.  In my appeal I pointed out that my FOIA request was, by its very definition, simple, and thus EPA policy required the agency to act on it within the 20-day statutory period.  By law the EPA has 20 working days to decide an Administrative Appeal.

On Friday, October 30, 2015 the EPA rendered a ruling on my Administrative Appeal.  The ruling is simple — the Office of General Counsel directs the agency to respond to my initial request within 20 working days.  Think of it: 58 working days (two and a half months) after I filed my initial FOIA request — a request which by law should have been answered within 20 working days — the EPA has now been told by the Office of General Counsel to respond within 20 working days.  What a farce!

ENERGY STAR building models fail validation tests

Last month I presented results of research demonstrating that regressions used by the EPA in 6 of the 9 ENERGY STAR building models based on CBECS data are not reproducible in the larger building stock.  What this means is that ENERGY STAR scores built on these regressions are little more than ad hoc scores with no physical significance.  By that I mean the EPA's 1-100 building benchmarking score ranks a building's energy efficiency using the EPA's current rules, rules which are arbitrary and unrelated to any important performance trends found in the U.S. commercial building stock.  Below you will find links to my paper as well as PowerPoint slides and audio of my presentation.

Over the last year my student, Gabriel Richman, and I have been devising methods, using the R statistics package, to test the validity of the multivariate regressions used by the EPA in its ENERGY STAR building models.  We developed computer programs to internally test the validity of regressions for 13 building models and to externally test the validity of 9 building models.  The results of our external validation tests were presented at the 2015 International Energy Program Evaluation Conference, August 11-13, in Long Beach, CA.  My paper, "Results of validation tests applied to seven ENERGY STAR building models," is available online.  The slides for this presentation may be downloaded and the presentation (audio and slides) may be viewed online.

The basic premise is this.  Anyone can perform a multivariate linear regression on a data set and demonstrate that certain independent variables serve as statistically significant predictors of a dependent variable, which in the case of the EPA's building models is the annual source energy use intensity (EUI).  The point of such regressions, however, is not to predict EUI for buildings within this data set — the point is to use the regression to predict EUI for other buildings outside the data set.  This is, of course, how the EPA uses its regression models — to score thousands of buildings based on a regression performed on a relatively small subset of buildings.

In general there is no a priori reason to believe that such a regression has any predictive value outside the original data on which it is based.  Typically one argues that the data used for the regression are representative of a larger population and that it is therefore plausible that the trends uncovered by the regression are also present in that larger population.  But this is simply an untested hypothesis.  The predictive power must be demonstrated through validation.  External validation involves finding a second representative data set, independent of the one used to perform the regression, and demonstrating the accuracy of the original regression in predicting EUI for buildings in this second data set.  This is often hard to do because one rarely has access to a second, equivalent data set.

Because the EIA's Commercial Building Energy Consumption Survey (CBECS) is not a one-time survey, other vintages of the survey can supply the second data set needed for external validation.  This is what allowed us to perform external validation for the 9 building models that are based on CBECS data.  Results of external validation tests for the two older models were presented at the 2014 ACEEE Summer Study on Energy Use in Buildings and were discussed in a previous blog post.  Tests for the 7 additional models are the subject of today's post and my recent IEPEC paper.

If the EUI values predicted by the EPA's regressions are real and reproducible then we would expect a regression performed on the second data set to yield similar results — that is, similar regression coefficients, similar statistical significance for the independent variables, and similar predicted EUI values when applied to the same buildings (i.e., as compared with the EPA regression).  Let the EPA data set be data set A and let our second, equivalent data set be data set B.  We use the regression on data set A to predict EUI for all the buildings in the combined data set, A+B.  Call these predictions pA.  We then use the regression on data set B to predict EUI for the same buildings (data sets A+B) and call these predictions pB.  We expect pA = pB for all buildings, or nearly so, anyway.  A graph of pB vs. pA should be a straight line demonstrating strong correlation.
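For readers who want to see the mechanics, here is a minimal R sketch of this test.  The data frames setA and setB and the regression variables (wkhrs, workers, hdd, cdd) are hypothetical stand-ins; the EPA's actual models use different variables for each building type.

```r
# setA, setB: two equivalent, independent samples of the same building type,
# each containing source EUI and the model's operating parameters.
both <- rbind(setA, setB)                     # combined data set A+B

fitA <- lm(eui ~ wkhrs + workers + hdd + cdd, data = setA)
fitB <- lm(eui ~ wkhrs + workers + hdd + cdd, data = setB)

pA <- predict(fitA, newdata = both)           # EUI predicted by the regression on A
pB <- predict(fitB, newdata = both)           # EUI predicted by the regression on B

plot(pA, pB, xlab = "pA (kBtu/sf)", ylab = "pB (kBtu/sf)")
abline(0, 1, lty = 2)                         # the line pB = pA we expect to see
cor(pA, pB)                                   # near 1 only if the regression is reproducible
```

If the underlying trends are real, the points should hug the dashed line; if they are artifacts of the particular sample, the scatter plot collapses into a blob.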

Below is such a graph for the EPA's Worship Facility model.  What we see is that there is essentially no similarity between the two predictions, demonstrating that the predictions have little validity.

[Figure: pB vs. pA for the EPA's Worship Facility model]

This "predicted EUI" is at the heart of the ENERGY STAR score methodology.  Without it the ENERGY STAR score would simply rank buildings on their source EUI.  The predicted EUI adjusts the rankings based on operating parameters — so that a building that uses above-average energy may still be judged more efficient than average if it has above-average operating characteristics (long hours, high worker density, etc.).
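For concreteness, here is a hedged sketch of how a predicted EUI feeds into a 1-100 score.  The EPA's technical methodology documents describe dividing a building's actual source EUI by its predicted EUI to form an efficiency ratio, then mapping that ratio onto a fitted cumulative distribution (a 2-parameter gamma distribution for some models).  The gamma parameters and the building below are made up for illustration.

```r
# Hypothetical gamma parameters; in the EPA models these are fit to reference data.
shape <- 2.5
rate  <- 2.4

actual_eui    <- 180      # measured source EUI (kBtu/sf) for some building
predicted_eui <- 150      # EUI predicted by the EPA regression for that building

ratio      <- actual_eui / predicted_eui                 # efficiency ratio; > 1 is worse than predicted
percentile <- pgamma(ratio, shape = shape, rate = rate)  # fraction of peers with a smaller ratio
score      <- round(100 * (1 - percentile))              # smaller ratio => higher score
score
```

The point is simply that the predicted EUI sits in the denominator of the ratio, so every score inherits whatever flaws the regression has.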

What my research shows is this predicted EUI is not a well-defined number, but instead, depends entirely on the subset of buildings used for the regression.  Trends found in one set of buildings are not reproduced in another equally valid set of similar buildings.  The process is analogous to using past stock market values to predict future values.  You can use all the statistical tools available and argue that your regression is valid — yet when you test these models you find they are no better at picking stock winners than are monkeys.

Above I have shown the results for one building type, Worship Facilities.  Similar graphs are obtained when this validation test is performed for Warehouses, K-12 Schools, and Supermarkets.  My earlier work demonstrated that Medical Office and Residence Hall/Dormitories also failed validation tests.  Only the Office model demonstrates strong correlation between the two predicted values pA and pB — and this is only when you remove Banks from the data set.

The release of 2012 CBECS data will provide yet another opportunity to externally validate these 9 ENERGY STAR building models.  I fully expect to find that the models simply have no predictive power with the 2012 CBECS data.

Jay Whitacre wins 2015 Lemelson-MIT Prize

Today it was announced that Oberlin College physics alumnus (and my former student) Jay Whitacre (OC '94) has been awarded the Lemelson-MIT Prize for his inventive work on batteries.  His company, Aquion Energy, has attracted funds from some pretty important investors.  Not bad for a kid who didn't take calculus in high school.

[Photo: Jay Whitacre]

Congrats Jay!

Mounting evidence that LEED certified buildings do not save energy

Two recent publications provide corroborating evidence that LEED-certified buildings, on average, do not save primary energy.  One looks at energy consumption for 24 academic buildings at a major university.  The other looks at energy consumption by LEED-certified buildings in India.  In both cases there is no evidence that LEED certification reduced energy consumption.

The study of academic buildings is found in the article entitled "Energy use assessment of educational buildings: toward a campus-wide sustainability policy" by Agdas, Srinivasan, Frost, and Masters, published in the peer-reviewed journal Sustainable Cities and Society.  These researchers looked at the 2013 energy consumption of 10 LEED-certified academic buildings and 14 non-certified buildings on the campus of the University of Florida in Gainesville.  They appear to have considered site energy intensity (site EUI) rather than my preferred metric, source energy intensity.  Nevertheless their conclusions are consistent with my own — that LEED-certified buildings show no significant energy savings as compared with similar non-certified buildings.  This is also consistent with what has now been published in about 8 peer-reviewed journal articles on this topic.  Only one peer-reviewed article (Newsham et al.) reached a different conclusion — and that conclusion was rebutted by my own paper (Scofield).  There are, of course, several reports published by the USGBC and related organizations that draw other conclusions.

The second recent publication comes out of India.  The Indian Green Building Council (IGBC) — India's equivalent of the USGBC — of its own accord posted energy consumption data for 50 of its roughly 450 LEED-certified buildings.  Avikal Somvanshi and his colleagues at the Centre for Science and Environment took this opportunity to analyze the energy and water performance of these buildings, finding that the vast majority of these LEED-certified buildings were underperforming expectations.  Moreover, roughly half of the 50 buildings failed even to qualify for the Bureau of Energy Efficiency's (BEE) Star Rating (India's equivalent of ENERGY STAR).  The results were so embarrassing that the IGBC removed some of the data from its website and posted a disclaimer discounting the accuracy of the rest.  No doubt the IGBC will in the future follow the USGBC's practice of denying public access to energy consumption data while releasing selected tidbits for marketing purposes.

How long will the USGBC and its international affiliates be afforded the privilege of making unsupported claims about energy savings while hiding their data?

The Fourth Great American Lie

There is this standing joke about the three great American lies: 1) "the check is in the mail;" 2) "of course I will respect you in the morning;" and 3) well … let me skip the last one.  I think it is time to add a fourth lie to the list — this green project will lower energy use.

In my last post I mentioned that my home town of Oberlin, OH recently purchased new, automatic loader trash/recycling trucks and spent an extra $300,000 so that three of them included fuel-saving, hydraulic-hybrid technology.  Town leaders claimed these trucks would save fuel and reduce carbon emissions.  Simple cost/benefit calculations using their cost and fuel savings figures showed that this was an awful investment that would never pay for itself (in fuel savings) and that the cost per ton of carbon saved was astronomical.

A few weeks ago I requested from the City fuel consumption data for the first six months of operation of the new trucks.  The City Manager and Public Works Director, instead, asked me to wait until after their July 6 report to City Council on the success of the new recycling program.  They both assured me that fuel usage would be covered in this report.  I was promised access to the data following their presentation.

Last Monday, in his presentation to Council, the Public Works Director highlighted data which showed that for the first six months of operation the City recycled 400 tons — as compared with the 337 tons it had recycled in the comparable period prior to acquisition of the new trucks.  This represents a 19% increase in recycling. Unfortunately there was no mention of fuel usage or savings.

Yesterday I obtained fuel consumption data from the Public Works Director for Oberlin's new garbage/recycling trucks, along with comparative fuel data from previous years using the old trucks.  The new trucks are on track to use 2,000 gallons MORE diesel fuel annually than the old trucks did.  That's right, not less fuel, but MORE fuel.  This is a 19% increase in fuel usage.  Gee, what a surprise!

Soon the spin will begin.  City administrators will point out that fuel usage would be even worse were it not for their $300,000 investment in the hybrid technology.  They will point out that the increased fuel usage is due to the new, automatic loading technology included in these trucks (though they failed to mention any expected increase in fuel usage when the project was being sold to the public) — technology which enabled the use of larger recycling containers and the improvement in recycling.  What they will fail to tell us is that they could have achieved the same increase in recycling using the older-style trucks without automatic loaders.

This is the second recent City project for which the public has been misled regarding expected energy savings.  The first was the LEED-certified Fire Station renovation.  This green building was supposed to save energy.  It, of course, is bigger and better than the building it replaced — oh yes, and it uses more energy.  But the increase in energy use wasn't as much as it might have been, because it was a green building.  Now we have the same result for the trash and recycling trucks.

Oberlin College is in the process of constructing a new, green hotel — called the “Gateway Project” as it will usher in a new era of green construction.  But people should understand, this new green hotel will use more energy than the old hotel —  it will be bigger and better, and its energy use won’t be as big as it might have been — and this should make us feel good.

And in the next few months Oberlin residents will be asked to approve additional school taxes to construct new, green, energy-efficient public school facilities.  But don't be surprised when these new facilities actually use more energy than did the old ones.  Don't get me wrong — they will be more energy efficient than the old facilities, but they will be bigger and better — and they will use more energy.

This is the new lie — that our new stuff will use less energy than our old stuff.  But it isn't true.  Fundamentally we want bigger and better stuff.  People like Donald Trump just build bigger and better stuff and proudly proclaim it.  But that isn't palatable for most of us — we feel guilty about wanting bigger and better stuff.  So instead we find a way to convince ourselves that our new stuff will be green, it will lower carbon emissions, it will make the world a better place — oh, and yes, it will be bigger and better.

We need our lies to make us feel good about doing what we wanted to do all along.  Don't get me wrong — sometimes the check is in the mail and sometimes the green project does save energy.  But more often than not these lies are offered for temporary expediency.  And, of course, I really will respect you in the morning.

Misguided investments in energy efficiency and carbon reduction

Yesterday the Wall Street Journal published an article by Greg Ip summarizing the findings of an economic study conducted by Michael Greenstone, Meredith Fowlie, and Catherine Wolfram.  (Their original paper is entitled "Do Energy Efficiency Investments Deliver? Evidence from The Weatherization Assistance Program.")  These researchers looked at the actual energy savings and costs of a specific Weatherization Assistance Program (WAP).  What they found was that the homes that took advantage of the WAP achieved only about 40% of the energy savings that engineering calculations had projected.  When they compared the actual savings (not estimated savings) to the costs they concluded 1) that the investments would never pay for themselves (i.e., the value of the energy saved over 16 years was less than the amount spent on the energy efficiency investments), and 2) that the amount of money spent per metric tonne of carbon saved (over these 16 years) is $338/tonne — about 10X more than estimates of the long-term cost to society of solving the carbon emissions problem.

This article caught my attention for two reasons.  First, it simply illustrates again the large gap between measured energy savings and those estimated by promoters of energy efficiency programs.  In particular, I have seen this over and over with green buildings.  All the data I have analyzed show that, on average, LEED-certified buildings do not achieve the energy savings that their designers predict.  Many organizations pride themselves on their portfolio of green buildings, yet the fact is these buildings consume no less primary energy than other buildings.  Society will not arrest climate change with this approach — even though it leads to all kinds of green awards.

But the second reason this caught my attention is the parallel these investments have with what is going on in my community of Oberlin, OH.  The Oberlin City Council has made a commitment to make the City climate-positive (I guess it is like giving 110% effort).  Apparently all divisions of the City are instructed to act in accordance with the City's Climate Action Plan.  The City's Municipal Power Company has contracted with Efficiency Smart to promote energy efficiency programs for its customers.  Efficiency Smart reports to the City on how much energy its programs have saved — savings that are based on projected estimates, not measurements.

A year ago the City had the opportunity to purchase new garbage/recycling trucks.  The City spent an extra $300,000 in order to include hydraulic-hybrid, fuel-saving technology in these trucks.  The City Public Works Director estimated the diesel fuel savings to be 2,800 gal annually.  At a cost of $3.75/gal this represents an annual return of about $10,500 on a $300,000 investment.  Since the trucks are expected to last only 10 years the investment will never pay for itself.
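The arithmetic is easy to check.  Here is a minimal sketch using the figures above (in R, though any calculator will do):

```r
extra_cost <- 300000            # added cost of the hybrid option ($)
fuel_saved <- 2800              # projected diesel savings (gal/yr)
fuel_price <- 3.75              # $/gal
truck_life <- 10                # years

annual_return    <- fuel_saved * fuel_price       # about $10,500 per year
simple_payback   <- extra_cost / annual_return    # roughly 29 years
lifetime_savings <- annual_return * truck_life    # about $105,000 over the trucks' life

c(annual_return, simple_payback, lifetime_savings)
```

A 29-year simple payback on equipment expected to last 10 years speaks for itself.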

What about the carbon savings?  If you work through the math you find that the reduced carbon emissions (associated with less fuel usage) come at a cost of about $600 per ton of CO2.  This is equivalent to $2,400 per metric tonne of carbon saved.  It was an utterly foolish decision to spend money this way.  And this decision was based on projected savings.  In a few months we will see how much fuel the trucks have actually used.

[Photo: City of Oberlin refuse truck]

Don’t get me wrong.  I am an advocate for energy efficiency that leads to real, cost-effective savings.  But there must be a cost/benefit analysis.  We cannot afford to throw money away on schemes that yield such little return.  And we cannot base our decision on “projected” savings.  I like the way that Wal-mart approaches energy efficiency.  Perform the up-front calculation to find the projected savings.  If these look good, retrofit a couple stores and measure the actual savings.  If the trial study confirms the savings — roll out the same changes to all the other stores.  If not, move on.

Why does the EPA publish false claims about its Medical Office ENERGY STAR model?

To say that someone “lied” is a strong claim.  It asserts that not only is the statement false but the person making it knows that the statement is false.

The EPA revised and updated its ENERGY STAR Technical Methodology document for Medical Office Buildings in November 2014.  That document makes the following claims:

  1. it describes filters used to extract 82 records from the 1999 CBECS
  2. it claims that the model data contain no buildings less than 5,000 sf in size
  3. with regard to the elimination of buildings < 5,000 sf, the EPA writes, “Analytical filter – values determined to be statistical outlyers.”
  4. the cumulative distribution for this model from which ENERGY STAR scores are derived is said to be fit with a 2-parameter gamma distribution.

All of the above statements/descriptions are false.  The filters described by the EPA do not produce an 82-record dataset, and the dataset they do produce does not have the properties (min, max, and mean) described in Figure 2 of the EPA's document.  And a regression using the EPA's variables on the dataset obtained with their stated filters does not produce the results listed in Figure 3 of the EPA's document.  In short, this EPA document is a work of fiction.
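The replication check itself is straightforward once the filters are stated.  Below is a hedged sketch of the kind of R script involved; the data frame cbecs99 stands for the public 1999 CBECS microdata, and the column names and filter conditions are illustrative, not the EPA's actual filter list.

```r
# cbecs99: 1999 CBECS public microdata (hypothetical column names)
medical <- subset(cbecs99,
                  activity == "Medical Office" &   # principal building activity filter
                  sqft >= 5000 &                   # the disputed small-building filter
                  months_in_use == 12)             # ... plus the remaining published filters

nrow(medical)                  # the EPA claims 82 records survive
summary(medical$sqft)          # compare min/max/mean to the EPA's Figure 2

fit <- lm(source_eui ~ wkhrs + workers + pct_heated, data = medical)
summary(fit)                   # compare coefficients to the EPA's Figure 3
```

When the published filters and variables do not reproduce the published record count, summary statistics, or regression coefficients, the document cannot be describing the model the EPA actually built.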

I have published these facts previously in my August 2014 ACEEE paper entitled “ENERGY STAR Building Benchmarking Scores: Good Idea, Bad Science.”  Six months ago I sent copies of this paper to EPA staff responsible for the agency’s ENERGY STAR building program.

I have given the EPA the opportunity to supply facts supporting its claims by filing three Freedom of Information Act (FOIA) requests: the first (EPA-HQ-2013-00927) for the list of 1999 CBECS IDs that correspond to its 82-building dataset, the second (EPA-HQ-2013-009668) for the alpha and beta parameters of the gamma distribution that fits its data, and the third (EPA-HQ-2013-010011) for documents justifying the exclusion of buildings < 5,000 sf from many models, including Medical Offices.  The EPA has closed the first two cases, indicating it could not find any documents with the requested information.  Seventeen months after filing the third request it remains open and the EPA has provided no documents pertaining to the Medical Office model.  The EPA is publishing claims for which it has no supporting documents and that I have demonstrated are false.  The details of my analysis are posted on the web and were referenced in my ACEEE paper.

In November 2014 the EPA corrected errors in other Technical Methodology documents, yet it saw no need to correct or retract the Medical Office document.  Why is it so hard for the EPA to say it messed up?

It is common for scientists to correct mistakes by publishing "errata" or even withdrawing a previously published paper.  No doubt EPA staff once believed the document they published was correct.  But how is it possible the EPA remained unaware of the errors while it continued to publish and even revise this document for nearly a decade?  How can the EPA continue to publish such false information six months after being informed of the errors?

Is the EPA lying about its Medical Office building model?  I cannot say.  But it is clear that the EPA either has total disregard for the truth or is incompetent.

If these folks worked for NBC they would have to join Brian Williams on unpaid leave for six months.  Apparently the federal government has a lower standard of competence and/or integrity.

District Department of the Environment premature in claiming energy savings

On January 28, 2015 the District of Columbia published the second year of energy benchmarking data collected from private buildings.  This year's public disclosure applies to all commercial buildings 100,000 sf and larger, while last year's public disclosure covered all buildings 150,000 sf or larger.  Data published are drawn from the EPA's ENERGY STAR Portfolio Manager and include building details such as gsf and principal building activity along with annual consumption for major fuels (electric, natural gas, steam), water, and calculated greenhouse gas emissions (associated with fuels).  Also published are annual site EUI (energy use intensity) and weather-normalized source EUI, metrics commonly used to assess building energy use.

The District Department of the Environment has analyzed these two years of data and concluded the following:

  • DC commercial buildings continue to be exceptionally efficient. The median reported ENERGY STAR® score for private commercial buildings in the District was 74 out of 100—well above the national median score of 50.
  • Buildings increased in efficiency from 2012 to 2013. Also,  overall site energy use went up by 1.5% among buildings that reported 2012 and 2013 data. However, when accounting for weather impacts and fuel differences, the weather-normalized source energy use for the same set of buildings decreased by 3% in 2013.

These claims are simply unjustified.

In particular consider the second point — that 2013 source energy used by DC buildings is 3% lower than it was in 2012 — demonstrating improved energy efficiency.  This claim is based on weather-normalized source energy numbers produced by the EPA’s Portfolio Manager.  The problem is that the EPA lowered its site-to-source energy conversion factor for electricity from 3.34 to 3.14 in July 2013 — a 6% reduction.  Because of this simple change, any building that has exactly the same energy purchases for 2013 that it did in 2012 will, according to Portfolio Manager, be using 4-6% less source energy in 2013 (depending on the amount of non-electric energy use).  In other words — the District finds its buildings used 3% less source energy in 2013 than in 2012 when, in fact, by doing nothing, all US buildings saved 5-6% in source energy over this same time frame.
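A worked example makes the size of this effect clear.  The fuel mix below is hypothetical; the conversion factors are the EPA's electricity factors before and after July 2013, with natural gas assumed to stay at roughly 1.05:

```r
# Site energy purchases identical in 2012 and 2013 (hypothetical building)
elec_site <- 1000        # kBtu of electricity
gas_site  <-  400        # kBtu of natural gas

src_2012 <- elec_site * 3.34 + gas_site * 1.05   # old electricity site-to-source factor
src_2013 <- elec_site * 3.14 + gas_site * 1.05   # new factor adopted July 2013

(src_2012 - src_2013) / src_2012                 # about 5% "savings" from doing nothing
```

The more electricity-dominated the building, the closer this windfall gets to the full 6%.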

It is said that “a rising tide lifts all boats.”  In this case the Washington DC boat did not rise quite as much as other boats.

More seriously, such small differences (1% – 3%) in average site or source energy are not resolvable within the statistical uncertainty of these numbers.  The standard deviations of the 2012 and 2013 mean site and source EUI for DC buildings are too large to rule out the possibility that such small changes are simply accidental rather than reflective of any trend.  Scientists would know that.  Politicians would not — nor would they care, if it makes for a good sound bite.
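A back-of-envelope calculation shows why.  The numbers below are made up but realistic; the point is only that building EUIs are broadly distributed, so the year-to-year difference in a portfolio mean carries sizable uncertainty (this simple version compares the two annual means and ignores the pairing of buildings across years):

```r
n        <- 400     # buildings reporting in both years (hypothetical)
mean_eui <- 180     # mean source EUI, kBtu/sf (hypothetical)
sd_eui   <- 90      # building-to-building spread (hypothetical)

se_diff <- sd_eui * sqrt(2 / n)      # std. error of the difference of two yearly means
change  <- 0.03 * mean_eui           # the claimed 3% drop, in kBtu/sf

change / se_diff                     # < 1 standard error: not a resolvable change
```

With numbers like these, a 3% shift is well within the noise.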

Let me now address the other claim.  It may well be true that the median ENERGY STAR score for District buildings is 74.  I cannot confirm this – but I have no reason to doubt its veracity.  But there are no data to support the assumption that the median ENERGY STAR score for all commercial buildings is 50.  All evidence suggests that the national median score is substantially higher — in the 60-70 range, depending on the building type.  My recent analysis shows that the science that underpins these ENERGY STAR scores is wanting.  ENERGY STAR scores have little or no quantitative value and certainly DO NOT indicate a building's energy efficiency ranking with respect to its national peer group — despite the EPA's claims to the contrary.

The claim that the median score for US buildings is 50 is similar to claiming that the median college course grade is a "C."  Imagine your daughter comes home from college and says, "My GPA is 2.8 (C+), which is significantly higher than the (presumed) median grade of 2.0 (C).  You should be very proud of my performance."  The problem is that the actual median college grade is much closer to 3.3 (B+).  It's called grade inflation.  It's gone on for so many years that we all know the median grade is not a "C."  Until recently ENERGY STAR scores were mostly secret — so the score inflation was not so apparent.  But the publication of ENERGY STAR scores for large numbers of buildings as a result of laws such as those passed in Washington DC has removed the cloak — and the inflation is no longer hidden.

ENERGY STAR scores are no more than a “score” in a rating game whose ad hoc rules are set by the EPA in consultation with constituency groups.   It seems to have motivational value, and there is nothing wrong with building owners voluntarily agreeing to play this game.  But like fantasy football, it is not to be confused with the real game.

2013 NYC Benchmarking Raises Questions about EPA’s new Multifamily Housing Model

A few weeks ago NYC released energy benchmarking data for something like 15,000 buildings for 2013.  Some 9,500 of these buildings are classified as "Multifamily Housing" — the dominant property type for commercial buildings in NYC.  While data from Multifamily Housing buildings were released by NYC last year, none included an ENERGY STAR building rating, as the EPA had not yet developed a model for this type of building.

But a few months ago the EPA rolled out its ENERGY STAR building score for Multifamily Housing.  So this latest benchmarking disclosure from NYC includes ENERGY STAR scores for 876 buildings of this type.  (Apparently the vast majority of NYC's multifamily buildings did not qualify to receive an ENERGY STAR score — probably because the appropriate parameters were not entered into Portfolio Manager.)  Scores span the full range, some being as low as 1 and others as high as 100.  But are these scores meaningful?

Earlier this year I published a paper summarizing my analysis of the science behind 10 of the EPA's ENERGY STAR models for conventional building types, including Offices, K-12 Schools, Hotels, Supermarkets, Medical Offices, Residence Halls, Worship Facilities, Senior Care Facilities, Retail Stores, and Warehouses.  What I found was that these scores were nothing more than placebos — numbers issued in a voluntary game invented by the EPA to encourage building managers to pursue energy-efficient practices.  The problem with all 10 of these models is that the data on which they are based are simply inadequate for characterizing the parameters that determine building energy consumption.  As if this were not enough, the EPA compounded the problem by making additional mathematical errors in most of its models.  The entire system is built on a "house of cards."  The EPA ignores this reality and uses these data to generate a score anyway.  But the scores carry no scientific significance.  ENERGY STAR certification plaques are as useful as "pet rocks."

Most of the above 10 models I analyzed were based on public data obtained from the EIA’s Commercial Building Energy Consumption Survey (CBECS).  Because these data were publicly available these models could be replicated.  One of the models (Senior Care Facilities) was based on voluntary data gathered by a private trade organization — data that were not publicly available. I was able to obtain these data through a Freedom of Information Act (FOIA) request and, once obtained, confirmed that this model was also not based on good science.

Like the Senior Care Facility model, the EPA’s Multifamily Housing ENERGY STAR model is constructed on private data not open to public scrutiny.  These data were gathered by Fannie Mae.  It is my understanding that a public version of these data will become available in January 2015.  Perhaps then I will be able to replicate the EPA’s model and check its veracity.  Based on information the EPA has released regarding the Multifamily ENERGY STAR model I fully expect to find it has no more scientific content than any of the other building models I have investigated.

One of the problems encountered when building an ENERGY STAR score on data that are “volunteered” is that they are necessarily skewed.  Put more simply, there is no reason to believe that the data submitted voluntarily are representative of the larger building stock.  ENERGY STAR scores are supposed to reflect a building’s energy efficiency percentile ranking as compared with similar buildings, nationally.  When properly defined, one expects these scores to be uniformly distributed in the national building stock.  In other words, if you were to calculate ENERGY STAR scores for thousands of Multifamily Housing Buildings across  the nation, you expect 10% of them to be in the top 10% (i.e., scores 91-100), 10% in the lowest 10% (i.e., scores 1-10), and so on.  If this is not the case then clearly the scores do not mean what we are told they mean.

Meanwhile, it is interesting to look at the distribution of ENERGY STAR scores that were issued for the 900-or-so Multifamily Housing facilities in NYC’s 2013 benchmarking data.  A histogram of these scores is shown below.  The dashed line shows the expected result — a uniform distribution of ENERGY STAR scores.  Instead we see that NYC has far more low and high scores than expected, and relatively fewer scores in the mid-range.  24% of NYC buildings have ENERGY STAR scores ranging from 91-100, more than twice the expected number.  And 31% of its buildings have scores 1-10, more than 3X the expected number.  Meanwhile only 12% have scores ranging from 41 to 90.  We expect 50% of the buildings to have scores in this range.

[Figure: histogram of 2013 ENERGY STAR scores for NYC Multifamily Housing buildings]
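This comparison is easy to reproduce from the public disclosure spreadsheet.  A sketch in R (the data frame nyc2013 and its column es_score are stand-ins for whatever the disclosure file actually calls them):

```r
scores <- nyc2013$es_score               # ENERGY STAR scores, 1-100
scores <- scores[!is.na(scores)]         # keep only buildings that received a score

bins     <- seq(0, 100, by = 10)                 # deciles: 1-10, 11-20, ..., 91-100
observed <- table(cut(scores, bins))
round(100 * observed / length(scores))           # percent of buildings in each decile

chisq.test(observed, p = rep(0.1, 10))           # test against the expected uniform distribution
```

For a score that truly represented a national percentile ranking, each decile should hold close to 10% of the buildings; the NYC data are nowhere near that.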

Of course it is possible that New York City just doesn’t have many “average” Multifamily Housing buildings.  After all, this is a city of extremes — maybe it has lots of bad buildings and lots of great buildings but relatively few just so-so buildings.  Maybe all the “so-so” buildings are found in the “fly-over states.”

I subscribe to the scientific principle known as Occam's Razor.  This principle basically says that when faced with several competing explanations for the same phenomenon, choose the simplest explanation rather than the more complicated ones.  The simplest explanation for the above histogram is that these ENERGY STAR scores do not, in fact, represent national percentile rankings at all.  The EPA did not have a nationally representative sample of Multifamily Housing buildings on which to build its model, and its attempt to compensate for this failed.  Until the EPA provides evidence to the contrary, this is the simplest explanation.

LEED Certification: intent, implementation, and results

Last week I had the opportunity to deliver the keynote address at the annual conference of the Ohio Public Facilities Maintenance Association (OPFMA) held in Columbus, OH.  Here is a link to the slides used for my presentation, LEED Certification: intent, implementation, and results.

The thrust of my presentation was to discuss what we know about primary energy savings and reductions in greenhouse gas emissions for LEED-certified buildings.  Despite the fact that there are roughly 11,000 U.S. commercial buildings certified before Jan. 1, 2013 under LEED New Construction (NC), Core and Shell (CS), Existing Buildings (EB:OM), and LEED for Schools — all LEED programs that address whole-building energy use — we have published data from just 2% of these buildings.  This paltry amount of data is mostly gathered through voluntary submissions by building owners willing to share their energy data.  You can bet that such data are skewed toward the better-performing buildings.

And even so, the data available show that, on average, LEED-certified buildings show no significant source energy savings or reduction in GHG emissions relative to comparable non-LEED buildings.  That was the thrust of my presentation.

Note that promoters of LEED certification continue to claim energy savings — but these claims are based on design projections, not actual performance measurements.  For instance, promoters of Ohio's green schools claim a 33% reduction in energy use.  But there has never been a study of energy used by Ohio's LEED-certified schools to demonstrate this assumed savings.  Such claims of energy savings are based on "faith," not "fact."