EPA makes a mockery of Freedom of Information Act

In my attempt to understand the EPA's methodology for calculating ENERGY STAR building benchmarking scores I have frequently requested specific information from the EPA.  Early on I found the EPA reluctant to share anything with me that had not already been publicly released by the agency.  Dissatisfied with this lack of transparency, I decided to formally request information from the EPA through the Freedom of Information Act (FOIA) process.  I filed my first FOIA request in March 2013.  I have since filed about 30 such requests.

The Freedom of Information Act requires that a Federal agency respond to such requests within 20 working days.  If the agency fails to comply you can file a lawsuit in Federal court and be virtually guaranteed a summary judgment ordering the agency to release the requested documents.  Of course, the courts move at a snail's pace, so you cannot expect this process to produce documents anytime soon.

The EPA keeps statistics on its handling of FOIA requests.  It sorts requests into two tracks, a Simple Track and a Complex Track.  EPA policy is to make every attempt to respond to Simple Track requests within the statutory 20-day time frame.  Complex Track requests take longer because documents must be located and processed for public release.  For instance, if you request all of Hillary Clinton's emails it will take time to locate them and to redact any portions that might be classified.

The EPA has also adopted a first-in, first-out policy for processing FOIA requests from a particular requester.  So, if I already have a complex FOIA request in the queue and I file a second complex FOIA request, it is the EPA's policy to complete processing of the first request before turning to the second.  The same policy applies to requests in the Simple Track.  But it is EPA policy to treat the two tracks independently, meaning that if I have a pending request in the Complex Track queue and subsequently file a simple request, the EPA will work on the two in parallel.  That is, it will not hold up a Simple Track request in order to complete a Complex Track request that was filed earlier.

I have a Complex Track FOIA request with the EPA that has been outstanding for nearly two years.  I have no expectation that the EPA will respond to this request unless I seek assistance from the courts.  They are simply intransigent.  This intransigence, combined with the EPA's first-in, first-out policy, means that the EPA will not process any other complex FOIA requests from me unless I get the courts involved.

On August 9, 2015 I filed a FOIA request asking the EPA to provide me with copies of 11 documents that summarize the development and revision of the EPA's Senior Care Facility ENERGY STAR building model.  I know these documents exist because I earlier received an EPA document entitled "ENERGY STAR Senior Care Energy Performance Scale Development," which serves as a table of contents for the documents associated with the development of this model.  This request requires no searching, as the requested documents are specifically identified, readily available, and cannot possibly raise national security issues.  Yet the EPA placed this request in its Complex Track and provided no response to me for more than 20 days.

On September 14, 2015, having received no response, I filed what is called an "Administrative Appeal," asking the Office of General Counsel to intercede and force the agency to produce the requested documents.  In my appeal I pointed out that my FOIA request was, by very definition, simple, and thus EPA policy required the agency to act on it within the 20-day statutory period.  By law the EPA has 20 working days to decide an Administrative Appeal.

On Friday, October 30, 2015 the EPA rendered a ruling on my Administrative Appeal.  The ruling is simple: the Office of General Counsel directs the agency to respond to my initial request within 20 working days.  Think of it: 58 working days (two and a half months) after I filed my initial FOIA request — a request which by law should have been answered within 20 working days — the EPA has now been told by the Office of General Counsel to respond to my request within 20 working days.  What a farce!





ENERGY STAR building models fail validation tests

Last month I presented the results of research demonstrating that the regressions used by the EPA in 6 of the 9 ENERGY STAR building models based on CBECS data are not reproducible in the larger building stock.  What this means is that ENERGY STAR scores built on these regressions are little more than ad hoc scores with no physical significance.  By that I mean the EPA's 1-100 building benchmarking score ranks a building's energy efficiency according to the EPA's current rules, rules which are arbitrary and unrelated to any important performance trends found in the U.S. commercial building stock.  Below you will find links to my paper as well as the PowerPoint slides and audio of my presentation.

This last year my student, Gabriel Richman, and I have been devising methods using the R statistics package to test the validity of the multivariate regressions used by the EPA for its ENERGY STAR building models.  We developed computer programs to internally test the validity of regressions for 13 building models and to externally test the validity of 9 building models.  The results of our external validation tests were presented at the 2015 International Energy Program Evaluation Conference, August 11-13 in Long Beach, CA.  My paper, "Results of validation tests applied to seven ENERGY STAR building models," is available online.  The slides for this presentation may be downloaded, and the presentation (audio and slides) may be viewed online.

The basic premise is this.  Anyone can perform a multivariate linear regression on a data set and demonstrate that certain independent variables serve as statistically significant predictors of a dependent variable, which, in the case of the EPA's building models, is the annual source energy use intensity (EUI).  The point of such regressions, however, is not to predict EUI for buildings within this data set — the point is to use the regression to predict EUI for other buildings outside the data set.  This is, of course, how the EPA uses its regression models: to score thousands of buildings based on a regression performed on a relatively small subset of buildings.

In general there is no a priori reason to believe that such a regression has any predictive value outside the original data on which it is based.  Typically one argues that the data used for the regression are representative of a larger population and that it is therefore plausible that the trends uncovered by the regression are also present in that larger population.  But this is simply an untested hypothesis.  The predictive power must be demonstrated through validation.  External validation involves finding a second representative data set, independent of the one used to perform the regression, and demonstrating the accuracy of the original regression in predicting EUI for buildings in this second data set.  This is often hard to do because one does not have access to a second, equivalent data set.

Because the EIA's Commercial Building Energy Consumption Survey (CBECS) is not a one-time survey, other vintages of the survey can supply a second data set for external validation.  This is what allowed us to perform external validation for the 9 building models that are based on CBECS data.  Results of external validation tests for the two older models were presented at the 2014 ACEEE Summer Study on Energy Use in Buildings and were discussed in a previous blog post.  Tests for the 7 additional models are the subject of today's post and my recent IEPEC paper.

If the EUIs predicted by the EPA's regressions are real and reproducible, then we would expect a regression performed on the second data set to yield similar results — that is, similar regression coefficients, similar statistical significance for the independent variables, and similar predicted EUI values when applied to the same buildings (i.e., as compared with the EPA regression).  Let the EPA data set be data set A and let our second, equivalent data set be data set B.  We use the regression on data set A to predict EUI for all the buildings in the combined data set, A+B.  Call these predictions pA.  Now we use the regression on data set B to predict EUI for these same buildings (data sets A+B) and call these predictions pB.  We expect pA = pB for all buildings, or nearly so, anyway.  A graph of pB vs. pA should be a straight line demonstrating strong correlation.
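
For readers who wish to experiment, here is a minimal sketch of this pA/pB test in R.  The data are synthetic and the single predictor (weekly operating hours) is illustrative; the actual EPA models regress source EUI on several CBECS operating parameters.  Because these synthetic data are generated from one real underlying trend, the two predictions agree closely — this is what a valid, reproducible model should look like, and it is precisely this agreement that most of the EPA models fail to show.

    # A toy external-validation (pA vs. pB) test.  All numbers are invented.
    set.seed(42)

    make_sample <- function(n) {
      hours <- runif(n, 30, 120)                     # weekly operating hours
      eui   <- 80 + 0.5 * hours + rnorm(n, sd = 30)  # source EUI, kBtu/sqft
      data.frame(hours, eui)
    }

    A <- make_sample(100)   # stands in for the EPA's regression data set
    B <- make_sample(100)   # an equivalent, independent data set

    fitA <- lm(eui ~ hours, data = A)   # regression on data set A
    fitB <- lm(eui ~ hours, data = B)   # regression on data set B

    AB <- rbind(A, B)                   # the combined data set A+B
    pA <- predict(fitA, newdata = AB)   # predictions from model A
    pB <- predict(fitB, newdata = AB)   # predictions from model B

    plot(pA, pB, xlab = "pA (kBtu/sqft)", ylab = "pB (kBtu/sqft)")
    abline(0, 1)    # valid, reproducible models cluster along this line
    cor(pA, pB)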

Below is such a graph for the EPA's Worship Facility model.  What we see is that there is essentially no similarity between the two predictions, demonstrating that the predictions have little validity.

[Figure: pB vs. pA predictions for the EPA's Worship Facility model]

This "predicted EUI" is at the heart of the ENERGY STAR score methodology.  Without it the ENERGY STAR score would simply rank buildings entirely on their source EUI.  But the predicted EUI adjusts the rankings based on operating parameters — so that a building that uses above-average energy may still be judged more efficient than average if it has above-average operating characteristics (long hours, high worker density, etc.).
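
To make the role of the predicted EUI concrete, here is a schematic sketch in R of how a 1-100 score can follow from it.  The EUI values, the gamma parameters, and the exact score mapping below are hypothetical stand-ins of my own; the EPA publishes its actual scores as lookup tables derived from a fitted distribution of efficiency ratios.

    # Schematic only: how a predicted EUI could feed a 1-100 benchmarking
    # score.  All numbers below are hypothetical.

    actual_eui    <- 150   # building's actual source EUI, kBtu/sqft
    predicted_eui <- 180   # EUI predicted by the regression model

    # Efficiency ratio: below 1 means less energy than predicted
    ratio <- actual_eui / predicted_eui

    # The reference population of ratios is fit with a 2-parameter gamma
    # distribution; the score is the building's (reversed) percentile in it.
    shape <- 10; rate <- 10                 # hypothetical fitted parameters
    score <- ceiling(100 * (1 - pgamma(ratio, shape = shape, rate = rate)))
    score   # lower ratio => higher score

If the regression's predicted EUI is not reproducible, neither is the ratio, and neither is the score — which is the point of the validation tests above.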

What my research shows is that this predicted EUI is not a well-defined number but instead depends entirely on the subset of buildings used for the regression.  Trends found in one set of buildings are not reproduced in another, equally valid set of similar buildings.  The process is analogous to using past stock market values to predict future values.  You can use all the statistical tools available and argue that your regression is valid — yet when you test these models you find they are no better at picking stock winners than monkeys are.

Above I have shown the results for one building type, Worship Facilities.  Similar graphs are obtained when this validation test is performed for Warehouses, K-12 Schools, and Supermarkets.  My earlier work demonstrated that the Medical Office and Residence Hall/Dormitory models also failed validation tests.  Only the Office model demonstrates strong correlation between the two predicted values pA and pB — and this only when Banks are removed from the data set.

The release of 2012 CBECS data will provide yet another opportunity to externally validate these 9 ENERGY STAR building models.  I fully expect to find that the models simply have no predictive power with the 2012 CBECS data.



Jay Whitacre wins 2015 Lemelson-MIT Prize

Today it was announced that Oberlin College physics alumnus (and my former student) Jay Whitacre (OC '94) has been awarded the Lemelson-MIT Prize for his inventive work on batteries.  His company, Aquion Energy, has attracted funds from some pretty important investors.  Not bad for a kid who didn't take calculus in high school.


Congrats Jay!

Mounting evidence that LEED certified buildings do not save energy

Two recent publications provide corroborating evidence that LEED-certified buildings, on average, do not save primary energy.  One of these looks at energy consumption for 24 academic buildings at a major university.  The other looks at energy consumption by LEED-certified buildings in India.  In both cases there is no evidence that LEED-certification reduced energy consumption.

The study of academic buildings is found in the article entitled "Energy use assessment of educational buildings: toward a campus-wide sustainability policy" by Agdas, Srinivasan, Frost, and Masters, published in the peer-reviewed journal Sustainable Cities and Society.  These researchers looked at the 2013 energy consumption of 10 LEED-certified academic buildings and 14 non-certified buildings on the campus of the University of Florida at Gainesville.  They appear to have considered site energy intensity (site EUI) rather than my preferred metric, source energy intensity.  Nevertheless their conclusions are consistent with my own — that LEED-certified buildings show no significant energy savings as compared with similar non-certified buildings.  This is also consistent with what has now been published in about 8 peer-reviewed journal articles on this topic.  Only one peer-reviewed article (Newsham et al.) reached a different conclusion — and that conclusion was rebutted by my own paper (Scofield).  There are, of course, several reports published by the USGBC and related organizations that draw other conclusions.

The second recent publication comes out of India.  The Indian Green Building Council (IGBC) — India's equivalent of the USGBC — of its own accord posted energy consumption data for 50 of some 450 LEED-certified buildings.  Avikal Somvanshi and his colleagues at the Centre for Science and Environment took this opportunity to analyze the energy and water performance of these buildings, finding that the vast majority of these LEED-certified buildings were underperforming expectations.  Moreover, roughly half of the 50 buildings failed even to qualify for the Bureau of Energy Efficiency's (BEE) Star Rating (India's equivalent of ENERGY STAR).  The results were so embarrassing that the IGBC removed some of the data from its website and posted a disclaimer discounting the accuracy of the rest.  No doubt the IGBC will in the future follow the USGBC's practice of denying public access to energy consumption data while releasing selected tidbits for marketing purposes.

How long will the USGBC and its international affiliates be afforded the privilege of making unsupported claims about energy savings while hiding their data?

The Fourth Great American Lie

There is this standing joke about the three great American lies:  1) "the check is in the mail;" 2) "of course I will respect you in the morning;" and 3) well ... let me skip the last one.  I think it is time to add a fourth lie to the list — this green project will lower energy use.

In my last post I mentioned that my home town of Oberlin, OH recently purchased new, automatic-loader trash/recycling trucks and spent an extra $300,000 so that three of them would include fuel-saving, hydraulic-hybrid technology.  Town leaders claimed these trucks would save fuel and reduce carbon emissions.  Simple cost/benefit calculations using their own cost and fuel-savings figures showed that this was an awful investment that would never pay for itself (in fuel savings) and that the cost per ton of carbon saved was astronomical.

A few weeks ago I requested from the City fuel consumption data for the first six months of operation of the new trucks.  The City Manager and Public Works Director instead asked me to wait until after their July 6 report to City Council on the success of the new recycling program.  They both assured me that fuel usage would be covered in this report.  I was promised access to the data following their presentation.

Last Monday, in his presentation to Council, the Public Works Director highlighted data showing that in its first six months of operation the new program recycled 400 tons — as compared with the 337 tons recycled in the comparable period before the new trucks were acquired.  This represents a 19% increase in recycling.  Unfortunately there was no mention of fuel usage or savings.

Yesterday I obtained fuel consumption data from the Public Works Director for Oberlin's new garbage/recycling trucks, along with comparative fuel data from previous years using the old trucks.  The new trucks are on track to use 2,000 gallons MORE diesel fuel annually than the old trucks used.  That's right, not less fuel, but MORE fuel.  This is a 19% increase in fuel usage.  Gee, what a surprise!

Soon the spin will begin.  City administrators will point out that fuel usage would be even worse were it not for their $300,000 investment in the hybrid technology.  They will point out that the increased fuel usage is due to the new automatic-loading technology included in these trucks (though they failed to mention any expected increase in fuel usage when the project was being sold to the public) — technology which enabled the use of larger recycling containers and hence the improvement in recycling.  What they will fail to tell us is that they could have achieved the same increase in recycling using older-style trucks without automatic loaders.

This is the second recent City project for which the public has been misled regarding expected energy savings.  The first was the LEED-certified Fire Station renovation.  This green building was supposed to save energy.  It, of course, is bigger and better than the building it replaced — oh yes, and it uses more energy.  But the increase in energy use wasn't as much as it might have been, because it was a green building.  Now we have the same result for the trash and recycling trucks.

Oberlin College is in the process of constructing a new, green hotel — called the “Gateway Project” as it will usher in a new era of green construction.  But people should understand, this new green hotel will use more energy than the old hotel —  it will be bigger and better, and its energy use won’t be as big as it might have been — and this should make us feel good.

And in the next few months Oberlin residents will be asked to approve additional school taxes to construct new, green, energy-efficient public school facilities.  But don't be surprised when these new facilities actually use more energy than the old ones did.  Don't get me wrong — they will be more energy efficient than the old facilities, but they will be bigger and better, and they will use more energy.

This is the new lie — that our new stuff will use less energy than our old stuff.  But it isn't true.  Fundamentally we want bigger and better stuff.  People like Donald Trump just build bigger and better stuff and proudly proclaim it.  But that isn't palatable for most of us — we feel guilty about wanting bigger and better stuff.  So instead we find a way to convince ourselves that our new stuff will be green, it will lower carbon emissions, it will make the world a better place — oh, and yes, it will be bigger and better.

We need our lies to make us feel good about doing what we wanted to do all along.  Don't get me wrong — sometimes the check is in the mail, and sometimes the green project does save energy.  But more often than not these lies are offered for temporary expediency.  And, of course, I really will respect you in the morning.

Misguided investments in energy efficiency and carbon reduction

Yesterday the Wall Street Journal published an article by Greg Ip summarizing the findings of an economic study conducted by Michael Greenstone, Meredith Fowlie, and Catherine Wolfram.  (Their original paper is entitled "Do Energy Efficiency Investments Deliver? Evidence from the Weatherization Assistance Program.")  These researchers looked at the actual energy savings and costs of a specific Weatherization Assistance Program (WAP).  What they found was that the homes that took advantage of the WAP achieved only about 40% of the energy savings that engineering calculations had projected.  When they compared the actual savings (not the estimated savings) to the costs they concluded 1) that the investments would never pay for themselves (i.e., the value of the energy saved over 16 years was less than the amount spent on the energy efficiency investments), and 2) that the amount of money spent per metric tonne of carbon saved (over these 16 years) is $338/tonne — about 10X more than estimates of the long-term cost to society of solving the carbon emissions problem.

This article caught my attention for two reasons.  First, it illustrates once again the large gap between measured energy savings and those estimated by promoters of energy efficiency programs.  In particular, I have seen this over and over with green buildings.  All the data I have analyzed show that, on average, LEED-certified buildings do not achieve the energy savings their designers predict.  Many organizations pride themselves on their portfolios of green buildings, yet the fact is these buildings consume no less primary energy than other buildings.  Society will not arrest climate change with this approach — even though it leads to all kinds of green awards.

But the second reason this caught my attention is the parallel between these investments and what is going on in my community of Oberlin, OH.  The Oberlin City Council has made a commitment to make the City climate-positive (I guess that is like giving 110% effort).  Apparently all divisions of the City are instructed to act in accordance with the City's Climate Action Plan.  The City's Municipal Power Company has contracted with Efficiency Smart to promote energy efficiency programs for its customers.  Efficiency Smart reports to the City on how much energy its programs have saved — savings that are based on projected estimates, not measurements.

A year ago the City had the opportunity to purchase new garbage/recycling trucks.  The City spent an extra $300,000 in order to include hydraulic-hybrid, fuel-saving technology in these trucks.  The City's Public Works Director estimated the diesel fuel savings to be 2,800 gallons annually.  At a cost of $3.75/gal this represents an annual return of $10,500 on a $300,000 investment.  Since the trucks are expected to last only 10 years, the investment will never pay for itself.

What about the carbon savings?  If you work through the math you find that the reduced carbon emissions (associated with less fuel usage) come at a cost of about $600 per ton of CO2, equivalent to roughly $2,400 per metric tonne of carbon saved.  It was an utterly foolish decision to spend money this way.  And the decision was based on projected savings.  In a few months we will see how much fuel the trucks have actually used.
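
For those who want to check the math, here is the calculation worked out in R.  The emission factor (22.4 lb of CO2 per gallon of diesel) and the choice to net fuel-cost savings out of the investment are my assumptions in reconstructing the figures; with them the numbers land close to those quoted above.

    # Cost/benefit arithmetic for the hybrid trucks (my reconstruction).
    extra_cost <- 300000   # $ premium paid for the hybrid technology
    fuel_saved <- 2800     # projected gallons of diesel saved per year
    fuel_price <- 3.75     # $/gal
    life       <- 10       # expected truck life, years

    annual_return  <- fuel_saved * fuel_price       # $10,500 per year
    simple_payback <- extra_cost / annual_return    # ~29 years >> 10-yr life

    # Carbon: ~22.4 lb CO2 per gallon of diesel (assumed emission factor)
    co2_saved <- fuel_saved * life * 22.4 / 2000    # ~314 tons CO2 over life
    net_cost  <- extra_cost - annual_return * life  # net of fuel savings
    net_cost / co2_saved    # ~$620 per ton CO2; times 44/12 gives roughly
                            # the $2,400-per-tonne-of-carbon figure above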

[Photo: City of Oberlin refuse truck]

Don't get me wrong.  I am an advocate for energy efficiency that leads to real, cost-effective savings.  But there must be a cost/benefit analysis.  We cannot afford to throw money away on schemes that yield so little return.  And we cannot base our decisions on "projected" savings.  I like the way that Wal-mart approaches energy efficiency: perform the up-front calculation to find the projected savings; if these look good, retrofit a couple of stores and measure the actual savings; if the trial confirms the savings, roll out the same changes to all the other stores; if not, move on.


Why does the EPA publish false claims about its Medical Office ENERGY STAR model?

To say that someone “lied” is a strong claim.  It asserts that not only is the statement false but the person making it knows that the statement is false.

The EPA revised and updated its ENERGY STAR Technical Methodology document for Medical Office Buildings in November 2014.  That document makes the following claims:

  1. It describes the filters used to extract 82 records from the 1999 CBECS.
  2. It claims that the model data contain no buildings less than 5,000 sf in size.
  3. With regard to the elimination of buildings under 5,000 sf, the EPA writes, "Analytical filter – values determined to be statistical outlyers." [sic]
  4. The cumulative distribution for this model, from which ENERGY STAR scores are derived, is said to be fit with a 2-parameter gamma distribution (see the sketch below for what such a fit entails).
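
For context, fitting a 2-parameter gamma distribution is a routine exercise; the sketch below shows what it entails in R, using made-up efficiency ratios in place of the EPA's (unreleased) data.  The alpha and beta parameters of such a fit are exactly what I later requested under FOIA.

    # Fitting a 2-parameter gamma distribution in R (illustrative data only)
    library(MASS)
    set.seed(1)
    ratios <- rgamma(82, shape = 8, rate = 8)    # stand-in for the 82-record
                                                 # efficiency-ratio data set
    fit <- fitdistr(ratios, densfun = "gamma")   # maximum-likelihood fit
    fit$estimate                                 # alpha (shape), beta (rate)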

All of the above statements/descriptions are false.  The filters described by the EPA do not produce an 82-record dataset, and the dataset they do produce does not have the properties (min, max, and mean) described in Figure 2 of the EPA's document.  Moreover, a regression using the EPA's variables on the dataset obtained with their stated filters does not produce the results listed in Figure 3 of the EPA's document.  In short, this EPA document is a work of fiction.

I have published these facts previously in my August 2014 ACEEE paper entitled “ENERGY STAR Building Benchmarking Scores: Good Idea, Bad Science.”  Six months ago I sent copies of this paper to EPA staff responsible for the agency’s ENERGY STAR building program.

I have given the EPA the opportunity to supply facts supporting its claims by filing three Freedom of Information Act (FOIA) requests: the first (EPA-HQ-2013-00927) for the list of 1999 CBECS IDs that correspond to the 82-building dataset, the second (EPA-HQ-2013-009668) for the alpha and beta parameters of the gamma distribution that fits their data, and the third (EPA-HQ-2013-010011) for documents justifying the exclusion of buildings under 5,000 sf from many models, including Medical Offices.  The EPA has closed the first two cases, indicating it could not find any documents with the requested information.  Seventeen months after I filed the third request it remains open, and the EPA has provided no documents pertaining to the Medical Office model.  The EPA is publishing claims for which it has no supporting documents and which I have demonstrated to be false.  The details of my analysis are posted on the web and were referenced in my ACEEE paper.

In November 2014 the EPA corrected errors in other Technical Methodology documents yet it saw no need to correct or retract the Medical Office document.  Why is it so hard for the EPA to say they messed up?

It is common for scientists to correct mistakes by publishing "errata" or even by withdrawing a previously published paper.  No doubt EPA staff once believed the document they published was correct.  But how is it possible that the EPA remained unaware of the errors while it continued to publish and even revise this document for nearly a decade?  How can the EPA continue to publish such false information six months after being informed of the errors?

Is the EPA lying about its Medical Office building model?  I cannot say.  But it is clear that the EPA either has total disregard for the truth or it is incompetent.

If these folks worked for NBC they would have to join Brian Williams on unpaid leave for six months.  Apparently the federal government has a lower standard of competence and/or integrity.