We – electricity and natural gas customers – pay a lot of money every year, collected in our utility bills, to support a multi-billion dollar energy conservation industry. This has consequences. As we keep funnelling more and more billions of public dollars into energy efficiency programs, those programs will attract more attention. People will do the math, and start to wonder whether those programs are money well spent.
Recently, Tom Adams and Ross McKitrick released “Demand-Side Mismanagement: How Conservation Became Waste”, a Fraser Institute paper criticizing the money being spent on energy efficiency programs. Their conclusion is that the high quantities of energy “saved” by these programs have likely not been saved at all, and on balance the programs do more harm than good. Subsidizing energy efficiency with public funds, they argue, should be stopped.
Then, last week, Bruce Sharp authored an article entitled “Electricity Conservation: Kill or Be Killed”, in which he argued that electricity efficiency programs have, in the short term, a net impact of close to zero. Some people have lower bills, because they have conserved most successfully, while others have higher bills, because they didn’t conserve as much and their rates have gone up to pay for the conservation of others. For example, he calculates that if the market as a whole reduces consumption by 15%, and you personally only reduce your consumption by 10%, your total bill will go up by 5.1%, even though you are using less energy.
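For readers who like to check the arithmetic, the winners-and-losers effect can be sketched in a few lines of Python under the simplest possible assumptions (a purely volumetric rate, and rates rising just enough to recover the same revenue). Sharp's published 5.1% presumably rests on rate details not reproduced here, so this sketch lands near his number, not on it:

```python
# Simplified "kill or be killed" arithmetic. Assumes a purely
# volumetric rate and full revenue recovery -- a simplification,
# since Sharp's 5.1% figure rests on assumptions not given here.

def bill_change(market_reduction: float, my_reduction: float) -> float:
    """Fractional change in my bill when the utility raises rates to
    recover the same revenue from a market that now uses less energy."""
    new_rate_multiplier = 1.0 / (1.0 - market_reduction)  # rates go up
    my_new_volume = 1.0 - my_reduction                    # I use less
    return my_new_volume * new_rate_multiplier - 1.0

# The market conserves 15%, I conserve only 10%: my bill still rises.
print(f"{bill_change(0.15, 0.10):+.1%}")  # roughly +5.9% in this simplified model
```

If you conserve exactly as much as the market average, the model says your bill is unchanged; conserve less, and you pay for your neighbours' savings.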
These are only the local articles, and only in the last few weeks. Critiques like these have been appearing more and more throughout North America in recent years, as conservation spending has increased. That is likely to continue. It is no real surprise that the generally accepted truth – that big, aggressive energy efficiency programs are “good for us” – is being challenged, and not only by ideologues and nutbars, but now by serious people, with serious arguments.
Don’t bet on the energy efficiency industry winning this battle. The truth is murkier than you might first think.
There are three current issues that face energy efficiency programs:
- What are the real savings being delivered by these programs (and are those savings worth the cost)?
- Which conservation and energy efficiency measures, activities, and options should be subsidized?
- How do you ensure that these programs are fair to all energy users, not just a reallocation of costs between winners and losers?
This article will deal only with the first of those: how much are we actually saving when we spend billions of dollars on energy efficiency programs, and how cost-effective is that spending?
Billions and Billions
It is kind of an ongoing joke among some energy industry insiders that energy efficiency programs save “billions and billions” (thank you, Carl Sagan) of cubic meters of gas, or kilowatt-hours of electricity, i.e. astronomical numbers. I can’t count the number of times I have heard a skeptic say: If we had really saved all that energy the conservation program managers have claimed, we would be using nothing today. Zero.
Clearly conservation programs have had some impact. Since we started pouring money into gas and electricity efficiency programs thirty years ago or more, the growth in energy use has slowed, and in some cases average energy use per customer has gone down. This is true despite a general upward pressure on energy use due to the increasing number and size of functions that need energy inputs (electronics, just to give one easy example). We have more reasons to use energy, so using it more efficiently has to deal with those natural increases, as well as our pre-existing uses.
But there is a problem with the connection between efficiency programs and the “savings” claimed (i.e. the reduced energy use). That problem has two components: a) we measure the overall efficiency savings wrong, and then b) we get the causal connection between the programs and the savings wrong. Neither error is small.
Bottom-Up “Measurement” of Savings
The first component of the “savings” problem is that we determine the savings from efficiency programs using a bottom-up (i.e. theoretical) calculation, rather than a top-down (i.e. empirical) measurement. If New Widget is designed to use half as much energy as Old Widget, then we calculate the value of the switch from Old to New as the total energy use from Old Widget (called the “baseline”), minus 50%, times how long New Widget will last (its “effective useful life”, or EUL). We never actually test, through metering for example, whether we really used half as much each year, or whether we kept saving it for the whole technical life of New Widget.
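The bottom-up formula is simple enough to sketch in code; the widget numbers below are invented for illustration, not drawn from any actual program:

```python
# Minimal sketch of a bottom-up ("deemed") savings claim.
# All figures here are illustrative, not from any real program.

def deemed_savings(baseline_annual_use: float,
                   savings_fraction: float,
                   effective_useful_life_years: float) -> float:
    """Claimed lifetime savings: baseline use, times the engineering
    savings fraction, times the EUL. No metering ever checks this."""
    return baseline_annual_use * savings_fraction * effective_useful_life_years

# Old Widget used 1,000 m3 of gas per year; New Widget is rated to use
# half as much and to last 15 years, so the program books the full
# 7,500 m3 as "savings" the day the widget is installed.
print(deemed_savings(1000.0, 0.5, 15.0))  # 7500.0
```

Every term in that product is theoretical: the baseline, the savings fraction, and the lifetime all come from engineering assumptions, not meters.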
What this means is that if we are less likely to turn our lights out because they are LEDs, and thus highly efficient, that is not factored into the calculation of savings from LEDs. If we take longer showers with a low-flow showerhead, that doesn’t count. If you would have replaced your Old Widget with something more efficient next year anyway, that is not considered. If the reduced use by the New Widget means that some other energy use increases, that is (usually) not included. And, if in the real world New Widget doesn’t live up to its manufacturer’s specs, but saves somewhat less, or declines in performance over time, that is also ignored.
I could go on. The point is that the bottom-up approach to calculating energy savings does not faithfully track the actual energy saved. It is an entirely theoretical calculation, with a built-in bias in favour of calculating higher savings amounts.
This was demonstrated in a study done by Pacific Economics Group for the Ontario Energy Board in 2010. That study sought to find a statistical connection between spending on gas efficiency programs, and actual natural gas use. The theory was that, as we spend more money on efficiency programs, gas use should on average be declining on pace with that spending, and we should be able to identify the correlation once we adjust for extraneous factors. If there was a statistical connection, that could validate, at least in part, the level of savings resulting from the programs.
Instead, the general conclusion of the study was that no statistical connection could be found. (Well, it was more complicated than that. This is a simplified summary.)
Perhaps the best example I have seen in a long while of this kind of “bottom-up fallacy” surfaced recently in a meeting I attended about the potential for further gas efficiency savings. A potential study asks “how much can we save?”, as opposed to the “how much have we saved?” that standard savings studies determine. The concepts are similar, however.
Top-rated consultants in the field (who are really quite good) had first calculated the technical potential for reducing gas use (regardless of economics) based on efficient technologies available or likely to be available. Then, they calculated the subset of that technical potential that would in theory be economic. Start with what is technically possible, then narrow that down to what makes sense economically. This is the conventional way to forecast potential.
The next step is what is called “achievable potential”, which is how much of the economically viable efficiency potential could actually be achieved in a given time period if money were no object. This factors in things like when people replace their furnaces, and the ability of the market to absorb new technologies, and things like that.
What the study authors found is that, if we are prepared to spend as much as it takes over the next fifteen years, we can reduce residential gas use by about 15% from today’s levels in that time frame. The cost? $22 billion. (No, I am not making this up. I wouldn’t be able to keep a straight face.)
No-one is proposing to spend $22 billion, and just as well. It turns out the calculation, while done correctly, is not actually correct. That 15% reduction in energy use double counts some of the potential savings, so that it is not really “achievable” at all, even if we did shell out $22 billion.
For example, the study calculates the savings that would arise if we installed as many high efficiency windows as we can, comparing energy use with today’s windows to energy use with the new windows. Result: X billion cubic meters of gas “saved”. The study then calculates the savings that would arise if we replaced our mid-efficiency gas furnaces with high efficiency furnaces, again comparing energy use today to energy use with the higher efficiency furnaces. Result: another X billion cubic meters of gas “saved”. Then, the study adds the two together. If we do both, we save 2X billion cubic meters of gas.
What the study doesn’t consider is that, once we have replaced all our windows, there is less gas use for the high-efficiency furnace to save. There is overlap. The same savings were, in effect, being counted twice.
I thought that was bad enough. Oh no. It was even worse. The study, for example, counted savings from two different types of window replacements, even though any given window can only be replaced once, one way or the other.
There is a long list of examples like this. None of the interactive effects of energy efficiency measures (called “cascading”) were included.
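The double-counting is easy to see in a toy calculation; the 20% figures below are made up for illustration, not taken from the study:

```python
# Sketch of why independent savings estimates can't simply be added.
# The 20% savings fractions are invented for illustration.

def naive_total(fractions):
    """Add each measure's savings fraction as if the others didn't exist."""
    return sum(fractions)

def cascaded_total(fractions):
    """Apply each measure only to the load left over by the previous ones."""
    remaining = 1.0
    for f in fractions:
        remaining *= (1.0 - f)
    return 1.0 - remaining

measures = [0.20, 0.20]  # say windows save 20%, the furnace saves 20%
print(f"{naive_total(measures):.2f}")     # 0.40 -- the double-counted answer
print(f"{cascaded_total(measures):.2f}")  # 0.36 -- windows leave less for the furnace
```

With a dozen measures instead of two, the gap between the naive sum and the cascaded total gets much wider, which is how a study arrives at an “achievable” number that isn’t.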
By the time the consultants were finished explaining all of this, there was good news and bad news. The bad news: the 15% savings is not a real number. We can’t actually reduce use by 15%. The good news: even with an unlimited budget, we wouldn’t spend $22 billion. It would be much less, because once we were actually spending the money we would not, one hopes, chase those phantom savings. We would save a lot less, but we would spend a lot less as well.
We use the same, bottom-up approach as in the potential study to calculate the savings utilities and governments have already achieved in their current energy efficiency programs. It is just as incorrect.
Now, the normal reaction to this example, and all the many other issues with bottom-up calculations, is “Why don’t we just measure actual reductions in energy use?” Measure before, measure after, subtract. How hard can it be?
Sounds great, but reality is not quite so simple. If a house uses more or less energy this year compared to last year, why is that? Is it the new insulation that was installed under a government/utility incentive program? Is it because the weather was colder or warmer this year? Is it because the oldest child went off to university? How about the youngest child’s primary school course on saving the environment? Or the parent’s discovery that there is an internet? Or the new plug-in electric car in the driveway? Or a new puppy that results in more windows being left open to “air the place out”? The list is almost endless. The cost to do an empirically valid analysis of energy use reductions, and in particular what the incentive for that new insulation meant in improved efficiency, is prohibitive.
And, if you think that the private home example is complicated, imagine how difficult measurement would be for a manufacturing facility, with many energy uses and thousands of variables affecting the overall energy levels. When the manufacturer replaces an old, inefficient machine with a new, efficient one, can measurement of actual use really tell you the real world impact of that new machine?
As a result, while both the government and the regulator – the Ontario Energy Board – have recently been emphasizing more measurement of actual savings, the number of areas in which that is actually being done is minimal. It is really not possible in most cases.
We are stuck. We can’t measure savings properly. We know that. We also know that the way we do determine savings overstates those savings by some unknown amount. The difference is likely a big number.
When we were spending $1.00 for every $10.00 of future savings, this was less important. With higher budgets, we now sometimes spend $1.00 for every $1.25 of future savings. It matters a whole lot more that we have confidence in our savings forecasts.
The “Causation” Question
The other problem with our “savings” claims is that we are trying to measure what savings results are “caused” by particular programs, and by the spending associated with those programs.
Any philosophy major will tell you that causation questions are the most fun you can have without breaking the law. There is even a debate about whether anything really causes anything else at all. But, even setting that aside, there is a rich tapestry of causation concepts and relationships that only a mother could love. (How many different types of causation are there, for example? In law school, they teach about half a dozen – proximate cause, causa causans, causa sine qua non, and so on – but those are only the main groupings. Philosophers are not so limited.)
Take a simple example. A chemical company launches a program to reduce the leakage of hot steam from its high pressure pipes. The initial reason that the company did this was a visit from a government safety inspector, who saw a twenty-foot “plume” of hot steam, judged it unsafe (go figure), and wrote them up for it.
The company’s head office sent its steam-leak expert up from Wichita, who criticized local management for not already having implemented the international group’s mandatory steam leak protocol. Local management was much embarrassed.
On the other hand, local management hadn’t implemented the protocol because of its $3 million annual cost. While their energy savings from doing so would, each year, be more than $6 million, and the plant would be much safer, the cost is in one budget, and the savings are in another budget, and it is hard to get them reconciled given the budget approval process. Every time they tried to get it into the capped maintenance budget, another priority intervened.
Along comes the gas utility, telling management that they can provide a $100,000 cheque if the company implements the steam leak protocol. The additional $100K doesn’t actually matter to the economics of the project, which are already very good, but it allows management to treat the spending as a special case, outside of its normal budgets. The project goes ahead. The utility spends $100,000 of ratepayer money, “causes” $6 million of natural gas savings each year for an assumed ten years, and gets an extra profit for its shareholders (also paid by the ratepayers) of $500,000.
What “caused” this savings? Some would say the real cause was the safety report. Others would say it was the international steam protocol already in place within this corporate group. Others would say it was the great project economics, with a six month payback period simply too attractive to pass up. The utility, though, would say that, without their cheque and imprimatur, the project wouldn’t have happened. (“Causa sine qua non”, for those of you who are keeping score at home.)
None of those so-called causes are in fact THE cause. Reality is more complicated than that.
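The arithmetic of this anecdote is worth pausing on. Using the dollar figures given above (and treating the $3 million protocol cost as up-front spending, which is what the six-month payback implies), here is what the utility gets to claim:

```python
# Checking the arithmetic of the steam-leak anecdote, using the
# figures from the story. The $3M protocol cost is treated as
# up-front, consistent with the six-month payback mentioned.

protocol_cost = 3_000_000    # cost of implementing the leak protocol
annual_savings = 6_000_000   # avoided gas purchases per year
incentive = 100_000          # the utility's cheque, ratepayer-funded
assumed_life_years = 10      # savings lifetime assumed by the program

payback_months = protocol_cost / annual_savings * 12
claimed_savings = annual_savings * assumed_life_years

print(payback_months)               # 6.0 months -- attractive on its own
print(claimed_savings / incentive)  # 600.0 -- "savings" claimed per incentive dollar
```

A project that pays for itself in six months didn’t need a cheque worth 3% of its cost to be worth doing; yet the program books $60 million of “caused” savings against that $100,000.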
This is just a simple example. More complex examples abound. The homeowner who puts in new insulation? Is it because the gas utility offered them a rebate? Or, is it because this year their mortgage was coming up for renewal, so they could add to it for home improvements? Or, because there are public service messages every day from government and others exhorting them to use energy more efficiently? Or, because the electric utility is also pushing more insulation? Or, because the insulation company knocked on their door and offered to match the utility rebate with a discount of their own? Or, because one of the kids in the house just studied energy efficiency in school, and asked their parents why their house was so inefficient? Or, because energy costs keep rising, and they are sick of paying higher rates?
And so on.
Energy efficiency programs don’t exist in a vacuum. They exist in an environment today in which energy efficiency and conservation are discussed constantly. End users are bombarded with information promoting efficiency, and incentives to do so. Even hermits hear about LED light bulbs.
And, in fact, the reasons why any person implements any efficiency measure are complex. It is never just the rebate, or the one advertisement, or any other single cause. You know you have to quit smoking. Why was today the day? Similarly, you know you have to be more efficient in your use of energy. Why now? There is no simple answer.
To deal with this problem, we measure the results of energy efficiency programs by starting with the assumption that every time a utility pays an incentive to a customer to conserve, that incentive was the sole reason for the customer becoming more efficient. Now, we know that is not true (it’s ridiculous, in fact), so we adjust that number using something called “net to gross”.
Net to gross has two parts. First, we assume that some customers will take the program incentive money even though they would have implemented the efficiency measure anyway. The program wasn’t the cause of their efficiency. Those people are called “free riders”, and in some more sophisticated calculations they can even be full or partial free riders, reflecting the many possible causes for their actions.
Second, we assume that some customers don’t apply for the program incentive, but implement the efficiency measure anyway because of all the hype about it, or because their neighbour did it, and so are indirectly influenced by the incentive program. This is called “spillover”.
The way we calculate these two effects is that we survey energy customers to find out why they did what they did. Want to know why people install insulation? Do a survey, and ask them. For example, they are asked “You took a $1,000 cheque from the gas utility as an incentive. Did that money influence your decision to install the insulation?” Reliable stuff, this survey data.
Coincidentally, the result is often that the net-to-gross adjustment – the spillover that increases the assumed savings, and the free ridership that reduces them – nets out to zero, and the utility or government program gets full credit for all “savings” whenever it pays someone an incentive.
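Mechanically, net-to-gross is just a multiplier on the claimed savings; the free-ridership and spillover rates below are invented for illustration:

```python
# Sketch of a net-to-gross adjustment. The free-ridership and
# spillover rates are invented for illustration.

def net_savings(gross_savings: float,
                free_rider_rate: float,
                spillover_rate: float) -> float:
    """Discount gross savings for free riders, then add back spillover.
    When the two survey-derived rates happen to be equal, the
    adjustment nets to zero and the program keeps full credit."""
    return gross_savings * (1.0 - free_rider_rate + spillover_rate)

# 100 units of gross "savings", 25% free ridership, 25% spillover:
print(net_savings(100.0, 0.25, 0.25))  # 100.0 -- full credit, conveniently
```

Both rates come from those same surveys, which is why the offsetting result should raise an eyebrow.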
The other interesting fact, and perhaps the most telling, is that as the market’s cacophony of messages about efficiency has increased over the years to the current deafening levels, the assumptions of causation have not changed very much. You would think that, with conservation now much more top of mind, the average person would be a lot more likely to be efficient without any incentives. The current methods of savings measurement do not reflect that likelihood.
It is hard not to conclude that much of the “savings” calculated for current energy efficiency programs are caused primarily by other things, rather than by the money being spent on incentive programs. In my own discussions with people who actually manage energy use, they will publicly – and to any survey company that calls – say that utility incentives are a godsend (and deny saying anything different to me). Privately, and more candidly (particularly after a couple of beers), they will say that almost every efficiency measure they have ever installed would have happened sooner or later, with or without the incentives from utility efficiency programs.
Are the Savings Real?
In the last two decades, we have spent billions of dollars to incent people to implement energy efficiency measures. Those programs have certainly had an impact.
The truth, however, is that there is no reliable evidence that the cost of those programs was justified by the efficiency improvements caused by those programs. The evidence we do have is just abysmal:
- The methods we use to count up the value of the efficiency achieved are obviously and materially flawed, and we don’t really have a better solution at hand. It is not for want of trying. It is just hard to get it right.
- Further, even if we were able to do the math more reliably, the causal connection between paying incentive dollars, and achieving efficiency results, is poorly understood and apparently overstated. The methods we use are inadequate, even simplistic, and we have not been able to develop better methods. Again, it is not for want of trying.
It is unlikely that the overall conclusion of Adams and McKitrick – i.e. that these programs have not produced any benefits – is correct. Clearly they have. The real issue is whether they have been, and continue to be, cost-effective, particularly at current spending levels. Are the benefits we are getting more than the costs we are incurring? Does it make sense to spend more money to chase greater and greater “savings”?
The honest answer is: We just don’t know.
What to Do, What to Do?
My first foray into the energy business came about because I was a young “environmentalist”. I had read much of the published material on how we produce dirty energy, then waste it scandalously. At that time – the mid-70s – there was a lot of stuff written about this, and there is little doubt that as a twenty-something I became a bit of a true believer. (Of course, then I went on to become a tax lawyer on Bay Street, so perhaps I was less pure than some of my peers.)
It was a time of intense debate over our energy future. In the US, the Energy Policy and Conservation Act was passed in 1975, largely a response to the oil crisis two years earlier. Generous US government support for wind energy at that time caused rapid cost reductions (and also many abuses). SERI, the Solar Energy Research Institute, was established in 1977. Friends of the Earth (Amory Lovins) published “Soft Energy Paths”, also in 1977. I could go on. There was a lot happening.
All of us – at least those who were looking for a more environmentally sustainable future – thought that conservation would be the single most important route to achieving that future. We all knew that we would still have to have real energy sources (renewable electricity generation, for example), but probably a lot less if we just stopped wasting it.
From the early California energy efficiency programs of the mid-70s to today, the energy conservation industry has grown to become a multi-billion dollar economic activity. A recent report from the International Energy Agency, “Energy Efficiency Market Report 2015”, estimates that energy efficiency improvements around the world over the last 25 years have avoided $5.7 trillion of energy purchases, including $550 billion in 2014 alone. In the last year for which they have done the calculation, they estimate that world energy use in 2012 was about 8.5% lower than it would have been without the efficiency improvements made since 2002. Thus, ten years of improving efficiency produced a cumulative 8.5% improvement. Not chump change. (Unfortunately, the IEA numbers use some of the same flawed approaches that we use in our programs today.)
Along the way, though, the US and Canada have come to spend tens of billions of government and ratepayer dollars each and every year pursuing greater energy efficiency. As Adams and McKitrick point out, this spending has never been subjected to an independent audit. No-one really knows whether it is money well spent or money being wasted.
As we continue to approve greater and greater amounts for energy efficiency programs, perhaps now is a good time to take a closer look at whether we are really achieving our goals with these programs.
No, don’t stop the programs. We can multi-task. But let’s not put this off either, continuing to apply blind faith rather than disciplined analysis to expanding our efficiency spending. We know the weaknesses and biases of our “savings” results. Sooner or later we have to get these numbers right. Now seems like a good time to increase this effort.
- Jay Shepherd, April 24, 2016