Evidence-based policy blogger Evidence Soup called shenanigans this week on a consortium of firms and advocacy groups for asking US President Obama to set a goal of giving consumers more feedback on their energy usage.  Why?  Because, Evidence Soup argues, the letter’s authors don’t muster any specific evidence for their stand, despite vague references to ‘studies and experience’ in support of their position:

Too often, statements that “studies show” are accepted without challenge. These claims are poorly disguised as evidence-based management, when in fact they are the enemy.

In fact, Evidence Soup grumbles, there isn’t enough data to declare a verdict on energy feedback yet.  They point to a review (PDF) that they describe as highlighting the problems with existing pilot programs of in-home energy-consumption displays.  The studies (which report wildly different effect sizes, from a trivial 2.7% reduction to an impressive 18%) just don’t “rise to the scientific standard of reproducible results,” Evidence Soup writes.

In general, these pilots are quasi-experiments at best.  Many are underpowered, beset by confounds (like weather), or lack control groups entirely.

Fair enough:  Some of them are promising, but they’re just not well-designed enough to rule out plausible alternative explanations.  And we don’t want utilities to spend their conservation-promotion dollars sending out in-home displays that don’t actually work.
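To put a number on the “underpowered” complaint, here’s a back-of-the-envelope power calculation.  This is purely an illustrative sketch using the standard two-sample normal approximation; the 30% coefficient of variation in household usage, the 5% significance level, and 80% power are my own assumptions, not figures from the review.

```python
import math

def households_per_group(effect_frac, cv=0.30,
                         z_alpha=1.959964, z_beta=0.841621):
    """Approximate households needed per arm to detect a fractional
    reduction in mean energy use (two-sample normal approximation;
    alpha = 0.05 two-sided, power = 0.80).

    effect_frac -- expected fractional reduction (e.g. 0.027 for 2.7%)
    cv          -- assumed coefficient of variation of household usage
                   (0.30 is an illustrative guess, not a measured value)
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (cv / effect_frac) ** 2)

# The small end of the reported effect range takes far more households
# to detect than the large end:
small = households_per_group(0.027)  # roughly 2,000 households per arm
large = households_per_group(0.18)   # under 50 households per arm
```

Under these (made-up but plausible) assumptions, a pilot with a few dozen households per arm can only hope to detect effects near the top of the 2.7–18% range — so a small pilot that reports a null or noisy result doesn’t tell us much either way.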

So what’s the problem?

The problem, as I see it, is that the review (and Evidence Soup) turns a blind eye to decades of well-designed, empirical, experimental research on this very topic:

Let’s have a look.  Seligman and Darley (in 1977!) conducted a field experiment (with random assignment and a control condition), and found that feedback on usage (even just four times a week) shaved consumption by 10%.  Becker, a year later, found evidence that feedback had to be paired with goal-setting to unlock conservation efforts.  In the early 80s, Midden et al. added further experimental evidence that individual feedback helps reduce energy use, in contrast to ineffective approaches like providing general information.  In the late 1980s, van Houwelingen and van Raaij found that immediate read-out displays got people conserving — and kept them conserving, relative to control (though the advantage of daily over monthly feedback was primarily in the short run).  Around the same time, Sexton et al. changed the nature of the goal, giving people feedback about the cost of their energy, which varied by time of day:  Again, feedback changed behaviour – this time toward shifting time-of-day usage rather than overall conservation.  Siero and colleagues (1996) found that feedback on consumption helped, but not as much as feedback with a basis for comparison (in their industrial context, employees got feedback on their own consumption, plus information about another unit’s consumption).

This is not even close to the totality of the literature on the topic.  Abrahamse et al. (2005) review some thirty-eight studies on energy consumption and conclude that feedback, especially when frequent, reliably reduces consumption.  Are there moderators at play that change the effectiveness of these kinds of programs?  Darn skippy.  Does feedback probably work best when paired with other interventions like goal-setting or social comparison?  Sure.

But when you pair the tidily designed (but small-scale) experiments in these papers with a track record of success in (admittedly poorly designed) utility pilot programs, I find it hard to muster as much skepticism as Evidence Soup does about this kind of intervention.

Photo:  Probably the wrong type of energy usage feedback, via Passive Aggressive Notes.

Edit:  Evidence Soup responded to my post.  Her view isn’t necessarily that energy feedback is ineffective, but rather that advocates have a duty to present the evidence — not just wave their hands around and mention unnamed ‘studies’.