This report is a summary of the main emerging themes of the workshop. It is not a record of the meeting.




Key issues

  • Numerous sources of data already exist. METER should seek to complement these.
  • The research question must be formulated at the outset. Otherwise spurious results will emerge due to the sheer scale of variables collected.
  • The study should attempt to challenge existing theories on DSR.
  • Sample size will govern accuracy: the data will help to inform what conclusions can be drawn for different questions.
  • A broad sample is needed if ‘flexibility in general’ is to be identified. This flexibility might not be related to ‘green’ or ‘economic’ motivations.
  • The limitations to flexibility should be explored (not just existing flexibility).
  • The potential mismatch between ‘reported’ and actual activities must be dealt with; the collected data needs to be verified.


Reflections and responses on key issues


Activities vs. Appliances

The focus of this study remains firmly on activities. However, appliance information can shed light on activities beyond what has been reported in the activity booklets.

The following matrix illustrates how combining appliance and activity information can yield greater insight into the actual activity performed.

                        Appliance (TV) in use
                        Yes                            No
Activity (TV)    Yes    Watching TV                    Watching on a mobile device
reported         No     Background TV, not watching;   -
                        others watching
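
Purely as an illustration of how this matrix could be applied (not part of the study design), the combination can be expressed as a simple lookup. The function name and the interpretation of the empty fourth cell are assumptions made for the example.

    # Illustrative sketch only: encode the activity/appliance matrix as a lookup.
    # The labels mirror the matrix above; the (no activity, TV off) label is an assumption.
    INTERPRETATION = {
        (True,  True):  "Watching TV",
        (True,  False): "Watching on a mobile device",
        (False, True):  "Background TV, not watching / others watching",
        (False, False): "No TV-related activity",
    }

    def interpret_tv(activity_reported: bool, appliance_on: bool) -> str:
        """Combine a reported 'TV' activity with the measured TV power state."""
        return INTERPRETATION[(activity_reported, appliance_on)]

    # Example: activity reported while the TV draws no power
    print(interpret_tv(True, False))  # -> "Watching on a mobile device"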


Three options will be explored to gather appliance information:

  1. Ask directly as part of the app. ‘Are you using any of these… {Oven, Washing machine, …}?’
  2. Disaggregate load data after collection
  3. Disaggregate load in situ and use suggestions to verify appliances with the participant. ‘Are you using an XY?’ (A rough sketch of options 2 and 3 follows below.)
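
As a rough sketch of options 2 and 3 (not the actual METER pipeline), step changes in the household load trace could be matched against typical appliance power draws to generate verification prompts. The appliance signatures, tolerance and example readings below are invented for illustration only.

    # Hypothetical sketch of options 2/3: flag step changes in the load trace
    # and suggest candidate appliances for the participant to confirm.
    # Power signatures, tolerance and readings are invented for illustration.
    TYPICAL_POWER_W = {"kettle": 2500, "oven": 2000, "washing machine": 500}

    def suggest_appliances(load_w, tolerance=0.2):
        """Yield (reading index, appliance) guesses from step changes in the load."""
        for i in range(1, len(load_w)):
            step = load_w[i] - load_w[i - 1]
            for appliance, watts in TYPICAL_POWER_W.items():
                if abs(step - watts) <= tolerance * watts:
                    yield i, appliance

    trace = [300, 310, 2810, 2820, 320]  # watts, one reading per interval
    for i, appliance in suggest_appliances(trace):
        print(f"Reading {i}: are you using a {appliance}?")  # -> kettle at reading 2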

Guard against artificial findings

Specific research questions must be formulated prior to data analysis to avoid spurious findings in such a large dataset. While the data will be made publicly available, a condition for download is to submit the intended research question for the record. Publishing conclusions beyond the scope of the submitted questions will have to be defended. This applies to the METER team in the same way as to external researchers.


Verifying data

One intensive approach undertaken at the moment is the provision of ‘sense cams’, which take photos during the day and help with reconstruction of the diary. It is not clear to what extent participants change their reporting (or indeed the activities themselves) in the knowledge that pictures are being taken.

We will conduct focus groups with early participants shortly after their trial day and try to:

  • reconstruct, in their own words, what they were doing when reporting activity X
  • understand how they interpret different activity categories
  • identify what activities they would not have considered reporting
  • experiment with them annotating their own load profile (‘that must have been my chain saw’)

Testing theories

This will need further thought. Obvious candidates to challenge are concepts like ‘price elasticity of demand’. The concept of flexibility as a form of capital is worth exploring. The hypotheses that flexibility grows through learning and that it diminishes through exhaustion (not necessarily contradictory effects) could both be tested.
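
For reference, price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price. The minimal sketch below uses invented figures purely to show the calculation, not METER data.

    # Price elasticity of demand: % change in consumption / % change in price.
    # All numbers are invented for illustration.
    def price_elasticity(q_before, q_after, p_before, p_after):
        dq = (q_after - q_before) / q_before
        dp = (p_after - p_before) / p_before
        return dq / dp

    # e.g. a 20% peak-price rise accompanied by a 5% fall in peak consumption
    print(price_elasticity(10.0, 9.5, 0.15, 0.18))  # -> -0.25 (inelastic)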


Episodes or instantaneous sampling?

Time-use budgets presume one ‘primary activity’ for each 10-minute period, making all other activities ‘secondary’. This poses challenges for collection with an app and for the attribution of activities.

We will therefore trial the collection of instantaneous activities. Any number of activities can be reported at a given point in time, without asking when they started. At the next interaction, participants can remove activities they no longer perform and add ones that they now engage in.

No start or end times are collected, so the duration of activities is not explicitly apparent from the data. This sacrifice will reduce the complexity of the interface and (hopefully) increase the precision and detail of the activities reported, as well as encourage multiple parallel activities to be reported. In particular, activities that would not be deemed sufficient to fill a ‘10-minute slot’ might be captured this way. Boiling a kettle could fall into this category, for instance.
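
A minimal sketch of what an instantaneous report might look like as a data record follows; the field names and example values are hypothetical rather than the app’s actual schema.

    # Hypothetical record for one instantaneous report: a timestamp and the set
    # of activities the participant says are happening at that moment.
    # No start or end times are stored; duration is only implied by whether an
    # activity reappears (or not) in successive reports.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class InstantReport:
        timestamp: datetime
        activities: set[str]  # e.g. {"cooking", "listening to radio"}

    reports = [
        InstantReport(datetime(2016, 1, 12, 18, 0), {"cooking"}),
        InstantReport(datetime(2016, 1, 12, 18, 40), {"cooking", "laundry"}),
        InstantReport(datetime(2016, 1, 12, 19, 30), {"eating"}),
    ]

    # Activities present in one report but absent from the next are treated as
    # having stopped somewhere in between; exact durations remain unknown.
    for earlier, later in zip(reports, reports[1:]):
        stopped = earlier.activities - later.activities
        started = later.activities - earlier.activities
        print(earlier.timestamp, "->", later.timestamp,
              "stopped:", stopped, "started:", started)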


What accuracy is needed?

From a policy maker’s perspective it is important to know whether support for DSR can confidently produce beneficial effects for the system. This requires high confidence rather than high accuracy. A possible conclusion: the probability that incentive X delivers a load profile that is preferable [define preferable!] for the overall system is greater than Y%.

For potential innovators in DSR business models, the scale of the response matters: with 95% certainty, intervention X will deliver load reduction Y for subgroup Z.

Previous studies suggest effects on load of the order of 5%. However, this figure is for combined loads. Loads with particular flexibility may show larger effects and thus require smaller samples, while infrequent activities could call for larger samples.

Independent estimates suggest sample sizes in the range of ‘thousands of households’. Once this order of magnitude has been reached, the data will make it possible to establish the scale required for given confidence intervals and accuracies.
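
As a back-of-the-envelope illustration of how effect size, variability and confidence interact (the baseline consumption, spread and 5% effect below are invented assumptions, not METER estimates), a standard two-group comparison already points towards samples in the thousands.

    # Rough two-group sample size calculation for detecting a mean load
    # difference of delta with given significance and power.
    # Baseline load, standard deviation and the 5% effect are invented.
    from math import ceil
    from statistics import NormalDist

    def n_per_group(delta, sigma, alpha=0.05, power=0.8):
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

    baseline_kwh = 10.0            # hypothetical daily household consumption
    effect = 0.05 * baseline_kwh   # a 5% reduction, in line with previous studies
    sigma = 4.0                    # hypothetical between-household spread
    print(n_per_group(effect, sigma))  # -> roughly a thousand households per group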


What interventions should be considered?

The workshop reinforced the exploration of ‘non-energy’ and ‘non-commercial’ approaches to interventions. Flexibility could come from changes in working or schooling arrangements, or from appliances that make shifting more convenient, such as silent white goods that do not disturb other activities.

These can’t be tested with the current approach (much as I would like to change school hours). Stated and revealed preferences may differ and approaches need to be carefully designed. The limitations to flexibility should be explored.

After a response event participants could be asked: “You did X during this period. Are there reasons why this happened then rather than earlier or later?”

Desirable extensions / additions

  • Non-domestic sector



Votes and requests


Should data collection be restricted to winter weekdays?

Yes: 44% (N=38)

The focus will remain on winter weekdays in order to achieve sufficient accuracy of results, but sampling will be extended across the whole year.


Should participants get monetary incentives?

Yes: 56% (N=26)

Other means will be attempted first: a prize draw, earning ‘stars’, and receiving your personal profile.

Possible prize: “One year of free electricity”. Up to three participants get their last 12 months’ worth of electricity paid as a bank transfer. Proof of bill required; capped at £1000.

If these do not lead to sufficiently high or diverse uptake, monetary incentives will be considered. Indicators for diversity will be devised prior to recruitment.


Should the duration of activities be recorded?

Yes: 66% (N=15)

I am afraid I have to go against popular opinion. After raising the issue with a time geographer, I have concluded that ‘periods’ distort reporting and that simply asking ‘what activities are being performed at a particular point in time’ provides better accuracy, supports multiple activities being named, is simpler to administer, and allows ‘short’ activities to be captured.


Should electricity only be recorded for 24 hours?

Yes: 13% (N=22)

Agreed. Will try to make the technology record for longer.


Should additional data be collected?

Yes: 33% (N=12)

Thank you. One answer that doesn’t increase the burden on me.






Funded by the Engineering and Physical Sciences Research Council (EPSRC) under the Early Career Fellowship scheme. Ref. EP/M024652/1.
