Keeping up with the Joneses


The water regulator (Ofwat) has encouraged water companies to include information on other companies’ performance in the customer engagement they must carry out at the next price control review (PR19). We have undertaken a research project to inform and test how comparative information may – or may not – affect customers’ valuations.

At the last price control review, PR14, water companies were required to carry out more in-depth customer engagement to inform their business planning, in particular to identify customers’ priorities for – and valuations of – service improvements. Overall, this shift in the regulatory methodology appears to have been successful:

  • the business planning process was more customer-focused;
  • each company defined its own outcome targets (performance commitments) based on its customers’ priorities;
  • those targets were set at cost-beneficial levels, using information on customers’ valuations of service improvements and the marginal cost of delivering such improvements; and
  • financial incentives related to these targets (outcome delivery incentives) were defined, based on customers’ valuations of service improvements and on marginal costs.

Although this innovative change in the regulatory methodology was considered to be successful, improvements can certainly be made, and the industry is considering how it can build on this success at PR19.  One improvement that Ofwat has identified is for companies to provide comparative information on the performance of other companies to customers and stakeholders during the business planning process. The hope would be that the additional information would help customers to make more informed choices about service improvements, and would also help companies’ Customer Challenge Groups (CCGs) to provide a more robust challenge.

Implementing this option raises questions about the best ways to present comparative information to customers. To inform answers to this practical question, and test how comparative information can be included in customer engagement, we have undertaken a research project on how different types of comparative information appear to influence customers’ valuations. This bulletin explores the conceptual problems and practical implementation of including comparative information in customer engagement, and presents our findings.


Ofwat made clear in its PR19 policy statement that it believes comparative information will improve the quality of customer engagement. It stated that:

  • “having comparative information will allow customers to make more informed judgements about, for example, service levels and PCs; and…
  • …should also facilitate more powerful challenge from CCGs”.[1]

We started by thinking about how, in principle, comparative information may affect the way that customers respond in surveys on service improvements. Behavioural economics suggests that the framing of information can have a significant impact on customers’ responses. For example, if company A has a service level of 90 while company B has a service level of 75, customers may anchor on the 90 when judging the service level they would expect from company B. Including comparative information in customer valuation surveys could therefore change the results in various ways, summarised in the table below.


Possible customer response to comparative information | Possible impact on stated valuation
If a company is performing below average, some customers may feel that the company, rather than customers, should incur the costs of improvement in this particular area. | Reduce stated valuation?
If a company is performing below average, some customers may feel that it is very important that the company improves, and are prepared to spend more to make sure the service improves. | Increase stated valuation?
Providing more information to customers in already complex surveys could mean that some customers do not engage with the additional material at all. | No effect?
Some customers’ choices may be strongly affected by another driver, so that the introduction of comparative information does not affect their decision making at all. | No effect?
Some ways of presenting comparative information may appear more complex than others, and may therefore require more effortful customer engagement. | Depends on presentation?

This table illustrates the fact that, at least in theory, comparative information might result in higher stated valuations by some customers in some situations, but lower valuations by some customers in other situations. Given this range of possible outcomes, we considered that this was an important area in which to undertake some practical research ahead of PR19.


To date, Ofwat has not provided guidance as to how companies should present comparative information. In its PR19 policy statement, Ofwat stated that “it will be up to companies to appropriately frame such information [comparative information] and ensure customers understand any reasons or justifications underpinning performance on a particular comparative measure”.[2] 

This is consistent with Ofwat’s approach, which has been that companies should have ownership of their customer engagement process, and it provides flexibility for the companies to be innovative in the way that they present and explain new information.  However, it thereby leaves them to solve the practical problems. Frontier Economics, together with United Utilities, has undertaken a research project on how different types of comparative information may or may not influence customers’ preferences and valuations.

In setting up this project, we selected the three service attributes that appear to be most important to customers. We developed three versions of our survey: a control group without any comparative information, one survey arm with comparative information presented in tables, and one survey arm with comparative information presented in graphs.

When setting up the survey, we had to make a number of survey design choices.  As framing is a powerful concept in behavioural economics and all of these choices affect how information is framed, we have identified a number of issues that require careful consideration.  These are summarised below.

  • How should information be presented?
    Options considered: tables of rankings or service levels; bar charts showing companies’ performance; and scatterplots of service levels and average bills. The first two options presented the same information in different formats, whereas the scatterplots were intended to provide additional information on links between service levels and average bills.
    In using scatterplots, we found that we might need to add an explanation of why bills may vary across companies. Including this additional information inevitably made the survey more complex.
    We chose to use tables of rankings in one survey and bar charts on performance in another, and not to present any scatterplots. The third survey contained no comparative information, and therefore acted as a control.

  • Which companies should be included?
    For example, we had to consider whether a water and sewerage company (WaSC) should include the water-only companies in its comparisons of water service performance, or measure itself only against other WaSCs.

  • Should all companies’ data be presented, or summary statistics only?
    Presenting individual company performance could be too much information for customers to engage with. The alternative, which we used, was to present the best and worst performance levels. Ultimately, a balance needs to be struck between providing customers with relevant information and keeping the quantity of information manageable for the customer. The minimum and maximum performance levels were labelled “best” and “worst”, rather than “lowest” and “highest”.

  • How should companies be labelled?
    Should all companies be named, or labelled with letters or numbers to maintain the anonymity of the comparators? It did not seem necessary for customers to know the identity of the other companies, and the names of other companies might “frame” customers’ views in some way. As there would not be a clear advantage in naming the companies, we felt it best to label them anonymously.

  • How should future performance be presented?
    Questions in willingness-to-pay surveys show possible future service levels relative to current performance. The question is whether it is possible to provide data on how other companies may perform in future. As it is unlikely that companies will have this information available when developing their surveys, we concluded that it is not realistic for companies to provide information on the expected comparative performance of other companies.

  • Should comparative information on bill levels be provided?
    Information on bill levels as well as service levels would give customers a more complete picture. But would additional information, explaining differences in bill levels, then be needed? It would be important to make clear that the bill levels shown were averages, as some customers might be confused if the bill level shown differed significantly from the amount that they actually had to pay.

  • Should context or explanatory factors be provided?
    What additional context is needed? Should this include explanations of companies’ different operating environments, and therefore of the costs involved in delivering service improvements? We decided that such information would increase the complexity of the survey to the extent that it would make it more difficult for customers to engage.

  • How many service options should be considered?
    How could the overall complexity of the survey be contained? We decided that, as comparative information would inevitably add extra depth and complexity, the number of service options included should be reduced.

Having identified these challenges, we designed our survey as follows:

  • One set of customers would receive a table of rankings; one set would receive a bar chart showing comparative information; and one set would be a control group, receiving no comparative information.
  • We chose to use only WaSCs in our data set, and decided to present summary statistics (best and worst company, alongside United Utilities) in the bar charts, while presenting all WaSCs in the ranking tables.
  • In the ranking tables we used letters to label the comparator companies (assigned alphabetically). Only historical comparative information was provided.
  • We presented comparative information on bills in the same way as we presented comparative information on service levels (i.e., in bar charts and a table of rankings).
  • We presented comparative information on one page, and had a separate page for each service option choice, which limited the amount of information being presented at any one time.

The pictures below provide an illustration of how we presented comparative information using water quality contacts as an example.

[Illustrations omitted: comparative information on water quality contacts, shown as a ranking table and as a bar chart]


The customer survey that we developed asked customers to choose between two service options at any one time. We included four service options in our survey, each of which included a defined performance level for three performance measures, and an implied bill level. These options were developed alongside United Utilities’ regulation team, to try to ensure that they accurately represented realistic future options for service levels, and associated bill levels. As we had four options, there were six sets of pairwise comparisons. The order of the pairwise comparisons was randomly assigned for each customer, to mitigate the risk of order bias in the responses.
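The pairwise design described above can be sketched in a few lines. The option labels, seed handling and function name here are illustrative assumptions, not the survey’s actual implementation:

```python
import itertools
import random

# Hypothetical labels standing in for the four service option packages
# (the actual option definitions from the survey are not reproduced here).
options = ["A", "B", "C", "D"]

# Four options yield C(4, 2) = 6 pairwise comparisons.
pairs = list(itertools.combinations(options, 2))

def comparisons_for_respondent(seed):
    """Return the six pairwise comparisons in a per-respondent random
    order, mitigating the risk of order bias in the responses."""
    rng = random.Random(seed)  # one independent ordering per respondent
    order = list(pairs)
    rng.shuffle(order)
    return order
```

Seeding per respondent keeps each ordering reproducible for analysis while still randomising the sequence across the sample.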

The results of our analysis show that customers’ choices were not affected by the comparative information: the choices made, and therefore the implied customer valuations, did not differ significantly between the three surveys.

For example, the table below shows the choices made in one of the pairwise comparisons. The small differences between the three surveys are not statistically significant.  Therefore in our research exercise, the comparative information did not affect customer valuations, relative to the control group.

Survey sub-sample                   | Charts | Rankings tables | Control group
Proportion that preferred Option A  | 62%    | 61%             | 64%
Proportion that preferred Option B  | 38%    | 39%             | 36%
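A minimal sketch of the kind of significance check involved, using a pooled two-proportion z-test. The sub-sample sizes of 300 respondents per arm are an illustrative assumption (the actual sample sizes are not reported in this bulletin), so the result is purely indicative:

```python
import math

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent
    proportions, using the normal approximation with a pooled
    variance estimate."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Illustrative check on the Option A shares (62% charts vs 64% control),
# assuming 300 respondents per survey arm -- hypothetical figures.
z, p = two_proportion_z_test(0.62, 300, 0.64, 300)
```

At these assumed sample sizes, a two-percentage-point gap is well within sampling noise, consistent with the finding that the differences between arms are not statistically significant.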

In addition to this key finding, we also assessed whether there appeared to be any differences in the reasons customers gave for their choices; their satisfaction levels; and how easy they had found the survey to complete.

Survey sub-sample                                                                          | Charts | Rankings tables | Control group
Proportion of customers who chose options with the lowest average bill[3]                  | 42%    | 42%             | 39%
Proportion of customers who chose options that seemed to offer the best value for money[3] | 51%    | 52%             | 43%
Proportion of customers who found it very easy or easy to make choices                     | 72%    | 65%             | 66%
Proportion of customers who were very satisfied or satisfied with United Utilities         | 81%    | 76%             | 71%

There did appear to be some differences here.  For example, customers who received the charts seemed to find the survey easier to complete, and reported themselves being more satisfied with United Utilities. However, given our sample size, these differences are not statistically significant.

It is important to recognise that the survey does not imply that comparative information will never affect customers’ valuations. Our survey included only three service attributes, and United Utilities’ performance is generally not an outlier, so the results for other companies may well differ. We cannot, at this point, draw firm conclusions about customer behaviour nationwide.


Earlier in this bulletin we explored conceptually the ways that comparative information might or might not affect customer behaviour. While the underlying drivers of customers’ selections are not entirely clear, they may have been driven by the following:

  • Customers may not have engaged with comparative information; or
  • Customers may have engaged with comparative information, but other factors continue to be the key drivers of their choices.

It appears to us, based on the results from our survey, that the second of these is the most likely. When asked why they had made their choices, a significant number of customers (around 30–40%) stated that they had chosen the option which they felt gave them the lowest bill, and a significant number also stated that they chose the option that they felt offered the best value (around 50%).[3] It seems, therefore, that cost and value for money were the key drivers for this sample of customers, and that these drivers were unaffected by the comparative information.

This suggests, in turn, that willingness to pay for improvements was relatively low, and this low willingness to pay was not affected by the addition of comparative information.


Our research suggests that companies have to make a number of important choices on how to present comparative information in customer valuation surveys.  This requires careful consideration and ideally further trialling to understand how framing affects customers’ choices.  Our results suggest that customers may find it easier to engage with comparative information presented in the form of charts rather than tables, though the size of the sample means we cannot claim this result to be statistically significant.

In our survey comparative information did not affect the results. This may be because other factors were stronger drivers of customers’ choices.  If customers’ responses are unchanged and there is evidence that they engaged with the comparative information, further research could be directed at finding out what the underlying drivers of customer choice are.  For example, some customers may always select the lowest bill regardless of the information available.

Finally, we note that we have focused on how comparative information could be included in customer surveys. At PR19, however, comparative information will be used not only in customer engagement but also by Customer Challenge Groups. CCGs can use comparative information as a powerful tool to challenge companies on their relative performance, and ultimately the use of comparative information by CCGs may have a bigger impact than its inclusion in the surveys.




[1]     Ofwat (May 2016), Ofwat’s customer engagement policy statement and expectations for PR19, p. 21

[2]     Ofwat (May 2016), Ofwat’s customer engagement policy statement and expectations for PR19, p. 21

[3]     Customers were able to select more than one reason, so this does not mean that around 90% of customers chose options with either the lowest bill or the best value for money. Also, it appears that the stated reason(s) differed depending on which options customers had selected. For example, for some option choices, only around 5–10% of customers stated that they chose the option that had the lowest bill.

Related People: Rob Francis, Anna Berry, Annabelle Ong