How can we measure our data improvement progress?

Maybe you can help?

Dan Barrett
4 min read · Jan 26, 2022

This is a short post about something I’m puzzling through at Citizens Advice at the moment. I’d really appreciate any thoughts — you can contact me via Twitter or LinkedIn¹ or comment on this post.

The situation

In the Design, Data, and Technology function at Citizens Advice we align our work to 7 high-level outcomes. This is a new approach we’ve taken over the past 18 months. It’s helping our teams to deliver outcomes for the organisation and our clients, rather than outputs.

As the Head of Data Science I’m particularly interested in how we measure progress against these outcomes. We’ve definitely improved our metrics practice over the past year. This has been helped by some of our teams, particularly product teams, being mature and experienced in working towards outcomes and developing metrics accordingly. The product teams are supported by the outcomes in our Product Strategy.

We’ve also worked as data specialists to develop metrics with teams, providing coaching, friendly challenge, and data design expertise.

One of the 7 high-level outcomes is particularly relevant to me:

Our data allows us to make decisions and take action

To measure progress against this we developed a data experience survey, open to all staff in the National Citizens Advice organisation. We run this survey roughly once a quarter. It has provided us with rich insights into what we need to improve and how people use our various data products.

The survey also provides us with 3 headline metrics to track, each scored on a scale of 1 to 10:

  • Finding data
  • Interpreting data
  • Using data for decision making

‘Finding data’ is currently the biggest area for improvement.

The problem

We have several hundred staff in the National Citizens Advice organisation and response rates to the data experience survey are low.

In my opinion, the response rates are too low for us to rely on the survey alone to tell us how we are progressing.
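To make the small-sample worry concrete, here’s a rough back-of-the-envelope sketch in Python. The standard deviation and response counts below are made-up assumptions for illustration, not figures from our actual survey:

```python
import math

def margin_of_error(sample_sd: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample mean (normal approximation)."""
    return z * sample_sd / math.sqrt(n)

# Illustrative assumption only: a spread (standard deviation) of ~2 points
# on a 1-to-10 score. The real survey's figures aren't published here.
ASSUMED_SD = 2.0

for n in (20, 50, 200):
    print(f"{n:3d} responses: mean score ± {margin_of_error(ASSUMED_SD, n):.2f}")
```

Under those assumptions, with a couple of dozen responses a quarter-on-quarter movement of half a point sits comfortably inside the noise; it takes a few hundred responses before changes of a few tenths start to mean something.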

We work hard on publicity for the survey and will continue to do so. However, in an organisation the size of Citizens Advice it’s hard to get everybody’s attention when there are competing demands for it. It’s also hard to rely on surveys when we regularly survey staff on a variety of other topics, risking survey fatigue.

Citizens Advice provides a wide variety of advice services across England and Wales to millions of people. The data is rich and really rewarding to work with. That breadth and complexity provides many opportunities for improvement, which is great. On the other hand, we don’t have a ‘bottom line’ to track as such. This kind of measurement work can be more challenging to do well than it would be in a different organisation.

The solution?

We need a couple more meaningful headline measures to guide us in our data improvement work. We are stretched as a team, so they need to be quick and cheap to gather.

A couple of years ago I would have said we should do a data maturity assessment to provide a benchmark. Our survey was designed with the ONS data literacy scale in mind and we can align our results to that. Other data maturity assessments are available, of course.

Context is important here though. I wouldn’t want to run a separate data maturity assessment in isolation from wider organisational maturity initiatives. We are coming to the end of our Future of Advice strategy, and now feels like a particularly risky time to be doing something ‘separate’ about data. I try hard to make data accessible to everybody, part of everybody’s work and not an end in itself. Plus, a data maturity assessment would come at a cost, in team time and available funds, that we can’t meet at the moment.

We could look to quantify the improvements we’ve made in our evidence base: things we know now that we didn’t 12 months ago, or time saved on processing. This could be an avenue to explore, but the evidence feels expensive to gather.

Looking at the outcome again

Our data allows us to make decisions and take action

We could do something like tracking decisions taken as a result of data, but that sounds too hypothetical to me. Decisions don’t get made in a universally measurable way. Plus, this would be expensive to gather. Plus plus, it could create a perverse incentive, and it doesn’t account for whether a decision has been beneficial or not.

Maybe it’s about a very short, targeted set of questions for specific groups. That feels like a backwards step from something that should be about democratising data for all, though.

One of the techniques we’ve used in developing metrics is encouraging people to think critically about the outcome they’re working with. So, looking at the outcome for a third and final time:

Our data allows us to make decisions and take action

I think there’s something in focusing on how our data allows us to continuously improve as an organisation. Perhaps we can quantify or even just elevate the improvements we’ve made in our evidence base more than we’re doing at the moment.

Hopefully you can see I’m a bit stuck. If you have any advice or experience you’d be willing to share then, as I said, please get in touch via Twitter or LinkedIn, or comment on this post.

As a team we run a roughly weekly open session about our data work. Maybe you’d like to come along and present? And we’re always happy to reciprocate. We’re doing some interesting work and we love to share 😁

Footnotes

¹ Response time on LinkedIn may be slow!

Written by Dan Barrett

Head of Data Science at Citizens Advice. These are my personal thoughts on work.
