Evolved Thinking Blog

The CX Principles that can guarantee a successful program – Part 3

Written by Garreth Chandler | April 22, 2019

This month, I’m continuing my reflections on the CX principles that can guarantee a successful program.

Last month we looked at the factors that build success amongst employees and customers: believing in the program's values, being responded to when in need, being able to give feedback easily, being asked relevant questions, and having their time respected.

In this post, I am going to discuss the first of five success factors for Program Owners – getting good advice, and push-back where needed, from us as their vendor.

We work with a variety of CX program owners across our clients. Who that is varies with the size of the organisation and its sector: it could be a dedicated VOC or NPS program owner, a marketing manager, a general manager or an operations manager. Regardless, what is required of The Evolved Group as a good vendor is always the same.

Say it how it is and be honest.

Of course, we try to do that sympathetically and with understanding… and it starts with listening carefully.

Underlying this principle is an acknowledgment that our clients can find themselves in conflicted positions. Examples of this include:

  • Every stakeholder in the business wants to measure something – or everything – and we are dealing with a questionnaire that will not do the job because it tries to make everyone happy. It is always wonderful to see passion about customer experience and interest from stakeholders. However, like all things in life, success is about compromise and accepting that you can't make everyone happy. Having 20% of the questions deliver 80% of the answers needed is better than doubling the survey duration for an extra 10% of coverage that may have little real impact. With conversational AI and smart rotation of topics, there is usually a way to achieve this. Our job in this situation is to come up with solutions that do, and where that is not possible to the full extent, to ensure our Customer Success principles are still adhered to – ask relevant questions, respect respondents' time, and so on.
  • Clients writing their own questions that aren't always best practice. I have learned a great deal from our clients over the last 20 years; each is an expert in what they do. Equally, writing a question or questionnaire is an art, but one that everyone can have a go at. The challenge is that I know from experience that certain things don't work, and that certain ways of asking or structuring questions work better. The balance we have to strike is giving our clients scope and opportunity to craft their own program and bring their expertise to the table, whilst ensuring the questionnaire adheres to sound design principles. As always, the key is walking through the rationale for our advice while absorbing and integrating our client's ideas. It is actually one of the great pleasures of my job.
  • Misunderstanding variation. One of the hardest things to deal with is helping clients understand research results. This is particularly important when working with operational clients who are used to dealing with systems data. In their world, a 5-second increase in wait time is exactly that – not an estimate or a trend, but an actual fact with an underlying cause, such as call volume relative to rostered headcount. Customer feedback is different. We take samples, and samples have sampling error. The term 'error' has negative connotations, but it is just another word for the variance that arises from estimating a population that can never practically be surveyed in full. If NPS on a base of n=50 drops from 30 to 25, does that matter? Probably not. Look at the longer term, understand the causes and the trends, and build up more evidence before deciding whether the result is 'significant' in both the statistical and the common sense of the word. Variation can undermine confidence in a CX program when reporting is too frequent to act upon and/or the sample bases are low. Ideally you can boost the sample, but sometimes this just isn't possible. The best solution is usually to apply rolling averages or reduce the reporting cadence, whilst still providing real-time feedback on open-endeds. Our job is to support our client by ensuring internal recipients of reporting don't go off on tangents or over-react to results, by providing a dispassionate and reasoned explanation. Our HumanListening platform supports this by surfacing alerts in our notifications application based on rigorous testing of the underlying data and trends.
  • Results that don’t go down well. It is always easier to deliver good news; unfortunately, the news is not always good. Many of our programs have been running for more than five years, and over that time we have seen cycles of business success and downturn. Our technology is built to create ROI and accountability. Transparency and ownership of results is a critical cultural success factor in positive change that powers improvement in customer satisfaction and advocacy. This does create challenges when internal stakeholders don't see the rewards for their efforts, struggle to understand why the numbers are going south, or point-blank refuse to accept the findings. The challenge here is to provide really strong evidence of the result and data-driven explanations of what the issue is. With the increasing use of natural language in our feedback, this is a lot easier than it used to be. Finding patterns in responses, clear examples of where customers are not getting what they want, and a crisp, even emotional explanation of why is often enough to carry the day. My mantra here is that everyone comes to work to do their best, everyone is allowed to fail, and success comes from learning on the job and being brave. Our role as a CX program provider is to find the issues and present them, but also to offer ideas and strategy on how to address them – with an open mind and respectfully, not simplistically or in a high-handed way.
  • Constantly improving the program. CX programs should not be set and forget. We use our technology to build and craft programs around the unique needs of each of our clients – it’s a key part of what we do:

Technology melded with expert business consulting

However, nothing stays the same. We can’t afford to be stubborn about what is working and what is not. Running a successful CX program requires constant challenging of the status quo and finding new and better ways of operating with our clients.

When we have a change in client, it's always a great opportunity to hear new ideas and explore potential improvements. A great way to approach this is with a horizons model: spend time outside the daily grind to review the program, critique it and list ideas to make it better. These can then be graded according to their risk, potential benefit and effort into three horizons:

(1) small improvements;

(2) step-change improvements; and

(3) reinvention.

This review covers questionnaire design, reporting, process automation and internal engagement with stakeholders – practically every aspect of a CX program needs continuous review.
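To put the earlier point about sampling variation in concrete terms, here is a minimal Python sketch. It is my own illustration, not part of the HumanListening platform: the function names are invented, though the +1/0/−1 coding of promoters, passives and detractors is the standard NPS convention. It computes an NPS point estimate with its standard error, and a pooled rolling average of the kind described above.

```python
import math

def nps(scores):
    """NPS from 0-10 scores: % promoters (9-10) minus % detractors (0-6)."""
    n = len(scores)
    # Code each respondent as +1 (promoter), 0 (passive) or -1 (detractor).
    vals = [1 if s >= 9 else (-1 if s <= 6 else 0) for s in scores]
    point = 100 * sum(vals) / n
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)  # sample variance
    se = 100 * math.sqrt(var / n)  # standard error, in NPS points
    return point, se

def rolling_nps(waves, window=3):
    """Pool the last `window` waves of raw scores before scoring,
    trading timeliness for a more stable estimate."""
    out = []
    for i in range(len(waves)):
        pooled = [s for w in waves[max(0, i - window + 1):i + 1] for s in w]
        out.append(nps(pooled)[0])
    return out
```

On a base of n=50 the standard error often comes out around 10 NPS points, i.e. a 95% margin of error of roughly ±20 points – which is why a drop from 30 to 25 on its own says very little.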

These are just some examples of how our CX research consulting team helps our clients build successful programs. A bit like painting the Sydney Harbour Bridge, it is a job that never stops – one that requires us to understand the challenges our clients face and to support their success as a trusted advisor and expert who underpins good decision making with honest, robust advice.


Find out more

Check out our customer experience page to learn more.