By: David Norris, Practice Leader - Analytics, Bloor Research
Published: 21st June 2007
Copyright Bloor Research © 2007
It is now widely recognised that if we are to be competitive in the commercial world, or perceived as offering value for money in the government sector, we have to understand and manage the customer experience. At the same time, everyone has become fixated on how clicks have come to rival bricks as a major channel, so we must include both in our mix. However, it is not just the Internet that has been added as a major channel to market: call centres have become one of the principal channels to the consumer of goods and services. In a traditional bricks-and-mortar store we have a great deal of control over what happens and can monitor it at close quarters; over the Internet we have an abundance of detailed information about customers and their behaviour; within call centres, however, we are still at the very early stages of understanding what is going on and what really constitutes best practice.
Much of the analytics activity within call centres is focused on call classification. Currently most calls are dropped into a single bucket, but we are increasingly becoming aware that people often call about several points. Indeed, the behaviour of call centre operations, where we have endeavoured to turn every call into an opportunity to cross-sell and up-sell, has encouraged calls to become more multi-faceted. This simplistic classification is symptomatic of the way we handle call centres; they currently lack the insight, subtlety and direction needed to achieve goals consistently.
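To make the single-bucket problem concrete, here is a minimal sketch in Python. The call record, the category names and the `count_reasons` helper are all invented for illustration; the point is simply the difference between forcing one reason per call and recording every facet the caller raised.

```python
# The traditional approach: every call is forced into one bucket,
# even though the caller raised several distinct points.
single_bucket = {"call_id": 1017, "reason": "billing"}

# A multi-label record captures that the same customer raised a
# billing query, accepted a cross-sell offer, and complained about
# hold time -- three facets of one call.
multi_label = {
    "call_id": 1017,
    "reasons": ["billing", "cross_sell_accepted", "hold_time_complaint"],
}

def count_reasons(calls):
    """Tally how often each reason occurs across a set of call records."""
    tally = {}
    for call in calls:
        for reason in call["reasons"]:
            tally[reason] = tally.get(reason, 0) + 1
    return tally

print(count_reasons([multi_label]))
```

Aggregated over thousands of calls, the multi-label tally reveals reason combinations (billing queries that turn into complaints, for instance) that a single-bucket scheme discards at the moment of classification.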
Because of the sheer volume of calls that are handled and recorded within call centres, the tendency is to sample. This is often necessary because the tools we have cannot cope with anything more than a sample. But as we are often working from simplistic assumptions about what the total population looks like, it is highly likely that when we sample we are making further false assumptions. As such, the validity of any sample as truly representative of the population is questionable. The reason this is not seen as a major issue at present is that anything is better than nothing when starting out, but we are now so dependent on call centres that maturity is an urgent requirement.
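A toy example, with entirely invented figures, shows how a sample built on a false assumption about the population can mislead. Here the monitoring team only listens to day-shift calls, while complaints happen to cluster in the evening:

```python
# Hypothetical population: 1,000 calls, each tagged with a reason
# and the shift on which it arrived. The figures are invented.
population = (
    [{"reason": "billing", "shift": "day"}] * 500
    + [{"reason": "complaint", "shift": "evening"}] * 300
    + [{"reason": "sales", "shift": "day"}] * 200
)

def reason_rate(calls, reason):
    """Share of calls carrying the given reason."""
    return sum(c["reason"] == reason for c in calls) / len(calls)

# Truth over the whole population: 30% of calls are complaints.
true_rate = reason_rate(population, "complaint")

# A convenience sample drawn only from the day shift, where the
# monitoring team happens to listen, contains no complaints at all.
day_sample = [c for c in population if c["shift"] == "day"]
sampled_rate = reason_rate(day_sample, "complaint")

print(true_rate)     # 0.3
print(sampled_rate)  # 0.0
```

The sample is not merely imprecise; because the sampling frame encoded an assumption about where the interesting calls are, it is systematically wrong, which is exactly the risk described above.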
In the area of structured data analysis, with data mining, top-end tools such as Fair Isaac's are now introducing techniques to address the fact that we tend to work from populations which we have skewed, and that we are therefore making inaccurate assumptions. They use advanced techniques to recognise the skew we have put on the data and to make it representative of the whole population. It should be remembered that we have been working with structured data for many more years than we have with voice, and yet it is only now that we are starting to realise the limitations of our structured data. I would conjecture that the lessons now being learnt from structured data are even more manifest in unstructured data such as voice. What that means is that if we are really going to understand what is going on in our call centres, we need to get at the whole population, which requires us to work far closer to real time than is possible with most technologies. Working at real time or faster enables us to cope with the sheer volume today, and will then open up exciting opportunities to act on the analysis in the future.
Another parallel with structured data analysis is instructive. A decade ago most models were built to be representative of the whole population, so our model was a representation of the generality. This started to be questioned when people recognised that the Pareto rule was the key to understanding. If 80% of profit comes from 20% of the population, then someone who represents the core of the market (i.e. was not in the 20% but represented Mr 50%) is not telling us much about the most desirable behaviour. Data mining has moved away from building single representative models towards building many problem-solving predictive models. I would argue that those models rely on understanding far more about the segments than we can possibly claim today with voice-based records. We have not built up a rigorous understanding within call centres because we have been concentrating so much on efficiency.
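The Pareto point can be made arithmetically with invented figures. In the sketch below, 20 of 100 customers generate 80% of total profit, and the median customer ("Mr 50%") looks nothing like the segment we most want to understand:

```python
# Invented profit figures constructed to follow the 80/20 rule:
# 20 "top" customers worth 40 each, 80 customers worth 2.5 each.
profits = sorted([40.0] * 20 + [2.5] * 80, reverse=True)

total = sum(profits)                      # 1000.0 in total
top_20_share = sum(profits[:20]) / total  # top 20% deliver 80% of profit

median = profits[len(profits) // 2]       # "Mr 50%" is worth only 2.5
top_mean = sum(profits[:20]) / 20         # the top segment averages 40.0

print(top_20_share, median, top_mean)
```

A single model fitted to the generality would be dominated by the 80 low-value customers; it is the behaviour of the 20 that a segment-specific model needs to capture, and that is precisely the segment knowledge we lack for voice.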
A further feature that is very important to recognise about what goes on within call centres is that we need to remove as many filters from the truth as possible. At present, most calls and their outcomes are analysed on the basis of input either from customers after the event, or from agents while completing a call. The problem is that both will filter the results: at times producing what they believe the audience of the analysis wants to hear, and at other times exaggerating features to satisfy an agenda outside the direct scope of the call. The subtle nuances of human behaviour and judgement rarely fit easily into the five boxes we want people to use to categorise customer feelings. When you actually listen to a call, people rarely state things in black and white terms; they constantly qualify their statements. Yet in the vast majority of cases it is precisely those qualifying statements that we drop from our customer satisfaction and outcome analysis, because the volumes of data are so large that this is the only feasible way to handle them within the limitations of most of the technology we use. I believe it is very important to capture far more of the nuance of what is happening if we are to improve the customer experience. We need to know far more precisely when and where to ask, to listen, to respond and to suggest if we are to help the customer get the most out of the call. Only when we can categorise, analyse and understand what is going on across the whole population of our calls can we really determine best practice, and establish the regimes to enforce it as universally as possible.
Having moved so many call centres offshore to obtain the cost base that justifies our heavy reliance on them, we have further exacerbated the importance of understanding the subtle nuances of language. Whilst people in Eastern Europe, India and the Far East may well speak English to a level far beyond anything we would ever achieve in their native tongues, all of us know that it is not always possible to conduct a conversation with offshore agents that avoids confusion. What we need is to ensure that our offshore agents maintain clarity and fluency at all times, not just when they start a shift or are new to the job. We need to be able to spot when there is a drop-off in capability and then introduce the appropriate coaching and support to maintain the required standard.
I believe that call centre analytics is a much-neglected area, and that the vast majority of the available technologies operate under such severe restrictions that they perpetuate this neglect. Nexidia is one of the few technologies I am aware of that provides a means of analysing voice in ways that meet both what I would see as the current requirement and what I see as the emerging requirement.
I remain sceptical of any technology that requires us to sample, because I am not convinced that we understand enough about the population we are dealing with to make any sampling strategy reliable enough to base a customer experience strategy on. I believe that the ability to handle the whole of a population, and to analyse it many times faster than real time, is a basic requirement that should underpin any technology in this area. The use of a lexicon and word matching is fundamentally flawed and is unlikely ever to offer a fast enough or accurate enough capability. Speed is going to be increasingly significant, because voice recognition will really come into its own when it can offer real-time contextual analysis and then prompt the next best action in all voice-directed customer interactions.
I find that most of the available technologies are far from intuitive or easy to use for the less technically able. That means that the analysis, and the promulgation of hypotheses from that analysis, tends to sit with technically minded analysts. Unfortunately, such people tend to ignore the fact that when it comes to customer experience it is not logic that drives perceptions but emotion, and we are therefore all too often missing the point. Nexidia is one of the few tools I have seen that offers sophisticated analysis, through its Forensic Search capability, in a form accessible to the non-technical person who wants to focus on human behaviour.
The Nexidia Language Assessor, which addresses the issue of fluency and clarity of speech within call centres, is one of the most obvious ways of introducing the technology in a way that creates a clear business justification, and of starting to develop thinking about what we need to analyse and what we can do once we have undertaken that analysis.
We are only just starting to understand the scale of the issue in managing the customer experience through the call centre. Nexidia is probably one of the most important aids that exists for identifying problem areas and the training and remedial actions they require. From that relatively limited scope today, one can see it growing into a vital real-time tool capable of interacting with a knowledge base to help agents intelligently and deftly guide the customer to an outcome of mutual benefit to the enterprise and the customer, with a minimum of the hit and miss that typifies the call centre experience today. I am not aware of another tool that is so close to offering that possibility.
Published by: electronicdawn Ltd.