In my previous blog, I shared some best practices on sampling. The next big question is: how do I evaluate?
Most organizations have not paused to develop a set of best practices for the overall evaluation process. Even experienced quality analysts struggle to design truly objective evaluation criteria, and because the process is manual at its heart, results are often inconsistent. That inconsistency creates an opening for doubters to challenge the credibility of the process.
As with most things in quality management, the process can vary widely by contact center. What matters is that you clearly document the monitoring process, since documentation forces critical thinking as well as consistency within and across contact centers. For example, you may decide to use dedicated analysts, supervisors, or coaches as evaluators, and if you operate multiple contact centers, you may want to centralize the quality function. At a minimum, you should document answers to the following questions:
- What quality management tools do we need?
- What is our evaluation sample, and does it vary by business unit?
- How will we sample? Random, targeted, intelligent?
- How will we access interactions?
- How will we balance evaluations across channels?
- How will we assign evaluators to interactions?
- How will we score interactions?
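To make the sampling and scoring questions above more concrete, here is a minimal Python sketch of two sampling strategies (random and targeted) and a weighted scorecard. The record fields, criteria names, and weights are illustrative assumptions for this sketch, not a prescribed scheme; your quality management tools will define the real data model.

```python
import random

# Hypothetical interaction records; the fields are illustrative assumptions.
interactions = [
    {"id": i,
     "channel": random.choice(["voice", "chat", "email"]),
     "csat": random.randint(1, 5)}  # customer satisfaction, 1-5
    for i in range(100)
]

def random_sample(pool, n):
    """Random sampling: every interaction has an equal chance of review."""
    return random.sample(pool, n)

def targeted_sample(pool, n):
    """Targeted sampling: prioritize low-satisfaction interactions."""
    return sorted(pool, key=lambda rec: rec["csat"])[:n]

def score(evaluation, weights):
    """Weighted scorecard: each criterion score (0-100) times its weight."""
    return sum(evaluation[criterion] * w for criterion, w in weights.items())

# Example scorecard with assumed criteria and weights (weights sum to 1.0).
weights = {"greeting": 0.2, "accuracy": 0.5, "closing": 0.3}
evaluation = {"greeting": 90, "accuracy": 80, "closing": 100}
print(score(evaluation, weights))  # 88.0
```

Whichever strategy you choose, the key is that the sampling rule and the scorecard weights are written down, so evaluators apply them the same way every time.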
In the next blog, we will discuss WHAT we should evaluate.