A fundamental shift toward quality is underway in the contact center world. Service leaders are under significant pressure to prioritize service quality, but according to a Gartner survey, there is still a lot of work to be done to meet customer expectations.
Call recording, transcription, and AI-based analysis have made it easier to evaluate interactions in a contact center, but there is still a strong need for automation. We addressed that with our Auto QA launch last year and saw tremendous results: Observe.AI customers were able to extend their QA coverage to 100% of customer interactions. Figo estimated they saved 27K hours a year and about $700K in QA efficiency by using Auto QA.
Similarly, providing timely and consistent feedback to agents in your call center is critical to improve customer experience. The right encouragement, or even coaching, could dramatically improve agent performance.
According to our State of Contact Center Conversation Intelligence survey, contact centers with a formal coaching program were 4x more likely to have mostly top performers than those without a formal coaching program.
Here's the problem: Today, QA results are rarely seen by agents. They're typically routed to the manager for coaching opportunities. This means agents don't receive important feedback right away.
In addition, agents may disagree with the evaluation itself and want to dispute it. Establishing two-way communication around QA evaluations drives more transparency and trust across your entire contact center.
For these reasons, Observe.AI is launching automated Share, Acknowledgement and Dispute features to strengthen your overall QA process.
These features will ensure that evaluators, supervisors, and agents are all aligned on QA parameters, results, and the follow-up actions needed to drive performance improvement.
Contact center leaders are already seeing value: One of our beta customers used these features to save more than 600 hours, resulting in $10K annual savings.
Introducing Automated Share, Acknowledgement and Dispute
Timely feedback is essential to improving a contact center agent's performance. Today, agents don't see all of their QA feedback, or they receive it late, often only during a coaching session. There is also a strong need for a two-way process that gives both agents and supervisors a fair chance to share feedback and drive improvements in the QA process. Last but not least, this process needs to be automated so that it works seamlessly.
We recognize this problem, and that's why we are introducing an integrated, end-to-end workflow for planning and reviewing evaluations, all inside Observe.AI.
Share & Acknowledgement of Evaluations
QA teams using Observe.AI for AI-enabled evaluations now have the option of sharing QA results directly with the agent. Once an evaluation is shared with an agent, they can acknowledge the receipt of the report as well as the feedback in it. This will help contact center leaders track whether QA feedback has been received by the agents.
The main benefits of sharing and acknowledgement include:
- Get a formal commitment from agents to review evaluation data and use it to improve their own performance
- Lend more credibility to the QA process by ensuring that agents actually review the feedback provided on their evaluations
- Drive a more scientific approach to personnel management with a paper trail on performance. Given that career decisions are made based on the agent’s performance, the acknowledgement workflow protects organizations from messy legal situations.
How does the acknowledgement process work in Observe.AI?
- Share for Acknowledgement: Evaluators can share the evaluations for the agent's acknowledgement.
- Email Alert: The agent receives an email notifying them that there's an evaluation that needs their review and sign-off.
- Accept Acknowledgement: Clicking through the email takes the agent to the evaluation form, where they can review the feedback provided and acknowledge or dispute the results directly in Observe.AI. (Note: Agents need access to Observe.AI to acknowledge evaluations.)
Dispute of Evaluations
A dispute is raised when an agent disagrees with an evaluation or with a score assigned by the QA team. It is also a form of acknowledgement, one that enables feedback sharing between agents and a supervisor or Ops lead.
The benefits of a dispute process include:
- Ensure agents and supervisors have a fair chance to share their feedback on evaluations
- Improve your QA process by identifying and correcting frequently occurring issues across agents or evaluations, e.g., if an evaluation question is written incorrectly
- Improve your QA team's performance by providing a feedback loop, e.g., requiring evaluators with more than 5 disputes to undergo a calibration process
How does the dispute process work in Observe.AI?
- Raise a Dispute: Gives admins, supervisors, and agents the ability to dispute the findings of an evaluation.
- Resolving Disputes: Users responsible for resolving disputes can monitor the evaluation tab to see the status and resolve disputes. They can mark the resolution and share resolution comments.
- Tracking Disputes: Users can track evaluation status from the evaluations tab.
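The share, acknowledge, and dispute steps above form a simple evaluation lifecycle. Purely as an illustration (the class, method, and state names below are hypothetical sketches, not Observe.AI's actual API), that lifecycle can be modeled as a small state machine:

```python
# Hypothetical sketch of the evaluation lifecycle described above.
# None of these names come from Observe.AI's product or API.

class Evaluation:
    # Allowed transitions between lifecycle states
    TRANSITIONS = {
        "draft": {"shared"},                     # evaluator shares for acknowledgement
        "shared": {"acknowledged", "disputed"},  # agent signs off or raises a dispute
        "disputed": {"resolved"},                # supervisor/Ops lead resolves
        "acknowledged": set(),
        "resolved": set(),
    }

    def __init__(self, agent, score):
        self.agent = agent
        self.score = score
        self.state = "draft"
        self.comments = []

    def _move(self, new_state, comment=None):
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        if comment:
            self.comments.append(comment)

    def share(self):                # triggers the email alert to the agent
        self._move("shared")

    def acknowledge(self):          # agent accepts the feedback
        self._move("acknowledged")

    def dispute(self, reason):      # agent disagrees with a score
        self._move("disputed", reason)

    def resolve(self, resolution):  # supervisor records the resolution
        self._move("resolved", resolution)


# Example: an agent disputes a score, and a supervisor resolves it
ev = Evaluation(agent="A123", score=72)
ev.share()
ev.dispute("Greeting was scored incorrectly")
ev.resolve("Score adjusted after review")
print(ev.state)  # → resolved
```

Keeping every state change behind one guarded transition table is what makes the process auditable: an evaluation can never be resolved without first being disputed, and every dispute or resolution leaves a comment trail.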
Track Acknowledgements and Disputes of Your Evaluations
Tracking acknowledgements helps you see which agents are self-coaching and which are ignoring evaluations, which can inform your coaching strategy. Tracking disputes, on the other hand, can inform your calibration strategy and build more trust between agents, their evaluators, and supervisors.
Observe.AI makes insights related to Acknowledgements and Disputes available in our Reporting module. You can also create additional reports as per your requirements.
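To illustrate the kind of tracking metric such a report can surface (the record shape and field names here are hypothetical, not Observe.AI's reporting schema), a per-agent acknowledgement rate could be computed like so:

```python
# Hypothetical sketch: computing per-agent acknowledgement rates from a
# list of evaluation records. Field names are illustrative only.
from collections import defaultdict

evaluations = [
    {"agent": "A123", "status": "acknowledged"},
    {"agent": "A123", "status": "shared"},      # shared but not yet acknowledged
    {"agent": "B456", "status": "disputed"},
    {"agent": "B456", "status": "acknowledged"},
]

def acknowledgement_rates(records):
    shared = defaultdict(int)   # evaluations shared with each agent
    acked = defaultdict(int)    # evaluations the agent acknowledged
    for rec in records:
        shared[rec["agent"]] += 1
        if rec["status"] == "acknowledged":
            acked[rec["agent"]] += 1
    return {agent: acked[agent] / shared[agent] for agent in shared}

print(acknowledgement_rates(evaluations))
# → {'A123': 0.5, 'B456': 0.5}
```

A low rate flags agents who are not reviewing their feedback, while a high dispute count for a particular evaluator can trigger the calibration process mentioned above.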
Want to learn how you can boost agent performance faster with Observe.AI's full suite of post-interaction AI solutions? We'd love to hear from you and discuss how we can support the unique needs of your business. See a demo today.