QA Evaluation Forms in Observe.AI empower you to evaluate agent-customer interactions with speed and accuracy, and improve agent performance with contextual and actionable feedback.
You can evaluate more calls and agents in less time, and with a more streamlined evaluation and feedback process, your QA teams are better equipped to deliver efficient, rich call audits, boosting both employee experience (EX) and customer experience (CX).
Today we’re excited to announce three new updates that bolster Observe.AI Evaluation Forms.
Evaluation Form Builder: Create new evaluation forms without breaking a sweat
With our new, flexible Evaluation Form Builder, quickly create evaluation forms and align them to your interaction types (like voice calls or chat). You can also customize form questions and scoring logic to fit your exact business parameters. Lastly, preview forms as you build them to inspect their structure.
Form Testing: Get your forms right before rolling them out
Form Testing lets you simulate the evaluation experience for new evaluation forms in a sandbox environment with test evaluation data. This helps you confirm the right form, scoring, and grading structure, and surfaces any errors or gaps before your evaluation forms go live.
Score Breakdown & Section-Level Scores: Quicker, deeper performance insights on every agent
We’ve added more robust reporting on agent performance with Score Breakdown and Section-Level Scores.
- Score Breakdown delivers a summary of agent performance on any evaluation, including points scored, auto-fails, grade, and final score.
- Section-Level Scores let you view agent performance across each section of the evaluation form without manual calculations. From there, identify which sections are driving low scores and drill into those specific areas while coaching agents.
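To make the rollup concrete, here is a minimal sketch of how section-level scores and a final score might be computed from per-question points. This is purely illustrative, not Observe.AI's implementation: the data shapes, section names, and the auto-fail rule (any failed auto-fail question zeroes the final score) are all assumptions.

```python
def section_scores(form):
    """Per-section percentage: points earned / points possible in that section."""
    return {
        section["name"]: 100 * sum(q["earned"] for q in section["questions"])
        / sum(q["possible"] for q in section["questions"])
        for section in form["sections"]
    }

def final_score(form):
    """Overall percentage; a failed auto-fail question zeroes the evaluation."""
    questions = [q for s in form["sections"] for q in s["questions"]]
    # Assumed auto-fail rule: earning 0 on an auto-fail question fails the call.
    if any(q.get("auto_fail") and q["earned"] == 0 for q in questions):
        return 0.0
    earned = sum(q["earned"] for q in questions)
    possible = sum(q["possible"] for q in questions)
    return 100 * earned / possible

# Hypothetical evaluation with two sections.
evaluation = {
    "sections": [
        {"name": "Greeting", "questions": [{"earned": 5, "possible": 5}]},
        {"name": "Compliance", "questions": [
            {"earned": 3, "possible": 5, "auto_fail": False},
        ]},
    ]
}

print(section_scores(evaluation))  # {'Greeting': 100.0, 'Compliance': 60.0}
print(final_score(evaluation))     # 80.0
```

Surfacing the per-section percentages alongside the final score is what removes the manual math: a coach can see at a glance that "Compliance" (60%) is dragging down an otherwise strong evaluation.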