Home
Quick overview for QA, Ops, and Leadership: what’s happening and what needs attention.
Average quality score across all submitted QA evaluations.
Share of evaluations with no fatal errors (fatal compliance).
Percent of evaluations that meet the pass threshold.
🔔 What needs attention
Follow-up actions past their due date and not yet closed.
Follow-up actions that are still open (not closed).
Evaluations that are missing calibration sign-off.
Most recent evaluations where a fatal fail occurred.
⚠️ Quality signals
The QA parameter with the highest fail count in your dataset.
The theme/category most associated with failures.
Most frequent failure pattern spotted across recent evaluations.
📋 Evaluation Information
✅ Quality Parameters (32 Total)
💬 General Comments
🧩 Follow-up Action
Tip: Use this to track coaching and follow-up actions from QA. It doesn’t affect scoring; it just closes the loop.
📊 Live Score
Total Evaluations
ⓘ
Total Evaluations
Count of completed QA evaluations stored in the system.
0
Avg Quality
ⓘ
Avg Quality
Average quality score across all evaluations. Best used for trend direction, not individual decisions.
0%
Pass Rate
ⓘ
Pass Rate
Percentage of evaluations that met the pass threshold (including fatal requirements).
0%
Fatal Accuracy
ⓘ
Fatal Accuracy
Percentage of evaluations where all fatal (non-negotiable) requirements were met. Primary indicator of service risk.
0%
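For reference, a minimal sketch of how these Live Score tiles could be computed. The `Evaluation` shape and field names below are assumptions for illustration, not the app’s actual schema:

```ts
// Hypothetical evaluation record; field names are assumptions, not the real schema.
interface Evaluation {
  qualityScore: number;    // 0–100 quality score for one evaluation
  passed: boolean;         // met the pass threshold (including fatal requirements)
  fatalCompliant: boolean; // all fatal (non-negotiable) requirements met
}

function liveScore(evals: Evaluation[]) {
  const n = evals.length;
  if (n === 0) return { total: 0, avgQuality: 0, passRate: 0, fatalAccuracy: 0 };
  const pct = (count: number) => Math.round((count / n) * 100);
  return {
    total: n,
    avgQuality: Math.round(evals.reduce((s, e) => s + e.qualityScore, 0) / n),
    passRate: pct(evals.filter(e => e.passed).length),
    fatalAccuracy: pct(evals.filter(e => e.fatalCompliant).length),
  };
}
```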
| # | Date | Ticket | Agent | Channel | Quality | Status | Calibration | Follow-up | Actions |
|---|---|---|---|---|---|---|---|---|---|
No evaluations yet
Submit your first evaluation to see it here
DSAT Analysis – Evaluation
Capture root-cause attribution for DSAT cases (diagnostic, not scored).
Total Evaluations
0
Avg Quality
0%
Pass Rate
0%
Fatal Accuracy
0%
All open actions pulled from evaluations (and DSAT where applicable): coaching and other actions not yet closed.
No data yet
Dashboard will appear once you submit evaluations
📈 Quality Trend
ⓘ
Quality Trend
Weekly movement of average quality. Use to spot improvement/decline over time; small week-to-week noise is normal.
👥 Agent Performance
ⓘ
Agent Performance
Average quality by agent. Use to identify coaching opportunities; always consider sample size (N).
📞 Channel Distribution
ⓘ
Channel Distribution
Share of evaluations by channel (Call/Email/Chat). Helps explain shifts in quality due to channel mix.
🎯 Top Failed Parameters
ⓘ
Top Failed Parameters
Most frequently failed QA parameters. High frequency does not always mean highest impact; pair with the Insights impact ranking.
Number of DSAT entries over time (based on entry date).
Breakdown of DSAT by primary failure type.
Which departments have the most DSAT entries.
Most common DSAT sub-reasons across entries.
Auto-generated summary of drivers, hotspots, and recent DSAT notes.
Top DSAT drivers + severity mix to guide what to fix first.
Where DSAT is clustering by agent and department.
Most recent 'what went wrong' summaries from DSAT entries.
Primary Quality Killer
ⓘ
Primary Quality Killer
The single behaviour causing the biggest drop in overall quality across evaluations (severity × frequency).
—
—
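A minimal sketch of the severity × frequency idea behind this tile. The stats shape and field names are assumptions; severity here stands in for parameter weight with any fatal boost already applied:

```ts
// Hypothetical per-parameter failure stats; names are illustrative only.
interface ParameterFailures {
  parameter: string;
  failCount: number; // frequency: how often the parameter failed
  severity: number;  // e.g., parameter weight, boosted when fatal
}

// Severity × frequency: the behaviour with the largest product wins.
function primaryQualityKiller(stats: ParameterFailures[]): string | null {
  if (stats.length === 0) return null;
  return stats.reduce((worst, s) =>
    s.severity * s.failCount > worst.severity * worst.failCount ? s : worst
  ).parameter;
}
```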
Most Impacted Theme
ⓘ
Most Impacted Theme
The coaching theme with the highest failure concentration, useful for targeting training and process fixes.
—
—
Emerging Signal
ⓘ
Emerging Signal
An early pattern showing unusual change (e.g., rising fatal misses or a spike in a theme). Treat as a prompt to investigate.
—
—
No insights yet
Insights will appear once you submit evaluations
✅ Action Summary
ⓘ
Action Summary
Follow-up actions created from QA findings (e.g., coaching, refreshers). Open and overdue actions indicate where improvement work is still pending.
| Type | Open | Closed | Overdue |
|---|---|---|---|
| Owner | Open | Closed | Overdue |
|---|---|---|---|
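A minimal sketch of the rollups behind these two tables, grouping by type or owner. The `FollowUpAction` shape is an assumption; overdue means past the due date and not closed:

```ts
interface FollowUpAction {
  type: string;    // e.g., "Coaching", "Refresher" (illustrative values)
  owner: string;
  closed: boolean;
  dueDate: string; // ISO date, e.g., "2024-05-01"
}

interface Rollup { open: number; closed: number; overdue: number; }

// Group actions by a key (type or owner) and count open/closed/overdue.
function summarize(actions: FollowUpAction[], key: "type" | "owner"): Map<string, Rollup> {
  const today = new Date().toISOString().slice(0, 10);
  const out = new Map<string, Rollup>();
  for (const a of actions) {
    const row = out.get(a[key]) ?? { open: 0, closed: 0, overdue: 0 };
    if (a.closed) row.closed++;
    else {
      row.open++;
      if (a.dueDate < today) row.overdue++; // past due and not closed
    }
    out.set(a[key], row);
  }
  return out;
}
```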
🔻 Top 3 Behaviours Impacting Quality
ⓘ
Top Behaviours Impacting Quality
Ranked using failure rate × parameter weight, with fatal items boosted. This highlights what hurts quality most, not just what fails most.
Ranked by failure rate × weight (fatal boosted)
| Rank | Behaviour (Parameter) | Failure Rate | Impact |
|---|---|---|---|
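A sketch of the ranking formula named above (failure rate × parameter weight, fatal boosted). The boost factor and field names are assumptions, not the app’s actual values:

```ts
// Hypothetical parameter stats; field names are assumptions.
interface ParamStats {
  name: string;
  fails: number;  // evaluations where this parameter failed
  total: number;  // evaluations where it was scored
  weight: number; // parameter weight from Evaluation Settings
  fatal: boolean; // fatal parameters get a boost
}

const FATAL_BOOST = 2; // assumed multiplier; the real factor may differ

// Impact = failure rate × weight, with fatal parameters boosted.
function topBehaviours(stats: ParamStats[], topN = 3) {
  return stats
    .map(s => {
      const failureRate = s.total ? s.fails / s.total : 0;
      return {
        name: s.name,
        failureRate,
        impact: failureRate * s.weight * (s.fatal ? FATAL_BOOST : 1),
      };
    })
    .sort((a, b) => b.impact - a.impact)
    .slice(0, topN);
}
```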
Evaluator Consistency
ⓘ
Evaluator Consistency
Shows if an evaluator scores systematically higher/lower than the org average once sample size is sufficient. Used for calibration, not performance.
| Evaluator | N | Avg Quality | Avg Fatal | Pass Rate | Δ vs Org | Label |
|---|---|---|---|---|---|---|
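A sketch of how the Δ vs Org column could drive the Label once sample size is sufficient. The thresholds below are assumptions for illustration:

```ts
// Hypothetical per-evaluator aggregates; a minimum sample size gates the label.
interface EvaluatorStats { evaluator: string; n: number; avgQuality: number; }

const MIN_N = 20;    // assumed sufficiency threshold
const TOLERANCE = 3; // assumed drift (in points) still treated as "in line"

function consistencyLabel(e: EvaluatorStats, orgAvg: number): string {
  if (e.n < MIN_N) return "Insufficient N";
  const delta = e.avgQuality - orgAvg; // Δ vs Org
  if (delta > TOLERANCE) return "Scores high";
  if (delta < -TOLERANCE) return "Scores low";
  return "In line";
}
```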
🎯 Coaching Themes
ⓘ
Coaching Themes
Grouped behaviours that roll up multiple parameters into a single coaching area. Use themes to prioritize training and process fixes.
📈 Weekly Quality Trend
ⓘ
Weekly Trend
Week-by-week average quality. Use for direction and stability; interpret with sample size.
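A sketch of the weekly bucketing behind this chart. The record shape is assumed, and the week key is a simple day-of-year approximation rather than a strict ISO-8601 week, which is adequate for trend direction:

```ts
// Bucket evaluations by week and average their quality scores.
// Only `date` (ISO string) and `qualityScore` are used; names are illustrative.
function weeklyTrend(evals: { date: string; qualityScore: number }[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const e of evals) {
    const d = new Date(e.date);
    const start = new Date(d.getFullYear(), 0, 1);
    const week = Math.ceil(((d.getTime() - start.getTime()) / 86400000 + 1) / 7);
    const key = `${d.getFullYear()}-W${String(week).padStart(2, "0")}`;
    const arr = buckets.get(key) ?? [];
    arr.push(e.qualityScore);
    buckets.set(key, arr);
  }
  return new Map(
    [...buckets].map(([k, v]) => [k, v.reduce((s, x) => s + x, 0) / v.length])
  );
}
```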
🧩 Patterns & Signals
ⓘ
Patterns & Signals
Automated highlights from the dataset (e.g., most common failure category, rising fatal misses, channel shifts). Treat as investigation prompts.
Settings
Configure Admin lists, Evaluation scorecard settings, and DSAT taxonomy without affecting existing data.
👥 Manage Agents
✍️ Manage Evaluators
💾 Data Management
📊 Database Status
0 evaluations stored
Evaluation Settings
Rename parameters and adjust weights (controlled choices). Changes apply instantly across scoring, the dashboard, and insights.
| # | Default Name | Custom Name | Info (ⓘ) | Weight | Fatal | Status |
|---|---|---|---|---|---|---|
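For illustration, a possible shape for one row of this table. Field names are assumptions; the point is that a custom name overrides the default everywhere it is displayed:

```ts
// Hypothetical shape for one scorecard parameter row; names are illustrative.
interface ParameterSetting {
  id: number;
  defaultName: string;
  customName?: string; // optional rename; falls back to the default
  weight: number;      // contribution to the quality score
  fatal: boolean;      // fatal parameters gate Fatal Accuracy
  active: boolean;     // Status column
}

// Display-name resolution: the custom name wins when set.
const displayName = (p: ParameterSetting) => p.customName?.trim() || p.defaultName;
```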
DSAT Settings
Customize DSAT dropdowns per client. Changes apply instantly (dynamic).
Departments
These populate the Department dropdown in the DSAT form.
Failure Reasons & Sub-Reasons
These populate the DSAT “Primary Type” and “Sub-Reason” dropdowns.
- Changes apply instantly across DSAT forms (dynamic config).
- Existing DSAT entries keep their original values. If an older value is no longer in the dropdown, it will still display in saved records.
- Weights/scoring are not affected — this is DSAT taxonomy only.
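A sketch of what such a dynamic taxonomy config could look like. All names and values below are illustrative, not shipped defaults:

```ts
// Hypothetical DSAT taxonomy config; dropdowns read from this object,
// so edits apply instantly without touching saved entries.
interface DsatTaxonomy {
  departments: string[];
  reasons: Record<string, string[]>; // Primary Type -> Sub-Reasons
}

const taxonomy: DsatTaxonomy = {
  departments: ["Billing", "Technical Support"],               // example values
  reasons: { "Process Gap": ["Missed SLA", "Wrong routing"] }, // example values
};

// Older entries keep their saved strings even if removed from these lists,
// so rendering reads from the stored record, not the current taxonomy.
```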