
The numbers tell a common story.
We met every SLA we set, yet nearly 1 in 12 customers still left dissatisfied. The instinct to say "these should both be 100%" is understandable, but it reveals something important: SLAs and CSAT measure different things.
This divergence is not a problem to fix away. In fact, a 100% SLA alongside sub-100% CSAT is the industry norm, not the exception. The reason is structural.
SLAs are a compliance metric: they measure whether your team responded within a defined window. CSAT is an outcome metric: it measures whether the customer felt their problem was resolved well. The two can diverge even when every SLA is met, for reasons explored below.
Across B2B support benchmarks, a CSAT score of 85-95% is generally considered excellent. At 92%, we were performing well. The gap is not a signal that something is broken; it is a signal that our SLA definition and our customer expectations aren't perfectly aligned.
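The arithmetic behind these figures is worth making explicit. Using the 48-ticket volume cited later in this report, the 8% gap works out to exactly the handful of tickets we need to investigate:

```python
# Sanity-check the headline numbers from this report (figures from the text).
total_tickets = 48
csat = 0.92  # 92% of respondents rated the experience positively

dissatisfied = round(total_tickets * (1 - csat))  # 48 * 0.08 = 3.84 -> 4
print(dissatisfied)                              # 4 tickets
print(f"1 in {total_tickets // dissatisfied}")   # "1 in 12"
```

Four tickets out of 48 is exactly 1 in 12, which is why the gap is small enough to analyze ticket by ticket.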
Our 18/5 SLA is contractually sound, but it creates a perceived time gap: outside coverage hours our clock doesn't run, while the customer's does. A customer who raises an issue at 8 PM on Friday experiences 60 hours of unresolved anxiety before our SLA clock even starts on Monday morning. Our SLA says "resolved in 1 hour." Their lived experience says "waited all weekend."
This is likely responsible for a significant share of our 8% CSAT gap. It's not that we failed our SLAs; it's that the SLA itself doesn't capture what the customer experienced.
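The weekend math above can be sketched directly. This assumes, for illustration, that a ticket filed Friday 8 PM doesn't start its SLA clock until Monday 8 AM (the report only says "Monday morning"; the exact hour here is an assumption):

```python
from datetime import datetime

# Illustrative sketch of the "invisible wait": wall-clock time elapsed
# before the SLA clock even starts. Specific dates are arbitrary; the
# weekday pattern (Friday evening -> Monday morning) is what matters.
filed      = datetime(2024, 6, 7, 20, 0)   # Friday 8:00 PM
sla_starts = datetime(2024, 6, 10, 8, 0)   # Monday 8:00 AM (assumed)

invisible_wait = sla_starts - filed
print(invisible_wait.total_seconds() / 3600)  # 60.0 hours of wall-clock wait

# SLA-clock time accrued over this span: zero. The customer's lived
# experience: 60 hours, plus the 1-hour resolution window on Monday.
```

The SLA reports a perfect 1-hour resolution; the customer's calendar reports two and a half days.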
I wouldn't say the SLA is wrongly defined, but I would say it is incompletely defined. There are two distinct problems:
SLAs measure speed, not resolution quality. A ticket can be closed within the SLA window with a workaround, a partial fix, or even a "we'll look into this" response. The customer still rates their experience on whether their problem was genuinely solved.
The SLA clock doesn’t reflect customers' perception of time. Customers don't experience business hours. They experience wall-clock time. Our 18/5 schedule is a business constraint, a legitimate one, but it creates invisible wait time that no SLA currently accounts for.
We brainstormed and settled on the following levers, to be rolled out in phases:
Instead of "resolved within 1 hour," we’d redefine it as "resolved within 1 hour and customer satisfaction rating ≥ 4 stars." This makes the SLA a true quality measure, not just a speed measure. We are looking at using composite SLAs like this.
With 48 tickets, we can almost certainly identify which ones came in on Friday nights vs. weekday hours. If Friday-night tickets have lower CSAT, that confirms the perception gap theory and gives us a clear action item.
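The cohort split described above amounts to partitioning tickets by arrival time and comparing averages. A sketch, with made-up placeholder data (the boolean flag and star values are illustrative only):

```python
from statistics import mean

# (submitted_on_friday_night, csat_stars) -- illustrative data only,
# standing in for the real 48-ticket export.
tickets = [(True, 3), (True, 2), (False, 5), (False, 5), (False, 4)]

off_hours = [stars for friday_night, stars in tickets if friday_night]
in_hours  = [stars for friday_night, stars in tickets if not friday_night]

# A lower off-hours mean would support the perception-gap theory.
print(mean(off_hours), mean(in_hours))
```

If the real data shows the same shape, the action item is clear: either extend coverage or set expectations at submission time for off-hours tickets.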
We currently don’t measure FCR as a standard practice. We have decided to add that as a metric. FCR measures whether the issue was resolved in a single interaction, without follow-up. Low FCR tends to drag down CSAT even when the SLA is met. If tickets are being closed and reopened, that pattern will surface here.
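FCR itself is a straightforward ratio once the data exists. A sketch, assuming each ticket records an interaction count and a reopened flag (again, illustrative field names, not our actual ticketing schema):

```python
# First-contact resolution: share of tickets resolved in a single
# interaction, with no reopen or follow-up.
def fcr_rate(tickets: list[dict]) -> float:
    resolved_first_touch = [t for t in tickets
                            if t["interactions"] == 1 and not t["reopened"]]
    return len(resolved_first_touch) / len(tickets)

sample = [
    {"interactions": 1, "reopened": False},  # true first-contact resolution
    {"interactions": 3, "reopened": False},  # needed follow-ups
    {"interactions": 1, "reopened": True},   # closed fast, then reopened
]
print(fcr_rate(sample))  # ~0.33: only one of three counts toward FCR
```

Note the third ticket: it would have passed a speed-only SLA, which is exactly why FCR catches close-and-reopen patterns that SLAs miss.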
The 8% CSAT gap is not a red flag; it is a diagnostic signal. A 100% SLA and 92% CSAT together tell you that our team is operationally disciplined, but that our SLA definition doesn't yet fully capture the customer experience, particularly around off-hours submissions.
The most important single action is to identify which tickets drove the 4 ratings that pulled us from 100% and to check whether they share a pattern: timing, issue type, or resolution completeness. (We have already run this analysis on the current 48 tickets; it is recorded here as a standing action item going forward.) That analysis tells us exactly which lever to pull first.