When 100% SLA meets 92% CSAT: Understanding the Gap

The core paradox

The numbers tell a common story.

We met every SLA we set, yet nearly 1 in 12 customers still left dissatisfied. The instinct to say "these should both be 100%" is understandable, but it reveals something important: SLAs and CSAT measure different things.

  • SLAs measure your promises to yourself.  
  • CSAT measures the customer's experience of reality.

Is this an anomaly? What does the industry say?

Not at all. In fact, a 100% SLA alongside sub-100% CSAT is the industry norm, not the exception. The reason is structural.

SLAs are a compliance metric. They measure whether your team responded within a defined window. CSAT is an outcome metric that measures whether the customer felt their problem was resolved well. These can diverge for several reasons, even when SLAs are fully met:

  • A ticket closed in 45 minutes with a partial fix still satisfies a 1-hour SLA
  • A customer who waited until Monday morning for a Friday night issue may feel delayed even if your clock only started on Monday
  • Resolution quality, tone, and communication style are invisible to SLA calculations

Across B2B support benchmarks, a CSAT score of 85-95% is generally considered excellent. At 92%, we were performing well. The gap is not a signal that something is broken; it is a signal that our SLA definition and our customer expectations aren't perfectly aligned.

The Friday night problem, and why it’s central here

Our 18/5 SLA is contractually sound, but it creates a perceived time gap for customers even when our clock isn't running. A customer who raises an issue at 8 PM on Friday experiences 60 hours of unresolved anxiety before our SLA clock even starts on Monday morning. Our SLA says "resolved in 1 hour." Their lived experience says "waited all weekend."
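That 60-hour figure is just wall-clock arithmetic. A minimal sketch, assuming (hypothetically) that our 18/5 coverage window opens at 8 AM on Monday:

```python
from datetime import datetime

# Assumption for illustration: the SLA clock starts Monday 8 AM.
raised = datetime(2024, 6, 7, 20, 0)        # Friday, 8 PM
clock_starts = datetime(2024, 6, 10, 8, 0)  # Monday, 8 AM

# Wall-clock wait the customer experiences before our SLA clock even starts.
perceived_wait_hours = (clock_starts - raised).total_seconds() / 3600
print(perceived_wait_hours)  # 60.0
```

The SLA would then report a 1-hour resolution on top of a 60-hour wait it never saw.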

This is likely responsible for a significant share of our 8% CSAT gap. It's not that we failed our SLAs; it's that the SLA itself doesn't capture what the customer experienced.

Are our SLAs wrongly defined?

I wouldn’t say they are wrongly defined; I’d say they are incompletely defined. There are two distinct problems:

Problem 1

SLAs measure speed, and not resolution quality. A ticket can be closed within the SLA window with a workaround, a partial fix, or even a "we'll look into this" response. The customer still rates their experience on whether their problem was genuinely solved.

Problem 2

The SLA clock doesn’t reflect customers' perception of time. Customers don't experience business hours. They experience wall-clock time. Our 18/5 schedule is a business constraint, a legitimate one, but it creates invisible wait time that no SLA currently accounts for.

How to realign SLAs and CSAT

After brainstorming, we decided to implement the following levers in phases:

1. Add a CSAT threshold to our SLA definition.

Instead of "resolved within 1 hour," we’d redefine it as "resolved within 1 hour and customer satisfaction rating ≥ 4 stars." This makes the SLA a true quality measure, not just a speed measure. We are looking at using composite SLAs like this.

2. Segment CSAT by issue timing

With only 48 tickets, identifying which ones came in on Friday nights versus weekday hours is straightforward. If Friday-night tickets have lower CSAT, that confirms the perception-gap theory and gives us a clear action item.
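The segmentation itself is a few lines. A sketch with made-up tickets; the off-hours cutoff (Friday after 6 PM, plus weekends) and the record layout are assumptions:

```python
import statistics
from datetime import datetime

# Hypothetical tickets: (created_at, csat_stars). Not real data.
tickets = [
    (datetime(2024, 6, 7, 20, 30), 3),   # Friday night
    (datetime(2024, 6, 11, 10, 0), 5),   # weekday hours
    (datetime(2024, 6, 14, 21, 15), 2),  # Friday night
    (datetime(2024, 6, 12, 14, 30), 4),  # weekday hours
]

def is_off_hours(ts: datetime) -> bool:
    # Assumed cutoff: Friday from 6 PM onward, or any time on the weekend.
    return (ts.weekday() == 4 and ts.hour >= 18) or ts.weekday() >= 5

by_segment: dict[str, list[int]] = {"off_hours": [], "weekday": []}
for created_at, stars in tickets:
    key = "off_hours" if is_off_hours(created_at) else "weekday"
    by_segment[key].append(stars)

for segment, scores in by_segment.items():
    print(segment, statistics.mean(scores))
```

If the off-hours mean sits visibly below the weekday mean on the real 48 tickets, the perception-gap theory holds.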

3. Track first contact resolution (FCR) alongside CSAT

We currently don’t measure FCR as a standard practice. We have decided to add that as a metric. FCR measures whether the issue was resolved in a single interaction, without follow-up. Low FCR tends to drag down CSAT even when the SLA is met. If tickets are being closed and reopened, that pattern will surface here.
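Once interaction counts and reopen flags are tracked, FCR reduces to a simple ratio. A sketch over hypothetical records (the field names are assumptions):

```python
# Hypothetical ticket records; "interactions" and "reopened" are assumed fields.
tickets = [
    {"id": 1, "interactions": 1, "reopened": False},
    {"id": 2, "interactions": 3, "reopened": False},  # needed follow-ups
    {"id": 3, "interactions": 1, "reopened": True},   # closed, then reopened
    {"id": 4, "interactions": 1, "reopened": False},
]

# FCR: resolved in a single interaction and never reopened.
fcr_count = sum(1 for t in tickets if t["interactions"] == 1 and not t["reopened"])
fcr_rate = fcr_count / len(tickets)
print(f"FCR: {fcr_rate:.0%}")  # FCR: 50%
```

Note that ticket 3 met the "single interaction" test but fails FCR because it was reopened; that is exactly the close-and-reopen pattern this metric is meant to surface.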

Here is the takeaway

The 8% CSAT gap is not a red flag; it is a diagnostic signal. A 100% SLA and 92% CSAT together tell you that our team is operationally disciplined, but that our SLA definition doesn't yet fully capture the customer experience, particularly around off-hours submissions.

The most important single action is to identify which tickets drove the 4 ratings that pulled us from 100% and to see whether they share a pattern, such as timing, issue type, or resolution completeness. (We have already done this for the current 48 tickets, but it remains a standing action item going forward.) That analysis will tell us exactly which lever to pull first.

Author:
Sabapathy Narayanan
