TABLE OF CONTENTS

  1. Ticket Volume: Opened vs Closed
  2. Phone Calls: Inbound & Outbound
  3. Call Duration
  4. Live Chat & AI Support
  5. Training Sessions
  6. CSAT & Training Survey Feedback
  7. Utilisation & Key Activities
  8. Overall Highlights & Key Takeaways

1. Ticket Volume: Opened vs Closed

Overview

  • Tickets opened: 2,529
  • Tickets closed: 2,571
  • The team closed 42 more tickets than were opened, reducing the ticket backlog over the month.

By team member

  Team member   | Tickets opened | Tickets closed
  Leo           | 139            | 122
  Mercedesz*    | 99             | 125
  Jenine        | 433            | 452
  Sarah*        | 678            | 728
  Zac           | 149            | 154
  India         | 1,031          | 990
  Total         | 2,529          | 2,571

Notes

  • India and Sarah* handled the highest ticket volumes.
  • Most team members closed more tickets than they opened, contributing to a net reduction in open tickets.
  • The overall closure rate was roughly 102% (2,571 closed against 2,529 opened), which is where we want to be to keep the queue under control; a quick check of these figures is sketched below.
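
As a sanity check, the headline figures above can be reproduced from the per-member table. A minimal Python sketch, purely illustrative, with the names and counts copied from the table:

    # Tickets (opened, closed) per team member, copied from the table above.
    tickets = {
        "Leo": (139, 122),
        "Mercedesz*": (99, 125),
        "Jenine": (433, 452),
        "Sarah*": (678, 728),
        "Zac": (149, 154),
        "India": (1031, 990),
    }

    opened = sum(o for o, _ in tickets.values())   # 2,529
    closed = sum(c for _, c in tickets.values())   # 2,571

    net_reduction = closed - opened                # 42 tickets off the backlog
    closure_rate = closed / opened * 100           # ~101.7%

    print(f"Opened {opened}, closed {closed}, "
          f"net backlog reduction {net_reduction}, closure rate {closure_rate:.1f}%")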

2. Phone Calls: Inbound & Outbound

Totals

  • Inbound calls: 238
  • Outbound calls: 366

By team member

  Team member   | Inbound calls | Outbound calls
  Sarah*        | 33            | 195
  Mercedesz*    | 45            | 36
  Jenine        | 33            | 76
  Leo           | 103           | 22
  Zac           | 24            | 37
  Total         | 238           | 366

Notes

  • Leo handled the highest number of inbound calls (103), taking the bulk of front-line phone traffic.
  • Sarah* led on outbound calls (195), reflecting strong follow-up and proactive engagement with customers.
  • All listed team members contributed to both inbound and outbound activity.

3. Call Duration

Total talk time by agent

  • Sarah* – 7:16:11
  • Leo – 3:57:25
  • Mercedesz – 2:43:07
  • Jenine – 1:16:24
  • Zac – 0:57:11

This shows a healthy spread of call handling across the team, with Sarah* and Leo carrying the largest share of total talk time (together roughly two-thirds).
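
For reference, the durations above can be totalled and compared with a few lines of Python; this is only a sketch using the H:MM:SS values listed above:

    # Talk time per agent (H:MM:SS), copied from the list above.
    talk_time = {
        "Sarah*": "7:16:11",
        "Leo": "3:57:25",
        "Mercedesz": "2:43:07",
        "Jenine": "1:16:24",
        "Zac": "0:57:11",
    }

    def to_seconds(hms: str) -> int:
        h, m, s = (int(part) for part in hms.split(":"))
        return h * 3600 + m * 60 + s

    total = sum(to_seconds(t) for t in talk_time.values())  # 16:10:18 in total

    for agent, hms in talk_time.items():
        share = to_seconds(hms) / total * 100
        print(f"{agent}: {hms} ({share:.0f}% of total talk time)")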


4. Live Chat & AI Support

AI Agent

  • 83 chats handled by the AI agent.
  • Transfer to agent rate: 46% – just under half of AI interactions required hand-off to a human.
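
In absolute terms, a 46% transfer rate on 83 AI-handled chats works out to roughly 38 conversations escalated to a human; a one-line check using the figures above:

    ai_chats = 83
    transfer_rate = 0.46  # as reported above

    # Approximate number of AI chats handed off to a human agent.
    handed_off = round(ai_chats * transfer_rate)  # 38
    print(f"{handed_off} of {ai_chats} AI chats were transferred to a human")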

Human chat activity

  Team member   | Chats handled
  Sree          | 21
  Yashi         | 13
  Sargun        | 12
  Vaishali      | 3
  Total         | 49

Notes

  • AI is handling a significant share of front-line chat volume, with humans focusing on the more complex or escalated conversations.
  • Sree took the largest share of human chats, with Yashi and Sargun also contributing strongly.

5. Training Sessions

Training sessions completed (by trainer)

  Trainer       | Sessions completed
  Sarah*        | 12
  Mercedesz*    | 10
  GRATIS/Leo    | 5
  Jenine        | 5
  Total         | 32

The team delivered 32 training sessions in January, with Sarah* and Mercedesz* leading the activity, supported by Leo and Jenine.


6. CSAT & Training Survey Feedback

6.1 Training CSAT – live training sessions

Surveys received for January sessions:

  • Spencer House (08 Jan 2026)
  • Watershed (13 Jan 2026)
  • Kings Court Hotel (27 Jan 2026)
  • Crooklands Hotel (28 Jan 2026)
  • The Landmark Hotel (29 Jan 2026)

Quantitative results (training)

  • Overall satisfaction: all “Very Satisfied”
  • Expectations: mostly “Exceeded Expectations”, one “Met Expectations”
  • Trainer knowledge & presentation: all “Excellent”
  • Clarity of material: all “Very Clear”
  • Topics covered: all “Yes, all topics were covered”
  • Confidence after training: mainly “Very Confident”, one “Confident”
  • Session duration: all rated “Just Right”

Qualitative feedback – highlights (training)

  • Every element of the platform was covered in a clear and concise manner.
  • Really clear overview of the system.
  • “The training was perfect, no notes on improvement.”
  • “All points were clear and we covered every section that we could. There was plenty of opportunities to ask questions and we felt like all of our points were covered.”
  • Explicit thanks to Mercedesz: “Thank you to Mercedesz who was an excellent and thorough trainer.”

Summary (training)

  • 100% of respondents were Very Satisfied with their training.
  • Clients consistently highlighted:
    • Clear explanations
    • Comprehensive coverage of the platform
    • Trainer knowledge and professionalism
  • The only suggestion for change was the idea of future refresher training to avoid bad habits – a positive indication that clients want to stay engaged.

6.2 Support ticket CSAT – January 2026

According to the CSAT survey export for January, we received 113 responses relating to support tickets.

Rating distribution (tickets)

  • Very satisfied: 99 responses (87.6%)
  • Satisfied: 13 responses (11.5%)
  • Very dissatisfied: 1 response (0.9%)
  • Dissatisfied / Neutral: 0 responses
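
The percentages above follow directly from the 113 responses; a minimal check of the arithmetic, with the counts copied from the list above:

    # CSAT response counts for January support tickets.
    responses = {
        "Very satisfied": 99,
        "Satisfied": 13,
        "Very dissatisfied": 1,
    }

    total = sum(responses.values())  # 113

    for rating, count in responses.items():
        print(f"{rating}: {count} ({count / total * 100:.1f}%)")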

Summary (tickets)

  • Overall CSAT for January support tickets was very strong, with almost 9 out of 10 responses rated “Very satisfied” and the remainder almost entirely “Satisfied”.
  • There was one “Very dissatisfied” response, which we can treat as an outlier to review individually.
  • Comment themes from the positive responses include:
    • Fast response and resolution (“super speedy”, “quick and effective response”, “very swift response”).
    • Helpful, friendly assistance (e.g. “kind and helpful”, “thank you so much!!!”).
    • Confidence in the team’s reliability (e.g. “seamless process and always easy to work with the support team”).

Taken together, training surveys and ticket CSAT show consistently high satisfaction across both structured training and day-to-day support interactions in January.


7. Utilisation & Key Activities

Leo – 136 hours logged

Key activity highlights:

  • 6 hours – Client training
  • 3.5 hours – Compiling monthly agent stats
  • 2.5 hours – Assisting Hull CVB with an issue
  • 3 hours – Investigating preferred venue and report issues for Calder
  • 1.5 hours – Reviewing all Jira tickets escalated from Freshdesk for relevance
  • 1.5 hours – Investigating an issue with billing notes not pulling through

Summary: Leo balanced frontline work (tickets, calls) with reporting and targeted investigation into platform and billing issues, as well as support for key clients.


Sarah – 137 hours logged

Key activity highlights:

  • 13 hours – Customer training
  • 7 hours – Removing venues from parent groups
  • 1.5 hours – Preparing for and delivering training to the India team on choosing the correct brand
  • 3 hours – Chasing March renewals

Summary: Strong mix of training, data hygiene (venue groups) and revenue-focused activity via renewal chasing.


Mercedesz

(Overall hours logged not specified, but detailed breakdown of work provided.)

Key activity highlights:

  • 9 hours – Completing AGIITO mapping
  • 12.5 hours – Cleansing user data
  • 6 hours – Updating logos on listings
  • 11 hours – Customer training

Summary: Heavy focus on data quality and presentation (mapping, user data, logos) alongside a solid training workload.


Jenine – 97 hours logged

Key activity highlights:

  • 27 hours – TVD quality report, primarily correcting postcodes
  • 5 hours – Customer training
  • 3.5 hours – Assisting Leonardo Hotels with login issues

Summary: Major contribution to data quality assurance (TVD quality report), backed up with client-facing support and training.


8. Overall Highlights & Key Takeaways

  • Backlog reduction:
    • Opened 2,529 tickets and closed 2,571, reducing the open ticket backlog by 42.
  • Channel coverage:
    • Good balance across tickets, phone, and chat, with AI picking up a significant portion of initial chat contacts.
    • Leo led inbound calls, while Sarah* led outbound follow-up.
  • Training & CSAT:
    • 32 training sessions delivered in January.
    • Training feedback was universally positive with 100% Very Satisfied ratings and consistently high confidence post-training.
    • Ticket CSAT was very strong: 87.6% Very satisfied, 11.5% Satisfied, 0.9% Very dissatisfied (single outlier).
  • Quality & project work:
    • Ongoing work on mapping, data cleansing, postcodes, logos, and venue structures continues to improve data quality and user experience.
    • Focused investigations into billing issues, Jira escalations, and preferred venue/report issues support platform stability.
  • Individual contributions:
    • India and Sarah* carried the highest ticket volumes.
    • Sree, Yashi, and Sargun were key contributors on live chat.
    • Leo, Sarah*, Mercedesz*, and Jenine all contributed significantly to training and project work alongside their day-to-day support activity.

Overall, January was a productive month with strong ticket management, excellent training feedback, very positive CSAT results, and continued investment in data quality and process improvements across the team.