  • Financial Services
  • AI
  • Santander UK

Cassi — AI Chatbot Feedback Redesign

"Three weeks, zero budget and a graphic design competition later, I improved feedback submissions by 30% and took audience filter adherence from 1-in-3 to near 100% on Santander's AI colleague assistant."

  • ~100% Filter adherence achieved
  • +30% Feedback submissions
  • 3 wks Delivery window
  • £0 Additional development cost
Organisation: Santander UK
Role: UX Designer
Timeline: 3 weeks
Platform: Colleague Assist (web)

Project summary

What's Colleague Assist?
Colleague Assist — later renamed Cassi — is an AI chatbot that provides Santander colleagues with guidance on how to serve customers. It's a bit like ChatGPT for banking, using i-Exchange as its source of truth to answer colleague queries in real time.
How did I get involved?
I spotted the tool while conducting user research with a frontline colleague. It looked underserved and I was looking for a new challenge after i-Exchange, so I approached the Product Owner directly and offered to help.
Problem statement
Colleagues were not selecting their business area when querying Cassi, and feedback rates were extremely low. This made responses less accurate and prevented the data science team from improving the model.
What did research show?
Users didn't know their feedback directly improved Cassi's responses. The feedback journey offered only three vague options, causing drop-off. The audience filter was positioned far from the query input, so most colleagues ignored it entirely.
What did I do?
Within the constraints — no development budget, three weeks before a change freeze — I improved feedback signalling, repositioned the audience filter, added a tooltip for new users, opened a dedicated Community forum for long-form feedback, and ran an internal rebrand competition.
What was the result?
Cassi launched with a new name, logo, and improved UX. Audience filter adherence reached near 100% in month one. Feedback submissions increased by 30%. All delivered at zero additional development cost.

Background

A new problem, tight constraints

After delivering the main updates for i-Exchange, I was looking for the next problem worth solving. I found it by accident — while conducting user research with a frontline colleague, I noticed them switching between i-Exchange and a tool called Colleague Assist: an AI chatbot that answered colleague questions using i-Exchange content as its source.

The product looked interesting and clearly underserved. I introduced myself to the Product Owner and offered to help. They were glad of it — two problems were stalling the tool's rollout: colleagues weren't selecting their business area before querying, and almost nobody was leaving feedback. Both were essential for the data science team to improve the model's responses.

There was one significant constraint: no money left in the project budget. The development team agreed that front-end changes — copy, styles, layout — counted as business as usual and could ship without additional cost. But any back-end functionality was off the table. On top of that, a change freeze was coming in three weeks. Whatever I was going to do, it had to ship within that window.

Empathise

There's always time for research

There was no time for workshops or a formal research plan. But there were plenty of colleagues already using the tool who I could speak to quickly. I organised a town hall meeting to understand why users weren't leaving feedback or selecting their audience — and what came back was clear and consistent.

What the town hall revealed

  • Feedback options were too vague — Only three discrete choices were available. Users found them too broad to accurately describe their experience, so they gave up and left nothing at all.
  • Users didn't understand the model — Nobody had told colleagues that their feedback directly trained and improved Cassi's responses. Once they knew, they were far more motivated to contribute.
  • The audience filter was easy to miss — It sat at the top of the screen, far from the query input at the bottom. With no prompt or guidance, most users simply forgot it existed.

Heuristic analysis

A quick pass against usability principles confirmed what the town hall surfaced. There was no onboarding guidance — colleagues joining through training might retain the basics initially, but knowledge fade was inevitable. The audience filter was poorly positioned with no supporting prompt. The feedback icons were squeezed between responses without enough visual separation to read as clearly interactive. The UI felt like a prototype that had never been handed to a designer — functional, but not ready for the scale of rollout being planned.

Define

Three distinct problems

The research consolidated around a clear problem statement and three distinct failure modes beneath it.

"Colleagues are not selecting their business area when querying Colleague Assist and feedback rates are really low. This causes responses to be less accurate and prevents the data science team from improving the model."

Problem 1 — The feedback journey

Users wanted to leave feedback when Cassi got something wrong, but the three available options were too broad to capture what they actually meant. Faced with choices that didn't fit their experience, they abandoned the journey entirely. Because Cassi uses negative feedback to guard-rail poor responses, low feedback rates had a direct impact on model safety — not just data quality.

The feedback entry point was also poorly signalled. The thumbs up and down icons were visually buried between responses, easy to overlook entirely.

Problem 2 — The understanding gap

Users had no mental model of how Cassi actually worked. They didn't know that their feedback directly trained the model's responses — for themselves and every colleague using the tool. Once that connection was made explicit in the town hall, motivation to leave feedback increased immediately. This was a communication failure, not a missing feature.

Problem 3 — Audience selection

Prior to the update, 1-in-3 guard-railed responses had no audience selected — meaning the model was being penalised for giving the right answer to the wrong person, rather than for a genuinely bad response. The audience filter needed to be repositioned, prompted for, and made impossible to miss.

Ideate

Working within the constraints

With three days before designs needed to go to developers, I moved quickly. No back-end changes, no new user journeys — front-end only. Each solution had to solve a real problem without adding development cost.

Solving feedback — the Community forum

The three vague feedback options couldn't be replaced with a richer form without back-end work. But I had an existing relationship with the team running Santander's internal Community forum from my work on i-Exchange. I asked if they could open a dedicated Cassi channel where colleagues could leave longer-form feedback. They said yes — and it was free.

Community moderators would triage submissions to the Cassi team. The discrete feedback options were replaced with a direct route to the forum. Problem solved at zero additional cost.

Closing the understanding gap — tooltip and introduction screen

When a colleague first opens Cassi, the chat window is empty. That blank space was an opportunity. I proposed using it for a brief introduction: what Cassi is for, how to get the best results, and — critically — that their feedback directly improves the model's responses for everyone on the platform.

A persistent tooltip would also keep key guidance accessible without interrupting experienced users who already knew what they were doing.

Fixing audience selection — the law of proximity

The audience filter sat at the top of the screen. The query input was at the bottom. That physical distance was enough for users to forget the filter existed. Moving the selector directly above the query box — where the user's attention already was — applied a basic principle: related elements belong close together.

As a fallback, a prompt would appear if a user submitted a query without selecting an audience first.
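
Purely as an illustration, not Cassi's actual implementation, the fallback can be sketched as a small guard at submit time. The element IDs and the TypeScript itself are hypothetical:

  // Hypothetical sketch of the audience-selection guard; not Cassi's real code.
  // Assumes the page exposes a select element, a hidden prompt element and the query form.
  const audienceSelect = document.querySelector<HTMLSelectElement>("#audience-filter");
  const audiencePrompt = document.querySelector<HTMLElement>("#audience-prompt");
  const queryForm = document.querySelector<HTMLFormElement>("#query-form");

  queryForm?.addEventListener("submit", (event) => {
    // No business area selected: block the query and surface the prompt
    // right next to the input, where the user's attention already is.
    if (!audienceSelect?.value) {
      event.preventDefault();
      audiencePrompt?.classList.remove("hidden");
      audienceSelect?.focus();
    }
  });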

Driving engagement — the rebrand competition

Alongside the UX fixes, I ran an internal competition inviting colleagues to design a new logo and suggest a new name. The goal: give colleagues a reason to care about the tool before they'd even experienced the improvements. Dozens of submissions came in. The winning design — by Aishat Arowosegbe — gave the tool a visual identity closer to the customer-facing systems colleagues already knew. Colleague Assist became Cassi.

Test

Rapid validation on the final day

With one day left before handover, I ran group calls with users to pressure-test the design decisions. The reaction was positive — the upgrade felt like a significant improvement to people who'd been using the original daily.

The responsive breakpoint catch

Testing surfaced something I hadn't designed for: many colleagues run Cassi as a narrow side panel so they can use other windows while serving a customer. The layout broke at that width. I introduced a tighter breakpoint — reducing the logo size and tightening the spacing at the bottom — to keep the chat window functional in the reduced view without affecting larger screens. It went to the wire, but it shipped.
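
For illustration only, assuming a hypothetical cassi-compact class that carries the reduced logo and tighter spacing, and an assumed width threshold, the side-panel check might be wired up like this in TypeScript:

  // Hypothetical sketch of the narrow side-panel breakpoint; not Cassi's real code.
  // The 480px threshold and the "cassi-compact" class are assumptions for illustration.
  const narrowPanel = window.matchMedia("(max-width: 480px)");

  function applyCompactLayout(isNarrow: boolean): void {
    // The compact class is assumed to shrink the logo and tighten spacing
    // at the bottom of the chat window without touching larger layouts.
    document.body.classList.toggle("cassi-compact", isNarrow);
  }

  // Apply once on load, then whenever the panel is resized across the breakpoint.
  applyCompactLayout(narrowPanel.matches);
  narrowPanel.addEventListener("change", (event) => applyCompactLayout(event.matches));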

Toast or overlay?

I'd designed two versions of the post-feedback experience: an overlay modal that blocked the screen until dismissed, and a toast notification that appeared briefly at the top of the chat window before fading.

"I'd have much preferred the overlay — but when colleagues explained that a modal blocking their screen mid-customer-call would genuinely get in the way of their work, it was an easy choice."

Users were unanimous on the toast. When you're mid-call with a customer, a dismissal-required modal is a real problem. The toast let colleagues acknowledge the feedback prompt and carry on serving the customer without any friction.

Solution

The final product

Audience filter

Moved directly above the query input field, with a prompt appearing if a query is submitted without a selection. Proximity does the work that instructions and training never could.

Tooltip and introduction screen

A persistent tooltip holds key guidance for new users without cluttering the experience for experienced ones. The empty opening screen now provides a brief introduction to best practice — including the explicit statement that feedback improves Cassi's responses for everyone.

Feedback journey

Thumbs up and down redesigned to read clearly as interactive elements. Selecting one triggers a toast notification — brief, unobtrusive, and surfaced without interrupting a live customer interaction. The toast links to the dedicated Community forum for colleagues who want to leave more detailed context.
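
A minimal sketch of that non-blocking behaviour, with a placeholder forum URL and hypothetical names rather than the production code:

  // Hypothetical sketch of the non-blocking feedback toast; not Cassi's real code.
  const FORUM_URL = "https://example.invalid/cassi-community"; // placeholder, not the real forum address

  function showFeedbackToast(chatWindow: HTMLElement, durationMs = 5000): void {
    const toast = document.createElement("div");
    toast.className = "feedback-toast";
    toast.setAttribute("role", "status"); // announced politely, never steals focus
    toast.innerHTML =
      "Thanks, your feedback helps improve Cassi for everyone. " +
      `<a href="${FORUM_URL}" target="_blank" rel="noopener">Tell us more</a>`;

    chatWindow.prepend(toast); // appears briefly at the top of the chat window
    window.setTimeout(() => toast.remove(), durationMs); // fades away; no dismissal required
  }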

Cassi rebrand

New name, new logo, visual identity aligned with customer-facing systems. The rebrand gave Cassi credibility with colleagues who'd previously dismissed it as an unfinished internal tool.

Results

Delivered in three weeks, at zero cost

  • ~100% Filter adherence in month one
  • +30% Feedback submissions
  • £0 Additional development cost

The changes shipped just before the change freeze and received immediate positive feedback via the new Community forum. In the first month after launch, there were 30 guard-railed responses — and in every single case the user had selected their audience. Before the update, 1-in-3 had not.

The Community forum added a dimension the original feedback mechanism never had: context. Colleagues started explaining why a response was wrong, not just that it was. Experienced colleagues began helping newer ones. And a recurring pattern emerged — many "bad" Cassi responses were actually accurate answers based on outdated i-Exchange articles, giving the content team a new stream of actionable improvement signals.

Combined with improved signalling, the repositioned filter, and the new tooltip and toast, feedback submissions increased 30% in the weeks following launch.

Retrospective

What I'd do differently

  • Being adaptable

    This project reinforced that good outcomes don't require a perfect process. Given more time I'd have run proper research and structured testing — but the constraints were real, and working within them delivered genuine value quickly. Knowing when to adapt rather than wait for ideal conditions is a skill in itself, and one worth practising deliberately.

  • Design for every viewport from the start

    I assumed Cassi was a full-width web product. It wasn't — many colleagues run it as a narrow side panel alongside other windows. Finding that out on the final day before handover nearly caused a problem. Going forward: always understand how a product is actually used before designing, not after. Real usage patterns matter more than standard device breakpoints.

  • Putting my bias aside

    I personally preferred the overlay for the feedback journey — it felt more deliberate and harder to miss. But colleagues explained that a modal blocking their screen mid-customer-call was genuinely disruptive to their work. The right call was obvious. The best design works for the user's actual context, not the designer's preference.