
i-Exchange Knowledge Base Overhaul

"I joined the team as a user, pitched my own ideas and spent a year overhauling a knowledge base handling 5 million annual searches, improving NPS by 12% and saving 200+ hours a month."

  • 35% Search accuracy improvement
  • 12% NPS uplift
  • 200+ Hours saved every month
  • 5M Annual searches affected

Organisation: Santander UK
Role: UX Designer / Business Analyst
Timeline: 12 months
Platform: ServiceNow

Project summary

What's i-Exchange?
i-Exchange (information exchange) is a process knowledge base used by frontline colleagues at Santander during every customer interaction. It's a big instruction manual for banking processes — 6,000 articles covering everything from fraud procedures to product information.
How did I get involved?
I used i-Exchange daily in a previous fraud prevention role. It was full of bugs and usability problems. Rather than accept it, I pitched my ideas directly to the Product Owner — who offered me a UX Designer / Business Analyst role on the team.
Problem statement
i-Exchange is not effectively providing colleagues with the information they need to carry out their work, preventing customers from receiving the support they need.
What did research show?
Two major pain points: a broken search engine that failed on minor spelling errors, and difficulty finding audience-relevant content across 6,000 articles. Navigation was confusing and the UI was significantly outdated.
What did I do?
Over 12 months I drove the adoption of an AI-powered search engine, introduced saved audience personalisation, overhauled the homepage and article UI, and resolved 34 usability issues and bugs using a dual-track agile approach.
What was the result?
12% NPS increase, 35% improvement in search accuracy, a 50% drop in total searches (fewer repeat attempts), and 200+ hours saved per month across the business.

Background

Let's set the scene

i-Exchange is the central knowledge base for Santander UK's frontline colleagues. During every customer call or branch interaction, colleagues rely on it to look up the correct processes — from handling a disputed payment to opening a new account. It handles around 5 million searches a year.

Before joining the product team, I used i-Exchange daily as a fraud prevention colleague. The system was slow, buggy, and routinely failed me at the worst possible moment — mid-call with a customer waiting. I knew it could be better, so I approached the Product Owner directly, pitched my observations, and was brought in as a UX Designer and Business Analyst.

This gave me an unusual advantage: genuine user empathy built from first-hand frustration, rather than a second-hand account. I wasn't parachuted in — I lived the problem before I solved it.

Empathise

Understanding the problem properly

Despite having first-hand experience, I didn't let my own assumptions drive the work. I teamed up with a UX researcher and built a formal research plan to understand the full picture — covering users, content, constraints, and competing knowledge base platforms.

Research methods

  • Surveys — Quantified colleague experience with the current system at scale
  • Interviews — In-depth conversations to surface pain points beneath survey data
  • Shadowing — Observed colleagues in live interactions to understand real-world usage
  • Workshops — Facilitated sessions to generate user-led ideas for improvement
  • Competitor analysis — Audited other knowledge base platforms for best practices
  • Journey mapping — Mapped the full site to understand where users lost confidence
  • Content audit — Reviewed all 6,000+ articles to understand structure and gaps
  • Heuristic analysis — Evaluated the existing UI against usability principles

What the heuristic analysis found

The heuristic review surfaced problems across every major area of the product. On the homepage: a cluttered navigation bar, no audience filters, a wasted hero section used only for decoration, and a communications list that mixed incident alerts with general updates — making both harder to read.

In search: the engine used exact-match logic with no tolerance for spelling errors, no natural language interpretation, and no autocorrect. A single typo returned zero results with no suggestions. Article metadata in search results was limited to title and ID — not enough to judge relevance before clicking.

Within articles: there was no version history, no feedback mechanism, broken breadcrumb navigation, and content hidden inside collapsed accordions that the search engine couldn't index. Users had to manually expand every section before the browser's find-on-page search could locate anything.

Define

Two pain points, one overarching problem

The research consolidated around a clear problem statement that became the anchor for everything that followed.

"i-Exchange is not effectively providing colleagues with the information they need to carry out their work, preventing customers from receiving the support they need."

Pain point 1 — Trouble searching

The search engine was the most reported problem. It used outdated exact-match logic: one spelling mistake voided the search entirely, returning no results and no suggestions. There was no natural language processing, no autocorrect, and no "genius" result to surface the most likely answer.

45% of colleagues reported problems with the search engine in the survey.
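
To make that failure mode concrete, here is a minimal sketch (illustrative Python only, not the actual i-Exchange or ServiceNow engines) contrasting exact-match lookup with a typo-tolerant alternative:

```python
# Illustrative only: not the real i-Exchange or ServiceNow search engines.
import difflib

ARTICLE_TITLES = [
    "Mortgage application process",
    "Disputed payment procedure",
    "Fraud escalation checklist",
]

def exact_match(query: str) -> list[str]:
    """Old-style search: one typo means zero results and no suggestions."""
    return [t for t in ARTICLE_TITLES if query.lower() in t.lower()]

def fuzzy_match(query: str) -> list[str]:
    """Typo-tolerant search: autocorrects the query to the nearest known word."""
    vocabulary = {w for title in ARTICLE_TITLES for w in title.lower().split()}
    close = difflib.get_close_matches(query.lower(), vocabulary, n=1, cutoff=0.7)
    return exact_match(close[0]) if close else []

print(exact_match("morgage"))  # [] -- the typo voids the search entirely
print(fuzzy_match("morgage"))  # ['Mortgage application process']
```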

Pain point 2 — Finding relevant information

i-Exchange held over 6,000 articles across multiple audience groups — branch, contact centre, specialist teams. The majority of content was irrelevant to any given colleague, but audience filters weren't available from the homepage where most searches started.

Less than 10% of colleagues were applying audience filters. That wasn't laziness — it was a design failure. The filters were buried, inconsistent, and required manual reapplication after every search. As I dug deeper, it became clear the navigation problem was really a personalisation problem: if the system knew who you were and what team you worked in, it could filter content automatically.
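
A sketch of that idea, using hypothetical names and fields rather than the real i-Exchange data model:

```python
# Hypothetical data model: audience tags per article, a saved audience per user.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    audiences: set[str]

ARTICLES = [
    Article("Disputed payment procedure", {"contact_centre"}),
    Article("Branch cash handling procedure", {"branch"}),
    Article("Fraud escalation checklist", {"fraud", "contact_centre"}),
]

def search(query: str, saved_audience: str | None = None) -> list[str]:
    """Filter results by the user's saved audience automatically,
    instead of asking them to reapply a filter on every search."""
    hits = [a for a in ARTICLES if query.lower() in a.title.lower()]
    if saved_audience:
        hits = [a for a in hits if saved_audience in a.audiences]
    return [a.title for a in hits]

print(search("procedure"))                    # everything, relevant or not
print(search("procedure", "contact_centre"))  # only this colleague's content
```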

Ideate

Finding the right solutions

Solving search — AI already in the building

The search problem couldn't be fixed by rewriting content or tweaking metadata. I dived into ServiceNow's technical documentation and discovered that Santander was already paying for an AI-powered search engine — it just hadn't been deployed.

The AI search offered machine learning, natural language interpretation, autocorrect, genius result suggestions, and full analytics reporting on failed searches and knowledge gaps. Exactly what was needed.

The catch: any change to how ServiceNow works globally required a vote from all Santander countries using the platform. I built a business case, secured alignment from IT, HR, and Customer Interactions teams, and submitted the ideation request to the international forum — offering to make the UK the pilot country. It passed the vote.

Solving relevance — workshops and personalisation

For the navigation and relevance problem, I ran workshops with contact centre and branch colleagues. I started each session with a simple exercise — how do you make toast? — to warm people up to the idea that we all do things differently, before asking what frustrated them about finding information on i-Exchange.

Affinity mapping the responses revealed three themes: personalisation, system issues, and content quality. With system issues being addressed through the search upgrade and a parallel content project already in flight, I focused the design work on personalisation.

When I asked "if you could change i-Exchange however you wanted, what would you do to make it more personal?" every participant independently described some form of saved audience: their business area's content surfaced by default, without having to apply filters manually every single time.

That became the design direction. I started wireframing a new homepage built around saved audience data, personalised communications, and a "My processes" section pulling from each user's most-viewed articles.

Test

Validating before building

Search testing at scale

Before committing to the AI search deployment, I needed hard evidence that it would genuinely outperform the existing engine — not just in theory, but on the real queries Santander colleagues were typing.

I set up a team of business analysts to run 4,000 manual searches in the test environment. Each query was tested on both search engines, with and without audience filters applied. Results were graded as promoter (top 3), neutral (positions 4–5), or detractor (position 6 and beyond) — the same NPS framing used to measure the live system.
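
The grading rule translates into an NPS-style score roughly like this (a minimal Python sketch; the result positions below are invented for illustration):

```python
# Grades where the correct article appeared in the results (None = not found).
def grade(position: int | None) -> str:
    if position is not None and position <= 3:
        return "promoter"    # top 3
    if position is not None and position <= 5:
        return "neutral"     # positions 4-5
    return "detractor"       # position 6 and beyond, or no result at all

def nps(positions: list[int | None]) -> float:
    """NPS-style score: % promoters minus % detractors."""
    grades = [grade(p) for p in positions]
    promoters = grades.count("promoter") / len(grades)
    detractors = grades.count("detractor") / len(grades)
    return round((promoters - detractors) * 100, 1)

# Same queries run on both engines; these positions are made up.
print(nps([1, 2, 4, None, 6, 1]))  # old exact-match engine
print(nps([1, 1, 2, 3, 5, 1]))     # AI engine
```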

AI search improved NPS by 5% on day one, without any machine learning optimisation. Applying an audience filter cut failed searches by a third and delivered a top-3 result 66–70% of the time — regardless of which search engine was used.

The data confirmed two things: the AI engine was meaningfully better, and the audience filter was the single highest-impact change available — even before any machine learning kicked in.

User testing

Moderated testing with colleagues validated the homepage wireframes and helped me refine the tab structure and footer navigation. The communications and incidents team confirmed the new separation of message types worked, and gave approval for a customisable banner space. Showing colleagues how their saved audience would automatically filter their entire experience — searches, communications, processes — landed well. They were enthusiastic.

Testing was more constrained than ideal: a required browser plug-in couldn't be approved by IT, which ruled out unmoderated remote testing. I persevered with in-person moderated sessions, but this reduced the sample size and the confidence I could take into production. More on that in the retrospective.

Solution

The final product

A key constraint shaped the whole delivery: colleagues were apprehensive about change. Many had used i-Exchange for years and feared a disruptive overhaul would make their jobs harder during the transition. The design had to feel immediately familiar, even when the functionality underneath was completely new.

Home

  • Saved audience and AI search — Audience selector surfaced on the homepage for the first time, with a temporary prompt to drive adoption at launch. All searches, communications, and content filtered automatically once set.
  • My processes — A personalised section showing each colleague's favourite and most-viewed articles, eliminating the need to search for regularly-used content.
  • Communications and incidents separated — Previously mixed in a single list; now colour-coded and icon-differentiated. Saved audiences apply to both lists automatically.
  • Clickable banner space — The decorative hero section replaced with a flexible, customisable banner for internal initiatives and announcements.
  • Simplified navigation — Cluttered "kitchen drawer" menu replaced with a clear navbar and a new footer containing frequently used system links.

Search

  • AI-powered engine — Machine learning, natural language interpretation, and full analytics reporting on top queries, knowledge gaps, and failed searches.
  • Autocorrect and genius suggestions — Typos no longer void a search. The engine now suggests corrections and surfaces the most likely article directly.
  • Audience filter in search — Saved audience persists into the search page and can be toggled off to see cross-audience results when needed.
  • Richer result cards — View count, share link, and open-in-new-tab now available directly from the search results list.

Articles

  • Version history — Full history for every article, enabling QA teams to review interactions against the version of the process that was live at that time.
  • Feedback mechanism — Colleagues can now flag articles that need improving, giving knowledge managers a direct signal from the people using the content daily.
  • Digital adoption banners — Processes that can be self-served digitally are flagged with a banner, supporting the business goal of reducing assisted channel demand.
  • Fixed article features — Copy to clipboard, link sharing, expand-all accordions, and add-to-favourites all implemented consistently across the platform.

Implementation

Dual-track delivery

I owned the full development lifecycle. With the AI search upgrade requiring nearly a year of back-end work, I adopted a dual-track agile approach to keep the product moving in the meantime.

Long track — AI search. The search upgrade ran in the background for close to a year, complicated by bugs discovered during development on the ServiceNow platform. I kept this from blocking everything else by shipping a minimum viable product with an interim UI on launch day, which I iterated on as the back-end work completed. In November 2024 the new AI search engine went live on i-Exchange.

Short track — UI and personalisation. Saved audiences, the new homepage, search page, and article improvements were broken into small sprints with deployments every two weeks. Saved audiences shipped incrementally: first the data layer, then the selection UI, then the display, then the automatic filtering. By keeping releases small and frequent, colleagues experienced continuous improvement rather than one disruptive big bang.

Results

What changed

  • 35% Search accuracy improvement
  • 12% NPS uplift in one year
  • 200+ Hours saved every month

Once the AI search engine deployed, total search queries fell rapidly — a clear signal that colleagues were finding what they needed on the first attempt rather than repeating searches. Spellcheck, genius suggestions, and machine learning were all working. The analytics dashboard surfaced emerging search trends and knowledge gaps that the content team could act on directly.
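
As a rough illustration of the kind of reporting involved (sample data, not ServiceNow's actual analytics API), failed queries can be aggregated to point the content team at gaps:

```python
# Sample search log of (query, returned_any_results) pairs. Data is invented.
from collections import Counter

SEARCH_LOG = [
    ("chaps payment", True),
    ("power of attorney", False),
    ("power of attorney", False),
    ("bereavement account", False),
    ("chaps payment", True),
]

failed = Counter(q for q, ok in SEARCH_LOG if not ok)
for query, count in failed.most_common(3):
    print(f"{count}x  {query}")  # top failed queries = candidate knowledge gaps
```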

Saved audiences reached 50% adoption in the first month after launch — without any mandatory training — and tripled the proportion of filtered searches almost immediately. Santander estimates the saved audience feature alone saves at least 120 hours every month across the business, with the true benefit likely much higher when customer time, error reduction, and cognitive load are factored in.

Over the 12-month period, i-Exchange NPS increased by 11.7% — turning around a long-declining trend and bringing user sentiment back on track.

Retrospective

What I'd do differently

  • Workshop facilitation

    My first workshop ran off track — I spent the opening half explaining context instead of running the session. That invalidated the findings, so I repeated the workshop, this time sending a written handout in advance to set expectations. In future I'll invest in formal facilitation training to keep sessions focused and productive, regardless of how well I know the topic.

  • Unmoderated testing

    A required browser plug-in couldn't get IT approval, which meant I was limited to in-person moderated testing with a smaller sample than I'd have liked. I went into production with less confidence than was ideal. Going forward I'll investigate testing tool options and IT approval processes much earlier — before the design is ready to test, not after.

  • Figma and design systems

    I wasted significant time on formatting and developer handoffs in Figma before I really understood auto-layout and component structure. Mid-project I was onboarded onto Santander's Flame design system, which accelerated things considerably. The lesson: invest in Figma skills before the project, not during it, and never become dependent on a design system without understanding the principles beneath it.