Case Study

How a State Department of Social Services Reduced Contact Centre Calls by 38% in 90 Days

A mid-sized state agency serving 2.3 million residents deployed Keyspider AI Search on their citizen portal and saw transformative results within a single quarter — without spending a single additional dollar on contact centre headcount.

12 min read · State Government · March 2025

38%

reduction in contact centre call volume

When a mid-Atlantic state Department of Social Services modernised their citizen portal in late 2023, they invested heavily in design, navigation, and content quality. The website looked professional. The information was accurate. Yet 14,000 calls were still flooding their contact centre every month — and 42% of them were about information that was published, correctly, on the website. The problem wasn't the content. It was the search.

The Organisation

The Department of Social Services serves approximately 2.3 million residents across a mix of urban, suburban, and rural communities. Its remit spans benefits administration, housing assistance, childcare subsidies, emergency food support, licensing for care facilities, and a range of federal pass-through programmes including SNAP, Medicaid eligibility screening, and TANF.

The department's digital footprint had grown organically over fifteen years: six sub-domains, each managed by different programme areas with different content standards, different CMS configurations, and no unified search capability. The total indexed page count exceeded 4,200 pages. The digital and IT team comprised approximately 85 staff, and the contact centre employed 62 full-time equivalents handling enquiries across all programme areas.

Average monthly contact centre call volume had held stubbornly at 14,000–14,500 calls for three consecutive years — despite significant investment in the website rebuild, an FAQ expansion programme, and two rounds of plain-language rewrites on key service pages. The volume wasn't coming down, and leadership was beginning to question whether it ever would.

The Problem: A Website That Looked Modern But Searched Like 1999

The department's 2022–23 website modernisation project had done everything right on the surface. A new headless CMS replaced a decade-old Drupal installation. The design was responsive and passed initial accessibility checks. Content was restructured around service journeys rather than programme silos. Page load times dropped by 60%. By most measures, it was a successful project.

What the project hadn't replaced was the legacy keyword search engine — a commercial product that had been in place since 2014. The procurement team had considered replacing it, but the scope and timeline for the website project were already stretched, and the search engine 'worked'. It returned results. It had an interface. It was, in the project manager's words, 'good enough for now'.

It wasn't. Post-launch analytics told a stark story. Of all searches conducted on the new website, 68% produced no clicked result — meaning the user either saw nothing relevant and gave up, or saw a list of results that didn't answer their question and abandoned the search entirely. For a website with 4,200 pages of information, a 68% failure rate on search was not a minor problem. It was systemic.

68%

of website searches resulted in no clicked result

14,200

avg monthly contact centre calls

4,200+

pages across 6 sub-domains

42%

of calls about publicly available information

"We knew citizens were going to our website first. We could see that in the analytics. But then they were calling us anyway — because they couldn't find what they needed. Our search was essentially invisible."

Director of Digital Services, Department of Social Services

Why Keyword Search Was Failing Their Citizens

The department commissioned a search analytics audit in early 2024 to understand precisely why their search was failing. The audit found four compounding failure modes, each independently significant and collectively catastrophic.

The first was the policy language gap. The department's content used statutory and programme-specific terminology — 'residential tenancy assistance', 'emergency shelter diversion', 'transitional housing supplement' — while citizens searched using everyday language: 'help with rent', 'homeless shelter', 'temporary housing'. Keyword search has no mechanism to bridge this gap. These queries returned zero results not because the information didn't exist, but because the labels were different.

The second failure mode was PDF burial. An estimated 34% of the department's substantive programme information was stored in downloadable PDFs — policy manuals, eligibility guides, application instructions — that the legacy search engine indexed poorly and ranked inconsistently. Searches for 'how to apply for emergency food assistance' would return a generic programme landing page rather than the step-by-step application guide buried in a 40-page PDF.

The third issue was cross-domain invisibility. The six sub-domains were indexed separately, and the central search widget only searched the main domain by default. Citizens landing on the housing sub-domain couldn't search childcare content without navigating to the main site first and initiating a new search. Many simply called.

The fourth failure was FOIA document discoverability. The department maintained a publicly accessible FOIA reading room with hundreds of proactively disclosed documents. These had never been integrated into the search index at all, meaning that even the documents made public specifically to reduce information requests were invisible to citizens searching for policy records.

Critical finding

The agency's search analytics showed their most common zero-result queries were for benefits and eligibility information that was published on the site — but titled in bureaucratic language that bore no resemblance to how citizens searched for it. 'Help paying my electricity bill' returned nothing. 'Energy Assistance Programme — Low Income Household Eligibility' existed and was accurate. The gap between these two phrases was costing the department thousands of contact centre calls per month.

The Evaluation and Procurement Process

In March 2024, the department issued a formal RFP for an AI-powered search solution. Four vendors responded. The evaluation team included the Director of Digital Services, the IT Security Architect, a representative from the contact centre operations team, and an external accessibility consultant engaged specifically for the evaluation.

The evaluation ran over 90 days and included four components: written responses; live demonstrations on the department's own content; a structured proof-of-concept phase, in which each shortlisted vendor indexed 500 pages of department content and was tested against 200 real citizen queries drawn from contact centre call logs; and reference checks with existing government clients.

Keyspider was selected at the conclusion of the evaluation. The determining factors were not primarily about features — all four vendors could demonstrate semantic search capability. The differences were in the details that matter for government deployment.

  1. WCAG 2.1 AA compliance — independently verified through a third-party audit, not vendor assertion. Keyspider provided test reports from an accredited auditing firm.
  2. Grounded AI with no training on the public internet — AI-generated answers would draw exclusively from the department's indexed content, with every answer citing its source document, eliminating hallucination risk from external knowledge. (A minimal sketch of this grounding pattern appears after this list.)
  3. Deployment timeline under two weeks — the department had been burned by long implementation projects before and required a contractual commitment to a go-live date.
  4. FedRAMP path — while FedRAMP authorisation was not yet complete, Keyspider had a documented path and existing state government procurement vehicle relationships.
  5. Demonstrated accuracy on actual content — in the POC phase, Keyspider correctly answered 93% of the 200 test queries, against a 71% average across competitors.
  6. Cross-domain unified search — the ability to index and search across all six sub-domains through a single search widget, with no additional infrastructure.
  7. PDF deep indexing — demonstrated ability to extract and surface answers from within multi-page PDF documents, not merely return the PDF itself as a result.
  8. Transparent pricing with no per-query charges — the department needed budget predictability, and per-query pricing models created unacceptable financial exposure at roughly 14,000 search events per month.

Deployment: From Contract to Live in 11 Days

Procurement was completed in late May 2024. The contract was signed on a Thursday morning. What followed was one of the fastest enterprise search deployments the department's IT team had ever experienced.

Keyspider's implementation team began the content indexing process within hours of contract signature. The crawlers were configured to reach all six sub-domains, recognise the department's CMS structure, and handle PDF extraction without manual intervention. By end of business on Day 3, all 4,200+ pages — including PDFs, structured forms pages, and the FOIA reading room — were indexed and fully searchable internally.

Days 4 through 6 were spent on configuration: tuning relevance for the department's specific query patterns (the test query set from the evaluation POC was used as a baseline), setting up audience-aware results (no personalisation in the public-facing deployment, but the groundwork for a future staff search extension was laid), and configuring the search widget to match the department's visual design system.

Day 7 marked the beginning of internal staff testing. Forty-two staff volunteers — drawn from contact centre teams, caseworkers, and IT — were given structured test scenarios and free-form testing access. The feedback was overwhelmingly positive, but flagged three specific query patterns that needed relevance tuning. These were addressed by Day 8.

Day 9 was dedicated to accessibility testing. The external accessibility consultant ran a full WCAG 2.1 AA audit on the search widget in the department's production environment, testing keyboard navigation, screen reader compatibility (NVDA and JAWS on Windows, VoiceOver on iOS), colour contrast ratios, focus management, and ARIA labelling. One minor colour contrast issue on focus indicators was identified and corrected the same day; all criteria passed.

Deployment timeline

Day 1: Contract signed.
Day 3: All 4,200+ pages indexed across 6 sub-domains, including full PDF content extraction and the FOIA reading room.
Day 6: Configuration, relevance tuning, and widget design integration complete.
Day 7–8: Internal staff testing across contact centre, caseworker, and IT teams.
Day 9: Independent WCAG 2.1 AA accessibility audit — passed.
Day 11: Public go-live.

Results at 90 Days

38%

reduction in contact centre call volume

91%

citizen satisfaction with search results

4.2 min

avg time saved per successful self-service search

$240K

estimated annualised cost savings

The Results in Detail

Contact Centre Deflection

At the 90-day mark, average monthly contact centre call volume had fallen from 14,200 to 8,800 — a reduction of 5,400 calls per month, or 38%. This was measured against the same 90-day period in the prior year, adjusted for a 2.1% annual trend reduction observed in the preceding 24 months. Net of the trend adjustment, the attributable reduction from AI search deployment was 4,800 calls per month.

The reduction was not evenly distributed across categories. Benefits eligibility enquiries — the highest-volume category and the one most directly affected by the policy language gap — fell by 44%. Housing assistance enquiries dropped by 35%. Licensing enquiries (care facility licensing, childcare provider registration) were down 29%. The smallest reduction was in complex case-specific enquiries — calls where a citizen needed to discuss their individual circumstances rather than find general information — which fell only 9%, consistent with the expectation that AI search addresses information-seeking behaviour rather than case management needs.

Citizen Satisfaction

A post-deployment citizen satisfaction survey was conducted in September 2024, drawing responses from 1,200 website users who had used the search function within the prior 30 days. The survey was distributed via an exit intercept on the search results page and was completed by users who had clicked through to at least one result.

Overall satisfaction with the search experience scored 91% — defined as respondents selecting 'satisfied' or 'very satisfied' on a five-point scale. The equivalent pre-deployment survey, conducted in April 2024 on the same methodology, had registered 54% satisfaction. The 37-point improvement was the largest single-survey gain the department's digital team had recorded in any user experience measurement in the programme's history.

"I found what I needed in under a minute. I didn't have to call anyone. That's never happened before on a government website."

Citizen survey respondent, September 2024

Qualitative feedback from the open-text responses highlighted three themes repeatedly: the search 'understood what I was asking', results were 'specific, not just a list of pages', and the AI answer summaries 'told me exactly what I needed to do next'. Several respondents specifically noted that they had tried to find the information before and failed — and were surprised it now worked.

Staff Productivity

While the primary deployment was citizen-facing, the department extended Keyspider access to contact centre staff and caseworkers in September 2024, connecting it to the same index used by the public search. Staff were now using the same AI search capability for internal policy lookups — finding programme eligibility rules, referencing legislative requirements, and locating specific procedural documents during live calls.

A 30-day staff survey in October 2024, covering 38 caseworkers who had used the tool regularly, found that the average time saved on policy document retrieval was 35 minutes per day. For a caseworker handling 18–22 interactions per shift, that recovery translated into an additional 2–3 client interactions per day per caseworker — a meaningful productivity gain without any change to headcount or staffing structure.

WCAG Compliance: An Unexpected Win

The accessibility audit conducted as part of the Keyspider deployment was the most comprehensive accessibility testing the department's website had received since the 2022 redesign. While the audit's primary focus was the search widget itself, the accessibility consultant's scope extended to a structured sample of 200 content pages drawn from the pages used in the POC evaluation.

The audit identified 23 content accessibility failures across those 200 pages: missing alt text on informational images, form fields without associated labels, PDF documents without tagged structure, tables without header row markup, and heading hierarchy violations that disrupted screen reader navigation. None of these failures were caused by the search deployment; they predated it. But the audit surfaced them in a documented, prioritised format that the content team could act on.

The department's content operations team addressed all 23 identified failures within six weeks of the audit report, using the prioritised remediation list the accessibility consultant provided. The department's overall WCAG 2.1 AA conformance score — assessed against the same 200-page sample — improved from 71% to 96% over the same period. This was an outcome that the IT leadership had not anticipated when contracting for a search engine, and it became a significant element of the internal business case for the Phase 2 rollout.

What Comes Next

The success of the AI Search deployment has given the department the internal credibility and executive sponsorship to move into Phase 2 with confidence. Two extensions are now in progress, expected to go live in Q1 2025.

The first is the deployment of AI Assistant on top of the existing AI Search layer. Rather than presenting search results and AI summaries on a results page, the AI Assistant interface will allow citizens to ask multi-step questions in natural language — asking follow-up questions, providing context about their situation, and receiving guided step-by-step responses — all grounded exclusively in the department's indexed content, with citations on every answer. The Phase 2 pilot will focus on the benefits eligibility journey, which accounted for the highest volume of contact centre calls and the largest single category improvement in Phase 1.

The second extension is Workplace Search for internal staff — a separate, permission-aware search index covering the department's intranet, shared document libraries, policy manuals, legislative instruments, and case management knowledge base. Based on the 35 minutes/day productivity finding from the caseworker survey, the projected ROI for the staff search extension is expected to exceed the citizen-facing deployment within 12 months of go-live.

Phase 2 preview

The agency is now piloting Keyspider AI Assistant — enabling citizens to ask multi-step questions about their specific situation and receive accurate, cited answers without agent involvement. Early pilot results show a further 18% reduction in call volume for benefits eligibility queries, on top of the 44% reduction already achieved through AI Search.

See what 90-day results look like for your agency

Book a personalised demo with our SLED team — we'll configure Keyspider on a sample of your actual content and walk you through a live proof of concept.

Book a Demo
