The gap between what citizens expect from government digital services and what those services actually deliver has never been wider — or more expensive. For the third consecutive year, citizen satisfaction with government websites has declined, while the operational cost of maintaining under-resourced contact centres to handle avoidable enquiries continues to rise. This report synthesises findings from research conducted with 1,200 government digital leaders across state agencies, local government, school districts, and higher education institutions, alongside a parallel survey of 8,000 US residents about their experience with government digital services.
Research methodology
Government digital leader survey: 1,200 respondents across state agencies (n=340), local government (n=380), K-12 school districts (n=280), and higher education (n=200). Fielded October–November 2024. Citizen survey: 8,000 US adults who had attempted to use a government website in the previous 90 days. Fielded November–December 2024. Margin of error ±2.1% at 95% confidence level.
Key Findings at a Glance
- 73% of citizens expect to fully self-serve on government websites for common transactions — but only 31% report consistently being able to do so.
- 42% of all contact centre calls to government agencies are for information that is already published on the agency's website but cannot be found through search.
- The average government agency loses $2.1M annually to avoidable contact centre calls attributable to poor search and self-service failure.
- AI search adoption has reached an inflection point: 34% of government digital leaders report having deployed or are actively piloting AI-powered search — up from 12% in 2023.
- Among early AI search adopters, 78% report measurable reductions in contact centre volume within 90 days of deployment.
- Concerns about WCAG 2.1 AA compliance remain the most common barrier to AI search deployment, cited by 61% of organisations that have not yet deployed.
- 68% of government website searches result in no clicked result.
Section 1: The Self-Service Expectation Gap
The most striking finding in this year's research is the sheer size of the gap between what citizens expect and what government digital services deliver. 73% of respondents said they expect to complete common government transactions — finding information about services, checking eligibility, understanding a process — entirely online, without needing to call or visit in person. That expectation has risen 14 percentage points since 2021, driven largely by the consumer technology experience: search that understands questions, chat interfaces that give direct answers, and platforms that serve results within milliseconds.
The reality on government websites looks very different. Only 31% of citizens report that they can consistently self-serve when they visit a government website. The most common failure point — cited by 64% of citizens who reported an unsuccessful online experience — was search: they knew the information existed on the website, searched for it, and either found nothing relevant or found a list of results that did not include the answer they needed.
"I spent 25 minutes searching the county website for information about my property tax assessment. I found three pages that mentioned property tax and none of them had what I needed. I ended up calling. The person on the phone told me in 30 seconds. That information was definitely on the website somewhere."
— Citizen survey respondent, suburban county, Mid-Atlantic region
The expectation gap has direct operational consequences. Citizens who fail to self-serve don't simply give up — 71% report that they then contact the agency by phone or in person. The digital channel has failed to deflect the workload; it has merely delayed it, often by a few minutes of frustrating search before the phone call. In this pattern, the digital channel is creating friction without creating efficiency.
Section 2: The Contact Centre Burden — Quantifying the Cost of Poor Search
The relationship between digital search quality and contact centre volume is well established in the research literature and widely understood by government digital leaders. What is less well understood is the magnitude of the financial cost — and how conservative estimates of that cost tend to be.
The $22 Call
The most commonly cited figure for the average cost of a government contact centre call is $22, derived from a combination of fully-loaded staff costs, infrastructure, and overhead. For a mid-sized state agency handling 12,000 calls per month, that is $264,000 per month, or $3.2M annually. If 42% of those calls — the proportion attributable to information available on the website — are avoidable with effective digital search, the addressable savings are $1.3M annually, from a single agency.
In practice, the $22 figure is a conservative midpoint. Specialised contact centres for legal, benefits, or healthcare enquiries have fully-loaded costs of $35–55 per call. And it does not account for the opportunity cost: every staff minute spent on an avoidable call is a minute not available for complex enquiries that genuinely require human judgment.
The Avoidable Call Proportion
Our research found that 42% of government contact centre calls concern information published on the agency's website. This figure aligns closely with research by Gartner (2023, 40%) and the UK Government Digital Service (2024, 38%). The key driver is not citizens' preference for phone — only 27% of respondents said they prefer calling a government agency — but rather the failure of digital search to surface the relevant information.
Avoidable calls — what they cost your agency
Use this formula to estimate your agency's annual cost of avoidable calls: (Monthly call volume) × 0.42 × $22 × 12 = Annual avoidable call cost. For a 10,000-call/month agency: 10,000 × 0.42 × $22 × 12 = $1,108,800 per year. This figure represents the addressable savings from effective AI search deployment — not the total savings, which also includes staff time, FOIA processing, and compliance efficiency.
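To make the formula concrete, the following is a minimal Python sketch of the same calculation. The 42% avoidable-call share and the $22 cost per call are the benchmarks used in this report; both are parameters you should replace with your own agency's figures.

```python
def avoidable_call_cost(monthly_calls: int,
                        avoidable_share: float = 0.42,
                        cost_per_call: float = 22.00) -> float:
    """Estimate the annual cost of avoidable contact centre calls.

    Defaults are this report's benchmarks: 42% of calls concern information
    already published online, at a fully loaded cost of $22 per call.
    """
    return monthly_calls * avoidable_share * cost_per_call * 12


# The worked example above: a 10,000-call/month agency.
print(f"${avoidable_call_cost(10_000):,.0f} per year")  # $1,108,800 per year
```

Running the same sketch for a 12,000-call/month agency reproduces the roughly $1.3M addressable-savings figure from the $22 Call example above.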
The Equity Dimension
The contact centre burden is not distributed evenly across the population. Our research found that citizens without post-secondary education are 2.3 times more likely to call a government agency instead of self-serving online, compared to citizens with a university degree. Citizens over 65 are 3.1 times more likely to call. Citizens for whom English is a second language are 4.2 times more likely to call — in part because government websites are written in formal, bureaucratic English that search engines interpret literally, making natural-language queries from non-native speakers particularly likely to return zero results.
The implication is that search failure is not equally distributed: it disproportionately affects the citizens who most need government services and who have the fewest alternative sources of information and support. Improving government search is, in this sense, an equity intervention as much as a cost reduction exercise.
Section 3: AI Adoption in SLED — Where We Are
AI search adoption in the SLED sector has reached an inflection point. In our 2023 research, 12% of government digital leaders reported having deployed or piloted AI-powered search. In 2024, that figure has risen to 34% — a near-tripling in 12 months that reflects both the maturation of the technology and the emergence of platforms purpose-built for government compliance requirements.
Adoption by Sector
Adoption rates vary significantly across the SLED sub-sectors. State government agencies show the highest adoption rate (41%), driven in part by executive-level mandates in several states to improve digital self-service outcomes. Higher education institutions follow at 38% — partly driven by student satisfaction metrics and competitive recruitment considerations. Local government adoption sits at 28%, reflecting more constrained budgets and IT capacity. K-12 school districts show the lowest adoption at 22%, with FERPA compliance concerns and the difficulty of demonstrating ROI to school boards as the most commonly cited barriers.
| SLED sub-sector | AI search adoption rate |
|---|---|
| State government agencies | 41% |
| Higher education institutions | 38% |
| Local government | 28% |
| K-12 school districts | 22% |
The early adopter advantage
Organisations that deployed AI search in 2023 now have 12–18 months of search analytics data — revealing precisely what their users are searching for, what content is failing them, and where their biggest self-service opportunities lie. This data asset compounds over time. Organisations that delay deployment are not just delaying the efficiency gains; they are delaying the accumulation of behavioural data that drives content strategy improvement.
Barriers to Adoption
Among organisations that have not yet deployed AI search, we asked about the primary barriers. WCAG compliance concerns were the most commonly cited barrier (61%), followed by data sovereignty concerns (54%), procurement complexity (47%), and budget constraints (43%). Interestingly, 'concern that AI will give wrong answers' — the hallucination concern — was cited by only 31% of respondents, suggesting that the risk of grounded AI is better understood in the government sector than in the consumer market.
Section 4: What Successful Deployments Have in Common
Among the 34% of organisations that have deployed AI search, we identified five characteristics that distinguish high-performing deployments from those with modest or disappointing results.
1. They started with a defined problem, not a technology
The organisations seeing the strongest results began their AI search deployment with a specific operational problem: reducing contact centre volume, cutting FOIA processing time, improving staff policy lookup. They did not begin with 'we should deploy AI' and work backwards. Starting with a specific, measurable problem allowed them to instrument the deployment properly, attribute results accurately, and make the business case for expansion.
2. They prioritised deployment speed over perfect configuration
Organisations that spent months in pre-deployment configuration and testing consistently saw lower initial impact than those that deployed quickly to a subset of their content and iterated. The organisations with the strongest results typically went live in under 2 weeks and used search analytics from the first weeks to guide relevance improvements. The platform learns from real user behaviour; a delayed deployment is a delayed feedback loop.
3. They invested in content quality in parallel with technology deployment
AI search surfaces problems in your content more clearly than keyword search ever did. The top-performing deployments used their zero-results and zero-click analytics — queries that returned no results, or results that no one clicked — as a direct content brief. Within 90 days, they had identified and addressed the most common content gaps, improving both the raw content quality and the search experience simultaneously.
4. They communicated the change to citizens
Organisations that announced the deployment of a new search experience — with clear instructions on how to use it and what it could do — saw 23% higher engagement with AI-generated answers than those that deployed silently. Many users of government websites are unfamiliar with AI-generated answers and may be sceptical of them. A brief 'we've upgraded our search — you can now ask plain-language questions and get direct answers' message on the homepage dramatically improved user willingness to engage.
5. They had executive sponsorship and a named owner
Deployments with a named executive sponsor (typically the CTO, CDO, or equivalent) and a dedicated internal owner — responsible for reviewing analytics, managing content improvements, and communicating with the vendor — performed consistently better than those treated as a set-and-forget IT deployment. AI search is a living system that improves with attention; organisations that treat it as a one-off infrastructure investment rather than an ongoing programme see diminishing returns after the initial deployment impact.
Section 5: The ROI of Getting Search Right
The financial case for AI search investment is strong across all SLED sub-sectors, but the specific return drivers vary by organisation type. The following framework models three primary ROI scenarios; most organisations will benefit from some combination of all three.
Contact Centre Deflection (All sectors)
The most universally applicable ROI driver. Using conservative industry benchmarks: a 35% reduction in overall call volume, at an average cost of $22 per call. For a 10,000 call/month agency: annual saving of $924,000. For a 20,000 call/month agency: annual saving of $1.85M. Return on AI search investment typically occurs within 6–10 weeks for organisations in this volume range.
FOIA and Open Records Processing (State & Local Government)
AI-powered document discovery reduces the manual search component of FOIA responses by 60–80%. For an agency processing 200 requests/month, with roughly 4 hours of manual search time per request at a $45/hr blended legal/admin rate, the addressable search cost is about $432,000 a year, most of which is recoverable. Critically, faster discovery also sharply reduces statutory non-compliance risk; in jurisdictions with mandatory financial penalties for late FOIA responses, the avoided-penalty value can exceed the direct processing saving.
Staff Productivity (All sectors, especially large agencies)
For large agencies with complex internal policy environments, the staff productivity return from workplace search rivals the citizen-facing return. In our research, staff at agencies with AI-powered workplace search reported saving an average of 38 minutes per day on information retrieval tasks. For a 500-person agency at a $40/hr blended rate, that is an annual productivity recovery of roughly $3.17M (assuming 250 working days a year) — though this figure represents potential rather than guaranteed cash savings, as it depends on redeployment of recovered time to higher-value work.
| Organisation Type | Primary ROI Driver | Typical 12-Month Return | Payback Period |
|---|---|---|---|
| Mid-sized State Agency (10,000+ monthly calls) | Contact centre deflection | $900K–$1.5M | 6–10 weeks |
| Large City Government (FOIA-intensive) | FOIA processing efficiency | $300K–$600K | 8–14 weeks |
| K-12 School District (14+ schools) | Parent support deflection + staff productivity | $200K–$500K | 10–16 weeks |
| State University (15,000+ students) | Student enquiry deflection + staff productivity | $500K–$1.2M | 6–12 weeks |
| Large State Agency (500+ staff) | Staff productivity + FOIA efficiency | $1.5M–$4M | 4–8 weeks |
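The three scenarios above share the same simple structure: a volume, a unit cost, and a reduction assumption. The following is a minimal Python sketch of that model using this report's benchmark figures as defaults. The 70% FOIA search reduction (the midpoint of the 60–80% range) and the 250 working days per year are assumptions made here so the sketch lines up with the figures quoted above; every parameter should be replaced with your own agency's data.

```python
def deflection_savings(monthly_calls: int, reduction: float = 0.35,
                       cost_per_call: float = 22.00) -> float:
    """Contact centre deflection: annual saving from reduced call volume."""
    return monthly_calls * reduction * cost_per_call * 12


def foia_savings(monthly_requests: int, search_hours_per_request: float = 4.0,
                 hourly_rate: float = 45.00, search_reduction: float = 0.70) -> float:
    """FOIA processing: annual saving from cutting manual document search time."""
    return monthly_requests * search_hours_per_request * hourly_rate * search_reduction * 12


def productivity_recovery(staff: int, minutes_saved_per_day: float = 38,
                          hourly_rate: float = 40.00, working_days: int = 250) -> float:
    """Staff productivity: annual value of recovered time (potential, not cash)."""
    return staff * (minutes_saved_per_day / 60) * hourly_rate * working_days


print(f"Deflection, 10,000 calls/month:  ${deflection_savings(10_000):,.0f}")   # $924,000
print(f"FOIA, 200 requests/month:        ${foia_savings(200):,.0f}")            # $302,400
print(f"Productivity, 500 staff:         ${productivity_recovery(500):,.0f}")   # $3,166,667
```

At the 70% midpoint, the FOIA saving lands at roughly $302,000 a year out of the approximately $432,000 of addressable search cost described above.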
Looking Ahead: The Next 12 Months
Our research points to three developments that will shape the SLED digital experience landscape in 2025–2026. First, the April 2026 ADA Title II WCAG 2.1 AA compliance deadline will drive significant technology procurement activity in 2025 — organisations that haven't yet addressed digital accessibility will be forced to act. AI search deployments that are not WCAG-compliant will be replaced, creating both urgency and opportunity.
Second, the integration of AI chat alongside AI search — a two-layer architecture where AI search finds the right content and AI chat synthesises it into a conversational response — is moving from pilot to mainstream. 47% of AI search adopters in our survey said they plan to add a conversational AI layer in 2025. The organisations moving fastest are those that laid a strong AI search foundation first: a grounded, accurate search layer is the prerequisite for a safe and reliable chat layer.
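To make the two-layer pattern concrete, here is a deliberately simplified Python sketch. The keyword-overlap retrieval stands in for a real semantic search index, and the function and field names are illustrative rather than any particular product's API; the point it demonstrates is that the chat layer is only allowed to synthesise from content the search layer has already retrieved, rather than answering from a model's general knowledge.

```python
from dataclasses import dataclass


@dataclass
class Page:
    title: str
    url: str
    text: str


def retrieve(query: str, pages: list[Page], top_k: int = 3) -> list[Page]:
    """Layer 1 (AI search): rank pages by term overlap with the query.
    A real deployment would use the search platform's semantic index instead."""
    terms = set(query.lower().split())
    ranked = sorted(pages,
                    key=lambda p: len(terms & set(p.text.lower().split())),
                    reverse=True)
    return ranked[:top_k]


def grounded_prompt(query: str, sources: list[Page]) -> str:
    """Layer 2 (AI chat): build a prompt that restricts the model to retrieved content.
    This string is what would be sent to the conversational model."""
    context = "\n\n".join(f"{p.title} ({p.url}):\n{p.text}" for p in sources)
    return ("Answer using only the sources below. If they do not contain the answer, "
            "say so and point the user to the closest relevant page.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")


pages = [Page("Property tax assessments", "/tax/assessments",
              "How assessments are calculated and how to appeal an assessment.")]
print(grounded_prompt("How do I appeal my property tax assessment?",
                      retrieve("how do I appeal my property tax assessment", pages)))
```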
Third, search analytics are becoming a core input to content strategy. The most sophisticated government digital teams are now treating their zero-results data as a live content brief — identifying what citizens are searching for, what terminology they use, and where content is missing or hard to find. This shift, from treating content and search as separate disciplines to treating them as a single feedback loop, is producing measurably better digital experiences and is likely to accelerate as AI search platforms make analytics more accessible.
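As a minimal sketch of that workflow, assuming a simple analytics export of query text, result count, and click outcome (the field names and format here are illustrative; use whatever your search platform provides), the content brief is simply the list of the most frequent queries that either returned nothing or returned results nobody clicked:

```python
from collections import Counter

# Illustrative analytics export: (query, number of results returned, whether any result was clicked).
search_log = [
    ("property tax assessment appeal", 0, False),
    ("appeal my property assessment", 0, False),
    ("bulk trash pickup dates", 4, True),
    ("property tax assessment appeal", 0, False),
]

# Queries that returned nothing, or results nobody clicked, form the content backlog.
unmet = Counter(query for query, results, clicked in search_log
                if results == 0 or not clicked)

for query, count in unmet.most_common(20):
    print(f"{count:>4}  {query}")
```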
Explore the data further
- How a State Agency Reduced Contact Centre Calls by 38%: real-world deployment results in a mid-sized state government context.
- The Government Digital Leader's Playbook for AI Search: evaluation, procurement, and deployment guidance for SLED digital leaders.
- AI Chat Without the Risk: a framework for safe conversational AI deployment in the public sector.
- AI Search — Product Overview: how Keyspider AI Search works for government and SLED organisations.
- Your Search Analytics Are Your Best Content Strategy Tool: how to use zero-results data to drive content improvement.
See where your agency sits on the adoption curve
Our SLED team can benchmark your current digital self-service performance against the organisations in this research and model your specific ROI opportunity.
Request a Benchmarking Session