We asked government workers about AI

When Austin, Josh, and I started Civic Roundtable in 2022, we never thought we’d be an AI company.
We had each worked in and around government for about a decade and had seen that public servants doing critical work were underserved by technology. We saw opportunities to make it better.
In the public sector, the stakes are high: If an agency’s technology fails, real people don’t get the health services they need, disaster recovery efforts get delayed, and communities lose access to services they rely on. But we also know the right technology can empower public servants to have a bigger impact.
Fast forward three years, and AI is everywhere. Chatbots abound and AI widgets appear in new applications daily. But rather than integrating AI for the sake of AI, we’d like to suggest a different path: Start listening.
We’ve spent the last year on the road asking public servants, “If technology could free up an hour of your time every day, how would your work change?” We’ve toured state agencies in California and Texas and met with county officials from Oregon to New Jersey. We’ve run brainstorming sessions with public health officials, spent hours learning alongside election administrators, and strategized with emergency management professionals.
The result? Some clear ideas about how AI might actually help the public servants behind the healthy functioning of our communities.
So for anyone eager to put AI tools to work in support of the public sector, please steal these three lessons, free of charge.
1. AI must address an actual need
Starting with AI capabilities puts the cart before the horse. Public servants know their own needs. Some processes that look inefficient from the outside are in fact intentional and legally mandated. Similarly, certain government functions require human judgment for ethical or democratic reasons.
This is a good thing.
AI built for public servants should reflect the reality that specific agency workflows differ based on their department’s function. For example, officials administering elections have different duties than those implementing programs to provide relief from extreme heat, and these require different technology functionality.
Still, useful AI need not limit itself to specific departmental use cases. In our conversations with public servants, we heard again and again that they see clear value in finding information faster. Another area where AI can empower public servants is for repetitive, format-driven tasks, like budget analyses, stakeholder mapping, and memo drafting.
A granular understanding of this work, and empathy with the public servants who know it best, makes the difference between another “AI tool” and technology that meets public servants where they are and helps them do more.
2. AI must be reliably accurate
One federal official confessed to us, “I’ve heard that AI is very good at lying. We can’t have that.” He’s right. AI deployed in government agencies needs to be reliably accurate.
Even outside of government, people worry about hallucinations, like when Google’s AI confidently told a user to use glue to keep the cheese on pizza. When the stakes are higher—public health, emergency management, homelessness response—irrelevant or inaccurate information is a deal breaker.
One technique to minimize errors, particularly well-suited to government tools, is intentionally limiting the content underpinning AI responses. This means restricting AI tools to exclusively reference resources that are vetted and approved by the government officials themselves. A second approach to enhance trustworthiness is ensuring that responses come with cited sources. When it’s clear where information is coming from, and easy for public servants to validate those sources, government officials can stand on the firm ground of actual policies, documentation, and their own data without worrying about “trusting what AI says.”
It’s okay if a purpose-built government AI tool can’t tell you what Taylor Swift’s most-streamed single is but can provide state agencies exceptionally precise answers about the resources, points of contact, and proper processes they need to execute their mission.
3. AI must be easy to deploy
Deploying AI within a government agency can be a complex effort requiring significant work and IT expertise. AI tools that ground their responses in an organization’s own data—a technique known as retrieval-augmented generation, or RAG—are a powerful way to increase the accuracy and relevance of AI outputs. But implementing RAG demands substantial technical competency and careful coordination with information technology teams.
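To make the RAG idea concrete, here is a minimal sketch combining the two accuracy techniques described above: answers drawn only from a vetted corpus, with the source document cited. The documents, filenames, and word-overlap scoring are hypothetical illustrations; a real deployment would use an embedding model and a language model rather than keyword matching.

```python
# Toy RAG sketch: retrieve from a vetted corpus only, and cite the source.
# Documents and scoring are illustrative stand-ins, not a real system.

# Only agency-approved documents are searchable (hypothetical examples).
VETTED_DOCS = {
    "heat-relief-policy.pdf": (
        "Cooling centers open when the heat index exceeds 95F. "
        "Contact the Office of Emergency Management to activate them."
    ),
    "election-audit-sop.pdf": (
        "Post-election audits must begin within 10 days of certification "
        "and follow the hand-count procedure."
    ),
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank vetted documents by simple word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        VETTED_DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citation(query: str) -> str:
    """Answer using only vetted content, and name the source document."""
    results = retrieve(query)
    if not results:
        return "No vetted source found."
    source, text = results[0]
    # Quoting vetted content and citing it lets officials verify the
    # answer instead of having to trust what the AI says.
    return f"{text} [source: {source}]"

print(answer_with_citation("When do cooling centers open for extreme heat?"))
```

The key design choice is that the model never answers from open-ended knowledge: if nothing in the approved corpus matches, the tool says so rather than guessing.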
Most government agencies, especially those at the state, county, and local level, don’t have teams of developers waiting to integrate a custom API or configure a new platform. They need solutions that work from day one with minimal implementation overhead.
Government workers are sophisticated users with complex needs, but they don’t have the luxury of a complex implementation. Technology that serves them well is ready to work immediately, with sensible defaults and clear documentation.
Let public servants point the way
These practical requirements underscore something deeper we learned on the road: You can’t build for government if you don’t listen to government workers and understand that they’re working toward a mission, trying to make a difference, and striving to have an impact. Sometimes this looks like visible acts of heroism, such as responding to natural disasters. Sometimes this work is largely invisible, like ensuring adequate distribution and funding for medical care and services in areas most in need.
This should inspire technologists. It’s profoundly rewarding to build tools that help people navigate complex systems to get the help they need, or that make it easier for dedicated public servants to do their jobs well. Every efficiency gain translates to faster disaster response, better benefit processing, or improved community services.
The public sector deserves technology that’s built for the mission, not retrofitted from consumer applications. When we take the time to understand that mission—and the real requirements that come with it—we can build tools that don’t just work, but actually make government work better for everyone.
Madeleine Smith is cofounder and CEO of Civic Roundtable.