 
  Inspire AI: Transforming RVA Through Technology and Automation
Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.
Ep 49 - Virginia’s Agentic AI Pilot: Clarity, Risks, and the Future of Lawmaking
A state is letting an algorithm read the rulebook—and then asking people to decide what to change. We head to Virginia to unpack the “agentic AI” pilot that scans statutes, regulations, and guidance to flag contradictions, redundancies, and unclear language, promising cleaner code for public life. The vision is compelling: fewer dead ends for citizens and small businesses, faster updates as laws evolve, and a maintainable regulatory corpus that doesn’t require a crisis to fix.
We walk through how the tool triages massive text, the kinds of suggestions it can generate, and where human judgment stays firmly in charge. Alongside the upside, we get specific about risk: explainability in a legal domain, bias that could over-target certain protections, and the danger of treating speed as a substitute for process. Accountability is the throughline—who signs off, who audits the outputs, and how courts and legislatures can see a transparent trail from machine suggestion to human decision.
Beyond the mechanics, we dig into the politics and the guardrails that make innovation legitimate: public logs, before–after drafts, independent audits, risk tiers for sensitive domains, and rollback plans when changes misfire. We also map the bigger picture: states adopting AI for internal governance, the potential for fragmentation if approaches diverge, and the likely federal response. Most importantly, we share how listeners can engage—request transparency from representatives, show up for comment windows, and support civil society groups that stress-test these systems. If AI is going to touch regulation, it must do so in the open, with people in the loop and trust as the benchmark.
Enjoy the conversation, then add your voice. Subscribe, share this with someone who follows civic tech, and leave a review with the one safeguard you think every public-sector AI should have.
Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
Welcome back to Inspire AI, the podcast where we explore how artificial intelligence is reshaping the systems we live and work within, from the boardroom to the statehouse. Today we're heading to Virginia, where AI isn't just powering startups, it's rewriting how government regulates itself.

Imagine a state government office scanning thousands of pages of regulations, laws, guidance documents, and administrative rules, not by humans leafing through binders, but with an AI tool. It flags contradictions, spots redundancies, and suggests clearer language. It's happening now, right here in Virginia. In July 2025, Governor Glenn Youngkin signed Executive Order 51, launching what's being called the first-in-the-nation agentic AI pilot for regulatory review.

Today's episode dives deep into that initiative: what it promises, the concerns it raises, and what it signals for the future of AI in government. We'll explore what's in Virginia's pilot program, how it works, and what it aims to achieve; the promise of efficiency, modernization, and regulatory clarity; the pitfalls and public concerns around transparency, bias, and accountability; and the broader lessons for government AI adoption. What should we watch for? What does civic participation look like? So let's dive in.

Virginia has long pushed regulatory modernization: trimming redundant rules, simplifying language, making governance leaner. In 2022, an executive directive set a goal to reduce regulations by 25%. The state says it has already exceeded that: agencies have cut regulatory requirements by 26.8% and eliminated 47.9% of the words in guidance documents. But now the administration wants to go further, faster, with AI. The AI tool will scan all regulatory texts and guidance documents in the Commonwealth. It will flag contradictions between regulations and statutes. It will identify redundant, outdated, and overlapping rules. It will suggest streamlined language with more concise, clearer wording.
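To make the "identify redundancies" step less abstract, here is a deliberately tiny sketch of what a redundancy pass over regulatory text might look like. Everything here is invented for illustration: the clause IDs, the clause texts, the `flag_redundancies` function, and the similarity threshold are our assumptions, and the pilot's actual tooling would use far richer semantic analysis than simple string similarity.

```python
# Hypothetical miniature of a redundancy pass: compare regulation clauses
# pairwise and flag near-duplicates for human review. Clause IDs, texts,
# and the threshold are illustrative only, not Virginia's actual data.
from difflib import SequenceMatcher

clauses = {
    "9VAC-001": "Permit applications must be filed within 30 days of notice.",
    "9VAC-014": "Applications for a permit shall be submitted within 30 days of notice.",
    "9VAC-120": "Annual reports are due on the first business day of March.",
}

def flag_redundancies(clauses, threshold=0.6):
    """Return (id_a, id_b, similarity) for clause pairs worth a human look."""
    ids = sorted(clauses)
    flagged = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ratio = SequenceMatcher(
                None, clauses[a].lower(), clauses[b].lower()
            ).ratio()
            if ratio >= threshold:
                # The tool only flags; a human decides what, if anything, changes.
                flagged.append((a, b, round(ratio, 2)))
    return flagged

for a, b, score in flag_redundancies(clauses):
    print(f"Review {a} vs {b}: similarity {score}")
```

Note what the sketch does not do: it never edits anything. It produces a review queue, which matches the pilot's stated design of keeping humans in charge of the actual changes.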
The pilot is labeled agentic AI, meaning the system has autonomy to perform tasks with minimal direct human prompting. Agencies will use the AI as a tool, not to unilaterally change law, but to guide human review. The goal is to help agencies that haven't yet met the reduction targets, as well as push those that already have to go further. It's a bold experiment using generative and agentic AI to perform what is effectively legal and regulatory triage at scale.

Let's look at a scenario. Think of a small business owner, Jane, who wants to comply with state regulations. She faces cryptic clauses, overlapping rules, and outdated guidance, so she hires a consultant. If the regulatory code is muddled, contradictory, and scattered, she wastes time, money, and effort just deciphering what she's supposed to do.

An AI-powered code cleanup promises efficiency and cost savings. If the AI can highlight redundant rules or outdated text, regulators can focus human effort where it matters. That saves time and money in the long run. Clarity and accessibility: streamlined language and fewer contradictions help citizens, businesses, and administrators alike. The law becomes more readable, less opaque. Scalability: manual review of thousands of regulations is slow, expensive, and error-prone, and AI can augment human capacity to scale that review. AI systems can be rerun periodically, flagging new inconsistencies as laws change, rather than waiting for large legislative scrub campaigns. Lastly, governance innovation and leadership signaling: if Virginia succeeds, it may become a model for other states or even federal agencies, which could catalyze broader modernization. To quote the administration, "Virginia is a national leader in AI governance."

In short, aligning AI with public administration holds the promise of smarter, leaner government. But no AI experiment in regulation is risk-free, so let's examine some of the concerns, starting with transparency and explainability.
How will citizens or watchdog groups know which regulatory changes were AI-suggested versus human-curated? If the AI flags a section for removal or rewriting, will the reasoning be transparent or opaque? Black-box AI in legal and regulatory domains is especially risky; people deserve to see how decisions are made.

How about bias, error, and unintended consequences? The AI might misinterpret legal language, misclassify rules, or fail to catch semantic subtleties. It might disproportionately flag rules in certain domains, say environmental or health, more than others, introducing skew. And reducing regulations isn't always good: some rules exist for public safety, equity, and fairness. Overzealous pruning could harm many groups unintentionally.

Then there's accountability and responsibility. If an AI-suggested deletion leads to a legal gap or harm, who's responsible? Will agencies or lawmakers be tempted to defer too much to the AI, reducing human oversight? And how will judicial review or legislative oversight function in this new paradigm?

How about public trust and legitimacy? Citizens may push back and say, "But who's watching the machine?" The legitimacy of regulations is tied to democratic processes: input, hearings, stakeholder comment. Can AI skip or shortcut those?

Finally, we have legal and institutional constraints. Some rules are statutory; you can't remove or alter them via administrative guidance, AI or otherwise. The AI must respect legislative bounds. And some agencies might lack the internal capacity or legal culture to vet every AI suggestion. If they don't have time to look at their own documents, who's to say they'll have time to look at AI suggestions?

Let's look at an example: the vetoed high-risk AI act in Virginia. Earlier in 2025, the Virginia governor vetoed the proposed High-Risk AI Developer and Deployer Act, which would have regulated certain AI uses.
That suggests political tensions over how aggressively to regulate AI. With a pilot like this, the line between innovation and oversight will be under scrutiny. Should be, anyway.

What does this tell us about AI adoption in government more broadly? Think about governments no longer being passive adopters: states are now launching AI pilots, not just regulating others' AI. That shifts the narrative from AI as external tech to AI as part of statecraft. Government frameworks must evolve quickly, too. Traditional rulemaking, oversight, and public-input processes can't be ignored; AI forces us to rethink how regulation is designed, maintained, and audited.

What about experimentation and guardrails? Those usually go hand in hand, right? Pilots are necessary, but they must include transparent logs, auditing, rollback mechanisms, stakeholder engagement, and oversight. We should also consider that citizen participation is essential: if AI touches regulation, citizens must have a seat at the table, with transparency, feedback loops, and appeals. Otherwise, trust is going to erode. And finally, interstate and federal coordination are going to matter. If different states adopt divergent AI-driven regulatory models, fragmentation and inconsistencies will naturally arise.

As this pilot unfolds, I'd suggest we watch for whether Virginia is publishing before-and-after regulatory drafts, AI suggestions, and human edits. We should also watch whether they're requesting stakeholder feedback, with businesses, NGOs, and citizens having input into which rules stay or go. I also wonder whether the legislature or independent audit agencies will have enough visibility to create the right governance bodies and oversight committees. And we should pay close attention as the pilot expands into more domains, like education, health, and the environment, because those can come with much higher risk.
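The guardrails discussed here (before/after drafts, a named human signer, risk tiers, rollback) can be made concrete as a data shape. The sketch below is purely hypothetical: the `RegulatoryEditRecord` class and its field names are our invention for illustration, not Executive Order 51's actual schema.

```python
# Hypothetical shape of one public audit-log entry for an AI-suggested
# regulatory edit. Class and field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegulatoryEditRecord:
    rule_id: str
    ai_suggestion: str   # machine-proposed replacement text
    before_text: str     # published "before" draft
    after_text: str      # text actually adopted (humans may amend the AI's)
    rationale: str       # human-readable reasoning for courts and the public
    reviewed_by: str     # the accountable human who signed off
    risk_tier: str       # e.g. "routine" vs. "sensitive" (health, environment)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def rollback(self) -> str:
        """If a change misfires, the logged before-text restores the rule."""
        return self.before_text
```

The point of logging the suggestion, the adopted text, and the signer separately is exactly the accountability trail discussed above: anyone can see where the machine's input ended and the human decision began, and any change can be reversed from the record itself.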
And if you think about other states adopting similar AI regulatory pilots, do you think the federal government won't respond to that?

So how can we, as citizens, engage? We should request transparency: ask local and state representatives whether AI logs and decision rationales will be published. We should definitely participate in public comment periods if they're made available, and support NGOs, that is, non-governmental organizations, or academic efforts to audit AI regulatory tools. And we should absolutely stay informed, because regulation affects daily life more than many people realize.

Virginia's AI pilot is ambitious and may serve as a bellwether for the future of government. It has the potential to streamline red tape, increase clarity, and modernize governance, but only if we embed accountability, transparency, and public participation from the start. As you think about AI in your own domain, whether business, policy, or community, ask: Who designs the AI? Who audits the AI? Who can question its outputs? Those questions are more than technical; they are the foundations of trust. So until next time, stay curious, keep informed, and let's together help shape an AI-powered future that works for all.
