Inspire AI: Transforming RVA Through Technology and Automation

Ep 70 - The Rise Of AI Runtimes: Google's Agent Development Kit

AI Ready RVA Season 2 Episode 10


AI is starting to feel less like a feature you bolt onto a product and more like a system you have to run. That shift is easy to miss until you try to build something real: a workflow that calls APIs, keeps context across sessions, coordinates tasks, pauses for human approval, and resumes later without breaking. Suddenly prompts are not the hard part. Architecture is.

I walk through what Google’s Agent Development Kit (ADK) reveals about the future of AI agents and agentic workflows. The core idea is event driven execution: a runner orchestrates the system while an agent emits events like “use this tool,” “update state,” “store an artifact,” or “request confirmation.” It’s a clean mental model for building an AI runtime with resumable execution, observable state, and tool integration that can actually survive production.

We also get practical about agent design. Not every agent should be an LLM freestyling its way through a task. I break down LLM agents for reasoning, workflow agents for deterministic reliability, and custom agents for complex orchestration, then connect that to the deeper takeaway: the model is the decision engine, but tools are the capability. Rich tool ecosystems and clear interfaces will matter more than chasing ever larger parameter counts.

Finally, we talk governance and safety. Tool confirmation and human in the loop controls are not optional if agents can send emails, change data, or trigger real world actions. If you’re a leader, builder, or architect trying to scale enterprise AI responsibly, this is the mindset shift to make now. Subscribe, share this with a teammate, and leave a review with the guardrail you think every AI agent should have.

Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

From Prompt Apps To Runtimes

Inside ADK Event Driven Design

Three Agent Types For Production

Tools Beat Bigger Models

Human Approval And Governance

Deploying Agents As Services

Leadership Questions And Takeaways

Architecture Over Prompts Closing

SPEAKER_00

Welcome back to Inspire AI, the podcast where we explore how leaders and builders can stay calm, capable, and intentional as intelligence becomes part of every system we design. Today, I'm diving into something that sounds technical on the surface, but actually reveals a much deeper shift happening in AI. I'm going to explore why Google's Agent Development Kit, or ADK, is not just another AI tool. It's an attempt to answer a much bigger question: what happens when AI stops being a model you call and starts becoming a system you operate? Because the real story behind agent frameworks like ADK isn't about prompts or chatbots. It's about how we engineer reliable intelligence. And that's the shift every leader is going to face over the next few years. I don't mean every technical leader, I mean every leader.

But let's start with a simple observation. Most AI applications today are still built like this: a user sends a prompt, the model responds, and the application reacts. It's basically request-response computing with a really smart text generator in the middle. But as organizations try to do more with AI, that model starts to break down. You want the system to do more complex things: call APIs, use internal knowledge, maintain context across sessions, coordinate multiple tasks, pause for human approval, resume work later, or even stream results in real time. At that point, you're no longer building an AI feature, you're building an AI runtime. And that's the problem frameworks like ADK are trying to solve.

So let's start exploring the Agent Development Kit. It's a framework for building and operating AI agents, and its core philosophy is simple: agent systems should behave like software systems, not like prompt experiments. That means they need structured workflows, observable state, resumable execution, tool integration, and deployment infrastructure. In other words, the AI can't just live in the prompts. It should live inside architecture.
Think about event-driven AI. The most interesting part of ADK is the architecture underneath it. Instead of thinking about AI as a single response generator, ADK treats agent behavior as a sequence of events. The mental model is like this: you have something called a runner, and the runner orchestrates the entire system. An agent generates events such as call this tool, update this state, store this artifact, request confirmation, or send a response to the user. Those events flow through services that manage the session history, stored knowledge, or artifacts like files or images, and the system keeps running until the task is complete. This isn't just an implementation detail, it's a philosophical shift. Instead of asking, what should this model say? we're now asking, what should the system do next? That's a completely different engineering mindset.

Another interesting concept in ADK is that not all agents are the same. Here I'm going to talk about three primary types. The one you might be most familiar with is LLM agents. This is the one you probably imagine when you think about agentic systems, because they use language models to reason, decide which tools to use, and generate responses. They're flexible, but also unpredictable. Then you have workflow agents. They're deterministic orchestrators. They don't rely on AI reasoning; instead, they enforce a predefined sequence: step A, then step B, then step C. Think of them like workflow engines controlling AI components. That's super important, because many production systems need predictability, not creativity. And finally, you have custom agents. These are fully programmable agents where developers define the execution logic. Use these when the workflow becomes too complex for predefined patterns. And that's where the system becomes powerful, because you can combine all three: LLM agents for reasoning, workflow agents for reliability, and custom logic for complex orchestration.
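To make the runner-and-events mental model concrete, here is a minimal sketch in plain Python. This is not ADK's actual API; the `Event` class, `EchoAgent`, and `run_until_done` function are hypothetical names that only illustrate the pattern: an agent yields events ("call this tool", "respond"), and a runner dispatches each one, updating observable state, until the task completes.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                                  # e.g. "call_tool" or "respond"
    payload: dict = field(default_factory=dict)

class EchoAgent:
    """Toy agent: requests one tool call, then responds with the result."""
    def run(self, state):
        if "greeting" not in state:
            # ask the runner to execute a tool and store the result under "greeting"
            yield Event("call_tool", {"tool": "greet", "key": "greeting",
                                      "args": {"name": state["user"]}})
        # resumes here after the runner has handled the tool call
        yield Event("respond", {"text": state.get("greeting", "")})

def run_until_done(agent, state, tools):
    """Dispatch each event the agent emits until it responds."""
    for event in agent.run(state):
        if event.kind == "call_tool":
            # execute the named tool and record the result as observable state
            p = event.payload
            state[p["key"]] = tools[p["tool"]](**p["args"])
        elif event.kind == "respond":
            return event.payload["text"]

tools = {"greet": lambda name: f"Hello, {name}!"}
print(run_until_done(EchoAgent(), {"user": "Ada"}, tools))  # Hello, Ada!
```

Because the agent is a generator, it pauses at each `yield` while the runner acts on the event. That suspension point is exactly what makes execution resumable and inspectable, which is the property the episode argues production systems need.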
One of the biggest misconceptions about AI agents is that the model itself is the system. In reality, the model is just the decision engine. The real power comes from the tools. In ADK, tools can include things like API integrations, database queries, external services, internal business logic, or even other agents. There are multiple ways to define them: function tools, OpenAPI-generated tools, or MCP-based tools built on the Model Context Protocol that I've talked about before. But the deeper insight is this: the future of AI applications isn't about bigger models, going from 3 billion to 120 billion to a trillion parameters. It's about richer tool ecosystems. The model becomes the coordinator, the intelligence behind how things connect. The tools are the real capability.

Now, when you release a system like this into the wild, you need to have some controls. One of the most important features in ADK is something called tool confirmation. In simple terms, it allows the system to pause and ask for approval before executing an action. Unless, of course, you want it to send an email on your behalf, or modify your data without you even knowing it. You can use tool confirmation to ensure that the system will stop, request confirmation from you, and only resume once approval is received. That's not a small adjustment, that's critical. Governance will define the success of AI systems more than capability will, because organizations won't and can't deploy agents they can't control.

Another interesting thing ADK highlights is that agent systems need infrastructure. During development, you can run agents in a web debugging interface or a CLI environment, but production deployments typically expose agents through APIs so other systems can interact with them. That means these agents become part of the broader architecture. They're no longer chatbots, they're services.
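The tool-confirmation idea can be sketched as a simple wrapper. Again, this is not ADK's confirmation API; `with_confirmation`, `send_email`, and the `approve` callback are hypothetical, illustrating only the pattern: a side-effecting tool is gated behind an approval step (in production, a human reviewer, a UI prompt, or a policy engine) and runs only once approval is granted.

```python
def with_confirmation(tool, name, approve):
    """Wrap a side-effecting tool so the system pauses for approval before acting."""
    def guarded(**kwargs):
        if not approve(f"Allow '{name}' with {kwargs}?"):
            # approval denied: report the rejection instead of performing the action
            return {"status": "rejected", "tool": name}
        # approval granted: perform the real action
        return {"status": "done", "result": tool(**kwargs)}
    return guarded

# hypothetical email tool; `approve` auto-denies here to show the guardrail firing
send_email = with_confirmation(
    lambda to, body: f"email sent to {to}",
    "send_email",
    approve=lambda prompt: False,
)
print(send_email(to="team@example.com", body="Quarterly update"))
# {'status': 'rejected', 'tool': 'send_email'}
```

The useful property is that the agent's code path is identical whether the action is approved or not; only the outcome it observes differs, which keeps the human-in-the-loop control outside the model's reasoning.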
So why does all this matter? It matters because of what ADK represents. We're moving from a world where AI is used like a tool to a world where AI behaves like a system component. Think about the progression: first came models, then APIs, and now we're building agent runtimes. That shift means the questions leaders must ask are changing. Instead of asking which models should we use, we'll ask how do we operate intelligent systems safely, reliably, and at scale. Think about that for your work, because this is what I imagine Inspire AI is all about: providing a new perspective on these disruptive systems.

Frameworks like ADK are not just technical artifacts. They're signals. Signals that AI is moving from experiments to infrastructure. And whenever technology becomes infrastructure, three things typically happen: governance becomes essential, architecture becomes strategic, and leadership decisions become long term. Which means the organizations that win in the AI era won't necessarily have the best models. They'll have the best systems thinking.

So if you take one idea from today's episode, let it be this: the next generation of AI applications won't be defined by prompts. They'll be defined by architecture. Frameworks like Google's Agent Development Kit are early glimpses of what that architecture might look like: event-driven systems where agents coordinate tools and human oversight is built into the loop. Because AI shouldn't be treated as magic. It's software. And that's the shift happening right now. We're moving from intelligence as a feature to intelligence as infrastructure. So until next time, stay curious, keep innovating, and keep designing systems that make intelligence more useful, more reliable, and more human centered.