Inspire AI: Transforming RVA Through Technology and Automation
Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.
Ep 78 - Learn By Building: From Strategy Decks To Working Agents w/ Matt Bartles
“We’ll learn AI once we understand it” sounds responsible, but it’s one of the fastest ways to fall behind. We sit down with Matt to argue for a different approach: learn AI by building with it, in small scopes, with real users, and with the humility to let the work teach you what the strategy can’t. The result is faster AI adoption, better judgment about what models can and cannot do, and a team that develops true operational muscle instead of slide-deck confidence.
We dig into why long AI roadmaps are so fragile, how experimentation creates better plans, and what the real costs look like when you delay hands-on work. That includes the unglamorous details that decide whether an AI feature scales: token costs, context loading, caching, latency, and picking the right model for the job. We also explore when open models make sense, what it takes to host them, and why workflow design matters just as much as model choice in complex environments like banking and underwriting.
Then we get practical about building agents. A simple “meal planner” agent becomes a lesson in inconsistency, unclear pathways, and why agents can fall apart when they must choose from a long list of similar options. From there, we talk guardrails: where probabilistic AI is fine, where deterministic rules must take over, and how governance should tighten as usage grows. If you’re leading teams through AI strategy, enterprise AI, or agentic AI pilots, you’ll leave with a clearer playbook for building safely and learning fast.
Subscribe for more conversations like this, share the episode with a teammate, and leave a review if it helps. What’s the smallest AI build you could ship in the next 30 days?
Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
Learn AI By Building
SPEAKER_01Welcome back to Inspire AI, where we explore how artificial intelligence is transforming not just our technology, but how we think, learn, and lead. Today's conversation centers on a simple idea that's becoming increasingly true in an AI-driven world: you don't learn AI by studying it. You learn AI by building with it. And yet, there's a real tension here. Most organizations are still trying to understand AI before they use it, and in doing so, they're falling behind the teams who are learning in real time through experimentation. So today, we're going to unpack what it actually means to learn through building: not as a hobby, but as a test, and ultimately as a leadership strategy; not just as an individual skill, but as a capability leaders can design into their organization. Our guest today is someone who has spent his career not just talking about transformation, but driving it. He's one of the people who build, test, and learn in real time. All right, Matt, welcome back to Inspire AI. How are you doing today?
SPEAKER_00Great. Thank you for having me. I'm so glad to be back.
SPEAKER_01Yeah, it's been over a year now since the beginning of this podcast journey of ours here at AI Ready RVA. I think you were probably one of the first five podcast episodes. I was really excited about the mission and to contribute. I love it. We've come a long way in some ways, but a lot has definitely changed in AI since we last talked. So it's a fascinating place, and I'm really excited to be talking with you about it today.
SPEAKER_00The past couple years have been some of the most fascinating, engaging, and steepest learning curves. I can't stop thinking and talking about it.
SPEAKER_01Yeah. Yeah. I think last time I talked to you, you were working as a consultant. Am I remembering that correctly?
SPEAKER_00I was working as a consultant at a benefit corporation called Impact Makers. And our biggest client was George Mason University. Oh, it was George Mason. Okay. Yeah. They were a great client, helping out with automation and starting AI several years ago.
SPEAKER_01Okay. Okay. And what are you doing these days? Tell us a little bit about yourself.
SPEAKER_00Yeah. My journey into AI is weird, just like everyone's. I started out my career as a submarine officer, then pivoted to financial services as a business analyst at Capital One. I started out building machine learning models in 2010, which look like toys today, and moved through automation. After that, I pivoted to consulting, working mostly with automation. But when ChatGPT hit in 2022, it was really clear that automation was going to be gobbled up by AI and eventually agentic AI. Wanting to harness that, I picked up some courseware and degrees and pivoted to AI strategy, where I am today at Navy Federal Credit Union, helping accelerate AI adoption and create great use cases for our members.
SPEAKER_01Nice. You picked up some courseware and degrees. Tell us a little more about that. What's worked for you?
SPEAKER_00I have been a lifelong learner, so I'm always going back to the well to learn new things. At Capital One, we saw a huge wave of technology, and you've been a part of that. So I went back and worked on a software engineering degree, and then moved into consulting. So, like, a real degree.
SPEAKER_01Okay.
SPEAKER_00Yeah, and very generous education benefits to maximize that. I spent two and a half years at the Harvard Extension School pursuing software engineering, balancing that with a full-time corporate job. It was great, and the learning was exceptional too.
SPEAKER_02Yeah.
SPEAKER_00After about two and a half years, I had enough credit to get a graduate certificate in web application development. So I got that, and then really started to see my limits in consulting and picked up an MBA. I went to Darden for that. That was great, and Darden really started to embrace the AI revolution that we're going through now. That was a critical moment for me to not only understand where AI was going, but also understand how AI was working in the business context. Darden did a great job of that. And now we're applying those things at Navy Federal. It's a very exciting time, and every day there's something new to learn.
SPEAKER_01Yeah. Just out of curiosity, what was your concentration at Darden?
SPEAKER_00Entrepreneurship. And even though I work a corporate job, some of those entrepreneurial lessons are really applicable in the corporate context too.
SPEAKER_02Yeah.
SPEAKER_00For example, one of the things they teach there is effectual thinking: how can I put together business ideas from the assets I have under my control? How do I get creative and create new value with that? That kind of thinking actually works really well in corporate America, especially at companies that embrace it, and especially with AI, because the tools are becoming more and more democratized. You can effectuate more things than you once could in a corporate role years ago.
SPEAKER_01Yeah, well, the way I see it, even in a large enterprise, if you're a leader, you're running a small business. And unless you're in a mature business model, running that small business in an enterprise means you're building something, you're growing something. There's a continuum of growth and maturity, and if you understand the life cycle of businesses, you're always on that continuum, whether you're in a big company, a small business, or building something out of your garage, right? It's a true-to-form model that exists everywhere if you look for it.
SPEAKER_00I love that. And having agency, that entrepreneurial agency of "I'm going to go do this, this is in my control, I'm going to move it," is greatly appreciated in the corporate context as well.
Roadmaps Versus Rapid Experiments
SPEAKER_01I love it. Indeed. All right. So you've worked across enterprise, government, and consulting. Where do you see organizations over-indexing on planning and underinvesting in, let's say, building with AI?
SPEAKER_00This is a really interesting one because we find ourselves making roadmaps for technology we don't fully understand. And let me be fair, the technology that we have today is capable of doing many things that we just haven't figured out yet. And so creating a three to five year roadmap for delivery of value in AI when it's shifting so quickly is a bit of over-indexing. And so, how have we pivoted to investing in building things with AI? It's really figuring out something that is feasible, deliverable in a short period of time, and can deliver value quickly. That's how we're building the subject matter expertise to work with these tools, understand what they can do, understand what value they can deliver in small scopes beyond just what a proof of concept is doing. And that's helping enrich through the learning cycle how we can then do better planning. And so this is a place where the technology is moving so quickly. It's better to build and deliver value in small portions to figure out how you can create that better plan.
SPEAKER_01Yeah. No, that really resonates with me. I've had a few guests on in the last six months, and I get a chuckle, and they do too, every time I ask: where do you see the world going in three to five years? And they're like, that's impossible to even know right now, because so much is changing so fast. So don't spend your time trying to predict where your business is going to be in three to five years or how to get there. You just have to be paying attention to the changes around you and incorporate them as fast as possible.
SPEAKER_00To your exact point, we're in a five-year planning cycle now for our enterprise strategy. And if you look back five years to May 2021, ChatGPT hadn't yet been brought to the mainstream. Sitting here in May 2025, there's no way I could have planned for where we are today: agents working with tools, the power of some of the things we've been able to democratize, completely unpredictable. That ground has shifted so fast that there's hesitancy to plan that far out. There are some things we do have to plan for, though. Major changes in marketplaces, for example: how customers find and connect with products today is going to be very different in three years, and some of those relationships we have to plan for now. So there is a place for long-term planning, but it is a murky place.
SPEAKER_01Yeah. So what do you think the real cost is of waiting to fully understand AI before experimenting with it, both at a team level and a leadership level?
SPEAKER_00What is the cost of waiting to fully understand AI before experimenting? Fully understanding AI is going to be a difficult task, given how quickly it's moving. Clouding that, we are still in a hype cycle for AI and agentic AI. There are many problems that need to be resolved. How do we govern it? How do we make it repeatable and transparent? What do customers want from AI experiences? All of that still remains unknown today. If you wait to learn AI until those things are figured out, at that point you're so far behind the curve that it's going to be hard to catch up. And if you don't invest now, while we're still in the hype cycle but these AI POCs are moving from science experiment to real, you will miss out on that learning cycle and be too far behind to catch up.
unknownYeah.
SPEAKER_00And so that is the true cost of waiting. It's why we're investing in training and upskilling our workforce today, even though we recognize we still remain in a hype cycle: the cost of learning down the road is going to be too high.
SPEAKER_01Yeah. And I've heard the opposition to that mindset as well: in a year or two, everything is going to be abstracted away, we're just going to voice-dictate everything that shows up on our screen, and the AI is just going to do it for us. So why do we have to learn how it works right now? And I think about some people: if you don't have the ability to leverage AI systems as they currently are to experiment, I get it. Maybe your company has to wait. Or maybe your ideologies are just looking for excuses. But ultimately, knowing how these things work will benefit you in the long run, no matter what. I think about my experience in technology: 20 years of mastering physical data centers and networking devices, really using switches and firewalls at my fingertips, as opposed to abstracting it away in software. That's where I came from. And knowing how those ones and zeros pass through the systems, that is gold even in today's age. Just because you can sit down in front of Amazon's cloud and build a data center yourself doesn't mean you don't need to understand what you're doing when you do it. You know what I mean? I agree.
SPEAKER_00One practical example is that we need to learn to manage AI costs and AI operations. Take, for example, if you're running very heavy AI ops, that has a large token cost. And you had just mentioned the cost of learning, or the cost of delaying learning.
SPEAKER_01Yeah, the cost of waiting.
Cost Of Delay: Hype And Tokens
SPEAKER_00Yeah, yeah. I had a practical example to add. As you think about the cost of learning, there's an actual monetary cost. Learning to manage AI ops means learning what model you're using for a particular process, how many turns it has, how you're loading context, what's going to be cached, what's going to be given to the model in each turn, and how you limit the number of turns and what needs to be done in each one. That system thinking can really reduce token cost. And if you don't go through that learning cycle, you are going to burn a lot of tokens unnecessarily. That, unfortunately, is going to be the cost of your learning if you decide to delay.
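The system thinking Matt describes, model choice, turn count, and cached versus fresh context, can be sketched as simple cost accounting. The model names and per-million-token prices below are invented for illustration, not real provider pricing.

```python
# Hypothetical price table: (input $/1M tokens, cached-input $/1M, output $/1M).
PRICES = {
    "big-model":   (3.00, 0.30, 15.00),
    "small-model": (0.25, 0.03, 1.25),
}

def turn_cost(model, prompt_tokens, cached_tokens, output_tokens):
    """Cost of one agent turn; cached context is billed at the cheaper rate."""
    in_price, cache_price, out_price = PRICES[model]
    fresh = prompt_tokens - cached_tokens
    return (fresh * in_price
            + cached_tokens * cache_price
            + output_tokens * out_price) / 1_000_000

def conversation_cost(model, turns):
    """Sum cost over a list of (prompt, cached, output) token counts."""
    return sum(turn_cost(model, p, c, o) for p, c, o in turns)
```

Caching the shared system context so it bills at the cheaper rate, and cutting the number of turns, directly shrinks the fresh-input term, which is the kind of lever this learning cycle teaches you to pull.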
SPEAKER_01So you think that the cost of leveraging AI is going to go up?
SPEAKER_00Those lessons will be waiting for you in the future, and learning them now is going to save you future costs. I see.
SPEAKER_01Yeah, they're pretty expensive, using Anthropic's Claude model APIs, for example. That's just really expensive. Look, I'm lucky to have a company that allows me to leverage as many as I want, theoretically. And the way I leverage the coding systems, I use them on a subscription, not a pay-per-token model, at least for building apps, which is a lot more economical when learning, right? You don't have to use the Anthropic API to learn, at least not the way I did.
SPEAKER_00Yeah, where I run into token consumption costs is when the AI is part of the application itself. How do you measure, take for example, a customer interaction on the phone with a voice agent? Do I want to pay for GPT-5? For Opus? For Sonnet? Which of those gives me the customer servicing level that I think meets the need? Which one is the most cost-effective? Where's the latency? Which one is the best at following instructions? That's an art that has to be learned for you to be able to scale and use agentic AI in your operation. Yeah. It'll have a cost, and that learning cycle will get easier over time, but starting it sooner rather than later will expand the amount of things you can do with your AI skills.
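The trade-off Matt sketches, cost versus latency versus instruction-following, can be captured as a simple routing rule. The candidate names, prices, latencies, and quality scores below are hypothetical placeholders, not benchmark results.

```python
# Hypothetical candidates: (name, $ per 1K tokens, p50 latency in s,
# instruction-following score in [0, 1]). Numbers are invented.
CANDIDATES = [
    ("frontier-large", 0.015, 2.5, 0.95),
    ("frontier-small", 0.003, 0.9, 0.88),
    ("open-hosted",    0.001, 1.4, 0.80),
]

def pick_model(min_quality, max_latency):
    """Cheapest model meeting a quality floor and a latency budget."""
    ok = [m for m in CANDIDATES
          if m[3] >= min_quality and m[2] <= max_latency]
    return min(ok, key=lambda m: m[1])[0] if ok else None
```

A voice agent with a tight latency budget and a strict quality floor may have exactly one viable choice, which is why measuring these dimensions per use case is part of the art.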
SPEAKER_01Yeah, it sounds very much like a machine learning engineer type role: learning how to leverage the right models for the business value you're looking to extract.
SPEAKER_00It's something that, if you work with consultants or some of the labs, Microsoft or xAI, I suppose Claude too, though I haven't worked with Anthropic, they want to push their model. But some of their models aren't great at intelligent document processing. Right. Whereas Microsoft has, what do you call it, Document Intelligence, which seems to be the gold standard for IDP. If you were to use Claude or ChatGPT for that, it's not nearly as robust. And so model selection is a bit of an art. If you develop it as a corporation, then you're not reliant on the consultants or the labs to help you out.
SPEAKER_01Yeah. What about leveraging the models on Hugging Face, the open source marketplace for models? Can't you find one that's built specifically for intelligent document processing, if that's the use case you're looking for, as opposed to paying Microsoft?
SPEAKER_00This is a great question.
SPEAKER_01You have to be able to host it yourself. Yeah.
SPEAKER_00You gotta host it. And it depends on where your business is. For example, if you're a Microsoft shop, you can use, I think it's called the model catalog, where you can use the Azure service. You can take a model: essentially, Azure has a copy of Hugging Face with a, what would we call it, curated selection of models. It's still huge, I think in the 10,000 range. So if you wanted to do text-to-image, image-to-text, or intelligent document processing, that's all hosted on Azure, but the token cost remains. In fact, you might even pay more on Microsoft than you might in AWS using a similar model. So that whole model selection: how do you get it?
SPEAKER_01That's the art. Yeah.
Model Choice, Hosting, And Underwriting
SPEAKER_00It's gonna be an art. Primary to that, though, is taking a process, say underwriting. Right? Banks underwrite millions of loans a day, if not more. And reframing that in agentic operations is a hard thing to do too. Real estate lending is enormously complex. You have a bunch of documents with regulatory requirements: you need to submit two months of your bank statements, your 401k level, and all those documents are different. So picking the right model that can scan that, picking the right model that can harness that data and put it into an underwriting model, picking the right model to interact with the customer, that's an art, and it exists in the frame of how the business operates. Wow, yeah. Yeah, yeah.
SPEAKER_01That's definitely taking us down a rabbit hole from where I was headed, but it's good knowledge to share, for sure. But I need to bring us back so we can stay focused on our theme for today, which is developing experience, right? Building a muscle of learning fast. So, in your experience, what do teams actually learn from building that they cannot learn from, let's say, strategy decks, courses, or certifications?
SPEAKER_00This is a great question, and timely, because I recently taught a class on agentic AI. In this class, we used the Copilot agent builder, which is available in the M365 suite. And we all built a meal planner agent. What the agent will do is go into a recipe repository, help you pick out recipes for however many family members you may have, and then build a meal plan with cooking instructions and a shopping list. That was a relatable agent to build for people who all have very different roles.
SPEAKER_01How do they access the agent? Is it a UI where they talk to the agent, like, build me a seven-day meal plan? Is that what it is?
SPEAKER_00Exactly. So you create agent instructions to define its purpose: you are a meal planning agent. And the first instruction is to collect, for example, any food or dietary restrictions, like food allergies. Do you have food preferences? Are you vegetarian, for example? How many people are in your family? Which meals do you cook? What's your meal preparation tolerance: do you like spending eight hours making a meal, or do you want quick meals with five ingredients? So that was the interactive part of the AI. Then the AI will reach out to the repository to figure out meals that possibly fit, there's some negotiation with the user, and then it'll use Python to create a document that has a shopping list and so on. So the question is, what do we learn at the team level? You start to learn what that agentic experience looks like for them. How do you plan meals? How much time do you want to spend actually picking the meals? I don't really care about picking the meals, I'm not very selective, but some people really are, and they want to spend more time iterating on the meals. So through this process, they understand: hey, what does an agentic experience feel like? And then they also learn the limitations of the agent. And here's an interesting one. If you look at a recipe repository, you've got spicy chicken, crispy chicken, chicken alfredo, hundreds of different kinds of chicken recipes, and the AI picking from a long list of very similar things seems to fall apart.
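As a rough sketch of what such an agent definition looks like: the instruction text and recipe fields below are hypothetical, not the actual Copilot agent from the class.

```python
# Hypothetical agent instructions, in the style Matt describes above.
AGENT_INSTRUCTIONS = """You are a meal-planning agent.
1. Ask for dietary restrictions and allergies.
2. Ask for preferences (e.g. vegetarian), family size, and which meals they cook.
3. Ask for prep-time tolerance (quick 5-ingredient meals vs. long cooks).
4. Search the recipe repository for matching recipes and negotiate choices.
5. Produce a meal plan with cooking instructions and a shopping list."""

def shortlist(recipes, allergies, max_prep_minutes):
    """Deterministic pre-filter so the model chooses from a small list,
    not from hundreds of near-identical recipes."""
    return [r for r in recipes
            if not (set(r["allergens"]) & set(allergies))
            and r["prep_minutes"] <= max_prep_minutes]
```

Pre-filtering deterministically and letting the model choose among a handful of survivors is one way to sidestep the long-similar-list failure Matt just described.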
SPEAKER_01Does it?
unknownYeah.
SPEAKER_01If you're not able to predict exactly the right one, that sort of thing, and then it comes out of left field with its recommendations.
SPEAKER_00Exactly. So there are 55,000 recipes in the repository. And of that 55,000, there are 100 different kinds of pasta that are very similar to an AI. And so it'll randomly pick a pasta dish. Sure.
SPEAKER_01It's not random so much as it's statistically chosen, right? Yeah.
SPEAKER_00And three people could ask it for a same meal plan and they'll get three different things.
SPEAKER_01And it'll get three different meal plans, yeah.
SPEAKER_00So it develops some inconsistency. And the whole context of this is to understand the limits in building an agentic experience. That teaches them, in the banking context: hey, how many different pathways do I want to put the member on? Because the AI would have to select between a hundred different pathways. Today we have several hundred different workflows a member can go through, and AI is going to struggle, at least in its current iteration of the technology, to pick the right one out of the list: credit balance refund for an overdraft fee, credit balance refund for a fee waiver, all those things. So we have to redesign the workflows in order for the agentic AI to operate quickly. There's so much learning to be had when you're hands-on. As an individual, I can't learn all that on behalf of the team. By teaching teams agentic AI, they find those limits for themselves. If that makes sense.
SPEAKER_01Yeah, that makes perfect sense. Do they ever consider how to put rails on those systems they're building, where they get more deterministic with the model outputs, versus just leaving it so open-ended that it's going to continue to fall apart no matter what?
SPEAKER_00This is part of the art of AI, and it's art and science: the AI itself is probabilistic, the tools are deterministic, and the data can be either one. So how do you balance the probabilistic AI with deterministic things? This happens in real work, especially in banking. Another example is an AI that takes credit card payments. You can't pay a credit card from a savings account that has zero dollars; you'd overdraft it.
unknownRight?
SPEAKER_00And so if you don't have sufficient funds, you shouldn't pay from that account. If you just put that in the context of an AI, "do not accept a payment from an account that has insufficient funds," it will follow that rule 90% of the time. It's a probabilistic model, and that's tested: you can go test it, and it'll enforce that rule 90 to 95% of the time, even the best model. So you need a deterministic tool in your payment API, or payment MCP, for example, that enforces that rule. Teams have to learn what they're willing to accept in the probabilistic experience and what they have to enforce in the deterministic experience. That is an art and a muscle that is still being built.
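A minimal sketch of that split, with a hypothetical payment tool: the sufficient-funds rule lives in the code the agent must call, not in the prompt, so it holds even when the model ignores its instructions.

```python
class InsufficientFunds(Exception):
    """Raised by the tool itself, regardless of what the model decided."""

def pay_credit_card(accounts, from_account, amount):
    """Deterministic payment tool the agent calls.

    The check is enforced in code 100% of the time, unlike a prompt
    instruction that a probabilistic model follows only most of the time.
    """
    balance = accounts[from_account]
    if amount > balance:
        raise InsufficientFunds(f"{from_account} has only ${balance:.2f}")
    accounts[from_account] = balance - amount
    return {"status": "paid", "amount": amount}
```

The agent can still negotiate with the customer probabilistically; the tool boundary is where determinism takes over.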
SPEAKER_01Yeah. Yeah. So you've scaled automation and AI across large organizations. How do you turn experimentation from something a few people do into a capability that the entire organization benefits from?
Building Agents Reveals Real Limits
SPEAKER_00I will not claim mastery here. Scaling a capability is difficult, because many of the business units don't specifically want to align to a single capability. Let me give you an example: intelligent document processing. Many business units are using intelligent document processing: collections, underwriting, legal, auditing, supplier management. Right. So part of the art is creating a capability that can both be configured to meet a specific need and serve that need very well. And I want to be very honest: as we scale these things, we find that these reusable capabilities really only get people 30% ahead. But that 30% isn't nothing, and it also lowers the total cost of ownership. So by creating these reusable capabilities, marketing them to the business units, and showing success there, we're able to align them to capabilities. That has a lot of value for our IT partners: instead of managing five or six disparate IDP systems and software, they're really managing one or two configurations of a single capability. But there's a lot of human messiness in here, because some of the processes have to change, some of the mindset has to change, and that's the part we have to play as strategists: finding the methods to get them to use the central capability versus buying their own solution. By the way, in terms of these capabilities, this is a really interesting one: as we look across complex organizations, they have so many duplicative ones. We have 50 different knowledge management systems, if not more. So knowledge management is another one where you can have a centralized capability that can be reconfigured and spread across the organization. But it is humbling. That is very humbling work, because every use case is quite different, and alignment is hard. So it is a goal worth pursuing. It is an art.
SPEAKER_01Yeah. Yeah, you almost have to build for your 80% of your customers and then worry about the 20% later.
SPEAKER_00Yeah. One thing I've had success with here in automation, and I haven't replicated it in AI yet, is making sure the onboarding process is buttoned down: hey, we'd like to onboard you onto our intelligent document processing capability, and having process documents and that muscle memory, so the customer of that process feels secure, heard, and embraced. That's the soft part of getting somebody onto a capability. And then making it low cost to manage, which they like. So it's a soft skill that has a very technical piece to it: it has to be built in a configurable way, and that's not always easy. Yeah. And there has to be a piece of honesty in there: 30% of it can be reused. You get a 30% lift, but the business overhead cost gets really reduced. Yeah.
SPEAKER_01Yeah. Have you seen teams build a lot of things but not really actually create value?
SPEAKER_00Oh, very much, very much. And what is value? Some of these are just lessons learned. Some of these are lessons learned. And I'll give you an example.
SPEAKER_02Fail fast, learn quick.
SPEAKER_00Yeah. And the thing is, we have a democratized agentic program where 12,000 people can create agents. So they're creating agents fairly quickly, but as quickly as they create them, a lot of them are going away, so Darwinism is happening really quickly in agents. The pattern is: people experiment, most agents don't produce a lot of value and get deleted, but some do, and then they start to get shared. We have dashboards showing when some of these agents start to get really popular, and then we layer on additional value. Oh wow.
SPEAKER_01Yeah, that's great.
SPEAKER_00Yeah.
SPEAKER_01So build whatever you want, and if you find use out of it, we'll identify that usefulness, then we'll share it out, and it'll become part of the marketplace that we maintain.
SPEAKER_00Exactly. Yeah. And that process has to happen really quickly. There's a McKinsey report saying their organization has 74,000 agents actively being used. That doesn't appear to be well managed from a banking standpoint. How would we defend against that?
SPEAKER_02No.
SPEAKER_00So M365 does a great job of helping us with dashboards to take a look at that, and we constantly monitor it. It's interesting to see what agents get created; intelligent document processing is a frequently built one. But really, only two or three of them are actually really good, and those are the ones that get shared and propagated across business units. Then we can take the best practices out of the best one and advise the businesses to adopt it. And when an agent gets to a certain user base, additional governance gets layered on, because it becomes a higher risk as it scales. If you remember the Excel macro era: macros were the back end of a lot of business calculations, but they weren't very transparent. We're not going to make that mistake again with our AI program.
SPEAKER_01Yeah. Fascinating. To that end, how do you guard against teams thinking they understand something when they actually don't? Like building agents: oh, it's so easy, I just use this template and then I have this automation in front of me. But what then? It becomes garbage quicker than it gets put to use, right? So how do you guard against that?
Guardrails, Governance, And Scaling Value
SPEAKER_00Ultimately, every associate is held accountable for their output, whether it was generated from AI or not, and the training goes with that. At the individual level, as the agent creator, you are always liable for the outputs of that agent. We trust our associates, and in good faith, they have been throwing away the bad agents: yeah, this didn't work. And I'll give you an example. The AI agents we have today don't work well with Excel like people think they do. People see Excel and think of it as data, and they're like, oh, I'll upload it to the AI, and the AI will treat the Excel like data. It actually treats it like text if we don't configure it right. Does that make sense? So a lot of people were thinking, hey, it will work well with Excel, I can use it to manage tickets. And the AI we have just doesn't do that well; it falls apart. They quickly self-diagnose that and throw those Excel-based agents out. So there's some self-management. On top of that, the business units' risk officers, each business unit has one, are trained in AI and agentic risk. And on top of that, our manager and leader layers are trained to ask: hey, how did you use AI in this process? There are two sides of that coin: it creates the environment for people to share their AI gains with their manager, but it also reinforces that you remain liable for the outputs of your AI. Alongside that, we do have hard deterministic rules in place for how those agents can be used, like for PII and API data. And then we have trust and governance, of course.
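One workaround for the Excel-as-text problem Matt describes is to convert the sheet into explicit records before it reaches the model, instead of letting the model see flattened cell text. The header names and rows below are invented; in practice the rows would come from an Excel reader such as openpyxl.

```python
import json

def rows_to_records(rows):
    """First row is the header; remaining rows become keyed records."""
    header, *body = rows
    return [dict(zip(header, row)) for row in body]

def to_model_context(rows):
    """Serialize the records as JSON so the table's structure survives
    inside the prompt, rather than arriving as undifferentiated text."""
    return json.dumps(rows_to_records(rows), indent=2)
```

The model then sees named fields per row, which is much closer to the "treat it like data" behavior people expect when they upload a spreadsheet.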
SPEAKER_01So at what point does a leader need to shift from personally experimenting to designing systems that enable others to experiment effectively?
SPEAKER_00Oh man. I recently read a book on this called, um, AI Derived Leadership: how do you lead in the age of AI? And it's an interesting question. Would I recommend that book? If you were new to AI and just starting your AI journey, it may provide value. So to a new person, it may be an A-book. To me, it seemed like some of these things we're already experiencing. If you're in an organization that's a bit behind, I would recommend it. And so maybe I got a bit...
SPEAKER_01Sorry, I didn't mean to get you off your train of thought about experimenting effectively.
SPEAKER_00Yeah, I think it's important for leaders to experiment as well and get comfortable with the tools so they can effectively challenge the decisions and creations their teams are building. So I don't think there's a point where experimentation has to stop, though I understand it's onerous for a leader to have to get their hands dirty when they have so many other things. Potentially you get to a point where you're experimenting enough to ask great, insightful, deep questions. Once you have that sense as a leader, that I can effectively challenge what my team is creating, their approach, and how they're integrating AI, you've reached a steady state. And that's a place leadership will have to maintain, because this is moving fairly quickly. Whether that's experimentation, going to conferences, or working with subject matter experts, leaders have many things at their disposal, but they should be able to effectively challenge their team versus being a pass-through for risk.
SPEAKER_01Got it. So, Matt, my final question for you would be if you walked into a company today as an AI strategist, what are the first systems or behaviors you'd put in place to accelerate learning through building?
SPEAKER_00This is an exceptional question, and a lot of companies probably find themselves asking it: what are the first systems and behaviors we need to reinforce? I think of things in terms of platform, people, and processes, and the systems have to be designed around those. The function of the first systems is getting AI into the hands of your teams as fast as possible in a well-managed, well-governed way. If you're in the Microsoft domain, you can pay $20 a month for a Copilot premium license; that's a great first start to get your people interacting with AI. Then you need to start understanding your processes and where they can leverage artificial intelligence. That's the process piece. The platform piece is really tricky right now because many of the platforms are evolving fairly quickly, but it's about getting your tech teams ready to work with AI. One of the things we've discovered along the way is that most software development was deterministic and had very easily testable verification. AI is a bit more slippery. Getting the teams onto an SDLC that accounts for that would be the next system I'd put in place. I'd love to hear your thoughts on this, though, Jason.
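The "AI is a bit more slippery" point about the SDLC can be made concrete with a small sketch. This is a hypothetical example, not from the conversation: `fake_model` stands in for a nondeterministic model call, and the idea shown is testing a property of the output at a pass-rate threshold rather than asserting an exact string, as deterministic tests would.

```python
import random

def fake_model(prompt):
    # Stand-in for a nondeterministic model call (assumption, not a real API):
    # the same prompt can yield differently phrased answers.
    return random.choice(["Paris", "Paris.", "The capital is Paris"])

def passes(output):
    # Check a property of the answer, not an exact string match.
    return "Paris" in output

# Run the same prompt many times and gate on a pass rate, not equality.
runs = [fake_model("What is the capital of France?") for _ in range(20)]
pass_rate = sum(passes(o) for o in runs) / len(runs)
assert pass_rate >= 0.9  # threshold gate, the AI-era analogue of a unit test
```

A deterministic function would get one `assert f(x) == expected`; an AI-backed step gets an evaluation suite like this, which is the kind of SDLC change Matt is pointing at.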
Leader Playbook And Closing
SPEAKER_01The three Ps as a framework didn't actually jump out at me until you started explaining them. Frameworks are brilliant, right? They keep you grounded in what you should be doing. When I think of managing systems at large scale, I think of the technology, the processes, and the people, right? And you probably just reframed those three things I use all the time as your own: you've got the platform, and a lot of companies are becoming platform-based companies, unifying the business around specific centralized functionality. That's important because you don't want 10 different document management systems, right? You want one or two based on the use cases, and you want to be able to fold in all that content so you can easily access it at any time. So the platforms are super important; technology is going the way of platforms, so in my mind, it's all the same. Like, how does all this stuff change the SDLC, if you will? I'm a lifelong learner as well. I've spent many a month building projects for the sake of learning, and I also build systems for nonprofits as well as enterprise-scale systems. This particular episode has been really interesting to me because I truly believe that spending 30 days to learn a skill is no longer the world we live in. You learn the skill by doing, especially if you're in or around technology. So think about learning now as a new behavior. It's not about opening a book and reading about it; it's about opening an interface and trying it, right? You're gonna learn by experimenting. That's why I like this episode so much. I've realized over the last six months that the Udemys of the world, Coursera, DeepLearning.AI, all of that serves a good purpose.
And I can go learn really small snippets, but I'm not gonna sit there and finish an entire course because, one, I don't care about the certificate at the end, and two, I'm gonna pick up what I need from a few videos and then go build the thing, because the systems I'm leveraging now allow me to do that at such an accelerated pace. And while I'm experimenting, I don't have to worry about the evaluations of these systems; I'm trying things to see what works, right? One of my friends and I get together weekly to build systems for our nonprofit, AI Ready RVA, and we're building so much so fast that that's our learning process. It also lets us figure out what works and what doesn't, much like the class you shared with your associates: teaching them how to build and use an agent by building one themselves, and teaching them the limitations. That's exactly it, right? You have to get your hands on this stuff, and that's where we live right now. It started with software because software is closest to the ones and zeros these agentic systems know about. They don't know our language unless we teach them, and through their language, we taught them our language. Now they're really good at our language, and they're really good at coding and accessing systems because of how we taught them. But it's gonna grow into so many more fields, and those fields are gonna be impacted just as much as software engineering has been, and data science, and machine learning. Everybody's gonna be learning at this pace, and they just may not realize it yet. So there you go.
SPEAKER_00I'm excited about it. Thank you so much for having me. I really appreciate the time, and I'm loving AI Ready RVA.
SPEAKER_01And I will have you back again, I promise that.
SPEAKER_00I think the world's gonna look different in another year, so it'll be exciting to catch up for sure.
SPEAKER_01Thanks, man. Nice talk.