Inspire AI: Transforming RVA Through Technology and Automation
Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.
Ep 66 - From Prompts To Process: Building Trustworthy AI Workflows w/ Tianzhen Lin
When intelligence is everywhere but correctness is scarce, how do we lead without cutting corners? We sit down with Tianzhen (Tangent) Lin—veteran engineer and systems thinker—to unpack a practical, durable approach to building AI‑assisted products that hold up under pressure. No hype, no shortcuts: just the patterns that make teams faster and safer at the same time.
We start by reframing large language models as “eager interns”: fast, helpful, and prone to saying yes. That mental model shifts responsibility back where it belongs—on leaders who must design workflows that surface assumptions, constrain degrees of freedom, and verify outcomes. Tangent explains why context remains a finite resource even with giant windows and how the “lost in the middle” effect undermines long prompts. The fix isn’t more chat; it’s better scaffolding. Specs, plans, and documentation become the backbone for repeatable success because they compress what matters and travel across sessions and teammates.
From there, we dig into decomposition as a risk strategy. Breaking work into small, testable steps gives you early checkpoints to catch hallucinated requirements, unsafe libraries, or performance traps—like UI freezes from naive million‑row operations. Tangent shares a late‑night pivot where a strong, technology‑agnostic spec let the team re‑architect in hours, not days, turning a potential rewrite into a near‑seamless transition. We dive into verification as a non‑negotiable, the value of documentation as compressed context, and how institutional knowledge prevents the “sandcastle effect” when requirements shift or the tide comes in.
The result is a playbook for leaders navigating an AI‑accelerated world: treat context like budget, invest in durable artifacts, decompose to control risk, and verify relentlessly. Do that well and AI stops being a confident amateur and starts acting like a reliable teammate. If you’re serious about trust, safety, and scalable speed, this conversation will sharpen your judgment and strengthen your systems. Subscribe, share with a teammate who ships software, and leave a review with the one workflow change you’ll make this week.
Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
Welcome back to Inspire AI, the podcast where we help leaders, builders, and communities stay capable and intentional in an AI-accelerated world. Today's episode is a special one for me personally. I'm joined by Tangent Lin, someone I've had the privilege of learning from for years, and without question, one of the most technically rigorous thinkers I know. Tangent isn't just fluent in AI systems. He's disciplined about how thinking happens around him. While much of the conversation in AI today is dominated by prompts, tools, and surface-level productivity gains, Tangent has been quietly focused on something deeper and far more important: judgment, accountability, and workflow design. In a world where intelligence is cheap but correctness is not, he has a rare ability to take deeply technical realities, how large language models actually behave, where they fail and why, and translate them into mental models that leaders can use not to move faster blindly, but to reduce risk, increase reliability, and build systems that hold up over time. And on a personal note, Tangent has been a mentor to me, not just in technical depth, but in how to think about systems, trade-offs, and long-term consequences. He's someone who constantly pushes for clearer reasoning, stronger foundations, and better design, and who believes that real progress comes from challenging each other to think more rigorously. Many of the ways I approach AI today, especially around systems thinking, continuous learning, and building structures that help people grow rather than cut corners, have been shaped by years of conversations with him. In this episode, we're not talking about hype. We're not talking about shortcuts. We're talking about what it actually takes to lead responsibly when AI becomes ambient, when assistance is everywhere but accountability still rests with humans. If you care about building systems you can trust, leading teams through uncertainty, and staying grounded as the pace of change accelerates, this conversation is for you. Tangent, welcome to Inspire AI.
SPEAKER_01:Well, thank you for having me.
SPEAKER_00:Outstanding. Can you tell us a little bit about your background? What brings you here today?
SPEAKER_01:Yes, absolutely. My pleasure being here. So I've been a software engineer for a good number of years. I've worked in different startups and also had the privilege of working in big tech: Capital One, Amazon, and Facebook. I have a passion for building software. And most recently, I realized that instead of spending a lot of late-night hours troubleshooting and learning how to write code, AI can accelerate that. The more that I use it, the more flaws I see, some of the blind spots we have with AI. But I also realize there are a lot of opportunities we can harvest if we use it correctly. It can actually accelerate us in building what we love and what we want to bring to humanity.
SPEAKER_00:Yeah, that sounds great. Bringing our passion to humanity is something we all strive for as we identify with our purpose. I resonate with that. So I've heard you argue that prompting often fails, and you compare it to building a sandcastle. What's the core misunderstanding that teams have about working with LLMs that makes failure almost inevitable?
SPEAKER_01:Yeah, a lot of times we have seen some of these magical outputs, maybe from a blog post or maybe from someone else, where you can write one small sentence and the AI comes back with some code that works, and you look at it and think, my goodness, this would have taken me a day or even longer. The problem is that our work does not stop there, because once you build software, it's not going to run unchanged for the next 10, 20 years. We're going to keep making modifications here, making changes there. And therefore, making changes to a program you wrote a day or two ago, or even a couple of hours ago, becomes increasingly difficult. It's almost like the AI comes back to you with a large document of 15, 20 paragraphs, and you have to describe: I want to make a change in this very sentence, and so on and so forth. Things gradually start going sideways in many ways. And then not only are you not really guiding the AI in the way you want, but at other times the AI can start creating new requirements or hallucinating things: using a function that doesn't exist, using a library that shouldn't be there, or missing a security cue here and there. So that's where I usually see the prompt fail, and we spend more time course-correcting the AI than actually getting it to do meaningful work.
SPEAKER_00:True. Yeah. With the proliferation of these AI tools, I expect many folks are going to be running into these problems. And if they haven't seen it yet, it's a great opportunity here to tune in to what you're saying. So tell us, I think you describe LLMs as an eager intern, right? And it's helpful, fast, but not accountable. How does adopting that mental model change how leaders should think about responsibility, trust, and risk?
SPEAKER_01:Absolutely. There are a few things you can equate AI to, because some of that is just how AI is tuned out of the box. But here are a couple of problems you have to realize. First of all, when AI makes a mistake, it's not the AI's problem. You are still ultimately the one being held accountable. If AI slips an algorithm problem in and it causes a financial institution to miscalculate billing or things like that, it's not the AI's problem, because it's still you who is held accountable. And two, getting even more to the core of the problem: AI tuned out of the box is like an eager intern that tries to please you by saying yes to everything you ask for, and starts filling in assumptions, because it's afraid of telling you what it assumes and what it doesn't know. And therefore, you don't get the pushback needed. Imagine instead that you hire a very seasoned consultant or someone very knowledgeable. When they come in, they actually ask you more questions than what you ask them to do, because they're really trying to understand the history of things, the context of how the business is built, what you're really trying to achieve, and to spell out every single assumption, or even challenge your thinking, for better or worse. That's the big difference between an intern mentality and someone very senior and very seasoned.
SPEAKER_00:Yeah. Yeah, it sounds like we're shifting from traditional prompt engineering to what might be considered context engineering.
SPEAKER_01:That's correct, because a lot of it is that context matters. And context is a very interesting thing. You can treat it like a finite resource, even though you hear about context windows of 100k or 200k tokens, a recently dropped model with a 1 million token context window, and Google Gemini boasting about a 2 million token context window. But the truth of the matter is, if you remember Bill Gates many years back saying that no one would use over 640 kilobytes, he had not quite foreseen that the more you give people, the more they want to get out of it. So the context window is always a very finite resource that you have to work with. And using this resource is like a Goldilocks problem. Give it too little, and the AI starts making a lot of assumptions. But if you give it too much, you run into a problem called the lost-in-the-middle problem, from a 2023 paper. If you have a very, very long prompt that talks about many things, and out of the 10 documents you pasted in, the fifth one talks about some key security principles that the AI should not forget, guess what? That's very likely to get glossed over and forgotten towards the end.
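To make the budget idea concrete, here is a minimal sketch of treating context like a finite resource. The token heuristic, the cap, and the prompt shape are illustrative assumptions, not any particular vendor's API; the one grounded detail is placing critical instructions at the edges of the prompt, the positions the lost-in-the-middle paper found models attend to most.

```typescript
// A sketch of context budgeting, assuming a hypothetical 100k-token window.
const MAX_CONTEXT_TOKENS = 100_000;

// Rough heuristic: about 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function buildPrompt(criticalRules: string, documents: string[]): string {
  // Reserve room for the rules twice, since we will pin them at both edges.
  const budget = MAX_CONTEXT_TOKENS - 2 * estimateTokens(criticalRules);
  const included: string[] = [];
  let used = 0;
  for (const doc of documents) {
    const cost = estimateTokens(doc);
    if (used + cost > budget) break; // stop before overflowing the window
    included.push(doc);
    used += cost;
  }
  // Repeat the rules at the start and end, where attention is strongest,
  // so they are not lost in the middle of a long prompt.
  return [criticalRules, ...included, criticalRules].join("\n\n---\n\n");
}
```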
SPEAKER_00:Yep, I remember reading that paper a little over a year ago. It's an interesting one. I recommend it. So, you repeatedly emphasize preferring durable artifacts over chat history. Why do specs, plans, and docs suddenly become more valuable in an AI-assisted world, and not less?
SPEAKER_01:Yeah, so I think we've all been there many times when we use AI to build code. Let's dial back a little bit to context. The longer you have a chat session open, the more of the context window is being used. And over time, you can exhaust the whole context window. Even today, AI can automatically compact the window and free up some space again, but then you're at the mercy of the AI automatically picking out the key information you really care about and hopefully compacting that. You're praying for a magic that may or may not happen. So, coming back to context as a finite resource, the good practice is generally: once you finish getting your goal done, exit that session and start over again. Refresh the chat window, so you have a fresh context window. But here's the problem: it is like hiring a new intern every other hour. And every intern is going to have a whole new context you have to feed again. It does not remember what was said before. So something durable becomes more important. Just think about when you run an enterprise: unless you have someone who has worked with you for 30 years and remembers everything you have said, you're likely going to onboard new employees. And what do you have left for them that they can read to get up to speed quickly? These durable assets become very important. Now, surely you can say, here's my Slack chat history from the last couple of months, please read through it. But we probably don't persist that well, and we can treat chat history as more of an ephemeral asset. A durable asset is something you have organized and distilled down so that you don't have to re-explain it every single time, which matters even more with AI, because AI is now such an easy resource that you can onboard on a very frequent basis. Having assets such as your spec and your architectural decisions that you can point the AI to means you don't waste your energy re-explaining things. If you manage this correctly, it turns the AI from a confused partner, confused because of a lack of context, or sometimes because it's overfed with way too much raw material, into a focused worker, because it understands and reads your blueprint. Again, the blueprint is a condensed version of everything. And it comes back to the very same thing about the context window: how you effectively give the AI just enough in its context window so that it can actually perform great work.
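As a rough illustration of onboarding each fresh session from durable artifacts instead of chat history, here is a minimal sketch. The file names and the callModel stub are hypothetical, standing in for whatever blueprint documents and model client a team actually keeps.

```typescript
import { readFileSync } from "node:fs";

// Hypothetical durable artifacts that survive across chat sessions.
const DURABLE_ARTIFACTS = [
  "docs/spec.md",         // what the system must do
  "docs/architecture.md", // how it is built, and why
  "docs/decisions.md",    // trade-offs and constraints to respect
];

// Stand-in for whatever model client the team actually uses (assumption).
async function callModel(systemPrompt: string, task: string): Promise<string> {
  return `[model response to "${task}" given:\n${systemPrompt.slice(0, 80)}...]`;
}

async function freshSession(task: string): Promise<string> {
  // Every new session reads the same distilled blueprint, so nothing
  // depends on an earlier chat surviving compaction or being remembered.
  const blueprint = DURABLE_ARTIFACTS
    .map((path) => `## ${path}\n${readFileSync(path, "utf8")}`)
    .join("\n\n");
  return callModel(`Follow this project blueprint:\n\n${blueprint}`, task);
}
```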
SPEAKER_00:That's fantastic.
SPEAKER_01:So I actually have a very relevant personal experience here. I have a code base with a very nice set of documentation, written and generated along with the AI, that I constantly refer the AI to. Without that kind of documentation, not only does the AI churn through more tokens, it generally starts to hallucinate, and I noticed it constantly needs a lot more nudging to get the work done correctly. Going through that personal experience really led me to realize that a good set of documentation and a good set of durable artifacts is very important for your AI to stay focused and perform well.
SPEAKER_00:So I'm picking up on a pattern here. It sounds like you're pretty strict about breaking work into small, verifiable steps. So why does decomposition matter more with LLMs than it did with traditional software teams?
SPEAKER_01:Yeah. We start with the context window being a finite resource. The more complex a task, the more context it has to churn through. So breaking up a task allows you to leverage this resource more effectively. Beyond that, another dimension of this is less about the context window and more about risk management. Because when you hand over a big piece of work without breaking it down and without some supervision, you're actually enlarging the number of degrees of freedom. And what does this degree of freedom really mean? You're giving more room for someone to improvise without you knowing it. It could be picking a library, or using CSS that you don't like, or using a practice you don't like, or maybe using a database that you already ruled out. It's not where you want to go. These degrees of freedom give the AI room to wander off. By breaking the work down, it's not that you don't trust it, but you trust and verify: every step gives you an early verification point to see whether the train is really staying on track or heading in the wrong direction. It gives you a lot more room to course-correct and to start establishing patterns so that the AI stays on the rails instead of wandering off. Again, because of the context window issues, the AI is sometimes just like an intern who has a squirrel moment and gets distracted.
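A minimal sketch of decomposition as risk management might look like the following, where each small step is verified before the next begins. The Step shape and the runStep stub are hypothetical; the point is the early checkpoint after every step, which is where hallucinated requirements or unwanted libraries get caught while they are still cheap to fix.

```typescript
interface Step {
  description: string;
  verify: () => Promise<boolean>; // e.g. run unit tests, lint, or a manual check
}

// Stand-in for the AI performing this step's work (assumption).
async function runStep(description: string): Promise<void> {
  console.log(`running: ${description}`);
}

async function executePlan(steps: Step[]): Promise<void> {
  for (const [i, step] of steps.entries()) {
    await runStep(step.description);
    // Early checkpoint: stop at the first drift instead of discovering
    // it at the end of a long, unsupervised run.
    if (!(await step.verify())) {
      throw new Error(`Step ${i + 1} failed verification: ${step.description}`);
    }
  }
}
```

In practice, verify could run the test suite, a dependency audit, or a quick human review, whatever makes the checkpoint meaningful for that step.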
SPEAKER_00:For sure. So we've talked offline about the usefulness of specs these days. And they're hard to maintain, right? Specs are often seen as just pure process overhead. How does writing a spec, especially one an LLM will use, actually increase speed and alignment instead of slowing teams down?
SPEAKER_01:The amount of time you put into writing a good spec is pretty much the amount of time you would otherwise have spent debugging later; it speeds up your execution. It's whether you pay your debt up front or pay the debt back later. A good specification also lets you clearly think through what you need ahead of time. I'm not talking about waterfall here; it's basically clarifying your understanding. Because a lot of the time in software and product management, when a spec is not clear and people get suboptimal results, who is to blame? Is it an implementation problem, a question of the intelligence of the engineering community? Or were they simply given something too vague, with too much degree of freedom, not specific enough, so that they built a solution that doesn't really address the most pressing needs and over-indexed on things that actually don't matter? A spec lets you paint a clear picture of what you want, clearly laid out, and it also surfaces a lot of the assumptions you've been taking for granted, because you have all that context in your brain that other people don't inherit.
SPEAKER_00:Yeah. Filling in a lot of gaps that I've had before today's conversation.
SPEAKER_01:Yeah.
SPEAKER_00:This is awesome.
SPEAKER_01:I want to give another example, an interesting experience I had recently with a spec, because a good specification can come with many, many different benefits. We were working late at night on a rush piece of software, and in the middle of building it, new requirements came in, and a new reality came in, that completely flipped our original assumptions about how it should be written. Then I realized this was great, because I actually had a very long written spec, completely agnostic to how we implement it, describing how the system should behave. So I went back to the LLM and said: forget about what we've written; let's look back at the spec. If, instead of this volume of data, we now have 10 times the volume of data, spread across different resources, and we initially envisioned only a 10-second delay when bootstrapping the application, how would you have designed it differently? The AI started looking into this and said: well, you should move this data ingestion from this part of the process to another part of the process, and here are three options I think you should evaluate. The very next three hours are when the magic happened. We ideated different approaches, made sure the AI could explain every single option and that it was heading the right way, and actually rewrote the spec and rewrote the software, because the spec also contained the architectural blueprint of the software. So instead of completely rewriting everything, the AI actually knew where to move the code and what to change. I ended up with software that was pretty much bugless through the transition, which would otherwise have taken days and days of engineering time. That magic happened within three hours. It helped me understand that a spec is not only there as documentation you can refer to; it also turns some decisions into two-way doors, where you can reverse a decision and change it because the situation has changed. So it's not waterfall; it actually makes your process more agile, because you have a very nice source of truth you can patch, create from, or modify, and still get an elegant result.
SPEAKER_00:Yeah, I read a lot these days about success stories of using this, that, or the other AI tool and saving hours, days, weeks, months of developer time, and some of it's overblown, I'll be honest. But when you say it, I one hundred percent believe it. You and I have also talked about verification. It's a line that you just don't cross. What's the most dangerous illusion teams fall into when AI-generated work looks correct but really isn't?
SPEAKER_01:Yes, I think it comes down to how AI is trained. AI is what's called a large language model; it's trained on our language. It works by observing the patterns in how we use our language and mimicking them. And therefore, sometimes it may have an understanding of what the language means, but most of the time it doesn't. It's very good at generating formatted text that looks legit at first glance. But the devil is usually in the details. I've gone through that lesson many, many times, where I glance over a huge wall of text and think the first two lines look good, the last 10 lines look good, let me give it a stamp of approval. The next thing I realize, the software doesn't work. And I cannot just say, well, it's because I used Claude Code. Because I'm the manager of the output of the AI. You're accountable.
SPEAKER_00:Yeah.
SPEAKER_01:Ultimately, I'm the one who's accountable for that. And that puts more pressure on verifying, on making sure that with this resource that comes and goes, because we open up a whole new fresh chat session every time, with no insurance behind it, we're still ultimately responsible for the outcome.
SPEAKER_00:Yeah. Okay. You're consistently framing documentation as a way to reduce future context load. So how does leaving these artifacts behind change what teams and future AI sessions are capable of, let's say, six months from now?
SPEAKER_01:Yeah, so we talked a little bit about the spec being how you want things to be built. Documentation is a reflection of how it is built. Basically, it's a sibling asset to your source code. Imagine a world where we only have the spec, but without the documentation. What that forces the AI to do is read the spec and then churn through a lot of lines of source code, some of it its own work, some of it well documented, some of it work that humans patched around. That means a lot more files the AI has to go through, which eats up the context window. Again, the context window is a finite resource. The more of it is consumed, the more the AI gets distracted and derailed from the original goal, whether that's some of the discipline you mentioned before, some goal it has to attain, or even something it already glossed over because of the lost-in-the-middle effect. So think of documentation as a hyper-condensed version, the cliff notes, of your source code, correctly representing the state of the implementation without overloading it with every single line of detail, only highlighting what's important. Good documentation does not mean you write down every line of source code; it says, if you want to know about this, here is the portion of the source code to go to. The AI can then use its own discretion: this is relevant to my work, let me go read what's necessary, instead of being forced to read end to end. Because today, any time the AI churns through an undocumented file, it has to read the whole file end to end, since it does not know what to select and what not to. Having documentation helps the AI be selective and prioritize its own resources.
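One way to picture documentation as compressed, selective context is a small index that maps topics to condensed summaries and source paths. The entries below are hypothetical, sketching how a model, or a retrieval step in front of it, could pull in only the relevant files instead of reading a code base end to end.

```typescript
interface DocEntry {
  topic: string;          // what a reader might ask about
  summary: string;        // the hyper-condensed cliff note
  sourcePaths: string[];  // where to read only if more detail is needed
}

// Hypothetical index for an imagined code base (assumption).
const DOCS_INDEX: DocEntry[] = [
  {
    topic: "data grid",
    summary: "Grid holds ~1M rows in memory; never iterate all rows on the UI thread.",
    sourcePaths: ["src/grid/virtualScroller.ts", "src/grid/rowCache.ts"],
  },
  {
    topic: "billing",
    summary: "All money math uses integer cents; see decisions.md for rounding rules.",
    sourcePaths: ["src/billing/invoice.ts"],
  },
];

// Match the task to topics first, so only the relevant source files
// ever enter the context window.
function relevantSources(task: string): string[] {
  return DOCS_INDEX
    .filter((entry) => task.toLowerCase().includes(entry.topic))
    .flatMap((entry) => entry.sourcePaths);
}
```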
SPEAKER_00:So when you say documentation, you mean functional documentation of how this system works or is supposed to work?
SPEAKER_01:Yeah, there are a lot of details you can put into documentation. The state of the implementation, for example: I have these 50 files of code; how are they divided? What architectural patterns went into them? And some of the trade-offs we have. For example, a recent piece of work had to make some trade-offs between performance and precision. Without our documentation, that would be difficult for the AI. Let's just say we have a data grid that's going to display a lot of data. From how the AI is trained, a data grid usually only shows about the first 50 records. But our decision is that this data grid actually holds about a million records, all in memory. So if we're not careful, the AI can write code that loops from the first record to the last in one shot, which will practically freeze the UI. Then we have to come back and help the AI understand: these are some of the trade-offs we made in order to handle this volume of data, so please follow this particular pattern. These are things that are generally a lot harder to convey in a first prompt: coming back and saying, I would like you to do such-and-such with a data set I have, while knowing that these are one million records that should be handled very differently from the common scenario. Having documentation captures some of these nuances, unconventional approaches, and often pure trade-off decisions where you traded one thing off against another, and the documentation provides that level of guideline.
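Here is a minimal sketch of the pattern Tangent describes for the million-row grid: chunk the work and yield back to the browser between chunks so the UI never freezes. The chunk size and the processRow stub are illustrative assumptions.

```typescript
const CHUNK_SIZE = 5_000; // assumed batch size; tune to the real workload

// Stand-in for whatever per-row work the grid needs (assumption).
function processRow(row: number): void {
  // ... per-row work ...
}

// Naive version an AI might write from its training prior, which blocks
// the UI thread for the entire million-row pass:
//   rows.forEach(processRow);

// Chunked version: do a slice of work, then hand control back to the
// browser so rendering and input stay responsive between chunks.
function processInChunks(rows: number[], start = 0): void {
  const end = Math.min(start + CHUNK_SIZE, rows.length);
  for (let i = start; i < end; i++) {
    processRow(rows[i]);
  }
  if (end < rows.length) {
    requestAnimationFrame(() => processInChunks(rows, end)); // yield to the UI
  }
}
```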
SPEAKER_00:Yeah, it sounds like there are a lot of consequences to not providing the right documentation. That's correct. And if I think about it, when an organization adopts LLMs but not this kind of workflow, what would you say are some of the failure modes they're setting themselves up for?
SPEAKER_01:I've been through that before, and I felt like I was building a sandcastle. It looks great on a sunny day when it's first built, and the moment a tidal wave comes in, it's gone. So I don't believe in that, unless it's intentionally a one-day show, and AI can accelerate building that. But over time, we're building a skyscraper. We need a solid foundation, and we need discipline, and discipline does not mean one magical prompt that gets you there. It's a lot of engineering rigor. Sometimes it's exciting, sometimes it can be boring, but rigor will help you create a concrete foundation and guardrails that gradually up-level your enterprise one level at a time. Because AI is very good at churning out a lot of code, and sometimes I find that AI is eager to churn out more code than I actually wanted, but that does not really mean you're accomplishing your goal. Through this process, if you're not careful, you actually lose institutional knowledge: why this decision was made, why this framework was used, what trade-offs we made with all these finite resources, what we should be optimizing our decisions towards. A lot of that knowledge is key for one AI to hand off to another, for a newer generation of AI to piggyback on top of the older generation, meaning you get a knowledge upgrade in how you would tackle a problem differently. So instead of being left with very little institutional knowledge, you're actually accumulating knowledge, which is key for the future AI, your future virtual coworker in many different ways, to have something it can build upon. I look at this as process over product. The product is only one snapshot of what you have done, but the process is what endures over time. And AI does not diminish traditional engineering excellence or good process. What I've actually found is that AI amplifies the penalty of not having this discipline. Traditionally it took a long time from the moment you had a thought, to writing it down, changing it, having the engineering team build the software, and then seeing the end result; it was many, many months from the time I had the idea. Now this time frame is completely compressed, and I find that good discipline yields good results. It's almost like eating the karma of the homework we didn't do before; now we actually end up seeing it down the road.
SPEAKER_00:So stepping back, what does this workflow say about the kind of leaders and teams we will need as intelligence becomes a commodity?
SPEAKER_01:I think in the end, it's not just about getting a result. It's about how we improve the reliability of the results, and how over time we can scale those reliable results. A lot of this is not about the prompt, and it's not about one or two assumptions or one or two magical moments. It's about all the recurring rigor of making assumptions explicit. You iterate over the spec and make sure the spec is correct. And even once the spec is correct, you verify your outcome. All of these are key disciplines that enable the AI to stay within the guardrails, creating a pattern that the future AI, the future thinking partner, can continue to follow to level up your enterprise. A big part of this is bringing the AI in as a very knowledgeable resource, turning it from something that always says yes to what you ask for into a critical thinker. Because the moment you invite it in as a critical thinker, you open up yourself and your assumptions, and not only does it turn out the output you need, it actually goes back to your original idea and offers you insights and ideas, even blind spots you haven't considered, because it has read through a lot of good material, a lot of things that could have added to your idea, but it isn't tuned to give you that input unless asked. That kind of good use amplifies the outcome. I always look at it this way: AI is an accelerator, an amplifier, of what your discipline is and what your management style is.
SPEAKER_00:There it is. That's amazing. Well, thank you, Tangent. I've learned so much from you today. Thank you, Jason. You had very insightful questions.
SPEAKER_01:I learned a lot. You're like a superintelligent AI probing into things I've learned, forcing me to even think about how I should explain them. And through a lot of that, I started verifying my own assumptions too.
SPEAKER_00:Yeah, we work well together. My imponderables turn into real actions between us. I think it's great. Thank you. All right, folks. As I like to say, until next time, stay curious, keep innovating, and lead with judgment as intelligence becomes ever more abundant. Cheers. Cheers.