<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Adventures in Applied Technology]]></title><description><![CDATA[Writing about practical applications of emerging technology. ]]></description><link>https://vjswami.com</link><image><url>https://substackcdn.com/image/fetch/$s_!gXVz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97be074b-211f-4d8d-8d66-fd2d56fed146_608x608.png</url><title>Adventures in Applied Technology</title><link>https://vjswami.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 03:41:15 GMT</lastBuildDate><atom:link href="https://vjswami.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Vijay Swami]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[vjtechstack@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[vjtechstack@substack.com]]></itunes:email><itunes:name><![CDATA[Vijay Swami]]></itunes:name></itunes:owner><itunes:author><![CDATA[Vijay Swami]]></itunes:author><googleplay:owner><![CDATA[vjtechstack@substack.com]]></googleplay:owner><googleplay:email><![CDATA[vjtechstack@substack.com]]></googleplay:email><googleplay:author><![CDATA[Vijay Swami]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Lessons Learned from Why Agentic AI Projects Fail in Production (Deep Dive)]]></title><description><![CDATA[This is a deeper dive into my LinkedIn post on the pattern that keeps repeating itself with failed Agentic AI projects in the Enterprise....]]></description><link>https://vjswami.com/p/lessons-learned-from-why-agentic</link><guid 
isPermaLink="false">https://vjswami.com/p/lessons-learned-from-why-agentic</guid><dc:creator><![CDATA[Vijay Swami]]></dc:creator><pubDate>Thu, 02 Apr 2026 20:31:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gXVz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97be074b-211f-4d8d-8d66-fd2d56fed146_608x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8230; not because the models are bad, but because teams are trying to prompt-engineer their way out of writing actual control flow. We touched on this in the previous post but here I want to get technical about why this happens, what the actual failure modes look like, and what architecture works instead. This isn&#8217;t theoretical! Everything is based on systems we&#8217;ve built, evaluated, and shipped to production in regulated industries where &#8220;the AI decided&#8221; is not an acceptable answer for anything.</p><div><hr></div><h3><strong>What Agentic Actually Means Architecturally</strong></h3><p>Before getting into failure modes, it&#8217;s important to be clear on a few things because marketing folks are doing what marketing folks do and have turned the &#8220;agentic&#8221; term into something that means nothing and something that means everything simultaneously.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adventures in Applied Technology! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>A deterministic pipeline is a fixed code path: Input &#8594; Stage 1 &#8594; Stage 2 &#8594; Stage 3 &#8594; Output. Same input, same execution path, every time. You can unit test every stage, you can trace from input to output, you can reason about behavior. When it breaks you know exactly which stage failed because each one has a defined input and output contract. It&#8217;s boring, it&#8217;s not sexy, but it works.</p><p>An agentic system is a loop: Input &#8594; LLM reasons about what to do &#8594; selects and calls a tool &#8594; observes the result &#8594; LLM reasons again &#8594; selects another tool &#8594; repeat until the LLM decides it&#8217;s done. The LLM is the orchestrator AND the decision-maker AND the executor. Each invocation might take a different path through the tool graph, and thus the behavior is emergent, not designed.</p><p>The issue with emergent behavior: you can&#8217;t unit test it. You can&#8217;t write an assertion that says &#8220;given this input, the agent will call tools in this order.&#8221; You can only sample it statistically and hope the distribution of outcomes is acceptable. That&#8217;s fine for a research prototype, and it&#8217;s fine for your vibe-coded TODO list or personal fitness app. 
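</p><p>To make the contrast concrete, here&#8217;s a minimal sketch of the two shapes. Every name is hypothetical (this is not the production system): the pipeline can be asserted against; the loop&#8217;s path depends on what the model decides at each step.</p>

```python
# Two shapes, side by side. All names are hypothetical sketches.

def ocr_stage(pages):
    # stand-in for real OCR: text blocks with confidence scores
    return [{"text": p, "confidence": 0.99} for p in pages]

def match_stage(blocks):
    # stand-in for pattern matching against known codes
    return [b["text"].upper() for b in blocks]

def pipeline(pages):
    # fixed path: same input, same execution path, every time
    return match_stage(ocr_stage(pages))

def agent_loop(pages, llm_decide, tools, max_steps=10):
    # emergent path: the model picks the next tool on every iteration
    history = []
    for _ in range(max_steps):
        action = llm_decide(pages, history)  # probabilistic each call
        if action == "done":
            break
        history.append(tools[action](pages))
    return history  # may differ run to run with the same input
```

<p>The pipeline can carry a hard assertion on every run; the equivalent &#8220;assertion&#8221; for the loop would be a statistical claim about the distribution of <code>history</code> values.</p><p>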
It&#8217;s not fine for a system processing clinical lab data in a HIPAA-regulated environment.</p><p>So the question to be answered is this: <strong>what parts of your system need to be deterministic vs agentic?</strong> This is the single most important design decision in applied AI right now, and making that call well is a key skill to develop.</p><div><hr></div><h3><strong>The Three Failure Modes (With Specifics)</strong></h3><p>In my experience so far, agentic systems break in production in three specific &amp; diagnosable ways.</p><h3><strong>Failure Mode 1: Decision Drift</strong></h3><p>In an agentic loop, the LLM makes a series of decisions: which tool to call, what parameters to pass, how to interpret the result, whether to retry, when to terminate. Each of these is a probabilistic inference. Token generation is stochastic; temperature, sampling, even the order of tokens in the context window can shift the decision.</p><p>Run the same clinical lab data through a full agentic extraction loop ten times. <strong>You won&#8217;t get the same execution path ten times!</strong> The agent might read page 3 before page 1 on run 4, it might choose a different OCR provider on run 7, it might declare extraction &#8220;complete&#8221; earlier on run 9 because the phrasing in its chain-of-thought happened to trigger an end-of-task heuristic.</p><p><strong>When we evaluated a full agentic architecture against the deterministic pipeline, determinism dropped from 100% to roughly 70% with the same inputs!</strong> The final extraction results were <em>usually</em> consistent, but &#8220;usually&#8221; is not a word that survives a compliance audit. When you need to answer to a regulator, the answer has to be <em>yes</em>, not &#8220;statistically, most of the time.&#8221;</p><h3><strong>Failure Mode 2: Cost Multiplication at Scale</strong></h3><p>Every iteration of an agentic loop is an LLM inference call. 
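</p><p>The arithmetic is worth writing down before looking at the mechanics. A back-of-envelope sketch using the per-document figures from this system (the monthly volume is illustrative):</p>

```python
# Back-of-envelope cost math using the per-document figures from this post.
DETERMINISTIC_COST = 0.02                         # $/doc: OCR fees, zero LLM calls
AGENTIC_COST_LOW, AGENTIC_COST_HIGH = 0.07, 0.17  # $/doc: 5-15 LLM calls

docs_per_month = 50_000  # illustrative enterprise volume

baseline = DETERMINISTIC_COST * docs_per_month
agentic_low = AGENTIC_COST_LOW * docs_per_month
agentic_high = AGENTIC_COST_HIGH * docs_per_month

# the 3.5x-8.5x multiplier that a dozens-of-documents POC never surfaces
assert round(agentic_low / baseline, 1) == 3.5
assert round(agentic_high / baseline, 1) == 8.5
```

<p>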
Each call consumes input tokens (the full context window + tool results) and generates output tokens (reasoning + tool selection). The token count grows with each iteration because the context window grows.</p><p>A deterministic pipeline processes clinical lab data in a single pass through a fixed sequence: render pages at 300 DPI, send to cloud OCR, pattern-match against LOINC codes, parse values and reference ranges, score abnormalities, compute trends. Total cost: $0.02 per document, almost entirely OCR API fees. Zero LLM calls.</p><p>The full agentic alternative we evaluated required 5-15 LLM calls per document as the agent needs to: analyze the document structure, decide on an extraction strategy, execute extraction calls, validate results, handle any edge cases it encounters, and determine when it&#8217;s done. Each call includes the growing conversation context. Per-document cost: $0.07-0.17.</p><p> At enterprise scale (tens of thousands of documents) the agentic approach multiplies costs by 3.5-8.5x with no improvement in accuracy. The POC doesn&#8217;t catch this because POCs process dozens of documents and the per-unit economics are invisible until you multiply them by real volume.</p><h3><strong>Failure Mode 3: The Debugging Black Hole</strong></h3><p>This is the failure mode that burns the most engineering hours and is the hardest to explain to folks who haven&#8217;t lived it.</p><p>The deterministic pipeline has discrete stages with observable boundaries. OCR stage: input is page images, output is text blocks with per-line confidence scores. Pattern matching stage: input is raw text, output is structured lab values matched against LOINC codes. Scoring stage: input is extracted values, output is abnormality flags and clinical priority scores. When something is wrong, you check the output at each boundary. The failure is always localizable: OCR misread the value, or the pattern didn&#8217;t match a format variation, or the reference range was wrong. 
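</p><p>A sketch of what an observable boundary looks like in code; the field names and the toy parser are illustrative, not the production contract:</p>

```python
# What "observable boundaries" means in practice: each stage emits a
# structured record you can assert on. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class ExtractedValue:
    name: str
    value: float
    unit: str
    ref_low: float
    ref_high: float
    confidence: float

def parse_stage(text: str) -> ExtractedValue:
    # toy parser for strings like "Creatinine 1.8 mg/dL (0.7-1.3)"
    name, value, unit, rng = text.replace("(", "").replace(")", "").split()
    low, high = rng.split("-")
    return ExtractedValue(name, float(value), unit, float(low), float(high), 1.0)

# A boundary test: if this fails, the bug is in *this* stage, nowhere else.
rec = parse_stage("Creatinine 1.8 mg/dL (0.7-1.3)")
assert rec.value == 1.8 and rec.ref_high == 1.3
```

<p>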
You fix the stage + you add a test, and you are done.</p><p>Now imagine debugging an agentic extraction that returned wrong results: open a multi-turn conversation log and find that the agent read page 1, then decided to call the table extraction tool with specific parameters, then interpreted the results, then decided to read page 3 for additional context, then attempted to validate a creatinine value, then second-guessed itself and re-extracted. Somewhere in that chain of reasoning, it misinterpreted &#8220;creatine kinase&#8221; as &#8220;creatinine.&#8221; Why? Because the token probabilities, given the accumulated context at that point in the conversation, made that the most likely completion. There&#8217;s no root cause in the traditional sense. <strong>There&#8217;s no bug to fix; this is by design, this is how LLMs work!</strong> There&#8217;s a probability distribution that happened to produce the wrong output on this run.</p><p>In a regulated environment, this isn&#8217;t just an engineering inconvenience. An audit trail requires a traceable chain of custody from input to output. &#8220;The model reasoned differently on this run&#8221; is not a valid audit finding. You need deterministic traceability: this value was extracted by this pattern from this text at this position with this confidence score. Agentic loops break that chain by design.</p><div><hr></div><h3><strong>The Root Cause: Perception vs. Judgment vs. Orchestration</strong></h3><p>All three failure modes trace back to one architectural fact: the agentic framework puts the LLM in charge of three fundamentally different jobs that have fundamentally different reliability requirements.</p><p>Perception is where unstructured input becomes structured data, i.e. reading a document, classifying a document type, extracting entities, matching patterns, etc. 
This is inherently fuzzy because the inputs are unstructured and variable, and in this problem domain LLMs are world-class &amp; <strong>exactly what they were designed for.</strong></p><p>Judgment is where structured data becomes decisions: Is this value abnormal? What&#8217;s the clinical priority? Should this be flagged for urgent review? The inputs are now structured (numbers, classifications, confidence scores) and the logic should be deterministic. Same inputs, same decision, every time.</p><p>Orchestration is the control flow connecting everything: Which stage runs next? What happens on failure? When do you retry vs escalate? This must be deterministic code, not LLM inference, because control flow is not a perception task.</p><p>Most agentic frameworks mash all three together in a single LLM loop. The agent perceives AND judges AND orchestrates. Every &#8220;AND&#8221; in that sentence is a place where non-determinism creeps from the perception layer (where it&#8217;s acceptable) into the judgment and orchestration layers (where it&#8217;s not).</p><div><hr></div><h3><strong>What the Best AI Engineers Have Been Doing for 3+ Years</strong></h3><p>The pattern that actually works in production is what the applied ML community has been calling &#8220;fuzzy classifiers with deterministic wrappers.&#8221; This isn&#8217;t new; as I&#8217;ve learned, good AI engineers have been building this way since before the current LLM wave, but the broader market is just now catching up with the agentic hype cycle.</p><p>The core idea: constrain the LLM to very specific, objective classification tasks. Don&#8217;t ask &#8220;is this code good?&#8221; or &#8220;should this patient be reviewed?&#8221; These are holistic judgment calls, and the LLM will give you a different answer depending on how you frame the question. 
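</p><p>One way to enforce that constraint in code: define the structured answer as a schema and validate whatever the model returns against it. A sketch, where <code>ask_model</code> is a hypothetical LLM call returning JSON:</p>

```python
# A constrained classification contract: narrow question, structured answer,
# schema enforced by code. `ask_model` is a hypothetical LLM call.
import json

def classify_creatinine_above(report_text, threshold, ask_model):
    prompt = (
        f"Does this report contain creatinine above {threshold} mg/dL? "
        'Respond only with JSON: {"answer": "yes" or "no", "confidence": 0-1}\n'
        + report_text
    )
    parsed = json.loads(ask_model(prompt))     # malformed output fails loudly
    assert parsed["answer"] in ("yes", "no")   # the code, not the model,
    assert 0.0 <= parsed["confidence"] <= 1.0  # decides what is acceptable
    return parsed
```

<p>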
I&#8217;ve literally asked the same model to review the same authentication middleware 15 minutes apart with two different framings and gotten completely opposite conclusions. Ask it to find security issues and it returns five concerns. Ask it if the code is ready to ship and it fluffs my ego about the elegant error handling. Again, this is not a bug; this is how probabilistic text generation works.</p><p>Instead, constrain the classification: &#8220;Does this report contain creatinine above 1.5 mg/dL? Return yes/no + confidence 0-1.&#8221; &#8220;Does this function handle all error paths? Return yes/no + list of unhandled exceptions.&#8221; &#8220;Does this document match source pattern A or B? Return classification + confidence.&#8221; These are perception tasks and they need to be narrow &amp; objective with structured output. LLMs are reliable at this because we&#8217;ve removed the ambiguity that makes holistic judgment unreliable.</p><p>THEN implement the value chain of what&#8217;s good/bad OUTSIDE the LLM loop, in deterministic code. This is where the skill of context engineering (an entire discipline of its own!) comes in: not just crafting prompts, but designing the boundary between what the model classifies and what the code decides.</p><div><hr></div><h3><strong>What This Looks Like in a Real System</strong></h3><p>Let me walk through the architecture of the healthcare system we shipped to show how this separation works in practice.</p><h3><strong>Perception Layer: Cloud OCR + LOINC Pattern Matching</strong></h3><p>The system processes clinical lab data from the largest lab companies in the US. The perception pipeline has three stages:</p><p><strong>Stage 1: Digitization. </strong>PDF pages rendered at 300 DPI, sent to cloud OCR (AWS Textract, Google Document AI, or Azure Document Intelligence, with runtime provider switching). Returns text blocks with per-line confidence scores plus table structures. 
This is the fuzzy part as the Cloud OCR is probabilistic &amp; confidence varies by image quality, layout, font, format, etc.</p><p><strong>Stage 2: LOINC code matching.</strong> LOINC (Logical Observation Identifiers Names and Codes) is a universal standard for clinical observations. When the report includes codes like <em>(4548-4)</em>, extraction is a deterministic regex lookup: find the parenthesized code, parse the adjacent numeric value, match to a canonical lab definition; no ambiguity. When LOINC codes aren&#8217;t present, the system falls back to name-based matching with a curated variation table and fuzzy substring logic and explicitly flags those results as needing verification. The system knows when its perception layer is less reliable and says so.</p><p><strong>Stage 3: Value parsing.</strong> Extract numeric values, units, reference ranges. Handle edge cases deterministically: values with &lt; or &gt; prefixes (below/above detection threshold), units in various formats (mg/dL, mIU/L, nmol/L), reference ranges expressed as upper-only, lower-only, or full ranges. All of this is parsing logic, not inference.</p><p>The entire perception layer outputs structured data: lab name, numeric value, unit, reference range, confidence score, extraction method used. That structured output is the contract between the fuzzy layer and the deterministic layer.</p><h3><strong>Judgment Layer: Deterministic Clinical Decision Logic</strong></h3><p>Everything downstream of that structured output is deterministic code. No LLM calls, probability distributions or prompt sensitivity.</p><p><strong>Abnormality detection: </strong>Compare extracted value against reference range. Creatinine 1.8 with range 0.7-1.3? Abnormal.... this is a numeric comparison &amp; it doesn&#8217;t need a model.</p><p><strong>Trend analysis: </strong>Track values over multiple collection dates. Compute absolute and percentage change. 
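</p><p>That computation, together with the classification rules this system hardcodes, fits in a few lines. A sketch (function names are mine; the thresholds mirror the rules described here):</p>

```python
# Trend classification as plain arithmetic. The 5% stability band, the
# lower-is-better set, and the reversal count mirror this system's rules.
LOWER_IS_BETTER = {"hba1c", "ldl", "creatinine", "triglycerides"}

def classify_trend(test_name, values):
    first, last = values[0], values[-1]
    pct_change = (last - first) / first * 100
    if abs(pct_change) < 5:
        return "stable"
    going_down = pct_change < 0
    if test_name.lower() in LOWER_IS_BETTER:
        return "improving" if going_down else "worsening"
    return "worsening" if going_down else "improving"

def is_fluctuating(values):
    # count direction reversals across consecutive measurements
    diffs = [b - a for a, b in zip(values, values[1:])]
    reversals = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    return reversals >= 2
```

<p>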
If the change is less than 5%, the trend is &#8220;stable.&#8221; For tests where lower is better (HbA1c, LDL, creatinine, triglycerides), a negative change means &#8220;improving.&#8221; For tests where higher is better (HDL, eGFR, hemoglobin), a positive change means &#8220;improving.&#8221; Fluctuation detection counts direction reversals across measurements; two or more reversals flag the pattern. All of this is in a deterministic function with hardcoded clinical rules drawn from published medical guidelines, not learned weights from an LLM.</p><p><strong>Clinical priority scoring:</strong> This is the highest-stakes decision in the system. Each test gets a score from 1 to 10: worsening + abnormal = 10 (urgent attention). Worsening only = 7 (monitor closely). Abnormal + stable = 5 (continue monitoring). Fluctuating = 4 (check medication adherence). Improving + still abnormal = 3 (continue therapy). Improving + normal = 1 (good progress).</p><p>When a clinician asks &#8220;why was this flagged?&#8221; the answer is fully traceable: &#8220;Creatinine extracted at 1.8 mg/dL via LOINC code 2160-0, reference range 0.7-1.3, flagged abnormal. Three measurements over 90 days showing 12% upward trend. Priority score 10: worsening and abnormal.&#8221; Every element is deterministic &amp; auditable with zero model reasoning to reconstruct.</p><h3><strong>Orchestration Layer: Deterministic Control Flow</strong></h3><p><strong>Provider selection: </strong>check if the configured cloud provider is available; if not, fall back to the next configured provider. This is an if-statement, not an LLM decision. <strong>Confidence routing:</strong> if OCR confidence is below threshold, flag for human review. If LOINC code extraction succeeds, use it; if not, fall back to name-based extraction and mark as needing verification. 
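</p><p>Sketched in code, with an illustrative threshold and routing labels:</p>

```python
# Orchestration as plain control flow: provider fallback and confidence
# routing are if-statements, not LLM decisions. Names are illustrative.
CONFIDENCE_THRESHOLD = 0.85

def pick_provider(providers, is_available):
    for p in providers:  # first configured provider that is up
        if is_available(p):
            return p
    raise RuntimeError("no OCR provider available")

def route(result):
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        return "human_review"
    if result["method"] == "loinc_code":
        return "accept"
    return "accept_needs_verification"  # name-based fallback path
```

<p>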
<strong>Retry logic:</strong> if cloud OCR fails, retry with exponential backoff then fall back to local regex-only extraction.</p><p>All of this is control flow and <em>NONE of it benefits from LLM reasoning</em>. An agentic system would have the LLM &#8220;decide&#8221; which provider to use, &#8220;decide&#8221; whether to retry, &#8220;decide&#8221; if results are good enough. Each of those decisions introduces non-determinism into a layer where determinism is the entire point.</p><div><hr></div><h3><strong>The Quantitative Proof</strong></h3><p><strong>But don&#8217;t take my word for it. </strong>We didn&#8217;t just build the deterministic architecture and assume it was better, we designed and evaluated four progressively sophisticated agentic alternatives and compared them head to head.</p><p><strong>Level 0: Deterministic fallback chain (no AI). </strong>Predefined retry strategies i.e. try AWS, then Azure, then GCP, then enhanced OCR settings, then pure regex. Fully deterministic, fully testable... this is the baseline and it works. The reason I include it is to make a point: this is what you&#8217;re comparing against when you start adding LLM decision-making into the loop. Everything after this introduces non-determinism, and the question is whether that non-determinism earns its cost.</p><p><strong>Level 1: LLM-assisted strategy selection.</strong> Use Claude to analyze each document and recommend which OCR provider and extraction approach to use. The problem: we&#8217;re paying for an LLM inference call to make a decision that a single conditional handles reliably. The LLM added $0.003-0.01 per document in API costs, 1-3 seconds of latency, and non-deterministic strategy selection, with zero improvement in extraction quality.</p><p><strong>Level 2: Full agentic loop.</strong> The LLM controls the entire extraction through tool use. 
Reading pages, choosing OCR providers, applying extraction patterns, validating results, deciding when it&#8217;s done, resulting in 5-15 LLM calls per document. Processing time: 85-120 seconds vs a 72.7-second baseline. Cost per document: $0.07-0.17 vs $0.02. Determinism: ~70% vs 100%. Additional code: 400+ lines of tool definitions, conversation management, and error handling for mid-conversation failures. Accuracy improvement: exactly zero.</p><p><strong>Level 3: Hybrid.</strong> Deterministic primary + agentic fallback. Architecturally the most reasonable, BUT maintaining an entire agentic subsystem for a fallback path that never triggers is pure lunacy! 450 additional lines of code, two complete systems to maintain, full test coverage required for both paths.... for zero benefit. Nah.</p><p><strong>Every agentic option made the system worse on every metric.</strong> Not marginally worse, but significantly worse. The full agentic loop was up to <strong>65% slower</strong>, up to <strong>8.5x more expensive</strong>, and 30% less deterministic with absolutely <strong>zero accuracy improvement.</strong></p><p>The agentic approaches didn&#8217;t fail because the models were bad; rather, they failed because the problem didn&#8217;t need an agent. Fixed lab sources + detectable formats + LOINC codes + known reference ranges.... that&#8217;s a stability problem and agents solve variability problems. Applying the variability solution to a stability problem <strong>adds cost and risk with no upside.</strong></p><div><hr></div><h3><strong>Why Teams Keep Getting This Wrong</strong></h3><p>If the failure modes are this predictable and the quantitative evidence is this clear, why do teams keep building agentic systems that break in production?</p><p>First, <strong>agentic sells better.</strong> &#8220;We built an AI agent that autonomously processes your clinical documents&#8221; gets funded. 
&#8220;We built a deterministic pipeline with AI-powered OCR at the perception layer&#8221; does not. The exec dashboard for an agentic system looks like something out of a sci-fi movie while the deterministic pipeline looks like a boring flowchart. This is a real problem because it means capital flows toward architectures that demo well and not architectures that survive production! You see this all over the place on the interwebs right now. You only have to look as far as the millions of OpenClaw posts.</p><p>Second, <strong>vibe coding has conditioned a generation of developers to throw everything at the LLM</strong>. When your primary tool is an LLM+agentic harness (Claude Code) the reflex is to prompt for a holistic answer rather than decompose the problem into what the model should classify and what deterministic code should decide. That decomposition requires deep domain knowledge: you have to understand your problem space well enough to enumerate specific, objective criteria. You can&#8217;t outsource that understanding to the model. This is context engineering, not prompt engineering, and it&#8217;s a fundamentally different skill.</p><p>Third, it requires admitting that <strong>the LLM is not as smart as it seems.</strong> When you separate perception from judgment, you&#8217;re acknowledging that the model&#8217;s &#8220;understanding&#8221; is pattern matching, not comprehension. The model almost certainly knows that a creatinine of 4.0 is clinically dangerous, as that&#8217;s basic medical knowledge well-represented in training data, but will it flag it the same way every time? Will the surrounding context, the phrasing of the prompt, the other values in the report shift whether it calls it &#8216;critical&#8217; vs &#8216;elevated&#8217; vs &#8216;worth monitoring&#8217;? A deterministic comparison against a reference range + other variables gets it right every time, with zero variance. 
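</p><p>That comparison, written out (the range values come from the examples in this post):</p>

```python
# The deterministic comparison: one line of logic, zero variance.
def is_abnormal(value, ref_low, ref_high):
    return value < ref_low or value > ref_high

# creatinine 4.0 against the 0.7-1.3 mg/dL reference range used in this post
assert is_abnormal(4.0, 0.7, 1.3)
assert not is_abnormal(1.0, 0.7, 1.3)
```

<p>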
And <strong>when a regulator asks</strong> how the decision was made you point to one line of logic instead of a probability distribution.</p><p>The people still trying to prompt-engineer an LLM into making holistic judgment calls like &#8220;is this code good,&#8221; &#8220;should this patient be reviewed urgently,&#8221; &#8220;is this extraction complete&#8221; are building systems that work in demos and break in production. The LLM will wax poetic about how great the code is if you ask it that way. Constrain it to &#8220;does this function handle all error paths, return yes/no + list of unhandled exceptions&#8221; and now you have something you can build actual control flow around.</p><div><hr></div><h3><strong>When Agentic Earns Its Place</strong></h3><p>I want to be clear that there are real problems where fully agentic architectures earn their complexity. I&#8217;ll write about that in the future based on the experiences of another solution we developed.</p><p>If the system needed to support hundreds of unpredictable formats instead of a group of semi-standardized sources, the LOINC pattern matching would break. The name variation table would be unmanageable, the input variability would exceed what deterministic rules can handle and <strong>that&#8217;s when an agentic approach at the perception layer starts earning its trade-offs:</strong> the flexibility to reason about unfamiliar document structures, adapt extraction strategies to novel formats, and handle edge cases that no reasonable set of rules would cover.</p><p>If the scope expanded to radiology narratives, pathology reports, or handwritten clinical notes, the entire perception layer would need to change as deterministic pattern matching doesn&#8217;t work on unstructured narrative text. The variability is too high, the context too important, the semantic interpretation too nuanced. 
A model-heavy perception layer makes sense here.</p><p><strong>But even in those scenarios, the judgment layer stays deterministic. </strong>The clinical priority scoring &amp; abnormality detection, trend analysis and the entire orchestration stays in code. We can make the classifiers more flexible to handle higher variability at the perception layer but we haven&#8217;t given the model the car keys to the decision layer.</p><p><strong>The framework is simple: agentic solves variability problems while deterministic solves stability problems.</strong> Most production systems I&#8217;m seeing are stability problems wearing a variability costume because someone saw a compelling agentic demo and assumed that&#8217;s the right architecture for everything.</p><div><hr></div><h3><strong>The Practitioner&#8217;s Audit</strong></h3><p>If you&#8217;re running an agentic AI project approaching production, or troubleshooting one that&#8217;s already struggling, here&#8217;s what I&#8217;d evaluate:</p><ol><li><p><strong>Map every LLM call to a layer.</strong> Is the model doing perception (classifying, extracting, pattern-matching structured output)? Or is it doing judgment (deciding, routing, scoring, escalating)? If it&#8217;s doing judgment, that&#8217;s where your production failures will come from. Move those decisions into deterministic code.</p></li><li><p><strong>Check your orchestration. </strong>Is the LLM deciding which tool to call next, or is control flow in code? Every LLM-controlled orchestration decision is a place where decision drift accumulates. If a conditional or a state machine can make the same routing decision, use that.</p></li><li><p><strong>Test for determinism!</strong> Run the same input 10+ times. Do you get the same execution path? The same intermediate states? The same output? 
If not, identify which layer is introducing variance and whether that variance is in perception (acceptable) or judgment/orchestration (not acceptable).</p></li><li><p><strong>Run the cost math at production volume.</strong> Multiply your per-document LLM token consumption by realistic monthly volume and compare against a deterministic alternative where the LLM is constrained to specific classification calls. If the agentic approach is 3x+ more expensive with no accuracy improvement, you have your answer.</p></li><li><p><strong>Ask the debugging question.</strong> When the system produces a wrong result, can you localize the failure to a specific stage with a defined input/output contract? Or do you need to read through conversation logs? If the latter, your debugging costs in production will dominate your engineering time.</p></li><li><p><strong>Classify your problem. </strong>Is the core challenge high input variability (many unknown sources, unpredictable formats, novel document types)? Or is it a bounded input space with known structure and stable rules? If the latter, the deterministic approach wins. Every. Single. Time.</p></li></ol><div><hr></div><h3><strong>The Architecture That Ships</strong></h3><p>&#8220;Agents everywhere&#8221; is 100% not the answer but neither is &#8220;no agents anywhere.&#8221; <strong>The architecture that actually ships to production and survives at scale is a hybrid:</strong> deterministic pipelines for the stable guardrails, agents (or more precisely, LLM-powered fuzzy classifiers) for the genuinely high-variation perception work, and deterministic wrappers around every model output that touches a decision.</p><p>Fuzzy classification at the perception layer + deterministic wrappers at the decision layer. Control flow in code not in the LLM loop. 
Confidence scores that route to human escalation when the model is uncertain, coupled with an audit trail that traces from input to output without requiring anyone to reconstruct a model&#8217;s &#8220;reasoning.&#8221;</p><p>The hard part isn&#8217;t building this; it&#8217;s having the domain expertise to decompose your problem correctly, to know which parts are perception tasks where the model&#8217;s flexibility adds value, and which parts are judgment tasks where determinism is non-negotiable. That decomposition requires you to understand your domain deeply enough to enumerate specific, objective criteria. <strong>You can&#8217;t outsource that to the model</strong> and you can&#8217;t prompt-engineer your way around it. <strong>You have to actually do the hard work</strong> of defining what good looks like.</p><p>And IME, that&#8217;s exactly where many enterprise AI projects are breaking down. Not at the model layer or data layer.... but at the architecture layer. At the boundary between what should be fuzzy and what should be deterministic, which is, critically, the decision about where the LLM loop ends and control flow begins.</p><p>Get that boundary right and everything else follows. Get it wrong and no amount of model capability fixes it.</p><h2><strong>You can&#8217;t outsource your thinking to AI. </strong></h2><p><strong>(^^^^^^^^^^^^^^^^^I will be saying this over and over, get used to it.)</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adventures in Applied Technology! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Lessons learned from shipping a regulated enterprise Healthcare app with Claude Code]]></title><description><![CDATA[In my last post I talked about how AI tools like Claude Code are redefining the sales process and sales cycle expectations....]]></description><link>https://vjswami.com/p/lessons-learned-from-shipping-a-regulated</link><guid isPermaLink="false">https://vjswami.com/p/lessons-learned-from-shipping-a-regulated</guid><dc:creator><![CDATA[Vijay Swami]]></dc:creator><pubDate>Thu, 26 Mar 2026 22:20:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6Dkd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8230;. moving from persuasion to proof based selling changes the conversation from &#8220;do we like this?&#8221; to &#8220;how do we make this real in production?&#8221; </p><p>This post is all about what actually got built and more importantly the lessons learned and some key insights gained when building enterprise software with AI coding tools. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adventures in Applied Technology! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>First let me start off by saying, very emphatically, based on my experience <strong>you are not &#8220;vibe coding&#8221; real enterprise software</strong>. Both from the perspective of building something that needs to run in production environments in highly regulated industries AND the deep domain knowledge required to even know what to build and how to build it. I purposely picked this industry/project as the place to start because 1- it&#8217;s the most relevant to my own interest (enterprise software); and 2- <em>I&#8217;m interested in solving the hardest problems</em>, not the easiest ones. There is nothing more rigorous than selling and shipping into enterprise healthcare where there are lives on the line. Building something like this requires deep understanding of things like the clinician workflow, the dominant lab sources, LOINC coding, lab-specific reference ranges, multi-date panel correlation, what a HIPAA auditor actually looks at ... down to &#8220;trivial&#8221; things like TSH lab values needing two decimal places because that&#8217;s clinically meaningful. My perspective on what is going to happen to the enterprise software/SaaS industry is highly informed by this project, but I&#8217;ll save that for another day. </p><p>So what actually was built? I want to be specific about this because the details are what separate this from a vibe-coded demo. In a nutshell, the system automates the extraction of lab values from relevant lab sources (Quest, LabCorp, etc) in the exact manner these clinicians need, saving over 10 hours per week per clinician.</p><p>No Vibe Coding Here. 
Everything on this page (the meat of the product) requires domain knowledge of what to build and how:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6Dkd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6Dkd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 424w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 848w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 1272w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6Dkd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png" width="776" height="884" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:884,&quot;width&quot;:776,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:85868,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://vjswami.com/i/192257178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6Dkd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 424w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 848w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 1272w, https://substackcdn.com/image/fetch/$s_!6Dkd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42b0ba6b-702b-4a59-bdbf-f58dd0726387_776x884.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 
0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Diabetes vs Thyroid Presets. Reference ranges. LOINC codes. MD Summary format (preference for a particular group). This also has the ability to push right into the EHR, or the clinician can hit the &#8220;Copy&#8221; button, which allows them to paste it directly in themselves as they get comfortable with the workflow. This is the human-in-the-loop component, and it&#8217;s critical to understand the best way to integrate large changes into a workflow from a human perspective. I know everyone is going crazy for fully automated magic, but knowing when to do that versus when to have a human-in-the-loop is essential. None of this came from telling Claude Code to build me this app. 
It came from being in the room with clinicians, understanding the nuances of their specialty, learning their workflow, and then translating that into precise requirements for Claude to build.... the fun stuff! For example, why is it important to display creatinine alongside glucose values? Because diabetes damages kidneys and creatinine measures kidney function.</p><p>Built-In HIPAA Auditing, all available real-time:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0zQ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0zQ5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 424w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 848w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 1272w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!0zQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png" width="1122" height="904" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:904,&quot;width&quot;:1122,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:338654,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://vjswami.com/i/192257178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0zQ5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 424w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 848w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 1272w, https://substackcdn.com/image/fetch/$s_!0zQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86cae151-501d-413d-93c2-6da72a8700c1_1122x904.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Multi-Cloud OCR <strong>with runtime switching</strong> between the relevant AWS, GCP and Azure services for lab processing. This was done intentionally so the solution can be sold to customers regardless of which major cloud provider they have settled on, and as a secondary benefit it provides a nice fallback in case one of the services has issues. Note: This particular customer is all in on AWS. 
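</p>
<p>To make the runtime-switching idea concrete, here is a minimal sketch of what a provider abstraction with fallback could look like. This is illustrative only: the provider names and stub functions are hypothetical, and real adapters would wrap the Textract, Document AI, and Azure Document Intelligence SDKs.</p>

```python
from typing import Callable

# Hypothetical sketch: each provider is a callable taking PDF bytes and
# returning extracted text, raising on failure (timeout, throttling, outage).
class OCRRouter:
    def __init__(self, providers: dict[str, Callable[[bytes], str]], order: list[str]):
        self.providers = providers
        self.order = order  # runtime-configurable preference, e.g. loaded from config

    def extract(self, pdf: bytes) -> tuple[str, str]:
        """Try providers in the configured order; fall back on failure."""
        errors: dict[str, str] = {}
        for name in self.order:
            try:
                return name, self.providers[name](pdf)
            except Exception as exc:
                errors[name] = str(exc)  # record and try the next provider
        raise RuntimeError(f"all OCR providers failed: {errors}")

def aws_stub(pdf: bytes) -> str:
    raise TimeoutError("simulated Textract outage")

def gcp_stub(pdf: bytes) -> str:
    return "TSH 2.41 uIU/mL"

# This customer is all-in on AWS, so AWS leads; the others act as fallbacks.
router = OCRRouter({"aws": aws_stub, "gcp": gcp_stub}, order=["aws", "gcp"])
provider, text = router.extract(b"%PDF-")
print(provider, text)  # gcp TSH 2.41 uIU/mL
```

<p>Because the preference order lives in configuration rather than code, switching the lead provider per customer is a config change, not a rewrite.</p>
<p>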
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!r7-9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!r7-9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 424w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 848w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 1272w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!r7-9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png" width="1128" height="660" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:660,&quot;width&quot;:1128,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:162876,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://vjswami.com/i/192257178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!r7-9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 424w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 848w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 1272w, https://substackcdn.com/image/fetch/$s_!r7-9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a40bd5a-9ade-4424-bf53-e4b3c269c93a_1128x660.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Authentication is done via AWS Cognito with RS256 JWT verification and automatic refresh on key rotation, with appropriate timeouts and lockouts preconfigured. <strong>These are not nice-to-haves; they are requirements in healthcare. </strong></p><p>The other point I want to make explicitly is that Claude Code didn&#8217;t just write the application code; I also wrangled it to handle the entire lifecycle of standing up the production AWS infrastructure (this customer is all in on AWS). Claude Code generated the Terraform config for the entire stack, it wrote the Dockerfile, the docker-compose orchestration, etc, etc. 
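</p>
<p>For flavor, here is a simplified, illustrative slice of the kind of token-header checks described above. This is not the production code: the known key id is made up, and full verification (signature, expiry, audience) would be delegated to a JWT library against the Cognito pool&#8217;s JWKS endpoint.</p>

```python
import base64
import json

# Hypothetical cached key ids from the user pool's jwks.json (assumed values).
KNOWN_KIDS = {"kid-2026-01"}

def check_header(token: str) -> str:
    """Inspect a JWT header: insist on RS256, and treat an unknown key id as
    'refresh the JWKS cache' (Cognito rotates signing keys) rather than reject."""
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped base64 padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    if header.get("alg") != "RS256":
        raise ValueError("reject: only RS256 is accepted")
    if header.get("kid") not in KNOWN_KIDS:
        return "refresh-jwks"   # key rotation path: re-fetch keys, then retry
    return "verify-signature"   # proceed to full library-based verification

# Build a demo token header for illustration (payload/signature are dummies).
demo = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "kid-2026-01"}).encode()
).rstrip(b"=").decode() + ".payload.sig"
print(check_header(demo))  # verify-signature
```
<p>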
And the entire app can be managed by a few Claude Code skills, without ever having to touch AWS directly:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m4Dl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m4Dl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 424w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 848w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 1272w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m4Dl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png" width="1321" height="518" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:518,&quot;width&quot;:1321,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:121507,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://vjswami.com/i/192257178?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m4Dl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 424w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 848w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 1272w, https://substackcdn.com/image/fetch/$s_!m4Dl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa17f7b77-9dcd-4712-93d4-170e6c4ac476_1321x518.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Everything is managed via Claude Code skills. These skills are infrastructure-as-conversation --&gt; workflow-as-code. This is a different kind of DevOps: no YAML pipelines, no Jenkins, no bash scripts with cryptic flags to maintain, etc. This is using agents to deploy. Made changes and want to test locally? Just type &#8220;/local&#8221;. Ready to deploy to prod? Just type &#8220;/deploy&#8221;. This is just one way of handling this, YMMV, but I thought it was pretty cool and efficient. There are, of course, pros/cons, but the main takeaway is we need to completely reimagine how software is built, deployed and managed from first principles. But just think about this: <strong>Claude Code wrote the app, stood up the production environment, configured HIPAA-compliant cloud services and automated the deployment. Just mind-blowing stuff. 
</strong></p><p>One other interesting thing to mention is that this was originally designed to use AWS Textract AND Amazon Comprehend Medical. After some testing, we ended up removing Comprehend Medical from the pipeline because we were able to find everything deterministically, and do it faster and at lower cost, for this particular use case. Comprehend Medical is truly an awesome service, but for this use case with bounded inputs, standardized formats, etc., the deterministic strategy won on every metric. There is an entire lesson here on <strong>making deterministic vs probabilistic design decisions &amp; tradeoffs</strong> which I&#8217;ll save for another post. It&#8217;s truly fascinating and, I think, one of the key skills that needs to be developed as we go forward. We evaluated 4 levels of agentic architecture and rejected all of them in favor of a deterministic pipeline. </p><p>A few key takeaways for me:</p><p>1- The human role didn&#8217;t shrink, it elevated. What to build. How to build it. Deep domain expertise. Understanding tradeoffs. This is a move towards architecture, domain modeling, judgment and taste. </p><p>2- Vibe coding produces nice demos. Domain expertise + AI produces real solutions. </p><p>3- The right way to think about this is that you are the brain and Claude Code is your hands.</p><p>4- Think very hard about where your software is deterministic vs probabilistic (i.e. LLM usage). </p><p>5- Using Claude Code as my DevOps engineer was unexpected, and awesome. </p><p>6- You cannot outsource your thinking. You need to steer these tools. Think of it like you are molding clay. </p><p>Most &#8220;I built X with AI&#8221; posts are dashboards, chatbots, or the like. This is a HIPAA-compliant app that&#8217;s multi-cloud, deployed on AWS with audit logging, JWT auth, PHI redaction and EHR integration. Anyone who says you cannot use AI to build in regulated industries needs to just give it a try. It&#8217;s possible, but not by vibe coding. 
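</p>
<p>To illustrate the deterministic side of that tradeoff, here is a toy sketch of strict, rule-based lab-value extraction. The pattern is hypothetical and far simpler than a real rule set, but it shows the shape: with bounded inputs and standardized formats you match exactly (down to the two-decimal TSH formatting), and anything that doesn&#8217;t match gets routed to a human instead of guessed at.</p>

```python
import re

# Illustrative rule, not the production rule set: TSH values in this sketch
# must have exactly two decimal places and a recognized unit.
TSH_PATTERN = re.compile(r"\bTSH\s+(\d+\.\d{2})\s+(uIU/mL)")

def extract_tsh(report_text: str):
    """Return (value, unit), or None meaning 'route to a human' - never guess."""
    m = TSH_PATTERN.search(report_text)
    if m is None:
        return None
    return float(m.group(1)), m.group(2)

print(extract_tsh("TSH 2.41 uIU/mL"))  # (2.41, 'uIU/mL')
print(extract_tsh("TSH 2.4 uIU/mL"))   # None - one decimal place, flag for review
```

<p>The LLM never gets to invent a number here; the only two outcomes are an exact deterministic match or human review.</p>
<p>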
I&#8217;m truly blown away by what is possible with these tools. Remember, this was all done in weeks, not months. This is all enabled by proof-over-persuasion selling. </p><p>Happy building and selling!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adventures in Applied Technology! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA["Selling AI": Moving from Persuasion to Proof]]></title><description><![CDATA[In my last post I touched on how AI is changing the nature of work itself. 
In this post, I want to specifically focus on the sales process changes enabled by these tools...]]></description><link>https://vjswami.com/p/selling-ai-moving-from-persuasion</link><guid isPermaLink="false">https://vjswami.com/p/selling-ai-moving-from-persuasion</guid><dc:creator><![CDATA[Vijay Swami]]></dc:creator><pubDate>Fri, 20 Mar 2026 19:32:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gXVz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97be074b-211f-4d8d-8d66-fd2d56fed146_608x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8230; none of this is theoretical, it&#8217;s all lessons learned in the field based on actually selling and shipping an enterprise healthcare solution.</p><p>First I want to talk about the discovery process. How do you even &#8220;sell AI&#8221; in the enterprise? An approach to take is to find a workflow to improve that has real business impact. One of the sharpest sales minds I&#8217;ve ever encountered, <strong><a href="https://www.linkedin.com/in/michaelpage2018/">Michael Page</a></strong>, framed it the best: &#8220;Our job is not to sell. Our job is to solve problems. And if we solve a big enough customer problem, we get outsized rewards.&#8221; But this has to be humanized. How do we figure out what problem to solve? Without a doubt this requires industry domain expertise, but that is not enough. You must get deep IN that customer. Deeper than ever before. 
I found a very simple way to start this discovery process, in this context:</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Adventures in Applied Technology! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>1- Identify someone who is in the middle of these workflows</p><p>2- Ask them one simple question: &#8220;What do you hate most about your job?&#8221;</p><p>In healthcare, the goal is always to optimize patient care. Yes, healthcare organizations need to make money, but it should not be thought of as a pure revenue optimization exercise. But what&#8217;s interesting is that in healthcare these two things are often tightly coupled. If you find a way to make them more capital-efficient, you will also improve patient care.</p><p>So I started this process by interviewing clinicians at the prospect. After I gathered some data points, I grouped them into cohorts, did some light modeling on patient care / revenue impact, and sorted the results. Now I had the top 3 things to target, ranked by which would make the most impact in the organization. I did this modeling with AI tools as well. 
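</p>
<p>The prioritization step above can be sketched in a few lines. The numbers and the weighting function here are invented for illustration; the real modeling was richer, but the shape was the same: score each cohort on rough impact, then keep the top 3.</p>

```python
# Hypothetical data: pain points gathered from clinician interviews,
# grouped into cohorts (names and figures are made up for illustration).
pain_points = [
    {"cohort": "new-patient labs",      "hours_per_week": 10, "clinicians": 6, "care_impact": 3},
    {"cohort": "prior-auth paperwork",  "hours_per_week": 4,  "clinicians": 6, "care_impact": 2},
    {"cohort": "inbox triage",          "hours_per_week": 6,  "clinicians": 6, "care_impact": 1},
    {"cohort": "scheduling churn",      "hours_per_week": 2,  "clinicians": 6, "care_impact": 1},
]

def score(p: dict) -> int:
    # Illustrative weighting, not the actual model: time burden scaled by
    # how many clinicians feel it and how directly it touches patient care.
    return p["hours_per_week"] * p["clinicians"] * p["care_impact"]

top3 = sorted(pain_points, key=score, reverse=True)[:3]
print([p["cohort"] for p in top3])
# ['new-patient labs', 'prior-auth paperwork', 'inbox triage']
```

<p>Even a crude score like this is enough to force a ranking conversation with the customer, which is the real point of the exercise.</p>
<p>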
If there is interest, I can discuss what I learned about how sellers should be leveraging AI for their own internal sales processes.</p><p>In this case, what was identified was some extremely tedious, manual work on new-patient lab data that was taking upwards of 10 hours per week <em>per clinician</em>. If we can improve this process, it will allow clinicians to <strong>spend more time with patients and see more patients </strong>while also improving their job satisfaction. I want you to internalize this: see more patients (higher revenue) and also give them better care. This is the tight coupling of patient care optimization and revenue.</p><p>From there we went right to a prototype. No slideware, no drawn-out proposals, no weeks of planning, no canned demos/accelerators, no hand-wavy POC (which is different from a prototype). Just a 48-96 hour build cycle with reusable scaffolds and tight guardrails, running the same week. This was the iPhone moment for me. I could not believe how quickly we could get to the working proof.</p><p>Hence the title of this post: <strong>the shift in AI from a sales perspective is moving from persuasion (slideware, calls, meetings) to proof (a working prototype, running quickly). Because building anything was expensive (both capital and human), the sales process optimized around persuasion (slides) before proof (software). In this new world, the sales process needs to optimize for proof-first selling.</strong> You need to quickly deliver a working slice of the customer solution, and it needs to be built fast enough that <em>it becomes the next sales meeting.</em> <strong>A Figma wireframe is a conversation starter, but a working prototype is a decision accelerator. 
</strong>It changes the customer conversation from &#8220;do we like the idea?&#8221; to &#8220;how do we make this real in production?&#8221;</p><p>From there it was a question of sitting down with the buyer stakeholders and rapidly iterating on the workflow specifics, clinical preferences, and other technical details to get it production-ready. I cannot share much here because this part is proprietary and confidential to this particular customer. The business metrics here made it a straightforward &#8220;sell&#8221; after the prototype: <em>the solution will pay for itself in 6 months, with a 20%+ revenue impact for this group</em>.</p><p>This is <strong>prototype-to-production selling</strong>, enabled by AI tooling.</p><p>I want to keep this post mostly focused on the sales process, so I&#8217;ll follow up on the solution details in a follow-up post, but here is a quick teaser: 100% built with Claude Code. Multi-cloud support (AWS, GCP, Azure) with runtime provider switching. HIPAA-compliant audit logging with 7-year retention. Containerized and running in production on AWS. Integrated with the customer&#8217;s EHR (eCW in this case). It&#8217;s as real as it gets in enterprise healthcare. Built in weeks, not months or years.</p><p>Key takeaways for me:</p><p>1- Sales economics will completely shift. Credibility used to come from brand + slides + references. 
Now <strong>credibility comes from speed-to-proof and iteration velocity + quality</strong>.</p><p>2- The buyer psychology shifts from slides, which <strong>invite debate</strong>, to working software, which <strong>invites decisions.</strong></p><p>3- When the buyer can &#8220;<em>touch/taste/feel the solution</em>&#8221; (hat tip to an ex-team member <strong><a href="https://www.linkedin.com/in/bfarrell26/">Brian Farrell</a></strong> for this great tagline &amp; concept, which he introduced to me years ago), the conversation quickly jumps from &#8220;Do we believe you?&#8221; to &#8220;<strong>What would it take to run this in production?</strong>&#8221;</p><p>4- As I stated in my previous post, AI is a compression algorithm. In this case, it compresses the sales cycle.</p><p>5- The key artifacts are not slides; they are a working slice of the customer solution, built in days, not months.</p><p>6- Most clinicians have an anti-AI perspective. You need to be prepared for this. The right framing is &#8220;what if you could just do the parts of the job you love.&#8221; Again, not metrics, but humanizing.</p><p>7- Even with the sales cycle acceleration, it doesn&#8217;t take away ANY of the sales twists and turns that occur when moving a deal through the sales stages. You still need a sales professional leading the effort, but their focus shifts considerably. You still have to deal with politics. You will still get last-minute surprises. None of that is changing anytime soon IMO. If there is interest, I can dive more into this as well.</p><p>This type of selling will require the entire org (marketing, sales, engineering) to completely rethink its approach. You cannot just &#8220;use an AI tool&#8221; to sell into 2026+. That is not enough. And you cannot just make this change in the sales org. 
This is the compression of marketing, sales, pre-sales, delivery, etc. that I spoke of in my last post, and these functions all have to think differently and move together in a way that is more tightly coupled than ever before.</p><p>If you are selling in this way, I would love to hear from you to compare notes &amp; share. We are very early in all of this and <strong>I don&#8217;t have all the right answers</strong>. What I do know is that change is in the air and it&#8217;s super exciting to be thinking through these changes. An ex-colleague <strong><a href="https://www.linkedin.com/in/robert-swinkin-9132332/">Robert Swinkin</a></strong> captured my feelings on this best in a post he made on his exciting new journey: &#8220;As my remit has continued to grow into the billions,<em> I found my mind drifting back to those breathless early years &#8212; at the coal face, where everything was a first, every trail blazed was virgin, and the business moved faster than any rule or process could contain it.</em>&#8221;</p><p>Happy Selling!</p>]]></content:encoded></item><item><title><![CDATA[Just Do Hard Things]]></title><description><![CDATA[Last November, I left my role specifically to step back and think hard about what I wanted the next chapter of my career to be.]]></description><link>https://vjswami.com/p/just-do-hard-things</link><guid isPermaLink="false">https://vjswami.com/p/just-do-hard-things</guid><dc:creator><![CDATA[Vijay Swami]]></dc:creator><pubDate>Fri, 20 Mar 2026 19:30:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gXVz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97be074b-211f-4d8d-8d66-fd2d56fed146_608x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last November, I left my role specifically to step back and think hard about what I wanted the next chapter of my career to be. I knew I wanted to do something different, but I wasn&#8217;t exactly sure what. Outside of the role, company, or remit, I realized a few very important things:</p><p>1- I need hard things that demand more from me than I think I have. That pressure has always (100% of the time) brought out a bigger version of me. If the goal is small, my effort stays small. I do my best work when the challenge is big enough to push me outside of my comfort zone. 
Without these things, I never reach the next gear.</p><p>Thanks for reading! Subscribe for free to receive new posts and support my work.</p><p>2- I&#8217;m at my best when I&#8217;m building into the future, not replaying a version of the past in an incremental fashion.</p><p>3- And I need to do these things with other folks who are also high-agency misfits.</p><p>I will always be extremely grateful to <strong><a href="https://www.linkedin.com/in/darren-mowry-06630711/">Darren Mowry</a></strong> and <strong><a href="https://www.linkedin.com/in/paul-soligon/">Paul Soligon</a></strong> for taking a chance on me way back in 2014, and <strong><a href="https://www.linkedin.com/in/edaporter/">Ed Porter</a></strong> (#3 above!) for mentoring me in the &#8220;ways of Ed&#8221; (if you have worked with Ed, you know what I mean!). Joining AWS as a seller in 2014 was the best example of #1/#2/#3. At that time: 1) there was no playbook for &#8220;selling&#8221; cloud; 2) what customers were doing with AWS was not just a reframe of on-prem, it was fundamentally different; and 3) it was my first foray into going from a Sales Engineering org to leading accounts as the AE. 
It was one of the hardest things I have done, and also one of the most satisfying. The YoY quota growth expectations were hard; getting promoted was hard; thinking through the customer&#8217;s problems from an industry lens was hard. The reality is I have had my &#8220;head in the cloud&#8221; for over a decade now. Cloud is no longer about building into the future; it&#8217;s just the status quo.</p><p>&#8220;Building into the future&#8221; in 2026 means AI. I don&#8217;t think this is a very controversial take. But what does that mean exactly? My take: AI is a compression algorithm. It compresses the time between idea, solution, sale, and impact. It compresses the boundaries between sales, pre-sales, delivery, product, and marketing. I&#8217;ve been fortunate to have been trusted in roles that span most of those disciplines, and this incoming compression of traditional boundaries is extremely obvious to me. <em>Note: Compression does not mean &#8220;disappear entirely&#8221;! It means a much tighter, faster, and more integrated operating model.</em></p><p>When I started using Claude Code, Codex, and other tools in November, it was akin to the first time I used an iPhone or hailed an Uber. It just felt like magic. I can&#8217;t describe it any other way. Many can relate to the &#8216;iPhone moment&#8217; or &#8216;Uber moment&#8217;. This timing also coincided with the release of Opus 4.5, which was completely game-changing from an LLM perspective. Just lucky timing on my part.</p><p>After a short break during Thanksgiving, I set out to answer the following question: &#8220;With AI tools, can a single person or a much smaller team do it all? Understand the customer problem, go deep into the industry-specific workflow, prototype the solution, shape the narrative, test the market, and deliver something real into production?&#8221;</p><p>I&#8217;ve now experienced this firsthand. The first product or solution I worked on is in the Healthcare space. 
What is extremely distracting right now in the AI space is that there are a lot of cool-looking demos, &#8220;POCs&#8221;, and things which I would consider &#8220;cool toys&#8221;, but which have zero relevance in actual companies and NO chance of scaling or making KPI impact. Sticking with the theme of &#8220;doing hard things&#8221;, I wanted to apply AI to one of the most highly regulated and critical industries in existence: Healthcare. Then I went to another highly regulated industry: Finance.</p><p>Both of these solutions are IN PRODUCTION, making real revenue impact TODAY. I am extremely grateful to my network, who placed trust in me to get involved in these projects when all I had to trade on was my past performance and my saying &#8220;trust me, I&#8217;ll lead this and figure this out.&#8221; Thank you!</p><p>In future posts, I will be detailing and giving practical information on these two solutions, how they were built, and what business impact they are making. I firmly believe that knowledge should be free and customers should pay for outcomes, so I will be sharing as much as possible without violating any NDAs or competitive moats. I previously maintained a technology/business-focused blog and presence on Twitter (@vjswami) that I let lapse for various reasons. But I&#8217;m so energized about this new wave of technology disruption that I&#8217;m going to get back to writing and sharing again.</p><p>AI is changing the nature of work itself. And I&#8217;m here for it. If you have a blended background in engineering/technology &amp; sales, you have the highest leverage that you have EVER had. You need to be experimenting with AI tools NOW. Go build something real and try to sell it&#8230; and see what happens. Cloud drastically reduced the cost of experimentation from an infrastructure perspective, and AI is doing the same for apps.</p><p>In addition to sharing the technology journey, I hope to also inspire people to find their purpose. 
I&#8217;ll also be talking about new roles that are emerging based on what I&#8217;m seeing. Feel free to reach out to me to discuss any of the above.</p><p>Let&#8217;s go build the future!</p>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is Adventures in Applied Technology.]]></description><link>https://vjswami.com/p/coming-soon</link><guid isPermaLink="false">https://vjswami.com/p/coming-soon</guid><dc:creator><![CDATA[Vijay Swami]]></dc:creator><pubDate>Sat, 17 Jan 2026 17:53:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gXVz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97be074b-211f-4d8d-8d66-fd2d56fed146_608x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is Adventures in Applied Technology.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://vjswami.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://vjswami.com/subscribe?"><span>Subscribe 
now</span></a></p>]]></content:encoded></item></channel></rss>