Blog
Essays and practical notes on provenance, SDLC memory, and AI-era delivery governance.
TL;DR
Following the release of the initial version of the Provenance Manifesto, I began examining whether existing market solutions align with the principles outlined therein.
The Day the Provenance Manifesto Was Born. March 8, 2026
Git for Decisions Needs a Brain, But What Kind? Mar 4, 2026
TL;DR
While building SDLC Memory, I ran into an unexpected architectural dilemma. Should the system reason like an autonomous agent, behave like a deterministic data transformer, or sit somewhere in between? I'm still deciding which direction is the right one for the MVP.
TL;DR
In "Part 1 - From RAG to Provenance: How We Realized Vector Alone Is Not Memory", we moved from RAG to Provenance, from similarity to lineage. But if AI agents generate 50–80% of future work, the real question becomes: how does memory update safely? How do new decisions get validated, linked, and governed, instead of just embedded? This article walks through the incremental graph update process behind decision memory step by step, with a real example. Because in the AI era, memory must evolve, not just retrieve.
TL;DR
What if your SDLC doesn’t actually remember anything, and it only retrieves fragments? We’ve built powerful RAG systems that can surface “relevant” text in milliseconds. But relevance is not causality. And when something breaks in production, similarity won’t tell you why it happened, or which decision, risk, or dependency led there. In this article, I unpack why vector search alone is not memory, how graph structure changes the game, and how combining vector with a strict provenance model turns scattered documentation into something closer to organizational cognition. If you care about explainability, decision lineage, and real delivery intelligence - this one is for you.
TL;DR
Next Chapter: SDLC Memory & Provenance. In the previous chapters, we explored why the SDLC has no real memory and why provenance must become structural, not optional. In this next step, we go deeper into a more uncomfortable question: what if the real bottleneck in delivery isn't velocity, tooling, or even AI capability, but the biological limits of human context? Humans can actively hold about four meaningful constraints at once. Modern agents can process hundreds of thousands of tokens. And yet neither can remember a living product over time without structure. This chapter connects cognitive science, AI context windows, and a practical Hot/Warm/Cold memory architecture to show why durable SDLC memory is not documentation overhead; it's a competitive advantage. If execution is getting cheaper, memory is becoming the differentiator. Let's talk about how to build it.
TL;DR
TL;DR
In the previous chapters, we spoke about SDLC Memory and Provenance as a way to reduce chaos, protect delivery integrity, and make decisions traceable inside engineering organizations. Now I want to zoom out. Because if AI is changing how software is built, it is also changing something much bigger: how Intellectual Capital itself is valued. This article is not a deviation from the Provenance discussion. It is the next logical step. If execution becomes abundant, then memory, governance, and decision architecture become the real assets. Let's talk about what happens to Intellectual Capital when AI materially replaces human positions, and what that means for companies that want to survive.
AI will take the “What”, but Humans must own the “Why”
We are teaching AI to decide. But we are forgetting how to remember.
Why SDLC has no memory (and why delivery teams keep paying for it)