Provenance Manifesto

Blog

Essays and practical notes on provenance, SDLC memory, and AI-era delivery governance.

Agents are not just smarter objects. OOP was designed for deterministic behavior; agentic systems operate through probabilistic reasoning, context, and runtime decision-making. That is why Agentic-Oriented Programming needs new primitives beyond classes and methods, especially around orchestration, memory, and decision provenance.

In the previous article, Building an Automated Translation Pipeline, we designed a GitHub Copilot-based translation pipeline built around an orchestrator, language-specific subagents, reusable skills, and hooks. In this article, we go one step further and turn those ideas into a practical how-to. We walk through how this workspace models agentic inheritance, how instruction layering replaces native inheritance, and how the three execution approaches work: sequential, parallel, and hierarchical.

This guide explains how to automate a Markdown blog into a multilingual publishing pipeline using GitHub Copilot Agents, where an orchestrator coordinates language subagents, updates README summaries, applies hooks and skills as guardrails, and produces reproducible, scalable outputs.

While building SDLC Memory, I ran into an unexpected architectural dilemma: should the system reason like an autonomous agent, behave like a deterministic data transformer, or sit somewhere in between? I'm still deciding which direction is right for the MVP.

In "Part 1 - From RAG to Provenance: How We Realized Vector Alone Is Not Memory", we moved from RAG to Provenance, from similarity to lineage. But if AI agents generate 50–80% of future work, the real question becomes: how does memory update safely? How do new decisions get validated, linked, and governed, instead of just embedded? This article walks through the incremental graph update process behind decision memory, step by step, with a real example. Because in the AI era, memory must evolve, not just retrieve.

What if your SDLC doesn’t actually remember anything, and only retrieves fragments? We’ve built powerful RAG systems that can surface “relevant” text in milliseconds. But relevance is not causality. And when something breaks in production, similarity won’t tell you why it happened, or which decision, risk, or dependency led there. In this article, I unpack why vector search alone is not memory, how graph structure changes the game, and how combining vector search with a strict provenance model turns scattered documentation into something closer to organizational cognition. If you care about explainability, decision lineage, and real delivery intelligence, this one is for you.