Context Graphs and Process Knowledge
It's The Same Thing
The Decision Trace Problem
Foundation Capital’s recent opinion article about context graphs identifies a critical gap in enterprise systems, and I couldn’t agree more with the problems identified. Organizations have mastered recording what happens, but they remain unable to capture why decisions are made. When a VP approves a discount that exceeds policy limits, the reasoning is stuck in Slack threads, hallway conversations, and institutional memory. When a maintenance technician changes a procedure step, the justification exists in their head, not within online systems. These decision traces—the contextual logic, precedents, and judgments that animate organizational workflows—disappear the moment they’re executed.
A context graph, in theory, is positioned as a solution to this tacit and implicit knowledge problem. The proposed solution introduces the concept of a living record of decision traces, stitched together syntactically across entities and time, so that precedent becomes searchable. Unlike traditional systems of record that store objects and outcomes, context graphs capture the reasoning that connects inputs to outputs. This creates what Foundation Capital calls the “single most valuable asset for companies in the era of AI” because AI agents need access to decision traces to handle the ambiguous, judgment-laden situations that humans navigate through organizational memory.
Building such a system requires more than database architecture. It demands a formal methodology for representing how work happens, tracking how procedures are executed in practice, and linking decisions to their justifying context. This is precisely what the Procedural Knowledge Ontology (PKO) and process knowledge management frameworks provide.
Process Knowledge, The Missing Layer
The distinction between process knowledge and procedural knowledge illuminates why context graphs are great in theory but will require solid knowledge management foundations to become a reality. Process knowledge is the raw material—tacit understanding, practiced expertise, the rhythms of how work actually happens. This knowledge lives in the hands of workers, in Slack channels, in margin notes in outdated manuals, and in the institutional memory of employees who may retire or move on. Procedural knowledge is process knowledge that has been formalized, encoded, and represented through structured semantic systems.
The act of formalizing and encoding decisions and the reasons behind them is what transforms process knowledge into procedural knowledge. That transformation requires deliberate engineering work. It is one thing to elicit knowledge, but ultimately that knowledge must be codified and represented formally, be it as tags, metadata, a data schema, or an ontology. Decisions and decision reasoning must be defined, described, and structured with enough semantic meaning for an LLM to make sense of decision traces. Because after all, at the end of the day, data does not self-organize and describe itself, no matter how much pixie dust is distributed throughout.
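To make that concrete, here is a minimal sketch of what a structured decision record might look like before it is lifted into a formal ontology. Every field name here is hypothetical, chosen purely for illustration; the point is that someone has to decide what the fields are before any system can index the reasoning.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionTrace:
    """A hypothetical, minimal record of a decision and its reasoning."""
    decision: str                  # what was decided
    rationale: str                 # why it was decided
    policy_in_effect: str          # version of the governing policy at decision time
    decided_by: str                # agent with authority over the decision
    decided_at: datetime           # when the decision was made
    precedents: list[str] = field(default_factory=list)  # links to prior traces

trace = DecisionTrace(
    decision="Approve 15% renewal discount (policy cap is 10%)",
    rationale="Strategic account flagged as churn risk; VP sign-off obtained",
    policy_in_effect="discount-policy-v3",
    decided_by="vp-sales",
    decided_at=datetime(2025, 1, 27, 14, 15),
    precedents=["trace-0042", "trace-0077"],
)
```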
Process knowledge elicitation and collection require thoughtful design and implementation, precisely because processes are constantly evolving and changing. Someone must observe work practices, interview experts, extract tacit understanding, and encode it in formal representations. Without this investment, the decision traces that Foundation Capital describes remain trapped in unstructured formats—conversation transcripts, email chains, undocumented exceptions—that no system can index or query.
Process knowledge management provides the underlying methodology for the elicitation and capture of decision traces. It establishes frameworks for collecting process knowledge through ethnographic observation, expert interviews, document analysis and the discovery of communication pathways and channels. It defines organizing principles that structure knowledge across conceptual, structural, and contextual levels. Most importantly, it specifies encoding pathways that translate human understanding into machine-readable representations. This systematic approach to knowledge capture is the prerequisite for any context graph implementation.
Ontological Framework for Decision Traces
The Procedural Knowledge Ontology (PKO), developed by researchers at Cefriel in collaboration with industrial partners including Beko Europe, Fagor Automation, Siemens, and BOSCH, provides the semantic architecture that context graphs require. PKO emerged from three heterogeneous industrial scenarios—safety procedures in manufacturing plants, CNC machine commissioning processes, and mixed human-machine activities in micro-grid management—each demanding rigorous capture of procedural knowledge and its execution.
PKO’s architecture directly addresses the decision trace problem through its fundamental distinction between procedures (abstract specifications of how to accomplish something) and executions (concrete instances of those procedures being performed). A Lock Out Tag Out (LOTO) safety procedure might specify the general steps for shutting down a conveyor system. A LOTO execution represents a specific maintenance technician performing that lockout on a specific date, with specific observations and outcomes. This mirrors the distinction between “what could happen” and “what did happen” that Foundation Capital identifies as essential for context graphs.
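A minimal sketch of that distinction in Python with rdflib, using the LOTO example. The pko: namespace URI and the property pko:executesProcedure are assumptions for illustration; check the published PKO specification for the canonical identifiers.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, PROV, XSD

PKO = Namespace("https://w3id.org/pko#")  # assumed namespace URI
EX = Namespace("https://example.org/")    # hypothetical data namespace

g = Graph()
g.bind("pko", PKO)

# The procedure: an abstract specification of how to lock out a conveyor system.
g.add((EX.loto_conveyor, RDF.type, PKO.Procedure))
g.add((EX.loto_conveyor, RDFS.label, Literal("LOTO shutdown: conveyor line 3")))

# One execution: a specific technician performing that lockout on a specific date.
g.add((EX.exec_2025_01_27, RDF.type, PKO.ProcedureExecution))
g.add((EX.exec_2025_01_27, PKO.executesProcedure, EX.loto_conveyor))  # assumed property
g.add((EX.exec_2025_01_27, PROV.wasAssociatedWith, EX.technician_42))
g.add((EX.exec_2025_01_27, PROV.startedAtTime,
       Literal("2025-01-27T08:30:00", datatype=XSD.dateTime)))
```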
The ontology organizes knowledge across six conceptual areas that map directly to context graph requirements:
The Procedure area captures specifications including procedure types, targets, versions, and status.
The Step area models the granular actions, functions, tools, and expertise levels required for execution.
The Change of Status area tracks provenance information—creation, modification, validation, approval, and archival—enabling the audit trails that context graphs demand.
The Procedure Execution area records when and how procedures were actually performed, including step execution sequences, user feedback, errors encountered, and questions raised.
The Agent area models the people, organizations, and software systems that interact with procedures, along with their roles and authority levels.
The Resource area links procedures to supporting documentation, media, and data sources.
As you can plainly see, PKO has fantastic coverage of processes, procedures, workflows, and decision traces: all the trappings and promises of a context graph. Yet a context graph, as simple as it sounds, takes more than decision traces to build. The context graph proposal makes no mention of process knowledge management, which is actually the most fundamental component of capturing, recording, and representing decision traces.
Let’s Be Real
Let’s be real. Why does this list of specific context graph capabilities look identical to process and procedural knowledge management?
After all, this is a knowledge management problem. If you haven’t read my Process Knowledge Management series, do check it out. The Foundation Capital piece has gifted us technologists with yet another thing to add to the stack, this time to handle a specific type of context: decision traces, AKA processes and procedures. Never mind that the name ‘context graph’ is akin to saying ‘wet water’. Context is exactly what graphs deliver. Ultimately, Foundation Capital and the entire ‘context graph’ parade are missing the very essence of the problem. This is a knowledge management problem. Repeat. This is a knowledge management problem.
But here we are, in the age of AI, where marketing wins and we are all betting on futures. And oh boy, did folks jump on this opportunity. All the faux ontologies and graph-esque startups jumped on ‘context graph’ as if there were finally a name to append to job titles and product flyouts. Websites were quickly spun up to meet the unknown demand, all based upon an opinion article. Not an academic paper or peer-reviewed research. There was and is no demo to speak of. More importantly, consider the source of this sensationalized article.
Lots of theorizing and speculation, without substance or science. I must ask, and maybe you’ve been thinking the same: why reinvent the wheel? And how in the hell do you collect tacit and implicit knowledge without knowledge management anyhow? So many more questions than answers. There are plenty of people at the ready, able to tell you exactly what a context graph is and why it does not involve ontologies. Take the grifters and charlatans with a grain of salt, and tell them to slow their roll. For the sake of herd sanity.
Alas, I digress. Let’s get back to process and procedural knowledge.
Context Through Semantic Relationships
Context graphs require more than isolated records; they demand rich connections across entities and time. PKO achieves this through carefully designed semantic relationships that enable graph traversal and inference.
Sequential relationships (pko:nextStep and pko:previousStep) establish the temporal logic of procedures, while alternative branching (pko:nextAlternativeStep) captures the decision points where different contexts lead to different paths. Each step execution links bidirectionally to its procedure execution (pko:isIncludedInProcedureExecution) and to the abstract step it instantiates (pko:executesStep), creating the provenance chains that explain why specific actions were taken. This is how steps are accounted for, even at runtime.
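As a sketch, here is what traversing that sequential logic looks like with rdflib: a small helper that follows pko:nextStep links from the first step to the last. The step names are hypothetical; the property name follows the one quoted above, under the assumed pko: namespace URI.

```python
from rdflib import Graph, Namespace, URIRef

PKO = Namespace("https://w3id.org/pko#")  # assumed namespace URI
EX = Namespace("https://example.org/")    # hypothetical data namespace

g = Graph()
# A hypothetical three-step chain: notify -> isolate -> verify.
g.add((EX.step_notify, PKO.nextStep, EX.step_isolate))
g.add((EX.step_isolate, PKO.nextStep, EX.step_verify))

def walk_steps(graph: Graph, first: URIRef) -> list[URIRef]:
    """Follow pko:nextStep links from the first step to the last."""
    ordered, current = [first], first
    while (nxt := graph.value(current, PKO.nextStep)) is not None:
        ordered.append(nxt)
        current = nxt
    return ordered

print(walk_steps(g, EX.step_notify))  # the three steps, in order
```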
The temporal dimension—identified by Foundation Capital as often treated as a “second-class citizen” in enterprise systems—receives first-class treatment in PKO. Start and end times attach to both procedure executions and step executions (prov:startedAtTime, prov:endedAtTime), enabling queries about what was true when decisions were made. Version tracking through procedure versions with associated statuses (draft, validated, approved, archived) captures the evolution of organizational knowledge over time.
User questions and feedback during execution (pko:hasUserQuestionOccurrence, pko:UserFeedback) preserve the contextual signals that indicate where procedures need refinement. Error tracking (pko:hasEncounteredError, pko:errorCode) captures exceptions and deviations. Frequently asked questions (pko:FrequentlyAskedQuestions) distill repeated patterns into searchable precedent. Together, these mechanisms create the “replayable lineage” that Foundation Capital describes—the ability to reconstruct the state of the world at decision time and understand the reasoning applied.
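A sketch of the kind of query that replayable lineage implies: retrieve every procedure execution that started before a cutoff, along with any errors encountered, so the state of the world at decision time can be reconstructed. The namespace URI, sample cutoff, and empty graph are assumptions; the property names follow those quoted above.

```python
from rdflib import Graph

g = Graph()
# Assume g holds procedure and step executions modeled as in the earlier sketch.

LINEAGE_QUERY = """
PREFIX pko:  <https://w3id.org/pko#>   # assumed namespace URI
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT ?execution ?start ?error
WHERE {
  ?execution a pko:ProcedureExecution ;
             prov:startedAtTime ?start .
  OPTIONAL { ?execution pko:hasEncounteredError ?error . }
  FILTER (?start < "2025-02-01T00:00:00"^^xsd:dateTime)
}
ORDER BY DESC(?start)
"""

for row in g.query(LINEAGE_QUERY):
    print(row.execution, row.start, row.error)
```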
From Raw Knowledge to Queryable Context
PKO provides the target representation, but constructing context graphs requires systematic methodology for populating that representation with organizational knowledge. The Ontology Pipeline® framework offers a proven approach, building semantic knowledge management systems through iterative stages that each produce testable, measurable outputs.
The pipeline begins with controlled vocabularies—standardized sets of terms for referring to process domain concepts. This linguistic infrastructure ensures consistent naming across procedures, steps, agents, and resources. Without vocabulary control, the same decision might be recorded using different terminology in different contexts, fragmenting the graph and defeating semantic search.
Metadata standards establish the attribute-value pairs that describe procedures and executions consistently. Taxonomies arrange these concepts hierarchically, enabling navigation from broad process categories to specific procedural instances. Thesauri introduce associative relationships that handle ambiguity—connecting synonymous terms, relating broader and narrower concepts, and encoding the organizational language that domain experts actually use.
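These vocabulary, taxonomy, and thesaurus layers are exactly what SKOS was designed for. A minimal sketch with rdflib, using hypothetical concepts: a preferred label for naming consistency, an altLabel for the synonym operators actually use, and a broader link that situates the concept in a hierarchy.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

VOC = Namespace("https://example.org/vocab/")  # hypothetical vocabulary namespace

g = Graph()
g.bind("skos", SKOS)

# One controlled concept: canonical term, synonym, and place in the hierarchy.
g.add((VOC.lockout_tagout, RDF.type, SKOS.Concept))
g.add((VOC.lockout_tagout, SKOS.prefLabel, Literal("Lock Out Tag Out", lang="en")))
g.add((VOC.lockout_tagout, SKOS.altLabel, Literal("LOTO", lang="en")))
g.add((VOC.lockout_tagout, SKOS.broader, VOC.safety_procedure))

print(g.serialize(format="turtle"))
```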
These building blocks culminate in knowledge graphs that visualize and query procedural knowledge. Because the pipeline constructs knowledge systematically from controlled foundations, the resulting graphs maintain consistency that enables reasoning. An AI agent querying the graph can traverse from a specific execution instance to its governing procedure, understand the expertise level required, identify precedent from similar past executions, and assess whether proposed actions align with validated patterns.
From Tacit to Explicit: The Human-AI Partnership
The context graph vision assumes that decision traces can be captured. In practice, much procedural knowledge remains tacit—experienced operators carry sequences, conditions, and judgment calls that exist nowhere in digital form. PKO addresses this through tools designed for human-in-the-loop knowledge elicitation.
Web forms derived from the PKO ontology structure guide domain experts through the information required to document a procedure. The forms present concepts in language matching the expert’s mental model—for procedures, sections correspond to standard sequences, and information entry aligns with points that operators already conceptualize. The underlying semantic transformation works because the ontology and schemas handle the elicitation, organization, capture, and representation of decision traces. No matter how you dice it, capturing decision traces requires knowledge elicitation, so that requirements can be shaped around how best to represent decisions and decision reasoning. Again, this is knowledge management.
For procedures that exist only in partial documentation, automatic extraction tools leverage language models to identify relevant information from documentation, spreadsheets, repositories, and other sources. The extracted knowledge pre-fills structured forms, allowing operators to verify, correct, and supplement automated outputs. This combination of AI assistance and human validation reflects a human-centric approach: digital tools augment rather than replace human expertise.
Execution tracking then captures how documented procedures actually perform in practice. When operators use chatbot interfaces to navigate procedures step-by-step, their interactions generate execution traces automatically. Questions asked, errors reported, and deviations observed feed back into the knowledge graph, creating the continuous learning loop that makes graphs valuable over time.
Enabling AI Agents Through Procedural Context
The Foundation Capital thesis argues that AI agents are “hitting a wall that governance alone can’t solve. The wall isn’t missing data. It’s missing decision traces.” PKO directly addresses this by providing AI systems with the semantic context they need to handle edge cases appropriately.
Consider a renewal agent proposing a discount that exceeds policy limits. Traditional systems provide the rule (say, a 10% maximum discount) but not the organizational memory of past exceptions and the reasoning behind those decisions. A PKO-enabled graph surfaces precedent: similar situations where VP approval was granted, the justifying circumstances documented in execution traces, the policy version in effect at decision time, and the outcomes that followed. The agent can then ground its recommendation in organizational precedent, explaining not just what it proposes but why the proposal aligns with established patterns.
This capability emerges from PKO’s integration of procedure specifications with execution histories. The agent queries not an abstract rule but a knowledge graph linking rules to their applications across time and context. Graph traversal reveals that certain customer tiers consistently receive exceptions, that strategic renewals follow different approval paths, and that specific approvers have authority for certain deviation types. The decision trace becomes the reasoning scaffold that transforms generic AI responses into contextually appropriate recommendations.
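A sketch of the precedent query such an agent might run. The pko: terms follow earlier sections; the ex: properties (discountGranted, approvedBy, rationale) are hypothetical extensions invented for this renewal scenario, not part of PKO.

```python
from rdflib import Graph

g = Graph()
# Assume g is populated with renewal execution traces modeled as in earlier sketches.

PRECEDENT_QUERY = """
PREFIX pko: <https://w3id.org/pko#>   # assumed namespace URI
PREFIX ex:  <https://example.org/>    # hypothetical renewal extensions

SELECT ?execution ?discount ?approver ?rationale
WHERE {
  ?execution a pko:ProcedureExecution ;
             pko:executesProcedure ex:renewal_discount_procedure ;
             ex:discountGranted    ?discount ;
             ex:approvedBy         ?approver ;
             ex:rationale          ?rationale .
  FILTER (?discount > 0.10)   # exceptions above the 10% policy cap
}
ORDER BY DESC(?discount)
"""

for row in g.query(PRECEDENT_QUERY):
    print(row.execution, row.discount, row.approver, row.rationale)
```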
Microgrid Systems as Proof of Concept
The application of PKO to microgrid management at Siemens illustrates context graph construction in practice. Microgrids compose heterogeneous devices—electric vehicle chargers, batteries, photovoltaic units—each governed by controller logic that responds to dynamic conditions. Essential contextual information about device configurations, performance patterns, environmental factors, and operational constraints often exists as tacit knowledge held by experienced operators.
PKO models device operations as procedures with specified steps representing operational states and transitions between them. An electric vehicle charger moves through states—not charging, low power, medium power, full power—based on conditions like photovoltaic production, battery capacity, and grid demand. These transitions and their triggering conditions are formalized as procedural knowledge, making previously undocumented controller logic explicit and queryable.
Execution traces record actual device behavior: when charging states transitioned, what conditions prevailed, how long each state persisted. The combination of procedural specification (“charging power increases when PV produces above threshold”) and execution history (“on January 27, charging increased from low to medium power at 14:15 when PV production exceeded 5kW”) creates the context graph that explains microgrid behavior.
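A sketch of that pattern in rdflib: the procedure side records the state transition and its triggering condition, while the execution side records the transition actually observed on January 27. The pko: namespace URI, the class pko:StepExecution, and the ex:triggerCondition property are assumptions for illustration.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, PROV, XSD

PKO = Namespace("https://w3id.org/pko#")      # assumed namespace URI
EX = Namespace("https://example.org/grid/")   # hypothetical data namespace

g = Graph()

# Procedure side: charger states as steps, with an alternative branch and its trigger.
g.add((EX.state_low_power, PKO.nextAlternativeStep, EX.state_medium_power))
g.add((EX.state_low_power, EX.triggerCondition,  # hypothetical property
       Literal("PV production exceeds 5kW")))

# Execution side: the transition actually observed on January 27.
g.add((EX.transition_0127, RDF.type, PKO.StepExecution))  # assumed class name
g.add((EX.transition_0127, PKO.executesStep, EX.state_medium_power))
g.add((EX.transition_0127, PROV.startedAtTime,
       Literal("2025-01-27T14:15:00", datatype=XSD.dateTime)))
```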
Electric vehicle owners can then query this context through natural language interfaces, understanding why their vehicle charged slowly during certain periods or when optimal charging conditions typically occur. Microgrid operators can analyze execution patterns across devices and time, identifying optimization opportunities and troubleshooting anomalies with full contextual visibility. The procedural knowledge graph transforms opaque system behavior into transparent, explainable operations.
So Here We Are
Foundation Capital’s vision of context graphs as “the next trillion-dollar platforms” depends on solving the practical challenges of capturing decision traces at scale. PKO and process knowledge management frameworks provide the technical and methodological foundations that make this vision achievable.
PKO contributes a rigorous ontological architecture that distinguishes procedures from executions, models the conceptual areas relevant to procedural knowledge, and establishes semantic relationships that enable graph traversal and inference. Its grounding in real industrial requirements—safety procedures, machine commissioning, human-machine coordination—ensures that the ontology addresses actual enterprise needs rather than theoretical abstractions.
Process knowledge management is the systematic methodology for transforming tacit expertise into explicit, encoded representations. The Ontology Pipeline and similar frameworks ensure that organizations build semantic systems on controlled foundations, maintaining the consistency and quality that knowledge graphs require. Vocabulary control handles evolving terminology and concepts, capturing concepts and their definitions along with acronyms and synonyms via altLabels. Human-in-the-loop approaches balance AI assistance with expert validation, capturing the judgment and contextual nuance that pure automation misses.
Together, these approaches enable organizations to construct the living records of decision traces that Foundation Capital describes. The precedent becomes searchable. The reasoning becomes queryable. The organizational memory becomes a durable asset that compounds in value as workflows execute through the system. What was once trapped in conversations, institutional memory, and undocumented expertise becomes the structured foundation for AI agents that can operate with true contextual awareness.
The opportunity lies in recognizing that context graphs require more than new database architectures. They demand ontologies that can represent procedural knowledge faithfully, methodologies that can capture tacit understanding systematically, and implementations that can track executions continuously. PKO and process knowledge management provide these capabilities. The organizations that invest in building this infrastructure will possess what Foundation Capital correctly identifies as the decisive asset for the agentic AI era: the ability to explain not just what happened, but why it was allowed to happen.