Semantic Optimization
Strategic Data and Explicit Architecture provide the data structure, rules, and boundaries that set the conditions for AI to operate effectively. The third pillar, Semantic Optimization, is where the AI acceleration payoff really happens. Use AI aggressively as generators and agents, because guardrails and audit processes are in place to repair misalignment. Introduce large changes — new domains, new patterns, new capabilities — into the system with confidence, because the optimization loop will refine them into shared goals rather than fragmented local interpretations.
The mechanism is two forces working in a feedback loop: Patterns define what things should look like, and Semantic Coherence measures whether reality matches intent. The gap between them drives every meaningful action — and AI reduces the energy barrier to maintaining that alignment rather than amplifying drift.
Patterns
A Pattern is a semantic structure with enough self-contained meaning to be recognized and operated on as a whole — by both humans and machines. That's an accurate but pretty abstract definition, so let's use an example:
A home cleaning service needs to schedule crews and equipment across a city. Searching for "cleaning industry scheduling solutions" turns up narrow, industry-specific tools. But the actual problem — get crews and equipment to customers on schedule — is a logistics and dispatch problem, and well-established patterns for dispatch and routing optimization already exist across delivery, field service, and fleet management. Those patterns are far more mature, better documented, and already in AI training data. The architecture tells you what problem you are actually solving, and the right domain may not be your industry at all.
That is pattern-based thinking: instead of defining a solution from first principles, tribal knowledge, or pre-packaged tools, find an established, well-defined standard, methodology, or body of practice, and adopt it as a domain model with its own rules and vocabulary. This takes no more effort than creating a document that describes the capability you need and the details of the pattern you found, in plain, human-readable text.
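As a rough illustration of where such a document ends up once the architecture picks it up, here is a minimal sketch of a pattern adoption record; the type and field names are hypothetical, not part of SemOps:

```python
from dataclasses import dataclass, field

@dataclass
class PatternAdoption:
    """Hypothetical record of a pattern adopted from an external body of practice."""
    name: str                  # what the pattern is called internally
    source: str                # the standard, methodology, or body of practice adopted
    capability: str            # the business capability it fulfills
    adaptations: list[str] = field(default_factory=list)  # intentional, documented deviations

# The cleaning-service example: the real domain is dispatch and routing, not cleaning.
dispatch = PatternAdoption(
    name="Dispatch & Routing",
    source="Field-service dispatch and vehicle-routing practice",
    capability="Get crews and equipment to customers on schedule",
    adaptations=["Crews share equipment kits, so vehicle capacity maps to kit inventory"],
)
```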
Right away, you access two big accelerants:
- Patterns can be "big chunks" of functionality or capability, and since they are composable without loss (a Sales Domain pattern can contain Customer, Order, Product), an entire domain architecture can be added as a single coherent unit rather than hundreds of disconnected requirements.
- Lean into what AI is good at. Good patterns are well defined, well understood, and have consensus behind them, which means even a typical non-reasoning LLM will be well-stocked with reliable learned knowledge — canonical structures, validated rules, and documented relationships that the model can reproduce with high fidelity. Once a pattern is adopted, AI and architecture do the heavy lifting to turn it into a working system (more on that below). Before that, pattern selection remains a matter of human judgment in a business context.
Picking a Pattern
Starting from zero — you have a capability you need, and you are looking for a pattern to adopt. Patterns come in several types — domain, technical, infrastructure, process — but domain patterns are the most powerful starting point, because they encode business meaning rather than mechanism. Three principles guide the search:
Identify the domain. The domain you need may not depend on industry or discipline specifics. The cleaning service example above is really a logistics problem — dispatch and routing, not cleaning. Be equally wary of discipline-specific solutions that bundle assumptions into core functions. "Customer support" is customer relationships, and you want a solid, robust domain model where "customer" is the driving principle, not one vendor's interpretation of what support means through the lens of some assumed industry requirements. If there are nuances specific to your business, you will adapt them intentionally — not inadvertently acquire someone else's interpretation as cruft. Your Explicit Architecture will tell you what domain you are actually solving for.
Prescribe principles, not implementation. The right source for a domain pattern is a set of principles, rules, and procedures — aligned with open standards, industry standards, or compliance requirements if possible. The pattern describes what the domain says is correct, the Explicit Architecture encodes the rules, and implementation follows from that. Lean toward neutral, well-documented standards (W3C, SKOS, GAAP), not a vendor's data model or a platform's field definitions. Open standards carry no cruft and are already well-represented in AI training data. There is a side benefit worth calling out: enterprise solutions and cloud platform stacks have layered so much abstraction over business operations that many organizations have lost sight of how their own business actually works. Everyone hopes someone else understands it. Adopting domain patterns reverses that, but without the cognitive load of integration details, because that's left to architecture and agents. I think many leaders, even non-technical ones, will be pleasantly surprised at how much clarity and understanding accelerate progress.
Scope as large as it still works. The rule of thumb: as big as you can while it still works — meaning it can be encoded into an object that fits the architecture, matches the desired capabilities, and can be validated. Going too small is fine, though, because coherence audits will identify pattern overlaps, and consolidating them is a simple audit adjustment that simplifies the architecture.
Bigger pattern scopes are good because you will probably go faster, but there is a better reason: broader patterns preserve meaning and goals in a way that smaller units cannot, because patterns are complete end-states: a goal that contains the complete path to finished. I'm not anti-Agile/Scrum, but when I recognized the connection between Patterns and Agile, I found extra motivation. Ok, so maybe I'm anti-Agile, but only because I want something better. When goals are decomposed into stories or features, meaning fragments — the end state and goal get abstracted away from all of the work. Software development and Agile aren't the only domains where this happens, and Patterns don't mean you can do the work all at once, but they do mean the plan is much clearer, and the full process of Semantic Optimization makes execution much clearer too. Need a digital publishing system? Why not just adopt the entire standard Digital Asset Management (DAM) architecture as the Pattern: bolt it onto your existing DDD architecture and get the whole thing, even if you aren't using most of it yet. I did. Take that, Agile.
For the full practical guide on finding, sizing, adopting, and evolving patterns, see Working with Patterns.
From Pattern to Product
A pattern is a desired end-state — the complete model of what something should look like. Capabilities come from business needs, not from decomposing the pattern. The Explicit Architecture encodes the pattern into the system — capabilities, repositories, infrastructure — all the way down. In SemOps, this is illustrated by the Global Architecture and Strategic DDD docs.
An example: you have been doing accounting manually in QuickBooks — no integration. A business need arises: you want to track LLM API costs more frequently and use that data for analytics — scoping agent features, researching open-source model replacements for expensive ones. Your pattern search reveals that the data plumbing for cost tracking is not a big deal, but it makes more sense to adopt an entire Accounting System pattern and go full agentic integration, because it is actually simpler and safer to implement "the right way to do accounting" than to hack a short-term solution that might expose sensitive data. You do not have to light up every accounting feature the pattern includes — but it is in the architecture, ready when you want it. Now you have a stable core to build your cost modeler on, governance is built in at a natural boundary around financial data, and the pattern's canonical form means AI can do most of the implementation work with high fidelity.
In practice, the chain is Pattern → Capability → Repository. Patterns define the end-state. Capabilities describe understandable functions that fulfill business needs. Repositories are agent and function boundaries — the scope of context an AI agent or team needs to do useful work. Here is a sample from the SemOps Strategic DDD:
| Capability | Implements Patterns | Repository |
|---|---|---|
| Domain Data Model | ddd, skos, prov-o, explicit-architecture | semops-hub-pr |
| Internal Knowledge Access | agentic-rag | semops-hub-pr |
| Coherence Scoring | semantic-coherence | data-pr, semops-hub-pr |
Every capability traces to at least one pattern; a capability with no pattern link either reveals a missing pattern or an unjustified capability. And it goes all the way down — the Global Infrastructure maps shared libraries to the scripts and capabilities they power:
| Library | Repository | Script(s) | Capability |
|---|---|---|---|
| pydantic | semops-hub-pr | source_config.py, api/query.py | Source Configuration, Query API |
| | publisher-pr | config.py | Publishing Configuration |
| | backoffice-pr | voice/shared/schemas.py | Voice Control Models |
| click | publisher-pr | publish.py, export_pdf.py | Publishing CLI, PDF Export |
| | data-pr | cli.py | Data Toolkit CLI |
This is how the architecture stays inspectable all the way from intent to infrastructure — and how agents know exactly what context they need.
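For instance, here is a minimal sketch of the traceability rule (every capability must trace to at least one pattern), assuming the Strategic DDD mapping is available as plain records; a real check would query the architecture itself:

```python
# Capability → pattern mapping, taken from the Strategic DDD sample above,
# plus one hypothetical failure case.
capability_patterns = {
    "Domain Data Model": ["ddd", "skos", "prov-o", "explicit-architecture"],
    "Internal Knowledge Access": ["agentic-rag"],
    "Coherence Scoring": ["semantic-coherence"],
    "Orphan Capability": [],  # hypothetical: no pattern link
}

def unjustified_capabilities(mapping: dict[str, list[str]]) -> list[str]:
    """Capabilities with no pattern link: either a missing pattern or an unjustified capability."""
    return [capability for capability, patterns in mapping.items() if not patterns]

print(unjustified_capabilities(capability_patterns))  # ['Orphan Capability']
```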
This is why AI is so effective at this step. The pattern is already in the model's training data. The architecture tells the agent what capabilities are needed and where they belong. The agent generates implementations that conform to the pattern's canonical form, and coherence measurement validates whether the result matches intent. The human judgment that identified the business need and selected the pattern is preserved — AI handles the translation from meaning to mechanism.
From Edge to Core
New patterns do not arrive fully formed. They follow the Stable Core, Flexible Edge principle: ideas emerge at the edge as lightweight data shapes, get tested without committing to schema changes, and promote to the stable core when validated through a promotion loop that evaluates frequency, cross-team relevance, predictive value, stability, and strategic centrality. The core never changes until a shape proves its value. If it does not, the edge absorbs the cost and the core remains untouched.
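As a sketch of what that promotion decision could look like in code, with invented weights and threshold (the framework names the criteria but does not prescribe a scoring formula):

```python
PROMOTION_CRITERIA = [
    "frequency", "cross_team_relevance", "predictive_value", "stability", "strategic_centrality",
]

def should_promote(shape_scores: dict[str, float], threshold: float = 0.7) -> bool:
    """Promote an edge shape to the stable core only when its averaged criteria clear the bar."""
    average = sum(shape_scores.get(c, 0.0) for c in PROMOTION_CRITERIA) / len(PROMOTION_CRITERIA)
    return average >= threshold

candidate = {"frequency": 0.9, "cross_team_relevance": 0.6, "predictive_value": 0.8,
             "stability": 0.7, "strategic_centrality": 0.6}
print(should_promote(candidate))  # True: the averaged criteria clear the 0.7 bar
```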
Semantic Coherence
Semantic Coherence is the degree to which meaning is available, consistent, and stable across an organization and its systems. A coherent system is one where humans and AI agents can operate correctly because the semantics are aligned — rules, decisions, goals, and resources are clear, shared, and operable.
Coherence is measurable as a composite metric across three dimensions:
Availability. Can people and systems find the meaning they need? Definitions that exist but cannot be discovered are effectively absent. Tribal knowledge hoarded by individuals, documentation that nobody can locate, duplicate definitions created because originals are unfindable — these are availability failures.
Consistency. Do different systems and teams interpret concepts the same way? When "revenue" means recognized revenue to Finance, booking value to Sales, and in-app purchase amount to Product, three dashboards show three different numbers and leadership loses trust in all of them. AI amplifies this — models trained on inconsistent definitions confidently produce inconsistent outputs.
Stability. Does meaning stay constant over time without uncontrolled drift? Definitions that change without versioning or communication make historical comparisons invalid. Teams discover that "the data used to work, now it does not" without understanding that the underlying semantics shifted.
The composite score uses a geometric mean: SC = (A × C × S)^(1/3). The geometric mean matters because if any dimension collapses to zero, coherence collapses to zero. High availability cannot compensate for zero consistency. All three must be present. Like a financial close that reconciles accounts before decisions are made, a semantic audit reconciles definitions across systems — producing a known-good state that both humans and AI agents can trust.
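The formula translates directly into code; the dimension scores below are illustrative, not measured:

```python
def semantic_coherence(availability: float, consistency: float, stability: float) -> float:
    """Composite score SC = (A * C * S) ** (1/3); a zero in any dimension zeroes the whole score."""
    return (availability * consistency * stability) ** (1 / 3)

print(round(semantic_coherence(0.9, 0.8, 0.7), 3))  # 0.796
print(semantic_coherence(0.9, 0.0, 0.9))            # 0.0: high availability cannot rescue zero consistency
```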
The Optimization Loop
Patterns provide stable knowledge growth. Coherence creates a stable knowledge state. Together they form an optimization loop — and the interaction between them is what makes Semantic Optimization operational rather than aspirational.
    Pattern ──prescribes──→ Implementation
       ↑                          ↓
       └──── Coherence ←──measures──┘
         (directs)
Patterns prescribe what the organization should look like. Coherence measures whether reality matches that intent. Critically, coherence also directs: the arrow back to patterns is not just "report status." When a coherence assessment identifies a gap, the response can be to adopt a new pattern, modify an existing one, revert a pattern that disrupted alignment, or evolve a pattern to match an implementation that drifted for good reason. Sometimes the implementation drifted because the team discovered something the pattern did not account for — coherence assessment reveals this, and the correct response may be to update the pattern rather than force the implementation back.
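A sketch of that directive behavior, using the four responses named above; the gap signal, decision inputs, and threshold are invented for illustration:

```python
from enum import Enum
from typing import Optional

class Response(Enum):
    ADOPT_NEW_PATTERN = "adopt a new pattern"
    MODIFY_PATTERN = "modify the existing pattern"
    REVERT_PATTERN = "revert the pattern that disrupted alignment"
    EVOLVE_PATTERN = "evolve the pattern to match the implementation"

def direct(gap: float, pattern_exists: bool, drift_was_justified: bool,
           pattern_recently_changed: bool) -> Optional[Response]:
    """Turn a coherence gap into one of the four responses; the threshold is illustrative."""
    if gap < 0.1:
        return None  # reality matches intent closely enough; no action needed
    if not pattern_exists:
        return Response.ADOPT_NEW_PATTERN
    if drift_was_justified:
        return Response.EVOLVE_PATTERN  # the team discovered something the pattern did not account for
    if pattern_recently_changed:
        return Response.REVERT_PATTERN
    return Response.MODIFY_PATTERN

print(direct(gap=0.4, pattern_exists=True, drift_was_justified=True, pattern_recently_changed=False))
# Response.EVOLVE_PATTERN
```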
In the SemOps domain model, coherence measurement is a first-class object — a Coherence Assessment with its own lifecycle, scope spanning multiple patterns, and genuine authority over pattern evolution. This makes the domain model three core aggregates: Pattern (what should we look like?), Coherence Assessment (how well does reality match intent?), and Entity (what actually exists?). The active feedback loop between prescriptive intent and evaluative reality is what makes "semantic operations" different from "just having a knowledge graph."
Why AI Thrives Here
AI does not solve coherence. It reduces the energy barrier to achieving it. The same operations that humans struggle to maintain at organizational scale — holding context across domains, detecting drift before it compounds, validating consistency across dozens of systems — are operations that AI can perform continuously, cheaply, and without the working memory constraints that make coherence expensive for humans.
| Human Limitation | AI Capability |
|---|---|
| Working memory exceeded across domains | Surface reference patterns from training data, documentation, and any organized corpus |
| Semantic drift occurs faster than coherence can be established manually | Validate consistency across contexts continuously |
| Pattern recognition across systems is vague and siloed | Construct, deconstruct, and compare knowledge structures at speed |
| Context lost between sessions and handoffs | Snapshot and reconstruct complex decision history and lineage |
For standard patterns — the third-party baselines that make up the majority of any organization's operational foundation — AI provides near-immediate adoption. The canonical form is already in the model's training data. AI identifies the right standard, encodes its canonical form, integrates it into the domain model, and validates alignment. Traditional adoption of a standard like SKOS or dimensional modeling takes significant time for research, training, and implementation. Pattern-based adoption with AI shifts the bottleneck from "can we implement this correctly?" to "should we adopt this?" — a question that takes minutes of human judgment rather than weeks of human implementation.
For coherence measurement, the convergence with RAG infrastructure is particularly valuable. Building and maintaining a retrieval-augmented generation corpus is itself an exercise in semantic classification: chunking defines pattern boundaries, embedding operationalizes consistency measurement, metadata extraction tracks provenance, and retrieval quality metrics (precision, recall, faithfulness) serve as direct proxies for semantic coherence components. When the RAG solution improves, the coherence measurement infrastructure improves simultaneously.
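One way to operationalize that proxy relationship is to reuse retrieval metrics you already track as inputs to the composite coherence score; the metric-to-dimension mapping below is an assumption for illustration, not a prescribed formula:

```python
def coherence_from_rag_metrics(recall: float, precision: float, faithfulness: float) -> float:
    """Assumed proxy mapping: recall -> availability (can meaning be found),
    precision -> consistency (is what comes back the same thing),
    faithfulness -> stability (does output stay true to source)."""
    return (recall * precision * faithfulness) ** (1 / 3)

print(round(coherence_from_rag_metrics(recall=0.92, precision=0.85, faithfulness=0.88), 3))
```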
The goal is not AI as an oracle. It is AI as a context-stabilization system that gives humans enough cognitive space to generate correct meaning before drift occurs.
How the Pillars Connect
Semantic Optimization depends on the other two pillars and completes the framework:
Strategic Data provides the objects. Strategic Data operates at D→I — converting raw data into structured information with explicit meaning. It provides the structured semantic objects, governance discipline, and expanded data sources that feed coherence measurement. Without Strategic Data, Semantic Optimization has nothing reliable to measure or optimize against.
Explicit Architecture provides the rules. Explicit Architecture supplies the bounded contexts, anti-corruption layers, and encoded strategy that give patterns their structural scaffolding. Without Explicit Architecture, patterns exist but drift freely — no boundaries to enforce, no constraints to optimize within.
Semantic Optimization provides the feedback loop. It measures whether the objects created by Strategic Data and governed by Explicit Architecture are maintaining their integrity as meaning flows through the system. It detects when reality diverges from intent and directs the response — closing the loop that keeps the entire framework aligned.
Strategic Data creates the right objects. Explicit Architecture provides the stable rules. Semantic Optimization measures, optimizes, and repeats.
Where to Start
Semantic Optimization is a continuous practice, not a one-time project. Five starting points that deliver value immediately:
Identify the terms that cause confusion. Every organization has a handful of concepts — "customer," "revenue," "active user" — where different teams use different definitions. Documenting the canonical definition and identifying where it diverges is the first coherence measurement, and it requires no tooling beyond a shared document.
Adopt one standard pattern explicitly. Pick a well-established pattern — dimensional modeling for analytics, SKOS for knowledge organization, OAuth for identity — and adopt it with tracked provenance. Document what the standard says, what the organization implements, and where they differ intentionally. This establishes the baseline for tracked evolution.
Measure coherence where it hurts. Start with the concept that causes the most friction: the metric that shows different numbers in different dashboards, the entity whose definition changes depending on which team is asked. Score its availability, consistency, and stability. The composite reveals which dimension is actually broken.
Treat coherence assessment as a recurring event. Not a one-time audit, but a cadence matched to how decisions are made. Strategic decisions happen quarterly — run a deep audit before the planning cycle. Engineering decisions happen in sprints — check coherence at sprint boundaries. Agentic coding happens continuously — coherence becomes operating infrastructure, not a report.
Close the loop. When a coherence assessment reveals a gap, act on it: adopt a pattern, revise one, or realign an implementation. The assessment has no value if it does not drive change. The loop between pattern and coherence is the mechanism — without both halves, there is no optimization.
Related Links
- The Semantic Funnel — The mental model and rule classification behind the framework
- Strategic Data — The D→I pillar: how organizations think about and manage data
- Explicit Architecture — Strategy encoded as inspectable, queryable structure
- Why SemOps? — The full case for why meaning matters and what makes it hard
- What is SemOps? — The framework definition and overview