Proof Points

Systems that remember.


The Recursive Proof

We don't just consult on Semantic Memory Systems. We build them. The methodology we teach is the methodology we use. The systems we recommend are systems we operate.

Three proof points demonstrate this is real, not theoretical:

  1. TerpTune — Semantic memory for personal neurochemistry
  2. Book of Fire — Semantic memory system explaining semantic memory
  3. This Website — The site you're reading, built on its own methodology

All three are live. All three are working. All three demonstrate the principles we consult on.


TerpTune

AI cannabis concierge that maps personal neurochemistry to terpene profiles.

What It Is

TerpTune is a conversational AI system that helps users understand how different cannabis strains affect them personally. The interface is a character named Karl—not a generic chatbot, but an AI with persistent memory of your patterns, preferences, and responses.

Link: terptune.com

The Semantic Memory

Karl doesn't remember every conversation. Karl remembers what matters:

Patterns: "You respond well to pinene-dominant strains for focus work, but myrcene above 0.7% tends to fog you."

Vocabulary: Karl learns your language. If you call a certain sensation "carnival brain" or describe an effect as "homunculus shrinking," Karl remembers and uses those terms back.

Thresholds: Personal neurochemistry varies. Karl tracks the levels at which specific terpenes help or hinder you specifically, based on your verified data rather than generic advice.

Context: Work day vs. creative session vs. evening wind-down requires different guidance. Karl knows the context and adjusts.

How It Demonstrates Semantic Memory

TerpTune is built on the same architectural principles we consult on:

Canonical claims, not documents: Karl's knowledge about you isn't stored as conversation transcripts. It's stored as verified claims: "User reports caryophyllene at 0.8%+ provides anxiety relief" is a claim, owned by the system, updatable when new data arrives.

Strategic forgetting: Karl doesn't remember the timestamp of every session. Karl remembers the patterns that emerged from those sessions. This is semantic memory: knowing what's true without remembering when you learned it.

Discrimination infrastructure: Karl can be wrong. When Karl says "this strain should work for your afternoon focus" and it doesn't, you tell Karl. The correction sticks. The claim updates. Future recommendations improve.

Self-explanation: Ask Karl "why did you recommend this?" and Karl can show the reasoning: which of your patterns led to this recommendation, which terpene thresholds were considered, what the confidence level is.
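
To make that concrete, here is a minimal sketch of what a single claim might look like as a data record. The field names and the Python shape are illustrative, not TerpTune's actual schema, but they show how correction, confidence, and explanation hang off the claim itself rather than off a conversation transcript.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Claim:
    """One verified statement about a user, owned and updated by the system."""
    statement: str              # e.g. "caryophyllene at 0.8%+ provides anxiety relief"
    context: str                # e.g. "afternoon focus", "evening wind-down"
    confidence: float           # 0.0-1.0, raised or lowered as feedback arrives
    evidence: list[str] = field(default_factory=list)  # session-level observations
    last_verified: date | None = None

    def correct(self, observation: str, confirmed: bool) -> None:
        """Discrimination in action: a user correction updates the claim, not a transcript."""
        self.evidence.append(observation)
        delta = 0.1 if confirmed else -0.2
        self.confidence = max(0.0, min(1.0, self.confidence + delta))
        self.last_verified = date.today()

    def explain(self) -> str:
        """Self-explanation: show what the claim says, how confident it is, and why."""
        return (f"{self.statement} (confidence {self.confidence:.0%}, "
                f"based on {len(self.evidence)} observations)")
```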

The Technical Stack

  • Cloudflare Workers for the backend
  • D1 database storing canonical user claims
  • Claude as the reasoning layer
  • Structured memory that persists across sessions

Karl isn't trained on your data—Karl references verified claims about you. The distinction matters: training creates hallucination risk; referencing canonical claims creates trustworthy personalization.
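
As an illustration of that distinction, here is a hedged sketch of how a reasoning layer can reference stored claims at request time instead of learning them. The table layout, the query, and the ask_model helper are assumptions made for the example (D1 is SQLite-compatible, so plain sqlite3 stands in for it here); none of this is TerpTune's production code.

```python
import sqlite3  # D1 is SQLite-compatible; a local SQLite file stands in for it here

def build_context(db: sqlite3.Connection, user_id: str, situation: str) -> str:
    """Pull only verified claims about this user; nothing is baked into model weights."""
    rows = db.execute(
        "SELECT statement, confidence FROM claims "
        "WHERE user_id = ? AND context = ? AND confidence >= 0.6 "
        "ORDER BY confidence DESC",
        (user_id, situation),
    ).fetchall()
    lines = [f"- {statement} (confidence {confidence:.0%})" for statement, confidence in rows]
    return "Verified claims about this user:\n" + "\n".join(lines)

def recommend(db: sqlite3.Connection, user_id: str, situation: str, question: str) -> str:
    """The reasoning layer sees claims as explicit context, so every answer is traceable."""
    context = build_context(db, user_id, situation)
    prompt = f"{context}\n\nQuestion: {question}\nAnswer using only the claims above."
    return ask_model(prompt)  # hypothetical wrapper around whatever LLM API is in use
```

Because the claims arrive as explicit context, a wrong answer points back to a specific claim that can be corrected, which is exactly the discrimination loop described above.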


Book of Fire

Thesis on humans as semantic ordering specialists in the age of AI—built using semantic memory architecture.

What It Is

The Book of Fire is a thesis about what happens when AI collapses the cost of generation but humans retain the monopoly on discrimination. The argument: humans are semantic information ordering specialists, and AI amplifies rather than replaces this role.

Link: s3kai.com

The Recursive Proof

Here's the recursive part: The Book of Fire is itself a semantic memory system.

The thesis exists as 22 canonical claims—verified statements about how semantic ordering works, what AI changes, and what humans uniquely provide. These claims are the source of truth.

From those 22 claims, we generate multiple editions:

Edition               Audience            Status
Grandmother Edition   General readers     Live at s3kai.com
CEO Edition           Business leaders    In development
Academic Edition      Researchers         Planned

Same thesis. Same canonical claims. Different presentations for different audiences.

How It Demonstrates Semantic Memory

Thesis as service: The canonical claims aren't frozen text—they're running infrastructure. Each edition is a deployment of those claims, rendered for a specific audience.

Verify upstream, generate downstream: When a canonical claim evolves based on new insight, all three editions are flagged for update. We don't maintain three separate books that might drift—we maintain one thesis that generates multiple presentations.

Semantic versioning for ideas: Breaking changes in canonical claims cascade to applications. Minor refinements can be adopted selectively. The same version control principles that govern software govern the thesis.

Content is code: The Book of Fire treats prose like source code. Chapters are derived artifacts. The canonical claims are the specification. Generation is automated; verification is human.
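
Here is a small sketch of what "verify upstream, generate downstream" and semantic versioning for ideas can look like in code. The CanonicalClaim and Edition shapes below are illustrative assumptions, not the Book of Fire's actual data model; the point is that a breaking change upstream flags every edition, while a minor refinement can be adopted selectively.

```python
from dataclasses import dataclass

@dataclass
class CanonicalClaim:
    claim_id: str
    text: str
    version: tuple            # (major, minor): major = breaking change, minor = refinement

@dataclass
class Edition:
    name: str                 # "Grandmother", "CEO", "Academic"
    rendered_from: dict       # claim_id -> (major, minor) version used at render time

def stale_claims(edition: Edition, canon: list) -> list:
    """Flag an edition for regeneration when upstream claims have moved past it."""
    flagged = []
    for claim in canon:
        used = edition.rendered_from.get(claim.claim_id)
        if used is None or claim.version[0] > used[0]:
            flagged.append(f"{claim.claim_id}: breaking change, {edition.name} edition must update")
        elif claim.version[1] > used[1]:
            flagged.append(f"{claim.claim_id}: minor refinement, {edition.name} edition may adopt")
    return flagged
```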

The Architecture

Canonical Layer (immutable thesis truths)
    ├── 22 core claims
    ├── Binding definitions
    └── AI boot context
            │
            ▼
Framework Layer (methodology + infrastructure)
    ├── Rendering templates
    ├── Verification protocols
    └── Schema definitions
            │
            ▼
Application Layer (deployed editions)
    ├── Grandmother Edition → s3kai.com
    ├── CEO Edition → (in development)
    └── Academic Edition → (planned)

This is the same three-layer architecture we recommend for enterprise semantic memory systems—we just happen to run it ourselves for a book about semantic memory.


This Website

The site you're reading right now is a semantic memory system.

What It Is

SemanticMemorySystems.com isn't just a website about semantic memory. It's built using semantic memory architecture. The methodology we're selling is the methodology that generated this page.

Link: semanticmemorysystems.com (You're here.)

The Recursive Proof

Look at what you're reading:

  • Homepage — Emotional urgency, pain points, overview
  • Methodology — Technical depth for implementers
  • Healthcare — Industry-specific language for clinical leaders
  • Software — Technical framing for engineering teams
  • Retail — Operations focus for multi-location businesses
  • Education — Academic rigor for institutional leaders

Six different pages. Six different audiences. One set of canonical claims.

The same truths rendered for different readers:

  • "Truth fragments when stored in multiple systems"
  • "Verification at source scales; verification of copies doesn't"
  • "AI generation without verification creates confident lies"
  • "Organizational change requires architecture, not tooling"

Every vertical page expresses these claims in industry-specific language. When a canonical claim evolves, every page updates. There's no drift because there's no copying.

How It Demonstrates Semantic Memory

Canonical layer: Eight markdown files in /pages/ contain the source content. These are the canonical claims—verified prose that represents what we actually believe.

Derivation layer: A Python build system transforms those canonical files into HTML pages. The template injects navigation, applies theme colors, sets meta tags. Generation is automated.

Output layer: The /output/ folder contains the deployed site. These HTML files are derived artifacts—they should never be edited directly because they're regenerated from source.

The Architecture

pages/                        (Canonical Layer)
├── index.md                  Homepage content
├── methodology.md            How it works
├── proof-points.md           This page you're reading
├── how-this-site-works.md    Meta-transparency page
├── healthcare.md             Healthcare vertical
├── software.md               Software vertical
├── retail.md                 Retail vertical
└── education.md              Education vertical
        │
        ▼
build/generate.py             (Derivation Layer)
├── Read markdown
├── Convert to HTML
├── Inject into template
└── Apply theme classes
        │
        ▼
output/                       (Output Layer)
├── index.html
├── methodology.html
├── proof-points.html
├── how-this-site-works.html  Meta-proof
├── healthcare.html           [theme-healthcare - emerald]
├── software.html             [theme-software - amber]
├── retail.html               [theme-retail - pink]
└── education.html            [theme-education - indigo]
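
The real generator isn't reproduced here, but a minimal sketch of the same derivation loop looks something like this. It assumes the third-party markdown package and a build/template.html with {{title}}, {{theme}}, and {{content}} placeholders; those placeholder names and the theme mapping are assumptions for the example, not the site's exact implementation.

```python
from pathlib import Path
import markdown  # third-party package: pip install markdown

THEMES = {  # assumed mapping from source file to theme class
    "healthcare": "theme-healthcare", "software": "theme-software",
    "retail": "theme-retail", "education": "theme-education",
}

def build(pages_dir: str = "pages", out_dir: str = "output") -> None:
    """Derive every HTML page from its canonical markdown source."""
    template = Path("build/template.html").read_text()
    Path(out_dir).mkdir(exist_ok=True)
    for source in Path(pages_dir).glob("*.md"):
        body = markdown.markdown(source.read_text())           # convert markdown to HTML
        theme = THEMES.get(source.stem, "theme-default")       # apply theme class
        html = (template.replace("{{title}}", source.stem)
                        .replace("{{theme}}", theme)
                        .replace("{{content}}", body))         # inject into template
        Path(out_dir, f"{source.stem}.html").write_text(html)  # derived artifact: never hand-edit

if __name__ == "__main__":
    build()
```

Editing any file in output/ by hand would be pointless; the next build overwrites it from the canonical markdown, which is the whole point.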

Why This Matters

Most company websites are maintained as separate pages. Content drifts. The homepage says one thing, the vertical page says another. Updates require hunting through multiple files.

This site can't drift: drift is architecturally impossible. The canonical claims live in one place. Everything else is derived from them.

View the source yourself: The entire build system is open. The markdown files are readable. The generator is simple Python. This isn't a black box—it's a reference implementation.

See How This Site Works →


What the Proof Points Prove

Principle 1: Canonical Claims Work

All three systems store knowledge as verified claims, not documents. TerpTune stores claims about user neurochemistry. Book of Fire stores claims about semantic ordering theory. This website stores claims about the methodology itself. All three demonstrate that claim-based knowledge management scales and maintains consistency.

Principle 2: Strategic Forgetting Works

None of these systems try to remember everything. Karl forgets session timestamps but remembers patterns. Book of Fire forgets drafting iterations but preserves canonical truths. This website forgets HTML formatting details but preserves markdown content. All three demonstrate that semantic memory—knowing what matters, forgetting what doesn't—produces better systems than perfect recall.

Principle 3: Derivation Works

All three systems generate downstream content from canonical sources. TerpTune generates recommendations from user claims. Book of Fire generates editions from thesis claims. This website generates themed HTML pages from markdown files. All three demonstrate that one source of truth can serve multiple audiences without drift.

Principle 4: Self-Explanation Works

All three systems can show their work. Karl explains why a recommendation was made. Book of Fire traces any sentence to its canonical source. This website exposes its entire build system—you can read the generator code, the templates, the source files. All three demonstrate that transparency is architectural, not aspirational.

Principle 5: This Isn't Theory

We consult on semantic memory systems because we build semantic memory systems. The architecture works. The methodology scales. The benefits compound.

You're looking at the proof right now.


Your Turn

TerpTune proves semantic memory works for personal data.

Book of Fire proves semantic memory works for intellectual content.

This website proves semantic memory works for business communications.

What would semantic memory look like for your organization?

  • Clinical protocols that propagate correctly
  • Documentation that stays current with code
  • Pricing that syncs across channels
  • Training that matches actual operations
  • AI that says "I don't know" instead of confidently lying

The architecture exists. The methodology is proven. The question is whether you're ready to stop maintaining documents and start maintaining truth.


Let's talk about what semantic memory looks like for your organization. Start a Conversation →