All 10 Phases Complete — 680 tests · 21,167 LOC

GraphPalace

A stigmergic memory palace engine — fully local, private, self-optimizing AI memory that runs in a browser tab, on a server, or on an edge device. No cloud. No API keys. No data exfiltration.

13 Rust Crates · 680 Tests Passing · 21,167 Lines of Code · 28 MCP Tools · 10/10 Phases Complete · <50ms Search Target
Architecture

The Palace Hierarchy

A spatial memory system where Wings contain Rooms, Rooms hold Closets, and Closets store Drawers — each level a first-class graph node with pheromone trails.

🏛️ Palace — Root node. Contains all Wings.
🪶 Wing — type: "person" | "project" | "domain" | "topic" · embedding[384] · pheromones: exploitation, exploration
🚪 Room — hall_type: "facts" | "events" | "discoveries" | "preferences" | "advice" · pheromones: exploitation, exploration
🗄️ Closet — summary: compressed · drawer_count: N · pheromones: exploitation, exploration
📄 Drawer — content: verbatim (never summarized) · embedding[384] · source: "conversation" | "file" | "api" · REFERENCES → Entity
🔗 Entity — Knowledge graph node. type: "person" | "concept" | "event" · RELATES_TO → Entity · temporal triples: valid_from / valid_to
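The containment levels above can be sketched as a plain Rust enum. The variant names mirror the node types listed, but the exact gp-core definitions are not shown on this page, so treat the field-free enum and the `depth` helper as illustrative assumptions.

```rust
// Sketch of the palace hierarchy as node kinds; names follow the page,
// but this is NOT the actual gp-core NodeType definition.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NodeKind {
    Palace, // root; contains all Wings
    Wing,   // person / project / domain / topic
    Room,   // facts / events / discoveries / preferences / advice
    Closet, // compressed summary over N drawers
    Drawer, // verbatim content, never summarized
    Entity, // knowledge-graph node, linked from drawers via REFERENCES
}

// Depth in the containment hierarchy; Entity sits outside it,
// attached to drawers rather than nested under them.
fn depth(kind: NodeKind) -> Option<u8> {
    match kind {
        NodeKind::Palace => Some(0),
        NodeKind::Wing => Some(1),
        NodeKind::Room => Some(2),
        NodeKind::Closet => Some(3),
        NodeKind::Drawer => Some(4),
        NodeKind::Entity => None,
    }
}
```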

Connection Types

🏠 Hall cost: 0.5

Connects rooms within the same wing. Navigate between subjects in one domain.

🌀 Tunnel cost: 0.7

Connects rooms across different wings. Cross-domain discoveries live here.

🔗 References cost: 0.5

Links drawers to knowledge graph entities. Memory ↔ concept bridges.

🪞 Similar_To cost: 1 − sim

Auto-computed semantic similarity between drawers. Cost falls as similarity rises: cost = 1 − sim.

⚡ Relates_To cost: varies

Knowledge graph edges: causes, inhibits, correlates_with, instance_of, and more.
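The Similar_To cost rule above is simple enough to state directly in code. This is a minimal sketch of the cosine-similarity-to-cost mapping; the function names are illustrative, not the gp-pathfinding API.

```rust
// Cosine similarity between two embedding vectors.
fn cosine_sim(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

// SIMILAR_TO edge cost: identical drawers cost ~0 to traverse,
// unrelated drawers cost ~1.
fn similar_to_cost(a: &[f32], b: &[f32]) -> f32 {
    1.0 - cosine_sim(a, b)
}
```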

Capabilities

What Makes It Different

GraphPalace combines graph databases, stigmergy, active inference, and on-device embeddings into one self-optimizing system.

🐜

Stigmergic Navigation

5-type pheromone system on nodes and edges creates an adaptive, self-optimizing knowledge landscape. Paths that work get reinforced. Old trails fade naturally.

🏛️

Palace Spatial Hierarchy

Wings → Rooms → Closets → Drawers as first-class graph nodes, not metadata tags. Halls connect within wings; tunnels cross between them.

🧭

Semantic A* Pathfinding

Composite cost model: 40% semantic similarity + 30% pheromone guidance + 30% structural weight. Context-adaptive weights per task type.

🧠

Active Inference Agents

Karl Friston's Expected Free Energy minimization. Bayesian beliefs, softmax action selection, temperature annealing. 5 archetypes from Explorer to Specialist.
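The softmax action selection mentioned above can be sketched in a few lines: actions with lower Expected Free Energy get higher probability, and the temperature controls how greedy the choice is. The function name and signature are assumptions for illustration, not the gp-agents API.

```rust
// Softmax over negative Expected Free Energy values, one per candidate
// action. Lower EFE => higher probability. Temperature annealing means
// this temperature shrinks over time, sharpening toward exploitation.
fn softmax_policy(neg_efe: &[f64], temperature: f64) -> Vec<f64> {
    // Subtract the max before exponentiating, for numerical stability.
    let max = neg_efe.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = neg_efe
        .iter()
        .map(|&g| ((g - max) / temperature).exp())
        .collect();
    let z: f64 = exps.iter().sum();
    exps.iter().map(|e| e / z).collect()
}
```

At high temperature the distribution is nearly uniform (exploration); as the temperature anneals toward zero it concentrates on the lowest-EFE action (exploitation).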

🔒

Fully Local & Private

Runs in a browser tab (WASM), on a server, or an edge device. No cloud, no API keys, no data exfiltration. Your memories stay yours.

🔌

28 MCP Tools

Full Model Context Protocol interface: palace navigation, memory CRUD, knowledge graph, stigmergy controls, agent diaries, import/export.

📊

Knowledge Graph

Temporal entity-relationship triples with confidence scores, valid_from/valid_to timestamps, and contradiction detection. Causal chains up to 5 hops.

On-Device Embeddings

TF-IDF + sparse random projection (384-dim) in pure Rust — zero model files, zero API calls. Optional ONNX backend for all-MiniLM-L6-v2. Measured 96% recall@5.
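The projection step can be sketched as hashing each token to a few signed positions in a 384-dimensional vector. The hash scheme, sparsity (3 positions per token), and function names here are illustrative assumptions, not the gp-embeddings implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIM: usize = 384;

// Sparse random projection: each (token, TF-IDF weight) pair contributes
// a signed weight at a few hash-derived positions. Deterministic, so the
// same text always maps to the same vector, with no model file needed.
fn project(tokens_with_weights: &[(&str, f32)]) -> Vec<f32> {
    let mut v = vec![0.0f32; DIM];
    for &(tok, w) in tokens_with_weights {
        for seed in 0..3u64 {
            let mut h = DefaultHasher::new();
            (tok, seed).hash(&mut h);
            let x = h.finish();
            let idx = (x % DIM as u64) as usize;
            let sign = if (x >> 63) & 1 == 0 { 1.0f32 } else { -1.0f32 };
            v[idx] += sign * w; // w = TF-IDF score of the token
        }
    }
    v
}
```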

📝

skills.md Protocol

A teachable file any LLM loads to learn palace navigation — Cypher patterns, pheromone semantics, tool reference, and example workflows.


🕸️

Similarity Graph

Automatic SIMILAR_TO edges between semantically related drawers — A* navigates meaning connections, not just structural hierarchy.

Collective Intelligence

The Pheromone System

Inspired by ant colony optimization and adapted from STAN_X v8 — five pheromone types encode what the swarm has learned about the palace.

| Pheromone | Applied to | Signal | Decay/cycle | Half-life |
|---|---|---|---|---|
| Exploitation | Nodes | "This location is valuable — come here" | 0.02 | ~35 cycles |
| Exploration | Nodes | "Already searched — try elsewhere" | 0.05 | ~14 cycles |
| Success | Edges | "This connection led to good outcomes" | 0.01 | ~69 cycles |
| Traversal | Edges | "This path is frequently used" | 0.03 | ~23 cycles |
| Recency | Edges | "This was used recently" | 0.10 | ~7 cycles |
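The half-lives listed are consistent with multiplicative exponential decay at the stated per-cycle rates. A minimal sketch, assuming decay of the form level × (1 − rate) per cycle (the actual gp-stigmergy decay function is not shown on this page):

```rust
// Pheromone level after `cycles` decay steps at the given per-cycle rate.
fn decay(level: f64, rate: f64, cycles: u32) -> f64 {
    level * (1.0 - rate).powi(cycles as i32)
}

// Cycles until the level halves: ln(2) / -ln(1 - rate),
// which is approximately ln(2) / rate for small rates.
fn half_life(rate: f64) -> f64 {
    (2.0f64).ln() / -(1.0 - rate).ln()
}
```

Plugging in the table's rates reproduces its half-lives: 0.01 gives ~69 cycles, 0.05 gives ~14, 0.10 gives ~7.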

Edge Cost Formula

cost(edge) = 0.4 × C_semantic + 0.3 × C_pheromone + 0.3 × C_structural

Semantic similarity guides toward the goal. Pheromone trails encode collective intelligence. Structural weights respect the graph topology. Together they create paths that are meaningful, proven, and architecturally sound.
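The formula above translates directly into code. A sketch, assuming each component cost is pre-normalized to [0, 1]; `CostWeights` is named as a gp-core type earlier on this page, but its field names here are assumptions.

```rust
// Weights for the composite edge cost; defaults follow the 40/30/30
// split above, and the page notes they adapt per task type.
struct CostWeights {
    semantic: f64,
    pheromone: f64,
    structural: f64,
}

const DEFAULT_WEIGHTS: CostWeights = CostWeights {
    semantic: 0.4,
    pheromone: 0.3,
    structural: 0.3,
};

// cost(edge) = 0.4 * C_semantic + 0.3 * C_pheromone + 0.3 * C_structural
fn edge_cost(w: &CostWeights, c_sem: f64, c_pher: f64, c_struct: f64) -> f64 {
    w.semantic * c_sem + w.pheromone * c_pher + w.structural * c_struct
}
```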

Implementation

Rust Crate Architecture

Thirteen crates forming the complete GraphPalace system — 680 tests, zero failures, building on Kuzu's embedded graph engine.

gp-core ✓ 19 tests
Core types, palace schema, configuration, and graph initialization. The foundation everything else depends on.
PalaceSchema NodeType EdgeType Config CostWeights
gp-stigmergy ✓ 95 tests
5-type pheromone system with exponential decay, Cypher query generation, position-weighted deposits, and edge cost recomputation.
PheromoneSystem CypherQuery decay() deposit() recompute_cost()
gp-pathfinding ✓ 50 tests
Semantic A* with composite cost, adaptive heuristic, context-aware weights, provenance, and benchmark infrastructure.
SemanticAStar PalaceGraph Benchmarks Provenance
gp-agents ✓ 50 tests
Active Inference agents with EFE minimization, Bayesian belief updates, softmax action selection, and 5 archetypes.
ActiveInferenceAgent BeliefState GenerativeModel 5 archetypes
gp-swarm ✓ 50 tests
Multi-agent swarm coordinator: sense→decide→act→update cycle, 3-criteria convergence detection, interest scoring.
SwarmCoordinator ConvergenceDetector InterestScore DecayScheduler
gp-embeddings ✓ 34 tests
Embedding engine trait with TF-IDF + sparse random projection (96% recall) and mock backend. WASM-safe. No model files, no API calls.
EmbeddingEngine TfIdfEmbedder MockEmbedder batch_embed() cosine_sim()
gp-mcp ✓ 84 tests
MCP server with JSON-RPC 2.0, 28 tools, dynamic PALACE_PROTOCOL prompt, stdio and HTTP transport.
28 tools PALACE_PROTOCOL JSON-RPC 2.0 McpServer
gp-wasm ✓ 67 tests
InMemoryPalace engine, full JS API via wasm-bindgen, Web Worker messages, IndexedDB/OPFS persistence.
InMemoryPalace wasm-bindgen Web Workers IndexedDB
gp-storage ✓ 60 tests
Storage backend with Kuzu C API FFI bindings, InMemoryBackend for testing, schema initialization, palace CRUD.
StorageBackend InMemoryBackend Kuzu FFI CRUD
gp-palace ✓ 63 tests
Unified orchestrator: GraphPalace struct, pheromone-boosted search, A* navigation, KG, export/import.
GraphPalace Search Navigate Export/Import
gp-bench ✓ 43 tests
Benchmark suite: recall@k, A* pathfinding, throughput, Criterion harness, comparison reports.
Recall Pathfinding Throughput Criterion
Comparison

How GraphPalace Compares

A detailed feature comparison against existing AI memory systems.

| Feature | MemPalace | Mem0 | Zep / Graphiti | GraphPalace |
|---|---|---|---|---|
| Storage | ChromaDB (flat vectors) | LLM-extracted facts | Neo4j (graph) | Property graph + vectors + FTS |
| Retrieval | Cosine + metadata | LLM retrieval | Graph traversal | Stigmergic A* (semantic + pheromone + structural) |
| Intelligence | None (passive) | LLM-dependent | Entity resolution | Active Inference agents |
| Spatial Hierarchy | ✓ Wings/Rooms/Closets/Drawers | — | — | ✓ First-class graph nodes |
| Knowledge Graph | ✓ SQLite triples | — | ✓ Neo4j | ✓ Temporal triples in Kuzu |
| Pheromones | — | — | — | ✓ 5 types, auto-decay |
| Self-Optimizing | — | — | — | ✓ Paths improve with use |
| Runs Where | Local Python | Cloud only | Cloud or self-host | Browser / Edge / Server (WASM) |
| Privacy | ✓ Fully local | ✗ Cloud | ⚠ Self-host available | ✓ Zero-cloud, zero-API |
| LLM Integration | ✓ MCP (19 tools) | ✓ API | ✓ API | ✓ MCP (28 tools) + skills.md |
| Measured Recall@10 | 96.6% (LongMemEval) | Varies (LLM-dependent) | Varies | 96% (TF-IDF, no model) |
| Search Latency | ~100ms | Cloud RTT | ~50-200ms | 5-21 µs (A* pathfinding) |
| Cost | Free | $19-249/mo | $25+/mo | Free (MIT license) |
Measured Results

Benchmark Performance

Real benchmark results from gp-bench — no projections, no estimates. Every number measured.

📊 Recall Performance

| Engine | Recall@1 | Recall@5 | Recall@10 | Recall@20 | MRR |
|---|---|---|---|---|---|
| Mock (FNV-1a hash) | 54% | 54% | 54% | 54% | 0.54 |
| TF-IDF (real semantics) | 96% | 96% | 96% | 100% | 0.96 |
| Target (MemPalace) | — | — | 96.6% | — | — |

🧭 A* Pathfinding Performance

| Scenario | Success Rate | Avg Latency | vs. Target |
|---|---|---|---|
| Same-Wing | 100% | 8–21 µs | <200ms ✅ (10,000× under) |
| Cross-Wing | 100% | 5–13 µs | <500ms ✅ (38,000× under) |
| General (random) | 25–32% | 10–85 µs | Exploratory |

Throughput at 1,000 drawers

Insert: 48,040 ops/sec
Search: 1,460 qps
Pheromone Decay: 11,362 cycles/sec
Export: 74 exports/sec

🔄 Soak Test (swarm stability)

Total Actions: 2,500
Agents × Cycles: 5 × 500
Productive Rate: 100%
Peak Pheromone Mass: 76,424

Pheromone mass: 0 → 76,424 with periodic decay. Stable convergence dynamics.

96% recall with zero model files • 5–21 µs pathfinding • 48K ops/sec throughput • 100% soak stability

All measurements from gp-bench running on standard hardware. No cherry-picking, no projections.

Roadmap

Implementation Phases

All ten phases complete. 13 crates, 680 tests, 21,167 LOC — from foundation to benchmarks.

1. Foundation · Week 1-2 · ✅ Complete
Fork Kuzu, Rust workspace, gp-core (types, schema, config), gp-embeddings (ONNX), gp-stigmergy, gp-pathfinding, gp-agents, gp-mcp, gp-wasm stubs. 7 crates, 224 tests, 5,800 LOC.

2. Stigmergy Integration · Week 2-3 · ✅ Complete
Cypher query generation (10 query types), bulk decay operations, position-weighted path rewards, edge cost recomputation. +38 tests.

3. Pathfinding · Week 3-4 · ✅ Complete
PalaceGraph benchmark infrastructure, full hierarchy traversal tests, cross-wing tunnels, pheromone effects, context-adaptive weights. +21 tests.

4. Agents + Swarm · Week 4-5 · ✅ Complete
NEW gp-swarm crate: SwarmCoordinator (sense→decide→act→update), 3-criteria ConvergenceDetector, interest scoring, decay scheduling. +50 tests.

5. MCP + Skills · Week 5-6 · ✅ Complete
JSON-RPC 2.0 MCP server, 28-tool dispatch, dynamic PALACE_PROTOCOL prompt with live stats, 401-line skills.md protocol. +42 tests.

6. WASM · Week 6-8 · ✅ Complete
InMemoryPalace engine, full JS API (wasm-bindgen), Web Worker message types, IndexedDB/OPFS persistence layer. +63 tests.

7. Distribution · Week 8-10 · ✅ Complete
CI/CD (GitHub Actions), 7 doc files (1,540 LOC), CLI stub (12 commands), Python bindings (PyO3), NPM package config, 3 example programs.

8. Kuzu FFI + Storage · Week 10-11 · ✅ Complete
NEW gp-storage crate: StorageBackend trait, InMemoryBackend (full CRUD + cosine search), Kuzu C API FFI bindings (feature-gated), schema initialization, palace operations. +60 tests.

9. Live Palace · Week 11-12 · ✅ Complete
NEW gp-palace crate: GraphPalace orchestrator, auto-hierarchy creation, pheromone-boosted search, A* navigation, KG CRUD, export/import (Replace/Merge/Overlay). +63 tests.

10. Benchmarks · Week 12-14 · ✅ Complete
NEW gp-bench crate: recall@k (target ≥96.6%), A* pathfinding (target ≥90.9%), throughput benchmarks, Criterion harness, comparison reports (JSON/Markdown). +43 tests.
Heritage

Research Foundation

GraphPalace stands on the shoulders of giants — combining insights from memory science, graph databases, swarm intelligence, and neuroscience.

🧠

MemPalace

Verbatim storage philosophy. Palace spatial metaphor. 96.6% LongMemEval recall. Never summarize; store raw, search semantically.

🐜

STAN_X v8

5 pheromone types. Position-weighted rewards. Semantic A* (40/30/30). Active Inference agents. Cosine annealing.

📊

Kùzu

Embedded graph database (163K LOC, MIT). Cypher, native HNSW vector search, FTS, WASM bindings, columnar storage.

🧬

Karl Friston

Active Inference and Expected Free Energy minimization. Bayesian belief updates. The mathematical foundation for agent curiosity.

🏛️

Method of Loci

Simonides (~500 BC). The original memory palace — spatial organization aids recall. 2,500 years of proven effectiveness.

📐

VBRL Architecture

Modular WASM microservices on edge devices. Sandboxed execution. Patent WO 2024/239068 A1.

Publication

Research Paper

An 18-page paper with full methodology, 10 equations, 2 algorithms, 8+ tables, and 19 references — covering architecture, experimental evaluation, and comparison with MemPalace, Mem0, and Zep.

📄 "GraphPalace: A Stigmergic Memory Palace Engine for AI Agents"

web3guru888 · April 2026 · 18 pages · MIT License

Key Results

  • Recall@10: 96% (TF-IDF) — matches MemPalace's 96.6%
  • A* success: 100% — exceeds STAN_X's 90.9%
  • A* latency: 8–21 µs — 10,000× under target
  • Insert: 50K ops/sec

Paper Contents

  • 📐 10 numbered equations
  • 🔄 2 algorithm blocks (A*, Swarm)
  • 📊 8+ benchmark tables
  • 📚 19 references (Grassé '59 → 2026)