Fascinating approach — using VSA to compress graph traversals into O(1) SIMD operations is a clever way to sidestep the RAG vs graph DB trade-off. Curious about a couple of things: how do you handle fact deletion or correction once something is superposed into the accumulators? And what does the query interface look like from the agent's perspective — is it purely similarity-based via Hamming distance, or do you support structured relational queries too?
Thanks for the question.
Unlike vector databases that append embeddings to an HNSW graph, my working memory substrate natively supports mathematical forgetting.
I use a Squelch primitive—a SIMD-parallelized saturating subtraction over an 8-bit probabilistic accumulator.
When an agent finishes a chain-of-thought, we literally subtract the statistical mass of that specific reasoning path out of the 16,384-dimensional superposition.
It intentionally drops the signal back below the Kanerva noise floor, freeing up capacity in the L3 cache without destroying the other superposed facts. Episodic and procedural memory work the same way.
Semantic memory currently holds invariants and "the schema," so there are no deletions there, but this will likely be reworked.
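To make the forgetting idea concrete, here is a minimal NumPy sketch under stated assumptions: bipolar (+1/−1) hypervectors standing in for the binary ones, and an illustrative `squelch` function that is my toy rendition of the primitive, not the project's actual API.

```python
import numpy as np

D = 16_384                       # hypervector dimensionality from the post
rng = np.random.default_rng(0)

def rand_hv():
    """Random bipolar hypervector (+1/-1)."""
    return rng.choice([-1, 1], size=D).astype(np.int8)

# Superpose several "facts" into an 8-bit saturating accumulator.
facts = [rand_hv() for _ in range(5)]
acc = np.zeros(D, dtype=np.int8)
for f in facts:
    acc = np.clip(acc.astype(np.int16) + f, -128, 127).astype(np.int8)

def resonates(acc, hv, threshold=0.2):
    """Is hv detectably present in the superposition? (normalized correlation)"""
    sim = np.dot(np.sign(acc).astype(np.int32), hv.astype(np.int32)) / D
    return sim > threshold

def squelch(acc, hv):
    """'Mathematical forgetting': saturating subtraction of one fact's mass."""
    return np.clip(acc.astype(np.int16) - hv, -128, 127).astype(np.int8)

assert resonates(acc, facts[0])                    # present before squelch
acc = squelch(acc, facts[0])
assert not resonates(acc, facts[0])                # dropped below the noise floor
assert all(resonates(acc, f) for f in facts[1:])   # the other facts survive
```

The point of the sketch is the asymmetry: subtraction removes exactly one fact's statistical contribution while leaving the remaining superposed facts above the detection threshold.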
As for retrieval, yeah, similarity via Hamming distance is table stakes. But there's more, including resonator network factorization and a Datalog variant.
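For readers unfamiliar with resonator networks (Frady et al.'s technique for factoring a bound hypervector back into its components), here is a small self-contained sketch, assuming bipolar vectors, element-wise-product binding, and illustrative codebook sizes; none of this is the project's code.

```python
import numpy as np

D = 4096            # smaller than the post's 16,384, for a quick demo
K = 4               # candidate vectors per factor codebook
rng = np.random.default_rng(1)

# Three codebooks of random bipolar hypervectors.
books = [rng.choice([-1, 1], size=(K, D)) for _ in range(3)]

# Bind one vector from each codebook (element-wise multiply = XOR-style binding).
true_idx = (2, 0, 3)
s = books[0][2] * books[1][0] * books[2][3]

def clean_sign(v):
    """Elementwise sign with ties broken toward +1."""
    return np.sign(v) + (v == 0)

# Resonator network: start each estimate as the superposition of its whole
# codebook, then iteratively unbind the other factors and project back onto
# the codebook until the estimates lock in.
est = [clean_sign(b.sum(axis=0)) for b in books]
for _ in range(30):
    for i in range(3):
        others = np.ones(D, dtype=int)
        for j in range(3):
            if j != i:
                others = others * est[j]
        unbound = s * others                      # peel off the other factors
        proj = books[i].T @ (books[i] @ unbound)  # project onto codebook span
        est[i] = clean_sign(proj)

recovered = tuple(int(np.argmax(b @ e)) for b, e in zip(books, est))
assert recovered == true_idx
```

The appeal is that the network searches the K³ combination space without enumerating it, which is why factorization stays cheap even as codebooks grow.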
We map Datalog semantics directly to VSA using 16,384-dimensional hypervectors.
Instead of relational tables, our EDB consists of 'Pentads' (Subj, Pred, Obj, Context, Lineage) bound together using prime-number circular bit-shifts to encode grammatical roles.
These facts are superposed into an 8-bit probabilistic accumulator.
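A toy sketch of how role binding via prime circular shifts could work; the role names, the specific primes, and `encode_pentad` are all my illustrative choices, not the actual encoding.

```python
import numpy as np

D = 16_384
PRIMES = {"subj": 2, "pred": 3, "obj": 5, "ctx": 7, "lineage": 11}
rng = np.random.default_rng(2)

def rand_hv():
    return rng.choice([-1, 1], size=D).astype(np.int8)

def encode_pentad(fields):
    """Bind five role-tagged hypervectors into one fact vector.

    Each field is circularly shifted by its role's prime, so the same
    atom in different grammatical roles yields quasi-orthogonal codes.
    """
    fact = np.ones(D, dtype=np.int8)
    for role, hv in fields.items():
        fact = fact * np.roll(hv, PRIMES[role])
    return fact

alice, knows, bob, ctx, lin = (rand_hv() for _ in range(5))
fact = encode_pentad({"subj": alice, "pred": knows, "obj": bob,
                      "ctx": ctx, "lineage": lin})

# Unbinding recovers a role-filler: multiply out the other four roles
# (binding by multiplication is self-inverse for bipolar vectors),
# undo the shift, and the residue is exactly the original atom.
others = (np.roll(knows, 3) * np.roll(bob, 5) *
          np.roll(ctx, 7) * np.roll(lin, 11))
recovered = np.roll(fact * others, -PRIMES["subj"])
assert np.array_equal(recovered, alice)
```

Once encoded this way, each pentad is a single hypervector, which is what makes superposing it into the accumulator a one-shot add rather than a table insert.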
For the IDB and schema enforcement, we use a 'Warden' actor in Gleam that intercepts state changes in the Tuplespace, validating them against constraints before they ever cross the C-ABI boundary.
When we query the Datalog, there are no B-trees or graph traversals.
We construct a 16k-bit probe and use AVX-512 SIMD to perform Maximum Likelihood Decoding directly against the superposed noise in O(1).
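A minimal sketch of that readout, again with bipolar stand-ins: decode every vocabulary candidate against the superposition with one matrix-vector product, and accept the ones whose correlation rises above the noise. The threshold and the `mld_present` name are my assumptions.

```python
import numpy as np

D = 16_384
rng = np.random.default_rng(3)

# A vocabulary of atoms; a handful are "stored" by superposing them.
vocab = np.stack([rng.choice([-1, 1], size=D).astype(np.int8)
                  for _ in range(20)])
stored = [1, 4, 9, 12, 17]
acc = vocab[stored].astype(np.int16).sum(axis=0)

def mld_present(acc, vocab, threshold=0.15):
    """Maximum-likelihood-style readout: max correlation = min Hamming
    distance. One pass whose cost depends on D and |vocab|, not on how
    many facts are superposed -- the 'O(1)' claim in that sense."""
    sims = vocab.astype(np.int32) @ np.sign(acc).astype(np.int32) / D
    return set(np.flatnonzero(sims > threshold).tolist())

assert mld_present(acc, vocab) == set(stored)
```

Stored atoms correlate with the sign-thresholded accumulator at roughly 0.37 here, while non-members sit near 0 with deviation on the order of 1/√D, which is why a fixed threshold cleanly separates them.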
Because standard Datalog struggles with time, we extended the semantics to support LTL+ (Next, Eventually, Always, Until) natively over the vector space.
Our episodic memory isn't a flat table; it's a strictly chronological linked list of physical accumulators stitched together with Holographic Pointers.
To evaluate temporal modalities like Eventually (◊) or Until (U), we don't use expensive SQL window functions or graph traversals.
The Zig sidecar just follows the Holographic Pointers, performing an O(1) SIMD resonance check at each temporal node.
If the target state resonates out of the noise at, say, Node 5, the LTL query resolves to true.
We execute temporal logic as a recursive physical jump through a non-Euclidean probability space.
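The temporal walk reduces to something quite simple in outline. Here is a hedged sketch: a Python list stands in for the pointer-linked chain of accumulators, and the `eventually`/`until` semantics shown are the standard LTL readings, not necessarily the exact extended operator set.

```python
import numpy as np

D = 16_384
rng = np.random.default_rng(4)

def rand_hv():
    return rng.choice([-1, 1], size=D).astype(np.int8)

def resonates(node, hv, threshold=0.15):
    """The per-node resonance check: one correlation against the noise."""
    sim = float(np.sign(node).astype(np.int32) @ hv.astype(np.int32)) / D
    return sim > threshold

# An episode chain: one small superposition per time step.
door_open, alarm_armed, alarm_fired = rand_hv(), rand_hv(), rand_hv()
chain = [
    door_open.astype(np.int16) + rand_hv(),                 # t0
    door_open.astype(np.int16) + alarm_armed.astype(np.int16),  # t1
    alarm_fired.astype(np.int16) + rand_hv(),               # t2
]

def eventually(chain, target):
    """Eventually target: walk the chain; true at the first resonating node."""
    return any(resonates(node, target) for node in chain)

def until(chain, a, b):
    """a Until b: a must resonate at every step before b first resonates."""
    for node in chain:
        if resonates(node, b):
            return True
        if not resonates(node, a):
            return False
    return False

assert eventually(chain, alarm_fired)
assert until(chain, door_open, alarm_fired)
assert not eventually(chain, rand_hv())
```

The operative point is that each temporal step costs one fixed-width resonance check, so evaluating a modality is linear in chain length and independent of how many facts each node holds.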
Next week I'm working on encoding agent text into vectors directly, without an LLM or SLM to assist.
I hope that helps!
I'm interested in this, but only passingly familiar with it from several years ago. Can you link to what you believe the current state of the art is?
State of the art for HDC/VSA? Or for agentic memory?
HDC/VSA.
Well probably this recent piece by Kanerva. https://arxiv.org/abs/2503.23608
Great and neat project! I would like to ask: where do you see the value here? There are a lot of tools for memory, context, etc.
Thanks. Yes, all of these spaces are crowded.
IMO the value here will be quasi brain-like operations on data that are fast and efficient.
We overuse LLMs, which aren't very fast and are quite inefficient.
So the value here is being able to support a shift of some workloads from LLM to smart agentic memory.