One of the biggest limitations with AI agents right now is memory. Most systems rely on stuffing conversation history into a prompt window, which doesn’t scale well and gets expensive fast.
Treating memory as its own layer (semantic recall, entity relationships, temporal context) seems like a much more realistic direction if we want agents that can actually work on long-running projects.
Curious how Remembra handles conflicting memories or bad data over time. Memory quality might end up being just as important as model quality.
Great question on conflicts - this is something we spent a lot of time on.
Remembra uses temporal edges on the knowledge graph. When you store contradictory info ("user prefers Python" then later "user prefers Rust"), we don't just overwrite - we track both with timestamps and can:
1. Return the most recent by default
2. Surface contradictions explicitly when queried
3. Let you query point-in-time ("what did user prefer in January?")
For bad data, we have a few approaches:
- Confidence scoring on memories
- GDPR-compliant forget() to purge specific memories
- Audit logging so you can trace what was stored and when
You're right that memory quality is critical. Our benchmark focus (100% on LoCoMo) is specifically about retrieval accuracy - getting the right memory when you need it, not just any memory that keyword-matches.
Would love to hear how testing goes if you try it.
Most memory layers are just fancy key-value stores with a vector search slapped on. The graph-aware recall is what makes this actually interesting.
"Self-host in minutes" is doing a lot of heavy lifting in this space right now. Data residency is a dealbreaker for more teams than people admit.
Bookmarked. Will be testing this against our current setup this week.
Thanks! You've hit on exactly why we built this.
The graph-aware approach lets us do things vector search alone can't - like "find all memories about Project X that involve Person Y" without needing exact keyword matches. Entity relationships are first-class citizens.
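The "Project X involving Person Y" query comes down to intersecting entity edges rather than matching keywords. A minimal generic sketch (hypothetical names, not Remembra's implementation):

```python
from collections import defaultdict

class GraphMemory:
    def __init__(self):
        self.entity_index = defaultdict(set)  # entity -> ids of memories it appears in
        self.memories = {}

    def store(self, mem_id, text, entities):
        self.memories[mem_id] = text
        for e in entities:
            self.entity_index[e].add(mem_id)

    def involving(self, *entities):
        # Intersect the entity edges: no keyword overlap required
        ids = set.intersection(*(self.entity_index[e] for e in entities))
        return [self.memories[i] for i in sorted(ids)]

g = GraphMemory()
g.store(1, "Kickoff call for Project X with Alice", {"Project X", "Alice"})
g.store(2, "Alice's vacation plans", {"Alice"})
g.store(3, "Project X budget review", {"Project X"})
```

`g.involving("Project X", "Alice")` returns only the kickoff-call memory, even though its text never needs to contain both names verbatim.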
On data residency: 100% agree it's underestimated. We've seen teams pass on otherwise-great tools because they can't guarantee where memories live. Single Docker container, SQLite by default, your infrastructure = your data.
Would love to hear how it compares to your current setup. Drop any feedback in our Discord or GitHub issues - we're actively iterating.
I’ve actually been looking for something like this. I’m building an AI-driven trading bot, and one of the biggest problems is persistent memory between runs: keeping context about past decisions, signals, and strategy adjustments. Curious if something like this could handle that type of use case.
Yes! Trading bots are actually one of the use cases we had in mind.
Remembra can store:
- Trade decisions with reasoning ("went long NQ at 21450 because of MACD divergence")
- Pattern recognition across sessions ("round number rejections working 72% in low VIX")
- Regime context (track when strategies work vs don't)
- Entities (tickers, levels, signals) extracted automatically
The key is semantic recall - your bot can query "what worked last time we saw this setup?" and get relevant past trades, not just keyword matches.
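To make the "what worked last time?" query concrete, here is a rough sketch of what the bot side could look like. Everything here (`Trade`, `TradeMemory`, the setup/regime tags) is an assumed shape for illustration, not a real Remembra schema:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    summary: str     # decision with reasoning
    setup: str       # e.g. "round_number_rejection"
    regime: str      # e.g. "low_vix"
    outcome: float   # P&L in points

class TradeMemory:
    def __init__(self):
        self.trades = []

    def record(self, trade):
        self.trades.append(trade)

    def similar_setups(self, setup, regime=None):
        # Recall past trades for this setup, optionally scoped to a regime
        return [t for t in self.trades
                if t.setup == setup and (regime is None or t.regime == regime)]

    def win_rate(self, setup, regime=None):
        hits = self.similar_setups(setup, regime)
        return sum(t.outcome > 0 for t in hits) / len(hits) if hits else None

tm = TradeMemory()
tm.record(Trade("Long NQ at 21450 on MACD divergence", "macd_divergence", "low_vix", 35.0))
tm.record(Trade("Faded the round number at 21500", "round_number_rejection", "low_vix", 12.5))
tm.record(Trade("Round number rejection failed", "round_number_rejection", "high_vix", -8.0))
```

A setup query like `tm.similar_setups("round_number_rejection", "low_vix")` then surfaces only the relevant past trades, and the regime split shows why tracking when a strategy works matters.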
Self-hosted so your trading data stays private. Happy to help you get set up if you want to try it: docs.remembra.dev