Mnemo, SOTA on LoCoMo multi-hop, is an agentic memory system designed for agents, with agentic experience (AX) in mind. Its differentiator is the ability to share memories with other agents, addressing the institutional knowledge transfer problem that multi-agent systems are currently running into and simultaneously allowing skills to be transferred among agents within an organization.
The Mnemo system is based on typed atoms -- episodic, procedural, and semantic -- reflecting distinctions discussed in cognitive science since Endel Tulving's work in the 1970s. Each memory is broken into a set of these typed atoms, and each atom is assigned a confidence score modeled as a Beta distribution, which is then updated in a Bayesian process as further memories either confirm or contradict the stored atoms. This implements what E. T. Jaynes described in his famous textbook "Probability Theory: The Logic of Science", where he outlined how one would hypothetically teach a robot to reason with the scientific method. The atoms are stored in a knowledge graph with edges that span atoms from multiple memories -- even shared atoms from other agents.
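The Beta-distribution update described above can be sketched in a few lines. This is an illustrative toy, not Mnemo's actual API: the class and method names are hypothetical, and the uniform Beta(1, 1) prior is an assumption.

```python
from dataclasses import dataclass

@dataclass
class AtomConfidence:
    """Confidence for one memory atom, modeled as Beta(alpha, beta).

    Hypothetical sketch: starts from a uniform Beta(1, 1) prior; each
    confirming or contradicting memory adds one pseudo-count.
    """
    alpha: float = 1.0  # pseudo-count of confirmations
    beta: float = 1.0   # pseudo-count of contradictions

    def confirm(self) -> None:
        self.alpha += 1.0

    def contradict(self) -> None:
        self.beta += 1.0

    @property
    def mean(self) -> float:
        # Posterior mean: expected probability that the atom is true.
        return self.alpha / (self.alpha + self.beta)

c = AtomConfidence()
for _ in range(3):
    c.confirm()
c.contradict()
print(round(c.mean, 2))  # (1+3) / (1+3 + 1+1) = 4/6 -> 0.67
```

The appeal of the conjugate Beta/Bernoulli pair is that the update is two counter increments, yet it yields a full posterior rather than a point estimate.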
Memory systems are not document stores: memories are imprecise and sometimes contradictory. Mnemo embraces this and surfaces the contradictions for the cognitive processor to deal with -- in this case, LLM-based AI agents.
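To make "surfacing contradictions" concrete, here is a minimal sketch of what retrieval might return. The atom schema and function name are hypothetical, not Mnemo's interface: rather than silently keeping the highest-confidence claim, both claims come back flagged, and the agent decides.

```python
def retrieve(atoms: list[dict], subject: str) -> dict:
    """Return every stored atom about `subject`, flagging contradictions.

    Hypothetical sketch: atoms are dicts with `subject`, `claim`, and
    `confidence` keys; a contradiction is simply more than one distinct claim.
    """
    matches = [a for a in atoms if a["subject"] == subject]
    contradiction = len({a["claim"] for a in matches}) > 1
    return {"atoms": matches, "contradiction": contradiction}

atoms = [
    {"subject": "deploy_cmd", "claim": "make deploy", "confidence": 0.9},
    {"subject": "deploy_cmd", "claim": "./deploy.sh", "confidence": 0.6},
]
result = retrieve(atoms, "deploy_cmd")
print(result["contradiction"])  # True -- both claims surfaced for the agent
```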
Mnemo also has a dreaming process in which atom confidence decays over time (with a different half-life for each atom type), similar atoms are consolidated, and graph edges that may have been overlooked at the time of "remembering" are connected.
This structure has resulted in Mnemo achieving SOTA in the multi-hop category of the LoCoMo benchmark, arguably the most difficult and most important category for a memory system.
Mnemo is currently in the design phase; if anyone wishes to beta test, please reach out. Full paper with methodology breakdown: https://github.com/inforge-ai/mnemo-server/blob/main/paper/m... https://solitonmaths.substack.com/p/the-persistence-of-ai-me...