I just open-sourced a project called Local Memory MCP: https://github.com/ptobey/local-memory-mcp
It’s a local-first personal memory layer for AI assistants:
- text-first chunks plus lightweight metadata
- local ChromaDB persistence
- MCP tools for store, search, update, delete, and version-chain operations
- stdio and SSE transports
- Docker-first setup
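To make the chunk/version model concrete, here is a rough dict-backed sketch of what the store/update/search tools might look like behind the MCP surface. All names here are hypothetical stand-ins, not the project's actual API, and naive substring search stands in for ChromaDB vector similarity:

```python
import uuid


class MemoryStore:
    """Minimal in-memory analogue of store/search/update with a version chain."""

    def __init__(self):
        # id -> {"text": str, "meta": dict, "prev": previous version id or None}
        self.chunks = {}

    def store(self, text, meta=None):
        chunk_id = str(uuid.uuid4())
        self.chunks[chunk_id] = {"text": text, "meta": meta or {}, "prev": None}
        return chunk_id

    def update(self, chunk_id, text):
        # An update creates a new chunk linked back to the old one,
        # preserving history instead of overwriting in place.
        new_id = str(uuid.uuid4())
        self.chunks[new_id] = {
            "text": text,
            "meta": self.chunks[chunk_id]["meta"],
            "prev": chunk_id,
        }
        return new_id

    def history(self, chunk_id):
        # Walk the version chain from newest to oldest.
        chain = []
        while chunk_id is not None:
            chain.append(chunk_id)
            chunk_id = self.chunks[chunk_id]["prev"]
        return chain

    def search(self, query):
        # Naive substring match; the real project would rank by embedding similarity.
        return [cid for cid, c in self.chunks.items() if query in c["text"]]
```

The point of the sketch is the `prev` pointer: updates append to a chain rather than mutate, which is what question 1 below is probing.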
I’m looking for technical feedback from people building with MCP/RAG:
1. Does this chunk/version model seem practical for long-lived personal memory?
2. Any obvious risks in the retrieval/ranking approach for real usage?
3. For local-first projects, which integration path has worked best for you (stdio or SSE)?
4. What would make this more useful in your own workflow?
I’m happy to hear blunt criticism and will prioritize fixes accordingly.