Hi HN, author here.
I built Quorum because I wanted a way to break out of the single-model echo chamber. I often found myself manually pasting outputs between Claude and GPT to get a second opinion or to find holes in a logic chain.
Quorum is a TUI (React Ink frontend, Python asyncio backend) that orchestrates these interactions automatically.
Instead of just chatting, you select a protocol (like "Oxford Debate" or "Socratic Method") and assign models to roles. For example, you can have a local Llama (via Ollama) propose a code architecture and force GPT to act as a rigorous critic ("Advocate" mode).
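To give a feel for the role-assignment idea, here's a minimal sketch in Python. The names (`RoleAssignment`, `turn_order`, the provider/model strings) are illustrative only, not Quorum's actual API:

```python
from dataclasses import dataclass

@dataclass
class RoleAssignment:
    role: str       # e.g. "Proposer" or "Advocate"
    provider: str   # e.g. "ollama" for local, "openai" for cloud
    model: str

# Hypothetical debate setup: a local model proposes, a cloud model critiques.
debate = [
    RoleAssignment(role="Proposer", provider="ollama", model="llama3"),
    RoleAssignment(role="Advocate", provider="openai", model="gpt-4o"),
]

def turn_order(assignments: list[RoleAssignment]) -> list[RoleAssignment]:
    """The Proposer speaks first; critics respond in listed order."""
    return sorted(assignments, key=lambda a: a.role != "Proposer")
```

The protocol then just walks the turn order each round, feeding the previous turn's output into the next role's prompt.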
The key focus for this release was the hybrid engine: it runs local models sequentially to cap VRAM usage, but parallelizes cloud requests to keep latency down.
Happy to answer questions about the TUI implementation or the consensus protocols!