Kind of amazes me how many people bitch about agent performance but don't hook their agents up to OTel, crack open Phoenix, and get to work, and instead randomly tweak prompts in response to team vibes.
Good point. Also (tangent), I followed your profile link to https://sibylline.dev and am thoroughly impressed. Stoked to have found your treasure trove of repos and insights.
Don't play with them unless you're good at debugging alpha code (Claude/Codex can handle it fine); I haven't ironed out env-specific stuff or clarified the installation/usage, and I'm still doing UI polish/optimization passes (yay, async SIMD Rust). I'll do showy releases once I've got the tools one-click-install ready. In the meantime, please feel free to drop an issue on any of my projects if you have feature requests or questions.
Could you elaborate? How does knowing numerical usage metrics help?
With Phoenix + ClickHouse being fed from OTel, you can run queries over your traces for deep analysis. If I want to see which tool calls are failing and why (or just get tool statistics), or find common patterns in flagged/failure traces ("simpler solution") and their causes, it's one query and some wiring.
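To make "one query" concrete, here's a rough sketch of the failing-tool-calls case. It assumes spans land in the OTel Collector ClickHouse exporter's default otel_traces table and that your instrumentation tags tool spans with an attribute like gen_ai.tool.name; table, column, and attribute names will differ per setup, so treat this as a shape, not a recipe.

    # Sketch: count failing tool calls grouped by tool and error message.
    # Assumes the OTel Collector ClickHouse exporter's default otel_traces
    # schema (SpanName/StatusCode/StatusMessage/SpanAttributes) and a
    # 'gen_ai.tool.name' span attribute -- adjust both for your setup.
    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost", port=8123)

    FAILING_TOOL_CALLS = """
    SELECT
        SpanAttributes['gen_ai.tool.name'] AS tool,
        StatusMessage                      AS error,
        count()                            AS failures
    FROM otel_traces
    WHERE StatusCode = 'Error'  -- status strings vary by exporter version
      AND SpanAttributes['gen_ai.tool.name'] != ''
      AND Timestamp > now() - INTERVAL 7 DAY
    GROUP BY tool, error
    ORDER BY failures DESC
    LIMIT 20
    """

    result = client.query(FAILING_TOOL_CALLS)
    for tool, error, failures in result.result_rows:
        print(f"{tool}: {failures} failures ({error})")

Same idea for the "common patterns in flagged traces" case: swap the WHERE clause for whatever attribute you use to flag a trace, and group by whatever dimension you're curious about.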
Collecting detailed per-request traces and calculating per-user metrics any finer-grained than total cost feels about as intrusive as one of those periodic-screenshot programs forced by really shitty remote jobs or freelancing contracts. It's pretty gross.
I don't think the primary goal here is "surveillance" but rather understanding where in the team tools like Claude Code are getting adopted, which models are being used, and whether there are best practices around token usage that could make things more efficient.
I’d like to see this leveraged for agent platforms & orchestration rather than for surveillance of human software engineers. Humans don’t perform well in panopticons, but robots do (in my humble opinion).
I think this tackles a really important area - nice job. Looking forward to following.
Great to hear. Yes, it can help you understand how developers are using Claude Code and also optimise token usage, etc.
aka let's spy on our devs more than we already are and give their pointy-haired bosses even more leverage to harass them with AI-usage KPI BS
Very nice!