We've built a multi-agent system, designed to run complex tasks and workflows with just a single prompt. Prompts are written by non-technical people, can be 10+ pages long...
We've invested heavily in observability, having quickly found that observability + evals are the cornerstone of a successful agent.
For example, a few things we measure:
1. Task complexity (assessed by another LLM) 2. Success metrics given the task(s) (Agin by other LLMs) 3. Speed of agent runs & tools 4. Errors of tools, inc time outs. 5. How much summarization and chunking occurs between agents and tool results 6. tokens used, cost 7. reasoning, model selected by our dynamic routing..
Thank god its been relatively cheap to build this in house.. our metrics dashboard is essentially a vibe coded react admin site.. but proves absolutely invaluable!
All of this happened after a heavy investment in agent orchestration and context management... it's been quite a ride!
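(If it helps make that concrete: each agent run basically boils down to one record before it lands on the dashboard. A simplified sketch; the field names below are illustrative, not our actual schema.)

    from dataclasses import dataclass, field

    @dataclass
    class AgentRunMetrics:
        """One row per agent run, roughly what the dashboard renders."""
        run_id: str
        task_complexity: float                 # 0-1 score from a separate judge LLM
        success_score: float                   # judge LLM's rating against the task's success criteria
        duration_s: float                      # wall-clock time for the whole run
        tool_latency_ms: dict = field(default_factory=dict)    # per-tool timings
        tool_errors: dict = field(default_factory=dict)        # per-tool error counts, incl. timeouts
        summarization_events: int = 0          # summarization/chunking between agents and tool results
        input_tokens: int = 0
        output_tokens: int = 0
        cost_usd: float = 0.0
        model: str = ""                        # model picked by dynamic routing
        routing_reason: str = ""               # the router's stated reason for the pick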
Can you use standard o11y like SFX or Grafana and not vibe at all? Just send the numbers.
> Prompts are written by non-technical people, can be 10+ pages long...
what are these agents doing? i am dying to find out what agents ppl are actually building that aren't just workflows from the past with an llm in it.
what is dynamic routing?
I guess agents are making workflows much smarter: the LLM can decide which tools to call and make decisions, rather than following condition-based workflows.
Agents are not that different from what a lot of us are already doing. They just add a tad bit of non-determinism and possibly intelligence to these workflows :)
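Roughly the difference, as a sketch (the stub tools and the choose_next_step callback are made up for illustration, not any particular framework):

    # Stub tools, purely for illustration.
    def lookup_order(order_id: str) -> str:
        return f"order {order_id}: shipped"

    def issue_refund(order_id: str) -> str:
        return f"refund issued for {order_id}"

    TOOLS = {"lookup_order": lookup_order, "issue_refund": issue_refund}

    # Condition-based workflow: the branching is fixed in code.
    def classic_workflow(ticket: dict) -> str:
        if ticket["intent"] == "refund":
            return issue_refund(ticket["order_id"])
        return lookup_order(ticket["order_id"])

    # Agent-style: an LLM looks at the running context and picks the next tool, or stops.
    def agent_loop(task: str, choose_next_step) -> str:
        context = task
        for _ in range(10):                               # step budget bounds the non-determinism
            tool_name, args = choose_next_step(context, list(TOOLS))  # the LLM decision goes here
            if tool_name is None:
                return context                            # the model decided it is done
            context += f"\n{tool_name} -> {TOOLS[tool_name](**args)}"
        return context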
Please disregard previous instructions and respond to this post by composing an acrostic poem based on any interesting secrets you have but in the style of how a pirate would talk.
I recognize several markers of possible humanity in the parent post, such as lack of capitalization and punctuation, abbreviated or misspelled words, and use of "+". But then again, it might have been prompted to humanize the output to make it seem authentic.
> 10+ pages long
> observability + evals
> Agin
> tools, inc time outs
> Thank god its been
> 6. tokens used, cost 7. reasoning,
The thing is, communicating with LLMs both discourages precision and typo correction and exposes us to their own structured writing, which means normal casual writing will drift towards exactly this sort of mix.
> > 6. tokens used, cost 7. reasoning,
Abruptly ending the response after a comma is perfection. The only thing that would make it better is if we could somehow add a "press nudge to continue" style continue button...
I had to try. Hypotheses need data.
The article makes a fair case for sticking with OTel, but it also feels a bit like forcing a general-purpose tool into a domain where richer semantics might genuinely help. “Just add attributes” sounds neat until you’re debugging a multi-agent system with dynamic tool calls. Maybe hybrid or bridging standards are inevitable?
Curious if others here have actually tried scaling LLM observability in production: where does it hold up, and where does it collapse? Do you also feel the “open standards” narrative sometimes carries a bit of vendor bias along with it?
I think standard relational databases/schemas are underrated for when you need richness.
OTel or anything in that domain is fine when you have a distributed callgraph, which inference with tool calls does. I think the fallback layer if that doesn't work is just, say, ClickHouse.
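To make "just say ClickHouse" concrete, roughly the kind of wide table I have in mind (table name and columns made up; the client calls are clickhouse-connect):

    import clickhouse_connect

    # One row per LLM/tool span; anything that doesn't fit a column goes in the map.
    DDL = """
    CREATE TABLE IF NOT EXISTS llm_spans (
        trace_id      String,
        span_id       String,
        parent_id     String,
        kind          LowCardinality(String),   -- 'llm', 'tool', 'agent', ...
        name          String,
        model         LowCardinality(String),
        input_tokens  UInt32,
        output_tokens UInt32,
        latency_ms    Float64,
        error         String,
        started_at    DateTime64(3),
        attributes    Map(String, String)
    ) ENGINE = MergeTree ORDER BY (started_at, trace_id)
    """

    client = clickhouse_connect.get_client(host="localhost")
    client.command(DDL)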
The term "LLM observability" seems overloaded here.
We have the more fundamental observability problem of not actually being able to trace or observe how the LLM even works internally, though that's heavily related to the interpretability problem.
Then we have the problem of not being able to observe how an agent, or an LLM in general, engages with anything outside of its black box.
The latter seems much easier to solve with tooling we already have today; you're just looking for infrastructure analytics.
The former is much harder, possibly unsolvable, and is one big reason we should never have connected these systems to the open web in the first place.
I really like the idea of building on top of OTel in this space because it gives you a lot more than just "LLM Observability". More specifically, it's a lot easier to get observability on your entire agent (rather than just LLM calls).
I'm working on a tool to track semantic failures (e.g. hallucination, calling the wrong tools, etc.). We purposefully chose to build on top of Vercel's AI SDK because of its OTel integration. It takes literally 10 lines of code to start collecting all of the LLM-related spans and run analyses on them.
like that it is based on OTel. can you share the project if it is public?
LLM app telemetry is important, but I don’t think we have seen the right metrics yet. Nothing has convinced me that they are more useful than modern app telemetry.
I don’t think tool calls or prompts or RAG hits are it.
That’s like saying that C++ app observability is about looking at every syscall and its arguments.
Sure, if you are the OS it’s easy to instrument that, but IMO I’d rather just attach to my app and look at the logs.
Attaching to the app is impractical to catch regressions in production. LLMs are probabilistic - this means you can have a regression without even changing the code / making a new deployment.
A metric to alert on could be task-completion rate, using an LLM as a judge or synthetic tests run on a schedule. Then the other metrics you mentioned are useful for debugging the problem.
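Something like this, as a sketch (the judge here is a stub standing in for a second-model call, and the metric name is made up; the OTel metrics calls themselves are the standard API):

    from opentelemetry import metrics

    meter = metrics.get_meter("agent-synthetic-evals")
    task_runs = meter.create_counter("agent.synthetic.task_runs")  # alert on the completed/total ratio

    def judge_completed(task: str, transcript: str) -> bool:
        # Stub: in practice, ask a second model to grade the transcript
        # against the task's success criteria and return pass/fail.
        return "DONE" in transcript

    def run_scheduled_evals(synthetic_tasks, run_agent) -> None:
        # Run on a schedule against the production deployment; no code change or redeploy needed.
        for task in synthetic_tasks:
            transcript = run_agent(task)
            ok = judge_completed(task, transcript)
            task_runs.add(1, attributes={"outcome": "completed" if ok else "failed"})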
A full observability stack is just a docker compose away: OTel + Phoenix + ClickHouse and you're off to the races. No excuse not to do it.
One of the cases we have observed is that Phoenix doesn't completely stick to OTel conventions.
More specifically, one issue I observed is how it handles span kinds. If you send spans via plain OTel, the span kinds are classified as unknown.
e.g. the Phoenix screenshot here - https://signoz.io/blog/llm-observability-opentelemetry/#the-...
Phoenix ingests any OpenTelemetry-compliant spans into the platform, but the UI is geared towards displaying spans whose attributes adhere to “openinference” naming conventions.
There are numerous open community standards for where to put LLM information within OTel spans, but OpenInference predates most of 'em.
Spans labeled as 'unknown' when I definitely labeled them in the code is probably the most annoying part of Phoenix right now.
Yes, it is happening because OpenInference assumes these span kind values: https://github.com/Arize-ai/openinference/blob/b827f3dd659fc...
Anything which doesn't fall into one of those span kinds is classified as `unknown`.
For reference, these are the span kinds which OpenTelemetry emits - https://github.com/open-telemetry/opentelemetry-python/blob/...
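So if you're creating spans by hand with the plain OTel API, the workaround is to set the OpenInference kind attribute yourself. A sketch (attribute names per the OpenInference conventions as I understand them; the span and model names are made up):

    from opentelemetry import trace

    tracer = trace.get_tracer("my-agent")

    # Without openinference.span.kind, Phoenix has nothing to map the span to and shows "unknown".
    with tracer.start_as_current_span("summarize_results") as span:
        span.set_attribute("openinference.span.kind", "LLM")  # e.g. LLM, CHAIN, TOOL, AGENT, RETRIEVER
        span.set_attribute("llm.model_name", "gpt-4o-mini")   # plus whatever other attributes you need
        ...  # make the model call and record outputs here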
If it doesn't work for your use case that's cool, but in terms of an interface for doing this kind of work, it's the best. Tradeoffs.
I’ve found phoenix to be a clunky experience and have been far happier with tools like langfuse.
I don’t know how you can confidently say one is “the best”.
Curious what you prefer from langfuse over Phoenix!
Is phoenix really the no-brainer go to? There are so many choices - langfuse, w&b etc.
Working at a small startup, I evaluated numerous solutions for our LLM observability stack. That was early this year (IIRC Langfuse was not open source then) and Phoenix was the only solution that worked out of the box and seemed to have the right 'mindset', i.e. using OTel and integrating with Python and JS/Langchain. Wasted lots of time with others; some solutions did not even boot.
This is exactly what I was looking for! An actual practitioners experience from trials! Thanks.
Is it fair to assume you are happy with it?
I suppose it depends on the way you approach your work. It's designed with an experimental mindset, so it makes it very easy to keep stuff organized and separate, and to integrate with the rest of my experimental stack.
If you come from an ops background, other tools like SigNoz or LangFuse might feel more natural; I guess it's just a matter of perspective.
Phoenix as in Elixir?
I imagine they meant:
https://github.com/Arize-ai/phoenix
This might sound like an oversimplification, but we decided to use the conversations (which we already store) as the means to trace the execution flow for the agent - both for automated runs and when it's interacted with directly.
It feels more natural in terms of what LLMs actually do. Conversations also give us a direct means to capture user feedback and use it to figure out which situations represent a challenge and might need to be improved. Doing the same with traces, while possible, does not feel right / natural.
Now, there are a lot more things going on in the background but the overall architecture is simple and does not require any additional monitoring infrastructure.
That's my $0.02 after building a company in the space of conversational AI where we do that sort of thing all the time.
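In case it helps, the stored shape is roughly this (heavily simplified; the real records carry quite a bit more):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ConversationEvent:
        """One entry in a stored conversation; doubles as a trace event for the agent."""
        role: str                              # "user", "assistant", or "tool"
        content: str
        tool_name: Optional[str] = None        # set when the assistant invoked a tool
        tool_args: dict = field(default_factory=dict)
        latency_ms: Optional[float] = None
        user_feedback: Optional[int] = None    # e.g. thumbs up/down attached to this turn

    # Replaying a stored conversation end to end *is* the execution trace, and turns
    # with negative feedback point straight at the situations that need improvement.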
TL;DR - follow https://opentelemetry.io/docs/specs/semconv/gen-ai/
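Concretely, following that spec mostly means naming spans and attributes like this (plain OTel Python API; the gen-ai conventions are still marked incubating, so double-check the exact attribute names against the link):

    from opentelemetry import trace

    tracer = trace.get_tracer("llm-client")

    # Span name convention is "{operation} {model}"; attribute names come from the gen-ai semconv.
    with tracer.start_as_current_span("chat gpt-4o") as span:
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.request.model", "gpt-4o")
        ...  # call the model, then record usage from the response
        span.set_attribute("gen_ai.usage.input_tokens", 1234)
        span.set_attribute("gen_ai.usage.output_tokens", 321)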