Specifications for asynchronous LLM computations with Raku's "LLM::Graph" detail how to manage complex, multi-step LLM workflows by representing them as graphs. By defining the workflow as a graph, developers can execute LLM function calls concurrently, enabling higher throughput and lower latency than synchronous, step-by-step processes.
"LLM::Graph" uses a graph structure to manage dependencies between tasks, where each node represents a computation and edges dictate the flow. Asynchronous behavior is a default feature, with specific options available for control.
That’s very interesting as far as it goes, but wouldn’t Mathematica or Python be better choices than Raku for this kind of thing?
Ah, yes, Raku's "LLM::Graph" is heavily inspired by the design of the function LLMGraph of Wolfram Language (aka Mathematica).
WL's LLMGraph is more developed and productized, but Raku's "LLM::Graph" is catching up.
I would like to say that "LLM::Graph" was relatively easy to program because of Raku's introspection, wrappers, asynchronous features, and pre-existing LLM packages. As a consequence, the code of "LLM::Graph" is short.
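To give a taste of the introspection part: in Raku, every routine carries a Signature object that can be queried at run time, so parameter names can (in principle) serve to infer a node's dependencies without extra wiring. An illustrative snippet, not code from "LLM::Graph" itself:

    sub judge($poet1, $poet2) { 'pick the better poem' }

    # A routine's Signature exposes its Parameter objects, so the
    # dependency names can be read off at run time.
    say &judge.signature.params.map(*.name);   # ($poet1 $poet2)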
Wolfram Language does not have that level of introspection, but otherwise it is likely a better choice, mostly for its far greater scope of functionality. (Mathematics, graphics, computable data, etc.)
In principle, a corresponding Python "LLMGraph" package can be developed for comparison purposes. Then the "better choice" question can be answered in a more informed manner. (The Raku packages "LLM::Functions" and "LLM::Prompts" already have corresponding Python implementations.)
In that case, when would Raku ever be a better choice than something else?
I personally don’t see what advantages Python as a language (not an ecosystem) would have here.
Python is more widely known and a very popular tool for “LLM engineering”, so I’m curious what would be the reason to choose Raku in this case and wondering how the feature benefits of Raku outweigh the general incentive to use more popular tools.
Mostly because Python is not as good a "discovery" and prototyping language. It is like that by design -- Guido van Rossum decided that TMTOWTDI ("there's more than one way to do it") is counter-productive.
Another point, which I could have mentioned in my previous response -- Raku has a more elegant and easier-to-use asynchronous computation framework.
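A small illustration of that framework; the sleep stands in for an LLM request's latency:

    # Each `start` block returns a Promise scheduled on the thread pool,
    # so the three tasks run concurrently.
    my @promises = <summary keywords title>.map: -> $task {
        start {
            sleep 1;            # placeholder for an LLM API call
            "$task: done"
        }
    };

    # `await` blocks until all Promises are kept -- total time is ~1s, not ~3s.
    .say for await @promises;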
IMO, Python's introspection matches Raku's.
Some argue that Python's LLM packages are more numerous and better than Raku's. I agree on the "more" part. I am not sure about the "better" part:
- Generally speaking, different people prefer decomposing computations in different ways.
- When, a few years ago, I re-implemented Raku's LLM packages in Python, Python did not have equally convenient packages.
I don’t personally find “why didn’t you do it in X like everyone else?” to be very motivational for explaining my coding choices.
But antononcube has thicker skin than me so..