It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.
This is pretty cool. I like using snippets to run little scripts I have in the terminal (I use Alfred a lot on macOS). And right now I just manually do LLM requests in the scripts if needed, but I'd actually rather have a small library of prompts and then be able to pipe inputs and outputs between different scripts. This seems pretty perfect for that.
I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.
Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".
Can the base URL be overridden so I can point it at e.g. Ollama or any other OpenAI-compatible endpoint? I’d love to use this with local LLMs, for the speed and privacy boost.
https://github.com/chr15m/runprompt/blob/main/runprompt#L9
Seems like it would be: just swap the OpenAI URL here, or add a new one.
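To illustrate why the swap is a one-line change: any OpenAI-compatible server (Ollama exposes one at `/v1`) accepts the same chat-completions request shape, so only the base URL string differs. A minimal stdlib sketch, assuming a local Ollama with a model named `llama3` (the helper name and model are hypothetical, not part of runprompt):

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request against any compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Hosted OpenAI:
req = chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o-mini", "hi")
# Local Ollama (OpenAI-compatible server; the key is ignored locally):
req = chat_request("http://localhost:11434/v1", "ollama", "llama3", "hi")
```

The payload and headers are identical in both calls; swapping providers is purely a matter of which `base_url` the script was given.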
Can it be made to be directly executable with a shebang line?
it already has one - https://github.com/chr15m/runprompt/blob/main/runprompt#L1
If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue as it retains the file metadata.
I'm assuming the intent was to ask if the *.prompt files themselves could have a shebang line.
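A sketch of what that could look like, assuming runprompt accepts a prompt file path as its first argument and would tolerate a leading shebang (neither is confirmed by the source; the file name and frontmatter fields here are hypothetical):

```
#!/usr/bin/env runprompt
---
model: gpt-4o-mini
---
Summarize the following text.
```

Then a one-time `chmod +x summarize.prompt` would let you run `./summarize.prompt` directly, the same way an executable shell or Python script works.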
Would be a lot nicer, as then you can just +x the prompt file itself.

Why this over the md files I already make, which can be read by any agent CLI (Claude, Gemini, Codex, etc.)?
That's pretty good, now let's see simonw's one...