Show HN: Runprompt – run .prompt files from the command line
(github.com)

I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.
When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit and I wanted something simpler - just run a .prompt file directly on the command line.
Here's what it looks like:
    ---
    model: anthropic/claude-sonnet-4-20250514
    output:
      format: json
      schema:
        sentiment: string, positive/negative/neutral
        confidence: number, 0-1 score
    ---
    Analyze the sentiment of: {{STDIN}}
Running it:
    cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'
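With the schema above, this prints a single JSON object to stdout, something like (illustrative values):

    {"sentiment": "positive", "confidence": 0.92}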
The things I think are interesting:
* Structured output schemas: Define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.
* Prompt chaining: Pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines (sketched below the list).
* Zero dependencies: It's a single Python file that uses only the stdlib. Just curl it down and run it (see the snippet after this list).
* Provider agnostic: Works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).
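Getting started is just two commands (the raw URL here simply follows GitHub's standard raw-file pattern for the repo):

    curl -O https://raw.githubusercontent.com/chr15m/runprompt/main/runprompt
    chmod +x runprompt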
You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.
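For the chaining case, a pipeline might look like this (extract.prompt and report.prompt are hypothetical files; the JSON keys emitted by the first become {{variable}} substitutions in the second):

    cat app.log | ./runprompt extract.prompt | ./runprompt report.prompt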
Would love your feedback, and PRs are most welcome!
This is pretty cool. I like using snippets to run little scripts I have in the terminal (I use Alfred a lot on macOS). And right now I just manually do LLM requests in the scripts if needed, but I'd actually rather have a small library of prompts and then be able to pipe inputs and outputs between different scripts. This seems pretty perfect for that.
I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.
Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".
Can the base URL be overridden so I can point it at eg Ollama or any other OpenAI compatible endpoint? I’d love to use this with local LLMs, for the speed and privacy boost.
https://github.com/chr15m/runprompt/blob/main/runprompt#L9
Seems like it would be; just swap the OpenAI URL here, or add a new one.
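Something like this would make it configurable (a hypothetical sketch; OPENAI_BASE_URL is my invention, not an existing option in the script):

    import os

    # Hypothetical: let an env var override the OpenAI-compatible endpoint,
    # e.g. http://localhost:11434/v1/chat/completions for a local Ollama.
    OPENAI_URL = os.environ.get(
        "OPENAI_BASE_URL",
        "https://api.openai.com/v1/chat/completions",
    )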
Good idea. Will figure out a way to do this.
Perhaps instead of writing an LLM abstraction layer, you could use a lightweight one, such as @simonw's llm.
Everything seems to be about agents. Glad to see a post about enabling simple workflows!
It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.
That’s a great idea. Store inputs/outputs in $XDG_CACHE_HOME/runprompt.sqlite
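A minimal stdlib-only sketch of that idea (every name here is hypothetical, nothing from runprompt itself):

    import hashlib, os, sqlite3

    # Hypothetical cache: key each response by a hash of the rendered prompt,
    # so re-running a pipeline reuses earlier results until the file is cleared.
    def cached_call(prompt, call_llm):
        base = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
        db = sqlite3.connect(os.path.join(base, "runprompt.sqlite"))
        db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)")
        key = hashlib.sha256(prompt.encode()).hexdigest()
        row = db.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
        if row:
            return row[0]
        value = call_llm(prompt)
        db.execute("INSERT INTO cache VALUES (?, ?)", (key, value))
        db.commit()
        return value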
Do you mean you want responses cached to e.g. a file based on the inputs?
Ooof, I guess vibecoding is only as good as the vibecoder.
Fun! I love the idea of throwing LLM calls in a bash pipe
Seeing lots of good ideas in this thread. I am taking the liberty of adding them as GH issues
Interesting! Seems there is a very similar format by Microsoft called `.prompty`. Maybe I'll work on a PR to support either `.prompt` or `.prompty` files.
https://microsoft.github.io/promptflow/how-to-guides/develop...
Oh interesting. Will investigate, thanks!
Can it be made to be directly executable with a shebang line?
It already has one: https://github.com/chr15m/runprompt/blob/main/runprompt#L1
If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue as it retains the file metadata.
I'm assuming the intent was to ask if the *.prompt files could have a shebang line.
Would be a lot nicer, as then you can just +x the prompt file itself.

That's on my TODO list for tomorrow, thanks!
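For reference, a shebang'd prompt file could look something like this once supported (assuming runprompt is on the PATH; hypothetical until implemented):

    #!/usr/bin/env runprompt
    ---
    model: anthropic/claude-sonnet-4-20250514
    ---
    Summarize the following text: {{STDIN}}

Then chmod +x summarize.prompt and cat notes.txt | ./summarize.prompt would just work.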
Why this over the md files I already make, which can be read by any agent CLI (Claude, Gemini, Codex, etc.)?
CLAUDE.md is an input to Claude Code, which requires a monthly plan subscription north of 15€/month. The same applies to GEMINI.md, unless you are OK with them using your prompts to train Gemini. This Python script works with a pay-per-use API key.
Do your markdown files have frontmatter configuration?
That's pretty good, now let's see simonw's one...