llm
$ pipx install llm
Prompting, local plugins, and structured LLM outputs from the terminal.
- llm is a good fit for local AI work, especially prompting, local plugins, and structured LLM outputs from the terminal.
- 1,158 homebrew installs (30d).
- Good for coding-agent workflows and repeatable scripts.
- Structured output is available for automation and parsing.
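As a sketch of the structured-output bullet above: llm can be asked for JSON matching a schema, which downstream tools like jq can then parse reliably. The concise `--schema 'field, field type'` syntax is from llm's schema support; the schema string and prompt here are illustrative, and the command assumes an API key is already configured (e.g. `llm keys set openai`).

```shell
# Ask for JSON that matches a concise schema, then parse it in a script.
# Assumes: llm installed and a key configured (`llm keys set openai`).
llm --schema 'name, bio, age int' 'Invent a cool dog' > dog.json

# Structured output means downstream tools can rely on the shape:
jq -r '.name' dog.json
```

Because the output shape is fixed by the schema, the jq step never has to guess where a field lives in free-form prose.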
llm guide
Prompting, local plugins, and structured LLM outputs from the terminal. Built by Simon Willison. Supports structured output, which makes it a good fit for scripts and agents.
Open CLI packages the install path, verify step, and safe-start workflow so this tool can move from “interesting CLI” to something you can actually use. It also integrates with skills.sh so each CLI comes with the right companion skills, not just a binary and a docs link.
When to apply
- You want prompting, local plugins, and structured LLM outputs from the terminal.
- You want AI models and inference you can script with structured output.
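The scripting point above can be sketched as a pipeline: llm reads standard input, so another command's output can become prompt context. The commit-message use case is only an illustration, and the command assumes a key is configured.

```shell
# Pipe another command's output into llm as prompt context.
# Assumes a key is configured (`llm keys set openai`).
git diff --staged | llm 'Write a one-line commit message for this diff'
```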
Quick reference
$ pipx install llm
$ llm --help
$ llm 'Explain this command: rg TODO src'
Open CLI × skills.sh
Open CLI integrates llm with the right skills.sh companions so you get the tool and the workflow together.
Prompt Engineering
Verified pairing
Open CLI integrates llm with this skills.sh skill because it is the clearest fit for how llm is usually used. Use sharper prompts so AI CLIs and agents produce more reliable results.
$ npx skills add https://github.com/inferen-sh/skills --skill prompt-engineering
Use llm together with the Prompt Engineering skills.sh skill. Start with a small prompt or read-only action, show the result, and propose the next loop before escalating scope.
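One small first loop in the spirit of this pairing: keep the role in a reusable system prompt via llm's `-s/--system` flag and vary only the task, so results stay comparable between runs. The system prompt wording here is illustrative, and a configured key is assumed.

```shell
# Separate the role (system prompt) from the task (user prompt).
# Assumes a key is configured (`llm keys set openai`).
llm -s 'You are a terse shell tutor. Answer in two sentences.' \
    'Explain this command: rg TODO src'
```

Inspect the answer, then widen scope only once the small prompt behaves as expected.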
Why this tool
- llm is a good fit for local AI work, especially prompting, local plugins, and structured LLM outputs from the terminal.
- 1,158 homebrew installs (30d).
- Good for scripts and agents.
Watch-outs
- Configure an API key before real work (e.g. `llm keys set openai`).
- Needs network access unless you install a local-model plugin.
Example workflow
1. llm 'Explain this command: rg TODO src'
Safe start
Install llm.
Run `llm --help` first.
Start with `llm 'Explain this command: rg TODO src'`.
Authenticate llm before asking the agent to do real work.