UV: Python Packaging Solved, Finally
Using uv to manage Python is good enough that I've started integrating Python tools into my workflow again. This is a huge boon if you want to work with LLMs, since the machine learning community is very Python focused.
Python was the language I loved most when I started developing. I bent over backwards to integrate Python and C++ back when that was hard, just to keep using Python. But it was so difficult and slow to install Python versions, juggle virtual environments, and get everything working on new systems.
uv is different. It's made by people who remember how fast computers should be these days and refuse to accept slow solutions. Yes, it's written in Rust, but it isn't fast because it uses Rust; it's written in Rust because that's the best language for building efficient software in 2024.
Install UV
The first step is obtaining the tool itself. Its Rust codebase produces self-contained binaries, so installation is easy. The full install docs cover every option, but the typical ones include:
- Untrusted shell script: wget -qO- https://astral.sh/uv/install.sh | sh
- macOS Homebrew: brew install uv (which I use)
- Pre-compiled release on GitHub (which I use on Linux)
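Whichever route you take, the uv binary, and later the tools it installs, needs to be on your PATH. A line like this in your shell rc file covers the default ~/.local/bin location (the path is an assumption based on the defaults; adjust if you put uv somewhere else):

```shell
# Put ~/.local/bin (where uv and uv-installed tools land by default) on PATH.
# Location is assumed from the defaults; adjust to taste.
export PATH="$HOME/.local/bin:$PATH"
```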
Using Python Tools
The uv tool feature lets you quickly and independently use whatever version of a tool you need. All you have to do is add ~/.local/bin to your PATH, then run:
$ uv tool install llm
In seconds, the llm command is ready. This is a fully uncached install (uv cache clean was run first):
$ uv tool install llm
Resolved 32 packages in 400ms
Prepared 32 packages in 357ms
Installed 32 packages in 43ms
...
Installed 1 executable: llm
$ llm --help
Usage: llm [OPTIONS] COMMAND [ARGS]...
Access Large Language Models from the command-line
...
Implementation Details
Under the hood, uv tool creates Python entry points like ~/.local/bin/llm. This preserves the simplicity of a single language and the speed of no extra layers. Ruby tools like rbenv, on the other hand, rely on multiple layers of shell scripts. Here's what a uv shim looks like; the Python it invokes is a direct symlink to the interpreter:
$ cat ~/.local/bin/llm
#!/Users/jlisee/.local/share/uv/tools/llm/bin/python
# -*- coding: utf-8 -*-
import sys
from llm.cli import cli
if __name__ == "__main__":
    if sys.argv[0].endswith("-script.pyw"):
        sys.argv[0] = sys.argv[0][:-11]
    elif sys.argv[0].endswith(".exe"):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(cli())
Explore ~/.local/share/uv/tools/llm, and you'll find a typical virtual environment layout. It's fully isolated from all the other tools you have, so no more issues with dependencies from different tools being incompatible. The trade-off is disk space, because each tool gets its own copy of its dependencies.
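If you want to see what that layout looks like without poking at uv's directories, a throwaway environment built with the standard library venv module has the same basic shape (bin/ or Scripts/, lib/, pyvenv.cfg); I'm assuming uv's tool environments follow the standard layout, which matches what I see on disk:

```python
# Sketch: create a disposable virtual environment and print its
# top-level layout, which mirrors what each uv tool directory holds.
import os
import tempfile
import venv

with tempfile.TemporaryDirectory() as d:
    env_dir = os.path.join(d, "demo-env")
    venv.create(env_dir, with_pip=False)  # with_pip=False keeps it fast
    for name in sorted(os.listdir(env_dir)):
        print(name)  # e.g. bin, include, lib, pyvenv.cfg on POSIX
```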
My Usage
Now, in my personal environment setup, I can easily install Python-based tools like llm and aider instead of hunting for Go or Rust alternatives. I just hope Astral finds a way to make money and sticks around for the long term.