# What is PRX?
PRX is a persistent AI orchestration daemon written in Rust (roughly 169K lines). It is not a wrapper around a single LLM. It is a continuously running process that receives messages from 19 channels, routes them through an intelligent model selector, delegates work to sub-agents, and evolves its own behavior over time.
## Role in the Pipeline

In the OpenPRX pipeline (Plan -> Think -> Build -> Ship -> Protect), PRX occupies the Think stage. It is the central nervous system: every AI-driven decision flows through PRX.
```
OpenPR (Plan)                             Fenfa (Ship)
     │                                         ▲
     ▼                                         │
    PRX ── sub-agents ── prx-memory ── CI ─────┘
     │
     ▼
WAF + SD (Protect)
```

OpenPR dispatches tasks. PRX decides which model handles them, manages conversation history, enforces security policy, and delegates subtasks to autonomous sub-agents. Results flow back to OpenPR and downstream stages.
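The five-stage ordering can be sketched as a simple enum. This is purely illustrative: the stage names come from the docs above, but the enum and its `next` method are hypothetical, not PRX's actual API.

```rust
/// The five OpenPRX stages, in pipeline order (illustrative sketch).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Stage {
    Plan,    // OpenPR dispatches tasks
    Think,   // PRX: routing, history, policy, sub-agent delegation
    Build,
    Ship,    // Fenfa
    Protect, // WAF + SD
}

impl Stage {
    /// Next stage in the pipeline, if any.
    fn next(self) -> Option<Stage> {
        use Stage::*;
        match self {
            Plan => Some(Think),
            Think => Some(Build),
            Build => Some(Ship),
            Ship => Some(Protect),
            Protect => None,
        }
    }
}

fn main() {
    // Every AI-driven decision passes through Think (PRX).
    assert_eq!(Stage::Plan.next(), Some(Stage::Think));
    assert_eq!(Stage::Protect.next(), None);
    println!("after Plan comes {:?}", Stage::Plan.next().unwrap());
}
```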
## Key Subsystems

| Subsystem | Purpose |
|---|---|
| Channels | 19 messaging integrations (Signal, WhatsApp, Telegram, Discord, Slack, Matrix, etc.) |
| Providers | 14 LLM backends with unified tool-calling abstraction |
| Router | Intelligent model selection: intent classification, Elo rating, KNN semantic routing, Automix |
| Sub-agents | Three-tier delegation: synchronous named agents, async fire-and-forget sessions, management commands |
| Self-evolution | Autonomous improvement of prompts, memory, and strategies with safety gates |
| Security | 5-layer policy pipeline, approval workflows, sandbox enforcement (Docker, Firejail, Bubblewrap, Landlock, WASM) |
| Plugins | WASM-based plugin system with wasmtime sandboxing |
| MCP Client | Connects to external MCP servers to consume tools |
| Remote Nodes | Distributed execution via prx-node with H2 transport and device pairing |
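To make the Router's Elo component concrete, here is a minimal std-only sketch of Elo-rated model selection. All names (`EloRouter`, `record_outcome`, the model IDs) are assumptions for illustration; PRX's real router also layers in intent classification, KNN semantic routing, and Automix.

```rust
use std::collections::HashMap;

/// Minimal Elo table over candidate models (illustrative only).
struct EloRouter {
    ratings: HashMap<String, f64>,
    k: f64, // Elo update step size
}

impl EloRouter {
    fn new(models: &[&str]) -> Self {
        let ratings = models.iter().map(|m| (m.to_string(), 1000.0)).collect();
        EloRouter { ratings, k: 32.0 }
    }

    /// Pick the highest-rated model.
    fn select(&self) -> &str {
        self.ratings
            .iter()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(m, _)| m.as_str())
            .unwrap()
    }

    /// Standard Elo update after comparing two models' outputs.
    fn record_outcome(&mut self, winner: &str, loser: &str) {
        let rw = self.ratings[winner];
        let rl = self.ratings[loser];
        let expected_w = 1.0 / (1.0 + 10f64.powf((rl - rw) / 400.0));
        *self.ratings.get_mut(winner).unwrap() += self.k * (1.0 - expected_w);
        *self.ratings.get_mut(loser).unwrap() -= self.k * (1.0 - expected_w);
    }
}

fn main() {
    let mut router = EloRouter::new(&["model-a", "model-b"]);
    router.record_outcome("model-a", "model-b");
    assert_eq!(router.select(), "model-a");
    println!("selected: {}", router.select());
}
```

The appeal of Elo here is that it needs only pairwise outcome judgments, not absolute quality scores, so ratings keep adapting as new models and tasks appear.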
## How It Works

- A message arrives on any channel (Telegram, Signal, CLI, webhook, etc.)
- PRX maintains per-sender conversation history (last 50 messages) with automatic compaction
- The Router classifies intent, scores candidate models, and selects the best provider
- The selected LLM generates a response, potentially invoking tools
- If the task requires delegation, sub-agents are spawned (sync or async)
- The self-evolution system records outcomes for periodic analysis and improvement
- Security policy is enforced at every layer: command execution, file access, cost limits
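The per-sender history cap from step 2 can be sketched as a bounded buffer per sender. This is a simplified std-only illustration: real PRX compaction summarizes older turns rather than merely dropping them, and the type and method names are hypothetical.

```rust
use std::collections::{HashMap, VecDeque};

const MAX_HISTORY: usize = 50; // last 50 messages per sender, per the docs

/// Per-sender conversation buffers with naive compaction:
/// once a sender exceeds MAX_HISTORY, the oldest messages are dropped.
struct HistoryStore {
    by_sender: HashMap<String, VecDeque<String>>,
}

impl HistoryStore {
    fn new() -> Self {
        HistoryStore { by_sender: HashMap::new() }
    }

    fn record(&mut self, sender: &str, message: &str) {
        let buf = self.by_sender.entry(sender.to_string()).or_default();
        buf.push_back(message.to_string());
        while buf.len() > MAX_HISTORY {
            buf.pop_front(); // compaction: discard the oldest message
        }
    }

    fn history(&self, sender: &str) -> Option<&VecDeque<String>> {
        self.by_sender.get(sender)
    }
}

fn main() {
    let mut store = HistoryStore::new();
    for i in 0..60 {
        store.record("alice", &format!("msg {i}"));
    }
    let hist = store.history("alice").unwrap();
    assert_eq!(hist.len(), 50);
    assert_eq!(hist.front().unwrap(), "msg 10"); // msgs 0..9 compacted away
    println!("alice history: {} messages", hist.len());
}
```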
## Quick Start

```shell
# Clone and build
git clone https://github.com/openprx/prx && cd prx
cargo build --release

# Configure at least one provider and one channel
cp config.example.toml config.toml
# Edit config.toml: set your API keys and channel credentials

# Run the daemon
./target/release/prx --config config.toml

# Or use the CLI channel for immediate interaction
./target/release/prx --cli
```

PRX reads its configuration from a TOML file. At minimum, you need one provider (e.g., Anthropic with an API key) and one channel (e.g., CLI for local testing). See the subsystem pages for detailed configuration.
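To show the shape of that configuration step, here is a deliberately simplified sketch that pulls flat `key = "value"` pairs out of a TOML-like string. Real PRX uses a full TOML parser, and the section and key names in the sample fragment are assumptions, not PRX's actual schema.

```rust
use std::collections::HashMap;

/// Extremely simplified reader for flat `key = "value"` lines, enough to
/// illustrate reading provider/channel settings. Not a real TOML parser:
/// section headers and comments are simply skipped in this sketch.
fn parse_flat_toml(src: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in src.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') || line.starts_with('[') {
            continue; // skip blanks, comments, and [section] headers
        }
        if let Some((key, value)) = line.split_once('=') {
            map.insert(
                key.trim().to_string(),
                value.trim().trim_matches('"').to_string(),
            );
        }
    }
    map
}

fn main() {
    // Hypothetical config fragment: one provider, one channel.
    let config = r#"
        [provider.anthropic]
        api_key = "sk-..."
        [channel.cli]
        enabled = "true"
    "#;
    let map = parse_flat_toml(config);
    assert_eq!(map.get("api_key").map(String::as_str), Some("sk-..."));
    assert_eq!(map.get("enabled").map(String::as_str), Some("true"));
    println!("parsed {} keys", map.len());
}
```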