Open Source · Built with Rust

OpenFang — The Agent Operating System

OpenFang is the open-source agent operating system built with Rust. Deploy autonomous AI agents that run like experts, not chatbots.

$ curl -fsSL https://openfang.sh/install | sh      # Linux / macOS
> irm https://openfang.sh/install.ps1 | iex        # Windows PowerShell
16 Security Layers
40 Channel Adapters
27 LLM Providers
53 Tools

Not a Chatbot Framework.
An Agent OS.

OpenFang is a production-grade, open-source agent operating system built entirely in Rust. Where traditional agent frameworks act as reactive tools — responding only when prompted — OpenFang introduces a fundamentally different paradigm. It treats AI agents as autonomous processes that can be deployed, scheduled, monitored, and managed just like services in a modern operating system.

At the heart of OpenFang is the concept of Hands: self-contained autonomous capability packages that combine configuration, expert domain knowledge, operational procedures, and tool access into a single deployable unit. Once activated, a Hand executes its entire workflow independently — discovering leads, monitoring competitors, generating research reports, or editing videos — and reports back when complete.

This shift from "tool you use" to "expert you deploy" is what sets OpenFang apart from every other agent framework on the market. OpenFang bridges the gap between lightweight runtimes that lack features and heavyweight frameworks that consume excessive resources, delivering both performance and comprehensive functionality in a single package.

Built with Rust

Memory-safe, high-performance foundation. Modular architecture with 14 specialized crates compiled into a single binary. No runtime dependencies, no garbage collection pauses.

Autonomous by Design

Hands run as background processes with complete SOPs. Set your goal, and OpenFang handles the rest — executing multi-stage pipelines, making decisions, recovering from errors, and delivering results.

🔒 Production-Grade Security

16 independent defense layers including WASM sandboxing, capability-based access control, Merkle hash chain auditing, taint tracking, and mandatory human approval for financial operations.

Hands: Autonomous Capability Packages

Hands transform AI agents from reactive chatbots into autonomous experts. Each Hand is a self-contained unit with its own configuration, training manual, and domain expertise — deployed once, running continuously.

Traditional AI agents work like contractors: you give them a task, they do it, then wait for the next instruction. If something breaks mid-workflow, you have to step in manually.

OpenFang Hands work like skilled employees. Each Hand carries a complete standard operating procedure. After activation, it knows what to do, when to do it, and where to deliver the results. The entire workflow runs in a closed loop without human intervention.

Every Hand internally bundles four components: an execution plan, an expert knowledge base, tool invocation permissions, and dashboard metrics. This architecture ensures predictable behavior while maintaining the flexibility to handle edge cases through built-in decision trees and error recovery mechanisms.

Beyond the 7 built-in Hands, you can build your own. Define a HAND.toml file specifying tools, parameters, and prompts, and you have a custom autonomous capability package ready for deployment. Share it with the community through FangHub, the OpenFang marketplace.
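As an illustration, a custom Hand's manifest might look like the sketch below. The field names and layout here are hypothetical, chosen only to show the kind of information a declarative manifest carries (required tools, user settings, dashboard metrics); they are not the actual OpenFang schema.

```toml
# Hypothetical HAND.toml for a custom Hand — illustrative field names,
# not the real OpenFang manifest schema.
[hand]
name = "changelog-writer"
version = "0.1.0"
description = "Drafts release notes from merged pull requests"

[tools]
required = ["http_fetch", "file_write"]

[parameters]
repo = { type = "string", required = true }
tone = { type = "string", default = "concise" }

[metrics]
dashboard = ["drafts_generated", "approval_rate"]
```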

Layer 1
HAND.toml — The Job Description

Declarative manifest defining required tools, user-configurable settings, dashboard metrics, and system requirements. Compiled into the binary at build time.

Layer 2
System Prompt — The Training Manual

Multi-stage operational runbook with concrete procedures, decision trees, error recovery mechanisms, and quality gates that the Hand follows autonomously.

Layer 3
SKILL.md — The Domain Expertise

Expert domain knowledge injected into the agent context: best practices, industry standards, evaluation criteria, and known pitfalls that make the Hand a specialist.

7 Production-Ready Hands

🎬
Clip

Transforms long-form videos into viral short clips through an 8-stage pipeline: source analysis, moment detection, clip extraction, subtitle generation, thumbnail creation, AI voiceover, quality scoring, and batch export.

Content Repurposing · Social Media · Highlights
🎯
Lead

Autonomous lead generation engine that discovers prospects from designated sources, enriches company data and contact info, scores leads 0–100 against your ideal customer profile, deduplicates, and delivers packaged results in CSV or Markdown.

Sales Pipeline · Market Research · Outreach
📡
Collector

Intelligent surveillance system inspired by OSINT methodologies. Monitors any target — companies, topics, people, technologies — detecting changes, tracking sentiment, and building knowledge graphs in the background.

Competitive Intel · Threat Intelligence · Brand Monitoring
📈
Predictor

Calibrated super-forecasting engine that produces probability estimates with Brier scores, evidence chains, confidence intervals, and contrarian analysis for strategic decision-making.

Strategic Planning · Risk Assessment · Trend Forecasting
📚
Researcher

Deep research agent with academic-grade methodology: CRAAP test fact-checking, cross-reference verification, multi-language source support, and APA citation generation for comprehensive reports.

Due Diligence · Academic Review · Technical Assessment
🐦
Twitter

Autonomous Twitter/X account manager supporting 7 content types with a pre-publish approval queue, engagement analytics tracking, and optimized scheduling for maximum reach.

Brand Presence · Thought Leadership · Content Distribution
🌐
Browser

Playwright-powered web automation that handles form filling, multi-step workflows, data extraction, and automated testing. Includes a mandatory purchase approval gate — any transaction requires explicit human confirmation.

Web Scraping · Form Automation · Price Monitoring

Modular Rust Architecture

OpenFang is organized as a Cargo workspace of 14 specialized crates. Dependencies flow strictly downward — lower-level crates never depend on higher ones — ensuring clean separation and independent optimization.

openfang-runtime
Agent Execution Engine

Agent loop with 3 native LLM drivers (Anthropic, Gemini, OpenAI-compatible), 23 built-in tools, WASM sandbox via Wasmtime with dual fuel + epoch metering, MCP client/server, and A2A protocol support.

openfang-kernel
Central Coordinator

Integrates all subsystems: AgentRegistry, AgentScheduler, CapabilityManager, EventBus, Supervisor, WorkflowEngine, TriggerEngine, and WasmSandbox. Handles spawning, dispatch, and graceful shutdown.

openfang-memory
Memory Subsystem

SQLite-backed with structured KV store, vector embedding semantic search, knowledge graph, session management, task boards, and usage event persistence. Supports cross-channel canonical sessions.

openfang-api
HTTP API Server

Built on Axum 0.8 with 76 endpoints covering agents, workflows, triggers, memory, channels, templates, models, providers, skills, and health. Supports WebSocket and SSE streaming with OpenAI-compatible endpoints.

openfang-channels
Channel Bridge

40 adapters connecting to messaging platforms worldwide. Implements message routing, bridge management, per-channel rate limiting, formatters, and channel overrides for seamless multi-platform operation.

openfang-skills
Skill System

Pluggable toolkit framework with 60 bundled skills. Supports FangHub marketplace and ClawHub client integration, SKILL.md parsing, and prompt injection scanning for safe skill execution.

openfang-wire
P2P Agent Protocol

OpenFang Protocol (OFP) for peer-to-peer agent communication over JSON frames mutually authenticated with HMAC-SHA256, enabling secure inter-agent coordination.

openfang-desktop
Desktop Application

Native desktop app built on Tauri 2.0 that launches the kernel and Axum server in background threads, providing a WebView-based user interface for local management.

openfang-migrate
Migration Engine

One-command migration from OpenClaw: transfers all agent configurations, conversation history, skills, and config files seamlessly to OpenFang's architecture.

14 Cargo Crates
76 API Endpoints
3 Native LLM Drivers
60 Bundled Skills
23 Built-in Tools

16 Independent Security Layers

When agents control browsers, post to social media, and process sensitive data, security is not optional. OpenFang implements defense-in-depth: 16 independent systems that overlap so a failure in one is caught by the others.

🔐

Capability-Based Security

Agents can only perform operations explicitly granted to them. Capabilities are immutable after agent creation and enforced at the kernel level — not the application layer. This prevents unauthorized privilege escalation regardless of prompt manipulation.
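To make the idea concrete, here is a minimal sketch of capability-based enforcement, assuming a fixed capability set that is checked before every privileged operation. The types and names (`Capability`, `Agent::check`) are illustrative, not OpenFang's actual API.

```rust
use std::collections::HashSet;

// Illustrative capability sketch: the set is fixed at construction, so a
// manipulated prompt cannot widen an agent's privileges later.
#[derive(Hash, PartialEq, Eq, Debug, Clone)]
enum Capability {
    NetHttp,
    FsRead,
    FsWrite,
    Spend,
}

struct Agent {
    caps: HashSet<Capability>, // immutable after creation
}

impl Agent {
    fn new(caps: impl IntoIterator<Item = Capability>) -> Self {
        Self { caps: caps.into_iter().collect() }
    }

    // The kernel-side gate: deny anything not explicitly granted.
    fn check(&self, needed: &Capability) -> Result<(), String> {
        if self.caps.contains(needed) {
            Ok(())
        } else {
            Err(format!("capability {:?} not granted", needed))
        }
    }
}

fn main() {
    let agent = Agent::new([Capability::NetHttp, Capability::FsRead]);
    assert!(agent.check(&Capability::FsRead).is_ok());
    assert!(agent.check(&Capability::Spend).is_err()); // never granted
    println!("capability checks passed");
}
```

The key design point is that the grant set lives outside the agent's own reasoning loop: no matter what the model outputs, the gate consults only the set fixed at creation.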

WASM Dual Metering

Untrusted WASM modules run in a Wasmtime sandbox with two independent metering systems. Fuel metering counts WASM instructions to stop CPU-intensive loops. Epoch metering enforces wall-clock timeouts, catching stalls that instruction counting cannot, such as blocking host calls or environmental slowdowns.
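The interplay of the two limits can be sketched in plain Rust, without the Wasmtime API: execution is bounded both by an instruction budget ("fuel") and a wall-clock deadline ("epoch"), so either runaway computation or a stalled host call trips a limit. This is a conceptual model only, not how Wasmtime implements metering internally.

```rust
use std::time::{Duration, Instant};

// Conceptual dual-metering sketch (illustrative, not the Wasmtime API).
struct Meter {
    fuel: u64,
    deadline: Instant,
}

enum Trap {
    OutOfFuel,
    EpochExpired,
}

impl Meter {
    fn new(fuel: u64, budget: Duration) -> Self {
        Self { fuel, deadline: Instant::now() + budget }
    }

    // Charged on each unit of guest work; real engines amortize this cost.
    fn tick(&mut self, cost: u64) -> Result<(), Trap> {
        if Instant::now() >= self.deadline {
            return Err(Trap::EpochExpired); // wall-clock limit hit
        }
        self.fuel = self.fuel.checked_sub(cost).ok_or(Trap::OutOfFuel)?;
        Ok(())
    }
}

fn main() {
    let mut m = Meter::new(100, Duration::from_secs(5));
    for _ in 0..100 {
        m.tick(1).ok();
    }
    // The 101st unit of work exceeds the fuel budget.
    assert!(matches!(m.tick(1), Err(Trap::OutOfFuel)));
    println!("fuel limit enforced");
}
```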

🔗

Merkle Hash Chain Audit Trail

Every operation is cryptographically recorded in a tamper-evident hash chain. If an error occurs, you can precisely trace the exact sequence of events — no guessing, no log digging. Any attempt to alter audit records is immediately detectable.
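The tamper-evidence property comes from chaining: each entry's hash covers the previous hash, so editing any record breaks every hash after it. The toy below demonstrates the principle with `std`'s `DefaultHasher` purely to stay dependency-free; a real audit trail (including OpenFang's, per the description above) would use a cryptographic hash such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash-chain audit log. DefaultHasher is NOT cryptographic; it is used
// here only so the sketch runs with the standard library alone.
fn link(prev: u64, entry: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    entry.hash(&mut h);
    h.finish()
}

fn chain(entries: &[&str]) -> Vec<u64> {
    let mut hashes = Vec::new();
    let mut prev = 0u64; // genesis value
    for e in entries {
        prev = link(prev, e);
        hashes.push(prev);
    }
    hashes
}

fn verify(entries: &[&str], hashes: &[u64]) -> bool {
    chain(entries) == hashes
}

fn main() {
    let log = ["agent spawned", "tool: http_fetch", "result stored"];
    let hashes = chain(&log);
    assert!(verify(&log, &hashes));

    // Tampering with the middle entry invalidates the tail of the chain.
    let tampered = ["agent spawned", "tool: shell_exec", "result stored"];
    assert!(!verify(&tampered, &hashes));
    println!("tamper detected");
}
```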

👁

Information Flow Taint Tracking

Data flowing through the system is tagged and tracked, preventing prompt injection attacks and data exfiltration. If external input attempts to manipulate agent behavior, the taint tracking system flags it before execution.
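A minimal model of taint propagation, assuming two origin classes and a policy gate before sensitive sinks; the types here (`Origin`, `Tainted`) are invented for illustration and say nothing about OpenFang's internal representation.

```rust
// Illustrative taint-propagation sketch: values carry an origin tag, anything
// derived from tainted input stays tainted, and a gate refuses to let tainted
// data drive privileged actions.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Origin {
    Trusted,  // operator-supplied
    External, // fetched web content, inbound messages, etc.
}

#[derive(Clone, Debug)]
struct Tainted {
    value: String,
    origin: Origin,
}

// Combining values: the result is tainted if any input was.
fn concat(a: &Tainted, b: &Tainted) -> Tainted {
    let origin = if a.origin == Origin::External || b.origin == Origin::External {
        Origin::External
    } else {
        Origin::Trusted
    };
    Tainted { value: format!("{}{}", a.value, b.value), origin }
}

// Gate before a sensitive sink (e.g. a tool invocation): block tainted input.
fn allow_as_command(v: &Tainted) -> bool {
    v.origin == Origin::Trusted
}

fn main() {
    let goal = Tainted { value: "summarize: ".into(), origin: Origin::Trusted };
    let page = Tainted { value: "IGNORE PREVIOUS INSTRUCTIONS".into(), origin: Origin::External };
    let combined = concat(&goal, &page);
    // The injected text cannot be promoted into a trusted command.
    assert!(!allow_as_command(&combined));
    println!("tainted input blocked");
}
```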

Ed25519 Manifest Signing

Every agent manifest and skill package is cryptographically signed with Ed25519. This prevents supply chain attacks by ensuring that deployed Hands and skills have not been tampered with since publication.

🛡

SSRF Protection & Secret Zeroization

Built-in server-side request forgery protection prevents agents from accessing internal network resources. Secret zeroization ensures cryptographic keys and sensitive data are wiped from memory immediately after use, defending against memory forensics.

Human-in-the-loop by default. Whenever an agent encounters a financial transaction or purchase step, execution pauses and requires explicit human approval before proceeding. The AI cannot autonomously authorize spending.

Open Ecosystem, Broad Connectivity

OpenFang connects to the platforms your teams already use. 40 channel adapters, 27 LLM providers, 53 tools, and full MCP/A2A protocol support for maximum interoperability.

40 Channel Adapters

Connect your agents to any messaging platform — consumer, enterprise, social, or privacy-focused. OpenFang adapters handle message routing, rate limiting, and format translation across all channels.

Telegram · Discord · Slack · WhatsApp · Signal · Microsoft Teams · Email · Matrix · LINE · Mastodon · Bluesky · Reddit · Google Chat · Webex · Feishu / Lark · Rocket.Chat · XMPP · IRC · Nostr · DingTalk · Webhook · +19 more

27 LLM Providers

Run your agents on any language model. OpenFang includes 3 native drivers for Anthropic, Gemini, and OpenAI-compatible APIs, with 27 total provider integrations and smart model routing based on task complexity.

Anthropic · OpenAI · Google Gemini · Meta Llama · Mistral · Cohere · DeepSeek · Groq · Together AI · Fireworks · Ollama · +16 more

MCP — Model Context Protocol

OpenFang operates as both MCP client and server. As a client, it connects to external MCP servers (GitHub, file systems, databases, Puppeteer) and makes their tools available to all agents. As a server, it exposes its own agents as callable tools for external MCP clients. All tools are namespaced as mcp_{server}_{tool} to prevent conflicts.
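The `mcp_{server}_{tool}` convention described above can be sketched as a plain formatting helper; the function name and server names below are illustrative.

```rust
// Sketch of the mcp_{server}_{tool} namespacing convention described above.
fn mcp_tool_name(server: &str, tool: &str) -> String {
    format!("mcp_{}_{}", server, tool)
}

fn main() {
    // Two servers exposing a tool with the same local name no longer collide.
    assert_eq!(mcp_tool_name("github", "search"), "mcp_github_search");
    assert_eq!(mcp_tool_name("filesystem", "search"), "mcp_filesystem_search");
    println!("namespaced tool ids are unique");
}
```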

A2A — Agent-to-Agent Protocol

The A2A protocol enables inter-agent communication and task management through Agent Cards that define each agent's capabilities and interfaces. Combined with OpenFang Wire (OFP) for peer-to-peer communication via HMAC-SHA256 authenticated JSON frames, agents can collaborate across instances securely.

How OpenFang Compares

The agent framework landscape spans from ultra-lightweight runtimes to feature-rich but resource-heavy platforms. OpenFang occupies a unique position — delivering comprehensive OS-level capabilities with efficient resource usage.

| Feature | OpenFang | OpenClaw | ZeroClaw | Nanobot |
| --- | --- | --- | --- | --- |
| Core Position | Agent Operating System | Heavyweight Framework | High-Performance Runtime | Research Framework |
| Language | Rust | TypeScript | Rust | Python |
| Memory Usage | Low | Very High (1.5 GB+) | Ultra-Low (<5 MB) | Low |
| Cold Start | Fast | Slow (5.98 s) | Ultra-Fast (<10 ms) | Moderate |
| Security | 16-layer defense-in-depth, WASM sandbox, kernel isolation | Application-layer checks | Resource-efficiency focused | Basic |
| Autonomous Scheduling | Hands | | | |
| Channel Adapters | 40 | ~15 | Limited | Limited |
| LLM Providers | 27 | ~10 | Growing | Several |
| MCP / A2A | Both | Partial | Partial | MCP only |
| Marketplace | FangHub | | | |

Up and Running in 3 Commands

Install OpenFang, initialize your workspace, and launch the agent operating system. The console is accessible at localhost:4200.

# Install OpenFang
curl -fsSL https://openfang.sh/install | sh

# Initialize workspace
openfang init

# Start the agent OS
openfang start

# Console → http://localhost:4200

# ─── Migrating from OpenClaw? ───
openfang migrate --from openclaw

Deploy Your First Agent in Minutes

OpenFang ships as a single binary with zero runtime dependencies. The installer detects your platform, downloads the appropriate build, and places it on your PATH. From there, openfang init creates your workspace and openfang start launches the kernel, API server, and web console.

1. Install — One-line installer for Linux, macOS, and Windows. Downloads a single Rust binary with no external dependencies.
2. Initialize — Creates your workspace directory with default configuration, sets up SQLite memory storage, and prepares built-in Hands.
3. Launch — Starts the kernel, API server (76 endpoints), and web console. Access the dashboard at localhost:4200 to configure agents and activate Hands.
4. Migrate — Already using OpenClaw? One command transfers all agent configurations, conversation history, skills, and settings to OpenFang.

Start Deploying Autonomous Agents

OpenFang is free, open-source, and ready for production. Join the community building the future of autonomous AI workflows.