LocalAGI Logo

Your AI. Your Hardware. Your Rules.

LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility. It is a complete drop-in replacement for OpenAI's Responses API, with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that respects your privacy.

🛡️ Take Back Your Privacy

Are you tired of AI wrappers calling out to cloud APIs, risking your privacy? So were we.

LocalAGI ensures your data stays exactly where you want it—on your hardware. No API keys, no cloud subscriptions, no compromise.

🌟 Key Features

  • 🎛 No-Code Agents: Easy-to-configure multiple agents via Web UI.
  • 🖥 Web-Based Interface: Simple and intuitive agent management.
  • 🤖 Advanced Agent Teaming: Instantly create cooperative agent teams from a single prompt.
  • 📡 Connectors Galore: Built-in integrations with Discord, Slack, Telegram, GitHub Issues, and IRC.
  • 🛠 Comprehensive REST API: Seamless integration into your workflows. Every agent you create supports the OpenAI Responses API out of the box.
  • 📚 Short & Long-Term Memory: Powered by LocalRAG.
  • 🧠 Planning & Reasoning: Agents intelligently plan, reason, and adapt.
  • 🖼 Multimodal Support: Ready for vision, text, and more.
  • 🔧 Extensible Custom Actions: Easily script dynamic agent behaviors in Go (interpreted, no compilation!).
  • 🛠 Fully Customizable Models: Use your own models or integrate seamlessly with LocalAI.

🛠️ Quickstart

# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI

# CPU setup
docker compose -f docker-compose.yml up

# GPU setup
docker compose -f docker-compose.gpu.yml up

Access your agents at http://localhost:3000
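
Once the stack is up, you can check that the API is reachable. This calls the agent-listing endpoint documented in the REST API section below:

curl http://localhost:3000/api/agents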

🏆 Why Choose LocalAGI?

  • ✓ Ultimate Privacy: No data ever leaves your hardware.
  • ✓ Flexible Model Integration: Supports GGUF, GGML, and more.
  • ✓ Developer-Friendly: Rich APIs and intuitive interfaces.
  • ✓ Effortless Setup: Simple Docker setups and pre-built binaries.
  • ✓ Platform Agnostic: Works seamlessly on Linux, macOS, and Windows.

🌐 The Local Ecosystem

LocalAGI is part of the powerful Local family of privacy-focused AI tools:

  • LocalAI: Run Large Language Models locally.
  • LocalRAG: Retrieval-Augmented Generation with local storage.
  • LocalAGI: Deploy intelligent AI agents securely and privately.

🌟 Screenshots

Powerful Web UI

Web UI screenshots

Connectors Ready-to-Go

Telegram Discord Slack IRC GitHub

📖 Full Documentation

Explore detailed documentation including:

Environment Configuration

Variable What It Does
LOCALAGENT_MODEL Your go-to model
LOCALAGENT_MULTIMODAL_MODEL Optional model for multimodal capabilities
LOCALAGENT_LLM_API_URL OpenAI-compatible API server URL
LOCALAGENT_LLM_API_KEY API authentication
LOCALAGENT_TIMEOUT Request timeout settings
LOCALAGENT_STATE_DIR Where state gets stored
LOCALAGENT_LOCALRAG_URL LocalRAG connection
LOCALAGENT_ENABLE_CONVERSATIONS_LOGGING Toggle conversation logs
LOCALAGENT_API_KEYS A comma-separated list of API keys used for authentication
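
A minimal sketch of wiring these together (hypothetical values; it assumes an OpenAI-compatible server such as LocalAI listening on port 8080, and that your docker-compose.yml or shell environment forwards the variables to the LocalAGI process):

export LOCALAGENT_MODEL="your-model-name"              # hypothetical model name
export LOCALAGENT_LLM_API_URL="http://localhost:8080"  # OpenAI-compatible API server
export LOCALAGENT_STATE_DIR="$HOME/.localagi/state"    # where agent state is persisted
export LOCALAGENT_ENABLE_CONVERSATIONS_LOGGING="true"  # toggle conversation logs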

Installation Options

Pre-Built Binaries

Download ready-to-run binaries from the Releases page.

Source Build

Requirements:

  • Go 1.20+
  • Git
  • Bun 1.2+
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI

# Build it
cd webui/react-ui && bun i && bun run build
cd ../..
go build -o localagent

# Run it
./localagent

Development

The development workflow is similar to the source build, but with additional steps for hot reloading of the frontend:

# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI

# Install dependencies and start frontend development server
cd webui/react-ui && bun i && bun run dev

Then, in a separate terminal:

# Start development server
cd ../.. && go run main.go

Note: see webui/react-ui/.vite.config.js for environment variables that can be used to configure the backend URL.

CONNECTORS

Link your agents to the services you already use. Configuration examples below.

GitHub Issues

{
  "token": "YOUR_PAT_TOKEN",
  "repository": "repo-to-monitor",
  "owner": "repo-owner",
  "botUserName": "bot-username"
}

Discord

After creating your Discord bot:

{
  "token": "Bot YOUR_DISCORD_TOKEN",
  "defaultChannel": "OPTIONAL_CHANNEL_ID"
}

Don't forget to enable "Message Content Intent" in the Bot tab of your application settings!

Slack

Use the included slack.yaml manifest to create your app, then configure:

{
  "botToken": "xoxb-your-bot-token",
  "appToken": "xapp-your-app-token"
}
  • Create the bot token under "OAuth & Permissions" -> "OAuth Tokens for Your Workspace".
  • Create an app-level token under "Basic Information" -> "App-Level Tokens" with the connections:write and authorizations:read scopes.

Telegram

Get a token from @botfather, then:

{ 
  "token": "your-bot-father-token" 
}

IRC

Connect to IRC networks:

{
  "server": "irc.example.com",
  "port": "6667",
  "nickname": "LocalAgentBot",
  "channel": "#yourchannel",
  "alwaysReply": "false"
}

REST API

Agent Management

Endpoint Method Description Example
/api/agents GET List all available agents Example
/api/agent/:name/status GET View agent status history Example
/api/agent/create POST Create a new agent Example
/api/agent/:name DELETE Remove an agent Example
/api/agent/:name/pause PUT Pause agent activities Example
/api/agent/:name/start PUT Resume a paused agent Example
/api/agent/:name/config GET Get agent configuration
/api/agent/:name/config PUT Update agent configuration
/api/meta/agent/config GET Get agent configuration metadata
/settings/export/:name GET Export agent config Example
/settings/import POST Import agent config Example

Actions and Groups

Endpoint Method Description Example
/api/actions GET List available actions
/api/action/:name/run POST Execute an action
/api/agent/group/generateProfiles POST Generate group profiles
/api/agent/group/create POST Create a new agent group

Chat Interactions

Endpoint Method Description Example
/api/chat/:name POST Send message & get response Example
/api/notify/:name POST Send notification to agent Example
/api/sse/:name GET Real-time agent event stream Example
/v1/responses POST Send message & get response (OpenAI Responses API)

Curl Examples

Get All Agents

curl -X GET "http://localhost:3000/api/agents"

Get Agent Status

curl -X GET "http://localhost:3000/api/agent/my-agent/status"

Create Agent

curl -X POST "http://localhost:3000/api/agent/create" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-agent",
    "model": "gpt-4",
    "system_prompt": "You are an AI assistant.",
    "enable_kb": true,
    "enable_reasoning": true
  }'

Delete Agent

curl -X DELETE "http://localhost:3000/api/agent/my-agent"

Pause Agent

curl -X PUT "http://localhost:3000/api/agent/my-agent/pause"

Start Agent

curl -X PUT "http://localhost:3000/api/agent/my-agent/start"

Get Agent Configuration

curl -X GET "http://localhost:3000/api/agent/my-agent/config"

Update Agent Configuration

curl -X PUT "http://localhost:3000/api/agent/my-agent/config" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "system_prompt": "You are an AI assistant."
  }'

Export Agent

curl -X GET "http://localhost:3000/settings/export/my-agent" --output my-agent.json

Import Agent

curl -X POST "http://localhost:3000/settings/import" \
  -F "file=@/path/to/my-agent.json"

Send Message

curl -X POST "http://localhost:3000/api/chat/my-agent" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you today?"}'

Notify Agent

curl -X POST "http://localhost:3000/api/notify/my-agent" \
  -H "Content-Type: application/json" \
  -d '{"message": "Important notification"}'

Agent SSE Stream

curl -N -X GET "http://localhost:3000/api/sse/my-agent"

Note: For proper SSE handling, you should use a client that supports SSE natively.
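
Responses API

A minimal sketch of calling the OpenAI-compatible /v1/responses endpoint listed above. The request body follows OpenAI's Responses API shape; passing the agent name as the model field is an assumption here, so adjust it to your setup:

curl -X POST "http://localhost:3000/v1/responses" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-agent",
    "input": "Hello, how are you today?"
  }'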

Agent Configuration Reference

The agent configuration defines how an agent behaves and what capabilities it has. You can view the available configuration options and their descriptions by using the metadata endpoint:

curl -X GET "http://localhost:3000/api/meta/agent/config"

This will return a JSON object containing all available configuration fields, their types, and descriptions.

Here's an example of the agent configuration structure:

{
  "name": "my-agent",
  "model": "gpt-4",
  "multimodal_model": "gpt-4-vision",
  "hud": true,
  "standalone_job": false,
  "random_identity": false,
  "initiate_conversations": true,
  "enable_planning": true,
  "identity_guidance": "You are a helpful assistant.",
  "periodic_runs": "0 * * * *",
  "permanent_goal": "Help users with their questions.",
  "enable_kb": true,
  "enable_reasoning": true,
  "kb_results": 5,
  "can_stop_itself": false,
  "system_prompt": "You are an AI assistant.",
  "long_term_memory": true,
  "summary_long_term_memory": false
}

LICENSE

MIT License — See the LICENSE file for details.


LOCAL PROCESSING. GLOBAL THINKING.
Made with ❤️ by mudler
