Your AI. Your Hardware. Your Rules
Create customizable AI assistants, automations, chat bots and agents that run 100% locally. No need for agentic Python libraries or cloud service keys, just bring your GPU (or even just CPU) and a web browser.
LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code: a complete drop-in replacement for OpenAI's Responses API, with advanced agentic capabilities. No clouds. No data leaks. Just pure local AI that works on consumer-grade hardware (CPU and GPU).
🛡️ Take Back Your Privacy
Are you tired of AI wrappers calling out to cloud APIs, risking your privacy? So were we.
LocalAGI ensures your data stays exactly where you want it—on your hardware. No API keys, no cloud subscriptions, no compromise.
🌟 Key Features
- 🎛 No-Code Agents: Easy-to-configure multiple agents via Web UI.
- 🖥 Web-Based Interface: Simple and intuitive agent management.
- 🤖 Advanced Agent Teaming: Instantly create cooperative agent teams from a single prompt.
- 📡 Connectors Galore: Built-in integrations with Discord, Slack, Telegram, GitHub Issues, and IRC.
- 🛠 Comprehensive REST API: Seamless integration into your workflows. Every agent you create supports the OpenAI Responses API out of the box.
- 📚 Short & Long-Term Memory: Powered by LocalRecall.
- 🧠 Planning & Reasoning: Agents intelligently plan, reason, and adapt.
- 🔄 Periodic Tasks: Schedule tasks with cron-like syntax.
- 💾 Memory Management: Control memory usage with options for long-term and summary memory.
- 🖼 Multimodal Support: Ready for vision, text, and more.
- 🔧 Extensible Custom Actions: Easily script dynamic agent behaviors in Go (interpreted, no compilation!).
- 🛠 Fully Customizable Models: Use your own models or integrate seamlessly with LocalAI.
- 📊 Observability: Monitor agent status and view detailed observable updates in real-time.
🛠️ Quickstart
# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI
# CPU setup (default)
docker compose up
# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU setup (for Intel Arc and integrated GPUs)
docker compose -f docker-compose.intel.yaml up
# Start with a specific model (see available models in models.localai.io, or localai.io to use any model in huggingface)
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-2_6 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
Now you can access and manage your agents at http://localhost:8080
Still having issues? See this YouTube video: https://youtu.be/HtVwIxW3ePg
Videos
📚🆕 Local Stack Family
🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:
- **LocalAI**: The free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API compatible with the OpenAI API specification for local AI inferencing. Does not require a GPU.
- **LocalRecall**: A RESTful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.
🖥️ Hardware Configurations
LocalAGI supports multiple hardware configurations through Docker Compose profiles:
CPU (Default)
- No special configuration needed
- Runs on any system with Docker
- Best for testing and development
- Supports text models only
NVIDIA GPU
- Requires NVIDIA GPU and drivers
- Uses CUDA for acceleration
- Best for high-performance inference
- Supports text, multimodal, and image generation models
- Run with:
  `docker compose -f docker-compose.nvidia.yaml up`
- Default models:
  - Text: `gemma-3-12b-it-qat`
  - Multimodal: `minicpm-v-2_6`
  - Image: `sd-1.5-ggml`
- Environment variables:
  - `MODEL_NAME`: Text model to use
  - `MULTIMODAL_MODEL`: Multimodal model to use
  - `IMAGE_MODEL`: Image generation model to use
  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
Intel GPU
- Supports Intel Arc and integrated GPUs
- Uses SYCL for acceleration
- Best for Intel-based systems
- Supports text, multimodal, and image generation models
- Run with:
  `docker compose -f docker-compose.intel.yaml up`
- Default models:
  - Text: `gemma-3-12b-it-qat`
  - Multimodal: `minicpm-v-2_6`
  - Image: `sd-1.5-ggml`
- Environment variables:
  - `MODEL_NAME`: Text model to use
  - `MULTIMODAL_MODEL`: Multimodal model to use
  - `IMAGE_MODEL`: Image generation model to use
  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
Customize models
You can customize the models used by LocalAGI by setting environment variables when running docker-compose. For example:
# CPU with custom model
MODEL_NAME=gemma-3-12b-it docker compose up
# NVIDIA GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-2_6 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up
# Intel GPU with custom models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-2_6 \
IMAGE_MODEL=sd-1.5-ggml \
docker compose -f docker-compose.intel.yaml up
If no models are specified, the defaults are used:
- Text model: `gemma-3-12b-it-qat`
- Multimodal model: `minicpm-v-2_6`
- Image model: `sd-1.5-ggml`

Good (relatively small) models that have been tested:
- `qwen_qwq-32b` (best at coordinating agents)
- `gemma-3-12b-it`
- `gemma-3-27b-it`
🏆 Why Choose LocalAGI?
- ✓ Ultimate Privacy: No data ever leaves your hardware.
- ✓ Flexible Model Integration: Supports GGUF, GGML, and more thanks to LocalAI.
- ✓ Developer-Friendly: Rich APIs and intuitive interfaces.
- ✓ Effortless Setup: Simple Docker compose setups and pre-built binaries.
- ✓ Feature-Rich: From planning to multimodal capabilities, connectors for Slack, MCP support, LocalAGI has it all.
🌟 Screenshots
Powerful Web UI
Connectors Ready-to-Go
📖 Full Documentation
Explore detailed documentation including:
Environment Configuration
LocalAGI supports environment configurations. Note that these environment variables need to be specified on the localagi container in the docker-compose file to have effect.

| Variable | What It Does |
|---|---|
| `LOCALAGI_MODEL` | Your go-to model |
| `LOCALAGI_MULTIMODAL_MODEL` | Optional model for multimodal capabilities |
| `LOCALAGI_LLM_API_URL` | OpenAI-compatible API server URL |
| `LOCALAGI_LLM_API_KEY` | API authentication |
| `LOCALAGI_TIMEOUT` | Request timeout settings |
| `LOCALAGI_STATE_DIR` | Where state gets stored |
| `LOCALAGI_LOCALRAG_URL` | LocalRecall connection |
| `LOCALAGI_ENABLE_CONVERSATIONS_LOGGING` | Toggle conversation logs |
| `LOCALAGI_API_KEYS` | A comma-separated list of API keys used for authentication |
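For example, a compose fragment setting a few of these variables on the localagi service might look like the sketch below. The service name follows the note above; the model name and the internal LocalAI URL are illustrative and depend on your setup.

```yaml
services:
  localagi:
    environment:
      # Illustrative values; adjust to your deployment
      - LOCALAGI_MODEL=gemma-3-12b-it
      - LOCALAGI_LLM_API_URL=http://localai:8080
      - LOCALAGI_TIMEOUT=10m
      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=true
```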
Installation Options
Pre-Built Binaries
Download ready-to-run binaries from the Releases page.
Source Build
Requirements:
- Go 1.20+
- Git
- Bun 1.2+
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
# Build it
cd webui/react-ui && bun i && bun run build
cd ../..
go build -o localagi
# Run it
./localagi
Using as a Library
LocalAGI can be used as a Go library to programmatically create and manage AI agents. Let's start with a simple example of creating a single agent:
Basic Usage: Single Agent
import (
	"log"

	"github.com/mudler/LocalAGI/core/agent"
	"github.com/mudler/LocalAGI/core/types"
)
// Create a new agent with basic configuration
agent, err := agent.New(
agent.WithModel("gpt-4"),
agent.WithLLMAPIURL("http://localhost:8080"),
agent.WithLLMAPIKey("your-api-key"),
agent.WithSystemPrompt("You are a helpful assistant."),
agent.WithCharacter(agent.Character{
Name: "my-agent",
}),
agent.WithActions(
// Add your custom actions here
),
agent.WithStateFile("./state/my-agent.state.json"),
agent.WithCharacterFile("./state/my-agent.character.json"),
agent.WithTimeout("10m"),
agent.EnableKnowledgeBase(),
agent.EnableReasoning(),
)
if err != nil {
log.Fatal(err)
}
// Start the agent
go func() {
if err := agent.Run(); err != nil {
log.Printf("Agent stopped: %v", err)
}
}()
// Stop the agent when done
agent.Stop()
This basic example shows how to:
- Create a single agent with essential configuration
- Set up the agent's model and API connection
- Configure basic features like knowledge base and reasoning
- Start and stop the agent
Advanced Usage: Agent Pools
For managing multiple agents, you can use the AgentPool system:
import (
	"context"

	"github.com/mudler/LocalAGI/core/state"
	"github.com/mudler/LocalAGI/core/types"
)
// Create a new agent pool
pool, err := state.NewAgentPool(
"default-model", // default model name
"default-multimodal-model", // default multimodal model
"image-model", // image generation model
"http://localhost:8080", // API URL
"your-api-key", // API key
"./state", // state directory
"", // MCP box URL (optional)
"http://localhost:8081", // LocalRAG API URL
func(config *AgentConfig) func(ctx context.Context, pool *AgentPool) []types.Action {
// Define available actions for agents
return func(ctx context.Context, pool *AgentPool) []types.Action {
return []types.Action{
// Add your custom actions here
}
}
},
func(config *AgentConfig) []Connector {
// Define connectors for agents
return []Connector{
// Add your custom connectors here
}
},
func(config *AgentConfig) []DynamicPrompt {
// Define dynamic prompts for agents
return []DynamicPrompt{
// Add your custom prompts here
}
},
func(config *AgentConfig) types.JobFilters {
// Define job filters for agents
return types.JobFilters{
// Add your custom filters here
}
},
"10m", // timeout
true, // enable conversation logs
)
// Create a new agent in the pool
agentConfig := &AgentConfig{
Name: "my-agent",
Model: "gpt-4",
SystemPrompt: "You are a helpful assistant.",
EnableKnowledgeBase: true,
EnableReasoning: true,
// Add more configuration options as needed
}
err = pool.CreateAgent("my-agent", agentConfig)
// Start all agents
err = pool.StartAll()
// Get agent status
status := pool.GetStatusHistory("my-agent")
// Stop an agent
pool.Stop("my-agent")
// Remove an agent
err = pool.Remove("my-agent")
Available Features
Key features available through the library:
- Single Agent Management: Create and manage individual agents with basic configuration
- Agent Pool Management: Create, start, stop, and remove multiple agents
- Configuration: Customize agent behavior through AgentConfig
- Actions: Define custom actions for agents to perform
- Connectors: Add custom connectors for external services
- Dynamic Prompts: Create dynamic prompt templates
- Job Filters: Implement custom job filtering logic
- Status Tracking: Monitor agent status and history
- State Persistence: Automatic state saving and loading
For more details about available configuration options and features, refer to the Agent Configuration Reference section.
Development
The development workflow is similar to the source build, but with additional steps for hot reloading of the frontend:
# Clone repo
git clone https://github.com/mudler/LocalAGI.git
cd LocalAGI
# Install dependencies and start frontend development server
cd webui/react-ui && bun i && bun run dev
Then, in a separate terminal:
# Start development server
cd ../.. && go run main.go
Note: see webui/react-ui/.vite.config.js for env vars that can be used to configure the backend URL
CONNECTORS
Link your agents to the services you already use. Configuration examples below.
GitHub Issues
{
"token": "YOUR_PAT_TOKEN",
"repository": "repo-to-monitor",
"owner": "repo-owner",
"botUserName": "bot-username"
}
Discord
After creating your Discord bot:
{
"token": "Bot YOUR_DISCORD_TOKEN",
"defaultChannel": "OPTIONAL_CHANNEL_ID"
}
Don't forget to enable "Message Content Intent" in the Bot tab of your app's settings!
Slack
Use the included slack.yaml manifest to create your app, then configure:
{
"botToken": "xoxb-your-bot-token",
"appToken": "xapp-your-app-token"
}
- Create the bot OAuth token from "OAuth & Permissions" -> "OAuth Tokens for Your Workspace"
- Create an App-Level token from "Basic Information" -> "App-Level Tokens" (scopes: `connections:write`, `authorizations:read`)
Telegram
Get a token from @botfather, then:
{
"token": "your-bot-father-token"
}
IRC
Connect to IRC networks:
{
"server": "irc.example.com",
"port": "6667",
"nickname": "LocalAGIBot",
"channel": "#yourchannel",
"alwaysReply": "false"
}
REST API
Agent Management
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/agents` | GET | List all available agents | Example |
| `/api/agent/:name/status` | GET | View agent status history | Example |
| `/api/agent/create` | POST | Create a new agent | Example |
| `/api/agent/:name` | DELETE | Remove an agent | Example |
| `/api/agent/:name/pause` | PUT | Pause agent activities | Example |
| `/api/agent/:name/start` | PUT | Resume a paused agent | Example |
| `/api/agent/:name/config` | GET | Get agent configuration | |
| `/api/agent/:name/config` | PUT | Update agent configuration | |
| `/api/meta/agent/config` | GET | Get agent configuration metadata | |
| `/settings/export/:name` | GET | Export agent config | Example |
| `/settings/import` | POST | Import agent config | Example |
Actions and Groups
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/actions` | GET | List available actions | |
| `/api/action/:name/run` | POST | Execute an action | |
| `/api/agent/group/generateProfiles` | POST | Generate group profiles | |
| `/api/agent/group/create` | POST | Create a new agent group | |
Chat Interactions
| Endpoint | Method | Description | Example |
|---|---|---|---|
| `/api/chat/:name` | POST | Send message & get response | Example |
| `/api/notify/:name` | POST | Send notification to agent | Example |
| `/api/sse/:name` | GET | Real-time agent event stream | Example |
| `/v1/responses` | POST | Send message & get response | OpenAI's Responses |
Curl Examples
Get All Agents
curl -X GET "http://localhost:3000/api/agents"
Get Agent Status
curl -X GET "http://localhost:3000/api/agent/my-agent/status"
Create Agent
curl -X POST "http://localhost:3000/api/agent/create" \
-H "Content-Type: application/json" \
-d '{
"name": "my-agent",
"model": "gpt-4",
"system_prompt": "You are an AI assistant.",
"enable_kb": true,
"enable_reasoning": true
}'
Delete Agent
curl -X DELETE "http://localhost:3000/api/agent/my-agent"
Pause Agent
curl -X PUT "http://localhost:3000/api/agent/my-agent/pause"
Start Agent
curl -X PUT "http://localhost:3000/api/agent/my-agent/start"
Get Agent Configuration
curl -X GET "http://localhost:3000/api/agent/my-agent/config"
Update Agent Configuration
curl -X PUT "http://localhost:3000/api/agent/my-agent/config" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"system_prompt": "You are an AI assistant."
}'
Export Agent
curl -X GET "http://localhost:3000/settings/export/my-agent" --output my-agent.json
Import Agent
curl -X POST "http://localhost:3000/settings/import" \
-F "file=@/path/to/my-agent.json"
Send Message
curl -X POST "http://localhost:3000/api/chat/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Hello, how are you today?"}'
Notify Agent
curl -X POST "http://localhost:3000/api/notify/my-agent" \
-H "Content-Type: application/json" \
-d '{"message": "Important notification"}'
Agent SSE Stream
curl -N -X GET "http://localhost:3000/api/sse/my-agent"
Note: For proper SSE handling, you should use a client that supports SSE natively.
Agent Configuration Reference
Configuration Structure
The agent configuration defines how an agent behaves and what capabilities it has. You can view the available configuration options and their descriptions by using the metadata endpoint:
curl -X GET "http://localhost:3000/api/meta/agent/config"
This will return a JSON object containing all available configuration fields, their types, and descriptions.
Here's an example of the agent configuration structure:
{
"name": "my-agent",
"model": "gpt-4",
"multimodal_model": "gpt-4-vision",
"hud": true,
"standalone_job": false,
"random_identity": false,
"initiate_conversations": true,
"enable_planning": true,
"identity_guidance": "You are a helpful assistant.",
"periodic_runs": "0 * * * *",
"permanent_goal": "Help users with their questions.",
"enable_kb": true,
"enable_reasoning": true,
"kb_results": 5,
"can_stop_itself": false,
"system_prompt": "You are an AI assistant.",
"long_term_memory": true,
"summary_long_term_memory": false
}
LICENSE
MIT License — See the LICENSE file for details.
LOCAL PROCESSING. GLOBAL THINKING.
Made with ❤️ by mudler



