chore: cleanup, identify goal from conversation when evaluating achievement (#29)
* chore: cleanup, identify goal from conversation when evaluating achievement
* change base cpu model
* this is not necessary anymore
* use 12b
* use openthinker, it's smaller
* chore(tests): set timeout
* Enable reasoning in some of the tests
* docker compose unification, small changes
* Simplify
* Back at arcee-agent as default
* Better error handling during planning
* CI: do not run jobs for every branch

Signed-off-by: mudler <mudler@localai.io>
commit 60c249f19a (parent 209a9989c4)
README.md:
````diff
@@ -45,14 +45,100 @@ LocalAGI ensures your data stays exactly where you want it—on your hardware. N
 git clone https://github.com/mudler/LocalAGI
 cd LocalAGI
 
-# CPU setup
-docker compose up -f docker-compose.yml
+# CPU setup (default)
+docker compose up
 
-# GPU setup
-docker compose up -f docker-compose.gpu.yml
+# NVIDIA GPU setup
+docker compose --profile nvidia up
+
+# Intel GPU setup (for Intel Arc and integrated GPUs)
+docker compose --profile intel up
+
+# Start with a specific model (see available models at models.localai.io, or localai.io to use any model from Hugging Face)
+MODEL_NAME=gemma-3-12b-it docker compose up
+
+# NVIDIA GPU setup with custom multimodal and image models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=flux.1-dev \
+docker compose --profile nvidia up
 ```
 
-Access your agents at `http://localhost:8080`
+Now you can access and manage your agents at [http://localhost:8080](http://localhost:8080)
+
+## 🖥️ Hardware Configurations
+
+LocalAGI supports multiple hardware configurations through Docker Compose profiles:
+
+### CPU (Default)
+- No special configuration needed
+- Runs on any system with Docker
+- Best for testing and development
+- Supports text models only
+
+### NVIDIA GPU
+- Requires an NVIDIA GPU and drivers
+- Uses CUDA for acceleration
+- Best for high-performance inference
+- Supports text, multimodal, and image generation models
+- Run with: `docker compose --profile nvidia up`
+- Default models:
+  - Text: `arcee-agent`
+  - Multimodal: `minicpm-v-2_6`
+  - Image: `flux.1-dev`
+- Environment variables:
+  - `MODEL_NAME`: Text model to use
+  - `MULTIMODAL_MODEL`: Multimodal model to use
+  - `IMAGE_MODEL`: Image generation model to use
+  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
+
+### Intel GPU
+- Supports Intel Arc and integrated GPUs
+- Uses SYCL for acceleration
+- Best for Intel-based systems
+- Supports text, multimodal, and image generation models
+- Run with: `docker compose --profile intel up`
+- Default models:
+  - Text: `arcee-agent`
+  - Multimodal: `minicpm-v-2_6`
+  - Image: `sd-1.5-ggml`
+- Environment variables:
+  - `MODEL_NAME`: Text model to use
+  - `MULTIMODAL_MODEL`: Multimodal model to use
+  - `IMAGE_MODEL`: Image generation model to use
+  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
+
+## Customize models
+
+You can customize the models used by LocalAGI by setting environment variables when running `docker compose`. For example:
+
+```bash
+# CPU with custom model
+MODEL_NAME=gemma-3-12b-it docker compose up
+
+# NVIDIA GPU with custom models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=flux.1-dev \
+docker compose --profile nvidia up
+
+# Intel GPU with custom models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=sd-1.5-ggml \
+docker compose --profile intel up
+```
+
+If no models are specified, the defaults are used:
+- Text model: `arcee-agent`
+- Multimodal model: `minicpm-v-2_6`
+- Image model: `flux.1-dev` (NVIDIA) or `sd-1.5-ggml` (Intel)
+
+Good (relatively small) models that have been tested are:
+
+- `qwen_qwq-32b` (best at coordinating agents)
+- `gemma-3-12b-it`
+- `gemma-3-27b-it`
 
 ## 🏆 Why Choose LocalAGI?
````
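The quick-start commands above rely on Docker Compose profiles, but the compose file itself is not part of this diff. As a rough sketch of how such profile gating can be wired up (the service names, image tags, and device reservation below are illustrative assumptions, not the repository's actual docker-compose.yml):

```yaml
# Hypothetical sketch only: not LocalAGI's actual docker-compose.yml.
# A service without a profile starts on plain `docker compose up` (the CPU
# path); a service tagged with a profile starts only when that profile is
# requested, e.g. `docker compose --profile nvidia up`.
services:
  local-ai:                       # CPU backend, no profile: starts by default
    image: localai/localai:latest # assumed image tag
    environment:
      - MODEL_NAME=${MODEL_NAME:-arcee-agent}
    ports:
      - "8080:8080"

  local-ai-nvidia:                # started only with --profile nvidia
    image: localai/localai:latest # assumed image tag
    profiles: ["nvidia"]
    environment:
      - MODEL_NAME=${MODEL_NAME:-arcee-agent}
      - MULTIMODAL_MODEL=${MULTIMODAL_MODEL:-minicpm-v-2_6}
      - IMAGE_MODEL=${IMAGE_MODEL:-flux.1-dev}
    deploy:
      resources:
        reservations:
          devices:                # expose the host GPUs to the container
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

The `${VAR:-default}` substitutions also match the documented behavior: overriding `MODEL_NAME` on the command line swaps the model, and omitting it falls back to `arcee-agent`.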
````diff
@@ -98,6 +184,8 @@ Explore detailed documentation including:
 
 ### Environment Configuration
 
+LocalAGI supports configuration through environment variables. Note that these variables need to be set on the localagi container in the docker-compose file to take effect.
+
 | Variable | What It Does |
 |----------|--------------|
 | `LOCALAGI_MODEL` | Your go-to model |
````
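To make the note about the localagi container concrete, here is a minimal, assumed compose excerpt; the `localagi` service name and the `LOCALAGI_MODEL` variable come from the diff, while the file layout and default value are illustrative:

```yaml
# Hypothetical excerpt, not the repository's actual compose file.
# LOCALAGI_* variables must be set on the localagi service itself;
# exporting them only in the host shell won't reach the container
# unless the compose file forwards them like this.
services:
  localagi:
    environment:
      - LOCALAGI_MODEL=${MODEL_NAME:-arcee-agent}  # "your go-to model"
```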