standardize around sd-1.5-ggml since it's smaller

Signed-off-by: mudler <mudler@localai.io>
Author: mudler
Date: 2025-04-17 19:41:45 +02:00
parent 4a7c30ea5a
commit 5c466d9b81
4 changed files with 4 additions and 9 deletions

@@ -116,7 +116,7 @@ LocalAGI supports multiple hardware configurations through Docker Compose profiles
 - Default models:
   - Text: `arcee-agent`
   - Multimodal: `minicpm-v-2_6`
-  - Image: `flux.1-dev-ggml`
+  - Image: `sd-1.5-ggml`
 - Environment variables:
   - `MODEL_NAME`: Text model to use
   - `MULTIMODAL_MODEL`: Multimodal model to use
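
As a sketch of how these variables are consumed (assuming the stock docker-compose.yaml with the `${VAR:-default}` substitutions shown in the hunks below), the defaults can be overridden from the shell at startup:

    # Any variable left unset falls back to the default baked into the compose file
    MODEL_NAME=arcee-agent MULTIMODAL_MODEL=minicpm-v-2_6 IMAGE_MODEL=sd-1.5-ggml docker compose up
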
@@ -163,7 +163,7 @@ docker compose -f docker-compose.intel.yaml up
 If no models are specified, it will use the defaults:
 - Text model: `arcee-agent`
 - Multimodal model: `minicpm-v-2_6`
-- Image model: `flux.1-dev-ggml` (NVIDIA) or `sd-1.5-ggml` (Intel)
+- Image model: `sd-1.5-ggml`
 
 Good (relatively small) models that have been tested are:

@@ -11,11 +11,6 @@ services:
       # On a system with integrated GPU and an Arc 770, this is the Arc 770
       - /dev/dri/card1
       - /dev/dri/renderD129
-    command:
-      - ${MODEL_NAME:-arcee-agent}
-      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
-      - ${IMAGE_MODEL:-sd-1.5-ggml}
-      - granite-embedding-107m-multilingual
 
   localrecall:
     extends:
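
With the command block removed from the Intel override, the service presumably inherits the model list (and the `${IMAGE_MODEL:-sd-1.5-ggml}` default) from the compose file it extends. A hedged example of starting the Intel setup while still overriding the image model from the shell, using the command shown in the README hunk above:

    IMAGE_MODEL=sd-1.5-ggml docker compose -f docker-compose.intel.yaml up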

@@ -8,7 +8,7 @@ services:
       - DEBUG=true
     image: localai/localai:master-cublas-cuda12-ffmpeg-core
     # For images with python backends, use:
-    # image: localai/localai:master-cublas-cuda12-ffmpeg-core
+    # image: localai/localai:master-cublas-cuda12-ffmpeg
     deploy:
       resources:
         reservations:
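
If you switch to the python-backend image referenced in the comment above, a minimal sketch (the tag is taken from the comment in this diff) is to pre-pull it and restart the stack:

    docker pull localai/localai:master-cublas-cuda12-ffmpeg
    docker compose up -d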

@@ -9,7 +9,7 @@ services:
     command:
       - ${MODEL_NAME:-arcee-agent}
       - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
-      - ${IMAGE_MODEL:-flux.1-dev-ggml}
+      - ${IMAGE_MODEL:-sd-1.5-ggml}
       - granite-embedding-107m-multilingual
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]