fix: correct image name, switch to flux.1-dev-ggml as default

Signed-off-by: mudler <mudler@localai.io>
Author: mudler
Date: 2025-04-17 19:32:53 +02:00
parent 4888dfcdca
commit 4a7c30ea5a

3 changed files with 8 additions and 6 deletions


@@ -60,7 +60,7 @@ MODEL_NAME=gemma-3-12b-it docker compose up
 # NVIDIA GPU setup with custom multimodal and image models
 MODEL_NAME=gemma-3-12b-it \
 MULTIMODAL_MODEL=minicpm-v-2_6 \
-IMAGE_MODEL=flux.1-dev \
+IMAGE_MODEL=flux.1-dev-ggml \
 docker compose -f docker-compose.nvidia.yaml up
 ```
@@ -116,7 +116,7 @@ LocalAGI supports multiple hardware configurations through Docker Compose profil
 - Default models:
   - Text: `arcee-agent`
   - Multimodal: `minicpm-v-2_6`
-  - Image: `flux.1-dev`
+  - Image: `flux.1-dev-ggml`
 - Environment variables:
   - `MODEL_NAME`: Text model to use
   - `MULTIMODAL_MODEL`: Multimodal model to use
@@ -150,7 +150,7 @@ MODEL_NAME=gemma-3-12b-it docker compose up
 # NVIDIA GPU with custom models
 MODEL_NAME=gemma-3-12b-it \
 MULTIMODAL_MODEL=minicpm-v-2_6 \
-IMAGE_MODEL=flux.1-dev \
+IMAGE_MODEL=flux.1-dev-ggml \
 docker compose -f docker-compose.nvidia.yaml up

 # Intel GPU with custom models
@@ -163,7 +163,7 @@ docker compose -f docker-compose.intel.yaml up
 If no models are specified, it will use the defaults:
 - Text model: `arcee-agent`
 - Multimodal model: `minicpm-v-2_6`
-- Image model: `flux.1-dev` (NVIDIA) or `sd-1.5-ggml` (Intel)
+- Image model: `flux.1-dev-ggml` (NVIDIA) or `sd-1.5-ggml` (Intel)

 Good (relatively small) models that have been tested are:


@@ -6,7 +6,9 @@ services:
     environment:
       - LOCALAI_SINGLE_ACTIVE_BACKEND=true
      - DEBUG=true
-    image: localai/localai:master-sycl-f32-ffmpeg-core
+    image: localai/localai:master-cublas-cuda12-ffmpeg-core
+    # For images with python backends, use:
+    # image: localai/localai:master-cublas-cuda12-ffmpeg-core
     deploy:
       resources:
         reservations:


@@ -9,7 +9,7 @@ services:
     command:
      - ${MODEL_NAME:-arcee-agent}
      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
-     - ${IMAGE_MODEL:-flux.1-dev}
+     - ${IMAGE_MODEL:-flux.1-dev-ggml}
      - granite-embedding-107m-multilingual
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
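The `${IMAGE_MODEL:-flux.1-dev-ggml}` references changed in this commit use standard shell parameter expansion, which Docker Compose also applies when substituting variables: the value after `:-` is used only when the variable is unset or empty. A minimal sketch of how the fallback resolves (variable and model names taken from this diff):

```shell
#!/bin/sh
# ${IMAGE_MODEL:-flux.1-dev-ggml} falls back to the new default
# when IMAGE_MODEL is unset or empty...
unset IMAGE_MODEL
echo "${IMAGE_MODEL:-flux.1-dev-ggml}"   # -> flux.1-dev-ggml

# ...and keeps the caller's override otherwise.
IMAGE_MODEL=sd-1.5-ggml
echo "${IMAGE_MODEL:-flux.1-dev-ggml}"   # -> sd-1.5-ggml
```

This is why users who previously ran `IMAGE_MODEL=flux.1-dev docker compose up` are unaffected by the default change: an explicit value always wins over the `:-` fallback.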