diff --git a/README.md b/README.md
index 70f053f..254cd62 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,8 @@ The goal is:
 
 Note: Be warned! It was hacked in a weekend, and it's just an experiment to see what can be done with local LLMs.
 
+![Screenshot from 2023-08-05 22-40-40](https://github.com/mudler/LocalAGI/assets/2420543/fc9d3c5d-d522-467b-9a84-fea18a78e75f)
+
 ## Demo
 
 Search on internet (interactive mode)
@@ -178,4 +180,4 @@ docker-compose run -v main.py:/app/main.py -i --rm localagi
 - With superhot models looses its magic, but maybe suitable for search
 - Context size is your enemy. `--postprocess` some times helps, but not always
 - It can be silly!
-- It is slow on CPU, don't expect `7b` models to perform good, and `13b` models perform better but on CPU are quite slow.
\ No newline at end of file
+- It is slow on CPU; don't expect `7b` models to perform well. `13b` models perform better, but they are still quite slow on CPU.