From 310639318f419c0b7fae29a9cda683324068904f Mon Sep 17 00:00:00 2001
From: Ettore Di Giacinto
Date: Sat, 5 Aug 2023 22:46:07 +0200
Subject: [PATCH] Update README.md

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 70f053f..254cd62 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,8 @@ The goal is:
 
 Note: Be warned! It was hacked in a weekend, and it's just an experiment to see what can be done with local LLMs.
 
+![Screenshot from 2023-08-05 22-40-40](https://github.com/mudler/LocalAGI/assets/2420543/fc9d3c5d-d522-467b-9a84-fea18a78e75f)
+
 ## Demo
 
 Search on internet (interactive mode)
@@ -178,4 +180,4 @@ docker-compose run -v main.py:/app/main.py -i --rm localagi
 - With superhot models looses its magic, but maybe suitable for search
 - Context size is your enemy. `--postprocess` some times helps, but not always
 - It can be silly!
-- It is slow on CPU, don't expect `7b` models to perform good, and `13b` models perform better but on CPU are quite slow.
\ No newline at end of file
+- It is slow on CPU, don't expect `7b` models to perform good, and `13b` models perform better but on CPU are quite slow.