Small fixups
README.md — 67 changed lines
@@ -7,35 +7,67 @@
 From the [LocalAI](https://localai.io) author, μAGI. 100% Local AI assistant.

-AutoGPT, babyAGI, ... and now μAGI!
+[AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT), [babyAGI](https://github.com/yoheinakajima/babyagi), ... and now LocalAGI!

 LocalAGI is a microAGI that you can run locally.

 The goal is:

 - Keep it simple, hackable and easy to understand
 - If you can't run it locally, it is not AGI
-- No API keys needed
-- No cloud services needed
+- No API keys needed, No cloud services needed, 100% Local
 - Do a smart-agent/virtual assistant that can do tasks
 - Small set of dependencies
-- Run with Docker
+- Run with Docker everywhere

 Note: this is a fun project, not a serious one. Be warned!

 ## What is μAGI?

-It is a dead simple experiment to show how to tie the various LocalAI functionalities to create a virtual assistant that can do tasks. It is simple on purpose, trying to be minimalistic and easy to understand and customize.
+It is a dead simple experiment to show how to tie the various LocalAI functionalities to create a virtual assistant that can do tasks. It is simple on purpose, trying to be minimalistic and easy to understand and customize for everyone.

-It is different from babyAGI or AutoGPT as it uses [OpenAI functions](https://openai.com/blog/function-calling-and-other-api-updates) - it is a from scratch attempt built on purpose to run locally with [LocalAI](https://localai.io) (no API keys needed!) instead of expensive, cloud services.
+It is different from babyAGI or AutoGPT as it uses [LocalAI functions](https://localai.io/features/openai-functions/) - it is a from-scratch attempt built on purpose to run locally with [LocalAI](https://localai.io) (no API keys needed!) instead of expensive cloud services. It sets itself apart from other projects as it strives to be small and easy to fork.

 ## Quick start

 No frills, just run docker-compose and start chatting with your virtual assistant:

 ```bash
-docker-compose run --build -i --rm microagi
+docker-compose run -i --rm microagi
 ```

+## How to use it
+
+By default microagi starts in interactive mode.
+
+### Basics
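
Interactive mode needs no extra flags; the quick-start invocation above is the whole basic workflow, repeated here for convenience:

```bash
# Starts an interactive chat session with the assistant
docker-compose run -i --rm microagi
```

Type your request at the prompt and the assistant answers; interrupting the container (e.g. with Ctrl+C) should end the session.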
+
+### Advanced
+
+microagi has several options in the CLI to tweak the experience (a combined example follows the list):
+
+- `--system-prompt` is the system prompt to use. If not specified, it will use none.
+- `--prompt` is the prompt to use for batch mode. If not specified, it will default to interactive mode.
+- `--interactive` enables interactive mode. When used with `--prompt`, it will drop you into an interactive session after the first prompt is evaluated.
+- `--skip-avatar` will skip avatar creation. Useful if you want to run it in a headless environment.
+- `--re-evaluate` will re-evaluate if another action is needed or we have completed the user request.
+- `--postprocess` will postprocess the reasoning for analysis.
+- `--subtask-context` will include context in subtasks.
+- `--search-results` is the number of search results to use.
+- `--plan-message` is the message to use during planning. You can override it, for example to force a plan to be structured differently.
+- `--tts-api-base` is the TTS API base. Defaults to `http://api:8080`.
+- `--localai-api-base` is the LocalAI API base. Defaults to `http://api:8080`.
+- `--images-api-base` is the Images API base. Defaults to `http://api:8080`.
+- `--embeddings-api-base` is the Embeddings API base. Defaults to `http://api:8080`.
+- `--functions-model` is the functions model to use. Defaults to `functions`.
+- `--embeddings-model` is the embeddings model to use. Defaults to `all-MiniLM-L6-v2`.
+- `--llm-model` is the LLM model to use. Defaults to `gpt-4`.
+- `--tts-model` is the TTS model to use. Defaults to `en-us-kathleen-low.onnx`.
+- `--stablediffusion-model` is the Stable Diffusion model to use. Defaults to `stablediffusion`.
+- `--stablediffusion-prompt` is the Stable Diffusion prompt to use. Defaults to `DEFAULT_PROMPT`.
+- `--force-action` will force a specific action.
+- `--debug` will enable debug mode.
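
As a rough, hypothetical illustration, a headless run that evaluates one request, keeps searches small, and then drops into an interactive session could combine the flags above like this (every flag comes from the list; the prompt string and the combination itself are invented):

```bash
docker-compose run -i --rm microagi \
  --skip-avatar \
  --interactive \
  --prompt "find three beginner articles about local LLMs" \
  --search-results 3 \
  --debug
```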

 ### Test it!

 Ask it to:
@@ -46,6 +78,25 @@ Ask it to:

 -> and watch it engage in dialogues with long-term memory

- "I want you to act as a marketing and sales guy in a startup company. I want you to come up with a plan to support our new latest project, XXX, which is an open source project. you are free to come up with creative ideas to engage and attract new people to the project. The XXX project is XXX."
|
||||
|
||||
#### Examples
|
||||
|
||||
Road trip planner by limiting searching to internet to 3 results only:
|
||||
|
||||
```bash
|
||||
docker-compose run -i --rm microagi --skip-avatar --subtask-context --postprocess --prompt "prepare a plan for my roadtrip to san francisco" --search-results 3
|
||||
```
|
||||
|
||||
Limit results of planning to 3 steps:
|
||||
|
||||
```bash
|
||||
docker-compose run -v $PWD/main.py:/app/main.py -i --rm microagi --skip-avatar --subtask-context --postprocess --prompt "do a plan for my roadtrip to san francisco" --search-results 1 --plan-message "The assistant replies with a plan of 3 steps to answer the request with a list of subtasks with logical steps. The reasoning includes a self-contained, detailed and descriptive instruction to fullfill the task."
|
||||
```

 ### Customize

 To use a different model, you can see the examples in the `config` folder.
 To select a model, modify the `.env` file and change the `PRELOAD_MODELS_CONFIG` variable to use a different configuration file.
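
As a minimal sketch, that `.env` change could look like the following; the file name is hypothetical, so point it at a configuration that actually exists in your `config` folder:

```bash
# .env — hypothetical example; use a real file from your config folder
PRELOAD_MODELS_CONFIG=/config/my-model-config.yaml
```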

 ### Caveats

 The "goodness" of a model has a big impact on how μAGI works. Currently `13b` models are powerful enough to actually perform multi-step tasks or do more actions. However, they are quite slow when running on CPU (no big surprise here).

main.py — 65 changed lines
@@ -88,10 +88,27 @@ parser.add_argument('--stablediffusion-model', dest='stablediffusion_model', act
 # Stable diffusion prompt
 parser.add_argument('--stablediffusion-prompt', dest='stablediffusion_prompt', action='store', default=DEFAULT_PROMPT,
                     help='Stable diffusion prompt')

+# Force action
+parser.add_argument('--force-action', dest='force_action', action='store', default="",
+                    help='Force an action')
+# Debug mode
+parser.add_argument('--debug', dest='debug', action='store_true', default=False,
+                    help='Debug mode')
 # Parse arguments
 args = parser.parse_args()

+# Set log level
+LOG_LEVEL = "INFO"
+
+def my_filter(record):
+    return record["level"].no >= logger.level(LOG_LEVEL).no
+
+logger.remove()
+logger.add(sys.stderr, filter=my_filter)
+
+if args.debug:
+    LOG_LEVEL = "DEBUG"
+    logger.debug("Debug mode on")
+
 FUNCTIONS_MODEL = os.environ.get("FUNCTIONS_MODEL", args.functions_model)
 EMBEDDINGS_MODEL = os.environ.get("EMBEDDINGS_MODEL", args.embeddings_model)
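
As the two lines above show, the functions and embeddings models can also be set via environment variables, which take precedence over the CLI values when present. A hypothetical direct invocation, assuming you run `main.py` outside of docker-compose with its dependencies installed locally:

```bash
# Both values mirror the CLI defaults from the README options list
FUNCTIONS_MODEL=functions EMBEDDINGS_MODEL=all-MiniLM-L6-v2 python main.py --debug
```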
@@ -222,8 +239,8 @@ Function call: """
     function_parameters = response.choices[0].message["function_call"].arguments
     # read the json from the string
     res = json.loads(function_parameters)
-    logger.info(">>> function name: "+function_name)
-    logger.info(">>> function parameters: "+function_parameters)
+    logger.debug(">>> function name: "+function_name)
+    logger.debug(">>> function parameters: "+function_parameters)
     return res
 return {"action": REPLY_ACTION}
@@ -274,7 +291,7 @@ Function call: """
 if response_message.get("function_call"):
     function_name = response.choices[0].message["function_call"].name
     function_parameters = response.choices[0].message["function_call"].arguments
-    logger.info("==> function parameters: {function_parameters}",function_parameters=function_parameters)
+    logger.debug("==> function parameters: {function_parameters}",function_parameters=function_parameters)
     function_to_call = agent_actions[function_name]["function"]

     function_result = function_to_call(function_parameters, agent_actions=agent_actions)
@@ -300,7 +317,7 @@ def function_completion(messages, action="", agent_actions={}):
 function_call = "auto"
 if action != "":
     function_call={"name": action}
-logger.info("==> function name: {function_call}", function_call=function_call)
+logger.debug("==> function name: {function_call}", function_call=function_call)
 # get the functions from the signatures of the agent actions, if exists
 functions = []
 for action in agent_actions:
@@ -364,19 +381,32 @@ def converse(responses):

 ### Fine tune a string before feeding into the LLM

-def analyze(responses, prefix="Analyze the following text highlighting the relevant information and identify a list of actions to take if there are any. If there are errors, suggest solutions to fix them"):
+def analyze(responses, prefix="Analyze the following text highlighting the relevant information and identify a list of actions to take if there are any. If there are errors, suggest solutions to fix them", suffix=""):
     string = process_history(responses)
-    messages = [
-        {
-            "role": "user",
-            "content": f"""{prefix}:
-
-```
-{string}
-```
-""",
-        }
-    ]
+    messages = []
+
+    if prefix != "":
+        messages = [
+            {
+                "role": "user",
+                "content": f"""{prefix}:
+
+```
+{string}
+```
+""",
+            }
+        ]
+    else:
+        messages = [
+            {
+                "role": "user",
+                "content": f"""{string}""",
+            }
+        ]
+
+    if suffix != "":
+        messages[0]["content"]+=f"""{suffix}"""

     response = openai.ChatCompletion.create(
         model=LLM_MODEL,
@@ -509,7 +539,7 @@ Function call: """
     function_parameters = response.choices[0].message["function_call"].arguments
     # read the json from the string
     res = json.loads(function_parameters)
-    logger.info("<<< function name: {function_name} >>>> parameters: {parameters}", function_name=function_name,parameters=function_parameters)
+    logger.debug("<<< function name: {function_name} >>>> parameters: {parameters}", function_name=function_name,parameters=function_parameters)
     return res
 return {"action": REPLY_ACTION}
@@ -634,6 +664,10 @@ def evaluate(user_input, conversation_history = [],re_evaluate=False, agent_acti
 logger.info("==> LocalAGI wants to call '{action}'", action=action["action"])
 #logger.info("==> Observation '{reasoning}'", reasoning=action["observation"])
 logger.info("==> Reasoning '{reasoning}'", reasoning=action["reasoning"])
+# Force executing a plan instead
+if args.force_action:
+    action["action"] = args.force_action
+    logger.info("==> Forcing action to '{action}' as requested by the user", action=action["action"])

 reasoning = action["reasoning"]
 if action["action"] == PLAN_ACTION:
@@ -700,7 +734,7 @@ def evaluate(user_input, conversation_history = [],re_evaluate=False, agent_acti
 #responses = converse(responses)

 # TODO: this needs to be optimized
-responses = analyze(responses, prefix=f"You are an AI assistant. Return an appropriate answer to the user input '{user_input}' given the context below and summarizing the actions taken\n")
+responses = analyze(responses[1:], suffix=f"Return an appropriate answer given the context above\n")

 # add responses to conversation history by extending the list
 conversation_history.append(
@@ -853,11 +887,10 @@ if not args.skip_avatar:
 logger.info("Creating avatar, please wait...")
 display_avatar()

-if not args.prompt:
-    actions = ""
-    for action in agent_actions:
-        actions+=" '"+action+"'"
-    logger.info("LocalAGI internally can do the following actions:{actions}", actions=actions)
+actions = ""
+for action in agent_actions:
+    actions+=" '"+action+"'"
+logger.info("LocalAGI internally can do the following actions:{actions}", actions=actions)

 if not args.prompt:
     logger.info(">>> Interactive mode <<<")