Compare commits


7 Commits

Author SHA1 Message Date
Richard Palethorpe
9434524e51 fix(ui): Don't try to pass unserializable Go objects to status UI
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-13 14:50:31 +01:00
Ettore Di Giacinto
60c249f19a chore: cleanup, identify goal from conversation when evaluating achievement (#29)
* chore: cleanup, identify goal from conversation when evaluting achievement

Signed-off-by: mudler <mudler@localai.io>

* change base cpu model

Signed-off-by: mudler <mudler@localai.io>

* this is not necessary anymore

Signed-off-by: mudler <mudler@localai.io>

* use 12b

Signed-off-by: mudler <mudler@localai.io>

* use openthinker, it's smaller

* chore(tests): set timeout

Signed-off-by: mudler <mudler@localai.io>

* Enable reasoning in some of the tests

Signed-off-by: mudler <mudler@localai.io>

* docker compose unification, small changes

Signed-off-by: mudler <mudler@localai.io>

* Simplify

Signed-off-by: mudler <mudler@localai.io>

* Back at arcee-agent as default

Signed-off-by: mudler <mudler@localai.io>

* Better error handling during planning

Signed-off-by: mudler <mudler@localai.io>

* Ci: do not run jobs for every branch

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: mudler <mudler@localai.io>
2025-04-12 21:01:01 +02:00
Ettore Di Giacinto
209a9989c4 Update README.md 2025-04-11 22:49:50 +02:00
Ettore Di Giacinto
5105b46f48 Add Github reviewer and improve reasoning (#27)
* Add Github reviewer and improve reasoning

* feat: improve action picking

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: mudler <mudler@localai.io>
2025-04-11 21:57:19 +02:00
Ettore Di Giacinto
e4c7d1acfc feat(github): add actions to comment and read PRs (#26)
Signed-off-by: mudler <mudler@localai.io>
2025-04-10 21:45:18 +02:00
Ettore Di Giacinto
dd4fbd64d3 fix(pick_action): improve action pickup by using only the assistant thought process (#25)
* fix(pick_action): improve action pickup by using only the assistant thought process

Signed-off-by: mudler <mudler@localai.io>

* fix: improve templates

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: mudler <mudler@localai.io>
2025-04-10 21:45:04 +02:00
Ettore Di Giacinto
4010f9d86c Update README.md 2025-04-10 19:38:13 +02:00
18 changed files with 772 additions and 496 deletions

@@ -3,7 +3,7 @@ name: Run Go Tests
 on:
   push:
     branches:
-      - '**'
+      - 'main'
   pull_request:
     branches:
       - '**'

@@ -3,7 +3,7 @@ IMAGE_NAME?=webui
 ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
 prepare-tests:
-	docker compose up -d
+	docker compose up -d --build
 cleanup-tests:
 	docker compose down

README.md

@@ -1,5 +1,5 @@
 <p align="center">
-  <img src="https://github.com/user-attachments/assets/6958ffb3-31cf-441e-b99d-ce34ec6fc88f" alt="LocalAGI Logo" width="220"/>
+  <img src="./webui/react-ui/public/logo_1.png" alt="LocalAGI Logo" width="220"/>
 </p>
 <h3 align="center"><em>Your AI. Your Hardware. Your Rules.</em></h3>
@@ -45,14 +45,100 @@ LocalAGI ensures your data stays exactly where you want it—on your hardware. N
 git clone https://github.com/mudler/LocalAGI
 cd LocalAGI
-# CPU setup
-docker compose up -f docker-compose.yml
-# GPU setup
-docker compose up -f docker-compose.gpu.yml
+# CPU setup (default)
+docker compose up
+
+# NVIDIA GPU setup
+docker compose --profile nvidia up
+
+# Intel GPU setup (for Intel Arc and integrated GPUs)
+docker compose --profile intel up
+
+# Start with a specific model (see available models at models.localai.io, or localai.io to use any model from Hugging Face)
+MODEL_NAME=gemma-3-12b-it docker compose up
+
+# NVIDIA GPU setup with custom multimodal and image models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=flux.1-dev \
+docker compose --profile nvidia up
 ```
-Access your agents at `http://localhost:3000`
+Now you can access and manage your agents at [http://localhost:8080](http://localhost:8080)
+
+## 🖥️ Hardware Configurations
+
+LocalAGI supports multiple hardware configurations through Docker Compose profiles:
+
+### CPU (Default)
+- No special configuration needed
+- Runs on any system with Docker
+- Best for testing and development
+- Supports text models only
+
+### NVIDIA GPU
+- Requires an NVIDIA GPU and drivers
+- Uses CUDA for acceleration
+- Best for high-performance inference
+- Supports text, multimodal, and image generation models
+- Run with: `docker compose --profile nvidia up`
+- Default models:
+  - Text: `arcee-agent`
+  - Multimodal: `minicpm-v-2_6`
+  - Image: `flux.1-dev`
+- Environment variables:
+  - `MODEL_NAME`: Text model to use
+  - `MULTIMODAL_MODEL`: Multimodal model to use
+  - `IMAGE_MODEL`: Image generation model to use
+  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
+
+### Intel GPU
+- Supports Intel Arc and integrated GPUs
+- Uses SYCL for acceleration
+- Best for Intel-based systems
+- Supports text, multimodal, and image generation models
+- Run with: `docker compose --profile intel up`
+- Default models:
+  - Text: `arcee-agent`
+  - Multimodal: `minicpm-v-2_6`
+  - Image: `sd-1.5-ggml`
+- Environment variables:
+  - `MODEL_NAME`: Text model to use
+  - `MULTIMODAL_MODEL`: Multimodal model to use
+  - `IMAGE_MODEL`: Image generation model to use
+  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
+
+## Customize models
+
+You can customize the models used by LocalAGI by setting environment variables when running docker compose. For example:
+
+```bash
+# CPU with custom model
+MODEL_NAME=gemma-3-12b-it docker compose up
+
+# NVIDIA GPU with custom models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=flux.1-dev \
+docker compose --profile nvidia up
+
+# Intel GPU with custom models
+MODEL_NAME=gemma-3-12b-it \
+MULTIMODAL_MODEL=minicpm-v-2_6 \
+IMAGE_MODEL=sd-1.5-ggml \
+docker compose --profile intel up
+```
+
+If no models are specified, the defaults are used:
+- Text model: `arcee-agent`
+- Multimodal model: `minicpm-v-2_6`
+- Image model: `flux.1-dev` (NVIDIA) or `sd-1.5-ggml` (Intel)
+
+Good (relatively small) models that have been tested:
+- `qwen_qwq-32b` (best at coordinating agents)
+- `gemma-3-12b-it`
+- `gemma-3-27b-it`
 
 ## 🏆 Why Choose LocalAGI?
@@ -98,6 +184,8 @@ Explore detailed documentation including:
 ### Environment Configuration
+
+LocalAGI supports environment configuration. Note that these environment variables need to be set on the localagi container in the docker-compose file to take effect.
 | Variable | What It Does |
 |----------|--------------|
 | `LOCALAGI_MODEL` | Your go-to model |
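For orientation, a minimal sketch of wiring these variables into the container (assuming the `localagi` service layout from the docker-compose.yml changes later in this comparison; the model value is illustrative):

```yaml
services:
  localagi:
    environment:
      # Variable names from the table above; values are examples.
      - LOCALAGI_MODEL=gemma-3-12b-it
      - LOCALAGI_TIMEOUT=5m
```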

core/action/goal.go (new file)

@@ -0,0 +1,48 @@
package action

import (
	"context"

	"github.com/mudler/LocalAGI/core/types"
	"github.com/sashabaranov/go-openai/jsonschema"
)

// NewGoal creates a new goal action.
// The goal action is special: it asks the model to state the goal
// identified from the conversation and whether it has been achieved.
func NewGoal() *GoalAction {
	return &GoalAction{}
}

type GoalAction struct {
}

type GoalResponse struct {
	Goal     string `json:"goal"`
	Achieved bool   `json:"achieved"`
}

func (a *GoalAction) Run(context.Context, types.ActionParams) (types.ActionResult, error) {
	return types.ActionResult{}, nil
}

func (a *GoalAction) Plannable() bool {
	return false
}

func (a *GoalAction) Definition() types.ActionDefinition {
	return types.ActionDefinition{
		Name:        "goal",
		Description: "Check if the goal is achieved",
		Properties: map[string]jsonschema.Definition{
			"goal": {
				Type:        jsonschema.String,
				Description: "The goal to check if it is achieved.",
			},
			"achieved": {
				Type:        jsonschema.Boolean,
				Description: "Whether the goal is achieved",
			},
		},
		Required: []string{"goal", "achieved"},
	}
}
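To make the new action concrete: the `goal` tool's arguments arrive as plain JSON matching the schema above, so they decode straight into `GoalResponse`. A small self-contained sketch (the sample goal string is hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GoalResponse mirrors the schema declared by GoalAction above.
type GoalResponse struct {
	Goal     string `json:"goal"`
	Achieved bool   `json:"achieved"`
}

func main() {
	// Hypothetical tool-call arguments produced by the model.
	raw := `{"goal": "review PR and leave a comment", "achieved": false}`

	var resp GoalResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	fmt.Printf("goal=%q achieved=%v\n", resp.Goal, resp.Achieved)
}
```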


@@ -41,7 +41,7 @@ func (a *PlanAction) Plannable() bool {
 func (a *PlanAction) Definition() types.ActionDefinition {
 	return types.ActionDefinition{
 		Name:        PlanActionName,
-		Description: "Use this tool for solving complex tasks that involves calling more tools in sequence.",
+		Description: "Use it for situations that involves doing more actions in sequence.",
 		Properties: map[string]jsonschema.Definition{
 			"subtasks": {
 				Type: jsonschema.Array,


@@ -24,15 +24,27 @@ type decisionResult struct {
 func (a *Agent) decision(
 	ctx context.Context,
 	conversation []openai.ChatCompletionMessage,
-	tools []openai.Tool, toolchoice any, maxRetries int) (*decisionResult, error) {
+	tools []openai.Tool, toolchoice string, maxRetries int) (*decisionResult, error) {
+
+	var choice *openai.ToolChoice
+	if toolchoice != "" {
+		choice = &openai.ToolChoice{
+			Type:     openai.ToolTypeFunction,
+			Function: openai.ToolFunction{Name: toolchoice},
+		}
+	}
 
 	var lastErr error
 	for attempts := 0; attempts < maxRetries; attempts++ {
 		decision := openai.ChatCompletionRequest{
 			Model:    a.options.LLMAPI.Model,
 			Messages: conversation,
 			Tools:    tools,
-			ToolChoice: toolchoice,
 		}
+		if choice != nil {
+			decision.ToolChoice = *choice
+		}
 
 		resp, err := a.client.CreateChatCompletion(ctx, decision)
@@ -42,6 +54,9 @@ func (a *Agent) decision(
 			continue
 		}
 
+		jsonResp, _ := json.Marshal(resp)
+		xlog.Debug("Decision response", "response", string(jsonResp))
+
 		if len(resp.Choices) != 1 {
 			lastErr = fmt.Errorf("no choices: %d", len(resp.Choices))
 			xlog.Warn("Attempt to make a decision failed", "attempt", attempts+1, "error", lastErr)
@@ -79,6 +94,15 @@ func (m Messages) ToOpenAI() []openai.ChatCompletionMessage {
 	return []openai.ChatCompletionMessage(m)
 }
 
+func (m Messages) RemoveIf(f func(msg openai.ChatCompletionMessage) bool) Messages {
+	for i := len(m) - 1; i >= 0; i-- {
+		if f(m[i]) {
+			m = append(m[:i], m[i+1:]...)
+		}
+	}
+	return m
+}
+
 func (m Messages) String() string {
 	s := ""
 	for _, cc := range m {
@@ -180,10 +204,7 @@ func (a *Agent) generateParameters(ctx context.Context, pickTemplate string, act
 		result, attemptErr = a.decision(ctx,
 			cc,
 			a.availableActions().ToTools(),
-			openai.ToolChoice{
-				Type:     openai.ToolTypeFunction,
-				Function: openai.ToolFunction{Name: act.Definition().Name.String()},
-			},
+			act.Definition().Name.String(),
 			maxAttempts,
 		)
 		if attemptErr == nil && result.actionParams != nil {
@@ -244,6 +265,7 @@ func (a *Agent) handlePlanning(ctx context.Context, job *types.Job, chosenAction
 		params, err := a.generateParameters(ctx, pickTemplate, subTaskAction, conv, subTaskReasoning, maxRetries)
 		if err != nil {
+			xlog.Error("error generating action's parameters", "error", err)
 			return conv, fmt.Errorf("error generating action's parameters: %w", err)
 		}
@@ -273,6 +295,7 @@ func (a *Agent) handlePlanning(ctx context.Context, job *types.Job, chosenAction
 		result, err := a.runAction(ctx, subTaskAction, actionParams)
 		if err != nil {
+			xlog.Error("error running action", "error", err)
 			return conv, fmt.Errorf("error running action: %w", err)
 		}
@@ -358,13 +381,18 @@ func (a *Agent) prepareHUD() (promptHUD *PromptHUD) {
 func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.ChatCompletionMessage, maxRetries int) (types.Action, types.ActionParams, string, error) {
 	c := messages
 
+	xlog.Debug("[pickAction] picking action starts", "messages", messages)
+
+	// Identify the goal of this conversation
 	if !a.options.forceReasoning {
+		xlog.Debug("not forcing reasoning")
 		// We also could avoid to use functions here and get just a reply from the LLM
 		// and then use the reply to get the action
 		thought, err := a.decision(ctx,
 			messages,
 			a.availableActions().ToTools(),
-			nil,
+			"",
 			maxRetries)
 		if err != nil {
 			return nil, nil, "", err
@@ -386,6 +414,8 @@ func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.
 		return chosenAction, thought.actionParams, thought.message, nil
 	}
 
+	xlog.Debug("[pickAction] forcing reasoning")
+
 	prompt, err := renderTemplate(templ, a.prepareHUD(), a.availableActions(), "")
 	if err != nil {
 		return nil, nil, "", err
@@ -401,67 +431,84 @@ func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.
 		}, c...)
 	}
 
-	// We also could avoid to use functions here and get just a reply from the LLM
-	// and then use the reply to get the action
 	thought, err := a.decision(ctx,
 		c,
 		types.Actions{action.NewReasoning()}.ToTools(),
-		action.NewReasoning().Definition().Name, maxRetries)
+		action.NewReasoning().Definition().Name.String(), maxRetries)
 	if err != nil {
 		return nil, nil, "", err
 	}
 
-	reason := ""
+	originalReasoning := ""
 	response := &action.ReasoningResponse{}
 	if thought.actionParams != nil {
 		if err := thought.actionParams.Unmarshal(response); err != nil {
 			return nil, nil, "", err
 		}
-		reason = response.Reasoning
+		originalReasoning = response.Reasoning
 	}
 	if thought.message != "" {
-		reason = thought.message
+		originalReasoning = thought.message
 	}
 
-	// From the thought, get the action call
-	// Get all the available actions IDs
-	actionsID := []string{}
+	xlog.Debug("[pickAction] picking action", "messages", c)
+
+	// thought, err := a.askLLM(ctx,
+	// 	c,
+	actionsID := []string{"reply"}
 	for _, m := range a.availableActions() {
 		actionsID = append(actionsID, m.Definition().Name.String())
 	}
 
-	intentionsTools := action.NewIntention(actionsID...)
-	//XXX: Why we add the reason here?
+	xlog.Debug("[pickAction] actionsID", "actionsID", actionsID)
+
+	intentionsTools := action.NewIntention(actionsID...)
+	// TODO: FORCE to select an action here
+	// NOTE: we do not give the full conversation here to pick the action
+	// to avoid hallucinations
+	// Extract an action
 	params, err := a.decision(ctx,
 		append(c, openai.ChatCompletionMessage{
 			Role:    "system",
-			Content: "Given the assistant thought, pick the relevant action: " + reason,
+			Content: "Pick the relevant action given the following reasoning: " + originalReasoning,
 		}),
 		types.Actions{intentionsTools}.ToTools(),
-		intentionsTools.Definition().Name, maxRetries)
+		intentionsTools.Definition().Name.String(), maxRetries)
 	if err != nil {
 		return nil, nil, "", fmt.Errorf("failed to get the action tool parameters: %v", err)
 	}
 
-	actionChoice := action.IntentResponse{}
 	if params.actionParams == nil {
+		xlog.Debug("[pickAction] no action params found")
 		return nil, nil, params.message, nil
 	}
 
+	actionChoice := action.IntentResponse{}
 	err = params.actionParams.Unmarshal(&actionChoice)
 	if err != nil {
 		return nil, nil, "", err
 	}
 
-	if actionChoice.Tool == "" || actionChoice.Tool == "none" {
-		return nil, nil, "", fmt.Errorf("no intent detected")
+	if actionChoice.Tool == "" || actionChoice.Tool == "reply" {
+		xlog.Debug("[pickAction] no action found, replying")
+		return nil, nil, "", nil
 	}
 
-	// Find the action
 	chosenAction := a.availableActions().Find(actionChoice.Tool)
+	if chosenAction == nil {
+		return nil, nil, "", fmt.Errorf("no action found for intent:" + actionChoice.Tool)
+	}
+
+	xlog.Debug("[pickAction] chosenAction", "chosenAction", chosenAction, "actionName", actionChoice.Tool)
+
+	// // Let's double check if the action is correct by asking the LLM to judge it
+	// if chosenAction != nil {
+	// 	promptString := "Given the following goal and thoughts, is the action correct? \n\n"
+	// 	promptString += fmt.Sprintf("Goal: %s\n", goalResponse.Goal)
+	// 	promptString += fmt.Sprintf("Thoughts: %s\n", originalReasoning)
+	// 	promptString += fmt.Sprintf("Action: %s\n", chosenAction.Definition().Name.String())
+	// 	promptString += fmt.Sprintf("Action description: %s\n", chosenAction.Definition().Description)
+	// 	promptString += fmt.Sprintf("Action parameters: %s\n", params.actionParams)
+	// }
 
-	return chosenAction, nil, actionChoice.Reasoning, nil
+	return chosenAction, nil, originalReasoning, nil
 }
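The new `Messages.RemoveIf` helper iterates backwards so each in-place removal leaves the indices still to be visited valid. A standalone sketch of the same pattern (only the `Messages` alias is copied from the diff; the sample conversation is made up):

```go
package main

import (
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

// Messages is the same alias the diff operates on.
type Messages []openai.ChatCompletionMessage

// RemoveIf mirrors the helper added above: walking from the end keeps the
// remaining indices valid after each removal.
func (m Messages) RemoveIf(f func(msg openai.ChatCompletionMessage) bool) Messages {
	for i := len(m) - 1; i >= 0; i-- {
		if f(m[i]) {
			m = append(m[:i], m[i+1:]...)
		}
	}
	return m
}

func main() {
	// Made-up conversation for illustration.
	conv := Messages{
		{Role: "system", Content: "You are an agent."},
		{Role: "assistant", Content: "intermediate reasoning"},
		{Role: "user", Content: "hello"},
	}
	// Drop assistant messages, e.g. to strip intermediate reasoning.
	conv = conv.RemoveIf(func(msg openai.ChatCompletionMessage) bool {
		return msg.Role == "assistant"
	})
	fmt.Println(len(conv)) // 2
}
```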


@@ -249,7 +249,7 @@ func (a *Agent) runAction(ctx context.Context, chosenAction types.Action, params
 		}
 	}
 
-	xlog.Info("Running action", "action", chosenAction.Definition().Name, "agent", a.Character.Name)
+	xlog.Info("[runAction] Running action", "action", chosenAction.Definition().Name, "agent", a.Character.Name, "params", params.String())
 
 	if chosenAction.Definition().Name.Is(action.StateActionName) {
 		// We need to store the result in the state
@@ -270,6 +270,8 @@ func (a *Agent) runAction(ctx context.Context, chosenAction types.Action, params
 		}
 	}
 
+	xlog.Debug("[runAction] Action result", "action", chosenAction.Definition().Name, "params", params.String(), "result", result.Result)
+
 	return result, nil
 }
@@ -515,10 +517,21 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		//job.Result.Finish(fmt.Errorf("no action to do"))\
 		xlog.Info("No action to do, just reply", "agent", a.Character.Name, "reasoning", reasoning)
-		conv = append(conv, openai.ChatCompletionMessage{
-			Role:    "assistant",
-			Content: reasoning,
-		})
+		if reasoning != "" {
+			conv = append(conv, openai.ChatCompletionMessage{
+				Role:    "assistant",
+				Content: reasoning,
+			})
+		} else {
+			xlog.Info("No reasoning, just reply", "agent", a.Character.Name)
+			msg, err := a.askLLM(job.GetContext(), conv, maxRetries)
+			if err != nil {
+				job.Result.Finish(fmt.Errorf("error asking LLM for a reply: %w", err))
+				return
+			}
+			conv = append(conv, msg)
+			reasoning = msg.Content
+		}
 
 		xlog.Debug("Finish job with reasoning", "reasoning", reasoning, "agent", a.Character.Name, "conversation", fmt.Sprintf("%+v", conv))
 		job.Result.Conversation = conv
@@ -592,7 +605,13 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		var err error
 		conv, err = a.handlePlanning(job.GetContext(), job, chosenAction, actionParams, reasoning, pickTemplate, conv)
 		if err != nil {
-			job.Result.Finish(fmt.Errorf("error running action: %w", err))
+			xlog.Error("error handling planning", "error", err)
+			//job.Result.Conversation = conv
+			//job.Result.SetResponse(msg.Content)
+			a.reply(job, role, append(conv, openai.ChatCompletionMessage{
+				Role:    "assistant",
+				Content: fmt.Sprintf("Error handling planning: %v", err),
+			}), actionParams, chosenAction, reasoning)
 			return
 		}
@@ -657,11 +676,7 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		conv = a.addFunctionResultToConversation(chosenAction, actionParams, result, conv)
 	}
 
-	//conv = append(conv, messages...)
-	//conv = messages
-	// given the result, we can now ask OpenAI to complete the conversation or
-	// to continue using another tool given the result
+	// given the result, we can now re-evaluate the conversation
 	followingAction, followingParams, reasoning, err := a.pickAction(job.GetContext(), reEvaluationTemplate, conv, maxRetries)
 	if err != nil {
 		job.Result.Conversation = conv
@@ -674,6 +689,7 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		!chosenAction.Definition().Name.Is(action.ReplyActionName) {
 		xlog.Info("Following action", "action", followingAction.Definition().Name, "agent", a.Character.Name)
+		job.ConversationHistory = conv
 
 		// We need to do another action (?)
 		// The agent decided to do another action
@@ -681,26 +697,6 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		job.SetNextAction(&followingAction, &followingParams, reasoning)
 		a.consumeJob(job, role)
 		return
-	} else if followingAction == nil {
-		xlog.Info("Not following another action", "agent", a.Character.Name)
-
-		if !a.options.forceReasoning {
-			xlog.Info("Finish conversation with reasoning", "reasoning", reasoning, "agent", a.Character.Name)
-			msg := openai.ChatCompletionMessage{
-				Role:    "assistant",
-				Content: reasoning,
-			}
-			conv = append(conv, msg)
-			job.Result.SetResponse(msg.Content)
-			job.Result.Conversation = conv
-			job.Result.AddFinalizer(func(conv []openai.ChatCompletionMessage) {
-				a.saveCurrentConversation(conv)
-			})
-			job.Result.Finish(nil)
-			return
-		}
 	}
 
 	a.reply(job, role, conv, actionParams, chosenAction, reasoning)


@@ -126,6 +126,8 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
+				EnableForceReasoning,
+				WithTimeout("10m"),
 				WithLoopDetectionSteps(3),
 				//	WithRandomIdentity(),
 				WithActions(&TestAction{response: map[string]string{
@@ -174,7 +176,7 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
+				WithTimeout("10m"),
 				//	WithRandomIdentity(),
 				WithActions(&TestAction{response: map[string]string{
 					"boston": testActionResult,
@@ -199,6 +201,7 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
+				WithTimeout("10m"),
 				EnableHUD,
 				// EnableStandaloneJob,
 				// WithRandomIdentity(),


@@ -43,61 +43,104 @@ func renderTemplate(templ string, hud *PromptHUD, actions types.Actions, reasoni
 	return prompt.String(), nil
 }
 
-const innerMonologueTemplate = `"This is not a typical conversation between an assistant and an user.
-You are thinking out loud by yourself now, and you are evaluating the current situation.
-Considering the goal and the persistent goal (if you have one) do an action or decide to plan something for later on. If possible for you, you might also decide to engage a conversation with the user by notifying him."`
+const innerMonologueTemplate = `You are an autonomous AI agent thinking out loud and evaluating your current situation.
+Your task is to analyze your goals and determine the best course of action.
+
+Consider:
+1. Your permanent goal (if any)
+2. Your current state and progress
+3. Available tools and capabilities
+4. Previous actions and their outcomes
+
+You can:
+- Take immediate actions using available tools
+- Plan future actions
+- Update your state and goals
+- Initiate conversations with the user when appropriate
+
+Remember to:
+- Think critically about each decision
+- Consider both short-term and long-term implications
+- Be proactive in addressing potential issues
+- Maintain awareness of your current state and goals`
 
-const hudTemplate = `{{with .HUD }}{{if .ShowCharacter}}The assistant acts like an human, has a character and the replies and actions might be influenced by it.
-{{if .Character.Name}}This is the assistant name: {{.Character.Name}}
-{{end}}{{if .Character.Age}}This is the assistant age: {{.Character.Age}}
-{{end}}{{if .Character.Occupation}}This is the assistant job: {{.Character.Occupation}}
-{{end}}{{if .Character.Hobbies}}This is the assistant's hobbies: {{.Character.Hobbies}}
-{{end}}{{if .Character.MusicTaste}}This is the assistant's music taste: {{.Character.MusicTaste}}
+const hudTemplate = `{{with .HUD }}{{if .ShowCharacter}}You are an AI assistant with a distinct personality and character traits that influence your responses and actions.
+{{if .Character.Name}}Name: {{.Character.Name}}
+{{end}}{{if .Character.Age}}Age: {{.Character.Age}}
+{{end}}{{if .Character.Occupation}}Occupation: {{.Character.Occupation}}
+{{end}}{{if .Character.Hobbies}}Hobbies: {{.Character.Hobbies}}
+{{end}}{{if .Character.MusicTaste}}Music Taste: {{.Character.MusicTaste}}
 {{end}}
 {{end}}
-This is your current state:
-NowDoing: {{if .CurrentState.NowDoing}}{{.CurrentState.NowDoing}}{{else}}Nothing{{end}}
-DoingNext: {{if .CurrentState.DoingNext}}{{.CurrentState.DoingNext}}{{else}}Nothing{{end}}
-Your permanent goal is: {{if .PermanentGoal}}{{.PermanentGoal}}{{else}}Nothing{{end}}
-Your current goal is: {{if .CurrentState.Goal}}{{.CurrentState.Goal}}{{else}}Nothing{{end}}
-You have done: {{range .CurrentState.DoneHistory}}{{.}} {{end}}
-You have a short memory with: {{range .CurrentState.Memories}}{{.}} {{end}}{{end}}
-Current time: is {{.Time}}`
+Current State:
+- Current Action: {{if .CurrentState.NowDoing}}{{.CurrentState.NowDoing}}{{else}}None{{end}}
+- Next Action: {{if .CurrentState.DoingNext}}{{.CurrentState.DoingNext}}{{else}}None{{end}}
+- Permanent Goal: {{if .PermanentGoal}}{{.PermanentGoal}}{{else}}None{{end}}
+- Current Goal: {{if .CurrentState.Goal}}{{.CurrentState.Goal}}{{else}}None{{end}}
+- Action History: {{range .CurrentState.DoneHistory}}{{.}} {{end}}
+- Short-term Memory: {{range .CurrentState.Memories}}{{.}} {{end}}{{end}}
+Current Time: {{.Time}}`
 
-const pickSelfTemplate = `You can take any of the following tools:
+const pickSelfTemplate = `
+You are an autonomous AI agent with a defined character and state (as shown above).
+Your task is to evaluate your current situation and determine the best course of action.
+
+Guidelines:
+1. Review your current state and goals
+2. Consider available tools and their purposes
+3. Plan your next steps carefully
+4. Update your state appropriately
+
+When making decisions:
+- Use the "reply" tool to provide final responses
+- Update your state using appropriate tools
+- Plan complex tasks using the planning tool
+- Consider both immediate and long-term goals
+
+Remember:
+- You are autonomous and should not ask for user input
+- Your character traits influence your decisions
+- Keep track of your progress and state
+- Be proactive in addressing potential issues
+
+Available Tools:
 {{range .Actions -}}
 - {{.Name}}: {{.Description }}
 {{ end }}
-To finish your session, use the "reply" tool with your answer.
-Act like as a fully autonomous smart AI agent having a character, the character and your state is defined in the message above.
-You are now self-evaluating what to do next based on the state in the previous message.
-For example, if the permanent goal is to "make a sandwich", you might want to "get the bread" first, and update the state afterwards by calling two tools in sequence.
-You can update the short-term goal, the current action, the next action, the history of actions, and the memories.
-You can't ask things to the user as you are thinking by yourself. You are autonomous.
-{{if .Reasoning}}Reasoning: {{.Reasoning}}{{end}}
+{{if .Reasoning}}Previous Reasoning: {{.Reasoning}}{{end}}
 ` + hudTemplate
 
-const reSelfEvalTemplate = pickSelfTemplate + `
-We already have called other tools. Evaluate the current situation and decide if we need to execute other tools.`
+const reSelfEvalTemplate = pickSelfTemplate
 
 const pickActionTemplate = hudTemplate + `
-When you have to pick a tool in the reasoning explain how you would use the tools you'd pick from:
+Your only task is to analyze the conversation and determine a goal and the best tool to use, or just a final response if we have fulfilled the goal.
+
+Guidelines:
+1. Review the current state, what was done already and context
+2. Consider available tools and their purposes
+3. Plan your approach carefully
+4. Explain your reasoning clearly
+
+When choosing actions:
+- Use "reply" or "answer" tools for direct responses
+- Select appropriate tools for specific tasks
+- Consider the impact of each action
+- Plan for potential challenges
+
+Decision Process:
+1. Analyze the situation
+2. Consider available options
+3. Choose the best course of action
+4. Explain your reasoning
+5. Execute the chosen action
+
+Available Tools:
 {{range .Actions -}}
 - {{.Name}}: {{.Description }}
 {{ end }}
-To answer back to the user, use the "reply" or the "answer" tool.
-Given the text below, decide which action to take and explain the detailed reasoning behind it. For answering without picking a choice, reply with 'none'.
-{{if .Reasoning}}Reasoning: {{.Reasoning}}{{end}}
-`
+{{if .Reasoning}}Previous Reasoning: {{.Reasoning}}{{end}}`
 
-const reEvalTemplate = pickActionTemplate + `
-We already have called other tools. Evaluate the current situation and decide if we need to execute other tools or answer back with a result.`
+const reEvalTemplate = pickActionTemplate


@@ -1,75 +0,0 @@
-services:
-  localai:
-    # See https://localai.io/basics/container/#standard-container-images for
-    # a list of available container images (or build your own with the provided Dockerfile)
-    # Available images with CUDA, ROCm, SYCL, Vulkan
-    # Image list (quay.io): https://quay.io/repository/go-skynet/local-ai?tab=tags
-    # Image list (dockerhub): https://hub.docker.com/r/localai/localai
-    image: localai/localai:master-sycl-f32-ffmpeg-core
-    command:
-      # - rombo-org_rombo-llm-v3.0-qwen-32b # minimum suggested model
-      - arcee-agent # (smaller)
-      - granite-embedding-107m-multilingual
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
-      interval: 60s
-      timeout: 10m
-      retries: 120
-    ports:
-      - 8081:8080
-    environment:
-      - DEBUG=true
-      #- LOCALAI_API_KEY=sk-1234567890
-    volumes:
-      - ./volumes/models:/build/models:cached
-      - ./volumes/images:/tmp/generated/images
-    devices:
-      # On a system with integrated GPU and an Arc 770, this is the Arc 770
-      - /dev/dri/card1
-      - /dev/dri/renderD129
-  localrecall:
-    image: quay.io/mudler/localrecall:main
-    ports:
-      - 8080
-    environment:
-      - COLLECTION_DB_PATH=/db
-      - EMBEDDING_MODEL=granite-embedding-107m-multilingual
-      - FILE_ASSETS=/assets
-      - OPENAI_API_KEY=sk-1234567890
-      - OPENAI_BASE_URL=http://localai:8080
-    volumes:
-      - ./volumes/localrag/db:/db
-      - ./volumes/localrag/assets/:/assets
-  localrecall-healthcheck:
-    depends_on:
-      localrecall:
-        condition: service_started
-    image: busybox
-    command: ["sh", "-c", "until wget -q -O - http://localrecall:8080 > /dev/null 2>&1; do echo 'Waiting for localrecall...'; sleep 1; done; echo 'localrecall is up!'"]
-  localagi:
-    depends_on:
-      localai:
-        condition: service_healthy
-      localrecall-healthcheck:
-        condition: service_completed_successfully
-    build:
-      context: .
-      dockerfile: Dockerfile.webui
-    ports:
-      - 8080:3000
-    image: quay.io/mudler/localagi:master
-    environment:
-      - LOCALAGI_MODEL=arcee-agent
-      - LOCALAGI_LLM_API_URL=http://localai:8080
-      #- LOCALAGI_LLM_API_KEY=sk-1234567890
-      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
-      - LOCALAGI_STATE_DIR=/pool
-      - LOCALAGI_TIMEOUT=5m
-      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
-    volumes:
-      - ./volumes/localagi/:/pool


@@ -1,85 +0,0 @@
-services:
-  localai:
-    # See https://localai.io/basics/container/#standard-container-images for
-    # a list of available container images (or build your own with the provided Dockerfile)
-    # Available images with CUDA, ROCm, SYCL, Vulkan
-    # Image list (quay.io): https://quay.io/repository/go-skynet/local-ai?tab=tags
-    # Image list (dockerhub): https://hub.docker.com/r/localai/localai
-    image: localai/localai:master-gpu-nvidia-cuda-12
-    command:
-      - mlabonne_gemma-3-27b-it-abliterated
-      - qwen_qwq-32b
-      # Other good alternative options:
-      # - rombo-org_rombo-llm-v3.0-qwen-32b # minimum suggested model
-      # - arcee-agent
-      - granite-embedding-107m-multilingual
-      - flux.1-dev
-      - minicpm-v-2_6
-    environment:
-      # Enable if you have a single GPU which don't fit all the models
-      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
-      - DEBUG=true
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
-      interval: 10s
-      timeout: 20m
-      retries: 20
-    ports:
-      - 8081:8080
-    volumes:
-      - ./volumes/models:/build/models:cached
-      - ./volumes/images:/tmp/generated/images
-    deploy:
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              count: 1
-              capabilities: [gpu]
-  localrecall:
-    image: quay.io/mudler/localrecall:main
-    ports:
-      - 8080
-    environment:
-      - COLLECTION_DB_PATH=/db
-      - EMBEDDING_MODEL=granite-embedding-107m-multilingual
-      - FILE_ASSETS=/assets
-      - OPENAI_API_KEY=sk-1234567890
-      - OPENAI_BASE_URL=http://localai:8080
-    volumes:
-      - ./volumes/localrag/db:/db
-      - ./volumes/localrag/assets/:/assets
-  localrecall-healthcheck:
-    depends_on:
-      localrecall:
-        condition: service_started
-    image: busybox
-    command: ["sh", "-c", "until wget -q -O - http://localrecall:8080 > /dev/null 2>&1; do echo 'Waiting for localrecall...'; sleep 1; done; echo 'localrecall is up!'"]
-  localagi:
-    depends_on:
-      localai:
-        condition: service_healthy
-      localrecall-healthcheck:
-        condition: service_completed_successfully
-    build:
-      context: .
-      dockerfile: Dockerfile.webui
-    ports:
-      - 8080:3000
-    image: quay.io/mudler/localagi:master
-    environment:
-      - LOCALAGI_MODEL=qwen_qwq-32b
-      - LOCALAGI_LLM_API_URL=http://localai:8080
-      #- LOCALAGI_LLM_API_KEY=sk-1234567890
-      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
-      - LOCALAGI_STATE_DIR=/pool
-      - LOCALAGI_TIMEOUT=5m
-      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
-      - LOCALAGI_MULTIMODAL_MODEL=minicpm-v-2_6
-      - LOCALAGI_IMAGE_MODEL=flux.1-dev
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
-    volumes:
-      - ./volumes/localagi/:/pool


@@ -7,7 +7,8 @@ services:
     # Image list (dockerhub): https://hub.docker.com/r/localai/localai
     image: localai/localai:master-ffmpeg-core
     command:
-      - arcee-agent # (smaller)
+      # - gemma-3-12b-it
+      - ${MODEL_NAME:-arcee-agent}
       - granite-embedding-107m-multilingual
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
@@ -23,14 +24,44 @@ services:
       - ./volumes/models:/build/models:cached
       - ./volumes/images:/tmp/generated/images
 
-    # decomment the following piece if running with Nvidia GPUs
-    # deploy:
-    #   resources:
-    #     reservations:
-    #       devices:
-    #         - driver: nvidia
-    #           count: 1
-    #           capabilities: [gpu]
+  localai-nvidia:
+    profiles: ["nvidia"]
+    extends:
+      service: localai
+    environment:
+      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
+      - DEBUG=true
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: 1
+              capabilities: [gpu]
+    command:
+      - ${MODEL_NAME:-arcee-agent}
+      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
+      - ${IMAGE_MODEL:-flux.1-dev}
+      - granite-embedding-107m-multilingual
+
+  localai-intel:
+    profiles: ["intel"]
+    environment:
+      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
+      - DEBUG=true
+    extends:
+      service: localai
+    image: localai/localai:master-sycl-f32-ffmpeg-core
+    devices:
+      # On a system with integrated GPU and an Arc 770, this is the Arc 770
+      - /dev/dri/card1
+      - /dev/dri/renderD129
+    command:
+      - ${MODEL_NAME:-arcee-agent}
+      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
+      - ${IMAGE_MODEL:-sd-1.5-ggml}
+      - granite-embedding-107m-multilingual
 
   localrecall:
     image: quay.io/mudler/localrecall:main
     ports:
@@ -65,7 +96,7 @@ services:
       - 8080:3000
     #image: quay.io/mudler/localagi:master
     environment:
-      - LOCALAGI_MODEL=arcee-agent
+      - LOCALAGI_MODEL=${MODEL_NAME:-arcee-agent}
       - LOCALAGI_LLM_API_URL=http://localai:8080
       #- LOCALAGI_LLM_API_KEY=sk-1234567890
       - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
@@ -76,3 +107,31 @@ services:
       - "host.docker.internal:host-gateway"
     volumes:
       - ./volumes/localagi/:/pool
+
+  localagi-nvidia:
+    profiles: ["nvidia"]
+    extends:
+      service: localagi
+    environment:
+      - LOCALAGI_MODEL=${MODEL_NAME:-arcee-agent}
+      - LOCALAGI_MULTIMODAL_MODEL=${MULTIMODAL_MODEL:-minicpm-v-2_6}
+      - LOCALAGI_IMAGE_MODEL=${IMAGE_MODEL:-flux.1-dev}
+      - LOCALAGI_LLM_API_URL=http://localai:8080
+      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
+      - LOCALAGI_STATE_DIR=/pool
+      - LOCALAGI_TIMEOUT=5m
+      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
+
+  localagi-intel:
+    profiles: ["intel"]
+    extends:
+      service: localagi
+    environment:
+      - LOCALAGI_MODEL=${MODEL_NAME:-arcee-agent}
+      - LOCALAGI_MULTIMODAL_MODEL=${MULTIMODAL_MODEL:-minicpm-v-2_6}
+      - LOCALAGI_IMAGE_MODEL=${IMAGE_MODEL:-sd-1.5-ggml}
+      - LOCALAGI_LLM_API_URL=http://localai:8080
+      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
+      - LOCALAGI_STATE_DIR=/pool
+      - LOCALAGI_TIMEOUT=5m
+      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false


@@ -28,6 +28,7 @@ const (
 	ActionGithubIssueCommenter = "github-issue-commenter"
 	ActionGithubPRReader       = "github-pr-reader"
 	ActionGithubPRCommenter    = "github-pr-commenter"
+	ActionGithubPRReviewer     = "github-pr-reviewer"
 	ActionGithubREADME         = "github-readme"
 	ActionScraper              = "scraper"
 	ActionWikipedia            = "wikipedia"
@@ -53,6 +54,7 @@ var AvailableActions = []string{
 	ActionGithubIssueCommenter,
 	ActionGithubPRReader,
 	ActionGithubPRCommenter,
+	ActionGithubPRReviewer,
 	ActionGithubREADME,
 	ActionScraper,
 	ActionBrowse,
@@ -114,6 +116,8 @@ func Action(name, agentName string, config map[string]string, pool *state.AgentP
 		a = actions.NewGithubPRReader(config)
 	case ActionGithubPRCommenter:
 		a = actions.NewGithubPRCommenter(config)
+	case ActionGithubPRReviewer:
+		a = actions.NewGithubPRReviewer(config)
 	case ActionGithubIssueCommenter:
 		a = actions.NewGithubIssueCommenter(config)
 	case ActionGithubRepositoryGet:
@@ -217,6 +221,11 @@ func ActionsConfigMeta() []config.FieldGroup {
 			Label:  "GitHub PR Commenter",
 			Fields: actions.GithubPRCommenterConfigMeta(),
 		},
+		{
+			Name:   "github-pr-reviewer",
+			Label:  "GitHub PR Reviewer",
+			Fields: actions.GithubPRReviewerConfigMeta(),
+		},
 		{
 			Name:   "twitter-post",
 			Label:  "Twitter Post",


@@ -3,11 +3,8 @@ package actions
 import (
 	"context"
 	"fmt"
-	"io"
 	"regexp"
 	"strconv"
-	"strings"
-	"time"
 
 	"github.com/google/go-github/v69/github"
 	"github.com/mudler/LocalAGI/core/types"
@@ -124,16 +121,10 @@ func NewGithubPRCommenter(config map[string]string) *GithubPRCommenter {
 func (g *GithubPRCommenter) Run(ctx context.Context, params types.ActionParams) (types.ActionResult, error) {
 	result := struct {
 		Repository string `json:"repository"`
 		Owner      string `json:"owner"`
 		PRNumber   int    `json:"pr_number"`
-		GeneralComment string `json:"general_comment"`
-		Comments       []struct {
-			File      string `json:"file"`
-			Line      int    `json:"line"`
-			Comment   string `json:"comment"`
-			StartLine int    `json:"start_line,omitempty"`
-		} `json:"comments"`
+		Comment    string `json:"comment"`
 	}{}
 
 	err := params.Unmarshal(&result)
 	if err != nil {
@@ -159,134 +150,31 @@ func (g *GithubPRCommenter) Run(ctx context.Context, params types.ActionParams)
 		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d is not open (current state: %s)", result.PRNumber, *pr.State)}, nil
 	}
 
-	// Get the list of changed files to verify the files exist in the PR
-	files, _, err := g.client.PullRequests.ListFiles(ctx, result.Owner, result.Repository, result.PRNumber, &github.ListOptions{})
-	if err != nil {
-		return types.ActionResult{}, fmt.Errorf("failed to list PR files: %w", err)
+	if result.Comment == "" {
+		return types.ActionResult{Result: "No comment provided"}, nil
 	}
 
-	// Create a map of valid files with their commit info
-	validFiles := make(map[string]*commitFileInfo)
-	for _, file := range files {
-		if *file.Status != "deleted" {
-			info, err := getCommitInfo(file)
-			if err != nil {
-				continue
-			}
-			validFiles[*file.Filename] = info
-		}
-	}
-
-	// Process each comment
-	var results []string
-	for _, comment := range result.Comments {
-		// Check if file exists in PR
-		fileInfo, exists := validFiles[comment.File]
-		if !exists {
-			availableFiles := make([]string, 0, len(validFiles))
-			for f := range validFiles {
-				availableFiles = append(availableFiles, f)
-			}
-			results = append(results, fmt.Sprintf("Error: File %s not found in PR #%d. Available files: %v",
-				comment.File, result.PRNumber, availableFiles))
-			continue
-		}
-
-		// Check if line is in a changed hunk
-		if !fileInfo.isLineInChange(comment.Line) {
-			results = append(results, fmt.Sprintf("Error: Line %d is not in a changed hunk in file %s",
-				comment.Line, comment.File))
-			continue
-		}
-
-		// Calculate position
-		position := fileInfo.calculatePosition(comment.Line)
-		if position == nil {
-			results = append(results, fmt.Sprintf("Error: Could not calculate position for line %d in file %s",
-				comment.Line, comment.File))
-			continue
-		}
-
-		// Create the review comment
-		reviewComment := &github.PullRequestComment{
-			Path:     &comment.File,
-			Line:     &comment.Line,
-			Body:     &comment.Comment,
-			Position: position,
-			CommitID: &fileInfo.sha,
-		}
-
-		// Set start line if provided
-		if comment.StartLine > 0 {
-			reviewComment.StartLine = &comment.StartLine
-		}
-
-		// Create the comment with retries
-		var resp *github.Response
-		for i := 0; i < 6; i++ {
-			_, resp, err = g.client.PullRequests.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, reviewComment)
-			if err == nil {
-				break
-			}
-			if resp != nil && resp.StatusCode == 422 {
-				// Rate limit hit, wait and retry
-				retrySeconds := i * i
-				time.Sleep(time.Second * time.Duration(retrySeconds))
-				continue
-			}
-			break
-		}
-		if err != nil {
-			errorDetails := fmt.Sprintf("Error commenting on file %s, line %d: %s", comment.File, comment.Line, err.Error())
-			if resp != nil {
-				errorDetails += fmt.Sprintf("\nResponse Status: %s", resp.Status)
-				if resp.Body != nil {
-					body, _ := io.ReadAll(resp.Body)
-					errorDetails += fmt.Sprintf("\nResponse Body: %s", string(body))
-				}
-			}
-			results = append(results, errorDetails)
-			continue
-		}
-
-		results = append(results, fmt.Sprintf("Successfully commented on file %s, line %d", comment.File, comment.Line))
-	}
-
-	if result.GeneralComment != "" {
-		// Try both PullRequests and Issues API for general comments
-		var resp *github.Response
-		var err error
-
-		// First try PullRequests API
-		_, resp, err = g.client.PullRequests.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.PullRequestComment{
-			Body: &result.GeneralComment,
-		})
-
-		// If that fails with 403, try Issues API
-		if err != nil && resp != nil && resp.StatusCode == 403 {
-			_, resp, err = g.client.Issues.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.IssueComment{
-				Body: &result.GeneralComment,
-			})
-		}
-		if err != nil {
-			errorDetails := fmt.Sprintf("Error adding general comment: %s", err.Error())
-			if resp != nil {
-				errorDetails += fmt.Sprintf("\nResponse Status: %s", resp.Status)
-				if resp.Body != nil {
-					body, _ := io.ReadAll(resp.Body)
-					errorDetails += fmt.Sprintf("\nResponse Body: %s", string(body))
-				}
-			}
-			results = append(results, errorDetails)
-		} else {
-			results = append(results, "Successfully added general comment to pull request")
-		}
-	}
+	// Try both PullRequests and Issues API for general comments
+	var resp *github.Response
+
+	// First try PullRequests API
+	_, resp, err = g.client.PullRequests.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.PullRequestComment{
+		Body: &result.Comment,
+	})
+
+	// If that fails with 403, try Issues API
+	if err != nil && resp != nil && resp.StatusCode == 403 {
+		_, resp, err = g.client.Issues.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.IssueComment{
+			Body: &result.Comment,
+		})
+	}
+
+	if err != nil {
+		return types.ActionResult{Result: fmt.Sprintf("Error adding general comment: %s", err.Error())}, nil
+	}
 
 	return types.ActionResult{
-		Result: strings.Join(results, "\n"),
+		Result: "Successfully added general comment to pull request",
 	}, nil
 }
@@ -305,38 +193,12 @@ func (g *GithubPRCommenter) Definition() types.ActionDefinition {
 			"pr_number": {
 				Type:        jsonschema.Number,
 				Description: "The number of the pull request to comment on.",
 			},
-			"general_comment": {
+			"comment": {
 				Type:        jsonschema.String,
 				Description: "A general comment to add to the pull request.",
 			},
-			"comments": {
-				Type: jsonschema.Array,
-				Items: &jsonschema.Definition{
-					Type: jsonschema.Object,
-					Properties: map[string]jsonschema.Definition{
-						"file": {
-							Type:        jsonschema.String,
-							Description: "The file to comment on.",
-						},
-						"line": {
-							Type:        jsonschema.Number,
-							Description: "The line number to comment on.",
-						},
-						"comment": {
-							Type:        jsonschema.String,
-							Description: "The comment text.",
-						},
-						"start_line": {
-							Type:        jsonschema.Number,
-							Description: "Optional start line for multi-line comments.",
-						},
-					},
-					Required: []string{"file", "line", "comment"},
-				},
-				Description: "Array of comments to add to the pull request.",
-			},
 		},
-		Required: []string{"pr_number", "comments"},
+		Required: []string{"pr_number", "comment"},
 	}
 }
@@ -355,38 +217,12 @@ func (g *GithubPRCommenter) Definition() types.ActionDefinition {
 			"owner": {
 				Type:        jsonschema.String,
 				Description: "The owner of the repository.",
 			},
-			"general_comment": {
+			"comment": {
 				Type:        jsonschema.String,
 				Description: "A general comment to add to the pull request.",
 			},
-			"comments": {
-				Type: jsonschema.Array,
-				Items: &jsonschema.Definition{
-					Type: jsonschema.Object,
-					Properties: map[string]jsonschema.Definition{
-						"file": {
-							Type:        jsonschema.String,
-							Description: "The file to comment on.",
-						},
-						"line": {
-							Type:        jsonschema.Number,
-							Description: "The line number to comment on.",
-						},
-						"comment": {
-							Type:        jsonschema.String,
-							Description: "The comment text.",
-						},
-						"start_line": {
-							Type:        jsonschema.Number,
-							Description: "Optional start line for multi-line comments.",
-						},
-					},
-					Required: []string{"file", "line", "comment"},
-				},
-				Description: "Array of comments to add to the pull request.",
-			},
 		},
-		Required: []string{"pr_number", "repository", "owner", "comments"},
+		Required: []string{"pr_number", "repository", "owner", "comment"},
 	}
 }
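After this simplification the action accepts a single general comment per call. A hedged sketch of the JSON arguments an agent would now send (the struct is a local mirror of the schema above; repository, owner, and comment values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// prCommentParams mirrors the simplified schema above: one general
// comment instead of an array of line-level comments.
type prCommentParams struct {
	Repository string `json:"repository"`
	Owner      string `json:"owner"`
	PRNumber   int    `json:"pr_number"`
	Comment    string `json:"comment"`
}

func main() {
	// Illustrative values; the agent fills these from its reasoning.
	p := prCommentParams{
		Repository: "LocalAGI",
		Owner:      "mudler",
		PRNumber:   27,
		Comment:    "Thanks, the reviewer action looks good overall.",
	}
	b, _ := json.Marshal(p)
	fmt.Println(string(b))
}
```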


@@ -64,20 +64,44 @@ func (g *GithubPRReader) Run(ctx context.Context, params types.ActionParams) (ty
 		return types.ActionResult{Result: fmt.Sprintf("Error fetching pull request files: %s", err.Error())}, err
 	}
 
+	// Get CI status information
+	ciStatus := "\n\nCI Status:\n"
+
+	// Get PR status checks
+	checkRuns, _, err := g.client.Checks.ListCheckRunsForRef(ctx, result.Owner, result.Repository, pr.GetHead().GetSHA(), &github.ListCheckRunsOptions{})
+	if err == nil && checkRuns != nil {
+		ciStatus += fmt.Sprintf("\nPR Status Checks:\n")
+		ciStatus += fmt.Sprintf("Total Checks: %d\n", checkRuns.GetTotal())
+		for _, check := range checkRuns.CheckRuns {
+			ciStatus += fmt.Sprintf("- %s: %s (%s)\n",
+				check.GetName(),
+				check.GetConclusion(),
+				check.GetStatus())
+		}
+	}
+
 	// Build the file changes summary with patches
 	fileChanges := "\n\nFile Changes:\n"
 	for _, file := range files {
-		fileChanges += fmt.Sprintf("\n--- %s\n+++ %s\n", *file.Filename, *file.Filename)
-		if g.showFullDiff && file.Patch != nil {
-			fileChanges += *file.Patch
+		fileChanges += fmt.Sprintf("\n--- %s\n+++ %s\n", file.GetFilename(), file.GetFilename())
+		if g.showFullDiff && file.GetPatch() != "" {
+			fileChanges += file.GetPatch()
 		}
-		fileChanges += fmt.Sprintf("\n(%d additions, %d deletions)\n", *file.Additions, *file.Deletions)
+		fileChanges += fmt.Sprintf("\n(%d additions, %d deletions)\n", file.GetAdditions(), file.GetDeletions())
 	}
 
 	return types.ActionResult{
 		Result: fmt.Sprintf(
-			"Pull Request %d Repository: %s\nTitle: %s\nBody: %s\nState: %s\nBase: %s\nHead: %s%s",
-			*pr.Number, *pr.Base.Repo.FullName, *pr.Title, *pr.Body, *pr.State, *pr.Base.Ref, *pr.Head.Ref, fileChanges)}, nil
+			"Pull Request %d Repository: %s\nTitle: %s\nBody: %s\nState: %s\nBase: %s\nHead: %s%s%s",
+			pr.GetNumber(),
+			pr.GetBase().GetRepo().GetFullName(),
+			pr.GetTitle(),
+			pr.GetBody(),
+			pr.GetState(),
+			pr.GetBase().GetRef(),
+			pr.GetHead().GetRef(),
+			ciStatus,
+			fileChanges)}, nil
 }
 
 func (g *GithubPRReader) Definition() types.ActionDefinition {


@@ -0,0 +1,286 @@
package actions

import (
	"context"
	"fmt"
	"io"
	"strings"

	"github.com/google/go-github/v69/github"
	"github.com/mudler/LocalAGI/core/types"
	"github.com/mudler/LocalAGI/pkg/config"
	"github.com/mudler/LocalAGI/pkg/xlog"
	"github.com/sashabaranov/go-openai/jsonschema"
)

type GithubPRReviewer struct {
	token, repository, owner, customActionName string
	client                                     *github.Client
}

func NewGithubPRReviewer(config map[string]string) *GithubPRReviewer {
	client := github.NewClient(nil).WithAuthToken(config["token"])

	return &GithubPRReviewer{
		client:           client,
		token:            config["token"],
		customActionName: config["customActionName"],
		repository:       config["repository"],
		owner:            config["owner"],
	}
}

func (g *GithubPRReviewer) Run(ctx context.Context, params types.ActionParams) (types.ActionResult, error) {
	result := struct {
		Repository    string `json:"repository"`
		Owner         string `json:"owner"`
		PRNumber      int    `json:"pr_number"`
		ReviewComment string `json:"review_comment"`
		ReviewAction  string `json:"review_action"` // APPROVE, REQUEST_CHANGES, or COMMENT
		Comments      []struct {
			File      string `json:"file"`
			Line      int    `json:"line"`
			Comment   string `json:"comment"`
			StartLine int    `json:"start_line,omitempty"`
		} `json:"comments"`
	}{}

	err := params.Unmarshal(&result)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to unmarshal params: %w", err)
	}

	if g.repository != "" && g.owner != "" {
		result.Repository = g.repository
		result.Owner = g.owner
	}

	// First verify the PR exists and is in a valid state
	pr, _, err := g.client.PullRequests.Get(ctx, result.Owner, result.Repository, result.PRNumber)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to fetch PR #%d: %w", result.PRNumber, err)
	}

	if pr == nil {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d not found in repository %s/%s", result.PRNumber, result.Owner, result.Repository)}, nil
	}

	// Check if PR is in a state that allows reviews
	if *pr.State != "open" {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d is not open (current state: %s)", result.PRNumber, *pr.State)}, nil
	}

	// Get the list of changed files to verify the files exist in the PR
	files, _, err := g.client.PullRequests.ListFiles(ctx, result.Owner, result.Repository, result.PRNumber, &github.ListOptions{})
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to list PR files: %w", err)
	}

	// Create a map of valid files
	validFiles := make(map[string]bool)
	for _, file := range files {
		if *file.Status != "deleted" {
			validFiles[*file.Filename] = true
		}
	}

	// Process each comment
	var reviewComments []*github.DraftReviewComment
	for _, comment := range result.Comments {
		// Check if file exists in PR
		if !validFiles[comment.File] {
			continue
		}

		reviewComment := &github.DraftReviewComment{
			Path: &comment.File,
			Line: &comment.Line,
			Body: &comment.Comment,
		}

		// Set start line if provided
		if comment.StartLine > 0 {
			reviewComment.StartLine = &comment.StartLine
		}

		reviewComments = append(reviewComments, reviewComment)
	}

	// Create the review
	review := &github.PullRequestReviewRequest{
		Event:    &result.ReviewAction,
		Body:     &result.ReviewComment,
		Comments: reviewComments,
	}

	xlog.Debug("[githubprreviewer] review", "review", review)

	// Submit the review
	_, resp, err := g.client.PullRequests.CreateReview(ctx, result.Owner, result.Repository, result.PRNumber, review)
	if err != nil {
		errorDetails := fmt.Sprintf("Error submitting review: %s", err.Error())
		if resp != nil {
			errorDetails += fmt.Sprintf("\nResponse Status: %s", resp.Status)
			if resp.Body != nil {
				body, _ := io.ReadAll(resp.Body)
				errorDetails += fmt.Sprintf("\nResponse Body: %s", string(body))
			}
		}
		return types.ActionResult{Result: errorDetails}, err
	}

	actionResult := fmt.Sprintf(
		"Pull request https://github.com/%s/%s/pull/%d reviewed successfully with status: %s",
		result.Owner,
		result.Repository,
		result.PRNumber,
		strings.ToLower(result.ReviewAction),
	)

	return types.ActionResult{Result: actionResult}, nil
}

func (g *GithubPRReviewer) Definition() types.ActionDefinition {
	actionName := "review_github_pr"
	if g.customActionName != "" {
		actionName = g.customActionName
	}
	description := "Review a GitHub pull request by approving, requesting changes, or commenting."
	if g.repository != "" && g.owner != "" {
		return types.ActionDefinition{
			Name:        types.ActionDefinitionName(actionName),
			Description: description,
			Properties: map[string]jsonschema.Definition{
				"pr_number": {
					Type:        jsonschema.Number,
					Description: "The number of the pull request to review.",
				},
				"review_comment": {
					Type:        jsonschema.String,
					Description: "The main review comment to add to the pull request.",
				},
				"review_action": {
					Type:        jsonschema.String,
					Description: "The type of review to submit (APPROVE, REQUEST_CHANGES, or COMMENT).",
					Enum:        []string{"APPROVE", "REQUEST_CHANGES", "COMMENT"},
				},
				"comments": {
					Type: jsonschema.Array,
					Items: &jsonschema.Definition{
						Type: jsonschema.Object,
						Properties: map[string]jsonschema.Definition{
							"file": {
								Type:        jsonschema.String,
								Description: "The file to comment on.",
							},
							"line": {
								Type:        jsonschema.Number,
								Description: "The line number to comment on.",
							},
							"comment": {
								Type:        jsonschema.String,
								Description: "The comment text.",
							},
							"start_line": {
								Type:        jsonschema.Number,
								Description: "Optional start line for multi-line comments.",
							},
						},
						Required: []string{"file", "line", "comment"},
					},
					Description: "Array of line-specific comments to add to the review.",
				},
			},
			Required: []string{"pr_number", "review_action"},
		}
	}
	return types.ActionDefinition{
		Name:        types.ActionDefinitionName(actionName),
		Description: description,
		Properties: map[string]jsonschema.Definition{
			"pr_number": {
				Type:        jsonschema.Number,
				Description: "The number of the pull request to review.",
			},
			"repository": {
				Type:        jsonschema.String,
				Description: "The repository containing the pull request.",
			},
			"owner": {
				Type:        jsonschema.String,
				Description: "The owner of the repository.",
			},
			"review_comment": {
				Type:        jsonschema.String,
				Description: "The main review comment to add to the pull request.",
			},
			"review_action": {
				Type:        jsonschema.String,
				Description: "The type of review to submit (APPROVE, REQUEST_CHANGES, or COMMENT).",
				Enum:        []string{"APPROVE", "REQUEST_CHANGES", "COMMENT"},
			},
			"comments": {
				Type: jsonschema.Array,
				Items: &jsonschema.Definition{
					Type: jsonschema.Object,
					Properties: map[string]jsonschema.Definition{
						"file": {
							Type:        jsonschema.String,
							Description: "The file to comment on.",
						},
						"line": {
							Type:        jsonschema.Number,
							Description: "The line number to comment on.",
						},
						"comment": {
							Type:        jsonschema.String,
							Description: "The comment text.",
						},
						"start_line": {
							Type:        jsonschema.Number,
							Description: "Optional start line for multi-line comments.",
						},
					},
					Required: []string{"file", "line", "comment"},
				},
				Description: "Array of line-specific comments to add to the review.",
			},
		},
		Required: []string{"pr_number", "repository", "owner", "review_action"},
	}
}

func (a *GithubPRReviewer) Plannable() bool {
	return true
}

// GithubPRReviewerConfigMeta returns the metadata for GitHub PR Reviewer action configuration fields
func GithubPRReviewerConfigMeta() []config.Field {
	return []config.Field{
		{
			Name:     "token",
			Label:    "GitHub Token",
			Type:     config.FieldTypeText,
			Required: true,
			HelpText: "GitHub API token with repository access",
		},
		{
			Name:     "repository",
			Label:    "Repository",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository name",
		},
		{
			Name:     "owner",
			Label:    "Owner",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository owner",
		},
		{
			Name:     "customActionName",
			Label:    "Custom Action Name",
			Type:     config.FieldTypeText,
			HelpText: "Custom name for this action",
		},
	}
}
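For reference, a sketch of a full set of arguments for this reviewer action (the field names come from the schema above; all values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative arguments for the "review_github_pr" action.
	params := map[string]any{
		"pr_number":      27,
		"repository":     "LocalAGI",
		"owner":          "mudler",
		"review_action":  "REQUEST_CHANGES",
		"review_comment": "Please split this function and add a test.",
		"comments": []map[string]any{
			{"file": "core/agent/agent.go", "line": 120, "comment": "Log this error before returning."},
		},
	}
	b, _ := json.MarshalIndent(params, "", "  ")
	fmt.Println(string(b))
}
```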


@@ -1,13 +1,12 @@
 import { useState, useEffect } from 'react';
-import { useParams, Link, useNavigate } from 'react-router-dom';
+import { useParams, Link } from 'react-router-dom';
 
 function AgentStatus() {
   const { name } = useParams();
-  const navigate = useNavigate();
   const [statusData, setStatusData] = useState(null);
   const [loading, setLoading] = useState(true);
   const [error, setError] = useState(null);
-  const [eventSource, setEventSource] = useState(null);
+  const [_eventSource, setEventSource] = useState(null);
   const [liveUpdates, setLiveUpdates] = useState([]);
 
   // Update document title
@@ -49,7 +48,7 @@ function AgentStatus() {
         const data = JSON.parse(event.data);
         setLiveUpdates(prev => [data, ...prev.slice(0, 19)]); // Keep last 20 updates
       } catch (err) {
-        console.error('Error parsing SSE data:', err);
+        setLiveUpdates(prev => [event.data, ...prev.slice(0, 19)]);
       }
     });
@@ -129,23 +128,9 @@ function AgentStatus() {
                 <h2 className="text-sm font-semibold mb-2">Agent Action:</h2>
                 <div className="status-details">
                   <div className="status-row">
-                    <span className="status-label">Result:</span>
-                    <span className="status-value">{formatValue(item.Result)}</span>
+                    <span className="status-label">{index}</span>
+                    <span className="status-value">{formatValue(item)}</span>
                   </div>
-                  <div className="status-row">
-                    <span className="status-label">Action:</span>
-                    <span className="status-value">{formatValue(item.Action)}</span>
-                  </div>
-                  <div className="status-row">
-                    <span className="status-label">Parameters:</span>
-                    <span className="status-value pre-wrap">{formatValue(item.Params)}</span>
-                  </div>
-                  {item.Reasoning && (
-                    <div className="status-row">
-                      <span className="status-label">Reasoning:</span>
-                      <span className="status-value reasoning">{formatValue(item.Reasoning)}</span>
-                    </div>
-                  )}
                 </div>
               </div>
             </div>


@@ -4,6 +4,7 @@ import (
 	"crypto/subtle"
 	"embed"
 	"errors"
+	"fmt"
 	"math/rand"
 	"net/http"
 	"path/filepath"
@@ -238,9 +239,20 @@ func (app *App) registerRoutes(pool *state.AgentPool, webapp *fiber.App) {
 			history = &state.Status{ActionResults: []types.ActionState{}}
 		}
 
+		entries := []string{}
+		for _, h := range Reverse(history.Results()) {
+			entries = append(entries, fmt.Sprintf(
+				"Result: %v Action: %v Params: %v Reasoning: %v",
+				h.Result,
+				h.Action.Definition().Name,
+				h.Params,
+				h.Reasoning,
+			))
+		}
+
 		return c.JSON(fiber.Map{
 			"Name":    c.Params("name"),
-			"History": Reverse(history.Results()),
+			"History": entries,
 		})
 	})
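Context for why the history is pre-rendered to strings here (an illustration built on the standard library, not code from this repo): encoding/json refuses function-typed fields, which is the kind of failure serializing raw action objects can hit, matching the "unserializable Go objects" commit message:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A struct with a function-typed field, standing in for an action object.
type actionState struct {
	Result string
	Run    func() // function values cannot be marshalled to JSON
}

func main() {
	_, err := json.Marshal(actionState{Result: "ok", Run: func() {}})
	fmt.Println(err) // json: unsupported type: func()
}
```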