Compare commits


2 Commits

Author SHA1 Message Date
mudler
66cc5e7452 fix: improve templates
Signed-off-by: mudler <mudler@localai.io>
2025-04-10 20:20:41 +02:00
mudler
8e18df468b fix(pick_action): improve action pickup by using only the assistant thought process
Signed-off-by: mudler <mudler@localai.io>
2025-04-10 20:07:34 +02:00
18 changed files with 311 additions and 1133 deletions

View File

@@ -3,7 +3,7 @@ name: Run Go Tests
 on:
   push:
     branches:
-      - 'main'
+      - '**'
   pull_request:
     branches:
       - '**'

View File

@@ -3,7 +3,7 @@ IMAGE_NAME?=webui
 ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
 
 prepare-tests:
-	docker compose up -d --build
+	docker compose up -d
 
 cleanup-tests:
 	docker compose down

README.md
View File

@@ -1,5 +1,5 @@
 <p align="center">
-  <img src="./webui/react-ui/public/logo_1.png" alt="LocalAGI Logo" width="220"/>
+  <img src="https://github.com/user-attachments/assets/6958ffb3-31cf-441e-b99d-ce34ec6fc88f" alt="LocalAGI Logo" width="220"/>
 </p>
 
 <h3 align="center"><em>Your AI. Your Hardware. Your Rules.</em></h3>
@@ -45,129 +45,14 @@ LocalAGI ensures your data stays exactly where you want it—on your hardware. N
 git clone https://github.com/mudler/LocalAGI
 cd LocalAGI
 
-# CPU setup (default)
-docker compose up
-
-# NVIDIA GPU setup
-docker compose -f docker-compose.nvidia.yaml up
-
-# Intel GPU setup (for Intel Arc and integrated GPUs)
-docker compose -f docker-compose.intel.yaml up
-
-# Start with a specific model (see available models in models.localai.io, or localai.io to use any model in huggingface)
-MODEL_NAME=gemma-3-12b-it docker compose up
-
-# NVIDIA GPU setup with custom multimodal and image models
-MODEL_NAME=gemma-3-12b-it \
-MULTIMODAL_MODEL=minicpm-v-2_6 \
-IMAGE_MODEL=flux.1-dev \
-docker compose -f docker-compose.nvidia.yaml up
+# CPU setup
+docker compose up -f docker-compose.yml
+
+# GPU setup
+docker compose up -f docker-compose.gpu.yml
 ```
 
-Now you can access and manage your agents at [http://localhost:8080](http://localhost:8080)
+Access your agents at `http://localhost:3000`
 
-## 📚🆕 Local Stack Family
-
-🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:
-
-<table>
-  <tr>
-    <td width="50%" valign="top">
-      <a href="https://github.com/mudler/LocalAI">
-        <img src="https://raw.githubusercontent.com/mudler/LocalAI/refs/heads/rebranding/core/http/static/logo_horizontal.png" width="300" alt="LocalAI Logo">
-      </a>
-    </td>
-    <td width="50%" valign="top">
-      <h3><a href="https://github.com/mudler/LocalRecall">LocalAI</a></h3>
-      <p>LocalAI is the free, Open Source OpenAI alternative. LocalAI act as a drop-in replacement REST API that's compatible with OpenAI API specifications for local AI inferencing. Does not require GPU.</p>
-    </td>
-  </tr>
-  <tr>
-    <td width="50%" valign="top">
-      <a href="https://github.com/mudler/LocalRecall">
-        <img src="https://raw.githubusercontent.com/mudler/LocalRecall/refs/heads/main/static/localrecall_horizontal.png" width="300" alt="LocalRecall Logo">
-      </a>
-    </td>
-    <td width="50%" valign="top">
-      <h3><a href="https://github.com/mudler/LocalRecall">LocalRecall</a></h3>
-      <p>A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.</p>
-    </td>
-  </tr>
-</table>
-
-## 🖥️ Hardware Configurations
-
-LocalAGI supports multiple hardware configurations through Docker Compose profiles:
-
-### CPU (Default)
-- No special configuration needed
-- Runs on any system with Docker
-- Best for testing and development
-- Supports text models only
-
-### NVIDIA GPU
-- Requires NVIDIA GPU and drivers
-- Uses CUDA for acceleration
-- Best for high-performance inference
-- Supports text, multimodal, and image generation models
-- Run with: `docker compose -f docker-compose.nvidia.yaml up`
-- Default models:
-  - Text: `arcee-agent`
-  - Multimodal: `minicpm-v-2_6`
-  - Image: `flux.1-dev`
-- Environment variables:
-  - `MODEL_NAME`: Text model to use
-  - `MULTIMODAL_MODEL`: Multimodal model to use
-  - `IMAGE_MODEL`: Image generation model to use
-  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
-
-### Intel GPU
-- Supports Intel Arc and integrated GPUs
-- Uses SYCL for acceleration
-- Best for Intel-based systems
-- Supports text, multimodal, and image generation models
-- Run with: `docker compose -f docker-compose.intel.yaml up`
-- Default models:
-  - Text: `arcee-agent`
-  - Multimodal: `minicpm-v-2_6`
-  - Image: `sd-1.5-ggml`
-- Environment variables:
-  - `MODEL_NAME`: Text model to use
-  - `MULTIMODAL_MODEL`: Multimodal model to use
-  - `IMAGE_MODEL`: Image generation model to use
-  - `LOCALAI_SINGLE_ACTIVE_BACKEND`: Set to `true` to enable single active backend mode
-
-## Customize models
-
-You can customize the models used by LocalAGI by setting environment variables when running docker-compose. For example:
-
-```bash
-# CPU with custom model
-MODEL_NAME=gemma-3-12b-it docker compose up
-
-# NVIDIA GPU with custom models
-MODEL_NAME=gemma-3-12b-it \
-MULTIMODAL_MODEL=minicpm-v-2_6 \
-IMAGE_MODEL=flux.1-dev \
-docker compose -f docker-compose.nvidia.yaml up
-
-# Intel GPU with custom models
-MODEL_NAME=gemma-3-12b-it \
-MULTIMODAL_MODEL=minicpm-v-2_6 \
-IMAGE_MODEL=sd-1.5-ggml \
-docker compose -f docker-compose.intel.yaml up
-```
-
-If no models are specified, it will use the defaults:
-- Text model: `arcee-agent`
-- Multimodal model: `minicpm-v-2_6`
-- Image model: `flux.1-dev` (NVIDIA) or `sd-1.5-ggml` (Intel)
-
-Good (relatively small) models that have been tested are:
-- `qwen_qwq-32b` (best in co-ordinating agents)
-- `gemma-3-12b-it`
-- `gemma-3-27b-it`
-
 ## 🏆 Why Choose LocalAGI?
@@ -213,8 +98,6 @@ Explore detailed documentation including:
 ### Environment Configuration
 
-LocalAGI supports environment configurations. Note that these environment variables needs to be specified in the localagi container in the docker-compose file to have effect.
-
 | Variable | What It Does |
 |----------|--------------|
 | `LOCALAGI_MODEL` | Your go-to model |
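For context on that removed note: the `LOCALAGI_*` variables are read by the localagi service, so they belong in its environment block. A minimal compose sketch, with service and variable names taken from the compose files in this diff and illustrative values:

```yaml
services:
  localagi:
    environment:
      # These only take effect when set on the localagi container
      - LOCALAGI_MODEL=arcee-agent
      - LOCALAGI_LLM_API_URL=http://localai:8080
```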

View File

@@ -1,48 +0,0 @@
package action

import (
	"context"

	"github.com/mudler/LocalAGI/core/types"
	"github.com/sashabaranov/go-openai/jsonschema"
)

// NewGoal creates a new intention action
// The inention action is special as it tries to identify
// a tool to use and a reasoning over to use it
func NewGoal() *GoalAction {
	return &GoalAction{}
}

type GoalAction struct {
}

type GoalResponse struct {
	Goal     string `json:"goal"`
	Achieved bool   `json:"achieved"`
}

func (a *GoalAction) Run(context.Context, types.ActionParams) (types.ActionResult, error) {
	return types.ActionResult{}, nil
}

func (a *GoalAction) Plannable() bool {
	return false
}

func (a *GoalAction) Definition() types.ActionDefinition {
	return types.ActionDefinition{
		Name:        "goal",
		Description: "Check if the goal is achieved",
		Properties: map[string]jsonschema.Definition{
			"goal": {
				Type:        jsonschema.String,
				Description: "The goal to check if it is achieved.",
			},
			"achieved": {
				Type:        jsonschema.Boolean,
				Description: "Whether the goal is achieved",
			},
		},
		Required: []string{"goal", "achieved"},
	}
}

View File

@@ -41,7 +41,7 @@ func (a *PlanAction) Plannable() bool {
 func (a *PlanAction) Definition() types.ActionDefinition {
 	return types.ActionDefinition{
 		Name:        PlanActionName,
-		Description: "Use it for situations that involves doing more actions in sequence.",
+		Description: "Use this tool for solving complex tasks that involves calling more tools in sequence.",
 		Properties: map[string]jsonschema.Definition{
 			"subtasks": {
 				Type: jsonschema.Array,

View File

@@ -24,16 +24,7 @@ type decisionResult struct {
 func (a *Agent) decision(
 	ctx context.Context,
 	conversation []openai.ChatCompletionMessage,
-	tools []openai.Tool, toolchoice string, maxRetries int) (*decisionResult, error) {
-
-	var choice *openai.ToolChoice
-	if toolchoice != "" {
-		choice = &openai.ToolChoice{
-			Type:     openai.ToolTypeFunction,
-			Function: openai.ToolFunction{Name: toolchoice},
-		}
-	}
+	tools []openai.Tool, toolchoice any, maxRetries int) (*decisionResult, error) {
 
 	var lastErr error
 	for attempts := 0; attempts < maxRetries; attempts++ {
@@ -41,10 +32,7 @@ func (a *Agent) decision(
 			Model:    a.options.LLMAPI.Model,
 			Messages: conversation,
 			Tools:    tools,
-		}
-
-		if choice != nil {
-			decision.ToolChoice = *choice
+			ToolChoice: toolchoice,
 		}
 
 		resp, err := a.client.CreateChatCompletion(ctx, decision)
@@ -54,9 +42,6 @@ func (a *Agent) decision(
 			continue
 		}
 
-		jsonResp, _ := json.Marshal(resp)
-		xlog.Debug("Decision response", "response", string(jsonResp))
-
 		if len(resp.Choices) != 1 {
 			lastErr = fmt.Errorf("no choices: %d", len(resp.Choices))
 			xlog.Warn("Attempt to make a decision failed", "attempt", attempts+1, "error", lastErr)
@@ -94,15 +79,6 @@ func (m Messages) ToOpenAI() []openai.ChatCompletionMessage {
 	return []openai.ChatCompletionMessage(m)
 }
 
-func (m Messages) RemoveIf(f func(msg openai.ChatCompletionMessage) bool) Messages {
-	for i := len(m) - 1; i >= 0; i-- {
-		if f(m[i]) {
-			m = append(m[:i], m[i+1:]...)
-		}
-	}
-	return m
-}
-
 func (m Messages) String() string {
 	s := ""
 	for _, cc := range m {
@@ -204,7 +180,10 @@ func (a *Agent) generateParameters(ctx context.Context, pickTemplate string, act
 			result, attemptErr = a.decision(ctx,
 				cc,
 				a.availableActions().ToTools(),
-				act.Definition().Name.String(),
+				openai.ToolChoice{
+					Type:     openai.ToolTypeFunction,
+					Function: openai.ToolFunction{Name: act.Definition().Name.String()},
+				},
 				maxAttempts,
 			)
 			if attemptErr == nil && result.actionParams != nil {
@@ -265,7 +244,6 @@ func (a *Agent) handlePlanning(ctx context.Context, job *types.Job, chosenAction
 		params, err := a.generateParameters(ctx, pickTemplate, subTaskAction, conv, subTaskReasoning, maxRetries)
 		if err != nil {
-			xlog.Error("error generating action's parameters", "error", err)
 			return conv, fmt.Errorf("error generating action's parameters: %w", err)
 		}
@@ -295,7 +273,6 @@ func (a *Agent) handlePlanning(ctx context.Context, job *types.Job, chosenAction
 		result, err := a.runAction(ctx, subTaskAction, actionParams)
 		if err != nil {
-			xlog.Error("error running action", "error", err)
 			return conv, fmt.Errorf("error running action: %w", err)
 		}
@@ -381,9 +358,7 @@ func (a *Agent) prepareHUD() (promptHUD *PromptHUD) {
 func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.ChatCompletionMessage, maxRetries int) (types.Action, types.ActionParams, string, error) {
 	c := messages
 
-	xlog.Debug("[pickAction] picking action starts", "messages", messages)
-
-	// Identify the goal of this conversation
+	xlog.Debug("picking action", "messages", messages)
 
 	if !a.options.forceReasoning {
 		xlog.Debug("not forcing reasoning")
@@ -392,7 +367,7 @@ func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.
 		thought, err := a.decision(ctx,
 			messages,
 			a.availableActions().ToTools(),
-			"",
+			nil,
 			maxRetries)
 		if err != nil {
 			return nil, nil, "", err
@@ -414,7 +389,7 @@ func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.
 		return chosenAction, thought.actionParams, thought.message, nil
 	}
 
-	xlog.Debug("[pickAction] forcing reasoning")
+	xlog.Debug("forcing reasoning")
 
 	prompt, err := renderTemplate(templ, a.prepareHUD(), a.availableActions(), "")
 	if err != nil {
@@ -431,84 +406,71 @@ func (a *Agent) pickAction(ctx context.Context, templ string, messages []openai.
 		}, c...)
 	}
 
-	// We also could avoid to use functions here and get just a reply from the LLM
-	// and then use the reply to get the action
 	thought, err := a.decision(ctx,
 		c,
 		types.Actions{action.NewReasoning()}.ToTools(),
-		action.NewReasoning().Definition().Name.String(), maxRetries)
+		action.NewReasoning().Definition().Name, maxRetries)
 	if err != nil {
 		return nil, nil, "", err
 	}
 
-	originalReasoning := ""
+	reason := ""
 	response := &action.ReasoningResponse{}
 	if thought.actionParams != nil {
 		if err := thought.actionParams.Unmarshal(response); err != nil {
 			return nil, nil, "", err
 		}
-		originalReasoning = response.Reasoning
+		reason = response.Reasoning
 	}
 	if thought.message != "" {
-		originalReasoning = thought.message
+		reason = thought.message
 	}
 
-	xlog.Debug("[pickAction] picking action", "messages", c)
-	// thought, err := a.askLLM(ctx,
-	// 	c,
-	actionsID := []string{"reply"}
+	xlog.Debug("thought", "reason", reason)
+
+	// From the thought, get the action call
+	// Get all the available actions IDs
+	actionsID := []string{}
 	for _, m := range a.availableActions() {
 		actionsID = append(actionsID, m.Definition().Name.String())
 	}
-	xlog.Debug("[pickAction] actionsID", "actionsID", actionsID)
 	intentionsTools := action.NewIntention(actionsID...)
-	// TODO: FORCE to select ana ction here
 	// NOTE: we do not give the full conversation here to pick the action
 	// to avoid hallucinations
+	// Extract an action
 	params, err := a.decision(ctx,
-		append(c, openai.ChatCompletionMessage{
-			Role:    "system",
-			Content: "Pick the relevant action given the following reasoning: " + originalReasoning,
-		}),
+		[]openai.ChatCompletionMessage{{
+			Role:    "assistant",
+			Content: reason,
+		},
+		},
 		types.Actions{intentionsTools}.ToTools(),
-		intentionsTools.Definition().Name.String(), maxRetries)
+		intentionsTools.Definition().Name, maxRetries)
 	if err != nil {
 		return nil, nil, "", fmt.Errorf("failed to get the action tool parameters: %v", err)
 	}
 
-	actionChoice := action.IntentResponse{}
 	if params.actionParams == nil {
-		xlog.Debug("[pickAction] no action params found")
 		return nil, nil, params.message, nil
 	}
 
+	actionChoice := action.IntentResponse{}
 	err = params.actionParams.Unmarshal(&actionChoice)
 	if err != nil {
 		return nil, nil, "", err
 	}
 
-	if actionChoice.Tool == "" || actionChoice.Tool == "reply" {
-		xlog.Debug("[pickAction] no action found, replying")
-		return nil, nil, "", nil
+	if actionChoice.Tool == "" || actionChoice.Tool == "none" {
+		return nil, nil, "", fmt.Errorf("no intent detected")
 	}
 
+	// Find the action
 	chosenAction := a.availableActions().Find(actionChoice.Tool)
+	if chosenAction == nil {
+		return nil, nil, "", fmt.Errorf("no action found for intent:" + actionChoice.Tool)
+	}
 
-	xlog.Debug("[pickAction] chosenAction", "chosenAction", chosenAction, "actionName", actionChoice.Tool)
-
-	// // Let's double check if the action is correct by asking the LLM to judge it
-	// if chosenAction!= nil {
-	// 	promptString:= "Given the following goal and thoughts, is the action correct? \n\n"
-	// 	promptString+= fmt.Sprintf("Goal: %s\n", goalResponse.Goal)
-	// 	promptString+= fmt.Sprintf("Thoughts: %s\n", originalReasoning)
-	// 	promptString+= fmt.Sprintf("Action: %s\n", chosenAction.Definition().Name.String())
-	// 	promptString+= fmt.Sprintf("Action description: %s\n", chosenAction.Definition().Description)
-	// 	promptString+= fmt.Sprintf("Action parameters: %s\n", params.actionParams)
-	// }
-
-	return chosenAction, nil, originalReasoning, nil
+	return chosenAction, nil, actionChoice.Reasoning, nil
 }
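The signature change in this file leans on the fact that go-openai types the request's ToolChoice field as `any`, so a caller can pass `nil` to leave the model free, a mode string, or an `openai.ToolChoice` that pins one function, exactly the three shapes used above. A minimal self-contained sketch of the mechanism, not the project's actual helper (the helper name, model name, API key, and the forced action name "plan" are placeholders):

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

// decide mirrors the shape of the decision() helper in this diff: the
// toolchoice argument is forwarded untouched into the request.
func decide(ctx context.Context, client *openai.Client, conv []openai.ChatCompletionMessage, tools []openai.Tool, toolchoice any) error {
	req := openai.ChatCompletionRequest{
		Model:      "arcee-agent", // placeholder model name
		Messages:   conv,
		Tools:      tools,
		ToolChoice: toolchoice, // nil, a mode string, or openai.ToolChoice
	}
	resp, err := client.CreateChatCompletion(ctx, req)
	if err != nil {
		return err
	}
	fmt.Println(len(resp.Choices))
	return nil
}

func main() {
	client := openai.NewClient("sk-...") // placeholder key
	// Force a specific function call, as generateParameters now does:
	forced := openai.ToolChoice{
		Type:     openai.ToolTypeFunction,
		Function: openai.ToolFunction{Name: "plan"}, // hypothetical action name
	}
	_ = decide(context.Background(), client, nil, nil, forced)
}
```

This is why pickAction can pass `nil` for the free-form thought step while the intention-extraction step forces the single intention tool.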

View File

@@ -249,7 +249,7 @@ func (a *Agent) runAction(ctx context.Context, chosenAction types.Action, params
 		}
 	}
 
-	xlog.Info("[runAction] Running action", "action", chosenAction.Definition().Name, "agent", a.Character.Name, "params", params.String())
+	xlog.Info("Running action", "action", chosenAction.Definition().Name, "agent", a.Character.Name)
 
 	if chosenAction.Definition().Name.Is(action.StateActionName) {
 		// We need to store the result in the state
@@ -270,8 +270,6 @@ func (a *Agent) runAction(ctx context.Context, chosenAction types.Action, params
 		}
 	}
 
-	xlog.Debug("[runAction] Action result", "action", chosenAction.Definition().Name, "params", params.String(), "result", result.Result)
-
 	return result, nil
 }
@@ -517,21 +515,10 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		//job.Result.Finish(fmt.Errorf("no action to do"))\
 		xlog.Info("No action to do, just reply", "agent", a.Character.Name, "reasoning", reasoning)
-		if reasoning != "" {
-			conv = append(conv, openai.ChatCompletionMessage{
-				Role:    "assistant",
-				Content: reasoning,
-			})
-		} else {
-			xlog.Info("No reasoning, just reply", "agent", a.Character.Name)
-			msg, err := a.askLLM(job.GetContext(), conv, maxRetries)
-			if err != nil {
-				job.Result.Finish(fmt.Errorf("error asking LLM for a reply: %w", err))
-				return
-			}
-			conv = append(conv, msg)
-			reasoning = msg.Content
-		}
+		conv = append(conv, openai.ChatCompletionMessage{
+			Role:    "assistant",
+			Content: reasoning,
+		})
 
 		xlog.Debug("Finish job with reasoning", "reasoning", reasoning, "agent", a.Character.Name, "conversation", fmt.Sprintf("%+v", conv))
 		job.Result.Conversation = conv
@@ -605,13 +592,7 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 			var err error
 			conv, err = a.handlePlanning(job.GetContext(), job, chosenAction, actionParams, reasoning, pickTemplate, conv)
 			if err != nil {
-				xlog.Error("error handling planning", "error", err)
-				//job.Result.Conversation = conv
-				//job.Result.SetResponse(msg.Content)
-				a.reply(job, role, append(conv, openai.ChatCompletionMessage{
-					Role:    "assistant",
-					Content: fmt.Sprintf("Error handling planning: %v", err),
-				}), actionParams, chosenAction, reasoning)
+				job.Result.Finish(fmt.Errorf("error running action: %w", err))
 				return
 			}
@@ -689,7 +670,6 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		!chosenAction.Definition().Name.Is(action.ReplyActionName) {
 		xlog.Info("Following action", "action", followingAction.Definition().Name, "agent", a.Character.Name)
-		job.ConversationHistory = conv
 
 		// We need to do another action (?)
 		// The agent decided to do another action
@@ -697,6 +677,26 @@ func (a *Agent) consumeJob(job *types.Job, role string) {
 		job.SetNextAction(&followingAction, &followingParams, reasoning)
 		a.consumeJob(job, role)
 		return
+	} else if followingAction == nil {
+		xlog.Info("Not following another action", "agent", a.Character.Name)
+
+		if !a.options.forceReasoning {
+			xlog.Info("Finish conversation with reasoning", "reasoning", reasoning, "agent", a.Character.Name)
+			msg := openai.ChatCompletionMessage{
+				Role:    "assistant",
+				Content: reasoning,
+			}
+			conv = append(conv, msg)
+			job.Result.SetResponse(msg.Content)
+			job.Result.Conversation = conv
+			job.Result.AddFinalizer(func(conv []openai.ChatCompletionMessage) {
+				a.saveCurrentConversation(conv)
+			})
+			job.Result.Finish(nil)
+			return
+		}
 	}
 
 	a.reply(job, role, conv, actionParams, chosenAction, reasoning)

View File

@@ -126,8 +126,6 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
-				EnableForceReasoning,
-				WithTimeout("10m"),
 				WithLoopDetectionSteps(3),
 				// WithRandomIdentity(),
 				WithActions(&TestAction{response: map[string]string{
@@ -176,7 +174,7 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
-				WithTimeout("10m"),
 				// WithRandomIdentity(),
 				WithActions(&TestAction{response: map[string]string{
 					"boston": testActionResult,
@@ -201,7 +199,6 @@ var _ = Describe("Agent test", func() {
 			agent, err := New(
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
-				WithTimeout("10m"),
 				EnableHUD,
 				// EnableStandaloneJob,
 				// WithRandomIdentity(),
@@ -238,7 +235,7 @@ var _ = Describe("Agent test", func() {
 			defer agent.Stop()
 
 			result := agent.Ask(
-				types.WithText("Thoroughly plan a trip to San Francisco from Venice, Italy; check flight times, visa requirements and whether electrical items are allowed in cabin luggage."),
+				types.WithText("plan a trip to San Francisco from Venice, Italy"),
 			)
 			Expect(len(result.State)).To(BeNumerically(">", 1))
@@ -260,7 +257,6 @@ var _ = Describe("Agent test", func() {
 				WithLLMAPIURL(apiURL),
 				WithModel(testModel),
 				WithLLMAPIKey(apiKeyURL),
-				WithTimeout("10m"),
 				WithNewConversationSubscriber(func(m openai.ChatCompletionMessage) {
 					mu.Lock()
 					message = m

View File

@@ -82,7 +82,11 @@ Current State:
 - Short-term Memory: {{range .CurrentState.Memories}}{{.}} {{end}}{{end}}
 
 Current Time: {{.Time}}`
 
-const pickSelfTemplate = `
+const pickSelfTemplate = `Available Tools:
+{{range .Actions -}}
+- {{.Name}}: {{.Description }}
+{{ end }}
+
 You are an autonomous AI agent with a defined character and state (as shown above).
 Your task is to evaluate your current situation and determine the best course of action.
@@ -104,21 +108,40 @@ Remember:
 - Keep track of your progress and state
 - Be proactive in addressing potential issues
 
+{{if .Reasoning}}Previous Reasoning: {{.Reasoning}}{{end}}
+` + hudTemplate
+
+const reSelfEvalTemplate = pickSelfTemplate + `
+Previous actions have been executed. Evaluate the current situation:
+
+1. Review the outcomes of previous actions
+2. Assess progress toward your goals
+3. Identify any issues or challenges
+4. Determine if additional actions are needed
+
+Consider:
+- Success of previous actions
+- Changes in the situation
+- New information or insights
+- Potential next steps
+
+Make a decision about whether to:
+- Continue with more actions
+- Provide a final response
+- Adjust your approach
+- Update your goals or state`
+
+const pickActionTemplate = hudTemplate + `
 Available Tools:
 {{range .Actions -}}
 - {{.Name}}: {{.Description }}
 {{ end }}
 
-{{if .Reasoning}}Previous Reasoning: {{.Reasoning}}{{end}}
-` + hudTemplate
-
-const reSelfEvalTemplate = pickSelfTemplate
-
-const pickActionTemplate = hudTemplate + `
-Your only task is to analyze the conversation and determine a goal and the best tool to use, or just a final response if we have fullfilled the goal.
+Task: Analyze the situation and determine the best course of action.
 
 Guidelines:
-1. Review the current state, what was done already and context
+1. Review the current state and context
 2. Consider available tools and their purposes
 3. Plan your approach carefully
 4. Explain your reasoning clearly
@@ -136,11 +159,38 @@ Decision Process:
 4. Explain your reasoning
 5. Execute the chosen action
 
-Available Tools:
-{{range .Actions -}}
-- {{.Name}}: {{.Description }}
-{{ end }}
-
 {{if .Reasoning}}Previous Reasoning: {{.Reasoning}}{{end}}`
 
-const reEvalTemplate = pickActionTemplate
+const reEvalTemplate = pickActionTemplate + `
+Previous actions have been executed. Let's evaluate the current situation:
+
+1. Review Previous Actions:
+- What actions were taken
+- What were the results
+- Any issues or challenges encountered
+
+2. Assess Current State:
+- Progress toward goals
+- Changes in the situation
+- New information or insights
+- Current challenges or opportunities
+
+3. Determine Next Steps:
+- Additional tools needed
+- Final response required
+- Error handling needed
+- Approach adjustments required
+
+4. Decision Making:
+- If task is complete: Use "reply" tool
+- If errors exist: Address them appropriately
+- If more actions needed: Explain why and which tools
+- If situation changed: Adapt your approach
+
+Remember to:
+- Consider all available information
+- Be specific about next steps
+- Explain your reasoning clearly
+- Handle errors appropriately
+- Provide complete responses when done`
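Both commits shuffle these template constants; the tools block itself is plain Go text/template. A minimal sketch of how it renders, with a hypothetical Action struct standing in for whatever renderTemplate feeds as `.Actions`:

```go
package main

import (
	"os"
	"text/template"
)

// Hypothetical stand-in for the action metadata the agent exposes.
type Action struct {
	Name        string
	Description string
}

// The tools block exactly as it appears in the templates above.
const tmpl = `Available Tools:
{{range .Actions -}}
- {{.Name}}: {{.Description }}
{{ end }}`

func main() {
	t := template.Must(template.New("pick").Parse(tmpl))
	data := struct{ Actions []Action }{
		Actions: []Action{
			{Name: "plan", Description: "Use this tool for solving complex tasks that involves calling more tools in sequence."},
			{Name: "reply", Description: "Answer the user directly."},
		},
	}
	// Prints one "- name: description" bullet per action.
	_ = t.Execute(os.Stdout, data)
}
```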

View File

@@ -0,0 +1,75 @@
services:
  localai:
    # See https://localai.io/basics/container/#standard-container-images for
    # a list of available container images (or build your own with the provided Dockerfile)
    # Available images with CUDA, ROCm, SYCL, Vulkan
    # Image list (quay.io): https://quay.io/repository/go-skynet/local-ai?tab=tags
    # Image list (dockerhub): https://hub.docker.com/r/localai/localai
    image: localai/localai:master-sycl-f32-ffmpeg-core
    command:
      # - rombo-org_rombo-llm-v3.0-qwen-32b # minimum suggested model
      - arcee-agent # (smaller)
      - granite-embedding-107m-multilingual
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 60s
      timeout: 10m
      retries: 120
    ports:
      - 8081:8080
    environment:
      - DEBUG=true
      #- LOCALAI_API_KEY=sk-1234567890
    volumes:
      - ./volumes/models:/build/models:cached
      - ./volumes/images:/tmp/generated/images
    devices:
      # On a system with integrated GPU and an Arc 770, this is the Arc 770
      - /dev/dri/card1
      - /dev/dri/renderD129
  localrecall:
    image: quay.io/mudler/localrecall:main
    ports:
      - 8080
    environment:
      - COLLECTION_DB_PATH=/db
      - EMBEDDING_MODEL=granite-embedding-107m-multilingual
      - FILE_ASSETS=/assets
      - OPENAI_API_KEY=sk-1234567890
      - OPENAI_BASE_URL=http://localai:8080
    volumes:
      - ./volumes/localrag/db:/db
      - ./volumes/localrag/assets/:/assets
  localrecall-healthcheck:
    depends_on:
      localrecall:
        condition: service_started
    image: busybox
    command: ["sh", "-c", "until wget -q -O - http://localrecall:8080 > /dev/null 2>&1; do echo 'Waiting for localrecall...'; sleep 1; done; echo 'localrecall is up!'"]
  localagi:
    depends_on:
      localai:
        condition: service_healthy
      localrecall-healthcheck:
        condition: service_completed_successfully
    build:
      context: .
      dockerfile: Dockerfile.webui
    ports:
      - 8080:3000
    image: quay.io/mudler/localagi:master
    environment:
      - LOCALAGI_MODEL=arcee-agent
      - LOCALAGI_LLM_API_URL=http://localai:8080
      #- LOCALAGI_LLM_API_KEY=sk-1234567890
      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
      - LOCALAGI_STATE_DIR=/pool
      - LOCALAGI_TIMEOUT=5m
      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./volumes/localagi/:/pool

docker-compose.gpu.yaml
View File

@@ -0,0 +1,85 @@
services:
  localai:
    # See https://localai.io/basics/container/#standard-container-images for
    # a list of available container images (or build your own with the provided Dockerfile)
    # Available images with CUDA, ROCm, SYCL, Vulkan
    # Image list (quay.io): https://quay.io/repository/go-skynet/local-ai?tab=tags
    # Image list (dockerhub): https://hub.docker.com/r/localai/localai
    image: localai/localai:master-gpu-nvidia-cuda-12
    command:
      - mlabonne_gemma-3-27b-it-abliterated
      - qwen_qwq-32b
      # Other good alternative options:
      # - rombo-org_rombo-llm-v3.0-qwen-32b # minimum suggested model
      # - arcee-agent
      - granite-embedding-107m-multilingual
      - flux.1-dev
      - minicpm-v-2_6
    environment:
      # Enable if you have a single GPU which don't fit all the models
      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
      - DEBUG=true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 10s
      timeout: 20m
      retries: 20
    ports:
      - 8081:8080
    volumes:
      - ./volumes/models:/build/models:cached
      - ./volumes/images:/tmp/generated/images
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  localrecall:
    image: quay.io/mudler/localrecall:main
    ports:
      - 8080
    environment:
      - COLLECTION_DB_PATH=/db
      - EMBEDDING_MODEL=granite-embedding-107m-multilingual
      - FILE_ASSETS=/assets
      - OPENAI_API_KEY=sk-1234567890
      - OPENAI_BASE_URL=http://localai:8080
    volumes:
      - ./volumes/localrag/db:/db
      - ./volumes/localrag/assets/:/assets
  localrecall-healthcheck:
    depends_on:
      localrecall:
        condition: service_started
    image: busybox
    command: ["sh", "-c", "until wget -q -O - http://localrecall:8080 > /dev/null 2>&1; do echo 'Waiting for localrecall...'; sleep 1; done; echo 'localrecall is up!'"]
  localagi:
    depends_on:
      localai:
        condition: service_healthy
      localrecall-healthcheck:
        condition: service_completed_successfully
    build:
      context: .
      dockerfile: Dockerfile.webui
    ports:
      - 8080:3000
    image: quay.io/mudler/localagi:master
    environment:
      - LOCALAGI_MODEL=qwen_qwq-32b
      - LOCALAGI_LLM_API_URL=http://localai:8080
      #- LOCALAGI_LLM_API_KEY=sk-1234567890
      - LOCALAGI_LOCALRAG_URL=http://localrecall:8080
      - LOCALAGI_STATE_DIR=/pool
      - LOCALAGI_TIMEOUT=5m
      - LOCALAGI_ENABLE_CONVERSATIONS_LOGGING=false
      - LOCALAGI_MULTIMODAL_MODEL=minicpm-v-2_6
      - LOCALAGI_IMAGE_MODEL=flux.1-dev
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./volumes/localagi/:/pool
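Assuming the file keeps the name shown above, a typical invocation might look as follows; note that compose expects `-f` before the subcommand:

```bash
# Start the NVIDIA stack defined above, detached
docker compose -f docker-compose.gpu.yaml up -d

# Follow the agent logs, then tear everything down
docker compose -f docker-compose.gpu.yaml logs -f localagi
docker compose -f docker-compose.gpu.yaml down
```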

View File

@@ -1,33 +0,0 @@
services:
  localai:
    extends:
      file: docker-compose.yaml
      service: localai
    environment:
      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
      - DEBUG=true
    image: localai/localai:master-sycl-f32-ffmpeg-core
    devices:
      # On a system with integrated GPU and an Arc 770, this is the Arc 770
      - /dev/dri/card1
      - /dev/dri/renderD129
    command:
      - ${MODEL_NAME:-arcee-agent}
      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
      - ${IMAGE_MODEL:-sd-1.5-ggml}
      - granite-embedding-107m-multilingual
  localrecall:
    extends:
      file: docker-compose.yaml
      service: localrecall
  localrecall-healthcheck:
    extends:
      file: docker-compose.yaml
      service: localrecall-healthcheck
  localagi:
    extends:
      file: docker-compose.yaml
      service: localagi

View File

@@ -1,31 +0,0 @@
services:
  localai:
    extends:
      file: docker-compose.yaml
      service: localai
    environment:
      - LOCALAI_SINGLE_ACTIVE_BACKEND=true
      - DEBUG=true
    image: localai/localai:master-sycl-f32-ffmpeg-core
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  localrecall:
    extends:
      file: docker-compose.yaml
      service: localrecall
  localrecall-healthcheck:
    extends:
      file: docker-compose.yaml
      service: localrecall-healthcheck
  localagi:
    extends:
      file: docker-compose.yaml
      service: localagi

View File

@@ -7,9 +7,7 @@ services:
     # Image list (dockerhub): https://hub.docker.com/r/localai/localai
     image: localai/localai:master-ffmpeg-core
     command:
-      - ${MODEL_NAME:-arcee-agent}
-      - ${MULTIMODAL_MODEL:-minicpm-v-2_6}
-      - ${IMAGE_MODEL:-flux.1-dev}
+      - arcee-agent # (smaller)
       - granite-embedding-107m-multilingual
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
@@ -25,6 +23,14 @@ services:
       - ./volumes/models:/build/models:cached
       - ./volumes/images:/tmp/generated/images
+    # decomment the following piece if running with Nvidia GPUs
+    # deploy:
+    #   resources:
+    #     reservations:
+    #       devices:
+    #         - driver: nvidia
+    #           count: 1
+    #           capabilities: [gpu]
   localrecall:
     image: quay.io/mudler/localrecall:main
     ports:
@@ -59,9 +65,7 @@ services:
       - 8080:3000
     #image: quay.io/mudler/localagi:master
     environment:
-      - LOCALAGI_MODEL=${MODEL_NAME:-arcee-agent}
-      - LOCALAGI_MULTIMODAL_MODEL=${MULTIMODAL_MODEL:-minicpm-v-2_6}
-      - LOCALAGI_IMAGE_MODEL=${IMAGE_MODEL:-sd-1.5-ggml}
+      - LOCALAGI_MODEL=arcee-agent
       - LOCALAGI_LLM_API_URL=http://localai:8080
       #- LOCALAGI_LLM_API_KEY=sk-1234567890
       - LOCALAGI_LOCALRAG_URL=http://localrecall:8080

View File

@@ -26,9 +26,6 @@ const (
 	ActionGithubRepositoryCreateOrUpdate = "github-repository-create-or-update-content"
 	ActionGithubIssueReader              = "github-issue-reader"
 	ActionGithubIssueCommenter           = "github-issue-commenter"
-	ActionGithubPRReader                 = "github-pr-reader"
-	ActionGithubPRCommenter              = "github-pr-commenter"
-	ActionGithubPRReviewer               = "github-pr-reviewer"
 	ActionGithubREADME                   = "github-readme"
 	ActionScraper                        = "scraper"
 	ActionWikipedia                      = "wikipedia"
@@ -52,9 +49,6 @@ var AvailableActions = []string{
 	ActionGithubRepositoryCreateOrUpdate,
 	ActionGithubIssueReader,
 	ActionGithubIssueCommenter,
-	ActionGithubPRReader,
-	ActionGithubPRCommenter,
-	ActionGithubPRReviewer,
 	ActionGithubREADME,
 	ActionScraper,
 	ActionBrowse,
@@ -112,12 +106,6 @@ func Action(name, agentName string, config map[string]string, pool *state.AgentP
 		a = actions.NewGithubIssueSearch(config)
 	case ActionGithubIssueReader:
 		a = actions.NewGithubIssueReader(config)
-	case ActionGithubPRReader:
-		a = actions.NewGithubPRReader(config)
-	case ActionGithubPRCommenter:
-		a = actions.NewGithubPRCommenter(config)
-	case ActionGithubPRReviewer:
-		a = actions.NewGithubPRReviewer(config)
 	case ActionGithubIssueCommenter:
 		a = actions.NewGithubIssueCommenter(config)
 	case ActionGithubRepositoryGet:
@@ -211,21 +199,6 @@ func ActionsConfigMeta() []config.FieldGroup {
 			Label:  "GitHub Repository README",
 			Fields: actions.GithubRepositoryREADMEConfigMeta(),
 		},
-		{
-			Name:   "github-pr-reader",
-			Label:  "GitHub PR Reader",
-			Fields: actions.GithubPRReaderConfigMeta(),
-		},
-		{
-			Name:   "github-pr-commenter",
-			Label:  "GitHub PR Commenter",
-			Fields: actions.GithubPRCommenterConfigMeta(),
-		},
-		{
-			Name:   "github-pr-reviewer",
-			Label:  "GitHub PR Reviewer",
-			Fields: actions.GithubPRReviewerConfigMeta(),
-		},
 		{
 			Name:   "twitter-post",
 			Label:  "Twitter Post",

View File

@@ -1,264 +0,0 @@
package actions

import (
	"context"
	"fmt"
	"regexp"
	"strconv"

	"github.com/google/go-github/v69/github"
	"github.com/mudler/LocalAGI/core/types"
	"github.com/mudler/LocalAGI/pkg/config"
	"github.com/sashabaranov/go-openai/jsonschema"
)

type GithubPRCommenter struct {
	token, repository, owner, customActionName string
	client                                     *github.Client
}

var (
	patchRegex = regexp.MustCompile(`^@@.*\d [\+\-](\d+),?(\d+)?.+?@@`)
)

type commitFileInfo struct {
	FileName  string
	hunkInfos []*hunkInfo
	sha       string
}

type hunkInfo struct {
	hunkStart int
	hunkEnd   int
}

func (hi hunkInfo) isLineInHunk(line int) bool {
	return line >= hi.hunkStart && line <= hi.hunkEnd
}

func (cfi *commitFileInfo) getHunkInfo(line int) *hunkInfo {
	for _, hunkInfo := range cfi.hunkInfos {
		if hunkInfo.isLineInHunk(line) {
			return hunkInfo
		}
	}
	return nil
}

func (cfi *commitFileInfo) isLineInChange(line int) bool {
	return cfi.getHunkInfo(line) != nil
}

func (cfi commitFileInfo) calculatePosition(line int) *int {
	hi := cfi.getHunkInfo(line)
	if hi == nil {
		return nil
	}
	position := line - hi.hunkStart
	return &position
}

func parseHunkPositions(patch, filename string) ([]*hunkInfo, error) {
	hunkInfos := make([]*hunkInfo, 0)
	if patch != "" {
		groups := patchRegex.FindAllStringSubmatch(patch, -1)
		if len(groups) < 1 {
			return hunkInfos, fmt.Errorf("the patch details for [%s] could not be resolved", filename)
		}
		for _, patchGroup := range groups {
			endPos := 2
			if len(patchGroup) > 2 && patchGroup[2] == "" {
				endPos = 1
			}
			hunkStart, err := strconv.Atoi(patchGroup[1])
			if err != nil {
				hunkStart = -1
			}
			hunkEnd, err := strconv.Atoi(patchGroup[endPos])
			if err != nil {
				hunkEnd = -1
			}
			hunkInfos = append(hunkInfos, &hunkInfo{
				hunkStart: hunkStart,
				hunkEnd:   hunkEnd,
			})
		}
	}
	return hunkInfos, nil
}

func getCommitInfo(file *github.CommitFile) (*commitFileInfo, error) {
	patch := file.GetPatch()
	hunkInfos, err := parseHunkPositions(patch, *file.Filename)
	if err != nil {
		return nil, err
	}
	sha := file.GetSHA()
	if sha == "" {
		return nil, fmt.Errorf("the sha details for [%s] could not be resolved", *file.Filename)
	}
	return &commitFileInfo{
		FileName:  *file.Filename,
		hunkInfos: hunkInfos,
		sha:       sha,
	}, nil
}

func NewGithubPRCommenter(config map[string]string) *GithubPRCommenter {
	client := github.NewClient(nil).WithAuthToken(config["token"])
	return &GithubPRCommenter{
		client:           client,
		token:            config["token"],
		customActionName: config["customActionName"],
		repository:       config["repository"],
		owner:            config["owner"],
	}
}

func (g *GithubPRCommenter) Run(ctx context.Context, params types.ActionParams) (types.ActionResult, error) {
	result := struct {
		Repository string `json:"repository"`
		Owner      string `json:"owner"`
		PRNumber   int    `json:"pr_number"`
		Comment    string `json:"comment"`
	}{}
	err := params.Unmarshal(&result)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to unmarshal params: %w", err)
	}
	if g.repository != "" && g.owner != "" {
		result.Repository = g.repository
		result.Owner = g.owner
	}

	// First verify the PR exists and is in a valid state
	pr, _, err := g.client.PullRequests.Get(ctx, result.Owner, result.Repository, result.PRNumber)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to fetch PR #%d: %w", result.PRNumber, err)
	}
	if pr == nil {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d not found in repository %s/%s", result.PRNumber, result.Owner, result.Repository)}, nil
	}

	// Check if PR is in a state that allows comments
	if *pr.State != "open" {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d is not open (current state: %s)", result.PRNumber, *pr.State)}, nil
	}

	if result.Comment == "" {
		return types.ActionResult{Result: "No comment provided"}, nil
	}

	// Try both PullRequests and Issues API for general comments
	var resp *github.Response

	// First try PullRequests API
	_, resp, err = g.client.PullRequests.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.PullRequestComment{
		Body: &result.Comment,
	})

	// If that fails with 403, try Issues API
	if err != nil && resp != nil && resp.StatusCode == 403 {
		_, resp, err = g.client.Issues.CreateComment(ctx, result.Owner, result.Repository, result.PRNumber, &github.IssueComment{
			Body: &result.Comment,
		})
	}
	if err != nil {
		return types.ActionResult{Result: fmt.Sprintf("Error adding general comment: %s", err.Error())}, nil
	}

	return types.ActionResult{
		Result: "Successfully added general comment to pull request",
	}, nil
}

func (g *GithubPRCommenter) Definition() types.ActionDefinition {
	actionName := "comment_github_pr"
	if g.customActionName != "" {
		actionName = g.customActionName
	}
	description := "Add comments to a GitHub pull request, including line-specific feedback. Often used after reading a PR to provide a peer review."
	if g.repository != "" && g.owner != "" {
		return types.ActionDefinition{
			Name:        types.ActionDefinitionName(actionName),
			Description: description,
			Properties: map[string]jsonschema.Definition{
				"pr_number": {
					Type:        jsonschema.Number,
					Description: "The number of the pull request to comment on.",
				},
				"comment": {
					Type:        jsonschema.String,
					Description: "A general comment to add to the pull request.",
				},
			},
			Required: []string{"pr_number", "comment"},
		}
	}
	return types.ActionDefinition{
		Name:        types.ActionDefinitionName(actionName),
		Description: description,
		Properties: map[string]jsonschema.Definition{
			"pr_number": {
				Type:        jsonschema.Number,
				Description: "The number of the pull request to comment on.",
			},
			"repository": {
				Type:        jsonschema.String,
				Description: "The repository containing the pull request.",
			},
			"owner": {
				Type:        jsonschema.String,
				Description: "The owner of the repository.",
			},
			"comment": {
				Type:        jsonschema.String,
				Description: "A general comment to add to the pull request.",
			},
		},
		Required: []string{"pr_number", "repository", "owner", "comment"},
	}
}

func (a *GithubPRCommenter) Plannable() bool {
	return true
}

// GithubPRCommenterConfigMeta returns the metadata for GitHub PR Commenter action configuration fields
func GithubPRCommenterConfigMeta() []config.Field {
	return []config.Field{
		{
			Name:     "token",
			Label:    "GitHub Token",
			Type:     config.FieldTypeText,
			Required: true,
			HelpText: "GitHub API token with repository access",
		},
		{
			Name:     "repository",
			Label:    "Repository",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository name",
		},
		{
			Name:     "owner",
			Label:    "Owner",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository owner",
		},
		{
			Name:     "customActionName",
			Label:    "Custom Action Name",
			Type:     config.FieldTypeText,
			HelpText: "Custom name for this action",
		},
	}
}

View File

@@ -1,188 +0,0 @@
package actions

import (
	"context"
	"fmt"

	"github.com/google/go-github/v69/github"
	"github.com/mudler/LocalAGI/core/types"
	"github.com/mudler/LocalAGI/pkg/config"
	"github.com/sashabaranov/go-openai/jsonschema"
)

type GithubPRReader struct {
	token, repository, owner, customActionName string
	showFullDiff                               bool
	client                                     *github.Client
}

func NewGithubPRReader(config map[string]string) *GithubPRReader {
	client := github.NewClient(nil).WithAuthToken(config["token"])
	showFullDiff := false
	if config["showFullDiff"] == "true" {
		showFullDiff = true
	}
	return &GithubPRReader{
		client:           client,
		token:            config["token"],
		customActionName: config["customActionName"],
		repository:       config["repository"],
		owner:            config["owner"],
		showFullDiff:     showFullDiff,
	}
}

func (g *GithubPRReader) Run(ctx context.Context, params types.ActionParams) (types.ActionResult, error) {
	result := struct {
		Repository string `json:"repository"`
		Owner      string `json:"owner"`
		PRNumber   int    `json:"pr_number"`
	}{}
	err := params.Unmarshal(&result)
	if err != nil {
		return types.ActionResult{}, err
	}
	if g.repository != "" && g.owner != "" {
		result.Repository = g.repository
		result.Owner = g.owner
	}
	pr, _, err := g.client.PullRequests.Get(ctx, result.Owner, result.Repository, result.PRNumber)
	if err != nil {
		return types.ActionResult{Result: fmt.Sprintf("Error fetching pull request: %s", err.Error())}, err
	}
	if pr == nil {
		return types.ActionResult{Result: fmt.Sprintf("No pull request found")}, nil
	}

	// Get the list of changed files
	files, _, err := g.client.PullRequests.ListFiles(ctx, result.Owner, result.Repository, result.PRNumber, &github.ListOptions{})
	if err != nil {
		return types.ActionResult{Result: fmt.Sprintf("Error fetching pull request files: %s", err.Error())}, err
	}

	// Get CI status information
	ciStatus := "\n\nCI Status:\n"

	// Get PR status checks
	checkRuns, _, err := g.client.Checks.ListCheckRunsForRef(ctx, result.Owner, result.Repository, pr.GetHead().GetSHA(), &github.ListCheckRunsOptions{})
	if err == nil && checkRuns != nil {
		ciStatus += fmt.Sprintf("\nPR Status Checks:\n")
		ciStatus += fmt.Sprintf("Total Checks: %d\n", checkRuns.GetTotal())
		for _, check := range checkRuns.CheckRuns {
			ciStatus += fmt.Sprintf("- %s: %s (%s)\n",
				check.GetName(),
				check.GetConclusion(),
				check.GetStatus())
		}
	}

	// Build the file changes summary with patches
	fileChanges := "\n\nFile Changes:\n"
	for _, file := range files {
		fileChanges += fmt.Sprintf("\n--- %s\n+++ %s\n", file.GetFilename(), file.GetFilename())
		if g.showFullDiff && file.GetPatch() != "" {
			fileChanges += file.GetPatch()
		}
		fileChanges += fmt.Sprintf("\n(%d additions, %d deletions)\n", file.GetAdditions(), file.GetDeletions())
	}

	return types.ActionResult{
		Result: fmt.Sprintf(
			"Pull Request %d Repository: %s\nTitle: %s\nBody: %s\nState: %s\nBase: %s\nHead: %s%s%s",
			pr.GetNumber(),
			pr.GetBase().GetRepo().GetFullName(),
			pr.GetTitle(),
			pr.GetBody(),
			pr.GetState(),
			pr.GetBase().GetRef(),
			pr.GetHead().GetRef(),
			ciStatus,
			fileChanges)}, nil
}

func (g *GithubPRReader) Definition() types.ActionDefinition {
	actionName := "read_github_pr"
	if g.customActionName != "" {
		actionName = g.customActionName
	}
	description := "Read a GitHub pull request."
	if g.repository != "" && g.owner != "" {
		return types.ActionDefinition{
			Name:        types.ActionDefinitionName(actionName),
			Description: description,
			Properties: map[string]jsonschema.Definition{
				"pr_number": {
					Type:        jsonschema.Number,
					Description: "The number of the pull request to read.",
				},
			},
			Required: []string{"pr_number"},
		}
	}
	return types.ActionDefinition{
		Name:        types.ActionDefinitionName(actionName),
		Description: description,
		Properties: map[string]jsonschema.Definition{
			"pr_number": {
				Type:        jsonschema.Number,
				Description: "The number of the pull request to read.",
			},
			"repository": {
				Type:        jsonschema.String,
				Description: "The repository containing the pull request.",
			},
			"owner": {
				Type:        jsonschema.String,
				Description: "The owner of the repository.",
			},
		},
		Required: []string{"pr_number", "repository", "owner"},
	}
}

func (a *GithubPRReader) Plannable() bool {
	return true
}

// GithubPRReaderConfigMeta returns the metadata for GitHub PR Reader action configuration fields
func GithubPRReaderConfigMeta() []config.Field {
	return []config.Field{
		{
			Name:     "token",
			Label:    "GitHub Token",
			Type:     config.FieldTypeText,
			Required: true,
			HelpText: "GitHub API token with repository access",
		},
		{
			Name:     "repository",
			Label:    "Repository",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository name",
		},
		{
			Name:     "owner",
			Label:    "Owner",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository owner",
		},
		{
			Name:     "customActionName",
			Label:    "Custom Action Name",
			Type:     config.FieldTypeText,
			HelpText: "Custom name for this action",
		},
		{
			Name:     "showFullDiff",
			Label:    "Show Full Diff",
			Type:     config.FieldTypeCheckbox,
			HelpText: "Whether to show the full diff content or just the summary",
		},
	}
}

View File

@@ -1,286 +0,0 @@
package actions

import (
	"context"
	"fmt"
	"io"
	"strings"

	"github.com/google/go-github/v69/github"
	"github.com/mudler/LocalAGI/core/types"
	"github.com/mudler/LocalAGI/pkg/config"
	"github.com/mudler/LocalAGI/pkg/xlog"
	"github.com/sashabaranov/go-openai/jsonschema"
)

type GithubPRReviewer struct {
	token, repository, owner, customActionName string
	client                                     *github.Client
}

func NewGithubPRReviewer(config map[string]string) *GithubPRReviewer {
	client := github.NewClient(nil).WithAuthToken(config["token"])
	return &GithubPRReviewer{
		client:           client,
		token:            config["token"],
		customActionName: config["customActionName"],
		repository:       config["repository"],
		owner:            config["owner"],
	}
}

func (g *GithubPRReviewer) Run(ctx context.Context, params types.ActionParams) (types.ActionResult, error) {
	result := struct {
		Repository    string `json:"repository"`
		Owner         string `json:"owner"`
		PRNumber      int    `json:"pr_number"`
		ReviewComment string `json:"review_comment"`
		ReviewAction  string `json:"review_action"` // APPROVE, REQUEST_CHANGES, or COMMENT
		Comments      []struct {
			File      string `json:"file"`
			Line      int    `json:"line"`
			Comment   string `json:"comment"`
			StartLine int    `json:"start_line,omitempty"`
		} `json:"comments"`
	}{}
	err := params.Unmarshal(&result)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to unmarshal params: %w", err)
	}
	if g.repository != "" && g.owner != "" {
		result.Repository = g.repository
		result.Owner = g.owner
	}

	// First verify the PR exists and is in a valid state
	pr, _, err := g.client.PullRequests.Get(ctx, result.Owner, result.Repository, result.PRNumber)
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to fetch PR #%d: %w", result.PRNumber, err)
	}
	if pr == nil {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d not found in repository %s/%s", result.PRNumber, result.Owner, result.Repository)}, nil
	}

	// Check if PR is in a state that allows reviews
	if *pr.State != "open" {
		return types.ActionResult{Result: fmt.Sprintf("Pull request #%d is not open (current state: %s)", result.PRNumber, *pr.State)}, nil
	}

	// Get the list of changed files to verify the files exist in the PR
	files, _, err := g.client.PullRequests.ListFiles(ctx, result.Owner, result.Repository, result.PRNumber, &github.ListOptions{})
	if err != nil {
		return types.ActionResult{}, fmt.Errorf("failed to list PR files: %w", err)
	}

	// Create a map of valid files
	validFiles := make(map[string]bool)
	for _, file := range files {
		if *file.Status != "deleted" {
			validFiles[*file.Filename] = true
		}
	}

	// Process each comment
	var reviewComments []*github.DraftReviewComment
	for _, comment := range result.Comments {
		// Check if file exists in PR
		if !validFiles[comment.File] {
			continue
		}
		reviewComment := &github.DraftReviewComment{
			Path: &comment.File,
			Line: &comment.Line,
			Body: &comment.Comment,
		}
		// Set start line if provided
		if comment.StartLine > 0 {
			reviewComment.StartLine = &comment.StartLine
		}
		reviewComments = append(reviewComments, reviewComment)
	}

	// Create the review
	review := &github.PullRequestReviewRequest{
		Event:    &result.ReviewAction,
		Body:     &result.ReviewComment,
		Comments: reviewComments,
	}

	xlog.Debug("[githubprreviewer] review", "review", review)

	// Submit the review
	_, resp, err := g.client.PullRequests.CreateReview(ctx, result.Owner, result.Repository, result.PRNumber, review)
	if err != nil {
		errorDetails := fmt.Sprintf("Error submitting review: %s", err.Error())
		if resp != nil {
			errorDetails += fmt.Sprintf("\nResponse Status: %s", resp.Status)
			if resp.Body != nil {
				body, _ := io.ReadAll(resp.Body)
				errorDetails += fmt.Sprintf("\nResponse Body: %s", string(body))
			}
		}
		return types.ActionResult{Result: errorDetails}, err
	}

	actionResult := fmt.Sprintf(
		"Pull request https://github.com/%s/%s/pull/%d reviewed successfully with status: %s",
		result.Owner,
		result.Repository,
		result.PRNumber,
		strings.ToLower(result.ReviewAction),
	)

	return types.ActionResult{Result: actionResult}, nil
}

func (g *GithubPRReviewer) Definition() types.ActionDefinition {
	actionName := "review_github_pr"
	if g.customActionName != "" {
		actionName = g.customActionName
	}
	description := "Review a GitHub pull request by approving, requesting changes, or commenting."
	if g.repository != "" && g.owner != "" {
		return types.ActionDefinition{
			Name:        types.ActionDefinitionName(actionName),
			Description: description,
			Properties: map[string]jsonschema.Definition{
				"pr_number": {
					Type:        jsonschema.Number,
					Description: "The number of the pull request to review.",
				},
				"review_comment": {
					Type:        jsonschema.String,
					Description: "The main review comment to add to the pull request.",
				},
				"review_action": {
					Type:        jsonschema.String,
					Description: "The type of review to submit (APPROVE, REQUEST_CHANGES, or COMMENT).",
					Enum:        []string{"APPROVE", "REQUEST_CHANGES", "COMMENT"},
				},
				"comments": {
					Type: jsonschema.Array,
					Items: &jsonschema.Definition{
						Type: jsonschema.Object,
						Properties: map[string]jsonschema.Definition{
							"file": {
								Type:        jsonschema.String,
								Description: "The file to comment on.",
							},
							"line": {
								Type:        jsonschema.Number,
								Description: "The line number to comment on.",
							},
							"comment": {
								Type:        jsonschema.String,
								Description: "The comment text.",
							},
							"start_line": {
								Type:        jsonschema.Number,
								Description: "Optional start line for multi-line comments.",
							},
						},
						Required: []string{"file", "line", "comment"},
					},
					Description: "Array of line-specific comments to add to the review.",
				},
			},
			Required: []string{"pr_number", "review_action"},
		}
	}
	return types.ActionDefinition{
		Name:        types.ActionDefinitionName(actionName),
		Description: description,
		Properties: map[string]jsonschema.Definition{
			"pr_number": {
				Type:        jsonschema.Number,
				Description: "The number of the pull request to review.",
			},
			"repository": {
				Type:        jsonschema.String,
				Description: "The repository containing the pull request.",
			},
			"owner": {
				Type:        jsonschema.String,
				Description: "The owner of the repository.",
			},
			"review_comment": {
				Type:        jsonschema.String,
				Description: "The main review comment to add to the pull request.",
			},
			"review_action": {
				Type:        jsonschema.String,
				Description: "The type of review to submit (APPROVE, REQUEST_CHANGES, or COMMENT).",
				Enum:        []string{"APPROVE", "REQUEST_CHANGES", "COMMENT"},
			},
			"comments": {
				Type: jsonschema.Array,
				Items: &jsonschema.Definition{
					Type: jsonschema.Object,
					Properties: map[string]jsonschema.Definition{
						"file": {
							Type:        jsonschema.String,
							Description: "The file to comment on.",
						},
						"line": {
							Type:        jsonschema.Number,
							Description: "The line number to comment on.",
						},
						"comment": {
							Type:        jsonschema.String,
							Description: "The comment text.",
						},
						"start_line": {
							Type:        jsonschema.Number,
							Description: "Optional start line for multi-line comments.",
						},
					},
					Required: []string{"file", "line", "comment"},
				},
				Description: "Array of line-specific comments to add to the review.",
			},
		},
		Required: []string{"pr_number", "repository", "owner", "review_action"},
	}
}

func (a *GithubPRReviewer) Plannable() bool {
	return true
}

// GithubPRReviewerConfigMeta returns the metadata for GitHub PR Reviewer action configuration fields
func GithubPRReviewerConfigMeta() []config.Field {
	return []config.Field{
		{
			Name:     "token",
			Label:    "GitHub Token",
			Type:     config.FieldTypeText,
			Required: true,
			HelpText: "GitHub API token with repository access",
		},
		{
			Name:     "repository",
			Label:    "Repository",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository name",
		},
		{
			Name:     "owner",
			Label:    "Owner",
			Type:     config.FieldTypeText,
			Required: false,
			HelpText: "GitHub repository owner",
		},
		{
			Name:     "customActionName",
			Label:    "Custom Action Name",
			Type:     config.FieldTypeText,
			HelpText: "Custom name for this action",
		},
	}
}
}