Memoh/internal/agent/agent.go
Acbox Liu 43c4153938 feat: introduce DCP pipeline layer for unified context assembly (#329)
* refactor: introduce DCP pipeline layer for unified context assembly

Introduce a Deterministic Context Pipeline (DCP) inspired by Cahciua,
providing event-driven context assembly for LLM conversations.

- Add `internal/pipeline/` package with Canonical Event types, Projection
  (reduce), Rendering (XML RC), Pipeline manager, and EventStore persistence
- Change user message format from YAML front-matter to XML `<message>` tags
  with self-contained attributes (sender, channel, conversation, type)
- Merge CLI/Web dual API into single `/local/` endpoint, remove CLI handler
- Add `bot_session_events` table for event persistence and cold-start replay
- Add `discuss` session type (reserved for future Cahciua-style mode)
- Wire pipeline into HandleInbound: adapt → persist → push on every message
- Lazy cold-start replay: load events from DB on first session access

* feat: implement discuss mode with reactive driver and probe gate

Add discuss session mode where the bot autonomously decides when to speak
in group chats via tool-gated output (send tool only, no direct text reply).

- Add discuss driver (per-session goroutine, RC watch, step loop via
  agent.Generate, TR persistence, late-binding prompt with mention hints)
- Add system_discuss.md prompt template ("text = inner monologue, send = speak")
- Add context composition (MergeContext, ComposeContext, TrimContext) for
  RC + assistant/tool message interleaving by timestamp
- Add probe gate: when discuss_probe_model_id is set, cheap model pre-filters
  group messages; no tool calls = silence, tool calls = activate primary
- Add /new [chat|discuss] command: explicit mode selection, defaults to
  discuss in groups, chat in DMs, chat-only for WebUI
- Add ResolveRunConfig on flow.Resolver for discuss driver to reuse
  model/tools/system-prompt resolution without reimplementing
- Fix send tool for discuss mode: same-conversation sends now go through
  SendDirect (channel adapter) instead of the local emitter shortcut
- Add target attribute to XML message format (reply_target for routing)
- Add discuss_probe_model_id to bots table settings
- Remove pipeline compaction (SetCompactCursor) — reuse existing compaction.Service
- Persist full SDK messages (including tool calls) in discuss mode
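The probe-gate behavior above can be sketched as follows. This is a minimal stand-in, not the real driver: the actual gate runs a cheap model via `agent.Generate` and inspects its tool calls, while here a `model` is just a function and `Result` a toy type:

```go
package main

import "fmt"

// Result stands in for a generation result; only tool calls matter here.
type Result struct{ ToolCalls []string }

type model func(prompt string) Result

// probeGate mirrors the rule above: when a probe model is configured,
// no tool calls from it means silence; any tool call activates the
// primary model. With no probe configured, the primary always runs.
func probeGate(probe, primary model, prompt string) (Result, bool) {
	if probe != nil {
		if r := probe(prompt); len(r.ToolCalls) == 0 {
			return Result{}, false // cheap model saw nothing worth replying to
		}
	}
	return primary(prompt), true
}

func main() {
	silent := func(string) Result { return Result{} }
	chatty := func(string) Result { return Result{ToolCalls: []string{"send"}} }
	_, spoke := probeGate(silent, chatty, "group chatter")
	fmt.Println(spoke) // → false: the probe filtered the message out
}
```

The design keeps the expensive model entirely out of the loop for messages the probe deems irrelevant, which is what makes autonomous participation in busy group chats affordable.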

* refactor: unify DCP event layer, fix persistence and local channel

- Fix bot_session_events dedup index to include event_kind so that
  message + edit events for the same external_message_id coexist.
- Change CreateSessionEvent from :one to :exec so ON CONFLICT DO NOTHING
  does not produce spurious errors on duplicate delivery.
- Move ACL evaluation before event ingest; denied messages no longer
  enter bot_session_events or the in-memory pipeline.
- Let chat mode consume RenderedContext from the DCP pipeline when
  available, sharing the same event-driven context assembly as discuss.
- Collapse local WebSocket handler to route through HandleInbound
  instead of directly calling StreamChatWS, eliminating the dual
  business entry point.
- Extract buildBaseRunConfig shared builder so resolve() and
  ResolveRunConfig() no longer duplicate model/credentials/skills setup.
- Add StoreRound to RunConfigResolver interface so discuss driver
  persists assistant output with full metadata, usage, and memory
  extraction (same quality as chat mode).
- Fix discuss driver context: use context.Background() instead of the
  short-lived HTTP request context that was getting cancelled.
- Fix model ID passed to StoreRound: return database UUID from
  ResolveRunConfig instead of SDK model name.
- Remove dead CLIAdapter/CLIType and update legacy web/cli references
  in tests and comments.

* fix: stop idle discuss goroutines after 10min timeout

Discuss session goroutines were never cleaned up when a session became
inactive (e.g. after /new). Add a 10-minute idle timer that auto-exits
the goroutine and removes it from the sessions map when no new RC
arrives.

* refactor: pipeline details — event types, structured reply, display content

- Remove [User sent N attachments] placeholder text from buildInboundQuery;
  attachment info is now expressed via pipeline <attachment> tags.
- Unify in-reply-to as structured ReplyRef (Sender/Preview fields) across
  Telegram, Discord, Feishu, and Matrix adapters instead of prepending
  [Reply to ...] text into the message body. Remove now-unused
  buildTelegramQuotedText, buildDiscordQuotedText, buildMatrixQuotedText.
- Make AdaptInbound return CanonicalEvent interface and dispatch to
  adaptMessage/adaptEdit/adaptService based on metadata["event_type"].
- Add event_id column to bot_history_messages (migration 0059) so user
  messages can reference their canonical pipeline event.
- PersistEvent now returns the event UUID; HandleInbound passes it through
  to both persistPassiveMessage and ChatRequest.EventID for storeRound.
- Add FillDisplayContent to message service: extracts plain text from
  event_data for clean frontend display.
- Frontend extractMessageText prefers display_content when available,
  falling back to legacy strip logic for old messages.
- Fix: always generate headerifiedQuery for storage even when usePipeline
  is true, so user messages are persisted via storeRound in chat mode.

* fix: use json.Marshal for pipeline context content serialization

The manual string escaping in buildMessagesFromPipeline only handled
double quotes but not newlines, backslashes, and other JSON special
characters, producing invalid json.RawMessage values. The LLM then
received empty/malformed context and complained about having no history.

* fix: restore WebSocket handler to use StreamChatWS directly

The previous refactoring replaced the WS handler with HandleInbound +
RouteHub subscription, which broke streaming because RouteHub events
use a different format (channel.StreamEvent) than what the frontend
expects (flow.WSStreamEvent with text_delta, tool_call_start, etc.).

Restore the original direct StreamChatWS call path so WebUI streaming
works again. The WS handler now matches the pre-refactoring behavior
while all other changes (pipeline, ACL, event types, etc.) are kept.

* feat: store display_text directly in bot_history_messages

Instead of computing display content at API response time by querying
bot_session_events via event_id, store the raw user text in a dedicated
display_text column at write time. This works for all paths including
the WebSocket handler which does not go through the pipeline/event layer.

- Migration 0060: add display_text TEXT column
- PersistInput gains DisplayText; filled from trimmedText (passive) and
  req.Query (storeRound)
- toMessageFields reads display_text into DisplayContent
- Remove FillDisplayContent runtime query and ListSessionEventsByEventID
- Frontend already prefers display_content when available (no change)

* fix: display_text should contain raw user text, not XML-wrapped query

req.Query gets overwritten to headerifiedQuery (with XML <message> tags)
before storeRound runs. Add RawQuery field to ChatRequest to preserve
the original user text, and use it for display_text in storeMessages.
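The ordering problem can be sketched as below. `ChatRequest` and `headerify` here are simplified stand-ins for the real types, showing only why a separate field is needed once `Query` has been rewritten:

```go
package main

import "fmt"

// ChatRequest is a stand-in: Query is rewritten to the XML-wrapped form
// before storage runs, while RawQuery keeps the text the user typed.
type ChatRequest struct {
	Query    string
	RawQuery string
}

// headerify approximates wrapping the query in XML <message> tags.
func headerify(text string) string {
	return `<message sender="user">` + text + `</message>`
}

func main() {
	req := ChatRequest{Query: "hello", RawQuery: "hello"}
	req.Query = headerify(req.Query) // happens before storeRound runs
	fmt.Println(req.RawQuery)        // display_text reads this: → hello
}
```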

* fix(web): show discuss sessions


* chore(feishu): change discuss output to stream card

* fix(channel): unify discuss/chat send path and card markdown delivery

* feat(discuss): switch to stream execution with RouteHub broadcasting

* refactor(pipeline): remove context trimming from ComposeContext

The pipeline path should not trim context by token budget — the
upstream IC/RC already bounds the event window. Remove TrimContext,
FindWorkingWindowCursor, EstimateTokens, FormatLastProcessedMs (all
unused or only used for trimming), the maxTokens parameter from
ComposeContext, and MaxContextTokens from DiscussSessionConfig.

---------

Co-authored-by: 晨苒 <16112591+chen-ran@users.noreply.github.com>
2026-04-06 21:56:25 +08:00


package agent

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"log/slog"
	"strings"

	sdk "github.com/memohai/twilight-ai/sdk"

	"github.com/memohai/memoh/internal/agent/tools"
	"github.com/memohai/memoh/internal/models"
	"github.com/memohai/memoh/internal/workspace/bridge"
)
// Agent is the core agent that handles LLM interactions.
type Agent struct {
	client         *sdk.Client
	toolProviders  []tools.ToolProvider
	bridgeProvider bridge.Provider
	logger         *slog.Logger
}

// New creates a new Agent with the given dependencies.
func New(deps Deps) *Agent {
	logger := deps.Logger
	if logger == nil {
		logger = slog.Default()
	}
	return &Agent{
		client:         sdk.NewClient(),
		bridgeProvider: deps.BridgeProvider,
		logger:         logger.With(slog.String("service", "agent")),
	}
}

// BridgeProvider returns the underlying bridge provider (workspace manager).
func (a *Agent) BridgeProvider() bridge.Provider {
	return a.bridgeProvider
}

// SetToolProviders sets the tool providers after construction.
// This allows breaking dependency cycles in the DI graph.
func (a *Agent) SetToolProviders(providers []tools.ToolProvider) {
	a.toolProviders = providers
}

// Stream runs the agent in streaming mode, emitting events to the returned channel.
func (a *Agent) Stream(ctx context.Context, cfg RunConfig) <-chan StreamEvent {
	ch := make(chan StreamEvent)
	go func() {
		defer close(ch)
		a.runStream(ctx, cfg, ch)
	}()
	return ch
}

// Generate runs the agent in non-streaming mode, returning the complete result.
func (a *Agent) Generate(ctx context.Context, cfg RunConfig) (*GenerateResult, error) {
	return a.runGenerate(ctx, cfg)
}
func (a *Agent) runStream(ctx context.Context, cfg RunConfig, ch chan<- StreamEvent) {
	// Stream emitter: tools targeting the current conversation push
	// side-effect events (attachments, reactions, speech) directly here.
	streamEmitter := tools.StreamEmitter(func(evt tools.ToolStreamEvent) {
		ch <- toolStreamEventToAgentEvent(evt)
	})

	var sdkTools []sdk.Tool
	if cfg.SupportsToolCall {
		var err error
		sdkTools, err = a.assembleTools(ctx, cfg, streamEmitter)
		if err != nil {
			ch <- StreamEvent{Type: EventError, Error: fmt.Sprintf("assemble tools: %v", err)}
			return
		}
	}
	sdkTools, readMediaState := decorateReadMediaTools(cfg.Model, sdkTools)

	// Loop detection setup
	var textLoopGuard *TextLoopGuard
	var textLoopProbeBuffer *TextLoopProbeBuffer
	var toolLoopGuard *ToolLoopGuard
	toolLoopAbortCallIDs := make(map[string]struct{})
	if cfg.LoopDetection.Enabled {
		textLoopGuard = NewTextLoopGuard(LoopDetectedStreakThreshold, LoopDetectedMinNewGramsPerChunk, SentialOptions{})
		textLoopProbeBuffer = NewTextLoopProbeBuffer(LoopDetectedProbeChars, func(text string) {
			result := textLoopGuard.Inspect(text)
			if result.Abort {
				a.logger.Warn("text loop detected, will abort")
			}
		})
		toolLoopGuard = NewToolLoopGuard(ToolLoopRepeatThreshold, ToolLoopWarningsBeforeAbort)
	}

	// Wrap tools with loop detection
	if toolLoopGuard != nil {
		sdkTools = wrapToolsWithLoopGuard(sdkTools, toolLoopGuard, toolLoopAbortCallIDs)
	}

	var prepareStep func(*sdk.GenerateParams) *sdk.GenerateParams
	if readMediaState != nil {
		prepareStep = readMediaState.prepareStep
	}

	initialMsgCount := len(cfg.Messages)
	if cfg.InjectCh != nil {
		basePrepare := prepareStep
		prepareStep = func(p *sdk.GenerateParams) *sdk.GenerateParams {
			if basePrepare != nil {
				if override := basePrepare(p); override != nil {
					p = override
				}
			}
			for {
				select {
				case injected, ok := <-cfg.InjectCh:
					if !ok {
						break
					}
					text := strings.TrimSpace(injected.HeaderifiedText)
					if text == "" {
						text = strings.TrimSpace(injected.Text)
					}
					if text != "" {
						insertAfter := len(p.Messages) - initialMsgCount
						p.Messages = append(p.Messages, sdk.UserMessage(text))
						if cfg.InjectedRecorder != nil {
							cfg.InjectedRecorder(text, insertAfter)
						}
						a.logger.Info("injected user message into agent stream",
							slog.String("bot_id", cfg.Identity.BotID),
							slog.Int("insert_after", insertAfter),
						)
					}
					continue
				default:
				}
				break
			}
			return p
		}
	}
	opts := a.buildGenerateOptions(cfg, sdkTools, prepareStep)
	streamResult, err := a.client.StreamText(ctx, opts...)
	if err != nil {
		ch <- StreamEvent{Type: EventError, Error: fmt.Sprintf("stream start: %v", err)}
		return
	}
	ch <- StreamEvent{Type: EventAgentStart}

	var allText strings.Builder
	aborted := false
	for part := range streamResult.Stream {
		if ctx.Err() != nil {
			aborted = true
			break
		}
		switch p := part.(type) {
		case *sdk.StartPart:
			_ = p // stream start already emitted
		case *sdk.TextStartPart:
			ch <- StreamEvent{Type: EventTextStart}
		case *sdk.TextDeltaPart:
			if p.Text != "" {
				if textLoopProbeBuffer != nil {
					textLoopProbeBuffer.Push(p.Text)
				}
				ch <- StreamEvent{Type: EventTextDelta, Delta: p.Text}
				allText.WriteString(p.Text)
			}
		case *sdk.TextEndPart:
			if textLoopProbeBuffer != nil {
				textLoopProbeBuffer.Flush()
			}
			ch <- StreamEvent{Type: EventTextEnd}
		case *sdk.ReasoningStartPart:
			ch <- StreamEvent{Type: EventReasoningStart}
		case *sdk.ReasoningDeltaPart:
			ch <- StreamEvent{Type: EventReasoningDelta, Delta: p.Text}
		case *sdk.ReasoningEndPart:
			ch <- StreamEvent{Type: EventReasoningEnd}
		case *sdk.StreamToolCallPart:
			if textLoopProbeBuffer != nil {
				textLoopProbeBuffer.Flush()
			}
			ch <- StreamEvent{
				Type:       EventToolCallStart,
				ToolName:   p.ToolName,
				ToolCallID: p.ToolCallID,
				Input:      p.Input,
			}
		case *sdk.StreamToolResultPart:
			shouldAbort := false
			if _, ok := toolLoopAbortCallIDs[p.ToolCallID]; ok {
				delete(toolLoopAbortCallIDs, p.ToolCallID)
				shouldAbort = true
			}
			ch <- StreamEvent{
				Type:       EventToolCallEnd,
				ToolName:   p.ToolName,
				ToolCallID: p.ToolCallID,
				Input:      p.Input,
				Result:     p.Output,
			}
			if shouldAbort {
				a.logger.Warn("tool loop abort triggered", slog.String("tool_call_id", p.ToolCallID))
				aborted = true
			}
		case *sdk.StreamToolErrorPart:
			ch <- StreamEvent{
				Type:       EventToolCallEnd,
				ToolName:   p.ToolName,
				ToolCallID: p.ToolCallID,
				Error:      p.Error.Error(),
			}
		case *sdk.StreamFilePart:
			mediaType := p.File.MediaType
			if mediaType == "" {
				mediaType = "image/png"
			}
			ch <- StreamEvent{
				Type: EventAttachment,
				Attachments: []FileAttachment{{
					Type: "image",
					URL:  fmt.Sprintf("data:%s;base64,%s", mediaType, p.File.Data),
					Mime: mediaType,
				}},
			}
		case *sdk.ErrorPart:
			ch <- StreamEvent{Type: EventError, Error: p.Error.Error()}
			aborted = true
		case *sdk.AbortPart:
			aborted = true
		case *sdk.FinishPart:
			// handled after loop
		}
		if aborted {
			break
		}
	}
	if textLoopProbeBuffer != nil {
		textLoopProbeBuffer.Flush()
	}

	finalMessages := streamResult.Messages
	if readMediaState != nil {
		finalMessages = readMediaState.mergeMessages(streamResult.Steps, finalMessages)
	}

	var totalUsage sdk.Usage
	for _, step := range streamResult.Steps {
		totalUsage.InputTokens += step.Usage.InputTokens
		totalUsage.OutputTokens += step.Usage.OutputTokens
		totalUsage.TotalTokens += step.Usage.TotalTokens
		totalUsage.ReasoningTokens += step.Usage.ReasoningTokens
		totalUsage.CachedInputTokens += step.Usage.CachedInputTokens
		totalUsage.InputTokenDetails.NoCacheTokens += step.Usage.InputTokenDetails.NoCacheTokens
		totalUsage.InputTokenDetails.CacheReadTokens += step.Usage.InputTokenDetails.CacheReadTokens
		totalUsage.InputTokenDetails.CacheWriteTokens += step.Usage.InputTokenDetails.CacheWriteTokens
		totalUsage.OutputTokenDetails.TextTokens += step.Usage.OutputTokenDetails.TextTokens
		totalUsage.OutputTokenDetails.ReasoningTokens += step.Usage.OutputTokenDetails.ReasoningTokens
	}
	usageJSON, _ := json.Marshal(totalUsage)

	termEvent := StreamEvent{
		Messages: mustMarshal(finalMessages),
		Usage:    usageJSON,
	}
	if aborted {
		termEvent.Type = EventAgentAbort
	} else {
		termEvent.Type = EventAgentEnd
	}
	ch <- termEvent
}
func (a *Agent) runGenerate(ctx context.Context, cfg RunConfig) (*GenerateResult, error) {
	// Collecting emitter: tools push side-effect events here during generation.
	var collected []tools.ToolStreamEvent
	collectEmitter := tools.StreamEmitter(func(evt tools.ToolStreamEvent) {
		collected = append(collected, evt)
	})

	var sdkTools []sdk.Tool
	if cfg.SupportsToolCall {
		var err error
		sdkTools, err = a.assembleTools(ctx, cfg, collectEmitter)
		if err != nil {
			return nil, fmt.Errorf("assemble tools: %w", err)
		}
	}
	sdkTools, readMediaState := decorateReadMediaTools(cfg.Model, sdkTools)

	var toolLoopGuard *ToolLoopGuard
	var textLoopGuard *TextLoopGuard
	toolLoopAbortCallIDs := make(map[string]struct{})
	if cfg.LoopDetection.Enabled {
		toolLoopGuard = NewToolLoopGuard(ToolLoopRepeatThreshold, ToolLoopWarningsBeforeAbort)
		textLoopGuard = NewTextLoopGuard(LoopDetectedStreakThreshold, LoopDetectedMinNewGramsPerChunk, SentialOptions{})
	}
	if toolLoopGuard != nil {
		sdkTools = wrapToolsWithLoopGuard(sdkTools, toolLoopGuard, toolLoopAbortCallIDs)
	}

	var prepareStep func(*sdk.GenerateParams) *sdk.GenerateParams
	if readMediaState != nil {
		prepareStep = readMediaState.prepareStep
	}

	opts := a.buildGenerateOptions(cfg, sdkTools, prepareStep)
	opts = append(opts,
		sdk.WithOnStep(func(step *sdk.StepResult) *sdk.GenerateParams {
			if cfg.LoopDetection.Enabled {
				if len(toolLoopAbortCallIDs) > 0 {
					return nil // stop
				}
				if textLoopGuard != nil && isNonEmptyString(step.Text) {
					result := textLoopGuard.Inspect(step.Text)
					if result.Abort {
						return nil // stop
					}
				}
			}
			return nil
		}),
	)

	genResult, err := a.client.GenerateTextResult(ctx, opts...)
	if err != nil {
		return nil, fmt.Errorf("generate: %w", err)
	}
	// Drain collected tool-emitted side effects into the result.
	var attachments []FileAttachment
	var reactions []ReactionItem
	var speeches []SpeechItem
	for _, evt := range collected {
		switch evt.Type {
		case tools.StreamEventAttachment:
			for _, att := range evt.Attachments {
				attachments = append(attachments, FileAttachment{
					Type: att.Type, Path: att.Path, URL: att.URL,
					Mime: att.Mime, Name: att.Name,
					ContentHash: att.ContentHash, Size: att.Size,
					Metadata: att.Metadata,
				})
			}
		case tools.StreamEventReaction:
			for _, r := range evt.Reactions {
				reactions = append(reactions, ReactionItem{Emoji: r.Emoji})
			}
		case tools.StreamEventSpeech:
			for _, s := range evt.Speeches {
				speeches = append(speeches, SpeechItem{Text: s.Text})
			}
		}
	}

	finalMessages := genResult.Messages
	if readMediaState != nil {
		finalMessages = readMediaState.mergeMessages(genResult.Steps, finalMessages)
	}
	return &GenerateResult{
		Messages:    finalMessages,
		Text:        genResult.Text,
		Attachments: attachments,
		Reactions:   reactions,
		Speeches:    speeches,
		Usage:       &genResult.Usage,
	}, nil
}
func (*Agent) buildGenerateOptions(cfg RunConfig, tools []sdk.Tool, prepareStep func(*sdk.GenerateParams) *sdk.GenerateParams) []sdk.GenerateOption {
	opts := []sdk.GenerateOption{
		sdk.WithModel(cfg.Model),
		sdk.WithMessages(cfg.Messages),
		sdk.WithSystem(cfg.System),
		sdk.WithMaxSteps(-1),
	}
	if len(tools) > 0 && cfg.SupportsToolCall {
		opts = append(opts, sdk.WithTools(tools))
	}
	if prepareStep != nil {
		opts = append(opts, sdk.WithPrepareStep(prepareStep))
	}
	opts = append(opts, models.BuildReasoningOptions(models.SDKModelConfig{
		ClientType: models.ResolveClientType(cfg.Model),
		ReasoningConfig: &models.ReasoningConfig{
			Enabled: cfg.ReasoningEffort != "",
			Effort:  cfg.ReasoningEffort,
		},
	})...)
	return opts
}
// assembleTools collects tools from all registered ToolProviders.
// emitter is injected into the session context so that tools targeting the
// current conversation can push side-effect events (attachments, reactions,
// speech) directly into the agent stream.
func (a *Agent) assembleTools(ctx context.Context, cfg RunConfig, emitter tools.StreamEmitter) ([]sdk.Tool, error) {
	if len(a.toolProviders) == 0 {
		return nil, nil
	}
	skillsMap := make(map[string]tools.SkillDetail, len(cfg.Skills))
	for _, s := range cfg.Skills {
		skillsMap[s.Name] = tools.SkillDetail{
			Description: s.Description,
			Content:     s.Content,
		}
	}
	session := tools.SessionContext{
		BotID:              cfg.Identity.BotID,
		ChatID:             cfg.Identity.ChatID,
		SessionID:          cfg.Identity.SessionID,
		SessionType:        cfg.SessionType,
		ChannelIdentityID:  cfg.Identity.ChannelIdentityID,
		SessionToken:       cfg.Identity.SessionToken,
		CurrentPlatform:    cfg.Identity.CurrentPlatform,
		ReplyTarget:        cfg.Identity.ReplyTarget,
		SupportsImageInput: cfg.SupportsImageInput,
		IsSubagent:         cfg.Identity.IsSubagent,
		Skills:             skillsMap,
		TimezoneLocation:   cfg.Identity.TimezoneLocation,
		Emitter:            emitter,
	}
	var allTools []sdk.Tool
	for _, provider := range a.toolProviders {
		providerTools, err := provider.Tools(ctx, session)
		if err != nil {
			a.logger.Warn("tool provider failed", slog.Any("error", err))
			continue
		}
		allTools = append(allTools, providerTools...)
	}
	return allTools, nil
}
// toolStreamEventToAgentEvent converts a tool-layer ToolStreamEvent into an
// agent-layer StreamEvent suitable for the output channel.
func toolStreamEventToAgentEvent(evt tools.ToolStreamEvent) StreamEvent {
	switch evt.Type {
	case tools.StreamEventAttachment:
		atts := make([]FileAttachment, 0, len(evt.Attachments))
		for _, a := range evt.Attachments {
			atts = append(atts, FileAttachment{
				Type: a.Type, Path: a.Path, URL: a.URL,
				Mime: a.Mime, Name: a.Name,
				ContentHash: a.ContentHash, Size: a.Size,
				Metadata: a.Metadata,
			})
		}
		return StreamEvent{Type: EventAttachment, Attachments: atts}
	case tools.StreamEventReaction:
		rs := make([]ReactionItem, 0, len(evt.Reactions))
		for _, r := range evt.Reactions {
			rs = append(rs, ReactionItem{Emoji: r.Emoji})
		}
		return StreamEvent{Type: EventReaction, Reactions: rs}
	case tools.StreamEventSpeech:
		ss := make([]SpeechItem, 0, len(evt.Speeches))
		for _, s := range evt.Speeches {
			ss = append(ss, SpeechItem{Text: s.Text})
		}
		return StreamEvent{Type: EventSpeech, Speeches: ss}
	default:
		return StreamEvent{}
	}
}
func wrapToolsWithLoopGuard(tools []sdk.Tool, guard *ToolLoopGuard, abortCallIDs map[string]struct{}) []sdk.Tool {
	wrapped := make([]sdk.Tool, len(tools))
	for i, tool := range tools {
		originalExecute := tool.Execute
		toolName := tool.Name
		wrapped[i] = tool
		wrapped[i].Execute = func(ctx *sdk.ToolExecContext, input any) (any, error) {
			warn, abort := guard.Guard(toolName, input)
			if abort {
				abortCallIDs[ctx.ToolCallID] = struct{}{}
				return map[string]any{
					"isError": true,
					"content": []map[string]any{{
						"type": "text",
						"text": ToolLoopDetectedAbortMessage,
					}},
				}, errors.New(ToolLoopDetectedAbortMessage)
			}
			if warn {
				return map[string]any{
					ToolLoopWarningKey: true,
					"content": []map[string]any{{
						"type": "text",
						"text": ToolLoopWarningText,
					}},
				}, nil
			}
			return originalExecute(ctx, input)
		}
	}
	return wrapped
}