On fresh databases, 0001_init.up.sql creates providers/provider_id
(not llm_providers/llm_provider_id). Migrations 0013, 0041, 0046, 0047
referenced the old names without guards, causing CI migration failures.
- 0013: check llm_provider_id column exists before adding old constraint
- 0041: check llm_providers table exists before backfill/constraint DDL
- 0046: wrap CREATE TABLE in DO block with llm_providers existence check
- 0047: use ALTER TABLE IF EXISTS + DO block guard
0001_init.up.sql still used old names (llm_providers, llm_provider_id)
and included dropped tts_providers/tts_models tables. sqlc could not
parse the PL/pgSQL EXECUTE in migration 0061, so generated code retained
stale columns (input_modalities, supports_reasoning), causing runtime
"column does not exist" errors when adding models.
- Update 0001_init.up.sql to current schema (providers, provider_id,
no tts tables, add provider_oauth_tokens)
- Use ALTER TABLE IF EXISTS in 0010/0041/0042 for backward compat
- Regenerate sqlc
PL/pgSQL pre-validates column/table references in static SQL statements
inside DO blocks before evaluating IF/RETURN guards. This caused
migrations 0010-0061 to fail on fresh databases where the canonical
schema uses `providers`/`provider_id` instead of `llm_providers`/
`llm_provider_id`.
Wrap all SQL that references potentially non-existent old schema objects
(llm_providers, llm_provider_id, tts_providers, tts_models, etc.) in
EXECUTE strings so they are only parsed at runtime when actually reached.
Replace '%-speech' pattern with explicit IN ('edge-speech') for both
ListProviders (exclusion) and ListSpeechProviders (inclusion). New
speech client types must be added to both queries.
ListProviders now filters out client_type matching '%-speech' so Edge
and future speech providers no longer appear on the Providers page.
ListSpeechProviders uses the same pattern match instead of hard-coding
'edge-speech'.
- Rename `llm_providers` → `providers`, `llm_provider_oauth_tokens` → `provider_oauth_tokens`
- Remove `tts_providers` and `tts_models` tables; speech models now live in the unified `models` table with `type = 'speech'`
- Replace top-level `api_key`/`base_url` columns with a JSONB `config` field on `providers`
- Rename `llm_provider_id` → `provider_id` across all references
- Add `edge-speech` client type and `conf/providers/edge.yaml` default provider
- Create new read-only speech endpoints (`/speech-providers`, `/speech-models`) backed by filtered views of the unified tables
- Remove old TTS CRUD handlers; simplify speech page to read-only + test
- Update registry loader to skip malformed YAML files instead of failing entirely
- Fix YAML quoting for model names containing colons in openrouter.yaml
- Regenerate sqlc, swagger, and TypeScript SDK
* refactor: introduce DCP pipeline layer for unified context assembly
Introduce a Deterministic Context Pipeline (DCP) inspired by Cahciua,
providing event-driven context assembly for LLM conversations.
- Add `internal/pipeline/` package with Canonical Event types, Projection
(reduce), Rendering (XML RC), Pipeline manager, and EventStore persistence
- Change user message format from YAML front-matter to XML `<message>` tags
with self-contained attributes (sender, channel, conversation, type)
- Merge CLI/Web dual API into single `/local/` endpoint, remove CLI handler
- Add `bot_session_events` table for event persistence and cold-start replay
- Add `discuss` session type (reserved for future Cahciua-style mode)
- Wire pipeline into HandleInbound: adapt → persist → push on every message
- Lazy cold-start replay: load events from DB on first session access
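To make the XML `<message>` format above concrete, here is a minimal Go sketch of
rendering one inbound message as a self-contained tag; the attribute names
(sender, channel, conversation, type) come from this change, while the struct and
function are illustrative rather than the actual pipeline types.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// inboundMessage is a hypothetical stand-in for a canonical pipeline event;
// the attribute set mirrors the self-contained <message> tag described above.
type inboundMessage struct {
	XMLName      xml.Name `xml:"message"`
	Sender       string   `xml:"sender,attr"`
	Channel      string   `xml:"channel,attr"`
	Conversation string   `xml:"conversation,attr"`
	Type         string   `xml:"type,attr"`
	Body         string   `xml:",chardata"`
}

func main() {
	msg := inboundMessage{
		Sender:       "alice",
		Channel:      "telegram",
		Conversation: "group:42",
		Type:         "message",
		Body:         "what did we decide yesterday?",
	}
	out, err := xml.MarshalIndent(msg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Produces a single <message ...>...</message> element instead of YAML front-matter.
	fmt.Println(string(out))
}
```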
* feat: implement discuss mode with reactive driver and probe gate
Add discuss session mode where the bot autonomously decides when to speak
in group chats via tool-gated output (send tool only, no direct text reply).
- Add discuss driver (per-session goroutine, RC watch, step loop via
agent.Generate, TR persistence, late-binding prompt with mention hints)
- Add system_discuss.md prompt template ("text = inner monologue, send = speak")
- Add context composition (MergeContext, ComposeContext, TrimContext) for
RC + assistant/tool message interleaving by timestamp
- Add probe gate: when discuss_probe_model_id is set, cheap model pre-filters
group messages; no tool calls = silence, tool calls = activate primary
- Add /new [chat|discuss] command: explicit mode selection, defaults to
discuss in groups, chat in DMs, chat-only for WebUI
- Add ResolveRunConfig on flow.Resolver for discuss driver to reuse
model/tools/system-prompt resolution without reimplementing
- Fix send tool for discuss mode: same-conversation sends now go through
SendDirect (channel adapter) instead of the local emitter shortcut
- Add target attribute to XML message format (reply_target for routing)
- Add discuss_probe_model_id to bots table settings
- Remove pipeline compaction (SetCompactCursor) — reuse existing compaction.Service
- Persist full SDK messages (including tool calls) in discuss mode
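The probe gate described above can be sketched roughly as follows in Go; generate
and the model IDs are hypothetical stand-ins for the agent.Generate call and the
resolved discuss_probe_model_id / primary model, not real signatures.

```go
package discussprobe

// toolCall and generate are illustrative placeholders; the real driver goes
// through agent.Generate with the resolved run config.
type toolCall struct{ Name string }

func generate(modelID, prompt string, tools []string) []toolCall {
	// ... call the model with only the send tool exposed ...
	return nil
}

// maybeRespond applies the probe gate: when discuss_probe_model_id is set,
// the cheap probe model sees the group message first; only if it emits a
// tool call does the primary model run a full discuss step.
func maybeRespond(probeModelID, primaryModelID, prompt string) {
	if probeModelID != "" {
		if calls := generate(probeModelID, prompt, []string{"send"}); len(calls) == 0 {
			return // probe stayed silent: do not wake the primary model
		}
	}
	// Probe fired (or no probe configured): run the primary discuss step.
	_ = generate(primaryModelID, prompt, []string{"send"})
}
```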
* refactor: unify DCP event layer, fix persistence and local channel
- Fix bot_session_events dedup index to include event_kind so that
message + edit events for the same external_message_id coexist.
- Change CreateSessionEvent from :one to :exec so ON CONFLICT DO NOTHING
does not produce spurious errors on duplicate delivery.
- Move ACL evaluation before event ingest; denied messages no longer
enter bot_session_events or the in-memory pipeline.
- Let chat mode consume RenderedContext from the DCP pipeline when
available, sharing the same event-driven context assembly as discuss.
- Collapse local WebSocket handler to route through HandleInbound
instead of directly calling StreamChatWS, eliminating the dual
business entry point.
- Extract buildBaseRunConfig shared builder so resolve() and
ResolveRunConfig() no longer duplicate model/credentials/skills setup.
- Add StoreRound to RunConfigResolver interface so discuss driver
persists assistant output with full metadata, usage, and memory
extraction (same quality as chat mode).
- Fix discuss driver context: use context.Background() instead of the
short-lived HTTP request context that was getting cancelled.
- Fix model ID passed to StoreRound: return database UUID from
ResolveRunConfig instead of SDK model name.
- Remove dead CLIAdapter/CLIType and update legacy web/cli references
in tests and comments.
* fix: stop idle discuss goroutines after 10min timeout
Discuss session goroutines were never cleaned up when a session became
inactive (e.g. after /new). Add a 10-minute idle timer that auto-exits
the goroutine and removes it from the sessions map when no new RC
arrives.
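A minimal Go sketch of the idle-timeout pattern, assuming a hypothetical sessions
map and RC update channel rather than the real discuss driver types:

```go
package discussidle

import (
	"sync"
	"time"
)

const idleTimeout = 10 * time.Minute

type manager struct {
	mu       sync.Mutex
	sessions map[string]chan string // hypothetical: session ID -> RC updates
}

// run is the per-session goroutine: it exits and removes itself from the
// sessions map when no new rendered context arrives within idleTimeout.
func (m *manager) run(sessionID string, updates chan string) {
	timer := time.NewTimer(idleTimeout)
	defer timer.Stop()
	for {
		select {
		case rc := <-updates:
			_ = rc // ... run a discuss step on the new rendered context ...
			if !timer.Stop() {
				<-timer.C // drain before reuse, per time.Timer docs
			}
			timer.Reset(idleTimeout)
		case <-timer.C:
			m.mu.Lock()
			delete(m.sessions, sessionID) // forget the idle session
			m.mu.Unlock()
			return
		}
	}
}
```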
* refactor: pipeline details — event types, structured reply, display content
- Remove [User sent N attachments] placeholder text from buildInboundQuery;
attachment info is now expressed via pipeline <attachment> tags.
- Unify in-reply-to as structured ReplyRef (Sender/Preview fields) across
Telegram, Discord, Feishu, and Matrix adapters instead of prepending
[Reply to ...] text into the message body. Remove now-unused
buildTelegramQuotedText, buildDiscordQuotedText, buildMatrixQuotedText.
- Make AdaptInbound return CanonicalEvent interface and dispatch to
adaptMessage/adaptEdit/adaptService based on metadata["event_type"].
- Add event_id column to bot_history_messages (migration 0059) so user
messages can reference their canonical pipeline event.
- PersistEvent now returns the event UUID; HandleInbound passes it through
to both persistPassiveMessage and ChatRequest.EventID for storeRound.
- Add FillDisplayContent to message service: extracts plain text from
event_data for clean frontend display.
- Frontend extractMessageText prefers display_content when available,
falling back to legacy strip logic for old messages.
- Fix: always generate headerifiedQuery for storage even when usePipeline
is true, so user messages are persisted via storeRound in chat mode.
* fix: use json.Marshal for pipeline context content serialization
The manual string escaping in buildMessagesFromPipeline only handled
double quotes but not newlines, backslashes, and other JSON special
characters, producing invalid json.RawMessage values. The LLM then
received empty/malformed context and complained about having no history.
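For illustration, the difference is roughly the following: hand-rolled quoting
only escapes double quotes, while json.Marshal escapes every special character
and always yields a valid json.RawMessage (variable names here are illustrative,
not the actual buildMessagesFromPipeline code).

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	content := "line one\nline two with \"quotes\" and a backslash \\"

	// Broken approach: only escapes double quotes, so the raw newline and
	// trailing backslash above yield an invalid json.RawMessage.
	manual := `"` + strings.ReplaceAll(content, `"`, `\"`) + `"`
	fmt.Println("manual valid JSON?", json.Valid([]byte(manual))) // false

	// Fixed approach: json.Marshal escapes every special character.
	marshalled, err := json.Marshal(content)
	if err != nil {
		panic(err)
	}
	raw := json.RawMessage(marshalled)
	fmt.Println("marshal valid JSON?", json.Valid(raw)) // true
}
```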
* fix: restore WebSocket handler to use StreamChatWS directly
The previous refactoring replaced the WS handler with HandleInbound +
RouteHub subscription, which broke streaming because RouteHub events
use a different format (channel.StreamEvent) than what the frontend
expects (flow.WSStreamEvent with text_delta, tool_call_start, etc.).
Restore the original direct StreamChatWS call path so WebUI streaming
works again. The WS handler now matches the pre-refactoring behavior
while all other changes (pipeline, ACL, event types, etc.) are kept.
* feat: store display_text directly in bot_history_messages
Instead of computing display content at API response time by querying
bot_session_events via event_id, store the raw user text in a dedicated
display_text column at write time. This works for all paths including
the WebSocket handler which does not go through the pipeline/event layer.
- Migration 0060: add display_text TEXT column
- PersistInput gains DisplayText; filled from trimmedText (passive) and
req.Query (storeRound)
- toMessageFields reads display_text into DisplayContent
- Remove FillDisplayContent runtime query and ListSessionEventsByEventID
- Frontend already prefers display_content when available (no change)
* fix: display_text should contain raw user text, not XML-wrapped query
req.Query gets overwritten to headerifiedQuery (with XML <message> tags)
before storeRound runs. Add RawQuery field to ChatRequest to preserve
the original user text, and use it for display_text in storeMessages.
* fix(web): show discuss sessions
* chore(feishu): change discuss output to stream card
* fix(channel): unify discuss/chat send path and card markdown delivery
* feat(discuss): switch to stream execution with RouteHub broadcasting
* refactor(pipeline): remove context trimming from ComposeContext
The pipeline path should not trim context by token budget — the
upstream IC/RC already bounds the event window. Remove TrimContext,
FindWorkingWindowCursor, EstimateTokens, FormatLastProcessedMs (all
unused or only used for trimming), the maxTokens parameter from
ComposeContext, and MaxContextTokens from DiscussSessionConfig.
---------
Co-authored-by: 晨苒 <16112591+chen-ran@users.noreply.github.com>
Add native Playwright WebSocket sessions alongside the existing curated
browser tools. Agents can now create remote sessions that expose a full
Playwright API over WebSocket, enabling advanced use cases like HttpOnly
cookie injection, storage state management, and route interception.
Key changes:
- Per-bot isolated browser processes (launchServer via Node child process)
- New session module with create/close/status/heartbeat endpoints
- New browser_remote_session agent tool (Go)
- Storage state export/import on existing browser contexts
- Bot ID plumbing through context creation for process isolation
- Inflight deduplication to prevent duplicate browser launches
- Session janitor for automatic expiry cleanup
The WebSocket handler rejected messages with empty text even when
attachments were present, while the HTTP POST endpoint correctly
used Message.IsEmpty(). Move the empty-check after attachment
parsing so only truly empty messages are rejected.
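A hedged sketch of the reordered check, with simplified stand-ins for the WS
payload and the channel Message type:

```go
package wsorder

import "strings"

// inbound and message are illustrative stand-ins for the WS payload and the
// channel Message type with its IsEmpty method.
type inbound struct {
	Text        string
	Attachments []string
}

type message struct {
	Text        string
	Attachments []string
}

func (m message) IsEmpty() bool {
	return strings.TrimSpace(m.Text) == "" && len(m.Attachments) == 0
}

// buildMessage parses attachments first and only then applies the emptiness
// check, mirroring the HTTP POST path: a payload with attachments but no
// text is no longer rejected.
func buildMessage(in inbound) (message, bool) {
	msg := message{
		Text:        in.Text,
		Attachments: in.Attachments, // attachment parsing happens before the check
	}
	if msg.IsEmpty() {
		return message{}, false // reject only truly empty messages
	}
	return msg, true
}
```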
* refactor(agent): replace XML tag extraction with tool-based send/react/speak
Remove the <attachments>, <reactions>, and <speech> XML tag extraction
system from the agent streaming pipeline. Instead, the send/react/speak
tools now handle both same-conversation and cross-conversation delivery:
- send: omit target to deliver attachments in the current conversation;
specify target for cross-channel messaging
- react: omit target to react in the current conversation
- speak: omit target to speak in the current conversation
Backend changes:
- Add StreamEmitter callback to tools.SessionContext so tools can push
attachment/reaction/speech events directly into the agent stream
- Wire emitter in agent.go for both streaming and non-streaming paths
- Remove StreamTagExtractor, DefaultTagResolvers, emitTagEvents, and
delete internal/agent/tags.go entirely
- Remove StripAgentTags calls from assistant_output.go
- Add IsSameConversation detection in messaging executor; same-conv
sends pass raw paths through the emitter for downstream ingestion
- Auto-resolve relative paths (e.g. "IDENTITY.md" -> "/data/IDENTITY.md")
- Add Metadata propagation through the full attachment chain
(tools.Attachment -> agent.FileAttachment -> parseAttachmentDelta)
- Update system_chat.md and _contacts.md prompts
Frontend changes (apps/web):
- Hide send/react/speak tool_call blocks when result indicates
delivered to current conversation
- Defer attachment_delta blocks to end of message (flush on stream
completion) for consistent positioning with DB-loaded history
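A rough Go sketch of the emitter wiring described under the backend changes
above; StreamEvent and the SessionContext field shown here are simplified
guesses at shape, not the actual definitions.

```go
package streamemit

// StreamEvent is a simplified stand-in for the events the agent stream carries.
type StreamEvent struct {
	Kind string // e.g. "attachment_delta", "reaction_delta", "speech_delta"
	Path string
}

// SessionContext carries an optional emitter so tool executors can push
// events into the live agent stream instead of encoding them as XML tags.
type SessionContext struct {
	Emit func(StreamEvent) // wired by agent.go for streaming and non-streaming runs
}

// sendTool delivers an attachment; with no target it stays in the current
// conversation and surfaces through the stream via the emitter.
func sendTool(sc SessionContext, path, target string) {
	if target == "" && sc.Emit != nil {
		sc.Emit(StreamEvent{Kind: "attachment_delta", Path: path})
		return
	}
	// ... cross-conversation delivery through the channel adapter ...
}
```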
* fix(agent): speak tool emits synthesized audio directly as voice attachment
Instead of emitting speech_delta (which requires downstream re-synthesis),
the speak tool now emits the already-synthesized audio as an attachment_delta
with voice type. This avoids double TTS synthesis and eliminates dependency
on ttsService being configured on the inbound processor.
Also fixes speak on WebUI where ReplyTarget is empty (same fix as send).
StripAgentTags was only applied to the merged content string but not to
individual ContentParts. On channels that don't support RichText (e.g.
Telegram), buildChannelMessage joins part texts directly, causing raw
<attachments>/<reactions>/<speech> blocks to appear in the final message.
Bots can now be configured with an image generation model (must have
image-output compatibility). When set, the agent exposes a generate_image
tool that calls the model via Twilight AI SDK, saves the result to the
bot container filesystem, and returns the file path.
- Add image_model_id column to bots table (migration 0053)
- Update settings SQL queries, service, and types
- New ImageGenProvider tool provider in internal/agent/tools/
- Wire provider in both cmd/agent and cmd/memoh entry points
- Add image model selector to frontend bot settings with compat filtering
- Regenerate swagger, SDK types, and sqlc code
Models that lack the "tool-call" compatibility flag now run without
tools, preventing provider errors when the model does not support
function calling.
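Roughly, the gate looks like the sketch below; RunConfig and the tool type are
simplified, but the idea matches the SupportsToolCall flag described in a later
change: tools are only handed to the SDK when the model advertises the
"tool-call" compatibility.

```go
package toolgate

// runConfig and tool are simplified stand-ins for the resolver output and
// the SDK tool definitions.
type tool struct{ Name string }

type runConfig struct {
	SupportsToolCall bool
	Tools            []tool
}

// sdkTools returns the tool list to hand to the SDK: models without the
// "tool-call" compatibility flag run without tools, so providers that do
// not support function calling never see a tools parameter.
func sdkTools(rc runConfig) []tool {
	if !rc.SupportsToolCall {
		return nil
	}
	return rc.Tools
}
```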
Introduce three inbound message handling modes for channel adapters:
- inject (default, /btw): when a route has an active agent stream,
inject the new user message into the running stream via the SDK's
PrepareStep hook between tool rounds. The message is interleaved at
the correct position in the persisted round.
- parallel (/now): start a new agent stream immediately, running
concurrently with any existing stream (preserves current behavior).
- queue (/next): enqueue the message and process it after the current
stream completes.
Key components:
- RouteDispatcher: per-route state management with inject channel,
task queue, and active-stream tracking.
- PrepareStep integration: drains inject channel between tool rounds,
records insertion position via InjectedRecorder for correct
persistence ordering.
- interleaveInjectedMessages: inserts injected user messages at their
actual injection position within the persisted message round.
- Parallel mode isolation: /now streams do not interact with the
dispatcher, preventing them from clearing another stream's active
state.
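To make the inject path concrete, here is a hedged Go sketch of a per-route
dispatcher draining its inject channel between tool rounds; PrepareStep is the
SDK hook named above, everything else (field names, message type) is
illustrative.

```go
package injectmode

// userMessage is an illustrative stand-in for an injected inbound message.
type userMessage struct {
	Text   string
	AtStep int // insertion position recorded for persistence ordering
}

// routeDispatcher tracks one route's active stream and queued work.
type routeDispatcher struct {
	inject chan userMessage // /btw messages waiting for the next tool round
	queue  []userMessage    // /next messages processed after the stream ends
}

// prepareStep is invoked between tool rounds; it drains any pending injected
// messages and records the step at which each one was inserted so
// interleaveInjectedMessages can persist them in the right order.
func (d *routeDispatcher) prepareStep(step int, messages []userMessage) []userMessage {
	for {
		select {
		case m := <-d.inject:
			m.AtStep = step
			messages = append(messages, m)
		default:
			return messages
		}
	}
}
```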
Rename session info endpoint from /sessions/:id/info to /sessions/:id/status
and update frontend tab label accordingly. Add /status slash command that
displays current session metrics (message count, context usage, cache hit
rate, used skills) as formatted text in any channel.
Add GET /bots/:bot_id/sessions/:session_id/info API endpoint that returns
per-session message count, latest input token usage with model context window,
aggregated KV cache hit rate, and skills invoked via use_skill tool calls.
Frontend Info tab in the right sidebar now displays this data in a compact
key-value layout with a context usage progress bar and clickable skill links.
- Change skills storage path from `/data/.skills` to `/data/skills`
- Add usage instructions and directory location to the Skills section
in the system prompt
* feat: add Supermarket integration (MCP & Skill marketplace)
Backend:
- Add [supermarket] config section with base_url (default: supermarket.memoh.ai)
- Add SupermarketHandler with proxy endpoints for MCPs, Skills, and Tags
- Add install endpoints: POST /bots/:id/supermarket/install-mcp (creates MCP
connection with env vars) and install-skill (downloads tar.gz, extracts to
container via gRPC)
- Register handler in FX wiring, generate Swagger docs and TypeScript SDK
Frontend:
- Add /settings/supermarket route with Store icon in sidebar
- Create supermarket page with search, tag filtering, MCP and Skill sections
- Add MCP/Skill card components with tag badges and install buttons
- Add install dialogs: MCP (bot selector + env var form), Skill (bot selector)
- Add i18n entries for en.json and zh.json
* fix: improve supermarket install UX
- Create BotSelect component with avatar + name using UI Select
- Replace NativeSelect in install dialogs and usage page with BotSelect
- Change MCP install flow: navigate to bot detail MCP tab with pre-filled
draft instead of direct install, letting users review before saving
- Move Supermarket sidebar entry between Browser and Usage
* web: remove supermarket page top tag selector bar
Drop the horizontal tag chips and getSupermarketTags fetch; keep
search and tag filter via card tag clicks with clearable badge.
* web: add homepage link to supermarket MCP and Skill cards
Show an external-link icon next to the card title when homepage is
available, opening in a new tab on click.
The fallback provider introduced in 5aeb2fd3 wrapped containerfs but did
not implement storage.ContainerFileOpener, causing IngestContainerFile to
fail with "provider does not support container file reading". This broke
outbound file attachments on all IM channels (Telegram, Discord, etc.)
because container paths like /data/xxx.xlsx were passed as-is to the
platform API instead of being ingested into the media store first.
- Add localfs storage provider as fallback when containerfs is unreachable
- Wrap media service with fallback provider in both entry points
- Fix gallery lightbox src matching by comparing pathnames only
- Add SupportsToolCall to RunConfig; only inject tools into SDK when set
- Update twilight-ai to 497ad09 which adds SSE scanner 10MB buffer
(fixes token-too-long on large image payloads) and parses the images
array from OpenAI-compatible chat completions into StreamFilePart
Large directories like node_modules/.venv could return thousands of entries,
wasting tokens and causing timeouts. Add offset/limit pagination to ListDir
RPC and collapse heavy subdirectories (>50 items) into summaries in recursive
mode. Collapsing runs at the bridge layer before pagination so the page window
reflects the collapsed view.
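A simplified Go sketch of the approach, with a hypothetical entry type: heavy
subdirectories are collapsed into one-line summaries first, and the offset/limit
window is applied to the collapsed view.

```go
package listdir

import "fmt"

const collapseThreshold = 50 // subdirectories with more items become a summary line

// entry is a simplified directory listing row.
type entry struct {
	Path       string
	IsDir      bool
	ChildCount int
}

// collapseAndPage collapses heavy subdirectories (like node_modules or .venv)
// into summaries, then applies offset/limit so the page window reflects the
// collapsed view rather than the raw recursive listing.
func collapseAndPage(entries []entry, offset, limit int) []string {
	var lines []string
	for _, e := range entries {
		if e.IsDir && e.ChildCount > collapseThreshold {
			lines = append(lines, fmt.Sprintf("%s/ (%d entries, collapsed)", e.Path, e.ChildCount))
			continue
		}
		lines = append(lines, e.Path)
	}
	if offset >= len(lines) {
		return nil
	}
	end := offset + limit
	if limit <= 0 || end > len(lines) {
		end = len(lines)
	}
	return lines[offset:end]
}
```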
Three independent bugs fixed:
1. IM channels were sending raw <attachments>/<reactions>/<speech> tag blocks
alongside file attachments. Now ExtractAssistantOutputs strips these tags
before building the outbound channel message.
2. WebUI rendered these tags as markdown after page refresh. Now
extractMessageText strips agent tags for non-user messages.
3. WebUI lost attachment blocks after refresh because convertMessagesToChats
did not call buildAssetBlocks when merging assistant messages into a
pending tool-call group. Also made LinkOutboundAssets session-aware so
assets are linked to the correct assistant message.
Allow users to select a different model and reasoning effort level
directly from the chat input toolbar, overriding the bot defaults
on a per-message basis. The backend accepts optional model_id and
reasoning_effort parameters via both WebSocket and HTTP APIs, with
request-level values taking priority over bot/session settings.
- Backend: extend wsClientMessage and LocalChannelMessageRequest with
model_id/reasoning_effort fields; add ReasoningEffort to ChatRequest;
update resolver to prioritize request-level reasoning effort
- Frontend: add ModelOptions and ReasoningEffortSelect shared components;
refactor model-select to reuse ModelOptions; add model/reasoning
selectors to chat input toolbar; initialize from bot settings
- Regenerate swagger spec and TypeScript SDK
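The priority order amounts to the small sketch below (field names illustrative):
the per-message value wins when present, otherwise the bot/session default
applies.

```go
package override

// resolveReasoningEffort picks the effective reasoning effort for one request:
// an explicit per-message value takes priority over the bot/session setting.
func resolveReasoningEffort(requestEffort, botEffort string) string {
	if requestEffort != "" {
		return requestEffort
	}
	return botEffort
}

// resolveModelID applies the same priority to the per-message model override.
func resolveModelID(requestModelID, botModelID string) string {
	if requestModelID != "" {
		return requestModelID
	}
	return botModelID
}
```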
Container terminals were echoing raw ANSI escape sequences (^[[A, ^[[B,
etc.) instead of handling arrow keys because /bin/sh (dash/ash) lacks
readline support. Two changes fix this:
1. Bridge execPTY now directly exec's bare paths (e.g. /bin/bash) instead
of always wrapping through "/bin/sh -c", preserving readline behavior.
2. Terminal handler detects bash/zsh in the container and prefers them
over /bin/sh for interactive PTY sessions.
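A minimal Go sketch of the shell preference, using exec.LookPath on the host
purely for illustration; the real handler probes the container filesystem before
starting the PTY.

```go
package ptyshell

import "os/exec"

// preferredShell picks an interactive shell for the PTY: bash or zsh when
// present (both have readline support), falling back to /bin/sh.
func preferredShell() string {
	for _, sh := range []string{"/bin/bash", "/usr/bin/zsh", "/bin/zsh"} {
		if _, err := exec.LookPath(sh); err == nil {
			return sh
		}
	}
	return "/bin/sh"
}

// execPTY runs the bare shell path directly instead of wrapping it in
// "/bin/sh -c", so arrow keys and history work in interactive sessions.
func execPTY() *exec.Cmd {
	return exec.Command(preferredShell())
}
```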
Allow users to configure what percentage of older messages to compact,
keeping the most recent portion intact. Default ratio is 80%, meaning
the oldest 80% of uncompacted messages are summarized while the newest
20% remain as-is for full-fidelity context.
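As a small worked example of the ratio (names are illustrative, not the actual
compaction service):

```go
package compactratio

// splitForCompaction returns how many of the oldest uncompacted messages to
// summarize for a given ratio. With the default ratio of 0.8 and 100
// uncompacted messages, the oldest 80 are compacted and the newest 20 are
// kept verbatim.
func splitForCompaction(uncompacted int, ratio float64) (compact, keep int) {
	if ratio <= 0 || ratio > 1 {
		ratio = 0.8 // default: compact the oldest 80%
	}
	compact = int(float64(uncompacted) * ratio)
	keep = uncompacted - compact
	return compact, keep
}
```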
Add version and commit_hash fields to the /ping endpoint response,
sourced from the existing internal/version package (ldflags or
Go build info). The frontend capabilities store reads these values
and displays them as badges at the bottom of the Profile page.
These two fields controlled the time-based history context window and token-based
trimming. They are no longer needed: the resolver now always uses the hardcoded
24-hour default and skips token-based history trimming.
- Add `enable` column (default false) to search_providers and tts_providers tables
- Auto-create default entries for all provider types on startup (disabled by default)
- Add enable/disable Switch toggle in frontend for both search and TTS providers
- Show green status dot in sidebar for enabled providers, sort enabled first
- Filter bot settings dropdowns to only show enabled providers
Backend
- New subject kinds: all / channel_identity / channel_type
- Source scope fields on bot_acl_rules: source_channel,
source_conversation_type, source_conversation_id, source_thread_id
- Fix source_scope_check constraint: resolve source_channel server-side
(channel_type → subject_channel_type; channel_identity → DB lookup)
- Add GET /bots/:id/acl/channel-types/:type/conversations to list
observed conversations by platform type
- ListObservedConversations: include private/DM chats, normalise
conversation_type; COALESCE(name, handle) for display name
- enrichConversationAvatar: persist entry.Name → conversation_name
(keeps Telegram group titles current on every message)
- Unify Priority type to int32 across Go types to match DB INTEGER;
remove all int/int32 casts in service layer
- Fix duplicate nil guard in Evaluate; drop dead SourceScope.Channel field
- Migration 0048_acl_redesign
Frontend
- Drag-and-drop rule priority reordering (SortableJS/useSortable);
fix reorder: compute new order from oldIndex/newIndex directly,
not from the array (which useSortable syncs after onEnd)
- Conversation scope selector: searchable popover backed by observed
conversations (by identity or platform type); collapsible manual-ID fallback
- Display: name as primary label, stable channel·type·id always shown
as subtitle for verification
- bot-terminal: accessibility fix on close-tab button (keyboard events)
- i18n: drag-to-reorder, conversation source, manual IDs (en/zh)
Tests: update fakeChatACL to Evaluate interface; fix SourceScope literals.
SDK/spec regenerated.
* feat(web): add provider oauth management ui
* feat: add OAuth callback support on port 1455
* feat: enhance reasoning effort options and support for OpenAI Codex OAuth
* feat: update twilight-ai dependency to v0.3.4
* refactor: promote openai-codex to first-class client_type, remove auth_type
Replace the previous openai-responses + metadata auth_type=openai-codex-oauth
combo with a dedicated openai-codex client_type. OAuth requirement is now
determined solely by client_type, eliminating the auth_type concept from the
LLM provider domain entirely.
- Add openai-codex to DB CHECK constraint (migration 0047) with data migration
- Add ClientTypeOpenAICodex constant and dedicated SDK/probe branches
- Remove AuthType from SDKModelConfig, ModelCredentials, TriggerConfig, etc.
- Simplify supportsOAuth to check client_type == openai-codex
- Add conf/providers/codex.yaml preset with Codex catalog models
- Frontend: replace auth_type selector with client_type-driven OAuth UI
---------
Co-authored-by: Acbox <acbox0328@gmail.com>
- Add reusable TimezoneSelect component with search and UTC offset labels
- Replace plain Select with searchable TimezoneSelect in profile settings,
bot settings, and browser context settings
- Move bot timezone setting from header dialog into bot settings tab
- Resolve timezone with bot > user > system priority for all LLM-facing
time formatting (user message header, system prompt, heartbeat, tools,
memory extraction)
- Format tool output timestamps (history, contacts) in resolved timezone
Move CreateModel, BuildReasoningOptions, ReasoningBudgetTokens and
related types from internal/agent to internal/models as NewSDKChatModel,
SDKModelConfig, etc. This eliminates duplicate ClientType constants and
centralises all Twilight AI SDK instance creation in a single package.
NewSDKEmbeddingModel now accepts a clientType parameter and dispatches
to the native Google embedding provider for google-generative-ai,
instead of always using the OpenAI-compatible endpoint.
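A hedged sketch of the dispatch, mirroring the NewSDKEmbeddingModel behavior
described above; all type and constructor names below are placeholders for the
Twilight AI SDK.

```go
package embeddispatch

// embeddingModel and the two constructors are placeholders for the SDK types;
// only the dispatch-by-client-type shape is taken from the change above.
type embeddingModel interface{ Name() string }

func newGoogleEmbedding(model string) embeddingModel { return nil }

func newOpenAICompatible(model string) embeddingModel { return nil }

// newEmbeddingModel dispatches on clientType: google-generative-ai gets the
// native Google embedding provider, every other client type keeps using the
// OpenAI-compatible endpoint.
func newEmbeddingModel(clientType, model string) embeddingModel {
	if clientType == "google-generative-ai" {
		return newGoogleEmbedding(model)
	}
	return newOpenAICompatible(model)
}
```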
Images sent by users were silently dropped when the model supported
vision: routeAttachmentsByCapability classified them as "Native", but
extractFileRefPaths only collected "Fallback" (tool_file_ref) paths,
so the image data URL was computed and then discarded — the model saw
neither the image nor its container path.
- Add InlineImages field to RunConfig to carry native image data
- Replace extractFileRefPaths with extractAttachmentPaths that
collects paths from both Native (FallbackPath) and Fallback
attachments so the YAML header always lists every attachment
- Add extractNativeImageParts to extract inline image data URLs
- Pass InlineImages as sdk.ImagePart in prepareRunConfig so the
LLM receives the actual image content alongside the text query
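A simplified Go sketch of the fix, with an illustrative attachment type: paths
are collected from both routing classes for the attachment header, and native
image data URLs are gathered separately for the SDK image parts.

```go
package attachroute

// attachment is a simplified stand-in for the routed attachment type:
// Native attachments carry an inline data URL plus a container fallback
// path, Fallback attachments only have the container path.
type attachment struct {
	Native       bool
	DataURL      string // set for Native image attachments
	FallbackPath string // container path, e.g. /data/photo.jpg
}

// extractAttachmentPaths lists every attachment path for the message header,
// regardless of routing class, so nothing is silently dropped.
func extractAttachmentPaths(atts []attachment) []string {
	paths := make([]string, 0, len(atts))
	for _, a := range atts {
		if a.FallbackPath != "" {
			paths = append(paths, a.FallbackPath)
		}
	}
	return paths
}

// extractNativeImageParts collects the inline data URLs that are passed to
// the model as image parts alongside the text query.
func extractNativeImageParts(atts []attachment) []string {
	var urls []string
	for _, a := range atts {
		if a.Native && a.DataURL != "" {
			urls = append(urls, a.DataURL)
		}
	}
	return urls
}
```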
* feat(channel): add Matrix adapter support
* fix(channel): prevent reasoning leaks in Matrix replies
* fix(channel): persist Matrix sync cursors
* fix(channel): improve Matrix markdown rendering
* fix(channel): support Matrix attachments and multimodal history
* fix(channel): expand Matrix reply media context
* fix(handlers): allow media downloads for chat-access bots
* fix(channel): classify Matrix DMs as direct chats
* fix(channel): auto-join Matrix room invites
* fix(channel): resolve Matrix room aliases for outbound send
* fix(web): use Matrix brand icon in channel badges
Replace the generic Matrix hashtag badge with the official brand asset so channel badges feel recognizable and fit the circular mask cleanly.
* fix(channel): add Matrix room whitelist controls
Let Matrix bots decide whether to auto-join invites and restrict inbound activity to allowed rooms or aliases. Expose the new controls in the web settings UI with line-based whitelist input so access rules stay explicit.
* fix(channel): stabilize Matrix multimodal follow-ups and settings
* fix(flow): avoid gosec panic on byte decoding
* fix: resolve golangci-lint errors
* fix(channel): remove Matrix built-in ACL
* fix(channel): preserve Matrix image captions
* fix(channel): validate Matrix homeserver and sync access
Fail Matrix connections early when the homeserver, access token, or /sync capability is misconfigured so bot health checks surface actionable errors.
* fix(channel): preserve optional toggles and relax Matrix startup validation
* fix(channel): tighten Matrix mention fallback parsing
* fix(flow): skip structured assistant tool-call outputs
* fix(flow): resolve merged resolver duplication
Keep the internal agent resolver implementation after merging main so split helper files do not redeclare flow symbols. Restore user message normalization in sanitize and persistence paths to keep flow tests and command packages building.
* fix(flow): remove unused merged resolver helper
Drop the leftover truncate helper and import from the resolver merge fix so golangci-lint passes again without affecting flow behavior.
---------
Co-authored-by: Acbox Liu <acbox0328@gmail.com>
* refactor: replace persistent subagents with ephemeral spawn tool (#subagent)
- Drop subagents table, remove all persistent subagent infrastructure
- Add 'subagent' session type with parent_session_id on bot_sessions
- Rewrite subagent tool as single 'spawn' tool with parallel execution
- Create system_subagent.md prompt, add _subagent.md include for chat
- Limit subagent tools to file, exec, web_search, web_fetch only
- Merge subagent token usage into parent chat session in reporting
- Remove frontend subagent management page, update chat UI for spawn
- Fix UTF-8 truncation in session title, fix query not passed to agent
* refactor: remove history message page
* refactor: move client_type to provider, replace model fields with config JSONB
- Move `client_type` from `models` to `llm_providers` table
- Add `icon` field to `llm_providers`
- Replace `dimensions`, `input_modalities`, `supports_reasoning` on `models`
with a single `config` JSONB column containing `dimensions`,
`compatibilities` (vision, tool-call, image-output, reasoning),
and `context_window`
- Auto-imported models default to vision + tool-call + reasoning
- Update all backend consumers (agent, flow resolver, handlers, memory)
- Regenerate sqlc, swagger, and TypeScript SDK
- Update frontend forms, display, and i18n for new schema
* ui: show provider icon avatar in sidebar and detail header, remove icon input
* feat: add built-in provider registry with YAML definitions and enable toggle
- Add `enable` column to llm_providers (default true, backward-compatible)
- Create internal/registry package to load YAML provider/model definitions
on startup and upsert into database (new providers disabled by default)
- Add conf/providers/ with OpenAI, Anthropic, Google YAML definitions
- Add RegistryConfig to TOML config (providers_dir, default conf/providers)
- Model listing APIs and conversation flow now filter by enabled providers
- Frontend: enable switch in provider form, green status dot in sidebar,
enabled providers sorted to top
* fix: make 0041 migration idempotent for fresh databases
Guard data migration steps with column-existence checks so the
migration succeeds on databases created from the updated init schema.