Bumps twilight-ai to 3ebcc56, which strips data URL prefixes and
validates media_type before sending to the Anthropic API, preventing
400 errors for unsupported image MIME types.
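A minimal sketch of the idea (helper name and supported set are illustrative, not the SDK's actual API): split off the `data:<type>;base64,` prefix and reject media types the API does not accept.

```typescript
// Media types Anthropic's image blocks currently accept (per their docs);
// the set and function below are a sketch, not twilight-ai's real code.
const SUPPORTED_MEDIA_TYPES = new Set([
  "image/jpeg", "image/png", "image/gif", "image/webp",
]);

// Split "data:image/png;base64,AAAA" into its media type and bare base64.
function normalizeImageSource(url: string): { mediaType: string; data: string } {
  const match = /^data:([^;]+);base64,(.*)$/s.exec(url);
  const mediaType = match ? match[1] : "image/png"; // assume PNG for bare base64
  const data = match ? match[2] : url;
  if (!SUPPORTED_MEDIA_TYPES.has(mediaType)) {
    throw new Error(`unsupported image media type: ${mediaType}`);
  }
  return { mediaType, data };
}
```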
- Prevent Enter from sending message during IME composing (keyCode 229)
- Remove separator line between textarea and toolbar
- Change send/stop buttons to compact circular icon-only style
- Fix send icon color to white for dark mode visibility
- Add missing hidden file input element so the attach button works
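The IME guard above can be sketched as follows (function shape illustrative): during composition some browsers report keyCode 229, and modern ones set `KeyboardEvent.isComposing`, so both are checked before treating Enter as "send".

```typescript
// Sketch of the Enter-to-send guard: ignore Enter while an IME composition
// is in flight (keyCode 229 or isComposing), and let Shift+Enter insert a
// newline instead of sending.
function shouldSendOnEnter(e: {
  key: string; keyCode: number; isComposing?: boolean; shiftKey?: boolean;
}): boolean {
  if (e.key !== "Enter" || e.shiftKey) return false;
  if (e.isComposing || e.keyCode === 229) return false; // IME still composing
  return true;
}
```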
Replace all FontAwesome icon usage across 80+ Vue files with lucide-vue-next
components. Remove FontAwesome dependencies (@fortawesome/*) and global
registration from main.ts. Delete unused components (data-table, warning-banner,
session-metadata, bot-sidebar/bot-item in home, message-list, tts-provider-select),
dead utilities (channel-icons.ts, custom-icons.ts), and stale assets (vue.svg).
Update AGENTS.md to reflect the new icon strategy.
Add version and commit_hash fields to the /ping endpoint response,
sourced from the existing internal/version package (ldflags or
Go build info). The frontend capabilities store reads these values
and displays them as badges at the bottom of the Profile page.
Remove per-session-type (chat/heartbeat/schedule) bar series from the
Daily Tokens chart, keeping only aggregated Total Input and Total Output
as stacked bars for a cleaner visualization.
- Fix cookie check logic: use `!includes('sidebar_state=false')` instead
of `includes('sidebar_state=true')` so sidebar defaults to open when
no cookie is set
- Add --sidebar-width CSS variable binding to desktop sidebar element
- Adjust SIDEBAR_WIDTH_MOBILE value
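The corrected cookie check boils down to this (helper name illustrative): the sidebar is open unless the cookie explicitly says otherwise, so a missing cookie defaults to open.

```typescript
// Sketch of the fixed default-open logic: only an explicit
// "sidebar_state=false" closes the sidebar; no cookie means open.
function isSidebarOpen(cookie: string): boolean {
  return !cookie.includes("sidebar_state=false");
}
```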
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Provider preset YAML files under conf/providers were not bundled into
the server Docker image or preserved by the install script, so fresh
deployments started without any pre-configured LLM providers.
Replace the single browser image (which required local build) with three
prebuilt images: browser-chromium, browser-firefox, and browser (both).
Each is exposed as a separate Docker Compose profile so users can simply
`docker compose --profile browser-chromium up -d` without any local build
step, significantly reducing install time.
Replace the global streaming ref with a streamingSessionId that
records which session is actively streaming. The streaming computed
now returns true only when the current session matches, so switching
sessions no longer leaks the generating indicator to unrelated sessions.
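A framework-free sketch of the idea (the real code uses Vue refs and a computed): track which session is streaming, and derive a per-view boolean from it.

```typescript
// Instead of a global boolean, remember WHICH session is streaming.
let streamingSessionId: string | null = null;

function startStreaming(sessionId: string) { streamingSessionId = sessionId; }
function stopStreaming() { streamingSessionId = null; }

// Equivalent of the `streaming` computed: true only for the session
// that is actually generating, so other sessions show no indicator.
function isStreaming(currentSessionId: string): boolean {
  return streamingSessionId === currentSessionId;
}
```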
Constrain the main layout to the viewport height (h-dvh) and override
SidebarProvider's min-h-svh so the height chain propagates correctly.
Change main-container overflow from auto to hidden so the outer
container never scrolls. Use the absolute-positioning pattern for the
session sidebar ScrollArea (matching the chat messages pattern) so
sessions scroll independently while the chat input stays fixed.
These two fields controlled the history context window (time-based) and
token-based trimming. They are no longer needed: the resolver now always
uses the hardcoded 24-hour default and skips token-based history trimming.
- Add `enable` column (default false) to search_providers and tts_providers tables
- Auto-create default entries for all provider types on startup (disabled by default)
- Add enable/disable Switch toggle in frontend for both search and TTS providers
- Show green status dot in sidebar for enabled providers, sort enabled first
- Filter bot settings dropdowns to only show enabled providers
Add a dropdown menu to each bot item in the chat sidebar with:
- "Details" option to navigate to the bot's settings page
- "Pin/Unpin" option to pin bots to the top of the list, persisted via localStorage
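The pin persistence can be sketched like this (storage key and names are hypothetical; storage is injected so the logic is testable, while the app would pass `window.localStorage`):

```typescript
interface KVStore { getItem(k: string): string | null; setItem(k: string, v: string): void }

const PIN_KEY = "pinned-bot-ids"; // hypothetical localStorage key

// Add or remove a bot id from the persisted pin list.
function togglePin(store: KVStore, botId: string): string[] {
  const pinned: string[] = JSON.parse(store.getItem(PIN_KEY) ?? "[]");
  const next = pinned.includes(botId)
    ? pinned.filter((id) => id !== botId)
    : [...pinned, botId];
  store.setItem(PIN_KEY, JSON.stringify(next));
  return next;
}

// Pinned bots sort to the top; stable sort keeps original order otherwise.
function sortBots<T extends { id: string }>(bots: T[], pinned: string[]): T[] {
  return [...bots].sort(
    (a, b) => Number(pinned.includes(b.id)) - Number(pinned.includes(a.id)),
  );
}
```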
Backend
- New subject kinds: all / channel_identity / channel_type
- Source scope fields on bot_acl_rules: source_channel,
source_conversation_type, source_conversation_id, source_thread_id
- Fix source_scope_check constraint: resolve source_channel server-side
(channel_type → subject_channel_type; channel_identity → DB lookup)
- Add GET /bots/:id/acl/channel-types/:type/conversations to list
observed conversations by platform type
- ListObservedConversations: include private/DM chats, normalise
conversation_type; COALESCE(name, handle) for display name
- enrichConversationAvatar: persist entry.Name → conversation_name
(keeps Telegram group titles current on every message)
- Unify Priority type to int32 across Go types to match DB INTEGER;
remove all int/int32 casts in service layer
- Fix duplicate nil guard in Evaluate; drop dead SourceScope.Channel field
- Migration 0048_acl_redesign
Frontend
- Drag-and-drop rule priority reordering (SortableJS/useSortable);
fix reorder: compute new order from oldIndex/newIndex directly,
not from the array (which useSortable syncs after onEnd)
- Conversation scope selector: searchable popover backed by observed
conversations (by identity or platform type); collapsible manual-ID fallback
- Display: name as primary label, stable channel·type·id always shown
as subtitle for verification
- bot-terminal: accessibility fix on close-tab button (keyboard events)
- i18n: drag-to-reorder, conversation source, manual IDs (en/zh)
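The reorder fix above can be sketched as follows (function name illustrative): the new order is derived from the drag's oldIndex/newIndex applied to the pre-drag array, rather than read back from the array that useSortable has already synced after onEnd (which would double-apply the move).

```typescript
// Compute the post-drag order from the drag indices on the ORIGINAL array.
function reorder<T>(items: readonly T[], oldIndex: number, newIndex: number): T[] {
  const next = [...items];
  const [moved] = next.splice(oldIndex, 1); // remove from old position
  next.splice(newIndex, 0, moved);          // insert at new position
  return next;
}
```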
Tests: update fakeChatACL to Evaluate interface; fix SourceScope literals.
SDK/spec regenerated.
* feat(web): add provider oauth management ui
* feat: add OAuth callback support on port 1455
* feat: enhance reasoning effort options and support for OpenAI Codex OAuth
* feat: update twilight-ai dependency to v0.3.4
* refactor: promote openai-codex to first-class client_type, remove auth_type
Replace the previous openai-responses + metadata auth_type=openai-codex-oauth
combo with a dedicated openai-codex client_type. OAuth requirement is now
determined solely by client_type, eliminating the auth_type concept from the
LLM provider domain entirely.
- Add openai-codex to DB CHECK constraint (migration 0047) with data migration
- Add ClientTypeOpenAICodex constant and dedicated SDK/probe branches
- Remove AuthType from SDKModelConfig, ModelCredentials, TriggerConfig, etc.
- Simplify supportsOAuth to check client_type == openai-codex
- Add conf/providers/codex.yaml preset with Codex catalog models
- Frontend: replace auth_type selector with client_type-driven OAuth UI
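The simplified OAuth check is now a single comparison (sketch; the real code lives on the backend, and the constant name comes from the commit above):

```typescript
// OAuth requirement is derived from client_type alone; the auth_type
// concept no longer exists in the LLM provider domain.
function supportsOAuth(clientType: string): boolean {
  return clientType === "openai-codex";
}
```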
---------
Co-authored-by: Acbox <acbox0328@gmail.com>
- Add reusable TimezoneSelect component with search and UTC offset labels
- Replace plain Select with searchable TimezoneSelect in profile settings,
bot settings, and browser context settings
- Move bot timezone setting from header dialog into bot settings tab
- Resolve timezone with bot > user > system priority for all LLM-facing
time formatting (user message header, system prompt, heartbeat, tools,
memory extraction)
- Format tool output timestamps (history, contacts) in resolved timezone
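The resolution priority can be sketched as a simple fallback chain (function name illustrative; the real implementation is in the Go backend):

```typescript
// Resolve the effective timezone: bot-level setting wins, then the
// user's profile timezone, then the system default.
function resolveTimezone(botTz?: string, userTz?: string, systemTz = "UTC"): string {
  return botTz || userTz || systemTz;
}
```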
Move CreateModel, BuildReasoningOptions, ReasoningBudgetTokens and
related types from internal/agent to internal/models as NewSDKChatModel,
SDKModelConfig, etc. This eliminates duplicate ClientType constants and
centralises all Twilight AI SDK instance creation in a single package.
NewSDKEmbeddingModel now accepts a clientType parameter and dispatches
to the native Google embedding provider for google-generative-ai,
instead of always using the OpenAI-compatible endpoint.
The immediate watcher on configuredChannels accessed list[0].meta.type
without checking if the list was empty, causing a TypeError on initial
mount before data loaded. This crashed the component during setup and
corrupted KeepAlive state, making all bot detail tabs unresponsive.
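The guard amounts to this (shape illustrative): never dereference `list[0]` before data has loaded.

```typescript
interface ChannelItem { meta: { type: string } }

// Optional chaining returns undefined for an empty list instead of
// throwing the TypeError that crashed the component during setup.
function firstChannelType(list: ChannelItem[]): string | undefined {
  return list[0]?.meta.type;
}
```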
Made-with: Cursor
Images sent by users were silently dropped when the model supported
vision: routeAttachmentsByCapability classified them as "Native", but
extractFileRefPaths only collected "Fallback" (tool_file_ref) paths,
so the image data URL was computed and then discarded — the model saw
neither the image nor its container path.
- Add InlineImages field to RunConfig to carry native image data
- Replace extractFileRefPaths with extractAttachmentPaths that
collects paths from both Native (FallbackPath) and Fallback
attachments so the YAML header always lists every attachment
- Add extractNativeImageParts to extract inline image data URLs
- Pass InlineImages as sdk.ImagePart in prepareRunConfig so the
LLM receives the actual image content alongside the text query
* feat(channel): add WeChat (weixin) adapter with QR code (#278)
* fix(channel): fix weixin block streaming
* chore(channel): update weixin logo
* feat(husky): update linting configuration to improve pre-commit checks
---------
Co-authored-by: 晨苒 <16112591+chen-ran@users.noreply.github.com>