Acbox Liu 43c4153938 feat: introduce DCP pipeline layer for unified context assembly (#329)
* refactor: introduce DCP pipeline layer for unified context assembly

Introduce a Deterministic Context Pipeline (DCP) inspired by Cahciua,
providing event-driven context assembly for LLM conversations.

- Add `internal/pipeline/` package with Canonical Event types, Projection
  (reduce), Rendering (XML RC), Pipeline manager, and EventStore persistence
- Change user message format from YAML front-matter to XML `<message>` tags
  with self-contained attributes (sender, channel, conversation, type);
  a sketch of the rendered form follows this list
- Merge CLI/Web dual API into single `/local/` endpoint, remove CLI handler
- Add `bot_session_events` table for event persistence and cold-start replay
- Add `discuss` session type (reserved for future Cahciua-style mode)
- Wire pipeline into HandleInbound: adapt → persist → push on every message
- Lazy cold-start replay: load events from DB on first session access
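
A rough sketch of that rendered form (the sender/channel/conversation/type attributes come from the list above; the Go type and the sample values are invented for illustration):

package main

import (
    "encoding/xml"
    "fmt"
)

// message mirrors the attributes named above; the struct itself is illustrative.
type message struct {
    XMLName      xml.Name `xml:"message"`
    Sender       string   `xml:"sender,attr"`
    Channel      string   `xml:"channel,attr"`
    Conversation string   `xml:"conversation,attr"`
    Type         string   `xml:"type,attr"`
    Body         string   `xml:",chardata"`
}

func main() {
    m := message{Sender: "alice", Channel: "telegram", Conversation: "group:42", Type: "text", Body: "hello"}
    out, _ := xml.Marshal(m)
    fmt.Println(string(out))
    // <message sender="alice" channel="telegram" conversation="group:42" type="text">hello</message>
}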

* feat: implement discuss mode with reactive driver and probe gate

Add discuss session mode where the bot autonomously decides when to speak
in group chats via tool-gated output (send tool only, no direct text reply).

- Add discuss driver (per-session goroutine, RC watch, step loop via
  agent.Generate, TR persistence, late-binding prompt with mention hints)
- Add system_discuss.md prompt template ("text = inner monologue, send = speak")
- Add context composition (MergeContext, ComposeContext, TrimContext) for
  RC + assistant/tool message interleaving by timestamp
- Add probe gate: when discuss_probe_model_id is set, a cheap model pre-filters
  group messages; no tool calls = silence, tool calls = activate the primary
  model (sketched after this list)
- Add /new [chat|discuss] command: explicit mode selection, defaults to
  discuss in groups, chat in DMs, chat-only for WebUI
- Add ResolveRunConfig on flow.Resolver for discuss driver to reuse
  model/tools/system-prompt resolution without reimplementing
- Fix send tool for discuss mode: same-conversation sends now go through
  SendDirect (channel adapter) instead of the local emitter shortcut
- Add target attribute to XML message format (reply_target for routing)
- Add discuss_probe_model_id to bots table settings
- Remove pipeline compaction (SetCompactCursor) — reuse existing compaction.Service
- Persist full SDK messages (including tool calls) in discuss mode
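
The probe gate in sketch form; the `Model` interface and `Result` type here are stand-ins, not the actual agent API:

package discuss

import "context"

// Hypothetical stand-ins for the real agent interfaces.
type Result struct{ ToolCalls []string }

type Model interface {
    Generate(ctx context.Context, renderedContext string) (Result, error)
}

// probeGate pre-filters a group message with the cheap probe model:
// no tool calls means silence; any tool call activates the primary model.
func probeGate(ctx context.Context, probe, primary Model, rc string) error {
    res, err := probe.Generate(ctx, rc)
    if err != nil {
        return err
    }
    if len(res.ToolCalls) == 0 {
        return nil // probe stayed quiet: skip this message entirely
    }
    _, err = primary.Generate(ctx, rc) // run the real step loop
    return err
}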

* refactor: unify DCP event layer, fix persistence and local channel

- Fix bot_session_events dedup index to include event_kind so that
  message + edit events for the same external_message_id coexist.
- Change CreateSessionEvent from :one to :exec so ON CONFLICT DO NOTHING
  does not produce spurious errors on duplicate delivery.
- Move ACL evaluation before event ingest; denied messages no longer
  enter bot_session_events or the in-memory pipeline.
- Let chat mode consume RenderedContext from the DCP pipeline when
  available, sharing the same event-driven context assembly as discuss.
- Collapse local WebSocket handler to route through HandleInbound
  instead of directly calling StreamChatWS, eliminating the dual
  business entry point.
- Extract buildBaseRunConfig shared builder so resolve() and
  ResolveRunConfig() no longer duplicate model/credentials/skills setup.
- Add StoreRound to RunConfigResolver interface so discuss driver
  persists assistant output with full metadata, usage, and memory
  extraction (same quality as chat mode).
- Fix discuss driver context: use context.Background() instead of the
  short-lived HTTP request context that was getting cancelled; a sketch
  follows this list.
- Fix model ID passed to StoreRound: return database UUID from
  ResolveRunConfig instead of SDK model name.
- Remove dead CLIAdapter/CLIType and update legacy web/cli references
  in tests and comments.
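
The context fix boils down to the usual detached-goroutine pattern; the function shape below is invented for illustration:

package discuss

import "context"

// startDriver launches the per-session goroutine. It must not inherit the
// HTTP request context: that context is cancelled when the handler returns,
// which was killing in-flight discuss steps.
func startDriver(run func(context.Context)) (stop context.CancelFunc) {
    ctx, cancel := context.WithCancel(context.Background())
    go run(ctx)
    return cancel
}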

* fix: stop idle discuss goroutines after 10min timeout

Discuss session goroutines were never cleaned up when a session became
inactive (e.g. after /new). Add a 10-minute idle timer that auto-exits
the goroutine and removes it from the sessions map when no new RC
arrives.
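
A minimal sketch of that idle shutdown; the channel and callback names are invented:

package discuss

import "time"

// driverLoop runs one discuss session. If no new rendered context (RC)
// arrives for 10 minutes, it removes itself from the sessions map and exits.
func driverLoop(rc <-chan string, remove func()) {
    for {
        select {
        case rendered, ok := <-rc:
            if !ok {
                remove()
                return
            }
            step(rendered) // run one discuss step on the new RC
        case <-time.After(10 * time.Minute):
            remove()
            return
        }
    }
}

func step(string) {} // stand-in for the real step loop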

* refactor: pipeline details — event types, structured reply, display content

- Remove [User sent N attachments] placeholder text from buildInboundQuery;
  attachment info is now expressed via pipeline <attachment> tags.
- Unify in-reply-to as structured ReplyRef (Sender/Preview fields) across
  Telegram, Discord, Feishu, and Matrix adapters instead of prepending
  [Reply to ...] text into the message body. Remove now-unused
  buildTelegramQuotedText, buildDiscordQuotedText, buildMatrixQuotedText.
- Make AdaptInbound return a CanonicalEvent interface and dispatch to
  adaptMessage/adaptEdit/adaptService based on metadata["event_type"]
  (sketched after this list).
- Add event_id column to bot_history_messages (migration 0059) so user
  messages can reference their canonical pipeline event.
- PersistEvent now returns the event UUID; HandleInbound passes it through
  to both persistPassiveMessage and ChatRequest.EventID for storeRound.
- Add FillDisplayContent to message service: extracts plain text from
  event_data for clean frontend display.
- Frontend extractMessageText prefers display_content when available,
  falling back to legacy strip logic for old messages.
- Fix: always generate headerifiedQuery for storage even when usePipeline
  is true, so user messages are persisted via storeRound in chat mode.
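
The dispatch, sketched with invented concrete types (only the interface name, the adapter names, and the metadata key come from this change):

package pipeline

import "fmt"

// CanonicalEvent is the interface named above; the structs are illustrative.
type CanonicalEvent interface{ Kind() string }

type MessageEvent struct{ Text string }
type EditEvent struct{ Text string }
type ServiceEvent struct{ Action string }

func (MessageEvent) Kind() string { return "message" }
func (EditEvent) Kind() string    { return "edit" }
func (ServiceEvent) Kind() string { return "service" }

// AdaptInbound dispatches on metadata["event_type"], as described above.
func AdaptInbound(metadata map[string]string, body string) (CanonicalEvent, error) {
    switch metadata["event_type"] {
    case "", "message":
        return MessageEvent{Text: body}, nil
    case "edit":
        return EditEvent{Text: body}, nil
    case "service":
        return ServiceEvent{Action: body}, nil
    default:
        return nil, fmt.Errorf("unknown event_type %q", metadata["event_type"])
    }
}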

* fix: use json.Marshal for pipeline context content serialization

The manual string escaping in buildMessagesFromPipeline only handled
double quotes but not newlines, backslashes, and other JSON special
characters, producing invalid json.RawMessage values. The LLM then
received empty/malformed context and complained about having no history.
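
The fix in sketch form, assuming a helper of this shape:

package pipeline

import "encoding/json"

// contentToRaw serializes arbitrary text into a valid JSON string literal.
// Hand-rolled escaping that only replaces double quotes leaves raw newlines
// and backslashes in the output, which is exactly the bug described above.
func contentToRaw(s string) (json.RawMessage, error) {
    b, err := json.Marshal(s)
    if err != nil {
        return nil, err
    }
    return json.RawMessage(b), nil
}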

* fix: restore WebSocket handler to use StreamChatWS directly

The previous refactoring replaced the WS handler with HandleInbound +
RouteHub subscription, which broke streaming because RouteHub events
use a different format (channel.StreamEvent) than what the frontend
expects (flow.WSStreamEvent with text_delta, tool_call_start, etc.).

Restore the original direct StreamChatWS call path so WebUI streaming
works again. The WS handler now matches the pre-refactoring behavior
while all other changes (pipeline, ACL, event types, etc.) are kept.

* feat: store display_text directly in bot_history_messages

Instead of computing display content at API response time by querying
bot_session_events via event_id, store the raw user text in a dedicated
display_text column at write time. This works for all paths including
the WebSocket handler which does not go through the pipeline/event layer.

- Migration 0060: add display_text TEXT column
- PersistInput gains DisplayText; filled from trimmedText (passive) and
  req.Query (storeRound)
- toMessageFields reads display_text into DisplayContent
- Remove FillDisplayContent runtime query and ListSessionEventsByEventID
- Frontend already prefers display_content when available (no change)

* fix: display_text should contain raw user text, not XML-wrapped query

req.Query gets overwritten to headerifiedQuery (with XML <message> tags)
before storeRound runs. Add RawQuery field to ChatRequest to preserve
the original user text, and use it for display_text in storeMessages.
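
In sketch form (fields other than Query and RawQuery omitted):

package flow

// ChatRequest (sketch): only the two fields named in this change.
type ChatRequest struct {
    Query    string // overwritten with the headerified, XML-wrapped form
    RawQuery string // original user text, preserved for display_text
}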

* fix(web): show discuss sessions

* chore(feishu): change discuss output to stream card

* fix(channel): unify discuss/chat send path and card markdown delivery

* feat(discuss): switch to stream execution with RouteHub broadcasting

* refactor(pipeline): remove context trimming from ComposeContext

The pipeline path should not trim context by token budget — the
upstream IC/RC already bounds the event window. Remove TrimContext,
FindWorkingWindowCursor, EstimateTokens, FormatLastProcessedMs (all
unused or only used for trimming), the maxTokens parameter from
ComposeContext, and MaxContextTokens from DiscussSessionConfig.

---------

Co-authored-by: 晨苒 <16112591+chen-ran@users.noreply.github.com>

Memoh

Self-hosted, always-on AI agent platform running in containers.

📌 Introduction to Memoh - The Case for an Always-On, Containerized Home Agent


Memoh is an always-on, containerized AI agent system. Create multiple AI bots, each running in its own isolated container with persistent memory, and interact with them across Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email, or the built-in Web UI. Bots can execute commands, edit files, browse the web, call external tools via MCP, and remember everything — like giving each bot its own computer and brain.

Quick Start

One-click install (requires Docker):

curl -fsSL https://memoh.sh | sudo sh

Silent install with all defaults: curl -fsSL ... | sudo sh -s -- -y

Or manually:

git clone --depth 1 https://github.com/memohai/Memoh.git
cd Memoh
cp conf/app.docker.toml config.toml
# Edit config.toml
sudo docker compose up -d

Install a specific version:

curl -fsSL https://memoh.sh | sudo MEMOH_VERSION=v0.6.0 sh

Use CN mirror for slow image pulls:

curl -fsSL https://memoh.sh | sudo USE_CN_MIRROR=true sh

On macOS or if your user is in the docker group, sudo is not required.

Visit http://localhost:8082 after startup. Default login: admin / admin123

See DEPLOYMENT.md for custom configuration and production setup.

Why Memoh?

Memoh is built for always-on continuity — an AI that stays online, and a memory that stays yours.

  • Lightweight & Fast: Built with Go as home/studio infrastructure, runs efficiently on edge devices.
  • Containerized by default: Each bot gets an isolated container with its own filesystem, network, and tools.
  • Hybrid split: Cloud inference for frontier model capability, local-first memory and indexing for privacy.
  • Multi-user first: Explicit sharing and privacy boundaries across users and bots.
  • Full graphical configuration: Configure bots, channels, MCP, skills, and all settings through a modern web UI — no coding required.

Features

Core

  • 🤖 Multi-Bot & Multi-User: Create multiple bots that chat privately, in groups, or with each other. Bots distinguish individual users in group chats, remember each person's context, and support cross-platform identity binding.
  • 📦 Containerized: Each bot runs in its own isolated containerd container with a dedicated filesystem and network — like having its own computer. Supports snapshots, data export/import, and versioning.
  • 🧠 Memory Engineering: LLM-driven fact extraction, hybrid retrieval (dense + sparse + BM25), 24-hour context loading, memory compaction & rebuild. Pluggable backends: Built-in (off / sparse / dense), Mem0, OpenViking.
  • 💬 9 Channels: Telegram, Discord, Lark (Feishu), QQ, Matrix, WeCom, WeChat, Email (Mailgun / SMTP / Gmail OAuth), and built-in Web UI — with unified streaming, rich text, and attachments.

Agent Capabilities

  • 🔧 MCP (Model Context Protocol): Full MCP support (HTTP / SSE / Stdio / OAuth). Connect external tool servers for extensibility; each bot manages its own independent MCP connections.
  • 🌐 Browser Automation: Headless Chromium/Firefox via Playwright — navigate, click, fill forms, screenshot, read accessibility trees, manage tabs.
  • 🎭 Skills & Subagents: Define bot personality via modular skill files; delegate complex tasks to sub-agents with independent context.
  • Automation: Cron-based scheduled tasks and periodic heartbeat for autonomous bot activity.

Management

  • 🖥️ Web UI: Modern dashboard (Vue 3 + Tailwind CSS) — streaming chat, tool call visualization, file manager, visual configuration for all settings. Dark/light theme, i18n.
  • 🔐 Access Control: Priority-based ACL rules with allow/deny effects, scoped by channel identity, channel type, or conversation.
  • 🧪 Multi-Model: Any OpenAI-compatible, Anthropic, or Google provider. Per-bot model assignment, provider OAuth, and automatic model import.
  • 🚀 One-Click Deploy: Docker Compose with automatic migration, containerd setup, and CNI networking.

Memory System

Memoh's memory system is built around Memory Providers — pluggable backends that control how a bot stores, retrieves, and manages long-term memory.

  • Built-in: Self-hosted, ships with Memoh. Three modes: Off (file-based, no vector search), Sparse (neural sparse vectors via a local model, no API cost), Dense (embedding-based semantic search via Qdrant).
  • Mem0: SaaS memory via the Mem0 API.
  • OpenViking: Self-hosted or SaaS memory with its own API.

Each bot binds one provider. During chat, the bot automatically extracts key facts from every conversation turn and stores them as structured memories. On each new message, the most relevant memories are retrieved via hybrid search and injected into the bot's context — giving it personalized, long-term recall across conversations.
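
As a rough illustration of that per-turn flow (all names below are invented for the sketch; see the documentation for the real provider API):

package memory

// Memory and Provider are illustrative stand-ins for a pluggable backend.
type Memory struct{ Fact string }

type Provider interface {
    Search(botID, query string, k int) ([]Memory, error)
    Store(botID string, facts []Memory) error
}

// turn shows the flow described above: retrieve relevant memories, inject
// them into the prompt, answer, then extract and store new facts.
func turn(p Provider, botID, userMsg string, llm func(prompt string) string) string {
    memories, _ := p.Search(botID, userMsg, 5) // hybrid search in the real system
    prompt := ""
    for _, m := range memories {
        prompt += "memory: " + m.Fact + "\n"
    }
    reply := llm(prompt + "user: " + userMsg)
    // Real fact extraction is LLM-driven; stubbed here as storing the raw text.
    _ = p.Store(botID, []Memory{{Fact: userMsg}})
    return reply
}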

Additional capabilities include memory compaction (merge redundant entries), rebuild, manual creation/editing, and vector manifold visualization (Top-K distribution & CDF curves). See the documentation for setup details.

Gallery

[Screenshots: Chat · Container · Providers · File Manager · Scheduled Tasks · Token Usage]

Architecture

flowchart TB
    subgraph Clients [" Clients "]
        direction LR
        CH["Channels<br/>Telegram · Discord · Feishu · QQ<br/>Matrix · WeCom · WeChat · Email"]
        WEB["Web UI (Vue 3 :8082)"]
    end

    CH & WEB --> API

    subgraph Server [" Server · Go :8080 "]
        API["REST API & Channel Adapters"]

        subgraph Agent [" In-process AI Agent "]
            TWILIGHT["Twilight AI SDK<br/>OpenAI · Anthropic · Google"]
            CONV["Conversation Flow<br/>Streaming · Sential · Loop Detection"]
        end

        subgraph ToolProviders [" Tool Providers "]
            direction LR
            T_CORE["Memory · Web Search<br/>Schedule · Contacts · Inbox"]
            T_EXT["Container · Email · Browser<br/>Subagent · Skill · TTS<br/>MCP Federation"]
        end

        API --> Agent --> ToolProviders
    end

    PG[("PostgreSQL")]
    QD[("Qdrant")]
    BROWSER["Browser Gateway<br/>(Playwright :8083)"]

    subgraph Workspace [" Workspace Containers · containerd "]
        direction LR
        BA["Bot A"] ~~~ BB["Bot B"] ~~~ BC["Bot C"]
    end

    Server --- PG
    Server --- QD
    ToolProviders -.-> BROWSER
    ToolProviders -- "gRPC Bridge over UDS" --> Workspace

Sub-projects Born for This Project

  • Twilight AI — A lightweight, idiomatic AI SDK for Go — inspired by Vercel AI SDK. Provider-agnostic (OpenAI, Anthropic, Google), with first-class streaming, tool calling, MCP support, and embeddings.

Roadmap

Please refer to the Roadmap for more details.

Development

Refer to CONTRIBUTING.md for development setup.

Star History

[Star History Chart]

Contributors

LICENSE: AGPLv3

Copyright (C) 2026 Memoh. All rights reserved.
