mirror of https://github.com/memohai/Memoh.git (synced 2026-04-25 07:00:48 +09:00)

docs: update to v0.5
@@ -28,6 +28,10 @@ export const en = [
         text: 'Bot Management',
         link: '/getting-started/bot.md'
       },
+      {
+        text: 'Bot Access Control',
+        link: '/getting-started/access.md'
+      },
       {
         text: 'Container Management',
         link: '/getting-started/container.md'
@@ -89,6 +93,27 @@ export const en = [
         text: 'Built-in',
         link: '/memory-providers/builtin.md'
       },
+      {
+        text: 'Mem0',
+        link: '/memory-providers/mem0.md'
+      },
+      {
+        text: 'OpenViking',
+        link: '/memory-providers/openviking.md'
+      },
       ]
     },
+    {
+      text: 'TTS Providers',
+      items: [
+        {
+          text: 'Overview',
+          link: '/tts-providers/index.md'
+        },
+        {
+          text: 'Edge TTS',
+          link: '/tts-providers/edge.md'
+        },
+      ]
+    },
     {
@@ -20,10 +20,11 @@ Each bot runs in its own isolated container (powered by Containerd) with a separ
 
 ### Memory Engineering
 
-A deeply engineered memory layer:
+A deeply engineered, multi-provider memory layer:
 
+- **Multi-provider architecture**: Built-in (with off/sparse/dense modes), Mem0 (SaaS), and OpenViking (self-hosted or SaaS)
 - Automatically extracts key facts from each conversation turn and stores them as structured memories
-- Hybrid retrieval: semantic search (via Qdrant vector database) + keyword retrieval (BM25)
+- Hybrid retrieval: semantic search (via Qdrant vector database) + keyword retrieval (BM25) + neural sparse vectors
 - Loads the last 24 hours of conversation context by default
 - Automatic memory compaction and rebuild capabilities
 - Multi-language auto-detection
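The hybrid retrieval above merges several ranked signals. The docs do not say how Memoh fuses them; as a generic illustration only, reciprocal rank fusion (RRF) is one common way to combine ranked lists:

```python
# Generic reciprocal-rank-fusion sketch of hybrid retrieval.
# This is NOT Memoh's actual fusion method; it only illustrates how
# semantic, BM25, and sparse rankings could be merged into one list.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            # Documents near the top of any list accumulate more score.
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["m1", "m2", "m3"]   # dense / semantic order
bm25     = ["m2", "m1", "m4"]   # keyword order
sparse   = ["m2", "m3", "m1"]   # neural sparse order
assert rrf([semantic, bm25, sparse])[0] == "m2"  # ranked first by two signals
```

A memory ranked highly by two of the three signals wins over one ranked highly by a single signal.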
@@ -49,6 +50,7 @@ Bots come with a rich set of built-in tools:
 - **Skills** — Define bot personality via IDENTITY.md, SOUL.md, and modular skill files that bots can enable/disable at runtime
 - **Container Operations** — Read/write files, edit code, and execute commands inside the container
 - **Memory Management** — Search and manage memories
+- **Text-to-Speech** — Synthesize speech via configurable TTS providers (Edge TTS with 256+ voices)
 - **Messaging** — Send messages and reactions to specific users or channels
 
 ### Multi-LLM Provider Support
@@ -0,0 +1,109 @@
# Bot Access Control

Memoh uses an ACL (Access Control List) system to control who can interact with your bot. You can configure guest access, whitelist specific users or channel identities, and blacklist others — all from the bot's **Access** tab.

---

## Concepts

### Authorization Layers

Bot access is enforced at two levels:

1. **Management Access**: Only the bot **owner** and system **admins** can edit bot settings, manage ACL rules, and configure the bot. This is not configurable — it is based on ownership.
2. **Chat Trigger Access**: Controls who can send messages to the bot and trigger a response. This is what the ACL system manages.

### Subject Types

ACL rules can target three kinds of subjects:

| Subject | Description |
|---------|-------------|
| **Guest (all)** | A global toggle — when enabled, anyone can chat with the bot without being explicitly listed. |
| **User** | A specific Memoh user account. |
| **Channel Identity** | A specific identity on an external channel (e.g. a Telegram user, a Discord member). Useful when the person doesn't have a Memoh account. |

### Evaluation Order

When an incoming message arrives, the bot evaluates access in this order:

1. Bot owner or system admin → **Allow**
2. User or channel identity has a **deny** rule → **Deny**
3. User or channel identity has an **allow** rule → **Allow**
4. Guest access is enabled → **Allow**
5. None of the above → **Deny**

Blacklist (deny) rules are always checked before whitelist (allow) rules. This means a blacklisted user cannot bypass the block even if guest access is enabled.
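The evaluation order can be sketched as a small function. This is an illustrative sketch only, not Memoh's actual code; the parameter names are hypothetical:

```python
# Illustrative sketch of the ACL evaluation order described above
# (not Memoh's actual implementation).

def check_access(subject, *, is_owner_or_admin, deny_rules, allow_rules, guest_enabled):
    if is_owner_or_admin:        # 1. owner/admin always allowed
        return True
    if subject in deny_rules:    # 2. blacklist checked before whitelist
        return False
    if subject in allow_rules:   # 3. explicit whitelist entry
        return True
    return guest_enabled         # 4/5. guest toggle decides everything else

# A blacklisted subject stays blocked even with guest access on:
assert check_access("alice", is_owner_or_admin=False,
                    deny_rules={"alice"}, allow_rules=set(),
                    guest_enabled=True) is False
```

Note how a deny rule wins even when the same subject also appears in the whitelist, matching rule 2 above.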
---

## Managing Access

Open a bot's **Access** tab to configure its access control.

### Guest Access

Toggle **Allow Guest Access** to let anyone chat with the bot without an explicit whitelist entry. This is useful for public-facing bots.

When guest access is disabled, only the bot owner, admins, and explicitly whitelisted users/identities can trigger the bot.

### Whitelist

The whitelist grants specific users or channel identities permission to chat with the bot.

1. Click **Add** in the Whitelist section.
2. Select a subject type:
   - **User**: Search and select a Memoh user.
   - **Channel Identity**: Search and select a channel identity (e.g. a Telegram user the bot has seen before).
3. Optionally set **source scope** to restrict the rule to a specific context:
   - **Channel**: Only applies when the message comes from a specific channel (e.g. your Telegram bot channel).
   - **Conversation Type**: `private`, `group`, or `thread`.
   - **Conversation ID**: A specific chat/group ID.
   - **Thread ID**: A specific thread within a conversation (requires Conversation ID).
4. Click **Save**.

Without source scope, the rule applies globally — the subject can chat with the bot from any channel.

### Blacklist

The blacklist denies specific users or channel identities from chatting with the bot. The setup process is the same as for the whitelist.

Blacklist rules take priority over whitelist rules and guest access. Use this to block specific users while keeping the bot open to others.

### Source Scope

Source scope lets you create fine-grained rules. For example:

- Allow a user to chat only via Telegram, but not Discord
- Block a channel identity only in group conversations
- Restrict access to a specific thread in a specific group

Scope fields form a hierarchy: **Channel → Conversation Type → Conversation ID → Thread ID**. Each level is optional, but a Thread ID requires a Conversation ID, and a Conversation ID requires a Channel.
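The two hierarchy constraints can be expressed as a short validation rule. This is an illustrative sketch; the field names are assumptions, not Memoh's actual schema:

```python
# Sketch of the scope hierarchy rule: each level is optional, but finer
# levels require the coarser ones named in the docs. Field names are
# hypothetical, not Memoh's schema.

def scope_is_valid(channel=None, conv_type=None, conv_id=None, thread_id=None):
    if thread_id is not None and conv_id is None:
        return False   # a Thread ID requires a Conversation ID
    if conv_id is not None and channel is None:
        return False   # a Conversation ID requires a Channel
    return True        # conv_type alone carries no extra constraint here

assert scope_is_valid(channel="telegram", conv_id="g1", thread_id="t1") is True
assert scope_is_valid(thread_id="t1") is False  # missing Conversation ID
```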
---

## Examples

### Public Bot (Anyone Can Chat)

1. Open the bot's **Access** tab.
2. Enable **Allow Guest Access**.
3. Done — anyone on any connected channel can now message the bot.

### Private Bot with Selected Users

1. Disable **Allow Guest Access**.
2. Add each authorized user or channel identity to the **Whitelist**.
3. Only listed subjects (plus the bot owner and admins) can trigger the bot.

### Public Bot with Blocked Users

1. Enable **Allow Guest Access**.
2. Add problematic users/identities to the **Blacklist**.
3. Everyone except blacklisted subjects can chat with the bot.

### Channel-Scoped Access

1. Add a whitelist rule for a user.
2. Set the **Channel** source scope to your Telegram channel.
3. The user can only chat with the bot via Telegram — messages from other channels are denied.
@@ -39,6 +39,7 @@ After creating a context, select it from the sidebar and update its settings.
 | Field | Description |
 |-------|-------------|
 | **Name** | The display name shown in the UI. |
+| **Core** | Browser engine: `chromium` (default) or `firefox`. |
 | **Viewport Width** | Browser viewport width in pixels. |
 | **Viewport Height** | Browser viewport height in pixels. |
 | **User Agent** | Optional custom browser user agent string. |
@@ -78,6 +79,20 @@ This lets the bot interact with real websites instead of relying only on static
 
 ---
 
+## Browser Core Selection
+
+Memoh's browser image can include Chromium, Firefox, or both. The available cores are determined at build time by the `BROWSER_CORES` build argument.
+
+The install script prompts for browser core selection during setup. To rebuild manually with specific cores:
+
+```bash
+BROWSER_CORES=chromium docker compose --profile browser build browser
+```
+
+Valid values for `BROWSER_CORES`: `chromium`, `firefox`, `chromium,firefox` (default).
+
+---
+
 ## Next Steps
 
 - If you have not configured memory yet, continue with [Built-in Memory Provider](/memory-providers/builtin.md).
@@ -6,7 +6,7 @@ Memoh's structured long-term memory system allows bots to remember information a
 
 Before using the **Memory** tab, make sure your bot already has a **Memory Provider** configured.
 
-1. Create a provider from the [Built-in Memory Provider](/memory-providers/builtin.md) guide.
+1. Create a provider from one of the [Memory Providers](/memory-providers/index.md) (Built-in, Mem0, or OpenViking).
 2. Open your bot's **Settings** tab.
 3. Select the provider in the **Memory Provider** field.
 4. Click **Save**.
@@ -15,9 +15,9 @@ Without a memory provider, the bot will not have an active memory backend config
 
 ---
 
-## Concept: Semantic Search
+## Concept: Memory Retrieval
 
-With the built-in memory provider, memories are stored in Memoh's memory system and retrieved through semantic search. When a user asks a question, Memoh finds the most relevant memories and includes them in the bot's runtime context.
+Memories are stored and retrieved through the assigned memory provider. Depending on the provider type and mode, retrieval may use file-based indexing, sparse vectors, dense embeddings, or an external API. When a user sends a message, Memoh finds the most relevant memories and includes them in the bot's runtime context.
 
 ---
@@ -63,5 +63,5 @@ Visualizes the scoring threshold of retrieved memories, helping you fine-tune ho
 
 - The bot automatically searches and retrieves memories during chat.
 - The assigned **Memory Provider** controls the memory backend used by the bot.
-- For the built-in provider, you can optionally configure a **Memory Model** and an **Embedding Model** inside the provider settings.
+- Provider-specific settings (such as memory mode, embedding model, or API keys) are configured in the provider itself — see [Memory Providers](/memory-providers/index.md).
 - Memories provide the long-term knowledge that makes each bot unique to its owner.
@@ -2,6 +2,46 @@
 Docker is the recommended way to run Memoh. The stack includes PostgreSQL, Qdrant, the main server (with embedded Containerd), agent gateway, and web UI — all orchestrated via Docker Compose. You do not need to install containerd, nerdctl, or buildkit on your host; everything runs inside containers.
 
+## Service Architecture
+
+The Docker Compose stack consists of multiple services. Some are always started; others are optional and enabled via `--profile`:
+
+| Service | Profile | Description |
+|---------|---------|-------------|
+| **server** | *(core)* | Main Memoh server with embedded Containerd |
+| **agent** | *(core)* | Agent gateway for bot execution |
+| **web** | *(core)* | Web UI (Vue 3) |
+| **postgres** | *(core)* | PostgreSQL database |
+| **qdrant** | `qdrant` | Qdrant vector database for memory search (sparse and dense modes) |
+| **browser** | `browser` | Playwright-based browser gateway for bot web automation |
+| **sparse** | `sparse` | Neural sparse encoding service for memory retrieval (see below) |
+
+### Sparse Service
+
+The **sparse** container provides neural sparse vector encoding for memory retrieval. It runs a lightweight Python (Flask) service on port 8085 that uses the [`opensearch-neural-sparse-encoding-multilingual-v1`](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1) model from OpenSearch.
+
+**What it does:**
+
+- Converts document text into sparse vectors (a compact list of token indices + importance weights) using a masked language model
+- Encodes queries using IDF-weighted term lookup for fast, efficient retrieval
+- Works with Qdrant to enable semantic memory search without requiring an external embedding API
+
+**Why use it:**
+
+- **No embedding API costs** — The model runs locally inside the container, so you don't need an OpenAI/Cohere/etc. embedding API key
+- **Multilingual** — The underlying model supports multiple languages out of the box
+- **Good retrieval quality** — Neural sparse encoding provides significantly better results than keyword-only search (BM25), while being lighter than dense embedding models
+
+**When to enable it:**
+
+Enable the sparse profile (`--profile sparse`) if you plan to use the built-in memory provider in **sparse mode**. The model is pre-downloaded during the Docker image build, so the container starts quickly without needing to fetch weights at runtime.
+
+```bash
+docker compose --profile qdrant --profile sparse --profile browser up -d
+```
+
+For more details on memory modes, see [Built-in Memory Provider](/memory-providers/builtin.md).
+
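As a rough picture of what sparse retrieval computes (a toy sketch, not the actual service API): each text becomes a map of token IDs to weights, and relevance is a dot product over the few tokens two vectors share.

```python
# Toy sketch of sparse-vector scoring (NOT the sparse service's API).
# A neural sparse encoder maps text to {token_id: weight}; relevance is
# a dot product over the tokens the query and document have in common.

def sparse_dot(query_vec: dict[int, float], doc_vec: dict[int, float]) -> float:
    # Iterate over the smaller vector; most token IDs don't overlap.
    small, large = sorted((query_vec, doc_vec), key=len)
    return sum(w * large[t] for t, w in small.items() if t in large)

# Toy vectors standing in for encoder output:
query = {101: 0.9, 205: 0.4}            # e.g. "favorite color"
doc_a = {101: 0.8, 205: 0.5, 900: 0.1}  # memory mentioning favorite color
doc_b = {333: 0.7, 900: 0.2}            # unrelated memory

assert sparse_dot(query, doc_a) > sparse_dot(query, doc_b)
```

Because most weights are zero, these vectors stay compact and can be indexed efficiently by Qdrant's sparse vector support.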
 ## Prerequisites
 
 - [Docker](https://docs.docker.com/get-docker/)
@@ -19,11 +59,11 @@ curl -fsSL https://memoh.sh | sudo sh
 The script will:
 
 1. Check for Docker and Docker Compose
-2. Prompt for configuration (workspace, data directory, admin credentials, JWT secret, Postgres password)
+2. Prompt for configuration (workspace, data directory, admin credentials, JWT secret, Postgres password, sparse service toggle, browser core selection)
 3. Fetch the latest release tag from GitHub and clone the repository
 4. Generate `config.toml` from the Docker template with your settings
 5. Pin Docker image versions to the release
-6. Pull images and start all services
+6. Build the browser image with selected cores and start all services
 
 **Silent install** (use all defaults, no prompts):
@@ -42,7 +82,13 @@ Defaults when running silently:
 **Install a specific version:**
 
 ```bash
-curl -fsSL https://memoh.sh | sudo MEMOH_VERSION=v1.0.0 sh
+curl -fsSL https://memoh.sh | sudo sh -s -- --version v0.5.0
 ```
 
+Or using the environment variable:
+
+```bash
+curl -fsSL https://memoh.sh | sudo MEMOH_VERSION=v0.5.0 sh
+```
+
 **Use China mainland mirror** (for slow image pulls):
@@ -67,10 +113,10 @@ Edit `config.toml` — at minimum change:
 - `auth.jwt_secret` — Generate with `openssl rand -base64 32`
 - `postgres.password` — Database password (also set `POSTGRES_PASSWORD` env var to match)
 
-Then start (recommended — with Qdrant and Browser):
+Then start (recommended — with Qdrant, Browser, and Sparse):
 
 ```bash
-sudo POSTGRES_PASSWORD=your-db-password docker compose --profile qdrant --profile browser up -d
+sudo POSTGRES_PASSWORD=your-db-password docker compose --profile qdrant --profile browser --profile sparse up -d
 ```
 
 Or start core services only (no vector DB or browser automation):
@@ -136,6 +182,8 @@ docker compose pull && docker compose up -d  # Update to latest images
 | `MEMOH_CONFIG` | `./config.toml` | Path to the configuration file |
 | `MEMOH_VERSION` | *(latest release)* | Git tag to install (e.g. `v1.0.0`). Also pins Docker image versions. |
 | `USE_CN_MIRROR` | `false` | Set to `true` to use China mainland mirror for Docker images |
+| `BROWSER_CORES` | `chromium,firefox` | Browser engines to include in the browser image (`chromium`, `firefox`, or `chromium,firefox`) |
+| `USE_SPARSE` | `false` | Set to `true` to enable the sparse memory service (`--profile sparse`) |
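To illustrate how these toggles relate to Compose profiles, here is a hypothetical sketch of assembling the start command (the real install script's logic may differ):

```python
# Hypothetical sketch of mapping the USE_SPARSE-style toggles above to a
# `docker compose` command. This is NOT the install script's actual code;
# it only shows how toggles become --profile flags.

def compose_command(use_sparse: bool = False, use_browser: bool = True) -> str:
    profiles = ["qdrant"]
    if use_browser:
        profiles.append("browser")
    if use_sparse:
        profiles.append("sparse")
    flags = " ".join(f"--profile {p}" for p in profiles)
    return f"docker compose {flags} up -d"

# Matches the documented recommended command when all toggles are on:
assert compose_command(use_sparse=True) == (
    "docker compose --profile qdrant --profile browser --profile sparse up -d"
)
```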
 
 ## Production Checklist
 
@@ -7,17 +7,26 @@ The built-in memory provider is the standard memory backend shipped with Memoh.
 - Manual memory creation and editing
 - Memory compaction and rebuild workflows
 
-To configure it well, you usually assign:
+The built-in provider operates in one of three **memory modes**, each with different infrastructure requirements and retrieval capabilities.
 
-- **Memory Model**: The LLM used for memory extraction and decision making
-- **Embedding Model**: The embedding model used for dense vector search
+---
+
+## Memory Modes
+
+| Mode | Index | Requirements | Use Case |
+|------|-------|-------------|----------|
+| **Off** | File-based only | None | Lightweight setup, no vector search |
+| **Sparse** | Neural sparse vectors | Sparse service + Qdrant (`--profile sparse`) | Good retrieval quality without embedding API costs |
+| **Dense** | Dense embeddings | Embedding model + Qdrant (`--profile qdrant`) | Highest-quality semantic search |
+
+### How Sparse Mode Works
+
+Sparse mode uses the [`opensearch-neural-sparse-encoding-multilingual-v1`](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1) model (from the OpenSearch project) to convert text into sparse vectors — compact lists of token indices with importance weights. Unlike dense mode, which requires an external embedding API, the sparse model runs locally in the `sparse` container with no API key or cost. It supports multiple languages and provides significantly better retrieval quality than keyword-only search.
+
+---
+
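The mode/requirements matrix above can be encoded as a small lookup. This is illustrative only, not Memoh's actual validation code:

```python
# Sketch of the memory-mode requirements table (illustrative only; not
# Memoh's validation logic). Each mode maps to the infrastructure it needs.

REQUIREMENTS = {
    "off": set(),                              # file-based index, nothing extra
    "sparse": {"qdrant", "sparse_service"},    # sparse service + Qdrant
    "dense": {"qdrant", "embedding_model"},    # embedding model + Qdrant
}

def missing_requirements(mode: str, available: set[str]) -> set[str]:
    """Return which pieces of infrastructure a mode still needs."""
    return REQUIREMENTS[mode] - available

# Sparse mode with only Qdrant running still needs the sparse service:
assert missing_requirements("sparse", {"qdrant"}) == {"sparse_service"}
```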
 ## Creating a Built-in Provider
 
 Manage providers from the **Memory Providers** page in the sidebar.
 
 1. Navigate to the **Memory Providers** page.
 2. Click **Add Memory Provider**.
 3. Fill in the following fields:
@@ -29,22 +38,62 @@ Manage providers from the **Memory Providers** page in the sidebar.
 
 ## Configuring a Built-in Provider
 
-After creating a provider, select it from the sidebar and configure its settings.
+After creating a provider, select it from the list and configure its settings.
 
 | Field | Description |
 |-------|-------------|
 | **Name** | The display name shown in the UI. |
-| **Provider Type** | The provider implementation. Currently this is `builtin` only. |
 | **Memory Model** | Optional chat model used for memory extraction and memory-related decisions. |
-| **Embedding Model** | Optional embedding model used for semantic vector search. |
+| **Memory Mode** | `off` (default), `sparse`, or `dense`. Controls how memories are indexed and retrieved. |
+| **Embedding Model** | Embedding model for dense vector search. Only used in `dense` mode. |
+| **Qdrant Collection** | Qdrant collection name. Defaults to `memory_sparse`. |
 
 ### Managing Providers
 
-- **Edit**: Select a provider and update its name or model bindings.
+- **Edit**: Select a provider and update its settings.
 - **Delete**: Remove a provider you no longer use.
 
 ---
 
+## Infrastructure Requirements
+
+### Off Mode
+
+No additional infrastructure required. Memories are stored and retrieved using file-based indexing only.
+
+### Sparse Mode
+
+Requires the **sparse service** (runs the [`opensearch-neural-sparse-encoding-multilingual-v1`](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1) model locally) and **Qdrant** vector database. Enable both with Docker Compose profiles:
+
+```bash
+docker compose --profile qdrant --profile sparse up -d
+```
+
+The following sections must be present in `config.toml`:
+
+```toml
+[qdrant]
+base_url = "http://qdrant:6334"
+
+[sparse]
+base_url = "http://sparse:8085"
+```
+
+### Dense Mode
+
+Requires an **embedding model** (configured in the provider settings) and **Qdrant**:
+
+```bash
+docker compose --profile qdrant up -d
+```
+
+The Qdrant section must be present in `config.toml`:
+
+```toml
+[qdrant]
+base_url = "http://qdrant:6334"
+```
+
+---
+
 ## Assigning a Memory Provider to a Bot
 
 1. Navigate to the **Bots** page and open your bot.
@@ -4,11 +4,11 @@ Memoh uses a **Memory Provider** to define how a bot stores, retrieves, and mana
 
 ## Available Providers
 
-Memoh currently includes the following memory provider:
+Memoh supports the following memory providers:
 
-- [Built-in](/memory-providers/builtin.md): The default memory system included with Memoh.
-
-More provider types may be added in future versions, but right now `builtin` is the only supported provider type in the product and web UI.
+- [Built-in](/memory-providers/builtin.md): The default memory system included with Memoh. Supports three modes — off (file-based), sparse (neural sparse vectors), and dense (embedding-based semantic search). Fully self-hosted.
+- [Mem0](/memory-providers/mem0.md): SaaS memory provider via the Mem0 API. Requires an API key.
+- [OpenViking](/memory-providers/openviking.md): Self-hosted or SaaS memory provider with its own API.
 
 ---
@@ -24,5 +24,7 @@ More provider types may be added in future versions, but right now `builtin` is
 
 ## Next Steps
 
-- To configure the currently supported provider, continue with [Built-in Memory Provider](/memory-providers/builtin.md).
-- To manage memory entries after the provider is assigned, see [Bot Memory Management](/getting-started/memory.md).
+- [Built-in Memory Provider](/memory-providers/builtin.md) — Default, self-hosted with three memory modes.
+- [Mem0 Memory Provider](/memory-providers/mem0.md) — SaaS via Mem0 API.
+- [OpenViking Memory Provider](/memory-providers/openviking.md) — Self-hosted or SaaS.
+- [Bot Memory Management](/getting-started/memory.md) — Manage memory entries after the provider is assigned.
@@ -0,0 +1,45 @@
# Mem0 Memory Provider

Mem0 is a SaaS memory provider that connects your bot to the [Mem0](https://mem0.ai) platform. Instead of managing memory infrastructure yourself, Mem0 handles storage, retrieval, and indexing through its cloud API.

---

## Creating a Mem0 Provider

1. Navigate to the **Memory Providers** page.
2. Click **Add Memory Provider**.
3. Fill in the following fields:
   - **Name**: A display name for this provider.
   - **Provider Type**: Select `mem0`.
4. Click **Create**.

---

## Configuring a Mem0 Provider

After creating a provider, select it from the list and configure its settings.

| Field | Required | Description |
|-------|----------|-------------|
| **Base URL** | No | Mem0 API base URL. Defaults to `https://api.mem0.ai` when empty. |
| **API Key** | Yes | API key for Mem0 authentication (stored as a secret). |
| **Organization ID** | No | Organization ID for workspace scoping. |
| **Project ID** | No | Project ID for workspace scoping. |

---

## Assigning a Mem0 Provider to a Bot

1. Navigate to the **Bots** page and open your bot.
2. Go to the **Settings** tab.
3. Find the **Memory Provider** dropdown.
4. Select the Mem0 provider you created.
5. Click **Save**.

---

## Usage

Once assigned, the bot will use Mem0 as its memory backend. Memory extraction, search, and management operations are routed through the Mem0 API.

For day-to-day memory operations, see [Bot Memory Management](/getting-started/memory.md).
@@ -0,0 +1,43 @@
# OpenViking Memory Provider

OpenViking is a memory provider that can be self-hosted or used as a SaaS service. It provides an alternative memory backend for bots that need a dedicated memory API.

---

## Creating an OpenViking Provider

1. Navigate to the **Memory Providers** page.
2. Click **Add Memory Provider**.
3. Fill in the following fields:
   - **Name**: A display name for this provider.
   - **Provider Type**: Select `openviking`.
4. Click **Create**.

---

## Configuring an OpenViking Provider

After creating a provider, select it from the list and configure its settings.

| Field | Required | Description |
|-------|----------|-------------|
| **Base URL** | Yes | OpenViking API endpoint (e.g. `http://openviking:8088`). |
| **API Key** | No | API key for authentication (stored as a secret). |

---

## Assigning an OpenViking Provider to a Bot

1. Navigate to the **Bots** page and open your bot.
2. Go to the **Settings** tab.
3. Find the **Memory Provider** dropdown.
4. Select the OpenViking provider you created.
5. Click **Save**.

---

## Usage

Once assigned, the bot will use OpenViking as its memory backend. Memory extraction, search, and management operations are routed through the OpenViking API.

For day-to-day memory operations, see [Bot Memory Management](/getting-started/memory.md).
@@ -0,0 +1,44 @@
# Edge TTS

Edge TTS uses Microsoft Edge's public read-aloud API for speech synthesis. It is free, requires no API key, and supports 256+ voices across 50+ languages.

---

## Creating an Edge TTS Provider

1. Navigate to the **TTS Providers** page.
2. Click **Add**.
3. Select `edge` as the provider type.
4. Click **Create**.

The default model `edge-read-aloud` is automatically imported when the provider is created.

---

## Configuring the Model

Click the `edge-read-aloud` model to configure its settings.

| Field | Description |
|-------|-------------|
| **Voice** | Language + voice ID. Default: `en-US-EmmaMultilingualNeural`. Over 256 voices available across 50+ languages. |
| **Format** | Audio output format. Options: `audio-24khz-48kbitrate-mono-mp3` (default), `audio-24khz-96kbitrate-mono-mp3`, `webm-24khz-16bit-mono-opus`. |
| **Speed** | Playback speed. Options: `0.5`, `1.0` (default), `2.0`, `3.0`. |
| **Pitch** | Voice pitch adjustment. Range: `-100` to `+100`, default `0`. |
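Edge read-aloud style requests typically express rate and pitch as signed offset strings such as `+0%` and `+0Hz`. How Memoh maps the Speed and Pitch settings above to that wire format is not documented here, so the following converter is purely a hypothetical sketch:

```python
# Hypothetical mapping from the model's Speed/Pitch settings to the
# "+N%" / "+NHz" offset strings used by Edge read-aloud style requests.
# The exact format Memoh sends is an assumption, not taken from its source.

def rate_string(speed: float) -> str:
    """Speed multiplier -> signed percent offset (1.0 means no change)."""
    pct = round((speed - 1.0) * 100)
    return f"{pct:+d}%"

def pitch_string(pitch: int) -> str:
    """Pitch setting -> signed Hz offset string."""
    return f"{pitch:+d}Hz"

assert rate_string(2.0) == "+100%"   # double speed
assert pitch_string(0) == "+0Hz"     # default pitch
```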
---

## Assigning to a Bot

1. Open a bot's **Settings** tab.
2. Find the **TTS Model** dropdown.
3. Select the configured Edge TTS model.
4. Click **Save**.

The bot can now synthesize speech using Edge TTS.

---

## Testing

Use the built-in synthesis test button on the model configuration page to preview how the selected voice, format, speed, and pitch sound before assigning to a bot.
@@ -0,0 +1,33 @@
# TTS Providers

Memoh supports **Text-to-Speech (TTS)** so bots can synthesize spoken audio from text. The TTS system is organized into three layers:

- **TTS Provider**: A service type (e.g. Edge TTS). You create named provider instances from the TTS Providers page.
- **TTS Model**: A specific voice/model under a provider (e.g. `edge-read-aloud`). Models have configurable voice, format, speed, and pitch settings.
- **Bot Assignment**: In Bot Settings, select a TTS Model. The bot can then synthesize speech in conversations.

---

## Basic Flow

1. Navigate to the **TTS Providers** page from the sidebar.
2. Click **Add** and select a provider type (e.g. `edge`).
3. Click **Create** — the provider's default model is auto-imported.
4. Click the model to configure voice, format, speed, and pitch.
5. Test synthesis with the built-in test button.
6. Open a bot's **Settings** tab and select the TTS Model.
7. Save — the bot can now synthesize speech.

---

## Available Providers

| Provider | Description |
|----------|-------------|
| [Edge TTS](/tts-providers/edge.md) | Free, uses Microsoft Edge's public read-aloud API. 256+ voices across 50+ languages. No API key required. |

---

## Next Steps

- To set up the currently available provider, continue with [Edge TTS](/tts-providers/edge.md).