refactor: move client_type key from provider to model

Acbox
2026-02-18 18:30:27 +08:00
parent 77e9f585a1
commit d6c47472b2
43 changed files with 552 additions and 1015 deletions
+5 -4
@@ -4,7 +4,7 @@ Manage chat and embedding models.
## model list
-List all models with their provider, type, and multimodal flag.
+List all models with their provider, type, client type, and multimodal flag.
```bash
memoh model list
@@ -12,7 +12,7 @@ memoh model list
## model create
-Create a new model. Prompts for provider, model ID, type, and (for embedding models) dimensions.
+Create a new model. Prompts for provider, model ID, type, client type, and (for embedding models) dimensions.
```bash
memoh model create [options]
@@ -23,6 +23,7 @@ memoh model create [options]
| `--model_id <id>` | Model ID (e.g. `gpt-4`, `text-embedding-3-small`) |
| `--name <name>` | Display name |
| `--provider <provider>` | Provider name |
+| `--client_type <type>` | Client type: `openai-responses`, `openai-completions`, `anthropic-messages`, `google-generative-ai` |
| `--type <type>` | `chat` or `embedding` |
| `--dimensions <n>` | Embedding dimensions (required for embedding models) |
| `--multimodal` | Mark as multimodal |
@@ -30,8 +31,8 @@ memoh model create [options]
Examples:
```bash
-memoh model create --model_id gpt-4 --provider my-openai --type chat
-memoh model create --model_id text-embedding-3-small --provider my-openai --type embedding --dimensions 1536
+memoh model create --model_id gpt-4 --provider my-openai --client_type openai-responses --type chat
+memoh model create --model_id text-embedding-3-small --provider my-openai --client_type openai-completions --type embedding --dimensions 1536
memoh model create
# Interactive prompts
```
+2 -5
@@ -1,6 +1,6 @@
# Provider Commands
-Manage LLM providers (OpenAI, Anthropic, Ollama, etc.).
+Manage LLM providers (API endpoints and credentials).
## provider list
@@ -32,16 +32,13 @@ memoh provider create [options]
| Option | Description |
|--------|-------------|
| `--name <name>` | Provider name |
-| `--type <type>` | Client type |
| `--base_url <url>` | Base URL for the API |
| `--api_key <key>` | API key |
-Supported client types: `openai`, `openai-compat`, `anthropic`, `google`, `azure`, `bedrock`, `mistral`, `xai`, `ollama`, `dashscope`
Examples:
```bash
-memoh provider create --name my-ollama --type ollama --base_url http://localhost:11434
+memoh provider create --name my-ollama --base_url http://localhost:11434/v1
memoh provider create
# Interactive prompts
```
+13 -3
@@ -2,8 +2,19 @@
In Memoh, **provider** and **model** are separate but connected concepts:
-- A **provider** is the LLM service configuration (API endpoint, key, client type)
-- A **model** is the concrete chat or embedding model under that provider
+- A **provider** is the LLM service configuration (API endpoint and key)
+- A **model** is the concrete chat or embedding model under that provider, including its **client type**, which determines the API protocol to use
+## Client Types
+Each model has a `client_type` that determines how Memoh communicates with the LLM service:
+| Client Type | Description |
+|-------------|-------------|
+| `openai-responses` | OpenAI Responses API |
+| `openai-completions` | OpenAI Chat Completions API (also works with compatible services like Ollama, Mistral, etc.) |
+| `anthropic-messages` | Anthropic Messages API |
+| `google-generative-ai` | Google Generative AI API |
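As a sketch of how this table maps onto the CLI flags documented above (assuming providers named `my-openai` and `my-ollama` have already been created, as in the provider examples):

```bash
# OpenAI-hosted chat model, talking the Responses API protocol
memoh model create --model_id gpt-4 --provider my-openai --client_type openai-responses --type chat

# Ollama exposes an OpenAI-compatible endpoint, so its models
# use the openai-completions client type
memoh model create --model_id llama3.2 --provider my-ollama --client_type openai-completions --type chat
```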
## Typical Setup
@@ -26,4 +37,3 @@ This enables per-bot customization (for quality, latency, or cost).
- `Models > Add Provider > Select Provider > Add Model`
- `Bots > Select a bot > Settings > Choose chat/memory/embedding models`
+3 -3
@@ -46,10 +46,10 @@ Bots come with a rich set of built-in tools:
### Multi-LLM Provider Support
-Flexibly switch between a wide range of LLM providers:
+Flexibly switch between a wide range of LLM providers via four client types:
-- OpenAI, Anthropic, Google, Azure, AWS Bedrock
-- Mistral, XAI, Ollama, Dashscope (Qwen), and more
+- OpenAI Responses API, OpenAI Chat Completions API (including compatible services)
+- Anthropic Messages API, Google Generative AI API
### MCP Protocol Support
@@ -14,7 +14,7 @@ Click **Models** in the left sidebar to open the Provider and Model configuratio
![Models page - sidebar](/getting-started/provider-model-01-sidebar.png)
The page has two panels:
-- **Left**: Provider list and filter
+- **Left**: Provider list and search
- **Right**: Selected provider's details and models (or an empty state if none selected)
## Step 2: Add a Provider
@@ -30,7 +30,6 @@ In the dialog, fill in:
| **Name** | A display name for this provider (e.g. `my-openai`, `ollama-local`) |
| **API Key** | Your API key. For local services like Ollama, you can use a placeholder like `ollama` |
| **Base URL** | The API base URL (e.g. `https://api.openai.com/v1`, `http://localhost:11434/v1` for Ollama) |
-| **Type** | Client type: `openai`, `openai-compat`, `anthropic`, `google`, `azure`, `bedrock`, `mistral`, `xai`, `ollama`, `dashscope` |
![Add Provider dialog](/getting-started/provider-model-03-provider-dialog.png)
@@ -38,13 +37,11 @@ In the dialog, fill in:
- Name: `openai`
- API Key: `sk-...`
- Base URL: `https://api.openai.com/v1`
-- Type: `openai`
**Example — Ollama (local):**
- Name: `ollama`
- API Key: `ollama`
- Base URL: `http://localhost:11434/v1`
-- Type: `ollama`
Click **Add** to save. The new provider appears in the left sidebar.
@@ -60,6 +57,7 @@ Fill in:
| Field | Description |
|-------|-------------|
+| **Client Type** | API protocol: `openai-responses`, `openai-completions`, `anthropic-messages`, or `google-generative-ai` |
| **Type** | `chat` or `embedding` |
| **Model** | Model ID (e.g. `gpt-4`, `llama3.2`, `text-embedding-3-small`) |
| **Display Name** | Optional display name |
@@ -71,8 +69,8 @@ Fill in:
- One **embedding** model (for memory)
Add them under the same or different providers. For example:
-- Chat: `gpt-4` (OpenAI) or `llama3.2` (Ollama)
-- Embedding: `text-embedding-3-small` (OpenAI) or `nomic-embed-text` (Ollama)
+- Chat: `gpt-4` with client type `openai-responses` (OpenAI) or `llama3.2` with client type `openai-completions` (Ollama)
+- Embedding: `text-embedding-3-small` with client type `openai-completions` (OpenAI) or `nomic-embed-text` with client type `openai-completions` (Ollama)
## Step 4: Edit or Delete