Memoh/devenv/docker-compose.yml
Acbox Liu 8d5c38f0e5 refactor: unify providers and models tables (#338)
* refactor: unify providers and models tables

- Rename `llm_providers` → `providers`, `llm_provider_oauth_tokens` → `provider_oauth_tokens`
- Remove `tts_providers` and `tts_models` tables; speech models now live in the unified `models` table with `type = 'speech'`
- Replace top-level `api_key`/`base_url` columns with a JSONB `config` field on `providers`
- Rename `llm_provider_id` → `provider_id` across all references
- Add `edge-speech` client type and `conf/providers/edge.yaml` default provider
- Create new read-only speech endpoints (`/speech-providers`, `/speech-models`) backed by filtered views of the unified tables
- Remove old TTS CRUD handlers; simplify speech page to read-only + test
- Update registry loader to skip malformed YAML files instead of failing entirely
- Fix YAML quoting for model names containing colons in openrouter.yaml
- Regenerate sqlc, swagger, and TypeScript SDK
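
The colon fix mentioned above reflects a general YAML rule: a plain scalar containing `: ` (colon followed by a space) is parsed as a nested mapping unless quoted. A hypothetical illustration — these are not the real model IDs from openrouter.yaml:

```yaml
# Hypothetical model IDs, for illustration only.
models:
  - id: vendor/model: free    # broken: YAML parses this as a nested mapping
  - id: "vendor/model: free"  # quoted: the whole string is one scalar
```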

* fix: exclude speech providers from providers list endpoint

ListProviders now filters out client_type matching '%-speech' so Edge
and future speech providers no longer appear on the Providers page.
ListSpeechProviders uses the same pattern match instead of hard-coding
'edge-speech'.

* fix: use explicit client_type list instead of LIKE pattern

Replace '%-speech' pattern with explicit IN ('edge-speech') for both
ListProviders (exclusion) and ListSpeechProviders (inclusion). New
speech client types must be added to both queries.
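
The two queries described above might look roughly like the following sketch (table and column names are taken from the commit text; the exact sqlc query bodies are an assumption):

```sql
-- ListProviders: hide speech providers from the Providers page
SELECT * FROM providers
WHERE client_type NOT IN ('edge-speech');

-- ListSpeechProviders: the same set, inverted
SELECT * FROM providers
WHERE client_type IN ('edge-speech');
```

The trade-off is deliberate: an explicit `IN` list cannot accidentally match an unrelated client type the way `LIKE '%-speech'` could, at the cost of touching both queries whenever a new speech client type is added.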

* fix: use EXECUTE for dynamic SQL in migrations referencing old schema

PL/pgSQL pre-validates column/table references in static SQL statements
inside DO blocks before evaluating IF/RETURN guards. This caused
migrations 0010-0061 to fail on fresh databases where the canonical
schema uses `providers`/`provider_id` instead of `llm_providers`/
`llm_provider_id`.

Wrap all SQL that references potentially non-existent old schema objects
(llm_providers, llm_provider_id, tts_providers, tts_models, etc.) in
EXECUTE strings so they are only parsed at runtime when actually reached.
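
The pattern being described is roughly the following (a minimal sketch; the statements in the real migrations differ):

```sql
DO $$
BEGIN
  IF to_regclass('public.llm_providers') IS NULL THEN
    RETURN;  -- fresh database: the old schema never existed
  END IF;
  -- Written as static SQL, the reference to llm_providers would be
  -- validated before the guard above takes effect, failing on fresh
  -- databases. EXECUTE defers parsing until this line is reached.
  EXECUTE 'ALTER TABLE llm_providers ADD COLUMN IF NOT EXISTS sort integer DEFAULT 0';
END $$;
```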

* fix: revert canonical schema to use llm_providers for migration compatibility

The CI migrations workflow (up → down → up) failed because 0061 down
renames `providers` back to `llm_providers`, but 0001 down only dropped
`providers` — leaving `llm_providers` as a remnant. On the second
migrate up, 0010 found the stale `llm_providers` and tried to reference
`models.llm_provider_id` which no longer existed.

Revert 0001 canonical schema to use original names (llm_providers,
tts_providers, tts_models) so incremental migrations work naturally and
0061 handles the final rename. Remove EXECUTE wrappers and unnecessary
guards from migrations that now always operate on llm_providers.

* fix: icons

* fix: sync canonical schema with 0061 migration to fix sqlc column mismatch

0001_init.up.sql still used old names (llm_providers, llm_provider_id)
and included dropped tts_providers/tts_models tables. sqlc could not
parse the PL/pgSQL EXECUTE in migration 0061, so generated code retained
stale columns (input_modalities, supports_reasoning) causing runtime
"column does not exist" errors when adding models.

- Update 0001_init.up.sql to current schema (providers, provider_id,
  no tts tables, add provider_oauth_tokens)
- Use ALTER TABLE IF EXISTS in 0010/0041/0042 for backward compat
- Regenerate sqlc

* fix: guard all legacy migrations against fresh schema for CI compat

On fresh databases, 0001_init.up.sql creates providers/provider_id
(not llm_providers/llm_provider_id). Migrations 0013, 0041, 0046, 0047
referenced the old names without guards, causing CI migration failures.

- 0013: check llm_provider_id column exists before adding old constraint
- 0041: check llm_providers table exists before backfill/constraint DDL
- 0046: wrap CREATE TABLE in DO block with llm_providers existence check
- 0047: use ALTER TABLE IF EXISTS + DO block guard
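
A hedged sketch of the two guard styles listed above (object names are from the commits; the specific DDL is illustrative):

```sql
-- Style 1 (0047): ALTER TABLE IF EXISTS is a no-op when the legacy
-- table is absent on a fresh database.
ALTER TABLE IF EXISTS llm_providers
  ADD COLUMN IF NOT EXISTS enabled boolean DEFAULT true;

-- Style 2 (0013): DO block guarded on the legacy column's existence,
-- with EXECUTE so the old name is only parsed when the branch runs.
DO $$
BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'models' AND column_name = 'llm_provider_id'
  ) THEN
    EXECUTE 'ALTER TABLE models '
         || 'ADD CONSTRAINT models_llm_provider_id_fkey '
         || 'FOREIGN KEY (llm_provider_id) REFERENCES llm_providers(id)';
  END IF;
END $$;
```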
2026-04-08 01:03:44 +08:00

193 lines · 5.2 KiB · YAML

name: "memoh-dev"
services:
  postgres:
    image: postgres:18-alpine
    container_name: memoh-dev-postgres
    environment:
      POSTGRES_DB: memoh
      POSTGRES_USER: memoh
      POSTGRES_PASSWORD: memoh123
    volumes:
      - postgres_data:/var/lib/postgresql
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "${MEMOH_DEV_POSTGRES_PORT:-15432}:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U memoh"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped
  qdrant:
    image: qdrant/qdrant:latest
    container_name: memoh-dev-qdrant
    volumes:
      - qdrant_data:/qdrant/storage
    ports:
      - "${MEMOH_DEV_QDRANT_HTTP_PORT:-16333}:6333"
      - "${MEMOH_DEV_QDRANT_GRPC_PORT:-16334}:6334"
    healthcheck:
      test: ["CMD-SHELL", "timeout 5s bash -c ':> /dev/tcp/127.0.0.1/6333' || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped
  deps:
    build:
      context: ..
      dockerfile: devenv/Dockerfile.web
    container_name: memoh-dev-deps
    working_dir: /workspace
    command: ["pnpm", "install"]
    volumes:
      - ..:/workspace
      - node_modules:/workspace/node_modules
      - pnpm_store:/root/.local/share/pnpm/store
    restart: "no"
  migrate:
    build:
      context: ..
      dockerfile: devenv/Dockerfile.server
    container_name: memoh-dev-migrate
    working_dir: /workspace
    entrypoint: []
    command: ["go", "run", "./cmd/agent/main.go", "migrate", "up"]
    environment:
      CONFIG_PATH: /workspace/devenv/app.dev.toml
      GOFLAGS: -buildvcs=false
    volumes:
      - ..:/workspace
      - go_mod_cache:/go/pkg/mod
      - go_build_cache:/root/.cache/go-build
    depends_on:
      postgres:
        condition: service_healthy
      qdrant:
        condition: service_healthy
    restart: "no"
  server:
    build:
      context: ..
      dockerfile: devenv/Dockerfile.server
    container_name: memoh-dev-server
    working_dir: /workspace
    privileged: true
    pid: host
    command: ["air", "-c", ".air.toml"]
    environment:
      CONFIG_PATH: /workspace/devenv/app.dev.toml
      GOFLAGS: -buildvcs=false
    volumes:
      - ..:/workspace
      - go_mod_cache:/go/pkg/mod
      - go_build_cache:/root/.cache/go-build
      - containerd_data:/var/lib/containerd
      - server_cni_state:/var/lib/cni
      - memoh_data:/opt/memoh/data
      # Toolkit: run ./docker/toolkit/install.sh once before first use
      - ../.toolkit:/opt/memoh/runtime/toolkit
      - ../docker/toolkit/bin:/opt/memoh/runtime/toolkit/bin
      - ../cmd/bridge/template:/opt/memoh/runtime/templates
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "${MEMOH_DEV_SERVER_PORT:-18080}:8080"
      - "${MEMOH_DEV_OAUTH_PORT:-1455}:8080"
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://127.0.0.1:8080/health || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 20
    depends_on:
      migrate:
        condition: service_completed_successfully
      qdrant:
        condition: service_healthy
    restart: unless-stopped
  web:
    build:
      context: ..
      dockerfile: devenv/Dockerfile.web
    container_name: memoh-dev-web
    working_dir: /workspace/apps/web
    command: ["pnpm", "exec", "vite", "--host", "0.0.0.0", "--port", "8082"]
    environment:
      MEMOH_CONFIG_PATH: /workspace/devenv/app.dev.toml
      MEMOH_WEB_PROXY_TARGET: http://host.docker.internal:18080
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ..:/workspace
      - node_modules:/workspace/node_modules
    ports:
      - "${MEMOH_DEV_WEB_PORT:-18082}:8082"
    depends_on:
      deps:
        condition: service_completed_successfully
      server:
        condition: service_healthy
    restart: unless-stopped
  browser:
    build:
      context: ..
      dockerfile: devenv/Dockerfile.browser
      args:
        BROWSER_CORES: ${BROWSER_CORES:-chromium,firefox}
    container_name: memoh-dev-browser
    working_dir: /workspace/apps/browser
    command: ["bun", "run", "--watch", "src/index.ts"]
    environment:
      - MEMOH_CONFIG_PATH=/workspace/devenv/app.dev.toml
      - BROWSER_CORES=${BROWSER_CORES:-chromium,firefox}
    volumes:
      - ..:/workspace
      - node_modules:/workspace/node_modules
    ports:
      - "${MEMOH_DEV_BROWSER_PORT:-18083}:8083"
    depends_on:
      deps:
        condition: service_completed_successfully
      server:
        condition: service_healthy
    restart: unless-stopped
  # sparse:
  #   build:
  #     context: ..
  #     dockerfile: docker/Dockerfile.sparse
  #   container_name: memoh-dev-sparse
  #   ports:
  #     - "${MEMOH_DEV_SPARSE_PORT:-18085}:8085"
  #   healthcheck:
  #     test: ["CMD-SHELL", "python -c \"import urllib.request; urllib.request.urlopen('http://127.0.0.1:8085/health')\" || exit 1"]
  #     interval: 15s
  #     timeout: 10s
  #     start_period: 30s
  #     retries: 3
  #   restart: unless-stopped
volumes:
  postgres_data:
    driver: local
  qdrant_data:
    driver: local
  go_mod_cache:
    driver: local
  go_build_cache:
    driver: local
  containerd_data:
    driver: local
  memoh_data:
    driver: local
  server_cni_state:
    driver: local
  node_modules:
    driver: local
  pnpm_store:
    driver: local
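
Every host port above is parameterized as `${VAR:-default}`, so port clashes can be resolved without editing the file: Docker Compose substitutes these variables from the shell environment or from a `.env` file in the directory it is invoked from (or one passed with `--env-file`). A hypothetical override file — the values are examples, not project defaults:

```ini
# .env — hypothetical port overrides for a machine where the
# defaults (15432, 18080, 18082) are already taken.
MEMOH_DEV_POSTGRES_PORT=25432
MEMOH_DEV_SERVER_PORT=28080
MEMOH_DEV_WEB_PORT=28082
```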