* refactor(agent): replace TypeScript agent gateway with in-process Go agent using twilight-ai SDK
- Remove apps/agent (Bun/Elysia gateway), packages/agent (@memoh/agent),
internal/bun runtime manager, and all embedded agent/bun assets
- Add internal/agent package powered by twilight-ai SDK for LLM calls,
tool execution, streaming, sentinel logic, tag extraction, and prompts
- Integrate ToolGatewayService in-process for both built-in and user MCP
tools, eliminating HTTP round-trips to the old gateway
- Update resolver to convert between sdk.Message and ModelMessage at the
boundary (resolver_messages.go), keeping agent package free of
persistence concerns
- Prepend user message before storeRound since SDK only returns output
messages (assistant + tool)
- Clean up all Docker configs, TOML configs, nginx proxy, Dockerfile.agent,
and Go config structs related to the removed agent gateway
- Update cmd/agent and cmd/memoh entry points with setter-based
ToolGateway injection to avoid FX dependency cycles
* fix(web): move form declaration before computed properties that reference it
The `form` reactive object was declared after computed properties like
`selectedMemoryProvider` and `isSelectedMemoryProviderPersisted` that
reference it, causing a TDZ ReferenceError during setup.
* fix: prevent UTF-8 character corruption in streaming text output
StreamTagExtractor.Push() used byte-level string slicing to hold back
buffer tails for tag detection, which could split multi-byte UTF-8
characters. After json.Marshal replaced invalid bytes with U+FFFD,
the corruption became permanent — causing garbled CJK characters (�)
in agent responses.
Add safeUTF8SplitIndex() to back up split points to valid character
boundaries. Also fix byte-level truncation in command/formatter.go
and command/fs.go to use rune-aware slicing.
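The boundary back-off can be sketched as follows; this is a minimal, self-contained version of the idea, and the actual `safeUTF8SplitIndex` in the codebase may differ in detail:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// safeUTF8SplitIndex backs a byte-level split index up to the nearest
// rune boundary so that slicing s[:i] never cuts a multi-byte
// character in half.
func safeUTF8SplitIndex(s string, i int) int {
	if i >= len(s) {
		return len(s)
	}
	// Walk backwards past UTF-8 continuation bytes (0b10xxxxxx)
	// until we land on the first byte of a rune.
	for i > 0 && !utf8.RuneStart(s[i]) {
		i--
	}
	return i
}

func main() {
	s := "日本語" // each character is 3 bytes in UTF-8
	fmt.Println(safeUTF8SplitIndex(s, 4)) // 4 falls mid-character; backs up to 3
	fmt.Println(s[:safeUTF8SplitIndex(s, 4)]) // 日 — no U+FFFD replacement
}
```

Without the back-off, `s[:4]` would end in a dangling continuation byte, which `json.Marshal` would then replace with U+FFFD, making the corruption permanent.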
* fix: add agent error logging and fix Gemini tool schema validation
- Log agent stream errors in both SSE and WebSocket paths with bot/model context
- Fix send tool `attachments` parameter: empty `items` schema rejected by
Google Gemini API (INVALID_ARGUMENT), now specifies `{"type": "string"}`
- Upgrade twilight-ai to d898f0b (includes raw body in API error messages)
* chore(ci): remove agent gateway from Docker build and release pipelines
Agent gateway has been replaced by in-process Go agent; remove the
obsolete Docker image matrix entry, Bun/UPX CI steps, and agent-binary
build logic from the release script.
* fix: preserve attachment filename, metadata, and container path through persistence
- Add `name` column to `bot_history_message_assets` (migration 0034) to
persist original filenames across page refreshes.
- Add `metadata` JSONB column (migration 0035) to store source_path,
source_url, and other context alongside each asset.
- Update SQL queries, sqlc-generated code, and all Go types (MessageAsset,
AssetRef, OutboundAssetRef, FileAttachment) to carry name and metadata
through the full lifecycle.
- Extract filenames from path/URL in AttachmentsResolver before clearing
raw paths; enrich streaming event metadata with name, source_path, and
source_url in both the WebSocket and channel inbound ingestion paths.
- Implement `LinkAssets` on message service and `LinkOutboundAssets` on
flow resolver so WebSocket-streamed bot attachments are persisted to the
correct assistant message after streaming completes.
- Frontend: update MessageAsset type with metadata field, pass metadata
through to attachment items, and reorder attachment-block.vue template
so container files (identified by metadata.source_path) open in the
sidebar file manager instead of triggering a download.
* refactor(agent): decouple built-in tools from MCP, load via ToolProvider interface
Migrate all 13 built-in tool providers from internal/mcp/providers/ to
internal/agent/tools/ using the twilight-ai sdk.Tool structure. The agent
now loads tools through a ToolProvider interface instead of the MCP
ToolGatewayService, which is simplified to only manage external federation
sources. This enables selective tool loading and removes the coupling
between business tools and the MCP protocol layer.
* refactor(flow): split monolithic resolver.go into focused modules
Break the 1959-line resolver.go into 12 files organized by concern:
- resolver.go: core orchestration (Resolver struct, resolve, Chat, prepareRunConfig)
- resolver_stream.go: streaming (StreamChat, StreamChatWS, tryStoreStream)
- resolver_trigger.go: schedule/heartbeat triggers
- resolver_attachments.go: attachment routing, inlining, encoding
- resolver_history.go: message loading, deduplication, token trimming
- resolver_store.go: persistence (storeRound, storeMessages, asset linking)
- resolver_memory.go: memory provider integration
- resolver_model_selection.go: model selection and candidate matching
- resolver_identity.go: display name and channel identity resolution
- resolver_settings.go: bot settings, loop detection, inbox
- user_header.go: YAML front-matter formatting
- resolver_util.go: shared utilities (sanitize, normalize, dedup, UUID)
* fix(agent): enable Anthropic extended thinking by passing ReasoningConfig to provider
Anthropic's thinking requires WithThinking() at provider creation time,
unlike OpenAI which uses per-request ReasoningEffort. The config was
never wired through, so Claude models could not trigger thinking.
* refactor(agent): extract prompts into embedded markdown templates
Move inline prompt strings from prompt.go into separate .md files under
internal/agent/prompts/, using {{key}} placeholders and a simple render
engine. Remove obsolete SystemPromptParams fields (Language,
MaxContextLoadTime, Channels, CurrentChannel) and their call-site usage.
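The {{key}} substitution can be sketched as a minimal renderer; this is an assumption about the approach, and the real engine in internal/agent/prompts may handle edge cases (missing keys, escaping) differently:

```go
package main

import (
	"fmt"
	"strings"
)

// render substitutes {{key}} placeholders in a template with the
// supplied values. Unknown placeholders are left untouched.
func render(tmpl string, vars map[string]string) string {
	for k, v := range vars {
		tmpl = strings.ReplaceAll(tmpl, "{{"+k+"}}", v)
	}
	return tmpl
}

func main() {
	out := render("Hello {{name}}, today is {{day}}.", map[string]string{
		"name": "Memoh",
		"day":  "Friday",
	})
	fmt.Println(out) // Hello Memoh, today is Friday.
}
```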
* fix: lint
Twilight AI API Reference
This file is the detailed API companion for skill/SKILL.md.
Use it when the task needs exact package names, exported types, function signatures, provider options, or stream/event shapes.
Package Map
- github.com/memohai/twilight-ai/sdk
- github.com/memohai/twilight-ai/provider/openai/completions
- github.com/memohai/twilight-ai/provider/openai/responses
- github.com/memohai/twilight-ai/provider/anthropic/messages
- github.com/memohai/twilight-ai/provider/google/generativeai
- github.com/memohai/twilight-ai/provider/openai/embedding
- github.com/memohai/twilight-ai/provider/google/embedding
Package sdk
Client And Top-Level Helpers
type Client struct{}
func NewClient() *Client
func (c *Client) GenerateText(ctx context.Context, options ...GenerateOption) (string, error)
func (c *Client) GenerateTextResult(ctx context.Context, options ...GenerateOption) (*GenerateResult, error)
func (c *Client) StreamText(ctx context.Context, options ...GenerateOption) (*StreamResult, error)
func (c *Client) Embed(ctx context.Context, value string, options ...EmbedOption) ([]float64, error)
func (c *Client) EmbedMany(ctx context.Context, values []string, options ...EmbedOption) (*EmbedResult, error)
func GenerateText(ctx context.Context, options ...GenerateOption) (string, error)
func GenerateTextResult(ctx context.Context, options ...GenerateOption) (*GenerateResult, error)
func StreamText(ctx context.Context, options ...GenerateOption) (*StreamResult, error)
func Embed(ctx context.Context, value string, options ...EmbedOption) ([]float64, error)
func EmbedMany(ctx context.Context, values []string, options ...EmbedOption) (*EmbedResult, error)
Provider Contracts
type Provider interface {
Name() string
ListModels(ctx context.Context) ([]Model, error)
Test(ctx context.Context) *ProviderTestResult
TestModel(ctx context.Context, modelID string) (*ModelTestResult, error)
DoGenerate(ctx context.Context, params GenerateParams) (*GenerateResult, error)
DoStream(ctx context.Context, params GenerateParams) (*StreamResult, error)
}
type ProviderStatus string
const (
ProviderStatusOK ProviderStatus = "ok"
ProviderStatusUnhealthy ProviderStatus = "unhealthy"
ProviderStatusUnreachable ProviderStatus = "unreachable"
)
type ProviderTestResult struct {
Status ProviderStatus
Message string
Error error
}
type ModelTestResult struct {
Supported bool
Message string
}
Models
type ModelType string
const ModelTypeChat ModelType = "chat"
type Model struct {
ID string
DisplayName string
Provider Provider
Type ModelType
MaxTokens int
}
func (m *Model) Test(ctx context.Context) (*ModelTestResult, error)
Messages
type MessageRole string
const (
MessageRoleUser MessageRole = "user"
MessageRoleAssistant MessageRole = "assistant"
MessageRoleSystem MessageRole = "system"
MessageRoleTool MessageRole = "tool"
)
type MessagePartType string
const (
MessagePartTypeText MessagePartType = "text"
MessagePartTypeReasoning MessagePartType = "reasoning"
MessagePartTypeImage MessagePartType = "image"
MessagePartTypeFile MessagePartType = "file"
MessagePartTypeToolCall MessagePartType = "tool-call"
MessagePartTypeToolResult MessagePartType = "tool-result"
)
type MessagePart interface {
PartType() MessagePartType
}
type TextPart struct {
Text string
}
type ReasoningPart struct {
Text string
Signature string
}
type ImagePart struct {
Image string
MediaType string
}
type FilePart struct {
Data string
MediaType string
Filename string
}
type ToolCallPart struct {
ToolCallID string
ToolName string
Input any
}
type ToolResultPart struct {
ToolCallID string
ToolName string
Result any
IsError bool
}
type Message struct {
Role MessageRole
Content []MessagePart
}
func UserMessage(text string, extra ...MessagePart) Message
func SystemMessage(text string) Message
func AssistantMessage(text string) Message
func ToolMessage(results ...ToolResultPart) Message
Notes:
- `UserMessage` accepts a text string plus optional extra parts such as `ImagePart`.
- `Message` supports JSON marshal and unmarshal with type discrimination.
Generation
type FinishReason string
const (
FinishReasonStop FinishReason = "stop"
FinishReasonLength FinishReason = "length"
FinishReasonContentFilter FinishReason = "content-filter"
FinishReasonToolCalls FinishReason = "tool-calls"
FinishReasonError FinishReason = "error"
FinishReasonOther FinishReason = "other"
FinishReasonUnknown FinishReason = "unknown"
)
type ResponseFormatType string
const (
ResponseFormatText ResponseFormatType = "text"
ResponseFormatJSONObject ResponseFormatType = "json_object"
ResponseFormatJSONSchema ResponseFormatType = "json_schema"
)
type ResponseFormat struct {
Type ResponseFormatType
JSONSchema any
}
type GenerateParams struct {
Model *Model
System string
Messages []Message
Tools []Tool
ToolChoice any
ResponseFormat *ResponseFormat
Temperature *float64
TopP *float64
MaxTokens *int
StopSequences []string
FrequencyPenalty *float64
PresencePenalty *float64
Seed *int
ReasoningEffort *string
}
type StepResult struct {
Text string
Reasoning string
FinishReason FinishReason
RawFinishReason string
Usage Usage
ToolCalls []ToolCall
ToolResults []ToolResult
Response ResponseMetadata
Messages []Message
}
type GenerateResult struct {
Text string
Reasoning string
FinishReason FinishReason
RawFinishReason string
Usage Usage
Sources []Source
Files []GeneratedFile
ToolCalls []ToolCall
ToolResults []ToolResult
Response ResponseMetadata
Steps []StepResult
Messages []Message
}
Generate Options
type GenerateOption func(*generateConfig)
func WithModel(model *Model) GenerateOption
func WithMessages(messages []Message) GenerateOption
func WithSystem(text string) GenerateOption
func WithTools(tools []Tool) GenerateOption
func WithToolChoice(choice any) GenerateOption
func WithResponseFormat(rf ResponseFormat) GenerateOption
func WithTemperature(t float64) GenerateOption
func WithTopP(topP float64) GenerateOption
func WithMaxTokens(n int) GenerateOption
func WithStopSequences(s []string) GenerateOption
func WithFrequencyPenalty(penalty float64) GenerateOption
func WithPresencePenalty(penalty float64) GenerateOption
func WithSeed(s int) GenerateOption
func WithReasoningEffort(effort string) GenerateOption
func WithMaxSteps(n int) GenerateOption
func WithOnFinish(fn func(*GenerateResult)) GenerateOption
func WithOnStep(fn func(*StepResult) *GenerateParams) GenerateOption
func WithPrepareStep(fn func(*GenerateParams) *GenerateParams) GenerateOption
func WithApprovalHandler(fn func(ctx context.Context, call ToolCall) (bool, error)) GenerateOption
Behavior notes:
- `WithMaxSteps(0)` is the default single-call mode.
- `WithMaxSteps(N)` enables automatic tool execution for up to N LLM calls.
- `WithMaxSteps(-1)` means an unlimited loop until the model stops requesting tools.
- `WithToolChoice` accepts `"auto"`, `"none"`, or `"required"`.
Tools
type ToolExecuteFunc func(ctx *ToolExecContext, input any) (any, error)
type ToolExecContext struct {
context.Context
ToolCallID string
ToolName string
SendProgress func(content any)
}
type Tool struct {
Name string
Description string
Parameters any
Execute ToolExecuteFunc
RequireApproval bool
}
func NewTool[T any](
name, description string,
execute func(ctx *ToolExecContext, input T) (any, error),
) Tool
type ToolCall struct {
ToolCallID string
ToolName string
Input any
}
type ToolResult struct {
ToolCallID string
ToolName string
Input any
Output any
IsError bool
}
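A self-contained sketch of what the generic parameter in `NewTool` buys: the constructor can decode raw tool-call input into `T` before invoking the typed handler. The stand-in `Tool` type below is trimmed for the example; it is not the sdk's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tool is a trimmed stand-in for sdk.Tool, kept minimal so this
// sketch compiles on its own.
type Tool struct {
	Name        string
	Description string
	Execute     func(input any) (any, error)
}

// NewTool wraps a typed handler: the raw input is re-marshalled and
// decoded into T before the handler runs, so Execute callers can pass
// generic JSON-shaped values.
func NewTool[T any](name, description string, execute func(input T) (any, error)) Tool {
	return Tool{
		Name:        name,
		Description: description,
		Execute: func(input any) (any, error) {
			raw, err := json.Marshal(input)
			if err != nil {
				return nil, err
			}
			var typed T
			if err := json.Unmarshal(raw, &typed); err != nil {
				return nil, err
			}
			return execute(typed)
		},
	}
}

// EchoInput is a hypothetical tool input type for the example.
type EchoInput struct {
	Text string `json:"text"`
}

func main() {
	echo := NewTool("echo", "echo the input back",
		func(in EchoInput) (any, error) { return "echo: " + in.Text, nil })
	out, _ := echo.Execute(map[string]any{"text": "hi"})
	fmt.Println(out) // echo: hi
}
```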
MCP
type MCPTransportType string
const (
MCPTransportHTTP MCPTransportType = "http"
MCPTransportSSE MCPTransportType = "sse"
)
type MCPClientConfig struct {
Type MCPTransportType
URL string
Headers map[string]string
Transport mcp.Transport
HTTPClient *http.Client
Name string
Version string
}
type MCPClient struct { /* unexported fields */ }
func CreateMCPClient(ctx context.Context, config *MCPClientConfig) (*MCPClient, error)
func (c *MCPClient) Tools(ctx context.Context) ([]Tool, error)
func (c *MCPClient) Close() error
Usage notes:
- `MCPTransportHTTP` is the default built-in transport and uses the official MCP Go SDK's streamable HTTP client transport.
- `MCPTransportSSE` uses the official MCP Go SDK's SSE client transport.
- For stdio or other custom transports, create the transport with github.com/modelcontextprotocol/go-sdk/mcp and pass it through `Transport`.
- `Tools(ctx)` converts remote MCP tools into ordinary `sdk.Tool` values suitable for `WithTools(...)`.
- MCP tool schemas are converted from MCP `InputSchema` into `*jsonschema.Schema`.
- MCP execution wrappers call `tools/call` and return concatenated text content to the model.
Streaming
type StreamPartType string
const (
StreamPartTypeTextStart StreamPartType = "text-start"
StreamPartTypeTextDelta StreamPartType = "text-delta"
StreamPartTypeTextEnd StreamPartType = "text-end"
StreamPartTypeReasoningStart StreamPartType = "reasoning-start"
StreamPartTypeReasoningDelta StreamPartType = "reasoning-delta"
StreamPartTypeReasoningEnd StreamPartType = "reasoning-end"
StreamPartTypeToolInputStart StreamPartType = "tool-input-start"
StreamPartTypeToolInputDelta StreamPartType = "tool-input-delta"
StreamPartTypeToolInputEnd StreamPartType = "tool-input-end"
StreamPartTypeToolCall StreamPartType = "tool-call"
StreamPartTypeToolResult StreamPartType = "tool-result"
StreamPartTypeToolError StreamPartType = "tool-error"
StreamPartTypeToolOutputDenied StreamPartType = "tool-output-denied"
StreamPartTypeToolApprovalRequest StreamPartType = "tool-approval-request"
StreamPartTypeToolProgress StreamPartType = "tool-progress"
StreamPartTypeSource StreamPartType = "source"
StreamPartTypeFile StreamPartType = "file"
StreamPartTypeStart StreamPartType = "start"
StreamPartTypeFinish StreamPartType = "finish"
StreamPartTypeStartStep StreamPartType = "start-step"
StreamPartTypeFinishStep StreamPartType = "finish-step"
StreamPartTypeError StreamPartType = "error"
StreamPartTypeAbort StreamPartType = "abort"
StreamPartTypeRaw StreamPartType = "raw"
)
type StreamPart interface {
Type() StreamPartType
}
type TextStartPart struct {
ID string
ProviderMetadata map[string]any
}
type TextDeltaPart struct {
ID string
Text string
ProviderMetadata map[string]any
}
type TextEndPart struct {
ID string
ProviderMetadata map[string]any
}
type ReasoningStartPart struct {
ID string
ProviderMetadata map[string]any
}
type ReasoningDeltaPart struct {
ID string
Text string
ProviderMetadata map[string]any
}
type ReasoningEndPart struct {
ID string
ProviderMetadata map[string]any
}
type ToolInputStartPart struct {
ID string
ToolName string
ProviderMetadata map[string]any
}
type ToolInputDeltaPart struct {
ID string
Delta string
ProviderMetadata map[string]any
}
type ToolInputEndPart struct {
ID string
ProviderMetadata map[string]any
}
type StreamToolCallPart struct {
ToolCallID string
ToolName string
Input any
}
type StreamToolResultPart struct {
ToolCallID string
ToolName string
Input any
Output any
}
type StreamToolErrorPart struct {
ToolCallID string
ToolName string
Error error
}
type ToolOutputDeniedPart struct {
ToolCallID string
ToolName string
}
type ToolApprovalRequestPart struct {
ApprovalID string
ToolCallID string
ToolName string
Input any
}
type ToolProgressPart struct {
ToolCallID string
ToolName string
Content any
}
type StreamSourcePart struct {
Source Source
}
type StreamFilePart struct {
File GeneratedFile
}
type StartPart struct{}
type FinishPart struct {
FinishReason FinishReason
RawFinishReason string
TotalUsage Usage
}
type StartStepPart struct{}
type FinishStepPart struct {
FinishReason FinishReason
RawFinishReason string
Usage Usage
Response ResponseMetadata
ProviderMetadata map[string]any
}
type ErrorPart struct {
Error error
}
type AbortPart struct {
Reason string
}
type RawPart struct {
RawValue any
}
type StreamResult struct {
Stream <-chan StreamPart
Steps []StepResult
Messages []Message
}
func (sr *StreamResult) Text() (string, error)
func (sr *StreamResult) ToResult() (*GenerateResult, error)
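Consuming `StreamResult.Stream` is typically a type switch over the part types above. The sketch below uses minimal stand-in types so it runs on its own; real code would import the sdk and switch on `sdk.TextDeltaPart`, `sdk.FinishPart`, and friends:

```go
package main

import "fmt"

// Stand-in types mirroring the sdk stream-part shapes listed above,
// kept minimal so this sketch is self-contained.
type StreamPart interface{ Type() string }

type TextDeltaPart struct{ ID, Text string }

func (TextDeltaPart) Type() string { return "text-delta" }

type FinishPart struct{ FinishReason string }

func (FinishPart) Type() string { return "finish" }

// drainStream accumulates text deltas and records the finish reason —
// the typical shape of a stream consumer loop.
func drainStream(stream <-chan StreamPart) (text, finish string) {
	for part := range stream {
		switch p := part.(type) {
		case TextDeltaPart:
			text += p.Text // append each streamed chunk
		case FinishPart:
			finish = p.FinishReason
		}
	}
	return text, finish
}

func main() {
	stream := make(chan StreamPart, 3)
	stream <- TextDeltaPart{ID: "t1", Text: "Hel"}
	stream <- TextDeltaPart{ID: "t1", Text: "lo"}
	stream <- FinishPart{FinishReason: "stop"}
	close(stream)

	text, finish := drainStream(stream)
	fmt.Println(text, finish) // Hello stop
}
```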
Usage, Sources, Files, Response Metadata
type Usage struct {
InputTokens int
OutputTokens int
TotalTokens int
ReasoningTokens int
CachedInputTokens int
InputTokenDetails InputTokenDetail
OutputTokenDetails OutputTokenDetail
}
type InputTokenDetail struct {
CacheReadTokens int
CacheCreationTokens int
}
type OutputTokenDetail struct {
TextTokens int
ReasoningTokens int
AudioTokens int
}
type Source struct {
SourceType string
ID string
URL string
Title string
ProviderMetadata map[string]any
}
type GeneratedFile struct {
Data string
MediaType string
}
type ResponseMetadata struct {
ID string
ModelID string
Timestamp time.Time
Headers map[string]string
}
Embeddings
type EmbeddingProvider interface {
DoEmbed(ctx context.Context, params EmbedParams) (*EmbedResult, error)
}
type EmbeddingModel struct {
ID string
Provider EmbeddingProvider
MaxEmbeddingsPerCall int
}
type EmbedParams struct {
Model *EmbeddingModel
Values []string
Dimensions *int
}
type EmbedResult struct {
Embeddings [][]float64
Usage EmbeddingUsage
}
type EmbeddingUsage struct {
Tokens int
}
type EmbedOption func(*embedConfig)
func WithEmbeddingModel(model *EmbeddingModel) EmbedOption
func WithDimensions(d int) EmbedOption
Package provider/openai/completions
Implements the OpenAI Chat Completions API and OpenAI-compatible /chat/completions backends.
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider
func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
Default option values:
- `WithBaseURL`: `https://api.openai.com/v1`
- `WithHTTPClient`: `&http.Client{}`
Discovery endpoints:
- `ListModels`: GET /models
- `Test`: GET /models?limit=1
- `TestModel`: GET /models/{id}
Package provider/openai/responses
Implements the OpenAI Responses API with reasoning summaries, annotations, and flat input mapping.
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider
func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
Default option values:
- `WithBaseURL`: `https://api.openai.com/v1`
- `WithHTTPClient`: `&http.Client{}`
Discovery endpoints:
- `ListModels`: GET /models
- `Test`: GET /models?limit=1
- `TestModel`: GET /models/{id}
Responses-specific behavior:
- assistant reasoning maps to `GenerateResult.Reasoning`
- URL citation annotations map to `GenerateResult.Sources`
- function-call outputs map to tool-call and tool-result structures
Package provider/anthropic/messages
Implements the Anthropic Messages API.
type ThinkingConfig struct {
Type string
BudgetTokens int
}
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithAuthToken(token string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func WithHeaders(headers map[string]string) Option
func WithThinking(cfg ThinkingConfig) Option
func New(options ...Option) *Provider
func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
Default option values:
WithBaseURL:https://api.anthropic.com/v1- default API version header:
2023-06-01 WithHTTPClient:&http.Client{}
Thinking config notes:
- `Type` supports `"enabled"`, `"adaptive"`, or `"disabled"`
- `BudgetTokens` is required when `Type == "enabled"`
Discovery endpoints:
- `ListModels`: GET /v1/models
- `Test`: GET /v1/models?limit=1
- `TestModel`: GET /v1/models/{id}
Package provider/google/generativeai
Implements the Google Generative AI API for Gemini chat models.
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider
func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
Default option values:
- `WithBaseURL`: `https://generativelanguage.googleapis.com/v1beta`
- `WithHTTPClient`: `&http.Client{}`
Model ID rules:
- plain names like `gemini-2.5-flash` are accepted
- full paths like `publishers/google/models/gemini-2.5-flash` are also accepted
Discovery endpoints:
- `ListModels`: GET /v1beta/models
- `Test`: GET /v1beta/models?pageSize=1
- `TestModel`: GET /v1beta/models/{id}
Package provider/openai/embedding
Implements the OpenAI Embeddings API.
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider
func (p *Provider) EmbeddingModel(id string) *sdk.EmbeddingModel
func (p *Provider) DoEmbed(ctx context.Context, params sdk.EmbedParams) (*sdk.EmbedResult, error)
Default option values:
- `WithBaseURL`: `https://api.openai.com/v1`
- `WithHTTPClient`: `&http.Client{}`
Behavior notes:
- `EmbeddingModel(id)` returns a model with `MaxEmbeddingsPerCall: 2048`
- `DoEmbed` calls POST /embeddings with `encoding_format: "float"`
Package provider/google/embedding
Implements the Google embedding API.
type Provider struct { /* unexported fields */ }
type Option func(*Provider)
func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func WithTaskType(taskType string) Option
func New(options ...Option) *Provider
func (p *Provider) EmbeddingModel(id string) *sdk.EmbeddingModel
func (p *Provider) DoEmbed(ctx context.Context, params sdk.EmbedParams) (*sdk.EmbedResult, error)
Default option values:
- `WithBaseURL`: `https://generativelanguage.googleapis.com/v1beta`
- `WithHTTPClient`: `&http.Client{}`
Task type values:
- RETRIEVAL_QUERY
- RETRIEVAL_DOCUMENT
- SEMANTIC_SIMILARITY
- CLASSIFICATION
- CLUSTERING
- QUESTION_ANSWERING
- FACT_VERIFICATION
- CODE_RETRIEVAL_QUERY
Behavior notes:
- `EmbeddingModel(id)` returns a model with `MaxEmbeddingsPerCall: 2048`
- single-value embedding uses `embedContent`
- multi-value embedding uses `batchEmbedContents`
Selection Cheatsheet
- Broad OpenAI-compatible chat API: `provider/openai/completions`
- OpenAI Responses features such as reasoning summaries or citation annotations: `provider/openai/responses`
- Claude and extended thinking: `provider/anthropic/messages`
- Gemini chat and tool calling: `provider/google/generativeai`
- OpenAI-compatible embeddings: `provider/openai/embedding`
- Gemini embeddings with task-type tuning: `provider/google/embedding`