mirror of
https://github.com/memohai/Memoh.git
synced 2026-04-25 07:00:48 +09:00
chore: add .agents and twilight ai skill
---
name: twilight-ai
description: Assist with development in the Twilight AI Go SDK. Use when working in this repository, adding or updating providers, embeddings, tool calling, streaming, examples, or docs for Twilight AI.
---

# Twilight AI

## When To Use

Use this skill when the task involves `twilight-ai`, especially:

- implementing or refactoring SDK APIs in `sdk/`
- adding or updating providers under `provider/`
- working on `GenerateText`, `GenerateTextResult`, `StreamText`, `Embed`, or `EmbedMany`
- adding tool-calling, streaming, reasoning, or embedding support
- writing examples, docs, or usage guidance for this library

## Project Snapshot

Twilight AI is a lightweight Go AI SDK with a provider-agnostic core API.

- Text generation: `sdk.GenerateText`, `sdk.GenerateTextResult`, `sdk.StreamText`
- Embeddings: `sdk.Embed`, `sdk.EmbedMany`
- Tool calling: `sdk.Tool`, `sdk.NewTool[T]`, `WithMaxSteps`, approval flow
- Streaming: typed `StreamPart` events over Go channels
- Current providers:
  - `provider/openai/completions`
  - `provider/openai/responses`
  - `provider/anthropic/messages`
  - `provider/google/generativeai`
  - `provider/openai/embedding`
  - `provider/google/embedding`

## Default Mental Model

Prefer the high-level SDK API first, then drop to provider details only when needed.

- `sdk.Model` binds a chat model to a `sdk.Provider`
- `sdk.EmbeddingModel` binds an embedding model to an `sdk.EmbeddingProvider`
- The client orchestrates tool loops, callbacks, approvals, and streaming lifecycle
- Providers handle backend-specific HTTP, request mapping, response parsing, and SSE translation

## Core API Guidance

Choose the narrowest API that matches the task:

- Need only final text: use `sdk.GenerateText`
- Need usage, finish reason, steps, sources, files, or tool details: use `sdk.GenerateTextResult`
- Need live output: use `sdk.StreamText`
- Need one vector: use `sdk.Embed`
- Need multiple vectors or embedding token usage: use `sdk.EmbedMany`

If the task introduces examples or docs, prefer simple end-to-end snippets that follow this order:

1. construct provider
2. get model
3. call SDK API
4. handle error

## Provider Selection Rules

- Use `openai/completions` for broad OpenAI-compatible support such as DeepSeek, Groq, Ollama, Azure-style compatible endpoints, and generic `/chat/completions` backends.
- Use `openai/responses` when the task needs OpenAI Responses API features such as first-class reasoning models, reasoning summaries, URL citation annotations, or flat input mapping.
- Use `anthropic/messages` for Claude and Anthropic extended thinking via `WithThinking`.
- Use `google/generativeai` for Gemini chat, tool calling, vision, streaming, and Gemini reasoning.
- Use `openai/embedding` or `google/embedding` for embeddings. Keep embedding-provider work separate from chat-provider work.

## Implementation Rules

### Chat Providers

If adding or changing a chat provider, preserve the `sdk.Provider` contract:

- `Name()`
- `ListModels(ctx)`
- `Test(ctx)`
- `TestModel(ctx, modelID)`
- `DoGenerate(ctx, params)`
- `DoStream(ctx, params)`

Keep provider responsibilities focused:

- translate SDK messages/options into backend request format
- parse backend responses into `sdk.GenerateResult`
- map backend streaming events into typed `sdk.StreamPart` values
- report usage, finish reasons, reasoning, tool calls, sources, and files when supported

### Embedding Providers

Embedding providers are separate from chat providers. Use `sdk.EmbeddingProvider` and return an `sdk.EmbeddingModel` via `EmbeddingModel(id)`.

When updating embeddings:

- keep `sdk.Embed` for single-string convenience
- keep `sdk.EmbedMany` for batched requests
- preserve `Usage.Tokens`
- only expose dimensions/task-type behavior when the backend supports it

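The batching rule behind `sdk.EmbedMany` can be sketched with a small helper that splits inputs by a per-call limit, the way an implementation might respect an `EmbeddingModel`'s `MaxEmbeddingsPerCall`. The helper name and structure here are illustrative, not SDK API:

```go
package main

import "fmt"

// chunkValues splits inputs into batches no larger than maxPerCall,
// mirroring how a batched embedding call might respect a provider's
// per-request limit. Illustrative helper, not part of the SDK.
func chunkValues(values []string, maxPerCall int) [][]string {
	if maxPerCall <= 0 {
		// No limit: one batch with everything.
		return [][]string{values}
	}
	var batches [][]string
	for len(values) > maxPerCall {
		batches = append(batches, values[:maxPerCall])
		values = values[maxPerCall:]
	}
	if len(values) > 0 {
		batches = append(batches, values)
	}
	return batches
}

func main() {
	batches := chunkValues([]string{"a", "b", "c", "d", "e"}, 2)
	fmt.Println(len(batches)) // 3
}
```

Each batch would map to one backend request, with `Usage.Tokens` summed across batches.
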
### Tool Calling

Prefer `sdk.NewTool[T]` for new tool examples and integrations. It gives typed input and inferred JSON Schema.

Use these defaults unless the task requires something else:

- `WithToolChoice("auto")` for normal use
- `WithMaxSteps(0)` for inspection-only tool calls
- `WithMaxSteps(N)` for automatic execution loops
- `RequireApproval: true` only for sensitive side effects

When streaming with tools, ensure the implementation can emit:

- tool input construction parts
- tool execution parts
- progress updates
- denial/error events when applicable

### Streaming

Twilight AI streaming is channel-first and type-safe. Prefer type switches over loosely typed event parsing.

Important expectations:

- `StreamText` returns `*sdk.StreamResult`
- `sr.Stream` must be consumed before relying on `sr.Steps` or `sr.Messages`
- `Text()` and `ToResult()` are the convenience paths when callers do not want manual event handling

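The channel-plus-type-switch pattern those expectations describe looks like this in plain Go. The part types below are minimal local stand-ins for `sdk.StreamPart` values such as `TextDeltaPart` and `FinishPart` (the real types are listed in reference.md):

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal stand-ins for the SDK's typed stream parts.
type streamPart interface{ partType() string }

type textDeltaPart struct{ Text string }
type finishPart struct{ FinishReason string }

func (textDeltaPart) partType() string { return "text-delta" }
func (finishPart) partType() string    { return "finish" }

// consume drains a stream the way callers drain sr.Stream:
// one type switch per event, accumulating text until finish.
func consume(stream <-chan streamPart) (text, finishReason string) {
	var b strings.Builder
	for part := range stream {
		switch p := part.(type) {
		case textDeltaPart:
			b.WriteString(p.Text)
		case finishPart:
			finishReason = p.FinishReason
		}
	}
	return b.String(), finishReason
}

func main() {
	ch := make(chan streamPart, 3)
	ch <- textDeltaPart{Text: "Hello, "}
	ch <- textDeltaPart{Text: "world"}
	ch <- finishPart{FinishReason: "stop"}
	close(ch)
	text, reason := consume(ch)
	fmt.Println(text, reason) // Hello, world stop
}
```

Because each case binds a concrete struct, handlers get field access without string-keyed map lookups, which is the point of the typed-event design.
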
### Messages And Results

Preserve the SDK message model and avoid backend-specific shapes leaking into public usage.

- user, assistant, system, and tool messages should stay in SDK types
- support rich parts where relevant: text, image, file, reasoning, tool call, tool result
- keep finish reason mapping aligned with SDK constants such as `stop`, `length`, `content-filter`, and `tool-calls`

## Common Task Patterns

### Add A New Usage Example

Use this structure:

1. pick the correct provider package
2. create provider with explicit options
3. create model via `ChatModel` or `EmbeddingModel`
4. call the top-level `sdk` function
5. show minimal but idiomatic result handling

### Add Or Update A Provider Feature

Check all affected layers:

1. request mapping
2. non-streaming response mapping
3. streaming event mapping
4. finish-reason and usage mapping
5. reasoning/tool/source/file support if the backend exposes them
6. model discovery and provider health checks if endpoints exist

### Add A Custom Provider

Use the built-in providers as the template. A custom provider should feel identical to existing ones from the caller's perspective.

Minimum behavior:

1. return a provider-bound model from `ChatModel`
2. implement discovery and health-check methods
3. support `DoGenerate`
4. support `DoStream` with correct lifecycle parts

## Documentation Rules

When writing Twilight AI docs or README content:

- prefer provider-agnostic phrasing first, provider-specific details second
- use Go examples, not pseudocode, unless explaining an interface contract
- keep examples small and runnable in spirit
- mention exact package paths for imports
- explain when to choose Completions vs Responses when OpenAI is involved
- keep embeddings, tool calling, and streaming as separate concerns unless the example truly combines them

## Terminology

Use these terms consistently:

- Provider: backend implementation for chat generation
- Embedding provider: backend implementation for embeddings
- Model: provider-bound chat model
- Embedding model: provider-bound embedding model
- Tool calling: model requests a tool invocation
- Multi-step execution: automatic tool loop controlled by `WithMaxSteps`
- Stream part: a typed event from `StreamText`

## Quick Checklist

Before finishing work in this repo, verify:

- the chosen provider package matches the intended backend capabilities
- chat and embedding concerns are not mixed accidentally
- public examples use top-level `sdk` APIs unless lower-level behavior is the point
- streaming logic uses typed `StreamPart` handling
- tool-calling changes cover both inspection mode and multi-step mode when relevant
- provider work includes health checks or model discovery behavior if the backend supports them

## Additional Resources

- For exported APIs, signatures, provider options, and stream/event types, see [reference.md](reference.md)

# Twilight AI API Reference

This file is the detailed API companion for `skill/SKILL.md`.

Use it when the task needs exact package names, exported types, function signatures, provider options, or stream/event shapes.

## Package Map

- `github.com/memohai/twilight-ai/sdk`
- `github.com/memohai/twilight-ai/provider/openai/completions`
- `github.com/memohai/twilight-ai/provider/openai/responses`
- `github.com/memohai/twilight-ai/provider/anthropic/messages`
- `github.com/memohai/twilight-ai/provider/google/generativeai`
- `github.com/memohai/twilight-ai/provider/openai/embedding`
- `github.com/memohai/twilight-ai/provider/google/embedding`

## Package `sdk`

### Client And Top-Level Helpers

```go
type Client struct{}

func NewClient() *Client

func (c *Client) GenerateText(ctx context.Context, options ...GenerateOption) (string, error)
func (c *Client) GenerateTextResult(ctx context.Context, options ...GenerateOption) (*GenerateResult, error)
func (c *Client) StreamText(ctx context.Context, options ...GenerateOption) (*StreamResult, error)
func (c *Client) Embed(ctx context.Context, value string, options ...EmbedOption) ([]float64, error)
func (c *Client) EmbedMany(ctx context.Context, values []string, options ...EmbedOption) (*EmbedResult, error)

func GenerateText(ctx context.Context, options ...GenerateOption) (string, error)
func GenerateTextResult(ctx context.Context, options ...GenerateOption) (*GenerateResult, error)
func StreamText(ctx context.Context, options ...GenerateOption) (*StreamResult, error)
func Embed(ctx context.Context, value string, options ...EmbedOption) ([]float64, error)
func EmbedMany(ctx context.Context, values []string, options ...EmbedOption) (*EmbedResult, error)
```

### Provider Contracts

```go
type Provider interface {
    Name() string
    ListModels(ctx context.Context) ([]Model, error)
    Test(ctx context.Context) *ProviderTestResult
    TestModel(ctx context.Context, modelID string) (*ModelTestResult, error)
    DoGenerate(ctx context.Context, params GenerateParams) (*GenerateResult, error)
    DoStream(ctx context.Context, params GenerateParams) (*StreamResult, error)
}

type ProviderStatus string

const (
    ProviderStatusOK          ProviderStatus = "ok"
    ProviderStatusUnhealthy   ProviderStatus = "unhealthy"
    ProviderStatusUnreachable ProviderStatus = "unreachable"
)

type ProviderTestResult struct {
    Status  ProviderStatus
    Message string
    Error   error
}

type ModelTestResult struct {
    Supported bool
    Message   string
}
```

### Models

```go
type ModelType string

const ModelTypeChat ModelType = "chat"

type Model struct {
    ID          string
    DisplayName string
    Provider    Provider
    Type        ModelType
    MaxTokens   int
}

func (m *Model) Test(ctx context.Context) (*ModelTestResult, error)
```

### Messages

```go
type MessageRole string

const (
    MessageRoleUser      MessageRole = "user"
    MessageRoleAssistant MessageRole = "assistant"
    MessageRoleSystem    MessageRole = "system"
    MessageRoleTool      MessageRole = "tool"
)

type MessagePartType string

const (
    MessagePartTypeText       MessagePartType = "text"
    MessagePartTypeReasoning  MessagePartType = "reasoning"
    MessagePartTypeImage      MessagePartType = "image"
    MessagePartTypeFile       MessagePartType = "file"
    MessagePartTypeToolCall   MessagePartType = "tool-call"
    MessagePartTypeToolResult MessagePartType = "tool-result"
)

type MessagePart interface {
    PartType() MessagePartType
}

type TextPart struct {
    Text string
}

type ReasoningPart struct {
    Text      string
    Signature string
}

type ImagePart struct {
    Image     string
    MediaType string
}

type FilePart struct {
    Data      string
    MediaType string
    Filename  string
}

type ToolCallPart struct {
    ToolCallID string
    ToolName   string
    Input      any
}

type ToolResultPart struct {
    ToolCallID string
    ToolName   string
    Result     any
    IsError    bool
}

type Message struct {
    Role    MessageRole
    Content []MessagePart
}

func UserMessage(text string, extra ...MessagePart) Message
func SystemMessage(text string) Message
func AssistantMessage(text string) Message
func ToolMessage(results ...ToolResultPart) Message
```

Notes:

- `UserMessage` accepts a text string plus optional extra parts such as `ImagePart`.
- `Message` supports JSON marshal and unmarshal with type discrimination.

### Generation

```go
type FinishReason string

const (
    FinishReasonStop          FinishReason = "stop"
    FinishReasonLength        FinishReason = "length"
    FinishReasonContentFilter FinishReason = "content-filter"
    FinishReasonToolCalls     FinishReason = "tool-calls"
    FinishReasonError         FinishReason = "error"
    FinishReasonOther         FinishReason = "other"
    FinishReasonUnknown       FinishReason = "unknown"
)

type ResponseFormatType string

const (
    ResponseFormatText       ResponseFormatType = "text"
    ResponseFormatJSONObject ResponseFormatType = "json_object"
    ResponseFormatJSONSchema ResponseFormatType = "json_schema"
)

type ResponseFormat struct {
    Type       ResponseFormatType
    JSONSchema any
}

type GenerateParams struct {
    Model            *Model
    System           string
    Messages         []Message
    Tools            []Tool
    ToolChoice       any
    ResponseFormat   *ResponseFormat
    Temperature      *float64
    TopP             *float64
    MaxTokens        *int
    StopSequences    []string
    FrequencyPenalty *float64
    PresencePenalty  *float64
    Seed             *int
    ReasoningEffort  *string
}

type StepResult struct {
    Text            string
    Reasoning       string
    FinishReason    FinishReason
    RawFinishReason string
    Usage           Usage
    ToolCalls       []ToolCall
    ToolResults     []ToolResult
    Response        ResponseMetadata
    Messages        []Message
}

type GenerateResult struct {
    Text            string
    Reasoning       string
    FinishReason    FinishReason
    RawFinishReason string
    Usage           Usage
    Sources         []Source
    Files           []GeneratedFile
    ToolCalls       []ToolCall
    ToolResults     []ToolResult
    Response        ResponseMetadata
    Steps           []StepResult
    Messages        []Message
}
```

### Generate Options

```go
type GenerateOption func(*generateConfig)

func WithModel(model *Model) GenerateOption
func WithMessages(messages []Message) GenerateOption
func WithSystem(text string) GenerateOption
func WithTools(tools []Tool) GenerateOption
func WithToolChoice(choice any) GenerateOption
func WithResponseFormat(rf ResponseFormat) GenerateOption
func WithTemperature(t float64) GenerateOption
func WithTopP(topP float64) GenerateOption
func WithMaxTokens(n int) GenerateOption
func WithStopSequences(s []string) GenerateOption
func WithFrequencyPenalty(penalty float64) GenerateOption
func WithPresencePenalty(penalty float64) GenerateOption
func WithSeed(s int) GenerateOption
func WithReasoningEffort(effort string) GenerateOption

func WithMaxSteps(n int) GenerateOption
func WithOnFinish(fn func(*GenerateResult)) GenerateOption
func WithOnStep(fn func(*StepResult) *GenerateParams) GenerateOption
func WithPrepareStep(fn func(*GenerateParams) *GenerateParams) GenerateOption
func WithApprovalHandler(fn func(ctx context.Context, call ToolCall) (bool, error)) GenerateOption
```

Behavior notes:

- `WithMaxSteps(0)` is the default single-call mode.
- `WithMaxSteps(N)` enables automatic tool execution for up to `N` LLM calls.
- `WithMaxSteps(-1)` means an unlimited loop until the model stops requesting tools.
- `WithToolChoice` accepts `"auto"`, `"none"`, or `"required"`.

### Tools

```go
type ToolExecuteFunc func(ctx *ToolExecContext, input any) (any, error)

type ToolExecContext struct {
    context.Context
    ToolCallID   string
    ToolName     string
    SendProgress func(content any)
}

type Tool struct {
    Name            string
    Description     string
    Parameters      any
    Execute         ToolExecuteFunc
    RequireApproval bool
}

func NewTool[T any](
    name, description string,
    execute func(ctx *ToolExecContext, input T) (any, error),
) Tool

type ToolCall struct {
    ToolCallID string
    ToolName   string
    Input      any
}

type ToolResult struct {
    ToolCallID string
    ToolName   string
    Input      any
    Output     any
    IsError    bool
}
```

### Streaming

```go
type StreamPartType string

const (
    StreamPartTypeTextStart           StreamPartType = "text-start"
    StreamPartTypeTextDelta           StreamPartType = "text-delta"
    StreamPartTypeTextEnd             StreamPartType = "text-end"
    StreamPartTypeReasoningStart      StreamPartType = "reasoning-start"
    StreamPartTypeReasoningDelta      StreamPartType = "reasoning-delta"
    StreamPartTypeReasoningEnd        StreamPartType = "reasoning-end"
    StreamPartTypeToolInputStart      StreamPartType = "tool-input-start"
    StreamPartTypeToolInputDelta      StreamPartType = "tool-input-delta"
    StreamPartTypeToolInputEnd        StreamPartType = "tool-input-end"
    StreamPartTypeToolCall            StreamPartType = "tool-call"
    StreamPartTypeToolResult          StreamPartType = "tool-result"
    StreamPartTypeToolError           StreamPartType = "tool-error"
    StreamPartTypeToolOutputDenied    StreamPartType = "tool-output-denied"
    StreamPartTypeToolApprovalRequest StreamPartType = "tool-approval-request"
    StreamPartTypeToolProgress        StreamPartType = "tool-progress"
    StreamPartTypeSource              StreamPartType = "source"
    StreamPartTypeFile                StreamPartType = "file"
    StreamPartTypeStart               StreamPartType = "start"
    StreamPartTypeFinish              StreamPartType = "finish"
    StreamPartTypeStartStep           StreamPartType = "start-step"
    StreamPartTypeFinishStep          StreamPartType = "finish-step"
    StreamPartTypeError               StreamPartType = "error"
    StreamPartTypeAbort               StreamPartType = "abort"
    StreamPartTypeRaw                 StreamPartType = "raw"
)

type StreamPart interface {
    Type() StreamPartType
}

type TextStartPart struct {
    ID               string
    ProviderMetadata map[string]any
}

type TextDeltaPart struct {
    ID               string
    Text             string
    ProviderMetadata map[string]any
}

type TextEndPart struct {
    ID               string
    ProviderMetadata map[string]any
}

type ReasoningStartPart struct {
    ID               string
    ProviderMetadata map[string]any
}

type ReasoningDeltaPart struct {
    ID               string
    Text             string
    ProviderMetadata map[string]any
}

type ReasoningEndPart struct {
    ID               string
    ProviderMetadata map[string]any
}

type ToolInputStartPart struct {
    ID               string
    ToolName         string
    ProviderMetadata map[string]any
}

type ToolInputDeltaPart struct {
    ID               string
    Delta            string
    ProviderMetadata map[string]any
}

type ToolInputEndPart struct {
    ID               string
    ProviderMetadata map[string]any
}

type StreamToolCallPart struct {
    ToolCallID string
    ToolName   string
    Input      any
}

type StreamToolResultPart struct {
    ToolCallID string
    ToolName   string
    Input      any
    Output     any
}

type StreamToolErrorPart struct {
    ToolCallID string
    ToolName   string
    Error      error
}

type ToolOutputDeniedPart struct {
    ToolCallID string
    ToolName   string
}

type ToolApprovalRequestPart struct {
    ApprovalID string
    ToolCallID string
    ToolName   string
    Input      any
}

type ToolProgressPart struct {
    ToolCallID string
    ToolName   string
    Content    any
}

type StreamSourcePart struct {
    Source Source
}

type StreamFilePart struct {
    File GeneratedFile
}

type StartPart struct{}

type FinishPart struct {
    FinishReason    FinishReason
    RawFinishReason string
    TotalUsage      Usage
}

type StartStepPart struct{}

type FinishStepPart struct {
    FinishReason     FinishReason
    RawFinishReason  string
    Usage            Usage
    Response         ResponseMetadata
    ProviderMetadata map[string]any
}

type ErrorPart struct {
    Error error
}

type AbortPart struct {
    Reason string
}

type RawPart struct {
    RawValue any
}

type StreamResult struct {
    Stream   <-chan StreamPart
    Steps    []StepResult
    Messages []Message
}

func (sr *StreamResult) Text() (string, error)
func (sr *StreamResult) ToResult() (*GenerateResult, error)
```

### Usage, Sources, Files, Response Metadata

```go
type Usage struct {
    InputTokens        int
    OutputTokens       int
    TotalTokens        int
    ReasoningTokens    int
    CachedInputTokens  int
    InputTokenDetails  InputTokenDetail
    OutputTokenDetails OutputTokenDetail
}

type InputTokenDetail struct {
    CacheReadTokens     int
    CacheCreationTokens int
}

type OutputTokenDetail struct {
    TextTokens      int
    ReasoningTokens int
    AudioTokens     int
}

type Source struct {
    SourceType       string
    ID               string
    URL              string
    Title            string
    ProviderMetadata map[string]any
}

type GeneratedFile struct {
    Data      string
    MediaType string
}

type ResponseMetadata struct {
    ID        string
    ModelID   string
    Timestamp time.Time
    Headers   map[string]string
}
```

### Embeddings

```go
type EmbeddingProvider interface {
    DoEmbed(ctx context.Context, params EmbedParams) (*EmbedResult, error)
}

type EmbeddingModel struct {
    ID                   string
    Provider             EmbeddingProvider
    MaxEmbeddingsPerCall int
}

type EmbedParams struct {
    Model      *EmbeddingModel
    Values     []string
    Dimensions *int
}

type EmbedResult struct {
    Embeddings [][]float64
    Usage      EmbeddingUsage
}

type EmbeddingUsage struct {
    Tokens int
}

type EmbedOption func(*embedConfig)

func WithEmbeddingModel(model *EmbeddingModel) EmbedOption
func WithDimensions(d int) EmbedOption
```

## Package `provider/openai/completions`

Implements the OpenAI Chat Completions API and OpenAI-compatible `/chat/completions` backends.

```go
type Provider struct { /* unexported fields */ }

type Option func(*Provider)

func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider

func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
```

Default option values:

- `WithBaseURL`: `https://api.openai.com/v1`
- `WithHTTPClient`: `&http.Client{}`

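The `Option` shape used by every provider package is the standard Go functional-options pattern: `New` applies defaults first, then caller-supplied overrides. A self-contained sketch (the local `config` type and its fields are illustrative, not the provider's actual unexported fields):

```go
package main

import "fmt"

// config stands in for a provider's unexported fields.
type config struct {
	apiKey  string
	baseURL string
}

type option func(*config)

func withAPIKey(k string) option  { return func(c *config) { c.apiKey = k } }
func withBaseURL(u string) option { return func(c *config) { c.baseURL = u } }

// newConfig applies defaults first, then caller overrides,
// which is how New(options ...Option) constructors typically behave.
func newConfig(opts ...option) *config {
	c := &config{baseURL: "https://api.openai.com/v1"} // default
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	c := newConfig(withAPIKey("sk-test"), withBaseURL("http://localhost:11434/v1"))
	fmt.Println(c.baseURL) // the override replaces the default
}
```

Pointing `WithBaseURL` at a local `/v1` endpoint is how the completions provider reaches OpenAI-compatible backends such as Ollama.
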
Discovery endpoints:
|
||||||
|
|
||||||
|
- `ListModels`: `GET /models`
|
||||||
|
- `Test`: `GET /models?limit=1`
|
||||||
|
- `TestModel`: `GET /models/{id}`
|
||||||
|
|
||||||
|
## Package `provider/openai/responses`
|
||||||
|
|
||||||
|
Implements the OpenAI Responses API with reasoning summaries, annotations, and flat input mapping.
|
||||||
|
|
||||||
|
```go
|
||||||
|
type Provider struct { /* unexported fields */ }
|
||||||
|
|
||||||
|
type Option func(*Provider)
|
||||||
|
|
||||||
|
func WithAPIKey(apiKey string) Option
|
||||||
|
func WithBaseURL(baseURL string) Option
|
||||||
|
func WithHTTPClient(client *http.Client) Option
|
||||||
|
func New(options ...Option) *Provider
|
||||||
|
|
||||||
|
func (p *Provider) Name() string
|
||||||
|
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
|
||||||
|
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
|
||||||
|
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
|
||||||
|
func (p *Provider) ChatModel(id string) *sdk.Model
|
||||||
|
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
|
||||||
|
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
|
||||||
|
```
|
||||||
|
|
||||||
|
Default option values:
|
||||||
|
|
||||||
|
- `WithBaseURL`: `https://api.openai.com/v1`
|
||||||
|
- `WithHTTPClient`: `&http.Client{}`
|
||||||
|
|
||||||
|
Discovery endpoints:
|
||||||
|
|
||||||
|
- `ListModels`: `GET /models`
|
||||||
|
- `Test`: `GET /models?limit=1`
|
||||||
|
- `TestModel`: `GET /models/{id}`
|
||||||
|
|
||||||
|
Responses-specific behavior:
|
||||||
|
|
||||||
|
- assistant reasoning maps to `GenerateResult.Reasoning`
|
||||||
|
- URL citation annotations map to `GenerateResult.Sources`
|
||||||
|
- function-call outputs map to tool-call and tool-result structures
|
||||||
|
|
||||||
|
## Package `provider/anthropic/messages`

Implements the Anthropic Messages API.

```go
type ThinkingConfig struct {
	Type         string
	BudgetTokens int
}

type Provider struct { /* unexported fields */ }

type Option func(*Provider)

func WithAPIKey(apiKey string) Option
func WithAuthToken(token string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func WithHeaders(headers map[string]string) Option
func WithThinking(cfg ThinkingConfig) Option
func New(options ...Option) *Provider

func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
```

Default option values:

- `WithBaseURL`: `https://api.anthropic.com/v1`
- default API version header: `2023-06-01`
- `WithHTTPClient`: `&http.Client{}`

Thinking config notes:

- `Type` supports `"enabled"`, `"adaptive"`, or `"disabled"`
- `BudgetTokens` is required when `Type == "enabled"`

Discovery endpoints:

- `ListModels`: `GET /v1/models`
- `Test`: `GET /v1/models?limit=1`
- `TestModel`: `GET /v1/models/{id}`
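The thinking-config rules above can be captured as a small validation sketch. The `validateThinking` helper is hypothetical (not part of the SDK's exported API); it only encodes the two documented constraints:

```go
package main

import (
	"errors"
	"fmt"
)

// ThinkingConfig mirrors the type exposed by provider/anthropic/messages.
type ThinkingConfig struct {
	Type         string
	BudgetTokens int
}

// validateThinking is an illustrative helper, not an SDK function:
// Type must be "enabled", "adaptive", or "disabled", and BudgetTokens
// is required (positive) only when Type == "enabled".
func validateThinking(cfg ThinkingConfig) error {
	switch cfg.Type {
	case "enabled":
		if cfg.BudgetTokens <= 0 {
			return errors.New(`BudgetTokens is required when Type == "enabled"`)
		}
	case "adaptive", "disabled":
		// no token budget needed
	default:
		return fmt.Errorf("unsupported thinking type %q", cfg.Type)
	}
	return nil
}

func main() {
	fmt.Println(validateThinking(ThinkingConfig{Type: "enabled", BudgetTokens: 2048}))
	fmt.Println(validateThinking(ThinkingConfig{Type: "enabled"}))
}
```

In practice the config is passed via `WithThinking(cfg)` when constructing the provider.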
## Package `provider/google/generativeai`

Implements the Google Generative AI API for Gemini chat models.

```go
type Provider struct { /* unexported fields */ }

type Option func(*Provider)

func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider

func (p *Provider) Name() string
func (p *Provider) ListModels(ctx context.Context) ([]sdk.Model, error)
func (p *Provider) Test(ctx context.Context) *sdk.ProviderTestResult
func (p *Provider) TestModel(ctx context.Context, modelID string) (*sdk.ModelTestResult, error)
func (p *Provider) ChatModel(id string) *sdk.Model
func (p *Provider) DoGenerate(ctx context.Context, params sdk.GenerateParams) (*sdk.GenerateResult, error)
func (p *Provider) DoStream(ctx context.Context, params sdk.GenerateParams) (*sdk.StreamResult, error)
```

Default option values:

- `WithBaseURL`: `https://generativelanguage.googleapis.com/v1beta`
- `WithHTTPClient`: `&http.Client{}`

Model ID rules:

- plain names like `gemini-2.5-flash` are accepted
- full paths like `publishers/google/models/gemini-2.5-flash` are also accepted

Discovery endpoints:

- `ListModels`: `GET /v1beta/models`
- `Test`: `GET /v1beta/models?pageSize=1`
- `TestModel`: `GET /v1beta/models/{id}`
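The model ID rule above amounts to reducing a full path to its trailing model name before building a `models/{id}` URL. A stdlib-only sketch of that normalization (`normalizeModelID` is a hypothetical name, not an exported SDK function):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeModelID illustrates the documented rule: plain names pass
// through unchanged, while full paths such as
// "publishers/google/models/gemini-2.5-flash" are reduced to the
// trailing segment before use in /v1beta/models/{id} requests.
func normalizeModelID(id string) string {
	if i := strings.LastIndex(id, "/"); i >= 0 {
		return id[i+1:]
	}
	return id
}

func main() {
	fmt.Println(normalizeModelID("gemini-2.5-flash"))
	fmt.Println(normalizeModelID("publishers/google/models/gemini-2.5-flash"))
}
```

Both calls yield `gemini-2.5-flash`, so callers can pass whichever form they already have.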
## Package `provider/openai/embedding`

Implements the OpenAI Embeddings API.

```go
type Provider struct { /* unexported fields */ }

type Option func(*Provider)

func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func New(options ...Option) *Provider

func (p *Provider) EmbeddingModel(id string) *sdk.EmbeddingModel
func (p *Provider) DoEmbed(ctx context.Context, params sdk.EmbedParams) (*sdk.EmbedResult, error)
```

Default option values:

- `WithBaseURL`: `https://api.openai.com/v1`
- `WithHTTPClient`: `&http.Client{}`

Behavior notes:

- `EmbeddingModel(id)` returns a model with `MaxEmbeddingsPerCall: 2048`
- `DoEmbed` calls `POST /embeddings` with `encoding_format: "float"`
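`MaxEmbeddingsPerCall: 2048` means a caller embedding a large corpus must split it into batches before issuing `DoEmbed` calls. A stdlib-only sketch of that batching (the `chunkInputs` helper is hypothetical; `sdk.EmbedMany` may already handle this for you):

```go
package main

import "fmt"

// chunkInputs splits inputs into consecutive batches of at most
// maxPerCall items, one batch per DoEmbed request.
func chunkInputs(inputs []string, maxPerCall int) [][]string {
	var batches [][]string
	for len(inputs) > maxPerCall {
		batches = append(batches, inputs[:maxPerCall])
		inputs = inputs[maxPerCall:]
	}
	if len(inputs) > 0 {
		batches = append(batches, inputs)
	}
	return batches
}

func main() {
	inputs := make([]string, 5000)
	batches := chunkInputs(inputs, 2048)
	fmt.Println(len(batches)) // 3: batches of 2048, 2048, and 904
}
```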
## Package `provider/google/embedding`

Implements the Google embedding API.

```go
type Provider struct { /* unexported fields */ }

type Option func(*Provider)

func WithAPIKey(apiKey string) Option
func WithBaseURL(baseURL string) Option
func WithHTTPClient(client *http.Client) Option
func WithTaskType(taskType string) Option
func New(options ...Option) *Provider

func (p *Provider) EmbeddingModel(id string) *sdk.EmbeddingModel
func (p *Provider) DoEmbed(ctx context.Context, params sdk.EmbedParams) (*sdk.EmbedResult, error)
```

Default option values:

- `WithBaseURL`: `https://generativelanguage.googleapis.com/v1beta`
- `WithHTTPClient`: `&http.Client{}`

Task type values:

- `RETRIEVAL_QUERY`
- `RETRIEVAL_DOCUMENT`
- `SEMANTIC_SIMILARITY`
- `CLASSIFICATION`
- `CLUSTERING`
- `QUESTION_ANSWERING`
- `FACT_VERIFICATION`
- `CODE_RETRIEVAL_QUERY`

Behavior notes:

- `EmbeddingModel(id)` returns a model with `MaxEmbeddingsPerCall: 2048`
- single-value embedding uses `embedContent`
- multi-value embedding uses `batchEmbedContents`
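The single-vs-batch routing above can be sketched as a tiny endpoint selector. This is an illustration only: the helper name and the exact `:embedContent` / `:batchEmbedContents` path shape are assumptions based on the documented v1beta URL layout, not the SDK's internal code:

```go
package main

import "fmt"

// embedEndpoint picks the documented method for a request: one input
// goes to embedContent, more than one goes to batchEmbedContents.
// The path format is an assumption about the v1beta layout.
func embedEndpoint(model string, inputCount int) string {
	if inputCount == 1 {
		return fmt.Sprintf("/v1beta/models/%s:embedContent", model)
	}
	return fmt.Sprintf("/v1beta/models/%s:batchEmbedContents", model)
}

func main() {
	fmt.Println(embedEndpoint("gemini-embedding-001", 1))
	fmt.Println(embedEndpoint("gemini-embedding-001", 16))
}
```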
## Selection Cheatsheet

- Broad OpenAI-compatible chat API: `provider/openai/completions`
- OpenAI Responses features such as reasoning summaries or citation annotations: `provider/openai/responses`
- Claude and extended thinking: `provider/anthropic/messages`
- Gemini chat and tool calling: `provider/google/generativeai`
- OpenAI-compatible embeddings: `provider/openai/embedding`
- Gemini embeddings with task-type tuning: `provider/google/embedding`
@@ -0,0 +1,10 @@
{
  "version": 1,
  "skills": {
    "twilight-ai": {
      "source": "memohai/twilight-ai",
      "sourceType": "github",
      "computedHash": "f52a544c699944def25f46ac924b9e49cbf6b951f768325a0df9dd3f3fb512ab"
    }
  }
}