
Reference

Technical reference for Tapes providers, storage backends, and data model.

Config File

Configuration is stored in .tapes/config.toml. Tapes looks for config in this order:

  1. ./.tapes/config.toml — local project directory
  2. ~/.tapes/config.toml — user home directory

# .tapes/config.toml
version = 0

[storage]
sqlite_path = "~/.tapes/tapes.sqlite"
# postgres_dsn = "postgres://user:pass@localhost:5432/tapes"

[publisher.kafka]
# brokers = "localhost:9092"
# topic = "tapes.nodes.v1"
# client_id = "my-app"

[proxy]
provider = "anthropic"
upstream = "https://api.anthropic.com"
listen = ":8080"
project = "my-app"

[api]
listen = ":8081"

[ingest]
listen = ":8082"

[client]
proxy_target = "http://localhost:8080"
api_target = "http://localhost:8081"

[vector_store]
provider = "sqlite"
target = ""

[embedding]
provider = "ollama"
target = "http://localhost:11434"
model = "embeddinggemma"
dimensions = 768

[opencode]
provider = "anthropic"
model = "claude-sonnet-4-5"

[telemetry]
disabled = false

[update]
# disabled = true

Precedence

Configuration values are resolved in this order (highest to lowest priority):

  1. CLI flags — Always override everything
  2. Environment variables — TAPES_* prefixed variables (see env var reference)
  3. Config file — Values set in config.toml
  4. Defaults — Built-in default values

Use tapes config list to see all current values. See CLI config for management commands.
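For example, a CLI flag overrides the same setting in config.toml (flags shown are the documented --provider and --upstream pair; the upstream URL is illustrative):

```shell
# config.toml sets provider = "anthropic", but the flag wins:
tapes serve --provider ollama --upstream http://localhost:11434
```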

Providers

Tapes works with any LLM provider. Configure using --provider and --upstream flags.

Provider           Upstream URL
Ollama (default)   http://localhost:11434
Anthropic          https://api.anthropic.com
OpenAI             https://api.openai.com

The --provider flag tells Tapes how to parse the API format; the --upstream flag (shorthand: -u) specifies where to forward requests.
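The same pairing can also be set in the config file. A sketch pointing the proxy at a local Ollama, mirroring the keys from the [proxy] section above:

```toml
[proxy]
provider = "ollama"
upstream = "http://localhost:11434"
```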

Storage

Tapes supports three storage backends:

SQLite (default for tapes serve) — Persistent storage. Default path: ~/.tapes/tapes.sqlite. Data persists across restarts.
In-Memory (default for tapes serve proxy) — Fast, ephemeral storage. Data is lost when the server stops. Use --sqlite /path/to/db.sqlite for persistence.
PostgreSQL — Production-grade persistent storage for teams and concurrent writes. Use --postgres "postgres://user:pass@host:5432/tapes" to enable. Migrations run automatically on startup. See the PostgreSQL Storage guide.
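In config-file form, the backend choice lives in the [storage] section. A sketch switching to PostgreSQL (the DSN is a placeholder, matching the sample in the config above):

```toml
[storage]
# sqlite_path = "~/.tapes/tapes.sqlite"
postgres_dsn = "postgres://user:pass@localhost:5432/tapes"
```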

Streaming

Tapes can publish DAG node events to an external stream alongside storage. Publishing is best-effort — storage remains authoritative.

Kafka — Publish conversation events to Kafka topics, partitioned by conversation root hash. Use --kafka-brokers and --kafka-topic to enable. See the Kafka Streaming guide.
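The config-file equivalent uncomments the [publisher.kafka] section shown earlier; the values below are the sample defaults from that example:

```toml
[publisher.kafka]
brokers = "localhost:9092"
topic = "tapes.nodes.v1"
client_id = "my-app"
```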

Vector Storage

Semantic search for stored conversations. When enabled, embeddings are generated for each turn.

SQLite Vector (default) — Built-in vector storage using SQLite. No additional setup required.
Chroma — External vector store. Start with: docker run -p 8000:8000 chromadb/chroma
Qdrant — High-performance vector store with a gRPC API. Start with: docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant. Dashboard at http://localhost:6333/dashboard
pgvector — PostgreSQL extension for vector similarity search. Requires PostgreSQL with the pgvector extension installed. Uses HNSW indexing for efficient cosine similarity queries.
Ollama Embeddings — Generates embeddings using embeddinggemma by default. Install with: ollama pull embeddinggemma:latest
# SQLite vector store (default)
tapes serve \
  --vector-store-provider sqlite \
  --embedding-provider ollama \
  --embedding-model embeddinggemma:latest

# Chroma
tapes serve \
  --vector-store-provider chroma \
  --vector-store-target http://localhost:8000 \
  --embedding-provider ollama \
  --embedding-model embeddinggemma:latest

# Qdrant (gRPC port)
tapes serve \
  --vector-store-provider qdrant \
  --vector-store-target http://localhost:6334 \
  --embedding-provider ollama \
  --embedding-model embeddinggemma:latest

# pgvector
tapes serve \
  --vector-store-provider pgvector \
  --vector-store-target "postgres://user:pass@localhost:5432/tapes" \
  --embedding-provider ollama \
  --embedding-model embeddinggemma:latest
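These flags can equally be set in config.toml via the [vector_store] and [embedding] sections. A sketch for the Chroma case, mirroring the keys from the config example above (targets are the same sample URLs):

```toml
[vector_store]
provider = "chroma"
target = "http://localhost:8000"

[embedding]
provider = "ollama"
target = "http://localhost:11434"
model = "embeddinggemma:latest"
```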

Bucket Data Model

Each conversation turn is stored as a Bucket containing:

type — Node type (request or response)
role — Message role (user, assistant, system)
content — Message content
model — Model identifier
provider — Provider that handled the request
stop_reason — Why generation stopped
usage — Token usage statistics

Buckets are hashed and stored in a Merkle DAG structure for cryptographic verification.
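As a rough sketch of the Merkle-DAG idea (not Tapes's actual field encoding or hash layout, which this reference does not specify): each node's hash covers its own content plus its parents' hashes, so altering any ancestor changes every descendant's hash and breaks verification.

```python
import hashlib

def hash_node(role: str, content: str, parent_hashes: list[str]) -> str:
    """Content-address a node: its fields plus its sorted parent hashes
    feed a single SHA-256 digest (a simplified, hypothetical encoding)."""
    h = hashlib.sha256()
    for part in [role, content, *sorted(parent_hashes)]:
        h.update(part.encode())
        h.update(b"\x00")  # field separator to avoid ambiguous concatenation
    return h.hexdigest()

# A two-turn chain: the response node commits to the request's hash.
request = hash_node("user", "hello", [])
response = hash_node("assistant", "hi", [request])

# Editing the request yields a different hash, so the stored response
# no longer links to it — the tampering is detectable.
tampered = hash_node("user", "hello!", [])
assert tampered != request
```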