# AbstractCore Documentation Index
This folder contains the canonical user documentation for AbstractCore. The codebase is the source of truth; if you spot a mismatch, please open an issue.
## Table of Contents

- [AbstractFramework ecosystem](#abstractframework-ecosystem)
- [Start here (recommended reading order)](#start-here-recommended-reading-order)
- [Core guides](#core-guides)
- [Media, embeddings, and MCP (optional subsystems)](#media-embeddings-and-mcp-optional-subsystems)
- [Server (optional HTTP API)](#server-optional-http-api)
- [Built-in CLI apps](#built-in-cli-apps)
- [Project docs](#project-docs)
- [Docs layout (what’s where)](#docs-layout-whats-where)

## AbstractFramework ecosystem
AbstractCore is one of the core packages of the AbstractFramework ecosystem:
- AbstractFramework (umbrella): https://github.com/lpalbou/AbstractFramework
- AbstractCore (this package): unified LLM interface + cross-provider infrastructure (tools, streaming, structured output, media policies)
- AbstractRuntime: durable tool/effect execution, workflows, and state persistence (recommended runtime for executing `response.tool_calls`) — https://github.com/lpalbou/abstractruntime
## Start here (recommended reading order)
- Prerequisites — install/configure providers (Ollama, LMStudio, vLLM, HuggingFace, MLX, OpenAI, Anthropic, OpenRouter, Portkey, …)
- Getting Started — first call (`create_llm`, `generate`), streaming, tools, structured output (a minimal sketch follows this list)
- FAQ — install extras, local servers, common gotchas
- Troubleshooting — actionable fixes for common failures
- API (Python) — user-facing map of the public API
- API Reference — complete function/class reference (including events)
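For orientation, here is a minimal sketch of the first-call flow the Getting Started guide covers. `create_llm` and `generate` are the documented entry points; the import path, provider/model names, and the `stream` keyword are assumptions here, so check Getting Started for the canonical usage.

```python
from abstractcore import create_llm  # import path assumed; see Getting Started

# Create a provider-backed client (provider and model names are placeholders).
llm = create_llm("ollama", model="llama3")

# Single-shot generation.
response = llm.generate("Summarize AbstractCore in one sentence.")
print(response)

# Streaming variant, as described in Getting Started (stream=True is an assumption).
# for chunk in llm.generate("Same prompt", stream=True):
#     print(chunk, end="")
```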
## Core guides
- Tool Calling — native + prompted tools; passthrough vs execution
- Tool Syntax Rewriting — normalize tool-call markup for different runtimes/clients
- Structured Output — `response_model=...` strategies and limitations (sketched after this list)
- Session Management — conversation state, persistence, compaction
- Generation Parameters — unified parameter vocabulary + provider quirks
- Centralized Config — config file + config CLI (`abstractcore --config`)
- Events and Structured Logging — observability hooks
- Interaction Tracing — record prompts/responses/usage for debugging
- Capabilities — what AbstractCore can and cannot do
- Capability plugins (voice/audio/vision) — optional deterministic outputs via `llm.voice` / `llm.audio` / `llm.vision` (see `capabilities.md` and `server.md`)
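As a taste of the Structured Output guide, here is a minimal sketch using `response_model=...`. The Pydantic-style schema and the exact return shape are assumptions for illustration; the guide documents the supported strategies and their limitations.

```python
from pydantic import BaseModel

from abstractcore import create_llm  # import path assumed; see Getting Started


class City(BaseModel):
    """Illustrative schema; a Pydantic-style model is an assumption here."""
    name: str
    population: int


llm = create_llm("openai", model="gpt-4o-mini")  # placeholder provider/model

# response_model=... asks AbstractCore to coerce the reply into the schema.
# Whether you get a City instance back directly is an assumption; see the guide.
response = llm.generate("Name one large city and its population.", response_model=City)
print(response)
```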
## Media, embeddings, and MCP (optional subsystems)
- Media Handling System — images/audio/video + documents (policies + fallbacks)
- Vision Capabilities — image/video input, vision fallback, and how this differs from generative vision
- Glyph Visual-Text Compression — optional vision-based document compression (experimental)
- Embeddings — `EmbeddingManager` and local embedding models (opt-in); a sketch follows this list
- MCP (Model Context Protocol) — consume MCP tool servers (HTTP/stdio) as tool sources
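The Embeddings guide centers on `EmbeddingManager`. Below is a hypothetical sketch: the import path, the constructor's `model` argument, and the `embed` method name are all illustrative, not confirmed API; the guide has the real interface.

```python
from abstractcore.embeddings import EmbeddingManager  # import path assumed

# Opt-in local embeddings; the model name below is a placeholder.
manager = EmbeddingManager(model="all-MiniLM-L6-v2")

# Hypothetical call: an embed-style method returning one vector per input text.
vectors = manager.embed(["AbstractCore docs", "MCP tool servers"])
print(len(vectors), len(vectors[0]))
```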
## Server (optional HTTP API)
- Server — OpenAI-compatible `/v1` gateway (install `pip install "abstractcore[server]"`)
- Endpoint — single-model OpenAI-compatible `/v1` endpoint (install `pip install "abstractcore[server]"`; run `abstractcore-endpoint`)
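Because both servers speak the OpenAI `/v1` protocol, any OpenAI-compatible client should work against them. A minimal sketch with the official `openai` Python package; the host, port, and model name are assumptions for a local deployment (see `server.md` for specifics).

```python
from openai import OpenAI

# Point any OpenAI-compatible client at the local gateway or endpoint.
# Base URL and model name below are assumptions; see server.md.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

completion = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Hello from the AbstractCore gateway"}],
)
print(completion.choices[0].message.content)
```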
## Built-in CLI apps

These are convenience CLIs built on top of the core library; see `docs/apps/` for the individual guides.
## Project docs
- Changelog — release notes and upgrade guidance
- Contributing — dev setup and PR guidelines
- Security — responsible vulnerability reporting
- Acknowledgements — upstream projects and communities
- License — MIT license text
## Docs layout (what’s where)

`docs/` is mostly a flat set of guides plus a few subfolders:
- `docs/apps/` — CLI app guides
- `docs/known_bugs/` — focused notes on known issues (when present)
- `docs/archive/` — superseded/historical docs (see `docs/archive/README.md`)
- `docs/backlog/` — planning notes (see `docs/backlog/README.md`)
- `docs/reports/` — non-authoritative engineering notes (see `docs/reports/README.md`)
- `docs/research/` — non-authoritative experiments (see `docs/research/README.md`)
Key distinction:
- `api.md` = API overview (how to use the public API)
- `api-reference.md` = full Python API reference
- `server.md` = HTTP server endpoints and deployment