# AbstractCore Capabilities
This document explains what AbstractCore can and cannot do, so you know when to use it and when to look elsewhere.
## What AbstractCore IS
AbstractCore is production-ready LLM infrastructure. It provides a unified, reliable interface to language models with essential features built-in.
### Core Philosophy

- **Infrastructure**, not application logic
- **Reliability** over features
- **Simplicity** over complexity
- **Provider agnostic**: works everywhere
## ✅ What AbstractCore Does Exceptionally Well
### 1. Universal LLM Provider Interface

**What it does:** Provides an identical API across all major LLM providers.

```python
from abstractcore import create_llm

# The same code works with any provider
def ask_llm(provider_name, question):
    llm = create_llm(provider_name, model="default")
    return llm.generate(question)

# All of these work identically
ask_llm("openai", "What is Python?")
ask_llm("anthropic", "What is Python?")
ask_llm("ollama", "What is Python?")
```

**Why it's exceptional:** No other library provides truly universal tool calling, streaming, and structured output across all providers.
### 2. Production-Grade Reliability

**What it does:** Handles failures gracefully with retry logic, circuit breakers, and comprehensive error handling.

- Automatic retries with exponential backoff for rate limits and network errors (sketched below)
- Circuit breakers that prevent cascade failures when a provider goes down
- Smart error classification: recoverable errors are retried, auth errors fail fast
- Event system for monitoring and alerting
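The retry behavior follows the classic backoff pattern sketched below. This is an illustrative sketch only, with hypothetical exception names standing in for real provider errors; it is not AbstractCore's actual internals.

```python
import random
import time

# Hypothetical stand-ins for a provider SDK's exception types.
class AuthError(Exception): ...
class RateLimitError(Exception): ...

def with_retries(call, max_attempts=3):
    """Retry recoverable errors with exponential backoff; fail fast otherwise."""
    for attempt in range(max_attempts):
        try:
            return call()
        except AuthError:
            raise  # not recoverable: retrying a bad API key never helps
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(2 ** attempt + random.random())
```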
**Why it's exceptional:** Built for production from day one, rather than for research or prototyping.
### 3. Universal Tool Calling

**What it does:** Tools work consistently across ALL providers, even those without native tool support.

```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: 72°F, sunny"

# Works with providers that have native tool support
openai_llm = create_llm("openai", model="gpt-4o-mini")
openai_response = openai_llm.generate("Weather in Paris?", tools=[get_weather])

# Also works with providers that don't (via intelligent prompting)
ollama_llm = create_llm("ollama", model="llama3:8b")
ollama_response = ollama_llm.generate("Weather in Paris?", tools=[get_weather])
```
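For providers without a native tool API, the general technique behind "intelligent prompting" is to describe the available tools in the prompt and then parse a structured call out of the model's reply. A minimal sketch of that idea (deliberately simplified; the prompt wording and regex are illustrative, not AbstractCore's actual implementation):

```python
import json
import re

# Describe the tool in the prompt so any text model can "call" it.
TOOL_PROMPT = (
    "You can call get_weather(city: str). To use it, reply with JSON like: "
    '{"name": "get_weather", "arguments": {"city": "Paris"}}'
)

def parse_tool_call(reply: str):
    """Extract the first JSON object from the model's reply, if any."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    return json.loads(match.group(0)) if match else None
```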
**Why it's exceptional:** Most libraries only support OpenAI-style tools. AbstractCore makes ANY model work with tools.
### 4. Centralized Configuration System

**What it does:** A single source of truth for all settings, with a clear priority hierarchy.

```bash
# One-time configuration
abstractcore --status
abstractcore --set-global-default ollama/llama3:8b
abstractcore --set-app-default summarizer openai gpt-4o-mini
abstractcore --set-api-key openai sk-your-key-here
abstractcore --download-vision-model

# Now use anywhere without repeating config
summarizer document.pdf                            # Uses configured defaults
python -m abstractcore.utils.cli --prompt "Hello"  # Uses CLI default
```
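Conceptually, resolving a setting reduces to a first-match lookup down the hierarchy, as in this sketch (a hypothetical helper for illustration, not the library's actual code):

```python
def resolve_default(explicit=None, app_default=None, global_default=None,
                    hardcoded="ollama/llama3:8b"):
    """Explicit > App-Specific > Global > Hardcoded."""
    return explicit or app_default or global_default or hardcoded

resolve_default(app_default="openai/gpt-4o-mini")  # -> "openai/gpt-4o-mini"
resolve_default()                                  # -> "ollama/llama3:8b"
```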
**Why it's exceptional:** Configuration priority (Explicit > App-Specific > Global > Hardcoded) eliminates repetitive parameter passing while maintaining full control when needed.
### 5. Universal Media Handling System

**What it does:** Attach any file type (images, PDFs, Office docs, data) with a simple `@filename` syntax or the `media=[]` parameter.

```python
from abstractcore import create_llm

# Python API - works across ALL providers
llm = create_llm("openai", model="gpt-4o")
response = llm.generate(
    "Compare the chart with the data and summarize the document",
    media=["chart.png", "data.csv", "report.pdf"]
)

# CLI - same simplicity:
#   python -m abstractcore.utils.cli --prompt "Analyze @report.pdf and @chart.png"
```

Supported formats:

- Images: PNG, JPEG, GIF, WEBP, BMP, TIFF
- Documents: PDF, DOCX, XLSX, PPTX
- Data: CSV, TSV, TXT, MD, JSON

**Why it's exceptional:** The same code works identically across OpenAI, Anthropic, Ollama, and all other providers, with automatic processor selection, provider-specific formatting, and graceful fallback.
### 6. Vision Capabilities with Intelligent Fallback

**What it does:** Image analysis across all providers, with automatic resolution optimization. Even text-only models can process images through a transparent vision fallback.

```python
from abstractcore import create_llm

# Works with vision-capable models
llm = create_llm("openai", model="gpt-4o")
response = llm.generate(
    "What's in this image?",
    media=["photo.jpg"]
)

# Also works with text-only models via vision fallback
text_llm = create_llm("lmstudio", model="qwen/qwen3-next-80b")  # No native vision
response = text_llm.generate(
    "Analyze this image",
    media=["complex_scene.jpg"]
)
# Transparent: a vision model analyzes the image, then the text model
# processes the resulting description
```
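Conceptually, the fallback is a two-step pipeline: a vision model turns the image into a description, and the text model reasons over that description. A rough sketch of the idea (the model name and the `.content` attribute are assumptions for illustration, not the actual internals):

```python
# Step 1: a vision-capable model turns the image into text (hypothetical model name).
vision_llm = create_llm("ollama", model="some-vision-model")
caption = vision_llm.generate("Describe this image in detail.", media=["complex_scene.jpg"])

# Step 2: the text-only model works from the description instead of pixels.
answer = text_llm.generate(
    f"Based on this description, analyze the scene:\n{caption.content}"
)
```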
**Why it's exceptional:** Automatic image optimization per model, cross-provider consistency, and a unique vision fallback system that lets ANY model process images.
### 7. Tool Call Tag Rewriting for Agentic CLI Compatibility

**What it does:** Rewrites tool call tags in real time to match the format each agentic CLI expects.

```python
from abstractcore import create_llm

# Rewrite tool calls for different CLIs
llm = create_llm("openai", model="gpt-4o-mini")
tools = [get_weather]  # from the tool-calling example above

# For Codex CLI (Qwen3 format)
response = llm.generate("Weather in Paris?", tools=tools, tool_call_tags="qwen3")
# Output: <|tool_call|>{"name": "get_weather", "arguments": {"location": "Paris"}}<|/tool_call|>

# For Crush CLI (LLaMA3 format)
response = llm.generate("Weather in Paris?", tools=tools, tool_call_tags="llama3")
# Output: {"name": "get_weather", "arguments": {"location": "Paris"}}

# For Gemini CLI (XML format)
response = llm.generate("Weather in Paris?", tools=tools, tool_call_tags="xml")
# Output: <tool_call>{"name": "get_weather", "arguments": {"location": "Paris"}}</tool_call>
```

**Why it's exceptional:** Seamless compatibility with any agentic CLI, without code changes.
### 8. Structured Output with Automatic Retry

**What it does:** Returns typed Python objects from LLMs, with automatic validation and retry on failure.

```python
from pydantic import BaseModel
from abstractcore import create_llm

class Product(BaseModel):
    name: str
    price: float

llm = create_llm("openai", model="gpt-4o-mini")

# Automatically retries with error feedback if validation fails
product = llm.generate(
    "Extract: Gaming laptop for $1200",
    response_model=Product
)
```
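The retry mechanism, conceptually: when the model's output fails Pydantic validation, the validation error is fed back into the next attempt so the model can correct itself. A simplified sketch of that loop (the helper and the `.content` attribute are assumptions for illustration, not the library's internals):

```python
from pydantic import ValidationError

def generate_validated(llm, prompt, model_cls, max_attempts=3):
    """Generate, validate against a Pydantic model, and retry with feedback."""
    for _ in range(max_attempts):
        reply = llm.generate(prompt)
        try:
            return model_cls.model_validate_json(reply.content)
        except ValidationError as err:
            # Feed the error back so the next attempt can fix it.
            prompt = f"{prompt}\n\nYour previous answer failed validation:\n{err}"
    raise ValueError("No valid structured output after retries")
```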
**Why it's exceptional:** Built-in validation retry means higher success rates and less manual error handling.
## ❌ What AbstractCore Does NOT Do
Understanding limitations is crucial for choosing the right tool.
### 1. RAG Pipelines (Use Specialized Tools)

**What AbstractCore provides:** Vector embeddings via `EmbeddingManager`.

**What it doesn't provide:** Document chunking, vector databases, retrieval strategies.
```python
# AbstractCore gives you this
from abstractcore.embeddings import EmbeddingManager

embedder = EmbeddingManager()
similarity = embedder.compute_similarity("query", "document")

# You need to build this yourself
def rag_pipeline(query, documents):
    # 1. Chunk documents - YOU implement
    # 2. Store them in a vector DB - YOU implement
    # 3. Retrieve relevant chunks - YOU implement
    # 4. Construct the prompt from query + chunks - YOU implement
    prompt = ...  # assembled from the retrieved chunks
    return llm.generate(prompt)
```
**Better alternatives:**

- LlamaIndex - full RAG framework
- LangChain - RAG components and chains
### 2. Complex Agent Workflows (Use Agent Frameworks)

**What AbstractCore provides:** Single LLM calls with tool execution.

**What it doesn't provide:** Multi-step agent reasoning, planning, memory persistence.
```python
# AbstractCore is great for this
response = llm.generate("What's 2+2?", tools=[calculator_tool])

# AbstractCore is NOT for this
def complex_agent():
    # 1. Plan a multi-step solution - NOT provided
    # 2. Execute steps with memory - NOT provided
    # 3. Reflect and re-plan - NOT provided
    # 4. Persist agent state - NOT provided
    pass
```
**Better alternatives:**

- AbstractAgent - built on AbstractCore
- LangGraph - agent orchestration
- AutoGPT - autonomous agents
## When to Choose AbstractCore
### ✅ Choose AbstractCore When You Need:

- **Reliable LLM Infrastructure**
  - Production-ready error handling and retry logic
  - Consistent interface across different providers
  - Built-in monitoring and observability
- **Provider Flexibility**
  - Easy switching between OpenAI, Anthropic, Ollama, etc.
  - Provider-agnostic code that runs anywhere
  - Local and cloud provider support
- **Universal Tool Calling**
  - Tools that work across ALL providers
  - Consistent tool execution regardless of native support
  - Event-driven tool control and monitoring
- **Centralized Configuration**
  - Single source of truth for all settings
  - Clear priority hierarchy (Explicit > App > Global > Default)
  - No repetitive parameter passing
- **Universal Media Handling**
  - Attach images, PDFs, Office docs, and data files
  - Same API across all providers
  - Automatic processing and formatting
- **Vision Capabilities**
  - Image analysis across all providers
  - Automatic resolution optimization
  - Vision fallback for text-only models
### ❌ Don't Choose AbstractCore When You Need:
- Full RAG Frameworks → Use LlamaIndex or LangChain
- Complex Agent Workflows → Use AbstractAgent or LangGraph
- Advanced Memory Systems → Use AbstractMemory or Mem0
- Prompt Template Management → Use Jinja2 or LangChain Prompts
- Model Training/Fine-tuning → Use Transformers or Axolotl
- Multi-Agent Systems → Use CrewAI or AutoGen
## Related Documentation

- **Getting Started**: 5-minute quick start guide
- **Centralized Configuration**: Global settings and defaults
- **Media Handling**: Universal file attachment
- **Vision Capabilities**: Image analysis with fallback
- **API Reference**: Complete Python API documentation
- **Tool Calling**: Universal tool system guide
- **Examples**: Real-world usage examples