Getting Started with AbstractCore

This guide will get you up and running with AbstractCore in 5 minutes. You'll learn how to install it, make your first LLM call, and explore the key features.

Prerequisites

  • Python 3.9 or higher
  • pip package manager
  • (Optional) API keys for cloud providers

Installation

Basic Installation

pip install abstractcore

# For media handling (images, PDFs, Office docs)
pip install "abstractcore[media]"

Provider-Specific Installation

Choose based on which LLM providers you want to use:

# Providers (install only what you use; zsh: keep quotes)
pip install "abstractcore[openai]"       # OpenAI SDK
pip install "abstractcore[anthropic]"    # Anthropic SDK
pip install "abstractcore[huggingface]"  # Transformers / torch (heavy)
pip install "abstractcore[mlx]"          # Apple Silicon local inference (heavy)
pip install "abstractcore[vllm]"         # GPU server integration (heavy)

# Optional features
pip install "abstractcore[media]"        # images, PDFs, Office docs
pip install "abstractcore[tools]"        # built-in tools (web_search, skim_websearch, skim_url, fetch_url, read_file, ...)
pip install "abstractcore[server]"       # OpenAI-compatible HTTP gateway
pip install "abstractcore[embeddings]"   # vector embeddings + RAG helpers
pip install "abstractcore[compression]"  # glyph visual-text compression

# Note: providers like ollama, lmstudio, openrouter, and openai-compatible
# work with the core install — just set the server base_url.
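For example, to use an OpenAI-compatible server such as LM Studio, you point the client at its endpoint. A minimal sketch, assuming base_url is the keyword for the server address and that LM Studio is serving on its default port:

from abstractcore import create_llm

# Sketch only: point the openai-compatible provider at a local server.
# The base_url keyword and default port are assumptions; adjust for your setup.
llm = create_llm(
    "openai-compatible",
    model="qwen3-4b-instruct",            # whatever model your server has loaded
    base_url="http://localhost:1234/v1",
)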

Optional capability plugins (voice/audio/vision)

AbstractCore keeps the default install small. Deterministic modality APIs (STT/TTS, generative vision) live in optional plugins:

pip install abstractvoice   # enables llm.voice / llm.audio (TTS/STT)
pip install abstractvision  # enables llm.vision (generative vision via an OpenAI-compatible images endpoint)

See Capabilities and Server.

Quick Setup Examples

Cloud Provider

# OpenAI setup ("abstractcore[openai]" already includes the core package)
pip install "abstractcore[openai]"
export OPENAI_API_KEY="your-key-here"

Local Models

# Ollama setup
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:4b-instruct-2507-q4_K_M
pip install abstractcore

Your First Program

Create a file called first_llm.py:

from abstractcore import create_llm

# Choose your provider (uncomment one):
llm = create_llm("openai", model="gpt-4o-mini", temperature=0.7)        # Cloud
# llm = create_llm("anthropic", model="claude-haiku-4-5", temperature=0.0)  # Cloud (use temp=0 for consistency)
# llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M", seed=42)   # Local

# Generate your first response
response = llm.generate("What is the capital of France?")
print(response.content)

# Best-effort deterministic outputs (actual determinism depends on provider/model)
deterministic_llm = create_llm("openai", model="gpt-4o-mini", temperature=0.0, seed=42)
response1 = deterministic_llm.generate("Write exactly 3 words about AI")
response2 = deterministic_llm.generate("Write exactly 3 words about AI")
print(response1.content == response2.content)  # often True, but not guaranteed

Run it:

python first_llm.py
# Output: The capital of France is Paris.

✓ Complete

Your first AbstractCore LLM call is working.

Recommended: Centralized Configuration

Configure AbstractCore once to avoid specifying provider/model every time:

# Check current status
abstractcore --status

# Set global default
abstractcore --set-global-default ollama/qwen3:4b-instruct

# Set app-specific defaults
abstractcore --set-app-default cli ollama qwen3:4b-instruct

# Configure API keys (stored in config file)
abstractcore --set-api-key openai sk-your-key-here
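
Once a global default is set, your scripts can omit the provider and model entirely. A minimal sketch, assuming create_llm() falls back to the configured default when called with no arguments:

from abstractcore import create_llm

# Assumption: picks up the global default set via --set-global-default
llm = create_llm()
print(llm.generate("Hello!").content)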

Learn more: Centralized Configuration Guide

Core Concepts (5-Minute Tour)

1. Provider Discovery (Centralized Registry)

AbstractCore provides a centralized registry to discover all available providers and models:

from abstractcore.providers import get_all_providers_with_models

# Get comprehensive information about all providers with available models
providers = get_all_providers_with_models()
for provider in providers:
    print(f"{provider['display_name']}: {provider['model_count']} models")
    print(f"Features: {', '.join(provider['supported_features'])}")
    print(f"Local: {provider['local_provider']}")
    print(f"Auth Required: {provider['authentication_required']}")
    print()
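The returned dictionaries make it easy to pick a provider programmatically. For example, using only the keys shown above, you can list the local providers that need no API key:

from abstractcore.providers import get_all_providers_with_models

# Keep only providers that run locally and require no credentials
local_free = [
    p for p in get_all_providers_with_models()
    if p["local_provider"] and not p["authentication_required"]
]
print([p["display_name"] for p in local_free])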

2. Providers and Models

AbstractCore supports multiple providers with the same interface:

from abstractcore import create_llm

# Same interface, different providers
openai_llm = create_llm("openai", model="gpt-4o-mini", seed=42)
claude_llm = create_llm("anthropic", model="claude-haiku-4-5", temperature=0.0)  # Use temp=0 for consistency
local_llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M", seed=42)

question = "Explain Python list comprehensions"

# All work the same way
for name, llm in [("OpenAI", openai_llm), ("Claude", claude_llm), ("Ollama", local_llm)]:
    response = llm.generate(question)
    print(f"{name}: {response.content[:50]}...")

3. Structured Output (Game Changer)

Instead of parsing strings, get typed objects directly:

from pydantic import BaseModel
from abstractcore import create_llm

class MovieReview(BaseModel):
    title: str
    rating: int  # 1-5 stars
    summary: str

llm = create_llm("openai", model="gpt-4o-mini")

# Get structured data automatically
review = llm.generate(
    "Review the movie Inception",
    response_model=MovieReview
)

print(f"Title: {review.title}")
print(f"Rating: {review.rating}/5")
print(f"Summary: {review.summary}")

No more string parsing! AbstractCore handles JSON validation and retries automatically.
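
The same response_model pattern should extend to nested models; here is a short sketch (the Actor/CastList models are illustrative, not part of AbstractCore):

from typing import List
from pydantic import BaseModel
from abstractcore import create_llm

class Actor(BaseModel):
    name: str
    role: str

class CastList(BaseModel):
    movie: str
    actors: List[Actor]

llm = create_llm("openai", model="gpt-4o-mini")
cast = llm.generate(
    "List three main actors in Inception and their roles",
    response_model=CastList,
)
for actor in cast.actors:
    print(f"{actor.name} as {actor.role}")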

4. Tool Calling (LLM with Superpowers)

Let your LLM call functions with the @tool decorator:

from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    """Get current weather for a specified city."""
    # In a real scenario, you'd call an actual weather API
    return f"The weather in {city} is sunny, 72°F"

@tool
def calculate(expression: str) -> float:
    """Perform a mathematical calculation."""
    try:
        result = eval(expression)  # Simplified for demo - don't use eval in production!
        return result
    except Exception:
        return float('nan')

# Instantiate the LLM
llm = create_llm("openai", model="gpt-4o-mini")

# Automatically extract tool definitions from decorated functions
response = llm.generate(
    "What's the weather in Tokyo and what's 15 * 23?",
    tools=[get_weather, calculate]  # Pass tool functions directly
)

print(response.content)
# Output: The weather in Tokyo is sunny, 72°F and 15 * 23 = 345.
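
If you want a calculator tool without eval, one option is to whitelist arithmetic operators using Python's standard-library ast module. This is plain Python, not an AbstractCore API:

import ast
import operator

from abstractcore import tool

# Allowed operators for simple arithmetic (+, -, *, /, **, unary minus)
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def _eval_node(node):
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.operand))
    raise ValueError("unsupported expression")

@tool
def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression safely."""
    return float(_eval_node(ast.parse(expression, mode="eval").body))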

Built-in tools (optional)

If you want a ready-made toolset for agentic scripts, install:

pip install "abstractcore[tools]"

  • skim_websearch vs web_search: compact/filtered links vs fuller search results
  • skim_url vs fetch_url: fast URL triage (small output) vs full fetch + parsing for text-first types (HTML/JSON/text)
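
These are regular tools, passed via the same tools= parameter shown above. A sketch; the import path abstractcore.tools is an assumption, so check the Tool Calling guide for the real one:

from abstractcore import create_llm
from abstractcore.tools import web_search, fetch_url  # assumed import path

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate(
    "Find the AbstractCore homepage and summarize it.",
    tools=[web_search, fetch_url],
)
print(response.content)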

See Tool Calling for a recommended workflow and the full built-in tool list.

5. Streaming (Real-Time Responses)

Show responses as they're generated:

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

print("AI: ", end="", flush=True)
for chunk in llm.generate("Write a haiku about programming", stream=True):
    print(chunk.content, end="", flush=True)
print("\n")
# Output appears word by word in real-time
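
If you also need the complete text after streaming, collect the chunks as they arrive:

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# Print tokens live while also assembling the full response string
parts = []
for chunk in llm.generate("Write a haiku about programming", stream=True):
    print(chunk.content, end="", flush=True)
    parts.append(chunk.content)
full_text = "".join(parts)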

6. Conversations with Memory

Maintain context across multiple turns:

from abstractcore import create_llm, BasicSession

llm = create_llm("openai", model="gpt-4o-mini")
session = BasicSession(provider=llm, system_prompt="You are a helpful coding tutor.")

# First exchange
response1 = session.generate("My name is Alex and I'm learning Python.")
print("AI:", response1.content)

# Second exchange - remembers context
response2 = session.generate("What's my name and what am I learning?")
print("AI:", response2.content)
# Output: Your name is Alex and you're learning Python.
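
Because the session tracks history for you, a minimal interactive chat is just a loop around session.generate() (built only from the API shown above):

from abstractcore import create_llm, BasicSession

llm = create_llm("openai", model="gpt-4o-mini")
session = BasicSession(provider=llm, system_prompt="You are a helpful coding tutor.")

# Simple REPL: context carries over between turns automatically
while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    print("AI:", session.generate(user_input).content)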

7. Media Handling (Attach Any File)

Attach images, documents, and (policy-driven) audio/video with simple syntax:

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o")

# Attach any file type with media parameter
response = llm.generate(
    "What's in this image and document?",
    media=["photo.jpg", "report.pdf"]
)

# Audio/video inputs are policy-driven (no silent semantic changes)
# For speech audio, use STT fallback (requires: pip install abstractvoice)
response = llm.generate(
    "Summarize this call.",
    media=["call.wav"],
    audio_policy="speech_to_text",
)

# Or from CLI with @filename syntax
# abstractcore-chat --prompt "Analyze @report.pdf"

Supported: Images (PNG, JPEG, GIF, WEBP), Documents (PDF, DOCX, XLSX, PPTX), Data (CSV, TSV, TXT, MD, JSON), plus audio/video via explicit policies.

Learn more: Media Handling System Guide and Audio & Voice

8. Vision Capabilities (Image Analysis)

Analyze images across all providers with the same code:

from abstractcore import create_llm

# Works with any vision-capable provider
llm = create_llm("openai", model="gpt-4o")

response = llm.generate(
    "What objects do you see in this image?",
    media=["photo.jpg"]
)

# Same code works with local models
llm = create_llm("ollama", model="qwen2.5vl:7b")
response = llm.generate("Describe this image", media=["scene.jpg"])

Vision Fallback: Text-only models can process images through transparent captioning. Configure once: abstractcore --download-vision-model

Learn more: Vision Capabilities Guide

Next Steps

Centralized Configuration

Set up once, use everywhere

Configure AbstractCore →

Media Handling

Universal file attachment

Learn Media Handling →

Vision Capabilities

Image analysis everywhere

Explore Vision →

Audio & Voice

STT fallback + optional TTS

Explore Audio →

API Reference

Complete Python API documentation

View API Reference →

Examples

Real-world code examples

View Examples →

Server Guide

HTTP API server setup

View Server Guide →