Getting Started with AbstractCore
This guide will get you up and running with AbstractCore in 5 minutes. You'll learn how to install it, make your first LLM call, and explore the key features.
Prerequisites
- Python 3.9 or higher
- pip package manager
- (Optional) API keys for cloud providers
Installation
Basic Installation
pip install abstractcore
# For media handling (images, PDFs, Office docs)
pip install abstractcore[media]
Provider-Specific Installation
Choose based on which LLM providers you want to use:
# OpenAI
pip install abstractcore[openai]
# Anthropic
pip install abstractcore[anthropic]
# Ollama (local models)
pip install abstractcore[ollama]
# LMStudio (local models)
pip install abstractcore[lmstudio]
# MLX (Apple Silicon)
pip install abstractcore[mlx]
# HuggingFace
pip install abstractcore[huggingface]
# Server support
pip install abstractcore[server]
# Embeddings
pip install abstractcore[embeddings]
# Everything
pip install abstractcore[all]
Quick Setup Examples
Cloud Provider
# OpenAI setup
pip install abstractcore[openai]
export OPENAI_API_KEY="your-key-here"
Local Models
# Ollama setup
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:4b-instruct-2507-q4_K_M
pip install abstractcore[ollama]
Your First Program
Create a file called first_llm.py:
from abstractcore import create_llm
# Choose your provider (uncomment one):
llm = create_llm("openai", model="gpt-4o-mini") # Cloud
# llm = create_llm("anthropic", model="claude-3-5-haiku-latest") # Cloud
# llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M") # Local
# Generate your first response
response = llm.generate("What is the capital of France?")
print(response.content)
Run it:
python first_llm.py
# Output: The capital of France is Paris.
🎉 Congratulations!
You've made your first AbstractCore LLM call.
Recommended: Centralized Configuration
Configure AbstractCore once to avoid specifying provider/model every time:
# Check current status
abstractcore --status
# Set global default
abstractcore --set-global-default ollama/llama3:8b
# Set app-specific defaults
abstractcore --set-app-default cli ollama qwen3:4b
# Configure API keys (stored in config file)
abstractcore --set-api-key openai sk-your-key-here
Learn more: Centralized Configuration Guide
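Once defaults are set, application code no longer needs to hard-code a provider. Below is a minimal sketch, assuming create_llm() without arguments resolves to the configured global default (check the Centralized Configuration Guide for the exact resolution rules in your version):
from abstractcore import create_llm
# Assumes a global default (e.g. ollama/llama3:8b) was set with --set-global-default
llm = create_llm()
print(llm.generate("Hello!").content)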
Core Concepts (5-Minute Tour)
1. Provider Discovery (Centralized Registry)
AbstractCore provides a centralized registry to discover all available providers and models:
from abstractcore.providers import get_all_providers_with_models
# Get comprehensive information about all providers with available models
providers = get_all_providers_with_models()
for provider in providers:
    print(f"{provider['display_name']}: {provider['model_count']} models")
    print(f"Features: {', '.join(provider['supported_features'])}")
    print(f"Local: {provider['local_provider']}")
    print(f"Auth Required: {provider['authentication_required']}")
    print()
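Because the registry returns plain dictionaries, you can filter it with ordinary Python. For example, to list only local providers using the local_provider flag shown above:
from abstractcore.providers import get_all_providers_with_models
# Keep only providers that run locally (no API key required)
local_providers = [p for p in get_all_providers_with_models() if p['local_provider']]
for provider in local_providers:
    print(f"{provider['display_name']}: {provider['model_count']} models")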
2. Providers and Models
AbstractCore supports multiple providers with the same interface:
from abstractcore import create_llm
# Same interface, different providers
openai_llm = create_llm("openai", model="gpt-4o-mini")
claude_llm = create_llm("anthropic", model="claude-3-5-haiku-latest")
local_llm = create_llm("ollama", model="qwen3-coder:30b")
question = "Explain Python list comprehensions"
# All work the same way
for name, llm in [("OpenAI", openai_llm), ("Claude", claude_llm), ("Ollama", local_llm)]:
    response = llm.generate(question)
    print(f"{name}: {response.content[:50]}...")
3. Structured Output (Game Changer)
Instead of parsing strings, get typed objects directly:
from pydantic import BaseModel
from abstractcore import create_llm
class MovieReview(BaseModel):
    title: str
    rating: int  # 1-5 stars
    summary: str
llm = create_llm("openai", model="gpt-4o-mini")
# Get structured data automatically
review = llm.generate(
    "Review the movie Inception",
    response_model=MovieReview
)
print(f"Title: {review.title}")
print(f"Rating: {review.rating}/5")
print(f"Summary: {review.summary}")
No more string parsing! AbstractCore handles JSON validation and retries automatically.
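The same response_model mechanism works with richer schemas. Here is a sketch using a nested Pydantic model (the Actor and CastReview classes are illustrative, not part of AbstractCore, and assume nested models are supported as with most structured-output backends):
from pydantic import BaseModel
from abstractcore import create_llm
class Actor(BaseModel):
    name: str
    role: str
class CastReview(BaseModel):
    title: str
    rating: int  # 1-5 stars
    cast: list[Actor]
llm = create_llm("openai", model="gpt-4o-mini")
review = llm.generate(
    "Review the movie Inception and list two lead actors",
    response_model=CastReview
)
for actor in review.cast:
    print(f"{actor.name} as {actor.role}")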
4. Tool Calling (LLM with Superpowers)
Let your LLM call functions with the @tool decorator:
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
    """Get current weather for a specified city."""
    # In a real scenario, you'd call an actual weather API
    return f"The weather in {city} is sunny, 72°F"

@tool
def calculate(expression: str) -> float:
    """Perform a mathematical calculation."""
    try:
        result = eval(expression)  # Simplified for demo - don't use eval in production!
        return result
    except Exception:
        return float('nan')
# Instantiate the LLM
llm = create_llm("openai", model="gpt-4o-mini")
# Automatically extract tool definitions from decorated functions
response = llm.generate(
    "What's the weather in Tokyo and what's 15 * 23?",
    tools=[get_weather, calculate]  # Pass tool functions directly
)
print(response.content)
# Output: The weather in Tokyo is sunny, 72°F and 15 * 23 = 345.
5. Streaming (Real-Time Responses)
Show responses as they're generated:
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
print("AI: ", end="", flush=True)
for chunk in llm.generate("Write a haiku about programming", stream=True):
    print(chunk.content, end="", flush=True)
print("\n")
# Output appears word by word in real-time
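If you also need the full text after streaming (for logging or further processing), accumulate the chunks yourself; this is plain Python over the chunk.content values shown above:
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
parts = []
for chunk in llm.generate("Write a haiku about programming", stream=True):
    print(chunk.content, end="", flush=True)  # display in real time
    parts.append(chunk.content)  # keep each chunk
full_text = "".join(parts)
print(f"\n\nReceived {len(full_text)} characters in total")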
6. Conversations with Memory
Maintain context across multiple turns:
from abstractcore import create_llm, BasicSession
llm = create_llm("openai", model="gpt-4o-mini")
session = BasicSession(provider=llm, system_prompt="You are a helpful coding tutor.")
# First exchange
response1 = session.generate("My name is Alex and I'm learning Python.")
print("AI:", response1.content)
# Second exchange - remembers context
response2 = session.generate("What's my name and what am I learning?")
print("AI:", response2.content)
# Output: Your name is Alex and you're learning Python.
7. Media Handling (Attach Any File)
Attach images, PDFs, and Office docs with a simple syntax:
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o")
# Attach any file type with media parameter
response = llm.generate(
    "What's in this image and document?",
    media=["photo.jpg", "report.pdf"]
)
# Or from CLI with @filename syntax
# python -m abstractcore.utils.cli --prompt "Analyze @report.pdf"
Supported: Images (PNG, JPEG, GIF, WEBP), Documents (PDF, DOCX, XLSX, PPTX), Data (CSV, TSV, TXT, MD, JSON)
Learn more: Media Handling System Guide
8. Vision Capabilities (Image Analysis)
Analyze images across all providers with the same code:
from abstractcore import create_llm
# Works with any vision-capable provider
llm = create_llm("openai", model="gpt-4o")
response = llm.generate(
    "What objects do you see in this image?",
    media=["photo.jpg"]
)
# Same code works with local models
llm = create_llm("ollama", model="qwen2.5vl:7b")
response = llm.generate("Describe this image", media=["scene.jpg"])
Vision Fallback: Text-only models can process images through transparent captioning. Configure once: abstractcore --download-vision-model
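With the fallback configured, the calling code stays the same: you pass images to a text-only model exactly as you would to a vision model. A minimal sketch under that assumption (the model name is illustrative):
from abstractcore import create_llm
# Text-only model; assumes the vision fallback captioner was set up with
# abstractcore --download-vision-model
llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M")
response = llm.generate(
    "Summarize what this image shows",
    media=["scene.jpg"]
)
print(response.content)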
Learn more: Vision Capabilities Guide