Prerequisites & Setup Guide
Complete setup guide for AbstractCore with all LLM providers. Choose the provider(s) that best fit your needs - you can use multiple providers in the same application.
Quick Decision Guide
- OpenAI Setup (requires API key, costs ~$0.001-0.01 per request)
- Anthropic Setup (requires API key, strong reasoning models)
- Ollama Setup (free, runs entirely on your machine)
- MLX Setup (optimized for Apple Silicon M1/M2/M3/M4 chips)
- LMStudio Setup (easiest local setup, with a GUI)
- HuggingFace Setup (thousands of open-source models)
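Because every provider is created through the same create_llm interface, mixing providers in one application takes one line per provider. A minimal sketch, using model names from the sections below:

from abstractcore import create_llm

# A free local model for cheap drafts...
local = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M")
# ...and a cloud model for polished final output.
cloud = create_llm("openai", model="gpt-4o-mini")

draft = local.generate("Draft a one-sentence summary of what an LLM is.")
final = cloud.generate(f"Improve this sentence: {draft.content}")
print(final.content)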
Core Installation
First, install AbstractCore with your preferred providers:
# Option 1: Start with cloud providers (fastest setup)
pip install abstractcore[openai,anthropic]
# Option 2: Local models only (free, no API keys needed)
pip install abstractcore[ollama,mlx]
# Option 3: Everything (recommended for development)
pip install abstractcore[all]
# Option 4: Minimal install (add providers as needed)
pip install abstractcore
# For media handling (images, PDFs, Office docs)
pip install abstractcore[media]
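To confirm the core package installed correctly before any provider setup, a plain import is enough:

# Smoke test: a successful import means the core package is installed.
from abstractcore import create_llm
print("AbstractCore installed, create_llm available:", callable(create_llm))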
Recommended: Centralized Configuration
After installation, configure AbstractCore once to avoid repetitive provider/model specifications:
1. Check Current Status
abstractcore --status
2. Set Global Defaults (One-Time Setup)
# Set global fallback model
abstractcore --set-global-default ollama/llama3:8b
# Or use cloud provider as default
abstractcore --set-global-default openai/gpt-4o-mini
# Set app-specific defaults
abstractcore --set-app-default cli ollama qwen3:4b
abstractcore --set-app-default summarizer openai gpt-4o-mini
3. Configure API Keys (Optional but Recommended)
# Store API keys in the config file instead of environment variables
abstractcore --set-api-key openai sk-your-key-here
abstractcore --set-api-key anthropic your-anthropic-key
4. Configure Vision Fallback (Optional)
Enable text-only models to process images:
# Download local vision model (recommended)
abstractcore --download-vision-model
# Or use existing Ollama model
abstractcore --set-vision-caption qwen2.5vl:7b
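Once a caption model is configured, the idea is that a text-only chat model can still accept image attachments: the caption model describes the image first. The media keyword below is hypothetical, shown only to illustrate the flow; see the Media Handling guide for the actual attachment syntax:

from abstractcore import create_llm

# A text-only model; the configured vision model supplies image captions.
llm = create_llm("ollama", model="granite3.3:2b")
# NOTE: `media=` is a hypothetical parameter name used for illustration.
response = llm.generate("What is in this photo?", media=["photo.jpg"])
print(response.content)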
Benefits: you never have to specify a provider/model again, apps behave consistently, and configurations can differ per environment.
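In practice that means provider and model can be omitted entirely. This sketch assumes a zero-argument create_llm() resolves the configured global default:

from abstractcore import create_llm

# Assumes `abstractcore --set-global-default ...` was run; provider and
# model are then resolved from the central configuration.
llm = create_llm()
print(llm.generate("Hello from the default model!").content)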
Learn more: Centralized Configuration Guide
OpenAI Setup
OpenAI provides fast, reliable hosted models, including the GPT-4o family used in the examples below.
1. Get API Key
- Visit OpenAI API Keys
- Create a new API key
- Copy the key (it starts with sk-)
2. Set Environment Variable
# Linux/macOS
export OPENAI_API_KEY="sk-your-key-here"
# Windows
set OPENAI_API_KEY=sk-your-key-here
3. Test Connection
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("Hello, world!")
print(response.content)
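A missing or mistyped key otherwise only surfaces when the first request fails; a small guard in plain Python gives a clearer error up front:

import os
from abstractcore import create_llm

# Fail fast with a readable message instead of a request-time error.
if not os.environ.get("OPENAI_API_KEY", "").startswith("sk-"):
    raise RuntimeError("OPENAI_API_KEY is missing or malformed; see step 2 above.")

llm = create_llm("openai", model="gpt-4o-mini")
print(llm.generate("Hello, world!").content)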
Anthropic Setup
Anthropic's Claude models are excellent for reasoning and analysis tasks.
1. Get API Key
- Visit Anthropic Console
- Create an API key
- Copy the key
2. Set Environment Variable
# Linux/macOS
export ANTHROPIC_API_KEY="your-key-here"
# Windows
set ANTHROPIC_API_KEY=your-key-here
3. Test Connection
from abstractcore import create_llm
llm = create_llm("anthropic", model="claude-3-5-haiku-latest")
response = llm.generate("Hello, world!")
print(response.content)
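A practical pattern once a local provider is also installed: use Claude when a key is configured and quietly fall back to a local model otherwise. A sketch built only from the calls shown in this guide:

import os
from abstractcore import create_llm

# Prefer the cloud model when a key is present, else use a local model.
if os.environ.get("ANTHROPIC_API_KEY"):
    llm = create_llm("anthropic", model="claude-3-5-haiku-latest")
else:
    llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M")

print(llm.generate("Hello, world!").content)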
Ollama Setup
Ollama provides free local models that run entirely on your machine.
1. Install Ollama
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.ai/install.sh | sh
# Windows
# Download from https://ollama.ai/download
2. Start Ollama Service
ollama serve
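Ollama serves on http://localhost:11434 by default. Before wiring it into AbstractCore, you can verify the service is reachable with the standard library alone:

import urllib.request

# The root endpoint replies with a short status string when Ollama is up.
try:
    with urllib.request.urlopen("http://localhost:11434", timeout=3) as resp:
        print(resp.read().decode())  # typically "Ollama is running"
except OSError:
    print("Ollama is not reachable; is 'ollama serve' still running?")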
3. Download a Model
# Recommended models
ollama pull qwen3:4b-instruct-2507-q4_K_M # Fast, good quality
ollama pull qwen3-coder:30b # Best for coding
ollama pull granite3.3:2b # Lightweight
4. Test Connection
from abstractcore import create_llm
llm = create_llm("ollama", model="qwen3:4b-instruct-2507-q4_K_M")
response = llm.generate("Hello, world!")
print(response.content)
MLX Setup (Apple Silicon)
MLX provides optimized models for Apple Silicon Macs (M1/M2/M3/M4).
1. Install MLX
pip install abstractcore[mlx]
2. Download a Model
# Models are downloaded automatically on first use
from abstractcore import create_llm
llm = create_llm("mlx", model="mlx-community/Qwen2.5-Coder-7B-Instruct-4bit")
response = llm.generate("Hello, world!")
print(response.content)
LMStudio Setup
LMStudio provides a user-friendly GUI for running local models.
1. Install LMStudio
- Download from lmstudio.ai
- Install and launch the application
- Download a model through the GUI
2. Start Local Server
- In LMStudio, go to "Local Server" tab
- Select your model
- Click "Start Server"
- Note the server URL (usually http://localhost:1234)
3. Test Connection
from abstractcore import create_llm
llm = create_llm("lmstudio", model="your-model-name")
response = llm.generate("Hello, world!")
print(response.content)
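If your LMStudio server is not on the default http://localhost:1234, AbstractCore presumably needs the address explicitly. The base_url keyword below is an assumption; check the API Reference for the exact parameter name:

from abstractcore import create_llm

# NOTE: `base_url` is an assumed parameter name, shown for illustration.
llm = create_llm("lmstudio", model="your-model-name",
                 base_url="http://localhost:1234")
print(llm.generate("Hello, world!").content)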
HuggingFace Setup
Access thousands of open-source models through HuggingFace.
1. Install Dependencies
pip install abstractcore[huggingface]
2. Optional: Set HF Token
# For private/gated models
export HUGGINGFACE_API_TOKEN="your-token-here"
3. Test Connection
from abstractcore import create_llm
llm = create_llm("huggingface", model="microsoft/DialoGPT-medium")
response = llm.generate("Hello, world!")
print(response.content)
Next Steps
- Centralized Configuration: set up once, use everywhere with global defaults and preferences
- Getting Started: learn the basics and create your first LLM application
- Media Handling: attach images, PDFs, and Office docs with simple syntax
- Vision Capabilities: image analysis with vision fallback for text-only models
- Tool Calling: universal tool support across all providers
- Examples: real-world usage examples and patterns
- API Reference: complete API documentation