Prerequisites & Setup Guide

Install AbstractCore and configure one or more providers. You can mix cloud + local providers in the same application.
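The same create_llm factory covers both; here is a minimal sketch mixing one cloud and one local provider (model names are placeholders taken from the sections below):

from abstractcore import create_llm

cloud_llm = create_llm("openai", model="gpt-5-mini")         # cloud
local_llm = create_llm("ollama", model="qwen3:4b-instruct")  # local

for llm in (cloud_llm, local_llm):
    print(llm.generate("Say hello.").content)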

Quick Decision Guide

Need the simplest cloud setup? → OpenAI (OPENAI_API_KEY)
Want Claude (vision) models? → Anthropic (ANTHROPIC_API_KEY)
Want many models behind one key? → OpenRouter (OPENROUTER_API_KEY)
Want free local models? → Ollama (runs locally)
Want a GUI local server? → LMStudio (OpenAI-compatible)
Running your own GPU server? → vLLM (OpenAI-compatible)
Have an OpenAI-compatible endpoint? → OpenAI-Compatible (bring your base_url)

Core Installation

Install AbstractCore and the provider(s) you plan to use:

# Minimal
pip install abstractcore

# Common cloud providers
pip install abstractcore[openai,anthropic]

# Common local providers
pip install abstractcore[ollama,mlx,lmstudio]

# Media handling (images, PDFs, Office docs)
pip install abstractcore[media]

# Optional: HTTP server (OpenAI-compatible REST API)
pip install abstractcore[server]

# Everything (recommended for development)
pip install abstractcore[all]
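To verify the install, confirm that the package imports and that pip can see it:

# Quick sanity check
python -c "import abstractcore"
pip show abstractcore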

Recommended: Centralized Configuration

AbstractCore can store defaults and API keys in ~/.abstractcore/config/abstractcore.json so your apps can run without repeating flags.

# Show current status and defaults
abstractcore --status

# Set a global default (provider/model)
abstractcore --set-global-default ollama/qwen3:4b-instruct

# App-specific defaults
abstractcore --set-app-default cli ollama qwen3:4b-instruct
abstractcore --set-app-default summarizer openai gpt-5-mini

# Store API keys in config (recommended)
abstractcore --set-api-key openai sk-your-key
abstractcore --set-api-key anthropic sk-ant-your-key
abstractcore --set-api-key openrouter sk-or-your-key
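Since the config is plain JSON at the path above, you can also inspect it from Python; a minimal sketch (the exact schema depends on your AbstractCore version):

from pathlib import Path
import json

config_path = Path.home() / ".abstractcore" / "config" / "abstractcore.json"
if config_path.exists():
    # Stored defaults and keys; treat the schema as version-dependent
    print(json.dumps(json.loads(config_path.read_text()), indent=2))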

Learn more: Centralized Configuration Guide

OpenAI Setup

Use OpenAI directly (fast setup, reliable models). You’ll need an API key.

1. Set Environment Variable

export OPENAI_API_KEY="sk-your-key-here"

2. (Optional) Override Base URL

Useful for proxies or gateways:

export OPENAI_BASE_URL="https://api.openai.com/v1"

3. Test Connection

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-5-mini")
print(llm.generate("Hello, world!").content)

Vision note: for image inputs, use a vision-capable model (e.g. gpt-4o).
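As a sketch, an image call might look like the following; the media= keyword follows the Media Handling guide and is an assumption here, as is the file path:

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o")
# media= per the Media Handling guide; "photo.jpg" is a placeholder
print(llm.generate("Describe this image.", media=["photo.jpg"]).content)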

Anthropic Setup

Claude models (including Haiku 4.5) support vision input. You’ll need an API key.

1. Set Environment Variable

export ANTHROPIC_API_KEY="your-key-here"

2. Test Connection

from abstractcore import create_llm

llm = create_llm("anthropic", model="claude-haiku-4-5", temperature=0.0)
print(llm.generate("Hello, world!").content)

Determinism note: Anthropic does not support seed; use temperature=0.0 for more consistent outputs.

OpenRouter Setup (OpenAI-Compatible API)

OpenRouter is an OpenAI-compatible aggregator API with multi-provider routing. You’ll need an API key.

1. Set Environment Variable

export OPENROUTER_API_KEY="sk-or-your-key-here"

2. (Optional) Identify Your App

OpenRouter recommends identifying your app via request headers (used for analytics and abuse prevention); set these variables:

export OPENROUTER_SITE_URL="https://www.abstractcore.ai"
export OPENROUTER_APP_NAME="AbstractCore"

3. Test Connection

from abstractcore import create_llm

llm = create_llm("openrouter", model="openai/gpt-4o-mini")
print(llm.generate("Hello from OpenRouter!").content)

Ollama Setup (Local)

Run open-source models locally (free, privacy-first).

1. Install + Start

# macOS
brew install ollama
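
# Linux (Ollama's documented install script)
curl -fsSL https://ollama.com/install.sh | sh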

# Start server (default: http://localhost:11434)
ollama serve

2. Pull a Model

ollama pull qwen3:4b-instruct
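
To confirm the model downloaded, list what is available locally:

ollama list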

3. Test Connection

from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b-instruct")
print(llm.generate("Hello from Ollama!").content)

4. (Optional) Point to a Remote Ollama

# Either of these is supported by AbstractCore's Ollama provider
export OLLAMA_BASE_URL="http://your-host:11434"
export OLLAMA_HOST="http://your-host:11434"

LMStudio Setup (Local OpenAI-Compatible)

LMStudio runs local models behind an OpenAI-compatible API (default http://localhost:1234/v1).

1. Start the LMStudio Local Server

  1. Download LMStudio from lmstudio.ai
  2. Download a model in the UI (recommended: qwen/qwen3-4b-2507)
  3. Start the “Local Server” (note the base URL)

2. (Optional) Set Base URL

export LMSTUDIO_BASE_URL="http://localhost:1234/v1"

3. Test Connection

from abstractcore import create_llm

llm = create_llm("lmstudio", model="qwen/qwen3-4b-2507")
print(llm.generate("Hello from LMStudio!").content)

OpenAI-Compatible Setup (Generic)

Use any OpenAI-compatible endpoint (llama.cpp, LocalAI, custom proxies, etc.) by providing a base_url.

1. Configure Base URL

export OPENAI_COMPATIBLE_BASE_URL="http://127.0.0.1:1234/v1"
# Optional (if your endpoint requires auth)
export OPENAI_COMPATIBLE_API_KEY="your-key"

2. Test Connection

from abstractcore import create_llm

llm = create_llm("openai-compatible", model="your-model-name")
print(llm.generate("Hello from an OpenAI-compatible server!").content)

vLLM Setup (Local/Hosted OpenAI-Compatible)

vLLM exposes an OpenAI-compatible API (default http://localhost:8000/v1) and supports advanced features (guided decoding, Multi-LoRA, beam search).

1. Run a vLLM Server

# Example (adjust to your GPU + model)
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --host 0.0.0.0 --port 8000

2. Configure Base URL

export VLLM_BASE_URL="http://localhost:8000/v1"
# Optional if your deployment uses auth
export VLLM_API_KEY="your-key"

3. Test Connection

from abstractcore import create_llm

llm = create_llm("vllm", model="Qwen/Qwen3-Coder-30B-A3B-Instruct")
print(llm.generate("Hello from vLLM!").content)

MLX Setup (Apple Silicon)

MLX provides optimized local inference on Apple Silicon (M1/M2/M3/M4).

pip install abstractcore[mlx]

from abstractcore import create_llm

llm = create_llm("mlx", model="mlx-community/Qwen3-4B-4bit")
print(llm.generate("Hello from MLX!").content)

HuggingFace Setup

Run open-source models via HuggingFace tooling.

pip install abstractcore[huggingface]

# Optional: for private/gated models
export HUGGINGFACE_API_TOKEN="your-token-here"

from abstractcore import create_llm

llm = create_llm("huggingface", model="unsloth/Qwen3-4B-Instruct-2507-GGUF")
print(llm.generate("Hello from HuggingFace!").content)

Next Steps

Getting Started: 5-minute setup + first LLM call
Centralized Configuration: defaults + API keys in one place
Media Handling: attach images, PDFs, Office docs
Tool Calling: universal tools + syntax rewriting
HTTP Server: OpenAI-compatible REST API