Troubleshooting
Fix common installation, provider, tool-calling, and server issues in minutes.
Quick Diagnosis
Run these checks first:
# Python version (should be 3.9+)
python --version
# Confirm package is installed
pip show abstractcore
# Sanity-check import
python -c "from abstractcore import create_llm; print('✓ import ok')"
# Server health (only if you run the server)
curl -sS http://localhost:8000/health
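If the curl check fails, the same probe can be scripted. A minimal sketch using only the standard library (the /health path and port come from the command above; the helper name is ours, not part of AbstractCore):

```python
import json
import urllib.error
import urllib.request

def check_health(url="http://localhost:8000/health", timeout=2.0):
    """Return the parsed /health payload, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (urllib.error.URLError, OSError, ValueError):
        return None
```

A None result means the server is down or listening on a different port; sort that out before debugging anything else.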
Installation & Provider Extras
The default install is intentionally lightweight. Install only the extras you need (in zsh, keep the quotes around the bracketed extras).
Local OpenAI-compatible servers (Ollama, LM Studio, custom gateways) work with the core install — you just set base_url.
# Core
pip install abstractcore
# Providers (only if you use them)
pip install "abstractcore[openai]"
pip install "abstractcore[anthropic]"
pip install "abstractcore[huggingface]" # heavy (torch/transformers)
pip install "abstractcore[mlx]" # Apple Silicon (heavy)
pip install "abstractcore[vllm]" # GPU server integration (heavy)
# Optional features
pip install "abstractcore[media]" # PDFs / Office docs / images
pip install "abstractcore[tools]" # web_search, fetch_url (needed for deepsearch)
pip install "abstractcore[server]" # OpenAI-compatible HTTP gateway
Note: ollama, lmstudio, openrouter, and openai-compatible do not require a provider extra.
Use pip install abstractcore and configure the relevant API key / base URL.
Authentication Errors
Make sure the right API-key environment variable is set (or store the key via the config CLI).
# Environment variables
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENROUTER_API_KEY="sk-or-..."
# Or store keys in AbstractCore config (recommended)
abstractcore --set-api-key openai sk-...
abstractcore --set-api-key anthropic sk-ant-...
abstractcore --set-api-key openrouter sk-or-...
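A quick way to see which of these variables are actually set in the current environment (the variable names come from the list above; the helper itself is ours, not part of AbstractCore):

```python
import os

API_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def missing_api_keys(providers=API_KEY_VARS):
    """Return provider names whose API-key variable is unset or empty."""
    return [p for p in providers if not os.environ.get(API_KEY_VARS[p])]
```

Note this only checks environment variables; keys stored via abstractcore --set-api-key live in the config file instead and will not show up here.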
Model Not Found (Local Providers)
Ollama
# Start the Ollama server (if it isn't already running)
ollama serve
# See installed models
ollama list
# Pull a model if missing
ollama pull qwen3:4b-instruct-2507-q4_K_M
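You can also query Ollama's HTTP API directly: its /api/tags endpoint lists installed models. A small sketch (11434 is Ollama's default port; adjust if you changed it):

```python
import json
import urllib.request

def ollama_models(base_url="http://localhost:11434"):
    """Names of locally installed models, via Ollama's /api/tags endpoint."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
        data = json.loads(resp.read())
    return [m["name"] for m in data.get("models", [])]
```

If the model you ask AbstractCore for is not in this list, pull it as shown above.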
LM Studio
Ensure the server is enabled in the LM Studio UI, and include /v1 in the base URL.
export LMSTUDIO_BASE_URL="http://localhost:1234/v1"
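Since the missing /v1 suffix is the usual culprit, it is worth checking for it in code. A minimal sketch (the env var name and default port come from this section):

```python
import os

# Resolve the LM Studio base URL, falling back to the default port.
base_url = os.environ.get("LMSTUDIO_BASE_URL", "http://localhost:1234/v1")

# The most common mistake is omitting the /v1 suffix.
if not base_url.rstrip("/").endswith("/v1"):
    raise ValueError(f"LM Studio base URL should end in /v1, got: {base_url}")
```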
Tool Calls Not Working
By default, AbstractCore returns tool calls in response.tool_calls (passthrough mode).
Your host/runtime decides how to execute them. If you expected tools to run automatically, check your integration.
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])
print(resp.content)
print(resp.tool_calls)
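If resp.tool_calls is populated but nothing happens, remember that executing the calls is your code's job in passthrough mode. A minimal dispatch loop, sketched under the assumption that each entry behaves like a dict with "name" and "arguments" keys (inspect one from your AbstractCore version before relying on this shape):

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute each requested call against a name -> function registry."""
    results = []
    for call in tool_calls:
        fn = registry.get(call["name"])
        if fn is None:
            results.append(f"unknown tool: {call['name']}")
            continue
        args = call["arguments"]
        if isinstance(args, str):  # some providers return arguments as a JSON string
            args = json.loads(args)
        results.append(fn(**args))
    return results

def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

calls = [{"name": "get_weather", "arguments": {"city": "Paris"}}]
print(run_tool_calls(calls, {"get_weather": get_weather}))
# -> ['Paris: 22°C and sunny']
```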
Server Won't Start
Port conflicts are the most common cause.
# macOS/Linux: find what's using port 8000
lsof -i :8000
# Force-kill the process (be careful; try a plain kill first)
kill -9 $(lsof -t -i:8000)
# Start on a different port
uvicorn abstractcore.server.app:app --port 3000
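The lsof check can be scripted too. A small helper (ours, not part of AbstractCore) to test whether a port is already taken before starting the server:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```

For example, if port_in_use(8000) returns True, pick another port and pass it to uvicorn with --port as shown above.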
Common Error Messages
Getting Help
If you’re still stuck, these pages usually unblock things: