# Troubleshooting Guide
Complete troubleshooting guide for AbstractCore with common issues, solutions, and best practices.
## 🚨 Quick Diagnosis
Run these checks first:
```bash
# Check Python version
python --version  # Should be 3.9+

# Check AbstractCore installation
pip show abstractcore

# Test core library
python -c "from abstractcore import create_llm; print('✓ Core library OK')"

# Test server (if installed)
curl http://localhost:8000/health  # Should return {"status":"healthy"}
```
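If you prefer a single script, here is a minimal sketch that runs the same checks from Python. The health-check URL assumes the default server port (8000); adjust it if you run the server elsewhere.

```python
# quick_diagnosis.py -- minimal sketch combining the checks above
import sys

# 1. Python version
assert sys.version_info >= (3, 9), f"Python 3.9+ required, found {sys.version}"
print(f"✓ Python {sys.version_info.major}.{sys.version_info.minor}")

# 2. Core library import
try:
    from abstractcore import create_llm  # noqa: F401
    print("✓ Core library OK")
except ImportError as exc:
    print(f"✗ AbstractCore not installed: {exc}")

# 3. Server health (only meaningful if the server is running on the default port)
try:
    from urllib.request import urlopen
    with urlopen("http://localhost:8000/health", timeout=2) as resp:
        print(f"✓ Server: {resp.read().decode()}")
except Exception as exc:
    print(f"– Server not reachable (fine if you only use the library): {exc}")
```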
## 🔥 Top 3 Critical Mistakes

### 1. 🔑 Incorrect Provider Configuration

**Symptom:** Authentication failures, no model response

**Quick Fix:** Always set API keys as environment variables:
```bash
# Set API keys correctly
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Verify they're set
echo $OPENAI_API_KEY
```
### 2. 🧩 Mishandling Tool Calls

**Symptom:** Tools not executing, streaming interruptions

**Quick Fix:** Use the `@tool` decorator and handle tool calls properly:
```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: sunny, 72°F"

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate(
    "What's the weather in Paris?",
    tools=[get_weather]  # Pass tools as a list
)
```
### 3. 💻 Provider Dependency Confusion

**Symptom:** `ModuleNotFoundError` for provider packages

**Quick Fix:** Install the provider-specific packages:
```bash
# Install with a specific provider
pip install abstractcore[openai]
pip install abstractcore[anthropic]
pip install abstractcore[ollama]

# Install everything
pip install abstractcore[all]
```
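If you want a clearer failure at startup rather than deep inside a request, you can catch the missing dependency yourself. This is only a sketch: it assumes the missing provider surfaces as an `ImportError`/`ModuleNotFoundError`; check your actual traceback if AbstractCore wraps the error differently.

```python
# Sketch: surface a clearer hint when a provider extra is missing.
from abstractcore import create_llm

try:
    llm = create_llm("openai", model="gpt-4o-mini")
except (ImportError, ModuleNotFoundError) as exc:
    raise SystemExit(
        f"Provider dependency missing ({exc}). "
        "Install it with: pip install abstractcore[openai]"
    )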
## 🛠️ Common Issues & Solutions

### Authentication Errors

**Symptoms:**

- `Error: OpenAI API key not found`
- `Error: Authentication failed`
- `Error: Invalid API key`

**Solutions:**
```bash
# Check if API keys are set
echo $OPENAI_API_KEY  # Should show your key
echo $ANTHROPIC_API_KEY

# Set API keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Add to your shell profile for persistence
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
source ~/.bashrc

# Test authentication
python -c "from abstractcore import create_llm; llm = create_llm('openai', model='gpt-4o-mini'); print(llm.generate('test').content)"
```
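A small startup check in Python avoids discovering a missing key only at the first request. This is a sketch; adjust the variable list to the providers you actually use.

```python
import os

# Fail fast if a required key is missing (adjust to your providers).
REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

missing = [key for key in REQUIRED_KEYS if not os.environ.get(key)]
if missing:
    raise SystemExit(f"Missing API keys: {', '.join(missing)}")

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")
print(llm.generate("test").content)
```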
### Model Not Found

**Symptoms:**

- `Error: Model 'qwen3-coder:30b' not found`
- `Error: Unsupported model`

**For Ollama:**
```bash
# Check available models
ollama list

# Pull the missing model
ollama pull qwen3-coder:30b

# Verify Ollama is running
ollama serve
```
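To confirm a local model exists before creating the client, you can query Ollama directly. This sketch assumes Ollama's default REST endpoint (`http://localhost:11434/api/tags`), which lists installed models; it is independent of AbstractCore.

```python
# Sketch: verify a local Ollama model is installed before using it.
import json
from urllib.request import urlopen

wanted = "qwen3-coder:30b"
with urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    installed = {m["name"] for m in json.load(resp)["models"]}

if wanted not in installed:
    raise SystemExit(f"Model '{wanted}' not installed. Run: ollama pull {wanted}")

from abstractcore import create_llm

llm = create_llm("ollama", model=wanted)
```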
**For OpenAI/Anthropic:**
```python
# Use correct model names
llm = create_llm("openai", model="gpt-4o-mini")                 # ✓ Correct
llm = create_llm("openai", model="gpt4")                        # ✗ Wrong
llm = create_llm("anthropic", model="claude-3-5-haiku-latest")  # ✓ Correct
llm = create_llm("anthropic", model="claude-3")                 # ✗ Wrong
```
### Server Won't Start

**Symptoms:**

- `Address already in use`
- `Port 8000 is already allocated`

**Solutions:**
```bash
# Check what's using port 8000
lsof -i :8000                  # Linux/macOS
netstat -ano | findstr :8000   # Windows

# Kill the process on the port
kill -9 $(lsof -t -i:8000)     # Linux/macOS

# Or use a different port
uvicorn abstractcore.server.app:app --port 3000
```
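If you would rather pick a free port automatically, the following sketch asks the OS for one and then launches the same `uvicorn` command shown above.

```python
# Sketch: find a free port, then start the server on it.
import socket
import subprocess

with socket.socket() as s:
    s.bind(("", 0))            # port 0 asks the OS for any free port
    port = s.getsockname()[1]  # (small race window before uvicorn binds)

print(f"Starting AbstractCore server on port {port}")
subprocess.run(
    ["uvicorn", "abstractcore.server.app:app", "--port", str(port)],
    check=True,
)
```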
## ✅ Best Practices

### Configuration

- ✅ Use environment variables for API keys
- ✅ Never commit credentials to version control
- ✅ Use `.env` files (add them to `.gitignore`)
- ✅ Implement configuration validation (see the sketch after this list)
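A minimal sketch of the last two points. It assumes the third-party `python-dotenv` package (`pip install python-dotenv`), which is not an AbstractCore dependency, for loading `.env` files.

```python
# config.py -- load .env and validate keys before creating any LLM.
import os

from dotenv import load_dotenv  # third-party: python-dotenv

load_dotenv()  # reads .env in the current directory into os.environ

def require(name: str) -> str:
    """Return the environment variable or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

OPENAI_API_KEY = require("OPENAI_API_KEY")
```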
### Tool Development

- ✅ Always use the `@tool` decorator
- ✅ Add type hints to all parameters
- ✅ Write clear docstrings
- ✅ Handle edge cases and errors (see the example after this list)
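Putting these points together, here is a sketch of a tool that validates its input and reports failures as text instead of raising. The weather values are placeholders; swap in a real lookup inside the `try` block.

```python
from abstractcore import tool

@tool
def get_weather(city: str, units: str = "fahrenheit") -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city, e.g. "Paris".
        units: Either "fahrenheit" or "celsius".
    """
    # Validate inputs and return errors as text so the LLM can recover.
    if not city or not city.strip():
        return "Error: city name must not be empty."
    if units not in ("fahrenheit", "celsius"):
        return f"Error: unsupported units '{units}'."
    try:
        # Placeholder values; replace with a real weather API call here.
        temperature, symbol = (72, "°F") if units == "fahrenheit" else (22, "°C")
        return f"Weather in {city.strip()}: sunny, {temperature}{symbol}"
    except Exception as exc:
        return f"Error fetching weather for {city}: {exc}"
```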
### Error Handling

- ✅ Always use try/except blocks
- ✅ Implement provider fallback strategies (see the sketch after this list)
- ✅ Log errors systematically
- ✅ Design for graceful degradation
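A minimal fallback sketch using only `create_llm` and `generate` as shown earlier. The provider order and the broad `except Exception` are assumptions; tighten both for your setup.

```python
from abstractcore import create_llm

# Try providers in order until one succeeds (the order is an example).
FALLBACKS = [
    ("openai", "gpt-4o-mini"),
    ("anthropic", "claude-3-5-haiku-latest"),
    ("ollama", "qwen3-coder:30b"),
]

def generate_with_fallback(prompt: str):
    last_error = None
    for provider, model in FALLBACKS:
        try:
            llm = create_llm(provider, model=model)
            return llm.generate(prompt)
        except Exception as exc:  # narrow to specific exceptions in real code
            last_error = exc
            print(f"{provider}/{model} failed: {exc}")
    raise RuntimeError(f"All providers failed; last error: {last_error}")

response = generate_with_fallback("Summarize AbstractCore in one sentence.")
print(response.content)
```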
## 🔍 Debug Techniques

### Enable Debug Logging

**Core Library:**
```python
import logging

logging.basicConfig(level=logging.DEBUG)

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")
```
**Server:**
```bash
# Enable debug mode
export ABSTRACTCORE_DEBUG=true

# Start with debug logging
uvicorn abstractcore.server.app:app --log-level debug

# Monitor logs
tail -f logs/abstractcore_*.log
```
## 📋 Common Error Messages
## 🆘 Getting Help

If you're still stuck:

- **Enable Debug Mode:** `export ABSTRACTCORE_DEBUG=true`
- **Collect Information:** error messages, debug logs, system info
- **Check Documentation:** Getting Started, API Reference
- **Community Support:** GitHub Issues