API Overview (Python)
A user-facing map of the public Python API. For a complete listing (including events), see the API Reference.
Core entrypoints
create_llm(...)
Create a provider instance:
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini") # requires: pip install "abstractcore[openai]"
resp = llm.generate("Hello!")
print(resp.content)
BasicSession
Keep conversation state:
from abstractcore import BasicSession, create_llm
session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))
print(session.generate("Give me 3 name ideas.").content)
print(session.generate("Pick the best one.").content)
Responses (GenerateResponse)
Most calls return a GenerateResponse object (or an iterator for streaming). Common fields:
content: cleaned assistant text
tool_calls: structured tool calls (pass-through by default)
usage: token usage (provider-dependent)
metadata: provider/model-specific fields (for example, extracted reasoning text when configured)
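For orientation, a minimal sketch of reading these fields (the exact contents of usage and metadata vary by provider):
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("Name three HTTP methods.")
print(resp.content)     # cleaned assistant text
print(resp.tool_calls)  # structured tool calls, if any were made
print(resp.usage)       # token usage (provider-dependent)
print(resp.metadata)    # provider/model-specific fields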
Tool calling
Tools are passed explicitly to generate() / agenerate():
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What is the weather in Paris? Use the tool.", tools=[get_weather])
print(resp.tool_calls)
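Because tool calls are passed through by default, the caller decides how to execute them. A minimal dispatch sketch, assuming each entry in resp.tool_calls exposes a tool name and arguments (the attribute names below are illustrative, not confirmed; see Tool Calling for the actual structure):
tools_by_name = {"get_weather": get_weather}
for call in resp.tool_calls or []:
    fn = tools_by_name.get(call.name)    # "name" is an assumed attribute
    if fn is not None:
        print(fn(**call.arguments))      # "arguments" is an assumed attribute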
See Tool Calling and Tool Syntax Rewriting.
Built-in tools (optional)
If you want a ready-made toolset (web + filesystem helpers), install:
pip install "abstractcore[tools]"
Then import from abstractcore.tools.common_tools (for example web_search, skim_websearch, skim_url, fetch_url).
See Tool Calling for usage patterns and when to use skim_* vs fetch_*.
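A short sketch of wiring the built-in tools into a call, assuming abstractcore[tools] is installed (whether the model actually calls a tool depends on the prompt and model):
from abstractcore import create_llm
from abstractcore.tools.common_tools import web_search, fetch_url
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("Find the latest Python release and cite a source.", tools=[web_search, fetch_url])
print(resp.tool_calls)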
Structured output
Pass a Pydantic model via response_model=... to receive a typed result:
from pydantic import BaseModel
from abstractcore import create_llm
class Answer(BaseModel):
    title: str
    bullets: list[str]
llm = create_llm("openai", model="gpt-4o-mini")
result = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(result.bullets)
See Structured Output.
Media input
Media handling is opt-in:
pip install "abstractcore[media]"
Then pass media=[...] to generate() / agenerate(). See Media Handling.
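A minimal sketch, assuming abstractcore[media] is installed and a vision-capable model (the image path is only a placeholder):
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("Describe this image in one sentence.", media=["photo.jpg"])  # placeholder path
print(resp.content)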
Audio & voice (optional)
Audio inputs are handled via an explicit policy to avoid silent semantic changes. For speech audio, use
audio_policy="speech_to_text" (requires a capability plugin such as abstractvoice).
pip install abstractvoice
See Audio & Voice (STT/TTS) for usage, configuration defaults, and server endpoints.
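A hedged sketch of the speech path, assuming abstractvoice is installed and that audio_policy is accepted alongside media on the call (the audio path is a placeholder; see Audio & Voice for where the policy is actually configured):
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate(
    "Summarize this recording.",
    media=["meeting.wav"],          # placeholder path
    audio_policy="speech_to_text",  # assumed call-level argument; requires abstractvoice
)
print(resp.content)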
HTTP API (optional)
If you want an OpenAI-compatible /v1 gateway, install and run the server:
pip install "abstractcore[server]"
python -m abstractcore.server.app
See HTTP Server Guide.
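Once the server is running, any OpenAI-compatible client can point at it. A sketch using the official openai package, assuming the local address and port shown here (check the server's startup output and the HTTP Server Guide for the real defaults and model naming):
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # address/port are assumptions
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model naming through the gateway may differ; see the guide
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)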