Tool Calling System
AbstractCore provides a universal tool calling system that works across all LLM providers, even those without native tool support. This is one of AbstractCore's most powerful features.
🔄 How Tool Calling Works
A) Tool Detection
AbstractCore automatically detects different tool call syntaxes:
- Qwen3: <|tool_call|>...</|tool_call|>
- LLaMA3: <function_call>...</function_call>
- XML: <tool_call>...</tool_call>
- OpenAI/Anthropic: Native API calls
B) Syntax Rewriting
Optionally preserve/rewrite tool-call syntax for downstream consumers:
- Python API: tool_call_tags="qwen3|llama3|xml|gemma" keeps tool markup in response.content
- HTTP server: request agent_format="codex|llama3|xml|gemma|..." (see Server docs)
- Default: tool markup is stripped from content; canonical calls stay in tool_calls
C) Optional Execution
Define where tools execute (recommended: host/runtime boundary):
- execute_tools=false (default) → Return tool calls only (recommended)
- execute_tools=true → Legacy local execution (deprecated)
- Event system for security control
- Error handling and validation
🎯 Key Insight: One tool definition works everywhere - AbstractCore handles all the complexity of different formats and execution models automatically.
Quick Start
The simplest way to create and use tools is with the @tool decorator:
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
"""Get current weather for a specified city."""
# In a real scenario, you'd call an actual weather API
return f"The weather in {city} is sunny, 72°F"
@tool
def calculate(expression: str) -> float:
"""Perform a mathematical calculation."""
try:
result = eval(expression) # Simplified for demo - don't use eval in production!
return result
except Exception:
return float('nan')
# Works with ANY provider
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate(
"What's the weather in Tokyo and what's 15 * 23?",
tools=[get_weather, calculate] # Pass functions directly
)
# By default (`execute_tools=False`), AbstractCore does not execute tools.
# It returns a clean assistant message plus structured tool call requests.
print(response.content)
print(response.tool_calls)
Recommended: execute tool calls in your host/runtime and apply your own safety policy (approval gates, allowlists, timeouts) before sending tool results back to the model in a follow-up turn.
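The host-side step can be sketched in plain Python. This is an illustrative `dispatch` helper (not an AbstractCore API); the tool-call dict shape mirrors the `{"name": ..., "arguments": ...}` entries found in `response.tool_calls`:

```python
# Host-side dispatch sketch (illustrative). `allowed` is your own
# allowlist mapping tool names to callables — the approval gate.
def dispatch(tool_calls, allowed):
    results = []
    for call in tool_calls or []:
        name = call.get("name")
        if name not in allowed:  # approval gate: skip anything not allowlisted
            results.append({"name": name, "output": "Error: tool not allowed"})
            continue
        output = allowed[name](**(call.get("arguments") or {}))
        results.append({"name": name, "output": output})
    return results

def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny, 72°F"

# Execute a (mocked) tool call request; you would then send `results`
# back to the model in a follow-up turn.
results = dispatch(
    [{"name": "get_weather", "arguments": {"city": "Tokyo"}}],
    {"get_weather": get_weather},
)
```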
Built-in Tools
AbstractCore includes a comprehensive set of ready-to-use tools in abstractcore.tools.common_tools. The web tools (notably web_search, skim_websearch, skim_url, fetch_url) require the abstractcore[tools] extra so runtime dependencies like requests/bs4 are available:
pip install "abstractcore[tools]"
from abstractcore import create_llm
from abstractcore.tools.common_tools import (
analyze_code,
skim_url,
skim_websearch,
fetch_url,
search_files,
read_file,
list_files,
)
# Code navigation (recommended before editing)
outline = analyze_code("src/app.py")
print(outline)
# Quick URL preview (fast, small)
preview = skim_url("https://example.com/article")
# Smaller/filtered web search (compact output)
results = skim_websearch("HTTP caching best practices")
# Full web content fetching and parsing (HTML→Markdown, JSON/XML/text)
result = fetch_url("https://api.github.com/repos/python/cpython")
# For PDFs/images/other binaries, fetch_url returns metadata (and optional previews), not full extraction.
# File system operations
files = search_files("def.*fetch", ".", file_pattern="*.py") # Find function definitions
content = read_file("config.json") # Read file contents
directory_listing = list_files(".", pattern="*.py", recursive=True)
# Use with any LLM
llm = create_llm("anthropic", model="claude-haiku-4-5")
response = llm.generate(
"Analyze this API response and summarize the key information",
tools=[fetch_url]
)
Available Built-in Tools
🌐 Web & Network
- skim_url - Fast URL skim (title/description/headings + short preview)
- fetch_url - Fetch + parse common text-first types (HTML→Markdown, JSON/XML/text); binaries return metadata + optional previews
- web_search - Search the web using DuckDuckGo
- skim_websearch - Smaller/filtered web search (compact result list)
📁 File Operations
- analyze_code - Structured outline of Python/JavaScript code (imports/classes/functions with line ranges)
- search_files - Search for text patterns inside files using regex
- list_files - Find and list files by names/paths using glob patterns
- read_file - Read file contents with optional line range selection
- write_file - Write content to files with directory creation
- edit_file - Edit files using pattern matching and replacement
⚙️ System Operations
- execute_command - Execute shell commands safely with security controls
Suggested web workflow (agent-friendly)
1. skim_websearch(...) → get a small set of candidate URLs
2. skim_url(...) → quickly decide what's worth fetching
3. fetch_url(...) → parse the selected URL(s); set include_full_content=False when you want a smaller output
Tip: you can measure output footprint/latency locally with python examples/skim_tools_benchmark.py --help.
Note: Built-in tools include error handling, security controls, and work with all providers and tool syntax rewriting.
Side effects: For real email/WhatsApp sending, see Communication Tools. Keep these behind an approval boundary.
The @tool Decorator
The @tool decorator is the primary way to create tools in AbstractCore. It automatically extracts function metadata and creates proper tool definitions.
Basic Usage
from abstractcore import tool
@tool
def list_files(directory: str = ".", pattern: str = "*") -> str:
"""List files in a directory matching a pattern."""
import os
import fnmatch
try:
files = [f for f in os.listdir(directory)
if fnmatch.fnmatch(f, pattern)]
return "\n".join(files) if files else "No files found"
except Exception as e:
return f"Error: {str(e)}"
Type Annotations
The decorator automatically infers parameter types from type annotations:
@tool
def create_user(name: str, age: int, is_admin: bool = False) -> str:
"""Create a new user with the specified details."""
user_data = {
"name": name,
"age": age,
"is_admin": is_admin,
"created_at": "2025-01-14"
}
return f"Created user: {user_data}"
Enhanced Metadata
The @tool decorator supports rich metadata for prompted tool calling. For native tool calling, only standard tool schema fields (name, description, parameters) are sent to provider APIs; custom keys are omitted for compatibility.
@tool(
description="Search the database for records matching the query",
tags=["database", "search", "query"],
when_to_use="When the user asks for specific data from the database or wants to find records",
examples=[
{
"description": "Find all users named John",
"arguments": {
"query": "name=John",
"table": "users"
}
},
{
"description": "Search for products under $50",
"arguments": {
"query": "price<50",
"table": "products"
}
},
{
"description": "Find recent orders",
"arguments": {
"query": "date>2025-01-01",
"table": "orders"
}
}
]
)
def search_database(query: str, table: str = "users") -> str:
"""Search the database for records matching the query."""
# Implementation here
return f"Searching {table} for: {query}"
🎯 How This Metadata is Used:
- Prompted tool calling: the tool formatter injects tool name/description/args into the system prompt. To keep prompts small, when_to_use is included only for small tool sets and a few high-impact tools (edit/write/execute + web triage tools), and tool examples are globally capped.
- Native tool calling: only standard fields (name, description, parameters) are sent to provider APIs (unknown/custom fields are intentionally omitted for compatibility).
Real-World Example from AbstractCore
Here's an actual example from AbstractCore's codebase showing the full power of the enhanced `@tool` decorator:
from typing import Optional

@tool(
description="Find and list files and directories by their names/paths using glob patterns (case-insensitive, supports multiple patterns)",
tags=["file", "directory", "listing", "filesystem"],
when_to_use="When you need to find files by their names, paths, or file extensions (NOT for searching file contents)",
examples=[
{
"description": "List all files in current directory",
"arguments": {
"directory_path": ".",
"pattern": "*"
}
},
{
"description": "Find all Python files recursively",
"arguments": {
"directory_path": ".",
"pattern": "*.py",
"recursive": True
}
},
{
"description": "Find all files with 'test' in filename (case-insensitive)",
"arguments": {
"directory_path": ".",
"pattern": "*test*",
"recursive": True
}
},
{
"description": "Find multiple file types using | separator",
"arguments": {
"directory_path": ".",
"pattern": "*.py|*.js|*.md",
"recursive": True
}
},
{
"description": "Complex multiple patterns - documentation, tests, and config files",
"arguments": {
"directory_path": ".",
"pattern": "README*|*test*|config.*|*.yml",
"recursive": True
}
}
]
)
def list_files(directory_path: str = ".", pattern: str = "*", recursive: bool = False, include_hidden: bool = False, head_limit: Optional[int] = 50) -> str:
"""
List files and directories in a specified directory with pattern matching (case-insensitive).
IMPORTANT: Use 'directory_path' parameter (not 'file_path') to specify the directory to list.
Args:
directory_path: Path to the directory to list files from (default: "." for current directory)
pattern: Glob pattern(s) to match files. Use "|" to separate multiple patterns (default: "*")
recursive: Whether to search recursively in subdirectories (default: False)
include_hidden: Whether to include hidden files/directories starting with '.' (default: False)
head_limit: Maximum number of files to return (default: 50, None for unlimited)
Returns:
Formatted string with file and directory listings or error message.
When head_limit is applied, shows "showing X of Y files" in the header.
"""
# Implementation here...
🔄 How This Gets Transformed
When you use this tool with a prompted model (like Ollama), AbstractCore automatically generates a system prompt like this:
**list_files**: Find and list files and directories by their names/paths using glob patterns (case-insensitive, supports multiple patterns)
• When to use: When you need to find files by their names, paths, or file extensions (NOT for searching file contents)
• Tags: file, directory, listing, filesystem
• Parameters: {"directory_path": {"type": "string", "default": "."}, "pattern": {"type": "string", "default": "*"}, ...}
To use a tool, respond with this EXACT format:
<|tool_call|>
{"name": "tool_name", "arguments": {"param1": "value1", "param2": "value2"}}
</|tool_call|>
**EXAMPLES:**
**list_files Examples:**
1. List all files in current directory:
<|tool_call|>
{"name": "list_files", "arguments": {"directory_path": ".", "pattern": "*"}}
</|tool_call|>
2. Find all Python files recursively:
<|tool_call|>
{"name": "list_files", "arguments": {"directory_path": ".", "pattern": "*.py", "recursive": true}}
</|tool_call|>
... and 3 more examples with proper formatting ...
Universal Tool Support
AbstractCore's tool system works across all providers through two mechanisms:
1. Native Tool Support
For providers with native tool APIs (OpenAI, Anthropic):
# OpenAI with native tool support
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("What's the weather?", tools=[get_weather])
2. Intelligent Prompting
For providers without native tool support (Ollama, MLX, LMStudio):
# Ollama without native tool support
# AbstractCore handles this automatically
llm = create_llm("ollama", model="qwen3:4b-instruct")
response = llm.generate("What's the weather?", tools=[get_weather])
Tool Definition
In most cases, define tools with @tool. Under the hood, tools are represented as
ToolDefinition objects (manual definitions are rarely needed).
from abstractcore.tools import ToolDefinition
def get_weather_function(city: str) -> str:
return f"{city}: 22°C and sunny"
# Manual tool definition (rarely needed)
tool_def = ToolDefinition(
name="get_weather",
description="Get current weather for a city",
parameters={
"city": {"type": "string", "description": "The city name"}
},
function=get_weather_function,
)
Parameter Types
AbstractCore derives tool schemas from Python type hints (including nested objects).
- string, integer, number, boolean
- array (lists) and object (dicts / nested structures)
from abstractcore import tool
@tool
def complex_tool(
text: str,
count: int,
threshold: float,
enabled: bool,
tags: list[str],
config: dict,
) -> str:
"""Tool with various parameter types."""
return f"Processed: {text} with {count} items"
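The hint-to-schema mapping can be approximated in a few lines of standard-library Python. This is an illustrative sketch (the real schema generation happens inside AbstractCore; `sketch_parameters` and `TYPE_MAP` are hypothetical names):

```python
import inspect
import typing

# Rough Python-type → JSON-schema-type mapping (illustrative only).
TYPE_MAP = {str: "string", int: "integer", float: "number",
            bool: "boolean", list: "array", dict: "object"}

def sketch_parameters(fn):
    """Approximate the parameters schema derived from a function's type hints."""
    hints = typing.get_type_hints(fn)
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        hint = hints.get(name, str)
        base = typing.get_origin(hint) or hint   # e.g. list[str] → list
        props[name] = {"type": TYPE_MAP.get(base, "string")}
        if param.default is not inspect.Parameter.empty:
            props[name]["default"] = param.default
    return props

def create_user(name: str, age: int, is_admin: bool = False) -> str:
    """Create a new user."""
    return name

print(sketch_parameters(create_user))
# → {'name': {'type': 'string'}, 'age': {'type': 'integer'},
#    'is_admin': {'type': 'boolean', 'default': False}}
```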
Tool Execution
Execution Modes
- Passthrough mode (default): execute_tools=False → tool calls are returned in response.tool_calls for your host/runtime to execute.
- Direct execution (deprecated): execute_tools=True → AbstractCore executes tools locally and appends results (avoid in server/untrusted environments).
Recommended: Host/runtime execution
In passthrough mode, treat tool calls as untrusted requests. Execute only what you allow.
from abstractcore import create_llm, tool
from abstractcore.tools import ToolCall, ToolRegistry
@tool
def get_weather(city: str) -> str:
return f"{city}: 22°C and sunny"
dangerous = {"execute_command", "edit_file", "send_email"}
registry = ToolRegistry()
registry.register(get_weather)
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])
for call in resp.tool_calls or []:
name = call.get("name")
if name in dangerous:
continue
result = registry.execute_tool(
ToolCall(
name=name,
arguments=call.get("arguments") or {},
call_id=call.get("call_id") or call.get("id"),
)
)
print(result)
Safety tip: keep side-effect tools behind an approval boundary (especially filesystem, shell, and messaging tools).
Architecture-Aware Tool Call Detection
AbstractCore automatically detects model architecture and uses the appropriate tool call format:
Qwen3 Architecture
<|tool_call|>{"name": "get_weather", "arguments": {"city": "Paris"}}</|tool_call|>
LLaMA3 Architecture
<function_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</function_call>
OpenAI/Anthropic
Native API tool calls (structured JSON)
XML-based Models
<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>
🎯 Key Point: You never need to worry about these formats! AbstractCore handles architecture detection, prompt formatting, and response parsing automatically. Your tools work the same way across all providers.
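Conceptually, detection boils down to trying each known tag pattern against the raw model output and parsing the JSON inside. A minimal sketch (not AbstractCore's actual parser):

```python
import json
import re

# Illustrative tag patterns for the formats above.
PATTERNS = [
    r"<\|tool_call\|>(.*?)</\|tool_call\|>",   # Qwen3-style
    r"<function_call>(.*?)</function_call>",   # LLaMA3-style
    r"<tool_call>(.*?)</tool_call>",           # XML-style
]

def extract_tool_calls(text: str) -> list:
    """Return the parsed tool-call dicts found in a raw model response."""
    for pattern in PATTERNS:
        matches = re.findall(pattern, text, re.DOTALL)
        if matches:
            return [json.loads(m) for m in matches]
    return []

calls = extract_tool_calls(
    '<function_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</function_call>'
)
```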
Advanced Patterns
Tool Chaining
Tools can call other tools or return data that triggers additional tool calls:
@tool
def get_user_location(user_id: str) -> str:
"""Get the location of a user."""
# Mock implementation
locations = {"user123": "Paris", "user456": "Tokyo"}
return locations.get(user_id, "Unknown")
@tool
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: 72°F, sunny"
# LLM can chain these tools:
response = llm.generate(
"What's the weather like for user123?",
tools=[get_user_location, get_weather]
)
# In an agent loop, your host executes tool calls and feeds results back so the model can chain.
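The chaining loop above can be sketched in pure Python. Here `llm_step` is a stand-in for `llm.generate(...)` and the reply/message shapes are illustrative assumptions, not AbstractCore's exact types:

```python
# Minimal host-side agent loop sketch. `llm_step(messages)` stands in for a
# model call and returns {"content": ..., "tool_calls": [...]}.
def run_agent(llm_step, registry, prompt, max_turns=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = llm_step(messages)
        if not reply.get("tool_calls"):
            return reply.get("content")       # no more tool requests: done
        for call in reply["tool_calls"]:
            fn = registry[call["name"]]
            result = fn(**(call.get("arguments") or {}))
            # Feed the tool result back so the model can chain further calls.
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    return None
```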
Conditional Tool Execution (Recommended)
In passthrough mode, your host/runtime decides which tool calls to execute:
from abstractcore.tools import ToolCall, ToolRegistry
dangerous_tools = {"execute_command", "edit_file", "send_email"}
registry = ToolRegistry()
registry.register(get_user_location)
registry.register(get_weather)
resp = llm.generate(
"What's the weather like for user123?",
tools=[get_user_location, get_weather],
)
for call in resp.tool_calls or []:
name = call.get("name")
if name in dangerous_tools:
continue
result = registry.execute_tool(
ToolCall(
name=name,
arguments=call.get("arguments") or {},
call_id=call.get("call_id") or call.get("id"),
)
)
print(result)
Error Handling
Tool-Level Error Handling
Handle errors within tools:
@tool
def safe_division(a: float, b: float) -> str:
"""Safely divide two numbers."""
try:
if b == 0:
return "Error: Division by zero is not allowed"
result = a / b
return f"{a} ÷ {b} = {result}"
except Exception as e:
return f"Error: {str(e)}"
System-Level Error Handling
AbstractCore provides comprehensive error handling:
from abstractcore.exceptions import ToolExecutionError
try:
response = llm.generate("Use the broken tool", tools=[broken_tool])
except ToolExecutionError as e:
print(f"Tool execution failed: {e}")
print(f"Failed tool: {e.tool_name}")
print(f"Error details: {e.error_details}")
Performance Optimization
Tool Registry
For many tools, register once and reuse definitions instead of rebuilding them each call:
from abstractcore.tools import ToolRegistry
# Register tools once and reuse the registry
registry = ToolRegistry()
registry.register(get_weather)
registry.register(calculate)
resp = llm.generate("Help me with weather and math.", tools=registry.list_tools())
Lazy Loading
Load expensive resources only when the tool is invoked:
class DatabaseTool:
    def __init__(self):
        self._connection = None  # created on first use, not at import time
    @property
    def connection(self):
        if self._connection is None:
            # create_database_connection() is a placeholder for your own connector
            self._connection = create_database_connection()
        return self._connection
db = DatabaseTool()
Caching Results
from functools import lru_cache
from abstractcore import tool
@tool
@lru_cache(maxsize=100)
def expensive_calculation(input_data: str) -> str:
return run_expensive_step(input_data)
Tool Syntax Rewriting
AbstractCore can rewrite tool call formats for compatibility with different agentic CLIs:
Qwen3 tags (qwen3)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="qwen3"
)
# Output: <|tool_call|>{"name": "get_weather", ...}</|tool_call|>
LLaMA3 tags (llama3)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="llama3"
)
# Output: <function_call>{"name": "get_weather", ...}</function_call>
XML tags (xml)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="xml"
)
# Output: <tool_call>{"name": "get_weather", ...}</tool_call>
HTTP clients: When consuming tool calls through the OpenAI-compatible server, prefer agent_format on the request body (see Server Guide).
Event System Integration
Observe tool calling and (optional) tool execution through events (event payload shapes may vary by emitter):
Cost Monitoring
from abstractcore.events import EventType, on_global
def monitor_tool_costs(event):
"""Monitor costs of tool executions."""
for call in event.data.get("tool_calls", []) or []:
if call.get("name") in {"expensive_api_call", "premium_service"}:
print(f"Warning: Using expensive tool {call.get('name')}")
on_global(EventType.TOOL_STARTED, monitor_tool_costs)
Performance Tracking
def track_tool_outcomes(event):
"""Track tool execution outcomes (shape varies by emitter)."""
for result in event.data.get("tool_results", []) or []:
if result.get("success") is False:
print(f"Tool failed: {result.get('name')} error={result.get('error')}")
on_global(EventType.TOOL_COMPLETED, track_tool_outcomes)
Best Practices
1. Clear Documentation
Always provide clear docstrings for your tools:
from typing import Any, Dict, Optional
@tool
def send_email(
to: Any,
subject: str,
*,
account: Optional[str] = None,
body_text: Optional[str] = None,
body_html: Optional[str] = None,
) -> Dict[str, Any]:
"""Send an email from an operator-configured account.
Note:
- SMTP/IMAP settings are configured by the operator (not passed via tool args).
- Use with caution as it sends real emails; keep behind an approval boundary.
"""
# Implementation here
2. Input Validation
Always validate and sanitize inputs:
@tool
def process_user_input(user_data: str) -> str:
"""Process user input safely."""
# Validate input length
if len(user_data) > 1000:
return "Error: Input too long (max 1000 characters)"
# Sanitize input
import html
safe_data = html.escape(user_data)
# Process safely
return f"Processed: {safe_data}"
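Following the same principle, a calculator tool can validate its input by walking the AST instead of calling eval (a minimal sketch; `safe_calculate` is a hypothetical replacement for the eval-based calculate shown in Quick Start):

```python
import ast
import operator

# Only numeric literals and four binary operators are allowed — names,
# calls, and attribute access are rejected outright.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calculate(expression: str) -> float:
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("Unsupported expression")
    return float(_eval(ast.parse(expression, mode="eval")))

print(safe_calculate("15 * 23"))  # → 345.0
```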
Troubleshooting
Common Issues
- Tool not being called: Check tool description and parameter names
- Invalid JSON in tool calls: Ensure proper error handling in tools
- Tools timing out: Implement proper timeout handling
- Memory issues with large tools: Use streaming or chunking
Debug Mode
Enable debug mode to see tool execution details:
import logging
logging.basicConfig(level=logging.DEBUG)
# Tool execution details will be logged
response = llm.generate("Use tools", tools=[debug_tool])
Testing Tools
Test tools independently:
# Test tool directly
result = get_weather("Paris")
print(f"Direct call result: {result}")
# Test with LLM
response = llm.generate("What's the weather in Paris?", tools=[get_weather])
print(f"LLM result: {response.content}")
Related Documentation
- API Reference - Complete API documentation
- Tool Syntax Rewriting - Format conversion details
- Centralized Configuration - Global defaults and settings
- Media Handling - Universal file attachment
- Architecture - System design and tool execution flow
- Examples - Real-world tool usage examples
- MCP Tools - Discover external tools via Model Context Protocol
- Communication Tools - Email + WhatsApp tools (opt-in; safety-first)