Tool Calling System
AbstractCore provides a universal tool calling system that works across all LLM providers, even those without native tool support. This is one of AbstractCore's most powerful features.
🔄 How Tool Calling Works
A) Tool Detection
AbstractCore automatically detects different tool call syntaxes:
- Qwen3: <|tool_call|>...</|tool_call|>
- LLaMA3: <function_call>...</function_call>
- XML: <tool_call>...</tool_call>
- OpenAI/Anthropic: Native API calls
B) Real-Time Rewriting
Convert tool calls to specific formats for agent compatibility:
- tool_call_tags="qwen3" → Codex CLI
- tool_call_tags="llama3" → Crush CLI
- tool_call_tags="xml" → Gemini CLI
- Auto-detection for optimal format
C) Optional Execution
Choose whether AbstractCore executes tools locally:
- execute_tools=true → Local execution
- execute_tools=false → Agent handles it
- Event system for security control
- Error handling and validation
🎯 Key Insight: One tool definition works everywhere - AbstractCore handles all the complexity of different formats and execution models automatically.
Quick Start
The simplest way to create and use tools is with the @tool decorator:
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
"""Get current weather for a specified city."""
# In a real scenario, you'd call an actual weather API
return f"The weather in {city} is sunny, 72°F"
@tool
def calculate(expression: str) -> float:
"""Perform a mathematical calculation."""
try:
result = eval(expression) # Simplified for demo - don't use eval in production!
return result
except Exception:
return float('nan')
# Works with ANY provider
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate(
"What's the weather in Tokyo and what's 15 * 23?",
tools=[get_weather, calculate] # Pass functions directly
)
print(response.content)
# Output: The weather in Tokyo is sunny, 72°F and 15 * 23 = 345.
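The eval call in calculate above is fine for a demo but unsafe for untrusted input. A safer sketch (standalone illustration, not part of AbstractCore) restricts evaluation to basic arithmetic using Python's ast module:

```python
import ast
import operator

# Map AST operator nodes to their functions (assumption: only basic arithmetic is needed)
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(safe_eval("15 * 23"))  # 345
```

Anything outside plain arithmetic (names, calls, attribute access) raises ValueError instead of executing.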
The @tool Decorator
The @tool decorator is the primary way to create tools in AbstractCore. It automatically extracts function metadata and creates proper tool definitions.
Basic Usage
from abstractcore import tool
@tool
def list_files(directory: str = ".", pattern: str = "*") -> str:
"""List files in a directory matching a pattern."""
import os
import fnmatch
try:
files = [f for f in os.listdir(directory)
if fnmatch.fnmatch(f, pattern)]
return "\n".join(files) if files else "No files found"
except Exception as e:
return f"Error: {str(e)}"
Type Annotations
The decorator automatically infers parameter types from type annotations:
@tool
def create_user(name: str, age: int, is_admin: bool = False) -> str:
"""Create a new user with the specified details."""
user_data = {
"name": name,
"age": age,
"is_admin": is_admin,
"created_at": "2025-01-14"
}
return f"Created user: {user_data}"
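To make the inference step concrete, here is a rough standalone sketch of how a decorator can derive a JSON-schema-like parameter spec from annotations. It is illustrative only; AbstractCore's actual internal representation may differ:

```python
import inspect

# Map Python annotations to JSON-schema type names (illustrative subset)
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def infer_parameters(fn) -> dict:
    """Build a parameter spec from a function's signature."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        spec = {"type": _TYPE_MAP.get(p.annotation, "string")}
        if p.default is not inspect.Parameter.empty:
            spec["default"] = p.default
        params[name] = spec
    return params

def create_user(name: str, age: int, is_admin: bool = False) -> str: ...

print(infer_parameters(create_user))
# {'name': {'type': 'string'}, 'age': {'type': 'integer'},
#  'is_admin': {'type': 'boolean', 'default': False}}
```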
Enhanced Metadata
The @tool decorator supports rich metadata that gets automatically injected into system prompts for prompted models and passed to native APIs:
@tool(
description="Search the database for records matching the query",
tags=["database", "search", "query"],
when_to_use="When the user asks for specific data from the database or wants to find records",
examples=[
{
"description": "Find all users named John",
"arguments": {
"query": "name=John",
"table": "users"
}
},
{
"description": "Search for products under $50",
"arguments": {
"query": "price<50",
"table": "products"
}
},
{
"description": "Find recent orders",
"arguments": {
"query": "date>2025-01-01",
"table": "orders"
}
}
]
)
def search_database(query: str, table: str = "users") -> str:
"""Search the database for records matching the query."""
# Implementation here
return f"Searching {table} for: {query}"
🎯 How This Metadata is Used:
- Prompted Models: All metadata is injected into the system prompt to guide the LLM
- Native APIs: Metadata is passed through to the provider's tool API
- Examples: Shown in the system prompt with proper formatting for each architecture
- Tags & when_to_use: Help the LLM understand context and appropriate usage
Real-World Example from AbstractCore
Here's an actual example from AbstractCore's codebase showing the full power of the enhanced `@tool` decorator:
from typing import Optional

@tool(
description="Find and list files and directories by their names/paths using glob patterns (case-insensitive, supports multiple patterns)",
tags=["file", "directory", "listing", "filesystem"],
when_to_use="When you need to find files by their names, paths, or file extensions (NOT for searching file contents)",
examples=[
{
"description": "List all files in current directory",
"arguments": {
"directory_path": ".",
"pattern": "*"
}
},
{
"description": "Find all Python files recursively",
"arguments": {
"directory_path": ".",
"pattern": "*.py",
"recursive": True
}
},
{
"description": "Find all files with 'test' in filename (case-insensitive)",
"arguments": {
"directory_path": ".",
"pattern": "*test*",
"recursive": True
}
},
{
"description": "Find multiple file types using | separator",
"arguments": {
"directory_path": ".",
"pattern": "*.py|*.js|*.md",
"recursive": True
}
},
{
"description": "Complex multiple patterns - documentation, tests, and config files",
"arguments": {
"directory_path": ".",
"pattern": "README*|*test*|config.*|*.yml",
"recursive": True
}
}
]
)
def list_files(directory_path: str = ".", pattern: str = "*", recursive: bool = False, include_hidden: bool = False, head_limit: Optional[int] = 50) -> str:
"""
List files and directories in a specified directory with pattern matching (case-insensitive).
IMPORTANT: Use 'directory_path' parameter (not 'file_path') to specify the directory to list.
Args:
directory_path: Path to the directory to list files from (default: "." for current directory)
pattern: Glob pattern(s) to match files. Use "|" to separate multiple patterns (default: "*")
recursive: Whether to search recursively in subdirectories (default: False)
include_hidden: Whether to include hidden files/directories starting with '.' (default: False)
head_limit: Maximum number of files to return (default: 50, None for unlimited)
Returns:
Formatted string with file and directory listings or error message.
When head_limit is applied, shows "showing X of Y files" in the header.
"""
# Implementation here...
🔄 How This Gets Transformed
When you use this tool with a prompted model (like Ollama), AbstractCore automatically generates a system prompt like this:
**list_files**: Find and list files and directories by their names/paths using glob patterns (case-insensitive, supports multiple patterns)
• When to use: When you need to find files by their names, paths, or file extensions (NOT for searching file contents)
• Tags: file, directory, listing, filesystem
• Parameters: {"directory_path": {"type": "string", "default": "."}, "pattern": {"type": "string", "default": "*"}, ...}
To use a tool, respond with this EXACT format:
<|tool_call|>
{"name": "tool_name", "arguments": {"param1": "value1", "param2": "value2"}}
</|tool_call|>
**EXAMPLES:**
**list_files Examples:**
1. List all files in current directory:
<|tool_call|>
{"name": "list_files", "arguments": {"directory_path": ".", "pattern": "*"}}
</|tool_call|>
2. Find all Python files recursively:
<|tool_call|>
{"name": "list_files", "arguments": {"directory_path": ".", "pattern": "*.py", "recursive": true}}
</|tool_call|>
... and 3 more examples with proper formatting ...
Universal Tool Support
AbstractCore's tool system works across all providers through two mechanisms:
1. Native Tool Support
For providers with native tool APIs (OpenAI, Anthropic):
# OpenAI with native tool support
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("What's the weather?", tools=[get_weather])
2. Intelligent Prompting
For providers without native tool support (Ollama, MLX, LMStudio):
# Ollama without native tool support
# AbstractCore handles this automatically
llm = create_llm("ollama", model="qwen3-coder:30b")
response = llm.generate("What's the weather?", tools=[get_weather])
Tool Execution
Execution Flow
- LLM generates response with tool calls
- AbstractCore detects tool calls in the response
- Event system emits TOOL_STARTED (preventable)
- Tools execute locally in AbstractCore (not by the provider)
- Results are collected and error handling applied
- Event system emits TOOL_COMPLETED with results
- Results are integrated into the final response
Architecture-Aware Tool Call Detection
AbstractCore automatically detects model architecture and uses the appropriate tool call format:
Qwen3 Architecture
<|tool_call|>{"name": "get_weather", "arguments": {"city": "Paris"}}</|tool_call|>
LLaMA3 Architecture
<function_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</function_call>
OpenAI/Anthropic
Native API tool calls (structured JSON)
XML-based Models
<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>
🎯 Key Point: You never need to worry about these formats! AbstractCore handles architecture detection, prompt formatting, and response parsing automatically. Your tools work the same way across all providers.
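For intuition, a detection pass over raw model output can be sketched like this. The pattern list and function names are illustrative assumptions, not AbstractCore internals:

```python
import json
import re

# One regex per tool call syntax listed above (illustrative, not AbstractCore's code)
TOOL_CALL_PATTERNS = [
    re.compile(r"<\|tool_call\|>(.*?)</\|tool_call\|>", re.DOTALL),  # Qwen3
    re.compile(r"<function_call>(.*?)</function_call>", re.DOTALL),  # LLaMA3
    re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL),          # XML
]

def extract_tool_calls(text: str) -> list[dict]:
    """Return the JSON payloads of any tool calls found in model output."""
    calls = []
    for pattern in TOOL_CALL_PATTERNS:
        for payload in pattern.findall(text):
            try:
                calls.append(json.loads(payload))
            except json.JSONDecodeError:
                continue  # malformed payload: skip rather than crash
    return calls

output = '<|tool_call|>{"name": "get_weather", "arguments": {"city": "Paris"}}</|tool_call|>'
print(extract_tool_calls(output))
# [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]
```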
Local Execution
All tools execute locally in AbstractCore, ensuring:
- Consistent behavior across all providers
- Security control through event system
- Error handling and validation
- Performance monitoring
Advanced Patterns
Tool Chaining
Tools can call other tools or return data that triggers additional tool calls:
@tool
def get_user_location(user_id: str) -> str:
"""Get the location of a user."""
# Mock implementation
locations = {"user123": "Paris", "user456": "Tokyo"}
return locations.get(user_id, "Unknown")
@tool
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: 72°F, sunny"
# LLM can chain these tools:
response = llm.generate(
"What's the weather like for user123?",
tools=[get_user_location, get_weather]
)
# LLM will first call get_user_location, then get_weather with the result
Conditional Tool Execution
Use the event system to control tool execution:
from abstractcore.events import EventType, on_global
def security_check(event):
"""Prevent execution of dangerous tools."""
dangerous_tools = ['delete_file', 'system_command', 'send_email']
for tool_call in event.data.get('tool_calls', []):
if tool_call.name in dangerous_tools:
print(f"Blocking dangerous tool: {tool_call.name}")
event.prevent() # Stop execution
on_global(EventType.TOOL_STARTED, security_check)
Error Handling
Tool-Level Error Handling
Handle errors within tools:
@tool
def safe_division(a: float, b: float) -> str:
"""Safely divide two numbers."""
try:
if b == 0:
return "Error: Division by zero is not allowed"
result = a / b
return f"{a} ÷ {b} = {result}"
except Exception as e:
return f"Error: {str(e)}"
System-Level Error Handling
AbstractCore provides comprehensive error handling:
from abstractcore.exceptions import ToolExecutionError
try:
response = llm.generate("Use the broken tool", tools=[broken_tool])
except ToolExecutionError as e:
print(f"Tool execution failed: {e}")
print(f"Failed tool: {e.tool_name}")
print(f"Error details: {e.error_details}")
Tool Syntax Rewriting
AbstractCore can rewrite tool call formats for compatibility with different agentic CLIs:
Codex CLI (Qwen3 format)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="qwen3"
)
# Output: <|tool_call|>{"name": "get_weather"}...</|tool_call|>
Crush CLI (LLaMA3 format)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="llama3"
)
# Output: <function_call>{"name": "get_weather"}...</function_call>
Gemini CLI (XML format)
response = llm.generate(
"What's the weather?",
tools=[get_weather],
tool_call_tags="xml"
)
# Output: <tool_call>{"name": "get_weather"}...</tool_call>
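Conceptually, this rewriting boils down to delimiter substitution. The standalone sketch below illustrates the idea; it is not AbstractCore's implementation, which also handles parsing and validation:

```python
# Tag pairs per format, mirroring the syntaxes shown earlier in this document
TAGS = {
    "qwen3": ("<|tool_call|>", "</|tool_call|>"),
    "llama3": ("<function_call>", "</function_call>"),
    "xml": ("<tool_call>", "</tool_call>"),
}

def rewrite_tool_tags(text: str, source: str, target: str) -> str:
    """Rewrite tool call delimiters from one format to another."""
    src_open, src_close = TAGS[source]
    dst_open, dst_close = TAGS[target]
    return text.replace(src_open, dst_open).replace(src_close, dst_close)

qwen = '<|tool_call|>{"name": "get_weather"}</|tool_call|>'
print(rewrite_tool_tags(qwen, "qwen3", "xml"))
# <tool_call>{"name": "get_weather"}</tool_call>
```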
Event System Integration
Monitor and control tool execution through events:
Cost Monitoring
from abstractcore.events import EventType, on_global
def monitor_tool_costs(event):
"""Monitor costs of tool executions."""
tool_calls = event.data.get('tool_calls', [])
for tool_call in tool_calls:
if tool_call.name in ['expensive_api_call', 'premium_service']:
print(f"Warning: Using expensive tool {tool_call.name}")
on_global(EventType.TOOL_STARTED, monitor_tool_costs)
Performance Tracking
def track_tool_performance(event):
"""Track tool execution performance."""
duration = event.data.get('duration_ms', 0)
tool_name = event.data.get('tool_name', 'unknown')
if duration > 5000: # More than 5 seconds
print(f"Slow tool execution: {tool_name} took {duration}ms")
on_global(EventType.TOOL_COMPLETED, track_tool_performance)
Best Practices
1. Clear Documentation
Always provide clear docstrings for your tools:
@tool
def send_email(to: str, subject: str, body: str) -> str:
"""Send an email to the specified recipient.
Args:
to: Email address of the recipient
subject: Subject line of the email
body: Main content of the email
Returns:
Success message or error description
Note:
This tool requires email configuration to be set up.
Use with caution as it sends actual emails.
"""
# Implementation here
2. Input Validation
Always validate and sanitize inputs:
@tool
def process_user_input(user_data: str) -> str:
"""Process user input safely."""
# Validate input length
if len(user_data) > 1000:
return "Error: Input too long (max 1000 characters)"
# Sanitize input
import html
safe_data = html.escape(user_data)
# Process safely
return f"Processed: {safe_data}"
Troubleshooting
Common Issues
- Tool not being called: Check tool description and parameter names
- Invalid JSON in tool calls: Ensure proper error handling in tools
- Tools timing out: Implement proper timeout handling
- Memory issues with large tools: Use streaming or chunking
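For the timeout issue, one option is to wrap slow tool bodies with a hard deadline. The helper below is an illustrative sketch using the standard library, not an AbstractCore API:

```python
import concurrent.futures
import time

def run_with_timeout(fn, *args, timeout_s: float = 5.0, **kwargs):
    """Run a tool function, returning an error string if it exceeds timeout_s."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return f"Error: {fn.__name__} timed out after {timeout_s}s"
    finally:
        pool.shutdown(wait=False)  # don't block on a stuck worker

def slow_lookup(city: str) -> str:
    time.sleep(0.5)  # simulate a hanging API call
    return f"Weather in {city}"

print(run_with_timeout(slow_lookup, "Paris", timeout_s=0.1))
# Error: slow_lookup timed out after 0.1s
```

Returning an error string (rather than raising) lets the LLM see the failure and recover in its next turn.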
Debug Mode
Enable debug mode to see tool execution details:
import logging
logging.basicConfig(level=logging.DEBUG)
# Tool execution details will be logged
response = llm.generate("Use tools", tools=[debug_tool])
Testing Tools
Test tools independently:
# Test tool directly
result = get_weather("Paris")
print(f"Direct call result: {result}")
# Test with LLM
response = llm.generate("What's the weather in Paris?", tools=[get_weather])
print(f"LLM result: {response.content}")