# Building Tools
Tools let agents interact with the world. This guide shows how to build custom tools.
## Tool Structure
A tool consists of:

```ts
interface Tool {
  name: string;
  description: string;
  parameters: JSONSchema;
  execute: (params: unknown) => Promise<string>;
}
```

- `name`: Unique identifier (lowercase, hyphens allowed)
- `description`: What the tool does (shown to the LLM)
- `parameters`: JSON Schema defining inputs
- `execute`: Function that performs the action
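The smallest thing that satisfies this interface is useful to see before building a real tool. A minimal sketch, with the `Tool` interface repeated locally (and `JSONSchema` stood in by `Record<string, unknown>`) so the snippet is self-contained:

```ts
// Local copy of the interface so this snippet runs on its own;
// in the project you would import it from core/types.
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // stand-in for JSONSchema
  execute: (params: unknown) => Promise<string>;
}

// A trivial tool: repeats its input back, demonstrating each field.
export const echoTool: Tool = {
  name: 'echo',
  description: 'Repeat the given text back to the caller',
  parameters: {
    type: 'object',
    properties: {
      text: { type: 'string', description: 'Text to repeat' },
    },
    required: ['text'],
  },
  execute: async (params) => {
    const { text } = params as { text: string };
    return `Echo: ${text}`;
  },
};
```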
## Creating a Tool
1. **Create the tool file**

   ```ts
   // src/tools/builtin/weather.ts
   import { Tool } from '../../core/types.js';

   export const weatherTool: Tool = {
     name: 'weather',
     description: 'Get current weather for a location',
     parameters: {
       type: 'object',
       properties: {
         location: {
           type: 'string',
           description: 'City name or coordinates',
         },
       },
       required: ['location'],
     },
     execute: async (params) => {
       const { location } = params as { location: string };
       // Call weather API (example)
       const response = await fetch(
         `https://api.weather.example/current?q=${encodeURIComponent(location)}`
       );
       const data = await response.json();
       return `Weather in ${location}: ${data.temp}°F, ${data.condition}`;
     },
   };
   ```
2. **Register the tool**

   Add it to the tool registry:

   ```ts
   // src/tools/index.ts
   import { weatherTool } from './builtin/weather.js';

   export const tools = {
     // ... existing tools
     weather: weatherTool,
   };
   ```
3. **Enable for agents**

   Add to agent configuration:

   ```yaml
   # config/my-agent.yaml
   tools:
     - calculator
     - datetime
     - weather # Your new tool
   ```
4. **Restart and test**

   ```sh
   docker compose restart
   ```

   ```
   You: What's the weather in San Francisco?
   Agent: [calls weather tool] The weather in San Francisco is 65°F, partly cloudy.
   ```
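Behind the scenes, the agent runtime looks a tool call up in the registry by name and invokes its `execute` function. The actual dispatcher in this codebase is not shown here; the sketch below illustrates the idea with a hypothetical local registry (a `datetime` tool is used as a stand-in so the snippet runs without network access):

```ts
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  execute: (params: unknown) => Promise<string>;
}

// Hypothetical in-memory registry standing in for src/tools/index.ts
const tools: Record<string, Tool> = {
  datetime: {
    name: 'datetime',
    description: 'Get the current date and time',
    parameters: { type: 'object', properties: {} },
    execute: async () => new Date().toISOString(),
  },
};

// Route a tool call from the model to the matching tool.
// Unknown names come back as an error string, so the LLM can recover.
export async function dispatchToolCall(
  name: string,
  params: unknown
): Promise<string> {
  const tool = tools[name];
  if (!tool) {
    return `Error: unknown tool '${name}'`;
  }
  return tool.execute(params);
}
```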
## Parameter Schema
Use JSON Schema to define parameters:

```ts
parameters: {
  type: 'object',
  properties: {
    query: {
      type: 'string',
      description: 'Search query',
    },
    limit: {
      type: 'number',
      description: 'Maximum results to return',
      default: 10,
    },
    filters: {
      type: 'array',
      items: { type: 'string' },
      description: 'Filter tags',
    },
  },
  required: ['query'],
}
```

The LLM sees this schema and knows how to call your tool.
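The schema also gives you something to validate against at runtime, since TypeScript types offer no protection once the arguments come from the model. A minimal hand-rolled check is sketched below; it covers only required keys and primitive types, and a real implementation might instead use a full JSON Schema validator such as Ajv:

```ts
// Shape of the object-style schemas used throughout this guide.
interface ObjectSchema {
  type: 'object';
  properties: Record<string, { type: string }>;
  required?: string[];
}

// Returns an error string on failure, or null if the params look valid.
// Only primitive types are checked (`typeof` cannot distinguish arrays
// from objects); this is deliberately not a full JSON Schema validator.
export function validateParams(
  schema: ObjectSchema,
  params: unknown
): string | null {
  if (typeof params !== 'object' || params === null) {
    return 'Error: params must be an object';
  }
  const obj = params as Record<string, unknown>;
  for (const key of schema.required ?? []) {
    if (!(key in obj)) {
      return `Error: missing required parameter '${key}'`;
    }
  }
  for (const [key, value] of Object.entries(obj)) {
    const spec = schema.properties[key];
    if (spec && typeof value !== spec.type) {
      return `Error: parameter '${key}' should be a ${spec.type}`;
    }
  }
  return null; // valid
}
```

Calling `validateParams` at the top of `execute` lets the tool return a readable error instead of crashing on malformed input.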
## Handling Errors
Return error messages as strings:

```ts
execute: async (params) => {
  try {
    const result = await someOperation(params);
    return `Success: ${result}`;
  } catch (error) {
    // `error` is `unknown` in TypeScript, so narrow before reading .message
    const message = error instanceof Error ? error.message : String(error);
    return `Error: ${message}`;
  }
}
```

The LLM receives the error and can respond appropriately to the user.
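If every tool repeats this try/catch, the pattern can be factored out. A sketch of such a wrapper (this helper is not part of the codebase shown here, just one way to avoid the repetition):

```ts
type Execute = (params: unknown) => Promise<string>;

// Wrap an execute function so thrown errors become `Error: ...` strings
// instead of escaping into the agent loop.
export function withErrorHandling(fn: Execute): Execute {
  return async (params) => {
    try {
      return await fn(params);
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return `Error: ${message}`;
    }
  };
}
```

Usage: `execute: withErrorHandling(async (params) => { ... })`.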
## Async Operations
Tools can perform async operations:

```ts
execute: async (params) => {
  const { url } = params as { url: string };

  const response = await fetch(url);
  if (!response.ok) {
    return `Failed to fetch: ${response.status}`;
  }

  const data = await response.text();
  return data.slice(0, 1000); // Truncate if needed
}
```

## Accessing Context
Tools can access session context through closure:

```ts
export function createContextualTool(getContext: () => Context): Tool {
  return {
    name: 'contextual-tool',
    description: 'Uses session context',
    parameters: { type: 'object', properties: {} },
    execute: async () => {
      const context = getContext();
      return `Session: ${context.sessionId}, Project: ${context.projectId}`;
    },
  };
}
```

## Example: Database Query Tool
```ts
import { Tool } from '../../core/types.js';
import { db } from '../../database.js';

export const queryTool: Tool = {
  name: 'query_database',
  description: 'Run a read-only SQL query against the database',
  parameters: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description: 'SQL SELECT query',
      },
    },
    required: ['query'],
  },
  execute: async (params) => {
    const { query } = params as { query: string };

    // Security: Only allow SELECT
    if (!query.trim().toLowerCase().startsWith('select')) {
      return 'Error: Only SELECT queries are allowed';
    }

    try {
      const results = await db.all(query);
      return JSON.stringify(results, null, 2);
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return `Query error: ${message}`;
    }
  },
};
```

## Best Practices
- Single responsibility: Each tool does one thing well
- Clear descriptions: Help the LLM understand when to use it
- Validate inputs: Don’t trust LLM-provided parameters blindly
- Handle errors gracefully: Return helpful error messages
- Limit output size: Truncate large responses
- Log operations: Help with debugging and auditing
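The last two practices, limiting output size and logging, can also be handled once in a wrapper rather than in every tool. A sketch under the same caveats as the error-handling example above (the 4000-character default cap is an arbitrary choice, not a project setting):

```ts
type Execute = (params: unknown) => Promise<string>;

// Log each call for auditing and truncate oversized results so a single
// tool response cannot flood the model's context window.
export function withLimits(
  name: string,
  fn: Execute,
  maxChars = 4000
): Execute {
  return async (params) => {
    console.log(`[tool:${name}] called with ${JSON.stringify(params)}`);
    const result = await fn(params);
    if (result.length > maxChars) {
      return (
        result.slice(0, maxChars) +
        `\n[truncated ${result.length - maxChars} characters]`
      );
    }
    return result;
  };
}
```

Usage: `execute: withLimits('query_database', async (params) => { ... })`.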