Tool Use Patterns

How agents select and invoke tools: best practices for tool descriptions, parameter schemas, and reliable invocation.

Tool use is the foundational pattern in agent skill design. It’s how an AI agent decides which skill to invoke, how to invoke it, and what to do with the result.

How tool selection works

When an agent gets a task, it reads its available tool descriptions and tries to match them against what it needs to do. This means tool descriptions are arguably the most important part of skill design. They’re the interface between the agent’s reasoning and your skill’s functionality.

The selection process goes roughly like this:

  1. The agent receives a user request
  2. It reasons about what actions might be needed
  3. It scans available tool descriptions for relevant capabilities
  4. It picks the best match and constructs the input parameters
  5. The tool executes and returns a result
  6. The agent folds the result into its reasoning
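The loop above can be sketched in code. The keyword-overlap scoring below is a stand-in for the LLM's actual reasoning over descriptions, and all names are illustrative, but the data flow is the same: scan descriptions, pick the best match, invoke, fold in the result.

```typescript
// Minimal sketch of the selection loop. `Tool` and the scoring
// heuristic are illustrative; a real agent reasons with an LLM.

interface Tool {
  name: string;
  description: string;
  run: (input: Record<string, string>) => string;
}

const tools: Tool[] = [
  {
    name: "search_files",
    description: "Search for files by name pattern in the project directory.",
    run: (input) => `paths matching ${input.pattern}`,
  },
  {
    name: "grep",
    description: "Search file contents for a text pattern.",
    run: (input) => `lines containing ${input.pattern}`,
  },
];

// Steps 3-4: scan descriptions and pick the best match for the request.
function selectTool(request: string, available: Tool[]): Tool {
  const words = request.toLowerCase().split(/\s+/);
  const score = (t: Tool) =>
    words.filter((w) => t.description.toLowerCase().includes(w)).length;
  return available.reduce((best, t) => (score(t) > score(best) ? t : best));
}

// Steps 5-6: execute the tool and fold the result into reasoning.
const chosen = selectTool("find files by name in the project", tools);
const result = chosen.run({ pattern: "**/*.ts" });
```

Note that selection hinges entirely on the description strings: the agent never sees the implementation, only the words you wrote.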

Write descriptions for the agent, not for humans

A common mistake is writing tool descriptions like API documentation, aimed at human developers. But the consumer of your description is an LLM, and it has different needs.

Bad description:

Search files using glob patterns. Supports * and ** wildcards.

Good description:

Search for files by name pattern in the project directory.
Use this when you need to find files matching a specific
name or extension (e.g., find all TypeScript files, locate
a config file by name). Returns matching file paths sorted
by modification time.

Do NOT use this for searching file contents — use the
grep tool instead.

The good description tells the agent:

  • When to use it (finding files by name or extension)
  • When NOT to use it (searching file contents; use grep instead)
  • What it returns (file paths, sorted by modification time)
  • Concrete examples (finding TypeScript files, locating config files)
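Wired into a tool definition, the good description might look like this. The `ToolDefinition` shape is a generic illustration, not any specific SDK's interface:

```typescript
// Illustrative tool definition; the shape loosely mirrors common
// tool-use APIs but is not tied to a particular SDK.

interface ToolDefinition {
  name: string;
  description: string;
  parameters: Record<string, { type: string; description: string }>;
}

const searchFiles: ToolDefinition = {
  name: "search_files",
  description: [
    "Search for files by name pattern in the project directory.",
    "Use this when you need to find files matching a specific name",
    "or extension (e.g., find all TypeScript files, locate a config",
    "file by name). Returns matching file paths sorted by",
    "modification time.",
    "",
    "Do NOT use this for searching file contents -- use the grep",
    "tool instead.",
  ].join("\n"),
  parameters: {
    pattern: {
      type: "string",
      description: "Glob pattern to match files (e.g., '**/*.ts')",
    },
  },
};
```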

Parameter schema design

Parameters are how the agent communicates intent to your skill. Well-designed parameters make it easy for the agent to construct correct invocations. Poorly designed ones lead to errors and retries.

Principles

  1. Use descriptive names. search_query beats q.
  2. Include parameter descriptions that explain what each parameter does and what values are valid.
  3. Set sensible defaults. Don’t require parameters that have obvious default values.
  4. Use enums for constrained choices. If a parameter can only be one of several values, enumerate them.
  5. Keep parameters flat. Deeply nested objects are harder for agents to construct correctly.

Example: good parameter schema

{
  name: "search_files",
  parameters: {
    pattern: {
      type: "string",
      description: "Glob pattern to match files (e.g., '**/*.ts', 'src/components/*.tsx')"
    },
    path: {
      type: "string",
      description: "Directory to search in. Defaults to the project root if not specified.",
      optional: true
    }
  }
}

Example: poor parameter schema

{
  name: "search",
  parameters: {
    q: { type: "string" },  // What kind of search? What format?
    opts: {
      type: "object",       // Nested object — harder for agents
      properties: {
        d: { type: "string" },  // Cryptic name
        r: { type: "boolean" }  // What does this do?
      }
    }
  }
}
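Principles 3 and 4 (defaults and enums) aren't shown in the schemas above; they can be expressed in the same style. The `sort` parameter below is hypothetical:

```typescript
// Hypothetical parameter with a constrained choice and a default.
// Enumerating valid values keeps the agent from guessing strings.
const sortParam = {
  type: "string",
  description:
    "Sort order for results. Defaults to 'modified' if not specified.",
  enum: ["modified", "name", "size"],
  default: "modified",
};
```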

Handling tool results

What your skill returns matters as much as what it accepts. The agent needs to interpret results and decide what to do next.

Return structured data when possible

Instead of returning a raw string, return structured data that the agent can reason about:

// Better: structured result
{
  matches: [
    { path: "src/index.ts", modified: "2026-03-25" },
    { path: "src/utils.ts", modified: "2026-03-20" }
  ],
  totalMatches: 2,
  searchPath: "/project"
}

// Worse: unstructured string
"Found 2 files:\nsrc/index.ts\nsrc/utils.ts"

Include context for decision-making

Help the agent understand not just what happened, but what it means:

// Good: includes context
{
  results: [...],
  truncated: true,
  totalAvailable: 1500,
  message: "Showing first 100 results. Use a more specific pattern to narrow results."
}
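One way to produce a result like the one above is to cap the list and attach enough context for the agent to decide its next step. The cap of 100 and the result shape are illustrative choices:

```typescript
// Cap results and attach decision-making context for the agent.
const MAX_RESULTS = 100;

interface SearchResult {
  results: string[];
  truncated: boolean;
  totalAvailable: number;
  message?: string;
}

function capResults(all: string[]): SearchResult {
  const truncated = all.length > MAX_RESULTS;
  const out: SearchResult = {
    results: all.slice(0, MAX_RESULTS),
    truncated,
    totalAvailable: all.length,
  };
  if (truncated) {
    // Tell the agent what happened AND what to do about it.
    out.message = `Showing first ${MAX_RESULTS} results. Use a more specific pattern to narrow results.`;
  }
  return out;
}
```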

Error handling

Tools will fail. Network requests time out, files don’t exist, permissions get denied. How you handle errors determines whether the agent can recover or gets stuck.

Return errors, don’t throw them

When possible, return error information as part of the result rather than throwing exceptions. This gives the agent a chance to reason about the failure and try something else.

// Good: error as data
{
  success: false,
  error: "File not found: /path/to/missing.ts",
  suggestion: "Check if the file path is correct. Use search_files to find the file."
}

Provide actionable error messages

Tell the agent what went wrong and what it can do about it:

  • “Permission denied: /etc/shadow. This file requires root access and cannot be read.” (agent knows to skip it)
  • “Rate limited. Try again in 30 seconds.” (agent knows to wait and retry)
  • “Invalid pattern syntax: '['. Did you mean to escape this character?” (agent knows to fix the input)
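Actionable messages pay off because the agent (or a thin layer around it) can parse them and react. A sketch for the rate-limit case, assuming the message format from the bullet above rather than any standard:

```typescript
// Extract the suggested wait from a rate-limit message so the
// caller can pause and retry. Message format is illustrative.
function parseRetryDelaySeconds(error: string): number | null {
  const m = error.match(/Try again in (\d+) seconds?/);
  return m ? Number(m[1]) : null;
}
```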

Key takeaways

  1. Tool descriptions are your most important interface. Write them for the agent, not for humans.
  2. Say when NOT to use a tool. This prevents misuse more effectively than positive descriptions alone.
  3. Use clear parameter names with descriptions. The agent can’t read your source code.
  4. Return structured data. It’s easier for agents to reason about than raw strings.
  5. Handle errors gracefully. Return error context so the agent can recover.