Toolsets
Toolsets are one of the three core building blocks of Fabrix.ai's Agentic Framework — alongside Personas and Prompt Templates. They define what the agent is capable of doing by providing access to MCP tools that interact with external systems.
1. What Are Toolsets?
A Toolset is a YAML-based configuration that groups one or more MCP tools (Model Context Protocol tools). These are the agent's "capabilities" — such as running pipelines, querying streams, fetching cached documents, reading assets, performing network operations, or executing external tasks.
MCP (Model Context Protocol) tools provide a standardized interface for AI assistants to interact with external systems, execute operations, and retrieve data. Each tool is defined through a configuration that specifies its capabilities, required inputs, and execution behavior. Toolsets are always associated with an AI Project, and only the Personas inside that project can use those tools.
This guide covers the foundational concepts required to understand and create MCP tool configurations. For detailed specifications of individual tool handlers, refer to the Tool Handlers Guide.
Key Characteristics
| Feature | Description |
|---|---|
| Logical grouping of tools | Each toolset contains multiple related MCP tools. |
| Project-scoped | Toolsets belong to an AI project. |
| Persona-controlled access | Personas decide which toolsets are available inside a conversation. |
| YAML-defined | Fully configurable through YAML. |
| Supports multiple tool types | runPipeline, streamQuery, dataQuery, document tools, etc. |
2. Toolset Configuration Guide
2.1 Tool Anatomy
Every MCP tool configuration consists of three primary components:
Tool Metadata
Identifies and describes the tool instance.
name: fetch_device_alerts
type: streamQuery
description: Retrieves active alerts for network devices based on severity and time range
- `name` (string, required) - Unique identifier within the toolset. Must use only underscores (`_`) to separate words; no hyphens, spaces, or special characters. Used for invocation and reference.
- `type` (string, required) - Specifies the tool handler (e.g., `streamQuery`, `runPipeline`, `RESTAPI`, `dashboardManagement`). Determines how the tool executes and what configuration it requires.
- `description` (string, required) - Human-readable explanation of the tool's purpose and behavior. This field is critical: LLMs use descriptions to determine when and how to invoke the tool. A well-written description directly impacts tool selection accuracy. Should include purpose, functionality, key features, use cases, and cache behavior (if applicable). Supports multi-line text using `>` or `|-` YAML syntax.
Configuration Block
Defines tool-specific behavior and settings. Structure varies by tool type.
stream: network-alerts-stream
data_format: csv
save_to_cache: auto
result_columns:
device_name: "Device"
severity: "Alert Severity"
timestamp: "Time"
Common configuration fields:
- `save_to_cache` (string or boolean, optional) - Controls caching behavior for tool outputs. Can be `auto`, `yes`, or `no` (or `true`/`false`). Cached results can be retrieved using context cache tools.
- `stream` (string, optional) - Required for `streamQuery` type tools. Specifies the persistent stream name to query.
- `data_format` (string, optional) - Specifies the format of data returned by stream queries. Common values: `json`, `csv`, `text`.
- `result_columns` (object, optional) - Defines which columns to extract from stream query results. Maps internal pstream column names to user-friendly labels. Only the mapped columns with their labels are displayed to users in the chat app; users will not see the actual pstream column names.
Configuration blocks contain:
- Data sources (streams, databases, APIs)
- Output formatting preferences
- Caching behavior
- Column mappings and transformations
- Custom configuration specific to tool handlers (see Custom Configuration section)
Parameters Section
Declares input values the tool accepts at runtime.
parameters:
- name: severity_level
type: string
description: Filter alerts by severity (critical, warning, info)
operator: "="
queryMapping: "severity = '{{severity_level}}'"
required: true
Parameters enable dynamic behavior through runtime value substitution.
2.2 Parameter Configuration
Parameters define the interface between the tool and its caller. Each parameter specification includes metadata that controls validation, substitution, and query generation.
Core Parameter Fields
- `name` - Identifier used in templates and query mappings.
- `type` - Data type constraint. Supported types:
  - `string` (default) - Text values
  - `integer` - Whole numbers
  - `number` - Decimal numbers
  - `boolean` - true/false values
  - `array` - Lists of values
  - `object` - Complex nested structures
- `description` - Explains the parameter's purpose and valid values. LLMs rely heavily on parameter descriptions to understand what values to provide and in what format. Include formatting requirements, constraints, and examples in the description.
- `required` - Boolean indicating if the parameter must be provided.
- `default` - Fallback value when parameter is not supplied.
parameters:
- name: max_results
type: integer
description: Maximum number of records to return
required: false
default: 100
Query-Related Fields
For tools that generate queries (streamQuery, streamQueryAggs), additional fields control how parameters map to query logic.
- `operator` - Comparison operator applied in the query (`=`, `>`, `<`, `contains`, `in`, `between`).
- `queryMapping` - Template showing how the parameter integrates into query syntax.
- `use_in_cfxql` - Controls whether the parameter participates in CFXQL query generation. Set to `false` when the parameter is used only for template substitution.
parameters:
- name: device_ip
type: string
description: IP address of the target device
operator: "="
queryMapping: "ip_address = '{{device_ip}}'"
parameters:
- name: stream
type: string
description: Name of the stream to query
use_in_cfxql: false
2.3 Custom Configuration
custom_config (object, optional)
- Contains tool-specific configuration that varies by tool type
- Structure depends on the tool `type` and handler requirements
- Required for some tool types (e.g., `runPipeline`), optional for others
- The fields within `custom_config` depend on what the specific tool handler expects
Common custom_config fields:
custom_config.template_type (string, optional)
- Template engine used for rendering content (commonly used with `runPipeline` tools)
- Supported values: `jinja` or `mako`
- Used to process variables and expressions in content
- Choose based on your preference and existing templates
custom_config.pipeline_content (string, optional)
- Pipeline code that executes when the tool is called (used with `runPipeline` tools)
- Uses RDAF pipeline syntax with operators like `@dm:empty`, `@sshv2:execute`
- Supports Jinja or Mako templating for dynamic values (e.g., `{{ ip_address }}`)
- Multi-line content should use `|` or `|-` YAML syntax
pipeline_output_columns (array, optional)
- Specifies which columns from the pipeline output should be returned
- Format: `column_name` or `source_column,destination_column`
- Examples: `source_ip,output`, `result`
- Used to map pipeline results to tool output
Note: The custom_config structure varies based on the specific tool handler. Refer to handler-specific documentation for the exact configuration requirements for each tool type.
2.4 Flex Attributes and Templating
Flex attributes enable dynamic configuration through template-based value substitution. Fields supporting flex attributes accept runtime values instead of static strings.
Template Structure
Flex attributes use a two-part structure:
- `template_type` - Specifies the templating engine (`jinja` or `mako`)
- `template` - The template string containing substitution placeholders
Static vs Dynamic Configuration
Static Configuration:
Dynamic Configuration with Flex Attributes:
With corresponding parameter:
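The contrast can be sketched as follows; the stream and parameter names are illustrative, not taken from a real project:

```yaml
# Static configuration: the stream name is fixed when the tool is authored
stream: network-alerts-stream

# Dynamic configuration with a flex attribute: the stream name is
# rendered at runtime from the stream_name parameter
stream:
  template_type: jinja
  template: "{{ stream_name }}"

# Corresponding parameter (excluded from CFXQL generation since it is
# used only for template substitution)
parameters:
  - name: stream_name
    type: string
    description: Name of the stream to query
    use_in_cfxql: false
```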
Common Flex Attribute Use Cases
Dynamic Stream Selection:
Dynamic Filters:
Dynamic Output Naming:
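A sketch of each pattern; field names such as `output_document_name` are illustrative and may differ per tool handler:

```yaml
# Dynamic stream selection
stream:
  template_type: jinja
  template: "{{ stream_name }}"

# Dynamic filter built from a runtime parameter
extra_filter:
  template_type: jinja
  template: "severity = '{{ severity_level }}'"

# Dynamic output naming for cached documents
output_document_name:
  template_type: jinja
  template: "alerts_{{ device_name }}"
```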
Template Syntax
Jinja Templates use double curly braces for variable substitution:
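For example:

```yaml
template_type: jinja
template: |
  Device: {{ device_name }}
  Filter: severity = '{{ severity_level }}'
```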
Mako Templates use similar syntax with additional Python expression support:
template_type: mako
template: |
% if severity == 'critical':
Priority: HIGH
% else:
Priority: NORMAL
% endif
For complex logic, prefer pipeline content templates over inline flex attributes.
2.5 Output and Caching
save_to_cache Behavior
save_to_cache (string or boolean, optional)
- Controls whether tool output is stored in the context cache for reuse
- Values:
  - `auto` - Automatically caches when output exceeds the size threshold (default: 100 lines)
  - `yes` or `true` - Always caches output regardless of size
  - `no` or `false` - Never caches output
- Cached results can be retrieved using context cache tools
cache_line_threshold (integer, optional)
- Used with `save_to_cache: auto` to set the line count threshold
- Default: 100 lines
- Example: `cache_line_threshold: 150`
Output Document Naming
When caching is enabled, specify a document name for retrieval:
Dynamic naming with templates:
Static naming:
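A minimal sketch, assuming the handler accepts an `output_document_name` field (the exact field name may vary by tool handler):

```yaml
# Dynamic naming with a template
output_document_name:
  template_type: jinja
  template: "device_report_{{ device_name }}"

# Static naming
output_document_name: weekly_alert_summary
```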
Data Format Control
data_format (string, optional)
- Specifies output format for downstream consumption
- Common values: `csv`, `json`, `text`
- Supported formats vary by tool type
2.6 Template Engines: Jinja vs Mako
When to Use Jinja
Jinja is the recommended default for most use cases:
- Simple variable substitution
- Basic conditional logic
- String formatting and filters
template_type: jinja
template: |
Device: {{device_name}}
Status: {% if is_active %}Online{% else %}Offline{% endif %}
When to Use Mako
Use Mako for:
- Complex Python expressions
- Advanced control flow
- Inline Python code execution
template_type: mako
template: |
<%
import datetime
current_time = datetime.datetime.now()
%>
Report generated at: ${current_time}
Template Best Practices
- Keep templates simple - Complex logic belongs in pipelines, not templates
- Validate parameters - Ensure required parameters have sensible defaults or validation
- Use descriptive placeholders - `{{start_date}}` is clearer than `{{d1}}`
- Escape special characters - Use proper escaping for quotes and special characters in templates
2.7 Tool Configuration Best Practices
Naming Conventions
Tool names - Use descriptive, action-oriented names with underscores:
- `get_device_alerts`
- `update_topology_graph`
- `fetch_performance_metrics`
Important
Tool names must use only underscores (_) to separate words. Do not use hyphens (-), spaces, or any other special characters in tool names. This is a requirement for proper tool registration and execution.
Parameter names - Use clear, lowercase names with underscores:
- `device_id`, not `deviceId` or `id`
- `start_timestamp`, not `start` or `ts`
Organize toolsets by domain:
- `network_automation`
- `snow` (ServiceNow)
- `visualization`
- `secops` (Security Operations)
- etc.
Writing Effective Descriptions
Descriptions serve as the primary interface between LLMs and tools. The LLM reads descriptions to determine:
- Whether this tool is appropriate for the current task
- What parameters to provide
- What format those parameters should take
- What output to expect
Tool Description Guidelines:
Write descriptions that answer:
- What does this tool do?
- What data does it return?
- When should it be used?
- What are the prerequisites or dependencies?
Good example:
description: >-
Retrieves active alerts for network devices filtered by severity level
and time range. Returns device name, alert type, severity, and timestamp
in CSV format. Use this tool when analyzing current network issues or
generating alert reports. Requires device IDs to exist in the topology database.
Poor example:
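For contrast, a description like the following gives the LLM almost nothing to decide whether or how to invoke the tool:

```yaml
description: Gets alerts
```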
Parameter Description Guidelines:
Parameter descriptions must be explicit about:
- Expected format - Date format, IP address notation, comma-separated lists
- Valid values - Enumerated options, ranges, patterns
- Requirements - Dependencies on other parameters, conditional usage
- Examples - Sample values that clarify format
Good examples:
parameters:
- name: date_range
type: string
description: >-
Time range in ISO 8601 format (YYYY-MM-DD).
Example: '2024-01-15' for a single day or '2024-01-15/2024-01-20' for a range.
- name: severity
type: string
description: >-
Alert severity level. Valid values: 'critical', 'warning', 'info'.
Case-sensitive. Use 'critical' for emergency issues requiring immediate attention.
- name: device_ids
type: string
description: >-
Comma-separated list of device IDs to query.
Example: 'router-01,switch-03,firewall-12'. No spaces between IDs.
Leave empty to query all devices (may impact performance).
Poor examples:
parameters:
- name: date_range
description: Date range
- name: severity
description: Severity of the alert
- name: device_ids
description: Device IDs
Impact on LLM Behavior:
Detailed descriptions improve:
- Tool selection accuracy - LLM chooses the right tool for the task
- Parameter correctness - LLM provides properly formatted values
- Error reduction - Fewer invalid inputs and retry attempts
- User experience - More relevant results on first attempt
Treat descriptions as API documentation that the LLM reads and interprets literally.
Parameter Design
Distinguish required from optional parameters:
Mark parameters as required: true only when the tool cannot function without them. Required parameters must not have default values.
Provide defaults for optional parameters:
Default values make parameters optional and reduce caller burden. Defaults also serve as examples of valid values:
```yaml
parameters:
  - name: device_id
    type: string
    description: >-
      Unique identifier of the device to query. Required.
      Format is 'device-###' where ### is a numeric ID.
    required: true   # No default - this is mandatory
  - name: max_results
    type: integer
    description: >-
      Maximum number of records to return. Optional.
      Defaults to 100 to balance performance and completeness.
    default: 100     # Presence of default makes this optional
  - name: include_resolved
    type: boolean
    description: >-
      Whether to include resolved alerts in results. Optional.
      Set to true to see historical data.
    default: false
  - name: sort_order
    type: string
    description: >-
      Sort direction for results. Valid values are 'asc' (oldest first)
      or 'desc' (newest first). Optional.
    default: desc
```
**Default value behavior:**
- Providing a default value makes a parameter optional (overrides `required: false` or absence of required field)
- Used automatically when parameter is not provided by caller
- Must match the declared parameter type
- Should represent the most common or safest option
- Required parameters must never have defaults
**Use explicit types:**
Type declarations enable validation and prevent errors. The LLM uses type information to format parameter values correctly.
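A short sketch (parameter names are illustrative):

```yaml
parameters:
  - name: max_results
    type: integer      # the LLM sends 100, not "one hundred"
    description: Maximum number of records to return
  - name: include_resolved
    type: boolean      # true/false, not the strings "yes"/"no"
    description: Whether to include resolved alerts
```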
#### Error Prevention {#2.7.4}
**Validate templates:**
- Ensure all template placeholders have corresponding parameters
- Test templates with various parameter combinations
- Verify template syntax is correct for the chosen template engine
**Test with missing parameters:**
- Verify default values work correctly
- Test behavior when optional parameters are omitted
- Ensure required parameters are properly enforced
**Check field names:**
- For `streamQuery` and `streamQueryAggs`, validate field names against stream metadata
- Verify column names exist in target streams
- Check that result column mappings reference valid fields
**Verify credentials:**
- Tools requiring authentication (RESTAPI, Splunk) must reference valid credential names
- Ensure credentials are properly configured in the system
- Test authentication before deploying toolsets
### 2.8 Configuration Syntax Reference
#### YAML Structure {#2.8.1}
Tool configurations use YAML syntax with specific conventions:
**Multi-line strings** - Use `|` for literal blocks or `>-` for folded blocks:
```yaml
# Preserves newlines and formatting
template: |
Line 1
Line 2
Line 3
# Folds into single line, removes trailing newline
description: >-
This is a long description that will be folded
into a single line for readability.
Lists - Use hyphen notation for arrays:
Objects - Use indentation for nested structures:
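Both patterns, using fields that appear elsewhere in this guide:

```yaml
# List - hyphen notation for arrays
pipeline_output_columns:
  - source_ip,output
  - result

# Object - indentation for nested structures
result_columns:
  device_name: "Device"
  severity: "Alert Severity"
```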
Required vs Optional Fields
Each tool type defines its own required fields. Common patterns:
Always required:
- `name`
- `type`
- `description`
Commonly required:
- `stream` (for stream-based tools)
- `parameters` (when tool accepts inputs)
Often optional:
- `save_to_cache`
- `data_format`
- `extra_filter`
Refer to the Tool Handlers Guide for specific requirements per tool type.
Field Value Formats
Boolean values:
Integer values:
String values:
Template objects:
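One example of each format, reusing fields from earlier sections:

```yaml
# Boolean values
save_to_cache: true

# Integer values
cache_line_threshold: 150

# String values
stream: network-alerts-stream
data_format: csv

# Template objects
stream:
  template_type: jinja
  template: "{{ stream_name }}"
```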
2.9 Understanding Tool Execution Flow
Execution Sequence
- Parameter Validation - Required parameters checked, types validated
- Template Rendering - Flex attributes populated with parameter values
- Resource Access - Credentials retrieved, connections established
- Operation Execution - Query runs, pipeline executes, API called
- Output Processing - Results formatted, columns mapped
- Caching Decision - Output stored if caching enabled
- Response Return - Formatted data returned to caller
Parameter Substitution
Parameters flow through the tool in order: the supplied value is validated against its declared type, substituted into templates and query mappings, and then used in the executed operation.
Example flow:
# Configuration
parameters:
- name: device_ip
type: string
queryMapping: "ip = '{{device_ip}}'"
# User provides: device_ip = "10.0.0.1"
# Template renders: ip = '10.0.0.1'
# Query executes with substituted value
Caching Workflow
When save_to_cache is enabled:
- Tool executes and generates output
- Output size evaluated against threshold (if `auto`)
- Document created in context cache with specified name
- Subsequent tools can reference cached document
- Cache persists for MCP server session duration
Complete Example: Network Automation Toolset
Here's a more comprehensive example showing multiple tool types:
Example: Complete Network Automation Toolset
enabled: true
domain: network_automation
description: Network Automation related tools.
tools:
- name: run_ssh_command
type: runPipeline
save_to_cache: auto
description: >
Run SSH commands on network devices with caching support. Multiple commands can be combined in a single call by inserting a newline character between them.
The same command(s) can be run on multiple devices by providing a comma-separated list of device IP addresses.
Features:
- Executes SSH commands on target devices
- Implements cache-first strategy for command outputs
- Stores results in cache for future reference
Cache Details:
- Document format: cache_run_{ip_address}_{ssh_command}
- Access cached results using cache-fetch or cache-search tools
- List all cached entries using cache-list
custom_config:
template_type: jinja
pipeline_content: |
@dm:empty
--> @dm:addrow ip_address_column="{{ ip_address }}"
--> @sshv2:execute command="{{ ssh_command }}" & column_name="ip_address_column"
& secret_names="ssh-cred"
pipeline_output_columns:
- source_ip,output
parameters:
- name: ip_address
description: IP Address of the target device. This can be a comma-separated list of IP addresses.
type: string
- name: ssh_command
description: SSH command to execute (e.g., show run, show ip interface brief)
type: string
- name: run_config_change_command
type: runPipeline
save_to_cache: auto
description: >-
This tool is used to run commands that change the configuration of a device. This tool should be the preferred method of sending config changes to a network device.
custom_config:
template_type: jinja
pipeline_content: |
@dm:empty
--> @dm:addrow
profile_name = 'interface_shut' and command = "{{ '\\n'.join(commands.split(',')) }}" and condition="{{ "{% raw %}{{device_type == 'cisco_xe'}}{% endraw %}" }}"
--> @dm:save name = 'profile_dataset'
--> @c:new-block
--> @dm:empty
--> @dm:addrow ip = '{{ ip_address }}' and profiles = 'interface_shut'
--> @dm:save name is 'ip_dataset'
--> @c:new-block
--> @dm:empty
--> @dm:addrow ip = '{{ ip_address }}' and device_type = 'cisco_xe'
--> @dm:save name = 'devices_details_dataset'
--> @c:new-block
--> @dm:recall name is 'ip_dataset'
--> @network-device:execute-config
ip_column_name = 'ip' and
profile_dataset_name = 'profile_dataset' and
profile_column_name = 'profiles' and skip_tcp_check is yes and devices_details_dataset='devices_details_dataset'
pipeline_output_columns:
- output
parameters:
- name: ip_address
description: IP Address of the target device
type: string
- name: commands
description: >-
SSH commands to execute as a comma-separated list. Provide only the commands that change the actual config. The tool takes care of going into and out of configuration mode.
type: string
- name: get_golden_config
type: runPipeline
save_to_cache: yes
stream: syslog-config-backup-stream
description: Get the golden configuration for a network device.
custom_config:
template_type: jinja
pipeline_content: |
--> @dm:empty
--> @dm:addrow name = 'syslog-config-backup-stream'
--> #dm:query-persistent-stream source_ip='{{ip_address}}' and golden_config='Yes'
pipeline_output_columns:
- output
parameters:
- name: ip_address
description: IP address of the device to get golden configuration
- name: get_latest_config
type: runPipeline
stream: syslog-config-backup-stream
save_to_cache: yes
description: Get the latest configuration for a network device.
custom_config:
template_type: jinja
pipeline_content: |-
--> @dm:empty
--> @dm:addrow name = 'syslog-config-backup-stream'
--> #dm:query-persistent-stream source_ip='{{ip_address}}' --> @dm:head n='1' --> @dm:selectcolumns include='output'
--> @dm:eval output = 'output'
parameters:
- name: ip_address
description: IP address of the device to get latest configuration
- name: get_config_compliance_policies
type: streamQuery
stream: network_configuration_compliance_policies
description: Get the configuration compliance rules or policies for a network device
data_format: json
result_columns:
id:
rule:
description:
os_types:
command:
check:
- name: get_approved_network_os_images_list
type: streamQuery
stream: approved_network_os_images
save_to_cache: auto
description: Get the list or criteria of approved network OS images for a network device
data_format: json
result_columns:
vendor:
os_type:
version_criteria:
approval_date:
description:
notes:
- name: run_diagnostic_tests
type: runPipeline
description: |
Run diagnostic tests (ping) on a network device.
Features:
- Executes diagnostic tests (ping) on a network device
- Implements cache-first strategy for command outputs
- Stores results in cache for future reference and then retrieves them using cache-fetch and cache-search
Cache Details:
- Access cached results using cache-fetch or cache-search tools
- List all cached entries using cache-list
custom_config:
template_type: jinja
pipeline_content: |
--> @dm:empty
--> @dm:addrow ip_address='{{ip_address}}'
--> @diagnostictools:ping column_name_host="ip_address"
parameters:
- name: ip_address
description: IP address of the device to run diagnostics on
type: string
Example: Visualization Toolset with Dashboard Management
Here's an example showing a visualization toolset with dashboardManagement type tools, subType fields, and a config section:
Example: Visualization Toolset
enabled: true
domain: visualization
description: >-
Visualization for user. Uses Widgets and Dashboard to show visualizations to
user.
tools:
- name: list-widget-schemas
type: dashboardManagement
subType: list-widget-schemas
save_to_cache: 'no'
description: >
List supported widget types. From this get name (schema name) so that
schema can be retrieved using get-widget-schema
parameters: []
- name: get-widget-schema
type: dashboardManagement
subType: get-widget-schema
save_to_cache: 'no'
description: >
Get the schema for a specific widget type. Use this to understand the
required properties and structure for creating widgets.
parameters:
- name: name
description: Name of the widget schema (from list-widget-schemas)
type: string
required: true
- name: add-static-data-widget
type: dashboardManagement
subType: add-static-data-widget
save_to_cache: 'no'
description: >
Add a widget to the dashboard. If there is no incident associated,
use 'Other' for incident_id. First get the schema for the widget, and
then generate data (dict) based on user instructions.
parameters:
- name: data
description: >
Static Data generated using the schema for the widget_type
(obtained using get-widget-schema). This is mandatory.
type: object
required: true
- name: layoutConfig
description: >
Layout information on where to place the widget on the canvas.
Canvas size is 1920 pixels wide, 1080 pixels tall.
type: object
required: true
- name: widget_type
description: Type of widget (such as pie_chart, bar_chart, label)
type: string
required: true
- name: title
description: Title for the widget
type: string
required: true
- name: incident_id
description: Incident ID if any. If no incident ID, use Other
type: string
- name: replace
description: If true, Replace current widget
type: boolean
- name: widget_id
description: >
Unique ID for the widget. Must specify if replace is true.
type: string
config:
test_dashboard: ai_generated_dashboard_playground
widget_types:
- name: pie_chart_static_data
type: pie_chart
data_source: static
description: Pie chart widget
schema: |
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": ["title", "segments"],
"properties": {
"title": {"type": "string"},
"segments": {
"type": "array",
"items": {
"type": "object",
"required": ["label", "value"],
"properties": {
"label": {"type": "string"},
"value": {"type": "number"}
}
}
}
}
}
3. Quick Reference
3.1 Common Tool Types
| Tool Type | Purpose | Key Configuration |
|---|---|---|
| `streamQuery` | Query pstream data | `stream`, `result_columns` |
| `streamQueryAggs` | Aggregate pstream data | `stream`, `aggs`, `groupby` |
| `runPipeline` | Execute RDA pipeline | `pipeline_name` or `pipeline_content` |
| `RESTAPI` | HTTP API calls | `http.url`, `http.method` |
| `contextCache` | Manage cached documents | `subType` (update, fetch, search, etc.) |
| `template` | Execute templated logic | `templateType`, `template` |
Detailed Tool Handler Information
For a detailed deep dive into building tool handlers, including complete examples, handler-specific configuration options, and advanced usage patterns, refer to the Tool Handlers Guide.
3.2 Parameter Type Quick Reference
- `string`: `"text value"`
- `integer`: `42`
- `number`: `3.14`
- `boolean`: `true` or `false`
3.3 Template Type Quick Reference
Simple substitution:
Conditional:
Complex logic:
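A sketch of each case (variable names are illustrative):

```yaml
# Simple substitution (Jinja)
template: "Device: {{ device_name }}"

# Conditional (Jinja)
template: "{% if is_active %}Online{% else %}Offline{% endif %}"

# Complex logic (Mako, inline Python)
template: |
  <%
  uptime_days = uptime_seconds // 86400
  %>
  Uptime: ${uptime_days} days
```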
4. Toolset Lifecycle (How They Work in Conversations)
The toolset lifecycle follows this flow:
- Project defines the toolsets - Toolsets are created and configured within an AI Project
- Persona chooses which toolsets it has access to - Personas are granted access to specific toolsets
- In a conversation:
- User prompt → model interprets
- Model selects a tool
- Tool executes inside Fabrix runtime
- Results returned to the model
- Model generates final response
This is why toolsets are the capability layer of the agent.
5. Creating / Editing Toolsets in the UI
The Toolsets interface provides comprehensive management capabilities for toolsets within your AI project.
Available Operations
Local Toolsets Tab
Shows toolsets that belong to the current AI project. You can:
- View all toolsets - Browse all toolsets in the project with details about tools and resources
- Add new toolsets - Create new toolsets by adding YAML configurations
- Edit / update toolsets - Modify existing toolset configurations
- Delete toolsets - Remove toolsets that are no longer needed
- Inspect toolsets - View the number of tools and resources in each toolset
Imported Toolsets Tab
Shows toolsets that this project inherited from other projects. Imported toolsets are read-only and cannot be edited here. This is useful for shared libraries like:
- `common` - Common utilities and shared tools
- `context_cache` - Context caching operations
- `document_creator` - Document generation tools
- Diagnostic tools - System diagnostic capabilities
- Salesforce SDK toolsets - Salesforce integration tools
- And other shared toolset libraries
Creating a New Toolset
To create a new toolset:
- Navigate to AI Administration → Projects
- Select the AI project where you want to create the toolset
- Go to MCP → Toolsets → Add
- Paste the YAML configuration into the editor
- Click Save
Editing an Existing Toolset
To edit an existing toolset:
- Navigate to AI Administration → Projects
- Select the AI project containing the toolset
- Go to MCP → Toolsets
- Select the toolset you want to edit from the Local Toolsets tab
- Modify the YAML configuration in the editor
- Click Save to apply changes
Validation
The system performs validation including:
- YAML syntax check
- Required fields validation
- Tool-type validation
- Pipeline content validation (for runPipeline)
If validation passes, the toolset becomes active in the project immediately.
Next Steps
- Learn about Personas to understand how toolsets are used by agents
- Explore Prompt Templates to see how tools are invoked in workflows
- Review the MCP Tools Panel documentation for user-facing tool information
- See Building Custom Agents for a complete example
- Refer to the Tool Handlers Guide for detailed specifications of individual tool types, including complete examples and handler-specific configuration options