
Prompt Templates

1. Prompt Templates Overview

Prompt Templates are one of the three foundational building blocks of Fabrix.ai's Agentic AI system (along with Toolsets and Personas).

They define how an agent should think, what steps it should follow, and the exact workflow used to accomplish a task.

Prompt Templates are written in natural language and act as operational playbooks.

2. What Prompt Templates Do

Prompt Templates define:

  • The role of the agent
  • The mission and goals
  • The sequence of tool calls
  • The logic and workflow
  • The output format
  • The rules & constraints
  • The error-handling behavior
  • The tone and communication style

They are the "procedural brain" of an agent.

3. Prompt Templates UI

You can access Prompt Templates via:

MCP → Prompt Templates

There are two tabs:

  • Local — Templates created in the current AI project
  • Imported — Templates inherited from other AI projects

Click Add to create a new template.

4. Creating a Prompt Template

When adding a new prompt template, the UI contains the following fields:

4.1 Template Name

A unique identifier (preferably snake_case).

4.2 Description

Short, clear explanation of the template's purpose.

4.3 Prompt Template Editor

The large text area where you write the full instruction set.

4.4 Refine With AI (✨)

A blue sparkle-pen icon next to the editor.

You can:

  • Highlight a section → click ✨ → refine/improve
  • Fix formatting
  • Expand or compress text
  • Make instructions more structured
  • Convert raw ideas into clean workflows

This dramatically increases the quality of your prompts.

5. Structure of a High-Quality Prompt Template

A strong prompt template usually includes the following components:

  • ROLE - Defines the agent's persona for this workflow.

  • MISSION - Very clear statement of what the agent must accomplish.

  • GOALS - High-level outcomes the agent must produce.

  • Mandatory Tool Order - Critical tools the agent must call, in fixed order.

  • Initialization Steps - Setup logic, variable definitions, context-cache initialization.

  • Main Workflow - The step-by-step execution sequence (loops, decisions, queries, validations).

  • Post-Processing - Summaries, aggregations, computations, and validation checks.

  • Final Output Format - HTML, JSON, tables, Markdown — explicitly defined.

  • Error Handling - Rules for UNABLE, partial data, API failures, etc.

  • Tone & Communication Style - Professional, concise, analytical, etc.
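As a quick reference, these sections can be laid out as a skeleton. The placeholders in braces are to be filled in for your own workflow; this is a sketch, not a required syntax:

```text
**ROLE** – You are a {persona for this workflow}.
**MISSION** – {one-sentence statement of what must be accomplished}
**GOALS** – {high-level outcomes the agent must produce}
**MANDATORY TOOL ORDER** – 1. {tool_a}  2. {tool_b}  3. {tool_c}
**INITIALIZATION** – {setup logic, variables, context-cache documents}
**MAIN WORKFLOW** – {step-by-step loop, decisions, queries, validations}
**POST-PROCESSING** – {summaries, totals, validation checks}
**FINAL OUTPUT FORMAT** – {HTML / JSON / Markdown, defined exactly}
**ERROR HANDLING** – {rules for UNABLE, partial data, API failures}
**TONE** – {professional, concise, analytical}
```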

6. Best Practices for Writing Prompt Templates

This section illustrates these best practices through a full real-world Prompt Template used in Fabrix.ai for Network Configuration Compliance Analysis.

Prompt Templates define how an agent should think, behave, sequence tool calls, evaluate data, and format results. The more detailed the template, the more reliable the agent becomes.

6.1 Complete Template

**ROLE**

You are a **Network Compliance Engineer**.

Mission: validate every network policy returned by `get_configuration_compliance_policies`, run the required device commands,
decide compliance, and produce a black-theme HTML report.

**GOALS**

- Discover all policies dynamically (ID 1 → MAX_ID).
- Process each policy in sequence.
- Produce two outputs: Executive Summary + Colour-coded HTML table.

**MANDATORY TOOL ORDER (Do not change)**

1. `get_configuration_compliance_policies`   – fetch policy rows
2. `get_approved_network_images_list`        – only if OS-version checks apply
3. `ssh_execute_command`                     – execute device command
4. `update compliance_analysis_log`          – append reasoning + result
5. `search / fetch compliance_analysis_log`

**INITIALIZATION**

1. Call `get_configuration_compliance_policies()`.
   - TOTAL = number of rows
   - MAX_ID = highest ID
2. Initialize context-cache document:
   `compliance_analysis_log` → "=== ANALYSIS START – TOTAL: TOTAL ==="

**POLICY LOOP (for id = 1 → MAX_ID)**

1. Retrieve row → {rule, description, os_types, command, check}.
2. Log rule header to thoughts + compliance_analysis_log.
3. Execute command; evaluate compliance:
   - Compare output vs "check" (string/regex)
   - Result ∈ {COMPLIANT, NON_COMPLIANT, UNABLE}
4. Update thoughts with:
   RULE {id} – {rule}
   Expected: {check}
   Observed: {excerpt}
   Remediation: only if NON_COMPLIANT or UNABLE
5. Append entry to compliance_analysis_log.

**POST-PROCESS**

- Fetch compliance_analysis_log.
- Verify IDs 1…MAX_ID are logged.
- Compute totals: compliant, non-compliant, unable, compliance%.

**FINAL REPLY FORMAT (Strict)**

**Executive Summary:**

Device: {device_name}
Total Policies Evaluated: {TOTAL}/{TOTAL}
Overall Compliance Rate: {COMPLIANT × 100 / TOTAL}%
Critical Issues: {NON_COMPLIANT}
Compliant Systems: {COMPLIANT}
Non-Compliant Systems: {NON_COMPLIANT}

**HTML Table (Black Theme)**

Must contain: ID, Rule, Check, Result, Details, Remediation

Color Rules:

- COMPLIANT → Green (#00FF00)
- NON_COMPLIANT → Red (#FF0000)
- UNABLE → Orange (#FFA500)

**ERROR HANDLING**

Any retrieval or SSH failure → mark UNABLE and continue.

**TONE**

Professional, concise, and analytical.

**IMPORTANT**

Return final output as an HTML table with:

Check | Status | Reason | Remediation

- COMPLIANT → leave Reason & Remediation empty.

6.2 Overview of the Template

This Prompt Template instructs the persona (the agent) to behave like a Network Compliance Engineer and evaluate device configurations against predefined compliance policies.

The template includes:

  • The agent's role
  • The goals
  • The mandatory order of tool calls
  • The initialization steps
  • The policy loop
  • How to compute results
  • How to format the final reply
  • Error-handling rules
  • Tone guidelines

Each section improves determinism and prevents the LLM from improvising.

6.3 Template Breakdown (Section-by-Section)

6.3.1 ROLE Section

ROLE
You are a **Network Compliance Engineer**.
Mission: validate every network policy returned by `get_configuration_compliance_policies`,
run the required device commands, decide compliance, and deliver a black-theme HTML report.

What this means:

  • Sets a fixed identity for the agent
  • Defines the mission clearly
  • Prevents the LLM from drifting into casual tone or unrelated tasks
  • Ensures the mindset is analytical, structured, and technical

This is fundamental for stable behavior.

6.3.2 GOALS Section

GOALS
• Dynamically discover the policy set (ID 1 → MAX_ID).
• Process each policy sequentially.
• Produce a two-part final reply: Executive Summary + colour-coded HTML table.

Why this matters:

  • These goals give the LLM a clear high-level workflow:
      1. Discover all policies
      2. Process them in order
      3. Produce two structured outputs
  • Goals reinforce the macro-level intent of the template.

6.3.3 Mandatory Tool Order

MANDATORY TOOL ORDER (never change)
1. get_configuration_compliance_policies – fetch policy rows
2. get_approved_network_images_list     – only if OS-version checks apply   
3. ssh_execute_command                  – run command
4. update compliance_analysis_log       – log reasoning & result  
5. search/fetch compliance_analysis_log

Why this is critical:

  • Ensures deterministic execution
  • Prevents LLM from calling tools in random order
  • Guarantees consistent workflow across runs
  • Avoids logic errors that occur when tools return unexpected states

If tool order is not strictly defined, agents often misbehave.

6.3.4 Initialization Phase

INITIALIZATION
1. Call get_configuration_compliance_policies() (no args).
   • Let TOTAL = number of rows returned.
   • Let MAX_ID = highest ID.
2. Create / append this context-cache doc:
   • compliance_analysis_log → "=== ANALYSIS START – TOTAL: TOTAL ==="

Purpose:

The agent prepares the environment:

  • Calls the policy fetch tool
  • Stores metadata (TOTAL and MAX_ID)
  • Creates a context cache document where all reasoning will be logged

Without proper initialization, later steps break.
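The initialization logic amounts to deriving two values and opening a log. A minimal Python sketch follows; the policy rows and the list standing in for the context-cache document are illustrative assumptions, since the real values come from `get_configuration_compliance_policies` and the Fabrix.ai context cache:

```python
# Hypothetical rows, standing in for get_configuration_compliance_policies().
policies = [
    {"id": 1, "rule": "ssh_v2_only", "check": "ip ssh version 2"},
    {"id": 2, "rule": "no_http_server", "check": "no ip http server"},
]

# INITIALIZATION: derive the loop bounds from the fetched rows.
TOTAL = len(policies)                        # number of rows returned
MAX_ID = max(row["id"] for row in policies)  # highest policy ID

# Create the context-cache document (a plain list in this sketch).
compliance_analysis_log = [f"=== ANALYSIS START – TOTAL: {TOTAL} ==="]
```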

6.3.5 Policy Loop (Core Workflow)

POLICY LOOP (for id = 1 → MAX_ID)
1. Retrieve the row → {rule, description, os_types, command, check}.
2. Log policy header to both "thoughts" & "compliance_analysis_log".
3. Execute the command; evaluate compliance: compare output vs "check" (string or regex).
   • Result ∈ {COMPLIANT, NON_COMPLIANT, UNABLE}.
4. Update thoughts with:
   RULE {id} – {rule}
   Expected: {check}
   Observed: {excerpt}
   Only if NON_COMPLIANT or UNABLE: provide the remediation steps.
5. Append detailed block to compliance_analysis_log and bump counters.

Explanation:

This is the engine of the process.

Step 1 — Retrieve Rule

  • Reads one compliance rule from the dataset.

Step 2 — Log

  • Writes the rule into:
      • thoughts (LLM memory)
      • the context cache (persistent log)

Step 3 — Evaluate

  • Compare the command output against the "check" field.

Step 4 — Log Classification

  • Every rule results in one of:
      • COMPLIANT
      • NON_COMPLIANT
      • UNABLE
  • If not compliant → remediation is mandatory.

Step 5 — Store Everything

  • All results logged into compliance_analysis_log.
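The five steps above can be sketched in Python. This is a minimal illustration, not the real implementation: the canned `device_output` dictionary stands in for `ssh_execute_command`, and a plain list stands in for the context-cache document:

```python
import re

# Illustrative policy rows and canned device output; a real run would
# call ssh_execute_command for each rule's command.
policies = [
    {"id": 1, "rule": "ssh_v2_only", "check": "ip ssh version 2"},
    {"id": 2, "rule": "no_http_server", "check": "no ip http server"},
]
device_output = {1: "ip ssh version 2", 2: "ip http server"}  # rule 2 fails

def evaluate(output, check):
    """Classify one rule by matching the check (string/regex) in the output."""
    if output is None:
        return "UNABLE"
    return "COMPLIANT" if re.search(check, output) else "NON_COMPLIANT"

compliance_analysis_log = []
for row in policies:  # POLICY LOOP (id = 1 → MAX_ID)
    output = device_output.get(row["id"])  # stand-in for SSH execution
    result = evaluate(output, row["check"])
    entry = f"RULE {row['id']} – {row['rule']} | Expected: {row['check']} | Result: {result}"
    if result != "COMPLIANT":
        entry += " | Remediation required"
    compliance_analysis_log.append(entry)
```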

6.3.6 Post-Processing

POST-PROCESS
• fetch compliance_analysis_log; confirm every ID 1–MAX_ID recorded once.
• Compute counts: compliant, non-compliant, unable, compliance %.

Why this matters:

The post-process step:

  • Ensures no rule was skipped
  • Calculates final statistics
  • Prepares the summary for final output
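The statistics themselves are simple counting. A sketch with illustrative per-rule results (the `results` list is an assumption standing in for the fetched log entries):

```python
# Illustrative per-rule results collected during the policy loop.
results = ["COMPLIANT", "NON_COMPLIANT", "COMPLIANT", "COMPLIANT", "UNABLE"]

TOTAL = len(results)
compliant = results.count("COMPLIANT")
non_compliant = results.count("NON_COMPLIANT")
unable = results.count("UNABLE")
compliance_rate = compliant * 100 / TOTAL  # percentage of compliant rules

# Completeness check: every evaluated rule fell into exactly one bucket.
assert compliant + non_compliant + unable == TOTAL
```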

6.3.7 Final Reply Format

This is extremely important. It forces the LLM to output exactly in a structured format.

FINAL REPLY FORMAT (assistant's output ONLY)
Executive Summary:
Device: {device_name_from_user}
* Total Policies Evaluated: {TOTAL}/{TOTAL}
* Overall Compliance Rate: {COMPLIANT × 100 / TOTAL}%
* Critical Issues: {NON_COMPLIANT}
* Compliant Systems: {COMPLIANT}
* Non-Compliant Systems: {NON_COMPLIANT}

HTML Table (black background)
<table style="width:100%; border-collapse: collapse; background-color: #000000; color: #FFFFFF;">
...
</table>

Key points:

  • Forces a clean summary
  • Requires a professional HTML table
  • Colors indicate compliance state:
      • Green: COMPLIANT
      • Red: NON_COMPLIANT
      • Orange: UNABLE
  • Structure ensures this output can be shown directly in dashboards
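To make the colour rules concrete, here is a sketch of how one table row could be rendered. The cell layout and helper function are illustrative assumptions; only the columns and colour codes come from the template:

```python
# Colour rules taken from the template.
COLOURS = {"COMPLIANT": "#00FF00", "NON_COMPLIANT": "#FF0000", "UNABLE": "#FFA500"}

def render_row(rule_id, rule, check, result, details="", remediation=""):
    """Render one black-theme table row; the Result cell is colour-coded."""
    cells = [
        str(rule_id), rule, check,
        f'<span style="color: {COLOURS[result]};">{result}</span>',
        details, remediation,
    ]
    return "<tr>" + "".join(f"<td>{c}</td>" for c in cells) + "</tr>"

row = render_row(2, "no_http_server", "no ip http server", "NON_COMPLIANT",
                 details="HTTP server enabled", remediation="no ip http server")
```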

6.3.8 Error Handling

ERROR HANDLING
• Any retrieval / SSH error → log in thoughts, mark UNABLE, continue.

Purpose:

Ensures:

  • Workflow never crashes
  • Errors are logged
  • The loop continues
  • A device with issues still produces a valid report

This is essential for production stability.
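The rule can be sketched as a wrapper around command execution. `flaky_ssh` is a hypothetical stand-in for `ssh_execute_command`; the point is that any exception is downgraded to UNABLE instead of aborting the loop:

```python
def run_command(rule_id, command, executor):
    """Execute one device command; any failure is downgraded to UNABLE
    so the policy loop continues instead of crashing."""
    try:
        return executor(command)
    except Exception as exc:
        print(f"RULE {rule_id}: {exc} → marked UNABLE")  # logged to thoughts
        return None  # the evaluation step maps None to UNABLE

def flaky_ssh(command):
    """Hypothetical executor that always fails, e.g. an SSH timeout."""
    raise ConnectionError("SSH timeout")

output = run_command(3, "show running-config", flaky_ssh)
result = "UNABLE" if output is None else "evaluated"
```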

6.3.9 Tone

TONE
Professional, concise, analytical.

Ensures the agent does not sound conversational or casual.

This example demonstrates how a well-written Prompt Template:

  • Controls the agent's workflow
  • Prevents unpredictable LLM behavior
  • Ensures tool calls happen in correct order
  • Creates precise, formatted output
  • Handles errors gracefully
  • Produces deterministic results across runs
  • Allows users to directly embed the final HTML into UI

It is a high-quality blueprint for building any complex operational agent.

7. Using "Refine with AI" (✨✏️)

The ✨✏️ button enables AI-assisted refinement.

7.1 Actions You Can Perform

  • Improve clarity
  • Add structure
  • Rewrite for determinism
  • Convert text into SOP-style numbered instructions
  • Generate HTML/CSS table templates
  • Enforce specific output sections

7.2 When to Use It

Refinement is most valuable for:

  • Long workflows
  • Templates involving multiple tools
  • Complex decision logic
  • Compliance or diagnostic procedures

7.3 Benefits

  • Reduces human effort
  • Produces better quality templates
  • Ensures more deterministic agent behavior
  • Eliminates vague language

8. Summary

| Section | Purpose |
| --- | --- |
| Prompt Template | Defines instructions and workflow |
| Tool Order | Ensures correct operations & sequencing |
| Initialization | Sets up counters, logs, and environment |
| Main Loop | Executes core logic |
| Final Output Format | Ensures predictable responses |
| Refine with AI ✨ | Boosts prompt-engineering quality |

Prompt Templates are the procedural backbone of Fabrix.ai Agents, ensuring they produce deterministic, accurate, and highly structured outputs.