AI Personas
AI Personas define the behavior, intelligence, communication style, and tool access rights of an agent within a Fabrix.ai AI Project.
A Persona represents a role-specific AI identity that uses LLMs, Toolsets, and Prompt Templates to automate domain-specific tasks.
Personas are central to building agentic workflows:
- They decide which tools to call
- They determine how to interpret user input
- They define rules, tone, domain expertise, and system behavior
📍 Personas Overview
Inside every AI Project → MCP → Personas, developers can:
- View existing personas
- Create new personas
- Configure behavior & system instructions
- Associate LLMs
- Assign guardrails
- Restrict or allow toolsets and prompt templates
- Control runtime optimization & learning options
Personas bring together three critical components:
- LLMs → Cognitive Engine
- Toolsets → Actions the persona can perform
- Prompt Templates → Domain workflows
🧩 Creating & Configuring a Persona
Clicking Add Persona opens a full configuration modal. This section describes each field in detail.
1. Basic Persona Information
1.1 Name
Human-readable persona identifier (e.g., AIOps Assistant, Resume Agent).
Used in lists and chat headers.
1.2 Description
Explains what the persona does.
Example: "Analyzes logs, detects anomalies, and performs root-cause analysis for infrastructure incidents."
1.3 Color
A UI-only label to visually identify the persona.
2. Introductory Prompt
📌 What is an Introductory Prompt?
The Introductory Prompt defines what initial guidance or quick-start buttons appear when a new chat session is started with this persona.
It serves as:
- A welcome message
- A persona mission statement
- A set of example prompts the user can execute instantly
- Preloaded contextual guidance for the persona
Example Introductory Prompt:
Welcome! I'm your AIOps Assistant. I can analyze logs, detect anomalies, and perform RCA.
Try starting with:
- "Analyze errors from the last 1 hour"
- "Find anomalies in CPU metrics"
- "Summarize syslog activity for device X"
- Multiple introductory prompts may be added.
- Users can click these prompts to auto-execute them.
3. LLM Assignment
This section allows selecting one or more LLM backends that the persona can use.
The persona can use:
- GPT-4.1
- GPT-4o
- Claude Sonnet 4
- Claude Haiku/Opus (if enabled)
- Organization-specific LLM deployments
✔ You may select multiple LLMs
A persona must have at least one LLM selected.
4. Guardrails
Guardrails ensure that the persona stays within safe, policy-compliant behavior. They can:
- Prohibit certain actions
- Restrict content
- Enforce compliance guidelines
- Define behavioral constraints
If no guardrails exist, the section will show "No data available".
Adding guardrails is optional but recommended for enterprise personas.
5. Toolsets & Prompt Templates Access Policy
This JSON configuration determines what the persona can and cannot access.
It controls:
- Which MCP Server the persona uses
- Which Toolsets it can call
- Which Prompt Templates it can execute
- Whether runtime history optimization is enabled
- Whether final formatting uses a secondary LLM
- Whether learning is enabled
- Whether mandatory tools auto-run
Each persona requires at least one access policy object.
Access Policy Example
```json
[
  {
    "mcpserver": "rdaf",
    "toolset_pattern": "aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*",
    "prompt_templates_pattern": "incident_remediate_recommend.*",
    "optimizeHistoryUsingAI": true,
    "formatFinalResponseUsingAI": true,
    "enableLearning": true,
    "system_instruction_name": "default system interaction",
    "prepolulateMandatoryTools": true
  }
]
```
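Because the access policy is plain JSON, a small sanity check can catch malformed entries before they reach the persona. The sketch below is illustrative only (not a Fabrix.ai API): it parses a policy array and verifies that the required keys are present and that both pattern fields compile as regular expressions.

```python
import json
import re

# Field names come from the sample policy; the validator itself is a
# hypothetical illustration, not part of the Fabrix.ai platform.
REQUIRED_KEYS = {"mcpserver", "toolset_pattern", "prompt_templates_pattern"}

def validate_access_policy(raw: str) -> list:
    """Parse the policy JSON and sanity-check each entry."""
    policy = json.loads(raw)
    if not isinstance(policy, list) or not policy:
        raise ValueError("policy must be a non-empty JSON array")
    for entry in policy:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry missing keys: {sorted(missing)}")
        # Both *_pattern fields must compile as regular expressions.
        re.compile(entry["toolset_pattern"])
        re.compile(entry["prompt_templates_pattern"])
    return policy

policy = validate_access_policy("""
[
  {
    "mcpserver": "rdaf",
    "toolset_pattern": "aiops.*|syslog.*",
    "prompt_templates_pattern": "incident_remediate_recommend.*",
    "optimizeHistoryUsingAI": true
  }
]
""")
print(policy[0]["mcpserver"])  # rdaf
```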
This configuration controls most of the persona's runtime behavior. Each field is broken down below.
5.1 MCP Server
mcpserver
Defines the MCP server instance the persona communicates with.
Any registered external MCP servers can be configured here as well.
Example: "mcpserver": "rdaf" (from the access policy above)
Personas cannot call tools outside this server's tool registry.
5.2 Toolset Access
toolset_pattern
Regular expression defining which toolsets this persona can use.
Examples (from the policy above):
- aiops.* → toolsets whose names start with aiops
- snmp.*|syslog.* → all SNMP and syslog toolsets
This gives fine-grained control so personas only use tools relevant to their domain.
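Conceptually, this gating works like ordinary regex matching against toolset names. A minimal illustration in Python, using the pattern from the example policy; the toolset names below are hypothetical:

```python
import re

# Pattern taken from the access-policy example; toolset names are made up.
toolset_pattern = re.compile(
    "aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*"
)

def persona_can_use(toolset: str) -> bool:
    # fullmatch ensures the entire toolset name satisfies the policy pattern.
    return toolset_pattern.fullmatch(toolset) is not None

print(persona_can_use("aiops_log_analysis"))  # True
print(persona_can_use("billing_reports"))     # False
```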
5.3 Prompt Template Access
prompt_templates_pattern
Regex pattern controlling which prompt templates this persona can execute.
Example (from the policy above):
- incident_remediate_recommend.* → templates whose names start with incident_remediate_recommend
This prevents personas from accidentally using workflows outside their domain.
5.4 Conversation Optimization
optimizeHistoryUsingAI
If true, Fabrix.ai uses lightweight internal small language models (SLMs) to:
- Compress earlier conversation history
- Remove irrelevant content
- Preserve semantic meaning
- Reduce token cost
- Improve long conversation performance
Recommended: true.
5.5 Final Response Formatting
formatFinalResponseUsingAI
After the main LLM generates a response, Fabrix.ai can send it to a second LLM to:
- Convert plain text to HTML
- Produce dashboards
- Improve structure
- Add bullet points, tables, or summaries
- Enforce output consistency
Example: a plain-text summary from the main LLM can be converted into an HTML report with tables and bullet points.
5.6 Reinforcement Learning (optional)
enableLearning
Enables persona-level learning from internal feedback mechanisms.
5.7 System Instruction
system_instruction_name
References a stored system instruction document that defines:
- Persona tone
- Safety behavior
- Domain-specific guidelines
- Formatting rules
- Reasoning approach
Example: "system_instruction_name": "default system interaction"
5.8 Mandatory Tools Prefetch
prepolulateMandatoryTools
If true, Fabrix.ai automatically runs required MCP tools before each conversation turn.
These tools typically include:
- get_conversation_history
- list_prompt_templates_by_persona
- get_persona_details
This ensures consistent context and reduces LLM decision load.
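The prefetch step can be sketched conceptually as follows. The mandatory tool names come from the list above; the tool implementations are stubbed out, and the flow is illustrative, not the actual Fabrix.ai runtime:

```python
# Conceptual sketch of the mandatory-tools prefetch: before each turn,
# a fixed set of context tools runs and their results are prepended to
# the conversation context. Stubs stand in for real MCP tool calls.
MANDATORY_TOOLS = [
    "get_conversation_history",
    "list_prompt_templates_by_persona",
    "get_persona_details",
]

def run_tool(name: str, persona: str) -> dict:
    # Stub: a real implementation would invoke the MCP tool by name.
    return {"tool": name, "persona": persona, "result": f"<{name} output>"}

def build_turn_context(persona: str, user_message: str) -> list:
    context = [run_tool(t, persona) for t in MANDATORY_TOOLS]
    context.append({"role": "user", "content": user_message})
    return context

ctx = build_turn_context("AIOps Assistant", "Analyze errors from the last 1 hour")
print(len(ctx))  # 4
```

Running the mandatory tools up front means the LLM starts every turn with conversation history and persona details already in context, rather than having to decide to fetch them itself.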
Summary
AI Personas act as the behavioral brain of your AI agent.
They control:
- Domain expertise
- Communication style
- Tool access
- Workflow execution
- Learning
- History optimization
- Safety enforcement
By combining Personas + Toolsets + Prompt Templates, Fabrix.ai enables fully agentic, domain-aware, enterprise-grade automation.