# Plugins

Plugins add capabilities to your agent that go beyond what workflows alone can do. Connect an LLM provider to power AI nodes, enable session analysis in Observatory, or hand conversations off to human agents through Helvia LiveChat or third-party platforms like Zendesk.

Each plugin belongs to a category, connects to a Workspace-level integration, and can be activated or deactivated per agent without affecting other agents.

<div data-with-frame="true"><figure><img src="https://604830754-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FBM1xs3i59ajeTgi4uVfN%2Fuploads%2FNK3SgXgwjFiLlYrfd5Ya%2FSCR-20260318-kyot.png?alt=media&#x26;token=3dfebebb-408f-4f5c-bf3c-9ded49786e59" alt=""><figcaption></figcaption></figure></div>

### How Plugins Work

Plugins sit between your agent and external services. The relationship flows like this:

```mermaid
graph LR
    A[Workspace Integration] -->|credentials| B[Plugin]
    B -->|capability| C[Agent]
```

1. You configure an integration at **Workspace > Integrations** with the provider's API key or credentials
2. You activate the plugin on a specific agent and link it to an integration
3. The agent gains access to the capability (e.g., LLM processing, live chat routing, ticket creation)

Integrations are shared across all agents in the Workspace. Plugins are configured per agent. This means two agents can use the same integration but with different plugin settings.

{% hint style="warning" %}
Helvia LiveChat is the only plugin that does not require an integration. It works out of the box with no setup needed.
{% endhint %}

### Available Plugins

Plugins are organized into three groups based on where they apply in the platform.

#### Designer

These plugins enable nodes and features you use when building workflows on the canvas.

| Plugin Category          | Providers             | What It Enables                                                                |
| ------------------------ | --------------------- | ------------------------------------------------------------------------------ |
| **LLM node**             | OpenAI, Azure, Gemini | Powers the LLM node for natural language processing and complex business logic |
| **Semantic Search node** | OpenAI, Azure         | Enables the Semantic Search node for Knowledge Base retrieval (RAG)            |
| **Language Detection**   | OpenAI, Azure         | Automatically detects the user's input language during conversations           |

#### Observatory

These plugins power analytics and testing features in Observatory.

| Plugin Category             | Providers             | What It Enables                                                       |
| --------------------------- | --------------------- | --------------------------------------------------------------------- |
| **Session Analysis**        | OpenAI, Azure, Gemini | Generates summaries, sentiment scores, and insights for chat sessions |
| **Topic Modelling**         | OpenAI                | Groups missed questions into topics for pattern discovery             |
| **Automated Agent Testing** | OpenAI                | Runs automated test scenarios against your agent workflows            |

#### Customer Support

These plugins connect your agent to external customer support platforms.

| Plugin Category | Providers                                         | What It Enables                                             |
| --------------- | ------------------------------------------------- | ----------------------------------------------------------- |
| **LiveChat**    | Helvia LiveChat, Cisco, Zendesk Livechat, Genesys | Routes conversations to human agents for real-time support  |
| **CRM**         | Dynamics 365                                      | Syncs customer data from Microsoft Dynamics 365 CRM         |
| **Ticketing**   | Zendesk Ticketing                                 | Creates and syncs support tickets with your Zendesk account |

### Activating a Plugin

{% stepper %}
{% step %}

#### Open the Plugins Page

Go to **Designer > Plugins** by clicking the plugins section in the agent sidebar.
{% endstep %}

{% step %}

#### Select a Category

Click a category from the left sidebar (e.g., **LLM node**, **LiveChat**). The main area displays all available providers for that category.
{% endstep %}

{% step %}

#### Click Activate

Find the provider you want and click **Activate**. A settings dialog opens.
{% endstep %}

{% step %}

#### Select an Integration

In the settings dialog, choose an integration from the **Select Integration** dropdown. This links the plugin to the credentials configured in **Workspace > Integrations**.

{% hint style="warning" %}
If no integrations appear in the dropdown, you must first create one in **Workspace > Integrations** for the selected provider.
{% endhint %}
{% endstep %}

{% step %}

#### Save Changes

Click **Save Changes**. The plugin is now active, and you can access its settings or deactivate it.
{% endstep %}
{% endstepper %}

### One Provider Per Category

Only one provider can be active per category at a time. When a provider is already active, the **Activate** buttons for all other providers in that category are greyed out. To switch providers, **Deactivate** the current one first, then activate the new one.

The **LLM node** category is the exception. You can activate multiple LLM providers simultaneously (e.g., OpenAI and Gemini). When configuring an LLM node on the canvas, you select which plugin and model to use per node, giving you flexibility to mix providers across different parts of your workflow.

### Managing Plugins

Click **Settings** on any active plugin card to open its configuration dialog. Here you can view the linked integration and swap it for another without deactivating the plugin. Every plugin has a dedicated settings dialog with its own configuration options.

{% hint style="info" %}
Some plugins like Language Detection and Session Analysis also include an **Expert Mode** toggle. Enabling Expert Mode expands the dialog with advanced options such as custom prompts and model selection.
{% endhint %}

#### Deactivating a Plugin

Click **Deactivate** on the plugin card. The plugin stops immediately and the agent loses the associated capability.

{% hint style="warning" %}
Deactivating an LLM plugin disables all nodes in your workflows that depend on it. Verify your workflows still function correctly after deactivation.
{% endhint %}

### Designer Plugins

#### LLM Node

The LLM node plugin connects large language models to your workflows. Unlike other categories, you can activate multiple LLM providers at the same time (e.g., OpenAI and Gemini). Each LLM node on the canvas lets you pick which plugin and model to use, so you can mix providers across different parts of a single workflow.

<div data-with-frame="true"><figure><img src="https://604830754-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FBM1xs3i59ajeTgi4uVfN%2Fuploads%2FO9WqUzjNKnzsNsGIUfaR%2FLLM%20node%20providers.png?alt=media&#x26;token=c277fdc9-f9ad-4b04-b8ab-e612a958d421" alt="" width="375"><figcaption></figcaption></figure></div>

The available providers are OpenAI, Azure, and Gemini. You can also connect any OpenAI-compatible provider (e.g., Mistral, Groq) through an OpenAI integration. Learn more in [Integrations](https://docs.helvia.ai/workspace/integrations).

#### Semantic Search Node

Activating this plugin enables the Semantic Search node on the canvas. The node performs retrieval queries over the Knowledge Bases connected to your agent, returning the most relevant chunks to feed into your workflow (RAG).

In the settings you can fine-tune RAG retrieval parameters. For most use cases, the default settings work well. Adjust **Visit Neighbors** and **Exact Match** only if you need to fine-tune the trade-off between search accuracy and response speed.

<table data-column-title-hidden data-view="cards"><thead><tr><th>Setting</th><th>Description</th></tr></thead><tbody><tr><td><h4><i class="fa-cube">:cube:</i></h4><h4>Model</h4></td><td>The embedding model used to vectorize queries and corpus data. Select a model from the dropdown</td></tr><tr><td><h4><i class="fa-text-height">:text-height:</i></h4><h4>Normalize Corpus</h4></td><td>Toggle to normalize corpus text when indexing. Disabled by default</td></tr><tr><td><h4><i class="fa-coins">:coins:</i></h4><h4>Max Input Tokens</h4></td><td>Limit results by total tokens. When set, this takes priority over max results. Default: <code>20000</code></td></tr><tr><td><h4><i class="fa-bullseye">:bullseye:</i></h4><h4>Exact Match</h4></td><td>Prefer exact match when searching. Improves precision but results in slower performance. Enabled by default</td></tr><tr><td><h4><i class="fa-diagram-project">:diagram-project:</i></h4><h4>Visit Neighbors</h4></td><td>Number of neighbors to visit during search. Higher values improve accuracy but slow down performance. Default: <code>128</code></td></tr></tbody></table>

**Pipeline ID** and **Access Token** are both read-only and auto-generated by the platform.

#### Language Detection

The Language Detection plugin automatically identifies the language of every user message and updates the `UserInfo.language` contact variable with the detected value. Use it to route multilingual conversations or switch the agent's response language dynamically.

{% tabs %}
{% tab title="Basic Mode" %}
In Basic Mode, select a **Model** from the dropdown. The plugin uses a platform-managed prompt optimized for language detection. No further configuration is needed.
{% endtab %}

{% tab title="Expert Mode" %}
Toggle **Expert Mode** on to access advanced settings:

* **Model**: Select the LLM model used for detection
* **Prompt**: A rich text editor with the full system prompt. The default prompt instructs the model to return an ISO 639 language code (e.g., `el`, `en`, `es`) and handle edge cases like ambiguous input or initialisms
* **Include History**: When enabled, previous user messages are included in the detection request for improved accuracy; the most recent message is always included

Use Expert Mode when the default detection prompt does not handle your specific language mix or edge cases well enough.
{% endtab %}
{% endtabs %}

### Observatory Plugins

#### Session Analysis

This plugin uses an LLM to analyze chat sessions and generate structured insights. To view or trigger analysis, go to **Observatory > Sessions > Chat Sessions**, open a [session](https://docs.helvia.ai/observatory/sessions#session-details), and scroll down to the Session Analysis section. Use **Expert Mode** for better control over the generated insights.

<div data-with-frame="true"><figure><img src="https://604830754-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FBM1xs3i59ajeTgi4uVfN%2Fuploads%2FPld5u9xpmvz05CUmHhUm%2Fsession%20analysis.png?alt=media&#x26;token=85236216-aee4-4470-b528-31470e541279" alt="" width="340"><figcaption></figcaption></figure></div>

You can choose between two analysis modes:

* **On demand:** Analysis runs when you manually trigger it from a session
* **Automatically:** Analysis runs after each session ends

By default, the plugin generates five insights per session:

<table data-column-title-hidden data-view="cards"><thead><tr><th>Insight</th><th>Description</th></tr></thead><tbody><tr><td><h4><i class="fa-file-lines">:file-lines:</i></h4><h4>Summary</h4></td><td>Generates a concise summary of the conversation. </td></tr><tr><td><h4><i class="fa-tags">:tags:</i></h4><h4>Classification Tags</h4></td><td>Labels sessions with relevant categories you define. Matching tags are automatically added to chat session tags</td></tr><tr><td><h4><i class="fa-face-smile">:face-smile:</i></h4><h4>Sentiment</h4></td><td>Evaluates user satisfaction. Values: Positive, Neutral, Negative</td></tr><tr><td><h4><i class="fa-circle-check">:circle-check:</i></h4><h4>Resolution</h4></td><td>Indicates whether the user's issue was addressed. Values: Resolved, Unclear, Unresolved</td></tr><tr><td><h4><i class="fa-triangle-exclamation">:triangle-exclamation:</i></h4><h4>Urgency</h4></td><td>Determines the priority level of the conversation. Values: Low, Normal, Urgent</td></tr></tbody></table>

Toggle **Expert Mode** on to customize which insights are generated or how they are produced. You can define your own insight categories beyond the defaults, such as live escalation detection or any business-specific metric that matters to your team. The prompt must instruct the LLM to return a JSON object with a separate field for each insight. For example:

```json
{
  "summary": "short summary up to 30 words",
  "resolution": "resolved | unclear | unresolved",
  "sentiment": "positive | neutral | negative",
  "urgency": "low | normal | urgent"
}
```
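You can extend this JSON contract with your own fields. As an illustration, a custom escalation-detection insight might be requested alongside the defaults (the `escalation` field name and its instruction below are hypothetical, not a platform default):

```json
{
  "summary": "short summary up to 30 words",
  "resolution": "resolved | unclear | unresolved",
  "sentiment": "positive | neutral | negative",
  "urgency": "low | normal | urgent",
  "escalation": "true if the user asked for a human agent, false otherwise"
}
```

Each field you add to the prompt's output format becomes a separate insight in the session's analysis.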

The advanced LLM configuration includes:

* **Model:** Select the LLM model
* **Prompt:** Rich text editor with the full system prompt, including the JSON output format for all five default insights and any additional insights you define
* **Temperature:** Controls response randomness (0-2, default `0.5`)
* **Max Tokens:** Limits the response length (default `128`)

You can also add multiple LLM configurations to the same plugin. Click **+** **Add LLM** to append another step to the chain.

{% hint style="info" %}
For a complete walkthrough of session inspection and analysis, see [Chat Sessions](https://docs.helvia.ai/observatory/sessions)
{% endhint %}

#### Topic Modelling

This plugin automatically groups [missed questions](https://docs.helvia.ai/build/broken-reference) into topics in the background. Use it to spot recurring knowledge gaps and prioritize which content to add to your agent.

Missed questions are collected by the Missed Question node in your workflows and recorded in Observatory. Once this plugin is active, the LLM analyzes accumulated missed questions and clusters them into topics. Results appear in **Observatory > Sessions > Missed Questions**.

#### Automated Agent Testing

Automated testing is one of the most advanced features of the platform and a necessary part of the agent development lifecycle. This plugin lets you create test scenarios that validate your agent's responses are consistent, accurate, and reliable before going live.

<div data-with-frame="true"><figure><img src="https://604830754-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FBM1xs3i59ajeTgi4uVfN%2Fuploads%2FZvdX3GvW65TaypVlWZrH%2Ftesting%20dashboard.png?alt=media&#x26;token=340672b7-f0b8-4f0c-b1e6-a4a66fd16f88" alt="" width="563"><figcaption></figcaption></figure></div>

Activate the plugin and select an LLM integration. The platform automatically creates a dedicated API deployment to run your tests. You can then create, run, and review tests in **Observatory > Testing**, with pass/fail outcomes, detailed logs, and individual session performance insights.

{% hint style="info" %}
For a full walkthrough of creating and running tests, see the [Automated Testing](https://docs.helvia.ai/observatory/testing) page.
{% endhint %}

### Customer Support Plugins

#### LiveChat

LiveChat plugins enable human-in-the-loop handoff. Transfers are not automatic; you control when a conversation is handed off by placing a LiveChat node in your workflow. When the node is reached, the conversation is transferred to a human agent for real-time support. Use this for complex or sensitive scenarios that require human judgement.

Four providers are available: Helvia LiveChat (our own built-in solution) and three third-party integrations (Cisco, Zendesk, Genesys). Helvia LiveChat offers the most flexibility and customization options, while the third-party providers let you route conversations to external support platforms your team already uses.

The LiveChat plugin settings include:

<details>

<summary><strong>LiveChat Availability</strong></summary>

Toggle LiveChat on or off for this agent. When disabled, all handoff requests are rejected.

</details>

<details>

<summary><strong>Agent Masking</strong></summary>

Control how agent names appear to end-users during LiveChat sessions. Available modes:

* **Full Name:** Shows the agent's full real name (e.g., John Joe Doe)
* **First Name + Last Initial:** Partial privacy (e.g., John D.)
* **First Name Only:** Friendly and approachable (e.g., John)
* **Constant Name:** Full anonymity (e.g., Agent)
* **Advanced Masking:** Define custom rules using RegEx

</details>

<details>

<summary><strong>Request Timeout</strong></summary>

Set the number of seconds a LiveChat request stays pending before it expires. Available only in Helvia LiveChat.

</details>

<details>

<summary><strong>Business Hours</strong></summary>

Configure time slots during which LiveChat is available. The timezone is inherited from the Workspace settings. Available only in Helvia LiveChat.

</details>

<details>

<summary><strong>Queue Configuration</strong></summary>

Set two values to estimate waiting time for end-users:

* **Average LiveChat Agent Response Time (seconds)** — How long agents typically take to respond
* **Average End-User Waiting Time (seconds)** — Estimated wait based on queue position

Available only in Helvia LiveChat.

</details>

<details>

<summary><strong>System Messages</strong></summary>

Customize the messages sent to end-users during LiveChat events. Each message is configurable per language. Available system messages:

| Case                     | Default Message                                                                                                                                                  |
| ------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Conversation in progress | There is an active live-chat session at the moment.                                                                                                              |
| Conversation terminated  | Live-chat ended                                                                                                                                                  |
| Conversation transferred | Live-chat transferred                                                                                                                                            |
| Generic error            | An error occurred. Please, try again later.                                                                                                                      |
| LiveChat disabled        | Live-chat is not available right now. Please, try again later.                                                                                                   |
| Out of business hours    | Live-chat support is currently out of business hours.                                                                                                            |
| Request accepted         | Live-chat started                                                                                                                                                |
| Request already exists   | Your live-chat request is currently pending and messages are not sent during this time. A live-chat agent will be with you shortly to respond to your inquiries. |
| Request missed           | There is no agent available right now. Please, try again later.                                                                                                  |

</details>

#### CRM

Connect your agent and LiveChat to your CRM so conversations have full customer context. When a user interacts with the agent, the plugin pulls existing customer records, giving the agent and LiveChat operators access to relevant data without switching tools.

#### Ticketing

Create and manage support tickets from within agent conversations and LiveChat sessions. When a conversation requires follow-up beyond the chat session, the agent can create a ticket that syncs with your external ticketing system.

### Best Practices

* **Start with one LLM provider:** Activate a single LLM plugin (e.g., OpenAI) across all LLM-dependent categories before experimenting with others
* **Match the provider to the task:** Use the same provider for LLM node and Session Analysis to keep costs predictable and responses consistent
* **Set business hours early:** Configure LiveChat business hours before going live to avoid routing requests when no human agents are online
* **Use descriptive integration names:** Name integrations in Workspace so the dropdown in plugin settings is clear
* **Test after switching providers:** After changing a plugin's integration, run a test conversation to verify the agent responds correctly

{% hint style="success" %}
You now know how plugins are structured, how to activate and configure them, and how they connect to Workspace integrations. Activate your first plugin and start building.
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.helvia.ai/build/plugins.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
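The request above can be sketched in a few lines of Python; the sample question is illustrative, and `urllib.parse.quote` handles the URL-encoding of the `ask` parameter:

```python
import urllib.parse
import urllib.request

# Any specific, self-contained natural-language question works here.
question = "Which providers are available for the Session Analysis plugin?"

# URL-encode the question and append it as the `ask` query parameter.
url = "https://docs.helvia.ai/build/plugins.md?ask=" + urllib.parse.quote(question)
print(url)

# Sending the GET request returns the answer with excerpts and sources:
# answer = urllib.request.urlopen(url).read().decode("utf-8")
```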
