Helvia.ai Release 2026.01.27

27 January 2026

1. Redesigned Preview Button & Training Status

The Preview experience has been redesigned to provide a clearer and more intuitive way to test agents before deployment. The previous Preview button and training indicator have been replaced with a modern, unified component that reflects the agent’s training state at a glance.

The new Preview button opens WebChat in a movable and resizable window, allowing users to test conversations without leaving the workspace.

How it helps: Business users can immediately understand whether an agent is ready and interact with it in a realistic environment.

Example use case: Before publishing an updated customer support agent, a team member previews the conversation flow, verifies responses, and visually confirms the agent is fully trained.

2. End LiveChat Confirmation

Ending a live chat now requires explicit confirmation. When a user clicks the LiveChat icon to end a session, a confirmation prompt appears with options to proceed or cancel. The default prompt is available in all supported WebChat languages and can be customized via customSettings in StyleSetOptions.

How it helps: Prevents accidental termination of live conversations and protects valuable customer interactions.
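
For teams that want to override the default prompt text, the sketch below shows one way the customization could look. It is a minimal, hedged example: the release notes only state that the prompt is customizable via customSettings in StyleSetOptions, so the individual keys used for the confirmation title, message, and button labels are illustrative placeholders and should be checked against the WebChat documentation.

```ts
// Minimal sketch of a StyleSetOptions object that overrides the default
// End LiveChat confirmation prompt via customSettings.
// NOTE: only `customSettings` and StyleSetOptions are named in the release
// notes; every key below (endLiveChatConfirmation, title, message,
// confirmLabel, cancelLabel) is an illustrative placeholder.
const styleSetOptions = {
  customSettings: {
    endLiveChatConfirmation: {
      title: 'End live chat?',
      message: 'This will close your conversation with the agent.',
      confirmLabel: 'End chat',
      cancelLabel: 'Keep chatting',
    },
  },
};

// The object is then supplied to the WebChat embed the same way any other
// StyleSetOptions value is, per your existing integration.
export default styleSetOptions;
```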

3. Full Language Support in WebChat

WebChat now supports all languages available in the platform, including translated system phrases and visual cues such as localized flag icons. Users can select their preferred language from the language dropdown, and all system messages are displayed in the selected language.

Example use case: Companies operating in multiple regions can provide a fully localized experience for end-users and internal teams. For example, a European company can support French, German, and Spanish users without extra manual setup.

4. HTTP Status Code Variable in LLM Node

The LLM node now allows users to define a variable that captures the HTTP status code of each request. This variable can be reused later in the flow to drive conditional logic or error handling.

How it helps: Provides greater control and reliability when building advanced AI workflows.

Example use case: If an LLM request fails, the stored HTTP status code can trigger a fallback message or notify an administrator, ensuring a smoother user experience.
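
As an illustration only (this is generic application code, not flow-editor configuration), the sketch below shows the kind of conditional logic the stored status code makes possible. The variable name llmStatusCode and the helper functions are hypothetical.

```ts
// Illustration of branching on a stored HTTP status code from an LLM request.
// `llmStatusCode`, `sendFallbackMessage`, and `notifyAdministrator` are
// hypothetical names used only to show the pattern described above.
function handleLlmResult(llmStatusCode: number, llmAnswer: string): string {
  if (llmStatusCode >= 200 && llmStatusCode < 300) {
    return llmAnswer; // request succeeded, use the model's answer
  }

  if (llmStatusCode === 429 || llmStatusCode >= 500) {
    notifyAdministrator(`LLM request failed with status ${llmStatusCode}`);
  }

  return sendFallbackMessage(); // graceful fallback for the end user
}

function sendFallbackMessage(): string {
  return "Sorry, I couldn't process that right now. Please try again shortly.";
}

function notifyAdministrator(details: string): void {
  console.warn('[LLM alert]', details); // replace with your alerting channel
}
```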

5. LLM Request History in Interaction Logs

Interaction Logs now store the full conversation history used in LLM requests. This provides transparency into the context sent to the model and allows teams to better understand and audit AI behavior.

How it helps: Enables deeper analysis, easier debugging, and improved governance of AI-driven interactions.

Example use case: An AI operations team reviews historical LLM inputs to understand why a specific response was generated and refine agent behavior accordingly.

6. Tag Operations for Unity & API Deployments

Incoming events in Unity and API deployments now support dynamic tag operations. Developers can add or remove one or more tags from events, enabling more accurate categorization and downstream processing within ChatSessions.

Tags can be added or removed individually or in bulk, providing more granular control over event tracking.

Instructions: Use the tagOperations.add and tagOperations.remove arrays to modify event tags.
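
As a rough sketch of what such a payload could look like: the tagOperations.add and tagOperations.remove arrays are the fields named above, while the surrounding event fields are illustrative only.

```ts
// Sketch of an incoming event that adds and removes tags in one call.
// Only tagOperations.add and tagOperations.remove are documented above;
// the other fields (eventType, sessionId) are illustrative placeholders.
const event = {
  eventType: 'order.status_changed',
  sessionId: 'chat-session-1234',
  tagOperations: {
    add: ['completed', 'priority-customer'], // tags to attach to the event
    remove: ['in-progress'],                 // tags to clear from the event
  },
};

// The event is sent through your existing Unity or API deployment
// integration; the transport itself is unchanged by this feature.
export default event;
```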

Example use case: A product team tags events as "completed" or "in-progress" to drive reporting dashboards or trigger follow-up workflows automatically.

7. New “Missed Question” Node

A new "Missed Question" node is available in the flow editor to automatically flag unanswered user questions. The node requires no configuration and records missed questions directly in the Observatory, using a distinct icon for easy identification.

How it helps: Eliminates manual setup for tracking missed questions and provides immediate insight into knowledge gaps.
