Multi Chat Tool

How to configure and use the Multi Chat Tool for AI-powered conversations with multiple LLM providers.

The Multi Chat Tool is a sophisticated AI conversation interface designed for human-in-the-loop tasks within Manual Queues. It enables users to have multi-turn conversations with AI models, supporting multiple LLM providers, conversation branching, message editing, and comprehensive rating systems.

Overview

The Multi Chat Tool provides a chat interface similar to ChatGPT but with enterprise-grade features:

  • Multi-Provider Support: Connect to multiple LLM providers (OpenAI, Anthropic, Groq, Cohere, Google, Mistral, Perplexity)
  • Conversation Management: Branch conversations, edit messages, and maintain conversation history
  • Turn Limits: Configurable minimum and maximum conversation turns with visual enforcement
  • Global Rating System: Comprehensive rating management across all conversation branches
  • Smart Auto-Save: Automatic saving after AI responses with manual save options
  • Enhanced UI: Real-time progress indicators and missing ratings navigation

Configuration

Basic Settings

Title and Description

  • Title: The name displayed in the tool header (default: "Multi Chat Tool")
  • Description: Optional description shown below the title

LLM Integrations

Configure the AI providers available for conversations:

  • LLM Integrations: Array of integration references from your organization
  • Display Name (Optional): Custom alias for each integration (e.g., "Production OpenAI", "Fast Groq")
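As a sketch, an integrations list with optional display aliases might look like the following (the `llmIntegrations` and `integration` key names here are illustrative; consult your tool schema for the exact field names):

```json
{
  "llmIntegrations": [
    { "integration": "integrations/openai-prod", "displayName": "Production OpenAI" },
    { "integration": "integrations/groq-main", "displayName": "Fast Groq" }
  ]
}
```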

Important: You must have LLM integrations configured in your organization's settings before using this tool.

🚨 CRITICAL REQUIREMENT: Each integration must have valid API keys configured. Without properly configured API keys, the tool will not be able to make requests to the LLM providers and will show errors when attempting to send messages.

Setup Steps:

  1. Go to your organization's settings
  2. Navigate to the Integrations section
  3. Add integrations for the LLM providers you want to use (OpenAI, Anthropic, Groq, etc.)
  4. 🔑 Configure the API keys for each integration
  5. Reference these integrations in your Multi Chat Tool configuration

Common Issues:

  • "Provider not configured" error: Integration exists but API key is missing or invalid
  • "Failed to get integration credentials" error: API key is not properly configured
  • "Invalid LLM provider or missing credentials" error: Integration setup is incomplete

Conversation Settings

Turn Limits

Control the conversation flow with configurable limits:

  • Minimum Turns: Required number of conversation turns before submission (default: 1)
  • Maximum Turns: Maximum allowed conversation turns (default: 50)
    • When reached, the chat input is disabled and shows "Maximum turns reached - chat disabled"
    • Visual indicator changes to red when limit is reached
    • Clear status messages indicate turn requirements

System Message

Define the AI's behavior and personality:

  • System Message: Instructions for how the AI should behave, its role, and response style
  • Example: "You are a helpful customer service representative. Be polite and concise."
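Together with the turn limits above, these settings form the conversation block, using the same keys as the full examples later on this page (1 and 50 are the documented defaults):

```json
{
  "conversation": {
    "minTurns": 1,
    "maxTurns": 50,
    "systemMessage": "You are a helpful customer service representative. Be polite and concise."
  }
}
```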

Default Provider

Pre-select a specific LLM provider when the tool loads:

  • Auto-detect: Uses the first configured integration
  • Specific Provider: Choose from available providers (OpenAI, Anthropic, Groq, etc.)

Rating System

Per-Response Rating

Enable comprehensive rating of AI responses:

  • Enable Per-Response Rating: Toggle rating system on/off
  • Likert Scales: Configure multiple rating types with custom questions and options
  • Global Rating Tracking: Monitor missing ratings across all conversation branches
  • Visual Indicators: Real-time display of rating completion status

Rating Configuration

{
  "enablePerResponseRating": true,
  "likertScales": {
    "helpfulness": {
      "question": "How helpful was this response?",
      "options": ["Not helpful", "Somewhat helpful", "Helpful", "Very helpful"]
    },
    "accuracy": {
      "question": "How accurate was this response?",
      "options": ["Inaccurate", "Somewhat accurate", "Accurate", "Very accurate"]
    }
  }
}

Feature Settings

Core Features

  • Enable Branching: Allow users to create conversation branches from any AI response
  • Enable Message Editing: Allow users to edit their own messages and AI responses
  • Auto-Save: Automatically save conversation progress (disabled by default; the forced save after each AI response still occurs, see Smart Auto-Save System below)
  • Model Switching: Allow users to switch between different LLM providers during conversation

How It Works

Conversation Flow

  1. Initialization: Tool loads with configured integrations and system message
  2. Provider Detection: Automatically detects available LLM providers from integrations
  3. User Input: Users type messages in the chat interface
  4. AI Response: Selected LLM provider generates responses with metadata
  5. Auto-Save: Conversation automatically saved after each AI response
  6. Turn Tracking: System tracks conversation turns and enforces limits
  7. Rating: Users can rate AI responses (if enabled)
  8. Submission: Conversation can be submitted when minimum turns are met and all ratings complete
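The submission gate in step 8 can be sketched as a pure check (a minimal sketch; the field names are illustrative, not the tool's actual state shape):

```typescript
interface SubmitState {
  turns: number;          // completed conversation turns
  minTurns: number;       // configured minimum before submission
  missingRatings: number; // unrated AI responses across all branches
}

// Submission is allowed once the minimum turn count is met
// and no ratings are outstanding anywhere in the conversation.
function canSubmit(s: SubmitState): boolean {
  return s.turns >= s.minTurns && s.missingRatings === 0;
}

// e.g. canSubmit({ turns: 3, minTurns: 2, missingRatings: 0 }) allows submit,
// while any missing rating or too few turns blocks it.
```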

Turn Limit Enforcement

The tool enforces conversation limits through multiple mechanisms:

  • Visual Indicators:
    • Green: Minimum turns completed, not at maximum
    • Yellow: Below minimum turns
    • Red: Maximum turns reached
  • Input Disabling: Chat input and send button disabled when maximum reached
  • Status Messages: Clear feedback about turn status and requirements
  • Progress Tracking: Real-time display of current turns vs. limits
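The indicator logic above reduces to a small pure function (a sketch of the documented behavior, not the tool's actual implementation):

```typescript
type TurnIndicator = "green" | "yellow" | "red";

// Maps the current turn count to the visual indicator described above:
// yellow below the minimum, red at the maximum, green in between.
function turnIndicator(turns: number, minTurns: number, maxTurns: number): TurnIndicator {
  if (turns >= maxTurns) return "red";   // input and send button disabled
  if (turns < minTurns) return "yellow"; // more turns required
  return "green";                        // minimum met, not at maximum
}
```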

Global Rating Management

Missing Ratings System

The tool provides comprehensive rating management across all conversation branches:

  • Global Counter: Shows total missing ratings across all branches
  • Visual Alert: Prominent red alert button with missing count
  • Navigation Dropdown: Click to list every missing rating and jump directly to it
  • Branch Organization: Missing ratings grouped by conversation branch
  • Quick Navigation: One-click navigation to any missing rating
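The global counter can be thought of as a sum over branches (a minimal sketch; the message and ratings data shapes here are assumptions for illustration):

```typescript
// Each branch holds messages; assistant messages carry a ratings map
// keyed by scale name (e.g. "helpfulness", "accuracy").
interface Message {
  role: "user" | "assistant";
  ratings?: Record<string, string | undefined>;
}

// Counts assistant messages missing at least one configured scale,
// summed across every conversation branch.
function countMissingRatings(
  branches: Record<string, Message[]>,
  scales: string[],
): number {
  let missing = 0;
  for (const messages of Object.values(branches)) {
    for (const m of messages) {
      if (m.role !== "assistant") continue;
      if (scales.some((s) => m.ratings?.[s] == null)) missing++;
    }
  }
  return missing;
}
```

The submit button stays disabled while this count is nonzero anywhere in the conversation tree.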

Rating Completion Requirements

  • Submit Validation: Submit button only enabled when ALL ratings across ALL branches are complete
  • Real-time Updates: Rating status updates immediately across all branches
  • Visual Feedback: Clear indicators for rating completion status

Smart Auto-Save System

Automatic Saving

  • Forced Auto-Save: Automatically saves after every AI response (silent background save)
  • Manual Save: Users can manually save progress at any time
  • Before Unload: Saves progress when user leaves page or refreshes
  • Pending Response Protection: Warns user if leaving during AI response

Save Triggers

  1. After AI Response: Automatic save 1 second after each assistant message
  2. Manual Save: User-initiated save via save button
  3. Before Unload: Save when user leaves page (with pending response warning)
  4. Unclaim/Submit: Save before tool state changes
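Trigger 3 above can be sketched as a decision function (illustrative only; the real handler wires this into the browser's beforeunload event):

```typescript
interface UnloadDecision {
  save: boolean;          // persist progress before leaving
  warning: string | null; // prompt shown to the user, if any
}

// On page unload: save whatever progress exists, and warn the user
// if an AI response is still in flight (the warning text is a placeholder).
function onBeforeUnload(hasProgress: boolean, pendingResponse: boolean): UnloadDecision {
  return {
    save: hasProgress,
    warning: pendingResponse
      ? "An AI response is still in progress. Leaving now may lose it."
      : null,
  };
}
```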

Conversation Management

Branching

  • Users can create new conversation branches from any AI response
  • Each branch maintains its own conversation history
  • Useful for exploring different conversation paths
  • Global rating tracking across all branches

Message Editing

  • Users can edit their own messages
  • AI responses can be edited (if enabled)
  • Edit history is preserved with timestamps
  • Version navigation for complex messages

Message Metadata

  • Model Information: Displays which model generated each response
  • Provider Details: Shows the LLM provider used
  • Token Usage: Displays usage statistics for each response
  • Timestamps: Tracks when each message was created

Security Architecture

The Multi Chat Tool uses a secure architecture for handling LLM credentials:

  • Server-Side Credentials: API keys and tokens never leave the backend
  • LLM Integration References: Frontend only receives document references, not actual credentials
  • Authentication: Backend validates user access to integrations
  • Audit Trail: Complete logging of which integrations are used
  • Provider Detection: Secure detection of available providers without exposing credentials
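For illustration, the shape of what the frontend receives per integration might look like this (an assumed shape, not the exact wire format; the key point is that no credential field is present):

```typescript
// Illustrative shape of an integration reference as seen by the
// frontend: a document reference plus display metadata, never the
// API key itself (credentials stay server-side).
interface IntegrationRef {
  ref: string;          // e.g. "integrations/openai-prod" (hypothetical path)
  provider: string;     // "openai", "anthropic", "groq", ...
  displayName?: string; // optional UI alias, e.g. "Production OpenAI"
}

const example: IntegrationRef = {
  ref: "integrations/openai-prod",
  provider: "openai",
  displayName: "Production OpenAI",
};
```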

Configuration Examples

Basic Customer Service Bot

{
  "title": "Customer Service Assistant",
  "description": "AI-powered customer support with human oversight",
  "conversation": {
    "minTurns": 2,
    "maxTurns": 15,
    "systemMessage": "You are a helpful customer service representative. Be polite, professional, and concise. Escalate complex issues to human agents."
  },
  "defaultProvider": "openai",
  "enablePerResponseRating": true,
  "likertScales": {
    "helpfulness": {
      "question": "How helpful was this response?",
      "options": ["Not helpful", "Somewhat helpful", "Helpful", "Very helpful"]
    },
    "accuracy": {
      "question": "How accurate was this response?",
      "options": ["Inaccurate", "Somewhat accurate", "Accurate", "Very accurate"]
    }
  },
  "features": {
    "enableBranching": true,
    "enableEditing": true,
    "autoSave": true
  }
}

Research Assistant

{
  "title": "Research Assistant",
  "description": "AI research partner for data analysis and insights",
  "conversation": {
    "minTurns": 3,
    "maxTurns": 25,
    "systemMessage": "You are a research assistant specializing in data analysis. Provide detailed explanations, cite sources when possible, and help users explore complex topics."
  },
  "defaultProvider": "anthropic",
  "enablePerResponseRating": true,
  "likertScales": {
    "comprehensiveness": {
      "question": "How comprehensive was this response?",
      "options": ["Incomplete", "Basic", "Comprehensive", "Very comprehensive"]
    },
    "clarity": {
      "question": "How clear was this explanation?",
      "options": ["Unclear", "Somewhat clear", "Clear", "Very clear"]
    }
  },
  "features": {
    "enableBranching": true,
    "enableEditing": true,
    "autoSave": false
  }
}

Quality Assurance Tool

{
  "title": "Content Review Assistant",
  "description": "AI-powered content review with human validation",
  "conversation": {
    "minTurns": 1,
    "maxTurns": 10,
    "systemMessage": "You are a content reviewer. Analyze content for accuracy, tone, and compliance. Provide specific feedback and suggestions for improvement."
  },
  "defaultProvider": "groq",
  "enablePerResponseRating": true,
  "likertScales": {
    "quality": {
      "question": "How would you rate the quality of this review?",
      "options": ["Poor", "Fair", "Good", "Excellent"]
    },
    "actionability": {
      "question": "How actionable were the suggestions?",
      "options": ["Not actionable", "Somewhat actionable", "Actionable", "Very actionable"]
    }
  },
  "features": {
    "enableBranching": false,
    "enableEditing": false,
    "autoSave": true
  }
}

Best Practices

Configuration

  1. Set Appropriate Turn Limits: Balance between conversation depth and efficiency
  2. Write Clear System Messages: Define AI behavior and constraints clearly
  3. Use Multiple Integrations: Provide fallback options and provider diversity
  4. Configure Rating Scales: Design rating questions that align with your quality goals
  5. Enable Auto-Save: Prevent data loss in long conversations

User Experience

  1. Clear Instructions: Provide context about the conversation purpose
  2. Reasonable Limits: Don't set maximum turns too low for complex tasks
  3. Feature Selection: Enable only necessary features to avoid confusion
  4. Provider Selection: Choose appropriate default provider for the use case
  5. Rating Design: Create rating scales that are easy to understand and use

Security

  1. Integration Management: Regularly review and update LLM integrations
  2. Access Control: Ensure only authorized users can access the tool
  3. Audit Logging: Monitor usage patterns and integration access
  4. Credential Rotation: Regularly rotate API keys and tokens

Troubleshooting

Common Issues

"No integrations available"

  • Cause: No LLM integrations configured in organization
  • Solution: Configure integrations in organization settings

"Maximum turns reached"

  • Cause: Conversation has reached the configured turn limit
  • Solution: Increase maxTurns in configuration or start new conversation

"Provider not configured"

  • Cause: Selected provider not available in integrations or API key not configured
  • Solution: Add integration for the required provider and ensure API key is properly configured

"Failed to get integration credentials"

  • Cause: API key is missing, invalid, or not properly configured in the integration
  • Solution: Check the integration settings and ensure the API key is valid and active

"Invalid LLM provider or missing credentials"

  • Cause: Integration setup is incomplete or API key is not configured
  • Solution: Complete the integration setup with valid API credentials

"Cannot submit - ratings incomplete"

  • Cause: Missing ratings across conversation branches
  • Solution: Use the missing ratings dropdown to navigate and complete all ratings

Auto-save not working

  • Cause: Auto-save feature disabled or network issues
  • Solution: Enable auto-save in configuration or use manual save button

"Pending response" warning

  • Cause: User trying to leave page while AI is responding
  • Solution: Wait for AI response to complete before leaving

Performance Optimization

  1. Provider Selection: Choose faster providers (like Groq) for real-time interactions
  2. Turn Limits: Set reasonable limits to prevent excessive API usage
  3. Rating Configuration: Limit the number of rating types to improve performance
  4. Auto-Save Frequency: Balance between data safety and performance

Integration with Workflows

The Multi Chat Tool integrates seamlessly with Flows pipelines:

  • Input: Receives packet data and metadata
  • Processing: Enables human-AI collaboration with branching
  • Output: Provides conversation history, ratings, and metadata
  • Quality Control: Built-in rating system for response evaluation
  • Progress Tracking: Real-time progress indicators and completion status

Output Format

The tool outputs comprehensive data including:

{
  "messages": [...], // Standard message format with metadata
  "conversations": {...}, // All conversation branches
  "currentConversationId": "...",
  "canSubmit": true,
  "progress": {
    "messageCount": 10,
    "ratingsCount": 20,
    "minTurns": 2,
    "maxTurns": 15,
    "requiredRatings": 8
  },
  "ui_state": {
    "systemPrompt": "...",
    "selectedProvider": "openai",
    "selectedModel": "gpt-4o"
  }
}

This tool is ideal for scenarios requiring human oversight of AI interactions, quality assurance of AI responses, complex conversational workflows, and multi-branch conversation management.