Workflow - Multi-Assistant Orchestration

How to Combine Multiple AI Assistants & Merge Their Outputs

Understand Tech Workflows allow you to orchestrate multiple AI assistants—private or public—inside the same automated pipeline. This feature is particularly useful when you want specialized assistants to produce independent outputs, and then have a final LLM synthesize, compare, or consolidate these outputs into one coherent result.

This tutorial walks you through a full working example.


What You Will Build

You will create a workflow where:

  1. A Trigger starts the flow.

  2. Several “Assistant Chat” nodes, each using a different assistant (private assistant or public LLM), process the same input.

  3. A Text Append node merges these responses.

  4. A Final Assistant Chat node (using any LLM of your choice) synthesizes the combined content into a unified answer.

This is ideal for:

  • Comparing reasoning between models

  • Aggregating multiple expert assistants

  • Performing ensemble-style AI reasoning

  • Using a strong model (e.g., GPT-5) to summarize weaker or specialized models

  • Creating human-like multi-agent analysis pipelines
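The pattern above can be sketched in plain Python. The functions below are hypothetical stand-ins for the workflow nodes (Trigger input, Assistant Chat, Text Append, final synthesis); the real workflow is assembled visually on the canvas, not in code:

```python
def ask_assistant(name: str, prompt: str) -> str:
    # Stand-in for one Assistant Chat node answering the triggered prompt.
    return f"[{name}] response to: {prompt}"

def text_append(persistent_text: str, outputs: list[str]) -> str:
    # Stand-in for the Text Append node: concatenate outputs in connection
    # order, then add the persistent instruction for the next node.
    return "\n\n---\n\n".join(outputs + [persistent_text])

def synthesize(merged: str) -> str:
    # Stand-in for the final Assistant Chat node (the "meta-model").
    return f"Consolidated answer based on:\n{merged}"

prompt = "Summarize our Q3 report"  # the Trigger input (illustrative)
outputs = [
    ask_assistant(name, prompt)
    for name in ("GPT-5", "Claude Sonnet 4.5", "Mistral Medium")
]
merged = text_append(
    "Merge these answers into one consolidated response.", outputs
)
print(synthesize(merged))
```

Each assistant runs independently on the same input, and only the final node sees all three answers at once.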


How to Start

  1. Navigate to Workflows (top left navigation bar)

  2. Click Create a workflow

  3. Give it a name

  4. Provide a description (optional)

  5. Click Create

The canvas will now display your first node: the Trigger.

1. Create a Trigger Node

After creating a workflow, add a Trigger node to define when your workflow begins.

To configure it:

  1. Click the Trigger node

  2. A side panel on the left opens

  3. Select a trigger point:

    • Form trigger

  4. Enable one or both of these options:

    • Send a text message

    • Send a file

In future releases, additional trigger types (webhooks, schedules, external events, API triggers, etc.) will be available.


2. Add Multiple “Assistant Chat” Nodes

Each “Assistant Chat” node allows you to choose any assistant:

  • Your private assistants (trained on your data)

  • Understand AI (privacy & security, stays inside your org context)

  • OpenAI GPT-4.1 / GPT-5

  • Mistral Medium

  • Gemini 2.5 Flash

  • DeepSeek V3

  • xAI Grok 3 Mini

  • Claude Sonnet 4.5

  • Perplexity (real-time search)

…and other models available on the platform.

To configure it:

  1. Click the small + button in the bottom-right corner, then click Action.

  2. Add an Assistant Chat node from the dropdown.

  3. Connect your trigger to your assistant by clicking the black circle on each node.

Repeat this for all assistants whose independent outputs you want to include.


3. Merge All Outputs With a “Text Append” Node

Once your assistants generate their responses, you need to combine their outputs.

Add a Text Append node and connect every assistant’s output into it.

This node will:

  • Concatenate all texts

  • Preserve ordering based on the connection sequence

  • Prepare a unified block of content that the next LLM will analyze

Configure the Text Append node using the Persistent Text field. In the Persistent Text, add a short instruction that tells the next node how to handle the content, for example a prompt explaining that the three Assistant Chat outputs should be merged into a single consolidated response.

For example, the Persistent Text could read: You will receive three answers generated by three different LLMs. Merge them into one consolidated response that removes duplicates, resolves contradictions, and keeps the strongest reasoning.

Example of what this node may produce:

[Assistant A Output]

---

[Assistant B Output]

---

[Assistant C Output]

---
You will receive three answers generated by three different LLMs. Merge them into one consolidated response that removes duplicates, resolves contradictions, and keeps the strongest reasoning.

You can also include a small label in each branch before append using persistent text (optional).


4. Send the Combined Result to a Final LLM

Now add one more Assistant Chat node. This will act as the “meta-model”: the assistant responsible for synthesizing, comparing, merging, or performing final reasoning.

Choose any model here; common choices include:

  • GPT-5 (best reasoning)

  • Understand AI (privacy-preserving consolidation)

  • Claude Sonnet 4.5 (excellent summarization)

  • DeepSeek V3 (analysis + reasoning)

That's it 👍


5. Run the Workflow

Click Execute Workflow and wait until the end of the execution.

When it finishes, the final node will display a green status indicator.

Click the node, then click the Open Execution Panel button to view the merged result.


Tips for Effective Multi-Assistant Orchestration

✔ Use diverse models

Models have different strengths. For example:

  • GPT-5 → reasoning

  • Claude Sonnet → summarization

  • Perplexity → real-time information

  • Understand AI → secure, context-aware, privacy-focused

  • Mistral Medium → strong logic, cost-effective

✔ Use persistent text inside append nodes

Add section titles so your final model knows which assistant wrote what.

Example:

  • "Assistant A Report:"

  • "GPT-5 Analysis:"

  • "Claude Summary:"


🧪 Example Use Cases

🌐 Multi-model summary

Ask multiple LLMs to summarize a long report, then merge into one unified summary.

🔍 Cross-model verification

Generate answers from different assistants, then ask a final LLM to check them for inconsistencies.

🤖 Multi-agent reasoning

Use specialized assistants (security expert, legal expert, engineering expert) → then merge via a final synthesis assistant.

🧱 Enterprise “Committee of Models”

Your internal assistants + external LLMs produce combined high-confidence outputs.


📌 Final Notes

  • Every assistant you select inside “Assistant Chat” is isolated and processed in your workspace context.

  • Combining models does not affect the performance of other assistants or user operations.

  • Workflows remain fully asynchronous; large ensembles may run longer.

  • If you need help building advanced multi-assistant architectures, contact us at [email protected].
