Workflow - Test Case Generation
Introduction
A practical step-by-step guide to creating and running automated workflows for test case generation inside Understand Tech Workflows.
1. Overview
Workflows allow teams to automate complex logic using modular nodes (trigger → processing → AI → output). This hands-on module shows how to build a Test Case Generator Workflow from scratch.
2. Workflow Canvas Interface
The Workflow Canvas is the visual environment where users design, assemble, and automate processes with drag-and-drop components. Below is an example of a fully populated workflow containing multiple nodes, branches, and outputs. This screenshot helps illustrate how workflows look once they grow in complexity.

This example shows a multi-step test case generation workflow with triggers, input nodes, AI processing blocks, branching logic, and output nodes integrated into a single flow.
3. Create a New Workflow
Navigate to Workflows (top left navigation bar)
Click Create a workflow
Name it, for example: “My Workflow – Test Case Generator”
Provide a Description (optional)
Click Create
The canvas will now display your first node: the Trigger.

4. Configure the Trigger
The Trigger determines how and when your workflow starts.
To configure it:
Click the Trigger node
A side panel on the left opens
Select a trigger point:
Form trigger
You can enable just one of these options or both:
Send a text message
Send a file
In future releases, additional trigger types (webhooks, schedules, external events, API triggers, etc.) will be available.
Input Types and Constraints
1. Text Input (Test Case Format)
If this option is enabled, the Trigger displays a free-form text field.
You must provide the expected test case format in this field.
The text can be written freely
It must include a well-defined and structured test case format
A clear structure improves output quality from the AI engine
No restrictions on length
Example:
TEST CASE FORMAT:
The test case should be a table with exactly two columns and five rows:
| Field | Description |
|-------|-------------|
| Purpose | Describe what the test verifies, using the requirement intent and any provided context. Keep concise and technically precise. |
| Requirement under test | The exact requirement ID (e.g., TS.34_4.0_REQ_011). |
| Entry Criteria | Preconditions or setup required before executing the test. Use context to clarify what must be ready or configured. |
| Test Procedure | Numbered, reproducible steps directly verifying the requirement. Use context to make the steps realistic but stay aligned to the requirement's scope. |
| Exit Criteria (Pass Criteria) | Observable results confirming compliance. Describe only measurable, requirement-driven outcomes (not performance, power, or optional KPIs unless stated). |
References section:
After the table, always include a short References section listing all specification documents, sections, and contextual sources that informed the test case.
Expected Output Example
| Field | Content |
|--------|---------|
| Purpose | To verify that an IoT Device Application transmitting data frequently maintains continuous connectivity rather than performing repeated connection setup and release. |
| Requirement under test | TS.34_4.0_REQ_001 |
| Entry Criteria | 1. IoT Device Application is configured to send frequent data.<br>2. Device has a valid subscription and network registration. |
| Test Procedure | 1. Power on and register the IoT Device to the network.<br>2. Establish data connectivity and start periodic data transmissions.<br>3. Observe radio connection state (setup and release) during the transmission period. |
| Exit Criteria (Pass Criteria) | 1. The device does not perform frequent connection setup and release between transmissions.<br>2. The device maintains a consistent connection state appropriate to its data frequency. |
References:
TS.34_4.0 specification, Requirement TS.34_4.0_REQ_001.
3GPP TS 36.331 for RRC state definitions (context).
Supporting context provided by user.
2. File Input (Target Specification)
If this option is enabled, the Trigger displays a file upload input.
This file represents the specification from which the workflow will generate test cases.
Current constraints:
Only one file can be uploaded
Format supported: PDF only
Maximum size: 1 GB
(Multiple files will be supported in a future update.)
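If you are preparing uploads programmatically, a minimal validation sketch of these constraints (single file, PDF only, 1 GB cap) might look like the following. This is illustrative Python, not the platform's actual upload code, and the file name is hypothetical.

```python
import os

MAX_SIZE_BYTES = 1 * 1024**3  # current Trigger limit: 1 GB

def validate_spec_upload(path: str) -> None:
    """Check a specification file against the current Trigger constraints:
    exactly one file, PDF format, at most 1 GB."""
    if not path.lower().endswith(".pdf"):
        raise ValueError("Only PDF specifications are supported for now.")
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        raise ValueError("Specification exceeds the 1 GB upload limit.")

validate_spec_upload("TS.34_v4.0.pdf")  # hypothetical file name
```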
5. Add and Configure the Information Extractor (TOC) Node
After configuring the Trigger, the next step is to extract the relevant sections from the uploaded specification. For this tutorial, we begin by extracting the Introduction section.
The node you will use for this purpose is: Information Extractor TOC
This node allows the workflow to extract targeted sections such as:
Introduction
Overview
Appendix
References (and any others, depending on the structure of the document)
Later in the tutorial, we will extend this to extract References as well.
5.1 Add the Information Extractor Node
Click the “+” button on the canvas (bottom right)
Choose Action → Information Extractor TOC
A new node appears on the canvas

5.2 Configuring the Node
Click the Information Extractor TOC node to open the configuration panel, then connect it directly after the Trigger node (as shown in the workflow screenshot below).

You will see two important fields in the configuration panel:
A. Instructions
This field defines what part of the specification you want to extract. For this tutorial, we extract only the Introduction.
Enter exactly the following in the Instructions field:
Introduction
(Do NOT add anything else here; additional sections like References will be added later.)
B. Base LLM
This dropdown lets you select which model will perform the extraction.
Two engines are available:
OpenAI
Default option
Suitable for non-confidential specifications
Understand AI
Use this if your specification contains confidential, proprietary, or IPR-sensitive information
This model ensures full sovereignty and avoids external cloud exposure
For this tutorial, you can keep OpenAI unless the specification is sensitive.
Engineering note: This choice has no impact on workflow structure — only on data sovereignty and processing location.
5.3 Node Behavior (What Happens Internally)
The node receives the PDF specification uploaded in the Trigger
It applies your Instruction to extract only the Introduction
The output becomes a clean text chunk representing the Introduction
This output is then passed to the next nodes in the workflow (We will see this together below)
You don’t need to write any parsing logic; the Information Extractor TOC node handles it.
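Conceptually, the node behaves like the small sketch below: it asks the selected Base LLM to return only the requested section, guided by the table of contents. This is a simplified illustration, with `call_llm` as a placeholder; the real node is considerably more robust.

```python
def extract_section(spec_text: str, section_name: str, call_llm) -> str:
    """Rough sketch of the Information Extractor TOC behavior: use the
    table of contents to locate a named section and return only its text.
    call_llm is a placeholder for the selected Base LLM."""
    prompt = (
        "Using the document's table of contents, extract the full text of "
        f"the '{section_name}' section and return nothing else.\n\n"
        f"{spec_text}"
    )
    return call_llm(prompt)

# e.g. introduction = extract_section(spec_text, "Introduction", call_llm)
```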
5.4 Extract the References Section
After extracting the Introduction, the next step is to extract the References section from the specification. This is done by adding a second Information Extractor TOC node configured specifically for References.
A. Add a New Information Extractor TOC Node
Click the “+” button on the canvas
Select Action → Information Extractor TOC
A new node will appear (empty by default)
Place it near the first extractor node to keep the workflow clear
B. Configure Instructions for References Extraction
Click the new Information Extractor TOC node to open the configuration panel.
In the Instructions field, enter exactly:
References
This instructs the node to extract the References section from the specification PDF.
Note for engineers: Do not add additional text around this instruction. The extractor is optimized for short, explicit section keywords.
C. Select the LLM (Same Rules as Before)
Base LLM options remain the same:
OpenAI → Default
Understand AI → Use if the spec contains confidential or IPR-sensitive material
For the tutorial, you may keep OpenAI.
D. Connect the Node to the Trigger
Just like the Introduction extractor, the References extractor must be connected directly to the Trigger.
This ensures both extractors receive the same input specification and run in parallel.
Your workflow sequence at this point should look like:
Trigger
├── Information Extractor TOC (Introduction)
└── Information Extractor TOC (References)
This is also visible in the sample workflow screenshot below.

5.5 Add and Configure the Requirement Extractor Node
After extracting the Introduction and References sections, the next step is to extract the requirements themselves. This is the most important part of the workflow, since all downstream test-case generation depends on correctly identifying formal requirements.
To do this, you will add the Requirement Extractor node.
A. What the Requirement Extractor Does
The Requirement Extractor is a specialized extraction node designed specifically to detect and extract requirements from a specification. It does not require the user to define any extraction rules.
Engineering characteristics:
Automatically analyzes the specification page by page
Uses the Table of Contents when available to locate requirement-related sections
Detects requirements regardless of formatting, including:
Numbered lists
Tables
Inline requirement labels
Sentence-style requirements
Mixed formatting
Produces a structured list of extracted requirements for use by downstream nodes
There is almost nothing to configure; the node is optimized for requirement extraction out of the box.
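To give a feel for what “detecting requirements regardless of formatting” involves, here is a deliberately naive illustration that only matches explicitly labeled requirement IDs. The actual node is LLM-driven and also catches requirements expressed in tables, prose, or mixed layouts with no ID at all.

```python
import re

# Matches IDs shaped like TS.34_4.0_REQ_011; real detection is LLM-driven
# and format-agnostic, this regex is illustrative only.
REQ_ID = re.compile(r"\b[A-Z]{2}\.\d+_\d+\.\d+_REQ_\d+\b")

def find_requirement_ids(page_text: str) -> list[str]:
    """Naive per-page scan for explicitly labeled requirement IDs."""
    return REQ_ID.findall(page_text)

print(find_requirement_ids("The device SHALL ... (TS.34_4.0_REQ_011)"))
```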
B. Add the Requirement Extractor Node
Click the “+” button
Select Action → Requirement Extractor
Place the node beneath the existing extractors (Introduction, References)
C. Configure the Node
Click the node to open the configuration panel.
There is only one parameter:
LLM Selection
OpenAI → Default
Understand AI → Recommended if the specification is confidential or IPR-sensitive
Important: This node requires no instructions, no section names, and no templates. It is fully automatic.
This design ensures maximum reliability across different standards, specs, and document styles.
D. Connect the Node to the Trigger
Just like the Introduction and References extractors, the Requirement Extractor must be connected directly to the Trigger.

Your extraction stage now looks like:
Trigger
├── Information Extractor TOC (Introduction)
├── Information Extractor TOC (References)
└── Requirement Extractor
This ensures all extractors run in parallel on the same input PDF.
6. Add and Configure the Text Append Node
Once the extraction nodes are connected to the Trigger, the next step is to build the base system prompt that will later be used to create the Knowledge Base Assistant and generate the final test cases.
To do this, we add a Text Append node.
This node allows you to combine persistent text (your instruction or format template) with text coming from previous nodes.
For this stage, we use the Text Append node to preprocess and rewrite the user-provided test case format.
6.1 Purpose of the Text Append Node in This Workflow
The goal of this node is to:
Take the test case format provided by the user in the Trigger
Combine it with a persistent instruction you provide in this node
Generate a clean, reformulated test case format that the LLM can follow reliably
Build a controlled, predictable system prompt foundation for the assistant we will create later
This ensures that test case generation is consistent, reproducible, and aligned with the expected structure.
6.2 Add the Text Append Node
Click the “+” button
Select Action → Text Append
Place the node in the flow
Connect it directly to the Trigger
The structure should now look like:
Trigger
├── Introduction Extractor
├── References Extractor
├── Requirement Extractor
└── Text Append ← (rewrites test case format)
6.3 Configure the Persistent Text
Click the Text Append node to open the configuration panel.
In the Persistent text field, enter the following exact instruction:
Here is a draft of a test case format to be used by a test case generator. The test case generator should strictly follow the test case format. Rewrite it to be clear for the LLM to be able to follow it correctly. At the end of the reformulated test case format add this tag: "[BELOW ARE ADDITIONAL INFORMATIONS TO HELP GENERATE THE TEST CASES]" as more additional information will be appended to the response later.
Note: This instruction is fixed and does not depend on the specification. It ensures that the test case format is rewritten cleanly and remains machine-usable.
6.4 Ignore the File Input
Enable the option:
✔ Ignore the file_id input
Why? Because this node supports three possible input types:
text
file_id
both
Since the Trigger also passes a file_id (the spec PDF) to all downstream nodes, the Text Append node would, by default, try to concatenate the file content as well. We do NOT want that for this step.
We only want to append:
The test case format (text from Trigger)
The persistent instruction (text written in this node)
By enabling “Ignore the file_id input”, we ensure the node concatenates text only, not the PDF.
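In pseudocode terms, the node's behavior with this option enabled reduces to plain text concatenation. The sketch below is an assumption about the internals, written only to make the file_id toggle concrete:

```python
def text_append(persistent_text: str, inputs: dict, ignore_file_id: bool = True) -> str:
    """Sketch of the Text Append node: join the persistent text with the
    upstream text, optionally dropping the file_id the Trigger passes along."""
    parts = [persistent_text, inputs.get("text", "")]
    if not ignore_file_id and inputs.get("file_id"):
        parts.append(f"[attached file: {inputs['file_id']}]")
    return "\n\n".join(part for part in parts if part)

# With ignore_file_id=True (the setting used here), only the two text
# inputs are concatenated; the spec PDF is left out.
merged = text_append(
    "Rewrite this format...",
    {"text": "TEST CASE FORMAT: ...", "file_id": "spec-123"},
)
```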
6.5 What the Node Will Output
This Text Append node produces a single, reformulated test case format, containing:
Your rewriting instruction (persistent text)
The user’s original test case format (text input from Trigger)
The appended tag:
[BELOW ARE ADDITIONAL INFORMATIONS TO HELP GENERATE THE TEST CASES]
This output will later be passed to the System Prompt and Assistant Builder nodes to construct the assistant’s base knowledge.
6.6 Summary
You now have:
| Setting | Purpose |
|---------|---------|
| Text Append | Rewrites and normalizes the test case format to create a stable, LLM-friendly base prompt |
| Ignore file_id | Ensures only text is concatenated, not the PDF |
| Persistent text | Contains your rewriting instruction + the control tag |
This completes the format preparation stage.
7. Add and Configure the Assistant Chat Node
After the Text Append node rewrites and normalizes the test case format, the next step is to pass this text to an Assistant Chat node.
The purpose of this node is to let an LLM produce a clean, fully formatted, final version of the test case format that will be used later in the system prompt and assistant creation.

7.1 Purpose of the Assistant Chat Node
This node performs a simple but important function:
Receives the rewritten test case format produced by the Text Append node
Sends it to an LLM for final formatting and cleanup
Produces a completed, polished test case format ready for downstream nodes
This ensures consistency and helps avoid formatting drift caused by raw template concatenation.
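If you want a mental model of this step, it is a single chat-completion pass over the text produced by Text Append. The sketch below assumes an OpenAI-style backend; the platform's internal call may differ.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def finalize_format(rewritten_format: str) -> str:
    """One LLM pass that returns the polished test case format. No extra
    instructions are needed: they are already embedded in the input text."""
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": rewritten_format}],
    )
    return response.choices[0].message.content
```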
7.2 Add the Assistant Chat Node
Click “+”
Select Action → Assistant Chat
Place the node to the right of Text Append
Connect Text Append → Assistant Chat
Your flow now looks like:
Trigger
├── Intro Extractor
├── References Extractor
├── Requirement Extractor
└── Text Append → Assistant Chat
7.3 Configure the Node
Click the Assistant Chat node to open the configuration panel.
You will see one key field:
Model
This dropdown lets you select which LLM will process and finalize the test case format.
Available models include:
GPT-4.1 (the default)
Other OpenAI models
Understand AI models (depending on deployment)
Engineering note: There is no need to provide instructions here — the content coming from Text Append already contains very explicit formatting instructions.
7.4 Why Use Assistant Chat Here?
It guarantees a stable, clearly formatted output
It ensures the LLM follows the rewritten instructions produced earlier
It normalizes edge cases (HTML tables, Markdown tables, inconsistencies)
It reduces prompt errors in later nodes such as the System Prompt, Assistant Builder, and Chained Prompt
This makes the final test case generator deterministic and minimizes formatting drift.
7.5 Summary
At this stage, your workflow has:
| Node | Role |
|------|------|
| Text Append | Rewrites test case format + adds control tag |
| Assistant Chat | Finalizes and normalizes the rewritten format using an LLM chosen by the user |
8. Add a Second Text Append Node (Combine TC Format + Introduction)
After the Assistant Chat node produces the finalized, clean test case format, we now need to combine that output with the Introduction section extracted earlier.
This prepares the complete system prompt foundation that will later be passed to the Assistant Builder to create the Test-Case Generator Assistant.
8.1 Purpose of This Second Text Append Node
This node merges two critical inputs:
Formatted test case structure: output from the Assistant Chat node
Introduction section: output from Information Extractor TOC (Introduction)
This combined content becomes the core contextual prompt for the later stages of the workflow.
It gives the assistant:
The test-case structure
The domain context (Introduction)
The correct setup to generate compliant and context-aware test cases

8.2 Add the Node
Click “+”
Select Action → Text Append
Place the node to the right of the Assistant Chat output
Connect:
Assistant Chat → Text Append (System Prompt Builder)
Introduction Extractor → Text Append (System Prompt Builder)
This node receives two text inputs simultaneously.
Your topology now looks like:
Trigger
├── Introduction Extractor →──┐
├── References Extractor │
├── Requirement Extractor ├── (later used in chained prompt)
└── Text Append → Assistant Chat → Text Append (combine TC format + Introduction)
8.3 Node Configuration
Open the new Text Append node.
Persistent text
Leave empty, unless you want to prepend fixed system text. For this tutorial, we leave it blank.
Ignore file_id input
Enable ✔ Ignore the file_id input
Reason: Both Assistant Chat and Introduction Extractor will pass text only. We don’t want to append the large PDF file_id by accident.
What is being concatenated?
This node combines:
[Formatted Test Case Format]
+
[Introduction Section Extracted from the PDF]
This becomes the final context block the Assistant Builder will use.
8.4 Why We Do This Merge Here
The formatted test case format needs to be included in the system context
The Introduction provides the foundational domain context (e.g., technology overview, definitions, scope)
Merging them early ensures the Assistant Builder receives a complete, structured, ready-to-use context
This reduces hallucinations and ensures the assistant always produces test cases aligned with:
The specification
The intended domain
The expected format
8.5 Output of This Node
The output of this Text Append node is a single text block containing:
The rewritten, finalized test case format
The Introduction extracted from the specification
This text will be passed directly into the System Prompt for the upcoming Assistant Builder.
8.6 Summary
| Node | Inputs | Result |
|------|--------|--------|
| Text Append (System Prompt Builder) | Assistant Chat output + Introduction section | Final system prompt context for the Test-Case Assistant |
You now have a clean and complete contextual block ready to be injected into the AI model.
9. Add the System Prompt Node
After combining the formatted test case structure and the Introduction section in the previous Text Append node, the next step is to pass this merged text into a System Prompt node.
This node simply converts the incoming text into the correct internal format used by the Assistant Builder.
9.1 Add and Connect the Node
Click “+”
Select Action → System Prompt
Connect the output of the second Text Append node to the System Prompt node
The structure now looks like:
Text Append (TC Format + Introduction)
↓
System Prompt
9.2 Configuration
The System Prompt node does not require any configuration. There are no fields to fill and no parameters to adjust.
It automatically:
Wraps the input text into the correct system-prompt format
Prepares the content for the Assistant Builder node
9.3 Summary
The System Prompt node acts as a simple formatting bridge between:
Your combined context text → Assistant Builder-compatible system prompt
This completes the system prompt preparation stage.
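As a mental model, the node's entire job can be pictured as the one-liner below. The message shape is an assumption for illustration, not the platform's actual code:

```python
def to_system_prompt(context_text: str) -> dict:
    """Sketch of the System Prompt node: wrap the merged context block into
    the system-message shape the Assistant Builder consumes, unchanged."""
    return {"role": "system", "content": context_text}
```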
10. Add and Configure the Assistant Builder Node
The Assistant Builder node is where the workflow constructs the actual AI Test-Case Generator. All contextual elements gathered so far — the test-case format, the Introduction, the References, and the Specification itself — are combined here to build a fully functional assistant capable of generating high-quality, requirement-driven test cases.

10.1 Purpose of the Assistant Builder
This node creates a dedicated AI agent whose job is to generate test cases that strictly follow:
The Test Case Format
The Specification’s Introduction and context
The list of References extracted from the spec
The full specification content (PDF)
The organization’s master Knowledge Base (external repository)
The Assistant Builder also automatically loads the additional reference documents from the Knowledge Base so that the agent has context beyond the uploaded PDF.
This enables enterprise-grade consistency across specifications of the same family or domain.
10.2 Inputs the Assistant Builder Requires
The Assistant Builder takes three major inputs:
1. System Prompt (mandatory)
Connected from the System Prompt node.
This contains:
The finalized test case structure (formatted by Assistant Chat)
The Introduction section
Any additional context merged earlier
This becomes the assistant’s internal “identity and rules.”
2. References Section (optional but recommended)
Connected from the TOC References Extractor.
This provides the names of referenced documents. The Assistant Builder uses these names to fetch the actual referenced materials from the Knowledge Base.
For example, if the spec mentions:
GSMA TS.34
3GPP TS 23.501
3GPP TS 36.331
The Assistant Builder links those references to matching documents stored in the organization's Knowledge Base.
3. The Specification Itself (PDF)
Connected from the Trigger output.
This gives the assistant direct access to:
Actual requirements
Tables
Notes
Definitions
Procedures
Technical constraints
This ensures the agent does not rely only on extracted sections but on the full specification.
10.3 Knowledge Base Selection
Inside the Assistant Builder configuration panel, you must choose a Knowledge Base.
What is the Knowledge Base?
A Knowledge Base is created outside this workflow (in the Understand Tech platform). It typically contains:
All reference documents used by an organization
All technical standards relevant to the domain
All past or related specifications
Internal or external normative documents
Think of it as the company's reference document library.
Why is it needed?
Because the References Extractor node only extracts names of referenced documents — not their content.
The Assistant Builder uses the Knowledge Base to:
Locate the referenced document by name
Retrieve its contents
Pass that content to the assistant
Ensure test-case generation is always compliant with referenced standards
This step is critical for generating accurate and traceable test cases.
Note: Please refer to our main platform documentation to learn how to create and manage your Knowledge Base.
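The name-to-document matching can be pictured as below. This is an illustrative sketch under the assumption that the Knowledge Base exposes documents by title; the real lookup is more sophisticated.

```python
def resolve_references(reference_names: list[str],
                       knowledge_base: dict[str, str]) -> dict[str, str]:
    """Match extracted reference names (e.g. '3GPP TS 23.501') against
    documents stored in the Knowledge Base, keyed here by title."""
    resolved = {}
    for name in reference_names:
        for title, content in knowledge_base.items():
            if name.lower() in title.lower():
                resolved[name] = content
                break  # take the first matching document
    return resolved

# Hypothetical library contents for illustration:
kb = {"GSMA TS.34 v4.0": "...", "3GPP TS 23.501 v17": "..."}
docs = resolve_references(["GSMA TS.34", "3GPP TS 23.501"], kb)
```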
10.4 Add and Connect the Assistant Builder Node
Click “+”
Select Action → Assistant Builder
Place it to the right of the System Prompt
Connect these nodes:
System Prompt → Assistant Builder
References Extractor → Assistant Builder
Trigger (spec PDF) → Assistant Builder
This gives it all required inputs.
10.5 Configure the Node
In the Knowledge base dropdown, select the appropriate collection of reference documents.
Typical examples:
“GSMA Standards”
“3GPP References”
“Enterprise Wireless Standards Library”
“Internal Compliance Specs”
This step ensures the assistant is built with full contextual support.
There is no other configuration for this node.
10.6 Output of the Assistant Builder
The output of this node is:
A fully constructed AI Assistant
With integrated:
System Prompt
Specification
Extracted References
Linked referenced documents from the Knowledge Base
This assistant is now able to generate test cases for any requirement extracted earlier.
10.7 Summary
| Input | Source | Purpose |
|-------|--------|---------|
| System Prompt | System Prompt node | Rules + context + test case format |
| References (names) | TOC References Extractor | Identifies which documents to load from the Knowledge Base |
| Specification (PDF) | Trigger | Main document used to derive requirements and test cases |
| Knowledge Base | User selection | Provides referenced documents for deeper context |
The Assistant Builder node transforms all of this into a domain-aware, format-aware test-case generator.
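In API terms, you can think of the builder as one constructor call over those four inputs. Everything below, including the `Assistant` container and its field values, is hypothetical shorthand for illustration, not a documented platform API:

```python
from dataclasses import dataclass

@dataclass
class Assistant:
    """Hypothetical container mirroring the four inputs the node combines."""
    system_prompt: str
    reference_names: list[str]
    specification_file_id: str
    knowledge_base: str

assistant = Assistant(
    system_prompt="...merged context from section 9...",
    reference_names=["GSMA TS.34", "3GPP TS 23.501", "3GPP TS 36.331"],
    specification_file_id="spec-123",       # from the Trigger upload
    knowledge_base="GSMA Standards",        # the user-selected library
)
```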
11. Add and Configure the Chained Prompt Node
The Chained Prompt node is the final processing block of the workflow. Its role is to:
Take the fully built AI Test-Case Assistant
Take the complete list of extracted requirements (CSV generated by the Requirement Extractor)
Loop through each requirement one-by-one
Generate a test case for each requirement
Combine all generated test cases into a single final document
Produce an output file in the user-selected format (Word, PDF, etc.)
This node represents the generation stage of the workflow.

11.1 Purpose of the Chained Prompt Node
The Chained Prompt enables requirement-by-requirement test case generation.
It solves three engineering problems:
Iteration: A specification can contain hundreds of requirements; each must be processed in turn.
Context preservation: Each test case generation reuses the same Assistant built earlier.
Aggregation: All individual outputs must be merged into one cohesive final document.
This node automates that entire pipeline.
11.2 Inputs Required
The node requires two inputs:
1. The Assistant (Expert test-case generator)
Connected from the Assistant Builder node. This assistant already contains:
Test case format
Introduction
Specification context
References fetched from the Knowledge Base
It is the “brain” used for generating each test case.
2. Requirement List (CSV)
Connected from the Requirement Extractor.
The Requirement Extractor outputs a structured CSV containing:
Requirement IDs
Requirement text
Metadata (depending on the spec)
The Chained Prompt uses this CSV to iterate.
11.3 How the Chained Prompt Works
Step 1 — Read CSV
It loads the list of all extracted requirements.
Step 2 — Loop Over Requirements
For each requirement in the CSV:
It passes the requirement text to the Assistant
The Assistant generates exactly one test case following the strict test-case format
The Chained Prompt collects the output
Step 3 — Combine All Test Cases
Once all requirements have been processed, the node aggregates the individual outputs into a single unified document.
Step 4 — Produce Final File
The node produces a downloadable file in the selected format.
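Put together, the four steps amount to the loop sketched below. It assumes the requirements CSV has `id` and `text` columns and that Word output was selected; `ask_assistant` stands in for the assistant built in section 10, and python-docx is used purely for illustration.

```python
import csv
from docx import Document  # python-docx, illustrative choice for Word output

def run_chained_prompt(requirements_csv: str, ask_assistant, out_path: str) -> None:
    """Sketch of the Chained Prompt loop: one generated test case per
    requirement, aggregated into a single Word document."""
    doc = Document()
    with open(requirements_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumed columns: 'id', 'text'
            test_case = ask_assistant(
                f"Generate one test case for requirement {row['id']}: {row['text']}"
            )
            doc.add_paragraph(test_case)
    doc.save(out_path)

# run_chained_prompt("requirements.csv", assistant.ask, "test_cases.docx")
```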
11.4 Output Format
In the node configuration, you can choose:
Word (DOCX)
Other supported formats (depending on platform capabilities)
For this tutorial, the screenshot shows Word selected.
The final deliverable will be a clean Word document with:
One test case per requirement
Uniform structure
Fully formatted and assistant-generated content
Aggregated into a single export file
11.5 Adding and Connecting the Node
Click “+” → Action → Chained Prompt
Connect:
Assistant Builder → Chained Prompt
Requirement Extractor → Chained Prompt
Select the Output Format you want (Word recommended)
The workflow is now complete.
11.6 Summary
| Node | Role |
|------|------|
| Assistant Builder | Builds the contextualized AI Test-Case Assistant |
| Requirement Extractor (CSV) | Supplies the list of requirements |
| Chained Prompt | Iterates over requirements, generates test cases, aggregates into one document |
The Chained Prompt is the final step that executes the full end-to-end generation of test cases.
12. Execute the Workflow and Retrieve the Final Test Case Document
Once all nodes are configured, the workflow is ready to run end-to-end. Just click the Execute Workflow button at the bottom of the page.

The system will now:
Extract Introduction
Extract References
Extract Requirements
Build the AI Assistant
Iterate through every requirement
Generate one test case per requirement
Combine everything into a final exportable document
12.1 Run the Workflow
Click Execute Workflow at the bottom of the screen.
You will see each node starting to run in sequence. This process can take significant time, depending on:
The size of the specification
The number of requirements
The number of reference documents
The selected model
Typical runtime is ≈ 45 minutes to 1 hour, but large specifications may take longer.
This is expected.
12.2 Monitoring the Execution
You can click Open Execution Panel at any time to see:
Execution logs
Node-by-node progress
Which requirement is currently being processed
Any error or retry indicators
The Chained Prompt node will take the longest, as it performs a loop across all requirements.
12.3 Download the Final File
When the workflow is fully completed:
The Chained Prompt node will show a green success icon
Click that node
A download link will appear

This file contains:
All generated test cases
In the selected output format (e.g., Word)
Fully structured
Aligned with the test-case format
Based on the specification + references + assistant logic
Congratulations — your automatically generated test case document is ready.
12.4 Final Notes
You may re-run the workflow whenever a new version of the specification is released.
You can use the same Knowledge Base for other specifications belonging to the same domain or organization.
Always verify that the correct Knowledge Base is selected before starting the workflow.
For very large documents, consider breaking the workload into smaller chunks or increasing model capacity.