Reviewing and editing
After generation, you arrive at the Results view. This is where you read, refine, and manage your test cases.

Browsing test cases
Expanding a test case

Click any test case row to expand it and see its full content:
Title - Concise description of what is being tested
Objective - One-sentence purpose statement
Preconditions - Setup required before executing the test
Test Steps - Numbered sequence of actions
Expected Result - Observable outcome that constitutes a pass
Priority - Critical / High / Medium / Low
Requirement Reference - Link back to the source requirement
Sorting
Use the sort controls to order by:
ID - Original generation order
Confidence Score - AI-assessed quality (highest first)
Priority - Critical items first
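The three sort modes can be sketched as follows. This is an illustrative sketch only; the field names (`id`, `confidence`, `priority`) and the ordering logic are assumptions for the example, not the product's actual implementation.

```python
# Assumed field names for illustration; the app's internals are not documented.
PRIORITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def sort_cases(cases, mode):
    """Order test cases the way the sort controls describe."""
    if mode == "id":
        return sorted(cases, key=lambda c: c["id"])  # original generation order
    if mode == "confidence":
        return sorted(cases, key=lambda c: c["confidence"], reverse=True)  # highest first
    if mode == "priority":
        return sorted(cases, key=lambda c: PRIORITY_ORDER[c["priority"]])  # Critical first
    raise ValueError(f"unknown sort mode: {mode}")

cases = [
    {"id": 2, "confidence": 55, "priority": "Low"},
    {"id": 1, "confidence": 90, "priority": "Critical"},
]
print([c["id"] for c in sort_cases(cases, "confidence")])  # [1, 2]
```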
Searching
Type in the search bar to filter test cases by keyword. Results match against title, steps, and expected results. Results update as you type.
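The matching behavior can be sketched like this: a case-insensitive keyword check against the title, steps, and expected result. The record layout is an assumption for the example, not the product's actual data model.

```python
def matches(case, query):
    """Case-insensitive keyword match against title, steps, and expected result."""
    q = query.lower()
    fields = [case["title"], " ".join(case["steps"]), case["expected_result"]]
    return any(q in f.lower() for f in fields)

def filter_cases(cases, query):
    # An empty query leaves the list unfiltered.
    return [c for c in cases if matches(c, query)] if query else list(cases)

cases = [
    {"title": "Login succeeds", "steps": ["Open login page", "Submit valid credentials"],
     "expected_result": "User lands on the dashboard"},
    {"title": "Password reset", "steps": ["Request reset email"],
     "expected_result": "Reset link is sent"},
]
print([c["title"] for c in filter_cases(cases, "dashboard")])  # ['Login succeeds']
```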

Filtering by requirement
Use the Requirement filter dropdown to show only test cases linked to a specific requirement.
Editing a test case
Inline edit
Click the test case row to expand it and open the full view.
Click Edit (pencil icon). All fields become editable text areas.
Make changes: Edit the Title, Objective, Preconditions, Test Steps, Expected Result, Priority, or Requirement Reference as needed.
Click Save to keep your changes (they are saved locally and synced to the database automatically), or click Cancel to discard them.
Edits are auto-saved to the database in the background. You can safely close the tab after editing and your changes will be there when you return.
Managing individual test cases
Deleting a test case
Open the test case row to reveal actions.
Click the Delete (trash) icon.
Confirm deletion. The test case is removed from the list and auto-save updates the database. Note that deletion also affects coverage metrics.

Copying content to clipboard
Expand the test case and open the full test case view.
Click Copy (clipboard icon).
The full test case text is copied to your clipboard.

Spec Coverage panel
The Coverage tab (or the side panel in widescreen) shows a summary of how well your requirements are covered:

Requirements covered - Number of requirements with at least one test case
Spec section coverage % - Covered / Total requirements
AVG AI Confidence
AI Defect rate
A yellow or red indicator appears when the single-coverage ratio exceeds the threshold configured by your admin.
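These figures can be reproduced in outline. A minimal sketch, assuming simple dictionary records and interpreting "single-coverage ratio" as the share of covered requirements backed by only one test case; the real field names, formula, and threshold semantics may differ.

```python
from collections import Counter

def coverage_summary(requirement_ids, cases, single_coverage_threshold=0.5):
    """Summarize requirement coverage. Field names are illustrative assumptions."""
    counts = Counter(c["requirement_id"] for c in cases)
    covered = [r for r in requirement_ids if counts[r] > 0]
    total = len(requirement_ids)
    pct = round(100 * len(covered) / total, 1) if total else 0.0
    avg_conf = sum(c["confidence"] for c in cases) / len(cases) if cases else 0.0
    # Assumed meaning of "single-coverage ratio": covered requirements
    # that are backed by only one test case.
    single = sum(1 for r in covered if counts[r] == 1)
    single_ratio = single / len(covered) if covered else 0.0
    indicator = "warning" if single_ratio > single_coverage_threshold else "ok"
    return {"covered": len(covered), "total": total, "percent": pct,
            "avg_confidence": avg_conf, "single_coverage_ratio": single_ratio,
            "indicator": indicator}

reqs = ["R1", "R2", "R3"]
cases = [{"requirement_id": "R1", "confidence": 80},
         {"requirement_id": "R1", "confidence": 70},
         {"requirement_id": "R2", "confidence": 60}]
summary = coverage_summary(reqs, cases)
print(summary["percent"], summary["indicator"])  # 66.7 ok
```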
Requirements view
Click the Requirements tab to see all extracted requirements and the test cases that reference each of them.

Confidence scores
Each test case has a Confidence Score ("AI Score") badge (0–100) based on three sub-metrics:

Requirement Coverage
How well this test case reflects the requirement you provided.
Spec Utilization
How much the test case uses the provided specification/context beyond the requirement itself. Low values mean the requirement alone was sufficient.
Grounded Accuracy
Whether the test case stays factual without adding unsupported assumptions (no hallucinations).
Green badge (70–100): High confidence
Yellow badge (40–69): Moderate confidence, review recommended
Red badge (0–39): Low confidence, manual review strongly advised
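The banding above maps directly to badge colors. The thresholds come from the bands listed; the sub-metric aggregation shown as a plain average is purely an assumption for illustration, since the actual weighting is not documented.

```python
def badge_color(score):
    """Map a 0-100 confidence score to the badge bands listed above."""
    if score >= 70:
        return "green"   # high confidence
    if score >= 40:
        return "yellow"  # moderate confidence, review recommended
    return "red"         # low confidence, manual review strongly advised

# Assumption for illustration only: treating the overall score as a plain
# average of the three sub-metrics (the real weighting is not documented).
def overall_score(requirement_coverage, spec_utilization, grounded_accuracy):
    return (requirement_coverage + spec_utilization + grounded_accuracy) / 3

print(badge_color(85), badge_color(55), badge_color(12))  # green yellow red
```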
Productivity badge
At the top of the Results view, a Productivity badge estimates the time saved compared to writing test cases manually. This is a motivational metric based on average manual test-case writing times.
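A rough sketch of how such an estimate could be computed. The product's actual formula is not published; the flat per-case average below is an assumption made purely for illustration.

```python
# Assumed average; the real figure used by the product is not documented.
AVG_MANUAL_MINUTES_PER_CASE = 15

def estimated_minutes_saved(num_test_cases, avg_minutes=AVG_MANUAL_MINUTES_PER_CASE):
    """Estimate time saved versus writing each test case by hand."""
    return num_test_cases * avg_minutes

print(estimated_minutes_saved(20))  # 300
```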