Each test run contains one or more test cases. A test case is a single scenario the AI agent executed — for example, submitting a checkout form, testing an authentication flow, or verifying that an edge case is handled correctly.
- Passed test cases confirm that a scenario worked as expected.
- Failed test cases represent bugs Ito found, complete with reproduction steps, code analysis, and a video recording of the failure.
- Skipped test cases were not executed in this run.
## What a test case contains
| Field | Description |
|---|---|
| Name | A short description of the scenario that was tested. |
| Status | passed, failed, or skipped. |
| Severity | For failed cases, a severity level indicating the bug’s impact (critical, high, medium, or low). |
| Category | The type of test — for example, Happy-path, Edge, Adversarial, Logic, Accessibility, or Mobile. |
| Impact | A description of how the bug affects users or the application. |
| Video | A recording of the agent’s session, seekable to the moment of failure. |
| Reproduction steps | Numbered steps the agent took to reach the failure. |
| Code analysis | The files, line ranges, and code snippets implicated in the bug. |
| Stub / mock context | Any mocked services, stubbed data, or bypassed authentication active during the run. |
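One way to picture how these fields fit together is as a single record. The sketch below is illustrative only — every name and type in it is an assumption for the sake of the example, not Ito's actual data model or API:

```typescript
// Illustrative sketch of a test case record, assembled from the fields
// in the table above. All names and types here are assumptions.
type Status = "passed" | "failed" | "skipped";
type Severity = "critical" | "high" | "medium" | "low";

interface CodeSnippet {
  filePath: string;
  lineRange: [number, number]; // inclusive start and end lines
  source: string;
}

interface TestCase {
  name: string;                // short description of the scenario
  status: Status;
  severity?: Severity;         // only meaningful for failed cases
  category: string;            // e.g. "Happy-path", "Edge", "Adversarial"
  impact?: string;             // how the bug affects users
  videoUrl?: string;           // present when screen recording was captured
  reproductionSteps: string[]; // ordered actions the agent took
  codeAnalysis?: {
    explanation: string;       // why the failure is likely a real bug
    snippets: CodeSnippet[];
  };
  stubContext?: string;        // mocks, stubs, or bypassed auth in effect
}
```

Note how the optional fields mirror the table: severity, impact, video, code analysis, and stub/mock context only appear when the run produced them.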
## The split-panel view
When you click a pull request in the dashboard, a split-panel view opens:

- Left panel — A scrollable list of all test cases in the latest test run, grouped by status. Tabs at the top let you filter to All Test Cases, passed cases, failed cases, or additional findings.
- Right panel — The detail view for the currently selected test case.
## Test case detail
### Video recording
The video player shows a full-session recording of what the AI agent did. When you select a failed test case, the player automatically seeks to the timestamp where the failure occurred. Use the standard video controls to scrub backward or forward through the session.

Video is available when screen recording was captured during the run. If no video appears, recording was not active for that test case.
### Reproduction steps
The Reproduction Steps accordion lists every action the agent performed to reach the failure, in order. Use these steps to reproduce the bug manually in your local environment or on staging.

### Code analysis
The Code Analysis accordion contains:

- A written explanation of why the failure is likely a real bug.
- One or more code snippets showing the relevant file path, line range, and source code.
### Stub / mock context
The Stub / mock context accordion (shown when present) describes any mocked API responses, stubbed authentication flows, or seeded test data the agent relied on during the run. Review this section when you want to confirm that a failure reflects real production behavior rather than a test-environment artifact.

## Filtering test cases
The tabs above the test case list let you filter by result:

- All test cases — shows every test case in the run regardless of status.
- Passed
- Failed
- Additional findings
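The status tabs behave like a simple filter over the run's test cases. The sketch below models that behavior under assumed names (it covers the status-based tabs only, not Additional findings, and is not Ito's actual implementation):

```typescript
// Sketch of the dashboard's tab filtering. Names are illustrative.
type Status = "passed" | "failed" | "skipped";

interface TestCaseSummary {
  name: string;
  status: Status;
}

type Tab = "all" | Status;

// "all" keeps every case; any other tab keeps only matching statuses.
function filterByTab(cases: TestCaseSummary[], tab: Tab): TestCaseSummary[] {
  return tab === "all" ? cases : cases.filter((c) => c.status === tab);
}
```

Selecting a tab in the left panel re-renders the list with just the cases the filter keeps; the right panel continues to show whichever case is selected.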
## Related
- Test Runs — the container that holds a set of test cases
- Severity Levels — how Ito classifies bug severity
- PR Comments — the GitHub comment that summarizes test case results