Each test run contains one or more test cases. A test case is a single scenario the AI agent executed — for example, submitting a checkout form, testing an authentication flow, or verifying that an edge case is handled correctly.
  • Passed test cases confirm that a scenario worked as expected.
  • Failed test cases represent bugs Ito found, complete with reproduction steps, code analysis, and a video recording of the failure.
  • Skipped test cases were not executed in this run.

What a test case contains

| Field | Description |
| --- | --- |
| Name | A short description of the scenario that was tested. |
| Status | passed, failed, or skipped. |
| Severity | For failed cases, a severity level indicating the bug’s impact (critical, high, medium, or low). |
| Category | The type of test — for example, Happy-path, Edge, Adversarial, Logic, Accessibility, or Mobile. |
| Impact | A description of how the bug affects users or the application. |
| Video | A recording of the agent’s session, seekable to the moment of failure. |
| Reproduction steps | Numbered steps the agent took to reach the failure. |
| Code analysis | The files, line ranges, and code snippets implicated in the bug. |
| Stub / mock context | Any mocked services, stubbed data, or bypassed authentication active during the run. |
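
The fields above can be modeled as a simple record type. The sketch below is purely illustrative — the interface and property names are assumptions based on this table, not Ito's actual API schema:

```typescript
// Hypothetical shape of a test case record (illustrative, not Ito's real schema).
type Status = "passed" | "failed" | "skipped";
type Severity = "critical" | "high" | "medium" | "low";

interface TestCase {
  name: string;                 // short description of the scenario tested
  status: Status;
  severity?: Severity;          // present only on failed cases
  category: string;             // e.g. "Happy-path", "Edge", "Adversarial"
  impact?: string;              // how the bug affects users or the app
  videoUrl?: string;            // session recording, when captured
  reproductionSteps: string[];  // ordered actions the agent took
  codeAnalysis?: { file: string; lines: [number, number]; snippet: string }[];
  stubContext?: string[];       // mocked services, stubbed data, bypassed auth
}

// Example of what a failed case might look like:
const example: TestCase = {
  name: "Checkout form rejects a valid postal code",
  status: "failed",
  severity: "high",
  category: "Edge",
  reproductionSteps: ["Open the checkout page", "Enter a valid postal code", "Submit the form"],
};
```

Optional fields reflect that severity, impact, video, and stub context only apply to some cases.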

The split-panel view

When you click a pull request in the dashboard, a split-panel view opens:
  • Left panel — A scrollable list of all test cases in the latest test run, grouped by status. Tabs at the top let you filter to All Test Cases, passed cases, failed cases, or additional findings.
  • Right panel — The detail view for the currently selected test case.
Selecting a test case in the left panel immediately loads its details on the right.

Test case detail

Video recording

The video player shows a full-session recording of what the AI agent did. When you select a failed test case, the player automatically seeks to the timestamp where the failure occurred. Use the standard video controls to scrub backward or forward through the session.
A video is available whenever screen recording was captured during the run; if no video appears, recording was not active for that test case.

Reproduction steps

The Reproduction Steps accordion lists every action the agent performed to reach the failure, in order. Use these steps to reproduce the bug manually in your local environment or on staging.

Code analysis

The Code Analysis accordion contains:
  • A written explanation of why the failure is likely a real bug.
  • One or more code snippets showing the relevant file path, line range, and source code.
This lets you navigate directly to the code that needs to change without searching through the diff yourself.

Stub / mock context

The Stub / mock context accordion (shown when present) describes any mocked API responses, stubbed authentication flows, or seeded test data the agent relied on during the run. Review this section when you want to confirm that a failure reflects real production behavior rather than a test-environment artifact.

Filtering test cases

The tabs above the test case list let you filter by result. All Test Cases shows every test case in the run regardless of status; the other tabs narrow the list to passed cases, failed cases, or additional findings.
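
The tab behavior amounts to a simple status filter. The following sketch uses assumed type and function names for illustration only — they are not part of Ito's product or API:

```typescript
// Illustrative status filter mirroring the tabs (assumed names, not Ito's API).
type Status = "passed" | "failed" | "skipped";

interface Case { name: string; status: Status; }

// "All Test Cases" corresponds to no filter; the other tabs pass a status.
function filterCases(cases: Case[], tab?: Status): Case[] {
  return tab === undefined ? cases : cases.filter(c => c.status === tab);
}

const cases: Case[] = [
  { name: "checkout happy path", status: "passed" },
  { name: "invalid coupon code", status: "failed" },
  { name: "mobile layout", status: "skipped" },
];

// filterCases(cases) keeps all three cases;
// filterCases(cases, "failed") keeps only "invalid coupon code".
```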

Related pages

  • Test Runs — the container that holds a set of test cases
  • Severity Levels — how Ito classifies bug severity
  • PR Comments — the GitHub comment that summarizes test case results