1) Start With the Fast Path: Inspect the Last Run
The quickest way to debug is to look at the most recent execution. In Twin, your first stop should always be:
- The latest run details (what the agent did, step by step)
- The inputs that were provided
- Any errors / warnings surfaced during execution
Examples of what to ask in chat
- “This agent run failed — explain what happened and how to fix it.”
- “The run completed but the output is wrong. Identify where it went off track.”
- “Rerun with the same input, but add stronger extraction rules and return structured output.”
2) If the Agent Finished but the Output Is Wrong
Sometimes everything “succeeds,” but the result is incomplete, oddly formatted, or inaccurate. Treat this differently from a hard failure.
Debug checklist
Validate inputs
- Were you using the expected URL, file, row, or payload?
- Are fields missing or in a different format than usual?
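A quick way to catch both problems is a small pre-run sanity check. This is a hedged sketch: the field names (`url`, `company_name`) are hypothetical placeholders for whatever your agent actually expects.

```python
# Minimal input sanity check before a run.
# REQUIRED_FIELDS and the field names are hypothetical examples.
REQUIRED_FIELDS = ["url", "company_name"]

def validate_input(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks usable."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not str(record.get(field, "")).strip():
            problems.append(f"missing or empty field: {field}")
    url = str(record.get("url", ""))
    if url and not url.startswith(("http://", "https://")):
        problems.append(f"url does not look absolute: {url!r}")
    return problems

print(validate_input({"url": "example.com", "company_name": "Acme"}))
```

Running this on every input row before triggering the agent turns silent bad-data failures into a readable error list.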
Follow the execution path
- Review the run steps and confirm the agent visited the right page(s) and captured the right data.
- Look for skipped branches (filters, conditions, early exits).
Reduce the problem
Re-run with a very simple input you can predict:
- Example: use one test URL instead of a list of 50
- Example: use a single row instead of the full spreadsheet
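The same idea as a sketch: smoke-test one known-good input before committing the full batch. `run_agent` here is a hypothetical stand-in for however you trigger your agent.

```python
# Shrink the batch to one predictable input before the full run.
def run_agent(inputs):
    # Stand-in for your actual agent trigger; returns one result per input.
    return [{"input": i, "ok": True} for i in inputs]

all_urls = ["https://example.com/a", "https://example.com/b"]  # imagine 50 of these

smoke = run_agent(all_urls[:1])          # one known-good input first
if all(r["ok"] for r in smoke):
    results = run_agent(all_urls)        # only then run the full batch
```

If the single-input run is already wrong, you've isolated the problem cheaply; if it's right, the issue is in the batch data, not the agent logic.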
Check decision points
If the agent is making choices (e.g., “pick the best result”, “find the right page”), tighten your instructions:
- Specify which page to prefer (About vs Team vs Contact)
- Define what counts as a “match”
- Define the exact output schema
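“Define the exact output schema” can be made mechanical with a small conformance check. This is an assumed schema for illustration (`name`, `email`, `source_url` are made-up field names); swap in whatever shape your agent should return.

```python
# Hypothetical schema: adjust keys and types to your agent's expected output.
EXPECTED_SCHEMA = {"name": str, "email": str, "source_url": str}

def conforms(row: dict) -> bool:
    """True only if the row has exactly the expected keys with the expected types."""
    return (set(row) == set(EXPECTED_SCHEMA)
            and all(isinstance(row[k], t) for k, t in EXPECTED_SCHEMA.items()))

print(conforms({"name": "Ada", "email": "ada@example.com",
                "source_url": "https://example.com/team"}))  # True
```

Feeding a check like this into your instructions (“output must match this schema exactly”) gives the agent a concrete target instead of a vague one.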
Verify external systems
If the agent writes into a tool (Sheets, CRM, email), confirm:
- Correct destination
- Correct permissions
- Correct field mapping
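Field-mapping mistakes are easy to catch before the write. A minimal sketch, assuming a made-up sheet layout and agent-output keys; the point is to list destination columns the mapping can't fill.

```python
# Assumed destination columns and mapping; replace with your sheet/CRM layout.
SHEET_COLUMNS = ["Name", "Email", "Company"]
AGENT_TO_SHEET = {"name": "Name", "email": "Email", "company": "Company"}

def missing_mappings(agent_row: dict) -> list[str]:
    """Columns the destination expects but this agent row can't populate."""
    mapped = {AGENT_TO_SHEET[k] for k in agent_row if k in AGENT_TO_SHEET}
    return [col for col in SHEET_COLUMNS if col not in mapped]

print(missing_mappings({"name": "Ada", "email": "ada@example.com"}))
# ['Company']
```

An empty result means every destination column has a source; anything else tells you exactly which field will land blank.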
3) Common Failure Patterns
These are the most frequent reasons an agent fails or becomes unreliable:
Web Access & Page Changes
- The page layout changed
- Content loads dynamically and needs extra waiting
- The site blocks automation or rate-limits requests
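For dynamically loaded content, the usual fix is polling with a hard timeout rather than a fixed sleep. A generic sketch (the `condition` callable stands in for whatever "content is present" check your setup supports):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

The hard deadline matters as much as the polling: without it, a page that never finishes loading turns into an agent that never finishes running.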
Authentication Issues
- The agent is being redirected to a login page
- A session expired
- Multi-factor authentication is required
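A login redirect often looks like a normal "success" to the agent, so it helps to detect it explicitly. A rough heuristic sketch (the marker strings are assumptions, not an exhaustive list):

```python
# Rough heuristic: a fetch that lands on a login URL means the session is gone.
LOGIN_MARKERS = ("login", "signin", "sign-in", "auth")

def looks_like_login(final_url: str, page_text: str) -> bool:
    """Guess whether we were redirected to a login page instead of the target."""
    return (any(m in final_url.lower() for m in LOGIN_MARKERS)
            or "password" in page_text.lower())
```

Checking the final URL after redirects, not the one you requested, is what catches an expired session early.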
Data & Formatting Issues
- Input fields are empty or malformed
- Outputs are in a different format than expected
- A filter is too strict (or not strict enough)
Retry Loops & Excessive Credits
- The agent retries a failed step repeatedly (e.g., sandbox or permission errors)
- Expensive tools are chosen when simpler alternatives exist
- Long-running agents accumulate context, increasing cost per step
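The first of these patterns has a standard cure: cap the retries and back off between attempts so a stuck step fails fast instead of burning credits. A minimal sketch:

```python
import time

def with_retries(step, max_attempts=3, base_delay=1.0):
    """Run `step`, retrying on exceptions with exponential backoff, then give up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # stop looping: surface the error instead of retrying forever
            time.sleep(base_delay * 2 ** (attempt - 1))
```

The same cap applies conceptually to agent instructions: tell the agent how many times it may retry a step before reporting failure.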
Agent Ignoring Instructions
- Complex goals with conflicting constraints confuse priority
- The agent falls back to a prohibited tool when preferred methods fail
- Step limits or explicit prohibitions aren’t consistently tracked
4) Escalate to Support With the Right Details
If you’ve tried the steps above and you’re still blocked, reach out — but include enough information for a fast diagnosis.
Where to escalate
Twin has live in-app support with real human agents from the team, responding in under 20 minutes. Open Twin and use the support chat — it’s the fastest way to get help.
What to include
| Item | Description |
|---|---|
| Agent link | Agent or workflow link (or the agent name in your workspace) |
| Run ID | The exact failing run identifier |
| Summary | One-line summary of the issue |
| Expected vs actual | What should have happened vs what did happen |
| Inputs used | Redact sensitive info if needed |
| What you tried | So support doesn’t repeat steps |
Quick Rule of Thumb
- Hard error / crash: focus on logs, permissions, authentication, and missing inputs
- Success but wrong output: focus on data flow, decision rules, and tightening instructions
- Intermittent failures: focus on website variability, rate limits, and retries
