AI Janitor Troubleshooting
Solutions for common AI Janitor issues. If you encounter a problem not covered here, check the scan logs in the AI Janitor dashboard or contact FeatureSignals support.
Built-in Diagnostics
- Connection Test — Verifies Git provider connectivity and token validity.
- LLM Ping — Sends a minimal request to verify LLM provider configuration.
- Scan Preview — Shows which files and flags would be scanned without consuming LLM tokens.
- Log Viewer — Full scan logs with LLM request/response details for debugging.
Common Issues
Scan not finding expected flags
A flag you know is stale doesn't appear in the scan results, or the scan returns zero results even though your project has flags.
- Check the flag's toggle_category — ops and permission flags are excluded from staleness checks by default.
- Verify the flag has actually not been evaluated within the stale_threshold_days window. Check the evaluation history on the flag detail page.
- Ensure your Git repository contains references to the flag key. AI Janitor searches for exact string matches of the flag key in source files.
- Check that the files referencing the flag key are not matched by an excluded file pattern (ignore_patterns) and that the flag's branch is not excluded (branch_patterns); the sketch after this list shows where these settings live.
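If the exclusions above are the cause, the relevant settings live in the scan configuration. A minimal sketch, assuming the same JSON format as the debug-logging example later on this page; include_toggle_categories is a hypothetical key name, while stale_threshold_days, ignore_patterns, and branch_patterns are the settings referenced above, with illustrative values:
{
  "stale_threshold_days": 30,                                     // flags unevaluated this long become candidates
  "include_toggle_categories": ["release", "experiment", "ops"],  // hypothetical key; ops/permission flags are excluded by default
  "ignore_patterns": ["**/generated/**", "**/*.test.ts"],         // files matching these are never scanned
  "branch_patterns": ["main", "release/*"]                        // make sure the flag's branch is covered
}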
PR creation fails
AI Janitor identifies a stale flag but cannot create a pull request. The error message mentions permissions or Git provider connectivity.
- Verify the Git provider connection has write access to pull requests. Read-only connections can scan but cannot create PRs.
- Check that the access token or OAuth authorization has not expired. Reauthorize from the AI Janitor settings page.
- Ensure the target branch (usually main/master) exists and is accessible. AI Janitor cannot create branches from deleted or protected branches.
- Check branch protection rules — some configurations prevent automated branch creation even with valid credentials.
- For self-hosted Git providers, verify the instance is reachable from FeatureSignals. Test connectivity from the settings page.
False positives — flag incorrectly marked stale
AI Janitor recommends removing a flag that is still actively used or still needed.
- Increase min_confidence_score to require higher AI confidence before flagging (see the sketch after this list).
- Adjust stale_threshold_days if your flags have longer evaluation cycles (e.g., flags evaluated monthly for billing cycles).
- Check if the flag is evaluated via a different mechanism — direct API calls, webhook-triggered evaluations, or offline jobs may not be captured if they don't use the standard SDK evaluation path.
- Mark the flag as 'keep' in the scan results to prevent it from appearing in future scans.
- If the flag uses conditional logic that the AI cannot resolve, add a comment in your source code referencing the flag key with context about why it's still needed.
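The first two adjustments above are configuration changes. A sketch with illustrative values, assuming the same JSON configuration format as the debug-logging example later on this page:
{
  "min_confidence_score": 85,   // only surface flags the AI is at least 85% confident are stale
  "stale_threshold_days": 90    // accommodate flags evaluated on monthly or quarterly cycles
}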
LLM rate limit errors during scan
Scans fail midway with rate limit errors (HTTP 429) from the LLM provider, or scans take unusually long.
- AI Janitor automatically retries with exponential backoff, but persistent 429s mean you are exceeding your provider's quota. Upgrade your API tier or reduce scan frequency.
- Reduce the number of concurrent LLM calls. The default is 5; you can lower it in the configuration (see the sketch after this list).
- Consider switching to a different LLM provider with higher rate limits or a self-hosted model with no API rate limits.
- Use a cheaper, faster model (gpt-4o-mini instead of gpt-4o, or claude-3-haiku instead of claude-3-5-sonnet) for higher throughput.
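A sketch of the throttling levers above, again assuming a JSON configuration; the key names llm_concurrency and llm_model are illustrative (check the configuration reference for the exact names), and the default of 5 concurrent calls comes from the note above:
{
  "llm_concurrency": 2,        // illustrative key name; default is 5, lower it to stay under rate limits
  "llm_model": "gpt-4o-mini"   // illustrative key name; a smaller model allows higher throughput
}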
Git provider connection fails or times out
Cannot connect a Git provider, or existing connections suddenly stop working.
- For OAuth connections, the authorization may have been revoked. Reauthorize from the AI Janitor settings page.
- For access token connections, the token may have expired or been revoked. Generate a new token and update the connection.
- For self-hosted instances, verify network connectivity. Ensure your instance allows inbound connections from FeatureSignals IP ranges.
- Check that your Git provider is not experiencing an outage. AI Janitor will retry failed connections automatically.
- Verify the token has the required scopes — missing scopes are the most common cause of connection issues.
Generated PR contains incorrect code changes
The AI-generated diff removes the wrong code path, deletes unrelated code, or introduces syntax errors.
- This is rare but possible with complex flag logic. Always review AI-generated PRs carefully before merging.
- Increase min_confidence_score to 90+ to reduce the likelihood of incorrect suggestions appearing.
- Use a more capable LLM model (gpt-4o or claude-3-5-sonnet) for improved accuracy on complex code.
- If the flag has complex conditional logic (nested if/else, ternary operators, switch statements), consider removing it manually instead.
- Report the issue via the feedback button on the scan results page. This helps improve the AI's accuracy over time.
Scan results inconsistent between runs
The same flag appears stale in one scan but not the next, or confidence scores fluctuate significantly.
- LLM outputs are non-deterministic by nature. Small variations in confidence scores (±5%) are normal.
- If a flag appears and disappears between scans, check if it was recently evaluated. Even a single evaluation resets the staleness timer.
- Ensure the code on the scanned branches hasn't changed between scans. If someone refactored the flag-related code, the AI analysis will differ.
- For consistent results, pin the LLM model version and set the temperature to 0 in the custom prompt configuration, as in the sketch after this list.
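For example, a pinned, deterministic setup might look like the following; the key names are illustrative, and the version string simply shows pinning an exact model release rather than a floating alias:
{
  "llm_model": "gpt-4o-2024-08-06",   // illustrative key; pin an exact version, not an alias
  "temperature": 0                    // minimize sampling variation between scans
}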
Enabling Debug Logging
For persistent issues, enable debug logging to capture detailed information about AI Janitor's operations:
{
  "debug_mode": true,
  "log_level": "debug",
  "log_llm_requests": true,
  "log_llm_responses": true
}
Debug logs include full LLM request payloads and responses. Enable this only temporarily for troubleshooting — it increases log volume and may expose source code in logs.
Getting Help
If you've tried the solutions above and are still experiencing issues:
- In-app support: Use the chat widget in the AI Janitor dashboard to contact the FeatureSignals team directly.
- Community forum: Search or post in the FeatureSignals Community.
- Export diagnostics: From the AI Janitor settings page, click Export Diagnostics to generate a support bundle with scan logs, configuration, and error reports (source code is never included).