AI Janitor Configuration
Fine-tune AI Janitor to match your team's workflow. Configure scan schedules, branch patterns, file extensions, LLM model selection, and PR automation behavior.
Configuration File
AI Janitor can be configured via the dashboard UI or a `.ai-janitor.json` file in your repository root. Repository-level config takes precedence over dashboard settings, letting teams customize behavior per project.
```json
{
  "scan_schedule": "0 6 * * 1",
  "branch_patterns": ["main", "master", "develop"],
  "file_extensions": [
    ".ts", ".tsx", ".js", ".jsx",
    ".go", ".py", ".java", ".rb"
  ],
  "ignore_patterns": [
    "node_modules/**",
    "vendor/**",
    "*.test.*",
    "*.spec.*",
    "dist/**",
    "build/**",
    "generated/**"
  ],
  "stale_threshold_days": 30,
  "min_confidence_score": 70,
  "llm_provider": "openai",
  "llm_model": "gpt-4o-mini",
  "auto_create_prs": false,
  "auto_merge_prs": false,
  "pr_labels": ["ai-janitor", "flag-cleanup"],
  "notify_on_scan_complete": true
}
```
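The precedence rule (repository config over dashboard settings) amounts to a shallow overlay: any key set in `.ai-janitor.json` wins, and everything else falls back to the dashboard value. A minimal sketch of that merge in Python; the option names come from the example above, but the helper itself is illustrative, not AI Janitor's actual implementation:

```python
import json
from pathlib import Path

# A subset of dashboard-level settings, for illustration.
DASHBOARD_DEFAULTS = {
    "scan_schedule": "0 6 * * 1",
    "stale_threshold_days": 30,
    "min_confidence_score": 70,
    "auto_create_prs": False,
}

def effective_config(repo_root: str) -> dict:
    """Overlay .ai-janitor.json (if present) on the dashboard settings.

    Repository-level keys win; anything not set in the repo file
    keeps its dashboard value.
    """
    config = dict(DASHBOARD_DEFAULTS)
    repo_file = Path(repo_root) / ".ai-janitor.json"
    if repo_file.exists():
        config.update(json.loads(repo_file.read_text()))
    return config
```

For example, a repo file containing only `{"min_confidence_score": 85}` yields a config where that key is 85 while the schedule and staleness threshold keep their dashboard values.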
Configuration Reference
| Option | Type | Default | Description |
|---|---|---|---|
| scan_schedule | cron expression | 0 6 * * 1 (every Monday at 6 AM UTC) | When AI Janitor runs automated scans. Use standard cron syntax. Set to 'manual' to disable scheduled scans and run only on-demand. |
| branch_patterns | string[] | ["main", "master", "develop"] | Git branches AI Janitor scans for flag references. Only branches matching these patterns are analyzed. Supports glob patterns (e.g., 'release/*'). |
| file_extensions | string[] | [".ts", ".tsx", ".js", ".jsx", ".go", ".py", ".java", ".rb", ".cs", ".php", ".swift", ".kt"] | File extensions to scan for flag references. AI Janitor only opens and analyzes files matching these extensions. Add or remove extensions based on your tech stack. |
| ignore_patterns | string[] | ["node_modules/**", "vendor/**", "*.test.*", "*.spec.*", "dist/**", "build/**"] | Glob patterns for files and directories to exclude from scanning. Useful for ignoring generated code, dependencies, and test fixtures that may contain stale flag references. |
| stale_threshold_days | number | 30 | Number of days a flag must show no evaluation activity before being considered stale. Flags in the 'ops' and 'permission' toggle categories are exempt from staleness checks. |
| min_confidence_score | number (0–100) | 70 | Minimum AI confidence score required for a flag to appear in the 'Ready to Remove' list. Flags below this threshold appear in 'Needs Review'. Increase for fewer false positives, decrease for more aggressive cleanup. |
| llm_provider | string | openai | LLM provider for AI analysis. Supported values: 'openai', 'anthropic', 'self_hosted'. Self-hosted requires an OpenAI-compatible API endpoint. |
| llm_model | string | gpt-4o-mini | Specific LLM model to use. For OpenAI: gpt-4o-mini (default, cost-effective), gpt-4o (higher accuracy). For Anthropic: claude-3-5-sonnet-latest, claude-3-haiku-latest. Self-hosted: any compatible model identifier. |
| auto_create_prs | boolean | false | When true, AI Janitor automatically creates PRs for flags with confidence ≥ min_confidence_score. When false, PRs must be manually triggered from the scan results page. Recommended to keep false until you trust the results. |
| auto_merge_prs | boolean | false | When true, PRs that pass CI checks are automatically merged. Requires auto_create_prs to be true and branch protection rules that don't require human approval. NOT recommended for production repositories. |
| pr_labels | string[] | ["ai-janitor", "flag-cleanup"] | Labels automatically applied to AI Janitor pull requests. Use these to filter, track, and measure AI Janitor activity in your repository. |
| notify_on_scan_complete | boolean | true | Send a notification (email, Slack, or webhook) when a scheduled scan completes. The notification includes a summary of stale flags found and links to the full report. |
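A lightweight way to catch typos before committing `.ai-janitor.json` is to check option names and value types against the reference table. The checker below is an illustrative sketch, not an official AI Janitor tool; the key/type pairs mirror the table above:

```python
import json

# Option name -> expected JSON type, taken from the reference table.
SCHEMA = {
    "scan_schedule": str,
    "branch_patterns": list,
    "file_extensions": list,
    "ignore_patterns": list,
    "stale_threshold_days": int,
    "min_confidence_score": int,
    "llm_provider": str,
    "llm_model": str,
    "auto_create_prs": bool,
    "auto_merge_prs": bool,
    "pr_labels": list,
    "notify_on_scan_complete": bool,
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in a .ai-janitor.json payload."""
    errors = []
    config = json.loads(raw)
    for key, value in config.items():
        if key not in SCHEMA:
            errors.append(f"unknown option: {key}")
        elif not isinstance(value, SCHEMA[key]):
            errors.append(f"{key}: expected {SCHEMA[key].__name__}")
    score = config.get("min_confidence_score")
    if isinstance(score, int) and not 0 <= score <= 100:
        errors.append("min_confidence_score must be 0-100")
    return errors
```

Running this in a pre-commit hook surfaces misspelled keys and out-of-range confidence scores before AI Janitor ever reads the file.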
Scan Schedules
The scan schedule determines how often AI Janitor checks for stale flags. Choose a cadence that balances freshness with noise:
Daily
`0 6 * * *`: Every day at 6 AM UTC. Best for active teams shipping daily with many flags.
Weekly
`0 6 * * 1`: Every Monday at 6 AM UTC. A good default for most teams: review stale flags weekly.
Manual
`manual`: No scheduled scans. Run on-demand from the dashboard or via API. Good for low-flag-count projects.
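The schedules above use standard five-field cron semantics (minute, hour, day-of-month, month, day-of-week). The matcher below is a simplified illustration covering only the plain numbers and `*` wildcards used in these examples; it is not AI Janitor's scheduler and omits ranges, steps, and lists:

```python
from datetime import datetime, timezone

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a UTC datetime against a five-field cron expression.

    Fields: minute hour day-of-month month day-of-week.
    Supports only '*' and plain numbers (cron day-of-week: 0=Sunday).
    """
    fields = expr.split()
    # datetime.weekday(): 0=Monday..6=Sunday; cron: 0=Sunday..6=Saturday.
    cron_dow = (when.weekday() + 1) % 7
    actual = [when.minute, when.hour, when.day, when.month, cron_dow]
    return all(f == "*" or int(f) == value
               for f, value in zip(fields, actual))

# The weekly default fires Mondays at 06:00 UTC.
monday_6am = datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc)  # a Monday
```

Note the day-of-week offset: Python counts Monday as 0, while cron counts Sunday as 0, so the weekly default `0 6 * * 1` means Monday.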
LLM Model Selection
AI Janitor uses LLMs to analyze flag usage patterns and generate code removal suggestions. Choose the provider and model that fits your budget, accuracy needs, and data residency requirements.
OpenAI
Models: gpt-4o-mini (fast, cheap), gpt-4o (most accurate)
Best for: Default choice. Best speed/cost/accuracy balance.
Anthropic
Models: claude-3-5-sonnet (balanced), claude-3-haiku (fast)
Best for: Strong code understanding. Good for complex refactors.
Self-Hosted
Models: Any OpenAI-compatible endpoint (vLLM, Ollama, etc.)
Best for: Data never leaves your infrastructure. Requires GPU capacity.
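All three options can be driven through the same chat-completions request shape: OpenAI-compatible servers such as vLLM and Ollama accept the same JSON body at their `/v1/chat/completions` route, and Anthropic's native Messages API uses a very similar structure. A sketch of building that payload from the config; the prompt text and the self-hosted model name are invented for illustration:

```python
import json

def chat_payload(config: dict, prompt: str) -> str:
    """Build an OpenAI-style chat-completions request body.

    The same shape works for hosted OpenAI models and for
    self-hosted OpenAI-compatible servers (vLLM, Ollama, etc.).
    """
    body = {
        "model": config["llm_model"],
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic output suits code analysis
    }
    return json.dumps(body)

# Hypothetical self-hosted model identifier.
payload = chat_payload(
    {"llm_provider": "self_hosted", "llm_model": "qwen2.5-coder"},
    "Is this feature flag still referenced?",
)
```

Because the request shape is shared, switching `llm_provider` and `llm_model` changes where the request goes, not how it is built.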
Token usage and costs