
AI Janitor Configuration

Fine-tune AI Janitor to match your team's workflow. Configure scan schedules, branch patterns, file extensions, LLM model selection, and PR automation behavior.

Configuration File

AI Janitor can be configured via the dashboard UI or a .ai-janitor.json file in your repository root. Repository-level config takes precedence over dashboard settings, letting teams customize behavior per project.

.ai-janitor.json
{
  "scan_schedule": "0 6 * * 1",
  "branch_patterns": ["main", "master", "develop"],
  "file_extensions": [
    ".ts", ".tsx", ".js", ".jsx",
    ".go", ".py", ".java", ".rb"
  ],
  "ignore_patterns": [
    "node_modules/**",
    "vendor/**",
    "*.test.*",
    "*.spec.*",
    "dist/**",
    "build/**",
    "generated/**"
  ],
  "stale_threshold_days": 30,
  "min_confidence_score": 70,
  "llm_provider": "openai",
  "llm_model": "gpt-4o-mini",
  "auto_create_prs": false,
  "auto_merge_prs": false,
  "pr_labels": ["ai-janitor", "flag-cleanup"],
  "notify_on_scan_complete": true
}

Configuration Reference

scan_schedule
Type: cron expression. Default: 0 6 * * 1 (every Monday at 6 AM UTC).
When AI Janitor runs automated scans. Use standard cron syntax. Set to 'manual' to disable scheduled scans and run only on-demand.

branch_patterns
Type: string[]. Default: ["main", "master", "develop"].
Git branches AI Janitor scans for flag references. Only branches matching these patterns are analyzed. Supports glob patterns (e.g., 'release/*').

file_extensions
Type: string[]. Default: [".ts", ".tsx", ".js", ".jsx", ".go", ".py", ".java", ".rb", ".cs", ".php", ".swift", ".kt"].
File extensions to scan for flag references. AI Janitor only opens and analyzes files matching these extensions. Add or remove extensions based on your tech stack.

ignore_patterns
Type: string[]. Default: ["node_modules/**", "vendor/**", "*.test.*", "*.spec.*", "dist/**", "build/**"].
Glob patterns for files and directories to exclude from scanning. Useful for ignoring generated code, dependencies, and test fixtures that may contain stale flag references.

stale_threshold_days
Type: number. Default: 30.
Number of days a flag must show no evaluation activity before it is considered stale. Flags in the 'ops' and 'permission' toggle categories are exempt from staleness checks.

min_confidence_score
Type: number (0–100). Default: 70.
Minimum AI confidence score required for a flag to appear in the 'Ready to Remove' list. Flags below this threshold appear in 'Needs Review'. Increase for fewer false positives, decrease for more aggressive cleanup.

llm_provider
Type: string. Default: openai.
LLM provider for AI analysis. Supported values: 'openai', 'anthropic', 'self_hosted'. Self-hosted requires an OpenAI-compatible API endpoint.

llm_model
Type: string. Default: gpt-4o-mini.
Specific LLM model to use. For OpenAI: gpt-4o-mini (default, cost-effective) or gpt-4o (higher accuracy). For Anthropic: claude-3-5-sonnet-latest or claude-3-haiku-latest. Self-hosted: any compatible model identifier.

auto_create_prs
Type: boolean. Default: false.
When true, AI Janitor automatically creates PRs for flags with confidence ≥ min_confidence_score. When false, PRs must be triggered manually from the scan results page. Keep this false until you trust the results.

auto_merge_prs
Type: boolean. Default: false.
When true, PRs that pass CI checks are merged automatically. Requires auto_create_prs to be true and branch protection rules that don't require human approval. NOT recommended for production repositories.

pr_labels
Type: string[]. Default: ["ai-janitor", "flag-cleanup"].
Labels automatically applied to AI Janitor pull requests. Use these to filter, track, and measure AI Janitor activity in your repository.

notify_on_scan_complete
Type: boolean. Default: true.
Send a notification (email, Slack, or webhook) when a scheduled scan completes. The notification includes a summary of stale flags found and links to the full report.

Scan Schedules

The scan schedule determines how often AI Janitor checks for stale flags. Choose a cadence that balances freshness with noise:

Daily

0 6 * * *

Every day at 6 AM UTC. Best for active teams shipping daily with many flags.

Weekly

0 6 * * 1

Every Monday at 6 AM UTC. Good default for most teams — review stale flags weekly.

Manual

manual

No scheduled scans. Run on-demand from the dashboard or via API. Good for low-flag-count projects.
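For reference, a cron expression's five fields are minute, hour, day-of-month, month, and day-of-week (where 0 or 7 means Sunday). A minimal matcher for the simple expressions above, handling plain numbers and '*' only (a real cron parser also handles ranges, lists, and steps):

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a 5-field cron expression (numbers and '*' only) against a datetime."""
    minute, hour, dom, month, dow = expr.split()
    # cron day-of-week: 0/7 = Sunday; datetime.weekday(): 0 = Monday.
    cron_dow = (when.weekday() + 1) % 7
    checks = [(minute, when.minute), (hour, when.hour),
              (dom, when.day), (month, when.month)]
    if not all(f == "*" or int(f) == v for f, v in checks):
        return False
    return dow == "*" or int(dow) % 7 == cron_dow

# 2024-01-01 was a Monday; "0 6 * * 1" fires Mondays at 06:00.
print(cron_matches("0 6 * * 1", datetime(2024, 1, 1, 6, 0)))  # True
print(cron_matches("0 6 * * 1", datetime(2024, 1, 2, 6, 0)))  # False
```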

LLM Model Selection

AI Janitor uses LLMs to analyze flag usage patterns and generate code removal suggestions. Choose the provider and model that fits your budget, accuracy needs, and data residency requirements.

OpenAI

Models: gpt-4o-mini (fast, cheap), gpt-4o (most accurate)

Best for: Most teams; the default choice, with the best balance of speed, cost, and accuracy.

Anthropic

Models: claude-3-5-sonnet (balanced), claude-3-haiku (fast)

Best for: Strong code understanding. Good for complex refactors.

Self-Hosted

Models: Any OpenAI-compatible endpoint (vLLM, Ollama, etc.)

Best for: Strict data-residency requirements; code and flag data never leave your infrastructure. Requires your own GPU capacity.
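"OpenAI-compatible" means the endpoint accepts the OpenAI chat-completions wire format, which is what lets servers like vLLM or Ollama stand in for the hosted API. A sketch of the request body such an endpoint expects, POSTed to the server's /v1/chat/completions path; the model name and prompt here are placeholders, not anything AI Janitor actually sends:

```python
import json

# Chat-completions request body in the OpenAI wire format; any
# OpenAI-compatible server (vLLM, Ollama's /v1 API, etc.) accepts this shape.
payload = {
    "model": "llama3.1",  # placeholder: whichever model your server hosts
    "messages": [
        {"role": "system", "content": "You analyze feature-flag usage in code."},
        {"role": "user", "content": "Is flag 'new_checkout' safe to remove?"},
    ],
    "temperature": 0,
}

body = json.dumps(payload)  # send as the POST body, Content-Type: application/json
print(len(body) > 0)
```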

Token usage and costs

Each scan consumes LLM tokens proportional to the number of flagged files and their size. A typical scan of 50 flags across a medium-sized repository consumes approximately 10K–50K tokens. Self-hosted models bypass API costs but require your own infrastructure. Monitor token usage in the AI Janitor dashboard.
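As a rough worked example (the per-token price below is a placeholder; check your provider's current pricing): a 50K-token scan at $0.15 per million input tokens costs under a cent.

```python
def scan_cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Rough LLM cost of one scan; price is whatever your provider charges."""
    return tokens / 1_000_000 * price_per_million_usd

# 50K tokens at a hypothetical $0.15 per 1M input tokens.
print(scan_cost_usd(50_000, 0.15))
```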

Next Steps