# AI Issue Triage

**Date:** 2026-01-15
**Status:** ✅ Fully Operational with npm-based Testing & Validation
## Quick Start

### See AI Triage in Action (30 seconds)
```bash
# Option 1: Mock tests (free, no setup needed)
npm run test:workflows:run -- ai-issue-triage-mock

# Option 2: Real OpenAI API (requires API key)
echo "OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE" >> .env.local
npm run test:workflows:real
```

**Expected Output:**

```
✓ Bug Report: labels=['bug','help wanted'], priority='high', cost=$0.0001
✓ Feature: labels=['enhancement'], priority='medium'
✓ Docs: labels=['documentation'], priority='low'
✓ Question: labels=['question'], priority='low'
```

## What It Does
### Core Workflows

#### `issue-triage.yml` - Main AI Auto-Apply Workflow
- Triggers: When ANY new issue is created
- Process:
  1. Sends issue title + body to OpenAI GPT-4o-mini
  2. AI suggests 1-3 labels + a priority level + reasoning
  3. Validates labels against the allowed list
  4. Automatically applies labels to the issue
  5. Posts a comment explaining what was applied and why
- Fallback: If the API fails, posts a warning and continues
- Philosophy: Something is better than nothing - auto-apply immediately
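The parse-and-validate step of this flow can be sketched in plain Node. This is an illustrative helper (`parseTriageResponse` is a hypothetical name; the real workflow implements the equivalent logic in shell/YAML):

```javascript
// Sketch of the triage flow's parse-and-validate step. Assumes the model is
// asked to reply with JSON: { labels: [...], priority, reasoning }.
const VALID_LABELS = new Set([
  'bug', 'documentation', 'duplicate', 'enhancement', 'good first issue',
  'help wanted', 'invalid', 'question', 'wontfix',
]);

function parseTriageResponse(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    // Invalid JSON -> sensible defaults so issue creation is never blocked.
    return { labels: [], priority: 'medium', reasoning: 'AI response unparseable' };
  }
  // Keep at most 3 labels, and only ones that already exist in the repo.
  const labels = (Array.isArray(parsed.labels) ? parsed.labels : [])
    .filter((l) => VALID_LABELS.has(l))
    .slice(0, 3);
  const priority = ['low', 'medium', 'high'].includes(parsed.priority)
    ? parsed.priority
    : 'medium';
  return { labels, priority, reasoning: parsed.reasoning || '' };
}

const result = parseTriageResponse(
  '{"labels":["bug","help wanted","made-up-label"],"priority":"high","reasoning":"Crash report"}'
);
console.log(result.labels);   // ['bug', 'help wanted'] - hallucinated label dropped
console.log(result.priority); // 'high'
```

Filtering against a fixed set is what makes auto-apply safe: the model can only ever select from labels that already exist.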
#### `apply-triage.yml` - Legacy Apply Workflow (Backward Compatibility)

- Triggers: When someone comments `/apply-triage` on an issue
- Status: Kept for backward compatibility, but mostly redundant now
### User Experience

```
1. Create issue normally
   ↓
2. Wait ~30 seconds
   ↓
3. Labels AUTO-APPLIED by AI
   ↓
4. AI posts comment explaining reasoning
   ↓
5. Issue is ready for work!
```

## Cost Analysis
**Model:** GPT-4o-mini
| Volume | Cost per Issue | Monthly Cost |
|---|---|---|
| 100 issues | $0.00006 | ~$0.01 |
| 500 issues | $0.00006 | ~$0.03 |
| 1000 issues | $0.00006 | ~$0.06 |
**Total cost:** less than $1/month even at high volume
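The per-issue figure can be reproduced from GPT-4o-mini token prices and typical triage payload sizes. Both the prices ($0.15 per 1M input tokens, $0.60 per 1M output tokens) and the token counts (~200 in, ~50 out) are assumptions here, used only to show the arithmetic:

```javascript
// Cost estimate for one triage call. Prices and token counts are
// assumptions for illustration, not measured values.
const INPUT_PRICE_PER_TOKEN = 0.15 / 1_000_000;  // $0.15 / 1M input tokens
const OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000; // $0.60 / 1M output tokens

function estimateCost(inputTokens, outputTokens) {
  return inputTokens * INPUT_PRICE_PER_TOKEN + outputTokens * OUTPUT_PRICE_PER_TOKEN;
}

const perIssue = estimateCost(200, 50); // short title + body in, small JSON reply out
console.log(perIssue.toFixed(5));          // '0.00006' per issue
console.log((perIssue * 1000).toFixed(2)); // '0.06' -> ~$0.06 for 1000 issues/month
```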
## Testing

### Test Commands (npm-based)
| Command | Type | Cost | Speed | When to Use |
|---|---|---|---|---|
| `npm run test:workflows:run` | Unit + Mock | Free | <1s | Every commit |
| `npm run test:workflows:run -- ai-issue-triage-mock` | Mock only | Free | ~400ms | Development |
| `npm run test:workflows:real` | Real API | ~$0.0002 | ~8s | Before deploy |
| `npm run test:workflows` | All (real tests in watch mode) | ~$0.0002 | ~10s | Full validation |
### Path 1: Mock Tests (Free, Recommended for Development)

```bash
npm run test:workflows:run -- ai-issue-triage-mock
```

**Coverage:**
- ✅ Bug report triage
- ✅ Feature request triage
- ✅ Documentation request triage
- ✅ Question triage
- ✅ Invalid label filtering
- ✅ API key errors
- ✅ Rate limit errors
- ✅ Network failures (timeout, DNS)
- ✅ Token usage tracking
- ✅ Cost estimation
**Best for:** Daily development, rapid iteration, no API key needed
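The mock path boils down to swapping the OpenAI client for a stub that can be told to fail. A rough sketch in plain Node (synchronous for brevity; the real client is async, and both helper names here are hypothetical):

```javascript
// Stub client that either returns a canned triage reply or fails like a
// rate-limited API, so error paths can be tested without network access.
function makeStubClient({ failWith } = {}) {
  return {
    complete(_prompt) {
      if (failWith) {
        const err = new Error(failWith.message);
        err.status = failWith.status;
        throw err;
      }
      return '{"labels":["bug"],"priority":"high","reasoning":"stub"}';
    },
  };
}

function triageIssue(client, title, body) {
  try {
    return { ok: true, ...JSON.parse(client.complete(`Triage: ${title}\n${body}`)) };
  } catch (err) {
    // Mirror the workflow's graceful fallback: never block the issue.
    return { ok: false, labels: [], priority: 'low', error: err.message };
  }
}

const happy = triageIssue(makeStubClient(), 'App crashes', 'Stack trace...');
console.log(happy.labels); // ['bug']

const limited = triageIssue(
  makeStubClient({ failWith: { status: 429, message: 'rate limited' } }),
  'App crashes', 'Stack trace...'
);
console.log(limited.ok, limited.error); // false 'rate limited'
```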
### Path 2: Real API Tests (Recommended for Local Dev Before Deploy)

```bash
# 1. Get an OpenAI API key
#    Go to: https://platform.openai.com/api-keys
#    Click "Create new secret key" and copy the key (starts with sk-proj-...)

# 2. Add it to .env.local
echo "OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE" >> .env.local

# 3. Run the real API tests
npm run test:workflows:real

# Or enable explicitly:
export RUN_REAL_API_TESTS=true
npm run test:workflows -- ai-issue-triage-real
```

**Coverage:**
- ✅ Real bug report with OpenAI
- ✅ Real feature request with OpenAI
- ✅ Real documentation request with OpenAI
- ✅ Real question with OpenAI
**Cost:** ~$0.0002 per run (~$0.01 per 50 runs)
**Best for:** Local validation before deployment
### Path 3: Unit Tests Only (Free, Fast)

```bash
npm run test:workflows:run
```

**Coverage:** 25 unit tests (pure logic validation)
**Speed:** <1s
**Best for:** CI/CD, quick validation
### Expected Output

```
Test Files  2 passed | 1 skipped (3)
     Tests  37 passed | 4 skipped (41)
  Duration  453ms
```

- ✅ 25 unit tests (pure logic, free)
- ✅ 12 mock integration tests (simulated API, free)
- ✅ 4 real API tests (actual OpenAI, ~$0.0002)

### Troubleshooting Tests
**Tests are skipped?**

- Real API tests skip if `OPENAI_API_KEY` is not set OR `RUN_REAL_API_TESTS` is not `true`
- This prevents accidental API costs
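The skip condition amounts to a two-part guard. A minimal sketch (the hypothetical `shouldRunRealApiTests` stands in for the guard inside the real-API test file):

```javascript
// Real-API tests run only when both conditions hold; otherwise they skip,
// so a bare `npm run test:workflows` can never cost money by accident.
function shouldRunRealApiTests(env) {
  return Boolean(env.OPENAI_API_KEY) && env.RUN_REAL_API_TESTS === 'true';
}

console.log(shouldRunRealApiTests({}));                              // false
console.log(shouldRunRealApiTests({ OPENAI_API_KEY: 'sk-proj-x' })); // false
console.log(shouldRunRealApiTests({
  OPENAI_API_KEY: 'sk-proj-x',
  RUN_REAL_API_TESTS: 'true',
})); // true
```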
**"Invalid API key" error?**

- Verify the key starts with `sk-proj-`
- Check it's from https://platform.openai.com/api-keys
- Ensure it hasn't expired
- Verify it has available credits
Need verbose output?
npm run test:workflows -- ai-issue-triage-mock --reporter=verboseRun specific test?
npm run test:workflows -- ai-issue-triage-mock -t "should triage a bug"Watch mode (re-run on changes)?
npm run test:workflows -- ai-issue-triage-mock --watchTest Files
src/__tests__/workflows/
├── ai-issue-triage.test.js # 25 unit tests
├── ai-issue-triage-mock.test.js # 12 mock integration tests
└── ai-issue-triage-real.test.js # 4 real API testsConfiguration Validation
### Purpose

Keep triage categories in sync across:

- `.github/workflows/config/triage-categories.json` (source of truth)
- `.github/workflows/issue-triage.yml` (workflow references)
- `src/__tests__/workflows/*.test.js` (test fixtures)
- GitHub Projects V2 "Area" field (where labels are applied)
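The cross-reference part of this check can be sketched in a few lines. In-memory strings stand in for the real files, and the `{"categories": [...]}` config shape is an assumption for illustration (the actual checker is `scripts/validate-triage-consistency.js`):

```javascript
// Each category id from the config must appear in the workflow YAML and in
// the test fixtures. File contents are inlined here for illustration.
const configJson = '{"categories":[{"id":"bug","name":"Bug","description":"Defects"},{"id":"docs","name":"Docs","description":"Documentation"}]}';
const workflowYaml = 'labels: bug, docs';
const testFixtures = "const CATEGORIES = ['bug', 'docs'];";

// Return the ids that are missing from at least one of the other sources.
function findMissing(config, ...haystacks) {
  const { categories } = JSON.parse(config);
  return categories
    .filter((c) => !haystacks.every((text) => text.includes(c.id)))
    .map((c) => c.id);
}

console.log(findMissing(configJson, workflowYaml, testFixtures)); // []
console.log(findMissing(configJson, 'labels: bug', testFixtures)); // ['docs']
```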
### npm Validation Task

```bash
npm run test:workflows:validate
```

**What it checks:**

- ✅ JSON syntax validity
- ✅ Required fields present (`id`, `name`, `description`)
- ✅ Categories in `.yml` files match the config
- ✅ Categories in test fixtures match the config
- ✅ GitHub Projects V2 "Area" field in sync (if `GH_TOKEN` is set)
### Local Validation (Without GitHub)

```bash
npm run test:workflows:validate
```

Validates JSON and cross-references but skips the GitHub sync check.

### With GitHub Sync Check

```bash
GH_TOKEN=$(gh auth token) npm run test:workflows:validate
```

Also validates against the GitHub Projects V2 configuration.

### Manual Validation (Shell Script)

```bash
bash .github/workflows/validate-triage-config.sh
```

**Flags:**

- `--strict` - Fail on any mismatch (for CI/CD)
- `--fix` - Auto-sync where possible (limited by the GitHub API)
- `--help` - Show help
### Validation Scenarios

#### Add New Category to Config

1. Edit `.github/workflows/config/triage-categories.json`
2. Add the new category with `id`, `name`, `description`
3. Commit and push
4. The validation workflow runs automatically
5. If using GitHub Projects V2: add the option to the "Area" field
6. Run `npm run test:workflows:validate`
7. ✅ All checks pass

#### Update Existing Category

1. Edit the description in the config
2. Run `npm run test:workflows:validate`
3. ✅ Passes (descriptions aren't validated against GitHub)
#### Fix Out-of-Sync Issues

**Config has categories GitHub doesn't:**

```bash
# Add them to GitHub Projects V2 manually:
# 1. Go to Lantern App project > Area field settings
# 2. Add the missing options
# 3. Re-run: npm run test:workflows:validate
```

**GitHub has categories the config doesn't:**

```bash
# Option A: Add them to the config
# Option B: Remove them from GitHub Projects V2
# Then run: npm run test:workflows:validate
```

### GitHub Actions Workflow
**File:** `.github/workflows/validate-triage-config.yml`

Runs automatically on:

- Pull requests that modify `.github/workflows/config/triage-categories.json`
- A weekly schedule (Monday 9 AM UTC)
- Manual trigger via the Actions tab

On failure:

- Posts a Discord notification to `#automation-alerts`
- Includes a link to the validation output
- Suggests steps to fix
## Implementation Details

### Key Features & Safeguards

#### ✅ Auto-Apply for ALL Issues

- No bot detection - ALL issues get triaged (including bot-created ones)
- Runs immediately on issue creation
- Safe label application: uses the append-only `--add-label` flag (never deletes existing labels)
- Philosophy: Something is better than nothing
#### ✅ Label Validation

```bash
VALID_LABELS="bug,documentation,duplicate,enhancement,good first issue,help wanted,invalid,question,wontfix"
```

- Only existing GitHub labels can be applied
- The AI cannot create new labels
- Typos and hallucinations are filtered out
#### ✅ Permission Checks

```bash
# --jq extracts the permission string from the JSON response
PERMISSION=$(gh api "repos/$REPO/collaborators/$COMMENTER/permission" --jq '.permission')
if [ "$PERMISSION" = "admin" ] || [ "$PERMISSION" = "write" ]; then
  # Allow apply-triage
  :
fi
```

- Only maintainers can use the `/apply-triage` command
- Contributors cannot auto-apply labels
#### ✅ Graceful Fallback

- API key missing → warning comment posted
- API error → error comment posted with a helpful message
- Invalid JSON → handled with sensible defaults
- Issue creation is never blocked
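The fallback behavior can be sketched as a mapping from failure mode to the comment the workflow posts. Both the error codes and the message wording here are illustrative, not the workflow's exact strings:

```javascript
// Map a failure mode to a non-blocking warning comment. The issue itself is
// always created; triage only degrades to a comment.
function fallbackComment(error) {
  if (error.code === 'MISSING_KEY') {
    return '⚠️ AI triage skipped: OPENAI_API_KEY is not configured.';
  }
  if (error.code === 'API_ERROR') {
    return `⚠️ AI triage failed (${error.message}). A maintainer will triage manually.`;
  }
  // Invalid JSON or anything else: fall back to defaults.
  return '⚠️ AI triage produced an unusable response; applying no labels.';
}

console.log(fallbackComment({ code: 'MISSING_KEY' }));
// ⚠️ AI triage skipped: OPENAI_API_KEY is not configured.
```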
## Setup for Production

### 1. Add OpenAI API Key

```
# Get a key from: https://platform.openai.com/api-keys
# Add it under: Repository Settings → Secrets → Actions
Name:  OPENAI_API_KEY
Value: sk-proj-...
```

### 2. Test Workflow

```
# Create a test issue manually
# Wait ~30 seconds
# Verify the AI comment appears and labels are applied
```

### 3. Monitor Costs

```
# Set a budget alert in the OpenAI dashboard
# Recommended: $10/month limit
# Check usage: https://platform.openai.com/usage
```

## Cost Breakdown
| Item | Cost | Frequency | Total/Month |
|---|---|---|---|
| Mock tests (local) | Free | Every commit | $0 |
| Real API tests (local) | ~$0.0002 | Before deploy | <$0.01 |
| Production triage (100 issues) | ~$0.006 | Monthly | <$0.01 |
| Total | — | — | <$0.02 |
## Files & Locations

**Core workflows:**

```
.github/workflows/issue-triage.yml            # Main auto-apply workflow
.github/workflows/apply-triage.yml            # Legacy apply workflow
.github/workflows/validate-triage-config.yml  # Validation workflow
```

**Configuration:**

```
.github/workflows/config/triage-categories.json  # Categories source of truth
```

**Test files:**

```
src/__tests__/workflows/ai-issue-triage.test.js       # 25 unit tests
src/__tests__/workflows/ai-issue-triage-mock.test.js  # 12 mock tests
src/__tests__/workflows/ai-issue-triage-real.test.js  # 4 real API tests
```

**Scripts:**

```
scripts/validate-triage-consistency.js  # npm-based consistency checker
```

**Documentation:**

```
docs/engineering/github/workflows/AI_TRIAGE.md  # This file (single source of truth)
```

## Testing Checklist (Before Production)
- [ ] Add `OPENAI_API_KEY` to repository secrets
- [ ] Run mock tests locally: `npm run test:workflows:run -- ai-issue-triage-mock`
- [ ] Run real API tests: `npm run test:workflows:real`
- [ ] Create a test issue via the GitHub UI (as a human user)
- [ ] Verify the AI comment appears and labels are applied within 60 seconds
- [ ] Check the OpenAI usage dashboard: https://platform.openai.com/usage
- [ ] Set up a monthly budget alert ($10 recommended)
- [ ] Test the `/apply-triage` command as a maintainer
- [ ] Verify non-maintainers cannot use `/apply-triage`
- [ ] Run validation: `npm run test:workflows:validate`
## Rollback Plan

If needed, disable AI triage:

### Option 1: Disable Workflows

```
Actions → issue-triage.yml → ⋯ → Disable workflow
Actions → apply-triage.yml → ⋯ → Disable workflow
```

### Option 2: Remove API Key

```
Settings → Secrets → OPENAI_API_KEY → Delete
# Workflows will gracefully degrade with warning comments
```

### Option 3: Delete Workflows

```bash
git rm .github/workflows/issue-triage.yml
git rm .github/workflows/apply-triage.yml
git commit -m "Remove AI triage workflows"
```

## Success Metrics
After deployment, track:

- **Accuracy:** % of AI suggestions accepted by maintainers
- **Coverage:** % of issues triaged by AI vs. manually
- **Time savings:** average time to first triage
- **Cost:** actual monthly API costs
- **Errors:** failed API calls, invalid suggestions
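The first two metrics reduce to simple ratios. A sketch with made-up monthly numbers (all figures hypothetical):

```javascript
// Hypothetical monthly numbers, for illustration only.
const issues = { total: 120, aiTriaged: 110, aiSuggestionsAccepted: 95 };

const coverage = issues.aiTriaged / issues.total;                 // AI vs. manual
const accuracy = issues.aiSuggestionsAccepted / issues.aiTriaged; // accepted share

console.log(`coverage: ${(coverage * 100).toFixed(1)}%`); // coverage: 91.7%
console.log(`accuracy: ${(accuracy * 100).toFixed(1)}%`); // accuracy: 86.4%
```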
## References
- Configuration: triage-categories.json
- OpenAI Docs: https://platform.openai.com/docs/api-reference
- GitHub Projects V2: https://docs.github.com/en/issues/planning-and-tracking-with-projects
- Label Safety: LABEL_SAFETY.md
**Status:** ✅ Production Ready
**Breaking Changes:** ❌ None
**Backward Compatible:** ✅ Yes
**Reversible:** ✅ Yes (disable workflows or remove the API key)