
AI Issue Triage

Date: 2026-01-15
Status: ✅ Fully Operational with npm-based Testing & Validation


Table of Contents

  1. Quick Start
  2. What It Does
  3. Testing
  4. Configuration Validation
  5. Implementation Details

Quick Start

See AI Triage in Action (30 seconds)

```bash
# Option 1: Mock tests (free, no setup needed)
npm run test:workflows:run -- ai-issue-triage-mock

# Option 2: Real OpenAI API (requires API key)
echo "OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE" >> .env.local
npm run test:workflows:real
```

Expected Output:

```
✓ Bug Report: labels=['bug','help wanted'], priority='high', cost=$0.0001
✓ Feature: labels=['enhancement'], priority='medium'
✓ Docs: labels=['documentation'], priority='low'
✓ Question: labels=['question'], priority='low'
```

What It Does

Core Workflows

issue-triage.yml - Main AI Auto-Apply Workflow

  • Triggers: When ANY new issue is created
  • Process:
    1. Sends issue title + body to OpenAI GPT-4o-mini
    2. AI suggests 1-3 labels + priority level + reasoning
    3. Validates labels against allowed list
    4. Automatically applies labels to the issue
    5. Posts comment explaining what was applied and why
  • Fallback: If API fails, posts warning and continues
  • Philosophy: Something is better than nothing - auto-apply immediately
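
The label-validation step (step 3 above) can be sketched in shell. This is an illustrative sketch, not the workflow's actual script: `AI_LABELS` and `APPLY` are hypothetical variable names, and the sample suggestion `urgent-fix` is invented to show a hallucinated label being filtered out. `VALID_LABELS` matches the allowed list documented under Label Validation below.

```bash
#!/usr/bin/env bash
# Sketch of validating AI-suggested labels against the allowed list.
VALID_LABELS="bug,documentation,duplicate,enhancement,good first issue,help wanted,invalid,question,wontfix"

# Labels the model suggested — "urgent-fix" is a hallucination (hypothetical input)
AI_LABELS="bug,help wanted,urgent-fix"

APPLY=""
IFS=',' read -ra SUGGESTED <<< "$AI_LABELS"
for label in "${SUGGESTED[@]}"; do
  case ",$VALID_LABELS," in
    *",$label,"*) APPLY="${APPLY:+$APPLY,}$label" ;;          # keep known labels
    *)            echo "Skipping unknown label: $label" >&2 ;; # drop hallucinations
  esac
done

echo "$APPLY"   # bug,help wanted
```

The filtered list would then feed the append-only `gh issue edit --add-label "$APPLY"` call, so existing labels are never removed.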

apply-triage.yml - Legacy Apply Workflow (Backward Compatibility)

  • Triggers: When someone comments /apply-triage on an issue
  • Status: Kept for backward compatibility but mostly redundant now

User Experience

1. Create issue normally

2. Wait ~30 seconds

3. Labels AUTO-APPLIED by AI

4. AI posts comment explaining reasoning

5. Issue is ready for work!

Cost Analysis

Model: GPT-4o-mini

| Volume | Cost per Issue | Monthly Cost |
| --- | --- | --- |
| 100 issues | $0.00006 | ~$0.01 |
| 500 issues | $0.00006 | ~$0.03 |
| 1000 issues | $0.00006 | ~$0.06 |

Total cost: Less than $1/month even at high volume
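
The table's monthly figures are just per-issue cost times volume, rounded to the cent. A quick check with `awk` (the `monthly` helper is ours, for illustration only):

```bash
# Reproduce the monthly figures above: cost per issue × volume, rounded to cents.
COST_PER_ISSUE=0.00006
monthly() { awk -v c="$COST_PER_ISSUE" -v n="$1" 'BEGIN { printf "%.2f", c * n }'; }

echo "100 issues:  ~\$$(monthly 100)/month"    # ~$0.01/month
echo "500 issues:  ~\$$(monthly 500)/month"    # ~$0.03/month
echo "1000 issues: ~\$$(monthly 1000)/month"   # ~$0.06/month
```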


Testing

Test Commands (npm-based)

| Command | Type | Cost | Speed | When to Use |
| --- | --- | --- | --- | --- |
| `npm run test:workflows:run` | Unit + Mock | Free | <1s | Every commit |
| `npm run test:workflows:run -- ai-issue-triage-mock` | Mock only | Free | ~400ms | Development |
| `npm run test:workflows:real` | Real API | ~$0.0002 | ~8s | Before deploy |
| `npm run test:workflows` | All (real tests in watch mode) | ~$0.0002 | ~10s | Full validation |
Path 1: Mock Tests (Free, No API Key)

```bash
npm run test:workflows:run -- ai-issue-triage-mock
```

Coverage:

  • ✅ Bug report triage
  • ✅ Feature request triage
  • ✅ Documentation request triage
  • ✅ Question triage
  • ✅ Invalid label filtering
  • ✅ API key errors
  • ✅ Rate limit errors
  • ✅ Network failures (timeout, DNS)
  • ✅ Token usage tracking
  • ✅ Cost estimation

Best for: Daily development, rapid iteration, no API key needed

Path 2: Real API Tests (~$0.0002 per Run)

```bash
# 1. Get OpenAI API key
# Go to: https://platform.openai.com/api-keys
# Click: "Create new secret key"
# Copy: Your key (starts with sk-proj-...)

# 2. Add to .env.local
echo "OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE" >> .env.local

# 3. Run real API tests
npm run test:workflows:real

# Or explicitly enable:
export RUN_REAL_API_TESTS=true
npm run test:workflows -- ai-issue-triage-real
```

Coverage:

  • ✅ Real bug report with OpenAI
  • ✅ Real feature request with OpenAI
  • ✅ Real documentation request with OpenAI
  • ✅ Real question with OpenAI

Cost: ~$0.0002 per run (~$0.01 per 50 runs)
Best for: Local validation before deployment

Path 3: Unit Tests Only (Free, Fast)

```bash
npm run test:workflows:run
```

Coverage: 25 unit tests (pure logic validation)
Speed: <1s
Best for: CI/CD, quick validation

Expected Output

```
Test Files  2 passed | 1 skipped (3)
     Tests  37 passed | 4 skipped (41)
   Duration  453ms
```

✅ 25 unit tests (pure logic, free)
✅ 12 mock integration tests (simulated API, free)
✅ 4 real API tests (actual OpenAI, ~$0.0002)

Troubleshooting Tests

Tests are skipped?

  • Real API tests skip if OPENAI_API_KEY is not set OR RUN_REAL_API_TESTS is not true
  • This prevents accidental API costs
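
The skip condition described above, expressed in shell form (the actual gate lives in the vitest files; `MODE` is an illustrative variable):

```bash
# Sketch of the real-API-test gate: skip unless both conditions hold.
unset OPENAI_API_KEY RUN_REAL_API_TESTS   # simulate a fresh clone

if [ -z "${OPENAI_API_KEY:-}" ] || [ "${RUN_REAL_API_TESTS:-}" != "true" ]; then
  MODE="skip"   # no key, or real tests not explicitly enabled
else
  MODE="run"
fi

echo "real API tests: $MODE"   # real API tests: skip
```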

"Invalid API key" error?

  • Verify the key in .env.local matches the one shown at https://platform.openai.com/api-keys (it starts with sk-proj-...) and has not been revoked

Need verbose output?

```bash
npm run test:workflows -- ai-issue-triage-mock --reporter=verbose
```

Run specific test?

```bash
npm run test:workflows -- ai-issue-triage-mock -t "should triage a bug"
```

Watch mode (re-run on changes)?

```bash
npm run test:workflows -- ai-issue-triage-mock --watch
```

Test Files

```
src/__tests__/workflows/
├── ai-issue-triage.test.js           # 25 unit tests
├── ai-issue-triage-mock.test.js      # 12 mock integration tests
└── ai-issue-triage-real.test.js      # 4 real API tests
```

Configuration Validation

Purpose

Keep triage categories in sync across:

  1. .github/workflows/config/triage-categories.json (source of truth)
  2. .github/workflows/issue-triage.yml (workflow references)
  3. src/__tests__/workflows/*.test.js (test fixtures)
  4. GitHub Projects V2 "Area" field (where labels are applied)
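
To make the sync concrete, here is a hypothetical shape for the source-of-truth file, with a minimal required-field check in the spirit of the validator. The category entries and descriptions are invented for illustration; only the required fields (id, name, description) come from this document:

```bash
# Hypothetical example of triage-categories.json (contents illustrative only):
cat > /tmp/triage-categories.json <<'EOF'
{
  "categories": [
    { "id": "bug",           "name": "Bug",           "description": "Something is broken" },
    { "id": "enhancement",   "name": "Enhancement",   "description": "A feature request" },
    { "id": "documentation", "name": "Documentation", "description": "Docs-only change" }
  ]
}
EOF

# Check that every required field name appears at least once:
MISSING=0
for field in id name description; do
  grep -q "\"$field\"" /tmp/triage-categories.json || MISSING=$((MISSING + 1))
done
echo "missing required fields: $MISSING"   # missing required fields: 0
```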

npm Validation Task

```bash
npm run test:workflows:validate
```

What it checks:

  • ✅ JSON syntax validity
  • ✅ Required fields present (id, name, description)
  • ✅ Categories in .yml files match config
  • ✅ Categories in test fixtures match config
  • ✅ GitHub Projects V2 "Area" field in sync (if GH_TOKEN set)

Local Validation (Without GitHub)

```bash
npm run test:workflows:validate
```

Validates JSON and cross-references but skips GitHub sync check.

With GitHub Sync Check

```bash
GH_TOKEN=$(gh auth token) npm run test:workflows:validate
```

Also validates against GitHub Projects V2 configuration.

Manual Validation (Shell Script)

```bash
bash .github/workflows/validate-triage-config.sh
```

Flags:

  • --strict - Fail on any mismatches (for CI/CD)
  • --fix - Auto-sync where possible (limited by GitHub API)
  • --help - Show help

Validation Scenarios

Add New Category to Config

  1. Edit .github/workflows/config/triage-categories.json
  2. Add new category with id, name, description
  3. Commit and push
  4. Validation workflow runs automatically
  5. If using GitHub Projects V2: Add option to "Area" field
  6. Run: npm run test:workflows:validate
  7. ✅ All checks pass

Update Existing Category

  1. Edit description in config
  2. Run: npm run test:workflows:validate
  3. ✅ Passes (descriptions aren't validated against GitHub)

Fix Out-of-Sync Issues

Config has categories GitHub doesn't:

```bash
# Add them to GitHub Projects V2 manually:
# 1. Go to Lantern App project > Area field settings
# 2. Add missing options
# 3. Re-run: npm run test:workflows:validate
```

GitHub has categories config doesn't:

```bash
# Option A: Add to config
# Option B: Remove from GitHub Projects V2
# Run: npm run test:workflows:validate
```

GitHub Actions Workflow

File: .github/workflows/validate-triage-config.yml

Runs automatically on:

  • Pull requests that modify .github/workflows/config/triage-categories.json
  • Weekly schedule (Monday 9 AM UTC)
  • Manual trigger via Actions tab

Failure alert:

  • Posts Discord notification to #automation-alerts
  • Includes link to validation output
  • Suggests steps to fix

Implementation Details

Key Features & Safeguards

✅ Auto-Apply for ALL Issues

  • No bot detection - ALL issues get triaged (including bot-created ones)
  • Runs immediately on issue creation
  • Safe label application: Uses append-only --add-label (never deletes existing labels)
  • Philosophy: Something is better than nothing

✅ Label Validation

```bash
VALID_LABELS="bug,documentation,duplicate,enhancement,good first issue,help wanted,invalid,question,wontfix"
```

  • Only existing GitHub labels can be applied
  • AI cannot create new labels
  • Typos and hallucinations filtered out

✅ Permission Checks

```bash
PERMISSION=$(gh api "repos/$REPO/collaborators/$COMMENTER/permission" --jq '.permission')
if [ "$PERMISSION" = "admin" ] || [ "$PERMISSION" = "write" ]; then
  : # allow /apply-triage to proceed
fi
```

  • Only maintainers can use /apply-triage command
  • Contributors cannot auto-apply labels

✅ Graceful Fallback

  • API key missing → Warning comment posted
  • API error → Error comment posted with helpful message
  • Invalid JSON → Handled with sensible defaults
  • No issue creation is ever blocked
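
The missing-key case above can be sketched as follows. This is illustrative, not the workflow's actual step: `COMMENT` and `RESULT` are invented names, and the comment text paraphrases the warning behavior described here.

```bash
# Sketch of the graceful-fallback pattern for a missing API key.
unset OPENAI_API_KEY   # simulate the failure case

if [ -z "${OPENAI_API_KEY:-}" ]; then
  COMMENT="⚠️ AI triage skipped: OPENAI_API_KEY is not configured. Please label manually."
  # In the workflow this would post the comment, e.g.:
  #   gh issue comment "$ISSUE_NUMBER" --body "$COMMENT"
  RESULT="fallback"   # warn and exit cleanly — never block the issue
else
  RESULT="triaged"
fi

echo "$RESULT"   # fallback
```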

Setup for Production

1. Add OpenAI API Key

```bash
# Get key from: https://platform.openai.com/api-keys
# Add via UI: Repository Settings → Secrets and variables → Actions
#   Name:  OPENAI_API_KEY
#   Value: sk-proj-...
# Or via CLI (prompts for the value):
gh secret set OPENAI_API_KEY
```

2. Test Workflow

```bash
# Create test issue manually
# Wait ~30 seconds
# Verify AI comment appears and labels are applied
```

3. Monitor Costs

```bash
# Set budget alert in OpenAI dashboard
# Recommended: $10/month limit
# Check usage: https://platform.openai.com/usage
```

Cost Breakdown

| Item | Cost | Frequency | Total/Month |
| --- | --- | --- | --- |
| Mock tests (local) | Free | Every commit | $0 |
| Real API tests (local) | ~$0.0002 | Before deploy | <$0.01 |
| Production triage (100 issues) | ~$0.006 | Monthly | <$0.01 |
| **Total** | | | **<$0.02** |

Files & Locations

```
Core Workflows:
.github/workflows/issue-triage.yml           # Main auto-apply workflow
.github/workflows/apply-triage.yml           # Legacy apply workflow
.github/workflows/validate-triage-config.yml # Validation workflow

Configuration:
.github/workflows/config/triage-categories.json  # Categories source of truth

Test Files:
src/__tests__/workflows/ai-issue-triage.test.js           # 25 unit tests
src/__tests__/workflows/ai-issue-triage-mock.test.js      # 12 mock tests
src/__tests__/workflows/ai-issue-triage-real.test.js      # 4 real API tests

Scripts:
scripts/validate-triage-consistency.js       # npm-based consistency checker

Documentation:
docs/engineering/github/workflows/AI_TRIAGE.md  # This file (single source of truth)
```

Testing Checklist (Before Production)

  • [ ] Add OPENAI_API_KEY to repository secrets
  • [ ] Run mock tests locally: npm run test:workflows:run -- ai-issue-triage-mock
  • [ ] Run real API tests: npm run test:workflows:real
  • [ ] Create test issue via GitHub UI (human user)
  • [ ] Verify AI comment appears and labels applied within 60 seconds
  • [ ] Check OpenAI usage dashboard: https://platform.openai.com/usage
  • [ ] Set up monthly budget alert ($10 recommended)
  • [ ] Test /apply-triage command as maintainer
  • [ ] Verify non-maintainers cannot use /apply-triage
  • [ ] Run validation: npm run test:workflows:validate

Rollback Plan

If needed, disable AI triage:

Option 1: Disable Workflows

```bash
# Via GitHub UI: Actions → issue-triage.yml → Disable workflow (repeat for apply-triage.yml)
# Or via CLI:
gh workflow disable issue-triage.yml
gh workflow disable apply-triage.yml
```

Option 2: Remove API Key

```bash
# Via GitHub UI: Settings → Secrets and variables → Actions → delete OPENAI_API_KEY
# Or via CLI:
gh secret delete OPENAI_API_KEY
# Workflows will gracefully degrade with warning comments
```

Option 3: Delete Workflows

```bash
git rm .github/workflows/issue-triage.yml
git rm .github/workflows/apply-triage.yml
git commit -m "Remove AI triage workflows"
```

Success Metrics

After deployment, track:

  1. Accuracy: % of AI suggestions accepted by maintainers
  2. Coverage: % of issues triaged by AI vs. manual
  3. Time Savings: Average time to first triage
  4. Cost: Actual monthly API costs
  5. Errors: Failed API calls, invalid suggestions
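
For example, the accuracy metric (item 1) is just accepted suggestions over total AI-triaged issues. The counts below are hypothetical:

```bash
# Example accuracy computation from hypothetical counts:
ACCEPTED=42   # AI label suggestions kept by maintainers
TOTAL=50      # issues triaged by AI
ACCURACY=$(awk -v a="$ACCEPTED" -v t="$TOTAL" 'BEGIN { printf "%.0f", 100 * a / t }')
echo "acceptance rate: ${ACCURACY}%"   # acceptance rate: 84%
```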


Status: ✅ Production Ready
Breaking Changes: ❌ None
Backward Compatible: ✅ Yes
Reversible: ✅ Yes (disable workflows or remove API key)
