Issue Lifecycle

Every issue in GIM follows a lifecycle from initial submission through community verification. Here's how it works.

Lifecycle Flow

Error Encountered
  → Search GIM (gim_search_issues)
  → Match Found?

If Yes:
  → Apply Fix
  → Verify & Confirm (gim_confirm_fix)

If No:
  → Solve Manually
  → Submit to GIM (gim_submit_issue)
  → Knowledge Base Updated

Stages Explained

1. Error Encountered

When your AI assistant encounters an error, it calls gim_search_issues with the error context. GIM searches the knowledge base for matching fixes.

Data Sent to GIM

  • error_message (required): The exact error text
  • language (optional): Programming language (e.g., python, typescript)
  • framework (optional): Framework in use (e.g., fastapi, nextjs)
  • provider/model (optional): AI assistant identification
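Putting the fields above together, a search request might look like the following sketch. The field names come from the list above; the object-literal form and the placeholder values for provider and model are illustrative, not GIM's exact wire format.

```typescript
// Illustrative gim_search_issues payload. error_message is required;
// the remaining fields are optional context that narrows the search.
const searchRequest = {
  error_message:
    "ImportError: cannot import name 'BaseSettings' from 'pydantic'", // required: exact error text
  language: "python",            // optional: programming language
  framework: "fastapi",          // optional: framework in use
  provider: "example-provider",  // optional placeholder: AI assistant vendor
  model: "example-model",        // optional placeholder: model identifier
};
```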

Privacy Protection

All data is automatically sanitized before processing. Secrets, API keys, file paths, and PII are removed—you never have to worry about accidentally leaking sensitive information.

2. Search & Match

GIM uses semantic similarity to find matching issues. Results include a similarity score indicating how closely each issue matches your error.

Similarity Score Interpretation

Score Range    Interpretation
> 0.7          Strong match, likely the same issue
0.5 – 0.7      Moderate match, fix may need adaptation
0.2 – 0.5      Weak match, review carefully before applying

The 0.2 threshold is intentionally permissive. It's better to surface potentially relevant results for review than to miss good matches. Always verify the fix applies to your specific situation.
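A client could bucket results using the thresholds above. This is a minimal sketch: the tier boundaries are taken from the table, but the function name and labels are illustrative, not part of GIM's API.

```typescript
// Map a GIM similarity score to a review tier using the documented thresholds.
function interpretSimilarity(score: number): string {
  if (score > 0.7) return "strong";   // likely the same issue
  if (score >= 0.5) return "moderate"; // fix may need adaptation
  if (score >= 0.2) return "weak";     // review carefully before applying
  return "below threshold";            // under 0.2: not surfaced
}
```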

3. Fix Applied & Verified

After applying a GIM fix, the assistant calls gim_confirm_fix to report whether it worked. Successful confirmations increase the fix's confidence score.

Always Confirm

Calling gim_confirm_fix is mandatory after applying a GIM fix. This feedback loop is essential for improving fix quality across the entire community.

Example Confirmation

gim_confirm_fix({
  issue_id: "550e8400-e29b-41d4-a716-446655440000",
  fix_worked: true,
  feedback: "Fix worked after restarting the dev server"
})

4. New Issue Submitted

When a novel fix is discovered for a globally relevant issue (not project-specific), the assistant calls gim_submit_issue. The fix enters the knowledge base and becomes available to all GIM users.

See the Globally Useful Criteria section below for guidance on what to submit.

5. Community Verification

As more developers encounter and confirm a fix, its confidence score grows through Bayesian updates. High-confidence fixes are surfaced more prominently in search results, creating a self-improving knowledge base.

See the Confidence Scoring Algorithm section below for technical details.

Confidence Scoring Algorithm

GIM uses a Bayesian update formula to adjust confidence scores based on community feedback. Each confirmation (success or failure) moves the score toward its true reliability.

Bayesian Update Formula

When fix_worked = true:
  new_score = (score × count + 1.0) / (count + 1)

When fix_worked = false:
  new_score = (score × count + 0.0) / (count + 1)
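The two cases above collapse into a single running-average update, sketched below. The formula is a direct transcription of the one in this doc; the function name is illustrative.

```typescript
// Bayesian-style running average: fold one new verification into the score.
// `score` is the current confidence, `count` the number of prior verifications.
function updateConfidence(score: number, count: number, fixWorked: boolean): number {
  const outcome = fixWorked ? 1.0 : 0.0;
  return (score * count + outcome) / (count + 1);
}
```

Each verification pulls the score toward 1.0 or 0.0 by a step of size 1 / (count + 1), so later confirmations move it less.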

Score Interpretation

Score          Meaning
0.9+           Highly reliable, verified by multiple users
0.7 – 0.9      Good reliability, likely to work
0.5 – 0.7      Moderate reliability, may need adaptation
< 0.5          Low reliability, use with caution

New issues start with a confidence score of 0.5 (neutral). Each verification moves the score toward 1.0 (success) or 0.0 (failure), with early verifications having the most impact.
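The diminishing impact of later verifications can be seen by simulating a run of confirmations from the neutral 0.5 starting score. One assumption in this sketch: the initial 0.5 prior is treated as a single observation (count = 1), which the doc does not specify.

```typescript
// Simulate a sequence of fix confirmations, starting from the neutral prior.
// Assumption: the initial 0.5 score counts as one observation.
function simulate(outcomes: boolean[]): number {
  let score = 0.5;
  let count = 1;
  for (const ok of outcomes) {
    score = (score * count + (ok ? 1 : 0)) / (count + 1);
    count += 1;
  }
  return score;
}
```

The first success jumps the score from 0.5 to 0.75; a second success only adds about 0.08 more, illustrating why early verifications matter most.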

Deduplication Model

When a new issue is submitted, GIM checks for existing duplicates. If a highly similar issue exists, the new submission becomes a "child issue" linked to the original "master issue."

Master vs Child Issues

Type     When Created                           Contains
Master   First occurrence (similarity < 0.85)   Full fix bundle, canonical error description
Child    Duplicate found (similarity ≥ 0.85)    Environment context, linked to master
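The routing rule in the table reduces to a single threshold check, sketched below. The 0.85 cutoff is from this doc; the function and constant names are illustrative, not GIM's actual API.

```typescript
// Decide whether a new submission becomes a master or a child issue,
// based on the similarity of its best match in the knowledge base.
const DUPLICATE_THRESHOLD = 0.85;

function routeSubmission(bestMatchSimilarity: number): "master" | "child" {
  return bestMatchSimilarity >= DUPLICATE_THRESHOLD ? "child" : "master";
}
```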

Benefits

  • No fragmentation: Single source of truth for each unique issue
  • Environment diversity: Captures variations (different OS, package versions)
  • Better verification: Child confirmations boost the master's confidence score

What Makes an Issue "Globally Useful"

Not every fix belongs in GIM. Before submitting, apply one simple test.

The Key Question

Ask yourself: "Would a stranger on a completely different codebase hit this same error?" If the answer is no, don't submit.

DO Submit (Globally Reproducible)

  • Library/package version conflicts or incompatibilities
  • Framework configuration pitfalls (Next.js, FastAPI, Django)
  • Build tool errors (webpack, vite, esbuild, cargo)
  • Deployment & CI/CD issues (Docker, Vercel, AWS)
  • Environment or OS-specific problems (Node version, Python path)
  • SDK/API breaking changes or undocumented behavior
  • AI model quirks (tool calling, response parsing, token limits)
  • Language-level gotchas (async/await traps, type edge cases)

DO NOT Submit (Project-Local)

  • Database schema mismatches specific to your project
  • Variable naming bugs or wrong function arguments
  • Business logic errors unique to your project
  • Missing internal imports or modules
  • Typos in project code
  • Test fixture or mock data mismatches
  • User-specific file paths or local configuration

Data Sanitization

Every submission passes through a two-layer sanitization pipeline before storage. This happens automatically—you don't need to manually scrub your error messages.

Sanitization Layers

  • Layer 1 (Regex): Pattern-based detection of API keys, URLs, file paths, emails, IPs
  • Layer 2 (LLM): Context-aware analysis for domain-specific secrets and PII

Secrets, API keys, file paths, and personally identifiable information are stripped before anything is stored. See the System Design: Security Model for technical details.
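A Layer-1, regex-based pass like the one described above might look like the following minimal sketch. Real sanitizers use far more extensive pattern sets; these four patterns and all replacement tokens are illustrative, not GIM's actual rules.

```typescript
// Minimal Layer-1 sanitizer: pattern-based redaction of common secret shapes.
const PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "<EMAIL>"],              // email addresses
  [/\b(?:\d{1,3}\.){3}\d{1,3}\b/g, "<IP>"],                 // IPv4 addresses
  [/\b(?:sk|pk|api)[-_][A-Za-z0-9]{16,}\b/g, "<API_KEY>"],  // key-like tokens
  [/(?:\/[\w.-]+){2,}/g, "<PATH>"],                         // unix-style file paths
];

function sanitize(text: string): string {
  // Apply each pattern in order, replacing matches with a neutral token.
  return PATTERNS.reduce((acc, [re, repl]) => acc.replace(re, repl), text);
}
```

A Layer-2 LLM pass would then catch context-dependent secrets (internal hostnames, customer names) that no fixed regex can anticipate.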