AI Repo Adventures Generator
Transform Your Code Into Adventures

What is the AI Repo Adventures Generator?

AI Repo Adventures revolutionizes how developers explore and understand codebases. Instead of dry documentation or overwhelming file structures, developers can experience your code through engaging, AI-generated adventure stories that make learning more memorable and fun.

🎯

Quest-Based Code Learning

Transform complex codebases into guided adventures with focused quests that explore files in your repo, making code comprehension fun and manageable

🎨

Multiple Theme Perspectives

Experience your code through space exploration, mythical kingdoms, ancient temples, or a professional developer lens: same codebase, fresh perspectives

⚡

Instant Interactive Documentation

Generate a beautiful and engaging website in minutes with AI-curated code snippets, explanations, and direct GitHub links, with no manual writing required

Perfect for onboarding new team members, creating engaging documentation, teaching code architecture, or simply making your projects more accessible and enjoyable to explore.

Installation & Usage

📦 Step 1: Install the Package

Install globally for use across all your projects:

bash
npm install -g @codewithdan/ai-repo-adventures-generator

Or use directly without installation:

bash
npx @codewithdan/ai-repo-adventures-generator
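
To verify the global install, you can print the CLI help (the full option list appears under Command Line Options in Step 4):

bash
repo-adventures --help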

🔧 Step 2: Configure Your LLM

Create a .env file in your project (or add these settings to an existing one) with your preferred LLM provider:

env
# Azure AI Foundry
REPO_ADV_LLM_API_KEY=your_azure_ai_foundry_key_here
REPO_ADV_LLM_BASE_URL=https://your-resource.openai.azure.com
REPO_ADV_LLM_MODEL=gpt-4o
REPO_ADV_LLM_API_VERSION=2025-01-01-preview

# Or OpenAI Configuration
REPO_ADV_LLM_API_KEY=your_openai_key_here
REPO_ADV_LLM_BASE_URL=https://api.openai.com/v1
REPO_ADV_LLM_MODEL=gpt-4o
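
Because the .env file holds a real API key, keep it out of source control. Assuming your project uses Git, one common approach is:

bash
# Make sure the .env file (and the key it contains) is never committed
echo ".env" >> .gitignore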

Token Usage: The adventure generator uses a significant number of tokens as it analyzes your codebase and generates personalized adventure content. Usage varies with repository size and the amount of content generated. We recommend starting with a single adventure theme to evaluate cost and value before expanding to multiple themes.
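
For example, a quick low-cost trial run can combine a single theme with a small quest limit (both flags are described under Command Line Options in Step 4):

bash
# Trial run: one theme and only two quests to gauge token usage and output quality
repo-adventures --theme space --max-quests 2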

🎯 Step 3: Create an Adventure Configuration File

For better adventure quest generation, create an adventure.config.json in your project root to let the AI know about important areas of your codebase and key code it should focus on.

Use this prompt with any AI coding assistant that knows about your repo (GitHub Copilot, etc.) to generate your configuration:

prompt
Please analyze this codebase and create an adventure.config.json file at the root of the project to help users explore the project through guided quests.

Requirements:
- Identify 3-5 key areas of functionality in the codebase
- Select 2-4 representative files for each area
- Highlight 2-4 important functions/classes or other members per file
- Focus on entry points, main logic, and system integrations

Use this exact JSON structure:

{
  "adventure": {
    "name": "[Your Project Name]",
    "description": "[Brief project description]",
    "url": "https://github.com/[username]/[repo-name]",
    "customInstructions": "[Optional: Any specific guidance for story generation]",
    "microsoftClarityCode": "[Optional: Your Microsoft Clarity tracking code]",
    "googleAnalyticsCode": "[Optional: Your Google Analytics 4 measurement ID (G-XXXXXXXXXX)]",
    "quests": [
      {
        "title": "[Quest Name - e.g. 'Authentication System']",
        "description": "[What users learn from exploring this area]",
        "files": [
          {
            "path": "[relative/path/to/important/file.js]",
            "description": "[Role of this file in the system]",
            "highlights": [
              {
                "name": "[functionName or ClassName.method]",
                "description": "[What this code does and why it matters]"
              }
            ]
          }
        ]
      }
    ]
  }
}

Prioritize files that contain:
✓ Main entry points (index.js, program.cs, main.ts, app.js, etc.)
✓ Core business logic and algorithms
✓ API routes and controllers
✓ Database models and data access
✓ Configuration and setup code
✓ Key middleware and utilities

Avoid:
✗ Test files, build scripts, or configuration-only files
✗ Simple utility functions without business logic
✗ Auto-generated or boilerplate code

Aim for 15-25 total highlights across all quests for the best exploration experience.

Example generated configuration:

json
{
  "adventure": {
    "name": "Your Project Name",
    "description": "What your project does",
    "url": "https://github.com/username/repo",
    "microsoftClarityCode": "",
    "googleAnalyticsCode": "",
    "quests": [
      {
        "title": "Core Features",
        "description": "Main functionality exploration",
        "files": [
          {
            "path": "src/main.ts",
            "description": "Entry point",
            "highlights": [
              {
                "name": "initApp",
                "description": "Initialization logic"
              }
            ]
          }
        ]
      }
    ]
  }
}

This helps the AI create more focused and relevant adventure content for your specific codebase.

🚀 Step 4: Generate Your Adventure

Interactive Mode (recommended for first-time users):

bash
repo-adventures

This will guide you through theme selection and configuration options.

Command Line Mode with options:

bash
repo-adventures --theme all --output ./docs

βš™οΈ Command Line Options

  • --theme <theme>: Theme to generate: space, mythical, ancient, developer, or all (example: --theme mythical)
  • --output <dir>: Output directory, default ./public (example: --output ./docs)
  • --overwrite: Overwrite existing files without prompting
  • --sequential: Process themes sequentially to avoid rate limits (for --theme all)
  • --max-quests <num>: Limit the number of quests to generate (example: --max-quests 5)
  • --serve: Start an HTTP server and open your browser after generation
  • --log-llm-output [dir]: Save raw LLM output for debugging, default .ai-repo-adventures/llm-output (example: --log-llm-output ./debug)
  • --help: Show the help message with all options

Example Commands:

bash
# Generate space theme with auto-serve
repo-adventures --theme space --serve

# Generate all themes (space, mythical, ancient, developer) in parallel
repo-adventures --theme all --output ./docs --overwrite

# Generate all themes sequentially to avoid LLM rate limits (parallel generation is the default)
repo-adventures --theme all --sequential --output ./public --overwrite

# Quick test with limited quests
repo-adventures --theme mythical --max-quests 2

⚡ Handling Rate Limits

When using Azure AI Foundry OpenAI models, you may encounter token rate limits when processing multiple themes in parallel, depending on the TPM (Tokens Per Minute) limit of the model. The system detects these errors and handles them gracefully:

Azure Rate Limit Error:

error
429 Requests... have exceeded token rate limit of your current AIServices S0 pricing tier

Automatic Solutions:

  • System detects token rate exceeded errors (different from request rate limits)
  • Shows helpful suggestions for using the --sequential flag
  • In sequential mode: automatically waits 60 seconds and continues processing

Recommended Usage for AI Foundry models with lower Tokens Per Minute (TPM) limits:

bash
# Proactively avoid rate limits
repo-adventures --theme all --sequential --output ./public

# Or let the system handle it automatically
repo-adventures --theme all --output ./public
# System will suggest using --sequential if rate limit hit

Benefits of Sequential Processing:

  • ✅ Avoids overwhelming token rate windows (200K tokens/60s)
  • ✅ All themes still generate successfully (just takes longer)
  • ✅ Clear progress indicators and wait time notifications
  • ✅ No manual intervention required