A self-compiling tool that transforms prompt stacks into code and tests using LLM generation.
vibec is a unique compiler that processes markdown-based prompt stacks to generate code, tests, and documentation. It can compile itself through a bootstrap process, evolving its own implementation (`bin/vibec.js`) across numbered stages. The tool supports both static (`.md`) and dynamic (`.js`) plugins, maintains staged outputs under `output/stacks/` for Git history, and aggregates the latest runtime version in `output/current/` using a "Last-Wins" merge strategy.
```
vibec/
├── bin/                         # Initial implementation
│   ├── vibec.js                 # Core compiler script
│   └── test.sh                  # Test runner
├── bootstrap/                   # Bootstrap documentation
├── stacks/                      # Prompt stacks
│   ├── core/                    # Core functionality
│   │   ├── 001_add_logging.md
│   │   ├── 002_add_plugins.md
│   │   ├── 003_add_cli.md
│   │   ├── 004_add_config.md
│   │   └── plugins/             # Core plugins
│   └── tests/                   # Test generation
├── output/                      # Generated artifacts
│   ├── bootstrap/               # Bootstrap outputs
│   │   ├── bin/                 # Bootstrap compiler
│   │   │   └── vibec.js
│   │   ├── bootstrap.js         # Bootstrap script
│   │   └── test.sh              # Bootstrap test script
│   ├── current/                 # Latest merged runtime version
│   │   ├── bin/                 # Current compiler
│   │   │   └── vibec.js
│   │   ├── bootstrap.js         # Current bootstrap script
│   │   ├── test.js              # Current test suite
│   │   └── test.sh              # Current test script
│   └── stacks/                  # Staged stack outputs
│       ├── core/                # Core stack stages
│       │   ├── 001_add_logging/
│       │   │   └── bin/
│       │   │       └── vibec.js
│       │   ├── 002_add_plugins/
│       │   │   └── bin/
│       │   │       └── vibec.js
│       │   ├── 003_add_cli/
│       │   │   └── bin/
│       │   │       └── vibec.js
│       │   └── 004_add_config/
│       │       └── bin/
│       │           └── vibec.js
│       └── tests/               # Test stack stages
│           ├── 001_basic_tests/
│           │   ├── test.js
│           │   └── test.sh
│           ├── 002_feature_tests/
│           │   └── test.js
│           ├── 003_cli_tests/
│           │   └── test.js
│           └── 004_config_tests/
│               └── test.js
├── .vibec_hashes.json           # Prompt hashes and test results
├── vibec.json                   # Configuration file
└── package.json                 # Node dependencies
```
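The staged layout above feeds the "Last-Wins" merge: stages are applied in ascending numeric order, and when two stages emit the same relative path, the later stage's file overwrites the earlier one in `output/current/`. A minimal sketch of that idea (illustrative only; `mergeStages` and `walk` are not vibec's actual functions):

```js
// Illustrative "Last-Wins" merge: later stages overwrite earlier ones
// whenever they produce the same relative file path.
const fs = require('fs');
const path = require('path');

function mergeStages(stageDirs, currentDir) {
  // stageDirs must already be sorted in ascending stage order (001, 002, ...).
  for (const stageDir of stageDirs) {
    for (const file of walk(stageDir)) {
      const relative = path.relative(stageDir, file);
      const target = path.join(currentDir, relative);
      fs.mkdirSync(path.dirname(target), { recursive: true });
      fs.copyFileSync(file, target); // later stages win
    }
  }
}

function* walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(full);
    else yield full;
  }
}
```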
vibec employs a progressive bootstrapping process:
- Begins with the initial `bin/vibec.js` implementation
- Processes numbered stages sequentially (001, 002, etc.)
- Updates `vibec.js` when it is generated in a stage, using the new version for subsequent stages
- Creates a self-improving cycle where the compiler evolves during compilation (as sketched below)
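A rough sketch of that loop, assuming a hypothetical `runStage` callback that invokes the current compiler for one stage (this is not vibec's actual code):

```js
// Illustrative bootstrap loop: if a stage regenerates bin/vibec.js,
// that newer compiler is used for all remaining stages.
const fs = require('fs');
const path = require('path');

async function bootstrap(stages, outputDir, runStage) {
  // Start from the hand-written compiler in bin/.
  let compilerPath = path.join('bin', 'vibec.js');

  for (const stage of stages) {            // e.g. ['001_add_logging', ...]
    await runStage(compilerPath, stage);   // hypothetical: compile one stage

    // If this stage emitted a new compiler, swap it in for later stages.
    const regenerated = path.join(outputDir, 'stacks', 'core', stage, 'bin', 'vibec.js');
    if (fs.existsSync(regenerated)) {
      compilerPath = regenerated;
    }
  }
  return compilerPath;
}
```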
Install globally:

```bash
npm install -g vibec
```

Or use via npx:

```bash
npx vibec --version
```

Set your LLM API key:

```bash
export VIBEC_API_KEY=your_api_key_here
```

Run with custom options:

```bash
npx vibec --stacks=core,tests --test-cmd="npm test" --retries=2 --output=output
```

CLI options:

- `--workdir=<dir>`: Working directory (default: `.`)
- `--stacks=<stack1,stack2,...>`: Stacks to process (default: `core`)
- `--dry-run`: Simulate without modifications (default: `false`)
- `--start=<number>`: Start with a specific stage number (default: none)
- `--end=<number>`: End with a specific stage number (default: none)
- `--api-url=<url>`: LLM API endpoint (default: `https://openrouter.ai/api/v1`)
- `--api-model=<model>`: LLM model (default: `anthropic/claude-3.7-sonnet`)
- `--test-cmd=<command>`: Test command to run (default: none)
- `--retries=<number>`: Retry attempts (≥ 0, default: `0`)
- `--output=<dir>`: Output directory (default: `output`)
- `--help`: Display usage information
- `--version`: Show version (e.g., `vibec v1.0.0`)
Configure via `vibec.json`:

```json
{
  "workdir": ".",
  "stacks": ["core", "tests"],
  "dryRun": false,
  "start": null,
  "end": null,
  "testCmd": "npm test",
  "retries": 2,
  "pluginTimeout": 5000,
  "apiUrl": "https://openrouter.ai/api/v1",
  "apiModel": "anthropic/claude-3.7-sonnet",
  "output": "output"
}
```

Option precedence: CLI > Environment Variables > `vibec.json` > Defaults
Validation:

- `retries`: Must be non-negative (≥ 0)
- `pluginTimeout`: Must be positive (> 0)
- Malformed JSON in `vibec.json` triggers an error log and falls back to defaults
Environment variables:

- `VIBEC_WORKDIR`: Working directory path
- `VIBEC_STACKS`: Comma-separated stacks (e.g., `core,tests`)
- `VIBEC_DRY_RUN`: `true`/`false`
- `VIBEC_START`: Numeric stage value
- `VIBEC_END`: Numeric stage value
- `VIBEC_OUTPUT`: Output directory
- `VIBEC_TEST_CMD`: Test command
- `VIBEC_RETRIES`: Retry count
- `VIBEC_PLUGIN_TIMEOUT`: Plugin timeout (ms)
- `VIBEC_API_URL`: LLM API endpoint
- `VIBEC_API_KEY`: LLM API key (recommended over config)
- `VIBEC_API_MODEL`: LLM model
- `VIBEC_DEBUG`: Enable debug logging (set to `1`)
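Resolution follows the precedence above and the `VIBEC_*` naming pattern. A minimal sketch (illustrative only; `resolveOption` is not part of vibec's API):

```js
// Illustrative precedence: CLI > environment variable > vibec.json > default.
function resolveOption(name, cliArgs, env, fileConfig, defaults) {
  if (cliArgs[name] !== undefined) return cliArgs[name];

  // Map camelCase option names to VIBEC_* env names, e.g. apiUrl -> VIBEC_API_URL.
  const envKey = 'VIBEC_' + name.replace(/([A-Z])/g, '_$1').toUpperCase();
  if (env[envKey] !== undefined) return env[envKey];

  if (fileConfig[name] !== undefined) return fileConfig[name];
  return defaults[name];
}

// Example: 'retries' checks --retries, then VIBEC_RETRIES,
// then vibec.json's "retries", then the default of 0.
```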
Compatible with OpenAI-style APIs. Configure via `VIBEC_API_URL` and `VIBEC_API_KEY`.
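As a sketch of what a request against such an endpoint looks like (the payload shape follows the standard OpenAI chat-completions convention; this is not vibec's internal code):

```js
// Minimal OpenAI-style chat completion request (Node 18+ global fetch).
async function generate(prompt) {
  const res = await fetch(`${process.env.VIBEC_API_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.VIBEC_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: process.env.VIBEC_API_MODEL || 'anthropic/claude-3.7-sonnet',
      messages: [{ role: 'user', content: prompt }]
    })
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```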
Prompts use markdown:

```markdown
# Component Name

Description of the generation task.

## Context: file1.js, file2.js
## Output: path/to/output.js
```

- `## Context:`: Reference files for context
- `## Output:`: Specify output file paths (multiple allowed)
Added in stage `002_add_plugins.md`:

- Static plugins (`.md`): Stored in `stacks/<stack>/plugins/`, appended to prompts in alphabetical order
- Dynamic plugins (`.js`): Async functions in `stacks/<stack>/plugins/`, executed with a configurable timeout (see the example after this list):

  ```js
  module.exports = async ({ config, stack, promptNumber, promptContent,
                            workingDir, testCmd, testResult }) => {
    return "Generated content";
  };
  ```

- Plugin errors are logged and skipped without halting execution
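For example, a hypothetical dynamic plugin that adds project conventions to each prompt might look like this (the filename and returned text are made up; only the exported signature comes from the contract above):

```js
// stacks/core/plugins/conventions.js (hypothetical example)
module.exports = async ({ config, stack, promptNumber, promptContent,
                          workingDir, testCmd, testResult }) => {
  // The returned string is treated as generated plugin content
  // (presumably added to the prompt, like static .md plugins).
  return [
    '## Conventions',
    '- Prefer Node built-ins over external dependencies',
    `- This is stage ${promptNumber} of the "${stack}" stack`
  ].join('\n');
};
```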
To add a new stage:

- Create a new numbered file (e.g., `stacks/core/005_new_feature.md`)
- Use the `NNN_name.md` naming convention
- Specify outputs with `## Output:`
Tests in `stacks/tests/` generate:

- `test.sh`: Validates `vibec.js` and runs `test.js`
- `test.js`: Uses `tape` for unit tests (Node built-ins only; see the sketch below)
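A generated `test.js` might look roughly like this sketch, using `tape`'s standard API (the specific assertion is hypothetical):

```js
// Hypothetical example of a generated tape test.
const test = require('tape');
const fs = require('fs');
const path = require('path');

test('vibec.js is generated', (t) => {
  const compiler = path.join(__dirname, 'bin', 'vibec.js');
  t.ok(fs.existsSync(compiler), 'bin/vibec.js exists');
  t.end();
});
```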
Example: generate a Pong game from a prompt stack. Set up a project:

```bash
mkdir pong-game && cd pong-game
mkdir -p stacks/pong output
```

Create a prompt in `stacks/pong/` (following the `NNN_name.md` convention):

```markdown
# Pong Game Base

Create a basic Pong game:
- HTML: Canvas element in a centered container
- CSS: Black canvas with borders
- JS: Canvas-based paddle and ball with arrow key controls

## Output: index.html
## Output: styles.css
## Output: game.js
```

Add a `vibec.json`:

```json
{
  "stacks": ["pong"],
  "output": "output"
}
```

Then run:

```bash
export VIBEC_API_KEY=your_api_key_here
npx vibec
cd output/current
python3 -m http.server 8000
```

Visit http://localhost:8000.
To iterate, add a second prompt that builds on the generated files:

```markdown
# Pong Scoring

Add scoring:
- Display score above canvas
- Increment when ball passes paddle, reset ball

## Context: index.html, game.js
## Output: game.js
```

Re-run `npx vibec`.
Use `VIBEC_DEBUG=1 npx vibec` for detailed logs.
- API Key Missing: Set `VIBEC_API_KEY`
- No Output: Verify `## Output:` in prompts
- Command Not Found: Use `npx vibec` or install globally
MIT