A collection of working examples for remote debugging applications running in Kubernetes pods from local VS Code. This repository demonstrates debugging workflows for multiple languages and frameworks using kubectl commands to connect your local development environment to remote processes.
This repository provides production-ready examples of remote debugging in Kubernetes, focusing on:
- Local VS Code to Remote Pod: Debug processes running in K8s from your local machine
- Per-Developer Namespaces: Isolated debugging environments for team collaboration
- Multiple Languages: C#, F#, Node.js, Python, Go, and more
- Consistent Tooling: Unified `manage.sh` scripts across all examples
- Real-World Integration: Works with nginx-dev-gateway for microservices debugging
- AI-Assisted Setup: Reference examples designed to work with coding assistants like Claude Code
Follow the examples to understand how remote debugging works for your language/framework.
Use these examples as a reference when setting up remote debugging for your own applications. Each example includes:
- Complete working code
- Docker configuration with debugger setup
- Kubernetes manifests
- VS Code configuration
- Management scripts
- Troubleshooting guides
This repository is structured to work seamlessly with coding assistants like Claude Code. You can:
Example prompt:
"Using the k8s-vscode-remote-debug repository as a reference, add remote debugging support to my Node.js application. I want to debug it running in Kubernetes from local VS Code."
The AI can:
- Analyze your existing application
- Reference the appropriate example (e.g., `examples/nodejs-express/`)
- Generate a Dockerfile with debugger setup
- Create VS Code launch configuration
- Add Kubernetes manifests with debug ports
- Provide deployment and debugging instructions
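As a sense of the output, the attach configuration generated for a Node.js app might look like the sketch below. Port 9229 and `/app` as the remote root are conventional defaults, not necessarily what `examples/nodejs-express/` ships; check that example's `.vscode/launch.json` for the real values.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      // Attach the local VS Code debugger to an inspector port
      // forwarded from the pod (e.g. kubectl port-forward ... 9229:9229)
      "name": "Attach to K8s Pod (Node.js)",
      "type": "node",
      "request": "attach",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/app",
      "restart": true
    }
  ]
}
```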
Why this works:
- ✅ Complete examples - Every component needed for debugging
- ✅ Consistent patterns - Similar structure across languages
- ✅ Well-documented - Extensive comments and READMEs
- ✅ Battle-tested - All examples verified working
- ✅ Troubleshooting included - Common issues documented
Supported languages for AI-assisted setup:
- C# / ASP.NET Core
- F# / Giraffe
- Node.js / Express
- Python / FastAPI
- Go / Gin
- Java / Spring Boot
- Rust / Actix-web (tracing-based debugging)
- Elixir / Phoenix (Remote - Kubernetes in-pod debugging)
Each example demonstrates:
- ✅ Setting and hitting breakpoints
- ✅ Variable inspection and watches
- ✅ Step debugging (step over, into, out)
- ✅ Call stack navigation
- ✅ Conditional breakpoints
- ✅ Expression evaluation
```bash
# Set your developer namespace (removes the need for the -n flag)
export NAMESPACE=dev-yourname
export REGISTRY=your-registry.azurecr.io  # Or docker.io/username, etc.

# Build and push the example image (C# Web API)
cd examples/csharp-dotnet8-webapi
./manage.sh build
./manage.sh push

# Deploy to your namespace
./manage.sh deploy

# Verify the pod is ready for debugging
./manage.sh debug

# Open VS Code and start debugging (F5)
# Set breakpoints and attach to the remote pod

# Note: the -n flag is required if the NAMESPACE env var is not set
./manage.sh -n dev-yourname deploy
```

Repository layout:

```
k8s-vscode-remote-debug/
├── examples/
│   ├── csharp-dotnet8-webapi/    # C# .NET 8 Web API
│   ├── fsharp-giraffe-dotnet8/   # F# Giraffe Web Framework
│   ├── nodejs-express/           # Node.js with Express
│   ├── python-fastapi/           # Python with FastAPI
│   ├── go-gin/                   # Go with Gin
│   ├── java-spring-boot/         # Java Spring Boot 3.2
│   ├── rust-actix/               # Rust Actix-web 4.11
│   ├── elixir-phoenix/           # Elixir Phoenix
│   └── ...
├── shared/
│   ├── scripts/                  # Common bash functions
│   └── k8s-templates/            # Reusable K8s manifests
├── docs/
│   ├── planning.md               # Project planning
│   ├── development-phases.md     # Development roadmap
│   └── ...
└── manage.sh                     # Root management script
```
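Under the hood, `manage.sh debug` amounts to exposing the pod's debug port locally. For the examples whose debugger listens on TCP (Node's inspector, debugpy, Delve, JDWP), the hand-rolled equivalent is a `kubectl port-forward`; this sketch assembles the command for the Python example. Port 5678 is debugpy's conventional default and the resource names are assumptions — check the example's manifests for the real ones.

```shell
NAMESPACE="${NAMESPACE:-dev-yourname}"
EXAMPLE="python-fastapi"   # assumed deployment name
DEBUG_PORT=5678            # debugpy's conventional default port

# Build the port-forward command; run it in a separate terminal,
# then attach VS Code to localhost:${DEBUG_PORT}.
PF="kubectl -n ${NAMESPACE} port-forward deploy/${EXAMPLE} ${DEBUG_PORT}:${DEBUG_PORT}"
echo "${PF}"
```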
| Language | Framework | Debugger | Status | Notes |
|---|---|---|---|---|
| C# | .NET 8 Web API | vsdbg | ✅ Complete | Full breakpoint support |
| F# | Giraffe .NET 8 | vsdbg | ✅ Complete | Full breakpoint support |
| Node.js | Express | Inspector Protocol | ✅ Complete | Full breakpoint support |
| Python | FastAPI | debugpy | ✅ Complete | Full breakpoint support |
| Go | Gin | Delve | ✅ Complete | Full breakpoint support |
| Java | Spring Boot 3.2 | JDWP | ✅ Complete | Full breakpoint support |
| Rust | Actix-web 4.11 | Tracing | ✅ Complete | Uses structured logging (LLDB breakpoints don't work with async) |
| Elixir | Phoenix | ElixirLS (Remote - Kubernetes) | ✅ Complete | In-pod debugging via Remote - Kubernetes extension |
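For a sense of how the TCP-based debuggers in the table get exposed, Java's JDWP agent can be switched on through an environment variable in the Deployment spec. The sketch below uses the standard JDWP agent flags and the `JAVA_TOOL_OPTIONS` mechanism any JVM honors; the repo's `java-spring-boot` manifests may wire it differently.

```yaml
# Enable the JDWP agent on port 5005; suspend=n lets the app start
# without waiting for a debugger to attach.
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
```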
The Rust example uses a tracing-based approach with structured logging instead of traditional breakpoint debugging. After extensive testing, we found that LLDB breakpoints do not work reliably with async Rust code running on Tokio worker threads, even though the debugger infrastructure (attachment, breakpoint resolution) works correctly.
What we tried:
- Regex breakpoints for async closures
- File-based breakpoint resolution
- Single-threaded Tokio runtime
- Manual thread selection
Result: All attempts failed to trigger breakpoints in async handlers.
Solution: The `tracing` crate provides excellent debugging capabilities for async code through structured logging with the `#[instrument]` macro.
If you find a solution: If you discover a way to make LLDB breakpoints work reliably with async Rust in Kubernetes, please open an issue or PR! The LLDB setup is documented in the appendix of the Rust example README for future reference.
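Since the Rust example relies on `tracing` rather than breakpoints, the practical debugging knob is the log filter. Assuming the example initializes `tracing-subscriber` with an `EnvFilter` (which reads `RUST_LOG`), verbosity can be tuned per module from the Deployment spec; the filter string here is illustrative.

```yaml
# RUST_LOG is read by tracing-subscriber's EnvFilter:
# debug-level spans/events for the app, info for the framework.
env:
  - name: RUST_LOG
    value: "debug,actix_web=info"
```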
The Elixir example uses the Remote - Kubernetes extension approach, running ElixirLS directly inside the pod rather than using distributed Erlang remote attach.
Why this approach:
- Erlang's `:int` module (used for breakpoints) can only interpret modules before they're loaded
- Traditional remote attach fails because Phoenix loads all modules at startup
- Running ElixirLS in-pod avoids distributed Erlang complexity and timing issues
Key breakthrough:
- Using `^Elixir\.ModuleName$` pattern matching to limit module interpretation
- Prevents ElixirLS from interpreting 300+ framework modules (which causes OOM kills)
- Only target modules are interpreted, keeping memory usage to ~1Gi instead of OOM
Features:
- Full VS Code integration with breakpoints, variables, stepping
- Memory-optimized configuration embedded in Docker image
- Automatic setup via `.vscode-remote/launch.json`
- Kubernetes cluster (kind, minikube, or cloud provider)
- kubectl configured and connected to your cluster
- Docker for building images
- VS Code with language-specific extensions
- Namespace access on your Kubernetes cluster
This repository promotes a per-developer namespace approach:
```bash
# Each developer has their own namespace
export NAMESPACE=dev-alice

# Deploy your services to your namespace (no -n flag needed!)
./manage.sh deploy-all

# Debug independently without affecting others
./manage.sh debug csharp

# Or use the -n flag to override the env var
./manage.sh -n dev-bob status
```

Benefits:
- Isolation: Debug without interfering with teammates
- Flexibility: Run different versions/configurations
- Safety: Breaking changes stay in your namespace
- Realistic: Debug in actual K8s environment
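A per-developer namespace is just an ordinary Namespace object; a minimal sketch of what `./manage.sh create-ns` presumably applies (the label is illustrative, not from the repo):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice
  labels:
    # Illustrative label for finding/cleaning up developer namespaces
    purpose: dev-debug
```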
Works seamlessly with nginx-dev-gateway to demonstrate microservices debugging:
- Route traffic through nginx gateway to debug-enabled services
- Debug request flow across multiple services
- Switch individual services between debug/stable versions
- Realistic microservices debugging scenarios
See docs/nginx-gateway-integration.md for details (coming soon).
```
./manage.sh [OPTIONS] COMMAND [ARGS]

OPTIONS:
  -n, --namespace NAMESPACE   Kubernetes namespace (required if NAMESPACE env var not set)
  -r, --registry REGISTRY     Docker registry URL (required if REGISTRY env var not set)
  -t, --tag TAG               Docker image tag (default: latest, or IMAGE_TAG env var)
  -d, --debug                 Enable debug output
  -h, --help                  Show help

COMMANDS:
  create-ns          Create developer namespace
  deploy EXAMPLE     Deploy a specific example
  deploy-all         Deploy all examples
  debug EXAMPLE      Set up debugging (port-forward)
  logs EXAMPLE       Show logs
  status             Show namespace status
  delete             Delete namespace
  list-examples      List available examples

EXAMPLES:
  # Using environment variables (recommended)
  export NAMESPACE=dev-yourname
  ./manage.sh deploy csharp

  # Using flags (overrides env vars)
  ./manage.sh -n dev-yourname deploy csharp
```

Each example also ships its own `manage.sh`, run from the example directory:

```
cd examples/csharp-dotnet8-webapi
./manage.sh [OPTIONS] COMMAND

OPTIONS:
  -n, --namespace NAMESPACE   Kubernetes namespace (required if NAMESPACE env var not set)
  -r, --registry REGISTRY     Docker registry URL (required if REGISTRY env var not set)
  -t, --tag TAG               Docker image tag (default: latest, or IMAGE_TAG env var)

COMMANDS:
  build                Build Docker image
  push                 Push image to registry
  deploy               Deploy to namespace
  debug                Verify pod is ready for debugging
  port-forward [PORT]  Port-forward the app port
  logs [--follow]      Show logs
  delete               Delete from namespace
  shell                Exec into pod
  restart              Restart deployment
  status               Show deployment status

EXAMPLES:
  # Build and deploy with environment variable
  export NAMESPACE=dev-yourname
  ./manage.sh build
  ./manage.sh deploy

  # Or use flags
  ./manage.sh -n dev-yourname deploy
```

- Planning Document - Project architecture and strategy
- Development Phases - Detailed roadmap
- Per-Developer Namespaces - Team workflow guide (coming soon)
- Debugging Setup Guide - General debugging setup (coming soon)
Each example has its own README with:
- Language/framework-specific setup
- Debugging walkthrough
- Troubleshooting guide
- VS Code configuration details
Issue: When debugging with breakpoints, your application stops responding to HTTP requests. If you have a liveness probe configured, Kubernetes will kill the pod after the timeout period, causing the debugger to disconnect with exit code 137.
Symptoms:
```
Application is shutting down...
Error from pipe program 'kubectl': command terminated with exit code 137
```
Solution: All examples in this repository have liveness probes disabled for debug deployments:
```yaml
# Liveness probe disabled for debugging to prevent pod restarts
# when paused at breakpoints
# livenessProbe:
#   httpGet:
#     path: /health
#     port: 8080
```

Why this works:
- Readiness probe still runs → pod marked "Not Ready" when paused
- Pod stays alive → debugger connection maintained
- No traffic routed to paused pod → requests not affected
- For production deployments, re-enable (uncomment) the liveness probe
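A readiness probe kept in place while liveness is disabled might look like the sketch below; the path and port mirror the commented-out liveness probe, while the timing values are illustrative rather than taken from the repo's manifests.

```yaml
# Keep readiness so a pod paused at a breakpoint is marked "Not Ready"
# (traffic stops) without being killed.
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 5
  failureThreshold: 2
```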
- Use per-developer namespaces - prevents debug sessions from interfering with teammates
- Set the NAMESPACE env var - avoids typing the `-n` flag repeatedly
- Keep debug builds separate - use image tags to distinguish debug/production builds
- Monitor resource usage - debug builds use more memory, adjust limits if needed
This is a reference repository intended to demonstrate best practices for K8s remote debugging. We recommend forking this repository for your own customization.
Contributions are welcome for:
- 🐛 Bug fixes
- 📖 Documentation improvements
- ✨ New language examples (evaluated case-by-case)
See CONTRIBUTING.md for details.
MIT License - Feel free to use and adapt for your projects.
- Inspired by real-world debugging challenges in Kubernetes environments
- Integrates with nginx-dev-gateway for microservices patterns
- Built with input from developers debugging across multiple languages and frameworks
🚀 Active Development - Core infrastructure complete, 8 languages implemented.
Completed:
- ✅ Phase 0: Repository foundation
- ✅ Phase 1: Shared infrastructure and management scripts
- ✅ Phase 2: C# .NET 8 Web API example
- ✅ Phase 3: F# Giraffe .NET 8 example
- ✅ Phase 4: Node.js Express example
- ✅ Phase 5: Python FastAPI example
- ✅ Phase 6: Go Gin example
- ✅ Phase 7.1: Java Spring Boot example
- ✅ Phase 7.2: Rust Actix-web example (tracing-based approach)
- ✅ Phase 7.3: Elixir Phoenix example (Remote - Kubernetes approach)
See Development Phases for detailed roadmap.
Questions or Issues? Open an issue or check the docs folder.