A lite development environment for BOSH that uses Warden containers inside a Vagrant VM.
This README also demonstrates how to deploy Cloud Foundry into bosh-lite.
For all use cases, first prepare this project with bundler & librarian-chef.
- Install Vagrant. Known to work with:

  ```
  $ vagrant -v
  Vagrant 1.3.1
  ```

  Note: Vagrant 1.3.2+ on OS X with VirtualBox may encounter an issue with Private Networking. The work-around is to downgrade to Vagrant 1.3.1 until Vagrant 1.3.4 is released.
- Install the Vagrant omnibus plugin:

  ```
  vagrant plugin install vagrant-omnibus
  ```

- Install Ruby + RubyGems + Bundler (see the sketch after this list).

- Run Bundler from the base directory of this repository:

  ```
  bundle
  ```

- Run Librarian:

  ```
  librarian-chef install
  ```
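If you don't already have Ruby and Bundler available, the sketch below shows one way to get through the prep steps. It is only an illustration: this README doesn't prescribe a Ruby install method, so the package-manager/version-manager comment and exact gem commands are assumptions.

```
# Install Ruby + RubyGems first, e.g. via your OS package manager or a Ruby
# version manager of your choice (assumption; not prescribed by this README).
ruby -v                 # confirm Ruby is available
gem install bundler     # Bundler itself ships as a gem

# From the base directory of this repository:
bundle                  # install the Gemfile dependencies
librarian-chef install  # fetch the Chef cookbooks used when provisioning the VM
```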
Below are installation processes for different Vagrant providers.
- VMware Fusion
- VirtualBox
- AWS
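Whichever provider you pick, it can help to confirm which provider plugins are already installed; `vagrant plugin list` is a standard Vagrant command for that.

```
# Shows installed Vagrant plugins, so you can check whether vagrant-omnibus and
# the provider plugin you need (vagrant-vmware-fusion or vagrant-aws) are present.
vagrant plugin list
```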
### Use VMware Fusion Provider

Known to work with Fusion version 5.0.3.

- Install the Vagrant VMware Fusion plugin and its license:

  ```
  vagrant plugin install vagrant-vmware-fusion
  vagrant plugin license vagrant-vmware-fusion license.lic
  ```
- Start Vagrant from the base directory of this repository (which uses the Vagrantfile):

  ```
  vagrant up --provider vmware_fusion
  ```

- Target the BOSH Director and log in with admin/admin:

  ```
  $ bosh target 192.168.50.4
  Target set to `Bosh Lite Director'
  Your username: admin
  Enter password: admin
  Logged in as `admin'
  ```

- Add a set of route entries to your local route table to enable direct Warden container access (your sudo password may be required):

  ```
  scripts/add-route
  ```
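If you'd rather see what the script is doing (or add the route by hand), the rough shape is a static route that sends the containers' subnet through the Vagrant VM. The subnet below is an assumption based on the 10.244.0.x addresses used later in this README; check `scripts/add-route` itself for the authoritative values.

```
# Hypothetical manual equivalent of scripts/add-route on OS X; the exact
# subnet/netmask is an assumption, so prefer running the script itself.
sudo route add -net 10.244.0.0/19 192.168.50.4

# Rough Linux equivalent (also an assumption):
# sudo ip route add 10.244.0.0/19 via 192.168.50.4
```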
### Use VirtualBox Provider

- Start Vagrant from the base directory of this repository (which uses the Vagrantfile):

  ```
  vagrant up
  ```

- Target the BOSH Director and log in with admin/admin:

  ```
  $ bosh target 192.168.50.4
  Target set to `Bosh Lite Director'
  Your username: admin
  Enter password: admin
  Logged in as `admin'
  ```

- Add a set of route entries to your local route table to enable direct Warden container access (your sudo password may be required):

  ```
  scripts/add-route
  ```
### Use AWS Provider

- Install the Vagrant AWS provider:

  ```
  vagrant plugin install vagrant-aws
  ```

- Add the dummy AWS box:

  ```
  vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
  ```

- Rename `Vagrantfile.aws` to `Vagrantfile`.

- Set environment variables called `BOSH_AWS_ACCESS_KEY_ID` and `BOSH_AWS_SECRET_ACCESS_KEY` with the appropriate values (see the sketch after this list). If you've followed along with other documentation, such as [these steps to deploy Cloud Foundry on AWS](http://docs.cloudfoundry.com/docs/running/deploying-cf/ec2/index.html#deployment-env-prep), you may simply need to source your `bosh_environment` file.

- Make sure the EC2 security group you are using in the `Vagrantfile` exists and allows tcp/25555.

- Run Vagrant from the base directory of this repository (which uses the Vagrantfile):

  ```
  vagrant up --provider=aws
  ```

- Target the BOSH Director and log in with admin/admin:

  ```
  $ bosh target 192.168.50.4
  Target set to `Bosh Lite Director'
  Your username: admin
  Enter password: admin
  Logged in as `admin'
  ```
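If you aren't sourcing an existing `bosh_environment` file, setting the two variables mentioned above is just a pair of exports; the values shown are placeholders, not real credentials.

```
# Placeholders only; substitute your own AWS credentials.
export BOSH_AWS_ACCESS_KEY_ID=YOUR-ACCESS-KEY-ID
export BOSH_AWS_SECRET_ACCESS_KEY=YOUR-SECRET-ACCESS-KEY
```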
- If you want to start over again, you can use `vagrant destroy` from the base directory of this project to remove the VM.
- To start with a new VM, just execute the appropriate `vagrant up` command, optionally with the provider option as shown in the earlier sections.
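Put together, a reset might look like this; the provider flag is whichever one you used originally (vmware_fusion is shown only as an example).

```
# Remove the existing VM...
vagrant destroy
# ...then bring up a fresh one with the same provider as before (example flag).
vagrant up --provider vmware_fusion
```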
bosh-lite uses the Warden CPI, so we need the Warden stemcell, which provides the root file system for all Linux containers created by the Warden CPI.

- Download the latest Warden stemcell:

  ```
  wget http://bosh-jenkins-gems-warden.s3.amazonaws.com/stemcells/latest-bosh-stemcell-warden.tgz
  ```

- Upload the stemcell:

  ```
  bosh upload stemcell latest-bosh-stemcell-warden.tgz
  ```
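To confirm the upload worked, you can ask the Director which stemcells it knows about; the Warden stemcell should appear in the listing.

```
# Lists stemcells uploaded to the targeted Director.
bosh stemcells
```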
- Generate a CF deployment manifest:

  ```
  cp manifests/cf-stub.yml manifests/[your-name-manifest].yml
  # replace "director_uuid: PLACEHOLDER-DIRECTOR-UUID" in [your-name-manifest].yml
  # with the UUID from "bosh status"
  bosh deployment manifests/[your-name-manifest].yml
  bosh diff [cf-release]/templates/cf-aws-template.yml.erb
  ./scripts/transform.rb -f manifests/[your-name-manifest].yml
  ```

  or simply:

  ```
  ./scripts/make_manifest
  ```
- Create a CF release.

- Deploy! (See the sketch of these last two steps just after this list.)
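Those last two steps aren't spelled out here. Based on the commands used in the Spiff section below, a typical sequence might look like the following; the `~/cf-release` checkout location is an assumption, and the manifest path is the one generated above.

```
# Build and upload a dev release from a local cf-release checkout
# (the ~/cf-release path is an assumption).
cd ~/cf-release
bosh create release
bosh upload release

# Back in the bosh-lite directory, select your manifest and deploy.
cd -
bosh deployment manifests/[your-name-manifest].yml
bosh deploy
```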
Spiff is how Cloud Foundry is deployed in production, and it can be used for bosh-lite installs too.

- Create a deployment stub like the one below. (In the rest of this section, we refer to this as being at `~/deployment-stub.yml`.)

  ```
  name: cf-warden
  director_uuid: [your director UUID; you can use 'bosh status' to get it and look for the UUID line]
  releases:
  - name: cf
    version: latest
  ```
- Generate a deployment manifest based on your stub. The command will look something like the block below.

  NOTE: This uses spiff; install that first. See https://github.com/vito/spiff.

  NOTE: This assumes you've checked out http://github.com/cloudfoundry/cf-release to `~/cf-release`. Do that first, too.

  ```
  ~/cf-release/generate_deployment_manifest warden ~/deployment-stub.yml > ~/deployment.yml
  ```

- Target 192.168.50.4 with `bosh target` and run bosh as normal, passing your generated manifest:

  ```
  bosh create release
  bosh upload release
  bosh deployment ~/deployment.yml
  bosh deploy
  ```
- Run the yeti tests against your new deployment to make sure it's working correctly.

  a. Set the environment variables `VCAP_BVT_API_ENDPOINT`, `VCAP_BVT_ADMIN_USER`, and `VCAP_BVT_ADMIN_USER_PASSWD`. They might look like this:

  ```
  # This is the router IP in `bosh vms` (not the cc)
  export VCAP_BVT_API_ENDPOINT=http://api.10.244.0.22.xip.io
  export VCAP_BVT_ADMIN_USER=admin
  export VCAP_BVT_ADMIN_USER_PASSWD=admin
  ```

  b. Run yeti as normal from `cf-release/src/tests`, e.g.:

  ```
  bundle
  bundle exec rake prepare   # create initial users/assets
  bundle exec rspec          # run!
  ```
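To find the router IP referenced in the comment above (and to confirm the deployment came up), list the deployed VMs; the job names in the SSH section below come from the same listing.

```
# Lists the jobs/VMs in the current deployment along with their IPs;
# use the router job's IP to build VCAP_BVT_API_ENDPOINT.
bosh vms
```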
To use `bosh ssh` to SSH into running jobs of a deployment, you need to pass various `bosh ssh` flags so that the Vagrant VM is used as the gateway. To make this simple, add the following alias to your environment:

```
alias ssh_boshlite='bosh ssh --gateway_host 192.168.50.4 --gateway_user vagrant --gateway_identity_file $HOME/.vagrant.d/insecure_private_key'
```

You can now SSH into any VM with `ssh_boshlite` in the same way you would run `bosh ssh`:
```
$ ssh_boshlite
1. nats/0
2. syslog_aggregator/0
3. postgres/0
4. uaa/0
5. login/0
6. cloud_controller/0
7. loggregator/0
8. loggregator-router/0
9. health_manager/0
10. dea_next/0
11. router/0
Choose an instance:
```