bosh-lite

A lite development env for BOSH using Warden from within Vagrant.

This readme also demonstrates how to deploy Cloud Foundry into bosh-lite. See Getting Started for a full example with detailed step-by-step commands.

Installation

For all use cases, first prepare this project with Bundler.

  1. Install vagrant

    Known to work for version:

    $ vagrant -v
    Vagrant 1.3.5
    

    Note: for OSX and VirtualBox you are required to use Vagrant 1.3.4+

  2. Install Ruby + RubyGems + Bundler

  3. Run Bundler from the base directory of this repository

    bundle
    
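If you want to sanity-check the toolchain before continuing, these standard version flags print what is installed (the numbers noted above are simply the ones known to work, not strict requirements):

    vagrant -v
    ruby -v
    bundle -v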

Below are installation processes for different Vagrant providers.

  • VMware Fusion
  • VirtualBox
  • AWS

USE VMware Fusion Provider

Known to work with Fusion version 6.0.2 and the Vagrant plugin vagrant-vmware-fusion version 2.1.0.

  1. Install the Vagrant VMware Fusion plugin + license

    vagrant plugin install vagrant-vmware-fusion
    vagrant plugin license vagrant-vmware-fusion license.lic
    
  2. Start vagrant from the base directory of this repository (which uses the Vagrantfile)

    vagrant up --provider vmware_fusion
    
  3. Bosh target (login with admin/admin)

    bosh target 192.168.50.4
    Target set to `Bosh Lite Director'
    Your username: admin
    Enter password: admin
    Logged in as `admin'
    
  4. Add a set of route entries to your local route table to enable direct warden container access. Your sudo password may be required.

    scripts/add-route
    
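For reference, add-route installs a static route that sends traffic for the Warden container network through the Vagrant VM. A minimal sketch of the idea, assuming the default director IP 192.168.50.4 and a container subnet of 10.244.0.0/19 (the HAProxy at 10.244.0.34 sits in this range); the script knows the exact subnets and your platform, so prefer running it directly:

    # illustration only -- the subnet mask here is an assumption, see scripts/add-route
    sudo route add -net 10.244.0.0/19 192.168.50.4     # OS X
    sudo ip route add 10.244.0.0/19 via 192.168.50.4   # Linux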

USE VirtualBox Provider

  1. Start vagrant from the base directory of this repository (which uses the Vagrantfile)

    vagrant up
    
  2. Bosh target (login with admin/admin)

    bosh target 192.168.50.4
    Target set to `Bosh Lite Director'
    Your username: admin
    Enter password: admin
    Logged in as `admin'
    
  3. Add a set of route entries to your local route table to enable direct warden container access every time your networking gets reset (eg. reboot or connect to a different network). Your sudo password may be required.

    scripts/add-route
    
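With the route in place, a quick way to confirm the director is reachable is bosh status, which prints the Bosh Lite Director's name, UUID, and version:

    bosh status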

USE AWS Provider

  1. Install Vagrant AWS provider

    vagrant plugin install vagrant-aws
    

    Known to work for version: vagrant-aws 0.4.0

  2. Add dummy AWS box

    vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
    
  3. Set environment variables called BOSH_AWS_ACCESS_KEY_ID and BOSH_AWS_SECRET_ACCESS_KEY with the appropriate values. If you've followed along with other documentation such as these steps to deploy Cloud Foundry on AWS, you may simply need to source your bosh_environment file.

  4. Make sure the EC2 security group you are using in the Vagrantfile exists and allows inbound TCP traffic on ports 25555 (for the BOSH director), 22 (for SSH), 80/443 (for Cloud Controller), and 4443 (for Loggregator).

  5. Run Vagrant from the aws folder:

    cd aws
    vagrant up --provider=aws
    cd ..
    
  6. Bosh target (login with admin/admin)

    bosh target <IP of the box>
    Target set to `Bosh Lite Director'
    Your username: admin
    Enter password: admin
    Logged in as `admin'
    
  7. Edit manifests/cf-stub-spiff.yml to include a 'domain' key under 'properties' that corresponds to a domain you've set up for this Cloud Foundry instance; or, if you want to use xip.io, it can be the instance's public IP followed by .xip.io.

  8. Direct future traffic received on the instance to another IP (the HAProxy):

    sudo iptables -t nat -A PREROUTING -p tcp -d <internal IP of instance> --dport 80 -j DNAT --to 10.244.0.34:80
    sudo iptables -t nat -A PREROUTING -p tcp -d <internal IP of instance> --dport 443 -j DNAT --to 10.244.0.34:443
    sudo iptables -t nat -A PREROUTING -p tcp -d <internal IP of instance> --dport 4443 -j DNAT --to 10.244.0.34:4443

These rules are cleared on restart. They can be saved and configured to be reloaded on startup if so desired (provided the internal IP remains the same).
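
One way to persist them, sketched under the assumption of an Ubuntu-style instance (the file path here is just a convention; adapt to taste):

    # dump the current NAT rules to a file...
    sudo sh -c 'iptables-save > /etc/iptables.rules'
    # ...and restore them at boot, for example from /etc/rc.local
    sudo sh -c 'iptables-restore < /etc/iptables.rules'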

Manage your local boxes

We regularly publish pre-built Vagrant boxes on Amazon S3. It is recommended to use the latest boxes, as they are generally smaller, faster, and less problematic.

Download latest boxes

Just get the latest copy of the Vagrantfile from this repo and run vagrant up.
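
Assuming you cloned this repository with git, that amounts to roughly:

    git pull                  # picks up a Vagrantfile referencing the newest box
    vagrant destroy --force   # drop the VM built from the old box
    vagrant up                # downloads and boots the new box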

Delete old boxes

Free some disk space by deleting the old boxes.

$ vagrant box list
boshlite-ubuntu1204-build55 (virtualbox)
boshlite-ubuntu1204-build55 (vmware_desktop)
boshlite-ubuntu1204-build74 (virtualbox)
boshlite-ubuntu1204-build83 (virtualbox)

$ vagrant box remove boshlite-ubuntu1204-build55 virtualbox
Removing box 'boshlite-ubuntu1204-build55' with provider 'virtualbox'...

Troubleshooting

  1. If you want to start over again, you can use vagrant destroy from the base directory of this project to remove the VM.
  2. To start with a new VM, just execute the appropriate vagrant up command, optionally with the provider option as shown in the earlier sections.
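
For the VirtualBox provider a full reset looks roughly like this (add the provider flag from the earlier sections if you use Fusion or AWS):

    vagrant destroy --force
    vagrant up
    bosh target 192.168.50.4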

Upload Warden stemcell

bosh-lite uses the Warden CPI, so we need to use the Warden stemcell, which provides the root file system for all Linux containers created by the Warden CPI.

  1. Upload the latest warden stemcell

    bosh upload stemcell http://bosh-jenkins-gems-warden.s3.amazonaws.com/stemcells/latest-bosh-stemcell-warden.tgz
    
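To confirm the upload, bosh stemcells lists the stemcells the director knows about:

    bosh stemcells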

Deploy Cloud Foundry

  1. Use the make_manifest_spiff script to create a cf manifest. This step assumes you have cf-release checked out to ~/workspace and that you have spiff installed.

    make_manifest_spiff will target your bosh-lite director, find the UUID, create a manifest stub, and run spiff to generate a manifest at manifests/cf-manifest.yml. (If this fails, try updating spiff.)

    cd ~/workspace/bosh-lite
    ./scripts/make_manifest_spiff
    
  2. Bosh target 192.168.50.4 and run bosh as normal, passing your generated manifest:

    cd ~/workspace/cf-release
    # update all the submodules and nested submodules
    ./update
    bosh create release
    # enter the name from the deployment-stub.yml file  - for example, 'cf'
    bosh upload release
    bosh deploy
    
  3. Run the yeti tests against your new deployment to make sure it's working correctly.

    a. Set the environment variables VCAP_BVT_API_ENDPOINT, VCAP_BVT_ADMIN_USER, VCAP_BVT_ADMIN_USER_PASSWD

    They might look like this:

    # This is the HA Proxy ip in bosh vms (not the cc)
    export VCAP_BVT_API_ENDPOINT=http://api.10.244.0.34.xip.io
    export VCAP_BVT_ADMIN_USER=admin
    export VCAP_BVT_ADMIN_USER_PASSWD=admin
    

    b. Run yeti as normal from cf-release/src/tests, e.g. (make sure you are logged in via cf before running these commands):

    rake config:clear_bvt # clear the BVT from previous runs
    bundle; bundle exec rake prepare; # create initial users/assets
    bundle exec rspec # run!
    
    ./warden_rspec # Run tests in parallel
    
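Once bosh deploy completes, bosh vms lists the deployment's jobs with their IPs; this is where the HAProxy address used for VCAP_BVT_API_ENDPOINT above (10.244.0.34) comes from:

    bosh vms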

SSH into deployment jobs

To use bosh ssh to SSH into the running jobs of a deployment, first run the following command:

scripts/add-route

Now you can SSH into any VM with bosh ssh:

$ bosh ssh
1. nats/0
2. syslog_aggregator/0
3. postgres/0
4. uaa/0
5. login/0
6. cloud_controller/0
7. loggregator/0
8. loggregator-router/0
9. health_manager/0
10. dea_next/0
11. router/0
Choose an instance:
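
The Ruby BOSH CLI should also accept the job name and index directly instead of the menu, along the lines of the following (the exact argument form may vary between CLI versions):

    bosh ssh nats 0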

Restore your deployment

The Warden containers will be lost after a VM reboot, but you can restore your deployment with bosh cck.

$ bosh cck

Choose option 2 to recreate each missing VM:

Problem 1 of 13: VM with cloud ID `vm-74d58924-7710-4094-86f2-2f38ff47bb9a' missing.
  1. Ignore problem
  2. Recreate VM using last known apply spec
  3. Delete VM reference (DANGEROUS!)
Please choose a resolution [1 - 3]: 2
...

Type yes to confirm at the end:

Apply resolutions? (type 'yes' to continue): yes

Applying problem resolutions
  missing_vm 212: Recreate VM using last known apply spec (00:00:13)
  missing_vm 215: Recreate VM using last known apply spec (00:00:08)
...
Done                    13/13 00:03:48