vmware-archive/pcf-pipelines

Suggestion: a new way pcf-pipelines could work.


I think pcf-pipelines is getting far more complicated than it needs to be. I just want to share one idea for how it could work more efficiently.

The workflow I suggest is the following:

  1. An operator manually installs PCF and configures all the tiles they need.
  2. The operator flies pcf-pipelines. The only configuration they need to provide is Ops Manager credentials, Concourse credentials, and optionally CredHub credentials if Concourse is configured to use CredHub.
  3. pcf-pipelines runs and pulls all configuration from Ops Manager.
  4. For each installed tile, including the BOSH director, pcf-pipelines automatically creates a new, very simple pipeline consisting of three jobs: download-tile, configure-tile, and apply-changes.
  5. download-tile and apply-changes are the same for all pipelines. The product name and product version can be taken from the Ops Manager API.
  6. To generate the configure-tile job for a particular tile, pcf-pipelines first uses the Ops Manager API to pull that tile's properties; the properties file can then easily be converted to a template, similar to this one: we just replace all the values in the properties file with placeholders using some templating language (see the sketch after this list). I have done this many times when writing pipelines that install individual tiles.
  7. We can also generate a params.yml file and fill it with the values extracted from the tile properties.
  8. If a parameter is a secret, instead of using the {{some-value}} syntax in the generated pipeline we can store the secret value in CredHub and use the ((some-value)) notation. This functionality could be optional.
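
To make steps 6–8 concrete, here is a minimal sketch in Python, assuming the Ops Manager v0 API (`GET /api/v0/staged/products/{guid}/properties`, whose entries carry `configurable`, `credential`, and `value` fields) and a UAA token obtained beforehand. The URL, token handling, function name, and placeholder scheme are illustrative only, not part of pcf-pipelines:

```python
import requests
import yaml

OPSMAN_URL = "https://opsman.example.com"        # assumption: your Ops Manager
HEADERS = {"Authorization": "Bearer UAA-TOKEN"}  # assumption: token fetched from UAA

def properties_to_template(product_guid):
    """Turn a staged tile's properties into a pipeline template plus params."""
    resp = requests.get(
        f"{OPSMAN_URL}/api/v0/staged/products/{product_guid}/properties",
        headers=HEADERS, verify=False)  # Ops Manager often has a self-signed cert
    resp.raise_for_status()
    properties = resp.json()["properties"]

    template, params = {}, {}
    for name, prop in properties.items():
        if not prop.get("configurable"):
            continue  # skip properties the operator cannot set anyway
        key = name.lstrip(".").replace(".", "-")
        if prop.get("credential"):
            # step 8: secrets are stored in CredHub and referenced as ((key))
            template[name] = {"value": f"(({key}))"}
        else:
            # step 6: plain values become {{key}} placeholders ...
            template[name] = {"value": f"{{{{{key}}}}}"}
            # ... and step 7: the current value seeds params.yml
            params[key] = prop.get("value")
    return template, params

# hypothetical product guid; real ones come from GET /api/v0/staged/products
template, params = properties_to_template("cf-0123456789abcdef0")
with open("cf-params.yml", "w") as f:
    yaml.safe_dump(params, f)
```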

That's it: after running pcf-pipelines, Concourse will be automatically filled with a set of pipelines. From this point we can delete our manual installation, and the whole setup becomes fully automated and fully reproducible.
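
As a rough illustration of what "filled with a set of pipelines" could look like, here is a hedged sketch of one generated three-job pipeline (steps 4–5) and the standard fly call that uploads it. The task file paths, product name/version, and the Concourse target name are assumptions, and the task definitions themselves are elided:

```python
import subprocess
import yaml

def generate_pipeline(product_name, product_version):
    """Emit the three-job pipeline described in steps 4-5."""
    return {
        "jobs": [
            # download-tile is identical for every tile; name and version
            # come from the Ops Manager API
            {"name": "download-tile",
             "plan": [{"task": "download", "file": "tasks/download-tile.yml",
                       "params": {"PRODUCT_NAME": product_name,
                                  "PRODUCT_VERSION": product_version}}]},
            # configure-tile is the only job generated per tile, from its properties
            {"name": "configure-tile",
             "plan": [{"task": "configure", "file": "tasks/configure-tile.yml"}]},
            # apply-changes is identical for every tile
            {"name": "apply-changes",
             "plan": [{"task": "apply", "file": "tasks/apply-changes.yml"}]},
        ]
    }

with open("cf-pipeline.yml", "w") as f:
    yaml.safe_dump(generate_pipeline("cf", "2.0.1"), f)

# Standard Concourse CLI call to upload the generated pipeline together
# with the generated params file; the "ci" target name is an assumption.
subprocess.run(
    ["fly", "-t", "ci", "set-pipeline", "--non-interactive",
     "-p", "cf", "-c", "cf-pipeline.yml", "-l", "cf-params.yml"],
    check=True)
```

Because download-tile and apply-changes never vary, only configure-tile and the params file have to be generated from the tile's properties.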

Benefits of this approach

  1. A no-ops solution. It takes a few hours to manually configure Ops Manager, and that is basically all the time you will spend on PCF automation.
  2. Maintenance is trivial: when a new version of PAS is released, nothing in pcf-pipelines needs to change. I also envision the pipeline itself being much simpler than the current one.
  3. This solution supports all tiles, not only PAS.
  4. Easy upgrades and updates: just perform whatever upgrade you want manually and rerun the main pipeline.
  5. You still have a fully automated solution: if you need to reproduce the installation in a different region, you can just copy the generated pipelines and run them.
  6. If we need to run the generated pipelines in an offline environment, this can be implemented as a pluggable option: the main pipeline stores all artifacts in S3, and the generated pipelines are adjusted to fetch tiles, Docker images, and other artifacts from S3 as well (see the sketch after this list).
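
For benefit 6, a rough sketch of the pluggable offline option: rewrite each download resource in a generated pipeline to Concourse's built-in s3 resource, so artifacts come from the mirror the main pipeline populated. The bucket name, the file regexp, and the assumption that downloads are modeled as pivnet resources are all illustrative:

```python
import yaml

def make_offline(pipeline_path, bucket):
    """Point every pivnet resource in a generated pipeline at an S3 mirror."""
    with open(pipeline_path) as f:
        pipeline = yaml.safe_load(f)
    for resource in pipeline.get("resources", []):
        if resource.get("type") == "pivnet":
            # swap in the built-in s3 resource; the main pipeline is assumed
            # to have uploaded each tile under its resource name
            resource["type"] = "s3"
            resource["source"] = {
                "bucket": bucket,
                "regexp": f"{resource['name']}/.*-(.*)\\.pivotal",
                "access_key_id": "((s3-access-key-id))",
                "secret_access_key": "((s3-secret-access-key))",
            }
    with open(pipeline_path, "w") as f:
        yaml.safe_dump(pipeline, f)

make_offline("cf-pipeline.yml", "offline-artifacts")
```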

We have created an issue in Pivotal Tracker to manage this. Unfortunately, the Pivotal Tracker project is private so you may be unable to view the contents of the story.

The labels on this GitHub issue will be updated when the story is started.

Thanks for your input @s-matyukevich. We'll take it into consideration for future product endeavors. At this time, we have a different set of long-term and short-term goals for the product.

Thanks.