ben-grande/qusal

Installing and executing development workflow in Rust

Closed this issue · 2 comments

Commitment

I confirm that I have read the following resources:

Question

I have used Rust in the example, but this may be true for any other language, or indeed any tool used to customise the workflow. Would the typical practice for development in Rust be to add Rust to tpl-dev in the salt formula, with appropriate authenticity checks? Or is the expected practice to set up separate tpl-rust, tpl-go, etc. formulas, just the way you are suggesting with terraform or ansible? Or is there yet another way that you had in mind in the design of the flows here? (I went through the DESIGN documentation to find out more about this, but couldn't find anything relevant there.)

Example workflow

  1. Develop code in dev (how to run it? On dev, or is another flow intended?)
  2. Interact with sys-git and/or GitHub, GitLab, etc.
  3. Build a docker image and test locally (again, on dev? Probably not, as we are limiting the qusal.ConnectTCP policies, and there is probably a different way intended). Somewhere else?

Curious to know what the intention here is, so I can extend the required functionality following good practice :)

Thanks!


Question

I have used Rust in the example, but this may be true for any other language, or indeed any tool used to customise the workflow. Would the typical practice for development in Rust be to add Rust to tpl-dev in the salt formula, with appropriate authenticity checks? Or is the expected practice to set up separate tpl-rust, tpl-go, etc. formulas, just the way you are suggesting with terraform or ansible? Or is there yet another way that you had in mind in the design of the flows here? (I went through the DESIGN documentation to find out more about this, but couldn't find anything relevant there.)

Rust packages obtained from cargo are not signed; they are authenticated with TLS and, if provided, a shasum. The same goes for Python with pip and for Golang with go. If you install the required dependencies from the distribution repositories, that is a different story: the packages are authenticated and their signatures are verified by apt, dnf, etc. The problem is that those packages can be fast moving; when users need new updates and don't want to wait for their distribution to ship them, they resort to cargo and pip.
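To illustrate the two installation paths described above (the package and crate names are only examples, not something qusal prescribes):

```sh
# Distribution packages: signatures are verified by apt/dnf before install.
sudo apt install --no-install-recommends rustc cargo

# Fast-moving tools from the registry: fetched by cargo over TLS, with
# checksums taken from the registry index rather than OpenPGP signatures.
cargo install cargo-audit
```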

To answer the question of whether different templates for tpl-rust and tpl-go are necessary: I don't know, it depends on your use case; if you need an environment that has both, a template with both is better. What you surely need is different states, for example install-rust or rust.install and install-go or go.install, to clearly separate the functionality of each state.
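To make that split concrete, here is a hedged sketch of how either layout could be applied from dom0 with qubesctl; rust.install and tpl-rust are the hypothetical names from the discussion above:

```sh
# Apply a hypothetical rust.install state to the shared dev template...
sudo qubesctl --skip-dom0 --targets=tpl-dev state.apply rust.install

# ...or to a dedicated template, if you want to keep toolchains separate.
sudo qubesctl --skip-dom0 --targets=tpl-rust state.apply rust.install
```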

The reason terraform and ansible have their own separate create states and templates is that they are different formulas, and nothing guarantees that the user wants both ansible and terraform to manage their remote VMs. What is special about them is not the create.sls, but the install.sls and configure.sls, which can be shared with other formulas. See docker.install: it is shared with mirage-builder and qubes-builder. In the same sense, a new formula to manage remote servers could apply terraform.install and terraform.ansible to a separate template.
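As a sketch of that sharing, the existing docker.install state could be applied from dom0 to whichever template another formula uses (the template name below is an assumption, not necessarily what qusal uses):

```sh
# Reuse the shared docker.install state on another formula's template.
sudo qubesctl --skip-dom0 --targets=tpl-qubes-builder state.apply docker.install
```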

Example workflow

  1. Develop code in dev (how to run it? On dev, or is another flow intended?)

This is up to the user. The best would be to run it in a disposable qube. I use disp-dev for such operations, as it is based on dvm-dev -> tpl-dev, so they share the same packages and I can run less trusted code there. The workflow is: develop on dev, then qvm-copy to disp-dev. You could also use disp-dev as the target for qusal.GitPush, as it is lighter to push commits / git objects than to copy the whole repository to another qube (which can take some time depending on the size of the files in the repo).
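A minimal sketch of that copy-and-run loop, assuming a hypothetical Rust project at ~/src/myproject:

```sh
# In dev: send the working tree to the disposable; the dom0 prompt lets you
# pick disp-dev as the destination.
qvm-copy ~/src/myproject

# In disp-dev: build and test the less trusted code there.
cd ~/QubesIncoming/dev/myproject
cargo build
cargo test
```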

  2. Interact with sys-git and/or GitHub, GitLab, etc.
  3. Build a docker image and test locally (again, on dev? Probably not, as we are limiting the qusal.ConnectTCP policies, and there is probably a different way intended). Somewhere else?

disp-dev. The docker formula doesn't have a disposable qube yet, but one could serve in these cases: disp-docker.
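A rough sketch of what that could look like, assuming the docker formula would ship a disposable template named dvm-docker (both qube names here are assumptions):

```sh
# In dom0: create a named disposable for container work.
qvm-create --class DispVM --template dvm-docker --label red disp-docker

# In disp-docker: build and test the image locally.
docker build -t myapp:dev .
docker run --rm myapp:dev
```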

There is no pre-defined flow for running code on another qube. You could learn the qubes.VMExec RPC service, but I find it hard for it to serve all use cases: with languages that need to be compiled and scripts that have different names, it is difficult to provide a default for users.
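For reference, a hedged sketch of what wiring up qubes.VMExec between dev and disp-dev could look like; the policy file name is arbitrary and none of this is shipped by qusal:

```sh
# In dom0: allow dev to execute commands in disp-dev via qubes.VMExec.
echo 'qubes.VMExec * dev disp-dev allow' |
  sudo tee /etc/qubes/policy.d/30-user-vmexec.policy

# In dev: run a simple command in disp-dev. The command is passed in the
# service argument, so anything beyond alphanumerics, "." and "_" has to be
# escaped according to the qrexec argument encoding.
qrexec-client-vm disp-dev qubes.VMExec+uname
```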

Curious to know what the intention here is, so I can extend the required functionality following good practice :)

If it is your own code, I think it is as trusted as it can be. If you are importing libraries, you need to be aware of everything that the imported library runs. I also recommend running in disposables that have the necessary packages already installed in their templates, so you don't need to download the packages each time.

Thanks @ben-grande, I am currently following the dev setup, with install-python.sls as a reference. We'll see where that gets me :)