evryfs/github-actions-runner-operator

GitHub Application Configuration Documentation

sabre1041 opened this issue · 9 comments

Provide additional documentation on how to configure a GitHub application for use by the operator

Any particular area/info you are thinking of? There is some (but not extensive) info in the README.

A few items that need more context IMO:

  1. Architecture overview
    a. Operator and CRD relationship
    b. What role they each play
    c. How does it scale out? Is it automated?
  2. Pre-Requisites
    a. Create an app or a PAT
    b. What permissions are required
  3. Helm install
  a. It's a different repo, I know, but there are no docs for the chart (see here)
    b. Recommended to deploy Operator in its own Namespace?
  4. Docs for the CRD
    a. Pre-requisites
    b. Steps to get it deployed
  c. What are the inputs (i.e. the need to create a GH_TOKEN secret)
    d. Org level and Repo level runners
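
On point 4c, the token input is typically supplied as a Kubernetes Secret. A minimal sketch follows; the Secret name and key here are placeholders, so check the operator's README for the exact names its spec references:

```yaml
# Hypothetical example: a Secret holding the GitHub token for the runner CR.
# The name/key are illustrative, not the operator's required values.
apiVersion: v1
kind: Secret
metadata:
  name: actions-runner-token
  namespace: runners
type: Opaque
stringData:
  GH_TOKEN: <your PAT or GitHub App token>
```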

P.S.: Can the CRD be added to the Helm Chart and templated?

Valid points, do you want to contribute on any of them?
As for the CRD and chart, it's included https://github.com/evryfs/helm-charts/tree/master/charts/github-actions-runner-operator/crds. No need for templating. See https://helm.sh/docs/chart_best_practices/custom_resource_definitions/ for details on how Helm handles CRDs.

We are still trying to figure it out. The recent release with the Registration Token is no longer working for us in our PoC, which is what led me to post the comments above.

As for the CRD, I should clarify that I meant an instance of the CRD, or a "runner". Currently you need to take the example, fill in your own values, and then run a kubectl apply -f your_file.yaml. Instead, what if the Chart came with a "default" runner that got deployed alongside the Operator and was templated? That way folks can deploy the Operator and runner(s) in one simple helm deployment.
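
For context, deploying a runner today looks roughly like the sketch below. The field names and apiVersion are assumptions drawn from the operator's example, not a verified schema; verify against the CRD in the repo before applying:

```yaml
# Sketch of a GithubActionRunner instance; apply with `kubectl apply -f runner.yaml`.
# Field names below are illustrative assumptions, not a confirmed spec.
apiVersion: garo.tietoevry.com/v1alpha1
kind: GithubActionRunner
metadata:
  name: runner-pool
  namespace: runners
spec:
  organization: my-org          # or scope to a single repository
  minRunners: 1
  maxRunners: 4
  tokenRef:                     # points at the GH_TOKEN secret
    name: actions-runner-token
    key: GH_TOKEN
  podTemplateSpec:
    spec:
      containers:
        - name: runner
          image: my-registry/github-runner:latest  # placeholder image
```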

@fl-max this is maybe a separate issue from the one initiated by @sabre1041?

As runner configuration varies from environment to environment, it's not very straightforward to provide a "default" (for instance, we inject a settings.xml configmap to configure Maven).

Also, because Helm is tied to a namespace, and best practice is to have the operator in one namespace and the runners in another (as reasoned in the README.md), it's better not to shoehorn both into the same chart.

@davidkarlsen I plan this weekend to rewrite the README with detailed use cases. If you want to assign a task to me, I'll knock it out.

@davidkarlsen Yes, I'd agree it's a different issue/discussion. I brought it up in this thread as I believe it would make the install easier, and therefore the docs as well.

I don't think the base runner configuration would vary that much. As for config injection (like the Maven settings in your example), other charts handle this by exposing extraVolumes and extraVolumeMounts values that let the user specify any number of configurations. I'm also not advocating for strictly deploying the runner this way; I'm simply suggesting adding it as an option behind a value like runner.enabled.
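
To make the suggestion concrete, a hypothetical values.yaml for such an opt-in runner might look like the following. None of these keys exist in the current chart; they are purely the proposal:

```yaml
# Hypothetical chart values for an optional default runner (proposal only;
# these keys are not part of the existing chart).
runner:
  enabled: true
  organization: my-org
  minRunners: 1
  maxRunners: 4
  extraVolumes:
    - name: maven-settings
      configMap:
        name: maven-settings
  extraVolumeMounts:
    - name: maven-settings
      mountPath: /home/runner/.m2
```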

As for the operator namespace separation, that may be an area to expand on. I think "enhanced security and increased API quota" is only realistic if the team managing the operator and the team deploying the runner are different.

> @davidkarlsen I plan this weekend to rewrite the README with detailed use cases. If you want to assign a task to me, I'll knock it out.

Anything to add?

Closing, as this was addressed in the PR from @sabre1041.