AWS EC2
selonke opened this issue · 3 comments
It would be great to have detailed documentation about deploying to AWS EC2.
I'm afraid that deployment should be treated separately from the packaging stage. Packaging an application is one step of a deployment pipeline; deploying to AWS, or even to a bare-metal machine, is just one approach to updating the application.
This is something I've been thinking about a lot recently, especially for 3.0.0 and later versions: with Kikaha it should be easy to expose REST APIs (or RPC endpoints), easy to enforce security, and even easier to generate a runnable package. That said, I don't think we should maintain too many extra modules or different deployment approaches. It's up to the architect to pick the deployment tool that best fits their needs to deploy a Kikaha application.
Taking the subject of this issue as an example, deploying to AWS offers a wide range of approaches involving several different tools. The simplest approach would indeed be using SSH to upload the artifact to a given machine. But what if that machine freezes and we have to spin up a second instance? Should we do that manually? More likely, we would end up combining AWS CodeDeploy, Auto Scaling, a Load Balancer, and a Launch Configuration to achieve a basic scalable setup. A vendor-agnostic approach could easily be achieved by generating a Docker image that runs Kikaha's fat jar, allowing you to pick your favorite container platform to run the application (e.g. Kubernetes, Docker Swarm, or AWS Fargate).
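As a rough illustration of the Docker approach, a minimal Dockerfile could copy the fat jar into a JRE base image and run it. This is only a sketch: the jar path, exposed port, and base image below are assumptions for illustration, not something Kikaha prescribes.

```dockerfile
# Minimal sketch: wrap a Kikaha fat jar in a Docker image.
# The jar path (target/app-fatjar.jar), port (9000), and base image
# are hypothetical; adjust them to your own build output and config.
FROM openjdk:8-jre
WORKDIR /app
COPY target/app-fatjar.jar /app/app.jar
EXPOSE 9000
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

An image built this way (e.g. `docker build -t my-app .`) can then be run on whichever container platform the architect prefers, with no AWS-specific packaging in the application itself.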
I'll open an RFC to address this packaging requirement the proper way. I think the discussion will be long, but it will be worth it in the long run.