hyper-kube-config - Provides a secure Serverless API to store and retrieve Kubernetes cluster config credentials. hyper-kube-config leverages AWS Secrets Manager for storing credential information. Included is a kubectl plugin to interface with the hyperkube API.
You can also set a cluster's status and environment and retrieve this information later. This is useful in CI/CD pipelines that need to pull credentials for particular clusters without knowing their names, just their environments.
It requires a configuration file. See hyperkube-config.yaml.example for layout.
pip3 install hyper-kube-config
The default location for the config file is ~/hyperkube-config.yaml. You can also place the config file at a different location and pass that location with the command line option -c <hyper-kube-config-location> or --config <hyper-kube-config-location>.
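A minimal config only needs to point the CLI at your deployed API. The sketch below is illustrative only and the key names are assumptions; hyperkube-config.yaml.example is the authoritative layout.
# ~/hyperkube-config.yaml -- illustrative sketch, key names are assumptions
url: https://abc1234567.execute-api.us-east-1.amazonaws.com  # API URL captured from your deploy output
stage: dev  # deployment stage
x-api-key: <your-api-key>  # API key, if API key authentication is enabled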
kubectl hyperkube add --k8s-config ~/.kube/config
kubectl hyperkube remove --cluster-to-remove 'k8s-cluster-example.cloud'
# for a single cluster
kubectl hyperkube get --cluster cloud-infra.cloud -m
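# for multiple clusters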
kubectl hyperkube get \
  --cluster cloud-infra.cloud \
  --cluster bar-cluster.cloud \
  --cluster baz-cluster.com -m
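# for all stored clusters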
kubectl hyperkube get-all -m
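# list clusters stored in hyperkube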
kubectl hyperkube list
kubectl hyperkube add-pem --pem ~/.ssh/my-cluster.pem
kubectl hyperkube get-pem --cluster my-cluster.net
kubectl hyperkube add-ca-key --ca-key ca-key-file.key --cluster my-cluster.net
# Set an arbitrary status string and environment reference for a given cluster
kubectl hyperkube set-cluster-status --cluster my-cluster.net --status active --environment stage
# Returns the list of clusters that are active in the prod environment
kubectl hyperkube get-cluster-status --status active --environment prod
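For example, a CI/CD job can select a cluster by environment rather than hard-coding its name. The snippet below is only a sketch and assumes get-cluster-status prints matching cluster names one per line:
# Illustrative only: fetch the config for the first active prod cluster
CLUSTER=$(kubectl hyperkube get-cluster-status --status active --environment prod | head -n 1)
kubectl hyperkube get --cluster "$CLUSTER" -m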
- Serverless - Serverless Framework
- serverless-python-requirements plugin. Uses Docker and pip to package a newer version of boto3 for the AWS Lambda functions. The boto3 version bundled with AWS Lambda by default doesn't support AWS Secrets Manager tags.
- click - for the hyperkube kubectl plugin
- kubectl - version 1.12 or higher recommended for stable plugin support.
Example Serverless Config for API Key Authentication
This config should work out of the box. Feel free to copy it to serverless.yml and deploy.
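If you want to see the general shape of such a config before copying the provided example, an API-key-protected Serverless service typically looks like the sketch below. The runtime, handler, and path names are illustrative assumptions, not this project's actual layout:
service: hyper-kube-config
provider:
  name: aws
  runtime: python3.7  # illustrative runtime
  apiKeys:
    - hyperkube-api-key  # API Gateway key; clients send it in the x-api-key header
functions:
  addCluster:
    handler: handler.add_cluster  # illustrative handler module/function
    events:
      - http:
          path: clusters
          method: post
          private: true  # require the API key for this endpoint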
Example Serverless Config for IAM Authentication
This configuration will require you to add IAM roles to the allowed principal section. These roles are managed outside the scope of this project.
See the section that has:
resourcePolicy:
  - Effect: Allow
    Principal:
      AWS:
        - arn:aws:iam::{{otherAWSAccountID}}:root
        - arn:aws:iam::{{otherAWSAccountID}}:user/{{otherAWSUserName}}
        - arn:aws:iam::{{otherAWSAccountID}}:role/{{otherAWSRoleName}}
Replace these with the roles you would like to grant access.
pipenv install
pipenv shell
sls deploy \
  --stage dev \
  --product k8s \
  --owner myteam@foo.cloud \
  --team myteam \
  --environment dev
This will launch your hyperkube API. Capture the API URL, API key, and stage for your hyperkube-config.yaml configuration. The kubectl hyperkube commands will use this config to interact with your stored k8s configs.
Serverless will launch an AWS API Gateway to handle API requests forwarded to AWS Lambda functions. A DynamoDB table is configured to store non-sensitive cluster config details, while sensitive information in uploaded configs (passwords and certs) is stored in AWS Secrets Manager.
- Install Test Dependencies
pip install -U -r tests/requirements.txt
- Run flake8 to check for lint errors
flake8 *.py tests cli/kubectl-hyperkube
- Run unit tests
python -m unittest discover -s tests/ -p "*.py"