scholzj/terraform-aws-kubernetes

Support for private subnets

mozinrat opened this issue · 9 comments

Can we create nodes in a private subnet as well? I would like to force isolation of some services into a private subnet.

Interesting question.

The current script downloads a lot of software from the internet - it doesn't use an AMI image with pre-installed software. That means one of the prerequisites is that your private subnet needs a NAT service so that it has at least indirect internet access.

If you need a private subnet without NAT, then it would definitely not work - we would need to create an AMI image with all the software that is currently downloaded from the internet. But I'm not sure whether that would be worth doing.
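For reference, here is a rough sketch of what the NAT piece looks like when built from plain AWS resources - the subnet and route table names are placeholders for things you would define yourself, not something this module creates:

# Sketch only: outbound internet access for a private subnet via a NAT gateway.
# "aws_subnet.public" and "aws_route_table.private" are placeholder names.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.public.id}" # the NAT gateway itself lives in a public subnet
}

resource "aws_route" "private_internet" {
  route_table_id         = "${aws_route_table.private.id}"
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = "${aws_nat_gateway.nat.id}"
}

(The terraform-aws-modules/vpc/aws module can set the same thing up for you with enable_nat_gateway = true.)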

A second potential problem might be exposing services from the nodes in private subnets through load balancers - I'm not sure whether the load balancers would be configured properly to do the routing, but my guess is that they should be.
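If you do try that, the AWS cloud provider relies on subnet tags to pick the subnets it places load balancers into. As a hedged sketch - the kubernetes.io/role/* tags below are my assumption about what the cloud provider looks for, I haven't verified them with this module - the VPC tagging would look roughly like this:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... name, cidr, azs, subnets, NAT settings, etc. ...

  # Cluster ownership tag plus the role tags the AWS cloud provider
  # conventionally uses to choose subnets for public and internal ELBs.
  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}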

Overall, I think that if the private subnet has NAT, there is a good chance it will work fine. Feel free to give it a try. I might try it as well, but I will probably only get to it towards the end of next week.

What currently absolutely has to be in a public subnet is the master, because it uses an Elastic IP and a DNS name and is accessed directly. (It would also be possible to work around this with a load balancer, but that would cost money while it runs.)
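Roughly speaking, the master's addressing boils down to something like the following - the resource names are illustrative only, not the module's actual internals:

# Illustrative sketch: an Elastic IP attached to the master instance and a DNS
# A record pointing at it. "aws_instance.master" and the Route 53 zone lookup
# are placeholders.
resource "aws_eip" "master" {
  instance = "${aws_instance.master.id}"
  vpc      = true
}

resource "aws_route53_record" "master" {
  zone_id = "${data.aws_route53_zone.cluster.zone_id}"
  name    = "${var.cluster_name}.${var.hosted_zone}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.master.public_ip}"]
}

Clients resolve that DNS name and connect straight to the Elastic IP, which is why the master has to sit in a public subnet.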

OK, I will add this to my to-do list and have a look at it. I will try to do it later this week.

@mozinrat Sorry, I still haven't gotten around to looking at this - I was busy with something else. But I haven't forgotten about it. I hope to get some more time for it next week (i.e. the week of the 22nd).

@scholzj Have you had some time to look at it? FYI, here are my VPC details:

provider "aws" {
  region = "us-east-1"
}

# Elastic IP reserved up front so the VPC module below can reuse it for its NAT gateway
resource "aws_eip" "nat" {
  count = 1
  vpc   = true
}

resource "aws_default_security_group" "default" {
  vpc_id = "${module.vpc.vpc_id}"

  # Allow inbound ICMP echo requests (ping) from anywhere
  ingress {
    from_port   = 8
    to_port     = 0
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "vpc" {
  source                       = "terraform-aws-modules/vpc/aws"
  name                         = "${var.cluster_name}"
  cidr                         = "10.10.0.0/16"
  azs                          = ["us-east-1a", "us-east-1b"]
  public_subnets               = ["10.10.11.0/24", "10.10.12.0/24"]
  private_subnets              = ["10.10.1.0/24", "10.10.2.0/24"]
  database_subnets             = ["10.10.21.0/24", "10.10.22.0/24"]
  elasticache_subnets          = []
  enable_nat_gateway           = true
  single_nat_gateway           = true
  reuse_nat_ips                = true
  external_nat_ip_ids          = ["${aws_eip.nat.*.id}"]
  enable_vpn_gateway           = false
  create_database_subnet_group = true

  tags = {
    Owner       = "rohit"
    Environment = "dev"
    Name        = "${var.cluster_name}"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

and I used your module like this:

module "kubernetes" {
  source = "scholzj/kubernetes/aws"

  aws_region           = "${var.aws_region}"
  cluster_name         = "${var.cluster_name}"
  master_instance_type = "${var.master_instance_type}"
  worker_instance_type = "${var.worker_instance_type}"
  ssh_public_key       = "${var.ssh_public_key}"
  master_subnet_id     = "${element(module.vpc.public_subnets, 0)}"
  # also tried with master_subnet_id = "${element(module.vpc.private_subnets, 0)}"
  worker_subnet_ids    = "${module.vpc.private_subnets}"
  min_worker_count     = "2"
  max_worker_count     = "6"
  hosted_zone          = "${var.hosted_zone}"
  hosted_zone_private  = "${var.hosted_zone_private}"
  tags                 = "${var.tags}"
  tags2                = "${var.tags2}"
  addons               = "${var.addons}"
  ssh_access_cidr      = "${var.ssh_access_cidr}"
  api_access_cidr      = "${var.api_access_cidr}"
}

Any input is highly appreciated. Thanks.

I gave it a try today ... I used my own VPC setup and it worked fine. There was a slight problem with creating load balancers, but I fixed that.

So it seems the difference must be in the VPC setup. I will give it a try with your VPC setup.

I actually tried it with your VPC setup as well and it seems to work fine:

k get nodes
NAME                           STATUS    ROLES     AGE       VERSION
ip-10-10-1-13.ec2.internal     Ready     <none>    36s       v1.9.2
ip-10-10-11-187.ec2.internal   Ready     master    49s       v1.9.2

I used only the VPC setup itself. Could it be that the rest of your setup causes some issues with security groups or something similar? Also, could you check:

  • If you can reach the master node / port 6443 from the worker nodes?
  • The kubelet logs on workers / masters for errors?
  • The cloud-init logs on the worker nodes? (They should contain the details of the kubeadm bootstrapping.)

It's working for me, thanks.

Cool, thanks. I will close this issue. If you run into any other problems, feel free to open another one.