BishopFox/cloudfox

Stacktrace when enumerating EFS Filesystems

castrapel opened this issue · 3 comments

Great tool!

Description of Bug

I received a nil pointer dereference when running this across our environment. It seemed to occur when analyzing EFS:

[filesystems][prod/prod_admin] Supported Services: EFS, FSx
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x35a4485]

goroutine 6277 [running]:
github.com/BishopFox/cloudfox/aws.(*FilesystemsModule).getEFSfilesystemPermissions(...)
	/home/ccastrapel/localrepos/cloudfox/aws/filesystems.go:491
github.com/BishopFox/cloudfox/aws.(*FilesystemsModule).getEFSSharesPerRegion(0xc0027c5300, {0xc000a5f170, 0x9}, 0x6d4a8a?, 0x6f8d66?, 0xc0014c2a20?)
	/home/ccastrapel/localrepos/cloudfox/aws/filesystems.go:322 +0x885
created by github.com/BishopFox/cloudfox/aws.(*FilesystemsModule).executeChecks
	/home/ccastrapel/localrepos/cloudfox/aws/filesystems.go:193 +0x1ed

Here is the Terraform code that generates the EFS file system that I think it is stuck on:

# Creating Amazon EFS File system
resource "aws_efs_file_system" "data_storage" {
  # Lifecycle policy: Amazon EFS supports two lifecycle policies,
  # transition into IA and transition out of IA.
  # Transition into IA moves files into the file system's
  # Infrequent Access storage class.
  lifecycle_policy {
    transition_to_ia = "AFTER_7_DAYS"
  }
  kms_key_id = var.kms_key_id
  encrypted  = true
  tags       = var.tags
}

# Creating the EFS access point for AWS EFS File system
resource "aws_efs_access_point" "data_storage_access_point" {
  file_system_id = aws_efs_file_system.data_storage.id
  tags           = var.tags
}
# Attaching the EFS file system policy that governs client access to the file system.
resource "aws_efs_file_system_policy" "policy" {
  file_system_id                     = aws_efs_file_system.data_storage.id
  bypass_policy_lockout_safety_check = true
  # The EFS System Policy allows clients to mount, read and perform
  # write operations on File system
  # The communication of client and EFS is set using aws:secureTransport Option
  policy = <<POLICY
{
    "Version": "2012-10-17",
    "Id": "Policy01",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        },
        {
            "Sid": "Statement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${var.ecs_task_role_arn}"
            },
            "Resource": "${aws_efs_file_system.data_storage.arn}",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientRootAccess",
                "elasticfilesystem:ClientWrite"
            ]
        }
    ]
}
POLICY
}
# Creating the AWS EFS Mount point in a specified Subnet
# AWS EFS Mount point uses File system ID to launch.
resource "aws_efs_mount_target" "efs-mount-target" {
  file_system_id  = aws_efs_file_system.data_storage.id
  subnet_id       = var.subnet_ids[0]
  security_groups = [aws_security_group.efs-sg.id]
}

resource "aws_efs_mount_target" "efs-mount-target-2" {
  file_system_id  = aws_efs_file_system.data_storage.id
  subnet_id       = var.subnet_ids[1]
  security_groups = [aws_security_group.efs-sg.id]
}

resource "aws_security_group" "efs-sg" {
  name        = "${var.cluster_id}-efs-access-sg"
  description = "Allows access to EFS storage from the containers"
  vpc_id      = var.vpc_id

  ingress {
    description     = "NFS for container access to EFS storage"
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    security_groups = var.ecs_security_group_id
  }

  egress {
    description = "Full egress access"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] #tfsec:ignore:aws-vpc-no-public-egress-sgr
  }

  tags = merge(
    var.tags,
    {
      Name = "allow_access_to_efs"
    }
  )
}

Here's what the EFS file system looks like in the console:

[screenshot]

This is where the nil reference appears to occur:

[screenshot]

Access points (I suspect the issue is here due to blank Creation Info):

[screenshot]

Wow, thank you for so much detail (and for reporting it!). I'll get on this really soon!

@castrapel - What version of cloudfox are you using?

I think I fixed this, but I was never able to deploy the Terraform above in a way that caused this segfault, even before I changed the code (https://github.com/BishopFox/cloudfox/blob/main/aws/filesystems.go#L486), so I'm not sure whether I actually fixed this issue or not. If you run into it again, let me know!