move to t2.nano instances
sprocketsecurity opened this issue · 9 comments
Currently proxycannon-ng uses t2.micro instances. Test the change with t2.nano to reduce cost.
Hey @sprocketsecurity. I've tested with t2.nano instances instead of t2.micro and everything looks good. Running a speedtest through the whole setup gave download and upload speeds of 95 Mbit/s and 103.75 Mbit/s respectively (my home connection is 100/100).
Here are the average utilization figures during the speedtest:
t2.nano instances:
CPU utilization: 0-1%
Memory utilization: 22-23% (out of 0.5GB on nano instance)
Control server:
CPU utilization: 1-30% (due to OpenVPN encryption)
Memory utilization: 26%
Should I do a PR for "main.tf"?
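For reference, the change should just be a one-line swap of the instance type on the exit-node resource. A rough sketch of the relevant block (the resource and variable names here are assumptions, not necessarily what main.tf actually uses):

```hcl
# Hypothetical excerpt -- only instance_type changes; the resource name and
# other attributes are placeholders for illustration.
resource "aws_instance" "exit_node" {
  count         = var.node_count      # assumed variable
  ami           = var.exit_node_ami   # assumed variable
  instance_type = "t2.nano"           # previously "t2.micro"
  key_name      = var.key_name        # assumed variable
}
```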
ya, if you're confident all is good, please submit the PR! Thanks @UrfinJusse!
I'll do some real-world testing today with password spraying OWA and VPN for two clients. If everything looks good, I'll submit the PR tomorrow morning. Better to be safe :)
Careful with this one: it violates AWS' Pentesting Policy if you plan to use this infra for that purpose.
"Testing of m1.small, t1.micro or t2.nano EC2 instance types is not permitted." (Source)
Good point @metaDNA. I wonder whether what's not allowed is testing OF those smaller instances or testing FROM them. Looking at their reasoning ("This is to prevent potential adverse performance impacts on resources that may be shared with other customers"), it seems they don't want you testing the instances themselves. Gray area?
@UrfinJusse Most def a gray area... but that's where we live, isn't it? To your point, I'm sure the boxes can handle it in either direction; it's likely just out of an abundance of caution.
My only thought was that when filling out the AWS form, the request may get auto-rejected if they see the nano boxes in there.
Agreed. I guess I can try submitting the form and see what happens. Worst case scenario, we test with t2.micros; the price difference will be negligible for password-spray purposes (1-2 hr?).
If we're operating solely in AWS, I'm curious whether we could ditch the instances altogether. re: #7
I don't know enough about AWS VPCs to understand if this is possible. I'd be curious to hear your thoughts.
I think you still need an endpoint with an IP (or Elastic IP) on it; VPCs on their own do not hold IP addresses. That said, we could potentially look into adding multiple ENIs (Elastic Network Interfaces) to a single host and doing the load-balanced routing through those. For example, a c1.medium can have a total of 2 ENIs and 6 IPs associated with a single VM. However, 6 t2.micros are about $50 USD/mo and a c1.medium is ~$95/mo, so I don't know that you save a whole lot cost-wise; it may just be easier to manage programmatically, though that's TBD too. (Source)
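If we did explore the single-host route, here's a rough Terraform sketch of what an extra ENI with several secondary IPs, each mapped to its own Elastic IP, might look like. This is purely illustrative: the resource and variable names are placeholders, and nothing like this exists in proxycannon-ng today.

```hcl
# Hypothetical sketch only -- names, counts, and the exit-node instance ID
# variable are placeholders, not actual proxycannon-ng resources.
resource "aws_network_interface" "extra" {
  subnet_id         = var.subnet_id   # assumed variable
  private_ips_count = 5               # 5 secondary IPs + 1 primary = 6 per ENI
}

resource "aws_network_interface_attachment" "extra" {
  instance_id          = var.exit_node_instance_id   # assumed: the single host's ID
  network_interface_id = aws_network_interface.extra.id
  device_index         = 1
}

resource "aws_eip" "per_ip" {
  count  = 6
  domain = "vpc"   # older provider versions use vpc = true instead
}

resource "aws_eip_association" "per_ip" {
  count                = 6
  allocation_id        = aws_eip.per_ip[count.index].id
  network_interface_id = aws_network_interface.extra.id
  private_ip_address   = tolist(aws_network_interface.extra.private_ips)[count.index]
}
```

The open question would be whether steering the load-balanced tunnels across secondary IPs on one box is actually simpler to manage than the current one-instance-per-exit-IP model.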