ipfs/kubo

Daemon triggers a Netscan alert from hosting company

cinderblock opened this issue · 60 comments

Solution

Use ipfs init --profile=server

~ Kubuxu


I just installed go-ipfs, did an init, and started the daemon. A couple minutes later, my hosting provider sent me an abuse email indicating that a "Netscan" was coming from my host and asked me to stop. Here is the log they sent me (edited for privacy).

##########################################################################
#               Netscan detected from host my.host.i.p                   #
##########################################################################

time                protocol src_ip src_port          dest_ip dest_port
---------------------------------------------------------------------------
Sun May 10 02:31:32 2015 UDP my.host.i.p 56809 => 192.internal.i.p 49939
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 100.external.i.p 12644
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 35879 =>  10.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:38 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:41 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  =>  10.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  =>  25.external.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:33 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:35 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 44194 => 172.internal.i.p 4001 
Sun May 10 02:31:39 2015 TCP my.host.i.p 44194 => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:53 2015 TCP my.host.i.p 49417 => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:33 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:35 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:46 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:44 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:43 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 50861 => 172.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 50863 => 172.internal.i.p 4001 
Sun May 10 02:31:29 2015 TCP my.host.i.p 50863 => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:53 2015 TCP my.host.i.p 4001  => 172.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:48 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:51 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:40 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:52 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:50 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:31 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:32 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:34 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:42 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:45 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:47 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:36 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:39 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:26 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:29 2015 TCP my.host.i.p 4001  => 192.internal.i.p 4001 
Sun May 10 02:31:37 2015 TCP my.host.i.p 4001  => 100.external.i.p 47389
Sun May 10 02:31:20 2015 TCP my.host.i.p 56610 =>  10.internal.i.p 55511
Sun May 10 02:31:22 2015 TCP my.host.i.p 56610 =>  10.internal.i.p 55511

Notice that all but 3 destination addresses are internal network destinations. There are also many repeats (same internal destination IP), and it all happened within 33 seconds. Nearly all of it involved port 4001, reinforcing that IPFS was the cause.
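That read can be checked mechanically. A rough tally (sample lines below stand in for the full report; field positions follow the log format above) groups connections by the first octet of the destination address:

```shell
# Sketch: tally an abuse log of this shape by the destination address's
# first octet. The sample lines stand in for the full report.
log='Sun May 10 02:31:26 2015 TCP my.host.i.p 4001 => 10.0.0.5 4001
Sun May 10 02:31:27 2015 TCP my.host.i.p 4001 => 172.17.0.2 4001
Sun May 10 02:31:28 2015 TCP my.host.i.p 4001 => 192.168.1.9 4001
Sun May 10 02:31:29 2015 TCP my.host.i.p 4001 => 100.64.12.3 47389'
# Field 10 is the destination IP; split on dots and print the first octet.
echo "$log" | awk '{split($10, o, "."); print o[1]}' | sort | uniq -c
```

On the real report this makes the 10.x/172.x/192.x concentration obvious at a glance.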

How does ipfs currently find peers to swarm with? Is there a way to throttle back the peer discovery process? Why is it even trying to scan internal IPs? (I'm on an externally facing machine.)

it really should not scan internal IPs (it currently does try to dial other peers at their internal IPs in the hope that they're on the same LAN). We have multicast DNS for finding peers on the local network; we should filter out local IPs from our advertised list.

I've seen these reports before as well. Duplicate of #1173

it really should not scan internal IPs (it currently does try to dial other peers at their internal IPs in the hope that they're on the same LAN). We have multicast DNS for finding peers on the local network; we should filter out local IPs from our advertised list.

@whyrusleeping (a) Multicast DNS does not work all the time. It is often disabled in many networks-- it's happened at 2/4 talks I've given recently-- and even in some OSes. (And it certainly does not work for containers.) (b) Look at the WebRTC standard. Dialing local network addresses is precisely how it works. I'm tired of having to justify this over and over.

Now, there are many ways to fix this sort of thing. For example, just two among many:

  • not dialing known local area network addresses when not in that local network. For example, it does not make sense to dial a 192.168.0.0/16 address when not within that subnet. This alone will cut out most -- if not all -- of the sysadmin netscan warnings. Most VPSes are in different networks.
  • having a config option to disable dialing local subnets entirely. maybe even allowing configurable address filters.

I suggest also looking at the silencing/niceness heuristics other (aggressively local) p2p applications use.

Next steps

  • Avoid dialing known local addresses when not in that network #1246
  • Config option to disable dialing specific subnets entirely #1247

Getting these down would go a long way for people trying to run go-ipfs in VPSes at providers that (rightly!) are concerned about random processes trying to dial lots of local addresses.

We received a similar letter from a dedicated server provider.

Long term, I really do see this as something ISPs need to become more comfortable with, as the web adjusts to a more decentralized model, and in the case of IPFS, even datacenters become the home to localized caches of content (and it's a good thing for them overall).

That said, in the interim they treat most of this sort of activity as malicious. So a way to turn it off is needed for now, until more widespread adoption takes place.

I think your next steps would solve this issue for us.

Also, it's possible that a firewall rule could be used as a workaround for now. I'm not sure what that rule would look like, I'm not very savvy with iptables.

so an iptables solution to this would be to just block outgoing connections to other 'internal' networks like so:

iptables -A OUTPUT -d 172.17.2.0/24 -j REJECT
iptables -A OUTPUT -d 192.168.0.0/16 -j REJECT

and so on, for any other networks that you are accused of scanning. I personally don't think this is a good approach, but it may work in the short term.

@aSmig gave me some great feedback on iptables usage, and recommended this as a workaround:

iptables -A OUTPUT -d 10.0.0.0/8 -p tcp --sport 4001 --dport 4001 -j REJECT
iptables -A OUTPUT -d 172.16.0.0/12 -p tcp --sport 4001 --dport 4001 -j REJECT
iptables -A OUTPUT -d 192.168.0.0/16 -p tcp --sport 4001 --dport 4001 -j REJECT

This will block all private scans. Not ideal obviously, but all of the netscans I've gotten complaints about were related to local IP scanning.

If you're running Ubuntu, this service will persist the settings:

sudo apt-get install iptables-persistent

You may need to disable UFW if it is running (and then iptables -F), or make a version of these rules that uses UFW instead of iptables.

I'll report back if I get another netscan warning.

Also got a netscan report from my hoster looking quite similar to the one in the original post for this issue. Solved with some iptables rules quite similar to the ones @kyledrake posted above:

iptables -A OUTPUT -d 192.168.0.0/16 -o eth0 -p tcp -m tcp -j DROP
iptables -A OUTPUT -d 10.0.0.0/8 -o eth0 -p tcp -m tcp -j DROP
iptables -A OUTPUT -d 172.16.0.0/12 -o eth0 -p tcp -m tcp -j DROP

In this case I had the chance to block all transfer to private IPs using the external interface as the machine does not have any private networking on that interface.

Just a quick update that I have not had any more complaints from our DCO since we installed these filters.

@kyledrake thanks, good to know! still need to put this into IPFS soon. hopefully into 0.3.6 or 0.3.7

Just got another one:

Sat Jun 13 13:27:27 2015 TCP    MYIP 59245 =>    172.17.0.112 4001 
Sat Jun 13 13:27:27 2015 TCP    MYIP 54851 =>    172.17.0.113 4001 
Sat Jun 13 13:27:27 2015 TCP    MYIP 50660 =>     172.17.1.20 4001 
Sat Jun 13 13:27:27 2015 TCP    MYIP 51793 =>     172.17.1.21 4001 
...

Which is really weird, since the iptables policy is in place:

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             10.0.0.0/8           tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             172.16.0.0/12        tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             192.168.0.0/16       tcp spt:4001 dpt:4001 reject-with icmp-port-unreachable

So I'm not sure what's going on here. The above rule doesn't cover 172.17 for some reason? Ideas welcome.

I also found this list, which claims to cover all the private nets (RFC 1918), copied from here:

iptables -A valid-src -s 10.0.0.0/8     -j DROP
iptables -A valid-src -s 172.16.0.0/12  -j DROP
iptables -A valid-src -s 192.168.0.0/16 -j DROP
iptables -A valid-src -s 224.0.0.0/4    -j DROP
iptables -A valid-src -s 240.0.0.0/5    -j DROP
iptables -A valid-src -s 127.0.0.0/8    -j DROP
iptables -A valid-src -s 0.0.0.0/8       -j DROP
iptables -A valid-src -d 255.255.255.255 -j DROP
iptables -A valid-src -s 169.254.0.0/16  -j DROP
iptables -A valid-src -s $EXTERNAL_IP    -j DROP
iptables -A valid-dst -d 224.0.0.0/4    -j DROP

Use at your own risk. I haven't edited this to make it useful, and I have no idea what $EXTERNAL_IP does, and it may not be what you want.

172.17.x.x is definitely covered by 172.16.0.0/12. The report indicates that the source ports are in the 50000-60000 range, but your rules only match when both source and destination port are 4001. Pull the --sport 4001 out of your commands to match any source port.
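To double-check the CIDR itself (a standalone sketch, not part of anyone's firewall; the in_172_12 helper is made up for this example): the /12 mask covers second octets 16 through 31, so 172.17.x.x sits inside the blocked range, and the rules failed only on the --sport match.

```shell
# Sketch: an address is inside 172.16.0.0/12 exactly when the first
# octet is 172 and the second octet is within 16-31.
in_172_12() {
  first=$(echo "$1" | cut -d. -f1)
  second=$(echo "$1" | cut -d. -f2)
  [ "$first" -eq 172 ] && [ "$second" -ge 16 ] && [ "$second" -le 31 ]
}
in_172_12 172.17.0.112 && echo "172.17.0.112 is inside 172.16.0.0/12"
in_172_12 172.32.0.1   || echo "172.32.0.1 is outside 172.16.0.0/12"
```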

The valid-src chain described above has a few issues, including blocking all outbound traffic if you specify your external IP. Most of the rules block outbound traffic only when the source IP matches a private network, but you want to match against destination IPs. If you really want to block any and all traffic to private network ranges on a given external interface, this should get you closer:

EXTERNAL_IF=eth0  # or whatever interface connects to your ISP
iptables -A valid-out -d 10.0.0.0/8      -j REJECT
iptables -A valid-out -d 172.16.0.0/12   -j REJECT
iptables -A valid-out -d 192.168.0.0/16  -j REJECT
iptables -A valid-out -d 224.0.0.0/4     -j REJECT
iptables -A valid-out -d 240.0.0.0/5     -j REJECT
iptables -A valid-out -d 127.0.0.0/8     -j REJECT
iptables -A valid-out -d 0.0.0.0/8       -j REJECT
iptables -A valid-out -d 255.0.0.0/8     -j REJECT
iptables -A valid-out -d 169.254.0.0/16  -j REJECT
iptables -A valid-out -d 224.0.0.0/4     -j REJECT
iptables -A OUTPUT -o $EXTERNAL_IF -j valid-out  # make sure this happens before a global ACCEPT
# Use this instead of the previous line if you only want to block traffic to port 4001
#iptables -A OUTPUT -o $EXTERNAL_IF -p tcp --dport 4001 -j valid-out

we should up the priority on this and get it out sooner.

We now have ip/cidr connection filtering: #1226 #1378

could someone:

Is this the format you ended up going with?: #1378 (comment)

@kyledrake the format is /ip4/192.168.0.0/ipcidr/16 which is equivalent to just 192.168.0.0/16
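The mapping between the two notations is purely mechanical. As a small illustrative sketch (the to_cidr helper is made up for this example), splitting the multiaddr on / recovers the plain CIDR form:

```shell
# Sketch: convert a /ip4/<addr>/ipcidr/<bits> multiaddr filter to plain
# CIDR notation. With "/" as the separator, field 1 is the empty leading
# component, the address lands in field 3, and the prefix bits in field 5.
to_cidr() {
  echo "$1" | awk -F/ '{print $3 "/" $5}'
}
to_cidr /ip4/192.168.0.0/ipcidr/16   # prints 192.168.0.0/16
```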

So this should be fixed, and @whyrusleeping fixed it

(though would love people to play with it, make sure it does fix things, and make an example)

what's the PR#?

So, if i close this issue, i acquire currency?

@whyrusleeping what's an example config here? does this look right:

{ // in config
  "DialBlockList": [
    "/ip4/192.168.0.0/ipcidr/16",
    "/ip4/172.10.1.0/ipcidr/28"
  ]
}

Based on:

Would this also work with IPv6? /ip6/fc00::/ipcidr/8

mmm, yeah... that's an easy fix.

This should be all the needed filters for IPv4 private networks:

{
  "DialBlockList": [
    "/ip4/10.0.0.0/ipcidr/8",
    "/ip4/172.16.0.0/ipcidr/12",
    "/ip4/192.168.0.0/ipcidr/16",
    "/ip4/100.64.0.0/ipcidr/10"
  ]
}

@lgierth we should have ipv6 support shortly: whyrusleeping/multiaddr-filter#2

cc @kyledrake @Luzifer

Ok, round 3! #1433 just merged, which fixes the filters loading from the config. But the filters moved location slightly; they're now at:

{
  "Swarm": {
    "AddrFilters": [ ]
  }
}

So set them with this line:

ipfs config --json Swarm.AddrFilters '[
  "/ip4/10.0.0.0/ipcidr/8",
  "/ip4/172.16.0.0/ipcidr/12",
  "/ip4/192.168.0.0/ipcidr/16",
  "/ip4/100.64.0.0/ipcidr/10"
]'

you should get

> ipfs config Swarm.AddrFilters
[
  "/ip4/10.0.0.0/ipcidr/8",
  "/ip4/172.16.0.0/ipcidr/12",
  "/ip4/192.168.0.0/ipcidr/16",
  "/ip4/100.64.0.0/ipcidr/10"
]

FYI, the authoritative list of non-Internet-routable IPv4 address ranges can be found on IANA's site. Anything with False in the Global column is not globally routable. There is a similar list for IPv6.

Haven't had a chance to test the new filters yet with the fix, but I wanted to share my latest flavor of the iptables block:

/sbin/iptables -A OUTPUT -d 10.0.0.0/8 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 172.16.0.0/12 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 192.168.0.0/16 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 100.64.0.0/10 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 192.0.2.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 198.51.100.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 203.0.113.0/24 -p tcp --dport 4001 -j REJECT
/sbin/iptables -A OUTPUT -d 198.18.0.0/15 -p tcp --dport 4001 -j REJECT

Note that I've taken out the source port.

Could someone test the new filters? Would love to know whether this is fixed or not.

(and thanks @kyledrake for the new table)

I'm not able to connect to another node on my LAN with the filters set appropriately

@kyledrake, you can reset your rule match counters with iptables -Z. Then check them a week later to see if anything got past the built-in filters and was blocked by your firewall. To show only rules that have matched packets, you can do this:

iptables -nvL | awk '$1!=0{print}'

This helps with testing and ensures your ISP won't get grumpy.

Starting test on Hetzner server… We'll see whether there is a netscan alert…

Rule-Set:

  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.0.0/ipcidr/29",
      "/ip4/192.0.0.8/ipcidr/32",
      "/ip4/192.0.0.170/ipcidr/32",
      "/ip4/192.0.0.171/ipcidr/32",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4"
    ]
  },

(Networks from the IANA list @aSmig posted above)

@Luzifer can you confirm that the swarm has them on? ipfs swarm filters?

# docker exec ipfs ipfs swarm filters
/ip4/192.168.0.0/ipcidr/16
/ip4/198.18.0.0/ipcidr/15
/ip4/198.51.100.0/ipcidr/24
/ip4/203.0.113.0/ipcidr/24
/ip4/10.0.0.0/ipcidr/8
/ip4/172.16.0.0/ipcidr/12
/ip4/192.0.0.0/ipcidr/29
/ip4/192.0.0.170/ipcidr/32
/ip4/169.254.0.0/ipcidr/16
/ip4/192.0.0.0/ipcidr/24
/ip4/240.0.0.0/ipcidr/4
/ip4/100.64.0.0/ipcidr/10
/ip4/192.0.0.8/ipcidr/32
/ip4/192.0.0.171/ipcidr/32
/ip4/192.0.2.0/ipcidr/24

It would be really cool if the parsing for ipfs swarm filters ignored entries starting with a #; that way you could comment your blocked addr list and still pipe it to the command.
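Until such parsing exists, stripping comments before feeding the list anywhere works. A rough sketch (the file path is just an example, and the final piped invocation is an assumption about how you'd wire it up):

```shell
# Sketch: keep a commented address-filter list and strip the comments
# before piping it to a command. The path is illustrative.
cat > /tmp/addr-filters.txt <<'EOF'
# RFC 1918 private ranges
/ip4/10.0.0.0/ipcidr/8
/ip4/172.16.0.0/ipcidr/12
/ip4/192.168.0.0/ipcidr/16
# carrier-grade NAT (RFC 6598)
/ip4/100.64.0.0/ipcidr/10
EOF
grep -v '^#' /tmp/addr-filters.txt
# With a daemon running, you could then feed each entry to the swarm:
#   grep -v '^#' /tmp/addr-filters.txt | xargs -n1 ipfs swarm filters add
```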

So far, neither feedback nor an alert from my hosting provider.

@Luzifer woot! Let's keep it up.

Still running, no complaints… I think the filters are working… Praise @whyrusleeping for building it!

I'm glad we've fixed that finally!

Had the same problem.

I'm not sure if this is a problem that pops up over and over. If it is, you could perhaps add a little note to the installation guide or disable local dialing in the default configuration. AFAIK there are approx. 250 nodes, so I don't think it is that important at this stage.

Anyways, interesting and awesome project. Keep up the good work!

the path to improvement:

we could also add a warning to ipfs daemon.

I've also been wanting an ipfs init --interactive that asks users questions like:

  • enter peer ID keysize (2048):
  • bootstrap to public network (yes):
  • dial local network addresses (yes):
  • enable mdns service discovery (yes):

#1247 should already be implemented.

Ah indeed, I didn't re-read it closely enough.

Just got blocked by Hetzner due to this a few minutes ago.

IMO, it really would make sense to at least print a warning (until #1246 is implemented) when starting ipfs (maybe with a link to this bug report), as having your host blocked due to ipfs is not a nice 'user experience'.

Hey @adrian-bl yeah it is annoying to deal with that.

How do you suggest detecting the environment in order to print the warning? We need some good heuristics. Such a warning should not be printed every time ipfs daemon runs, only when the user is likely to be running in an aggressive hosted environment like Hetzner.

Hi @jbenet

in an aggressive hosted environment like hetzner.

I wouldn't call them 'aggressive': I can somehow understand that they consider requests to private networks fishy and assume such hosts to be compromised.

How do you suggest detecting the environment to print out the warning?

The cleanest solution would be to print the warning if all interfaces of the host have public IPv4 addresses (ignoring 127.0.0.0/8). Another (flaky) option would be to check whether the subnet mask of all interfaces (excluding lo) is bigger than /24 (most hosting providers use something like /27 or /28, as their networks are routed).
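That first heuristic could be sketched roughly like this, run here against a canned two-line stand-in for `ip -4 -o addr` output (the sample addresses and the exact check are illustrative; a real implementation would read the live interface list):

```shell
# Rough sketch of the "only public addresses" heuristic: warn when no
# non-loopback interface carries an RFC 1918 address. The $addrs text
# is canned sample data standing in for real `ip -4 -o addr` output.
addrs='1: lo    inet 127.0.0.1/8 scope host lo
2: eth0  inet 203.0.113.7/27 scope global eth0'
rfc1918='inet (10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)'
if ! echo "$addrs" | grep -v ' lo ' | grep -qE "$rfc1918"; then
  echo "warning: no private addresses found; host looks like a public server"
fi
```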

But i wonder how the private IPs are actually ending up in the DHT: Is this intentional?

E.g. BitTorrent's Kademlia implementation doesn't have this issue/feature: a node doesn't need to know its own IP address (but can easily learn it by searching for its own node ID):

An announce request only includes the listening port of the node; the remote node (which receives the announce) then stores the remote address of the UDP packet (after verifying the token to avoid spoofing).

We may as well just implement #1246

agree a warning would be nice.

Also documenting in ipfs daemon

The mainline DHT is not designed to work in private disconnected networks, across many levels of NAT and many kinds of networks (including non-IP networks), and without access to the Internet. I'm tired of justifying the addresses points. Look up for more.

I'm tired of justifying the addresses points. Look up for more.

You don't have to justify yourself: I had no intention to criticize the decision.

I'm pretty new to IPFS and was just wondering why it behaves like this (I've written my own BitTorrent client, so my mind is 'locked' in the mainline DHT world).

I'll read through https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf which will probably answer my questions :-)

No worries, I'm just excusing myself for not giving you a complete answer or pointers.

Good thing to wonder though :)

Read through the issues in this repo about addresses.

fnkr commented

+1. Got an abuse message from Hetzner today. Please add ability to disable local peer-discovery!

Please see #1226 (comment) and the preceding comments for a solution to this issue.

If you still have an issue with this after trying the address filters, please file a new issue with details of what you have tried and which addresses are being dialed.

These filters are now applied to your config if you initialize ipfs with the 'server' profile (for an existing repo, ipfs config profile apply server makes the same changes):

ipfs init --profile=server