Jakub Sokołowski
fe8dab7391
This fix puts an end to a saga that essentially started during the Status Prague Meetup at the end of October 2018. At the time we were experiencing massive issues with `Connecting...` spinners in the app at the venue we had rented. We were pulling our hair out trying to figure out what to do, and we could not find the cause of the issue at the time.

Three months later I deployed the following change: https://github.com/status-im/infra-eth-cluster/commit/63a13eed

It used `iptables` to map port `443` onto our Status node port `30504`, using the `PREROUTING` chain and a `REDIRECT` jump, in order to fix issues people had been reporting when using WiFi networks in various venues: https://github.com/status-im/status-react/issues/6351

When trying to resolve the reported issue, our thinking was that some networks might block outgoing connections on non-standard ports, i.e. anything other than the usual `80` (HTTP) and `443` (HTTPS), which would disrupt Status connectivity.

While this fix may indeed have helped a few edge cases, what it really did was cause the Status node to stop seeing the actual public IPs of the clients.

By __pure accident__, this change caused the code we inherited from the `go-ethereum` implementation of the DevP2P protocol to stop throttling new incoming connections, because the IP they appeared to come from was a `172.16.0.0/12` address belonging to the Docker bridge. The `go-ethereum` code uses the `!netutil.IsLAN(remoteIP)` check to avoid throttling connections from local addresses, which includes the local Docker bridge address (a sketch of this check follows at the end of this message): https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36

A fix intended to target a small number of networks with fortified firewall configurations accidentally resolved the `Connecting...` prompts that our application showed en masse during our Prague Meetup. Part of the reason is that such venues normally hand out local IP addresses and use NAT to translate them onto the single public IP address they possess.

Since our application is supposed to be usable from within networks behind NAT, such as airport WiFi networks, it makes no sense to keep the inbound connection throttle time implemented in `go-ethereum`.

I'm leaving `inboundThrottleTime` in because it's used to calculate the value of `dialHistoryExpiration` in `vendor/github.com/ethereum/go-ethereum/p2p/dial.go` (see the second sketch below). I believe reducing that value once we deploy this change should also increase the speed with which the Status application is able to reconnect to a node that was temporarily unavailable, instead of waiting the full 5*30 seconds.

Research issue: https://github.com/status-im/infra-eth-cluster/issues/35

Signed-off-by: Jakub Sokołowski <jakub@status.im>
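For illustration, here is a minimal, self-contained sketch of the throttling behavior described above. It is not the vendored code itself: `isLAN`, `lastSeen`, and `checkInboundConn` are simplified stand-ins for `netutil.IsLAN` and the inbound-history bookkeeping in `p2p/server.go`, and the LAN ranges shown are the standard private blocks, including the `172.16.0.0/12` range that covers the Docker bridge:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// isLAN mimics netutil.IsLAN from the vendored go-ethereum p2p/netutil
// package: loopback and private ranges count as LAN, notably including
// 172.16.0.0/12, which covers the Docker bridge.
func isLAN(ip net.IP) bool {
	if ip.IsLoopback() {
		return true
	}
	for _, cidr := range []string{"0.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"} {
		if _, block, err := net.ParseCIDR(cidr); err == nil && block.Contains(ip) {
			return true
		}
	}
	return false
}

const inboundThrottleTime = 30 * time.Second

// lastSeen stands in for the server's inbound history: remote IP -> expiry.
var lastSeen = map[string]time.Time{}

// checkInboundConn sketches the check in p2p/server.go: a non-LAN IP that
// connected less than inboundThrottleTime ago is rejected.
func checkInboundConn(remoteIP net.IP) error {
	if expiry, ok := lastSeen[remoteIP.String()]; ok && time.Now().Before(expiry) && !isLAN(remoteIP) {
		return fmt.Errorf("too many attempts from %s", remoteIP)
	}
	lastSeen[remoteIP.String()] = time.Now().Add(inboundThrottleTime)
	return nil
}

func main() {
	public := net.ParseIP("203.0.113.7") // a client's real public IP
	bridge := net.ParseIP("172.17.0.1")  // what the node saw after the iptables REDIRECT
	fmt.Println(checkInboundConn(public), checkInboundConn(public)) // second attempt gets throttled
	fmt.Println(checkInboundConn(bridge), checkInboundConn(bridge)) // never throttled: counts as LAN
}
```

This is exactly the accident described above: once the `REDIRECT` rule made every client appear as the Docker bridge address, all inbound connections took the LAN path and the throttle stopped firing.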
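And a hedged sketch of the `inboundThrottleTime` / `dialHistoryExpiration` relationship mentioned above. The exact formula lives in the vendored `p2p/dial.go`; the derivation here is illustrative, chosen only to match the 5*30-second wait described in this message:

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative values only: the real constants live in the vendored
// p2p/server.go and p2p/dial.go, and the exact formula there may differ.
const inboundThrottleTime = 30 * time.Second

// dialHistoryExpiration is derived from inboundThrottleTime; per the text
// above, the resulting wait before re-dialing a temporarily unavailable
// node works out to 5*30 seconds.
const dialHistoryExpiration = 5 * inboundThrottleTime

func main() {
	fmt.Println("re-dial wait:", dialHistoryExpiration) // prints: re-dial wait: 2m30s
}
```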