* Revert "Revert "Expand Local Notifications to support multiple Notification types (#2100)""
This reverts commit 5887337b88.
* Revert "Revert "fix protocol.MessageNotificationBody marshalling""
This reverts commit cf0a16dff1.
* Bump version to 0.70.0
* Added localnotifications for Transaction messages
* Fixed bug where Message.SigPubKey was presumed to be set
* Added a lookup for an existing contact in Messenger.allContacts
Additionally added functionality to add a contact to the messenger store
if it isn't present (see the sketch below)
* Get chat directly from Messenger.allChats store
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
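A minimal sketch of that lookup behaviour, using simplified stand-ins for the real Messenger, Message, and Contact types (the ID derivation is purely illustrative):
```
package sketch

import (
	"crypto/ecdsa"
	"errors"
	"fmt"
)

// Simplified stand-ins for the real status-go types.
type Contact struct{ ID string }
type Message struct{ SigPubKey *ecdsa.PublicKey }
type Messenger struct{ allContacts map[string]*Contact }

// contactForMessage returns the sender's contact, adding one to the store
// if it isn't present. The nil check covers the bug where SigPubKey was
// presumed to be set.
func (m *Messenger) contactForMessage(msg *Message) (*Contact, error) {
	if msg.SigPubKey == nil {
		return nil, errors.New("message has no signature public key")
	}
	id := fmt.Sprintf("%x%x", msg.SigPubKey.X, msg.SigPubKey.Y) // illustrative ID only
	if c, ok := m.allContacts[id]; ok {
		return c, nil
	}
	if m.allContacts == nil {
		m.allContacts = make(map[string]*Contact)
	}
	c := &Contact{ID: id}
	m.allContacts[id] = c
	return c, nil
}
```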
- avoid making an RPC request for a `zero - zero` range
- avoid checking the nonce for a lower block in the range if it is zero
in a higher block
- on `wallet_getTransfersByAddress`, history scanning is skipped if the
zero block has already been reached (see the sketch below)
- no need to fetch the block number before fetching token balances
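A hedged sketch of the skip logic above, with illustrative names rather than the actual wallet service API:
```
package sketch

import "math/big"

// Illustrative stand-in for a stored blocks range.
type blocksRange struct{ from, to *big.Int }

// canSkipScan reports whether history scanning for this range can be
// skipped, mirroring the "zero - zero range" and "zero block already
// reached" guards described above.
func canSkipScan(r blocksRange, zeroBlockReached bool) bool {
	if zeroBlockReached {
		return true
	}
	zero := big.NewInt(0)
	return r.from.Cmp(zero) == 0 && r.to.Cmp(zero) == 0
}
```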
This fix puts an end to a saga that essentially started during the
Status Prague Meetup at the end of October 2018. At the time we were
experiencing massive issues with `Connecting...` spinners in the app at
the venue we rented. We were pulling our hair out over what to do and
could not find the cause of the issue at the time.
Three months later I deployed the following change:
https://github.com/status-im/infra-eth-cluster/commit/63a13eed
Which used `iptables` to map the `443` port onto our `30504` Status node port
using `PREROUTING` chain and `REDIRECT` jump in order to fix issues people
have been complaining about when using WiFi networks in various venues:
https://github.com/status-im/status-react/issues/6351
Our thinking when trying to resolve the reported issue assumed that some
networks might block outgoing connections on non-standard ports other than
the usual `80`(HTTP)/`443`(HTTPS) which would disrupt Status connectivity.
While this fix could have indeed helped a few edge cases, what it really
did was cause the Status node to stop seeing actual public IPs of the clients.
By __pure accident__ this change caused the code we inherited from the
`go-ethereum` implementation of the DevP2P protocol to stop throttling new
incoming connections, because the IP they appeared to come from was a
`172.16.0.0/12` network address of the Docker bridge.
The `go-ethereum` code used the `!netutil.IsLAN(remoteIP)` check to
avoid throttling connections from local addresses, which included the
local Docker bridge address:
https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36
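A small self-contained illustration (not the upstream code itself) of why Docker-bridge source addresses slip past that guard: `netutil.IsLAN` treats `172.16.0.0/12` as a LAN range, so the throttle never applies to them:
```
package main

import (
	"fmt"
	"net"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

func main() {
	// A Docker bridge address counts as LAN, so `!netutil.IsLAN(remoteIP)`
	// is false and the inbound throttle is skipped; a real public client
	// address is subject to it.
	for _, addr := range []string{"172.17.0.1", "84.15.20.30"} {
		ip := net.ParseIP(addr)
		fmt.Printf("%-12s eligible for throttling: %v\n", addr, !netutil.IsLAN(ip))
	}
}
```
This prints `false` for the Docker bridge IP and `true` for the public one, which is exactly the asymmetry the iptables change introduced.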
The fix, intended to target a small number of networks with fortified
firewall configurations, accidentally resolved our issues with the
`Connecting...` prompts that our application showed us en masse during
our Prague Meetup. Part of the reason is that such venues normally hand
out local IP addresses and use NAT to translate them onto the single
public IP address they possess.
Since our application is supposed to be usable from within networks
behind NAT, like airport WiFi networks for example, it makes no sense to
keep the inbound connection throttle time implemented in `go-ethereum`.
I'm leaving `inboundThrottleTime` in because it's used to calculate the
value of `dialHistoryExpiration` in:
`vendor/github.com/ethereum/go-ethereum/p2p/dial.go`
I believe reducing that value once we deploy this change should also
increase the speed with which the Status application is able to reconnect
to a node that was temporarily unavailable, instead of waiting the full
5*30 seconds.
Research issue: https://github.com/status-im/infra-eth-cluster/issues/35
Signed-off-by: Jakub Sokołowski <jakub@status.im>
It looks like profile chats are created as public chats, and the code
will drop messages for them.
This commit works around the issue by checking for the "@" prefix in
profile chat names, until we fix it properly by migrating those chats.
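A sketch of the workaround; the helper name is hypothetical, and the real check lives in the messenger's chat-matching code:
```
package sketch

import "strings"

// isProfileChat reports whether a "public" chat is actually a profile
// chat, which currently carries an "@" prefix in its name. Messages for
// such chats should not be dropped as unknown public chats.
func isProfileChat(chatName string) bool {
	return strings.HasPrefix(chatName, "@")
}
```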
- old existing ranges are merged when the wallet service is started
- a new range is merged with an existing one if possible (see the sketch
below)
This will decrease the number of entries in the blocks_range table, as
currently it can grow indefinitely (@flexsurfer reported 23307 entries).
This change is also needed for further optimisations of RPC usage.
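A sketch of the merging step over in-memory ranges (the real implementation operates on the blocks_range table in the database):
```
package sketch

import (
	"math/big"
	"sort"
)

type blockRange struct{ from, to *big.Int }

// mergeRanges collapses overlapping or adjacent ranges into a single
// entry each, which is the effect described above for blocks_range.
func mergeRanges(ranges []blockRange) []blockRange {
	if len(ranges) == 0 {
		return ranges
	}
	sort.Slice(ranges, func(i, j int) bool {
		return ranges[i].from.Cmp(ranges[j].from) < 0
	})
	merged := []blockRange{ranges[0]}
	for _, r := range ranges[1:] {
		last := &merged[len(merged)-1]
		// Mergeable when r starts inside or right after the last range.
		limit := new(big.Int).Add(last.to, big.NewInt(1))
		if r.from.Cmp(limit) <= 0 {
			if r.to.Cmp(last.to) > 0 {
				last.to = r.to
			}
		} else {
			merged = append(merged, r)
		}
	}
	return merged
}
```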
With the introduction of the new non-blocking code, events for the
peerpool might come out-of-order. That's generally not an issue, but it
made tests fail.
I have changed the code so that the order is more consistent (it's still
theoretically possible that a stop signal would arrive out of order in a
real scenario, but the impact is low and I don't want to change this code
too much).
There was another deadlock in the peer pool.
Because we made the event handler asynchronous, another deadlock popped
up, as the loop locks the global peerpool lock before processing events.
But the handlers also take the global lock, effectively resulting in the
same situation we had before, i.e. the loop is not running.
THE LOOP MUST BE RUNNING AT ALL TIMES, OTHERWISE THE SERVER HANGS.
There might be an issue on how we handle metrics, which causes the p2p
server to hang.
updateNodeMetrics calls a method on the p2p server, which
blocks until the server is available:
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L301)
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L746)
If there's back-pressure on the peer event feed
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L783)
the event channel above might become full while updateNodeMetrics is
called, which means it is never consumed; the server blocks publishing on
it, and the two deadlock (the server waits for the channel above to be
consumed, while this code waits for the server to respond to peerCount,
which is handled in the same event loop).
Calling it in a different goroutine allows this code to keep processing
peer-added events, so the server does not lock up and keeps processing
requests.
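A minimal sketch of that pattern, with placeholder names (`peerEvents`, `updateNodeMetrics`) standing in for the real ones: the event loop keeps draining the subscription while the blocking metrics call runs in its own goroutine:
```
package sketch

// consumePeerEvents keeps draining the peer event channel and pushes the
// blocking metrics call into its own goroutine, so the p2p server is never
// left waiting on an unconsumed subscription channel.
func consumePeerEvents(peerEvents <-chan struct{}, updateNodeMetrics func()) {
	for range peerEvents {
		go updateNodeMetrics() // may block; must not block the event loop
	}
}
```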
This is a bit complicated, so:
1) Peerpool was subscribing to `event.Feed`, which is a global event
emitter for ethereum.
2) The p2p.Server was publishing on `event.Feed`; this triggered, in the
same routine, a further publish on `event.Feed`.
3) Peerpool was listening to `event.Feed`, reacting to it, and, in the same
routine, triggering some code on p2p.Server that would publish on
`event.Feed`.
This meant that if the channel was unbuffered, it would deadlock, as
peerPool would not be consuming when it published (effectively the same
goroutine publishes and listens, through a lot of indirection and
unbuffered channels, p2p.Server->event.Feed).
The channel, though, was buffered with size 10, which meant that most of
the time this was fine.
The issue is that peerpool is not the only producer on this channel.
So it's possible that while it is processing an event, the buffer fills
up, and it then hangs trying to publish while nobody is listening to the
channel, hanging EVERYTHING.
At least that's what I think; it needs to be tested, but it's definitely
an issue.
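A self-contained illustration of the suspected failure mode (not the actual peerpool code): a subscriber that publishes back into the same feed from the goroutine that is supposed to drain its channel hangs as soon as the channel cannot absorb the send:
```
package sketch

import "github.com/ethereum/go-ethereum/event"

// deadlockProne shows the shape of the problem: the goroutine that should
// be draining `ch` also calls feed.Send, but Send only returns once every
// subscribed channel (including ch) has received the value. Unbuffered,
// this hangs on the first event; with a buffer of 10 it hangs once other
// producers fill the buffer first.
func deadlockProne(feed *event.Feed) {
	ch := make(chan int) // unbuffered to make the problem obvious
	sub := feed.Subscribe(ch)
	defer sub.Unsubscribe()

	for ev := range ch {
		feed.Send(ev + 1) // blocks: nobody is reading ch while we are here
	}
}
```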
I kept the code changes to a minimum; this code is a bit hairy, but it's
fairly critical, so I don't want to make too many changes.
We were not locking before accessing the contacts map and it would panic
in some cases.
I have changed the code to pull contacts from the db so we move away from
having locks.
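For illustration, with assumed simplified types, the two ways out of that crash: guard the map with a lock, or (as this change does) drop the shared map and read contacts from the database on demand:
```
package sketch

import "sync"

type Contact struct{ ID string }

// contactStore guards the shared map; without the RWMutex, concurrent
// readers and writers make the Go runtime abort with
// "concurrent map read and map write".
type contactStore struct {
	mu       sync.RWMutex
	contacts map[string]*Contact
}

func (s *contactStore) get(id string) (*Contact, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	c, ok := s.contacts[id]
	return c, ok
}

func (s *contactStore) set(c *Contact) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.contacts == nil {
		s.contacts = make(map[string]*Contact)
	}
	s.contacts[c.ID] = c
}
```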
There seems to be an issue with version 1.3: querying for topics on
postgres returns an error:
```
panic: pq: invalid byte sequence for encoding "UTF8"
```
Upgrading pq fixes the issue
¯\_(ツ)_/¯
Previously I added the prefix directly in the `docker-image` target.
But that doesn't make sense if you override the `RELEASE_TAG`.
Signed-off-by: Jakub Sokołowski <jakub@status.im>
When an error occurs (or the peer disconnects), we return from the
decorator as an error is published to the chan.
There are, though, still 5 or 6 goroutines that want to write to that
channel, and at least 3/4 of them will be left hanging, stuck publishing
on the chan.
This, though, is probably not the cause of
https://github.com/status-im/infra-eth-cluster/issues/39
but it fills up the stack trace with hung goroutines.
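A sketch of one common way to avoid leaving producers stuck, under an assumed structure rather than the actual decorator code: buffer the error channel so every producer can publish and exit even after the reader has returned:
```
package sketch

// runProducers returns on the first error, but because errc is buffered to
// hold one value per producer, the remaining goroutines can still publish
// their result and exit instead of hanging on the send.
func runProducers(workers []func() error) error {
	errc := make(chan error, len(workers)) // buffered: sends never block
	for _, w := range workers {
		w := w
		go func() { errc <- w() }()
	}
	for range workers {
		if err := <-errc; err != nil {
			return err
		}
	}
	return nil
}
```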
There was a bug on status-react where it would save filters that were
not listened to.
This commit adds a task to clean up those filters as they might result
in long syncing times.
This commit also returns topics/ranges/mailservers from the messenger in
order to make the initialization of the app simpler and start moving
logic to status-go.
It also removes whisper from vendor.