* Revert "Revert "Expand Local Notifications to support multiple Notification types (#2100)""
This reverts commit 5887337b88.
* Revert "Revert "fix protocol.MessageNotificationBody marshalling""
This reverts commit cf0a16dff1.
* Bump version to 0.70.0
* Added localnotifications for Transaction messages
* Fixed bug where Message.SigPubKey was presumed to be set
* Added lookup for contact existing in Messenger.allContacts
Additionally added functionality to add a contact to the messenger store if it isn't present
* Get chat directly from Messenger.allChats store
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
- avoid making RPC request for `zero - zero` range (see the sketch after this list)
- avoid checking the nonce for a lower block in the range if it is zero
in a higher block
- on `wallet_getTransfersByAddress` scanning of history is skipped if
zero block is already reached
- no need to fetch block num before fetching token balances
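A minimal sketch of the first and third checks above; the helper and parameter names are hypothetical, not the actual wallet service API:

```go
package walletsketch

import "math/big"

// shouldScanHistory is a hypothetical helper illustrating the checks above:
// skip a `zero - zero` range entirely, and skip history scanning once the
// zero block has already been reached for the address.
func shouldScanHistory(from, to *big.Int, zeroBlockReached bool) bool {
	if zeroBlockReached {
		return false // history is complete; no RPC requests needed
	}
	if from.Sign() == 0 && to.Sign() == 0 {
		return false // nothing to fetch in an empty range
	}
	return true
}
```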
This fix puts an end to a saga that essentially started during the
Status Prague Meetup at the end of October 2018. At the time we were
experiencing massive issues with `Connecting...` spinners in the app in the
venue we rented. We were pulling our hair out over what to do and we could not
find the cause of the issue at the time.
Three months later I deployed the following change:
https://github.com/status-im/infra-eth-cluster/commit/63a13eed
Which used `iptables` to map the `443` port onto our `30504` Status node port
using `PREROUTING` chain and `REDIRECT` jump in order to fix issues people
have been complaining about when using WiFi networks in various venues:
https://github.com/status-im/status-react/issues/6351
Our thinking when trying to resolve the reported issue assumed that some
networks might block outgoing connections on non-standard ports other than
the usual `80`(HTTP)/`443`(HTTPS) which would disrupt Status connectivity.
While this fix could have indeed helped a few edge cases, what it really
did was cause the Status node to stop seeing actual public IPs of the clients.
By __pure accident__ this change caused the code we inherited from the
`go-ethereum` implementation of the DevP2P protocol to stop throttling new
incoming connections, because the IP they appeared to come from was a
`172.16.0.0/12` network address of the Docker bridge.
The `go-ethereum` code used the `!netutil.IsLAN(remoteIP)` check to
avoid throttling connections from local addresses, which included the
local Docker bridge address:
https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36
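A small self-contained illustration of why the bridge traffic was exempt, using the real `netutil.IsLAN` helper (the addresses are just examples):

```go
package main

import (
	"fmt"
	"net"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

func main() {
	// Default Docker bridge address (inside 172.16.0.0/12) vs. a public address.
	bridgeIP := net.ParseIP("172.17.0.1")
	publicIP := net.ParseIP("203.0.113.7")

	// go-ethereum skips the inbound-connection throttle for LAN addresses,
	// so connections that appeared to come from the bridge were never throttled.
	fmt.Println(netutil.IsLAN(bridgeIP)) // true  -> throttle skipped
	fmt.Println(netutil.IsLAN(publicIP)) // false -> subject to inboundThrottleTime
}
```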
The fix, intended to target a small number of networks with fortified
firewall configurations, accidentally resolved our issues with the
`Connecting...` prompts that our application showed en masse during
our Prague Meetup. Part of the reason is that such venues
normally give out local IP addresses and use NAT to translate them onto
the only public IP address they possess.
Since our application is supposed to be usable from within networks
behind NAT like airport WiFi networks for example, it makes no sense to
keep the inbound connection throttle time implemented in `go-ethereum`.
I'm leaving `inboundThrottleTime` in because it's used to calculate
value for `dialHistoryExpiration` in:
`vendor/github.com/ethereum/go-ethereum/p2p/dial.go`
I believe reducing that value once we deploy this change should also
increase the speed with which the Status application is able to reconnect
to a node that was temporarily unavailable, instead of waiting the 5*30 seconds.
Research issue: https://github.com/status-im/infra-eth-cluster/issues/35
Signed-off-by: Jakub Sokołowski <jakub@status.im>
It looks like profile chats are created as public chats, and the code
will drop messages for them.
This commit fixes the issue by checking for the "@" prefix in profile
chat names, until we fix the issue by migrating those chats.
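An illustrative sketch of the workaround (the real chat fields and helper names in status-go may differ):

```go
package chatsketch

import "strings"

// isProfileChat is a hypothetical helper showing the workaround described
// above: profile chats currently reuse the public-chat type but carry an
// "@"-prefixed name, so messages for them should not be dropped as
// unexpected public chats.
func isProfileChat(chatName string) bool {
	return strings.HasPrefix(chatName, "@")
}
```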
- old existing ranges are merged when wallet service is started
- a new range is merged with an existing one if possible
This will decrease the number of entries in the blocks_range table as
currently it can grow indefinitely (@flexsurfer reported 23307 entries).
This change is also needed for further optimisations of RPC usage.
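A sketch of the merge idea under the assumption that ranges are simple [from, to] block intervals; the type and function names are illustrative, not the actual wallet service code:

```go
package walletsketch

import (
	"math/big"
	"sort"
)

// BlockRange is an illustrative stand-in for a row in the blocks_range table.
type BlockRange struct {
	From, To *big.Int
}

// mergeRanges sorts ranges by lower bound and collapses ranges that overlap
// or touch, so the table stops growing indefinitely.
func mergeRanges(ranges []BlockRange) []BlockRange {
	if len(ranges) == 0 {
		return ranges
	}
	sort.Slice(ranges, func(i, j int) bool {
		return ranges[i].From.Cmp(ranges[j].From) < 0
	})
	merged := []BlockRange{ranges[0]}
	for _, r := range ranges[1:] {
		last := &merged[len(merged)-1]
		// Overlapping or adjacent (r.From <= last.To + 1): extend the previous range.
		if r.From.Cmp(new(big.Int).Add(last.To, big.NewInt(1))) <= 0 {
			if r.To.Cmp(last.To) > 0 {
				last.To = r.To
			}
			continue
		}
		merged = append(merged, r)
	}
	return merged
}
```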
With the introduction of the new non-blocking code, events for the
peerpool might come out-of-order. That's generally not an issue, but it
made tests fail.
I have changed the code so that order is more consistent (It's still
theoretically possible that a stop signal would arrive out of order in a
real scenario, but impact is low and I don't want to change this code
too much).
There was another deadlock in the peer pool.
Because we made the event handler asynchronous, another deadlock popped
up, as the loop locks the global peerpool lock before processing events.
But the handlers also take the global lock, effectively resulting in the
same situation we had before, i.e. the loop is not running.
THE LOOP MUST BE RUNNING AT ALL TIMES OTHERWISE THE SERVER HANGS.
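A generic sketch of the pattern needed here (not the actual peerpool code): copy the handlers under the lock, release it, and only then dispatch, so handlers that themselves take the lock cannot stall the loop.

```go
package peerpoolsketch

import "sync"

type event struct{}
type handlerFunc func(event)

type pool struct {
	mu       sync.Mutex
	handlers []handlerFunc
}

// loop dispatches events without holding the pool lock, so a handler that
// takes p.mu cannot deadlock the loop and hang the server.
func (p *pool) loop(events <-chan event) {
	for ev := range events {
		p.mu.Lock()
		handlers := append([]handlerFunc(nil), p.handlers...) // snapshot under lock
		p.mu.Unlock()
		for _, h := range handlers {
			h(ev)
		}
	}
}
```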
There might be an issue on how we handle metrics, which causes the p2p
server to hang.
updateNodeMetrics calls a method on the p2p server, which
blocks until the server is available:
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L301)
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L746)
If there's back-pressure on the peer event feed
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L783)
The event channel above might become full while updateNodeMetrics
is called, which means it is never consumed, the server blocks on publishing on
it, and the two will deadlock (the server waits for the channel above to be consumed,
while this code waits for the server to respond to peerCount, which is handled in the
same event loop).
Calling it in a different goroutine allows this code to keep processing
peer-added events, so the server will not lock up and will keep processing requests.
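A sketch of the workaround under these assumptions (the metrics helper and loop names are illustrative, not the exact status-go code):

```go
package metricssketch

import (
	"log"

	"github.com/ethereum/go-ethereum/p2p"
)

// updateNodeMetrics stands in for the real metrics helper; it calls back into
// the p2p server (e.g. PeerCount), which only answers from the server's own
// run loop.
func updateNodeMetrics(server *p2p.Server) error {
	_ = server.PeerCount()
	return nil
}

// subscribeLoop sketches the fix: the blocking metrics call runs in its own
// goroutine, so this loop keeps consuming peer events and the server is never
// stuck publishing to a full feed channel while waiting for PeerCount.
func subscribeLoop(server *p2p.Server, events <-chan *p2p.PeerEvent) {
	for range events {
		go func() {
			if err := updateNodeMetrics(server); err != nil {
				log.Printf("failed to update node metrics: %v", err)
			}
		}()
	}
}
```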
This is a bit complicated, so:
1) Peerpool was subscribing to `event.Feed`, which is a global event
emitter for ethereum.
2) The p2p.Server was publishing on `event.Feed`, this triggered in the
same routine a publish on `event.Feed`.
3) Peerpool was listening to `event.Feed`, reacting to it, and in the same
routine, triggering some code on p2p.Server that would publish on
`event.Feed`
This meant that if the channel was unbuffered, it would deadlock, as
peerPool would not be consuming while it was publishing (the same
goroutine effectively publishes and listens, through a lot of indirection
and non-buffered channels, p2p.Server->event.Feed)
The channel though was a buffered channel with size 10, and this meant that most of the time it
was fine.
The issue is that peerpool is not the only producer to this channel.
So it's possible that while it is processing an event, the buffer would
fill up, and it would hang trying to publish while nobody is listening
to the channel, hanging EVERYTHING.
At least that's what I think, needs to be tested, but definitely an
issue.
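A simplified sketch of the failure mode (not the real peerpool code; the handler is a stand-in): the consumer of the subscription channel also publishes back into the same feed, so once other producers fill the 10-slot buffer, `feed.Send` blocks, nothing drains the channel, and everything behind the feed hangs.

```go
package feedsketch

import (
	"github.com/ethereum/go-ethereum/event"
	"github.com/ethereum/go-ethereum/p2p"
)

func handle(*p2p.PeerEvent) {}

// consume subscribes to the feed and, while handling an event, publishes back
// into the same feed from the same goroutine. If the buffer is already full
// (other producers exist), Send blocks, the loop stops draining ch, and the
// whole pipeline deadlocks.
func consume(feed *event.Feed) {
	ch := make(chan *p2p.PeerEvent, 10) // buffered, so it "mostly" works
	sub := feed.Subscribe(ch)
	defer sub.Unsubscribe()

	for ev := range ch {
		handle(ev)
		feed.Send(&p2p.PeerEvent{}) // publishing from the consumer closes the cycle
	}
}
```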
I kept the code changes to a minimum, this code is a bit hairy, but it's
fairly critical so I don't want to make too many changes.
We were not locking before accessing the contacts map and it would panic
in some cases.
I have changed the code to pull contacts from db so we move away from
having locks.
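A minimal sketch of the missing synchronisation (the types are illustrative; the actual change goes further and reads contacts from the database instead of a shared map):

```go
package contactsketch

import "sync"

type Contact struct{ ID string }

// contactStore guards every access to the shared map with a mutex, which is
// the minimal fix for the panic described above.
type contactStore struct {
	mu       sync.RWMutex
	contacts map[string]*Contact
}

func (s *contactStore) Get(id string) (*Contact, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	c, ok := s.contacts[id]
	return c, ok
}
```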
There seems to be an issue with version 1.3: querying for topics on
postgres returns an error:
```
panic: pq: invalid byte sequence for encoding "UTF8"
```
Upgrading pq fixes the issue
¯\_(ツ)_/¯
Previously I added the prefix directly in the `docker-image` target.
But that doesn't make sense if you override the `RELEASE_TAG`.
Signed-off-by: Jakub Sokołowski <jakub@status.im>
When an error occurs (or the peer disconnects), we return from the decorator
as an error is published to the chan.
There are though still 5 or 6 goroutines that want to write to that
channel, and at least 3/4 of them will be left hanging, stuck
publishing on the chan.
This though is probably not the cause of
https://github.com/status-im/infra-eth-cluster/issues/39, but it
fills up the stack trace with hung goroutines.
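A hedged sketch of one way to avoid those hung writers (names are illustrative, not the actual decorator code): give the error channel capacity for every producer, so each goroutine can publish its error and exit even after the consumer has already returned.

```go
package decoratorsketch

// runWorkers starts every worker in its own goroutine and returns on the
// first error. Because the channel has room for one error per goroutine, the
// remaining sends still complete and no writer is left hanging.
func runWorkers(workers []func() error) error {
	errCh := make(chan error, len(workers)) // room for one error per goroutine
	for _, w := range workers {
		w := w
		go func() { errCh <- w() }()
	}
	for range workers {
		if err := <-errCh; err != nil {
			return err
		}
	}
	return nil
}
```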
There was a bug on status-react where it would save filters that were
not listened to.
This commit adds a task to clean up those filters as they might result
in long syncing times.
This commit also returns topics/ranges/mailservers from messenger in
order to make the initialization of the app simpler and start moving
logic to status-go.
It also removes whisper from vendor.
One of the issues we noticed is that the partitioned topic used for push
notifications is heavy in traffic, as any user using a particular
mailserver will use that partitioned topic to register for PNs.
This commit moves from the partitioned topic to the personal topic of
the PN server, so it does not clash with other users that might happen
to have the same partitioned topic as the mailserver, resulting in long
sync times.
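A rough sketch of the distinction, purely illustrative: the constants, topic format, and derivation below are assumptions, not the exact status-go implementation. A partitioned topic buckets many public keys into a small number of shared topics (so the PN server shares it with every user in the same bucket), while a personal topic is derived from the full public key.

```go
package topicsketch

import (
	"crypto/ecdsa"
	"fmt"
	"math/big"
)

const nPartitions = 5000 // illustrative partition count

// partitionedTopic buckets a public key into one of nPartitions shared topics.
func partitionedTopic(pub *ecdsa.PublicKey) string {
	partition := new(big.Int).Mod(pub.X, big.NewInt(nPartitions))
	return fmt.Sprintf("contact-discovery-%d", partition)
}

// personalTopic is unique to the key, so registering for PNs on it does not
// pull in traffic from unrelated users who share a partition.
func personalTopic(pub *ecdsa.PublicKey) string {
	return fmt.Sprintf("contact-discovery-%x", append(pub.X.Bytes(), pub.Y.Bytes()...))
}
```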
Another issue that will need to be addressed separately is that once you
send a message to a topic, because of the way waku/whisper works,
you have to register to that topic, meaning that you will receive
its data. Currently waku does not support unsubscribing from a topic
without logging in and out, so that also needs to be addressed.
When sending a message in a private group chat we send the whole history
for redundancy and allow out-of-order processing.
This can be very expensive in some chats, resulting in a long delay when
sending a message and calculating the PoW.
This commit improves performance by only forwarding the events necessary
for the user to be able to construct the group chat and process the
message correctly.
This commit fixes a couple of issues:
1) Emojis were sent to every member of the group chat, regardless of
whether they had joined
2) We don't want to wrap emojis, as there's no need to do so; only
messages are to be wrapped
The topics field was not passed to the mailserver, which meant that
queries were still using the old bloom filter.
Hopefully this is the last place where we need to pass this.
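A sketch of the request shape involved, with hypothetical field names (the real status-go request struct may differ): the point is that when `Topics` is left empty, the query falls back to the old bloom filter.

```go
package mailserversketch

// TopicType mirrors the 4-byte Whisper/Waku topic.
type TopicType [4]byte

// messagesRequest is an illustrative history query for a mailserver; the
// field names are assumptions, not the exact status-go types.
type messagesRequest struct {
	From   uint32      // start of the requested time range (unix seconds)
	To     uint32      // end of the requested time range (unix seconds)
	Bloom  []byte      // legacy bloom filter, still used if Topics is empty
	Topics []TopicType // explicit topics; must be populated to avoid the bloom fallback
}
```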
Using the `latest` tag is dangerous for non-technical users.
And updating the `latest` tag willy-nilly is also bad.
Signed-off-by: Jakub Sokołowski <jakub@status.im>