* Migrations in place, how to run them?
* Remove down migrations and touch database.go
* Database and Database Test package in place, added functions to get and store app metrics
* make generate output
* Minor bug fix on app metrics insert and select
* Add a validation layer to restrict what can be saved in the database
* Make validation more terse, throw error if schema doesn't exist, expose appmetrics service
* service updates
* Compute all errors before sending them out
* Trying to bring a closure to appmetrics go
* Expose appmetrics via an API, skip fancy
* Store the value as json.RawMessage to ease parsing (see the sketch after this list)
* Introduce a buffered chan with a magic cap of 8 to minimize writes to the DB. Tests for service and API. Also expose the GetAppMetrics function.
* Lint issues
* Remove autoincrement, undo waku.json changes, fix error being shadowed, return nil where nil ought to be returned, get rid of buffered channel
* Bump migration number
* Fix API factory usage
* Add comment re:json.RawMessage instead of strings
* Get rid of test vars, throw save error inside the loop
* Update version
Co-authored-by: Samuel Hawksby-Robinson <samuel@samyoul.com>
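The sketch below illustrates the json.RawMessage point from the list above: the metric value is kept as raw JSON so it can be stored and returned verbatim, without an extra decode/encode round trip. The type, field, and table names are illustrative assumptions, not the actual appmetrics schema.

```go
// Hypothetical shape of an app metric record; not the actual appmetrics types.
package appmetrics

import (
	"database/sql"
	"encoding/json"
)

type AppMetric struct {
	Event string          `json:"event"`
	Value json.RawMessage `json:"value"` // raw JSON, parsed only by the consumer
}

// saveAppMetric writes the value bytes as-is; json.RawMessage is a []byte, so
// no re-marshalling is needed on the way into or out of the database.
func saveAppMetric(db *sql.DB, m AppMetric) error {
	_, err := db.Exec("INSERT INTO app_metrics (event, value) VALUES (?, ?)", m.Event, []byte(m.Value))
	return err
}
```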
This commit expands the confirmation mechanism to allow private group
chat messages to be confirmed:
Changes:
- Added a separate table for message confirmations, as group chat
messages have the same messageID but multiple datasyncIDs (see the
sketch below)
- Removed DataSyncID from the raw message (I haven't removed the column
as it can't be done in sqlite without copying over the table)
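A minimal sketch of the kind of migration the first change implies, keyed by (message_id, datasync_id) so one group chat message can have one confirmation row per datasync ID. Table and column names here are assumptions, not the schema actually added in this commit.

```go
// Hypothetical migration: one row per (message_id, datasync_id) pair, so a
// group chat message with several datasync IDs gets several confirmation rows.
package migrations

const addMessageConfirmations = `
CREATE TABLE IF NOT EXISTS raw_message_confirmations (
    message_id   BLOB NOT NULL,
    datasync_id  BLOB NOT NULL,
    confirmed_at INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (message_id, datasync_id)
);`
```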
* Revert "Revert "Expand Local Notifications to support multiple Notification types (#2100)""
This reverts commit 5887337b88.
* Revert "Revert "fix protocol.MessageNotificationBody marshalling""
This reverts commit cf0a16dff1.
* Bump version to 0.70.0
* Added localnotifications for Transaction messages
* Fixed bug where Message.SigPubKey was presumed to be set
* Added a lookup for whether a contact exists in Messenger.allContacts
Additionally added functionality to add a contact to the messenger store if it isn't present
* Get chat directly from Messenger.allChats store
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
- avoid making an RPC request for a `zero - zero` range (a sketch of the
skip logic follows this list)
- avoid checking the nonce for a lower block in the range if it is zero
in a higher block
- on `wallet_getTransfersByAddress`, history scanning is skipped if the
zero block has already been reached
- no need to fetch the block number before fetching token balances
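A rough sketch of the skip logic from the first two points, under assumed names that do not come from the wallet service code:

```go
package wallet

import "math/big"

func isZero(n *big.Int) bool { return n == nil || n.Sign() == 0 }

// shouldFetchRange reports whether an RPC request is needed for [from, to];
// a `zero - zero` range carries no information, so it is skipped.
func shouldFetchRange(from, to *big.Int) bool {
	return !(isZero(from) && isZero(to))
}

// shouldCheckLowerNonce reports whether the nonce at the lower block of a
// range still needs checking: if it is already zero at the higher block, it
// cannot be non-zero lower down.
func shouldCheckLowerNonce(nonceAtHigherBlock uint64) bool {
	return nonceAtHigherBlock != 0
}
```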
This fix puts an end to a saga that essentially started during the
Status Prague Meetup at the end of October 2018. At the time we were
experiencing massive issues with `Connecting...` spinners in the app at the
venue we rented. We were pulling our hair out over what to do and could not
find the cause of the issue at the time.
Three months later I deployed the following change:
https://github.com/status-im/infra-eth-cluster/commit/63a13eed
Which used `iptables` to map port `443` onto our `30504` Status node port
using the `PREROUTING` chain and a `REDIRECT` jump in order to fix issues
people had been complaining about when using WiFi networks in various venues:
https://github.com/status-im/status-react/issues/6351
Our thinking when trying to resolve the reported issue assumed that some
networks might block outgoing connections on non-standard ports other than
the usual `80`(HTTP)/`443`(HTTPS) which would disrupt Status connectivity.
While this fix could have indeed helped a few edge cases, what it really
did was cause the Status node to stop seeing actual public IPs of the clients.
By __pure accident__ this change caused the code we inherited from the
`go-ethereum` implementation of the DevP2P protocol to stop throttling new
incoming connections, because the IP they appeared to come from was a
`172.16.0.0/12` network address of the Docker bridge.
The `go-ethereum` code used the `!netutil.IsLAN(remoteIP)` check to
avoid throttling connections from local addresses, which included the
local Docker bridge address:
https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36
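The following is a simplified paraphrase of that throttling condition, not the vendored code verbatim; the history tracking is reduced to an assumed map. The point is that inbound peers are only throttled when `netutil.IsLAN` returns false, which is why connections appearing to come from the Docker bridge address escaped the throttle.

```go
package example

import (
	"errors"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

var errTooManyAttempts = errors.New("too many connection attempts")

// recentInbound stands in for go-ethereum's expiring history of inbound IPs.
var recentInbound = map[string]time.Time{}

func checkInboundThrottle(remoteIP net.IP, throttle time.Duration) error {
	// LAN addresses, including the Docker bridge range, are never throttled.
	if remoteIP == nil || netutil.IsLAN(remoteIP) {
		return nil
	}
	if until, ok := recentInbound[remoteIP.String()]; ok && time.Now().Before(until) {
		return errTooManyAttempts
	}
	recentInbound[remoteIP.String()] = time.Now().Add(throttle)
	return nil
}
```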
The fix, intended to target a small number of networks with fortified
firewall configurations, accidentally resolved our issues with the
`Connecting...` prompts that our application showed us en masse during
our Prague Meetup. Part of the reason for that is that such venues
normally give out local IP addresses and use NAT to translate them onto
the only public IP address they possess.
Since our application is supposed to be usable from within networks
behind NAT, like airport WiFi networks for example, it makes no sense to
keep the inbound connection throttle time implemented in `go-ethereum`.
I'm leaving `inboundThrottleTime` in because it's used to calculate the
value of `dialHistoryExpiration` in:
`vendor/github.com/ethereum/go-ethereum/p2p/dial.go`
I believe reducing that value once we deploy this change should also
increase the speed with which the Status application is able to reconnect
to a node that was temporarily unavailable, instead of waiting the 5*30 seconds.
Research issue: https://github.com/status-im/infra-eth-cluster/issues/35
Signed-off-by: Jakub Sokołowski <jakub@status.im>
It looks like profile chats are created as public chats, and the code
will drop messages for them.
This commit fixes the issue by checking for the "@" prefix in profile
chat names, until we fix it properly by migrating those chats.
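A minimal illustration of the workaround, with assumed names; the real check lives in the messenger's message handling path.

```go
package example

import "strings"

// isProfileChat treats chat IDs carrying the "@" prefix as profile chats so
// that their messages are not dropped by the public-chat handling.
func isProfileChat(chatID string) bool {
	return strings.HasPrefix(chatID, "@")
}
```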
- old existing ranges are merged when the wallet service is started
- a new range is merged with an existing one where possible (see the
sketch below)
This will decrease the number of entries in the blocks_range table, as
currently it can grow indefinitely (@flexsurfer reported 23307 entries).
This change is also needed for further optimisations of RPC usage.
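A rough sketch of the merge rule referenced above, assuming a range is a [from, to] pair of block numbers; the type and function names are not taken from the wallet service.

```go
package example

import "math/big"

type BlocksRange struct {
	From, To *big.Int
}

// merge returns a single range when a and b overlap or are adjacent,
// and reports false when there is a gap between them.
func merge(a, b BlocksRange) (BlocksRange, bool) {
	if a.From.Cmp(b.From) > 0 {
		a, b = b, a // make a the range that starts first
	}
	// b starts more than one block past the end of a: cannot merge.
	if b.From.Cmp(new(big.Int).Add(a.To, big.NewInt(1))) > 0 {
		return BlocksRange{}, false
	}
	to := a.To
	if b.To.Cmp(to) > 0 {
		to = b.To
	}
	return BlocksRange{From: a.From, To: to}, true
}
```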
With the introduction of the new non-blocking code, events for the
peerpool might come out of order. That's generally not an issue, but it
made tests fail.
I have changed the code so that the order is more consistent (it's still
theoretically possible that a stop signal will arrive out of order in a
real scenario, but the impact is low and I don't want to change this code
too much).
There was another deadlock in the peer pool.
Because we made the event handler asynchronous, another deadlock popped
up, as the loop locks the global peerpool lock before processing events.
But the handlers also take the global lock, effectively resulting in the
same situation we had before, i.e. the loop is not running.
THE LOOP MUST BE RUNNING AT ALL TIMES, OTHERWISE THE SERVER HANGS.
There might be an issue in how we handle metrics, which causes the p2p
server to hang.
updateNodeMetrics calls a method on the p2p server, which
blocks until the server is available:
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L301)
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L746)
If there's back-pressure on the peer event feed
e60f425b45/vendor/github.com/ethereum/go-ethereum/p2p/server.go (L783)
the event channel above might become full while updateNodeMetrics
is called, which means it is never consumed; the server then blocks on
publishing to it, and the two deadlock (the server waits for the channel
above to be consumed, while this code waits for the server to respond to
peerCount, which is handled in the same event loop).
Calling it in a different goroutine allows this code to keep
processing peer-added events, so the server does not lock up and keeps processing requests.
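A sketch of the shape of that fix: the blocking metrics call is moved off the event loop so the loop keeps draining the peer event channel. Function and parameter names are illustrative, not the actual status-go code.

```go
package example

import "log"

// handlePeerEvents keeps draining the event channel; the blocking metrics
// update runs in its own goroutine so the loop never stops consuming events
// while the p2p server is busy answering it.
func handlePeerEvents(events <-chan struct{}, updateNodeMetrics func() error) {
	for range events {
		go func() {
			if err := updateNodeMetrics(); err != nil {
				log.Printf("failed to update node metrics: %v", err)
			}
		}()
	}
}
```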
This is a bit complicated, so:
1) Peerpool was subscribing to `event.Feed`, which is a global event
emitter for ethereum.
2) The p2p.Server was publishing on `event.Feed`, and this triggered, in the
same routine, a publish on `event.Feed`.
3) Peerpool was listening to `event.Feed`, reacting to it, and, in the same
routine, triggering some code on p2p.Server that would publish on
`event.Feed`.
This meant that if the channel was unbuffered, it would deadlock, as
peerPool would not be consuming when it published (the same goroutine
effectively publishes and listens, through a lot of indirection
and non-buffered channels, p2p.Server->event.Feed).
The channel, though, was buffered with size 10, and this meant that most
of the time it was fine.
The issue is that peerpool is not the only producer to this channel.
So it's possible that while it is processing an event, the buffer fills
up; it then hangs trying to publish, nobody is listening to the channel,
and EVERYTHING hangs.
At least that's what I think; it needs to be tested, but it's definitely
an issue.
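A condensed sketch of the pattern described above, using go-ethereum's `event.Feed`; the handler stands in for peerpool's reaction that can end up publishing back into the same feed, and the size-10 buffer mirrors the one mentioned above.

```go
package example

import (
	"github.com/ethereum/go-ethereum/event"
	"github.com/ethereum/go-ethereum/p2p"
)

func subscribeLoop(feed *event.Feed, handle func(p2p.PeerEvent)) {
	// Buffered with size 10: small bursts are fine, but if other producers
	// fill the buffer while handle() is blocked on something that itself
	// publishes to the feed, every publisher hangs.
	ch := make(chan p2p.PeerEvent, 10)
	sub := feed.Subscribe(ch)
	defer sub.Unsubscribe()

	for ev := range ch {
		handle(ev) // must not block on anything that publishes to feed
	}
}
```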
I kept the code changes to a minimum; this code is a bit hairy, but it's
fairly critical, so I don't want to make too many changes.