This commit introduces the following changes:
- `local-notifications` now require the body to be a type implementing
`json.Marshaler` (a minimal sketch follows below)
- removed unmarshaling of `Notifications`, since it is unused (we only marshal
notifications)
- `protocol/messenger.go` now creates a `Notification` directly instead of
going through an intermediate format
- added community notifications on requests to join
- moved notification text parsing into status-go
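A minimal sketch of what the `json.Marshaler` requirement implies, assuming a hypothetical body type; the field and type names here are illustrative, not the actual local-notifications code:

```go
package localnotifications

import "encoding/json"

// notificationBody is a hypothetical payload: any type that can marshal
// itself can be used as a notification body.
type notificationBody struct {
	Title   string `json:"title"`
	Message string `json:"message"`
}

// MarshalJSON satisfies json.Marshaler, which the body is now required
// to implement.
func (b notificationBody) MarshalJSON() ([]byte, error) {
	type alias notificationBody // alias avoids infinite recursion
	return json.Marshal(alias(b))
}

// Notification carries the body as a json.Marshaler; since notifications
// are only ever marshaled, no unmarshaling path is needed.
type Notification struct {
	ID   string
	Body json.Marshaler
}
```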
* Removed the region field from the on-ramp struct
* Added a basic placeholder for Latamex
* Added the Latamex on-ramp option
* Changed the fee rate for Ramp
* Updated VERSION
* Bump to major version
Sometimes `eth_getBlockByNumber` returns txs whose chainId does not match the
chain's id. That caused an error and interrupted tx fetching. From now on such
txs are skipped (see the sketch below).
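A hedged sketch of the skip, assuming go-ethereum's `types` package; the function name is illustrative and not the actual fetcher code:

```go
package transfer

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// filterByChainID keeps only transactions whose chainId matches the node's
// chain id; mismatched txs are skipped instead of aborting the whole fetch.
func filterByChainID(block *types.Block, chainID *big.Int) types.Transactions {
	kept := make(types.Transactions, 0, len(block.Transactions()))
	for _, tx := range block.Transactions() {
		if tx.ChainId() != nil && tx.ChainId().Cmp(chainID) != 0 {
			continue // mismatched chainId: skip rather than fail
		}
		kept = append(kept, tx)
	}
	return kept
}
```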
* Migrations in place, how to run them?
* Remove down migrations and touch database.go
* Database and Database Test package in place, added functions to get and store app metrics
* make generate output
* Minor bug fix on app metrics insert and select
* Add a validation layer to restrict what can be saved in the database
* Make validation more terse, throw error if schema doesn't exist, expose appmetrics service
* service updates
* Compute all errors before sending them out
* Bring a closure to the appmetrics service
* Expose appmetrics via an API, skipping anything fancy
* Store the value as `json.RawMessage` to ease parsing (see the sketch below)
* Introduce a buffered chan with magic cap of 8 to minimize writes to DB. Tests for service and API. Also expose GetAppMetrics function.
* Lint issues
* Remove autoincrement, undo waku.json changes, fix error being shadowed, return nil where nil ought to be returned, get rid of buffered channel
* Bump migration number
* Fix API factory usage
* Add comment re:json.RawMessage instead of strings
* Get rid of test vars, throw save error inside the loop
* Update version
Co-authored-by: Samuel Hawksby-Robinson <samuel@samyoul.com>
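A hedged sketch of the shape described above, assuming a hypothetical table and column layout; only the `json.RawMessage` value and the "save error returned from inside the loop" behaviour are taken from the commit notes:

```go
package appmetrics

import (
	"database/sql"
	"encoding/json"
	"errors"
)

// AppMetric keeps the value as json.RawMessage (not a string) so callers
// don't have to re-parse it.
type AppMetric struct {
	Event string          `json:"event"`
	Value json.RawMessage `json:"value"`
}

// saveAppMetrics validates each metric against its schema and returns the
// save error from inside the loop. Table and column names are illustrative.
func saveAppMetrics(db *sql.DB, metrics []AppMetric) error {
	for _, m := range metrics {
		if err := validate(m); err != nil { // schema must exist, else error
			return err
		}
		if _, err := db.Exec(
			"INSERT INTO app_metrics (event, value) VALUES (?, ?)",
			m.Event, []byte(m.Value),
		); err != nil {
			return err
		}
	}
	return nil
}

// validate is a placeholder for the schema validation layer.
func validate(m AppMetric) error {
	if m.Event == "" {
		return errors.New("unknown app metric event")
	}
	return nil
}
```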
This commit expands the confirmation mechanism to allow private group
chat messages to be confirmed:
Changes:
- Added a separate table for message confirmations, as group chat
messages share the same messageID but have multiple datasyncIDs (a sketch
follows below)
- Removed DataSyncID from the raw message (I haven't removed the column,
as that can't be done in SQLite without copying over the table)
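A hedged sketch of what the separate confirmations table implies; the table and column names below are assumptions, not the actual migration:

```go
package protocol

import "database/sql"

// confirmMessage records a confirmation keyed by both ids: a group chat
// message has one messageID but several datasyncIDs (one per recipient),
// so confirmations live in their own table rather than on the raw message.
func confirmMessage(db *sql.DB, messageID, dataSyncID []byte) error {
	_, err := db.Exec(
		`INSERT OR REPLACE INTO raw_message_confirmations
		   (message_id, datasync_id, confirmed_at)
		 VALUES (?, ?, strftime('%s','now'))`,
		messageID, dataSyncID,
	)
	return err
}
```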
* Revert "Revert "Expand Local Notifications to support multiple Notification types (#2100)""
This reverts commit 5887337b88.
* Revert "Revert "fix protocol.MessageNotificationBody marshalling""
This reverts commit cf0a16dff1.
* Bump version to 0.70.0
* Added localnotifications for Transaction messages
* Fixed bug where Message.SigPubKey was presumed to be set
* Added a lookup for an existing contact in Messenger.allContacts
Additionally added functionality to add a contact to the messenger store if it isn't present (see the sketch below)
* Get chat directly from Messenger.allChats store
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
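A hedged sketch of the contact lookup described above; the `Contact` shape, map layout and `saveContact` helper are illustrative stand-ins for the messenger internals:

```go
package protocol

// Contact and Messenger are simplified stand-ins for the real types.
type Contact struct{ ID string }

type Messenger struct {
	allContacts map[string]*Contact
}

// saveContact is a placeholder for persisting the contact to the store.
func (m *Messenger) saveContact(c *Contact) error { return nil }

// contactForPubKey returns the known contact or creates and stores one,
// mirroring the "add contact if not present" behaviour.
func (m *Messenger) contactForPubKey(pubKey string) (*Contact, error) {
	if c, ok := m.allContacts[pubKey]; ok {
		return c, nil
	}
	c := &Contact{ID: pubKey}
	m.allContacts[pubKey] = c
	return c, m.saveContact(c)
}
```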
- avoid making an RPC request for a `zero - zero` range (see the sketch below)
- avoid checking the nonce for a lower block in the range if it is zero
at a higher block
- on `wallet_getTransfersByAddress`, history scanning is skipped once the
zero block has been reached
- no need to fetch the block number before fetching token balances
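A hedged sketch of the first two checks; the function names are illustrative, not the wallet service's actual API:

```go
package wallet

import "math/big"

// shouldScanRange skips the RPC request entirely for a `zero - zero` range,
// since there is nothing to fetch.
func shouldScanRange(from, to *big.Int) bool {
	return !(from.Sign() == 0 && to.Sign() == 0)
}

// needLowerNonceCheck: if the nonce is already zero at the higher block,
// there can be no outgoing txs below it, so the lower-block check is skipped.
func needLowerNonceCheck(nonceAtHigherBlock uint64) bool {
	return nonceAtHigherBlock != 0
}
```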
This fix puts an end to a saga that essentially started during the
Status Prague Meetup at the end of October 2018. At the time we were
experiencing massive issues with `Connecting...` spinners in the app in the
venue we had rented. We were pulling our hair out over what to do and we could
not find the cause of the issue at the time.
Three months later I deployed the following change:
https://github.com/status-im/infra-eth-cluster/commit/63a13eed
Which used `iptables` to map the `443` port onto our `30504` Status node port
using `PREROUTING` chain and `REDIRECT` jump in order to fix issues people
have been complaining about when using WiFi networks in various venues:
https://github.com/status-im/status-react/issues/6351
Our thinking when trying to resolve the reported issue assumed that some
networks might block outgoing connections on non-standard ports other than
the usual `80`(HTTP)/`443`(HTTPS) which would disrupt Status connectivity.
While this fix could have indeed helped a few edge cases, what it really
did was cause the Status node to stop seeing actual public IPs of the clients.
By __pure accident__ this change caused the code we inherited from the
`go-ethereum` implementation of the DevP2P protocol to stop throttling new
incoming connections, because the IP they appeared to come from was a
`172.16.0.0/12` network address of the Docker bridge.
The `go-ethereum` code used the `!netutil.IsLAN(remoteIP)` check to
avoid throttling connections from local addresses, which included the
local Docker bridge address:
https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36
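For context, a simplified sketch of how that check plays out for inbound connections; the helper and the history map are illustrative, while `netutil.IsLAN` and the 30-second `inboundThrottleTime` come from the vendored `go-ethereum` code:

```go
package p2p

import (
	"net"
	"time"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

// throttled reports whether a new inbound connection should be rejected.
// LAN addresses bypass the check, which is why traffic that appeared to
// come from the Docker bridge (172.16.0.0/12) was never throttled.
func throttled(remoteIP net.IP, inboundHistory map[string]time.Time, now time.Time) bool {
	if remoteIP == nil || netutil.IsLAN(remoteIP) {
		return false
	}
	last, seen := inboundHistory[remoteIP.String()]
	return seen && now.Sub(last) < 30*time.Second // inboundThrottleTime
}
```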
The fix, intended to target a small number of networks with fortified
firewall configurations, accidentally resolved our issues with the
`Connecting...` prompts that our application showed us en masse during
our Prague Meetup. Part of the reason is that such venues normally give
out local IP addresses and use NAT to translate them onto the only public
IP address they possess.
Since our application is supposed to be usable from within networks
behind NAT, like airport WiFi networks for example, it makes no sense to
keep the inbound connection throttle time implemented in `go-ethereum`.
I'm leaving `inboundThrottleTime` in because it's used to calculate
value for `dialHistoryExpiration` in:
`vendor/github.com/ethereum/go-ethereum/p2p/dial.go`
I believe reducing that value once we deploy this change should also
increase the speed with which the Status application is able to reconnect
to a node that was temporarily unavailable, instead of waiting the full 5*30 seconds.
Research issue: https://github.com/status-im/infra-eth-cluster/issues/35
Signed-off-by: Jakub Sokołowski <jakub@status.im>
It looks like profile chats are created as public chats, and the code
will drop messages for them.
This commit works around the issue by checking for the "@" prefix in profile
chat names (see the sketch below), until we properly fix it by migrating those chats.
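A minimal sketch of the workaround; the chat fields here are illustrative, not the actual `Chat` struct:

```go
package protocol

import "strings"

// chatInfo is a simplified stand-in for the chat model.
type chatInfo struct {
	Public bool
	Name   string
}

// looksLikeProfileChat treats a public chat whose name starts with "@" as a
// profile chat, so its messages are not dropped until a proper migration.
func looksLikeProfileChat(c chatInfo) bool {
	return c.Public && strings.HasPrefix(c.Name, "@")
}
```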
- old existing ranges are merged when the wallet service is started
- a new range is merged with an existing one when possible
This will decrease the number of entries in the blocks_range table, as
currently it can grow indefinitely (@flexsurfer reported 23307 entries).
This change is also needed for further optimisations of RPC usage. A merge
sketch follows below.
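A hedged sketch of merging two block ranges; the type and helper names are illustrative, not the wallet service's actual code:

```go
package wallet

import "math/big"

// BlocksRange mirrors a row in blocks_range.
type BlocksRange struct {
	From *big.Int
	To   *big.Int
}

// mergeRanges folds an incoming range into an existing one when they overlap
// or touch, so the table stops growing indefinitely. It reports whether a
// merge happened.
func mergeRanges(existing, incoming BlocksRange) (BlocksRange, bool) {
	one := big.NewInt(1)
	// Disjoint if incoming starts after existing.To+1 or ends before existing.From-1.
	if incoming.From.Cmp(new(big.Int).Add(existing.To, one)) > 0 ||
		incoming.To.Cmp(new(big.Int).Sub(existing.From, one)) < 0 {
		return existing, false // keep both rows
	}
	merged := BlocksRange{From: existing.From, To: existing.To}
	if incoming.From.Cmp(existing.From) < 0 {
		merged.From = incoming.From
	}
	if incoming.To.Cmp(existing.To) > 0 {
		merged.To = incoming.To
	}
	return merged, true
}
```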