* feat(@desktop/wallet): added iso4217 library for fiat currency display decimals
* feat(@desktop/wallet): added token peg info and use numbers for token market values
* feat(@desktop/wallet): extend wallet api to fetch prices in multiple currencies
* chore(@desktop/wallet): rename token peg field for clarity
When discovery fails to be seeded with bootstrap/fallback nodes, it
never recovers.
This commit changes the behavior so that status-go retries fetching
bootnodes, and restarts discovery when that happens.
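A minimal sketch of that retry shape (the helper names are hypothetical, not the actual status-go API): keep retrying the bootnode fetch with a backoff, and restart discovery once it succeeds.

```go
package discovery

import (
	"context"
	"time"
)

// ensureDiscoverySeeded is a hypothetical sketch: fetchBootnodes and
// restartDiscovery stand in for "resolve bootstrap/fallback nodes" and
// "restart the discovery loop" in status-go.
func ensureDiscoverySeeded(
	ctx context.Context,
	fetchBootnodes func(context.Context) ([]string, error),
	restartDiscovery func([]string),
) {
	backoff := time.Second
	for {
		nodes, err := fetchBootnodes(ctx)
		if err == nil && len(nodes) > 0 {
			restartDiscovery(nodes)
			return
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(backoff):
			if backoff < time.Minute {
				backoff *= 2 // simple exponential backoff, capped at one minute
			}
		}
	}
}
```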
This commit enables the mailserver cycle logic by default and makes a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server, whereby the node would not
   connect to one of the static nodes since all the connection slots
   were filled. The fix is a bit inelegant: it always connects to
   static nodes, ignoring maxpeers, but it's tricky to get this to work
   otherwise since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests. It uses a closure for now (see the
   sketch after this list); I think we should eventually use a channel etc.,
   but I'd leave that for later.
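A minimal sketch of the closure-based retry from point 4; `withRetries` and its parameters are hypothetical names, not the actual status-go API.

```go
package mailserver

import (
	"fmt"
	"time"
)

// withRetries wraps a mailserver request in a closure and retries it a
// fixed number of times before giving up. Hypothetical sketch only.
func withRetries(maxRetries int, delay time.Duration, request func() error) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = request(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("mailserver request failed after %d retries: %v", maxRetries, err)
}
```

It would be called as `withRetries(3, time.Second, func() error { return requestMessages(topic) })`, with `requestMessages` standing in for whatever issues the actual mailserver request.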
By integrating the `zxcvbn` module, a new endpoint has been added that returns password strength information such as Entropy, CrackTime, CrackTimeDisplay, Score, MatchSequence and CalcTime.
Added related dependencies.
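A minimal sketch of such an endpoint, assuming the Go port `github.com/nbutton23/zxcvbn-go` (an assumption, not confirmed by the changelog); its result struct carries the Entropy, CrackTime, CrackTimeDisplay, Score, MatchSequence and CalcTime fields listed above.

```go
package api

import (
	zxcvbn "github.com/nbutton23/zxcvbn-go"
	"github.com/nbutton23/zxcvbn-go/scoring"
)

// GetPasswordStrength scores a password; userInputs lets callers penalize
// strings the user already typed elsewhere (name, email, ...).
func GetPasswordStrength(password string, userInputs []string) scoring.MinEntropyMatch {
	return zxcvbn.PasswordStrength(password, userInputs)
}
```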
Closes #4980
* feat: obtain external address for rendezvous
If the external IP returned by geth is 127.0.0.1, it will attempt to obtain the external IP address via rendezvous and use that to register the ENS record later (see the sketch below)
* fix: failing test
* fix: code review, and adding external ip address to logs
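A minimal sketch of the fallback described above; `discoverViaRendezvous` is a hypothetical stand-in for whatever obtains the address from the rendezvous server.

```go
package node

import "net"

// externalAddress prefers the IP reported by geth unless it is a loopback
// address (127.0.0.1), in which case it falls back to the address learned
// via rendezvous. Hypothetical sketch, not the actual status-go code.
func externalAddress(gethIP net.IP, discoverViaRendezvous func() (net.IP, error)) (net.IP, error) {
	if gethIP != nil && !gethIP.IsLoopback() {
		return gethIP, nil
	}
	return discoverViaRendezvous()
}
```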
* Adding wakunode module
* Adding wakuv2 fleet files
* Add waku fleets to update-fleet-config script
* Adding config items for waku v2
* Conditionally start waku v2 node depending on config (see the sketch after this list)
* Adapting common code to use go-waku
* Setting log level to info
* update dependencies
* update fleet config to use WakuNodes instead of BootNodes
* send and receive messages
* use hash returned when publishing a message
* add waku store protocol
* trigger signal after receiving store messages
* exclude linting rule SA1019 to check deprecated packages
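A minimal sketch of the conditional start mentioned in the list; the config fields and the `startWakuV2Node` callback are hypothetical names, and the real go-waku setup is not shown.

```go
package node

import (
	"context"
	"fmt"
)

// Config is a hypothetical slice of the node configuration relevant here.
type Config struct {
	WakuV2Enabled bool     // whether to start a Waku v2 node
	WakuNodes     []string // Waku v2 peers from the fleet config
}

// maybeStartWakuV2 starts the Waku v2 node only when enabled in config.
// Hypothetical sketch; startWakuV2Node stands in for the go-waku setup code.
func maybeStartWakuV2(ctx context.Context, cfg Config, startWakuV2Node func(context.Context, []string) error) error {
	if !cfg.WakuV2Enabled {
		return nil
	}
	if err := startWakuV2Node(ctx, cfg.WakuNodes); err != nil {
		return fmt.Errorf("failed to start waku v2 node: %w", err)
	}
	return nil
}
```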
* Migrations in place, how to run them?
* Remove down migrations and touch database.go
* Database and Database Test package in place, added functions to get and store app metrics
* make generate output
* Minor bug fix on app metrics insert and select
* Add a validation layer to restrict what can be saved in the database
* Make validation more terse, throw error if schema doesn't exist, expose appmetrics service
* service updates
* Compute all errors before sending them out
* Trying to bring a closure to appmetrics go
* Expose appmetrics via an api, skip fancy
* Store the value as json.RawMessage to ease parsing (see the sketch after this list)
* Introduce a buffered chan with magic cap of 8 to minimize writes to DB. Tests for service and API. Also expose GetAppMetrics function.
* Lint issues
* Remove autoincrement, undo waku.json changes, fix error being shadowed, return nil where nil ought to be returned, get rid of buffered channel
* Bump migration number
* Fix API factory usage
* Add comment re:json.RawMessage instead of strings
* Get rid of test vars, throw save error inside the loop
* Update version
Co-authored-by: Samuel Hawksby-Robinson <samuel@samyoul.com>
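A minimal sketch of the validation plus json.RawMessage storage described in this list; the struct fields, the allowed-event whitelist and the table layout are hypothetical, not the actual status-go schema.

```go
package appmetrics

import (
	"database/sql"
	"encoding/json"
	"fmt"
)

// AppMetric is a hypothetical shape of a single metric entry; the value is
// kept as json.RawMessage so it can be stored without re-parsing.
type AppMetric struct {
	Event      string          `json:"event"`
	Value      json.RawMessage `json:"value"`
	AppVersion string          `json:"app_version"`
	OS         string          `json:"os"`
}

// allowedEvents is a hypothetical validation whitelist: anything outside
// of it is rejected before touching the database.
var allowedEvents = map[string]bool{
	"navigate-to": true,
	"screen-view": true,
}

// SaveAppMetrics validates all entries, then writes them in one
// transaction. Hypothetical sketch only.
func SaveAppMetrics(db *sql.DB, metrics []AppMetric) error {
	for _, m := range metrics {
		if !allowedEvents[m.Event] {
			return fmt.Errorf("unknown app metric event: %q", m.Event)
		}
	}
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()
	for _, m := range metrics {
		if _, err := tx.Exec(
			"INSERT INTO app_metrics (event, value, app_version, operating_system) VALUES ($1, $2, $3, $4)",
			m.Event, string(m.Value), m.AppVersion, m.OS,
		); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```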
This fix puts an end to a saga that essentially started during the
Status Prague Meetup at the end of October 2018. At the time we were
experiencing massive issues with `Connecting...` spinners in the app in the
venue we rented. We were pulling our hair out trying to figure out what to do,
and we could not find the cause of the issue at the time.
Three months later I deployed the following change:
https://github.com/status-im/infra-eth-cluster/commit/63a13eed
Which used `iptables` to map the `443` port onto our `30504` Status node port
using `PREROUTING` chain and `REDIRECT` jump in order to fix issues people
have been complaining about when using WiFi networks in various venues:
https://github.com/status-im/status-react/issues/6351
Our thinking when trying to resolve the reported issue assumed that some
networks might block outgoing connections on non-standard ports other than
the usual `80`(HTTP)/`443`(HTTPS) which would disrupt Status connectivity.
While this fix could have indeed helped a few edge cases, what it really
did was cause the Status node to stop seeing actual public IPs of the clients.
By __pure accident__ this change caused the code we inherited from the
`go-ethereum` implementation of the DevP2P protocol to stop throttling new
incoming connections, because the IP they appeared to come from was a
`172.16.0.0/12` network address of the Docker bridge.
The `go-ethereum` code used the `!netutil.IsLAN(remoteIP)` check to
avoid throttling connections from local addresses, which included the
local Docker bridge address:
https://github.com/status-im/status-go/blob/82680830/vendor/github.com/ethereum/go-ethereum/p2p/netutil/net.go#L36
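A paraphrased sketch of the kind of check involved (not the verbatim go-ethereum code): inbound connections are throttled per remote IP, but LAN addresses, which is what every client appeared as once hidden behind the Docker bridge, skip the throttle entirely.

```go
package p2p

import (
	"errors"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/p2p/netutil"
)

const inboundThrottleTime = 30 * time.Second

// lastInbound maps a remote IP to the time of its last inbound attempt
// (not concurrency-safe; kept minimal for illustration).
var lastInbound = map[string]time.Time{}

// checkInbound mimics the throttling idea: non-LAN peers may only dial in
// once per inboundThrottleTime, LAN peers are never throttled. Paraphrased
// sketch, not the actual go-ethereum implementation.
func checkInbound(remoteIP net.IP) error {
	if remoteIP == nil || netutil.IsLAN(remoteIP) {
		return nil // Docker-bridge (172.16.0.0/12) clients end up here
	}
	key := remoteIP.String()
	if last, ok := lastInbound[key]; ok && time.Since(last) < inboundThrottleTime {
		return errors.New("too many attempts")
	}
	lastInbound[key] = time.Now()
	return nil
}
```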
The fix, intended to target a small number of networks with fortified
firewall configurations, accidentally resolved our issues with the
`Connecting...` prompts that our application showed us en masse during
our Prague Meetup. Part of the reason for that is that such venues
normally give out local IP addresses and use NAT to translate them onto
the only public IP address they possess.
Since our application is supposed to be usable from within networks
behind NAT, like airport WiFi networks for example, it makes no sense to
keep the inbound connection throttle time implemented in `go-ethereum`.
I'm leaving `inboundThrottleTime` in because it's used to calculate the
value for `dialHistoryExpiration` in:
`vendor/github.com/ethereum/go-ethereum/p2p/dial.go`
I believe reducing that value once we deploy this change should also
increase the speed with which the Status application is able to reconnect
to a node that was temporarily unavailable, instead of waiting the full 5*30 seconds.
Research issue: https://github.com/status-im/infra-eth-cluster/issues/35
Signed-off-by: Jakub Sokołowski <jakub@status.im>
There seems to be an issue with version 1.3: querying for topics on
Postgres returns an error:
```
panic: pq: invalid byte sequence for encoding "UTF8"
```
Upgrading pq fixes the issue
¯\_(ツ)_/¯