status-go/healthmanager/blockchain_health_manager.go
shashankshampi 14dcd29eee test_: Code Migration from status-cli-tests
author shashankshampi <shashank.sanket1995@gmail.com> 1729780155 +0530
committer shashankshampi <shashank.sanket1995@gmail.com> 1730274350 +0530

test_: Code Migration from status-cli-tests
fix_: functional tests (#5979)

* fix_: generate on test-functional

* chore(test)_: fix functional test assertion

---------

Co-authored-by: Siddarth Kumar <siddarthkay@gmail.com>

feat(accounts)_: cherry-pick Persist acceptance of Terms of Use & Privacy policy (#5766) (#5977)

* feat(accounts)_: Persist acceptance of Terms of Use & Privacy policy (#5766)

The original GH issue https://github.com/status-im/status-mobile/issues/21113
came from a request from the Legal team. We must show Status v1 users the new
terms (Terms of Use & Privacy Policy) right after they upgrade to Status v2
from the stores.

The solution we use is to create a flag in the accounts table, named
hasAcceptedTerms. The flag will be set to true on the first account ever
created in v2, and we provide a native call in mobile/status.go#AcceptTerms,
which allows the client to persist the user's choice in case they are upgrading
(from v1 -> v2, or from a v2 build older than this PR).

This solution is not ideal because we should store the setting in a separate
table, not in the accounts table.

Related Mobile PR https://github.com/status-im/status-mobile/pull/21124
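
A minimal sketch of what persisting this flag might boil down to is shown below; the package, function name, and SQL here are illustrative assumptions, not the actual mobile/status.go binding or accounts schema:

```go
package accounts

import "database/sql"

// acceptTerms is an illustrative sketch only; the real binding in
// mobile/status.go and the accounts schema may differ. It persists the
// user's acceptance by setting the hasAcceptedTerms flag on the accounts table.
func acceptTerms(db *sql.DB) error {
	_, err := db.Exec(`UPDATE accounts SET hasAcceptedTerms = 1`)
	return err
}
```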

* fix(test)_: Compare addresses using uppercased strings

---------

Co-authored-by: Icaro Motta <icaro.ldm@gmail.com>

test_: restore account (#5960)

feat_: `LogOnPanic` linter (#5969)

* feat_: LogOnPanic linter

* fix_: add missing defer LogOnPanic

* chore_: make vendor

* fix_: tests, address pr comments

* fix_: address pr comments

fix(ci)_: remove workspace and tmp dir

This ensures we do not encounter weird errors like:
```
+ ln -s /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907 /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go
ln: failed to create symbolic link '/home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go': File exists
script returned exit code 1
```

Signed-off-by: Jakub Sokołowski <jakub@status.im>

chore_: enable windows and macos CI build (#5840)

- Added support for Windows and macOS in CI pipelines
- Added missing dependencies for Windows and x86-64-darwin
- Resolved macOS SDK version compatibility for darwin-x86_64

The `mkShell` override was necessary to ensure compatibility with the newer
macOS SDK (version 11.0) for x86_64. The default SDK (10.12) was causing build failures
because of missing libraries and frameworks. OverrideSDK creates a mapping from
the default SDK in all package categories to the requested SDK (11.0).

fix(contacts)_: fix trust status not being saved to cache when changed (#5965)

Fixes https://github.com/status-im/status-desktop/issues/16392

cleanup

added logger and cleanup

addressed review comments

test_: remove port bind

chore(wallet)_: move route execution code to separate module

chore_: replace geth logger with zap logger (#5962)

closes: #6002

feat(telemetry)_: add metrics for message reliability (#5899)

* feat(telemetry)_: track message reliability

Add metrics for dial errors, missed messages,
missed relevant messages, and confirmed delivery.

* fix_: handle error from json marshal
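
As a rough illustration only (the type and field names below are assumptions, not the actual telemetry schema), such metrics could be grouped and serialized with the json.Marshal error handled explicitly, as the second bullet notes:

```go
package telemetry

import "encoding/json"

// messageReliabilityMetrics is a hypothetical payload grouping the metrics
// listed above; the real status-go telemetry types may differ.
type messageReliabilityMetrics struct {
	DialErrors             int `json:"dialErrors"`
	MissedMessages         int `json:"missedMessages"`
	MissedRelevantMessages int `json:"missedRelevantMessages"`
	ConfirmedDeliveries    int `json:"confirmedDeliveries"`
}

func encodeMetrics(m messageReliabilityMetrics) ([]byte, error) {
	payload, err := json.Marshal(m)
	if err != nil {
		// Propagate the marshal error instead of silently dropping it.
		return nil, err
	}
	return payload, nil
}
```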

chore_: use zap logger as request logger

iterates: status-im/status-desktop#16536

test_: unique project per run

test_: use docker compose v2, more concrete project name

fix(codecov)_: ignore folders without tests

Otherwise Codecov reports incorrect numbers when making changes.
https://docs.codecov.com/docs/ignoring-paths

Signed-off-by: Jakub Sokołowski <jakub@status.im>

test_: verify schema of signals during init; fix schema verification warnings (#5947)

fix_: update defaultGorushURL (#6011)

fix(tests)_: use non-standard port to avoid conflicts

We have observed `nimbus-eth2` build failures reporting this port:
```json
{
  "lvl": "NTC",
  "ts": "2024-10-28 13:51:32.308+00:00",
  "msg": "REST HTTP server could not be started",
  "topics": "beacnde",
  "address": "127.0.0.1:5432",
  "reason": "(98) Address already in use"
}
```
https://ci.status.im/job/nimbus-eth2/job/platforms/job/linux/job/x86_64/job/main/job/PR-6683/3/

Signed-off-by: Jakub Sokołowski <jakub@status.im>

fix_: create request logger ad-hoc in tests

Fixes `TestCall` failing when run concurrently.

chore_: configure codecov (#6005)

* chore_: configure codecov

* fix_: after_n_builds

package healthmanager

import (
	"context"
	"sync"

	status_common "github.com/status-im/status-go/common"
	"github.com/status-im/status-go/healthmanager/aggregator"
	"github.com/status-im/status-go/healthmanager/rpcstatus"
)

// BlockchainFullStatus contains the full status of the blockchain, including provider statuses.
type BlockchainFullStatus struct {
	Status                    rpcstatus.ProviderStatus                       `json:"status"`
	StatusPerChain            map[uint64]rpcstatus.ProviderStatus            `json:"statusPerChain"`
	StatusPerChainPerProvider map[uint64]map[string]rpcstatus.ProviderStatus `json:"statusPerChainPerProvider"`
}

// BlockchainStatus contains the status of the blockchain.
type BlockchainStatus struct {
	Status         rpcstatus.ProviderStatus            `json:"status"`
	StatusPerChain map[uint64]rpcstatus.ProviderStatus `json:"statusPerChain"`
}

// BlockchainHealthManager manages the state of all providers and aggregates their statuses.
type BlockchainHealthManager struct {
	mu          sync.RWMutex
	aggregator  *aggregator.Aggregator
	subscribers sync.Map // thread-safe
	providers   map[uint64]*ProvidersHealthManager
	cancelFuncs map[uint64]context.CancelFunc // Map chainID to cancel functions
	lastStatus  *BlockchainStatus
	wg          sync.WaitGroup
}

// NewBlockchainHealthManager creates a new instance of BlockchainHealthManager.
func NewBlockchainHealthManager() *BlockchainHealthManager {
	agg := aggregator.NewAggregator("blockchain")
	return &BlockchainHealthManager{
		aggregator:  agg,
		providers:   make(map[uint64]*ProvidersHealthManager),
		cancelFuncs: make(map[uint64]context.CancelFunc),
	}
}

// RegisterProvidersHealthManager registers the provider health manager.
// It removes any existing provider for the same chain before registering the new one.
func (b *BlockchainHealthManager) RegisterProvidersHealthManager(ctx context.Context, phm *ProvidersHealthManager) error {
	b.mu.Lock()
	defer b.mu.Unlock()

	chainID := phm.ChainID()

	// Check if a provider for the given chainID is already registered and remove it
	if _, exists := b.providers[chainID]; exists {
		// Cancel the existing context
		if cancel, cancelExists := b.cancelFuncs[chainID]; cancelExists {
			cancel()
		}
		// Remove the old registration
		delete(b.providers, chainID)
		delete(b.cancelFuncs, chainID)
	}

	// Proceed with the registration
	b.providers[chainID] = phm

	// Create a new context for the provider
	providerCtx, cancel := context.WithCancel(ctx)
	b.cancelFuncs[chainID] = cancel

	statusCh := phm.Subscribe()
	b.wg.Add(1)
	go func(phm *ProvidersHealthManager, statusCh chan struct{}, providerCtx context.Context) {
		defer status_common.LogOnPanic()
		defer func() {
			phm.Unsubscribe(statusCh)
			b.wg.Done()
		}()
		for {
			select {
			case <-statusCh:
				// When the provider updates its status, check the statuses of all providers
				b.aggregateAndUpdateStatus(providerCtx)
			case <-providerCtx.Done():
				// Stop processing when the context is cancelled
				return
			}
		}
	}(phm, statusCh, providerCtx)

	return nil
}

// Stop stops the event processing and unsubscribes.
func (b *BlockchainHealthManager) Stop() {
	b.mu.Lock()

	for _, cancel := range b.cancelFuncs {
		cancel()
	}
	clear(b.cancelFuncs)
	clear(b.providers)

	b.mu.Unlock()
	b.wg.Wait()
}

// Subscribe allows clients to receive notifications about changes.
func (b *BlockchainHealthManager) Subscribe() chan struct{} {
	ch := make(chan struct{}, 1)
	b.subscribers.Store(ch, struct{}{})
	return ch
}

// Unsubscribe removes a subscriber from receiving notifications.
func (b *BlockchainHealthManager) Unsubscribe(ch chan struct{}) {
	b.subscribers.Delete(ch) // Remove the subscriber from the sync.Map
	close(ch)
}

// aggregateAndUpdateStatus collects statuses from all providers and updates the overall and short status.
func (b *BlockchainHealthManager) aggregateAndUpdateStatus(ctx context.Context) {
	newShortStatus := b.aggregateStatus()

	// If status has changed, update the last status and emit notifications
	if b.shouldUpdateStatus(newShortStatus) {
		b.updateStatus(newShortStatus)
		b.emitBlockchainHealthStatus(ctx)
	}
}

// aggregateStatus aggregates provider statuses and returns the new short status.
func (b *BlockchainHealthManager) aggregateStatus() BlockchainStatus {
	b.mu.Lock()
	defer b.mu.Unlock()

	// Collect statuses from all providers
	providerStatuses := make([]rpcstatus.ProviderStatus, 0)
	for _, provider := range b.providers {
		providerStatuses = append(providerStatuses, provider.Status())
	}

	// Update the aggregator with the new list of provider statuses
	b.aggregator.UpdateBatch(providerStatuses)

	// Get the new aggregated full and short status
	return b.getStatusPerChain()
}

// shouldUpdateStatus checks if the status has changed and needs to be updated.
func (b *BlockchainHealthManager) shouldUpdateStatus(newShortStatus BlockchainStatus) bool {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.lastStatus == nil || !compareShortStatus(newShortStatus, *b.lastStatus)
}

// updateStatus updates the last known status with the new status.
func (b *BlockchainHealthManager) updateStatus(newShortStatus BlockchainStatus) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.lastStatus = &newShortStatus
}

// compareShortStatus compares two BlockchainStatus structs and returns true if they are identical.
func compareShortStatus(newStatus, previousStatus BlockchainStatus) bool {
	if newStatus.Status.Status != previousStatus.Status.Status {
		return false
	}
	if len(newStatus.StatusPerChain) != len(previousStatus.StatusPerChain) {
		return false
	}
	for chainID, newChainStatus := range newStatus.StatusPerChain {
		if prevChainStatus, ok := previousStatus.StatusPerChain[chainID]; !ok || newChainStatus.Status != prevChainStatus.Status {
			return false
		}
	}
	return true
}

// emitBlockchainHealthStatus sends a notification to all subscribers about the new blockchain status.
func (b *BlockchainHealthManager) emitBlockchainHealthStatus(ctx context.Context) {
	b.subscribers.Range(func(key, value interface{}) bool {
		subscriber := key.(chan struct{})
		select {
		case <-ctx.Done():
			// Stop sending notifications when the context is cancelled
			return false
		case subscriber <- struct{}{}:
		default:
			// Skip notification if the subscriber's channel is full (non-blocking)
		}
		return true
	})
}

// GetFullStatus returns the aggregated status together with per-chain and per-provider breakdowns.
func (b *BlockchainHealthManager) GetFullStatus() BlockchainFullStatus {
	b.mu.RLock()
	defer b.mu.RUnlock()

	statusPerChainPerProvider := make(map[uint64]map[string]rpcstatus.ProviderStatus)
	for chainID, phm := range b.providers {
		providerStatuses := phm.GetStatuses()
		statusPerChainPerProvider[chainID] = providerStatuses
	}

	statusPerChain := b.getStatusPerChain()

	return BlockchainFullStatus{
		Status:                    statusPerChain.Status,
		StatusPerChain:            statusPerChain.StatusPerChain,
		StatusPerChainPerProvider: statusPerChainPerProvider,
	}
}

// getStatusPerChain builds the short status from the aggregator and the per-chain provider statuses.
// Callers must hold b.mu.
func (b *BlockchainHealthManager) getStatusPerChain() BlockchainStatus {
	statusPerChain := make(map[uint64]rpcstatus.ProviderStatus)
	for chainID, phm := range b.providers {
		chainStatus := phm.Status()
		statusPerChain[chainID] = chainStatus
	}

	blockchainStatus := b.aggregator.GetAggregatedStatus()

	return BlockchainStatus{
		Status:         blockchainStatus,
		StatusPerChain: statusPerChain,
	}
}

// GetStatusPerChain returns the aggregated status together with the per-chain breakdown.
func (b *BlockchainHealthManager) GetStatusPerChain() BlockchainStatus {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.getStatusPerChain()
}

// Status returns the current aggregated status.
func (b *BlockchainHealthManager) Status() rpcstatus.ProviderStatus {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.aggregator.GetAggregatedStatus()
}
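
// Example wiring (illustrative sketch only): the constructor
// NewProvidersHealthManager(chainID uint64) is assumed here and may differ
// from the actual API of this package.
//
//	bhm := NewBlockchainHealthManager()
//	defer bhm.Stop()
//
//	phm := NewProvidersHealthManager(1) // chain ID 1
//	if err := bhm.RegisterProvidersHealthManager(context.Background(), phm); err != nil {
//		// handle registration error
//	}
//
//	statusCh := bhm.Subscribe()
//	defer bhm.Unsubscribe(statusCh)
//	for range statusCh {
//		_ = bhm.GetFullStatus() // react to aggregated status changes
//	}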